Kafka Ecosystem
1. Topics:
👉 A stream of messages belonging to a particular category is called a Topic.
👉 It is a logical feed name to which records are published (similar to a table in a database).
👉 A topic is uniquely identified by its name; topic names cannot be duplicated within a cluster.
👉 A topic is a storage mechanism for a sequence of events.
👉 Events are immutable.
👉 Topics keep events in the same order as they occur in time, so each new event is always appended to the end of the log.
2. Partitions:
👉 Topics are split into partitions.
👉 All the messages within a partition are ordered and immutable.
👉 Each message within a partition has a unique ID associated with it, called the OFFSET.
👉 Kafka uses topic partitioning to improve scalability.
👉 Kafka guarantees the order of events within the same topic partition. However, by default, it does not guarantee the order of events across all partitions.
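A quick sketch of how offsets work per partition (the `Partition` and `PartitionedTopic` classes are hypothetical, for illustration only): each partition numbers its own messages independently, so offset 0 exists once in every partition.

```python
# Sketch of a topic split into partitions, each with its own
# independent offset sequence. Hypothetical classes, not the Kafka API.
class Partition:
    def __init__(self):
        self.messages = []

    def append(self, msg):
        self.messages.append(msg)
        return len(self.messages) - 1  # offset is unique only within this partition

class PartitionedTopic:
    def __init__(self, num_partitions):
        self.partitions = [Partition() for _ in range(num_partitions)]

topic = PartitionedTopic(num_partitions=3)
o0 = topic.partitions[0].append("a")  # offset 0 in partition 0
o1 = topic.partitions[1].append("b")  # offset 0 in partition 1: offsets restart per partition
o2 = topic.partitions[0].append("c")  # offset 1 in partition 0
print(o0, o1, o2)  # 0 0 1
```

This is why ordering is only guaranteed within one partition: each partition is its own ordered log.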
3. Replicas:
👉 Replicas are backups of partitions.
👉 Replicas never serve reads or writes of data directly.
👉 They are used to prevent data loss (fault tolerance).
4. Producer:
👉 Producers publish messages by appending to the end of a topic partition.
👉 Each message is stored on the broker's disk and receives an offset (a unique identifier). The offset is unique at the partition level: each partition has its own offsets. This is one more reason Kafka is so special: it stores messages on disk (like a database; in fact, Kafka can be seen as a database too) so they can be recovered later if necessary. This differs from a typical messaging system, where a message is deleted after being consumed.
👉 Consumers use the offset to read messages, reading from the oldest to the newest. If a consumer fails, it will resume reading from the last offset when it recovers.
👉 By default, if a message contains a key (i.e. the key is NOT null), the hashed value of the key is used to decide in which partition the message is stored.
👉 All messages with the same key will be stored in the same topic partition. This behavior is essential to ensure that messages for the same key are consumed and processed in order from the same topic partition.
👉 Producers write messages at the topic level (across all the partitions of that topic) or to a specific partition of the topic, using the Producer API.
👉 If the key is null, the producer behaves differently according to the Kafka version:
    - up to Kafka 2.3: a round-robin partitioner is used to balance the messages across all partitions
    - Kafka 2.4 or newer: a sticky partitioner is used, which leads to larger batches and reduced latency, and is particularly beneficial for very high throughput scenarios
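The key-based routing described above can be sketched in a few lines. Note this is an illustration of the "same key, same partition" property only: Kafka's real producer hashes keys with murmur2, while this sketch uses Python's `zlib.crc32` as a stand-in.

```python
# Sketch of key-based partition selection (illustrative only).
# Kafka actually uses a murmur2 hash; crc32 is a stand-in here.
import zlib

NUM_PARTITIONS = 4

def partition_for(key: bytes) -> int:
    # Hash the key, then map it onto one of the topic's partitions.
    return zlib.crc32(key) % NUM_PARTITIONS

# The same key always maps to the same partition, which is what
# preserves per-key ordering.
assert partition_for(b"user-42") == partition_for(b"user-42")
print(partition_for(b"user-42"), partition_for(b"user-7"))
```

Because the mapping is deterministic, all events for `user-42` land in one partition and are consumed in the order they were produced.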
5. Consumer:
👉 Consumers are applications that read/consume data from topics within a cluster using the Consumer API.
👉 Consumers can read either at the topic level (all partitions of the topic) or from specific partitions of the topic.
👉 Each message published to a topic is delivered to one consumer in each consumer group subscribed to that topic.
👉 A consumer can read data from any position of the partition; internally the position is stored as a pointer called the offset. In most cases a consumer advances its offset linearly, but it could read in any order, or start from any given position.
👉 Each consumer belongs to a consumer group. A consumer group may consist of multiple consumer instances.
👉 This is the reason why a consumer group can be both fault tolerant and scalable:
👉 If one of several consumer instances in a group dies, the topic partitions are reassigned to the other consumer instances, so the remaining ones continue to process messages from all partitions.
👉 If a consumer group contains more than one consumer instance, each consumer will only receive messages from a subset of the partitions. When a consumer group contains only one consumer instance, that consumer is responsible for processing all messages of all topic partitions.
👉 Message consumption can be parallelized in a consumer group by adding more consumer instances to the group, up to the number of the topic’s partitions.
👉 For example, if a topic has 8 partitions, a consumer group can support up to 8 consumer instances, all consuming in parallel, each from one topic partition.
👉 If you add more consumers to a consumer group than the topic has partitions, the extra consumers will stay idle, without receiving any messages.
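The partition-to-consumer assignment above can be sketched with a simple round-robin distribution. This is only an illustration of the idea (the `assign` helper is hypothetical; Kafka's real partition assignors, such as range or sticky assignment, are more sophisticated).

```python
# Sketch of distributing a topic's partitions across the consumers in a
# group, round-robin style. Hypothetical helper, not a Kafka API.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = list(range(8))           # a topic with 8 partitions
consumers = ["c1", "c2", "c3"]
print(assign(partitions, consumers))  # each consumer gets a subset

# With more consumers than partitions, the extras stay idle:
many = [f"c{i}" for i in range(10)]
result = assign(partitions, many)
idle = [c for c, ps in result.items() if not ps]
print(idle)  # c8 and c9 receive no partitions
```

Adding consumers beyond the partition count gains nothing, which is why the number of partitions caps the parallelism of a consumer group.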
6. Kafka Broker:
👉 A Kafka broker is a program that runs on the Java Virtual Machine (Java version 11+).
👉 A Kafka broker manages the storage of data records/messages in topics. It can be understood as the mediator between producers and consumers.
👉 The Kafka broker is responsible for receiving the messages that publishers push into the Kafka commit log and serving those messages to the subscribers that consume them.
👉 It enables delivery of the data records/messages to the right consumer.
7. Kafka Cluster:
👉 An ensemble of Kafka brokers working together is called a Kafka cluster. Some clusters may contain just one broker, while others contain three or potentially hundreds of brokers. Companies like Netflix and Uber run hundreds or thousands of Kafka brokers to handle their data.
👉 A broker in a cluster is identified by a unique numeric ID. In the figure below, the Kafka cluster is made up of three Kafka brokers.
👉 When creating a topic, we also configure its Replication Factor, which is the number of copies of each partition kept across the brokers. Say we have three brokers in our cluster, a topic with three partitions, and a Replication Factor of three: each broker will then store a copy of every partition and act as the leader for one partition of the topic.
👉 As you can see in the above image, Topic_1 has three partitions; each broker is the leader for one partition of the topic, and the Replication Factor of Topic_1 is three.
👉 The number of partitions does not have to match the number of brokers, but when it does, each broker is the leader for exactly one partition of the topic, which spreads the load evenly.
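The three-broker, three-partition, replication-factor-three scenario above can be sketched as a simple placement function. This is a simplified round-robin placement for illustration (the `place_replicas` helper is hypothetical; Kafka's actual replica assignment logic differs).

```python
# Sketch of spreading partition replicas across brokers for a topic.
# Simplified round-robin placement; not Kafka's exact assignment logic.
def place_replicas(num_partitions, replication_factor, brokers):
    placement = {}
    for p in range(num_partitions):
        # The first broker in the list acts as the leader for partition p;
        # the following brokers hold the follower copies.
        replicas = [brokers[(p + r) % len(brokers)]
                    for r in range(replication_factor)]
        placement[p] = replicas
    return placement

# 3 partitions, replication factor 3, brokers with IDs 101, 102, 103:
print(place_replicas(3, 3, [101, 102, 103]))
# Each broker leads one partition and holds follower copies of the others.
```

With this layout, losing any single broker leaves two intact copies of every partition, which is the fault tolerance the Replication Factor buys.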
More to come in upcoming posts.