Spark Context:
🔶SparkContext is the traditional entry point to any Spark application
🔶It represents the connection to the Spark cluster and is where the user configures the common properties for the entire application and creates RDDs
🔶SparkContext is designed for low-level programming and fine-grained control over the Spark application; with it you can (see the sketch after this list):
🔶Get the current status of the Spark application
🔶Access various services
🔶Cancel a job
🔶Cancel a stage
🔶Closure cleaning
🔶Register a SparkListener
🔶Programmatic dynamic allocation
🔶Access persistent RDDs
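A minimal sketch of that low-level entry point, assuming PySpark on a local master (the app name and job-group name are illustrative):

```python
from pyspark import SparkConf, SparkContext

# Configure common properties for the entire application
conf = SparkConf().setAppName("legacy-app").setMaster("local[*]")
sc = SparkContext(conf=conf)

# Low-level, fine-grained control: create and transform an RDD directly
rdd = sc.parallelize(range(10)).map(lambda x: x * x)
print(rdd.collect())

# A few of the capabilities listed above
print(sc.uiWebUrl)            # current status of the application (Spark UI)
sc.setJobGroup("g1", "demo")  # tag subsequent jobs so they can be cancelled
sc.cancelJobGroup("g1")       # cancel the jobs in that group
sc.stop()
```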
Common Challenges:
🔶Managing multiple contexts
🔶Contexts need proper initialization
🔶Failing to manage contexts correctly leads to performance problems
🔶Before Spark 2.0, you had to create a separate context for each kind of interaction (HiveContext, SQLContext, StreamingContext; see the sketch after this list)
🔶In a multi-user, multi-application setup, conflicts can arise when several users or applications try to use the same context
🔶Legacy code bases still depend on SparkContext directly
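For illustration, a sketch of the pre-2.0 pattern, assuming an older Spark release where the legacy `SQLContext` and `StreamingContext` classes are still shipped (both superseded in modern Spark):

```python
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.streaming import StreamingContext

sc = SparkContext(conf=SparkConf().setAppName("pre-2.0-style").setMaster("local[*]"))

# One extra context per API surface: the management burden described above
sql_ctx = SQLContext(sc)        # Spark SQL (deprecated since 2.0)
ssc = StreamingContext(sc, 10)  # DStream streaming with 10-second batches

# Only one SparkContext may run per JVM, so multiple users or applications
# had to coordinate carefully around the same `sc`.
```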
Spark Session:
🔶SparkSession was introduced in Spark 2.0
🔶There is no need to create multiple contexts anymore
🔶SparkSession wraps the Spark contexts and provides a high-level API for working with structured data through SQL and with streaming data through Spark Streaming
🔶It improves performance through the Catalyst Optimizer, predominantly for Spark SQL queries
🔶A DataFrame presents data in a table structure
🔶SparkSession integrates easily with Jupyter Notebook
🔶SparkSession is a combination of all three contexts; internally, it creates one SparkContext that is used for all operations
🔶SparkSession addresses the problem of multiple users accessing the same SparkContext
🔶Spark sessions handle isolation and resource management more efficiently (a sketch follows below)
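A short sketch of the unified entry point (the app name and sample data are illustrative):

```python
from pyspark.sql import SparkSession

# One entry point replaces SparkContext, SQLContext and HiveContext
spark = (SparkSession.builder
         .appName("unified-app")
         .master("local[*]")
         .getOrCreate())

# High-level structured API: a DataFrame gives the data a table structure
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
df.createOrReplaceTempView("t")
spark.sql("SELECT count(*) AS n FROM t").show()  # optimized by Catalyst

# The underlying SparkContext is still there when low-level control is needed
print(spark.sparkContext.appName)

# Isolation: a sibling session with separate SQL state, sharing one context
other = spark.newSession()
```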
1. Topics:
👉🏼A stream of messages belonging to a particular category is called a topic.
👉🏼It is a logical feed name to which records are published (similar to a table in a database).
👉🏼A topic is uniquely identified by its name; topic names cannot be duplicated within a cluster.
👉🏼A topic is a storage mechanism for a sequence of events.
👉🏼Events are immutable.
👉🏼Topics keep events in the same order as they occur in time, so each new event is always appended to the end of the log (see the sketch after this list).
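As a sketch, creating a topic with the kafka-python package (an assumption; the broker address `localhost:9092` and the topic name `orders` are illustrative):

```python
from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
try:
    # The name uniquely identifies the topic within the cluster
    admin.create_topics([NewTopic(name="orders",
                                  num_partitions=1,
                                  replication_factor=1)])
except TopicAlreadyExistsError:
    print("topic names cannot be duplicated")
admin.close()
```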
2. Partitions:
👉🏼Topics are split into partitions.
👉🏼All the messages within a partition are ordered and immutable.
👉🏼Each message within a partition has a unique ID called an OFFSET (see the sketch after this list).
👉🏼Kafka uses topic partitioning to improve scalability.
👉🏼Kafka guarantees the order of the events within the same topic partition. However, by default, it does not guarantee the order of events across partitions.
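A sketch of per-partition ordering, again assuming kafka-python and the hypothetical `orders` topic from above: the consumer pins itself to partition 0, and the offsets it sees increase monotonically.

```python
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
                         consumer_timeout_ms=5000)  # stop iterating when idle

# Pin the consumer to a single partition: order is guaranteed within it
tp = TopicPartition("orders", 0)
consumer.assign([tp])
consumer.seek_to_beginning(tp)

for msg in consumer:
    # Offsets increase monotonically inside this one partition
    print(msg.partition, msg.offset, msg.value)
consumer.close()
```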
3. Replicas:
👉🏼Replicas are backups of partitions.
👉🏼Replicas never serve reads or writes directly; clients always talk to the partition leader.
👉🏼They are used to prevent data loss (fault tolerance); the sketch after this list shows the replication factor being set.
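The number of copies is set per topic at creation time. A sketch, assuming kafka-python and a cluster with at least three brokers (the topic name is illustrative):

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
# replication_factor=3 keeps the leader plus two follower copies of every
# partition; the followers only replicate and never serve clients directly
admin.create_topics([NewTopic(name="orders-replicated",
                              num_partitions=3,
                              replication_factor=3)])
admin.close()
```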
4. Producer:
👉🏼Producers publish messages by appending to the end of a topic partition.
👉🏼Each message is stored on the broker's disk and receives an offset (a unique identifier). This offset is unique at the partition level: each partition has its own offsets. This is one more thing that makes Kafka special: it stores messages on disk (like a database, and in fact Kafka can be viewed as a database too) so that they can be recovered later if necessary, unlike a classic messaging system, where a message is deleted after being consumed.
👉🏼Consumers use the offset to read messages, from the oldest to the newest. If a consumer fails, when it recovers it resumes reading from its last offset.
👉🏼By default, if a message contains a key (i.e. the key is NOT null), the hashed value of the key is used to decide in which partition the message is stored.
👉🏼All messages with the same key will be stored in the same topic partition. This behavior is essential to ensure that messages for the same key are consumed and processed in order from the same topic partition.
👉🏼Producers write messages at the topic level (letting Kafka spread them across all the partitions of that topic) or to a specific partition of the topic, using the Producer API.
👉🏼If the key is null, the producer behaves differently according to the Kafka version (a producer sketch follows this list):
  - Up to Kafka 2.3: a round-robin partitioner is used to balance the messages across all partitions.
  - Kafka 2.4 or newer: a sticky partitioner is used, which leads to larger batches and reduced latency and is particularly beneficial for very high throughput scenarios.
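A producer sketch, assuming kafka-python (the topic name and key are illustrative): messages sharing a key land in the same partition, while a keyless message is left to the partitioner.

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         key_serializer=str.encode,
                         value_serializer=str.encode)

# Same key -> same partition, so per-key ordering is preserved
for i in range(3):
    meta = producer.send("orders", key="customer-42",
                         value=f"event-{i}").get(timeout=10)
    print(meta.partition, meta.offset)  # one partition, increasing offsets

# No key: the partitioner (round-robin or sticky) picks the partition
producer.send("orders", value="keyless event")
producer.flush()
producer.close()
```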
5. Consumer:
👉🏼Consumers are applications that read/consume data from the topics within a cluster using the Consumer API.
👉🏼Consumers can read either at the topic level (all the partitions of the topic) or from a specific partition of the topic.
👉🏼Each message published to a topic is delivered to one consumer in each consumer group that is subscribed to that topic.
👉🏼A consumer can read data from any position of the partition; internally, the position is stored as a pointer called the offset. In most cases a consumer advances its offset linearly, but it can read in any order, or start from any given position.
👉🏼Each consumer belongs to a consumer group. A consumer group may consist of multiple consumer instances.
👉🏼This is the reason why a consumer group can be both fault tolerant and scalable.
👉🏼If one of several consumer instances in a group dies, the topic partitions are reassigned to other consumer instances so that the remaining ones continue to process messages from all partitions.
👉🏼If a consumer group contains more than one consumer instance, each consumer receives messages from only a subset of the partitions. When a consumer group contains only one consumer instance, that consumer is responsible for processing all messages of all topic partitions.
👉🏼Message consumption can be parallelized in a consumer group by adding more consumer instances to the group, up to the number of the topic's partitions (see the sketch after this list).
👉🏼For example, if a topic has 8 partitions, a consumer group can support up to 8 consumer instances, all consuming in parallel, each from one topic partition.
👉🏼If you add more consumers to a consumer group than there are partitions for the topic, the extra consumers stay idle and receive no messages.
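A consumer-group sketch, assuming kafka-python (the group name is illustrative): run several copies of this script and Kafka assigns each instance a disjoint subset of the partitions.

```python
from kafka import KafkaConsumer

# Every process using the same group_id joins the same consumer group
consumer = KafkaConsumer("orders",
                         group_id="order-processors",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest",
                         enable_auto_commit=True)

for msg in consumer:
    # If this instance dies, its partitions are rebalanced to the others
    print(f"partition={msg.partition} offset={msg.offset} value={msg.value}")
```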
6. Kafka Broker:
👉🏼A Kafka broker is a program that runs on the Java Virtual Machine (Java version 11+).
👉🏼A Kafka broker manages the storage of data records/messages in topics. It can be understood as the mediator between producers and consumers.
👉🏼The broker is responsible for persisting the messages that publishers push into the Kafka commit log and serving them to the subscribers that consume them.
👉🏼It enables the delivery of the data records/messages to the right consumer.
7. Kafka Cluster:
👉🏼An ensemble of Kafka brokers working together is called a Kafka cluster. Some clusters contain just one broker, while others contain three or potentially hundreds of brokers. Companies like Netflix and Uber run hundreds or thousands of Kafka brokers to handle their data.
👉🏼A broker in a cluster is identified by a unique numeric ID. For example, a Kafka cluster might be made up of three Kafka brokers.
👉🏼When creating a topic, we also configure the Replication Factor, which defines how many copies of each partition are kept across the brokers. Say we have three brokers in our cluster and a topic with three partitions and a Replication Factor of one: each broker is then responsible for one partition of the topic.
👉🏼If we raise the Replication Factor of that topic to three, every broker additionally holds follower copies of the two partitions it does not lead, so the data survives the loss of a broker.
👉🏼The number of partitions does not have to match the number of brokers, but spreading partitions evenly across the brokers, as in the example above with one partition per broker, balances the load (see the sketch after this list).
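To see how a topic's partitions are laid out, a small sketch (assuming kafka-python and the hypothetical `orders-replicated` topic from earlier):

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
# The partition IDs of the topic; their leaders are spread across the brokers
print(consumer.partitions_for_topic("orders-replicated"))  # e.g. {0, 1, 2}
consumer.close()
```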