Q-1). What are the traditional methods of message transfer?

A-1). The traditional methods are as follows:

  1. Queue: A pool of consumers reads messages from the server, and each message goes to exactly one of them.
  2. Publish-Subscribe: Messages are broadcast to all consumers.
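
In Kafka, both models are expressed through consumer groups: consumers sharing a group.id divide the partitions among themselves (queue semantics), while consumers in different groups each receive every message (publish-subscribe). A minimal sketch, assuming a hypothetical topic "orders" and a local broker:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Consumers with the same group.id share the work (queue semantics);
        // a second application with a different group.id would also receive every message (pub-sub).
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```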

Q-2). What is a broker in Kafka?

A-2). A broker is a server in the Kafka cluster; it stores topic partitions and serves producer and consumer requests.

Q-3). What is the maximum message size a Kafka server can receive?

A-3). By default, a Kafka broker accepts messages of up to about 1 MB (roughly 1 million bytes); the limit is controlled by the broker's message.max.bytes setting and can be raised.
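
The limit is configurable on both sides. A minimal producer-side sketch, assuming a broker whose message.max.bytes (and topic-level max.message.bytes) has been raised to match; the values shown are illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducerConfig {
    // Illustrative settings for sending larger-than-default messages.
    static Properties largeMessageProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Raise the producer-side request limit to ~5 MB; the broker and topic
        // limits must be raised accordingly for such messages to be accepted.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 5 * 1024 * 1024);
        return props;
    }
}
```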

Q-4). What is meant by SerDes?

A-4). SerDes (Serializer/Deserializer) converts record keys and values to and from byte arrays whenever a Kafka Streams application needs to read data from or write data to a topic.
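
A minimal Kafka Streams sketch using the built-in string SerDes; the topic names and application id are hypothetical placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class SerdesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "serdes-demo"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // The SerDes tell the stream how to turn bytes into keys/values and back.
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase())
               .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```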

Q-5). What is meant by partition offset?

A-5). The offset uniquely identifies a record within a partition. A topic can have multiple partition logs, which lets consumers read in parallel. Consumers can read messages from their current position or seek to any specific offset of their choosing.
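
A minimal sketch of reading from a chosen offset, assuming a hypothetical topic "orders" with at least one partition; seek() moves the consumer to the given offset before polling:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekToOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("orders", 0);
            consumer.assign(Collections.singletonList(partition));
            consumer.seek(partition, 42L); // start reading from offset 42 of partition 0
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```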

Q-6). What is load balancing?

A-6). Load balancing distributes work across multiple systems so that no single node is overwhelmed as load increases. In Kafka, load is spread by distributing a topic's partitions across brokers and across the consumers in a consumer group, while replication keeps copies of the messages on multiple nodes.

Q-7). What is Kafka Producer Acknowledgement?

A-7). An acknowledgement, or ack, is sent by the broker to the producer to confirm receipt of a message. The acks setting defines the number of acknowledgements the producer requires before considering a request complete: 0 (no wait), 1 (leader only), or all (leader plus all in-sync replicas).
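
A minimal producer sketch with acks=all, i.e. waiting for the leader and all in-sync replicas; the topic name and broker address are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=0: don't wait; acks=1: leader only; acks=all: leader plus in-sync replicas.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
            producer.flush(); // block until the broker has acknowledged the request
        }
    }
}
```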

Q-8). What is a Smart Producer/Dumb Broker?

A-8). A Smart Producer/Dumb Broker design is one in which the broker does not attempt to track which messages have been read by consumers; it simply retains messages for a configured period and leaves each consumer responsible for tracking its own position.

Q-9). What is consumer lag?

A-9). Reads in Kafka lag behind writes, since there is always some delay between a message being written and it being consumed. The delta between the consumer's current offset and the latest offset in the partition is called consumer lag.
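
Lag can be computed directly from a consumer as the difference between the partition's end offset and the consumer's current position. A minimal sketch, with the topic and group id as hypothetical placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "lag-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("orders", 0);
            consumer.assign(Collections.singletonList(tp));
            long position = consumer.position(tp);                    // the consuming offset
            long endOffset = consumer.endOffsets(Set.of(tp)).get(tp); // the latest offset in the partition
            System.out.println("consumer lag = " + (endOffset - position));
        }
    }
}
```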

Q-10). What are the cases where Kafka doesn’t fit?

A-10). Kafka is not a good fit where there is no adequate monitoring tooling and no wildcard option is available to select topics. Put simply, Kafka is somewhat difficult to configure, and using it well requires solid implementation knowledge.

Q-11). What is fault tolerance?

A-11). Fault tolerance means that the system remains protected and available even if one or more nodes in the cluster fail.
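
In Kafka, fault tolerance comes from partition replication. A minimal sketch that creates a topic with a replication factor of 3 via the AdminClient, so the data survives the loss of up to two brokers; the topic name is a placeholder:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class ReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: each partition is kept on three brokers.
            NewTopic topic = new NewTopic("payments", 3, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```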

Q-12). What is MirrorMaker?

A-12). MirrorMaker is a utility for replicating (mirroring) data between Kafka clusters, whether they reside in the same data center or in different ones.

Q-13). How is Kafka tuned for optimal performance?

A-13). Before tuning Kafka as a whole, one needs to tune its individual components: the Kafka producers, the Kafka brokers and the Kafka consumers.
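
On the producer side, tuning typically means trading a little latency for throughput through batching and compression. A sketch of a few commonly adjusted settings; the values are illustrative, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuning {
    // Illustrative throughput-oriented producer settings.
    static Properties throughputTunedProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);   // larger batches per partition
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // wait up to 10 ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress batches on the wire
        return props;
    }
}
```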

Q-14). What do you understand by Kafka Multi-Tenancy?

A-14). Kafka can be deployed as a multi-tenant solution by enabling the configuration of different topics for producing and consuming data, and by applying quotas to limit the broker resources each tenant can use.

Q-15). What are the benefits of creating Kafka Cluster?

A-15). The Kafka cluster has zero downtime when we expand it. The cluster manages the replication and persistence of messages, and it offers strong durability because of its cluster-centric design.

Q-16). If a replica stays out of the ISR for a long time, what does it indicate?

A-16). If a replica stays out of the ISR for a long time, it indicates that the follower cannot fetch data as fast as it is accumulated at the leader.

Q-17). What happens if the preferred replica is not in the ISR?

A-17). The controller will fail to move leadership to the preferred replica if it is not in the ISR.

Q-18). How can churn in the ISR be reduced, and when does a replica leave it?

A-18). The ISR contains the replicas that have all committed messages, and a replica should stay in it until there is a real failure. To keep churn low, a replica is dropped out of the ISR only when it deviates from (falls too far behind) the leader.

Q-19). How can the throughput of a remote consumer be improved?

A-19). If the consumer is not located in the same data center as the broker, improving throughput requires tuning the socket buffer size to compensate for the long network latency.
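
On the consumer side this is the receive.buffer.bytes setting (fetch.min.bytes is also worth revisiting so that each round trip carries more data). A sketch with illustrative values; the broker address is a hypothetical placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class RemoteConsumerTuning {
    // Illustrative settings for a consumer separated from the brokers by a high-latency link.
    static Properties remoteConsumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.remote-dc.example:9092"); // hypothetical address
        props.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 1024 * 1024); // 1 MB socket receive buffer
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);  // fetch larger chunks per round trip
        return props;
    }
}
```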

Q-20). How can the Kafka cluster be rebalanced?

A-20). When new disks or nodes are added to the existing ones, partitions are not rebalanced automatically. If the number of nodes for a topic already equals the replication factor, adding disks will not help with rebalancing. Instead, running the Kafka partition-reassignment command after adding the new hosts is recommended.

Q-21). Is it possible to get the message offset after producing?

A-21). Yes. In most queue systems the producer's role is simply to fire and forget, but a Kafka producer receives the partition and offset of each record back from the broker in its acknowledgement, and a consumer likewise sees the offset of every record it reads.
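
A sketch of retrieving the offset from the broker's acknowledgement via the callback passed to send(); the topic name is a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OffsetAfterSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"), (metadata, exception) -> {
                if (exception == null) {
                    // The broker's acknowledgement carries the partition and offset of the record.
                    System.out.printf("partition=%d offset=%d%n", metadata.partition(), metadata.offset());
                }
            });
            producer.flush(); // wait for the acknowledgement before exiting
        }
    }
}
```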

Q-22). What are the three essential broker configuration properties?

A-22). The essential configuration properties are as follows: broker.id, log.dirs and zookeeper.connect.

Q-23). How is the log cleaner configured?

A-23). The log cleaner is enabled by default and starts a pool of cleaner threads. To enable log compaction for a particular topic, add the topic-level property cleanup.policy=compact (the broker-wide default is set with log.cleanup.policy). This can be done either with the alter topic command or at topic creation time.
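
A sketch of setting the policy at topic creation time with the AdminClient; the topic name is a placeholder:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic-level override: keep only the latest record per key (log compaction).
            NewTopic topic = new NewTopic("user-profiles", 1, (short) 1)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```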

Q-24). How does Kafka communicate with servers and clients?

A-24). The communication between the servers and the clients is done with a high-performance TCP protocol. This protocol maintains backward compatibility with earlier versions.

Q-25). What is the role of the Producer in Kafka?

A-25). The Producer publishes and sends data to the broker service. Producers write data to topics, and consumers consume data from those topics.