Here I will describe some common problems developers face while implementing Spring for Apache Kafka with Confluent.

Q-1). How do you handle upscaling and downscaling in Kafka?

A-1). In Kafka's case, you have to rebalance manually to reduce resource bottlenecks: adding brokers or partitions does not redistribute the existing load by itself, so every scaling step is an explicit operation. It is always wise to choose a messaging platform that can easily scale up or down based on resource allocation; with Kafka, plan each step deliberately, as in the sketch below.
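
As a concrete example of a manual scaling step, here is a minimal sketch that grows a topic's partition count through the Kafka AdminClient. It assumes a broker at localhost:9092, and the topic name and target count are illustrative. Note that Kafka can only increase a topic's partition count, never decrease it, which is part of why downscaling is manual work.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class ScaleTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Grow the hypothetical "orders" topic to 12 partitions so more
            // consumers in a group can share the load; Kafka rejects any
            // request that would shrink the count.
            admin.createPartitions(Map.of("orders", NewPartitions.increaseTo(12)))
                 .all()
                 .get(); // block until the brokers apply the change
        }
    }
}
```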

Q-2). What is MirrorMaker in Kafka?

A-2). MirrorMaker is a Kafka feature that lets you replicate topics from one cluster to another. However, it does not replicate topic offsets across clusters, so consumers cannot simply resume from the same position on the mirror. The usual workaround is to create and assign a unique key to each and every message so consumers can deduplicate, and that becomes daunting at scale; a sketch of the keying workaround follows.
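
A minimal sketch of that unique-key workaround, assuming a Spring Boot application with spring-kafka and a configured KafkaTemplate<String, String>; the topic and class names are illustrative.

```java
import java.util.UUID;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KeyedPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KeyedPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Send each message with a unique key so consumers on the mirrored
    // cluster can deduplicate, since MirrorMaker does not carry offsets over.
    public void publish(String payload) {
        kafkaTemplate.send("orders", UUID.randomUUID().toString(), payload);
    }
}
```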

Q-3). Are all messaging paradigms included in Kafka?

A-3). Two major messaging paradigms are not included in Kafka: point-to-point queues and request/reply messaging. Both can be approximated, though: consumer groups give point-to-point-style delivery, and Spring Kafka's ReplyingKafkaTemplate layers request/reply on top of ordinary topics, as sketched below.
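
As an illustration, here is a minimal request/reply sketch built on Spring Kafka's ReplyingKafkaTemplate. It assumes the template (with its reply listener container), the "requests" topic, and a responding service are already configured; all names here are illustrative.

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.stereotype.Service;

@Service
public class PriceClient {

    private final ReplyingKafkaTemplate<String, String, String> template;

    public PriceClient(ReplyingKafkaTemplate<String, String, String> template) {
        this.template = template;
    }

    public String requestPrice(String sku) throws Exception {
        // The template attaches a correlation-id header to the request and
        // waits for a record with the same id on the configured reply topic.
        RequestReplyFuture<String, String, String> future =
                template.sendAndReceive(new ProducerRecord<>("requests", sku),
                                        Duration.ofSeconds(10));
        ConsumerRecord<String, String> reply = future.get(10, TimeUnit.SECONDS);
        return reply.value();
    }
}
```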

Q-4). Does changing a message reduce performance?

A-4). Yes, changing messages on the fly will reduce performance. Kafka is a distributed messaging system: if you use it only to deliver messages as-is, the broker can serve bytes straight from the page cache to the network via the sendfile system call (zero-copy). Kafka does support changing data in flight, for example down-converting records for older clients, but the moment a message must be modified, the broker has to load and rewrite the records, losing that optimization and making the platform slower. If you need to transform messages, do it in the application layer instead, as in the sketch below.
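
A minimal sketch of that application-layer approach with spring-kafka; the topic names, group id, and the trivial uppercase transformation are illustrative.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class EnrichingRelay {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EnrichingRelay(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Consume from the raw topic, modify the payload in application code,
    // and publish to a second topic; the brokers themselves keep serving
    // unmodified bytes with zero-copy.
    @KafkaListener(topics = "orders-raw", groupId = "enricher")
    public void transform(String payload) {
        kafkaTemplate.send("orders-enriched", payload.toUpperCase());
    }
}
```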

Q-5). Will I face problems while transferring data on the fly?

A-5). Yes, you can face problems while transferring data through Kafka on the fly. If you try to use Kafka in big data integration or migration projects, it gets complex, because Kafka only transports the data; transformation, validation, and end-to-end consistency still have to be handled around it.

Q-6). Is it feasible to store data in Kafka for the long term?

A-6). The simple answer is no. Long-term storage will impact performance, but most importantly it will increase storage costs. One of the best solutions is to keep the data in Kafka for a short period and then migrate it to a database, either relational or non-relational, as in the sketch below.
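
A minimal sketch of that pattern, assuming a Spring Boot app with spring-kafka, a configured JdbcTemplate, and an existing events table; all names are illustrative.

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class EventArchiver {

    private final JdbcTemplate jdbcTemplate;

    public EventArchiver(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Drain the topic continuously into long-term storage so Kafka's own
    // retention window can stay short (see the retention settings in Q-7).
    @KafkaListener(topics = "events", groupId = "archiver")
    public void archive(String payload) {
        jdbcTemplate.update("INSERT INTO events (payload) VALUES (?)", payload);
    }
}
```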

Q-7). What are the perfect data retention settings?

A-7). Kafka stores messages in topics, and that data takes up disk space on your brokers. There is no single perfect setting: you need to cap retention by time (retention.ms) or by size (retention.bytes) to match your disk capacity and how long consumers need to be able to re-read data.
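
For example, with Spring Kafka (2.3+) the topic and its retention caps can be declared as a bean, which KafkaAdmin applies at startup. This is a sketch; the seven-day and 1 GiB limits are illustrative, not recommendations.

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicRetentionConfig {

    @Bean
    public NewTopic ordersTopic() {
        return TopicBuilder.name("orders") // hypothetical topic
                .partitions(6)
                .replicas(3)
                // Delete segments older than 7 days...
                .config(TopicConfig.RETENTION_MS_CONFIG,
                        String.valueOf(7L * 24 * 60 * 60 * 1000))
                // ...or once a partition exceeds ~1 GiB, whichever comes first.
                .config(TopicConfig.RETENTION_BYTES_CONFIG,
                        String.valueOf(1024L * 1024 * 1024))
                .build();
    }
}
```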

Q-8). What problems does the Kafka liveness check cause?

A-8). Kafka liveness-check problems happen when the probe cannot reach the host where the broker is running; as a result, the broker keeps getting restarted. From an automation perspective, if you enable a liveness check, make sure it targets the client-serving port, and you can write a small piece of code that reports whether that port is open (see the sketch below). The entire infrastructure is useless if the broker falls into a dead loop and keeps restarting, so if the probe cannot be made reliable, the simplest solution is to turn the liveness check off.
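
Here is a minimal sketch of such a port probe in plain Java, assuming the broker's client port is 9092 on localhost; hooking the exit code into your orchestrator's probe is left out.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerPortCheck {

    // Returns true if the broker's client-serving port accepts connections.
    static boolean isBrokerReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        boolean up = isBrokerReachable("localhost", 9092, 2_000);
        // Exit code 0 = healthy, 1 = unhealthy, so a probe script can use it.
        System.exit(up ? 0 : 1);
    }
}
```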

Q-9). What is the impact of adding a new Broker?

A-9). Adding a new broker in production can seriously impact performance and cause latency and missing-data problems. The basic issue is that a broker cannot carry its share of the load until the partition reassignment process is completed. Moving thousands of partitions to the new broker can take hours, and performance will suffer until all the partitions are moved. So be careful and have a proper plan, ideally moving partitions in small, throttled batches, before adding a broker to a cluster.
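
For reference, here is a minimal sketch that moves a single partition with the AdminClient (available since Kafka 2.4), assuming a broker at localhost:9092; the topic, partition, and broker ids are illustrative. Reassigning in small batches like this, optionally with replication throttles, is what keeps the cluster responsive during the move.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class MovePartitionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Move partition 0 of "orders" so its replicas live on brokers
            // 1, 2, and 4 (4 being the newly added broker).
            TopicPartition tp = new TopicPartition("orders", 0);
            NewPartitionReassignment target =
                    new NewPartitionReassignment(List.of(1, 2, 4));

            admin.alterPartitionReassignments(Map.of(tp, Optional.of(target)))
                 .all()
                 .get(); // block until the controller accepts the reassignment
        }
    }
}
```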

Q-10). What are the in-sync replica alerts?

A-10). In-sync replica alerts on a topic tell you that the data is not fully replicated across the brokers: one or more replicas have fallen out of the in-sync replica (ISR) set, which indicates a real probability of data loss. This normally happens when a broker cannot keep up with the volume of data, and there is little you can do while it lags behind. The solution is to fix the affected broker so that the entire system is fully replicated and operational again.
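
A minimal monitoring sketch that surfaces such under-replicated partitions via the AdminClient, assuming kafka-clients 3.1+ and a broker at localhost:9092; the topic name is illustrative.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            TopicDescription description = admin.describeTopics(List.of("orders"))
                    .allTopicNames().get().get("orders");

            // A partition whose ISR is smaller than its replica set is
            // under-replicated and should trigger an alert.
            description.partitions().forEach(p -> {
                if (p.isr().size() < p.replicas().size()) {
                    System.out.printf(
                            "Partition %d under-replicated: isr=%s replicas=%s%n",
                            p.partition(), p.isr(), p.replicas());
                }
            });
        }
    }
}
```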