Here I will describe some common problems developers face while implementing Apache Kafka with Spring Boot.

Q-1). Is it good to use the default settings of Kafka?

A-1). No, it is not a good idea to rely on the default settings of Apache Kafka. The defaults come with a single partition and a replication factor of 1, and this is not acceptable in a production environment.

Solutions – Both of the solutions below are applied at the broker level (a sketch follows the list):

  • Disable the auto.create.topics.enable property so that every topic must be created manually.
  • Override the default.replication.factor and num.partitions properties with your own values.
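
A minimal sketch of both points, assuming Spring for Apache Kafka and a hypothetical orders topic; the broker-side values in the comments are illustrative:

```java
// Broker-side settings (server.properties):
//   auto.create.topics.enable=false
//   default.replication.factor=3
//   num.partitions=6

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // With auto-creation disabled, each topic is declared explicitly;
    // Spring Boot's auto-configured KafkaAdmin picks up NewTopic beans
    // and creates the topics on startup.
    @Bean
    public NewTopic ordersTopic() {
        return TopicBuilder.name("orders") // hypothetical topic name
                .partitions(6)
                .replicas(3)
                .build();
    }
}
```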

Q-2). Is it feasible to create one partition for now and increase the partition count later as traffic grows?

A-2). There are drawbacks to both too few and too many partitions. If you create too few, say 1, the partitions may not be spread across all available brokers, leading to non-uniform cluster utilization. And if you increase the partition count later, the key-to-partition mapping changes, so messages with the same key may start landing on different partitions and per-key ordering is lost. A very high partition count also brings memory-related issues. So, my suggestion is to first analyze the volume of data and create the partitions based on that analysis. The sketch below shows why the mapping changes.
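
This sketch mirrors the default partitioner's rule for keyed messages (murmur2 hash of the serialized key, modulo the partition count); the key is a made-up example:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class PartitionDemo {
    // Same rule the default partitioner applies to non-null keys.
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(bytes)) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "order-42"; // hypothetical key
        // The same key can map to a different partition once the count
        // changes, which is why growing a topic later breaks per-key ordering.
        System.out.println(partitionFor(key, 1)); // always 0 with one partition
        System.out.println(partitionFor(key, 6)); // may differ after repartitioning
    }
}
```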

Q-3). Can we use the default configurations for the Producer/Publisher?

A-3). A producer requires at least three configuration keys (a minimal sketch follows the list). They are as follows:

  • bootstrap.servers
  • key.serializer
  • value.serializer
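
A minimal sketch with the plain Java client; the broker address and topic name are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "payload")); // hypothetical topic
        }
    }
}
```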

But to make the producer more resilient we can define some other properties, such as:

acks, min.insync.replicas, replica.lag.time.max.ms, retries, enable.idempotence, max.in.flight.requests.per.connection, buffer.memory, max.block.ms, linger.ms, batch.size.

All these properties are interrelated. A one-liner for each of them:

acks – The acknowledgement the producer receives from the Kafka broker confirming that a message was successfully written. The default value is 1, meaning only the leader acknowledges. The other notable value is all, which means the producer waits for acknowledgement from all the in-sync replicas of the topic.

min.insync.replicas – The minimum number of in-sync replicas that must acknowledge a write when acks=all. A replica is a copy of a partition on one of the brokers; among the replicas, one is the leader and the others are followers. The default value is 1, which means that if all the followers go down, the ISR (in-sync replica set) consists of the leader alone and writes still succeed.

replica.lag.time.max.ms – A follower counts as in-sync if it has caught up with the leader within the last 10 seconds, and that window is configurable through this property. If a broker falls behind due to network or other issues and cannot keep up with the leader, it will be removed from the ISR once the window expires.

retries – The maximum number of times the producer will retry sending a message to the broker after a failed attempt. By default the value is 0; if it is set to n, the producer will try up to n times to commit the message to the broker.

enable.idempotence – Let’s say retries = 2; the producer may then publish the same message twice, leaving duplicate messages on the broker. To avoid this, set this property to true: if the producer resends a message, the broker will write it only once.

max.in.flight.requests.per.connection – The maximum number of unacknowledged requests the producer will have outstanding on a single connection. The default value is 5. It is tied to enable.idempotence: with idempotence enabled you can leave this at the default, but without it you should set the value to 1 if retries must not reorder messages.

buffer.memory – When the producer sends a message it is not transmitted to the broker immediately; it is first added to an internal buffer. The default size of this buffer is 32 MB.

max.block.ms – Comes into play when the producer is sending messages faster than they can be transmitted to the broker: once the buffer fills up, send() blocks for up to this long before throwing an exception. The default value is 1 minute.

linger.ms – The delay before a batch is considered ready to be sent. The default value of 0 means a batch is sent immediately, even if it contains only one message.

batch.size – The maximum size, in bytes, of a single batch. The default is 16 KB.
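
Putting these together, here is a sketch of a resilience-oriented configuration that extends the minimal producer shown earlier; the values are illustrative assumptions, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ResilientProducerProps {
    // Adds the resilience-related settings on top of the minimal
    // bootstrap.servers / serializer configuration from before.
    static void addResilienceSettings(Properties props) {
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // broker de-duplicates retries
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.RETRIES_CONFIG, 3);               // illustrative retry count
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);            // small delay to fill batches
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 32 * 1024 * 1024L);
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60_000L);
        // Note: min.insync.replicas and replica.lag.time.max.ms are topic/broker
        // settings, not producer settings; configure them on the cluster side.
    }
}
```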

Q-4). Is it feasible to use the basic Java Consumer?

A-4). In my opinion, it is better not to use the basic Java Consumer.

Q-5). Why is it not feasible to use the basic Java Consumer?

A-5). The basic Java client’s KafkaConsumer class can only be used by a single thread, and an “infinite loop” is required to continuously poll the broker for messages. Heartbeats are handled by another thread, which periodically signals the broker that the consumer is still alive. On top of that, the max.poll.interval.ms property (default value 5 minutes) caps the time allowed between poll invocations; if the consumer does not poll within that interval, it leaves the consumer group and its partitions are reassigned to another instance, which is very scary and difficult to maintain. The solution for this type of problem is to limit the number of records fetched in a single poll invocation, as the sketch below does.
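
A sketch of that boilerplate with the plain client, assuming a hypothetical orders topic and group; note the manual loop and the max.poll.records cap:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BasicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Cap the batch so each loop iteration finishes well within max.poll.interval.ms.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic
            while (true) { // the "infinite loop" the answer refers to
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```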

Q-6). What are the options instead of the basic Java Client?

A-6). There are various options instead of the basic Java client (a sketch of the first one follows the list):

  • Spring for Apache Kafka
  • Alpakka Kafka
  • FS2 Kafka
  • Micronaut Kafka
  • Quarkus Kafka
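
For comparison, a minimal Spring for Apache Kafka sketch using the same hypothetical topic and group; the framework owns the poll loop, heartbeats and rebalancing:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // No manual loop or thread management; the container polls the broker
    // and hands each payload to this method.
    @KafkaListener(topics = "orders", groupId = "demo-group") // hypothetical names
    public void onMessage(String payload) {
        System.out.println("Received: " + payload);
    }
}
```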

Q-7). What is the exactly-once semantics concept?

A-7). Kafka supports the concept of exactly-once: each message sent by the producer is written to the broker exactly once. Strictly speaking, there is no such thing as exactly-once delivery in a distributed system; Kafka provides the semantics through idempotence and transactions.

To achieve this, you need to apply certain configurations:

  • First, enable enable.idempotence on the producer side, which requires specific values for max.in.flight.requests.per.connection, retries and acks.
  • Second, if the consumer fails while reading, it may process a message more than once; the solution is to de-duplicate, for example by storing processed messages in a database.
  • Lastly, use Kafka Streams, which exposes an explicit exactly-once setting (a sketch follows the list).
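
A minimal sketch of the Kafka Streams setting, with a hypothetical application id and an assumed broker address:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceStreamsProps {
    static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-app");        // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        // Kafka Streams' explicit exactly-once switch;
        // use StreamsConfig.EXACTLY_ONCE on clients older than 2.8.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}
```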

Q-8). Who cares about monitoring when the system works?

A-8). Monitoring is absolutely required for a Kafka cluster, as there are a lot of things that can actually go wrong. If you do not want to lose data or availability, you should leverage the Kafka metrics. Different tools are available, such as JMX or Kafka exporters.
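
As a small illustration, the Java clients expose their metrics programmatically as well as over JMX; this hedged sketch simply dumps whatever a producer instance reports:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class MetricsDump {
    // Prints the client-side metrics that back the JMX MBeans;
    // exporters such as the Prometheus JMX exporter scrape the same data.
    static void dump(Producer<String, String> producer) {
        for (Map.Entry<MetricName, ? extends Metric> entry : producer.metrics().entrySet()) {
            MetricName name = entry.getKey();
            Metric metric = entry.getValue();
            System.out.printf("%s.%s = %s%n", name.group(), name.name(), metric.metricValue());
        }
    }
}
```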

Q-9). Is it good to upgrade project dependencies without looking at the Release Notes?

A-9). The best option is to read the Release Notes before bumping project dependencies. That said, Kafka clients from 0.11 onward are generally forward and backward compatible.

Q-10). What is the reason for the exception message saying the Consumer is not a part of the active group?

A-10). This problem occurs when the number of headers used in the Producer does not match the number used in the Consumer, or when the Producer and the Consumer use different header keys.

For example:

If you use 5 headers in the Producer while publishing a message but only 4 in the Consumer, this problem will occur.

Or let’s say you use a header key named app.name on the Producer side while defining the key as application.name on the Consumer side. In that scenario as well, the above exception message will be thrown. A sketch of keeping the header contract in sync follows.
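
A hedged sketch with Spring for Apache Kafka, using hypothetical topic, group and header names; the point is simply that the key strings must match on both sides:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

@Component
public class HeaderContract {

    // Producer side: attach a header under the key "app.name".
    public void publish(KafkaTemplate<String, String> template) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-42", "payload"); // hypothetical names
        record.headers().add("app.name", "demo".getBytes(StandardCharsets.UTF_8));
        template.send(record);
    }

    // Consumer side: the @Header key must match the producer's exactly;
    // asking for "application.name" here would fail to resolve.
    @KafkaListener(topics = "orders", groupId = "demo-group")
    public void listen(@Payload String payload, @Header("app.name") String appName) {
        System.out.println(appName + ": " + payload);
    }
}
```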