Here I will describe some common problems faced by developers while implementing RabbitMQ with Spring Boot.
Q-1). Is it a best practice to overload Queue with lots of messages?
A-1). No, it is not a best practice to overload a queue with lots of messages. It is always advisable to keep your queues short if possible, because many messages in a queue put a heavy load on RAM usage. When that happens, RabbitMQ starts flushing (paging out) messages to disk. This page-out process takes time and blocks the queue from processing messages while it runs, deteriorating queue speed. Having many messages in the queue might also negatively affect the performance of the broker. In addition, it is time-consuming to restart a cluster with many messages, since the index has to be rebuilt, and it takes time to sync messages between the nodes in the cluster after a restart.
Q-2). How to map the Consumer Object with the Publisher Object?
A-2). Nowadays, we use the spring-amqp library to integrate RabbitMQ with a Spring Boot application. There can be a scenario where the publisher is not from your organization. They have their own object with which they publish to RabbitMQ, and you, being the consumer, need to consume the messages from that queue. The above library provides a solution: define the mapping class, like below:
DefaultClassMapper mapper = new DefaultClassMapper();
mapper.setDefaultType(RabbitMQRequest.class);
mapper.setTrustedPackages("*");
Map<String, Class<?>> idClassMapping = new HashMap<>();
idClassMapping.put("com.abc.RabbitMQRequest", com.springcavaj.rabbitmq.model.RabbitMQRequest.class);
mapper.setIdClassMapping(idClassMapping);
Here in the above code snippet, look at the idClassMapping.put(...) line. In this line, I have defined the key as “com.abc.RabbitMQRequest” and the value as com.springcavaj.rabbitmq.model.RabbitMQRequest.class. What does this mean? The key is the fully qualified class name (package structure) used by the publisher to publish the message to the queue, but your organization supports a different package name. So, to map your consumer object to the publisher's request object, you need to inform the mapper about this mapping. Once you define the mapping, set the map on the DefaultClassMapper object with mapper.setIdClassMapping(idClassMapping).
If it still throws an exception while consuming messages from the queue, also call mapper.setDefaultType(RabbitMQRequest.class); and mapper.setTrustedPackages("*"); as in the snippet above.
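For the mapper to take effect, it has to be plugged into the JSON message converter used by the application. Below is a minimal configuration sketch, assuming a Spring Boot application with the spring-boot-starter-amqp dependency and the RabbitMQRequest class from the example above (the bean and class names are illustrative):

```java
import org.springframework.amqp.support.converter.DefaultClassMapper;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.HashMap;
import java.util.Map;

@Configuration
public class RabbitConverterConfig {

    @Bean
    public Jackson2JsonMessageConverter messageConverter() {
        DefaultClassMapper mapper = new DefaultClassMapper();
        mapper.setDefaultType(RabbitMQRequest.class);
        mapper.setTrustedPackages("*");

        Map<String, Class<?>> idClassMapping = new HashMap<>();
        // Key: the type id the publisher puts in the message header;
        // value: the local class the payload should be deserialized into.
        idClassMapping.put("com.abc.RabbitMQRequest", RabbitMQRequest.class);
        mapper.setIdClassMapping(idClassMapping);

        // Attach the mapper to the JSON converter so listeners use it.
        Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        converter.setClassMapper(mapper);
        return converter;
    }
}
```

With Spring Boot auto-configuration, exposing this converter as a bean is normally enough for both RabbitTemplate and @RabbitListener containers to pick it up.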
Q-3). Why to use Quorum Queues?
A-3). A quorum queue is a replicated queue. It has a leader and multiple followers. A quorum queue with a replication factor of 5 consists of five replicated queues: the leader and four followers. Each replicated queue is hosted on a different node (broker). Clients (publishers and consumers) always interact with the leader, which then replicates all the commands (write, read, ack, etc.) to the followers. The followers do not interact with the clients at all; they exist only for redundancy. When a broker goes offline, a follower replica on another broker is elected leader and service continues.
Q-4). What is a Mirrored Queue and what is the problem using Mirrored Queue?
A-4). A mirrored queue is also a replicated queue. Queues are mirrored to two nodes by default, which can be overridden to 3, 4 or 5 nodes. However, that is not advisable because it generates a lot of intra-cluster traffic, and this is also the main problem with using mirrored queues.
Q-5). Do I need to change anything in code to use Quorum Queues?
A-5). Yes, you need to declare a quorum queue using a queue declare argument (x-queue-type set to quorum). Apart from that, everything else, like publishing/consuming messages and receiving manual acknowledgements, works as normal.
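With Spring AMQP, that declare argument can be added through QueueBuilder. A small sketch, where the queue name orders.queue is just an illustrative assumption:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

public class QuorumQueueDeclaration {

    public static Queue ordersQueue() {
        // Quorum queues must be durable; the x-queue-type argument tells
        // the broker to create a quorum queue instead of a classic one.
        return QueueBuilder.durable("orders.queue")
                .withArgument("x-queue-type", "quorum")
                .build();
    }
}
```

Exposing such a Queue as a Spring bean lets the admin declare it on the broker at startup, exactly like a classic queue.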
Q-6). What are the features not available in Quorum Queue?
A-6). The features that are not available in Quorum Queue are as follows:
- Non-durable messages
- Queue Exclusivity
- Queue/Message TTL (not available at the time of writing)
- Some policies (dead-letter exchanges and length limits are available as queue arguments, but not every policy applies to quorum queues)
- Priorities
Q-7). What is the advantage of enabling lazy queues?
A-7). The lazy queue feature has been available since RabbitMQ 3.6. Lazy queues are queues where messages are automatically stored to disk, thereby minimizing RAM usage at the cost of longer throughput time. With lazy queues, messages are not suddenly flushed to disk without warning, so the queue does not experience a sudden hit to its performance.
A recommendation – if you are sending a lot of messages at once (for example, processed from batch jobs), or if you think that your consumers will not keep up with the speed of the publishers all the time, we recommend that you enable lazy queues.
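A queue can be declared lazy with the x-queue-mode argument. A sketch with Spring AMQP, where the queue name batch.queue is an assumption:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

public class LazyQueueDeclaration {

    public static Queue batchQueue() {
        // x-queue-mode=lazy makes the broker page messages to disk
        // immediately instead of keeping them in RAM.
        return QueueBuilder.durable("batch.queue")
                .withArgument("x-queue-mode", "lazy")
                .build();
    }
}
```

The same setting can also be applied through a broker-side policy, which has the advantage that it can be changed without redeclaring the queue.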
Q-8). How to limit the Queue size?
A-8). For applications that often get hit by spikes of messages, and where throughput is more important than anything else, it is recommended to set a max-length on the queue. This keeps the queue short by discarding messages from the head of the queue, so that it never grows larger than the max-length setting.
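The max-length is set with the x-max-length queue argument. A Spring AMQP sketch, where the queue name and the limit of 10,000 messages are illustrative assumptions:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

public class BoundedQueueDeclaration {

    public static Queue spikeQueue() {
        // x-max-length caps the queue at 10,000 messages; once the cap is
        // reached, messages at the head of the queue are dropped (the
        // default overflow behaviour).
        return QueueBuilder.durable("spike.queue")
                .withArgument("x-max-length", 10_000)
                .build();
    }
}
```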
Q-9). What will be the number of queues according to industry standard?
A-9). Queues are single-threaded in RabbitMQ, and one queue can handle up to about 50,000 messages. You will achieve better throughput on a multi-core system if you have multiple queues and consumers, and if you have as many queues as cores on the underlying nodes.
Q-10). For better performance is it better to split queue in different cores?
A-10). One CPU core serves one queue by default. For better performance, split the queues across different cores, and across different nodes where possible.
Q-11). Is it fine to set own names on temporary queues?
A-11). No, it is not a great idea to set your own names on temporary queues. Instead, you should let the server choose a random queue name.
Q-12). Is it a good practice to auto delete queues while it is not being used?
A-12). Yes, it is a good practice to auto-delete queues when they are not being used. There are three ways by which you can auto-delete unused queues:
- TTL Policy – Set a TTL policy, e.g. of 30 days. It means that the queue will be deleted if it hasn’t consumed any messages in the last 30 days.
- Auto delete – A queue will be auto-deleted once the last consumer has cancelled, or when it has lost the TCP connection with the server.
- Exclusive Queue – An exclusive queue can only be used by its declaring connection. These queues are deleted when the declaring connection (TCP connection) is closed or lost.
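The three options above map to queue declarations like the following Spring AMQP sketch (all queue names are illustrative assumptions; the 30-day TTL policy is expressed here as the equivalent x-expires queue argument):

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.Queue;

public class TemporaryQueueDeclarations {

    // TTL-style expiry: the queue itself is deleted after 30 days of
    // non-use (x-expires is specified in milliseconds).
    public static Queue expiringQueue() {
        long thirtyDaysMs = 30L * 24 * 60 * 60 * 1000;
        Map<String, Object> args = new HashMap<>();
        args.put("x-expires", thirtyDaysMs);
        return new Queue("reports.queue", true, false, false, args);
    }

    // Auto-delete: removed when the last consumer cancels or disconnects.
    // Constructor flags: name, durable, exclusive, autoDelete.
    public static Queue autoDeleteQueue() {
        return new Queue("temp.queue", false, false, true);
    }

    // Exclusive: usable only by the declaring connection and deleted
    // when that connection closes.
    public static Queue exclusiveQueue() {
        return new Queue("private.queue", false, true, false);
    }
}
```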
Q-13). How to handle the payload (messages) in Queue?
A-13). Neither sending very large messages nor flooding the queue with many small messages is ideal. One approach is to bundle the data into one larger message and let the consumer split it at their end. However, bundling multiple messages can be a bad alternative if the splitting takes more processing time on the consumer side.
Q-14). Is it good to share channels between threads?
A-14). No, it is not good to share channels between threads, as most clients don’t make channels thread-safe. If you ignore this fact and share channels between threads, it will have a serious negative impact on performance.
Q-15). Is it good to open and close connections/channels repeatedly?
A-15). No. Don’t open and close connections/channels repeatedly. Doing so gives you higher latency, as more TCP packets have to be sent and received.
Q-16). Will publisher and consumer use separate connections?
A-16). It is a good practice to always separate the publisher and consumer connections. Otherwise, when messages are published at a high rate and consumed at a low rate, the server may apply back pressure on the shared connection, which slows down your consumers as well.
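One way to keep the connections separate in Spring AMQP is to define two connection factories and wire one into the template and the other into the listener container factory. A configuration sketch, assuming a broker on localhost with default credentials (bean names are illustrative):

```java
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SeparateConnectionsConfig {

    @Bean
    public CachingConnectionFactory publisherConnectionFactory() {
        return new CachingConnectionFactory("localhost");
    }

    @Bean
    public CachingConnectionFactory consumerConnectionFactory() {
        return new CachingConnectionFactory("localhost");
    }

    @Bean
    public RabbitTemplate rabbitTemplate(CachingConnectionFactory publisherConnectionFactory) {
        // All publishing goes over its own connection.
        return new RabbitTemplate(publisherConnectionFactory);
    }

    @Bean
    public SimpleRabbitListenerContainerFactory listenerFactory(
            CachingConnectionFactory consumerConnectionFactory) {
        // All @RabbitListener consumers use the other connection.
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(consumerConnectionFactory);
        return factory;
    }
}
```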
Q-17). What will be the impact of large no. of connections and channels?
A-17). For every connection and channel, performance metrics are collected, analyzed and displayed by the RabbitMQ Management Interface. So if you have a large number of connections and channels, it will directly impact the performance of the RabbitMQ Management Interface.
Q-18). What will happen if there are more unacknowledged messages?
A-18). All unacknowledged messages must reside in RAM on the servers. If you have too many unacknowledged messages, you will run out of memory.
Q-19). Can I make messages persistent which are published in the queue and can we make the queue durable?
A-19). Yes. If you don’t want to lose any messages from the queue, then create the queue as durable and publish the messages to the queue with the delivery mode set to persistent.
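In Spring AMQP terms this means a durable Queue plus MessageDeliveryMode.PERSISTENT on outgoing messages. A sketch, where the queue name durable.queue is an assumption (note that the JSON converter typically marks messages persistent by default, so the post-processor below just makes the intent explicit):

```java
import org.springframework.amqp.core.MessageDeliveryMode;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PersistentPublishing {

    public static Queue durableQueue() {
        // durable = true: the queue definition survives a broker restart.
        return new Queue("durable.queue", true);
    }

    public static void publish(RabbitTemplate template, Object payload) {
        // Mark the outgoing message as persistent (delivery mode 2) so the
        // broker writes it to disk.
        template.convertAndSend("durable.queue", payload, message -> {
            message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
            return message;
        });
    }
}
```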
Q-20). How to set correct prefetch value?
A-20). There are certain scenarios to set the prefetch value.
If you have a single consumer, or only a few consumers, processing messages quickly, we recommend prefetching many messages at once.
If you have about the same processing time all the time and network behavior remains the same, simply take the total round trip time and divide by the processing time on the client for each message to get an estimated prefetch value.
If the situation is many consumers and short processing time, then set a lower prefetch value.
If the situation is many consumers and long processing time, then set the prefetch value to one (1).
One thing to remember: if your client auto-acknowledges messages, the prefetch value will have no effect.
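The round-trip rule of thumb above can be sketched as a small helper (the millisecond values in the example are assumptions, not measurements):

```java
public class PrefetchEstimator {

    // Estimated prefetch = total round-trip time divided by the per-message
    // processing time on the client, never less than 1.
    public static int estimatePrefetch(double roundTripMs, double processingMs) {
        return Math.max(1, (int) Math.round(roundTripMs / processingMs));
    }

    public static void main(String[] args) {
        // e.g. 100 ms round trip, 5 ms to process each message -> prefetch 20
        System.out.println(estimatePrefetch(100, 5));
    }
}
```

The resulting value can then be applied on the listener side, for example via SimpleRabbitListenerContainerFactory.setPrefetchCount(int) in Spring AMQP.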