I'm running into a situation where MongoDB operations are being starved inside a RabbitMQ consumer.
rabbitConn.createChannel(function(err, channel) {
    channel.consume(q.queue, async function(msg) {
        // Consumer is active on Queue A based on the binding key.
        // Await the query promise directly; Mongoose does not support
        // mixing await with a callback on the same call.
        await Conversations.findOneAndUpdate(
            {'_id': 'someID'},
            {'$push': {'messages': {'body': 'message body'}}}
        );
        // Acknowledge only after the write has completed.
        channel.ack(msg);
    });
});
The issue arises when a large number of messages is being processed: the MongoDB operations wait until the queue is empty. For instance, if there are 1000 messages in the queue, all 1000 are read before a single MongoDB operation is triggered.
- Would separating the workers into a separate Node.js process resolve this issue?
Ans: I attempted decoupling the workers from the main thread, with no success.
- I implemented a load balancer with 10 workers, but it doesn't seem to prioritize the MongoDB operations. Is the event loop neglecting them?
Ans: The 10 workers keep reading from the queue and only execute findOneAndUpdate once the queue is empty.
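For comparison, I sketched what a bounded delivery window would look like: the fake queue below only hands over the next message after the previous one is acknowledged, which is roughly what I understand channel.prefetch(1) with manual acks to do in amqplib. The helper names here are invented for the sketch:

```javascript
const order = [];
const pending = [1, 2, 3];

// Deliver one message at a time; the callback plays the role of ack.
function deliverNext() {
    const msg = pending.shift();
    if (msg === undefined) return;
    handle(msg, deliverNext);
}

async function handle(msg, ack) {
    order.push(`read ${msg}`);
    // Fake DB write deferred to a later turn of the event loop.
    await new Promise((resolve) => setImmediate(resolve));
    order.push(`write ${msg}`);
    ack(); // only now is the next message delivered
}

deliverNext();
setTimeout(() => console.log(order.join(', ')), 50);
// prints: read 1, write 1, read 2, write 2, read 3, write 3
```

With the window capped, reads and writes interleave instead of all reads happening first. I have not yet confirmed whether this maps cleanly onto my real consumer, so corrections are welcome.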
Any insights or assistance would be highly valued.
Thank you