Currently, I am tackling a feature that involves tracking an asynchronous request within a synchronous one. Let me elaborate.
The code snippet I am working with looks something like this:
const myObj = {};

function sendMessage(requestId, data) {
  kafkaProducer("requestTopic", { requestId, ...data });
  return new Promise(resolve => {
    // The challenge lies here: this executor runs immediately,
    // before the consumer has had a chance to populate myObj[requestId]
    const dataINeed = myObj[requestId];
    // This object is intended to be temporary only
    delete myObj[requestId];
    resolve(dataINeed);
  });
}

kafkaConsumer("callbackTopic", (response) => {
  myObj[response.requestId] = response.data;
});
I know the current code does not work as written; that is exactly the problem. My goal is to resolve() the Promise only once the consumer has actually stored the response object, or on timeout.
Attaching the Promise to the consumer directly is not a viable option, as simultaneous requests necessitate their own distinct data rather than the data from the first response.
An alternative approach would be to create a new consumer with a unique topic for each request. Once the response is consumed, the consumer would be shut down and the topic deleted, to avoid flooding the Kafka cluster with ghost topics and partitions. However, I am not sure whether this approach is advisable.
What prompts this necessity?
This functionality is meant to replace traditional database requests with a Database-as-a-Service model, while keeping an interface similar to what typical database APIs (such as mongoose) provide.