What does "Ack" mean in Apache Storm / Hadoop?

Can anyone tell me what "ack" means in the context of Apache Storm / Hadoop? Does it mean that the tuple is acknowledged once it is considered fully processed and will not be failed? Is it similar to how an ESB deletes messages from its queue after they have been processed? Where does this odd word come from, and is it short for something? I looked here but am still a little confused: https://storm.apache.org/documentation/Concepts.html



1 answer


As for the "Ack" in the context of Apache Storm, it lets the original Spout know that the tuple has been fully processed.

If Storm detects that a tuple has been fully processed, it will call the ack method on the originating Spout task with the message id that the Spout provided to Storm.



(See the Storm documentation on "Guaranteeing Message Processing".)
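As a rough sketch of what this looks like on the spout side (assuming Storm 1.x's org.apache.storm packages; older releases used backtype.storm, and the class name, field name, and fetchNextMessage() helper here are purely illustrative): emitting a tuple with a message id opts it into Storm's ack/fail tracking, and Storm later calls back into ack or fail with that same id.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class AckAwareSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    // In-flight messages keyed by message id, so failed ones can be replayed.
    private final Map<Object, String> pending = new ConcurrentHashMap<>();

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        String message = fetchNextMessage(); // hypothetical input source
        if (message != null) {
            Object msgId = UUID.randomUUID();
            pending.put(msgId, message);
            // Emitting WITH a message id enables Storm's ack/fail tracking.
            collector.emit(new Values(message), msgId);
        }
    }

    @Override
    public void ack(Object msgId) {
        // Storm calls this once every tuple anchored to msgId has been
        // fully processed by the downstream topology.
        pending.remove(msgId);
    }

    @Override
    public void fail(Object msgId) {
        // Called on timeout or an explicit fail(); replay the message.
        String message = pending.get(msgId);
        if (message != null) {
            collector.emit(new Values(message), msgId);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }

    private String fetchNextMessage() {
        return null; // placeholder: read from a queue, file, etc.
    }
}
```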

This is the mechanism Storm uses to guarantee that a particular tuple has made it all the way through the topology.
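Downstream bolts participate by anchoring their emits to the input tuple and then acking it, which is what lets Storm track the whole tuple tree back to the spout. Here is a minimal sketch, again with illustrative names:

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class UppercaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String upper = input.getStringByField("message").toUpperCase();
            // Anchoring the emit to the input tuple links it into the
            // tuple tree that Storm tracks back to the spout.
            collector.emit(input, new Values(upper));
            // Acking marks this node of the tuple tree as processed.
            collector.ack(input);
        } catch (Exception e) {
            // Failing notifies the spout so the message can be replayed.
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("upper"));
    }
}
```

If the tuple tree is not fully acked within the topology's message timeout (30 seconds by default), Storm fails it and the spout's fail method fires, so the message can be replayed.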






