Publish / Subscribe Reliable Messages: Redis vs RabbitMQ

Background

I am making a sample publish / subscribe application where the publisher sends messages to the consumer.

The publisher and consumer are on different machines, and communication between them can sometimes be interrupted.

Purpose

The goal is to make sure that no matter what happens to the connection or to the machines themselves, a message sent by the publisher is always received by the consumer.

Ordering messages is optional.

Problem

According to my research, RabbitMQ is the right choice for this scenario:

However, while RabbitMQ has a publish/subscribe tutorial, that tutorial does not introduce persistent queues, nor does it mention confirms, which I believe are the key to guaranteeing message delivery.
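For reference, the combination I believe is needed looks roughly like this with amqplib (a sketch only; `publishReliably` is my own name, and the connection lines are shown as comments, not taken from the tutorials):

```javascript
// Durable queue + persistent messages + publisher confirms:
// the pieces the official publish/subscribe tutorial leaves out.
async function publishReliably(channel, queue, payload) {
  // A durable queue survives a broker restart.
  await channel.assertQueue(queue, { durable: true });
  // persistent: true asks the broker to write the message to disk.
  channel.sendToQueue(queue, Buffer.from(JSON.stringify(payload)), {
    persistent: true,
  });
  // On a ConfirmChannel, waitForConfirms resolves once the broker
  // has acknowledged everything sent so far.
  await channel.waitForConfirms();
}

// Wiring it up would look roughly like:
//   const amqp = require('amqplib');
//   const conn = await amqp.connect('amqp://localhost');
//   const channel = await conn.createConfirmChannel();
//   await publishReliably(channel, 'tasks', { id: 1 });
```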

On the other hand, Redis can do this as well:

but I couldn't find any official tutorials or examples, and my current understanding leads me to believe that persistent queues and message acknowledgments would have to be implemented by us, since Redis is mainly an in-memory data store and not a message broker like RabbitMQ ...
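A rough sketch of what I suspect we would have to build ourselves on Redis (assuming the node-redis v4 client; `consumeOnce` and the list names are my own, not from any official example):

```javascript
// Reliable-queue pattern on Redis: move a message to a per-consumer
// "processing" list before handling it, and only remove it ("ack")
// after the handler succeeds. A crash leaves the message recoverable
// in the processing list instead of losing it.
async function consumeOnce(redis, pending, processing, handler) {
  // LMOVE atomically pops from `pending` and pushes onto `processing`.
  const msg = await redis.lMove(pending, processing, 'LEFT', 'RIGHT');
  if (msg === null) return false; // queue empty
  await handler(JSON.parse(msg));
  // The "ack": remove one occurrence of the message from processing.
  await redis.lRem(processing, 1, msg);
  return true;
}
```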

Questions

  • For this use case, what would be the easiest solution to implement? (Redis solution or RabbitMQ solution?)
  • Please provide a link to an example of whichever you think is best.
+3




3 answers


Background

I originally wanted to publish and subscribe with persistent messages and queues.

This is, in theory, not what publish/subscribe is meant for:

  • This pattern does not care whether messages are received or not. The publisher simply fires off messages; if there are subscribers listening, fine, otherwise it doesn't care.

Indeed, looking at my needs, I would actually need more of a Work Queues pattern, or even an RPC pattern.

Analysis

People say both should be easy, but this is really subjective.

RabbitMQ has the better official documentation overall, with clear examples in most languages, while information on Redis mostly lives in third-party blogs and sparse repositories, which makes it much harder to find.

In terms of examples, RabbitMQ has two examples that clearly answer my questions:

By mixing the two, I was able to have the publisher reliably deliver messages to multiple consumers, even if one of them fails. Messages are not lost or forgotten.

RabbitMQ downsides:



  • The biggest problem with this approach is that if a consumer/worker crashes, you need to define the logic yourself to make sure tasks are not lost. This is because, following an RPC pattern with the durable queues from Work Queues, the server will keep re-delivering a message to the worker until it is acknowledged. But a worker that crashed cannot tell whether it already processed that message before, so it may end up handling the same message multiple times. To fix this, each work message must carry an ID that you save to disk (in case of failure), or the requests must be idempotent.
  • Another problem is that if the connection is lost, the clients throw errors because they cannot connect. This is also something you must prepare for in advance.
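A minimal sketch of the deduplication logic described in the first point (an in-memory Set stands in for the disk store the answer suggests; `makeIdempotentHandler` is an illustrative name):

```javascript
// Wrap a message handler so that redeliveries of a message with an
// already-seen ID are skipped instead of processed twice.
function makeIdempotentHandler(handler, seenIds = new Set()) {
  return (msg) => {
    if (seenIds.has(msg.id)) return false; // duplicate redelivery: skip
    seenIds.add(msg.id); // a crash-safe version would persist this to disk
    handler(msg);
    return true;
  };
}
```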

As far as Redis is concerned, there is a good example of durable queues on this blog:

which follows the official recommendation. You can check the GitHub repo for more information.

Redis downsides:

  • As with RabbitMQ, you also need to handle worker failures yourself, otherwise in-progress tasks will be lost.
  • You need to poll. Every consumer has to ask the producer every X seconds whether there are new messages.
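The polling a Redis consumer needs could be sketched like this (`fetch` and the interval are placeholders of my own; a real fetch would call something like LPOP against the queue):

```javascript
// Poll a fetch function every intervalMs, draining any queued
// messages into onMessage; returns a function that stops the loop.
function startPolling(fetch, onMessage, intervalMs = 1000) {
  const timer = setInterval(async () => {
    let msg;
    while ((msg = await fetch()) !== null) onMessage(msg);
  }, intervalMs);
  return () => clearInterval(timer);
}
```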

This is, in my opinion, worse than RabbitMQ.

Conclusion

I will end up going with RabbitMQ, for the following reasons:

  • More robust official online documentation with examples.
  • Consumers don't need to poll the producer.
  • Error handling is just as easy as with Redis.

With that in mind, I'm pretty sure Redis is the worse option in this particular scenario.

Hope this helps.

+6




In terms of implementation, they should both be lightweight: they both have libraries in various languages, check here for Redis and here for RabbitMQ. I'll just be honest here: I don't use JavaScript, so I don't know how well the respective libraries are implemented or maintained.

Regarding what you didn't find in the tutorials (or perhaps missed in the second one, which does say a few words about durable queues, persistent messages, and message acknowledgments), there are some things that are well explained:



Publisher confirms aren't really covered in the tutorials, but there is an example on GitHub in the amqp.node repository.

In RabbitMQ, messages (in most cases) flow like this:

publisher -> exchange -> queue -> consumer

and there is some form of persistence available at each of these stops. Also, if you get into clustering and queue mirroring, you will achieve even greater reliability (and availability, of course).
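To illustrate, durability at the first stops can be requested like this with amqplib (a sketch; `setUpDurableTopology` is my own name, and persistent messages plus consumer acknowledgments cover the remaining stops):

```javascript
// Declare a durable exchange and a durable queue, then bind them:
// both survive a broker restart, so the first two "stops" are safe.
async function setUpDurableTopology(ch, exchange, queue) {
  await ch.assertExchange(exchange, 'direct', { durable: true }); // stop 1
  await ch.assertQueue(queue, { durable: true });                 // stop 2
  await ch.bindQueue(queue, exchange, queue);
  // Stops 3 and 4: publish with { persistent: true } and have the
  // consumer call ch.ack(msg) only after it has processed the message.
}
```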

+1




I think they are both easy to use, as there are many libraries developed for them.

There are several, such as disque, bull, kue, and amqplib.

The documentation for these is pretty good. You can copy, paste, and have something running in minutes.

I am using Seneca, and seneca-amqp-transport is a pretty good example:

https://github.com/senecajs/seneca-amqp-transport

0








