Request-response via queue with Hazelcast

I wonder if I can make a request-response with this:

  • 1 Hazelcast instance/member (the central point)
  • 1 application with a Hazelcast client sending requests via a queue
  • 1 application with a Hazelcast client waiting for requests on that queue

The first application also receives a response in another queue sent by the second application.

Is this a good way to proceed? Or can you think of a better solution?

Thanks!

+3




5 answers


I ran a test myself and confirmed that it works well, with one constraint.

Architecture: Producer → Hazelcast node → Consumer(s)

Using two Hazelcast queues, one for requests and one for responses, I measured round trips of about 1 ms.



Load balancing works fine if I put multiple consumers on the request queue.

If I add another node and connect the clients to separate nodes, the round trip rises above 15 ms. This is due to replication between the two Hazelcast nodes. If I kill a node, the clients keep working, so failover works, at the cost of latency.
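The two-queue round trip described above can be sketched as follows. This is a minimal, self-contained illustration: it uses local `LinkedBlockingQueue`s as stand-ins for the distributed queues, which is a reasonable substitution because Hazelcast's `IQueue` implements `java.util.concurrent.BlockingQueue`; the queue names in the comments are assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoQueueRoundTrip {
    public static void main(String[] args) throws InterruptedException {
        // In a real setup these would come from a Hazelcast client, e.g.:
        //   BlockingQueue<String> requests  = hazelcastInstance.getQueue("requests");
        //   BlockingQueue<String> responses = hazelcastInstance.getQueue("responses");
        // Local queues stand in here to keep the sketch self-contained.
        BlockingQueue<String> requests  = new LinkedBlockingQueue<>();
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();

        // Consumer side: poll the request queue, push a response.
        Thread consumer = new Thread(() -> {
            try {
                String req = requests.take();
                responses.put("echo:" + req);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer side: send a request, block on the response queue,
        // and time the round trip (this is how the ~1 ms figure is measured).
        long start = System.nanoTime();
        requests.put("ping");
        String resp = responses.take();
        long micros = (System.nanoTime() - start) / 1_000;
        consumer.join();
        System.out.println(resp);
        System.out.println("round trip: " + micros + " us");
    }
}
```

With multiple consumer threads (or processes) taking from the same request queue, Hazelcast distributes the requests among them, which is the load balancing behaviour mentioned above.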

+1




For the last couple of days I've also been working on an SOA-like solution that uses Hazelcast queues to communicate between different processes on different machines.

My main goals were:

  • one-to-one communication with a guaranteed response

  • one-way one-to-one communication

  • one-to-one communication with a response within a given time

In short, today I abandoned this approach, for the following reasons:

  • a lot of complex code: executor services, callables, runnables, interrupt handling, shutdown handling, Hazelcast transactions, etc.

  • stale messages in the case of one-to-one communication where the receiver has a shorter lifetime than the sender

  • lost messages if I killed certain cluster members at just the wrong moment

  • all cluster members must be able to deserialize the message, because it can be stored anywhere. Messages therefore cannot be "private" to particular clients and services.



I switched to a much simpler approach:

  • all "services" register themselves in a MultiMap (the "service registry"), using the Hazelcast cluster UUID of their member as the key. Each entry carries some metadata such as service id, load factor, start time, host, PID, etc.

  • clients pick the UUID of one of the entries in this MultiMap and use a DistributedTask targeted at that specific cluster member to invoke the service and, optionally, receive a response (with a timeout)

  • only the service client and the service itself need the specific DistributedTask implementation on their classpath; the other cluster members are not affected

  • clients can easily detect dead entries in the service registry themselves: if no cluster member with that UUID is visible (hazelcastInstance.getCluster().getMembers()), the service has died, probably unexpectedly. Clients can then pick "live" entries, entries with a lower load factor, retry in the case of idempotent services, and so on.
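The registry lookup and liveness check in the last bullet can be sketched in plain Java. This is a hypothetical, self-contained model: the `ServiceEntry` record and the `pick` helper are illustrative names, the registry stands in for the Hazelcast MultiMap, and the set of live UUIDs stands in for what `hazelcastInstance.getCluster().getMembers()` would return.

```java
import java.util.*;

public class ServiceRegistrySelection {
    // Hypothetical registry entry; in the answer above this lives in a
    // Hazelcast MultiMap keyed by the owning member's cluster UUID.
    record ServiceEntry(UUID memberId, String serviceId, double loadFactor) {}

    // Pick a live entry with the lowest load factor. Entries whose member
    // UUID is no longer visible in the cluster are treated as dead and skipped.
    static Optional<ServiceEntry> pick(Collection<ServiceEntry> registry,
                                       Set<UUID> liveMemberIds) {
        return registry.stream()
                .filter(e -> liveMemberIds.contains(e.memberId()))
                .min(Comparator.comparingDouble(ServiceEntry::loadFactor));
    }

    public static void main(String[] args) {
        UUID alive = UUID.randomUUID();
        UUID dead  = UUID.randomUUID();
        List<ServiceEntry> registry = List.of(
                new ServiceEntry(dead,  "billing", 0.1), // member gone: skipped
                new ServiceEntry(alive, "billing", 0.7),
                new ServiceEntry(alive, "billing", 0.3));
        ServiceEntry chosen = pick(registry, Set.of(alive)).orElseThrow();
        System.out.println(chosen.loadFactor()); // least-loaded live entry
    }
}
```

Once a live member UUID is chosen, the client submits its task to that member; note that `DistributedTask` is the Hazelcast 2.x API, and newer versions expose the same idea via `IExecutorService.submitToMember`.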

With this second approach, programming becomes very simple and powerful (e.g. timeouts, cancelling tasks), with much less code to maintain.

Hope this helps!

+3




In the past, we built an SOA system that uses Hazelcast queues as a bus. Here are some highlights.

a. Each service has an incoming queue; the service name is simply the queue name. You can have as many service providers as you like, and scale up and down. All these providers need to do is poll this queue and process the incoming requests.

b. Since the system is completely asynchronous, each request and response carries a call ID so they can be matched to each other.

c. Each client sends its request to the queue of the service it wants to invoke. The request contains all the parameters for the service, the name of the queue to send the response to, and the call ID. The response queue name can simply be the client's address, so each client has its own unique queue.

d. After receiving a request, the service provider processes it and sends the response to the named response queue.

e. Each client also continually polls its own queue for responses to the requests it has sent.
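Steps a–e can be sketched as follows. This is a minimal model under stated assumptions: the `Request`/`Response` envelope records and all names are hypothetical, and local `LinkedBlockingQueue`s stand in for the Hazelcast queues (`IQueue` implements `BlockingQueue`, so the polling code looks the same against the real thing).

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueBusSketch {
    // Hypothetical envelopes: the request carries the caller's reply-queue
    // name and a call ID so the client can match responses to requests.
    record Request(String callId, String replyQueue, String payload) {}
    record Response(String callId, String payload) {}

    public static void main(String[] args) throws Exception {
        // Stand-ins for Hazelcast queues; in real code these would be
        // hazelcastInstance.getQueue("serviceName") and one queue per client.
        BlockingQueue<Request> serviceQueue = new LinkedBlockingQueue<>();
        Map<String, BlockingQueue<Response>> clientQueues =
                Map.of("client-1", new LinkedBlockingQueue<>());

        // Service provider: poll its own queue, reply to the queue named
        // in the request (steps a and d).
        Thread provider = new Thread(() -> {
            try {
                Request req = serviceQueue.take();
                clientQueues.get(req.replyQueue())
                        .put(new Response(req.callId(), req.payload().toUpperCase()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        provider.start();

        // Client: send a request carrying its queue name and a call ID,
        // then poll its own queue for the response (steps b, c, e).
        serviceQueue.put(new Request("call-42", "client-1", "hello"));
        Response resp = clientQueues.get("client-1").take();
        provider.join();
        System.out.println(resp.callId() + " " + resp.payload());
    }
}
```

Because the reply-queue name travels inside the request, the service needs no configuration per client, and adding providers is just a matter of starting more pollers on the same service queue.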

The main disadvantage of this design is that queues are not as scalable as maps, so the system is not hugely scalable. Still, it can handle about 5K requests per second.

+2




Couldn't you use a correlation ID to do request-response over a single shared queue? That is an identifier that uniquely identifies a conversation between two queue producers/consumers.
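One way to realise this suggestion: the client keeps a map of pending calls keyed by correlation ID, and a dispatcher completes the right pending call as each response arrives on the shared queue. A hypothetical sketch, again using local queues as stand-ins for Hazelcast `IQueue` (all names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.*;

public class CorrelationIdDemo {
    public static void main(String[] args) throws Exception {
        // Messages are {correlationId, body} pairs; one shared response queue
        // serves all in-flight calls, and the correlation ID routes each
        // response back to the right pending request.
        BlockingQueue<String[]> requests  = new LinkedBlockingQueue<>();
        BlockingQueue<String[]> responses = new LinkedBlockingQueue<>();
        Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

        // Responder: echo each request back, preserving its correlation ID.
        Thread responder = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    String[] req = requests.take();
                    responses.put(new String[]{req[0], "re:" + req[1]});
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        // Dispatcher: complete the future matching each response's ID.
        Thread dispatcher = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    String[] resp = responses.take();
                    pending.remove(resp[0]).complete(resp[1]);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        responder.start();
        dispatcher.start();

        // Two concurrent calls over the same pair of queues.
        CompletableFuture<String> a = new CompletableFuture<>();
        CompletableFuture<String> b = new CompletableFuture<>();
        pending.put("id-1", a);
        pending.put("id-2", b);
        requests.put(new String[]{"id-1", "first"});
        requests.put(new String[]{"id-2", "second"});
        System.out.println(a.get() + " / " + b.get());
        responder.join();
        dispatcher.join();
    }
}
```

The trade-off versus the per-client-queue design above: fewer queues to manage, but every client of the shared response queue must take and dispatch messages itself rather than letting the queue do the routing.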

+1




What is the purpose of this, @unludo? I'm just curious.

0








