Automating load balancing and scaling of microservices

I've been reading about microservices for days now, and I was wondering: how do people go about automating the load balancing and scaling of these things?

I have a specific scenario in mind for what I would like to achieve, but I'm not sure whether it's possible, or whether I'm thinking about it the wrong way. So here it is...


Let's say I have a cluster of three CoreOS machines named A, B and C.

The first thing I want is transparent deployment, which I can probably use fleet for.
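(For context, a fleet deployment on CoreOS is usually described with a systemd unit template; the service name, image, and port below are made-up placeholders, just a sketch of what such a unit might look like:)

```ini
# myservice@.service — a fleet unit template; the "@" lets you start
# several instances (myservice@1, myservice@2, ...) across the cluster.
[Unit]
Description=My service (instance %i)
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill myservice-%i
ExecStartPre=-/usr/bin/docker rm myservice-%i
ExecStart=/usr/bin/docker run --name myservice-%i -p 8080 myservice:latest
ExecStop=/usr/bin/docker stop myservice-%i

[X-Fleet]
# Keep instances of the same service on different machines (A, B, C).
Conflicts=myservice@*.service
```

You would then run `fleetctl submit myservice@.service` once, and start instances with `fleetctl start myservice@1.service myservice@2.service`.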

Then I would like to detect when one of the services is under heavy load, automatically deploy another instance of it, and load balance between the instances without disrupting the other services that use it (traffic goes through the load balancer from then on).

Another way could be that I manually deploy a new version of a service, which is then automatically registered with the load balancer so that traffic is routed to it.

And one last question: how is all of this different from something like an Akka cluster, and how does development with those technologies differ from developing microservices?

+3




1 answer


In my opinion, the question you asked hints at its own answer: "traffic goes through the load balancer from now on".

I would say: traffic should always go through the load balancer.

Even in your simplest case, when you have one instance of each service, traffic should still go through the load balancer (by the way, I think it's a good idea to have at least two instances of every service).
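Assuming the balancer is nginx (just one possible choice; the addresses and ports below are made up), the config looks the same whether there are one or many instances behind it:

```nginx
# Traffic always enters through this upstream, even with one instance.
upstream myservice {
    server 10.0.0.11:8080;  # instance on machine A
    server 10.0.0.12:8080;  # instance on machine B
}

server {
    listen 80;
    location / {
        proxy_pass http://myservice;
    }
}
```

Adding or removing an instance is then just an edit to the `upstream` block and a reload, which is exactly the part you can automate.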



In this case, when you get 3x the traffic and want to deploy another container of the same service, the new container, once up and running, must register with the service discovery tool, which in turn automatically updates the load balancer configuration to add the new upstream.
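That register-and-update step is usually handled by a tool like confd or consul-template watching etcd. As a rough sketch of what such a tool does (the service name, addresses, and helper function here are all hypothetical, not a real API), regenerating the upstream block from the discovered instances might look like:

```python
# Sketch: rebuild an nginx upstream block from the instances currently
# registered in a service-discovery store (faked here as a plain list).
def render_upstream(name, instances):
    """instances: list of 'host:port' strings, e.g. pulled from etcd keys."""
    lines = [f"upstream {name} {{"]
    for addr in sorted(instances):
        lines.append(f"    server {addr};")
    lines.append("}")
    return "\n".join(lines)

# Pretend these addresses were read from keys under /services/myservice/
discovered = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
print(render_upstream("myservice", discovered))
```

A real setup would watch the store for changes, write the rendered block into nginx's config directory, and reload nginx (`nginx -s reload`), so new containers start receiving traffic without anyone editing config by hand.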

Using this approach, you can then scale your services up and down much more easily.

+4








