Understanding Docker Swarm from a high availability perspective

I am currently trying to figure out what it takes to build a Docker swarm in order to make some services highly available. I have read a lot of documentation about Docker Swarm, but if my understanding is correct, Swarm will just execute a service on any host. What happens if that host fails? Will the swarm manager restart the services running on that host/node on another one? Is there a better explanation of this than the original documentation found here?

1 answer


It's actually not much more complicated than that. As you said, Swarm (and Kubernetes, and most other tools in this space) is declarative, meaning that you give it the state you want (e.g. "I want 4 redis instances") and Swarm will converge the system to that state. If you have 3 nodes, it might schedule 1 redis on Node 1, 1 on Node 2, and 2 on Node 3. If Node 2 dies, the system is no longer consistent with your declared state, and Swarm will schedule another redis on Node 1 or Node 3 (depending on strategy, etc.).
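As a minimal sketch of this declarative flow (the service name and replica count are just examples), you declare the desired state once and let Swarm maintain it:

```shell
# Declare the desired state: 4 redis replicas across the swarm
docker service create --name redis --replicas 4 redis:7

# Inspect where Swarm scheduled the individual tasks
docker service ps redis

# If a node dies, Swarm reschedules its tasks onto the remaining
# nodes to converge back to 4 running replicas; no manual action needed
```

You never say *where* to run the containers; the scheduler decides, and keeps deciding as nodes come and go.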

Now this dynamism of scheduling containers / tasks / instances causes another problem: service discovery. Swarm handles this by maintaining an internal DNS registry and creating VIPs (virtual IP addresses) for each service. Instead of addressing or tracking each redis instance individually, I can instead address the service alias, and Swarm will automatically route traffic wherever it needs to go.
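A sketch of how that looks in practice (the `app` service and `myapp:latest` image are placeholders): put both services on the same overlay network, and the client reaches redis purely by name:

```shell
# Create an overlay network so services can resolve each other by name
docker network create --driver overlay backend

# The redis service gets a VIP registered in Swarm's internal DNS
docker service create --name redis --network backend --replicas 4 redis:7

# Another service on the same network connects to the hostname "redis";
# Swarm's DNS resolves it to the VIP, and traffic is load-balanced
# across all healthy replicas, wherever they are scheduled
docker service create --name app --network backend myapp:latest
```

The client code inside `app` just connects to `redis:6379` and never needs to know which node any replica landed on.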



There are other considerations, of course:

  • Can your service support multiple backend instances? Is it stateless? Sessions? Caching? Etc.
  • What does "HA" mean for you? Multi-node? Multi-AZ? Multi-region? Etc.
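For the multi-AZ case, one possible approach (node names and the `az` label are hypothetical examples) is to label nodes and use a placement preference so replicas are spread across zones:

```shell
# Label each node with its availability zone
docker node update --label-add az=us-east-1a node-1
docker node update --label-add az=us-east-1b node-2

# Spread the replicas evenly across the distinct "az" label values,
# so losing one zone does not take out every instance
docker service create --name redis --replicas 4 \
  --placement-pref "spread=node.labels.az" redis:7
```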
