How do I use Docker in both dev and production environments?

We currently use Chef for both the production and development environments. I love Docker's concept of running isolated containers for different service roles, and I think it will work great for creating a development environment. But I'm confused about how we should use it in production (or whether we should use it in production at all).

In production, each service already runs on its own server instance. It seems inefficient to run them inside a container rather than directly on the host operating system.

On the other hand, if we only use Docker in the dev environment, we end up maintaining two copies of the system configuration, one in Docker and one in Chef, which is also not ideal.

Any suggestions or advice would be appreciated.



2 answers


In production, each service already runs on its own server instance. It seems inefficient to run them inside a container rather than directly on the host operating system.

The advantage of Docker in production is ease of deployment. To keep performance at its best, install Docker on each of your production machines and run only one container per Docker host. That way your applications have access to the same system resources as before.
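As a minimal sketch of that one-container-per-host setup (the image name and tag here are hypothetical placeholders for your own service image):

```shell
# Hypothetical image "myorg/app:1.0"; substitute your own.
# Run the service as the only container on this host, detached,
# and restart it automatically if the process or daemon dies.
docker run -d \
  --name app \
  --restart=always \
  myorg/app:1.0
```

Since there is only one container, it competes with nothing else for CPU, memory, or I/O on that host.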

To reduce the overhead that Docker can introduce, there are a few tricks:

fast network



By default, Docker creates a new network stack for each container, but if you use the --net=host option when starting a container, the container will use the Docker host's network stack instead. This means the container has no network performance overhead.

Also note that with --net=host you don't need to publish ports with the -p option of docker run, nor expose them: any port your container processes listen on will be available directly on the Docker host's IP.
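The above can be sketched in a single command (again, the image name is a placeholder):

```shell
# Start the container on the host's network stack instead of a
# private bridge network. No -p/--publish flags are needed: any
# port the process listens on appears directly on the host's IP.
docker run -d --net=host --name app myorg/app:1.0
```

The trade-off is that the container's ports can now collide with other services on the host, which is another reason the one-container-per-host layout works well here.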

fast file system

The Docker container filesystem is a union filesystem, which is slow compared to a native filesystem. To keep disk performance good, make sure the processes running in your container do their intensive read/write operations on Docker data volumes. Data volumes are not part of the layered container filesystem and perform like the Docker host's filesystem.
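A sketch of that volume trick, using the official postgres image as an example of a write-heavy service (the host path is illustrative):

```shell
# -v mounts a host directory into the container, bypassing the
# union filesystem, so the database's reads and writes run at
# native host-filesystem speed and survive container removal.
docker run -d \
  --name db \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres:13
```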



Docker is actually quite efficient: the overhead is small because it is not a virtualization layer, just a container with its own namespaces and filesystem. Deploying the same way in production has several advantages:



  • You only need to test once: the same thing runs in production that you run locally, and there is very little chance of configuration problems.
  • You can use the exact same cloud image for every instance: a base Linux that runs Docker. Everything else is handled by Docker, so 90% of the need for Chef/Puppet is taken care of.
  • Tracking configuration changes is arguably easier with Docker, since there are mostly no scripts, so other configuration management tools are largely unnecessary.
  • You can run multiple containers from the same image on a production host if you want to take advantage of multiple processors, and you don't have to worry about how those processes interact, because each one has its own filesystem, network namespace, and so on.
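The last point can be sketched like this, using the official nginx image purely for illustration:

```shell
# Start two containers from the same image. Each gets its own
# filesystem and namespaces; only the published host ports differ,
# so the two instances cannot interfere with each other.
docker run -d --name web1 -p 8081:80 nginx:alpine
docker run -d --name web2 -p 8082:80 nginx:alpine
```

A load balancer on the host can then spread traffic across ports 8081 and 8082.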






