How Docker Shares Resources

I have been studying Docker, and I understand from this post that launching multiple Docker containers should be fast because they share kernel-level resources through the "LXC host". However, I have not found any documentation on how this relationship works, how it relates to Docker configuration, and at what level resources are allocated.

How are the Docker image and the Docker container involved with these shared resources, and how are resources allocated?

Edit:

Speaking of the "kernel" where resources are shared, which kernel is it? Does this refer to the host OS (the layer the docker binary runs on), or to the kernel of the image the container is based on? Wouldn't containers based on different Linux distributions run on different kinds of kernels?

Edit 2:

One last edit to make my question a little clearer: I'm curious whether Docker really doesn't run a full OS image, as suggested on this page under "How Docker differs from VM".

The following statement, taken from here, seems to contradict the diagram above:

The container consists of the operating system, user-added files, and metadata. As we have seen, each container is built from an image.

+3




2 answers


Strictly speaking, Docker no longer has to use LXC, the userspace tools; it now uses its own container library, libcontainer, which builds on the same underlying kernel technologies. In fact, Docker can use several different systems for the abstraction between process and kernel (diagram of Docker's execution drivers omitted). The kernel need not be different for different distributions, but you cannot run a non-Linux OS. The kernel of the host and the containers is the same; it simply maintains a kind of context awareness to keep them separated from each other.



Each container has a separate OS in every way apart from the kernel. It has its own user-space applications and libraries, and for all intents and purposes it behaves as if it had its own kernel.
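
If you want to check that for yourself, here is a minimal sketch (the file name whatami.go is mine) that you can compile once and run both on the host and inside any container on that host: the kernel release it prints will be identical everywhere, while /etc/os-release will reflect whichever image the container was built from.

```go
// whatami.go - quick check of the claim above: one shared kernel, per-image userland.
package main

import (
	"fmt"
	"os"
	"strings"
)

// read returns the trimmed contents of a file, or the error text if it is missing.
func read(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "(" + err.Error() + ")"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// There is only one kernel on the host, so this line is identical in every container.
	fmt.Println("kernel release:", read("/proc/sys/kernel/osrelease"))
	// Userland comes from the image, so this differs between, say, a Debian-based
	// and an Alpine-based container running on the same machine.
	fmt.Println("userland:", read("/etc/os-release"))
}
```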

+3




It is not so much a question of which resources are shared as which resources are not. LXC works by creating namespaces with limited visibility - into the process table, into the mount table, into network resources, and so on - but anything that is not explicitly confined to a namespace is shared.

This means, of course, that the backends for all of these pieces are shared too - you don't have to pretend each guest has its own set of page tables, because you aren't pretending to run more than one kernel; it's all one kernel, all the same memory allocation pools, all the same hardware devices doing the bit-twiddling (as opposed to all the overhead of emulating hardware for a virtual machine, with each guest separately driving its own virtual devices); the same block caches; and so on.
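
To make that concrete, here is a minimal sketch in Go (the language libcontainer itself is written in) of the namespace mechanism described above. The file name and the hostname "sandbox" are placeholders of mine, and it needs root on a Linux host; everything not covered by the requested namespaces stays shared.

```go
// namespaces.go - sketch of namespace-based isolation: the program re-executes
// itself as a child in new PID, UTS, and mount namespaces (run as root on Linux).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// Inside the new namespaces: we are PID 1 of our own process table,
		// and changing the hostname here is invisible to the host.
		fmt.Println("child sees itself as PID", os.Getpid())
		syscall.Sethostname([]byte("sandbox"))
		name, _ := os.Hostname()
		fmt.Println("child hostname:", name)
		return
	}

	// Re-exec this same binary inside new namespaces; anything not listed in
	// Cloneflags (kernel, memory allocator, block caches, devices) is shared.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "failed (needs root on Linux):", err)
		os.Exit(1)
	}
	name, _ := os.Hostname()
	fmt.Println("host hostname unchanged:", name)
}
```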



Quite frankly, the question is almost too broad to answer, since the only real answer to what is shared is "almost everything", and to how it is shared, "by not doing duplicate work in the first place" (as conventional virtual machines do by emulating hardware, rather than sharing a single kernel that talks to the real hardware). This is also why kernel exploits are so dangerous on LXC-based systems - it's all one kernel, so there is no nontrivial boundary between ring 0 in one container and ring 0 in another.
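
As for the "how are resources allocated" part of the original question: limits are applied by that same single kernel through control groups, which is what Docker's resource flags (for example docker run --memory) come down to. Below is a hedged sketch assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller available; the group name "demo" is a placeholder of mine, and it needs root.

```go
// cgroup_limit.go - sketch of allocating a resource limit on the shared kernel
// via cgroup v2 (the mechanism behind container memory limits). Run as root.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	cg := "/sys/fs/cgroup/demo" // placeholder group; assumes cgroup v2 is mounted here
	if err := os.Mkdir(cg, 0755); err != nil && !os.IsExist(err) {
		panic(err)
	}
	// Cap memory for every process placed in this group at 64 MiB.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0644); err != nil {
		panic(err) // fails if the memory controller is not enabled for this subtree
	}
	// Move the current process into the group; children it spawns inherit the limit.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "is now memory-limited by", cg)
}
```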

0








