Puzzled by setting up cpushare on Docker.
I wrote a test program, cputest.py, in Python:

import time

while True:
    for _ in range(10120 * 40):
        pass
    time.sleep(0.008)
which uses about 80% of a CPU core when run alone in a container, without interference from other running containers.
Then I ran this program in two containers with the following two commands:
docker run -d -c 256 --cpuset=1 IMAGENAME python /cputest.py
docker run -d -c 1024 --cpuset=1 IMAGENAME python /cputest.py
and used "top" to view their CPU usage. It turned out that they used roughly 30% and 67% respectively. I am very puzzled by this result. Can anyone kindly explain it to me? Many thanks!
I sat down last night and tried to figure it out myself, but in the end I couldn't explain the 70/30 split. So I emailed the other developers and got this answer, which I think makes sense:
I think you are slightly misunderstanding how task scheduling works, which is why the math doesn't work out. I'll try to dig out a good article, but at a basic level the kernel assigns time slices to each task that needs to run, allocating the slices in proportion to each task's priority.
So with these priorities and a hard loop of code (no sleep), the kernel assigns 4/5 of the slices to the task with 1024 shares and 1/5 to the task with 256 shares. Hence the split is 80/20.
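That proportional split can be sketched with a quick calculation; the 256 and 1024 values are the --cpu-shares (-c) weights from the docker run commands above:

```python
# cpu-shares weights passed with -c to the two containers
shares_a = 256
shares_b = 1024

total = shares_a + shares_b

# Under contention, the scheduler divides CPU time between the
# two cgroups in proportion to their shares.
split_a = shares_a / total  # 1/5
split_b = shares_b / total  # 4/5

print(f"container a: {split_a:.0%}, container b: {split_b:.0%}")
```

Note that cpu-shares are only relative weights: they matter only when both tasks are runnable at the same time, which is exactly why the sleeps complicate the picture below.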
However, when you add a sleep, it becomes more complex. Sleep basically tells the kernel to yield the current task, and execution returns to that task after the sleep time has elapsed. The wait can be longer than the specified time, especially if higher-priority tasks are running. When nothing else is runnable, the kernel just sits idle for the sleep time.
But when you have two tasks, the sleeps allow the two tasks to interleave: while one sleeps, the other can execute. This likely leads to complex execution patterns that can't be modeled with simple arithmetic. Feel free to prove me wrong there!
I think another reason for the 70/30 split is how you generate the "80% load". The loop count and sleep duration you chose just happen to work on your PC with a single task. You could try basing the loop on elapsed time instead: busy-loop for 0.8 s, then sleep for 0.2 s. That might give you something closer to 80/20, but I don't know.
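A sketch of that time-based variant of cputest.py (the 0.8/0.2 duty cycle and the helper name burn_cycle are illustrative, not from the original program):

```python
import time

def burn_cycle(busy=0.8, period=1.0):
    """Spin the CPU for `busy` seconds, then sleep for the rest of `period`."""
    start = time.monotonic()
    while time.monotonic() - start < busy:
        pass  # busy-wait
    # sleep for whatever is left of the period (never a negative time)
    time.sleep(max(0.0, period - (time.monotonic() - start)))

# In the container you would loop forever:
#   while True:
#       burn_cycle(0.8, 1.0)
```

Because the busy phase is measured in wall-clock time rather than loop iterations, the load stays near 80% regardless of CPU speed or how the scheduler stretches individual slices.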
So your sleep call is skewing your expected numbers, and removing it brings the CPU load much closer to what you expected.