Why do my docker mount volume files turn into folders inside the container?

The scenario is Docker-inside-of / next-to Docker via socket binding, with the goal of having an easily deployable and scalable build agent for CI / CD tools (in this particular case, VSTS). The reason for this is that various projects I want to test use docker / docker-compose to run their tests, and setting up every CI / CD worker to be docker-compatible by hand becomes cumbersome and time consuming. (This will eventually be deployed to 4+ Kubernetes clusters.)

Anyway the problem is:

Replication steps

  1. Run the vsts-agent image

docker run \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
  /bin/bash

  2. Run another image (to emulate docker compiling / testing the current project)

echo 'test' > file-test.txt
docker run -it -v file-test.txt:/file-test.txt busybox /bin/sh

  3. Check whether file-test.txt exists

cd /
ls -la  # shows that file-test.txt is a directory

So
 - why are files mounted as folders inside containers?
 - what do I need to do to mount the volumes correctly?

Solution A - thanks to @BMitch

# On Host machine
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/vsts/work/:/tmp/vsts/work \
   nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
   /bin/bash

# In vsts-agent-with-aws-ecr
cd /tmp/vsts/work/
git clone https://NullVoxPopuli@bitbucket.org/group/project.git
cd project/
./scripts/run/eslint.sh
# Success! (this uses docker-compose to map files to the node-based docker image)

1 answer


Docker creates containers and mounts volumes from the docker host. Whenever a file or directory in a volume mount doesn't exist, it gets initialized as an empty directory. So if you are running docker commands from inside of a container against the docker socket, those commands get interpreted outside the container, on the docker host, where the file does not exist. Additionally, the docker run command requires the full path to the volume being mounted when you want a host volume; otherwise, it is interpreted as a named volume.
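
For example, running docker volume ls on the docker host should show that the relative source path from the question was auto-created as an (empty) named volume, which is why it shows up as a directory inside the container:

# On the docker host: list volumes; the relative source path from the
# question appears here as an auto-created named volume
docker volume ls
# expected to include something like:
# DRIVER    VOLUME NAME
# local     file-test.txt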

At this point, you probably want to do the following:

docker volume rm file-test.txt
docker run -it -v $(pwd)/file-test.txt:/file-test.txt busybox /bin/sh


If instead you are trying to include a file from inside one container into another container, you can initialize a named volume with input redirection like this:

tar -cC . . | docker run -i --rm -v file-test:/target busybox tar -xC /target
docker run -it -v file-test:/data busybox /bin/sh


That uses tar to copy the contents of the current directory to stdout, which is piped into the interactive docker command, which in turn extracts those contents into /target inside the container, /target being a named volume. Note that I did not mount the volume at the root of the filesystem in this second example, since named volumes are directories and I did not want to replace the root filesystem.
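
One way to verify the copy, for example, is to list the contents of the named volume from a throwaway container, mounting it read-only so nothing gets modified:

# Inspect the named volume file-test without modifying it
docker run --rm -v file-test:/data:ro busybox ls -la /data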

Another option is to share a volume mount point between multiple containers on the docker host, so that files you edit inside one container land on the host, where they are also mounted into another container and visible there:

docker run \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /container-data:/container-data \
  nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
  /bin/bash
echo 'test' > /container-data/test-file.txt
docker run -it -v /container-data:/container-data busybox /bin/sh
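
A quick check that the file is visible on both sides might look like this:

# Inside the busybox container started above
cat /container-data/test-file.txt   # should print: test
# The same file also exists on the docker host at /container-data/test-file.txt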


I do not recommend mounting individual files into a container if those files may be modified while the container is running. File changes often result in a changed inode, and docker will keep the old inode mounted into the container. As a result, changes to the file, either inside or outside the container, may not be visible on the other side, and if you modify the file inside the container, that change may be lost when the container is deleted. The solution to the inode issue is to mount the entire directory into the container.
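
A minimal sketch of that approach, assuming the file lives in the current directory on the docker host, is to mount the parent directory rather than the file itself:

# Mount the whole directory instead of the single file; tools that replace the
# file (creating a new inode) remain visible through the directory mount
docker run -it -v $(pwd):/data busybox /bin/sh
# inside the container:
echo 'updated' > /data/file-test.txt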
