Executing `ls` in Docker with a shared volume results in "No such file or directory"
I am writing a Node program that uses dockerode as a Docker client. The program creates a container with a volume that is bound to a directory on the host when the container starts. Once the container is started, I try to print the contents of the shared volume to prove it is working correctly. However, I keep getting `ls: /tmp/app: No such file or directory`.
Here is my code ...
var Docker = require('dockerode'),
    docker = new Docker(),
    mkdirp = require('mkdirp'),
    volume = (process.env.HOME || process.env.HOMEPATH || process.env.USERPROFILE) + '/' + Date.now();

function handleError(action, err) {
  if (err) {
    console.error('error while ' + action + '...');
    console.error(err);
  }
}

mkdirp.sync(volume);

docker.createContainer({
  Image: 'ubuntu',
  Volumes: {
    '/tmp/app': {}
  }
}, function(err, container) {
  handleError('building', err);
  container.start({
    Binds: [volume + ':/tmp/app']
  }, function(err, data) {
    handleError('starting', err);
    container.exec({
      AttachStdout: true,
      AttachStderr: true,
      Tty: false,
      Cmd: ['/bin/ls', '/tmp/app']
    }, function(err, exec) {
      handleError('executing `ls /tmp/app`', err);
      exec.start(function(err, stream) {
        handleError('handling response from `ls /tmp/app`', err);
        stream.setEncoding('utf8');
        stream.pipe(process.stdout);
      });
    });
  });
});
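As a side note on reading the output: with `Tty: false`, the stream that comes back from `exec.start` is multiplexed, with stdout and stderr frames interleaved, each preceded by an 8-byte header (one byte for the stream type, three zero bytes, then a big-endian 32-bit payload length). dockerode exposes a demultiplexer for this (`docker.modem.demuxStream`); the sketch below is only an illustration of the frame format, not part of the question's code.

```javascript
// Sketch of Docker's stream multiplexing format (illustration only).
// Each frame: [streamType, 0, 0, 0, size3, size2, size1, size0] + payload,
// where streamType is 1 for stdout and 2 for stderr, and the size is a
// big-endian uint32 giving the payload length.
function demux(buffer) {
  var out = { stdout: '', stderr: '' };
  var offset = 0;
  while (offset + 8 <= buffer.length) {
    var type = buffer[offset];
    var size = buffer.readUInt32BE(offset + 4);
    var payload = buffer.slice(offset + 8, offset + 8 + size).toString('utf8');
    if (type === 2) {
      out.stderr += payload;
    } else {
      out.stdout += payload;
    }
    offset += 8 + size;
  }
  return out;
}

// Example: one stdout frame carrying "hi\n" followed by one stderr frame.
var frame = Buffer.concat([
  Buffer.from([1, 0, 0, 0, 0, 0, 0, 3]), Buffer.from('hi\n'),
  Buffer.from([2, 0, 0, 0, 0, 0, 0, 4]), Buffer.from('oops')
]);
console.log(JSON.stringify(demux(frame)));
// -> {"stdout":"hi\n","stderr":"oops"}
```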
I managed to do it without exec: I create a container, attach to it, start it with the command `ls`, wait for it to finish, and then kill and remove it. But I want to use exec so I can issue multiple commands after the container has started. I am trying to piece this together from the examples in the dockerode library and the Docker Remote API documentation; I just don't know where I am going wrong.
For reference, here is the code without exec ...
docker.createContainer({
  Image: 'ubuntu',
  Cmd: ['/bin/ls', '/tmp/app'],
  Volumes: {
    '/tmp/app': {}
  }
}, function(err, container) {
  console.log('attaching to... ' + container.id);
  container.attach({stream: true, stdout: true, stderr: true, tty: true}, function(err, stream) {
    handleError('attaching', err);
    stream.pipe(process.stdout);
    console.log('starting... ' + container.id);
    container.start({
      Binds: [volume + ':/tmp/app']
    }, function(err, data) {
      handleError('starting', err);
    });
    container.wait(function(err, data) {
      handleError('waiting', err);
      console.log('killing... ' + container.id);
      container.kill(function(err, data) {
        handleError('killing', err);
        console.log('removing... ' + container.id);
        container.remove(function(err, data) {
          handleError('removing', err);
        });
      });
    });
  });
});
I've been struggling with the same problem for a while and found a solution. The Remote API does not accept a command with its arguments as a single string; you need to split each argument into its own token in the Cmd array. For example, with gcc:

"Cmd": ["/bin/bash", "-c", "gcc -Wall -std=c99 hello.c -o hello.bin"]

After this modification, it works correctly.
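If you run many shell-style commands, a tiny wrapper saves hand-building that array each time. `shellCmd` below is a hypothetical helper, not part of dockerode or the Remote API:

```javascript
// Hypothetical helper: wrap a full shell command line in the
// ["/bin/bash", "-c", "..."] form, so the daemon receives each
// element of Cmd as a separate token.
function shellCmd(line) {
  return ['/bin/bash', '-c', line];
}

console.log(JSON.stringify(shellCmd('gcc -Wall -std=c99 hello.c -o hello.bin')));
// -> ["/bin/bash","-c","gcc -Wall -std=c99 hello.c -o hello.bin"]
```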
The official Remote API documentation may help with getting the configuration right.