Prevent 'cp' from outputting filenames in a bash script

I have a shell script that copies multiple files to the current directory, compresses them, and pipes the compressed file to standard output.

On the client side, I am using plink to execute the script and stream stdin to a file.

It almost works.

The cp command seems to output the filename when executed from the script. If I run 'cp /path/to/file1 .' interactively, it runs quietly; if I run it in the script, it produces "file1".

How can this be prevented? I tried redirecting the output of the cp command to /dev/null and to a dummy text file, but no luck.

Thanks for any help.

Script:

#!/bin/bash

cp /path/to/file1 .
cp /path/to/file2 .
cp /path/to/file3 .

tar -cvzf package.tgz file1 file2 file3

cat package.tgz


Output:

file1
file2
file3
<<binary data>>




3 answers


It's not cp, it's tar. You're passing it -v, which makes it print the filenames.
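A minimal sketch of the fix: the question's script with tar's -v flag dropped. Temporary sample files stand in for the real /path/to sources, which are assumptions here.

```shell
#!/bin/bash
# The question's script, minus tar's -v flag: with verbose mode
# off, tar prints no filenames at all. Sample files stand in for
# the real /path/to/file1 etc.
set -e
src=$(mktemp -d)
printf '1\n' > "$src/file1"
printf '2\n' > "$src/file2"
printf '3\n' > "$src/file3"
cd "$(mktemp -d)"
cp "$src/file1" .
cp "$src/file2" .
cp "$src/file3" .
tar -czf package.tgz file1 file2 file3   # no -v: nothing printed
# In the real script this cat streams the archive to stdout for
# plink; discarded here so the demo doesn't dump binary data.
cat package.tgz > /dev/null
```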





Aha! I had always assumed that the filenames emitted by tar's -v go to stderr, but that's not always the case: they go to stderr only when tar writes the archive itself to stdout:

$ tar cvf - share > /dev/null
share/                         # this must be going
share/.DS_Store                # to stderr since we
share/man/                     # redirected stdout to
share/man/.DS_Store            # /dev/null above.
share/man/man1/
share/man/man1/diffmerge.man1


Counterexample:



$ tar cvf blah.tar share > /dev/null


This printed no list of filenames: with the archive going to blah.tar, the -v listing went to stdout, which was redirected to /dev/null. You really do learn something new every day. :-)
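A quick way to confirm where the -v listing lands is to capture the two streams into separate files (the file names here are just placeholders):

```shell
#!/bin/bash
# Capture stdout and stderr separately: when the archive itself
# goes to stdout (-f -), the -v listing shows up on stderr.
set -e
cd "$(mktemp -d)"
touch a b
tar cvf - a b > out.tar 2> listing.txt
cat listing.txt   # the -v listing: a, b
```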



As others have pointed out, it's the -v (verbose) option to tar that prints the filenames. You can also make your script more efficient by having tar write the compressed stream directly to stdout:

tar zcf - file1 file2 file3


In this example, the "-" passed as the archive filename tells tar to write to stdout.
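Putting the two ideas together, the whole script collapses to a single tar invocation. A sketch with temporary sample files standing in for the question's /path/to directory (tar's -C flag switches into the source directory before reading the files):

```shell
#!/bin/bash
# One-command version: tar reads the files straight from the
# source directory (-C) and streams the compressed archive to
# stdout, so the cp copies, the package.tgz temp file, and the
# final cat all disappear. Sample files stand in for /path/to.
set -e
src=$(mktemp -d)   # stands in for /path/to
printf 'x\n' > "$src/file1"
printf 'y\n' > "$src/file2"
printf 'z\n' > "$src/file3"
out=$(mktemp)      # in the real script, stdout goes to plink
tar czf - -C "$src" file1 file2 file3 > "$out"
tar tzf "$out"     # list the archive contents to verify
```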







