Delete first line of all files in a folder (on ubuntu)
I have a folder with 2800 .txt files and I need to remove the first line from all files. The file names are all different, except for the fact that they end in .txt.
Would it be possible to do this while keeping the same filename (instead of sending the output, i.e. the file without its first line, to another file)?
You can use a bash script. Something like this:
#!/bin/bash
for filename in *.txt; do
    # strip the first line, then move the result back over the original
    tail -n +2 "$filename" > tmpfile && mv tmpfile "$filename"
done
Run it from the command line: $ bash <script_file.sh>
Take this with a grain of salt. I am not actually working on a *nix machine. See here for different ways to remove the first line of a file. Also note that tail is assumed to be much faster than sed, which matters if performance is important to you.
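If performance matters, it's easy enough to compare the two on your own data rather than trusting the assumption (a quick sketch; bigfile.txt is just a placeholder name for a throwaway test file):

```shell
# Compare tail vs sed for dropping the first line; bigfile.txt is a
# placeholder test file, not one of your real .txt files.
seq 1 1000000 > bigfile.txt              # make a throwaway test file
time tail -n +2 bigfile.txt > /dev/null  # tail, skipping line 1
time sed 1d bigfile.txt > /dev/null      # sed, deleting line 1
```

Both commands produce identical output, so whichever is faster on your machine is safe to use.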
For a small number of files, I would write
for f in *.txt; do sed 1d "$f" >tmpfile; mv tmpfile "$f"; done
However, for a large number of files, this won't work because the shell will expand *.txt
to an argument list that is too long.
In this case (which is similar to your case) the best approach is
ls | grep '.txt$' | while read f; do sed 1d "$f" >tmpfile; mv tmpfile "$f"; done
However, you should be aware that there are fundamental problems with this (as @EdMorton highlights in the comments on an earlier version of this answer). One is that you will get into trouble if you are unfortunate enough to have a directory whose name ends in .txt
(you could handle this with the help of test -f
if you really felt the need). Another is that read
can have problems if there are odd characters in one of the filenames (such as \n
, say, or one or another kind of quotation mark). You can handle things like this by playing around with IFS
(see comments), but it's better to have a quick glance at the files you're working on and fix such bad filenames first.
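If you'd rather make the loop itself robust to odd filenames instead of cleaning them up first, here is one sketch, assuming your find supports -print0 (GNU and BSD do) and that you run it under bash, whose read supports -d '':

```shell
#!/bin/bash
# NUL-delimited filenames survive spaces, quotes, and even embedded newlines.
# -maxdepth 1 restricts to the current directory; -type f skips directories
# whose names happen to end in .txt.
find . -maxdepth 1 -type f -name '*.txt' -print0 |
while IFS= read -r -d '' f; do
    tail -n +2 "$f" > tmpfile && mv tmpfile "$f"
done
```

Because the filenames are separated by NUL bytes rather than newlines, none of the read pitfalls above apply.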
What you shouldn't do is ls *.txt | ...
, because if the number of files is large enough that for f in *.txt; do ...
doesn't work, then ls *.txt
won't work either.
There are fancier things you could do.
find . -type f -name \*.txt | while read f; do ...
This selects files ending in .txt
, but I always find find's
options untidy to type or read, and this feels like ls
+ grep
here.
Another possibility is
find . -type f -name \*.txt -exec sed -i 1d '{}' \;
It's pretty reliable, but like most non-trivial find
commands it looks like a mess, and you have to remember find's
idiosyncratic syntax. Also, this won't work if your sed
doesn't support the zero-argument -i
option (POSIX sed
doesn't have -i
at all, and the sed
on OS X and other BSDs effectively requires an extension to be specified). Also, this approach is limited to one command, so it won't work if you need to do more to the files.
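If you do need more than one command per file, one way around that limit is to hand the filenames to a small inline shell script via -exec sh -c ... {} +. A sketch, using the same portable tail-plus-mv trick as above instead of sed -i (the .bak backup step is purely illustrative):

```shell
# For each .txt file: keep a backup copy, then strip the first line.
# The trailing "sh" becomes $0 inside the inline script, and the batched
# filenames from {} + become its positional parameters.
find . -type f -name '*.txt' -exec sh -c '
    for f; do
        cp "$f" "$f.bak"                          # illustrative backup
        tail -n +2 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
' sh {} +
```

The inline script can grow to as many commands per file as you like, and the NUL-free handoff from find means odd filenames are still handled correctly.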