Removing blank lines
I have a CSV file where every other line is blank. I've tried everything, but nothing deletes the lines. One thing that should make this easier: the number 44 appears on every valid line. Things I've tried:
grep -ir 44 file.csv
sed '/^$/d' <file.csv
cat -A file.csv
sed 's/^ *//; s/ *$//; /^$/d' <file.csv
egrep -v "^$" file.csv
awk 'NF' file.csv
grep '\S' file.csv
sed 's/^ *//; s/ *$//; /^$/d; /^\s*$/d' <file.csv
cat file.csv | tr -s \n
I had decided I must be imagining the blank lines, but when I import the file into Google Sheets, there they are! I'm beginning to doubt my sanity. Can anyone please help?
Use the -i option to replace the original file with the edited one:
sed -i '/^[ \t]*$/d' file.csv
Alternatively, output to another file and rename it, which is effectively what -i does:
sed '/^[[:blank:]]*$/d' file.csv > file.csv.out && mv file.csv.out file.csv
sed -n -i '/44/p' file
-n suppresses automatic printing
-i edits in place (overwrites the same file)
/44/p prints the lines where "44" exists
If you can't rely on '44' being present, delete the whitespace-only lines instead:
sed -i '/^\s*$/d' file
\s matches a whitespace character, ^ is start of line, $ is end of line, d deletes the line
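For example, on a made-up file in the shape you describe (data rows containing 44 alternating with blank lines), either command should give the same result; leave -i off while testing so the original file is untouched:
$ cat file
1,44,foo

2,44,bar

$ sed -n '/44/p' file
1,44,foo
2,44,bar
$ sed '/^\s*$/d' file
1,44,foo
2,44,bar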
Given:
$ cat bl.txt
Line 1 (next line has a tab)
	
Line 2 (next has several spaces)
    
Line 3
You can remove blank lines with Perl:
$ perl -lne 'print unless /^\s*$/' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several spaces)
Line 3
AWK:
$ awk 'NF>0' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several spaces)
Line 3
sed + tr:
$ cat bl.txt | tr '\t' ' ' | sed '/^ *$/d'
Line 1 (next line has a tab)
Line 2 (next has several spaces)
Line 3
Just sed:
$ sed '/^[[:space:]]*$/d' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several spaces)
Line 3
Aside from the fact that your commands don't show that you are writing their output to a new file to be used in place of the original, there is nothing wrong with them, except that:
cat file.csv | tr -s \n
it should be:
cat file.csv | tr -s '\n' # more efficient alternative: tr -s '\n' < file.csv
Otherwise, the shell eats the \ and all tr sees is n.
Note that the above only removes truly empty lines, while some of your other commands also remove lines that contain nothing but whitespace.
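To see the difference the quoting makes, compare (the sample input here is made up):
$ printf 'a\n\n\nb\n' | tr -s '\n'
a
b
$ printf 'a\n\n\nb\n' | tr -s \n
a


b
The unquoted version asks tr to squeeze repeated n characters, so the empty lines survive.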
Also, -i (for case insensitivity) in grep -ir 44 file.csv is pointless, and -r (for recursive searches) won't change the fact that only file.csv is searched; it will, however, add the filename followed by : to each matching line.
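For instance, with GNU grep and made-up file content:
$ grep -r 44 file.csv
file.csv:1,44,foo
file.csv:2,44,bar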
If you did capture the output in a new file and that file still has blank lines, then cat -A (cat -et on BSD-like platforms), which you already mention in your question, should show you whether any unusual characters are present in the file as ^<char> sequences, for example ^M for \r characters.
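For example, if the file turned out to have DOS (CRLF) line endings, the output might look like this (the content is hypothetical, and the fix assumes GNU sed for \r and -i support):
$ cat -A file.csv
1,44,foo^M$
^M$
2,44,bar^M$
$ sed -i 's/\r$//; /^$/d' file.csv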
If you like awk, this should do:
awk '/44/' file
It will only print lines containing 44.
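Since awk prints to standard output, redirect to a temporary file and rename it if you want to replace the original (the .tmp name here is just a placeholder):
$ awk '/44/' file > file.tmp && mv file.tmp file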