How to find the number of columns in a tab separated file

I have a tab-delimited file with 1 billion rows (and imagine 200+ columns instead of the 3 shown):

abc -0.123  0.6524  0.325
foo -0.9808 0.874   -0.2341 
bar 0.23123 -0.123124   -0.1232


If the number of columns is unknown up front, how do I find the number of columns in a tab-separated file?

I've tried this:

import io
with io.open('bigfile', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))


And this, from @EdChum (in "Read a separate section with the first column as key and the rest as values"):

import pandas as pd
num_columns = pd.read_csv('bigfile', sep=r'\s+', nrows=1).shape[1]


How else can I get the number of columns, and what is the most efficient way? (Imagine I suddenly receive a file with an unknown number of columns, for example over a million columns.)


2 answers


Some timings on a file with 100,000 columns. Counting the tab characters seems to be fastest, but it is off by one (you need to add 1 to get the column count):

In [14]: %%timeit
with open("test.csv") as f:
    r = csv.reader(f, delimiter="\t")
    len(next(r))
   ....:
10 loops, best of 3: 88.7 ms per loop

In [15]: %%timeit
with open("test.csv") as f:
    next(f).count("\t")
   ....:
100 loops, best of 3: 11.9 ms per loop

In [16]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
   ....:
10 loops, best of 3: 133 ms per loop

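For reference, a minimal standalone sketch of the same comparison can be run with the timeit module outside IPython; the file name test.csv and the repeat count are illustrative:

import csv
import io
import timeit

def by_csv_reader():
    with open("test.csv") as f:
        return len(next(csv.reader(f, delimiter="\t")))

def by_count():
    with open("test.csv") as f:
        return next(f).count("\t") + 1  # N tabs separate N + 1 fields

def by_split():
    with io.open("test.csv", "r") as f:
        return len(next(f).split("\t"))

for fn in (by_csv_reader, by_count, by_split):
    print(fn.__name__, timeit.timeit(fn, number=10))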

Using str.translate actually seems to be fastest, although again you need to add 1 (note that the two-argument form of str.translate used here is Python 2 only):

In [5]: %%timeit
with open("test.csv") as f:
    n = next(f)
    (len(n) - len(n.translate(None, "\t")))
   ...:
100 loops, best of 3: 9.9 ms per loop

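In Python 3, where str.translate no longer takes a deletion argument, a rough equivalent sketch is to read the first line in binary mode, since bytes.translate still accepts one (same illustrative file name):

with open("test.csv", "rb") as f:
    line = next(f)
    # line.translate(None, b"\t") returns the line with every tab deleted
    num_columns = len(line) - len(line.translate(None, b"\t")) + 1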

The pandas solution gives me an error:

in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7977)()

StopIteration:

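If pandas is still wanted, one possible workaround (a sketch, not a confirmed fix for that traceback) is to read the first line manually and let pandas parse only that line from an in-memory buffer:

import io
import pandas as pd

with open('test.csv') as f:
    first_line = f.readline()
# header=None treats the line as data rather than column names
num_columns = pd.read_csv(io.StringIO(first_line), sep='\t', header=None).shape[1]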

Using readline instead of next adds extra overhead:

In [19]: %%timeit
with open("test.csv") as f:
    f.readline().count("\t")
   ....:
10 loops, best of 3: 28.9 ms per loop

In [30]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))
   ....:
10 loops, best of 3: 136 ms per loop


The results differ somewhat under Python 3.4:

In [7]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
   ...:
10 loops, best of 3: 102 ms per loop

In [8]: %%timeit
with open("test.csv") as f:
    f.readline().count("\t")
   ...:
100 loops, best of 3: 12.7 ms per loop

In [9]: %%timeit
with open("test.csv") as f:
    next(f).count("\t")
   ...:
100 loops, best of 3: 11.5 ms per loop

In [10]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
   ....:
10 loops, best of 3: 89.9 ms per loop

In [11]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))
   ....:
10 loops, best of 3: 92.4 ms per loop

In [13]: %%timeit
with open("test.csv") as f:
    r = csv.reader(f, delimiter="\t")
    len(next(r))
   ....:
10 loops, best of 3: 176 ms per loop

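Putting it together, a small helper sketch based on the fastest portable approach above, counting the delimiter on the first line; it assumes there are no tabs inside quoted fields and no trailing delimiter:

def count_columns(path, delimiter="\t"):
    # a first line containing N delimiters has N + 1 columns
    with open(path) as f:
        return next(f).count(delimiter) + 1

print(count_columns("test.csv"))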



There is the str.count() method:

h = open('path', 'r')
columns = h.readline().count('\t') + 1
h.close()
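
For example, feeding the question's first sample row (three tabs, four fields) through an in-memory buffer:

import io

h = io.StringIO("abc\t-0.123\t0.6524\t0.325\n")
columns = h.readline().count('\t') + 1
print(columns)  # 4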
