How to create a pandas DataFrame in Python from a CSV with extra delimiters?
I have a large CSV (about 400k lines) that I want to turn into a DataFrame in Python. The original file has two columns: a text column followed by an int (or NaN) column.
Example:
...
P-X1-6030-07-A01 368963
P-X1-6030-08-A01 368964
P-X1-6030-09-A01 368965
P-A-1-1011-14-G-01 368967
P-A-1-1014-01-G-05 368968
P-A-1-1017-02-D-01 368969
...
I want to further split the text column into a series of columns, following the pattern of the last three lines of the example text (P A 1 1017 02 D 01 368969, for example). Noting that the text column can have different formatting (P-X1 vs P-X-1), how can this be accomplished?
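Roughly, I'm after something shaped like this (a sketch of the desired result; the column names are just placeholders):
import pandas as pd

# desired shape, built by hand for two of the example rows
expected = pd.DataFrame(
    [['P', 'X', '1', '6030', '07', 'A', '01', 368963],
     ['P', 'A', '1', '1017', '02', 'D', '01', 368969]],
    columns=['c0', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'value'])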
First try
The spec for read_csv
says that it accepts a regular expression as a separator, but that seems to be wrong. After checking the source, it appears to just take a series of characters that it uses to populate a character class followed by +, so the sep arguments below end up creating a regex like
`[- ]+`.
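For illustration, splitting one of the rows on that character class with the re module shows the kind of tokens read_csv ends up working with (a quick sketch, not part of the final solution):
import re

# runs of '-' or ' ' collapse into a single split point
re.split(r'[- ]+', 'P-X1-6030-07-A01 368963')
# -> ['P', 'X1', '6030', '07', 'A01', '368963']
Note that X1 and A01 stay together, so rows in the P-X1 format produce fewer fields than rows in the P-A-1 format.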
Import the required libraries to reproduce this:
import pandas as pd
from io import StringIO
You can use a set of characters as delimiters, but parsing the inconsistently formatted strings together is not possible with pd.read_csv
. If you want to parse them separately:
pd.read_csv(StringIO('''P-X1-6030-07-A01 368963
P-X1-6030-08-A01 368964
P-X1-6030-09-A01 368965'''), sep=r'- ')  # sep arg becomes a regex, i.e. `[- ]+`
and
pd.read_csv(StringIO('''P-A-1-1011-14-G-01 368967
P-A-1-1014-01-G-05 368968
P-A-1-1017-02-D-01 368969'''), sep=r'- ')
But read_csv doesn't seem to be able to use a real regex for the delimiter.
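As an aside, in newer pandas releases a sep longer than one character is interpreted as a true regular expression when the Python parsing engine is used, so a sketch like the one below works there. It still keeps X1 and A01 as single tokens, though, so it doesn't help with the mixed formats:
pd.read_csv(StringIO('''P-X1-6030-07-A01 368963
P-X1-6030-08-A01 368964
P-X1-6030-09-A01 368965'''), sep=r'[- ]+', engine='python', header=None)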
Final solution
This means that we need a custom solution:
import re
from io import StringIO
import pandas as pd
txt = '''P-X1-6030-07-A01 368963
P-X1-6030-08-A01 368964
P-X1-6030-09-A01 368965
P-A-1-1011-14-G-01 368967
P-A-1-1014-01-G-05 368968
P-A-1-1017-02-D-01 368969'''
fileobj = StringIO(txt)
def df_from_file(fileobj):
'''
takes a file object, returns DataFrame with columns grouped by
contiguous runs of either letters or numbers (but not both together)
'''
# unfortunately, we must materialize the data before putting it in the DataFrame
gen_records = [re.findall(r'(\d+|[A-Z]+)', line) for line in fileobj]
return pd.DataFrame.from_records(gen_records)
df = df_from_file(fileobj)
which returns df:
0 1 2 3 4 5 6 7
0 P X 1 6030 07 A 01 368963
1 P X 1 6030 08 A 01 368964
2 P X 1 6030 09 A 01 368965
3 P A 1 1011 14 G 01 368967
4 P A 1 1014 01 G 05 368968
5 P A 1 1017 02 D 01 368969
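To run this on the real ~400k-line file instead of an in-memory string, a minimal sketch (assuming the data lives in a file named data.csv; the column names are placeholders):
with open('data.csv') as fh:
    df = df_from_file(fh)
df.columns = ['c0', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'value']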