Building pandas dataframe from csv with dictionary columns

I have a CSV that contains multiple columns in which each cell holds a dict. There are thousands of rows. I want to pull these dicts apart, making a column from each key and filling the cells with the corresponding values, with NaN where a key is missing. So that:

   id                            attributes
0   255RSSSTCHL-QLTDGLZD-BLK     {"color": "Black", "hardware": "Goldtone"}
1   C3ACCRDNFLP-QLTDS-S-BLK      {"size": "Small", "color": "Black"}

      

becomes:

   id                            size   color   hardware  
0   255RSSSTCHL-QLTDGLZD-BLK     NaN    Black   Goldtone
1   C3ACCRDNFLP-QLTDS-S-BLK      Small  Black   NaN

      

There are a few columns like "id" that I would like to leave untouched in the resulting DataFrame, and a few columns like "attributes" that are filled with dicts I want to expand into columns. I have truncated them to the example above for illustration purposes.
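For reference, the example frame above can be rebuilt with a small snippet (a sketch; it assumes the attribute cells arrive as JSON strings, as they would when read from the CSV):

import pandas as pd

# Sample data from the question; the attributes column holds JSON strings
df = pd.DataFrame({
    'id': ['255RSSSTCHL-QLTDGLZD-BLK', 'C3ACCRDNFLP-QLTDS-S-BLK'],
    'attributes': ['{"color": "Black", "hardware": "Goldtone"}',
                   '{"size": "Small", "color": "Black"}'],
})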



2 answers


Source DF:

In [172]: df
Out[172]:
                         id                               attributes                       attr2
0  255RSSSTCHL-QLTDGLZD-BLK  {"color":"Black","hardware":"Goldtone"}  {"aaa":"aaa", "bbb":"bbb"}
1   C3ACCRDNFLP-QLTDS-S-BLK         {"size":"Small","color":"Black"}               {"ccc":"ccc"}

      

Solution 1:

import ast
import pandas as pd

attr_cols = ['attributes', 'attr2']

# Pop the dict-string column, parse each cell with ast.literal_eval,
# expand it to a Series (one column per key) and join it back onto the frame
def f(df, attr_col):
    return df.join(df.pop(attr_col)
                     .apply(lambda x: pd.Series(ast.literal_eval(x))))

for col in attr_cols:
    df = f(df, col)
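If the data actually starts in a CSV file, as in the question, the same helper can be applied right after loading it (a sketch; the filename and default comma separator are assumptions):

import pandas as pd

df = pd.read_csv('products.csv')    # hypothetical filename; pass sep= if needed
for col in ['attributes', 'attr2']:
    df = f(df, col)                 # f as defined above expands each dict column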

      

Solution 2 (thanks to @DYZ for the hint):



import json
import pandas as pd

attr_cols = ['attributes', 'attr2']

# Same approach, but parse the cells as JSON instead of Python literals
def f(df, attr_col):
    return df.join(df.pop(attr_col)
                     .apply(lambda x: pd.Series(json.loads(x))))

for col in attr_cols:
    df = f(df, col)

      

Result:

In [175]: df
Out[175]:
                         id  color  hardware   size  aaa  bbb  ccc
0  255RSSSTCHL-QLTDGLZD-BLK  Black  Goldtone    NaN  aaa  bbb  NaN
1   C3ACCRDNFLP-QLTDS-S-BLK  Black       NaN  Small  NaN  NaN  ccc

      

Timing, for a 20,000-row DF:

In [198]: df = pd.concat([df] * 10**4, ignore_index=True)

In [199]: df.shape
Out[199]: (20000, 3)

In [201]: %paste
def f_ast(df, attr_col):
    return df.join(df.pop(attr_col) \
             .apply(lambda x: pd.Series(ast.literal_eval(x))))

def f_json(df, attr_col):
    return df.join(df.pop(attr_col) \
             .apply(lambda x: pd.Series(json.loads(x))))
## -- End pasted text --

In [202]: %%timeit
     ...: for col in attr_cols:
     ...:     f_ast(df.copy(), col)
     ...:
1 loop, best of 3: 33.1 s per loop

In [203]:

In [203]: %%timeit
     ...: for col in attr_cols:
     ...:     f_json(df.copy(), col)
     ...:
1 loop, best of 3: 30 s per loop

In [204]: df.shape
Out[204]: (20000, 3)

      



You can embed the string parsing in the pd.read_csv call itself, using the converters parameter:

import pandas as pd
from io import StringIO
from cytoolz.dicttoolz import merge as dmerge
from json import loads

txt = """id|attributes|attr2
255RSSSTCHL-QLTDGLZD-BLK|{"color":"Black","hardware":"Goldtone"}|{"aaa":"aaa", "bbb":"bbb"}
C3ACCRDNFLP-QLTDS-S-BLK|{"size":"Small","color":"Black"}|{"ccc":"ccc"}"""

converters = dict(attributes=loads, attr2=loads)

df = pd.read_csv(StringIO(txt), sep='|', index_col='id', converters=converters)
df

      

(output: a DataFrame indexed by id, with the attributes and attr2 cells parsed into dicts)



Then we can merge the dictionaries on each row and convert the result to a pd.DataFrame. I will use cytoolz.dicttoolz.merge, imported as dmerge above.

pd.DataFrame(df.apply(dmerge, 1).values.tolist(), df.index).reset_index()

                         id  aaa  bbb  ccc  color  hardware   size
0  255RSSSTCHL-QLTDGLZD-BLK  aaa  bbb  NaN  Black  Goldtone    NaN
1   C3ACCRDNFLP-QLTDS-S-BLK  NaN  NaN  ccc  Black       NaN  Small
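If cytoolz is not installed, the same row-wise merge can be done with plain dict unpacking (a sketch, not part of the original answer):

# Merge each row's dicts left-to-right without the cytoolz dependency
merged = df.apply(lambda row: {k: v for d in row for k, v in d.items()}, axis=1)
pd.DataFrame(merged.tolist(), index=df.index).reset_index()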

      
