Insert dictionary (JSON) as is into Pandas Dataframe

I have a use case where I need to convert existing data columns to JSON and store them in a single column.

So far I have tried this:

import pandas as pd
import json

df = pd.DataFrame([{'a': 'sjdfb', 'b': 'jsfubs'},
                   {'a': 'ouhbsdv', 'b': 'cm osdn'}])  # random data
jsonresult1 = df.to_json(orient='records')
# '[{"a":"sjdfb","b":"jsfubs"},{"a":"ouhbsdv","b":"cm osdn"}]'

But I want the data to be just a string representation of a dictionary, not a list. So I tried this:

>>> jsonresult2 = df.to_dict(orient='records')
>>> jsonresult2
# [{'a': 'sjdfb', 'b': 'jsfubs'}, {'a': 'ouhbsdv', 'b': 'cm osdn'}]

This is what I want the data to look like, but if I build a DataFrame from it directly, I get the two-column format [a, b] again. Converting each dictionary to its string representation first keeps the data in a single column:

>>> jsonresult3 = []
>>> for i in range(len(jsonresult2)):
...     jsonresult3.append(str(jsonresult2[i]))
...
>>> jsonresult3
["{'a': 'sjdfb', 'b': 'jsfubs'}", "{'a': 'ouhbsdv', 'b': 'cm osdn'}"]

This is exactly what I wanted. And when I load these strings into a DataFrame I get:

>>> df1
                                  0
0     {'a': 'sjdfb', 'b': 'jsfubs'}
1  {'a': 'ouhbsdv', 'b': 'cm osdn'}

But I think this is a very inefficient approach, and my data may exceed 10 million rows, so it takes too long. How can I do this in an optimized way?

2 answers


First convert to a list of dictionaries, make a Series from it, then apply pd.json.dumps:

pd.Series(df.to_dict('records'), df.index).apply(pd.json.dumps)

0       {"a":"sjdfb","b":"jsfubs"}
1    {"a":"ouhbsdv","b":"cm osdn"}
dtype: object
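
Note that pd.json.dumps is not available in recent pandas releases (the pd.json accessor was removed). A sketch of the same approach using the standard-library json module instead; the output differs only in whitespace after the separators:

```python
import json

import pandas as pd

df = pd.DataFrame([{'a': 'sjdfb', 'b': 'jsfubs'},
                   {'a': 'ouhbsdv', 'b': 'cm osdn'}])

# Serialise each row dict with the stdlib json module instead of pd.json.dumps
s = pd.Series(df.to_dict('records'), df.index).apply(json.dumps)
print(s[0])  # {"a": "sjdfb", "b": "jsfubs"}
```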

Or, more concisely:

df.apply(pd.json.dumps, axis=1)

0       {"a":"sjdfb","b":"jsfubs"}
1    {"a":"ouhbsdv","b":"cm osdn"}
dtype: object

We can improve performance by building the strings ourselves:

v = df.values.tolist()
c = df.columns.values.tolist()

pd.Series([str(dict(zip(c, row))) for row in v], df.index)

0       {'a': 'sjdfb', 'b': 'jsfubs'}
1    {'a': 'ouhbsdv', 'b': 'cm osdn'}
dtype: object
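
One caveat with this fast version: str() produces Python dict literals with single quotes, which json.loads cannot parse. If the stored strings need to be valid JSON, a sketch swapping str for json.dumps inside the same list comprehension:

```python
import json

import pandas as pd

df = pd.DataFrame([{'a': 'sjdfb', 'b': 'jsfubs'},
                   {'a': 'ouhbsdv', 'b': 'cm osdn'}])

v = df.values.tolist()
c = df.columns.values.tolist()

# json.dumps instead of str() keeps each string valid JSON (double quotes)
s = pd.Series([json.dumps(dict(zip(c, row))) for row in v], df.index)
print(s[0])  # {"a": "sjdfb", "b": "jsfubs"}
```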

If memory is a problem, I would save df to a CSV and read it back line by line, building a new Series or DataFrame along the way.

df.to_csv('test.csv')

It's slower, but it gets around some memory issues.

s = pd.Series(dtype=object)
with open('test.csv') as f:
    c = f.readline().strip().split(',')[1:]  # header row, minus the index column
    for row in f:
        row = row.strip().split(',')
        # .at replaces Series.set_value, which was removed in pandas 1.0
        s.at[row[0]] = str(dict(zip(c, row[1:])))
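
An alternative sketch (my addition, not part of the original answer) that lets pandas handle the CSV parsing: read_csv with chunksize yields the file in bounded-size pieces, so only one chunk is in memory at a time, and it avoids manual split(',') parsing, which breaks if any value contains a comma. The chunk size of 100_000 is an arbitrary choice.

```python
import pandas as pd

df = pd.DataFrame([{'a': 'sjdfb', 'b': 'jsfubs'},
                   {'a': 'ouhbsdv', 'b': 'cm osdn'}])
df.to_csv('test.csv')

parts = []
# read_csv with chunksize yields DataFrames of at most `chunksize` rows each
for chunk in pd.read_csv('test.csv', index_col=0, chunksize=100_000):
    cols = chunk.columns.tolist()
    parts.append(pd.Series(
        [str(dict(zip(cols, row))) for row in chunk.values.tolist()],
        chunk.index))

s = pd.concat(parts)
```

For truly huge files, each chunk's result could be written out immediately instead of collected in `parts`.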


Or you can skip the file export entirely if you can keep df in memory:

s = pd.Series(dtype=object)
c = df.columns.values.tolist()
for t in df.itertuples():
    # .at replaces the removed Series.set_value
    s.at[t.Index] = str(dict(zip(c, t[1:])))


l = [{'a': 'sjdfb', 'b': 'jsfubs'}, {'a': 'ouhbsdv', 'b': 'cm osdn'}]

# convert dict elements to strings and then load into a DataFrame
pd.DataFrame([str(e) for e in l])
Out[949]: 
                                  0
0     {'a': 'sjdfb', 'b': 'jsfubs'}
1  {'a': 'ouhbsdv', 'b': 'cm osdn'}
Out[949]: 
                                  0
0     {'a': 'sjdfb', 'b': 'jsfubs'}
1  {'a': 'ouhbsdv', 'b': 'cm osdn'}
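
If these single-quoted strings ever need to be parsed back, note that they are Python dict literals rather than JSON, so the standard-library ast.literal_eval (not json.loads) recovers the dictionaries. A small sketch:

```python
import ast

import pandas as pd

l = [{'a': 'sjdfb', 'b': 'jsfubs'}, {'a': 'ouhbsdv', 'b': 'cm osdn'}]
df1 = pd.DataFrame([str(e) for e in l])

# str() produced Python literals, so ast.literal_eval turns them back into dicts
recovered = df1[0].apply(ast.literal_eval)
print(recovered[0]['a'])  # sjdfb
```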

Timing



%timeit pd.DataFrame([str(e) for e in l])
10000 loops, best of 3: 159 µs per loop

%timeit pd.Series(df.to_dict('records'), df.index).apply(pd.json.dumps)
1000 loops, best of 3: 254 µs per loop
