Extract specific data from numpy ndarray

New to pandas; any help is appreciated.

Snapshot of the dataset:

import pandas as pd

def csv_reader(fileName):
    reqcols = ['_id__$oid', 'payload', 'channel']
    io = pd.read_csv(fileName, sep=",", usecols=reqcols)
    print(io['payload'].values)
    return io

      

Output of io['payload'] (one row):

{
    "destination_ip": "172.31.14.66",
    "date": "2014-10-19T01:32:36.669861",
    "classification": "Potentially Bad Traffic",
    "proto": "UDP",
    "source_ip": "172.31.0.2",
    "priority": "2",
    "header": "1:2003195:5",
    "signature": "ET POLICY Unusual number of DNS No Such Name Responses ",
    "source_port": "53",
    "destination_port": "34638",
    "sensor": "5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"
}

      

I am trying to retrieve specific fields from this ndarray object. What method can I use to extract the following fields from the DataFrame?

"destination_ip": "172.31.13.124",
"proto": "ICMP",
"source_ip": "201.158.32.1",
"date": "2014-09-28T14:49:43.391463",
"sensor": "139cfdf2-471e-11e4-9ee4-0a0b6e7c3e9e"

      

+3




4 answers


It seems to me you need to first convert the strings in the payload column to dictionaries, applying json.loads (or ast.literal_eval) to each row, then build a new DataFrame from them with the constructor, filter the columns to the subset you need, and, if necessary, add back the original columns with concat:

d = {'_id__$oid': ['542f8', '542f8', '542f8'], 'channel': ['snort_alert', 'snort_alert', 'snort_alert'], 'payload': ['{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}', '{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}', '{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}']}
reqcols=['_id__$oid','payload','channel']
df = pd.DataFrame(d)
print (df)
  _id__$oid      channel                                            payload
0     542f8  snort_alert  {"destination_ip":"172.31.14.66","date": "2014...
1     542f8  snort_alert  {"destination_ip":"172.31.14.66","date": "2014...
2     542f8  snort_alert  {"destination_ip":"172.31.14.66","date": "2014...

      




import json
import ast
df.payload = df.payload.apply(json.loads)
# another, slower solution
#df.payload = df.payload.apply(ast.literal_eval)

required = ["destination_ip", "proto", "source_ip", "date", "sensor"]
df1 = pd.DataFrame(df.payload.values.tolist())[required]
print (df1)
  destination_ip proto   source_ip                        date  \
0   172.31.14.66   UDP  172.31.0.2  2014-10-19T01:32:36.669861   
1   172.31.14.66   UDP  172.31.0.2  2014-10-19T01:32:36.669861   
2   172.31.14.66   UDP  172.31.0.2  2014-10-19T01:32:36.669861   

                                 sensor  
0  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  
1  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  
2  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  

df2 = pd.concat([df[['_id__$oid','channel']], df1], axis=1)
print (df2)
  _id__$oid      channel destination_ip proto   source_ip  \
0     542f8  snort_alert   172.31.14.66   UDP  172.31.0.2   
1     542f8  snort_alert   172.31.14.66   UDP  172.31.0.2   
2     542f8  snort_alert   172.31.14.66   UDP  172.31.0.2   

                         date                                sensor  
0  2014-10-19T01:32:36.669861  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  
1  2014-10-19T01:32:36.669861  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  
2  2014-10-19T01:32:36.669861  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  

      

Timings

#[30000 rows x 3 columns]
df = pd.concat([df]*10000).reset_index(drop=True)
print (df)

In [38]: %timeit pd.DataFrame(df.payload.apply(json.loads).values.tolist())[required]
1 loop, best of 3: 379 ms per loop

In [39]: %timeit pd.read_json('[{}]'.format(df.payload.str.cat(sep=',')))[required]
1 loop, best of 3: 528 ms per loop

In [40]: %timeit pd.DataFrame(df.payload.apply(ast.literal_eval).values.tolist())[required]
1 loop, best of 3: 1.98 s per loop
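As an aside not in the original answer: newer pandas (1.0+) also ships pd.json_normalize, which flattens parsed dicts into columns in one step. A minimal sketch with toy data:

```python
import json
import pandas as pd

df = pd.DataFrame({'payload': ['{"a": 1, "b": 2}', '{"a": 3, "b": 4}']})

# Parse each JSON string into a dict, then flatten the dicts into
# columns with json_normalize (top-level in pandas since 1.0).
parsed = pd.json_normalize([json.loads(s) for s in df['payload']])
print(parsed)
```

json_normalize also unpacks nested objects into dotted column names, which the plain DataFrame constructor does not.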

      

+2




Using @jezrael's sample df:

d = {'_id__$oid': ['542f8', '542f8', '542f8'], 'channel': ['snort_alert', 'snort_alert', 'snort_alert'], 'payload': ['{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}', '{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}', '{"destination_ip":"172.31.14.66","date": "2014-10-19T01:32:36.669861","classification":"Potentially Bad Traffic","proto":"UDP","source_ip":"172.31.0.2","priority":"2","header":"1:2003195:5","signature":"ET POLICY Unusual number of DNS No Such Name Responses ","source_port":"53","destination_port":"34638","sensor":"5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"}']}
df = pd.DataFrame(d)

      

Solution

  • Concatenate the whole payload column with the vectorized str.cat

  • Parse everything at once with pd.read_json




cols = 'destination_ip proto source_ip date sensor'.split()
df.drop(
    'payload', axis=1
).join(
    pd.read_json('[{}]'.format(df.payload.str.cat(sep=',')))[cols]
)

      

(result: the original _id__$oid and channel columns alongside the five extracted payload fields)

+1




Referring to columns in pandas is straightforward: just pass in the list of columns you want. (This assumes payload has already been expanded into separate columns, as in the test code below.)

Code:

columns = ["destination_ip", "proto", "source_ip", "date", "sensor"]
extracted_data = df[columns]

      

Test code:

data = {
    "destination_ip": "172.31.14.66",
    "date": "2014-10-19T01:32:36.669861",
    "classification": "Potentially Bad Traffic",
    "proto": "UDP",
    "source_ip": "172.31.0.2",
    "priority": "2",
    "header": "1:2003195:5",
    "signature": "ET POLICY Unusual number of DNS No Such Name Responses ",
    "source_port": "53",
    "destination_port": "34638",
    "sensor": "5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e"
}
df = pd.DataFrame([data, data])

columns = ["destination_ip", "proto", "source_ip", "date", "sensor"]
print(df[columns])

      

Results:

  destination_ip proto   source_ip                        date  \
0   172.31.14.66   UDP  172.31.0.2  2014-10-19T01:32:36.669861   
1   172.31.14.66   UDP  172.31.0.2  2014-10-19T01:32:36.669861   

                                 sensor  
0  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  
1  5cda4a12-4730-11e4-9ee4-0a0b6e7c3e9e  

      

0




The problem is that payload is a column of your CSV input, and each value is a JSON string. read_csv(), as you used it, parses the whole file, but every JSON object still has to be parsed individually. Let's use this sample data:

payload = pd.Series(['{"a":1, "b":2}', '{"b":4, "c":5}'])

      

Now, join everything into a single JSON array string:

json = ','.join(payload).join('[]')  # str.join places the joined objects between '[' and ']'

      

Which gives:

'[{"a":1, "b":2}, {"b":4, "c":5}]'

      

Then parse it:

pd.read_json(json)

      

To obtain:

     a  b    c
0  1.0  2  NaN
1  NaN  4  5.0
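To tie this back to the original question, the parsed frame can then be reattached to the remaining CSV columns. A minimal sketch with toy data (recent pandas prefers a file-like object for read_json, hence StringIO; the `channel` values here are placeholders):

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({'channel': ['x', 'y'],
                   'payload': ['{"a":1, "b":2}', '{"b":4, "c":5}']})

# Wrap the comma-joined JSON objects in brackets and parse in one call.
json_str = ','.join(df['payload']).join('[]')
parsed = pd.read_json(StringIO(json_str))

# Replace the raw payload column with the extracted fields;
# concat aligns the two frames on their shared row index.
out = pd.concat([df.drop(columns='payload'), parsed], axis=1)
print(out)
```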

      

0








