Entropy of IP packet information

I have a .csv file full of packet header information. Here are a few lines:

28;03/07/2000;11:27:51;00:00:01;8609;4961;8609;097.139.024.164;131.084.001.031;0;-
29;03/07/2000;11:27:51;00:00:01;29396;4962;29396;058.106.180.191;131.084.001.031;0;-
30;03/07/2000;11:27:51;00:00:01;26290;4963;26290;060.075.194.137;131.084.001.031;0;-
31;03/07/2000;11:27:51;00:00:01;28324;4964;28324;038.087.169.169;131.084.001.031;0;- 

      

There are about 33k lines in total (each line represents the header information of a different packet). Now I need to calculate the entropy using the source and destination addresses.

Here is the code I wrote:

def openFile(file_name):
    srcFile = open(file_name, 'r')
    dataset = []
    for line in srcFile:
        newLine = line.split(";")
        dataset.append(newLine)
    return dataset
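
An alternative version of the parser using Python's built-in csv module (which also drops the trailing newline from the last field) could look like this:

import csv

def open_file(file_name):
    # Let csv.reader do the splitting; it also strips the line terminator,
    # so the last field comes back as '-' rather than '-\n'.
    with open(file_name, 'r') as src_file:
        return list(csv.reader(src_file, delimiter=';'))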

      

I get a return value that looks like:

dataset = [
    ['28', '03/07/2000', '11:27:51', '00:00:01', '8609', '4961', '8609', '097.139.024.164', '131.084.001.031', '0', '-\n'], 
    ['29', '03/07/2000', '11:27:51', '00:00:01', '29396', '4962', '29396', '058.106.180.191', '131.084.001.031', '0', '-\n'], 
    ['30', '03/07/2000', '11:27:51', '00:00:01', '26290', '4963', '26290', '060.075.194.137', '131.084.001.031', '0', '-\n'],
    ['31', '03/07/2000', '11:27:51', '00:00:01', '28324', '4964', '28324', '038.087.169.169', '131.084.001.031', '0', '-']
]

      

and I pass it to my Entropy function:

#---- Entropy += - prob * math.log(prob, 2) ---------
def Entropy(data):
    entropy = 0
    counter = 0 # -- counter for occurrences of the same ip address
    #-- For loop to iterate through every item in outer list
    for item in range(len(data)):
        #-- For loop to iterate through inner list
        for x in data[item]:
            if x == data[item][8]: 
                counter += 1
        prob = float(counter) / len(data)
        entropy += -prob * math.log(prob, 2)
    print("\n")
    print("Entropy: {}".format(entropy))

      

The code runs without any error, but it gives a bad entropy value, and I think this is due to a poor calculation of the probability (the second for loop is suspicious) or an incorrect entropy formula. Is there a way to find the probability of an IP address without another for loop? Any code edits are appreciated.

1 answer


Using the numpy module as well as the built-in collections module, you can greatly simplify your code:



import numpy as np
import collections

sample_ips = [
    "131.084.001.031",
    "131.084.001.031",
    "131.284.001.031",
    "131.284.001.031",
    "131.284.001.000",
]

C = collections.Counter(sample_ips)
counts  = np.array(list(C.values()), dtype=float)  # occurrence count of each distinct IP
prob    = counts / counts.sum()                    # relative frequency of each IP
shannon_entropy = (-prob * np.log2(prob)).sum()
print(shannon_entropy)
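
Applied to your parsed dataset, the same counting idea would look roughly like this. This is just a sketch: entropy_of_column is a hypothetical helper, "packets.csv" stands in for your actual file name, and it assumes the source address is at index 7 and the destination address at index 8, as in your sample rows:

import math
import collections

def entropy_of_column(dataset, column):
    # Count how often each address appears in the chosen column.
    counts = collections.Counter(row[column].strip() for row in dataset)
    total = float(sum(counts.values()))
    # Shannon entropy: H = -sum(p * log2(p)) over the address frequencies.
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())

dataset = openFile("packets.csv")  # placeholder file name
print("Source IP entropy:      {}".format(entropy_of_column(dataset, 7)))
print("Destination IP entropy: {}".format(entropy_of_column(dataset, 8)))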

      
