Parse HTML table data to JSON and save to text file in Python 2.7

I am trying to extract data on crime in US states from this web page: http://www.disastercenter.com/crime/uscrime.htm

I can get the table into a text file, but I would like the result in JSON format. How can I do this in Python?

Here is my code:

import urllib
import re

from bs4 import BeautifulSoup

link = "http://www.disastercenter.com/crime/uscrime.htm"
f = urllib.urlopen(link)
myfile = f.read()
soup = BeautifulSoup(myfile, "html.parser")
table = soup.find('table', width="100%")
# Drop the HTML tags, keeping only the text content
result = re.sub("<.*?>", "", str(table))
print(result)
output = open("output.txt", "w")
output.write(result)
output.close()

      


2 answers


The following code gets the data from both tables and outputs the whole thing as a JSON-formatted string.


Working example (Python 2.7.9):



from lxml import html
import requests
import re as regular_expression
import json

page = requests.get("http://www.disastercenter.com/crime/uscrime.htm")
tree = html.fromstring(page.text)

tables = [tree.xpath('//table/tbody/tr[2]/td/center/center/font/table/tbody'),
          tree.xpath('//table/tbody/tr[5]/td/center/center/font/table/tbody')]

fields = ["Year", "Population", "Total", "Violent", "Property",
          "Murder", "Forcible_Rape", "Robbery", "Aggravated_Assault",
          "Burglary", "Larceny_Theft", "Vehicle_Theft"]

tabs = []

for table in tables:
    tab = []
    for row in table:
        for col in row:
            # One cell holds a whole table row, with values separated by newlines
            var = col.text_content()
            var = var.strip().replace(" ", "")
            var = var.split('\n')
            # Keep only rows whose first value is a four-digit year
            if regular_expression.match(r'^\d{4}$', var[0].strip()):
                tab.append(dict(zip(fields, (v.strip() for v in var))))
    tabs.append(tab)

json_data = json.dumps(tabs)

output = open("output.txt", "w")
output.write(json_data)
output.close()
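The regular_expression.match check above admits a row only when its first value is exactly a four-digit year, which filters out the repeated header rows. A minimal illustration of that filter:

```python
import re

def looks_like_year(cell):
    # True only when the string is exactly four digits, e.g. "1960"
    return re.match(r'^\d{4}$', cell.strip()) is not None

cells = ["1960", "Population", "2013", "Year 2013", ""]
years = [c for c in cells if looks_like_year(c)]
print(years)  # ['1960', '2013']
```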

      



This might be what you want if you can use requests and lxml. The data structure produced here is very simple; customize it to suit your needs.

First, get the response from the requested URL and parse the result into an HTML tree:

import requests        
from lxml import etree
import json

response = requests.get("http://www.disastercenter.com/crime/uscrime.htm")
tree = etree.HTML(response.text)

      

Assuming you want to extract both tables, create this XPath and unpack the results: totals ("Number of crimes") and rates ("Crime rate per 100,000 people"):

xpath = './/table[@width="100%"][@style="background-color: rgb(255, 255, 255);"]//tbody'
totals, rates = tree.findall(xpath)
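tree.findall(xpath) returns a list of matching elements, so the two-element unpacking works only if exactly two tables match (otherwise it raises ValueError). The same pattern can be sketched with the standard library's xml.etree.ElementTree, used here purely for illustration (the answer itself uses lxml):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in document with two matching tbody elements
doc = ET.fromstring(
    "<root>"
    "<table><tbody><tr><td>totals</td></tr></tbody></table>"
    "<table><tbody><tr><td>rates</td></tr></tbody></table>"
    "</root>")

# findall returns a list; unpacking works because exactly two elements match
totals, rates = doc.findall('.//tbody')
print(totals[0][0].text)  # totals
print(rates[0][0].text)   # rates
```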

      

Fetch the raw data (td.find('./') returns the first child, whatever its tag) and clean the cell text. Note that strip() takes a set of characters rather than a substring, so pass the actual characters to remove (the non-breaking space u'\xa0', newlines, and spaces), not raw-string escapes:



raw_data = []
for tbody in totals, rates:
    rows = []
    for tr in tbody.getchildren():
        row = []
        for td in tr.getchildren():
            child = td.find('./')
            if child is not None and child.tag != 'br':
                row.append(child.text.strip(u'\xa0\n '))
            else:
                row.append('')
        rows.append(row)
    raw_data.append(rows)
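Note the cleaning step: strip() with an argument removes any run of the given characters from both ends, and a raw string such as r'\xa0' would be the four characters backslash, x, a, 0 rather than the non-breaking space, so it could eat legitimate leading or trailing "a" and "0". A quick demonstration:

```python
# u'\xa0' is the non-breaking space that &nbsp; becomes when the HTML is parsed
cell = u'\xa0\xa0 191,334,198 \n'

# strip() with a character set removes any mix of those characters at both ends
cleaned = cell.strip(u'\xa0\n ')
print(cleaned)  # 191,334,198

# A raw string is the wrong tool here: r'\xa0' is just four literal characters
assert r'\xa0' == '\\xa0'
```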

      

Build the column tags by joining the first two header rows, then delete the recurring header lines, which the extended slices below pick out as every 12th and then every 11th row:

data = {}
data['tags'] = [tag0 + tag1 for tag0, tag1 in zip(raw_data[0][0], raw_data[0][1])]

for raw in raw_data:
    del raw[::12]
    del raw[::11]
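Extended-slice deletion removes every n-th element in one statement. On a small list mimicking the page's layout (two 12-row groups, each starting with two header lines; the values are illustrative, not the real table):

```python
group = ['header', 'units'] + ['%d' % y for y in range(1960, 1970)]
rows = group + group   # two 12-row groups, like the repeated table header

del rows[::12]   # drops every 12th row: both 'header' lines
del rows[::11]   # drops every 11th of what remains: both 'units' lines

print(rows)      # ['1960', ..., '1969', '1960', ..., '1969']
```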

      

Save the rest of the raw data and create a JSON file (optional: exclude whitespace with separators=(',', ':')):

data['totals'], data['rates'] = raw_data[0], raw_data[1]
with open('data.json', 'w') as f:
    json.dump(data, f, separators=(',', ':'))
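The separators=(',', ':') argument drops the spaces that json inserts after ',' and ':' by default, giving the most compact encoding. A quick comparison on a made-up record:

```python
import json

record = {"Year": "1960", "Population": "179323175"}

compact = json.dumps(record, separators=(',', ':'), sort_keys=True)
default = json.dumps(record, sort_keys=True)

print(compact)   # {"Population":"179323175","Year":"1960"}
print(default)   # {"Population": "179323175", "Year": "1960"}
```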

      
