Web scraping table data in Python

I am trying to scrape table data from a webpage. All the tutorials I found online are too specific and don't explain what each argument / element is, so I can't figure out how it works in my example. Any advice on where to find good tutorials for scraping data like this would be appreciated:

import urllib
import requests
from lxml import html

query = urllib.urlencode({'q': company})
page = requests.get('http://www.hoovers.com/company-information/company-search.html?term=company')
tree = html.fromstring(page.text)

table = tree.xpath('//*[@id="shell"]/div/div/div[2]/div[5]/div[1]/div/div[1]')

# Can't get the xpath correct
# This should create a list of companies:
companies = tree.xpath('//...')
# This should create a list of locations:
locations = tree.xpath('//....')


I've also tried:

import urllib2
from bs4 import BeautifulSoup

hoover = 'http://www.hoovers.com/company-information/company-search.html?term=company'
req = urllib2.Request(hoover)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

table = soup.find("table", {"class": "clear data-table sortable-header dashed-table-tr alternate-rows"})

f = open('output.csv', 'w')
for row in table.findAll('tr'):
    # the [1;-1] slice below is where the syntax error is raised
    f.write(','.join(''.join([str(i).replace(',', '') for i in row.findAll('td', text=True) if i[0] != '&']).split('\n')[1;-1]) + '\n')
f.close()


But I am getting an invalid syntax error on the second-to-last line.

1 answer


Yes, BeautifulSoup is the way to go. Here's a quick example to get the names:



import urllib2
from bs4 import BeautifulSoup

hoover = 'http://www.hoovers.com/company-information/company-search.html?term=company'
req = urllib2.Request(hoover)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page.read())

# find the results container, then the table inside it, then all of its rows
container = soup.find("div", attrs={"class": "clear data-table sortable-header dashed-table-tr alternate-rows"})
trs = container.find("table").findAll("tr")

for tr in trs:
    tds = tr.findAll("td")
    if len(tds) < 1:
        # skip header/empty rows that have no <td> cells
        continue
    name = tds[0].text
    print name
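
If you also want to end up with a CSV like in your second attempt, it is easier to hand each row's cells to the csv module than to build the line by hand with join/replace. This is only a sketch built on the trs variable from the snippet above, and it assumes the company name is in the first cell and the location in the second, so adjust the indexes to whatever the real table contains:

import csv

with open('output.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['company', 'location'])  # header row (assumed column order)
    for tr in trs:
        tds = tr.findAll("td")
        if len(tds) < 2:
            # skip header rows and anything without two data cells
            continue
        # csv in Python 2 wants byte strings, so encode the unicode text
        writer.writerow([tds[0].text.strip().encode('utf-8'),
                         tds[1].text.strip().encode('utf-8')])

The csv module also takes care of quoting any commas inside the values, which is what the replace(',', '') in your version was trying to work around.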
