Selenium WebDriver: how to use a for loop with find_elements

I want to get all the links, start_time, and end_time from one page and then pass them to a function (parse_detail) to extract further information, but I don't know how to loop over elements with Selenium.

Here is my code, and the error it raises:

    for site in sites:
    exceptions.TypeError: 'WebElement' object is not iterable

      

Please show me how to write the loop, with an example if possible. Thanks!

class ProductSpider(Spider):
    name = "city20140808"
    start_urls = ['http://wwwtt.tw/11']

    def __init__(self):
        self.driver = webdriver.Firefox()
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def parse(self, response):
        self.driver.get(response.url)

        item  = CitytalkItem()
        sites = self.driver.find_element_by_css_selector("div.body ")
        for site in sites:
            linkiwant = site.find_element_by_css_selector(".heading a")
            start = site.find_element_by_css_selector("div.content p.m span.date")
            end = site.find_element_by_css_selector("div.content p.m span.date")

            item['link'] = linkiwant.get_attribute("href") 
            item['start_date']  = start.text
            item['end_date']  = end.text
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        ........
        yield item

      

1 answer


Instead of find_element_by_css_selector(), which returns a single element, you need to use find_elements_by_css_selector(), which returns a list of elements.
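To illustrate the difference without launching a browser, here is a minimal sketch using a plain-Python stand-in for WebElement (FakeWebElement is hypothetical, but the iteration behavior matches Selenium's): the singular method returns one non-iterable object, while the plural method returns a list you can loop over.

```python
class FakeWebElement:
    """Stand-in for selenium's WebElement; like the real one, it is not iterable."""
    def __init__(self, text):
        self.text = text

def find_element(matches):
    # Mimics find_element_by_css_selector: returns only the first match.
    return matches[0]

def find_elements(matches):
    # Mimics find_elements_by_css_selector: returns a list of all matches.
    return list(matches)

matches = [FakeWebElement("a"), FakeWebElement("b")]

# Looping over a single element fails, just like in the question:
try:
    for site in find_element(matches):
        pass
except TypeError as e:
    print(e)  # 'FakeWebElement' object is not iterable

# Looping over the list works:
texts = [site.text for site in find_elements(matches)]
print(texts)  # ['a', 'b']
```

In the spider, the fix is the one-word change from find_element_by_css_selector to find_elements_by_css_selector on the line that builds sites.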


