Scrapy CrawlSpider doesn't follow links

I wrote a web crawler to fetch products from walmart.com. Here is my spider. I can't figure out why it doesn't follow the links in the left-hand menu; it only scrapes the main page.

My goal is for it to follow all the links in the left-hand menu and then fetch each product from those pages.

I even tried using allow=() to match every link on the page, but that still doesn't work.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import Join, MapCompose
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor as sle
from walmart_scraper.items import GroceryItem


class WalmartFoodSpider(CrawlSpider):
    name = "walmart_scraper"
    allowed_domains = ["www.walmart.com"]
    start_urls = ["http://www.walmart.com/cp/976759"]
    rules = (Rule(sle(restrict_xpaths=('//div[@class="lhn-menu-flyout-inner lhn-menu-flyout-2col"]/ul[@class="block-list"]/li/a',)),callback='parse',follow=True),)

    items_list_xpath = '//div[@class="js-tile tile-grid-unit"]'

    item_fields = {
        'title': './/a[@class="js-product-title"]/h3[@class="tile-heading"]/div',
        'image_url': './/a[@class="js-product-image"]/img[@class="product-image"]/@src',
        'price': './/div[@class="tile-price"]/div[@class="item-price-container"]/span[@class="price price-display"]|//div[@class="tile-price"]/div[@class="item-price-container"]/span[@class="price price-display price-not-available"]',
        'category': '//nav[@id="breadcrumb-container"]/ol[@class="breadcrumb-list"]/li[@class="js-breadcrumb breadcrumb "][2]/a',
        'subcategory': '//nav[@id="breadcrumb-container"]/ol[@class="breadcrumb-list"]/li[@class="js-breadcrumb breadcrumb active"]/a',
        'url': './/a[@class="js-product-image"]/@href'}

    def parse(self, response):
        selector = HtmlXPathSelector(response)

        # iterate over deals
        for item in selector.select(self.items_list_xpath):
            loader = XPathItemLoader(GroceryItem(), selector=item)

            # define processors
            loader.default_input_processor = MapCompose(unicode.strip)
            loader.default_output_processor = Join()

            # iterate over fields and add xpaths to the loader
            for field, xpath in self.item_fields.iteritems():
                loader.add_xpath(field, xpath)
            yield loader.load_item()

1 answer


You shouldn't override the parse() method when using CrawlSpider. You must give the callback in your Rule a different name. Here is an excerpt from the official documentation:

When writing crawl spider rules, avoid using parse as a callback, since the CrawlSpider uses the parse method itself to implement its logic. So if you override the parse method, the crawl spider will no longer work.
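As a minimal sketch of the fix, here is your spider with the rule pointing at a callback named parse_items (the name is arbitrary; anything other than parse works), and the body of your old parse() moved into it unchanged:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor as sle


class WalmartFoodSpider(CrawlSpider):
    name = "walmart_scraper"
    allowed_domains = ["www.walmart.com"]
    start_urls = ["http://www.walmart.com/cp/976759"]

    # the callback is NOT named 'parse', so CrawlSpider's built-in
    # parse() stays intact and can follow the extracted links
    rules = (
        Rule(sle(restrict_xpaths=('//div[@class="lhn-menu-flyout-inner lhn-menu-flyout-2col"]/ul[@class="block-list"]/li/a',)),
             callback='parse_items',
             follow=True),
    )

    def parse_items(self, response):
        # move the body of your original parse() here unchanged;
        # only the method name needs to change
        pass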
