Scrapy scrapes data but does not output it to a file

I am getting empty JSON files despite successfully executing most of the lines in the Scrapy shell. The problem shows up when I run the command scrapy crawl courses.

My spider, courses, looks like this:

from scrapy.spiders import CrawlSpider
from scrapy.linkextractors import LinkExtractor
from tutorial.items import CoursesItem
from bs4 import BeautifulSoup
import scrapy

class CoursesSpider(CrawlSpider):
    name = 'courses'
    allowed_domains = ['guide.berkeley.edu']
    start_urls = ['http://guide.berkeley.edu/courses/ast', 
                   ]

    def parse(self, response):
        soup = BeautifulSoup(response.body_as_unicode(), 'lxml')
        items = []
        for course_info, course_desc, course_req in zip(soup.find_all('p', class_='courseblocktitle'),
                                                        soup.find_all('p', class_='courseblockdesc'),
                                                        soup.find_all('div', class_='course-section')):
            item = CoursesItem()
            item['title'] = course_info.text
            item['description'] = course_desc.text
            item['requirements'] = course_req.text
            yield items

      

and settings.py is

BOT_NAME = 'courses'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0.3'

# ITEM_PIPELINES = {
#   'tutorial.pipelines.JsonExportPipeline': 300
# }

FEED_URI = 'output.json'
FEED_FORMAT = 'json'

      

As you can see from the commented-out lines in settings.py, I also tried to create a pipeline. My pipelines.py looks like this:

from scrapy import signals
from scrapy.exporters import JsonLinesItemExporter

class JsonExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        file = open('%s_spider.json' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = JsonLinesItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

      

But I don't think that is where the error is, although it is possible, since I basically followed a few tutorials I found.

I used BeautifulSoup to make it easy to select items.

Last but not least, the terminal looks like this after launch.

2015-08-07 23:58:44 [scrapy] INFO: Scrapy 1.0.1 started (bot: courses)
2015-08-07 23:58:44 [scrapy] INFO: Optional features available: ssl, http11
2015-08-07 23:58:44 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'output.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'courses', 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0.3', 'FEED_FORMAT': 'json'}
2015-08-07 23:58:44 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2015-08-07 23:58:44 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-08-07 23:58:44 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-08-07 23:58:44 [scrapy] INFO: Enabled item pipelines:
2015-08-07 23:58:44 [scrapy] INFO: Spider opened
2015-08-07 23:58:44 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-08-07 23:58:44 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2015-08-07 23:58:45 [scrapy] DEBUG: Redirecting (301) to <GET http://guide.berkeley.edu/courses/ast/> from <GET http://guide.berkeley.edu/courses/ast>
2015-08-07 23:58:45 [scrapy] DEBUG: Crawled (200) <GET http://guide.berkeley.edu/courses/ast/> (referer: None)
2015-08-07 23:58:45 [scrapy] INFO: Closing spider (finished)
2015-08-07 23:58:45 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 537,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 22109,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 8, 8, 6, 58, 45, 600000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2015, 8, 8, 6, 58, 44, 663000)}
2015-08-07 23:58:45 [scrapy] INFO: Spider closed (finished)

      

I've exhausted most of my options. Running the parse command on its own tells me that I'm not extracting the elements correctly, but even then I would like to know where to go beyond fixing the parsing errors, i.e. how to get the output into JSON. Ultimately, I want to move all of this data into a database.

I know this is a lot to look at, but any help is appreciated. Thanks!



1 answer


You are yielding the wrong variable. In your parse function, change items to item:

def parse(self, response):
    soup = BeautifulSoup(response.body_as_unicode(), 'lxml')
    items = []
    for ...
        item = CoursesItem()
        item['title'] = course_info.text
        item['description'] = course_desc.text
        item['requirements'] = course_req.text
        yield items # -> item
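Once the spider yields item instead of items, the FEED_URI and FEED_FORMAT settings you already have should be enough to fill output.json; the custom JsonExportPipeline only runs if you also uncomment ITEM_PIPELINES. A minimal sketch of that setting, assuming the pipeline class stays in tutorial/pipelines.py:

# settings.py -- only needed if you want the custom pipeline
# in addition to the built-in feed export
ITEM_PIPELINES = {
    'tutorial.pipelines.JsonExportPipeline': 300,
}

Equivalently, you can drop the FEED_* settings and run scrapy crawl courses -o output.json.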

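Since the ultimate goal is a database, an item pipeline is also the usual place for that step. Below is a minimal sketch, assuming SQLite and a hypothetical courses table with one column per item field; adapt the connection and schema to whatever database you actually use:

# pipelines.py -- hypothetical sketch, not part of the original project
import sqlite3

class SqliteExportPipeline(object):

    def open_spider(self, spider):
        # Open (or create) a local database file and make sure the table exists.
        self.conn = sqlite3.connect('%s.db' % spider.name)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS courses '
            '(title TEXT, description TEXT, requirements TEXT)'
        )

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        # One row per scraped course.
        self.conn.execute(
            'INSERT INTO courses VALUES (?, ?, ?)',
            (item['title'], item['description'], item['requirements'])
        )
        return item

Register it the same way as the JSON pipeline, e.g. 'tutorial.pipelines.SqliteExportPipeline': 400 under ITEM_PIPELINES.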







