Error while using Scrapy for Python

I am trying to run Scrapy to scrape websites, and every time I try to run it I run into problems. When I run the command

scrapy crawl [FILE]

I get a bunch of errors, starting with:

Traceback (most recent call last):
  File "C:\Users\lib\site-packages\boto\utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "C:\Users\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "C:\Users\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "C:\Users\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "C:\Users\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "C:\Users\lib\urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2015-08-06 14:50:49 [boto] ERROR: Unable to read instance data, giving up

What exactly is stopping me from starting Scrapy?

EDIT: I looked on Stack Overflow and tweaked the settings a bit, which seemed to get rid of one error, but the errors below remain. I also tried running scrapy shell, and it gives me an error that I believe is related to the one above.

2015-08-08 15:08:27 [scrapy] INFO: Scrapy 1.0.1 started (bot: scrapybot)
2015-08-08 15:08:27 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-08-08 15:08:27 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0}
2015-08-08 15:08:27 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, CoreStats, SpiderState
2015-08-08 15:08:28 [boto] DEBUG: Retrieving credentials from metadata server.
2015-08-08 15:08:29 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "C:\Users\lib\site-packages\boto\utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "C:\Users\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "C:\Users\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "C:\Users\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "C:\Users\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "C:\Users\lib\urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2015-08-08 15:08:29 [boto] ERROR: Unable to read instance data, giving up
2015-08-08 15:08:29 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-08-08 15:08:29 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-08-08 15:08:29 [scrapy] INFO: Enabled item pipelines:
2015-08-08 15:08:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023

3 answers


Try disabling the S3 handler by adding the following line to the file ~/your_project/settings.py:



DOWNLOAD_HANDLERS = {'s3': None}
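
For background, a likely cause on Scrapy 1.0.x is that boto (pulled in for the optional S3 download handler) tries to read EC2 instance metadata when the crawler starts, and that request times out on an ordinary machine; disabling the handler skips the lookup. A minimal sketch of the setting in context (the project name is hypothetical):

# settings.py of a hypothetical project called "myproject"
BOT_NAME = 'myproject'
SPIDER_MODULES = ['myproject.spiders']

# Disable the S3 download handler so boto never tries to contact the
# EC2 metadata service at startup (the source of the timeout above).
DOWNLOAD_HANDLERS = {'s3': None}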

It looks like your program is being cut off because it sends too many requests to the same site in quick succession. Try setting a delay between page loads.

See the Scrapy documentation on download delays (the DOWNLOAD_DELAY setting).
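
For example, a throttling setup you could put in settings.py (the values here are only illustrative, not taken from the answer):

# settings.py
DOWNLOAD_DELAY = 2           # wait 2 seconds between requests to the same site
AUTOTHROTTLE_ENABLED = True  # optionally let Scrapy adjust the delay on its own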

It might have something to do with Python picking up a proxy setting. To disable it, you can make the following change:

import os

os.environ['http_proxy'] = ''
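
If you try this, the override needs to run before any requests are made, for example at the top of settings.py; clearing https_proxy as well is my own addition, in case that variable is also set:

import os

# Clear any proxy picked up from the environment so HTTP connections
# go out directly instead of through a proxy.
os.environ['http_proxy'] = ''
os.environ['https_proxy'] = ''  # assumption: also clear the HTTPS proxy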
