Error importing urllib.parse in scrapy
I am trying to use Scrapy. My Python version is 2.7.9. After installing it, typing scrapy in the terminal gave the following error:
File "/usr/bin/scrapy", line 7, in <module>
from scrapy.cmdline import execute
File "/usr/lib/python2.7/site-packages/scrapy/__init__.py", line 48, in <module>
from scrapy.spiders import Spider
File "/usr/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 10, in <module>
from scrapy.http import Request
File "/usr/lib/python2.7/site-packages/scrapy/http/__init__.py", line 10, in <module>
from scrapy.http.request import Request
File "/usr/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 12, in <module>
from scrapy.utils.url import escape_ajax
File "/usr/lib/python2.7/site-packages/scrapy/utils/url.py", line 9, in <module>
from six.moves.urllib.parse import (ParseResult, urlunparse, urldefrag,
ImportError: No module named urllib.parse
I know Scrapy supports Python 2.7, but urllib.parse only exists in Python 3; in Python 2 the equivalent module was called urlparse. Looking at the error, it seems something is wrong with the Scrapy setup. What should I do? I have uninstalled and reinstalled the package several times, but the problem persists.
1 answer
You can use separate imports for Python 2 and Python 3:
try:
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
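As a minimal sketch, the compatibility import above can be used like this (the example.com URL is just an illustration):

```python
# On Python 3, urlparse lives in urllib.parse; on Python 2 it was
# a top-level module named urlparse. This try/except handles both.
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

# Parse a URL and inspect its components.
parts = urlparse("https://example.com/path?q=1")
print(parts.scheme)  # https
print(parts.netloc)  # example.com
print(parts.path)    # /path
```

Note that this pattern helps in your own code; the import in the traceback comes from six.moves inside Scrapy itself, which is supposed to perform exactly this kind of dispatch for you.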
Got this answer from here: no module named urllib.parse (how to install it?)