
scrapy [boto] ERROR: Caught exception reading instance data URLError: <urlopen error [Errno 10051

The following error appeared while running the spider:

2015-09-09 11:13:26 [boto] DEBUG: Retrieving credentials from Metadata server.
2015-09-09 11:13:27 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "D:\anzhuang\Anaconda\lib\site-packages\boto\utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 10051]
2015-09-09 11:13:27 [boto] ERROR: Unable to read instance data, giving up

Disabling the s3 download handler in settings.py resolves the error.
Solution

DOWNLOAD_HANDLERS = {'s3': None,}

There is a fuller explanation of the error on Stack Overflow that is worth reading when you have time.
Part of the answer is pasted below:
That particular error message is being generated by boto (boto 2.38.0 py27_0), which is used to connect to Amazon S3. Scrapy doesn't have this enabled by default.

EDIT: In reply to the comments, this appears to be a bug in Scrapy when boto is present (bug here).

In response to "how to disable the download handler", add the following to your settings.py file:

DOWNLOAD_HANDLERS = {'s3': None,}

Your settings.py file should be in the root of your Scrapy project folder (one level deeper than your scrapy.cfg file).

If you've already got DOWNLOAD_HANDLERS in your settings.py file, just add a new entry for 's3' with a None value.
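Putting the fix in context, a minimal settings.py might look like the sketch below. BOT_NAME is a hypothetical placeholder; the only part that matters for this error is the 's3' entry, which must map to None:

```python
# settings.py -- minimal sketch; BOT_NAME is an illustrative placeholder.
BOT_NAME = "myproject"  # hypothetical project name

# Mapping the 's3' URL scheme to None tells Scrapy not to load the S3
# download handler, so boto is never imported and never tries to read
# EC2 instance metadata at startup.
DOWNLOAD_HANDLERS = {
    's3': None,
}
```

If your settings.py already defines DOWNLOAD_HANDLERS for other schemes, keep those entries and simply add the 's3': None pair to the existing dict.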

EDIT 2: I'd highly recommend looking at setting up virtual environments for your projects. Look into virtualenv and its usage. I'd make this recommendation regardless of the packages used for this project, but doubly so given your extreme number of packages.

