How to fix automatic downloads via DOI in Python

I have a CSV file with roughly 1000 article links and their associated DOIs, and I need to download those papers.
I tried the following:
```python
import logging
import sys
import time

from scidownl.scihub import SciHub

List_dois = [""]  # the list of 1000 DOIs goes here
out = 'out_folder'
logging.basicConfig(filename='myapp.log', level=logging.INFO)

for doi in List_dois:
    try:
        SciHub(doi, out).download(choose_scihub_url_index=3)
        time.sleep(10)
    except Exception:
        # logging.info("Error!", sys.exc_info()[0], doi) passes extra
        # arguments without %s placeholders; use a format string instead.
        logging.info("Error! %s %s", sys.exc_info()[0], doi)
```

But some downloads fail with this traceback:
```
Traceback (most recent call last):
  File "/home/username/PycharmProjects/pythonProject3/main.py", line 65, in <module>
    sci = SciHub(doi, out).download(choose_scihub_url_index=3)
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 90, in download
    self.download_pdf(pdf)
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 147, in download_pdf
    if self.is_captcha_page(res):
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 184, in is_captcha_page
    return 'must-revalidate' in res.headers['Cache-Control']
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/requests/structures.py", line 54, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'cache-control'
```
How can I fix this? Please don't just increase the sleep time much further, or the whole run takes far too long...
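The traceback shows the `KeyError` is raised inside scidownl's `is_captcha_page` when a Sci-Hub mirror responds without a `Cache-Control` header, so the error is often transient. One workaround is to treat `KeyError` as retryable instead of fatal. Below is a minimal sketch of such a wrapper; `download_with_retry` is a hypothetical helper (not part of scidownl), and the retry counts and delays are assumptions you would tune:

```python
import logging
import time


def download_with_retry(download_fn, doi, max_retries=3, base_delay=5):
    """Call download_fn(doi), retrying when the response lacks the
    Cache-Control header (raised as KeyError inside scidownl).
    Returns True on success, False after exhausting retries."""
    for attempt in range(1, max_retries + 1):
        try:
            download_fn(doi)
            return True
        except KeyError:
            # Mirror replied without a Cache-Control header; back off
            # a little longer on each attempt and try again.
            logging.warning("Missing header for %s (attempt %d)", doi, attempt)
            time.sleep(base_delay * attempt)
        except Exception:
            # Anything else is logged with its traceback and skipped,
            # so one bad DOI does not stop the whole batch.
            logging.exception("Unexpected error for %s", doi)
            return False
    return False
```

In the loop you would then call something like `download_with_retry(lambda d: SciHub(d, out).download(choose_scihub_url_index=3), doi)`, keeping the short per-DOI sleep only between successful downloads rather than inflating it globally.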