
How do I query arXiv for a specific year?


I am using the code shown below to retrieve papers from arXiv. I want to retrieve papers whose titles contain the words "machine" and "learning". There are a lot of results, so I would like to slice them by year (published).

How can I request only the 2020 and 2019 records in search_query? Note that I am not interested in post-filtering.

import urllib.request
import urllib.parse
import time
import feedparser

# Base api query url
base_url = 'http://export.arxiv.org/api/query?'

# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
start = 0
total_results = 5000
results_per_iteration = 1000
wait_time = 3

papers = []

print('Searching arXiv for %s' % search_query)

for i in range(start,total_results,results_per_iteration):
    
    print("Results %i - %i" % (i,i+results_per_iteration))
    
    query = 'search_query=%s&start=%i&max_results=%i' % (search_query,i,results_per_iteration)

    # perform a GET request using the base_url and query
    response = urllib.request.urlopen(base_url+query).read()

    # parse the response using feedparser
    feed = feedparser.parse(response)

    # Run through each entry, and print out information
    for entry in feed.entries:
        #print('arxiv-id: %s' % entry.id.split('/abs/')[-1])
        #print('Title:  %s' % entry.title)
        #feedparser v4.1 only grabs the first author
        #print('First Author:  %s' % entry.author)
        paper = {}
        paper["date"] = entry.published
        paper["title"] = entry.title
        paper["first_author"] = entry.author
        paper["summary"] = entry.summary
        papers.append(paper)
    
    # Sleep a bit before calling the API again
    print('Bulk: %i' % 1)
    time.sleep(wait_time)

Solution

According to the arXiv documentation, there is no published-date field available to query on.

What you can do is sort the results by date (by adding &sortBy=submittedDate&sortOrder=descending to the query parameters) and stop issuing requests once you reach 2018.

Basically, your code should be modified like this:

import urllib.request
import urllib.parse
import time
import feedparser

# Base api query url
base_url = 'http://export.arxiv.org/api/query?'

# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
i = 0
results_per_iteration = 1000
wait_time = 3
papers = []
year = ""  # year of the most recently processed entry; updated inside the loop below
print('Searching arXiv for %s' % search_query)

while year != "2018":  # stop requesting once the published dates reach 2018
    print("Results %i - %i" % (i,i+results_per_iteration))
    
    query = 'search_query=%s&start=%i&max_results=%i&sortBy=submittedDate&sortOrder=descending' % (search_query,i,results_per_iteration)

    # perform a GET request using the base_url and query
    response = urllib.request.urlopen(base_url+query).read()

    # parse the response using feedparser
    feed = feedparser.parse(response)
    # Run through each entry, and print out information
    for entry in feed.entries:
        #print('arxiv-id: %s' % entry.id.split('/abs/')[-1])
        #print('Title:  %s' % entry.title)
        #feedparser v4.1 only grabs the first author
        #print('First Author:  %s' % entry.author)
        paper = {}
        paper["date"] = entry.published
        year = paper["date"][0:4]  # first four characters of the published timestamp, e.g. "2019"
        paper["title"] = entry.title
        paper["first_author"] = entry.author
        paper["summary"] = entry.summary
        papers.append(paper)
    # Sleep a bit before calling the API again
    print('Bulk: %i' % 1)
    i += results_per_iteration
    time.sleep(wait_time)
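
Note that the loop only stops after it has processed the whole batch in which 2018 first appears, so papers will usually end up containing some 2018 entries. A one-line trim (my own sketch, not part of the original answer; it relies on the dates being ISO strings, so the 4-digit year compares correctly as text) removes them:

papers = [p for p in papers if p["date"][0:4] >= "2019"]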

For the "post-filtering" approach, once enough results have been collected, I would do the following:

papers2019 = [item for item in papers if item["date"][0:4] == "2019"]
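
If you want each year in its own bucket instead of one filtered list, the same post-filter extends naturally. A minimal sketch (my own addition, using the standard-library collections.defaultdict; not something from the original answer):

from collections import defaultdict

# Group the collected papers by the year taken from the published date
# (the first four characters of the ISO timestamp, e.g. "2020-01-17T...").
papers_by_year = defaultdict(list)
for item in papers:
    papers_by_year[item["date"][0:4]].append(item)

papers2019 = papers_by_year["2019"]
papers2020 = papers_by_year["2020"]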
