How do I scrape pages that use If-None-Match and cookies, with Scrapy or any other tool?
I am trying to scrape an API that returns a JSON object, but it only returns the JSON on the first request; after that it returns nothing. I am using the "if-none-match" header together with cookies, but I would like to make it work without cookies, because I have many such APIs to scrape.
Here is my spider code:
import json

import scrapy
from scrapy import Spider, Request
from scrapy.crawler import CrawlerProcess

header_data = {
    'authority': 'shopee.com.my',
    'method': 'GET',
    'scheme': 'https',
    'accept': '*/*',
    'if-none-match': '*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'x-shopee-language': 'en',
    'Cache-Control': 'max-age=0',
}


class TestSales(Spider):
    name = "testsales"
    allowed_domains = ['shopee.com', 'shopee.com.my', 'shopee.com.my/api/']

    cookie_string = {
        'SPC_U': '-', 'SPC_IA': '-1', 'SPC_EC': '-',
        'SPC_F': '7jrWAm4XYNNtyVAk83GPknN8NbCMQEIk',
        'REC_T_ID': '476673f8-eeb0-11ea-8919-48df374df85c',
        '_gcl_au': '1.1.1197882328.1599225148', '_med': 'refer',
        '_fbp': 'fb.2.1599225150134.114138691', 'language': 'en',
        '_ga': 'GA1.3.1167355736.1599225151',
        'SPC_SI': 'mall.gTmrpiDl24JHLSNwnCw107mao3hd8qGP',
        'csrftoken': '2ntG40uuWzOLUsjv5Sn8glBUQjXtbGgo',
        'welcomePkgShown': 'true', '_gid': 'GA1.3.590966412.1602427202',
        'AMP_TOKEN': '%24NOT_FOUND',
        'SPC_CT_21c6f4cb': '1602508637.vtyz9yfI6ckMZBdT9dlIcuaYf7crlEQ6NwQScaB2VXI=',
        'SPC_CT_087ee755': '1602508652.ihdXyWUp3wFdbn1FGrKejd91MM8sJheyCPqcgmKqpdA=',
        '_dc_gtm_UA-61915055-6': '1',
        'SPC_R_T_ID': 'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0=',
        'SPC_T_IV': 'hhHcCbIpVvuchn7SbLYeFw==',
        'SPC_R_T_IV': 'hhHcCbIpVvuchn7SbLYeFw==',
        'SPC_T_ID': 'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0=',
    }

    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
        # The initial download delay
        'AUTOTHROTTLE_START_DELAY': '0.5',
        # The maximum download delay to be set in case of high latencies
        'AUTOTHROTTLE_MAX_DELAY': '10',
        # The average number of requests Scrapy should be sending in parallel
        # to each remote server
        'AUTOTHROTTLE_TARGET_CONCURRENCY': '1.0',
        # 'DNSCACHE_ENABLED': 'False',
        # 'COOKIES_ENABLED': 'False',
    }

    def start_requests(self):
        subcat_url = '/Baby-Toddler-Play-cat.27.23785'
        id = subcat_url.split('.')[-1]
        header_data['path'] = f'/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'
        header_data['referer'] = f'https://shopee.com.my{subcat_url}?page=0&sortBy=sales'
        url = f'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'
        yield Request(
            url=url,
            headers=header_data,
            # cookies=self.cookie_string,
            cb_kwargs={'subcat': 'baby tobbler play cat', 'category': 'baby and toys'},
        )

    def parse(self, response, subcat, category):
        try:
            jdata = json.loads(response.body)
        except Exception as e:
            print(f'exception: {e}')
            print(response.body)
            return None
        items = jdata['items']
        for item in items:
            name = item['name']
            image_path = item['image']
            absolute_image = f'https://cf.shopee.com.my/file/{image_path}_tn'
            print(f'this is absolute image {absolute_image}')
            subcategory = subcat
            monthly_sold = 'pending'
            price = float(item['price']) / 100000
            total_sold = item['sold']
            location = item['shop_location']
            stock = item['stock']
            print(name)
            print(price)
            print(total_sold)
            print(location)
            print(stock)


app = CrawlerProcess()
app.crawl(TestSales)
app.start()
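For reference, the field extraction inside parse() can be exercised without Scrapy or any network access. This is a minimal sketch; the sample item below is hypothetical, shaped like the JSON the spider's parse() callback already expects (name/image/price/sold/shop_location/stock), and the division by 100000 mirrors the scaling the spider applies to the API's integer prices.

```python
def extract_item(item):
    """Flatten one raw API item into the fields the spider prints."""
    return {
        'name': item['name'],
        # image IDs become CDN URLs; the '_tn' suffix requests the thumbnail
        'image': f"https://cf.shopee.com.my/file/{item['image']}_tn",
        # the spider divides the raw integer price by 100000
        'price': float(item['price']) / 100000,
        'total_sold': item['sold'],
        'location': item['shop_location'],
        'stock': item['stock'],
    }

# hypothetical sample item for illustration only
sample = {'name': 'Baby Toy', 'image': 'abc123', 'price': 1550000,
          'sold': 12, 'shop_location': 'Selangor', 'stock': 30}
print(extract_item(sample)['price'])  # 15.5
```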
This is the page URL, which you can open in a browser: https://shopee.com.my/Baby-Toddler-Play-cat.27.23785?page=0&sortBy=sales
This is the API URL, which you can also find in the page's developer tools: https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=0&order=desc&page_type=search&version=2
Please tell me how to handle the caching or the "if-none-match" header, because I don't know how to deal with it. Thanks in advance!
Solution
All you need to generate the API GET request is the category identifier (match_id) and the starting item number (the newest parameter).
Using the link template https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id={category_id}&newest={start_item_number}&order=desc&page_type=search&version=2 , you can build the endpoint for any API category.
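The template above can be turned into a small URL builder. This is a sketch under the assumption, implied by the answer, that the API pages in steps of the limit parameter (50 items per call), so page N starts at newest = N * 50; the function name page_urls is mine.

```python
def page_urls(category_id, pages, limit=50):
    """Build one search_items URL per result page for the given category."""
    template = ('https://shopee.com.my/api/v2/search_items/'
                '?by=sales&limit={limit}&match_id={cid}&newest={start}'
                '&order=desc&page_type=search&version=2')
    # newest advances by `limit` on each page: 0, 50, 100, ...
    return [template.format(limit=limit, cid=category_id, start=page * limit)
            for page in range(pages)]

for u in page_urls(23785, 3):
    print(u)
```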
In this case there is no need to manage cookies or even headers. The API is not restricted at all.
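This is likely also why the original spider only got JSON on the first request: per standard HTTP conditional-request semantics, sending If-None-Match lets the server answer 304 Not Modified with an empty body when the resource is unchanged (and "*" matches any current representation), instead of 200 with the JSON payload. A toy illustration of that logic, not Shopee's actual server code:

```python
def respond(etag_on_server, if_none_match):
    """Sketch of a server handling a GET with an optional If-None-Match."""
    if if_none_match is not None and if_none_match in ('*', etag_on_server):
        return 304, b''            # resource unchanged: status only, no body
    return 200, b'{"items": []}'   # no matching validator: full JSON payload

print(respond('abc', None))  # no validator sent -> 200 with a body
print(respond('abc', '*'))   # 'if-none-match': '*' -> 304 with empty body
```

Dropping the If-None-Match header (or the whole custom header dict, as the answer suggests) means every request gets the full 200 response.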
Update:
This worked for me:
from scrapy import Request

url = 'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=50&order=desc&page_type=search&version=2'

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "X-Requested-With": "XMLHttpRequest",
}

request = Request(
    url=url,
    method='GET',
    dont_filter=True,  # bypass Scrapy's duplicate-request filter
    headers=headers,
)
fetch(request)  # fetch() is available inside the Scrapy shell