
Scrapy Cloud skips pages in the loop

How can I fix Scrapy Cloud skipping pages in the loop?

This spider is supposed to iterate over https://lihkg.com/thread/`2169007 - i*10`/page/1, but for some reason it skips pages in the loop.
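In other words, the loop is expected to walk the thread IDs downward in steps of 10. A minimal sketch of the expected URL sequence, using the start ID from the spider code below:

# Expected URL sequence: thread IDs descending in steps of 10
# (start ID 2479991 taken from the spider code further down).
for i in range(3):
    print(f"https://lihkg.com/thread/{2479991 - i * 10}/page/1")
# https://lihkg.com/thread/2479991/page/1
# https://lihkg.com/thread/2479981/page/1
# https://lihkg.com/thread/2479971/page/1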

Looking at the items scraped in Scrapy Cloud, items with the following URLs were scraped:

...
Item 10: https://lihkg.com/thread/2479941/page/1
Item 11: https://lihkg.com/thread/2479981/page/1
Item 12: https://lihkg.com/thread/2479971/page/1
Item 13: https://lihkg.com/thread/2479931/page/1
Item 14: https://lihkg.com/thread/2479751/page/1
Item 15: https://lihkg.com/thread/2479991/page/1
Item 16: https://lihkg.com/thread/1504771/page/1
Item 17: https://lihkg.com/thread/1184871/page/1
Item 18: https://lihkg.com/thread/1115901/page/1
Item 19: https://lihkg.com/thread/1062181/page/1
Item 20: https://lihkg.com/thread/1015801/page/1
Item 21: https://lihkg.com/thread/955001/page/1
Item 22: https://lihkg.com/thread/955011/page/1
Item 23: https://lihkg.com/thread/955021/page/1
Item 24: https://lihkg.com/thread/955041/page/1
...

About a million thread IDs are skipped.
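The jump is visible in the list above: between item 15 and item 16 the thread ID falls from 2479991 to 1504771. A quick check of the size of that gap, using the step of 10 from the code below:

# Size of the gap between item 15 and item 16 above
skipped_ids = 2479991 - 1504771   # 975220 thread IDs
skipped_urls = skipped_ids // 10  # 97522 URLs at a step of 10
print(skipped_ids, skipped_urls)  # 975220 97522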

Here is the code:

from lihkg.items import LihkgItem
import scrapy
import time
from scrapy_splash import SplashRequest

class LihkgSpider13(scrapy.Spider):
    name = 'lihkg1-950000'
    http_user = '(my splash api key here)'
    allowed_domains = ['lihkg.com']
    start_urls = ['https://lihkg.com/']

    # Lua script for Splash: disable images, load the page, wait 2 s
    # for the JavaScript to render, then return html/png/har.
    script1 = """
        function main(splash, args)
            splash.images_enabled = false
            assert(splash:go(args.url))
            assert(splash:wait(2))
            return {
                html = splash:html(),
                png = splash:png(),
                har = splash:har(),
            }
        end
    """

    def parse(self, response):
        # Walk thread IDs downward from 2479991 in steps of 10.
        for i in range(152500):
            time.sleep(0)  # no-op delay
            url = "https://lihkg.com/thread/" + str(2479991 - i * 10) + "/page/1"
            yield SplashRequest(
                url=url,
                callback=self.parse_article,
                endpoint='execute',
                args={'html': 1, 'lua_source': self.script1, 'wait': 2},
            )

    def parse_article(self, response):
        # Extract the first post (element id "1") plus thread metadata.
        item = LihkgItem()
        item['author'] = response.xpath('//*[@id="1"]/div/small/span[2]/a/text()').get()
        item['time'] = response.xpath('//*[@id="1"]/div/small/span[4]/@data-tip').get()
        item['texts'] = response.xpath('//*[@id="1"]/div/div[1]/div/text()').getall()
        item['images'] = response.xpath('//*[@id="1"]/div/div[1]/div/a/@href').getall()
        item['emoji'] = response.xpath('//*[@id="1"]/div/div[1]/div/img/@src').getall()
        item['title'] = response.xpath('//*[@id="app"]/nav/div[2]/div[1]/span/text()').get()
        item['likes'] = response.xpath('//*[@id="1"]/div/div[2]/div/div[1]/div/div[1]/label/text()').get()
        item['dislikes'] = response.xpath('//*[@id="1"]/div/div[2]/div/div[1]/div/div[2]/label/text()').get()
        item['category'] = response.xpath('//*[@id="app"]/nav/div[1]/div[2]/div/span/text()').get()
        item['url'] = response.url

        yield item

I have the Crawlera, DeltaFetch, and DotScrapy Persistence add-ons enabled in the project.
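For context: DeltaFetch is a spider middleware that skips requests for pages whose items were already scraped in earlier runs, and DotScrapy Persistence keeps the spider's .scrapy state directory between Scrapy Cloud jobs. In Scrapy Cloud these are usually switched on through the add-ons UI; configured by hand in settings.py, the equivalent would look roughly like this (a sketch, not this project's actual config, using the setting names from the scrapy-deltafetch and scrapy-crawlera packages):

# settings.py -- sketch only
SPIDER_MIDDLEWARES = {
    # scrapy-deltafetch: drop requests whose pages already yielded items
    'scrapy_deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True
# DELTAFETCH_RESET = True  # would ignore stored state and re-fetch everything

DOWNLOADER_MIDDLEWARES = {
    # scrapy-crawlera: route requests through the Crawlera proxy
    'scrapy_crawlera.CrawleraMiddleware': 610,
}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '(crawlera api key here)'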
