
Scraping images from a website when the images are loaded by JavaScript from another server

I am trying to scrape the images and prices from this URL:

https://www.woolworths.co.za/prod/Food/Fruit-Vegetables-Salads/Salads-Herbs/Cucumbers/English-Cucumber-300-g-650-g/_/A-20004019

using Scrapy and scrapy-splash.

The problem I am having is that the price and the image appear to be fetched from somewhere by JavaScript (if I load the site in Chrome with JavaScript disabled, I cannot see either element), so of course they do not show up when I use a plain scrapy.Request.

I have tried using scrapy_splash.SplashRequest, but still no luck. Please help.
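One approach I have seen suggested (a sketch only; LUA_WAIT_SCRIPT and build_splash_request are illustrative names, and the 3-second wait is a guess to tune against the actual page) is to drive Splash through its 'execute' endpoint with a Lua script that waits before returning the HTML, giving the page's JavaScript time to run, rather than relying on render.html with a fixed short wait:

```python
# Sketch: use Splash's 'execute' endpoint with a Lua script that waits for
# the page's JavaScript before returning the rendered HTML.
# LUA_WAIT_SCRIPT / build_splash_request are illustrative names; the 3 s
# wait is an assumption, not a verified value for this site.
LUA_WAIT_SCRIPT = """
function main(splash, args)
    assert(splash:go(args.url))
    splash:wait(3.0)
    return {html = splash:html()}
end
"""

def build_splash_request(url, callback):
    # Imported here so the sketch can be read without scrapy-splash installed.
    from scrapy_splash import SplashRequest
    return SplashRequest(
        url,
        callback=callback,
        endpoint='execute',                    # 'execute' runs lua_source
        args={'lua_source': LUA_WAIT_SCRIPT},
    )
```

In the spider below, the render.html request in parse_product_page would then become build_splash_request(url, self.parse_item_page).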

Here is the code for my spider:

import scrapy
from scrapy_splash import SplashRequest


class Myspider(scrapy.Spider):
    name = 'Wooliesspider'
    base_url = 'https://www.woolworths.co.za'
    item_dict = {}
    # download_delay = 10.0
    # handle_httpstatus_list = [301]

    def __init__(self):
        self.headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0"}

    def get_category_from_link(self, link):
        base = "https://www.woolworths.co.za/cat/Food/"
        text_after_base = link[len(base):]  # slice *after* the base, not before it
        category = text_after_base[:text_after_base.find('/')]
        return category

    def start_requests(self):
        urls = [
            'https://www.woolworths.co.za/dept/Food/_/N-1z13sk5'
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.parse)

    def parse(self, response):
        containers = response.css('div.lazyload-container.landing__block.landing__block--half-fourth')
        link_tags = [c.css('a.landing__link') for c in containers]
        link = link_tags[0]  # only following the first category while debugging
        url = self.base_url + link.attrib['href']
        print(url)
        yield SplashRequest(url, callback=self.parse_product_page)

    def parse_product_page(self, response):
        print('parse_product_page')
        items = response.css('div.product-list__item')
        item = items[0]  # only following the first product while debugging
        link = item.css('a.product--view').attrib['href']
        url = self.base_url + link
        print(url)
        yield SplashRequest(url=url, callback=self.parse_item_page,
                            endpoint='render.html', args={'wait': 0.5})

    def parse_item_page(self, response):
        print('*****parse_item_page*****')
        print(response.css('figure.zoom'))
        print(response.css('span.price'))
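The category-from-link step can be checked in isolation. A minimal standalone sketch, using one of the category URLs that appears in the console output (the URL layout is assumed from the base constant in the spider):

```python
# Standalone check of the category extraction step, assuming category links
# look like https://www.woolworths.co.za/cat/Food/<category>/...
def get_category_from_link(link):
    base = "https://www.woolworths.co.za/cat/Food/"
    text_after_base = link[len(base):]  # the text *after* the base prefix
    return text_after_base[:text_after_base.find('/')]

url = "https://www.woolworths.co.za/cat/Food/Food-Cupboard/International-Cuisine/_/N-1ele3tm"
print(get_category_from_link(url))  # Food-Cupboard
```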

Here is the console output:

2021-07-18 21:49:11 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: SavR-Bot)
2021-07-18 21:49:11 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.19041-SP0
2021-07-18 21:49:11 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-07-18 21:49:11 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'SavR-Bot', 'DOWNLOAD_DELAY': 1.2, 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'NEWSPIDER_MODULE': 'Scrapers.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['Scrapers.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, '
               'like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
2021-07-18 21:49:11 [scrapy.extensions.telnet] INFO: Telnet Password: 875e1f88a8eb7813
2021-07-18 21:49:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2021-07-18 21:49:14 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy_splash.SplashCookiesMiddleware', 'scrapyjs.SplashMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-07-18 21:49:15 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy_splash.SplashDeduplicateArgsMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-07-18 21:49:15 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-07-18 21:49:15 [scrapy.core.engine] INFO: Spider opened
2021-07-18 21:49:15 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-07-18 21:49:15 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-07-18 21:49:15 [py.warnings] WARNING: c:\users\user\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\request.py:41: ScrapyDeprecationWarning: Call to deprecated function to_native_str. Use to_unicode instead.
  url = to_native_str(url)

2021-07-18 21:49:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.woolworths.co.za/robots.txt> (referer: None)
2021-07-18 21:49:16 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://localhost:8050/robots.txt> (referer: None)
2021-07-18 21:49:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.woolworths.co.za/dept/Food/_/N-1z13sk5 via http://localhost:8050/render.html> (referer: None)
https://www.woolworths.co.za/cat/Food/Food-Cupboard/International-Cuisine/_/N-1ele3tm
2021-07-18 21:49:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.woolworths.co.za/cat/Food/Food-Cupboard/International-Cuisine/_/N-1ele3tm via http://localhost:8050/render.html> (referer: None)
parse_product_page
https://www.woolworths.co.za/prod/Food/Food-Cupboard/Pasta-Rice-Grains/Pasta/Spaghetti-500-g/_/A-6009178658413
2021-07-18 21:49:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.woolworths.co.za/prod/Food/Food-Cupboard/Pasta-Rice-Grains/Pasta/Spaghetti-500-g/_/A-6009178658413 via http://localhost:8050/render.html> (referer: None)
*****parse_item_page*****
[]
[]
2021-07-18 21:49:36 [scrapy.core.engine] INFO: Closing spider (finished)
2021-07-18 21:49:36 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1874, 'downloader/request_count': 5, 'downloader/request_method_count/GET': 2, 'downloader/request_method_count/POST': 3, 'downloader/response_bytes': 4552254, 'downloader/response_count': 5, 'downloader/response_status_count/200': 4, 'downloader/response_status_count/404': 1, 'elapsed_time_seconds': 21.456216, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2021, 7, 18, 19, 49, 36, 851956), 'log_count/DEBUG': 5, 'log_count/INFO': 10, 'log_count/WARNING': 1, 'request_depth_max': 2, 'response_received_count': 5, 'robotstxt/request_count': 2, 'robotstxt/response_count': 2, 'robotstxt/response_status_count/200': 1, 'robotstxt/response_status_count/404': 1, 'scheduler/dequeued': 6, 'scheduler/dequeued/memory': 6, 'scheduler/enqueued': 6, 'scheduler/enqueued/memory': 6, 'splash/render.html/request_count': 3, 'splash/render.html/response_count/200': 3, 'start_time': datetime.datetime(2021, 15, 395740)}
2021-07-18 21:49:36 [scrapy.core.engine] INFO: Spider closed (finished)
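Both selectors come back empty even though the request went through Splash. A quick way to see what Splash actually returned (a debugging sketch, not part of the spider above; the function name is mine) is to write the rendered body to disk from the callback and search it for the figure.zoom / span.price markup:

```python
from pathlib import Path

def dump_rendered_html(body, path="rendered.html"):
    """Write a response body (bytes or str) to disk for inspection.

    Called from a callback, e.g. dump_rendered_html(response.body).
    If figure.zoom / span.price are absent from the file, Splash returned
    the page before the JavaScript that injects them had finished running.
    """
    if isinstance(body, bytes):
        body = body.decode("utf-8", errors="replace")
    Path(path).write_text(body, encoding="utf-8")
    return path
```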
