
Web Scraping: Extracting Baidu Search Results

Code

import json
import time

import requests
from lxml import etree
from tqdm import tqdm

# Keyword to search for ('粮食' means "grain")
word = '粮食'

# Baidu paginates with pn = page_index * 10
base_url = 'https://www.baidu.com/s?wd=' + word + '&pn='
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36"
}

for j in tqdm(range(100)):
    url = base_url + str(j * 10)
    response = requests.get(url, headers=headers)
    html = etree.HTML(response.text, etree.HTMLParser())
    titles = html.xpath('//h3')                         # result titles
    abstracts = html.xpath('//*[@class="c-abstract"]')  # result snippets
    links = html.xpath('//*[@class="t"]/a/@href')       # result links

    # The three lists can differ in length when a page contains ads or
    # special result cards, so iterate only over complete triples
    # instead of indexing all three lists by abstracts' length.
    with open('baidu_sousuo1.txt', 'a', encoding='utf-8') as c:
        for title, abstract, link in zip(titles, abstracts, links):
            c.write(json.dumps(title.xpath('string(.)'), ensure_ascii=False) + '\n')
            c.write(json.dumps(abstract.xpath('string(.)'), ensure_ascii=False) + '\n')
            c.write(json.dumps(link, ensure_ascii=False) + '\n')
    time.sleep(1)  # pause 1 second between pages to avoid hammering the server
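Concatenating the keyword directly into the URL relies on the HTTP client percent-encoding the non-ASCII characters on send. A more explicit sketch, using only the standard library's `urlencode` (an alternative to the string concatenation above, not part of the original script), builds the query string safely for any keyword:

```python
from urllib.parse import urlencode

# Build the search URL with explicit percent-encoding, so a
# non-ASCII keyword such as '粮食' is always transmitted safely.
word = '粮食'
page = 0  # Baidu's pn parameter is a result offset, 10 per page
url = 'https://www.baidu.com/s?' + urlencode({'wd': word, 'pn': page})
print(url)  # → https://www.baidu.com/s?wd=%E7%B2%AE%E9%A3%9F&pn=0
```

The same effect can be had by passing `params={'wd': word, 'pn': page}` to `requests.get`, which encodes the query for you.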

Results

[Screenshot of the saved search results]
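The output file stores each result as three JSON-encoded lines: title, abstract, then link. A small self-contained sketch (using a hypothetical sample file, `baidu_sample.txt`, in the same format) shows how to read the records back as triples:

```python
import json

# Write a small sample file in the same format the scraper produces:
# three JSON-encoded lines (title, abstract, link) per result.
sample = ['标题1', '摘要1', 'http://a.example',
          '标题2', '摘要2', 'http://b.example']
with open('baidu_sample.txt', 'w', encoding='utf-8') as f:
    for item in sample:
        f.write(json.dumps(item, ensure_ascii=False) + '\n')

# Read it back and regroup every three lines into one record.
with open('baidu_sample.txt', encoding='utf-8') as f:
    lines = [json.loads(line) for line in f]
records = list(zip(lines[0::3], lines[1::3], lines[2::3]))
print(records[0])  # → ('标题1', '摘要1', 'http://a.example')
```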

