
python-3.x – SyntaxError: invalid syntax: except urllib2.HTTPError, e:

I am trying to scrape this XML page for links matching a keyword, but urllib2 keeps throwing errors that I can't get around under Python 3…

from bs4 import BeautifulSoup
import requests
import smtplib
import urllib2
from lxml import etree
url = 'https://store.fabspy.com/sitemap_products_1.xml?from=5619742598&to=9172987078'
hdr = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML,like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'none',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive'
}
proxies = {'https': '209.212.253.44'}
req = urllib2.Request(url,headers=hdr,proxies=proxies)
try:
    page = urllib2.urlopen(req)
except urllib2.HTTPError as e:
    print(e.fp.read())
content = page.read()
def parse(self,response):
    try:
        print(response.status)
        print('???????????????????????????????????')
        if response.status == 200:
            self.driver.implicitly_wait(5)
            self.driver.get(response.url)
            print(response.url)
            print('!!!!!!!!!!!!!!!!!!!!')

            # DO STUFF
    except httplib.BadStatusLine:
        pass
while True:
    soup = BeautifulSoup(a.context,'lxml')
    links = soup.find_all('loc')
    for link in links:
        if 'notonesite' and 'winter' in link.text:
            print(link.text)
            jake = link.text

I am just trying to send a urllib request through a proxy and check whether the link is in the sitemap…

Solution

urllib2 is not available in Python 3. You should use urllib.request and urllib.error instead:

import urllib.request
import urllib.error
...
req = urllib.request.Request(url, headers=hdr)  # Request() doesn't take a proxies argument though...
...
try:
    page = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
...

…and so on. Note, however, that urllib.request.Request() does not accept a proxies argument. See the documentation for proxy handling.
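For completeness, a minimal sketch of how the request could be routed through a proxy with urllib.request.ProxyHandler and build_opener, and how the sitemap could then be checked for a keyword. The proxy address and the 'winter' keyword are taken from the question and are purely illustrative; a reachable proxy is assumed.

import urllib.request
import urllib.error
from bs4 import BeautifulSoup

url = 'https://store.fabspy.com/sitemap_products_1.xml?from=5619742598&to=9172987078'
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML,like Gecko) Chrome/23.0.1271.64 Safari/537.11'}

# Instead of passing proxies to Request(), install the proxy on an opener.
proxy_handler = urllib.request.ProxyHandler({'https': 'https://209.212.253.44'})
opener = urllib.request.build_opener(proxy_handler)

req = urllib.request.Request(url, headers=hdr)
try:
    page = opener.open(req)
except urllib.error.HTTPError as e:
    print(e.read())
else:
    content = page.read()
    # Look for <loc> entries in the sitemap that contain the keyword.
    soup = BeautifulSoup(content, 'lxml')
    for link in soup.find_all('loc'):
        if 'winter' in link.text:
            print(link.text)

Using an opener keeps the proxy configuration separate from the individual request, which is how urllib.request is designed to handle proxies.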

