How to fix skimage failing to read an image from a URL
I scraped some image links and tried to read them with skimage and convert them to strings, but it failed. Here is my code:
import requests
from lxml import etree
from skimage import io
import pytesseract
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36',
    'Cookie': 'antipas=3J5894C659Y4128mb084uM86; uuid=acee566c-264c-40f6-8b67-ef6708e60a35; ganji_uuid=9480781269465181883490; clueSourceCode=%2A%2300; sessionid=f03f9416-5158-4fe5-d3fd-5cf697aebf73; lg=1; cainfo=%7B%22ca_a%22%3A%22-%22%2C%22ca_b%22%3A%22-%22%2C%22ca_s%22%3A%22seo_google%22%2C%22ca_n%22%3A%22default%22%2C%22ca_medium%22%3A%22-%22%2C%22ca_term%22%3A%22-%22%2C%22ca_content%22%3A%22-%22%2C%22ca_campaign%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22scode%22%3A%22-%22%2C%22keyword%22%3A%22-%22%2C%22ca_keywordid%22%3A%22-%22%2C%22display_finance_flag%22%3A%22-%22%2C%22platform%22%3A%221%22%2C%22version%22%3A1%2C%22client_ab%22%3A%22-%22%2C%22guid%22%3A%22acee566c-264c-40f6-8b67-ef6708e60a35%22%2C%22ca_city%22%3A%22bj%22%2C%22sessionid%22%3A%22f03f9416-5158-4fe5-d3fd-5cf697aebf73%22%7D; close_finance_popup=2021-01-28; _gl_tracker=%7B%22ca_source%22%3A%22-%22%2C%22ca_name%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_id%22%3A%22-%22%2C%22ca_s%22%3A%22self%22%2C%22ca_n%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22sid%22%3A63162699730%7D; cityDomain=gz; user_city_id=16; preTime=%7B%22last%22%3A1611873323%2C%22this%22%3A1611180370%2C%22pre%22%3A1611180370%7D'
}
# Collect the detail-page URLs from a listing page:
def get_detail_urls(url):
    rsp = requests.get(url, headers=headers)
    text = rsp.content.decode('utf-8')
    html = etree.HTML(text)
    ul = html.xpath('//ul[@class="carlist clearfix js-top"]')[0]
    lis = ul.xpath('./li')
    detail_urls = []
    for li in lis:
        detail_url = li.xpath('./a/@href')
        detail_url = 'https://www.guazi.com' + detail_url[0]
        detail_urls.append(detail_url)
    return detail_urls
# First URL
url = 'https://www.guazi.com/gz/01'
# Parse Data
detail_urls = get_detail_urls(url)
for detail_url in detail_urls:
    resp = requests.get(detail_url, headers=headers)
    text = resp.content.decode('utf-8')
    html = etree.HTML(text)
    # Grab the first <img> element, serialize it back to text, then try to
    # cut the image URL out of the serialized tag by string-splitting:
    imges = html.xpath('//li[@class="one"]//img[@*]')[0]
    imges = etree.tostring(imges).decode('utf-8')
    imges = imges.split(" ")[1].split("=")[1].replace('',"")
    image = io.imread(imges)
    print(pytesseract.image_to_string(image))
When I run the code above, I get:
Traceback (most recent call last):
  File "C:/Users/naive/Extract_Data/GuaZi.py", line 46, in <module>
    image = io.imread(imges)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\skimage\io\_io.py", line 48, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\skimage\io\manage_plugins.py", line 207, in call_plugin
    return func(*args, **kwargs)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\skimage\io\_plugins\imageio_plugin.py", line 10, in imread
    return np.asarray(imageio_imread(*args, **kwargs))
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\imageio\core\functions.py", line 265, in imread
    reader = read(uri, format, "i", **kwargs)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\imageio\core\functions.py", line 172, in get_reader
    request = Request(uri, "r" + mode, **kwargs)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\imageio\core\request.py", line 124, in __init__
    self._parse_uri(uri)
  File "C:\Users\naive\AppData\Local\Programs\Python\Python38-32\lib\site-packages\imageio\core\request.py", line 260, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: 'C:\Users\naive\Extract_Data\"https:\image1.guazistatic.com\qn200416174108c46443da16f09fbdb5460e27e065e319.jpg"'
I printed out all the links I scraped, pasted one of them into io.imread() by hand, and that worked fine. I don't understand why it fails with the link taken directly from the scrape.
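Update: one hint worth recording. The path in the FileNotFoundError shows that the string handed to io.imread() still carries the literal double quotes from the serialized src="..." attribute, so imageio does not recognize it as an https:// URL and instead resolves it as a file path relative to the script's directory. Below is a minimal sketch of a workaround, assuming the //li[@class="one"]//img selector and the src attribute from the code above are right for the page (not re-verified against the live site): asking XPath for the attribute value directly avoids serializing the tag, so no stray quotes are left behind.

import requests
from lxml import etree
from skimage import io

resp = requests.get(detail_url, headers=headers)  # detail_url and headers as defined above
html = etree.HTML(resp.content.decode('utf-8'))

# Read the src attribute itself; XPath returns the bare attribute value,
# with no surrounding double quotes to trip up imread.
img_url = html.xpath('//li[@class="one"]//img/@src')[0]
img_url = img_url.strip().strip('"')  # defensive: drop any stray quotes/whitespace

image = io.imread(img_url)  # skimage delegates to imageio, which can fetch http(s) URLs

Alternatively, keeping the string-splitting approach, changing .replace('',"") (a no-op) to .replace('"','') would strip the quotes so that imread sees a plain https:// string and fetches it over the network.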