How do I download a file with python from a site that has a download wait timer?
I'm in a bit of a strange situation today. I'm trying to crawl this site and download every ROM that matches a certain pattern (the pattern-matching part isn't in my code yet): https://romhustler.org/
I'm doing this because I have an RPi 4 running RetroPie that I want to load up with a lot of ROMs. I'll deal with folder sorting and all of that later.
I have a python script that walks through all of the site's pages and grabs the pre-download link. However, I'm having enormous trouble trying to get python to wait while the download counter ticks down, grab the download link, and then download the file: https://romhustler.org/download/122039/RFloRzkzYjBxeUpmSXhmczJndVZvVXViV3d2bjExMUcwRmdhQzltaU1USXlNRE01ZkRJeE1TNHlOaTR4TVRFdU1qVXdmREUyTURJek9USXdOVFo4Wkc5M2JteHZZV1JmY0dGblpRPT0=
I've seen lots of people suggest something like this for downloading a file from a site:
import requests

url = "https://romhustler.org/download/122039/RFloRzkzYjBxeUpmSXhmczJndVZvVXViV3d2bjExMUcwRmdhQzltaU1USXlNRE01ZkRJeE1TNHlOaTR4TVRFdU1qVXdmREUyTURJek9USXdOVFo4Wkc5M2JteHZZV1JmY0dGblpRPT0="
filename = "dummy.txt"

r = requests.get(url, allow_redirects=True)
with open(filename, 'wb') as f:
    f.write(r.content)  # write the response body, not the url string
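That snippet also reads the whole response into memory at once. For multi-megabyte ROMs, a streamed variant of the same idea keeps memory use flat. This is just a sketch, still assuming a direct, static link; `filename_from_url` is a hypothetical helper I made up for naming the output file:

```python
import requests

def filename_from_url(url):
    # Hypothetical helper: use the last path segment of the url as the
    # file name, falling back to a generic name for bare urls.
    return url.rstrip("/").rsplit("/", 1)[-1] or "download.bin"

def download_file(url, filename=None, chunk_size=8192):
    """Stream the response to disk in chunks instead of one big read."""
    filename = filename or filename_from_url(url)
    with requests.get(url, stream=True, allow_redirects=True, timeout=30) as r:
        r.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page
        with open(filename, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
    return filename
```

Usage would be `download_file(url)`, with the file landing next to the script.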
However, that assumes you already have a static download link. The problem is that sites like this one make you wait a while before handing you the download link (understandably, it stops people from spamming downloads). I plan to just leave my code running overnight, so waiting nine seconds per download is not a problem. The only problem I actually have is getting hold of the download link. My code is below:
import requests, time, urllib

# https://raspberrytips.com/download-retropie-roms/#Where_to_download_Retropie_ROMs
# https://raspberrytips.com/add-games-raspberry-pi/
# print("Site to download from: \"https://cvaddict.com/list.php\"")
# print("Site to download from: \"https://coolrom.com.au/roms/\"")
# print("Site to download from: \"https://www.freeroms.com/\"")

def splitTextToLines(text):
    result = [""]
    pos = 0
    for i in text:
        if (i != '\n'):
            result[pos] += i
        else:
            result.append("")
            pos += 1
    return result

def getValues(valIn, key, endPoint="</div"):
    divs = 0
    running = False
    myString = ""
    results = []
    for i in valIn:
        if (key in i):
            running = True
        if (("<div" in i) and (running == True)):
            divs += 1
        if ((divs > 0) and (endPoint in i)):
            divs -= 1
            if (divs == 0):
                running = False
                results.append(myString)
                myString = ""
        if (divs > 0):
            myString += i + "\n"
    return results

def getLineOfValue(data, key):
    looper = 0
    for i in data:
        if (key in i):
            return looper
        looper += 1

def getListSubstring(data, start, end):
    looper = 0
    result = []
    for i in data:
        if ((looper > start) and (looper < end)):
            result.append(i)
        looper += 1
    return result

input("ready?")
for i in range(1, 402):
    print("Now scouring page:", i)
    # print("Site to download from: \"https://romhustler.org/roms/index/page:" + str(i) + "\"")
    rawSiteData = splitTextToLines(requests.get("https://romhustler.org/roms/index/page:" + str(i)).text)
    scrapedSiteData = getLineOfValue(rawSiteData, "<div class=\"roms-listing w-console\"")
    rowList = getValues(getListSubstring(rawSiteData, scrapedSiteData, len(rawSiteData) - 1), "<div class=\"row")
    for rowSegment in rowList:
        state = ""
        for line in splitTextToLines(rowSegment):
            # Get the class type
            if ("<div class=\"row extend\">" in line):
                state = "extended row"
            if ("<div class=\"row \">" in line):
                state = "standard row"
            # If we have the class type, then get the download link
            if (state == "extended row"):
                pass
            if (state == "standard row"):
                for line in splitTextToLines(rowSegment):
                    if (("href=\"" in line) and ("/rom/" in line)):
                        startIndex = line.index("<a href=\"") + len("<a href=\"")
                        url = "https://romhustler.org" + line[startIndex : line.index("\">", startIndex)]
                        # We now have the rom page url.
                        siteData_2 = splitTextToLines(requests.get(url).text)
                        scrapedSiteData_2 = getLineOfValue(siteData_2, "<div class=\"overview info download_list")
                        rowList_2 = getListSubstring(siteData_2, scrapedSiteData_2, len(siteData_2) - 1)
                        running_2 = True
                        for i in rowList_2:
                            if (("href=\"" in i) and running_2):
                                running_2 = False
                                url_2 = "https://romhustler.org"
                                startIndex = i.index("<a href=\"") + len("<a href=\"")
                                url_2 += i[startIndex : i.index("\"", startIndex + 1)]
                                if ("/download/" in url_2):
                                    # We now have the download page url.
                                    siteData_3 = splitTextToLines(requests.get(url_2).text)
                                    print(url_2)
                                    for i in siteData_3:
                                        if ("class=\"downloadLink\"" in i):
                                            print(i)
                                    # This is just here to stop it spamming the console and pause the code
                                    input()
            if ((state != "") and (line != "")):
                print(rowSegment)
If you try this code yourself you'll see what I mean. Under class="downloadLink", the data-url attribute is completely blank. In a browser, it counts down and then gets populated.
So what I'm asking is: does anyone have any idea how to make python sit out the countdown (or skip it entirely) the way a browser does, and then come back for the download url once the page's code has populated it? I suspect it's entirely doable with some time and effort.
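For reference, the kind of approach I imagine could work is driving a real browser so the page's JavaScript actually runs, for example with selenium. This is an untested sketch: the `a.downloadLink` selector and `data-url` attribute are guesses based on the markup my script prints, `page_url` would be one of the `/download/` urls my code already collects, and it assumes Chrome plus chromedriver are installed:

```python
def get_populated_download_url(page_url, timeout=30):
    """Open the download page in a real browser, wait out the countdown,
    and return the data-url attribute once the page's JS fills it in."""
    # Imported inside the function so the rest of the script still runs
    # without selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()  # assumes chromedriver is on PATH
    try:
        driver.get(page_url)
        # WebDriverWait retries the lambda until it returns something
        # truthy (a non-empty data-url) or the timeout expires.
        return WebDriverWait(driver, timeout).until(
            lambda d: d.find_element(
                By.CSS_SELECTOR, "a.downloadLink").get_attribute("data-url")
        )
    finally:
        driver.quit()
```

The returned url could then be fed to the plain requests download shown earlier.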
Thanks in advance, everyone! Andrei