
Downloading NSE 2021 data with Python

How to fix downloading NSE 2021 data with Python

I am having trouble accessing URLs like the following from Python code:

https://www1.nseindia.com/content/historical/EQUITIES/2021/JAN/cm01JAN2021bhav.csv.zip

This had worked for the past three years, right up to 31 December 2020; the site appears to have introduced some restrictions since then.

There is a similar solution here:

VB NSE Access Denied. The addition there was: "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11" and "Referer": "https://www1.nseindia.com/products/content/equities/equities/archieve_eq.htm"
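
For reference, that addition translates into Python roughly as follows; this is only a sketch, with both header values copied verbatim from the linked answer and the URL taken from the question:

import requests

url = 'https://www1.nseindia.com/content/historical/EQUITIES/2021/JAN/cm01JAN2021bhav.csv.zip'
headers = {
    # values copied from the linked VB answer
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11',
    'Referer': 'https://www1.nseindia.com/products/content/equities/equities/archieve_eq.htm',
}
response = requests.get(url, headers=headers)  # the dict must go through the headers= keyword
print(response.status_code)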

The original code is here: https://github.com/naveen7v/Bhavcopy/blob/master/Bhavcopy.py

It still does not work even after adding the following to the requests call:

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11'}
# print(Path)
a = requests.get(Path, headers)  # note: passed positionally, this dict becomes the params argument, not headers=

Can anyone help?

Solution

import io
import zipfile

import requests

# Assumed helper (not shown in the original answer): maps the month part of a
# 'DD-MM-YYYY' date to the abbreviation used in NSE bhavcopy file names.
month_dict = {'01': 'JAN', '02': 'FEB', '03': 'MAR', '04': 'APR',
              '05': 'MAY', '06': 'JUN', '07': 'JUL', '08': 'AUG',
              '09': 'SEP', '10': 'OCT', '11': 'NOV', '12': 'DEC'}

def download_bhavcopy(formated_date):
    # formated_date is expected as 'DD-MM-YYYY'
    day, month, year = formated_date.split('-')
    url = "https://www1.nseindia.com/content/historical/DERIVATIVES/{0}/{1}/fo{2}{1}{0}bhav.csv.zip".format(
        year, month_dict[month], day)
    print(url)

    # Browser-like request headers; the Referer and Host matter for NSE's checks.
    hdr = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Encoding': 'gzip,deflate,br',
        'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
        'Accept-Language': 'en-IN,en;q=0.9,en-GB;q=0.8,en-US;q=0.7,hi;q=0.6',
        'Connection': 'keep-alive',
        'Host': 'www1.nseindia.com',
        'Cache-Control': 'max-age=0',
        'Referer': 'https://www1.nseindia.com/products/content/derivatives/equities/fo.htm',
    }
    cookie_dict = {'bm_sv': 'E2109FAE3F0EA09C38163BBF24DD9A7E~t53LAJFVQDcB/+q14T3amyom/sJ5dm1gV7z2R0E3DKg6WiKBpLgF0t1Mv32gad4CqvL3DIswsfAKTAHD16vNlona86iCn3267hHmZU/O7DrKPY73XE6C4p5geps7yRwXxoUOlsqqPtbPsWsxE7cyDxr6R+RFqYMoDc9XuhS7e18='}

    session = requests.session()
    for cookie in cookie_dict:
        session.cookies.set(cookie, cookie_dict[cookie])

    response = session.get(url, headers=hdr)
    if response.status_code == 200:
        print('Success!')
    elif response.status_code == 404:
        print('Not Found.')
    else:
        print('response.status_code ', response.status_code)

    file_name = "none"
    try:
        # Unpack the downloaded archive in memory and extract it to the working directory.
        zipT = zipfile.ZipFile(io.BytesIO(response.content))
        zipT.extractall()
        file_name = zipT.filelist[0].filename
        print('file name ' + file_name)
    except zipfile.BadZipFile:    # the downloaded content is not a valid zip archive
        print('Error: Zip file is corrupted')
    except zipfile.LargeZipFile:  # the archive needs ZIP64 support, which is not enabled
        print('Error: File size is too large')

    print(file_name)
    return file_name
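
A hypothetical call would look like this, assuming dates are passed as 'DD-MM-YYYY' as the URL template above implies:

# Hypothetical usage: derivatives bhavcopy for 1 January 2021
file_name = download_bhavcopy('01-01-2021')
print(file_name)  # e.g. fo01JAN2021bhav.csv if the download and extraction succeeded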

Open the link in your web browser and find the GET request for the download you want. Go to the request headers and check the User-Agent, for example: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0

Now modify your code to:

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0'
}

result = requests.get(URL, headers=headers)
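
The response body is the zip archive itself, so it still has to be unpacked; a minimal sketch continuing result from above, mirroring the in-memory extraction used in the first answer:

import io
import zipfile

if result.status_code == 200:
    # Unpack the downloaded bhavcopy zip in memory and write the CSV to the working directory.
    with zipfile.ZipFile(io.BytesIO(result.content)) as archive:
        archive.extractall()
        print(archive.namelist())  # e.g. ['cm01JAN2021bhav.csv']
else:
    print('Download failed:', result.status_code)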
