How to troubleshoot a csv.reader problem I can't make sense of
Python newbie here, so please forgive my ignorance. I'm using a script pieced together from Google and this site to work with the Google Analytics Reporting API (v4) and the Google Search Console API.
The script tries to pull basic analytics data, such as clicks and impressions, per page and per query, and save it to csv files. Because this data is spread across two APIs, the script extracts and saves the data separately, then uses the function update_file(flags) to combine the contents of the 4 csv files. For example, one file contains the list of all Google search queries that generated impressions for the site in question. The flags here are command-line arguments: the site URL, a start date, and an end date.
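For context, the merge pattern the script relies on is a dictionary lookup keyed on the first column. A minimal, self-contained sketch of that pattern (the filenames and columns here are made up for illustration, not the script's actual files):

```python
import csv

# Step 1: write two tiny input files so the example is self-contained.
with open("keywords.csv", "w", newline="") as f:
    csv.writer(f).writerows([["query", "clicks"], ["foo", "3"], ["bar", "7"]])
with open("pages.csv", "w", newline="") as f:
    csv.writer(f).writerows([["query", "page"], ["foo", "/a"], ["baz", "/b"]])

# Step 2: build a lookup dict keyed on the first column of one file.
lookup = {}
with open("keywords.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    for row in reader:
        lookup[row[0]] = row[1:]

# Step 3: extend each row of the other file with the matching data,
# falling back to 'N/A' when there is no match.
with open("pages.csv", newline="") as fin, \
     open("merged.csv", "w", newline="") as fout:
    reader, writer = csv.reader(fin), csv.writer(fout)
    writer.writerow(next(reader) + ["clicks"])
    for row in reader:
        writer.writerow(row + lookup.get(row[0], ["N/A"]))
```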
def update_file(flags):
    # Combining all the data previously collected and stored in 3 separate csv files into one global csv file
    print("Merging files")
    basic_url = flags.property_uri
    basic_url = basic_url.replace("://", "_")
    basic_url = basic_url.replace(".", "-")
    matchfile = "Landingpage_analytics_"+flags.start_date+"_"+flags.end_date+"_"+basic_url+".csv"
    matchfile2 = 'Landingpage_'+flags.start_date+"_"+flags.end_date+"_"+basic_url+".csv"
    matchfile3 = "Keyword_"+flags.start_date+"_"+flags.end_date+"_"+basic_url+".csv"
    csvfile = "Landingpage_keyword_"+basic_url+"_"+flags.start_date+"_"+flags.end_date+".csv"
    newfile = "Global_report_"+basic_url+"_"+flags.start_date+"_"+flags.end_date+".csv"
    print(matchfile)
    print(matchfile2)
    print(matchfile3)
    new_data = []
    new_data2 = []
    new_data3 = []
    with open(csvfile) as fin, open(matchfile) as fmatch, open(matchfile2) as fmatch2, open(matchfile3) as fmatch3, open(newfile, 'w') as fout:
        reader = csv.reader(fin)
        checker = csv.reader(fmatch)
        checker2 = csv.reader(fmatch2)
        checker3 = csv.reader(fmatch3)
        writer = csv.writer(fout)
        header_reader = next(reader)
        next(checker)
        next(checker2)
        next(checker3)
        header_checker = ["Clicks Keyword","Impressions Keyword","Avg. Pos. Keyword","CTR Keyword","Clicks Landingpage","Impressions Landingpage","Avg. Pos. Landingpage","CTR Landingpage","GA Sessions","GA Bouncerate","GA Avg session duration","GA Avg. Pages/Session"]
        new_header = header_reader + header_checker
        writer.writerow(new_header)
        analytics = {}
        keyword = {}
        landingpage = {}
        for a in checker3:
            keyword[a[0]] = [a[2],a[1],a[3],float(a[2])/float(a[1])]
        for b in checker2:
            landingpage[b[0]] = [b[2],b[1],b[3],float(b[2])/float(b[1])]
        for xl in checker:
            url = flags.property_uri+xl[0]
            analytics[url] = [xl[1],float(xl[2])/float(xl[1]),xl[3],xl[4]]
        for i in reader:
            if i[0] in keyword:
                newrow = i + keyword[i[0]]
            else:
                # if keyword doesn't have specific data
                newrow = i + ['N/A','N/A','N/A']
            new_data.append(newrow)
        for i in new_data:
            if i[1] in landingpage:
                newrow = i + landingpage[i[1]]
            else:
                # if landingpage doesn't have specific data
                newrow = i + ['N/A','N/A']
            new_data2.append(newrow)
        for i in new_data2:
            if i[1] in analytics:
                newrow = i + analytics[i[1]]
                new_data3.append(newrow)
        new_data3.sort(key=lambda row: float(row[2]), reverse=True)
        writer.writerows(new_data3)
The update_file(flags) function does not work; it produces an empty file. I tried debugging and found that the file matchfile, the file object fmatch, and the csv reader checker all point at a file that has content before update_file(flags) runs, but is empty afterwards.
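For what it's worth, one generic way a csv file can end up empty is being opened with mode 'w' somewhere, since that truncates the file the instant it is opened, even before anything is written. A minimal demonstration of that behavior (not the actual script):

```python
import os

# Write an existing file with some content.
path = "truncate_demo.csv"
with open(path, "w") as f:
    f.write("query,clicks\nfoo,3\n")
size_before = os.path.getsize(path)   # file has content now

# Opening with mode "w" empties the file immediately.
f = open(path, "w")
size_after = os.path.getsize(path)    # already 0 bytes
f.close()
os.remove(path)
print(size_before, size_after)
```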
for xl in checker:
    url = flags.property_uri+xl[0]
    analytics[url] = [xl[1],xl[4]]
analytics remains empty after these lines, yet keyword and landingpage end up with the correct data from near-identical code. I'm not sure what's going on. Am I missing something that could cause these files to come back empty?
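As a sanity check on what an empty input would look like: a reader over a file that contains only a header yields no data rows, so a dict built from it stays empty, which matches what I'm seeing with analytics. An illustrative snippet (not the real files):

```python
import csv
import os

# A csv file with only a header row and no data.
path = "header_only.csv"
with open(path, "w", newline="") as f:
    csv.writer(f).writerow(["page", "sessions"])

analytics = {}
with open(path, newline="") as f:
    reader = csv.reader(f)
    next(reader)                    # skip the header
    for row in reader:              # loop body never runs
        analytics[row[0]] = row[1:]
print(analytics)                    # {}
os.remove(path)
```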