How to apply fuzzy matching (FuzzyWuzzy) to a very large file
Hi, I'm trying to use FuzzyWuzzy to compare strings from two different files. The code works on smaller datasets, but when I run the same code on a large dataset I hit a memory problem: it fails with "Low memory: Unable to create 6.6GB array".
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
import difflib
import pandas as pd

df_To_beMatched = pd.read_excel('vendor_file.xlsx', encoding='utf-8', usecols=["vendOR_NAME"])
df_To_beMatched['vendOR_NAME'] = df_To_beMatched['vendOR_NAME'].fillna('')
original_list = df_To_beMatched['vendOR_NAME'].tolist()
# print(original_list)

df_exceptionlist = pd.read_excel('Exceptionfile.xlsx', usecols=["Entity_Name"])
df_exceptionlist['Entity_Name'] = df_exceptionlist['Entity_Name'].fillna('')
exception_list = df_exceptionlist['Entity_Name'].tolist()
# print(exception_list)

result = []
result_difflib = []
for exp in exception_list:
    to_delete = exp
    for orig in original_list:
        original = orig
        # print(to_delete, original)
        ratio = fuzz.ratio(to_delete, original)
        token = fuzz.token_set_ratio(to_delete, original)
        partial_ratio = fuzz.partial_ratio(to_delete, original)
        # print(ratio, to_delete, original)
        if ratio > 75 and token > 75 and partial_ratio > 85:
            print(ratio, original)
            result.append({'Entity_Name': to_delete, 'vendOR_NAME': original,
                           'Ratio': ratio, 'Token': token, 'Status': 'Match'})
            break
    difflib_result = difflib.get_close_matches(to_delete, original_list)
    matches = "^".join(difflib_result)
    result_difflib.append({'Entity_Name': to_delete, 'Matches': matches})

fuzzy_df = pd.DataFrame(result)
fuzzy_df.to_csv('FuzzyLogic_Results.csv', index=False)
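Not part of the original question, but one way to reduce memory pressure is to stream each match straight to the CSV file as it is found, instead of accumulating `result` and `result_difflib` lists in memory. A minimal standard-library sketch of that idea (here `difflib.SequenceMatcher.ratio()` stands in for `fuzz.ratio`, the 0.75 threshold mirrors the question's `ratio > 75`, and the sample names are made up for illustration):

```python
import csv
import difflib

def stream_matches(exception_list, original_list, out_path, threshold=0.75):
    """Compare every exception entry against the vendor names and write
    each match to CSV immediately, so results never pile up in memory."""
    with open(out_path, 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(
            f, fieldnames=['Entity_Name', 'vendOR_NAME', 'Ratio', 'Status'])
        writer.writeheader()
        for exp in exception_list:
            for orig in original_list:
                # SequenceMatcher.ratio() is the stdlib analogue of fuzz.ratio
                ratio = difflib.SequenceMatcher(None, exp, orig).ratio()
                if ratio > threshold:
                    writer.writerow({'Entity_Name': exp,
                                     'vendOR_NAME': orig,
                                     'Ratio': round(ratio * 100),
                                     'Status': 'Match'})
                    break  # keep only the first good match, as in the question

# Hypothetical sample data, just to show the call shape:
stream_matches(['Acme Corp'], ['Globex', 'Acme Corp.'],
               'FuzzyLogic_Results.csv')
```

The same streaming pattern works with the FuzzyWuzzy scorers from the question; the key change is writing rows inside the loop rather than building a DataFrame at the end. If the 6.6GB array comes from the data itself rather than the result lists, reading the Excel files in chunks (or switching to a library such as RapidFuzz, which scores pairs without large intermediate structures) may also be needed.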