Is there a way to reduce the runtime of this code for removing partial duplicates?
This code removes partial duplicates within a single column of data, but, I suspect because every row is matched against every other row, it takes a very long time to run on a dataset of even 2000 rows. Is there any way to reduce the runtime?
Here is the code:
from fuzzywuzzy import fuzz, process

rows = [
    "I have your Body Wash and I wonder if it contains animal ingredients. Also,which animal ingredients? I prefer not to use product with animal ingredients.",
    "This also doesn't have the ADA on there. Is this a fake toothpaste an imitation of yours?",
    "I have your Body Wash and I wonder if it contains animal ingredients. I prefer not to use product with animal ingredients.",
    "I didn't see the ADA stamp on this Box. I just want to make sure it was still safe to use?",
    "Hello,I was just wondering if the new toothpaste is ADA approved? It doesn’t say on the packaging",
    "I was just wondering if the new toothpaste is ADA approved? It doesn’t say on the Box.",
]

clean = []
threshold = 80  # this is arbitrary

for row in rows:
    # score this sentence against every other sentence
    # extract returns [('string', score), ...] sorted by score;
    # the top hit is the row matched against itself
    scores = process.extract(row, rows, scorer=fuzz.token_set_ratio)
    # basic idea: if there is a close second match we want to evaluate
    # it and keep the longer of the two strings
    if scores[1][1] > threshold:
        clean.append(max([x[0] for x in scores[:2]], key=len))
    else:
        clean.append(scores[0][0])

# remove exact dupes
clean = set(clean)
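One direction worth exploring: the loop above rescores every ordered pair (each `process.extract` call compares `row` against all rows again), so each pair is scored twice. A minimal sketch of scoring each unordered pair only once and dropping the shorter string of any near-duplicate pair is below. It uses the standard-library `difflib.SequenceMatcher` as a stand-in scorer so it runs without extra dependencies; its scores differ from `fuzz.token_set_ratio`, and swapping in `rapidfuzz` (a faster C++ reimplementation of fuzzywuzzy with the same `fuzz`/`process` API) is an option you would want to benchmark on your own data. The function name `dedupe_partial` is made up for illustration.

```python
# Sketch: compare each unordered pair (i, j) with i < j exactly once,
# instead of re-running process.extract for every row.
# difflib.SequenceMatcher (stdlib) stands in for fuzz.token_set_ratio;
# the scores are on the same 0-100 scale but are not identical.
from difflib import SequenceMatcher

def dedupe_partial(rows, threshold=80):
    """Drop the shorter string of any pair scoring above threshold."""
    drop = set()                      # indices of rows to discard
    for i in range(len(rows)):
        if i in drop:
            continue
        for j in range(i + 1, len(rows)):
            if j in drop:
                continue
            score = SequenceMatcher(None, rows[i], rows[j]).ratio() * 100
            if score > threshold:
                if len(rows[i]) >= len(rows[j]):
                    drop.add(j)       # keep the longer string i
                else:
                    drop.add(i)       # i is the shorter one: drop it
                    break             # and stop comparing against it
    return [r for i, r in enumerate(rows) if i not in drop]
```

This still scales as O(n²) in the number of pairs, but it halves the comparisons and skips rows already marked as duplicates; the bigger constant-factor win usually comes from the scorer itself, which is why trying `rapidfuzz` is worth it.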