
Python text-processing project, TypeError: 'int' object is not iterable


For a project that deciphers real text from gibberish, I found this code on GitHub and made some light edits to better fit my needs. When testing it, I keep getting a TypeError on line 17, "return [c.lower() for c in line if c.lower() in accepted_chars]".

Any help or suggestions would be greatly appreciated!

import math
import pickle

accepted_chars = 'abcdefghijklmnopqrstuvwxyz '

pos = dict([(char, idx) for idx, char in enumerate(accepted_chars)])

def normalize(line):
    """Return only the subset of chars from accepted_chars.
    This helps keep the model relatively small by ignoring punctuation, infrequent symbols, etc."""
    return [c.lower() for c in line if c.lower() in accepted_chars]

def ngram(n, l):
    """Return all n-grams from l after normalizing."""
    filtered = normalize(l)
    for start in range(0, len(filtered) - n + 1):
        yield ''.join(filtered[start:start + n])

def train():
    """Write a simple model as a pickle file."""
    k = len(accepted_chars)
    # Assume we have seen 10 of each character pair.  This acts as a kind of
    # prior or smoothing factor.  This way, if we see a character transition
    # live that we've never observed in the past, we won't assume the entire
    # string has 0 probability.
    counts = [[10 for i in range(k)] for i in range(k)]

    # Count transitions from big text file, taken
    # from http://norvig.com/spell-correct.html
    for line in open('big.txt'):
        for a, b in ngram(2, line):
            counts[pos[a]][pos[b]] += 1

    # Normalize the counts so that they become log probabilities.
    # We use log probabilities rather than straight probabilities to avoid
    # numeric underflow issues with long texts.
    # This contains a justification:
    # http://squarecog.wordpress.com/2009/01/10/dealing-with-underflow-in-joint-probability-calculations/

    for i, row in enumerate(counts):
        s = float(sum(row))
        for j in range(len(row)):
            row[j] = math.log(row[j] / s)

    # Find the probability of generating a few arbitrarily chosen good and
    # bad phrases.
    good_probs = [avg_transition_prob(l, counts) for l in open('good.txt')]
    bad_probs = [avg_transition_prob(l, counts) for l in open('bad.txt')]

    # Assert that we actually are capable of detecting the junk.
    assert min(good_probs) > max(bad_probs)

    # And pick a threshold halfway between the worst good and best bad inputs.
    thresh = (min(good_probs) + max(bad_probs)) / 2
    pickle.dump({'mat': counts, 'thresh': thresh}, open('gib.model.pki', 'wb'))

def avg_transition_prob(l, log_prob_mat):
    """Return the average transition prob from l through log_prob_mat."""
    log_prob = 0.0
    transition_ct = 0
    for a, b in ngram(2, 1):
        log_prob += log_prob_mat[pos[a]][pos[b]]
        transition_ct += 1
    return math.exp(log_prob / (transition_ct or 1))

if __name__ == '__main__':
    train()

Solution

ngram(2, 1) calls normalize with its second argument.

normalize then does this:

    return [c.lower() for c in line if c.lower() in accepted_chars]

So you cannot do for c in 1: an int is not iterable.
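For illustration only, a minimal REPL session reproduces the same message; any attempt to iterate over an int fails this way:

    >>> [c.lower() for c in 1]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'int' object is not iterable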

Perhaps you meant to put l there?
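If so, a minimal sketch of the corrected function would look like this; only the 1 changes to l, the rest is the code from the question:

    def avg_transition_prob(l, log_prob_mat):
        """Return the average transition prob from l through log_prob_mat."""
        log_prob = 0.0
        transition_ct = 0
        # Pass the string l, not the int literal 1. In many fonts the two
        # glyphs look nearly identical, which is likely how the typo crept in.
        for a, b in ngram(2, l):
            log_prob += log_prob_mat[pos[a]][pos[b]]
            transition_ct += 1
        return math.exp(log_prob / (transition_ct or 1))

With that one-character change, ngram receives a string, normalize can iterate over it, and train() should run through to writing gib.model.pki, assuming big.txt, good.txt, and bad.txt are present.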
