How to fix AttributeError: 'str' object has no attribute 'ndim' [Python | Keras]
Below is my code, which is roughly the same as the example found here: https://keras.io/examples/generative/lstm_character_level_text_generation/
It ran through every epoch for a whole day, but today it fails at a random epoch with an AttributeError saying that a string has no ndim attribute. This makes no sense, because the input data is converted into a numpy array at lines 51-56 exactly as it was when it worked before. So how is this data being changed into a string? And how did this change overnight, when I haven't touched the input data or the code that receives it?
def load_file(self, filename):
    file = open(filename, 'r')
    content = file.read()
    file.close()
    return content

def sample(self, preds, temperature=1.0):
    preds = np.asarray(preds).astype("float64")
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

def train(self, epochs, batch_size):
    content = self.load_file("data/ABC_cleaned/input.txt")
    chars = sorted(list(set(content)))
    char_indices = dict((c, i) for i, c in enumerate(chars))
    indices_char = dict((i, c) for i, c in enumerate(chars))
    maxlen = 40
    step = 3
    sentences = []
    next_chars = []
    for i in range(0, len(content) - maxlen, step):
        sentences.append(content[i:i + maxlen])
        next_chars.append(content[i + maxlen])
    x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
    y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
    for i, sentence in enumerate(sentences):
        for t, char in enumerate(sentence):
            x[i, t, char_indices[char]] = 1
        y[i, char_indices[next_chars[i]]] = 1
    print(type(x))
    model = keras.Sequential()
    model.add(input_layer.InputLayer(input_shape=(maxlen, len(chars))))
    model.add(layers.LSTM(128))
    model.add(layers.Dense(len(chars), activation='softmax'))
    optimizer = optimizers.RMSprop(lr=0.01)
    model.compile(loss="categorical_crossentropy", optimizer=optimizer)
    for epoch in range(epochs):
        model.fit(x, y, batch_size=batch_size, epochs=1)
        print()
        print("Generating text after epoch %d" % epoch)
        start_index = np.random.randint(0, len(content) - maxlen - 1)
        for diversity in [0.2, 0.5, 1.0, 1.2]:
            print("...diversity:", diversity)
            generated = ""
            sentence = content[start_index:start_index + maxlen]
            print('...Generating with seed: "' + sentence + '"')
            for i in range(400):
                x_pred = np.zeros((1, maxlen, len(chars)))
                for t, char in enumerate(sentence):
                    x_pred[0, t, char_indices[char]] = 1.0
                preds = model.predict(x_pred, verbose=0)[0]
                next_index = self.sample(preds, diversity)
                next_char = indices_char[next_index]
                sentence = sentence[1:] + next_char
                generated += next_char
            print("...Generated: ", generated)
            print()
            topSeven = []
            contentSong = []
            fullAbc = ""
            count = 0
            if "X:" in generated:
                index = generated.find("X:")
                generated = generated[index:]
                genList = generated.split('\n')
                for line in genList:
                    if count > 6:
                        if line and generated[count + 1]:
                            contentSong.append(line)
                        else:
                            contentSong.append(line)
                            break
                    if line.startswith(("X:", "T:", "%", "S:", "M:", "L:", "K:")):
                        topSeven.append(line)
                    count += 1
                if len(topSeven) == 7:
                    for x in topSeven:
                        fullAbc += x + "\n"
                    for x in contentSong:
                        fullAbc += x + "\n"
                    with open("good_reels.txt", 'a') as f:
                        f.write("\n" + fullAbc)
                        f.close()
                    break
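For reference, the temperature-sampling helper used above follows the standard Keras example: the predicted probabilities are rescaled by a temperature, re-normalized, and a single multinomial draw picks the next character index. A standalone sketch (written as a plain function outside the class):

```python
import numpy as np

def sample(preds, temperature=1.0):
    """Draw one index from `preds`, reweighted by `temperature`.

    Lower temperature -> near-argmax (greedy); higher -> more random.
    """
    preds = np.asarray(preds).astype("float64")
    preds = np.log(preds) / temperature          # rescale log-probabilities
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)        # re-normalize to a distribution
    probas = np.random.multinomial(1, preds, 1)  # one draw from that distribution
    return int(np.argmax(probas))

# At very low temperature the draw is effectively argmax:
print(sample([0.1, 0.2, 0.7], temperature=0.01))
```

Note the draw is `np.random.multinomial(1, preds, 1)`: one experiment over the reweighted distribution, not a fixed probability.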
Solution
You declare x twice in this code. Here is the first:
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
And here is the second:
if len(topSeven) == 7:
    for x in topSeven:
        fullAbc += x + "\n"
    for x in contentSong:
        fullAbc += x + "\n"
    with open("good_reels.txt", 'a') as f:
        f.write("\n" + fullAbc)
        f.close()
    break
On the first iteration of the training loop, x is indeed a numpy.ndarray, and everything works as expected. When execution reaches the second declaration, x is rebound to a str, which also works as expected at that point.
On the second iteration of the training loop, however, x is still a str where a numpy.ndarray is expected, and Keras raises the error.
To fix it, simply rename the second x (to c, for example), or eliminate the loops in which it is declared altogether:
if len(topSeven) == 7:
    fullAbc += '\n'.join(topSeven) + '\n'
    fullAbc += '\n'.join(contentSong) + '\n'
    with open("good_reels.txt", 'a') as f:
        f.write("\n" + fullAbc)
    break
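The failure mode is easy to reproduce in isolation. A minimal sketch with hypothetical names (not the asker's actual data), showing that a Python for-loop does not create its own scope and so the loop variable overwrites the outer binding:

```python
import numpy as np

x = np.zeros((2, 3))      # the training array, as in the question
print(x.ndim)             # → 2

for x in ["A:1", "B:2"]:  # the loop re-binds the SAME name `x` to each string
    pass

# `x` keeps the last value the loop assigned to it:
print(type(x))            # → <class 'str'>
# On the next training iteration, model.fit(x, y) receives this str, and
# Keras raises: AttributeError: 'str' object has no attribute 'ndim'
```

This also explains why the crash appeared at a "random" epoch: the shadowing only happens on iterations where the generated text contains an "X:" header and seven top lines are collected, so whether x survives an epoch depends on what the model happens to generate.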