
Explaining my deep learning model for Twitter sentiment analysis with Python's LIME text explainer

I have a dataset of tweets with sentiment labels. I have pre-processed the data and performed part-of-speech tagging (all with NLTK in Python). The pre-processed data looks like this:

[Image: pre-processed tweets]

After pre-processing, the training data is prepared with the following code:

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# Fit the tokenizer on the combined train and test text
full_text = list(train['content'].values) + list(test['content'].values)
tokenizer = Tokenizer(num_words=20000, lower=True, filters='')
tokenizer.fit_on_texts(full_text)
train_tokenized = tokenizer.texts_to_sequences(train['content'])
test_tokenized = tokenizer.texts_to_sequences(test['content'])

# Pad every sequence to a fixed length of 50 tokens
max_len = 50
X_train = pad_sequences(train_tokenized, maxlen=max_len)
X_test = pad_sequences(test_tokenized, maxlen=max_len)
embed_size = 300
max_features = 20000

def get_coefs(word, *arr):
    return word, np.asarray(arr, dtype='float32')

def get_embed_mat(embedding_path):
    # Parse the pre-trained embedding file into a {word: vector} dict
    embedding_index = dict(get_coefs(*o.strip().split(" "))
                           for o in open(embedding_path, encoding="utf8"))

    word_index = tokenizer.word_index
    nb_words = min(max_features, len(word_index))
    print(nb_words)
    # Row i holds the vector for the word with tokenizer index i;
    # row 0 stays all zeros for the padding token
    embedding_matrix = np.zeros((nb_words + 1, embed_size))
    for word, i in word_index.items():
        if i >= max_features:
            continue
        embedding_vector = embedding_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector

    return embedding_matrix
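
For reference, a typical call would look like the line below; the GloVe file path is a placeholder, not the path from the original question:

# Hypothetical embedding file; substitute your own 300-d vectors
embedding_matrix = get_embed_mat('glove.840B.300d.txt')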

The deep learning model is built with the word embeddings as a layer. The code that builds the model is as follows:

from keras.layers import (Input, Embedding, SpatialDropout1D, Bidirectional,
                          LSTM, Conv1D, GlobalAveragePooling1D,
                          GlobalMaxPooling1D, concatenate, Dropout, Dense,
                          BatchNormalization)
from keras.models import Model, load_model
from keras.optimizers import Adam

def build_model1(lr=0.0, lr_d=0.0, units=0, dr=0.0):
    inp = Input(shape=(max_len,))
    # Frozen embedding layer initialised with the pre-trained matrix
    x = Embedding(20001, embed_size, weights=[embedding_matrix], trainable=False)(inp)
    x1 = SpatialDropout1D(dr)(x)

    # First BiLSTM -> Conv1D branch, pooled two ways
    x_lstm = Bidirectional(LSTM(units, return_sequences=True))(x1)
    x1 = Conv1D(32, kernel_size=2, padding='valid', kernel_initializer='he_uniform')(x_lstm)
    avg_pool1_lstm1 = GlobalAveragePooling1D()(x1)
    max_pool1_lstm1 = GlobalMaxPooling1D()(x1)

    # Second BiLSTM branch. The original was missing a closing
    # parenthesis here and pooled x1 again, which would just duplicate
    # the first branch; pooling this LSTM's output is the likely intent.
    x_lstm = Bidirectional(LSTM(units, return_sequences=True,
                                kernel_initializer='he_uniform'))(x_lstm)
    avg_pool1_lstm = GlobalAveragePooling1D()(x_lstm)
    max_pool1_lstm = GlobalMaxPooling1D()(x_lstm)

    x = concatenate([avg_pool1_lstm1, max_pool1_lstm1,
                     avg_pool1_lstm, max_pool1_lstm])
    #x = BatchNormalization()(x)
    x = Dropout(0.1)(Dense(128, activation='relu')(x))
    x = BatchNormalization()(x)
    x = Dropout(0.1)(Dense(64, activation='relu')(x))
    x = Dense(8, activation="sigmoid")(x)
    model = Model(inputs=inp, outputs=x)
    model.compile(loss="binary_crossentropy",
                  optimizer=Adam(lr=lr, decay=lr_d), metrics=["accuracy"])
    # y_one_hot, check_point, early_stop and file_path are defined
    # elsewhere in the original script; check_point saves the best
    # weights to file_path, which is reloaded below.
    history = model.fit(X_train, y_one_hot, batch_size=128, epochs=20,
                        validation_split=0.1, verbose=1,
                        callbacks=[check_point, early_stop])
    model = load_model(file_path)
    return model
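
A hypothetical invocation might look like this; the hyper-parameter values below are illustrative only, since the original question does not show them:

# Illustrative hyper-parameters, not the author's actual values
model = build_model1(lr=1e-3, lr_d=0.0, units=64, dr=0.2)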

I want to use LIME to explain this model's predictions (as in the image below), but it does not work.

[Image: LIME text explanation of the model]
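
For context, here is a minimal sketch of how lime's LimeTextExplainer is usually wired to a Keras model like this one. LIME perturbs raw strings, so the classifier function must re-run the same tokenize-and-pad pipeline before calling model.predict; the class names and sample tweet below are placeholders, not values from the original question:

from lime.lime_text import LimeTextExplainer

# Placeholder names; replace with the 8 actual sentiment labels
class_names = ['label_%d' % i for i in range(8)]
explainer = LimeTextExplainer(class_names=class_names)

def predict_proba(texts):
    # LIME passes a list of perturbed raw strings, so apply the same
    # tokenizer and padding used at training time
    seqs = tokenizer.texts_to_sequences(texts)
    padded = pad_sequences(seqs, maxlen=max_len)
    return model.predict(padded)

# Sample tweet for illustration only
exp = explainer.explain_instance('I love this so much', predict_proba,
                                 num_features=10, labels=(0,))
print(exp.as_list(label=0))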
