How to turn a from-scratch LSTM into a many-to-one binary sequence classifier
I am writing an LSTM sequence classifier from scratch (no ML libraries).
I first tried it with a classic RNN: I started from a many-to-many model and converted it to many-to-one, where the forward pass looks like this:
```python
def rnn_forward(inputs, rnnNet):
    fw_cache = []
    hidden_state = np.zeros((rnnNet.d[0], 1))
    for t in range(len(inputs)):
        hidden_state = cm.tanh(np.dot(rnnNet.p['U'], inputs[t]) + np.dot(rnnNet.p['V'], hidden_state) + rnnNet.p['b_h'])
        fw_cache.append(hidden_state.copy())
    # Many-to-one: softmax over the last hidden state only
    outputs = cm.softmax(np.dot(rnnNet.p['W'], hidden_state) + rnnNet.p['b_o'], rnn=True)
    return outputs, fw_cache
```
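For context, `cm` is my own small math-helper module. A minimal stand-in with the same call signatures (an assumed implementation, shown only so the snippets are runnable) would be:

```python
import numpy as np

# Assumed stand-in for the author's `cm` helper module:
# elementwise sigmoid/tanh and a column-wise softmax.
class cm:
    @staticmethod
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    @staticmethod
    def tanh(x):
        return np.tanh(x)

    @staticmethod
    def softmax(x, rnn=False):
        # Subtract the column max before exponentiating, for numerical stability.
        e = np.exp(x - np.max(x, axis=0, keepdims=True))
        return e / np.sum(e, axis=0, keepdims=True)
```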
I can rewrite my parameter dimensions accordingly, and this works as expected.
However, I am having trouble doing the same with the LSTM network. Here is the forward prop:
```python
def lstm_forward(inputs, lstmNet):
    fw_cache = []
    # lstmNet.d[0] is the hidden_size
    h_prev = np.zeros((lstmNet.d[0], 1))
    C_prev = np.zeros((lstmNet.d[0], 1))
    for x in inputs:
        cache = {'C': C_prev, 'h': h_prev}
        # Concatenate input and hidden state
        cache['z'] = np.row_stack((cache['h'], x))
        # Calculate forget gate
        cache['f'] = cm.sigmoid(np.dot(lstmNet.p['W_f'], cache['z']) + lstmNet.p['b_f'])
        # Calculate input gate
        cache['i'] = cm.sigmoid(np.dot(lstmNet.p['W_i'], cache['z']) + lstmNet.p['b_i'])
        # Calculate candidate
        cache['g'] = cm.tanh(np.dot(lstmNet.p['W_g'], cache['z']) + lstmNet.p['b_g'])
        # Calculate memory state
        C_prev = cache['f'] * cache['C'] + cache['i'] * cache['g']
        # Calculate output gate
        cache['o'] = cm.sigmoid(np.dot(lstmNet.p['W_o'], cache['z']) + lstmNet.p['b_o'])
        # Calculate hidden state (from the updated memory state, not the old cache['C'])
        h_prev = cache['o'] * cm.tanh(C_prev)
        # Calculate logits
        cache['v'] = np.dot(lstmNet.p['W_v'], h_prev) + lstmNet.p['b_v']
        fw_cache.append(copy.deepcopy(cache))
    # Calculate softmax
    outputs = cm.softmax(cache['v'])
    return outputs, fw_cache
```
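To show where I am stuck: mirroring the RNN version, my best guess is that the many-to-one variant keeps the gate recurrence but computes the logits and softmax only once, from the final hidden state. Here is a sketch of that attempt (with minimal stand-ins for my `cm` helpers, since that module is not shown here):

```python
import copy
import numpy as np

# Assumed stand-ins for the `cm` helpers used elsewhere in the post
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e / np.sum(e, axis=0, keepdims=True)

def lstm_forward_many_to_one(inputs, lstmNet):
    """Sketch: same recurrence as above, but one output from the last step only."""
    fw_cache = []
    h_prev = np.zeros((lstmNet.d[0], 1))
    C_prev = np.zeros((lstmNet.d[0], 1))
    for x in inputs:
        cache = {'C': C_prev, 'h': h_prev}
        # Concatenate hidden state and input (np.vstack == np.row_stack here)
        cache['z'] = np.vstack((cache['h'], x))
        cache['f'] = sigmoid(np.dot(lstmNet.p['W_f'], cache['z']) + lstmNet.p['b_f'])
        cache['i'] = sigmoid(np.dot(lstmNet.p['W_i'], cache['z']) + lstmNet.p['b_i'])
        cache['g'] = np.tanh(np.dot(lstmNet.p['W_g'], cache['z']) + lstmNet.p['b_g'])
        C_prev = cache['f'] * cache['C'] + cache['i'] * cache['g']
        cache['o'] = sigmoid(np.dot(lstmNet.p['W_o'], cache['z']) + lstmNet.p['b_o'])
        h_prev = cache['o'] * np.tanh(C_prev)
        fw_cache.append(copy.deepcopy(cache))
    # Logits + softmax computed once, from the final hidden state only
    v = np.dot(lstmNet.p['W_v'], h_prev) + lstmNet.p['b_v']
    return softmax(v), fw_cache
```

Is that the right place to cut the per-step outputs, and does the backward pass then only receive a gradient at the last time step?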
My parameters are:
```python
def init_params(lstmNet):
    hidden_size = lstmNet.d[0]
    vocab_size = lstmNet.d[1]
    z_size = lstmNet.d[2]
    output_size = lstmNet.d[3]
    # Weight matrix (forget gate)
    lstmNet.p['W_f'] = np.random.randn(hidden_size, z_size)
    # Bias for forget gate
    lstmNet.p['b_f'] = np.zeros((hidden_size, 1))
    # Weight matrix (input gate)
    lstmNet.p['W_i'] = np.random.randn(hidden_size, z_size)
    # Bias for input gate
    lstmNet.p['b_i'] = np.zeros((hidden_size, 1))
    # Weight matrix (candidate)
    lstmNet.p['W_g'] = np.random.randn(hidden_size, z_size)
    # Bias for candidate
    lstmNet.p['b_g'] = np.zeros((hidden_size, 1))
    # Weight matrix of the output gate !!! I expect this to change dimensions
    lstmNet.p['W_o'] = np.random.randn(hidden_size, z_size)
    lstmNet.p['b_o'] = np.zeros((hidden_size, 1))
    # Weight matrix relating the hidden state to the output !!! I expect this to change dimensions
    lstmNet.p['W_v'] = np.random.randn(vocab_size, hidden_size)
    lstmNet.p['b_v'] = np.zeros((vocab_size, 1))
```
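For concreteness, the sizes are tied together by `z_size = hidden_size + vocab_size`, since `z` stacks `h` on top of `x`. A quick shape check (with made-up sizes, not my actual configuration):

```python
import numpy as np

# Illustrative sizes only. z concatenates h and x, so every gate
# matrix must have width hidden_size + vocab_size to multiply with z.
hidden_size, vocab_size = 4, 3
z_size = hidden_size + vocab_size
h = np.zeros((hidden_size, 1))
x = np.zeros((vocab_size, 1))
z = np.vstack((h, x))                      # same as np.row_stack in the code above
W_f = np.random.randn(hidden_size, z_size)
f_preact = np.dot(W_f, z)                  # gate pre-activation, shape (hidden_size, 1)
print(z.shape, f_preact.shape)             # (7, 1) (4, 1)
```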
Any help going from this many-to-many LSTM model to a many-to-one model that produces an output only at the last cell/input would be greatly appreciated.