
Encoder-decoder attention in TensorFlow 1

How do I add attention to an encoder-decoder in TensorFlow 1?

I am working on this project, which uses TensorFlow 1.13.2.
The project applies an encoder-decoder to time series.

The encoding and decoding are done with a bidirectional RNN. I have this part in the code:

    self.encoder_input = tf.placeholder(dtype=tf.float32,shape=(None,opts['input_length'],1),name='encoder_input')
    self.decoder_input = tf.placeholder(dtype=tf.float32,name='decoder_input')
    self.classification_labels = tf.placeholder(dtype=tf.float32,shape=(None,2),name='classification_labels')
   
    # seq2seq
    with tf.variable_scope('seq2seq'):
        self.D_ENCODER = dilated_encoder(opts)
        self.h = self.D_ENCODER.encoder(self.encoder_input)
        
        self.S_DECOER = single_layer_decoder(opts)
        recons_input = self.S_DECOER.decoder(self.h,self.decoder_input)
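To make the shapes easier to follow, here is an example of what the relevant entries of opts could look like. The values are hypothetical; only 'input_length' and 'encoder_hidden_units' are keys actually used in the snippets, and the real project may set others (e.g. the dilation schedule).

    # hypothetical example values, just to make the shapes concrete
    opts = {
        'input_length': 400,                   # time steps fed to encoder_input: (None, 400, 1)
        'encoder_hidden_units': [64, 64, 72],  # three GRU sizes; the decoder then uses 2 * sum(...) = 400 units
    }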

Here is the code for the encoder and decoder:

class dilated_encoder():
    def encoder(self,inputs):
        # array_ops comes from tensorflow.python.ops; drnn is the project's dilated-RNN module
        cell_fw_list = [tf.nn.rnn_cell.GRUCell(num_units=units) for units in self.hidden_units]
        #state_fw.shape = [batchsize,units],...,[batchsize,units]
        outputs_fw,states_fw = drnn.multi_dRNN_with_dilations(cell_fw_list,inputs,self.dilations,scope='forward_drnn')

        batch_axis = 0
        time_axis = 1
        inputs_bw = array_ops.reverse(inputs,axis=[time_axis])

        cell_bw_list = [tf.nn.rnn_cell.GRUCell(num_units=units) for units in self.hidden_units]
        cell_bw_list = [tf.nn.rnn_cell.GRUCell(num_units=units) for units in self.hidden_units]
        outputs_bw,states_bw = drnn.multi_dRNN_with_dilations(cell_bw_list,inputs_bw,self.dilations,scope='backward_drnn')
        outputs_bw = array_ops.reverse(outputs_bw,axis=[time_axis])# reverse back so it lines up with the forward outputs

        states_fw = tf.concat(states_fw,axis=1)# [batchsize,units1 + units2 + units3]
        states_bw = tf.concat(states_bw,axis=1)# [batchsize,units1 + units2 + units3]
        final_states = tf.concat([states_fw,states_bw],axis=1)# [batchsize,2*(units1 + units2 + units3)]
        
        return final_states
    
class single_layer_decoder():
    def __init__(self,opts):
        self.hidden_units = 2 * sum(opts['encoder_hidden_units'])
        
    def decoder(self,init_state,init_input):
        # single GRU layer whose state size matches the concatenated encoder state
        cell = tf.nn.rnn_cell.GRUCell(self.hidden_units)

        # init_state seeds the GRU as its initial hidden state; init_input is (batch, time, features)
        outputs,_ = tf.nn.dynamic_rnn(cell=cell,inputs=init_input,initial_state=init_state)

        # keep the first unit of each output step as the reconstruction
        recons = outputs[:,:,0]
        recons = tf.expand_dims(recons,axis=2)
        
        return recons
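For reference, here is a small standalone sketch (with made-up sizes, not the project's real ones) of the contract that tf.nn.dynamic_rnn enforces in the decoder: the last dimension of initial_state has to equal the GRU's num_units.

    import numpy as np
    import tensorflow as tf

    hidden_units = 8                                                # hypothetical, stands in for 2 * sum(encoder_hidden_units)
    init_input = tf.placeholder(tf.float32, (None, 20, 1))          # (batch, time, features)
    init_state = tf.placeholder(tf.float32, (None, hidden_units))   # must match num_units below

    cell = tf.nn.rnn_cell.GRUCell(hidden_units)
    outputs, _ = tf.nn.dynamic_rnn(cell=cell, inputs=init_input, initial_state=init_state)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(outputs, {init_input: np.zeros((4, 20, 1), np.float32),
                                 init_state: np.zeros((4, hidden_units), np.float32)})
        print(out.shape)  # (4, 20, 8)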

I am trying to replace the bidirectional-RNN encoder and decoder with an attention-based layer.
I followed this code for TensorFlow 1.13: https://gist.github.com/iridiumblue/622a9525189d48e9c00659fea269bfa4
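As far as I understand, a layer like the one in the gist computes something along these lines (my own rough sketch of the usual attention-with-context pattern in TF 1.x, not the gist's actual code). Note that it collapses the time axis, so the output is (batch, features):

    # rough sketch of an attention-with-context layer (my assumption; the gist may differ)
    import tensorflow as tf

    def attention_with_context(inputs, scope='attention'):
        # inputs: (batch, time, features)
        with tf.variable_scope(scope):
            features = inputs.get_shape().as_list()[-1]
            W = tf.get_variable('W', shape=(features, features))
            b = tf.get_variable('b', shape=(features,), initializer=tf.zeros_initializer())
            u = tf.get_variable('u', shape=(features,))
            uit = tf.tanh(tf.tensordot(inputs, W, axes=1) + b)              # (batch, time, features)
            ait = tf.nn.softmax(tf.tensordot(uit, u, axes=1), axis=1)       # attention weights over time
            return tf.reduce_sum(inputs * tf.expand_dims(ait, -1), axis=1)  # (batch, features)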

Then I changed the encoder:

    # seq2seq
    with tf.variable_scope('seq2seq'):
        self.D_ENCODER = AttentionWithContext()
        self.h = self.D_ENCODER(self.encoder_input)

        self.S_DECOER = single_layer_decoder(opts)
        recons_input = self.S_DECOER.decoder(self.h,self.decoder_input)
        

My question is: how do I use the decoder now? When I try to run the program, I get this error:

ValueError: Dimensions must be equal, but are 2 and 401 for 'seq2seq/rnn/while/gru_cell/MatMul' (op: 'MatMul') with input shapes: [?,2], [401,800].
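For what it's worth, printing the static shapes right after building the graph shows the mismatch (a quick check using the names from my code above):

    # compare what the attention layer produces with what the decoder's GRU expects
    print(self.h.get_shape())           # last dimension coming out of AttentionWithContext, e.g. (?, 2)
    print(self.S_DECOER.hidden_units)   # 2 * sum(opts['encoder_hidden_units']), the state size the decoder GRU uses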
