
python – ConcatOp: Dimensions of inputs should match

I am developing a deep learning model with TensorFlow and Python:

> First, use CNN layers to extract features.
> Second, reshape the feature maps, since I want to feed them into LSTM layers.

However, I get a dimension mismatch error:

ConcatOp : Dimensions of inputs should match: shape[0] = [71,48] vs. shape[1] = [1200,24]

W_conv1 = weight_variable([1,conv_size,1,12])
b_conv1 = bias_variable([12])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_1xn(h_conv1)

W_conv2 = weight_variable([1,conv_size,12,24])
b_conv2 = bias_variable([24])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_1xn(h_conv2)

W_conv3 = weight_variable([1,conv_size,24,48])
b_conv3 = bias_variable([48])

h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_1xn(h_conv3)


print(h_pool3.get_shape())
# reshape pooled feature maps to [batch, time, features] for the LSTM;
# integer division so the reshape receives an int dimension
h3_rnn_input = tf.reshape(h_pool3, [-1, x_size // 8, 48])

num_layers = 1
lstm_size = 24
num_steps = 4

lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size, initializer = tf.contrib.layers.xavier_initializer(uniform = False))
cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell]*num_layers)
init_state = cell.zero_state(batch_size,tf.float32)


cell_outputs = []
state = init_state
with tf.variable_scope("RNN") as scope:
    for time_step in range(num_steps):
        if time_step > 0: scope.reuse_variables()
        cell_output, state = cell(h3_rnn_input[:, time_step, :], state)  # ***** Error in here...

Solution:

When feeding an RNN cell, the input tensor and the state tensor must have the same batch size.

The error message says that h3_rnn_input[:, time_step, :] has shape [71, 48] while the state has shape [1200, 24].

What you need to do is make the first dimension (the batch_size) the same in both.
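For illustration, a minimal sketch of that requirement with the TF 1.x rnn_cell API (the batch size 32 here is an arbitrary example, not a value from the question):

import tensorflow as tf

batch_size = 32   # illustrative value; must be the same everywhere
lstm_size = 24

cell = tf.nn.rnn_cell.LSTMCell(lstm_size)
state = cell.zero_state(batch_size, tf.float32)            # state batch dimension: 32

step_input = tf.placeholder(tf.float32, [batch_size, 48])  # input batch dimension: 32
output, state = cell(step_input, state)                    # works because both are 32
# If step_input had batch 71 while the state was built for batch 1200, the
# cell's internal concat of input and hidden state would raise the
# "ConcatOp : Dimensions of inputs should match" error from the question.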

If 71 is not the number you intended, check the convolution part: the striding/padding there may matter.
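One way to keep that first dimension consistent is to pin the batch axis in the reshape and let TensorFlow infer the time axis instead; this is only a sketch reusing the names from the question, and it assumes batch_size is the same value passed to cell.zero_state:

print(h_pool3.get_shape())                      # e.g. (batch, 1, pooled_width, 48)

# Pinning the batch dimension keeps the first axis equal to batch_size no
# matter what striding/padding did to the width; any mismatch then shows up
# in the inferred time axis (compare it against num_steps) rather than
# producing a wrong batch dimension like 71.
h3_rnn_input = tf.reshape(h_pool3, [batch_size, -1, 48])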
