
How to correctly predict a single value from a series of inputs in 2D shape?

I am using an encoder-decoder architecture with 3 layers each in the encoder and the decoder, and 128 neurons in every hidden layer. The input is in 2D format: the first column holds dates and the second holds a time series that depends on the date (shape: (5780, 100, 2)). The output is a single value from the first column, the specific date at which a breakpoint occurs (shape: (5780, 1, 1)). The breakpoint refers to one of the time-dependent values, i.e. the second column.

A better picture of the input:

array([[  0.        ,   1.        ],
       [  2.        ,   1.14469799],
       [  4.        ,   1.35245666],
       ...,
       [ 96.        ,   1.80030942],
       [ 98.        ,   1.79964733],
       [100.        ,   1.9898739 ]])

The first column is the date, and the second column is the corresponding measured point.

The output is only one value, the date at which the breakpoint occurs:

array([[1108.]])
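
For reference, here is a minimal sketch of dummy arrays in the stated shapes (the values are placeholders, not the real measurements):

import numpy as np

# Placeholder data with the shapes described above.
complete_inputs = np.zeros((5780, 100, 2))  # (samples, timesteps, [date, measurement])
kp_targets = np.zeros((5780, 1, 1))         # one breakpoint date per sample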

The problem is that after training, the outputs for all the different test samples are almost exactly the same, i.e. the model predicts the same breakpoint day for all the different materials (the variation after the decimal point is negligible). I have tried high and low learning rates (ranging from 1e-2 to 1e-5) and different numbers of training epochs (300 to 3000). I have also varied the number of layers and the number of neurons per layer.

What I have not done yet is batch normalization or any other kind of normalization, but I have already done some work on the same data at the same scale and it worked well.

The architecture used here is as follows:

from tensorflow.keras import Model, optimizers
from tensorflow.keras.layers import (Activation, Bidirectional, Dense, Input,
                                     LSTM, Masking, RepeatVector,
                                     TimeDistributed, concatenate, dot)

nodes = 128
drp = 0.01

# Defining input layers and shapes
input_train = Input(shape=(complete_inputs.shape[1], complete_inputs.shape[2]))
output_train = Input(shape=(kp_targets.shape[1], kp_targets.shape[2]))

# Masking layer (zero-valued timesteps are skipped by downstream layers)
masking_layer = Masking(mask_value=0)(input_train)

# Encoder layers. For a simple S2S model, we only need the last state_h and the last state_c.
enc_first_layer = Bidirectional(LSTM(nodes, dropout=drp,
                                     return_sequences=True))(masking_layer)
enc_first_layer, enc_fwd_h1, enc_fwd_c1, enc_back_h1, enc_back_c1 = Bidirectional(
    LSTM(nodes, return_sequences=True, return_state=True))(enc_first_layer)
enc_stack_h, enc_fwd_h2, enc_fwd_c2, enc_back_h2, enc_back_c2 = Bidirectional(
    LSTM(nodes, return_sequences=True, return_state=True))(enc_first_layer)

enc_last_h1 = concatenate([enc_fwd_h1, enc_back_h1])
enc_last_h2 = concatenate([enc_fwd_h2, enc_back_h2])
enc_last_c1 = concatenate([enc_fwd_c1, enc_back_c1])
enc_last_c2 = concatenate([enc_fwd_c2, enc_back_c2])


# RepeatVector layer (using only the last hidden state of the encoder)
rv = RepeatVector(output_train.shape[1])(enc_last_h2)

# Stacked decoder layers for alignment score calculation (a Bidirectional LSTM
# needs four initial states: forward h/c and backward h/c)
dec_stack_h = Bidirectional(LSTM(nodes, return_sequences=True))(
    rv, initial_state=[enc_fwd_h1, enc_fwd_c1, enc_back_h1, enc_back_c1])
dec_stack_h = Bidirectional(LSTM(nodes, return_sequences=True))(dec_stack_h)
dec_stack_h = Bidirectional(LSTM(nodes, return_sequences=True))(
    dec_stack_h, initial_state=[enc_fwd_h2, enc_fwd_c2, enc_back_h2, enc_back_c2])


# Attention layer (dots the stacked decoder output with the stacked encoder output)
attention_ = dot([dec_stack_h, enc_stack_h], axes=[2, 2])
attention_ = Activation('softmax')(attention_)

# Calculating the context vector
context = dot([attention_, enc_stack_h], axes=[2, 1])

# Concat the context vector and the stacked hidden states of the decoder,
# and use the result as input to the final dense layers
dec_combined_context = concatenate([context, dec_stack_h])


# Output TimeDistributed dense layers
out = TimeDistributed(Dense(nodes // 2, activation='relu'))(dec_combined_context)
out = TimeDistributed(Dense(output_train.shape[2], activation='linear'))(out)

# Compile model
model_attn = Model(inputs=input_train, outputs=out)
opt = optimizers.Adam(learning_rate=0.004)
model_attn.compile(optimizer=opt, loss=masked_mae)
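
masked_mae above is a custom loss that is not shown in the question; a plausible sketch (an assumption on my part, not necessarily the loss actually used) would be an MAE that ignores zero-padded targets, matching the Masking(mask_value=0) applied to the inputs:

import tensorflow as tf

def masked_mae(y_true, y_pred):
    # Assumed implementation: mean absolute error computed only
    # over positions where the target is non-zero.
    mask = tf.cast(tf.not_equal(y_true, 0.0), y_pred.dtype)
    abs_err = tf.abs(y_true - y_pred) * mask
    return tf.reduce_sum(abs_err) / (tf.reduce_sum(mask) + 1e-8)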

What could be going wrong here?

Just to get a broader view of the problem, I also keep the following questions in mind: Is this model too large? Is there another machine/deep learning model better suited to predicting this kind of output from the data I have?

I have been working on this problem for a week without any improvement, so any help would be greatly appreciated.

Edit 1: I tried normalization with StandardScaler and simpler architectures. No improvement so far. Below is the structure, with the commented-out parts covering all the combinations I tried.

from tensorflow.keras import Sequential

nodes = 130  # Tried with 10/30/40/80

model_attn = Sequential()
# model_attn.add(Masking(mask_value=0, input_shape=(complete_inputs.shape[1], complete_inputs.shape[2])))

# model_attn.add(Bidirectional(LSTM(nodes, dropout=0.1, return_sequences=True)))
# model_attn.add(Bidirectional(LSTM(nodes, return_sequences=True)))
model_attn.add(Bidirectional(LSTM(nodes, return_sequences=False)))

model_attn.add(Dense(1))
model_attn.compile(optimizer=optimizers.Adam(0.001), loss='mae')
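
The callback passed to fit below is also not defined in the question; a typical choice (again an assumption) would be early stopping on the validation loss:

from tensorflow.keras.callbacks import EarlyStopping

# Assumed callback: stop training once val_loss stops improving.
callback = EarlyStopping(monitor='val_loss', patience=20,
                         restore_best_weights=True)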


The loss does not decrease over time:

model_attn.fit(complete_inputs, kp_targets, batch_size=350, epochs=300,
               shuffle=True, validation_split=0.1, callbacks=[callback])

Epoch 1/300
11/11 [==============================] - 18s 2s/step - loss: 0.7930 - val_loss: 0.3486
Epoch 2/300
11/11 [==============================] - 16s 1s/step - loss: 0.7544 - val_loss: 0.5152
Epoch 3/300
11/11 [==============================] - 16s 1s/step - loss: 0.7406 - val_loss: 0.4794
Epoch 4/300
11/11 [==============================] - 16s 1s/step - loss: 0.7385 - val_loss: 0.5361
Epoch 5/300
11/11 [==============================] - 16s 1s/step - loss: 0.7367 - val_loss: 0.4821
Epoch 6/300
11/11 [==============================] - 16s 1s/step - loss: 0.7350 - val_loss: 0.5518
Epoch 7/300
11/11 [==============================] - 18s 2s/step - loss: 0.7344 - val_loss: 0.5151
Epoch 8/300
11/11 [==============================] - 17s 2s/step - loss: 0.7339 - val_loss: 0.5646
Epoch 9/300
11/11 [==============================] - 16s 1s/step - loss: 0.7380 - val_loss: 0.5277
Epoch 10/300
11/11 [==============================] - 16s 1s/step - loss: 0.7382 - val_loss: 0.4879
Epoch 11/300
11/11 [==============================] - 16s 1s/step - loss: 0.7367 - val_loss: 0.5367
Epoch 12/300
11/11 [==============================] - 16s 1s/step - loss: 0.7382 - val_loss: 0.4910
Epoch 13/300
11/11 [==============================] - 16s 1s/step - loss: 0.7354 - val_loss: 0.5244
Epoch 14/300
11/11 [==============================] - 16s 1s/step - loss: 0.7386 - val_loss: 0.5043
Epoch 15/300
11/11 [==============================] - 16s 1s/step - loss: 0.7329 - val_loss: 0.5421
Epoch 16/300
11/11 [==============================] - 16s 1s/step - loss: 0.7376 - val_loss: 0.5023
Epoch 17/300
11/11 [==============================] - 16s 1s/step - loss: 0.7346 - val_loss: 0.4539
.....
.....

Epoch 27/300
11/11 [==============================] - 15s 1s/step - loss: 0.7388 - val_loss: 0.5649
Epoch 28/300
11/11 [==============================] - 16s 1s/step - loss: 0.7329 - val_loss: 0.6575
Epoch 29/300
11/11 [==============================] - 16s 1s/step - loss: 0.7400 - val_loss: 0.5123
Epoch 30/300
11/11 [==============================] - 16s 1s/step - loss: 0.7336 - val_loss: 0.4965
Epoch 31/300
11/11 [==============================] - 16s 1s/step - loss: 0.7328 - val_loss: 0.5069
Epoch 32/300
11/11 [==============================] - 17s 2s/step - loss: 0.7320 - val_loss: 0.5274
Epoch 33/300
11/11 [==============================] - 17s 2s/step - loss: 0.7302 - val_loss: 0.5968
Epoch 34/300
11/11 [==============================] - 16s 1s/step - loss: 0.7354 - val_loss: 0.6161
....
....
....
Epoch 184/300
11/11 [==============================] - 16s 1s/step - loss: 0.7088 - val_loss: 0.8242
Epoch 185/300
11/11 [==============================] - 16s 1s/step - loss: 0.7034 - val_loss: 0.7799
Epoch 186/300
11/11 [==============================] - 16s 1s/step - loss: 0.7098 - val_loss: 0.8179
Epoch 187/300
11/11 [==============================] - 16s 1s/step - loss: 0.7066 - val_loss: 0.7854
Epoch 188/300
11/11 [==============================] - 16s 1s/step - loss: 0.7142 - val_loss: 0.8340
Epoch 189/300
11/11 [==============================] - 16s 1s/step - loss: 0.7123 - val_loss: 0.7197

Neither loss increases or decreases in any consistent pattern. I also stopped the training at the point of minimum loss, but that did not bring any improvement.

Update 1: There was a bug in how StandardScaler was applied. After fixing it, the model does produce different predictions for the test dataset!
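
The fix itself is not shown; for reference, here is one correct way to apply StandardScaler to 3D sequence data (a sketch; in practice the scaler should be fit on the training split only and reused to transform the test split):

from sklearn.preprocessing import StandardScaler

n_samples, n_steps, n_features = complete_inputs.shape
scaler = StandardScaler()
# StandardScaler expects 2D input, so flatten to
# (samples * timesteps, features), scale, and restore the 3D shape.
flat = complete_inputs.reshape(-1, n_features)
complete_inputs_scaled = scaler.fit_transform(flat).reshape(n_samples, n_steps, n_features)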

Update 2: A CNN is also a good option for these kinds of predictions. A comparison of the two architectures is still pending, though. I will update my findings here!

Update 3: For these kinds of predictions, a CNN works better than an LSTM, since the data makes this more of a classification problem. Although an LSTM with more layers and tuned hyperparameters might do as well, my experiments show that for similar results the CNN runs at least 12 times faster than the LSTM, and probably with lower memory usage.
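
For comparison, here is a minimal sketch of the kind of 1D-CNN the updates refer to (the layer sizes and kernel widths are illustrative assumptions, not the exact architecture from these experiments):

from tensorflow.keras import Sequential, optimizers
from tensorflow.keras.layers import Conv1D, Dense, GlobalAveragePooling1D, MaxPooling1D

model_cnn = Sequential([
    Conv1D(64, kernel_size=5, activation='relu', input_shape=(100, 2)),
    MaxPooling1D(pool_size=2),
    Conv1D(128, kernel_size=3, activation='relu'),
    GlobalAveragePooling1D(),
    Dense(64, activation='relu'),
    Dense(1),  # single breakpoint date per sample
])
model_cnn.compile(optimizer=optimizers.Adam(0.001), loss='mae')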
