
How to fix a CNN regression model that gives similar outputs for all inputs

I am trying to build a CNN regression model. The input data is satellite imagery with 5 bands (256x256x5) stacked over 10+ years, giving a 256x256x50 array per sample.
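For illustration, stacking ten per-year 5-band images into one 50-channel array might look like the following sketch (the variable names and the random placeholder data are assumptions, not from the post):

import numpy as np

# Hypothetical example: yearly_images holds ten (256, 256, 5) arrays, one per year.
# Concatenating along the band axis gives the (256, 256, 50) input described above.
yearly_images = [np.random.rand(256, 256, 5).astype('float32') for _ in range(10)]
stacked = np.concatenate(yearly_images, axis=-1)   # shape: (256, 256, 50)
print(stacked.shape)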

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.regularizers import l2

img_size = 256
channels = 50
input_shape = (img_size, img_size, channels)
chanDim = 1          # note: with channels-last input this normalizes the height axis; -1 would target the channel axis
reg = l2(0.0005)
init = 'he_normal'

model = models.Sequential()
model.add(layers.Conv2D(64, (7, 7), strides=(2, 2), padding='valid',
                        kernel_initializer=init, kernel_regularizer=reg,
                        input_shape=input_shape))
model.add(layers.Activation('gelu'))   # the 'gelu' string activation requires TF 2.4+
model.add(layers.BatchNormalization(axis=chanDim))

model.add(layers.Conv2D(32, (3, 3), kernel_regularizer=reg))
model.add(layers.Activation('gelu'))
model.add(layers.BatchNormalization(axis=chanDim))

model.add(layers.Conv2D(64, (3, 3), kernel_regularizer=reg))   # kernel size (3, 3) assumed here and below; missing in the original post
model.add(layers.Activation('relu'))
model.add(layers.BatchNormalization(axis=chanDim))
model.add(layers.Dropout(0.25))

model.add(layers.Conv2D(64, (3, 3), kernel_regularizer=reg))
model.add(layers.Activation('relu'))
model.add(layers.BatchNormalization(axis=chanDim))
model.add(layers.Conv2D(128, (3, 3), kernel_regularizer=reg))
model.add(layers.Activation('relu'))
model.add(layers.BatchNormalization(axis=chanDim))
model.add(layers.Dropout(0.25))

model.add(layers.Conv2D(128, (3, 3), kernel_regularizer=reg))
model.add(layers.Activation('relu'))
model.add(layers.BatchNormalization(axis=chanDim))

# model.add(layers.Conv2D(512,
#                         kernel_initializer=init,
#                         kernel_regularizer=reg))
# model.add(layers.Activation('relu'))
# model.add(layers.BatchNormalization(axis=chanDim))
# model.add(layers.Dropout(0.25))

model.add(layers.Flatten())
model.add(layers.Dense(128, activation='gelu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='relu'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-7), loss='mae')
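The training call itself is not shown in the post; a minimal sketch of what it might look like (X_train, y_train, X_val, y_val and the batch size are assumptions, chosen only so that the "8/8" steps per epoch in the log below is plausible):

# Hypothetical training call; the array names, batch size and validation data
# are assumptions, not taken from the original post.
history = model.fit(
    X_train, y_train,                 # e.g. 64 samples at batch_size=8 -> 8 steps per epoch
    validation_data=(X_val, y_val),
    epochs=30,
    batch_size=8,
)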

Training progress:

Epoch 1/30
8/8 [==============================] - 208s 26s/step - loss: 1.3836 - val_loss: 1.3476
Epoch 2/30
8/8 [==============================] - 81s 11s/step - loss: 1.3826 - val_loss: 1.3476
Epoch 3/30
8/8 [==============================] - 61s 8s/step - loss: 1.3863 - val_loss: 1.3476
Epoch 4/30
8/8 [==============================] - 60s 8s/step - loss: 1.3837 - val_loss: 1.3476
Epoch 5/30
8/8 [==============================] - 61s 8s/step - loss: 1.3785 - val_loss: 1.3476
Epoch 6/30
8/8 [==============================] - 60s 8s/step - loss: 1.3863 - val_loss: 1.3476
Epoch 7/30
8/8 [==============================] - 60s 8s/step - loss: 1.3869 - val_loss: 1.3476
Epoch 8/30
8/8 [==============================] - 60s 8s/step - loss: 1.3665 - val_loss: 1.3476
Epoch 9/30
8/8 [==============================] - 60s 8s/step - loss: 1.3060 - val_loss: 1.3476
Epoch 10/30
8/8 [==============================] - 61s 8s/step - loss: 1.2391 - val_loss: 1.3443
Epoch 11/30
8/8 [==============================] - 60s 8s/step - loss: 1.1757 - val_loss: 1.2622
Epoch 12/30
8/8 [==============================] - 61s 8s/step - loss: 1.1277 - val_loss: 1.1432
Epoch 13/30
8/8 [==============================] - 60s 8s/step - loss: 1.0967 - val_loss: 1.0280
Epoch 14/30
8/8 [==============================] - 60s 8s/step - loss: 1.0408 - val_loss: 0.9306
Epoch 15/30
8/8 [==============================] - 61s 8s/step - loss: 1.0423 - val_loss: 0.8529
Epoch 16/30
8/8 [==============================] - 60s 8s/step - loss: 1.0277 - val_loss: 0.7910
Epoch 17/30
8/8 [==============================] - 61s 8s/step - loss: 1.0800 - val_loss: 0.7385
Epoch 18/30
8/8 [==============================] - 61s 8s/step - loss: 0.9982 - val_loss: 0.6957
Epoch 19/30
8/8 [==============================] - 62s 8s/step - loss: 1.0466 - val_loss: 0.6648
Epoch 20/30
8/8 [==============================] - 61s 8s/step - loss: 1.0755 - val_loss: 0.6431
Epoch 21/30
8/8 [==============================] - 61s 8s/step - loss: 0.9773 - val_loss: 0.6270
Epoch 22/30
8/8 [==============================] - 61s 8s/step - loss: 0.9878 - val_loss: 0.6173
Epoch 23/30
8/8 [==============================] - 62s 8s/step - loss: 0.9546 - val_loss: 0.6107
Epoch 24/30
8/8 [==============================] - 62s 8s/step - loss: 0.9736 - val_loss: 0.6066
Epoch 25/30
8/8 [==============================] - 62s 8s/step - loss: 0.9398 - val_loss: 0.6051
Epoch 26/30
8/8 [==============================] - 61s 8s/step - loss: 0.9513 - val_loss: 0.6064
Epoch 27/30
8/8 [==============================] - 61s 8s/step - loss: 0.9850 - val_loss: 0.6085
Epoch 28/30
8/8 [==============================] - 61s 8s/step - loss: 0.9534 - val_loss: 0.6120
<tensorflow.python.keras.callbacks.History at 0x7f7e8049b630>

But prediction[:10] and the expected values[:10] are:

[[0.75141275]
 [0.9683605 ]
 [1.0075892 ]
 [0.9710504 ]
 [1.0537224 ]
 [0.95761603]
 [0.8781187 ]
 [0.9666001 ]
 [1.0071822 ]
 [0.8568193 ]]

[0.96850154 0.98255504 0.88197998 0.7692161  0.9462668  0.81489973
 0.99938562 0.93442511 0.98891429 0.97386952]

The evaluation scores are:

  • mean_absolute_error:0.09588701954343189
  • mean_squared_error:0.12396534977645424
  • explained_variance_score:-0.4386057129990675
  • r2_score:-0.6250618533611494
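
These are scikit-learn's regression metrics; a minimal sketch of how they can be computed from the arrays above (y_true and y_pred are assumed variable names for the expected and predicted values):

from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             explained_variance_score, r2_score)

# y_true: expected values, y_pred: model predictions (hypothetical names).
print(mean_absolute_error(y_true, y_pred))
print(mean_squared_error(y_true, y_pred))
print(explained_variance_score(y_true, y_pred))   # negative => worse than predicting the mean
print(r2_score(y_true, y_pred))                   # negative => worse than predicting the mean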

actual vs prediction plot

Any ideas?

Solution

Someone suggested that I treat this time-series data as a video and therefore use Conv3D instead of Conv2D. That solved the problem: the model no longer predicts the same output for every input. So the input data should have shape [10, 256, 256, 5], representing [Year, Image shape, Image shape, Channels/Bands], i.e. a time-series input.
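
A minimal sketch of that reshaping and of a Conv3D regressor, under the assumption that the 50 stacked channels are ten consecutive groups of 5 bands (the layer sizes here are illustrative, not taken from the original answer):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Turn one stacked sample (256, 256, 50) back into a time series (10, 256, 256, 5),
# assuming the 50 channels were stored as 10 consecutive groups of 5 bands.
stacked = np.random.rand(256, 256, 50).astype('float32')      # placeholder sample
frames = np.stack(np.split(stacked, 10, axis=-1), axis=0)     # shape: (10, 256, 256, 5)

# Illustrative Conv3D regressor; filter counts and kernel sizes are assumptions.
model = models.Sequential([
    layers.Input(shape=(10, 256, 256, 5)),
    layers.Conv3D(32, (3, 3, 3), padding='same', activation='relu'),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.Conv3D(64, (3, 3, 3), padding='same', activation='relu'),
    layers.GlobalAveragePooling3D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='linear'),                      # linear output for regression
])
model.compile(optimizer='adam', loss='mae')
model.summary()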
