
PyTorch LSTM for prediction: shaping data with multiple inputs


I have a dataset with 10 inputs, as shown in the dataset image. The target column is GHI.

I created the data sequences (length = 6) with the following function, since I want to use six hours of readings to predict the GHI value for the next hour:

def sliding_windows_mutli_features(data, seq_length):
    x = []
    y = []

    for i in range((data.shape[0]) - seq_length - 1):
        _x = data[i:(i + seq_length), :]   # all 10 feature columns
        _y = data[i + seq_length, 9]       # column index 9 contains the target (GHI)
        x.append(_x)
        y.append(_y)

    return np.array(x), np.array(y).reshape(-1, 1)
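As a quick sanity check on the function above, here is a sketch run on random stand-in data (the 100-row array and seed are hypothetical, not the real dataset); each window should come out as (seq_length, n_features), with the label one step past the window:

```python
import numpy as np

def sliding_windows_mutli_features(data, seq_length):
    # Same logic as above: each window is seq_length rows of all features,
    # the label is the target column (index 9) one step after the window.
    x, y = [], []
    for i in range(data.shape[0] - seq_length - 1):
        x.append(data[i:(i + seq_length), :])
        y.append(data[i + seq_length, 9])
    return np.array(x), np.array(y).reshape(-1, 1)

data = np.random.rand(100, 10)          # 100 hourly readings, 10 features
x, y = sliding_windows_mutli_features(data, 6)
print(x.shape, y.shape)                 # (93, 6, 10) (93, 1)
```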

After applying the sliding-window function and converting the results to tensors, I have the following x, y for the training and validation sets:

x_train is torch.Size([11379, 6, 10])

y_train is torch.Size([11379, 1])

x_val is torch.Size([1429, 10])

y_val is torch.Size([1429, 1])
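For reference on how these shapes meet nn.LSTM: with its default layout, nn.LSTM expects input of shape (seq_len, batch, input_size); passing batch_first=True makes it consume exactly the (batch, seq_len, features) layout of x_train above. A minimal sketch with random tensors standing in for the real data:

```python
import torch
import torch.nn as nn

x_train = torch.randn(11379, 6, 10)     # (batch, seq_len, features), as above

# With batch_first=True the LSTM consumes x_train as-is; without it,
# it would expect (seq_len, batch, features) instead.
lstm = nn.LSTM(input_size=10, hidden_size=128, num_layers=2, batch_first=True)
out, (h_n, c_n) = lstm(x_train)
print(out.shape)    # torch.Size([11379, 6, 128])
print(h_n.shape)    # torch.Size([2, 11379, 128])
```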

Here is the LSTM code:

class LSTM(nn.Module):
    def __init__(self, input_size=10, hidden_layer_size=128, output_size=1, seq_len=6, n_layers=2):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        self.seq_len = seq_len
        self.n_layers = n_layers

        self.lstm = nn.LSTM(input_size, hidden_layer_size, num_layers=n_layers, dropout=0.5)

        self.linear = nn.Linear(hidden_layer_size, output_size)

    def reset_hidden_state(self):
        self.hidden = (
            torch.zeros(self.n_layers, self.seq_len, self.hidden_layer_size),
            torch.zeros(self.n_layers, self.seq_len, self.hidden_layer_size),
        )

    def forward(self, input_seq):
        lstm_out, self.hidden = self.lstm(input_seq.view(len(input_seq), -1), self.hidden)
        last_time_step = lstm_out.view(self.seq_len, len(input_seq), self.hidden_layer_size)[-1]
        y_pred = self.linear(last_time_step)
        return y_pred
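One common way to make these shapes line up, sketched below under the assumption that batches arrive as (batch, seq_len, features) like x_train: build the LSTM with batch_first=True and read the prediction off the last time step of its output. This is an illustrative variant, not necessarily the fix the original author intended:

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Sketch of a shape-consistent variant of the class above (assumption:
    batches come in as (batch, seq_len, features), matching x_train)."""
    def __init__(self, input_size=10, hidden_size=128, output_size=1, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=n_layers, dropout=0.5, batch_first=True)
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # out: (batch, seq_len, hidden); the hidden state is created fresh
        # on each call, so no manual reset is needed.
        out, _ = self.lstm(x)
        return self.linear(out[:, -1, :])   # predict from the last time step

model = LSTMRegressor()
y_pred = model(torch.randn(32, 6, 10))
print(y_pred.shape)    # torch.Size([32, 1])
```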

Here is the training function:

def train_model(
    model, train_data, train_labels, val_data, val_labels
):
    loss_fn = torch.nn.MSELoss()

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    num_epochs = 50

    train_hist = np.zeros(num_epochs)
    val_hist = np.zeros(num_epochs)

    for t in range(num_epochs):
        model.reset_hidden_state()

        y_pred = model(train_data)

        loss = loss_fn(y_pred.float(), train_labels)

        if val_data is not None:
            with torch.no_grad():
                y_val_pred = model(val_data)
                val_loss = loss_fn(y_val_pred.float(), val_labels)
            val_hist[t] = val_loss.item()

            if t % 10 == 0:
                print(f'Epoch {t} train loss: {loss.item()} val loss: {val_loss.item()}')
        elif t % 10 == 0:
            print(f'Epoch {t} train loss: {loss.item()}')

        train_hist[t] = loss.item()

        optimiser.zero_grad()

        loss.backward()

        optimiser.step()

    return model.eval(), train_hist, val_hist

However, this code does not work, so there must be an error in how I am shaping the data:

model = LSTM()
model, train_hist, val_hist = train_model(
    model, x_train, y_train, x_val, y_val)
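For comparison, here is a self-contained smoke test of the same training pattern on random data (all names and sizes are illustrative; a plain nn.LSTM plus a linear head stands in for the model above):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(64, 6, 10)   # 64 windows of 6 hours x 10 features
y = torch.randn(64, 1)       # random stand-in for the GHI targets

lstm = nn.LSTM(10, 32, batch_first=True)
head = nn.Linear(32, 1)
optimiser = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()),
                             lr=1e-2)
loss_fn = nn.MSELoss()

losses = []
for epoch in range(50):
    out, _ = lstm(x)                  # out: (64, 6, 32)
    y_pred = head(out[:, -1, :])      # last time step -> (64, 1)
    loss = loss_fn(y_pred, y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    losses.append(loss.item())

print(losses[0], losses[-1])          # loss drops on this toy problem
```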
