
DQN PyTorch Loss Keeps Increasing

How do I fix a DQN in PyTorch whose loss keeps increasing?

I am implementing the simple DQN algorithm with PyTorch to solve the CartPole environment from gym. I have been debugging this for a while now and I cannot figure out why the model is not learning.

Observations:

  • Using SmoothL1Loss performs worse than MSELoss, but the loss increases with both
  • A smaller learning rate for Adam does not help; I have tested 0.0001, 0.00025, 0.0005 and the default (the sketch after this list shows where these settings live in the code)
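
For reference, both of these are one-line settings in the DeepQN constructor further down. A minimal stand-alone sketch of the two knobs (the toy network and the particular lr value shown here are only for illustration):

import torch as T
import torch.nn as nn

# stand-alone illustration of the two settings mentioned above;
# in the code below they live in DeepQN.__init__
net = nn.Linear(4,2)  # toy network with CartPole-like shapes
loss_fn = nn.SmoothL1Loss()  # tried in place of nn.MSELoss()
optimizer = T.optim.Adam(net.parameters(),lr=0.00025)  # one of the learning rates tested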

Notes:

  • I have debugged the individual parts of the algorithm separately and can say with reasonable confidence that the problem is in the learn function. I am wondering whether this bug comes from my misunderstanding of detach in PyTorch or from some other framework mistake I am making (see the short detach sketch after this list)
  • I tried to stay as close as possible to the original paper (linked above)
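
For context, detach() returns a tensor that is cut out of the autograd graph, so no gradients flow back through it; in a DQN update this is the standard way to keep the bootstrap term from contributing gradients. A minimal sketch of that behaviour (the toy network is illustrative, not taken from the code below):

import torch as T
import torch.nn as nn

net = nn.Linear(4,2)  # toy Q-network: 4 state features -> 2 action values
states_ = T.randn(32,4)  # a batch of "next states"

q_next = net(states_).detach()  # detached: treated as a constant by autograd
target = q_next.max(dim=1)[0]  # max_a Q(s',a)
print(target.requires_grad)  # False -> the target contributes no gradients to net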

Code:

import torch as T
import torch.nn as nn
import torch.nn.functional as F

import gym
import numpy as np


class ReplayBuffer:
    def __init__(self,mem_size,input_shape,output_shape):
        self.mem_counter = 0
        self.mem_size = mem_size
        self.input_shape = input_shape

        self.actions = np.zeros(mem_size)
        self.states = np.zeros((mem_size,*input_shape))
        self.states_ = np.zeros((mem_size,*input_shape))
        self.rewards = np.zeros(mem_size)
        self.terminals = np.zeros(mem_size)

    def sample(self,batch_size):
        indices = np.random.choice(self.mem_size,batch_size)
        return self.actions[indices],self.states[indices],\
            self.states_[indices],self.rewards[indices],\
            self.terminals[indices]

    def store(self,action,state,state_,reward,terminal):
        index = self.mem_counter % self.mem_size

        self.actions[index] = action
        self.states[index] = state
        self.states_[index] = state_
        self.rewards[index] = reward
        self.terminals[index] = terminal
        self.mem_counter += 1


class DeepQN(nn.Module):
    def __init__(self,input_shape,output_shape,hidden_layer_dims):
        super(DeepQN,self).__init__()

        self.input_shape = input_shape
        self.output_shape = output_shape

        layers = []
        layers.append(nn.Linear(*input_shape,hidden_layer_dims[0]))
        for index,dim in enumerate(hidden_layer_dims[1:]):
            layers.append(nn.Linear(hidden_layer_dims[index],dim))
        layers.append(nn.Linear(hidden_layer_dims[-1],*output_shape))

        self.layers = nn.ModuleList(layers)

        self.loss = nn.MSELoss()
        self.optimizer = T.optim.Adam(self.parameters())

    def forward(self,states):
        for layer in self.layers[:-1]:
            states = F.relu(layer(states))
        return self.layers[-1](states)

    def learn(self,predictions,targets):
        self.optimizer.zero_grad()
        loss = self.loss(input=predictions,target=targets)
        loss.backward()
        self.optimizer.step()

        return loss


class Agent:
    def __init__(self,epsilon,gamma,input_shape,output_shape):
        self.input_shape = input_shape
        self.output_shape = output_shape
        self.epsilon = epsilon
        self.gamma = gamma

        self.q_eval = DeepQN(input_shape,output_shape,[64])
        self.memory = ReplayBuffer(10000,input_shape,output_shape)

        self.batch_size = 32
        self.learn_step = 0

    def move(self,state):
        if np.random.random() < self.epsilon:
            return np.random.choice(*self.output_shape)
        else:
            self.q_eval.eval()
            state = T.tensor([state]).float()
            action = self.q_eval(state).max(axis=1)[1]
            return action.item()

    def sample(self):
        actions,states,states_,rewards,terminals = \
            self.memory.sample(self.batch_size)

        actions = T.tensor(actions).long()
        states = T.tensor(states).float()
        states_ = T.tensor(states_).float()
        rewards = T.tensor(rewards).view(self.batch_size).float()
        terminals = T.tensor(terminals).view(self.batch_size).long()

        return actions,states,states_,rewards,terminals

    def learn(self,state,action,state_,reward,done):
        self.memory.store(action,state,state_,reward,done)

        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        actions,states,states_,rewards,terminals = self.sample()
        indices = np.arange(self.batch_size)
        q_eval = self.q_eval(states)[indices,actions]  # Q(s,a) of the actions actually taken
        q_next = self.q_eval(states_).detach()  # bootstrap values, cut out of the autograd graph
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

        loss = self.q_eval.learn(q_eval,q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0

        return loss.item()


def learn(env,agent,episodes=500):
    print('Episode: Mean Reward: Last Loss: Mean Step')

    rewards = []
    losses = [0]
    steps = []
    num_episodes = episodes
    for episode in range(num_episodes):
        done = False
        state = env.reset()
        total_reward = 0
        n_steps = 0

        while not done:
            action = agent.move(state)
            state_,reward,done,_ = env.step(action)
            loss = agent.learn(state,action,state_,reward,done)

            state = state_
            total_reward += reward
            n_steps += 1

            if loss:
                losses.append(loss)

        rewards.append(total_reward)
        steps.append(n_steps)

        if episode % (episodes // 10) == 0 and episode != 0:
            print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                  f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
            rewards = []
            losses = [0]
            steps = []

    print(f'{episode:5d} : {np.mean(rewards):5.2f} '
          f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
    return losses,rewards


if __name__ == '__main__':
    env = gym.make('CartPole-v1')
    agent = Agent(1.0,1.0,env.observation_space.shape,[env.action_space.n])

    learn(env,agent,500)

Solution

I think the main problem is the discount factor, gamma. You are setting it to 1.0, which means that you give the same weight to future rewards as to the current one. Usually in reinforcement learning we care more about the immediate reward than about future ones, so gamma should always be less than 1.
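
With the Agent constructor from the code above, this is a one-line change at construction time (a sketch; gamma is the second argument, and 0.99 is just the value tried below):

import gym

# assumes the Agent class defined in the question's code above
env = gym.make('CartPole-v1')
agent = Agent(1.0,0.99,env.observation_space.shape,[env.action_space.n])  # epsilon=1.0, gamma=0.99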

Just to give it a try, I set gamma = 0.99 and ran your code:

Episode: Mean Reward: Last Loss: Mean Step
  100 : 34.80 :  0.34: 34.80
  200 : 40.42 :  0.63: 40.42
  300 : 65.58 :  1.78: 65.58
  400 : 212.06 :  9.84: 212.06
  500 : 407.79 : 19.49: 407.79

As you can see, the loss still increases (even if not as much as before), but so does the reward. You should take into account that the loss here is not a good metric of performance, because you have a moving target. You can reduce the instability of the target by using a target network. With extra parameter tuning and a target network, the loss would probably become more stable.
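
As an illustration, here is a minimal sketch of one way to bolt a target network onto the Agent class from the question; the sync interval of 100 learn steps is an arbitrary choice for the sketch, not a tuned value:

import numpy as np
import torch as T

# assumes the Agent and DeepQN classes from the question's code above
class AgentWithTarget(Agent):
    def __init__(self,epsilon,gamma,input_shape,output_shape,sync_every=100):
        super().__init__(epsilon,gamma,input_shape,output_shape)
        # frozen copy of the online network, refreshed every sync_every learn steps
        self.q_target = DeepQN(input_shape,output_shape,[64])
        self.q_target.load_state_dict(self.q_eval.state_dict())
        self.sync_every = sync_every

    def learn(self,state,action,state_,reward,done):
        self.memory.store(action,state,state_,reward,done)
        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        if self.learn_step % self.sync_every == 0:
            self.q_target.load_state_dict(self.q_eval.state_dict())

        actions,states,states_,rewards,terminals = self.sample()
        indices = np.arange(self.batch_size)
        q_eval = self.q_eval(states)[indices,actions]
        q_next = self.q_target(states_).detach()  # bootstrap from the frozen copy, not q_eval
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

        loss = self.q_eval.learn(q_eval,q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0
        return loss.item()

Using it only requires constructing AgentWithTarget(...) in place of Agent(...) in the __main__ block.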

As a general note, keep in mind that in reinforcement learning the loss value is not as important as it is in supervised learning: a decrease in loss does not always imply an improvement in performance, and vice versa.

The problem is that the Q target is moving while the training steps occur; as the agent plays, predicting the correct sum of rewards becomes extremely hard (e.g. more explored states and rewards mean higher reward variance), so the loss increases. This is even clearer in more complex environments (more states, varied rewards, etc.).

At the same time, the Q network gets better and better at approximating the Q values for each action, so the rewards (may) increase.
