Why is my DQN (Deep Q-Network) not learning?

I am training a DQN (Deep Q-Network) on the CartPole problem from OpenAI Gym, but once training starts, the total score per episode goes down instead of up. I don't know whether this is relevant, but I noticed that the agent prefers one action over the other and refuses to do anything else (unless the epsilon-greedy policy forces it to), at least for a while. I have tried my best, but I just can't figure out what is going on.
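
For context, this is roughly how I watched the per-episode score and the action preference. It is only a minimal stand-alone sketch, not my training script: it uses a random policy in place of my network, the names episode_rewards and action_counts are just illustrative, and it assumes the same old gym step API (four return values) that my code uses.

import gym
from collections import Counter

env = gym.make("CartPole-v1")
episode_rewards = []        # total reward collected in each finished episode
action_counts = Counter()   # how many times each action was chosen

state = env.reset()
total = 0.0
for step in range(2000):
    action = env.action_space.sample()  # stand-in for the (epsilon-)greedy choice
    action_counts[action] += 1
    state, reward, done, info = env.step(action)
    total += reward
    if done:
        episode_rewards.append(total)
        total = 0.0
        state = env.reset()

print("episode scores:", episode_rewards)
print("action counts:", dict(action_counts))
env.close()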

Here is my code:

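# qlearning.py — this module is imported by the main script further down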
import torch as t
import torch.nn as nn
import torch.nn.functional as f

import random as r


class QNet:
    def predict(self,x: t.Tensor) -> t.Tensor:
        pass

    @staticmethod
    def copy_weights(origin: [],target: []):
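        # Copy each layer's weight matrix from the origin (online) network into the target network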
        for origin_layer,target_layer in zip(origin,target):
            target_layer.weight = nn.Parameter(origin_layer.weight.clone())


class Memory:
    def __init__(self,state: t.Tensor,next_state: t.Tensor,action: int,reward: float):
        self.state = state
        self.next_state = next_state
        self.action = action
        self.reward = reward


class ReplayMemory:
    def __init__(self,capacity: int):
        self.capacity = capacity
        self.memories = []

    def add_memory(self,memory: Memory):
        self.memories.append(memory)

        if len(self.memories) > self.capacity:
            self.memories.pop(0)

    def get_batch(self,size: int):
        if len(self.memories) < size:
            raise Exception("There are not enough memories to make a batch.")

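        # Return a random contiguous slice of the stored transitions as the batch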
        start_index = r.randint(0,len(self.memories) - size)
        end_index = start_index + size
        return self.memories[start_index:end_index]


class QLearning:
    def __init__(self,net: QNet,target_net: QNet,optimizer,gamma: float):
        self.net = net
        self.target_net = target_net
        self.optimizer = optimizer
        self.gamma = gamma

    def train(self,batch: [Memory]):
        batched_pred = []
        batched_opt_pred = []
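        # Build a TD target per sample: the reward, plus the discounted max target-network Q-value when there is a next state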
        for sample in batch:
            pred = self.net.predict(sample.state)

            opt_pred = pred.clone()
            opt_pred[sample.action] = sample.reward
            if sample.next_state is not None:
                opt_pred[sample.action] += t.max(self.target_net.predict(sample.next_state)) * self.gamma

            batched_pred.append(pred)
            batched_opt_pred.append(opt_pred)

        loss = f.mse_loss(t.stack(batched_pred),t.stack(batched_opt_pred))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

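# Main script (a separate file that imports the qlearning module above)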
import gym
import torch.optim as optim

from qlearning import *

env = gym.make("CartPole-v1")
state = t.tensor(env.reset(),dtype=t.float)


class Agent(nn.Module,QNet):
    def __init__(self):
        super().__init__()

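        # Five fully connected layers mapping the 4-dimensional CartPole state to 2 Q-values (one per action)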
        self.l1 = nn.Linear(4,32)
        self.l2 = nn.Linear(32,16)
        self.l3 = nn.Linear(16,8)
        self.l4 = nn.Linear(8,4)
        self.l5 = nn.Linear(4,2)

    def predict(self,x):
        y = f.relu(self.l1(x))
        y = f.relu(self.l2(y))
        y = f.relu(self.l3(y))
        y = f.relu(self.l4(y))
        return self.l5(y)


agent = Agent()
target_agent = Agent()
q = QLearning(agent,target_agent,optim.Adam(agent.parameters(),lr=0.001),0.9)
replay_memory = ReplayMemory(100000)
epsilon = 1
epsilon_dec = 1 / 1000
total_reward = 0
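# Main loop: 1000 environment steps, with epsilon decaying linearly from 1 towards 0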
for i in range(1000):
    env.render()

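    # Epsilon-greedy action selection: exploit the network with probability 1 - epsilon, otherwise act randomly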
    action = 0
    if r.random() > epsilon:
        action = t.argmax(agent.predict(state)).item()
    else:
        action = env.action_space.sample()

    epsilon -= epsilon_dec

    next_state,reward,done,info = env.step(action)
    next_state = t.tensor(next_state,dtype=t.float)
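    # Terminal steps store a -1 reward with no next state; otherwise store the full transition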
    if done:
        reward = -1
        replay_memory.add_memory(Memory(state,None,action,reward))
    else:
        replay_memory.add_memory(Memory(state,next_state,action,reward))

    total_reward += reward

    if done:
        state = t.tensor(env.reset(),dtype=t.float)

        # print(int(total_reward))
        total_reward = 0

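    # Once at least 10 transitions are stored, train on a batch of 10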
    if len(replay_memory.memories) >= 10:
        q.train(replay_memory.get_batch(10))

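    # Copy the online network's weights into the target network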
    if i % 10:
        QNet.copy_weights([agent.l1,agent.l2,agent.l3,agent.l4,agent.l5],[target_agent.l1,target_agent.l2,target_agent.l3,target_agent.l4,target_agent.l5])

    state = next_state
env.close()
