One of the variables needed for gradient computation has been modified by an inplace operation


I am using PyTorch 1.5 to compute the policy loss in the Deep Deterministic Policy Gradient (DDPG) algorithm, and I get the following error.


Below are my networks and training procedure. In the actor network, the output vector has length 20, representing a continuous action. The input of the critic network consists of the state vector and the action vector.

File "F:\agents\ddpg.py",line 128,in train_model
    policy_loss.backward()
  File "E:\conda\envs\pytorch\lib\site-packages\torch\tensor.py",line 198,in backward
    torch.autograd.backward(self,gradient,retain_graph,create_graph)
  File "E:\conda\envs\pytorch\lib\site-packages\torch\autograd\__init__.py",line 100,in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128,1]],which is output 0 of TBackward,is at version 2; expected version 1 instead
. Hint: enable anomaly detection to find the operation that failed to compute its gradient,with torch.autograd.set_detect_anomaly(True).
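
For context, this error means that a tensor saved during the forward pass was mutated in place before backward() got to use it; an optimizer.step() sitting between a forward pass and its backward() is a common culprit, since it updates parameters in place. A minimal toy sketch (my own, not from the question) that reproduces the same class of error:

import torch
import torch.nn as nn

net = nn.Linear(4, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x = torch.randn(8, 4)

loss_a = net(x).mean()   # graph A saves net.weight for its backward pass
loss_b = net(x).sum()    # graph B, an independent forward pass

opt.zero_grad()
loss_b.backward()        # fills gradients via graph B
opt.step()               # in-place parameter update -> weight's version counter bumps

loss_a.backward()        # RuntimeError: ... modified by an inplace operation

The exact error wording can vary by PyTorch version, but the mechanism is the same as in the traceback above.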

"""
ddpg actor
"""
class MLP(nn.Module):
    def __init__(self,input_size,output_size,output_limit=1.0,hidden_sizes=(64,64),activation=torch.relu,output_activation=identity,use_output_layer=True,use_actor=False,):
        super(MLP,self).__init__()

        self.input_size = input_size
        self.output_size = output_size
        self.output_limit = output_limit
        self.hidden_sizes = hidden_sizes
        self.activation = activation
        self.output_activation = output_activation
        self.use_output_layer = use_output_layer
        self.use_actor = use_actor

        # Set hidden layers
        self.hidden_layers = nn.ModuleList()
        in_size = self.input_size
        for next_size in self.hidden_sizes:
            fc = nn.Linear(in_size, next_size)
            in_size = next_size
            self.hidden_layers.append(fc)

        # Set output layers
        if self.use_output_layer:
            self.output_layer1 = nn.Linear(in_size, self.output_size // 2)
            self.output_layer2 = nn.Linear(in_size, self.output_size // 2)
        else:
            self.output_layer = identity

    def forward(self, x):
        for hidden_layer in self.hidden_layers:
            x = self.activation(hidden_layer(x))
        # First half of the action vector is squashed with sigmoid, the second
        # half with softmax (note: dim=0 normalizes over the batch dimension)
        x1 = torch.sigmoid(self.output_layer1(x))
        x2 = F.softmax(self.output_layer2(x), dim=0)
        out = torch.cat((x1, x2), dim=-1)

        # If the network is used as actor network, make sure output is in correct range
        out = out * self.output_limit if self.use_actor else out
        return out
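
As a quick sanity check of this actor (my own illustration, using the hypothetical sizes printed in the training code below: 22-dimensional states, 20-dimensional actions):

actor = MLP(input_size=22, output_size=20, use_actor=True)
obs = torch.randn(64, 22)
print(actor(obs).shape)  # torch.Size([64, 20]): 10 sigmoid dims + 10 softmax dims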



"""
DDPG critic,TD3 critic,SAC qf,TAC qf
"""

class critic(nn.Module):
    def __init__(self, input_size, output_size, output_limit=1.0,
                 hidden_sizes=(64, 64), activation=torch.relu,
                 output_activation=identity, use_output_layer=True,
                 use_actor=False):
        super().__init__()

        self.input_size = input_size
        self.output_size = output_size
        self.output_limit = output_limit
        self.hidden_sizes = hidden_sizes
        self.activation = activation
        self.output_activation = output_activation
        self.use_output_layer = use_output_layer
        self.use_actor = use_actor

        # Set hidden layers
        self.hidden_layers = nn.ModuleList()
        in_size = self.input_size
        for next_size in self.hidden_sizes:
            fc = nn.Linear(in_size, next_size)
            in_size = next_size
            self.hidden_layers.append(fc)

        # Set output layers
        if self.use_output_layer:
            self.output_layer = nn.Linear(in_size, self.output_size)
        else:
            self.output_layer = identity

    def forward(self, x, a):
        # The critic consumes the concatenation of state and action vectors
        q = torch.cat([x, a], dim=1)

        for hidden_layer in self.hidden_layers:
            q = self.activation(hidden_layer(q))
        # tanh bounds the Q-value estimate to (-1, 1)
        q = torch.tanh(self.output_layer(q))

        return q
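
And the matching check for the critic (again my own illustration; since the critic consumes the state-action concatenation, its input size is 22 + 20 = 42):

qf = critic(input_size=42, output_size=1)
obs = torch.randn(64, 22)
act = torch.randn(64, 20)
print(qf(obs, act).shape)  # torch.Size([64, 1]); tanh keeps Q in (-1, 1)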

Here is my training function. I also took the advice given by the hint and wrapped the computation in with torch.autograd.set_detect_anomaly(True):

def train_model(self):
    batch = self.replay_buffer.sample(self.batch_size)
    obs1 = batch['obs1']
    obs2 = batch['obs2']
    acts = batch['acts']
    rews = batch['rews']
    done = batch['done']

    # Check shape of experiences
    # Prediction Q(s, π(s)), Q(s, a), Q̄(s', π̄(s'))
    with torch.autograd.set_detect_anomaly(True):
        print("obs1", obs1.shape)                # (64, 22)
        print("a1", self.policy(obs1).shape)     # (64, 20)
        q_pi = self.qf(obs1, self.policy(obs1))
        q = self.qf(obs1, acts).squeeze(1)
        q_pi_target = self.qf_target(obs2, self.policy_target(obs2)).squeeze(1)

        # Target for Q regression
        q_backup = rews + self.gamma * (1 - done) * q_pi_target
        q_backup.to(self.device)  # note: .to() is not in-place, so this line has no effect

        # DDPG losses
        policy_loss = -q_pi.mean()
        qf_loss = F.mse_loss(q, q_backup.detach())

        # Update Q-function network parameter
        self.qf_optimizer.zero_grad()
        qf_loss.backward()
        nn.utils.clip_grad_norm_(self.qf.parameters(), self.gradient_clip_qf)
        self.qf_optimizer.step()

        # Update policy network parameter
        self.policy_optimizer.zero_grad()
        # here is the error
        policy_loss.backward()
        nn.utils.clip_grad_norm_(self.policy.parameters(), self.gradient_clip_policy)
        self.policy_optimizer.step()

        # Polyak averaging for target parameter
        soft_target_update(self.policy, self.policy_target)
        soft_target_update(self.qf, self.qf_target)

        # Save losses
        self.policy_losses.append(policy_loss.item())
        self.qf_losses.append(qf_loss.item())

Even so, I cannot find the operation in my code that fails to compute its gradient.

Solution

Just try to avoid that particular in-place operation and turn it into an out-of-place one.

I have seen (and confirmed) cases where PyTorch's reverse-mode automatic differentiation runs into trouble when the computation graph involves a particular in-place operation.

This is a current limitation.
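
In the train_model above, the in-place operation is almost certainly self.qf_optimizer.step(): it rewrites the critic's parameters in place after they were already saved in q_pi's graph for policy_loss.backward() (the [128, 1] tensor from TBackward is consistent with the transposed weight of a final Linear(128, 1) output layer in the critic). A minimal sketch of one way to avoid it, assuming the loop above: run the policy backward pass before the critic's optimizer step, so the saved weights are still at the expected version.

# Sketch: same losses as in the question, reordered so that no optimizer
# step mutates parameters that a pending backward() still needs.

# DDPG losses (unchanged)
policy_loss = -q_pi.mean()
qf_loss = F.mse_loss(q, q_backup.detach())

# Policy update first: policy_loss.backward() goes through the critic's
# weights, which have not been stepped yet, so their version still matches.
self.policy_optimizer.zero_grad()
policy_loss.backward()
nn.utils.clip_grad_norm_(self.policy.parameters(), self.gradient_clip_policy)
self.policy_optimizer.step()

# Critic update second. zero_grad() here also clears the gradients that
# policy_loss.backward() accumulated into the critic's parameters.
self.qf_optimizer.zero_grad()
qf_loss.backward()
nn.utils.clip_grad_norm_(self.qf.parameters(), self.gradient_clip_qf)
self.qf_optimizer.step()

Alternatively, call backward() on both losses before either optimizer step, or recompute q_pi after the critic update; either variant keeps the in-place parameter updates out of the window between a forward pass and its backward().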

