How to resolve imprecise PyTorch tensor to NumPy conversion
I'm building a PyTorch model that outputs a probability distribution. After getting the predictions, I want to convert them to a NumPy array for further processing. However, when I do so, the converted array is not exact: its values differ from the original ones (see the example below). This is particularly bad in my case because, with the imprecision, the rows no longer sum to exactly 1, which causes problems further down in my code. Why does this happen? Any help would be greatly appreciated.
>> predictions
Out[34]:
tensor([[0.0155, 0.0260, 0.2671, 0.6820, 0.0093],
        [0.1231, 0.1076, 0.1660, 0.5376, 0.0658],
        [0.0734, 0.0501, 0.1683, 0.6602, 0.0480],
        ...,
        [0.0260, 0.0287, 0.0465, 0.8830, 0.0159],
        [0.0251, 0.0327, 0.3148, 0.5643, 0.0632],
        [0.0105, 0.0116, 0.2723, 0.6702, 0.0354]],
       device='cuda:0', grad_fn=<CopySlices>)
>> predictions.cpu().detach().numpy()
Out[33]:
array([[0.01549727, 0.02597333, 0.26714763, 0.68203545, 0.00934631],
       [0.12310678, 0.10758312, 0.16596784, 0.53758585, 0.06575638],
       [0.07338995, 0.05007047, 0.16834012, 0.6601813 , 0.04801816],
       [0.02603309, 0.02868567, 0.046453  , 0.8829591 , 0.01586908],
       [0.02508399, 0.03266559, 0.31480333, 0.564255  , 0.06319207],
       [0.01048695, 0.01161059, 0.2722554 , 0.6702131 , 0.03543395]],
      dtype=float32)
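For context, a minimal NumPy-only sketch (torch omitted, and the values copied from the output above) of what the transcript is showing: the NumPy array holds the same underlying float32 data as the tensor; the tensor's default repr simply rounds to four decimals for display, and the full-precision row still sums to 1 up to ordinary float32 rounding error (~1e-7):

```python
import numpy as np

# The full-precision float32 values, as .numpy() actually returns them
# (first row of the array output above)
row = np.array([0.01549727, 0.02597333, 0.26714763, 0.68203545, 0.00934631],
               dtype=np.float32)

# What the default tensor repr displays: the same values, rounded to
# four decimal places for printing only
shown = np.round(row, 4)
print(shown)

# The underlying data never changed; the row sum is 1 up to float32 precision
print(row.sum())
assert abs(float(row.sum()) - 1.0) < 1e-6
```

If exact row sums matter downstream, comparisons should use a tolerance (e.g. `np.isclose`) rather than equality with 1, since float32 arithmetic can never guarantee exact sums.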