MPI_alltoallw works and MPI_Ialltoallw fails

I am trying to introduce non-blocking communication into a large code base, but in this particular case the code keeps failing. I have reproduced the error below. Run on a single CPU, the following code works when switch is set to .false. but fails when switch is set to .true.:

program main
      
  use mpi

  implicit none

  logical :: switch
  integer,parameter :: maxSize=128
  integer scounts(maxSize),sdispls(maxSize)
  integer rcounts(maxSize),rdispls(maxSize)
  integer :: types(maxSize)
  double precision sbuf(maxSize),rbuf(maxSize)
  integer comm,size,rank,req
  integer ierr
  integer ii

  call MPI_INIT(ierr)
  comm = MPI_COMM_WORLD
  call MPI_Comm_size(comm,size,ierr)
  call MPI_Comm_rank(comm,rank,ierr)

  switch = .true.

  ! Init
  sbuf(:) = rank
  scounts(:) = 0
  rcounts(:) = 0
  sdispls(:) = 0
  rdispls(:) = 0
  types(:) = MPI_INTEGER
  if (switch) then
    ! Send one time N double precision
    scounts(1)  = 1
    rcounts(1)  = 1
    sdispls(1)  = 0
    rdispls(1)  = 0
    call MPI_Type_create_subarray(1,(/maxSize/),&
                                     (/maxSize/),&
                                     (/0/),&
                                     MPI_ORDER_FORTRAN,MPI_DOUBLE_PRECISION,&
                                     types(1),ierr)
    call MPI_Type_commit(types(1),ierr)
  else
    ! Send N times one double precision
    do ii = 1,maxSize
      scounts(ii)  = 1
      rcounts(ii)  = 1
      sdispls(ii)  = (ii-1)*8 ! alltoallw displacements are in bytes
      rdispls(ii)  = (ii-1)*8
      types(ii)    = MPI_DOUBLE_PRECISION
    enddo
  endif

  call MPI_Ibarrier(comm,req,ierr)
  call MPI_Wait(req,MPI_STATUS_IGNORE,ierr)

  if (switch) then
    call MPI_Ialltoallw(sbuf,scounts,sdispls,types,&
                        rbuf,rcounts,rdispls,types,&
                        comm,req,ierr)
    call MPI_Wait(req,MPI_STATUS_IGNORE,ierr)
    call MPI_TYPE_FREE(types(1),ierr)
  else
    call MPI_alltoallw(sbuf,scounts,sdispls,types,&
                       rbuf,rcounts,rdispls,types,&
                       comm,ierr)
  endif

  call MPI_Finalize( ierr )

end program main
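
A possible build step is sketched below; the mpif90 wrapper name and the -g -O0 flags are assumptions, since the question only says "debug flags":

# assumed compile line producing ./a.out with debug info
mpif90 -g -O0 tmp.f90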

Compiling with debug flags and running with mpirun -np 1 valgrind --vgdb=yes --vgdb-error=0 ./a.out yields the following errors from valgrind and gdb:

valgrind:
==249074== Invalid read of size 8
==249074==    at 0x4EB0A6D: release_vecs_callback (coll_base_util.c:222)
==249074==    by 0x4EB100A: complete_vecs_callback (coll_base_util.c:245)
==249074==    by 0x74AD1CC: ompi_request_complete (request.h:441)
==249074==    by 0x74AE86D: ompi_coll_libnbc_progress (coll_libnbc_component.c:466)
==249074==    by 0x4FC0C39: opal_progress (opal_progress.c:231)
==249074==    by 0x4E04795: ompi_request_wait_completion (request.h:415)
==249074==    by 0x4E047EB: ompi_request_default_wait (req_wait.c:42)
==249074==    by 0x4E80AF7: PMPI_Wait (pwait.c:74)
==249074==    by 0x48A30D2: mpi_wait (pwait_f.c:76)
==249074==    by 0x10961A: MAIN__ (tmp.f90:61)
==249074==    by 0x1096C6: main (tmp.f90:7)
==249074==  Address 0x7758830 is 0 bytes inside a block of size 8 free'd
==249074==    at 0x483CA3F: free (vg_replace_malloc.c:540)
==249074==    by 0x4899CCC: PMPI_IALLTOALLW (pialltoallw_f.c:125)
==249074==    by 0x1095FC: MAIN__ (tmp.f90:61)
==249074==    by 0x1096C6: main (tmp.f90:7)
==249074==  Block was alloc'd at
==249074==    at 0x483B7F3: malloc (vg_replace_malloc.c:309)
==249074==    by 0x4899B4A: PMPI_IALLTOALLW (pialltoallw_f.c:90)
==249074==    by 0x1095FC: MAIN__ (tmp.f90:61)
==249074==    by 0x1096C6: main (tmp.f90:7)
gdb:
Thread 1 received signal SIGTRAP, Trace/breakpoint trap.
0x0000000004eb0a6d in release_vecs_callback (request=0x7758af8) at ../../../../openmpi-4.1.0/ompi/mca/coll/base/coll_base_util.c:222
222             if (NULL != request->data.vecs.stypes[i]) {
(gdb) bt
#0  0x0000000004eb0a6d in release_vecs_callback (request=0x7758af8) at ../../../../openmpi-4.1.0/ompi/mca/coll/base/coll_base_util.c:222
#1  0x0000000004eb100b in complete_vecs_callback (req=0x7758af8) at ../../../../openmpi-4.1.0/ompi/mca/coll/base/coll_base_util.c:245
#2  0x00000000074ad1cd in ompi_request_complete (request=0x7758af8, with_signal=true) at ../../../../../openmpi-4.1.0/ompi/request/request.h:441
#3  0x00000000074ae86e in ompi_coll_libnbc_progress () at ../../../../../openmpi-4.1.0/ompi/mca/coll/libnbc/coll_libnbc_component.c:466
#4  0x0000000004fc0c3a in opal_progress () at ../../openmpi-4.1.0/opal/runtime/opal_progress.c:231
#5  0x0000000004e04796 in ompi_request_wait_completion (req=0x7758af8) at ../../openmpi-4.1.0/ompi/request/request.h:415
#6  0x0000000004e047ec in ompi_request_default_wait (req_ptr=0x1ffeffdbb8, status=0x1ffeffdbc0) at ../../openmpi-4.1.0/ompi/request/req_wait.c:42
#7  0x0000000004e80af8 in PMPI_Wait (request=0x1ffeffdbb8, status=0x1ffeffdbc0) at pwait.c:74
#8  0x00000000048a30d3 in ompi_wait_f (request=0x1ffeffe6cc, status=0x10c0a0 <mpi_fortran_status_ignore_>, ierr=0x1ffeffeee0) at pwait_f.c:76
#9  0x000000000010961b in MAIN__ () at tmp.f90:61

Any help would be greatly appreciated. Ubuntu 20.04, gfortran 9.3.0, Open MPI 4.1.0. Thanks.

Solution

The posted program is currently broken with Open MPI; see issue https://github.com/open-mpi/ompi/issues/8763. The valgrind trace above points at a use-after-free inside the Fortran binding: a block allocated in PMPI_IALLTOALLW (pialltoallw_f.c:90) is freed when the call returns (pialltoallw_f.c:125) and is then read again by release_vecs_callback when the non-blocking request completes. The workaround for now is to use MPICH.
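
A minimal sketch of trying the workaround on Ubuntu 20.04 follows; the mpich package and the mpif90.mpich / mpirun.mpich wrapper names are assumptions based on Debian/Ubuntu's update-alternatives naming, so adjust them to your setup:

# install MPICH alongside Open MPI, then build and run the same source against it
sudo apt install mpich
mpif90.mpich -g -O0 tmp.f90
mpirun.mpich -np 1 ./a.out   # the switch = .true. path should now complete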
