Fortran Coder


[General Algorithms] Error while gathering data in parallel FORTRAN code


OP | Posted 2023-5-3 23:48:27
Hi everyone,

I parallelized a piece of Fortran code with MPI_SCATTERV and MPI_GATHERV. It compiles successfully, but at run time the system reports an error saying that MPI_GATHERV failed while gathering the data. Could you please take a look at my code and tell me where the problem is? Thank you.

Here is my code.
MODULE MOU
CONTAINS
SUBROUTINE PRO (br,n1,n2,n3)
IMPLICIT NONE
INTEGER, PARAMETER               ::      dp = SELECTED_REAL_KIND(15,14)
REAL (KIND=dp)                   ::      br(:,:,:,:)
INTEGER                          ::      i, j, k, l
INTEGER                          ::      n1, n2, n3
DO i = 1, n1, 1
   DO j = 1, n2, 1
      DO k = 1, n3, 1
         br(:,i,j,k) = (i*0.85+3.1)/(5.7-j)*(k**3-3.6)*DSQRT(br(:,i,j,k))
      END DO
   END DO
END DO
RETURN
END SUBROUTINE PRO
END MODULE MOU
PROGRAM EXAMPLE
USE MPI
USE MOU
IMPLICIT NONE
INTEGER, PARAMETER               ::      dp = SELECTED_REAL_KIND(15,14)
INTEGER                          ::      i, j, k, l
INTEGER                          ::      nu(4)
INTEGER, ALLOCATABLE             ::      ns(:), rd(:)
INTEGER                          ::      nr, er, rn, ts, rs
REAL (KIND=dp), ALLOCATABLE      ::      ar(:,:,:,:), br(:,:,:,:), cr(:,:,:,:)
INTEGER                          ::      world_size, world_rank, ierr
INTEGER                          ::      nt
INTEGER (KIND=MPI_ADDRESS_KIND)  ::      ra(2)
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,world_size,ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,world_rank,ierr)
nu(1) = 5
nu(2) = 6
nu(3) = 3
nu(4) = 2
IF (world_rank == 0) THEN
   ALLOCATE (ar(nu(1),nu(2),nu(3),nu(4)))
   DO i = 1, nu(1), 1
      DO j = 1, nu(2), 1
         DO k = 1, nu(3), 1
            DO l = 1, nu(4), 1
               ar(i,j,k,l) = (i+1)*(j/0.5-6.3)*DSQRT(DBLE(k)+3.9)/(l**2+8.3)
            END DO
         END DO
      END DO
   END DO
END IF
CALL MPI_BARRIER(MPI_COMM_WORLD,ierr)
ALLOCATE (ns(0:world_size-1))
ALLOCATE (rd(0:world_size-1))
nr = nu(2)/world_size
er = MOD(nu(2),world_size)
j = 0
DO i = 0, world_size-1, 1
   IF (i > er) THEN
      ns(i) = nr
   ELSE
      ns(i) = nr + 1
   END IF
   rd(i) = j
   j = j+ns(i)
END DO
rn = ns(world_rank)
ALLOCATE (br(rn,nu(1),nu(3),nu(4)))
CALL MPI_TYPE_VECTOR(rn,1,nu(2),MPI_DOUBLE,nt,ierr)
CALL MPI_TYPE_COMMIT(nt,ierr)
ra(1) = 0
CALL MPI_TYPE_SIZE(MPI_DOUBLE,ts,ierr)
ra(2) = 1*ts
CALL MPI_TYPE_CREATE_RESIZED(nt,ra(1),ra(2),rs,ierr)
CALL MPI_TYPE_COMMIT(rs,ierr)
br = 0.0d0
CALL MPI_SCATTERV(ar,ns,rd,rs,br,rn*nu(4)*nu(3)*nu(1),MPI_DOUBLE,0,MPI_COMM_WORLD,ierr)
CALL PRO(br,nu(1),nu(3),nu(4))
IF (world_rank == 0) THEN
   ALLOCATE (cr(nu(1),nu(2),nu(3),nu(4)))
END IF
CALL MPI_GATHERV(br,rn*nu(4)*nu(3)*nu(1),MPI_DOUBLE,cr,ns,rd,MPI_DOUBLE,0,MPI_COMM_WORLD,ierr)
IF (world_rank == 0) THEN
   OPEN(UNIT=3, FILE='data.dat', STATUS='REPLACE')
   DO i = nu(1), 1
      DO j = 1, nu(2), 1
         DO k = 1, nu(3), 1
            WRITE (UNIT=3, FMT=*) cr(i,j,k,:)
         END DO
      END DO
   END DO
   DEALLOCATE (ar)
   DEALLOCATE (cr)
   CLOSE (UNIT=3)
END IF
DEALLOCATE (ns)
DEALLOCATE (rd)
DEALLOCATE (br)
STOP
END PROGRAM EXAMPLE

Here is the error message.
Loading compiler version 2022.0.2
Loading tbb version 2021.5.1
Loading compiler-rt version 2022.0.2
Loading oclfpga version 2022.0.2
  Load "debugger" to debug DPC++ applications with the gdb-oneapi debugger.
  Load "dpl" for additional DPC++ APIs: [url]https://github.com/oneapi-src/oneDPL[/url]
Loading mkl version 2022.0.2
Loading mpi version 2021.5.1
Abort(69296142) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Gatherv: Message truncated, error stack:
PMPI_Gatherv(397)..........................: MPI_Gatherv failed(sbuf=0x2b6dd01cb240, scount=881724165, MPI_DOUBLE, rbuf=0x2b6f901cc280, rcnts=0x2b6c0fc87fe0, displs=0x2b6c0fc87fc0, datatype=MPI_DOUBLE, root=0, comm=MPI_COMM_WORLD) failed
MPIDI_Gatherv_intra_composition_alpha(2631): 
MPIDI_NM_mpi_gatherv(396)..................: 
MPIR_Gatherv_allcomm_linear_ssend(65)......: 
MPIR_Localcopy(46).........................: Message truncated; -1536141272 bytes received but buffer size is 8056
Loading compiler version 2022.0.2
Loading tbb version 2021.5.1
Loading compiler-rt version 2022.0.2
Loading oclfpga version 2022.0.2
  Load "debugger" to debug DPC++ applications with the gdb-oneapi debugger.
  Load "dpl" for additional DPC++ APIs: [url]https://github.com/oneapi-src/oneDPL[/url]
Loading mkl version 2022.0.2
Loading mpi version 2021.5.1
Abort(136405006) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Gatherv: Message truncated, error stack:
PMPI_Gatherv(397)..........................: MPI_Gatherv failed(sbuf=0x2b77a71a9d00, scount=210, MPI_DOUBLE, rbuf=0x2b77a71a9600, rcnts=0x2b77a7197fe0, displs=0x2b77a7197fc0, datatype=MPI_DOUBLE, root=0, comm=MPI_COMM_WORLD) failed
MPIDI_Gatherv_intra_composition_alpha(2631): 
MPIDI_NM_mpi_gatherv(396)..................: 
MPIR_Gatherv_allcomm_linear_ssend(65)......: 
MPIR_Localcopy(46).........................: Message truncated; 1680 bytes received but buffer size is 56


Reply #1 | Posted 2023-5-4 07:48:51
1. The amount of data being transferred is too large, exceeding the range of a 32-bit integer. Is it really that large, or is the count being computed incorrectly?
2. The receive buffer is too small.
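
For illustration only, here is a minimal sketch of a count-consistent MPI_GATHERV, assuming the goal is simply to bring each rank's whole slab back to the root as plain double-precision values, laid out one rank after another (which is not the same memory layout as ar). The rule it demonstrates: for every rank p, recvcounts(p) on the root, measured in units of the receive type, must equal the sendcount passed by rank p, measured in units of its send type. The names p, cols, counts, displs, slab and flat are invented for this sketch; nu, world_size, world_rank, dp and ierr are the ones from the posted program.

! Sketch only: both sides of MPI_GATHERV are counted in MPI_DOUBLE_PRECISION,
! so that counts(p) on the root equals the sendcount passed by rank p.
INTEGER                          ::      p, cols
INTEGER, ALLOCATABLE             ::      counts(:), displs(:)
REAL (KIND=dp), ALLOCATABLE      ::      slab(:,:,:,:), flat(:)
ALLOCATE (counts(0:world_size-1))
ALLOCATE (displs(0:world_size-1))
DO p = 0, world_size-1, 1
   counts(p) = nu(2)/world_size                                 ! columns held by rank p ...
   IF (p < MOD(nu(2),world_size)) counts(p) = counts(p) + 1     ! ... the first ranks get one extra
   counts(p) = counts(p)*nu(1)*nu(3)*nu(4)                      ! doubles sent by rank p
END DO
displs(0) = 0
DO p = 1, world_size-1, 1
   displs(p) = displs(p-1) + counts(p-1)                        ! root-side offsets, in doubles
END DO
cols = counts(world_rank)/(nu(1)*nu(3)*nu(4))
ALLOCATE (slab(cols,nu(1),nu(3),nu(4)))                         ! this rank's piece of the data
slab = 1.0_dp                                                   ! stand-in for the real values
IF (world_rank == 0) THEN
   ALLOCATE (flat(nu(1)*nu(2)*nu(3)*nu(4)))                     ! exactly SUM(counts) doubles
ELSE
   ALLOCATE (flat(1))                                           ! recvbuf is ignored on non-root ranks
END IF
CALL MPI_GATHERV(slab, counts(world_rank), MPI_DOUBLE_PRECISION,  &
                 flat, counts, displs, MPI_DOUBLE_PRECISION,      &
                 0, MPI_COMM_WORLD, ierr)

With the send count and the matching receive count expressed in the same datatype, the slot the root reserves for each rank is exactly as large as the incoming message, which is what the "Message truncated" error is complaining about. MPI_DOUBLE_PRECISION is also the portable Fortran name for the double-precision datatype.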


Reply #2 (OP) | Posted 2023-5-4 15:47:06
Quote from li913, posted 2023-5-4 07:48:
1. The amount of data being transferred is too large, exceeding the range of a 32-bit integer. Is it really that large, or is the count being computed incorrectly?
2. The receive buffer is too small.
...

Thank you for your reply.

May I ask, then: aside from the amount of data possibly being too large, am I otherwise using MPI_GATHERV correctly?

Also, does MPI provide a routine that lets each process write its own computed data to a file, without the processes overwriting one another's output, so that the data would not have to be gathered to the root process before being written out?

Reply #3 | Posted 2023-5-5 16:42:04
Quote from Kieran, posted 2023-5-4 15:47:
Thank you for your reply.

May I ask, then: aside from the amount of data possibly being too large, am I otherwise using MPI_GATHERV correctly?

1. It looks fine to me, but you would have to run it to be sure.
2. Have each process write to its own file.
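
As a rough illustration of point 2, here is a minimal sketch in which every rank writes its own slab br straight to its own file, with the rank number embedded in the file name so the files cannot overwrite one another. The file-name pattern data_rankN.dat and the unit number 20 are invented for this sketch; rn, nu and br are the variables from the posted program. If a single shared file is really wanted, MPI-IO (MPI_FILE_OPEN together with MPI_FILE_WRITE_AT at a rank-specific offset) is the MPI-level alternative, but one file per rank is the simplest approach.

! Sketch only: one output file per MPI rank, e.g. data_rank0.dat, data_rank1.dat, ...
! The file-name pattern and the unit number 20 are invented for this sketch.
CHARACTER (LEN=32)               ::      fname
WRITE (fname, '(A,I0,A)') 'data_rank', world_rank, '.dat'
OPEN (UNIT=20, FILE=TRIM(fname), STATUS='REPLACE')              ! unit 20 is private to each process
DO i = 1, rn, 1                                                 ! the columns held by this rank
   DO k = 1, nu(3), 1
      WRITE (UNIT=20, FMT=*) br(i,:,k,:)                        ! one nu(1) x nu(4) slice per record
   END DO
END DO
CLOSE (UNIT=20)

A small serial post-processing step can then merge the per-rank files in rank order if a single output file is needed.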