MPI_GATHERV overwrites arrays that are not referenced in command

I have a peculiar problem with MPI in which an array that is not referenced in the MPI call gets overwritten; some kind of memory problem is occurring.

In the first gatherv call, MPI works as expected. On the second call to gatherv, the data from the first array gets corrupted!

The code I'm working with is very large, but I've put together a standalone program that roughly reproduces the problem.

However, while the smaller program still exhibits the problem, it raises a segfault instead of continuing to run the way the larger program does.


    program main

      use mpi

      integer :: chunksize, send_count, i_start, i_end
      integer, allocatable :: rec_starts(:), rec_counts(:)

      integer, parameter :: dp = 8 ! double precision

      REAL(DP), allocatable:: array_2d(:,:)
      REAL(DP), allocatable:: array_3d(:,:,:)

      INTEGER, parameter:: num_skill=5, num_pref=2

      INTEGER, parameter:: num_ed=3, num_children=2, num_age=4, num_market=28, num_health=2, num_year=2
      INTEGER, parameter:: num_total_state_m=num_children*num_market*num_year*num_ed*num_age*num_health*num_ed*num_age*num_health  

      real(dp), dimension(num_skill,num_total_state_m) :: array_2d_local
      real(dp), dimension(num_pref,num_pref,num_total_state_m) :: array_3d_local

      integer i,j,k,l,m

      !mpi vars
      integer :: ierr, ntasks, mpi_id



      ! Set up MPI
      call mpi_init(ierr)
      call mpi_comm_size(mpi_comm_world, ntasks, ierr) !get number of tasks
      call mpi_comm_rank(mpi_comm_world, mpi_id, ierr) !get id of each task
      write(*,*) 'process ', mpi_id+1, 'of ', ntasks, 'is alive,', ' mpi_id:',mpi_id

      !calculate which 'i' this thread is responsible for
            chunksize = (num_total_state_m + ntasks - 1) / ntasks !note int/int rounds down
            i_start = (mpi_id)*chunksize + 1
            i_end = min((mpi_id+1)*chunksize,num_total_state_m)

      !set up practice matrices
      allocate(array_2d(num_skill,num_total_state_m), &
           array_3d(num_pref,num_pref,num_total_state_m))

      l = 1
      m = -1
      do i=1,num_skill
         do j=1, num_total_state_m
            if (mpi_id==0) array_2d_local(i,j) = l
            if (mpi_id==1) array_2d_local(i,j) = m
            l = l + 1
            m = m - 1
         end do
      end do

      l = 1
      m = -1
      do i=1, num_pref
         do j=1, num_pref
            do k=1, num_total_state_m
               if (mpi_id==0) array_3d_local(i,j,k) = l
               if (mpi_id==1) array_3d_local(i,j,k) = m
               l = l + 1
               m = m - 1
            end do
         end do
      end do


      ! Next send matrices
      allocate(rec_starts(ntasks), rec_counts(ntasks))
      do i=1, ntasks
         rec_counts(i) = min(num_total_state_m, i * chunksize) - (i-1)*chunksize
         rec_starts(i) = (i-1) * chunksize
      end do
      rec_counts = rec_counts * num_skill
      rec_starts = rec_starts * num_skill
      send_count = rec_counts(mpi_id+1)


      ! -m  (dimensions:num_skill, num_total_state_m)  double
      call mpi_gatherv(array_2d_local(:,i_start:i_end), send_count, &
           mpi_double_precision, &
           array_2d, rec_counts, rec_starts, mpi_double_precision, &
           0, mpi_comm_world, ierr)

      ! Next do 3d array
      ! IF THESE LINES ARE UNCOMMENTED, THE PROGRAM WORKS FINE!
      !do i=1, ntasks
      !   rec_counts(i) = min(num_total_state_m, i * chunksize) - (i-1)*chunksize
      !   rec_starts(i) = (i-1) * chunksize
      !end do
      rec_counts = rec_counts * num_pref
      rec_starts = rec_starts * num_pref
      send_count = rec_counts(mpi_id+1)
      ! -array_3d    (num_pref,num_pref,num_total_state_m)double
      print*, array_2d(1,1), mpi_id, 'before'
      call mpi_gatherv(array_3d_local(:,:,i_start:i_end), send_count, &
           mpi_double_precision, &
           array_3d, rec_counts, rec_starts, mpi_double_precision, &
           0, mpi_comm_world, ierr)
      print*, array_2d(1,1), mpi_id, 'after'


      deallocate(rec_starts, rec_counts)
      deallocate(array_2d, array_3d)



    end program main

The output of this small program looks like this:

    mpifort -fcheck=all -fbacktrace -g -Og -ffree-line-length-2048  main.f90 -o run_main
    mpiexec -np 2 run_main 2>&1 | tee run_main.log
     process            1 of            2 is alive, mpi_id:           0
     process            2 of            2 is alive, mpi_id:           1
       1.0000000000000000                0 before
       0.0000000000000000                1 before

    Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

    Backtrace for this error:
    #0  0x101e87579
    #1  0x101e86945
    #2  0x7fff6a9ecb5c

In the larger program there is no segfault, and the print output looks like this:

    1.0000000000000000                0 before
    0.0000000000000000                1 before
    -1.9018063100806379               0 after
    0.0000000000000000                1 after

I've been looking at other SO posts, such as MPI_Recv overwrites parts of memory it should not access.

However, as a non-expert in Fortran/MPI, the responses to those posts unfortunately weren't enough for me to understand the problem.

Any help or insight would be greatly appreciated. Thanks!

EDIT: Thanks, I was just being an idiot. If anyone else runs into this problem, triple-check your recvcounts and displs!

Your initial code does

    do i=1, ntasks
       rec_counts(i) = min(num_total_state_m, i * chunksize) - (i-1)*chunksize
       rec_starts(i) = (i-1) * chunksize
    end do
    rec_counts = rec_counts * num_skill
    rec_starts = rec_starts * num_skill
    send_count = rec_counts(mpi_id+1)

and then

    rec_counts = rec_counts * num_pref
    rec_starts = rec_starts * num_pref
    send_count = rec_counts(mpi_id+1)

You simply forgot to divide by num_skill. A simple fix is to replace the last three lines with

    rec_counts = rec_counts * num_pref / num_skill
    rec_starts = rec_starts * num_pref / num_skill
    send_count = rec_counts(mpi_id+1)
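
A perhaps less error-prone pattern, and essentially what the commented-out block in the question already does, is to recompute the counts and displacements from scratch before each gatherv instead of rescaling the values left over from the previous call. A minimal sketch along those lines, using the same variable names as the program above and keeping the factor of num_pref from the question's working version:

    ! recompute the counts/displacements for the 3-D gather from scratch,
    ! rather than reusing the num_skill-scaled values from the 2-D gather
    do i = 1, ntasks
       rec_counts(i) = min(num_total_state_m, i * chunksize) - (i-1)*chunksize
       rec_starts(i) = (i-1) * chunksize
    end do
    rec_counts = rec_counts * num_pref
    rec_starts = rec_starts * num_pref
    send_count = rec_counts(mpi_id+1)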

If you suspect a bug in the MPI library, a good practice is to try another one (for example, an MPICH derivative and Open MPI). If your application crashes with both, then the bug is most likely in your application.
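
Before suspecting the library, it also pays to assert that the recvcounts and displs actually fit the buffers being passed. In this reproducer the second gatherv ends up with per-rank counts of chunksize*num_skill*num_pref elements (num_skill*num_pref*num_total_state_m = 10*num_total_state_m in total across ranks), while array_3d only holds num_pref*num_pref*num_total_state_m = 4*num_total_state_m elements, so the gather writes past the end of array_3d; that is why memory the call never references (here, apparently array_2d) gets overwritten. Below is a minimal sanity-check sketch that could be placed just before the second mpi_gatherv call (an editorial addition, assuming the chunks are laid out back to back in rank order as in this program; the first gatherv can be checked analogously with array_2d):

    ! abort early if the counts/displacements would overflow the buffers
    if (mpi_id == 0) then
       if (rec_starts(ntasks) + rec_counts(ntasks) > size(array_3d)) then
          write(*,*) 'recvcounts/displs overflow the receive buffer:', &
               rec_starts(ntasks) + rec_counts(ntasks), '>', size(array_3d)
          call mpi_abort(mpi_comm_world, 1, ierr)
       end if
    end if
    if (send_count > size(array_3d_local(:,:,i_start:i_end))) then
       write(*,*) 'send_count exceeds the local slab on rank', mpi_id
       call mpi_abort(mpi_comm_world, 1, ierr)
    end if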