coarray fortran array doesn't get updated
I'm just starting to learn coarrays in Fortran. I have a very simple program: an array of length 9 that I want to distribute across 3 processes, do some computation on, and then merge back into a single array (basically an MPI_scatter
/MPI_gather
kind of problem).
program name
   implicit none
   integer:: arr(9), local(3)[*]
   integer:: i, j, k, iz

   arr = [(i, i = 1, 9)]
   local(:)[1] = arr(1:3)
   local(:)[2] = arr(4:6)
   local(:)[3] = arr(7:9)

   iz = this_image()
   local = local*iz
   sync all

   if(iz == 1) then
      arr(1:3) = local(:)[1]
      arr(4:6) = local(:)[2]
      arr(7:9) = local(:)[3]
      write(*,'(*(i3))')arr
   endif
end program name
I'm compiling this with
gfortran -fcoarray=lib -lcaf_mpi
and running it with
mpirun -np 3 ./a.out
This should produce the following output:
1 2 3 8 10 12 21 24 27
However, if I run the executable multiple times (without recompiling), it sometimes prints different results, e.g. 1 2 3 4 5 6 21 24 27
and the like, where some of the values have not been updated by the computation.
What am I doing wrong, and how can I fix it?
Welcome to the wonderful world of shared memory programming!
You have a race condition. It is possible for one of the images to fall so far behind the others that it resets the (remote) data after another image has already performed local = local*iz
. Here is one way to fix it, taking care that only one image ever sets a given part of the data:
ijb@ijb-Latitude-5410:~/work/stack$ cat caf.f90
program name
   implicit none
   integer:: arr(9), local(3)[*]
   integer:: i, j, k, iz

   iz = this_image()

   ! Only one image should ever write to a given memory location
   ! between synchronisation points. In a real code each different image
   ! would set up different parts of the array
   If( iz == 1 ) Then
      arr = [(i, i = 1, 9)]
      local(:)[1] = arr(1:3)
      local(:)[2] = arr(4:6)
      local(:)[3] = arr(7:9)
   End If

   ! Make sure the array is fully set up before you use it
   sync all

   local = local*iz
   sync all

   if(iz == 1) then
      arr(1:3) = local(:)[1]
      arr(4:6) = local(:)[2]
      arr(7:9) = local(:)[3]
      write(*,'(*(i3))')arr
   endif
end program name
ijb@ijb-Latitude-5410:~/work/stack$ mpif90 -std=f2018 -fcheck=all -Wall -Wextra -O -g -fcoarray=lib caf.f90 -lcaf_openmpi
caf.f90:4:17:
4 | integer:: i, j, k, iz
| 1
Warning: Unused variable ‘j’ declared at (1) [-Wunused-variable]
caf.f90:4:20:
4 | integer:: i, j, k, iz
| 1
Warning: Unused variable ‘k’ declared at (1) [-Wunused-variable]
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
ijb@ijb-Latitude-5410:~/work/stack$ mpirun -np 3 ./a.out
1 2 3 8 10 12 21 24 27
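As the comment in the code above suggests, in a real code each image would set up its own part of the array rather than image 1 setting up everything. Below is a minimal, untested sketch of that approach; it assumes exactly 3 images, as with mpirun -np 3 above, and has each image fill only its own slice so that no two images ever write to the same location.

program name
   implicit none
   integer :: arr(9), local(3)[*]
   integer :: i, iz

   iz = this_image()

   ! Each image fills only its own section of the coarray, so no
   ! cross-image writes are needed during set-up
   local = [( (iz - 1)*3 + i, i = 1, 3 )]

   ! Everyone must finish setting up before anyone computes
   sync all

   local = local*iz

   ! Everyone must finish computing before image 1 gathers
   sync all

   if( iz == 1 ) then
      arr(1:3) = local(:)[1]
      arr(4:6) = local(:)[2]
      arr(7:9) = local(:)[3]
      write(*,'(*(i3))') arr
   end if

end program name

Run on 3 images this should print the same 1 2 3 8 10 12 21 24 27 as above.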