Using MPI-IO to write Fortran-formatted files
I am trying to save a solution using the OVERFLOW-PLOT3D q-file format (defined here: http://overflow.larc.nasa.gov/files/2014/06/Appendix_A.pdf). For a single grid, it is basically:
READ(1) NGRID
READ(1) JD,KD,LD,NQ,NQC
READ(1) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF, &
IGAM,HTINF,HT1,HT2,RGAS1,RGAS2, &
FSMACH,TVREF,DTVREF
READ(1) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
All the variables are double precision, except NGRID, JD, KD, LD, NQ, NQC, and IGAM, which are integers. I need to export the solution with MPI-IO. If I take a very simple single-processor example, the following code does not work, and I don't understand why:
call mpi_file_open( mpi_comm_world, fileOut, mpi_mode_wronly + mpi_mode_create, &
mpi_info_null, mpi_fh, ierr )
offset = 0
call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
call mpi_file_write( mpi_fh, (/NGRID,JD,KD,LD,NQ,NQC/), 6, mpi_integer, mstat, ierr )
call mpi_file_write( mpi_fh, (/REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF/), 7, mpi_double_precision, mstat, ierr )
call mpi_file_write( mpi_fh, IGAM, 1, mpi_integer, mstat, ierr )
call mpi_file_write( mpi_fh, (/HTINF,HT1,HT2,RGAS1,RGAS2,FSMACH,TVREF,DTVREF/), 8, mpi_double_precision, mstat, ierr )
call mpi_file_write( mpi_fh, Q, NQ*JD*KD*LD, mpi_double_precision, mstat, ierr )
Tecplot does not recognize the format. However, if I write a simple non-MPI code like this:
open(2, file=fileOut, form='unformatted', convert='little_endian')
write(2) NGRID
write(2) JD, KD, LD, NQ, NQC
write(2) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF, &
IGAM,HTINF,HT1,HT2,RGAS1,RGAS2, &
FSMACH,TVREF,DTVREF
write(2) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
everything works fine. What is wrong with my MPI-IO code??
Thank you very much for your help!
Joachim
NOTE: I don't know if it is relevant, but if I add an mpi_file_seek(offset) with offset = 144 just before the final write statement, Tecplot agrees to load the file (but the data is not read correctly). That is strange, because the normal offset should be 7 integers + 15 reals*8 = 148 bytes...
EDIT: @Jonathan Dursi, your approach does not seem to work with Tecplot for some reason. Is there anything wrong with the code below? (simplified for a single processor)
call MPI_File_write(fileh, [4, ngrid, 4], 3, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [20, jd, kd, ld, nq, nqc, 20], 7, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [56], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [refmach,alpha,rey,time,gaminf,beta,tinf], 7, MPI_double_precision, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [56], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [4, IGAM, 4], 3, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [64], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [HTINF,HT1,HT2,RGAS1,RGAS2,FSMACH,TVREF,DTVREF], 8, MPI_double_precision, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [64], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [jd*kd*ld*nq*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, q, jd*kd*ld*nq, MPI_double_precision, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [jd*kd*ld*nq*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
@francescalus is right - Fortran sequential unformatted data is record-based, which is actually really nice for a lot of things, but nothing else uses it (even MPI-IO in Fortran is more C-like: the file is just one big long undifferentiated stream of bytes).
Let's take a look at a simplified version of the program you wrote in the question:
program testwrite
integer, parameter:: ngrid=2
integer, parameter:: jd=4, kd=3, ld=2, nq=1, nqc=-1
integer, parameter :: refmach=1, alpha=2, rey=3, time=4, gaminf=5
integer, parameter :: beta=6, tinf=7
integer, dimension(jd,kd,ld,nq) :: q
q = 0
open(2, file='ftest.dat', form='unformatted', convert='little_endian')
write(2) NGRID
write(2) JD, KD, LD, NQ, NQC
write(2) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF
write(2) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
close(2)
end program testwrite
Running this and looking at the resulting binary file with od (I've made everything integers here so we can see the binary file clearly):
$ gfortran -o fwrite fwrite.f90
$ ./fwrite
$ od --format "d" ftest.dat
0000000 4 2 4 20
0000020 4 3 2 1
0000040 -1 20 28 1
0000060 2 3 4 5
0000100 6 7 28 96
0000120 0 0 0 0
*
0000260 96
0000264
So, for instance, we see at the start the ngrid (2) integer, bracketed by 4/4 - the size of the record in bytes. Then, bracketed by 20/20, we see the 5 integers (5*4 bytes) 4, 3, 2, 1, -1 -- jd, kd, ld, nq, nqc. Towards the end, we see a bunch of zeros representing q, bracketed by 96 (= 4 bytes/integer * 4*3*2*1). (Note that there's no standard that defines this behaviour, but I'm not aware of any major Fortran compiler that doesn't do it this way; however, once records get larger than can be described by a 4-byte integer, behaviour starts to differ.)
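To make that record structure concrete, here is a minimal sketch of my own (not one of the programs in this answer) of a helper that emulates a single Fortran sequential record of default integers through MPI-IO by writing the 4-byte length bookends explicitly around the payload; the name write_int_record and its argument order are just my choice:

subroutine write_int_record(fh, vals, n, ierr)
    use mpi
    implicit none
    integer, intent(in)  :: fh        ! MPI-IO file handle
    integer, intent(in)  :: n         ! number of integers in the record
    integer, intent(in)  :: vals(n)   ! record payload
    integer, intent(out) :: ierr

    ! leading record marker: payload size in bytes
    call MPI_File_write(fh, [4*n], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
    ! the payload itself
    call MPI_File_write(fh, vals, n, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
    ! trailing record marker, identical to the leading one
    call MPI_File_write(fh, [4*n], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
end subroutine write_int_record

Called as write_int_record(fh, [ngrid], 1, ierr), this would reproduce the 4 / 2 / 4 pattern at the top of the dump above, under the same assumption as the rest of this answer: 4-byte record markers and default 4-byte integers.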
We can test the data file with the simple program below:
program testread
implicit none
integer :: ngrid
integer :: jd, kd, ld, nq, nqc
integer :: refmach, alpha, rey, time, gaminf
integer :: beta, tinf
integer :: j, k, l, n
integer, allocatable, dimension(:,:,:,:) :: q
character(len=64) :: filename
if (command_argument_count() < 1) then
print *,'Usage: read [filename]'
else
call get_command_argument(1, filename)
open(2, file=trim(filename), form='unformatted', convert='little_endian')
read(2) NGRID
read(2) JD, KD, LD, NQ, NQC
read(2) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF
allocate(q(jd, kd, ld, nq))
read(2) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
close(2)
print *, 'Ngrid = ', ngrid
print *, 'jd, kd, ld, nq, nqc = ', jd, kd, ld, nq, nqc
print *, 'q: min/mean/max = ', minval(q), sum(q)/size(q), maxval(q)
deallocate(q)
endif
end program testread
and running it gives:
$ ./fread ftest.dat
Ngrid = 2
jd, kd, ld, nq, nqc = 4 3 2 1 -1
q: min/mean/max = 0 0 0
Simple enough.
So this behaviour is easy enough to mimic in MPI-IO. There are really three parts here - the header, Q (which I assume is distributed, say with MPI subarrays), and the footer (which is just the closing bookend for the array).
So let's take a look at an MPI-IO program in Fortran that does the same thing:
program mpiwrite
use mpi
implicit none
integer, parameter:: ngrid=2
integer, parameter:: jd=3, kd=3, ld=3, nlocq=3, nqc=-1
integer :: nq
integer, parameter :: refmach=1, alpha=2, rey=3, time=4, gaminf=5
integer, parameter :: beta=6, tinf=7
integer, dimension(jd,kd,ld,nlocq) :: q
integer :: intsize
integer :: subarray
integer :: fileh
integer(kind=MPI_Offset_kind) :: offset
integer :: comsize, rank, ierr
call MPI_Init(ierr)
call MPI_Comm_size(MPI_COMM_WORLD, comsize, ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
nq = nlocq * comsize
q = rank
! create a subarray; each processor gets its own q-slice of the
! global array
call MPI_Type_create_subarray (4, [jd, kd, ld, nq], [jd, kd, ld, nlocq], &
[0, 0, 0, nlocq*rank], &
MPI_ORDER_FORTRAN, MPI_INTEGER, subarray, ierr)
call MPI_Type_commit(subarray, ierr)
call MPI_File_open(MPI_COMM_WORLD, 'mpi.dat', &
MPI_MODE_WRONLY + MPI_MODE_CREATE, &
MPI_INFO_NULL, fileh, ierr )
! the header size is:
! 1 field of 1 integer ( = 4*(1 + 1 + 1) = 12 bytes )
! +1 field of 5 integers( = 4*(1 + 5 + 1) = 28 bytes )
! +1 field of 7 integers( = 4*(1 + 7 + 1) = 36 bytes )
! +first bookend of array size = 4 bytes
offset = 12 + 28 + 36 + 4
! rank 0 writes the header and footer
if (rank == 0) then
call MPI_File_write(fileh, [4, ngrid, 4], 3, MPI_INTEGER, &
MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [20, jd, kd, ld, nq, nqc, 20], 7, MPI_INTEGER, &
MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, &
[28, refmach, alpha, rey, time, gaminf, beta, tinf, 28],&
9, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [jd*kd*ld*nq*4], 1, MPI_INTEGER, &
MPI_STATUS_IGNORE, ierr)
call MPI_File_seek(fileh, offset+jd*kd*ld*nq*4, MPI_SEEK_CUR, ierr)
call MPI_File_write(fileh, [jd*kd*ld*nq*4], 1, MPI_INTEGER, &
MPI_STATUS_IGNORE, ierr)
endif
! now everyone dumps their part of the array
call MPI_File_set_view(fileh, offset, MPI_INTEGER, subarray, &
'native', MPI_INFO_NULL, ierr)
call MPI_File_write_all(fileh, q, jd*kd*ld*nlocq, MPI_INTEGER, &
MPI_STATUS_IGNORE, ierr)
call MPI_File_close(fileh, ierr)
CALL MPI_Finalize(ierr)
end program mpiwrite
In this program, process 0 is responsible for writing the header and the record markers. It starts off by writing the three header records, each bracketed by its record length in bytes; it then writes the two bookends for the big Q array.
Then each rank sets a file view that first skips the header and then describes just its own piece of the global array (here simply filled with its rank number), and writes out its local data. These are all non-overlapping pieces of data.
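As an aside, because this particular decomposition gives each rank a contiguous slab of nlocq planes of the last dimension, the same non-overlapping layout could also be written without a file view at all. A sketch of what could replace the MPI_File_set_view / MPI_File_write_all pair in the program above (variable names are the ones from that program; myoffset is new):

! each rank's slab starts right after the header plus the slabs of the
! lower-numbered ranks; with the default file view, offsets are plain bytes
integer(kind=MPI_OFFSET_KIND) :: myoffset

myoffset = offset + int(rank, MPI_OFFSET_KIND) * jd*kd*ld*nlocq * 4
call MPI_File_write_at_all(fileh, myoffset, q, jd*kd*ld*nlocq, MPI_INTEGER, &
                           MPI_STATUS_IGNORE, ierr)

Both forms write the same bytes for this decomposition; the subarray view becomes the more convenient one as soon as the decomposition spans more than one dimension.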
So let's try running this with a couple of different sizes:
$ mpif90 -o mpifwrite mpifwrite.f90
$ mpirun -np 1 ./mpifwrite
$ ./fread mpi.dat
Ngrid = 2
jd, kd, ld, nq, nqc = 3 3 3 3 -1
q: min/mean/max = 0 0 0
$ od --format="d" mpi.dat
0000000 4 2 4 20
0000020 3 3 3 3
0000040 -1 20 28 1
0000060 2 3 4 5
0000100 6 7 28 324
0000120 0 0 0 0
*
0000740 0 324
0000750
$ mpirun -np 3 ./mpifwrite
$ ./fread mpi.dat
Ngrid = 2
jd, kd, ld, nq, nqc = 3 3 3 9 -1
q: min/mean/max = 0 1 2
$ od --format="d" mpi.dat
0000000 4 2 4 20
0000020 3 3 3 9
0000040 -1 20 28 1
0000060 2 3 4 5
0000100 6 7 28 972
0000120 0 0 0 0
*
0000620 0 1 1 1
0000640 1 1 1 1
*
0001320 1 1 2 2
0001340 2 2 2 2
*
0002020 2 2 2 0
0002040 0 0 0 0
*
0002140 0 0 0 972
0002160
That's the output we expect. Extending this to multiple data types or multiple grids is relatively straightforward.
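For instance, here is a sketch (my adaptation, not something I have run through Tecplot) of what changes for the actual q-file in the question, where Q and most of the header variables are double precision. Per the READ statements at the top, the 15 reals and the integer IGAM all live in one Fortran record, so that record gets a single pair of 124-byte markers, and all of Q's byte counts pick up a factor of 8 instead of 4:

! --- in the rank-0 header section ---
! third header record: 7 doubles + IGAM + 8 doubles = 15*8 + 4 = 124 bytes
call MPI_File_write(fileh, [124], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [refmach, alpha, rey, time, gaminf, beta, tinf], 7, &
                    MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [igam], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [htinf, ht1, ht2, rgas1, rgas2, fsmach, tvref, dtvref], 8, &
                    MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
call MPI_File_write(fileh, [124], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)

! Q's bookends become [jd*kd*ld*nq*8], and the offset to the start of Q is
! 12 + 28 + (4 + 124 + 4) + 4 = 176 bytes

! --- on all ranks ---
! q is declared double precision, and the subarray/view use the matching type
call MPI_Type_create_subarray(4, [jd, kd, ld, nq], [jd, kd, ld, nlocq], &
                              [0, 0, 0, nlocq*rank], MPI_ORDER_FORTRAN, &
                              MPI_DOUBLE_PRECISION, subarray, ierr)
call MPI_Type_commit(subarray, ierr)
call MPI_File_set_view(fileh, offset, MPI_DOUBLE_PRECISION, subarray, &
                       'native', MPI_INFO_NULL, ierr)
call MPI_File_write_all(fileh, q, jd*kd*ld*nlocq, MPI_DOUBLE_PRECISION, &
                        MPI_STATUS_IGNORE, ierr)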