Petsc - combine distributed vector into local vector
I am using PETSc and I want to combine a distributed Vec so that every process has a full copy of the Vec. I have a minimal example that starts from an array of data, constructs an MPI Vec from it, and then attempts to use a VecScatter to combine the vectors from the several processes. When I do this, the local vector only receives the values stored on process 0; it does not receive the information from the other processes. How can I combine the distributed vector to produce a full local vector?
#include <petscvec.h>

double primes[] = {2, 3, 5, 7, 11, 13, 17};
int nprimes = 7;

int main(int argc, char **argv)
{
  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm comm = MPI_COMM_WORLD;
  Vec xpar, xseq;
  PetscInt low, high;
  IS index_set_global, index_set_local;
  const PetscInt *indices;
  VecScatter vc;
  PetscErrorCode ierr;

  // Set up parallel vector
  ierr = VecCreateMPI(comm, PETSC_DETERMINE, nprimes, &xpar); CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(xpar, &low, &high); CHKERRQ(ierr);
  ierr = ISCreateStride(comm, high - low, low, 1, &index_set_global); CHKERRQ(ierr);
  ierr = ISGetIndices(index_set_global, &indices); CHKERRQ(ierr);
  ierr = ISView(index_set_global, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
  ierr = VecSetValues(xpar, high - low, indices, primes + low, INSERT_VALUES); CHKERRQ(ierr);
  ierr = VecAssemblyBegin(xpar); CHKERRQ(ierr);
  ierr = VecAssemblyEnd(xpar); CHKERRQ(ierr);
  ierr = VecView(xpar, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

  // Scatter parallel vector so all processes have the full vector
  ierr = VecCreateSeq(PETSC_COMM_SELF, nprimes, &xseq); CHKERRQ(ierr);
  //ierr = VecCreateMPI(comm, high - low, nprimes, &xseq); CHKERRQ(ierr);
  ierr = ISCreateStride(comm, high - low, 0, 1, &index_set_local); CHKERRQ(ierr);
  ierr = VecScatterCreate(xpar, index_set_local, xseq, index_set_global, &vc); CHKERRQ(ierr);
  ierr = VecScatterBegin(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(vc, xpar, xseq, ADD_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "\nPrinting out scattered vector\n"); CHKERRQ(ierr);
  ierr = VecView(xseq, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
  PetscFinalize();
}
Output:
mpiexec -n 2 ./test
IS Object: 2 MPI processes
type: stride
[0] Index set is permutation
[0] Number of indices in (stride) set 4
[0] 0 0
[0] 1 1
[0] 2 2
[0] 3 3
[1] Number of indices in (stride) set 3
[1] 0 4
[1] 1 5
[1] 2 6
Vec Object: 2 MPI processes
type: mpi
Process [0]
2.
3.
5.
7.
Process [1]
11.
13.
17.
Printing out scattered vector
Printing out scattered vector
Vec Object: 1 MPI processes
type: seq
2.
3.
5.
7.
0.
0.
0.
VecScatterCreateToAll() is exactly what you need:

Creates a vector and a scatter context that copies all vector values to each processor

It is used, for instance, in ksp/.../ex49.c. Lastly, it is implemented in vecmpitoseq.c.
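For reference, here is a minimal sketch of the question's program rewritten around VecScatterCreateToAll(); it relies on the documented signature VecScatterCreateToAll(vin, &ctx, &vout), which creates the sequential destination vector for you, so no index sets are needed:

#include <petscvec.h>

double primes[] = {2, 3, 5, 7, 11, 13, 17};
int nprimes = 7;

int main(int argc, char **argv)
{
  Vec xpar, xseq;
  VecScatter vc;
  PetscInt low, high, i;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  // Build the distributed vector; each rank fills only the rows it owns.
  ierr = VecCreateMPI(MPI_COMM_WORLD, PETSC_DETERMINE, nprimes, &xpar); CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(xpar, &low, &high); CHKERRQ(ierr);
  for (i = low; i < high; i++) {
    ierr = VecSetValue(xpar, i, primes[i], INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = VecAssemblyBegin(xpar); CHKERRQ(ierr);
  ierr = VecAssemblyEnd(xpar); CHKERRQ(ierr);

  // One call creates both the sequential destination vector xseq and
  // the scatter context.
  ierr = VecScatterCreateToAll(xpar, &vc, &xseq); CHKERRQ(ierr);
  ierr = VecScatterBegin(vc, xpar, xseq, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(vc, xpar, xseq, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);

  // Every rank now holds a full copy of the vector in xseq.
  ierr = VecView(xseq, PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr);

  ierr = VecScatterDestroy(&vc); CHKERRQ(ierr);
  ierr = VecDestroy(&xseq); CHKERRQ(ierr);
  ierr = VecDestroy(&xpar); CHKERRQ(ierr);
  return PetscFinalize();
}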
The naming convention was most likely inspired by MPI functions such as MPI_Allgather(), which distributes the gathered data to all processes, whereas MPI_Gather() only collects the data on the specified root process.
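For comparison, a minimal MPI-only sketch of that distinction (the buffers and the one-integer-per-rank payload are made up for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int mine = rank + 1;   // one value contributed per rank
  int all[64];           // assumes at most 64 ranks for this sketch

  // MPI_Gather: only root 0 ends up with the full array.
  MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

  // MPI_Allgather: every rank ends up with the full array, just as
  // VecScatterCreateToAll() gives every rank the full Vec.
  MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

  printf("rank %d holds %d gathered values\n", rank, size);
  MPI_Finalize();
  return 0;
}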