Solving a large system of linear equations using MPI+CUDA and PETSc distributed arrays

I want to use the PETSc library in my own program to solve a large system of linear equations distributed across processes. I also want to use the available GPU resources for this purpose. I am using a structured grid for the discrete representation of the 3D computational domain, so it is preferable to use PETSc distributed arrays (DMDA) to avoid extra data transfers between processes.

I have configured PETSc using the following configure line: ./configure --prefix=/usr/local/petsc --with-mpi=1 --with-cuda=1 --with-cusp=1 --with-cusp-include=/usr/local/cuda/include/cusp/ --with-cusp-lib=

I then installed it to /usr/local/petsc.

Now I am trying to create a DMDA object in a simple test program:

#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>

#include "cuda.h"
#include "mpi.h"

/* PETSc headers */
#include "petscsys.h"
#include "petscksp.h"
#include "petscdmda.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    PetscInitialize(&argc, &argv, NULL, NULL);

    DM da;
    Vec x, b;                /* approximate solution, right-hand side */
    Mat A;                   /* linear system matrix */ 
    KSP ksp;                 /* linear solver context */
    KSPType ksptype;
    PC pc;
    PCType pctype;
    PetscErrorCode ierr;

    PetscInt Nx = 100;
    PetscInt Ny = 100;
    PetscInt Nz = 100;

    PetscInt NPx = 1;
    PetscInt NPy = 1;
    PetscInt NPz = 1;

    /* 3D DMDA: Nx x Ny x Nz global grid on an NPx x NPy x NPz process grid, 1 DOF per node, stencil width 1 */
    ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 
                        Nx, Ny, Nz, NPx, NPy, NPz, 1, 1, NULL, NULL, NULL, &da); CHKERRQ(ierr);

    ierr = DMSetMatType(da, MATMPIAIJ); CHKERRQ(ierr);

    /* Create distributed matrix object according to DA */
    ierr = DMCreateMatrix(da, &A); CHKERRQ(ierr);

    /* Initialize all matrix entries to zero */
    /*
    ierr = MatZeroEntries(A); CHKERRQ(ierr);
    */

    fprintf(stdout, "All was done.\n");

    PetscFinalize();
    MPI_Finalize(); 

    return 0;
}

But when I run it, I get the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: DM can not create LocalToGlobalMapping
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.4-1919-g73530f8  GIT Date: 2016-11-09 03:25:31 +0000
[0]PETSC ERROR: Configure options --prefix=/usr/local/petsc --with-mpi=1 --with-cuda=1 --with-cusp=1 --with-cusp-include=/usr/local/cuda/include/cusp/ --with-cusp-lib=
[0]PETSC ERROR: #1 DMGetLocalToGlobalMapping() line 986 in ~/petsc/src/dm/interface/dm.c
[0]PETSC ERROR: #2 DMCreateMatrix_DA_3d_MPIAIJ() line 1051 in ~/petsc/src/dm/impls/da/fdda.c
[0]PETSC ERROR: #3 DMCreateMatrix_DA() line 760 in ~/petsc/src/dm/impls/da/fdda.c
[0]PETSC ERROR: #4 DMCreateMatrix() line 1201 in ~/petsc/src/dm/interface/dm.c
[0]PETSC ERROR: #5 main() line 47 in petsc_test.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint@mcs.anl.gov----------

What is wrong with this simple code?

Update: output of the uname -a command: Linux PC 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.

The Open MPI implementation of the MPI standard is used.

Starting from some version, one should explicitly call the DMSetUp() routine after DMDACreate3d(). See here for details.
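
For reference, here is a minimal sketch of the corrected creation sequence; the DMSetFromOptions() call is not in the original code and is only an optional addition so that the DMDA also picks up run-time options:

    ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_STAR,
                        Nx, Ny, Nz, NPx, NPy, NPz, 1, 1, NULL, NULL, NULL, &da); CHKERRQ(ierr);

    /* optional: let the DMDA pick up command-line options */
    ierr = DMSetFromOptions(da); CHKERRQ(ierr);

    /* required in recent PETSc versions: finish setting up the DMDA before it is used */
    ierr = DMSetUp(da); CHKERRQ(ierr);

    ierr = DMSetMatType(da, MATMPIAIJ); CHKERRQ(ierr);

    /* now the LocalToGlobalMapping exists and the matrix can be created */
    ierr = DMCreateMatrix(da, &A); CHKERRQ(ierr);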
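
For completeness, the KSP and Vec objects declared in the test program would typically be used along these lines once the DMDA is set up; this is only a sketch of the standard PETSc solve pattern, not code from the question, and the run-time options mentioned in the comments are just examples:

    /* vectors laid out according to the DMDA */
    ierr = DMCreateGlobalVector(da, &x); CHKERRQ(ierr);
    ierr = VecDuplicate(x, &b); CHKERRQ(ierr);

    /* ... fill A (e.g. with MatSetValuesStencil()) and b here ... */

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);  /* e.g. -ksp_type cg -pc_type jacobi */
    ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);

    /* clean up */
    ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
    ierr = VecDestroy(&x); CHKERRQ(ierr);
    ierr = VecDestroy(&b); CHKERRQ(ierr);
    ierr = MatDestroy(&A); CHKERRQ(ierr);
    ierr = DMDestroy(&da); CHKERRQ(ierr);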