MPI_Gather() the central elements into a global matrix

This is a follow-up to an earlier question. Here is the situation:

id = 0 has this submatrix

|16.000000| |11.000000| |12.000000| |15.000000|
|6.000000| |1.000000| |2.000000| |5.000000|
|8.000000| |3.000000| |4.000000| |7.000000|
|14.000000| |9.000000| |10.000000| |13.000000|
-----------------------

id = 1 has this submatrix

|12.000000| |15.000000| |16.000000| |11.000000|
|2.000000| |5.000000| |6.000000| |1.000000|
|4.000000| |7.000000| |8.000000| |3.000000|
|10.000000| |13.000000| |14.000000| |9.000000|
-----------------------

id = 2 has this submatrix

|8.000000| |3.000000| |4.000000| |7.000000|
|14.000000| |9.000000| |10.000000| |13.000000|
|16.000000| |11.000000| |12.000000| |15.000000|
|6.000000| |1.000000| |2.000000| |5.000000|
-----------------------

id = 3 has this submatrix

|4.000000| |7.000000| |8.000000| |3.000000|
|10.000000| |13.000000| |14.000000| |9.000000|
|12.000000| |15.000000| |16.000000| |11.000000|
|2.000000| |5.000000| |6.000000| |1.000000|
-----------------------

The global matrix:

|1.000000| |2.000000| |5.000000| |6.000000|
|3.000000| |4.000000| |7.000000| |8.000000|
|11.000000| |12.000000| |15.000000| |16.000000|
|-3.000000| |-3.000000| |-3.000000| |-3.000000|

What I want to do is gather only the central elements of each subgrid (the ones not on the boundary) into the global grid, so the global grid should look like this:

 |1.000000| |2.000000| |5.000000| |6.000000|
 |3.000000| |4.000000| |7.000000| |8.000000|
 |9.000000| |10.000000| |13.000000| |14.000000|
 |11.000000| |12.000000| |15.000000| |16.000000|

which is not what I am getting. Here is my code:

float **gridPtr;
float **global_grid;
lengthSubN = N/pSqrt; // N is the dimension of the global grid, pSqrt the square root of the number of processes
MPI_Type_contiguous(lengthSubN, MPI_FLOAT, &rowType);
MPI_Type_commit(&rowType);
if(id == 0) {
    MPI_Gather(&gridPtr[1][1], 1, rowType, global_grid[0], 1, rowType, 0, MPI_COMM_WORLD);
    MPI_Gather(&gridPtr[2][1], 1, rowType, global_grid[1], 1, rowType, 0, MPI_COMM_WORLD);
} else {
    MPI_Gather(&gridPtr[1][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
    MPI_Gather(&gridPtr[2][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
}
...
float** allocate2D(float** A, const int N, const int M) {
    int i;
    float *t0;

    A = malloc(M * sizeof (float*)); /* Allocating pointers */
    if(A == NULL)
        printf("MALLOC FAILED in A\n");
    t0 = malloc(N * M * sizeof (float)); /* Allocating data */
    if(t0 == NULL)
        printf("MALLOC FAILED in t0\n");
    for (i = 0; i < M; i++)
        A[i] = t0 + i * (N);

    return A;
}

EDIT:

This is my attempt without MPI_Gather(), using a subarray type instead:

    MPI_Datatype mysubarray;

    int starts[2] = {1, 1};
    int subsizes[2]  = {lengthSubN, lengthSubN};
    int bigsizes[2]  = {N_glob, M_glob};
    MPI_Type_create_subarray(2, bigsizes, subsizes, starts,
                             MPI_ORDER_C, MPI_FLOAT, &mysubarray);
    MPI_Type_commit(&mysubarray);
    MPI_Isend(&(gridPtr[0][0]), 1, mysubarray, 0, 3, MPI_COMM_WORLD, &req[0]);
    MPI_Type_free(&mysubarray);
    MPI_Barrier(MPI_COMM_WORLD);
    if(id == 0) {
      for(i = 0; i < p; ++i) {
        MPI_Irecv(&(global_grid[i][0]), lengthSubN * lengthSubN, MPI_FLOAT, i, 3, MPI_COMM_WORLD, &req[0]);
      }
    }
    if(id == 0)
            print(global_grid, N_glob, N_glob);

But the result is:

|1.000000| |2.000000| |3.000000| |4.000000|
|5.000000| |6.000000| |7.000000| |8.000000|
|9.000000| |10.000000| |11.000000| |12.000000|
|13.000000| |14.000000| |15.000000| |16.000000|

which is not what I want. I have to find a way to tell the receive that it should place the data differently. So, if I do this:

MPI_Irecv(&(global_grid[0][0]), 1, mysubarray, 0, 3, MPI_COMM_WORLD, &req[0]);

then I get:

|-3.000000| |-3.000000| |-3.000000| |-3.000000|
|-3.000000| |1.000000| |2.000000| |-3.000000|
|-3.000000| |3.000000| |4.000000| |-3.000000|
|-3.000000| |-3.000000| |-3.000000| |-3.000000|
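
Roughly, what I have in mind is something like the following (just an untested sketch; the coords mapping is something I would still have to derive from the rank, and here I simply assume the ranks are laid out row-major on the pSqrt x pSqrt process grid):

    /* Untested sketch: one receive-side subarray per sender whose
     * `starts` places that sender's interior block at the right spot
     * in global_grid.  Assumes a row-major rank layout on the
     * pSqrt x pSqrt process grid. */
    if (id == 0) {
        for (i = 0; i < p; ++i) {
            MPI_Datatype recvsub;
            int coords[2]    = {i / pSqrt, i % pSqrt};
            int rstarts[2]   = {coords[0] * lengthSubN, coords[1] * lengthSubN};
            int rsubsizes[2] = {lengthSubN, lengthSubN};
            int rbigsizes[2] = {N_glob, M_glob};
            MPI_Type_create_subarray(2, rbigsizes, rsubsizes, rstarts,
                                     MPI_ORDER_C, MPI_FLOAT, &recvsub);
            MPI_Type_commit(&recvsub);
            MPI_Recv(&(global_grid[0][0]), 1, recvsub, i, 3,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Type_free(&recvsub);
        }
    }

(I use a blocking MPI_Recv here only to keep the request bookkeeping out of the sketch.)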

I cannot give a complete solution, but I can explain why your original example using MPI_Gather does not work as expected.

With lengthSubN = 2, you define a new datatype of 2 floats that are stored adjacently in memory at this line:

MPI_Type_contiguous(lengthSubN, MPI_FLOAT, &rowType);

Now, let's look at the first MPI_Gather call:

if(id == 0) {
    MPI_Gather(&gridPtr[1][1], 1, rowType, global_grid[0], 1, rowType, 0, MPI_COMM_WORLD);
} else {
    MPI_Gather(&gridPtr[1][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
}

It takes 1 rowType, i.e. 2 adjacent floats starting at element gridPtr[1][1], from every rank. These are the values:

id 0:  1.0   2.0
id 1:  5.0   6.0
id 2:  9.0  10.0
id 3: 13.0  14.0

and places them next to each other in the receive buffer pointed to by global_grid[0]. This pointer actually points to the start of the first row, so the memory is filled with:

 1.0   2.0   5.0   6.0   9.0  10.0  13.0  14.0

However, global_grid has only 4 columns per row, so the last 4 values wrap around into the second row, pointed to by global_grid[1] (*). This may even be undefined behavior. Thus, after this MPI_Gather the contents of global_grid are:

 1.0   2.0   5.0   6.0 
 9.0  10.0  13.0  14.0
-3.0  -3.0  -3.0  -3.0
-3.0  -3.0  -3.0  -3.0

The second MPI_Gather works the same way, but it starts writing at the second row of global_grid:

 3.0   4.0   7.0   8.0  11.0  12.0  15.0  16.0

It therefore overwrites some of the values above, and the result is what you observed:

 1.0   2.0   5.0   6.0 
 3.0   4.0   7.0   8.0
11.0  12.0  15.0  16.0
-3.0  -3.0  -3.0  -3.0
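
To make the two writes concrete, here is a small standalone sketch (plain C, no MPI) that replays the root-side placement of both gathers into one contiguous 4 x 4 buffer pre-filled with -3. The per-rank values are the ones from the tables above, and running it reproduces the matrix you observed:

    /* Sketch: replay the root-side writes of both MPI_Gather calls
     * into one contiguous 4 x 4 buffer (which is what global_grid really is). */
    #include <stdio.h>

    int main(void) {
        float grid[16];
        /* rank i contributes gridPtr[1][1..2] to the first gather
         * and gridPtr[2][1..2] to the second one */
        float row1[4][2] = {{1, 2}, {5, 6}, {9, 10}, {13, 14}};
        float row2[4][2] = {{3, 4}, {7, 8}, {11, 12}, {15, 16}};
        int i, j, r, c;

        for (i = 0; i < 16; ++i) grid[i] = -3.0f;

        /* first MPI_Gather: 8 floats written contiguously, starting at row 0 */
        for (i = 0; i < 4; ++i)
            for (j = 0; j < 2; ++j)
                grid[0 * 4 + i * 2 + j] = row1[i][j];

        /* second MPI_Gather: 8 floats written contiguously, starting at row 1,
         * overwriting half of what the first gather produced */
        for (i = 0; i < 4; ++i)
            for (j = 0; j < 2; ++j)
                grid[1 * 4 + i * 2 + j] = row2[i][j];

        for (r = 0; r < 4; ++r) {
            for (c = 0; c < 4; ++c) printf("%9.6f ", grid[r * 4 + c]);
            printf("\n");
        }
        return 0;
    }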

(*) allocate2D actually allocates contiguous memory for the two-dimensional data buffer.
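
A quick way to convince yourself of that (a sketch that reuses allocate2D from the question):

    /* Sketch: the row pointers returned by allocate2D all point into one
     * flat block, so a long write starting at g[0] spills over into g[1]. */
    #include <assert.h>
    #include <stdio.h>

    float **allocate2D(float **A, const int N, const int M);  /* as defined in the question */

    int main(void) {
        float **g = allocate2D(NULL, 4, 4);   /* 4 columns, 4 rows */
        assert(g[1] == g[0] + 4);             /* row 1 starts right after row 0 */
        assert(g[3] + 4 == g[0] + 16);        /* all 16 floats form one contiguous run */
        printf("rows are contiguous\n");
        return 0;
    }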