How to add all the values inside a 2D array using MPI

I am trying to build a program in C that works with a multidimensional array using MPI (it's an assignment).

The program below runs, but two lines of the output contain wrong values. a is a multidimensional array, and I did not put any 0 values in it. Yet the second output line is partial process: values are 0 and 0. Why does it print 0 values when there are no 0 values in my a array?

Here is my basic program:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
// size of array
#define n 6

int a[6][2] = { {2,3},{51,55},{88,199},{335,34534},{678,683},{98,99} };

// Temporary array for slave process
int a2[1000][2];

int main(int argc, char* argv[])
{

    int pid, np,
        elements_per_process,
        n_elements_recieved;
    // np -> no. of processes
    // pid -> process id

    MPI_Status status;

    // Creation of parallel processes
    MPI_Init(&argc, &argv);

    // find out process ID,
    // and how many processes were started
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    // master process
    if (pid == 0) {

        int index, i;
        elements_per_process = n / np;

        // check if more than 1 processes are run
        if (np > 1) {
            // distributes the portion of array
            // to child processes to calculate
            // their partial sums
            for (i = 1; i < np - 1; i++) {
                index = i * elements_per_process;

                MPI_Send(&elements_per_process,
                    1, MPI_INT, i, 0,
                    MPI_COMM_WORLD);
                MPI_Send(&a[index],
                    elements_per_process,
                    MPI_INT, i, 0,
                    MPI_COMM_WORLD);
            }
            // last process adds remaining elements
            index = i * elements_per_process;
            int elements_left = n - index;

            MPI_Send(&elements_left,
                1, MPI_INT,
                i, 0,
                MPI_COMM_WORLD);
            MPI_Send(&a[index],
                elements_left,
                MPI_INT, i, 0,
                MPI_COMM_WORLD);
        }

        // master process add its own sub array
        for (i = 0; i < elements_per_process; i++)
            printf("master process: values are %d and %d\n", a[i][0], a[i][1]);

        // collects partial sums from other processes
        int tmp;
        for (i = 1; i < np; i++) {
            MPI_Recv(&tmp, 1, MPI_INT,
                MPI_ANY_SOURCE, 0,
                MPI_COMM_WORLD,
                &status);
            int sender = status.MPI_SOURCE;
        }

    }
    // slave processes
    else {
        MPI_Recv(&n_elements_recieved,
            1, MPI_INT, 0, 0,
            MPI_COMM_WORLD,
            &status);

        // stores the received array segment
        // in local array a2
        MPI_Recv(&a2, n_elements_recieved,
            MPI_INT, 0, 0,
            MPI_COMM_WORLD,
            &status);

        // calculates its partial sum
        int useless_fornow = -1;
        for (int i = 0; i < n_elements_recieved; i++) {
            printf("partial process: values are %d and %d \n", a2[i][0], a2[i][1]);
        }
        // sends the partial sum to the root process
        MPI_Send(&useless_fornow, 1, MPI_INT,
            0, 0, MPI_COMM_WORLD);
    }

    // cleans up all MPI state before exit of process
    MPI_Finalize();

    return 0;
}

Here is the output:

partial process: values are 678 and 683

partial process: values are 0 and 0

master process: values are 2 and 3

master process: values are 51 and 55

partial process: values are 88 and 199

partial process: values are 0 and 0

I ran it with 3 processes using this command: mpiexec.exe -n 3 Project1.exe

The master sends &a[index] to the other processes, namely:

  • for process 1, index is 2, so the master is sending {88,199};
  • for process 2, index is 4, so the master is sending {678,683}.

Note that elements_per_process counts rows, but the count argument of MPI_Send is in MPI_INT units, so each slave is actually sent only 2 ints, i.e., a single row. So, to send the intended elements, you need to fix the index and count calculation.

The second MPI_Recv:

MPI_Recv(&a2, n_elements_recieved, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

You specify that the received elements are to be copied to &a2, i.e., the start of the 2D array a2. You also state that you expect to receive n_elements_recieved elements. So the master sends an array to each process, and each process expects to receive an array; so far, so good. The problem is in the logic that prints the received data, namely:

for (int i = 0; i < n_elements_recieved; i++) {
    printf("partial process: values are %d and %d \n", a2[i][0], a2[i][1]);
}

You are printing two values per iteration as if you had received a 2D array, but what you received is a 1D sequence of n_elements_recieved ints, not a 2D array with that many rows.

In my opinion, you can simply use the following approach.

Each process first receives the total number of elements it will get in the next MPI_Recv call:

MPI_Recv(&n_elements_recieved, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

Then they allocate an array of that size:

int *tmp = malloc(sizeof(int) * n_elements_recieved);

Then they receive the data:

MPI_Recv(tmp, n_elements_recieved, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

And finally, they print all the elements in the array:

for(int i = 0; i < n_elements_recieved; i++)
   printf("partial process: values are %d \n", tmp[i]);

If you want the master process to send the entire 2D array to all the other processes, you can use MPI_Bcast:

Broadcasts a message from the process with rank "root" to all other processes of the communicator

You can take advantage of the fact that a 2D array is laid out contiguously in memory and perform a single MPI_Bcast to broadcast the whole 2D array, which greatly simplifies the code, as you can see:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    int pid, np;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    int rows = (pid == 0) ? 6 : 0;
    int cols = (pid == 0) ? 2 : 0;    
    MPI_Bcast(&rows, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&cols, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("%d, %d\n",rows, cols);
 
    int a[6][2];
    if (pid == 0) {
        // just simulating some data
        int tmp[6][2] = { {2,3},{51,55},{88,199},{335,34534},{678,683},{98,99} };
        for (int i = 0; i < 6; i++)
            for (int j = 0; j < 2; j++)
                a[i][j] = tmp[i][j];
    }
    MPI_Bcast(&a, rows * cols, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    MPI_Finalize();

    return 0;
}

Instead of 3 MPI_Send/MPI_Recv calls per process, you only need 3 MPI_Bcast calls for all the processes.