Asking about MPI_Reduce and MPI_Bcast in MPI (MPICH)

I am new to MPI. My program computes the sum from 1 to 100, but it returns an error and I don't understand why. I am learning MPI_Reduce and MPI_Bcast, so I try to use them as much as possible. Here is my program.

#include <mpi.h>
#include <stdio.h>


int main (int argc, char * argv[])
{
    int rank, size, root = 0;
    int i,j,k,S[100],n=100,p, sum;

    MPI_Init( &argc, &argv ); 
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    //get n
    if(rank==root){
        n=100;
    }
    //send data to all process
    MPI_Bcast( &n, n, MPI_INT,root, MPI_COMM_WORLD );

    p=n/rank;
    while(p>0){
        for(i=1;i<p;i++){
            for(k=0;k<rank;k++){
                S[k]=i+i*rank;
            }
        }
        p=p/2;
    }
    //get data from all process
    MPI_Reduce( S, &sum, n, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD );

    if(rank==root){
        printf("Gia tri cua S trong root: %d", sum);
    }

    MPI_Finalize();
    return 0;
}

This is my error:

job aborted:
[ranks] message

[0] process exited without calling finalize

[1-4] terminated

---- error analysis -----

[0] on DESKTOP-GFD7NIE
mpi.exe ended prematurely and may have crashed. exit code 0xc0000094

---- error analysis -----

There are also some things about MPI that I don't understand, and I hope you can help me:
1) If I have code like this:

//include something
int main(int argc, char * argv[]){
    MPI_Init( &argc, &argv );
    int rank, size, root = 0;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    //code 1
    if(rank==0){
        //code 2
    }
    MPI_Finalize();
    return 0;
}

That is, every process will execute code 1, and only rank 0 will execute code 2. Is that right?

2) According to this, the function MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm) has a recvbuf parameter. I don't quite understand it: is that the variable that will receive the data from sendbuf, or something else?

Thanks for your help.

First, why your program crashes: on rank 0 the line p=n/rank divides by zero, and exit code 0xc0000094 is Windows' STATUS_INTEGER_DIVIDE_BY_ZERO. There are also count bugs: MPI_Bcast( &n, n, MPI_INT, ... ) broadcasts n integers starting at the address of the single int n (the count should be 1), and MPI_Reduce( S, &sum, n, ... ) writes n integers into the single int sum.

I modified your program to compute the sum from 0 to 9 (i.e. 45). Compile it with mpic++ and run it with 2 processes to start; un-commenting the "cout" lines makes it easier to see which rank is doing what.

localsum is the per-rank partial sum: one integer on every rank.

globalsum is a single integer on the master process that receives the combined result.

#include <mpi.h>
#include <iostream>
using namespace std;

int main (int argc, char * argv[])
{
    int rank, size, root = 0;
    int S[10], n = 10;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    //get n
    if(!rank){
        n=10;
    }
    //send data to all processes: one int, not n of them
    MPI_Bcast( &n, 1, MPI_INT, root, MPI_COMM_WORLD );

    int localsum = 0, globalsum = 0;
    //each rank handles the indices i = rank, rank+size, rank+2*size, ...
    for (int i = rank; i < n; i += size ) {
        S[i] = i;
        localsum += S[i];
        // cout << rank << " " << S[i] << endl;
    }

    // cout << localsum << endl;

    //get data from all processes: sum each rank's localsum into globalsum on root
    MPI_Reduce( &localsum, &globalsum, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD );

    if(!rank){
        cout << "Globalsum: " << globalsum << endl;
    }

    MPI_Finalize();
    return 0;
}
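
As for your two questions: 1) correct — MPI launches the same program on every process, so every rank executes code 1, and only rank 0 takes the rank==0 branch and executes code 2. 2) Yes: on the root rank, recvbuf receives the result of combining every rank's sendbuf with op (here MPI_SUM), element by element when count > 1; on all other ranks recvbuf is ignored. Below is a minimal sketch of the count > 1 case (the buffer length 3 and the values in it are just my example, not from your code):

#include <mpi.h>
#include <iostream>
using namespace std;

int main (int argc, char * argv[])
{
    MPI_Init( &argc, &argv );
    int rank, size, root = 0;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    //"code 1": every rank runs this and fills its own sendbuf
    int sendbuf[3] = { rank, 10*rank, 100*rank };
    int recvbuf[3] = { 0, 0, 0 };

    //element-wise reduction: on root, recvbuf[i] = sum over all ranks of sendbuf[i]
    MPI_Reduce( sendbuf, recvbuf, 3, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD );

    if(rank==root){
        //"code 2": only rank 0 runs this
        cout << recvbuf[0] << " " << recvbuf[1] << " " << recvbuf[2] << endl;
    }

    MPI_Finalize();
    return 0;
}

Run with 2 processes it prints 1 10 100: each slot of recvbuf received the sum of that slot across both ranks.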