MPI_Scatter & MPI_Bcast solution in one application. How do I get each node to print its partition?

I am new to MPI. This application involves an implementation of MPI_Bcast and MPI_Scatter. The requirement is that the root should first broadcast the partition size to the nodes using MPI_Bcast, and then scatter a portion of the array to each node. My root works fine, but the nodes do not receive the array values, so the average calculation comes out wrong. Below is the code I have so far:

/** includes **/
#include <iostream>
#include <cstdlib>   // for rand
#include <mpi.h>

// function that will implement the coordinator job of this application
void coordinator(int world_size) {

    std::cout << " coordinator rank [0] starting " << std::endl;

    // generate 40 random integers and store them in an array

    int values[40];
    for (unsigned int i = 0; i < 40; i++){
        values[i] = rand() % 10;
        std::cout << values[i] << ", ";
        if (i % 10 == 9) std::cout << std::endl;
    }

    // determine the size of each partition by dividing 40 by the world size
    // it is imperative that world_size divides this evenly

    int partition_size = 40 / world_size;
    std::cout << " coordinator rank [0] partition size is " << partition_size  << "\n" << std::endl;

    // broadcast the partition size to each node so they can set up memory as appropriate

    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << " coordinator rank [0] broadcasted partition size\n" << std::endl;

    // generate an average for our partition

    int total = 0;
    for (unsigned int i = 0; i < (40 / world_size); i++)
        total += values[i];
    float average = (float)total / (40 / world_size);
    std::cout << " coordinator rank [0] average is " << average << "\n" << std::endl;

    // call a reduce operation to get the total average and then divide that by the world size

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    std::cout << " total average is " << total_average / world_size << std::endl;
}
// function that will implement the participant job of this application

void participant(int world_rank, int world_size) {

    std::cout << " participant rank [" << world_rank << "] starting" << std::endl;

    // get the partition size from the root and allocate memory as necessary

    int partition_size = 0;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << " participant rank [" << world_rank << "] recieved partition size of " <<
        partition_size << std::endl;

    // allocate the memory for our partition

    int *partition = new int[partition_size];

    // generate an average for our partition

    int total = 0;
    for (unsigned int i = 0; i < partition_size; i++)
        total += partition[i];
    float average = (float)total / partition_size;
    std::cout << " participant rank [" << world_rank << "] average is " << average << std::endl;

    // call a reduce operation to get the total average and then divide that by the world size

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    // as we are finished with the memory we should free it

    delete[] partition;
}

int main(int argc, char** argv) {

    // initialise the MPI library

    MPI_Init(NULL, NULL);


    // determine the world size
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // determine our rank in the world

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // print out the rank and size

    std::cout << " rank [" << world_rank << "] size [" << world_size << "]" << std::endl;

    // if we have a rank of zero then we are the coordinator. if not we are a participant
    // in the task

    if (world_rank == 0){
        coordinator(world_size);
    } 
    else{
        participant(world_rank, world_size);
    }

    int *values = new int[40];
    int *partition_size = new int[40 / world_size];

    // run the scatter operation and then display the contents of all 4 nodes

    MPI_Scatter(values, 40 / world_size, MPI_INT, partition_size, 40 / world_size,
        MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << "rank " << world_rank << " partition: ";
    for (unsigned int i = 0; i < 40 / world_size; i++)
        std::cout << partition_size[i] << ", ";
    std::cout << std::endl;

    // finalise the MPI library
    MPI_Finalize();

}

This is what I get after running the code.

I need to get this:

1, 7, 4, 0, 9, 4, 8, 8, 2, 4,

5, 5, 1, 7, 1, 1, 5, 2, 7, 6,

1, 4, 2, 3, 2, 2, 1, 6, 8, 5,

7, 6, 1, 8, 9, 2, 7, 9, 5, 4,

but instead I get this:

rank 0 partition: -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451,

rank 3 partition: -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451,

rank 2 partition: -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451,

rank 1 partition: -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451, -842150451,

You are scattering an uninitialised array of data:

int *values = new int[40];
int *partition_size = new int[40 / world_size];

// values is never initialised

MPI_Scatter(values, 40 / world_size, MPI_INT, partition_size, 40 / world_size,
    MPI_INT, 0, MPI_COMM_WORLD);

-842150451 is 0xCDCDCDCD, the value the Microsoft CRT fills newly allocated memory with in debug mode (in release mode the memory contents are simply left as they are after allocation).
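
Note that MPI_Scatter only reads the send buffer on the root rank, so it only needs to be filled there; the other ranks can pass a null send pointer. A minimal sketch of that pattern (the buffer names are illustrative and assume world_rank/world_size were obtained from MPI_Comm_rank/MPI_Comm_size):

int chunk = 40 / world_size;        // elements per rank
int *send_buf = nullptr;            // significant only on rank 0
if (world_rank == 0) {
    send_buf = new int[40];
    for (int i = 0; i < 40; i++)
        send_buf[i] = rand() % 10;  // fill BEFORE calling MPI_Scatter
}
int *recv_buf = new int[chunk];     // every rank receives its slice here
MPI_Scatter(send_buf, chunk, MPI_INT,
    recv_buf, chunk, MPI_INT, 0, MPI_COMM_WORLD);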

You have to place the calls to MPI_Scatter in the respective coordinator/participant functions.
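
For example, the two functions could be reworked roughly like this (a sketch only, reusing the names from your code; the unused scatter block at the end of main would then be removed):

void coordinator(int world_size) {
    // fill the array on the root
    int values[40];
    for (int i = 0; i < 40; i++)
        values[i] = rand() % 10;

    // broadcast the partition size so the participants can allocate memory
    int partition_size = 40 / world_size;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    // scatter the *initialised* array; the root keeps its own slice in `partition`
    int *partition = new int[partition_size];
    MPI_Scatter(values, partition_size, MPI_INT,
        partition, partition_size, MPI_INT, 0, MPI_COMM_WORLD);

    // print the local slice
    std::cout << " rank 0 partition: ";
    for (int i = 0; i < partition_size; i++)
        std::cout << partition[i] << ", ";
    std::cout << std::endl;

    // average the local slice and reduce as before
    int total = 0;
    for (int i = 0; i < partition_size; i++)
        total += partition[i];
    float average = (float)total / partition_size;

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    std::cout << " total average is " << total_average / world_size << std::endl;

    delete[] partition;
}

void participant(int world_rank, int world_size) {
    // receive the partition size and allocate the local buffer
    int partition_size = 0;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int *partition = new int[partition_size];

    // receive this rank's slice; the send arguments are ignored on non-root ranks
    MPI_Scatter(nullptr, 0, MPI_INT,
        partition, partition_size, MPI_INT, 0, MPI_COMM_WORLD);

    // print the local slice
    std::cout << " rank " << world_rank << " partition: ";
    for (int i = 0; i < partition_size; i++)
        std::cout << partition[i] << ", ";
    std::cout << std::endl;

    // compute the local average and take part in the reduction
    int total = 0;
    for (int i = 0; i < partition_size; i++)
        total += partition[i];
    float average = (float)total / partition_size;

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    delete[] partition;
}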