What is the displs argument in MPI_Scatterv?

The displs argument of MPI_Scatterv() is said to be an "integer array (of length group size). Entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i". Say I have the sendcounts argument

int sendcounts[7] = {3, 3, 3, 3, 4, 4, 4};

The way I reason about it, the displs array should always start with the value 0, since the first entry's displacement is 0 relative to sendbuf, and each following entry is the previous displacement plus the previous count. So for my example above, displs should look like this:

int displs[7] = {0, 3, 6, 9, 12, 16, 20};
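
In code form, my understanding is that each displacement is just the running sum of the preceding counts (a quick sketch to check myself):

#include <cstdio>

int main() {
    int sendcounts[7] = {3, 3, 3, 3, 4, 4, 4};
    int displs[7];

    displs[0] = 0;                              // first chunk starts at sendbuf
    for (int i = 1; i < 7; i++)
        displs[i] = displs[i-1] + sendcounts[i-1];

    for (int i = 0; i < 7; i++)
        printf("%d ", displs[i]);               // prints: 0 3 6 9 12 16 20
    printf("\n");
    return 0;
}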

Is this right? I know it's a trivial question, but for some reason the web isn't helping at all. There are no good examples out there, hence my question.

Yes, your reasoning is correct - for contiguous data. The point of the displacements argument of MPI_Scatterv is to also allow strided data, meaning that there are unused gaps of memory in sendbuf between the chunks.

Here is an example for contiguous data. The official documentation actually contains good examples for strided data.
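
As a minimal sketch of the strided case (my own example, not from the documentation; it assumes exactly 4 ranks): every rank receives 2 ints, but consecutive chunks start 3 elements apart, so one element of sendbuf is left unsent between chunks.

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sendbuf[12];                    // only meaningful on root
    for (int i = 0; i < 12; i++)
        sendbuf[i] = i;

    int counts[4], displs[4], recvbuf[2];
    for (int i = 0; i < 4; i++) {
        counts[i] = 2;                  // everyone receives 2 elements...
        displs[i] = 3*i;                // ...taken every 3: a 1-element gap
    }

    // sendbuf[2], sendbuf[5], sendbuf[8], and sendbuf[11] are never sent
    MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                 recvbuf, 2, MPI_INT,
                 0, MPI_COMM_WORLD);

    printf("rank %d got %d %d\n", rank, recvbuf[0], recvbuf[1]);

    MPI_Finalize();
    return 0;
}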

Yes, the displacements give the root information about which items to send to a particular task - the offset of the starting item. So in most simple cases (e.g., where you would use MPI_Scatter but the counts don't divide evenly) this can be calculated immediately from the counts information:

displs[0] = 0;              // offsets into the global array
for (size_t i=1; i<comsize; i++)
    displs[i] = displs[i-1] + counts[i-1];

But it doesn't have to be that way; the only restriction is that the data you send can't overlap. You could just as well count from the back:

displs[0] = globalsize - counts[0];                 
for (size_t i=1; i<comsize; i++)
    displs[i] = displs[i-1] - counts[i];

or any arbitrary order would work as well.
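
For instance (a sketch of my own, assuming 4 ranks and equal counts for simplicity), handing the chunks out in a shuffled order is fine as long as each rank gets a distinct chunk:

int perm[4] = {2, 0, 3, 1};         // rank i takes chunk perm[i]; any permutation works
for (int i = 0; i < 4; i++) {
    counts[i] = localsize;
    displs[i] = perm[i]*localsize;  // distinct chunks, so nothing overlaps
}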

And more generally the calculations can be more complicated, because the types of the send buffer and the receive buffer have to be consistent but not necessarily the same - you often run into this if you're sending slices of multidimensional arrays, for instance.
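
To illustrate (a sketch of my own, assuming exactly 4 ranks): here the columns of a row-major 4x4 char matrix are scattered, one per rank. The send type is a strided column whose extent is resized down to one char, while the receive type is plain MPI_CHAR; the type signatures match (4 chars each) even though the types differ, and displs counts in units of the resized extent:

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 4;                    // matrix size == number of ranks here
    char matrix[N*N];                   // row-major, only meaningful on root
    for (int i = 0; i < N*N; i++)
        matrix[i] = 'a' + i;

    // a column: N chars, each N chars apart; then shrink its extent to 1 char
    MPI_Datatype col, coltype;
    MPI_Type_vector(N, 1, N, MPI_CHAR, &col);
    MPI_Type_create_resized(col, 0, sizeof(char), &coltype);
    MPI_Type_commit(&coltype);

    int counts[N], displs[N];
    for (int i = 0; i < N; i++) {
        counts[i] = 1;                  // one column per rank...
        displs[i] = i;                  // ...starting i *chars* into the matrix
    }

    char mycol[N+1];
    MPI_Scatterv(matrix, counts, displs, coltype,   // send: strided columns
                 mycol, N, MPI_CHAR,                // recv: N contiguous chars
                 0, MPI_COMM_WORLD);
    mycol[N] = '\0';

    printf("rank %d got column %s\n", rank, mycol);

    MPI_Type_free(&coltype);
    MPI_Type_free(&col);
    MPI_Finalize();
    return 0;
}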

As an example of the simple cases, below are the forward and backward cases:

#include <iostream>
#include <vector>
#include "mpi.h"

int main(int argc, char **argv) {
    const int root = 0;             // the processor with the initial global data

    size_t globalsize = 0;          // set properly on root; displs computed from it elsewhere are ignored
    std::vector<char> global;       // only root has this

    const size_t localsize = 2;     // most ranks will have 2 items; one will have localsize+1
    char local[localsize+2];        // everyone has this
    int  mynum;                     // how many items 

    MPI_Init(&argc, &argv); 

    int comrank, comsize;
    MPI_Comm_rank(MPI_COMM_WORLD, &comrank);
    MPI_Comm_size(MPI_COMM_WORLD, &comsize);

    // initialize global vector
    if (comrank == root) {
        globalsize = comsize*localsize + 1;
        for (size_t i=0; i<globalsize; i++) 
            global.push_back('a'+i);
    }

    // initialize local
    for (size_t i=0; i<localsize+1; i++) 
        local[i] = '-';
    local[localsize+1] = '\0';      // NUL-terminate so local prints as a C string

    int counts[comsize];        // how many pieces of data everyone has
    for (size_t i=0; i<comsize; i++)
        counts[i] = localsize;
    counts[comsize-1]++;

    mynum = counts[comrank];
    int displs[comsize];

    if (comrank == 0) 
        std::cout << "In forward order" << std::endl;

    displs[0] = 0;              // offsets into the global array
    for (size_t i=1; i<comsize; i++)
        displs[i] = displs[i-1] + counts[i-1];

    MPI_Scatterv(global.data(), counts, displs, MPI_CHAR, // For root: proc i gets counts[i] MPI_CHARs from displs[i]
                 local, mynum, MPI_CHAR,                  // I'm receiving mynum MPI_CHARs into local
                 root, MPI_COMM_WORLD);                   // Task (root, MPI_COMM_WORLD) is the root

    local[mynum] = '\0';
    std::cout << comrank << " " << local << std::endl;

    std::cout.flush();
    if (comrank == 0) 
        std::cout << "In reverse order" << std::endl;

    displs[0] = globalsize - counts[0];                 
    for (size_t i=1; i<comsize; i++)
        displs[i] = displs[i-1] - counts[i];

    MPI_Scatterv(global.data(), counts, displs, MPI_CHAR, // For root: proc i gets counts[i] MPI_CHARs from displs[i]
                 local, mynum, MPI_CHAR,                  // I'm receiving mynum MPI_CHARs into local
                 root, MPI_COMM_WORLD);                   // Task (root, MPI_COMM_WORLD) is the root

    local[mynum] = '\0';
    std::cout << comrank << " " << local << std::endl;

    MPI_Finalize();
}
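
For reference, I compile and launch it along these lines (wrapper and launcher names vary between MPI distributions, and scatterv.cpp is just a placeholder filename):

mpicxx scatterv.cpp -o scatterv
mpirun -np 4 ./scatterv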

Running gives:

In forward order
0 ab
1 cd
2 ef
3 ghi

In reverse order
0 hi
1 fg
2 de
3 abc