MPI derived datatype problem caused by struct padding and a non-blocking communication buffer problem

Hello, I am writing a C++ program in which I want MPI processes to communicate through a derived datatype. However, the receiver does not get the complete message that the sender sends out.

Here is how I build the derived datatype:

// dg_derived_datatype.cpp

#include <mpi.h>
#include "dg_derived_datatype.h"

namespace Hash{

    MPI_Datatype Face_type;
}

// forward declaration so Construct_data_type() can call the helper defined below
void MPI_Face_type();

void Construct_data_type(){

    MPI_Face_type();

}

void MPI_Face_type(){

    const int num = 3; // const so the array sizes below are compile-time constants

    // Number of elements in each block (array of integers)
    int elem_blocklength[num]{2, 1, 5};

    // Byte displacement of each block (array of integers).
    MPI_Aint array_of_offsets[num];
    MPI_Aint intex, charex;
    MPI_Aint lb;
    MPI_Type_get_extent(MPI_INT, &lb, &intex);
    MPI_Type_get_extent(MPI_CHAR, &lb, &charex);

    array_of_offsets[0] = (MPI_Aint) 0;
    array_of_offsets[1] = array_of_offsets[0] + intex * 2;
    array_of_offsets[2] = array_of_offsets[1] + charex;

    MPI_Datatype array_of_types[num]{MPI_INT, MPI_CHAR, MPI_INT};

    // create an MPI datatype
    MPI_Type_create_struct(num, elem_blocklength, array_of_offsets, array_of_types, &Hash::Face_type);  
    MPI_Type_commit(&Hash::Face_type);

}

void Free_type(){

    MPI_Type_free(&Hash::Face_type);    

}

Here I build my derived datatype Hash::Face_type and commit it. Hash::Face_type is used to transfer a vector of my struct face_pack (2 int + 1 char + 5 int).

// dg_derived_datatype.h

#ifndef DG_DERIVED_DATA_TYPE_H
#define DG_DERIVED_DATA_TYPE_H

#include <mpi.h>

struct face_pack{

    int owners_key; 

    int facei; 

    char face_type;

    int hlevel;

    int porderx;

    int pordery; 

    int key;

    int rank;

};

namespace Hash{

    extern MPI_Datatype Face_type;
};

void Construct_data_type();

void Free_type();

#endif
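For reference, this is how the blocklengths {2, 1, 5} of Hash::Face_type are meant to cover the members of face_pack (my own annotation of the struct above):

// block 0: 2 x MPI_INT  -> owners_key, facei
// block 1: 1 x MPI_CHAR -> face_type
// block 2: 5 x MPI_INT  -> hlevel, porderx, pordery, key, rank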

Then in my main program I do:

// dg_main.cpp

#include <iostream>
#include <mpi.h>
#include "dg_derived_datatype.h"
#include <vector>

void Recv_face(int source, int tag, std::vector<face_pack>& recv_face);

int main(){

    // Initialize MPI.
    // some code here.
    // I create a vector of structs, std::vector<face_pack> face_info,
    // to store the info I want the processors to communicate.

    Construct_data_type(); // construct my derived datatype

    MPI_Request request_pre1, request_pre2, request_next1, request_next2;

    // send
    if(num_next > 0){ // if fulfilled, the current processor sends info to the next processor (my_rank + 1)

        std::vector<face_pack> face_info;
        // some code to construct face_info

        // source my_rank, destination my_rank + 1
        MPI_Isend(&face_info[0], num_n, Hash::Face_type, mpi::rank + 1, mpi::rank + 1, MPI_COMM_WORLD, &request_next2);

    }

    // recv
    if(some criteria){ // recv from the former processor (my_rank - 1)

        std::vector<face_pack> recv_face;

        Recv_face(mpi::rank - 1, mpi::rank, recv_face); // recv info from the former processor

    }

    if(num_next > 0){

        MPI_Status status;
        MPI_Wait(&request_next2, &status);

    }

    Free_type();

    // finalize MPI
}

void Recv_face(int source, int tag, std::vector<face_pack>& recv_face){

    MPI_Status status1, status2;

    MPI_Probe(source, tag, MPI_COMM_WORLD, &status1);

    int count;
    MPI_Get_count(&status1, Hash::Face_type, &count);

    recv_face = std::vector<face_pack>(count);

    MPI_Recv(&recv_face[0], count, Hash::Face_type, source, tag, MPI_COMM_WORLD, &status2);
}


The problem is that the receiver sometimes gets incomplete information.

For example, I print out face_info before sending it:

// rank 2
owners_key3658 facei 0 face_type M neighbour 192 n_rank 0
owners_key3658 facei 1 face_type L neighbour 66070 n_rank 1
owners_key3658 facei 1 face_type L neighbour 76640 n_rank 1
owners_key3658 facei 2 face_type M neighbour 2631 n_rank 0
owners_key3658 facei 3 face_type L neighbour 4953 n_rank 1
...
owners_key49144 facei 1 face_type M neighbour 844354 n_rank 2
owners_key49144 facei 1 face_type M neighbour 913280 n_rank 2
owners_key49144 facei 2 face_type L neighbour 41619 n_rank 1
owners_key49144 facei 3 face_type M neighbour 57633 n_rank 2

This is correct.

But on the receiver side, I print out the message it received:

owners_key3658 facei 0 face_type M neighbour 192 n_rank 0
owners_key3658 facei 1 face_type L neighbour 66070 n_rank 1
owners_key3658 facei 1 face_type L neighbour 76640 n_rank 1
owners_key3658 facei 2 face_type M neighbour 2631 n_rank 0
owners_key3658 facei 3 face_type L neighbour 4953 n_rank 1

... // at the beginning it is fine; however, towards the end it gets messed up

owners_key242560 facei 2 face_type ! neighbour 2 n_rank 2
owners_key217474 facei 2 face_type ! neighbour 2 n_rank 2
owners_key17394 facei 2 face_type ! neighbour 2 n_rank 2
owners_key216815 facei 2 face_type ! neighbour 2 n_rank 2

Clearly it loses the face_type information, which is a char. As far as I know, std::vector guarantees contiguous memory here, so I am not sure which part of my derived MPI datatype is wrong. The message passing sometimes works and sometimes does not.

OK, I have more or less figured out my problem. There are actually two issues.

The first is the use of MPI_Type_get_extent(). Since a C/C++ struct can be padded by your compiler, it is fine if you only send one element, but if you send multiple elements the trailing padding can cause problems.
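To see the padding concretely, one can compare the hand-computed offsets with what the compiler actually uses, for example with offsetof() (a minimal sketch, not part of my program; the numbers in the comments assume a typical platform where int is 4 bytes and 4-byte aligned):

// check_padding.cpp -- standalone sketch
#include <cstddef>   // offsetof
#include <cstdio>
#include "dg_derived_datatype.h"

int main(){

    // offsets computed from the MPI extents above: 0, 8, 9
    // offsets the compiler actually uses (typically): 0, 8, 12,
    // because 3 padding bytes follow the char so the next int is aligned
    std::printf("owners_key %zu\n", offsetof(face_pack, owners_key));
    std::printf("face_type  %zu\n", offsetof(face_pack, face_type));
    std::printf("hlevel     %zu\n", offsetof(face_pack, hlevel));

    // typically 32, not 2*4 + 1 + 5*4 = 29, because of trailing padding;
    // so consecutive elements of a std::vector<face_pack> do not start where
    // the hand-computed extent says they do
    std::printf("sizeof(face_pack) %zu\n", sizeof(face_pack));

    return 0;
}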

Therefore, a safer and more portable way to define the derived datatype is to use MPI_Get_address(). Here is how I do it now:

// generate the derived datatype
void MPI_Face_type(){

    const int num = 3;

    int elem_blocklength[num]{2, 1, 5};

    MPI_Datatype array_of_types[num]{MPI_INT, MPI_CHAR, MPI_INT};

    MPI_Aint array_of_offsets[num];
    MPI_Aint baseadd, add1, add2;

    std::vector<face_pack> myface(1); // one dummy element to take member addresses from (needs #include <vector>)

    MPI_Get_address(&(myface[0].owners_key), &baseadd);
    MPI_Get_address(&(myface[0].face_type), &add1);
    MPI_Get_address(&(myface[0].hlevel), &add2);

    array_of_offsets[0] = 0;
    array_of_offsets[1] = add1 - baseadd;
    array_of_offsets[2] = add2 - baseadd;

    MPI_Type_create_struct(num, elem_blocklength, array_of_offsets, array_of_types, &Hash::Face_type);  

    // check that the extent is correct
    MPI_Aint lb, extent;
    MPI_Type_get_extent(Hash::Face_type, &lb, &extent); 
    if(extent != sizeof(myface[0])){
        MPI_Datatype old = Hash::Face_type;
        MPI_Type_create_resized(old, 0, sizeof(myface[0]), &Hash::Face_type);
        MPI_Type_free(&old);
    }
    MPI_Type_commit(&Hash::Face_type);
}
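An equivalent way to get the same offsets without creating a dummy element is offsetof() from <cstddef>. This is only a sketch of the alternative (MPI_Face_type_offsetof is a hypothetical name); the unconditional MPI_Type_create_resized forces the extent to sizeof(face_pack), so element i of a std::vector<face_pack> starts at i * sizeof(face_pack):

// sketch: same datatype built with offsetof() instead of MPI_Get_address()
#include <cstddef>   // offsetof

void MPI_Face_type_offsetof(){

    const int num = 3;

    int elem_blocklength[num]{2, 1, 5};
    MPI_Datatype array_of_types[num]{MPI_INT, MPI_CHAR, MPI_INT};

    MPI_Aint array_of_offsets[num]{
        (MPI_Aint) offsetof(face_pack, owners_key),  // block of 2 ints
        (MPI_Aint) offsetof(face_pack, face_type),   // 1 char
        (MPI_Aint) offsetof(face_pack, hlevel)       // block of 5 ints
    };

    MPI_Datatype tmp;
    MPI_Type_create_struct(num, elem_blocklength, array_of_offsets, array_of_types, &tmp);

    // force the extent to match the C++ struct, trailing padding included
    MPI_Type_create_resized(tmp, 0, (MPI_Aint) sizeof(face_pack), &Hash::Face_type);
    MPI_Type_free(&tmp);

    MPI_Type_commit(&Hash::Face_type);
}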

The second issue is the use of the non-blocking send MPI_Isend(). After I changed the non-blocking send to a blocking send, the program ran correctly.
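Concretely, the change is just replacing the MPI_Isend/MPI_Wait pair with MPI_Send, keeping the same arguments as my earlier call (a sketch; num_n and mpi::rank come from the elided parts of my code):

// blocking version of the send in dg_main.cpp
if(num_next > 0){

    std::vector<face_pack> face_info;
    // some code to construct face_info

    // MPI_Send returns only once the buffer can safely be reused,
    // so no MPI_Wait is needed afterwards
    MPI_Send(&face_info[0], num_n, Hash::Face_type, mpi::rank + 1, mpi::rank + 1, MPI_COMM_WORLD);
}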

The relevant (non-blocking) part of my program looked like this:

if(criteria1){

    // form the vector using my derived datatype
    std::vector<derived_type> my_vector;

    // use MPI_Isend to send the vector to the target rank
    MPI_Isend(... my_vector ...);

}

if(criteria2){

    // need to recv the message
    MPI_Recv(...);
}

if(criteria1){

    // the sender now needs to make sure the message has arrived
    MPI_Wait(...);
}

Although I used MPI_Wait, the receiver did not get the complete message. I checked the man page of MPI_Isend(), which says:

A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called until the send completes.

But I don't think I modified the send buffer? Or could it be that the send buffer does not have enough space to store the information to be sent? In my understanding, a non-blocking send works like this: the sender puts the message into its own buffer and delivers it to the target rank once that rank reaches MPI_Recv. So could it be that the sender's buffer ran out of space to store the messages before they were sent out? Please correct me if I am wrong.
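To check my understanding of the rule quoted above, this is how I think the non-blocking pattern is supposed to look: the buffer handed to MPI_Isend stays alive and untouched until the matching MPI_Wait returns (a sketch only, with the hypothetical names criteria1/criteria2/target_rank/tag from the pseudocode above):

// sketch: lifetime of a non-blocking send buffer
std::vector<face_pack> my_vector;   // declared so that it outlives the MPI_Wait below
MPI_Request request;

if(criteria1){

    // fill my_vector ...

    MPI_Isend(&my_vector[0], (int) my_vector.size(), Hash::Face_type,
              target_rank, tag, MPI_COMM_WORLD, &request);
}

if(criteria2){

    // the matching receive can happen here
}

if(criteria1){

    // my_vector must not be modified, resized, or destroyed between
    // the MPI_Isend and the completion of this MPI_Wait
    MPI_Wait(&request, MPI_STATUS_IGNORE);
}

// only after MPI_Wait completes is it safe to change or free my_vector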