MPI (Summation)
I am writing a program that computes the sum of every number up to 1000, i.e. 1+2+3+4+5+....+1000. First, I assign the summation jobs to 10 processors: processor 0 gets 1-100, processor 1 gets 101-200, and so on. The partial sums are stored in an array.
After all the summations have been done in parallel, the processes send their values to processor 0 (processor 0 receives the values using non-blocking send/recv), and processor 0 adds up all the values and displays the result.
Here is the code:
#include <mpi.h>
#include <iostream>
using namespace std;

int summation(int, int);

int main(int argc, char ** argv)
{
    int * array;
    int total_proc;
    int curr_proc;
    int limit = 0;
    int partial_sum = 0;
    int upperlimit = 0, lowerlimit = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &total_proc);
    MPI_Comm_rank(MPI_COMM_WORLD, &curr_proc);
    MPI_Request send_request, recv_request;

    /* checking if 1000 is divisible by number of procs, else quit */
    if(1000 % total_proc != 0)
    {
        MPI_Finalize();
        if(curr_proc == 0)
            cout << "**** 1000 is not divisible by " << total_proc << " ...quitting..." << endl;
        return 0;
    }

    /* number of partial summations */
    limit = 1000/total_proc;
    array = new int [total_proc];

    /* assigning jobs to processors */
    for(int i = 0; i < total_proc; i++)
    {
        if(curr_proc == i)
        {
            upperlimit = upperlimit + limit;
            lowerlimit = (upperlimit - limit) + 1;
            partial_sum = summation(upperlimit, lowerlimit);
            array[i] = partial_sum;
        }
        else
        {
            upperlimit = upperlimit + limit;
            lowerlimit = (upperlimit - limit) + 1;
        }
    }

    cout << "** Partial Sum From Process " << curr_proc << " is " << array[curr_proc] << endl;

    /* send and receive - non blocking */
    for(int i = 1; i < total_proc; i++)
    {
        if(curr_proc == i) /* (i = current processor) */
        {
            MPI_Isend(&array[i], 1, MPI_INT, 0, i, MPI_COMM_WORLD, &send_request);
            cout << "-> Process " << i << " sent " << array[i] << " to Process 0" << endl;

            MPI_Irecv(&array[i], 1, MPI_INT, i, i, MPI_COMM_WORLD, &recv_request);
            //cout << "<- Process 0 received " << array[i] << " from Process " << i << endl;
        }
    }

    MPI_Finalize();

    if(curr_proc == 0)
    {
        for(int i = 1; i < total_proc; i++)
            array[0] = array[0] + array[i];
        cout << "Sum is " << array[0] << endl;
    }

    return 0;
}

int summation(int u, int l)
{
    int result = 0;
    for(int i = l; i <= u; i++)
        result = result + i;
    return result;
}
Output:
** Partial Sum From Process 0 is 5050
** Partial Sum From Process 3 is 35050
-> Process 3 sent 35050 to Process 0
<- Process 0 received 35050 from Process 3
** Partial Sum From Process 4 is 45050
-> Process 4 sent 45050 to Process 0
<- Process 0 received 45050 from Process 4
** Partial Sum From Process 5 is 55050
-> Process 5 sent 55050 to Process 0
<- Process 0 received 55050 from Process 5
** Partial Sum From Process 6 is 65050
** Partial Sum From Process 8 is 85050
-> Process 8 sent 85050 to Process 0
<- Process 0 received 85050 from Process 8
-> Process 6 sent 65050 to Process 0
** Partial Sum From Process 1 is 15050
** Partial Sum From Process 2 is 25050
-> Process 2 sent 25050 to Process 0
<- Process 0 received 25050 from Process 2
<- Process 0 received 65050 from Process 6
** Partial Sum From Process 7 is 75050
-> Process 1 sent 15050 to Process 0
<- Process 0 received 15050 from Process 1
-> Process 7 sent 75050 to Process 0
<- Process 0 received 75050 from Process 7
** Partial Sum From Process 9 is 95050
-> Process 9 sent 95050 to Process 0
<- Process 0 received 95050 from Process 9
Sum is -1544080023
Printing the contents of the array:
5050
536870912
-1579286148
-268433415
501219332
32666
501222192
32666
1
0
I would like to know what is causing this.
If I print the array before calling MPI_Finalize, it works fine.
You are only initializing array[i], the element that corresponds to the curr_proc id. The other elements of that array are never initialized, so they contain garbage values. In your send/receive/print loop you only ever access the initialized elements.
I'm not very familiar with MPI, so I'm guessing, but you may want to allocate array before calling MPI_Init. Or only call MPI_Recv on process 0, rather than on every process.
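A minimal sketch of that second suggestion, reusing the array, ranks, and tags from the question (illustrative only, not the original code): rank 0 posts blocking receives before summing, while every other rank just sends its partial sum.

/* Rank 0 collects every partial sum with blocking receives before MPI_Finalize,
   then adds them up; every other rank sends its single partial sum to rank 0. */
if (curr_proc == 0) {
    array[0] = partial_sum;   /* rank 0's own contribution */
    for (int i = 1; i < total_proc; i++)
        MPI_Recv(&array[i], 1, MPI_INT, i, i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    int total = 0;
    for (int i = 0; i < total_proc; i++)
        total += array[i];
    cout << "Sum is " << total << endl;
} else {
    /* tag curr_proc matches the tag rank 0 expects from this rank */
    MPI_Send(&partial_sum, 1, MPI_INT, 0, curr_proc, MPI_COMM_WORLD);
}
MPI_Finalize();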
The most important flaw in your program is how you divide the work. In MPI, every process executes the main function, so if you want all processes to collaborate on building the result, you need to make sure all of them execute your summation function.
You don't need the for loop. Every process executes main on its own; they simply get different curr_proc values, and from that value you can work out which portion of the work each one has to do:
/* assigning jobs to processors */
int chunk_size = 1000 / total_proc;
lowerlimit = curr_proc * chunk_size + 1;    /* +1 because summation() includes both ends */
upperlimit = (curr_proc + 1) * chunk_size;
partial_sum = summation(upperlimit, lowerlimit);
Next, the way the master process receives the partial sums from all the other processes is not correct:
- MPI rank values (curr_proc) start at 0 and go up to the value reported by MPI_Comm_size minus one (total_proc-1).
- Only ranks 1 to total_proc-1 take part in your send/receive loop: each of them sends to rank 0, but rank 0 itself never posts a matching receive.
- You are using the immediate (non-blocking) versions of send and receive, MPI_Isend and MPI_Irecv, but you never wait for those requests to complete. You should use MPI_Waitall for that purpose (see the sketch right after this list).
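For reference, a minimal sketch of how those two requests could be completed with MPI_Waitall, assuming the send_request and recv_request handles declared in the question (the sketch is illustrative, not part of the original program):

/* Wait for both outstanding non-blocking operations to complete
   before reading the buffers or calling MPI_Finalize. */
MPI_Request requests[2] = { send_request, recv_request };
MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);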
A correct version should look something like this:
if( curr_proc == 0 ) {
    // master process receives all data
    for( int i = 1; i < total_proc; i++ )
        MPI_Recv( &array[i], 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
} else {
    // other processes send data to the master
    MPI_Send( &partial_sum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
}
This many-to-one communication pattern is known as a gather. MPI already provides a function that performs exactly this operation: MPI_Gather.
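For illustration, a hedged sketch of the same exchange written with MPI_Gather, reusing the names from the question (this is not the original poster's code):

/* Every rank contributes its partial_sum; rank 0 receives one int per rank
   into array[0 .. total_proc-1]. */
MPI_Gather(&partial_sum, 1, MPI_INT,   /* what each rank sends          */
           array, 1, MPI_INT,          /* where the root collects them  */
           0, MPI_COMM_WORLD);         /* root rank and communicator    */

if (curr_proc == 0) {
    int total = 0;
    for (int i = 0; i < total_proc; i++)
        total += array[i];
    cout << "Sum is " << total << endl;
}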
Finally, what you really intend to do is called a reduction: take a given number of numeric values and combine them into a single value by repeatedly applying a single operation (a sum, in your case). MPI provides a function for that as well: MPI_Reduce.
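And a similarly hedged sketch with MPI_Reduce, which removes the need for the intermediate array on the root altogether (again, only a sketch using the question's variable names):

/* Combine every rank's partial_sum with MPI_SUM; the result lands in total on rank 0. */
int total = 0;
MPI_Reduce(&partial_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

if (curr_proc == 0)
    cout << "Sum is " << total << endl;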
I strongly suggest you do some basic guided exercises before trying to write your own programs. MPI is difficult to understand at the beginning, and building a good base is vital so you can add complexity later on. A hands-on tutorial is also a good way to get started with MPI.
Edit: I forgot to mention that the problem size (1000) does not have to be evenly divisible by the number of resources (total_proc). Depending on the case, you can either assign the whole remainder to a single process:
chunk_size = 1000 / total_proc;
if( curr_proc == 0 )
    chunk_size += 1000 % total_proc;
Or balance it as much as possible:
int remainder = curr_proc < ( 1000 % total_proc ) ? 1 : 0;
lowerlimit = curr_proc * chunk_size                    /* as usual */
           + min( curr_proc, 1000 % total_proc ) + 1;  /* cumulative remainder (+1: summation() is inclusive) */
upperlimit = lowerlimit + chunk_size + remainder - 1;  /* last value handled by this rank */
With the second option the load imbalance is at most 1, while with the first option the worst-case imbalance can be as large as total_proc-1.