How to scatter multiple variables in an array for MPI_Scatter
I am currently trying to split an array of 8 integers evenly, 2 integers per processor across 4 processors. I used MPI_Bcast to let every processor know that the total array has 8 elements and that each of them will get a 2-integer array called "my_input".
MPI_Bcast(&totalarray,1,MPI_INT,0,MPI_COMM_WORLD);
MPI_Bcast(&my_input,2,MPI_INT,0,MPI_COMM_WORLD);
MPI_Scatter (input, 2 , MPI_INT, &my_input, 2 , MPI_INT, 0, MPI_COMM_WORLD );
//MPI_Barrier (MPI_COMM_WORLD);
printf("\n my input is %d & %d and rank is %d \n" , my_input[0], my_input[1] , rank);
However, after scattering I see that the print statement does not print each rank's pair but rather all the integers from the 8-integer array. How should I program this so that the array elements are distributed evenly from the root to the other processors?
Here is my full code (it is only meant to test a total of 8 integers, so at the scanf I will enter '8'):
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    //initialise MPI
    MPI_Init(&argc, &argv);
    //variables to identify processor rank and total number of processors
    int rank, size;
    int my_input[0];
    //initialise total array variable
    int totalarray =0;
    //initialise memory array
    int* input;
    //range of random numbers
    int upper = 100, lower = 0;
    //get processor rank
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    //get total number of processors
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    //let root gather N elements from user
    if (rank == 0)
    {
        printf("Enter a number from 1 to 1000: ");
        fflush(stdout);
        int number;
        //ask user to input number of elements
        scanf("%d",&number);
        printf("Your number is %d\n",number);
        //fill the array to a power of 2
        int totalarray = pow(2, ceil(log(number)/log(2)));
        input[totalarray];
        my_input[totalarray/size];
        //allocate memory for the array
        input = malloc(totalarray * sizeof(int) );
        //add randomised numbers up to N elements
        for(int i =0; i<=totalarray ; i++)
        {
            if( i<number)
            {
                input[i] = (rand() % (upper - lower + 1)) + lower;
            }
            //pad the extra elements with zero
            else if(number <= i < totalarray)
            {
                input[i] = 0;
            }
        }
        //confirm the input array
        printf("the input is: ");
        for(int i =0; i < totalarray ; i++)
        {
            printf( "%d ", input[i]);
        }
    }
    MPI_Bcast(&totalarray,1,MPI_INT,0,MPI_COMM_WORLD);
    MPI_Bcast(&my_input,2,MPI_INT,0,MPI_COMM_WORLD);
    MPI_Scatter (input, 2 , MPI_INT, &my_input, 2 , MPI_INT, 0, MPI_COMM_WORLD );
    //MPI_Barrier (MPI_COMM_WORLD);
    printf("\n my input is %d & %d and rank is %d \n" , my_input[0], my_input[1] , rank);
    MPI_Finalize();
    return 0;
}
I used MPI_Bcast to let every processors to know there are total array
of 8 and each of those will have 2 integers array called "my_input".
Yes, that makes sense.
However after scattering, I see the print function cannot print the
'rank' but all the integers from the 8 integers array. How should I
program in order to equally distribute the number of arrays to other
processors from root?
Your code has some issues. For instance, you declare the variables my_input, totalarray and input as:
int my_input[0];
...
int totalarray =0;
...
int* input;
and then redefine them again inside if (rank == 0):
int totalarray = pow(2, ceil(log(number)/log(2)));
input[totalarray];
my_input[totalarray/size];
input = malloc(totalarray * sizeof(int) );
That is not correct. Instead, you can declare both arrays as int*, i.e.:
int *my_input;
int *input;
and allocate space for them as soon as you know how many elements each one will have. The input array can be allocated right after the user enters its size:
//ask user to input number of elements
scanf("%d",&number);
printf("Your number is %d\n",number);
totalarray = pow(2, ceil(log(number)/log(2)));
input = malloc(totalarray * sizeof(int));
and the my_input array right after the master process broadcasts the input size to the other processes:
MPI_Bcast(&totalarray, 1, MPI_INT, 0, MPI_COMM_WORLD);
int *my_input = malloc((totalarray/size) * sizeof(int));
As for the variable totalarray, just do not declare it again inside if (rank == 0). If you do, int totalarray = pow(2, ceil(log(number)/log(2))); creates a different variable that exists only within the scope of if (rank == 0).
The second MPI_Bcast call
MPI_Bcast(&my_input,2,MPI_INT,0,MPI_COMM_WORLD);
is not needed: since you want "to equally distribute total 8 integers in an array to 2 integers for 4 processors", every process should not end up with the entire contents of the master process's my_input array.
For that you need MPI_Scatter. However, instead of
MPI_Scatter (input, 2 , MPI_INT, &my_input, 2 , MPI_INT, 0, MPI_COMM_WORLD );
do not hardcode the input size, because the code will not work if you test with a different input size and/or a different number of processes; do the following instead:
int size_per_process = totalarray/size;
MPI_Scatter (input, size_per_process , MPI_INT, my_input, size_per_process , MPI_INT, 0, MPI_COMM_WORLD );
The loop for(int i =0; i<=totalarray ; i++) should actually be for(int i = 0; i < totalarray; i++), otherwise you go out of the bounds of the input array. Personal opinion, but I think the random-value filling logic reads better like this:
for(int i =0; i < number ; i++)
input[i] = (rand() % (upper - lower + 1)) + lower;
for(int i = number; i < totalarray; i++)
input[i] = 0;
The final code would look like the following:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    int *input;
    int totalarray;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0){
        printf("Enter a number from 1 to 1000: ");
        fflush(stdout);
        int number;
        scanf("%d",&number);
        printf("Your number is %d\n",number);
        totalarray = pow(2, ceil(log(number)/log(2)));
        input = malloc(totalarray * sizeof(int));
        int upper = 100, lower = 0;
        for(int i = 0; i < number ; i++)
            input[i] = (rand() % (upper - lower + 1)) + lower;
        for(int i = number; i < totalarray; i++)
            input[i] = 0;
        printf("the input is: ");
        for(int i = 0; i < totalarray ; i++)
            printf("%d ", input[i]);
    }
    MPI_Bcast(&totalarray, 1, MPI_INT, 0, MPI_COMM_WORLD);
    int size_per_process = totalarray / size;
    int *my_input = malloc(size_per_process * sizeof(int));
    MPI_Scatter (input, size_per_process, MPI_INT, my_input, size_per_process, MPI_INT, 0, MPI_COMM_WORLD );
    printf("\n my input is %d & %d and rank is %d \n" , my_input[0], my_input[1] , rank);
    MPI_Finalize();
    return 0;
}
The final print could also be made more generic by printing the whole my_input array rather than just its first two positions.