How do I use MPI_Scatter and MPI_Gather from C?

DSF*_*DSF 27 c parallel-processing distributed-computing mpi

So far, my application reads in a txt file containing a list of integers. These integers need to be stored in an array by the master process, i.e. the processor with rank 0. That is working fine.

Now, when I run the program, I have an if statement that checks whether it is the master process, and if so, I execute the MPI_Scatter command.

From what I understand, this will subdivide the array of numbers and pass it out to the slave processes, i.e. all ranks > 0. However, I don't know how to deal with MPI_Scatter. How does a slave process "subscribe" to get its sub-array? How do I tell the non-master processes to do something with the sub-arrays?

Can someone provide a simple example showing me how the master process sends out elements from the array, then has the slaves add up their sums and return them to the master, which adds all the sums together and prints them out?

My code so far:

#include <stdio.h>
#include <mpi.h>

//A pointer to the file to read in.
FILE *fr;

int main(int argc, char *argv[]) {

    int rank, size, n, number_read;
    char line[80];
    int numbers[30];
    int buffer[30];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    fr = fopen("int_data.txt", "rt"); //We open the file to be read.

    if (rank == 0) {
        printf("my rank = %d\n", rank);

        //Reads in the flat file of integers and stores it in the array 'numbers' of type int.
        n = 0;
        while (fgets(line, 80, fr) != NULL) {
            sscanf(line, "%d", &number_read);
            numbers[n] = number_read;
            printf("I am processor no. %d --> At element %d we have number: %d\n", rank, n, numbers[n]);
            n++;
        }

        fclose(fr);

        MPI_Scatter(&numbers, 2, MPI_INT, &buffer, 2, MPI_INT, rank, MPI_COMM_WORLD);
    }
    else {
        MPI_Gather(&buffer, 2, MPI_INT, &numbers, 2, MPI_INT, 0, MPI_COMM_WORLD);
        printf("%d", buffer[0]);
    }
    MPI_Finalize();
    return 0;
}

Jon*_*rsi 61

This is a common misunderstanding of how operations work in MPI for people new to it; with the collective operations in particular, people try to start using broadcast (MPI_Bcast) just from rank 0, expecting the call to somehow "push" the data to the other processors. But that's not really how MPI routines work; most MPI communication requires both the sender and the receiver to make MPI calls.
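
The same two-sidedness holds even for point-to-point communication: a send on one rank is matched by a receive call on another. A minimal sketch of my own to illustrate (none of this is from the question's code; run it with at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Rank 0 makes the sending call... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ...and nothing arrives unless rank 1 makes the matching receive call. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}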

In particular, MPI_Scatter() and MPI_Gather() (and MPI_Bcast, and many others) are collective operations; they have to be called by all of the tasks in the communicator. All processors in the communicator make the same call, and the operation is performed. (That's why scatter and gather both require as one of their parameters the "root" process, where all the data goes to / comes from.) By doing it this way, the MPI implementation has a lot of scope to optimize the communication patterns.
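
Concretely, for the code in the question that means hoisting the MPI_Scatter call out of the if (rank == 0) block so that every rank executes it, passing 0 as the root; roughly (a sketch against the question's own variables, not tested against the rest of the program):

/* All ranks call this; only rank 0's 'numbers' array is read. */
MPI_Scatter(numbers, 2, MPI_INT, buffer, 2, MPI_INT, 0, MPI_COMM_WORLD);

Every rank then finds its two integers in buffer, and the matching MPI_Gather is likewise called by all ranks.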

So here's a simple example (updated to include the gather):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *globaldata=NULL;   /* only the root allocates and uses the full array */
    int localdata;          /* each rank's single element after the scatter */

    if (rank == 0) {
        globaldata = malloc(size * sizeof(int) );
        for (int i=0; i<size; i++)
            globaldata[i] = 2*i+1;

        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    /* Collective: every rank calls this; rank 0 (the root) sends one int
       to each rank, and each rank receives its int into localdata. */
    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Processor %d has data %d\n", rank, localdata);
    localdata *= 2;
    printf("Processor %d doubling the data, now has %d\n", rank, localdata);

    /* Collective again: every rank sends its doubled value, and the root
       collects them back into globaldata. */
    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    if (rank == 0)
        free(globaldata);

    MPI_Finalize();
    return 0;
}

Running it gives:

gpc-f103n084-$ mpicc -o scatter-gather scatter-gather.c -std=c99
gpc-f103n084-$ mpirun -np 4 ./scatter-gather
Processor 0 has data: 1 3 5 7 
Processor 0 has data 1
Processor 0 doubling the data, now has 2
Processor 3 has data 7
Processor 3 doubling the data, now has 14
Processor 2 has data 5
Processor 2 doubling the data, now has 10
Processor 1 has data 3
Processor 1 doubling the data, now has 6
Processor 0 has data: 2 6 10 14
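
Since the question asked for the slaves to compute sums that the master then totals and prints, one natural variant replaces the gather with a reduction. A hedged sketch along those lines (the totalsum variable is mine, not from the original code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *globaldata = NULL;
    int localdata, totalsum = 0;

    if (rank == 0) {
        globaldata = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            globaldata[i] = 2 * i + 1;   /* same 1, 3, 5, ... data as above */
    }

    /* Every rank calls the scatter and receives one int from rank 0. */
    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Every rank contributes its value; MPI_SUM combines them onto rank 0. */
    MPI_Reduce(&localdata, &totalsum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Total sum is %d\n", totalsum);   /* 1+3+5+7 = 16 with -np 4 */
        free(globaldata);
    }

    MPI_Finalize();
    return 0;
}

MPI_Reduce here plays the role of a gather followed by a sum on the root, and again it is a collective: every rank in the communicator calls it.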

  • What a great answer. Very straightforward, and I can see now how it works. I had mistakenly assumed it wasn't a collective operation. Thanks so much! (2 upvotes)
  • Wow! You saved my day, cheers. Thanks! (2 upvotes)