MPI collective operations from one communicator to another

Asked by mgi*_*son (score 4) · tags: c, fortran, mpi, communicator

I have an application parallelized with MPI that is split into a number of different tasks. Each processor is assigned only one task, and the group of processors assigned to the same task gets its own communicator. Periodically, the tasks need to synchronize. Currently, the synchronization goes through MPI_COMM_WORLD, but that has the drawback that no collective operations can be used, since there is no guarantee that the other tasks will ever reach that block of code.

As a more concrete example:

task1: equation1_solver, N nodes, communicator: mpi_comm_solver1
task2: equation2_solver, M nodes, communicator: mpi_comm_solver2
task3: file IO         , 1 node , communicator: mpi_comm_io

I would like to MPI_SUM an array over task1 and have the result appear on task3. Is there an efficient way to do this? (My apologies if this is a stupid question, I don't have much experience with creating and using custom MPI communicators.)
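
A sketch of the kind of setup described above, assuming the task communicators come from an MPI_Comm_split of MPI_COMM_WORLD (N, M, and the assignment rule here are only placeholders, not the real code):

#include <mpi.h>

/* Illustrative only: split MPI_COMM_WORLD into the three task
 * communicators described above.  N, M and the assignment rule
 * are placeholders; adapt to the real task layout. */
void make_task_comms(int N, int M, MPI_Comm *task_comm, int *task)
{
    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    if (wrank < N)          *task = 0;   /* equation1_solver */
    else if (wrank < N + M) *task = 1;   /* equation2_solver */
    else                    *task = 2;   /* file IO          */

    /* all ranks passing the same "color" (task id) end up together
     * in the same new communicator */
    MPI_Comm_split(MPI_COMM_WORLD, *task, wrank, task_comm);
}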

Answered by Jon*_*rsi (score 5)

Charles is exactly right; intercommunicators let you talk between communicators (or, to distinguish the "normal" communicators in this context, "intra-communicators", which doesn't strike me as much of an improvement).

I've always found the use of these intercommunicators a little confusing for those new to them. Not the basic idea, which makes sense, but the mechanics of using (say) MPI_Reduce with one of them. The group of tasks doing the reduction specifies the root rank in the remote communicator, so far so good; but within the remote communicator, everyone who is not the root specifies MPI_PROC_NULL as the root, whereas the actual root specifies MPI_ROOT. The things one does for backwards compatibility, eh?
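
Spelled out, the pattern boils down to something like this (a sketch only, ahead of the full example below; `intercomm` is assumed to already exist, and the receiving group is assumed to have its actual root at local rank 0):

#include <mpi.h>

/* Sketch of the root convention for an inter-communicator reduce.
 * "intercomm" is assumed to come from MPI_Intercomm_create;
 * "in_sending_group" and "local_rank" are the caller's bookkeeping. */
void intercomm_sum(MPI_Comm intercomm, int in_sending_group,
                   int local_rank, int sendval, int *recvval)
{
    if (in_sending_group) {
        /* sending group: name the root by its rank in the REMOTE group */
        MPI_Reduce(&sendval, NULL, 1, MPI_INT, MPI_SUM, 0, intercomm);
    } else {
        /* receiving group: the real root passes MPI_ROOT,
         * everyone else passes MPI_PROC_NULL */
        int root = (local_rank == 0) ? MPI_ROOT : MPI_PROC_NULL;
        MPI_Reduce(NULL, recvval, 1, MPI_INT, MPI_SUM, root, intercomm);
    }
}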

#include <mpi.h>
#include <stdio.h>


int main(int argc, char **argv)
{
    int commnum = 0;         /* which of the 3 comms I belong to */
    MPI_Comm   mycomm;       /* Communicator I belong to */
    MPI_Comm   intercomm;    /* inter-communicator */
    int cw_rank, cw_size;    /* size, rank in MPI_COMM_WORLD */
    int rank;                /* rank in local communicator */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &cw_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &cw_size);

    if (cw_rank == cw_size-1)      /* last task is IO task */
        commnum = 2;
    else {
        if (cw_rank < (cw_size-1)/2)
            commnum = 0;
        else
            commnum = 1;
    }

    printf("Rank %d in comm %d\n", cw_rank, commnum);

    /* create the local communicator, mycomm */
    MPI_Comm_split(MPI_COMM_WORLD, commnum, cw_rank, &mycomm);

    const int lldr_tag = 1;
    const int intercomm_tag = 2;
    if (commnum == 0) {
        /* comm 0 needs to communicate with comm 2. */
        /* create an intercommunicator: */

        /* rank 0 in our new communicator will be the "local leader"
         *  of this communicator for the purposes of the intercommunicator */
        int local_leader = 0;

        /* Now, since we're not part of the other communicator (and vice
         * versa) we have to refer to the "remote leader" in terms of its
         * rank in COMM_WORLD.   For us, that's easy; the remote leader
         * in the IO comm is defined to be cw_size-1, because that's the
         * only task in that comm.   But for them, it's harder.  So we'll
         * send that task the id of our local leader. */

        /* find out which rank in COMM_WORLD is the local leader */
        MPI_Comm_rank(mycomm, &rank);

        if (rank == 0)
            MPI_Send(&cw_rank, 1, MPI_INT, cw_size-1, lldr_tag, MPI_COMM_WORLD);
        /* now create the inter-communicator */
        MPI_Intercomm_create( mycomm, local_leader,
                              MPI_COMM_WORLD, cw_size-1,
                              intercomm_tag, &intercomm);
    }
    else if (commnum == 2)
    {
        /* there's only one task in this comm */
        int local_leader = 0;
        int rmt_ldr;
        MPI_Status s;
        MPI_Recv(&rmt_ldr, 1, MPI_INT, MPI_ANY_SOURCE, lldr_tag, MPI_COMM_WORLD, &s);
        MPI_Intercomm_create( mycomm, local_leader,
                              MPI_COMM_WORLD, rmt_ldr,
                              intercomm_tag, &intercomm);
    }


    /* now let's play with our communicators and make sure they work */

    if (commnum == 0) {
        int max_of_ranks = 0;
        /* try it internally; */
        MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_MAX, 0, mycomm);
        if (rank == 0) {
            printf("Within comm 0: maximum of ranks is %d\n", max_of_ranks);
            printf("Within comm 0: sum of ranks should be %d\n", max_of_ranks*(max_of_ranks+1)/2);
        }

        /* now try summing it to the other comm */
        /* the "root" parameter here is the root in the remote group */
        MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_SUM, 0, intercomm);
    }

    if (commnum == 2) {
        int sum_of_ranks = -999;
        int rootproc;

        /* get reduction data from other comm */

        if (rank == 0)   /* am I the root of this reduce? */
            rootproc = MPI_ROOT;
        else
            rootproc = MPI_PROC_NULL;

        MPI_Reduce(&rank, &sum_of_ranks, 1, MPI_INT, MPI_SUM, rootproc, intercomm);

        if (rank == 0) 
            printf("From comm 2: sum of ranks is %d\n", sum_of_ranks);
    }

    if (commnum == 0 || commnum == 2)
        MPI_Comm_free(&intercomm);

    MPI_Finalize();
}
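
And to tie it back to the original question: summing an array instead of a single int over the intercommunicator only changes the buffer and count arguments. A sketch, reusing `intercomm`, `commnum`, and `rank` from the program above (the array name and its length `n` are placeholders):

#include <mpi.h>

/* Sketch: element-wise MPI_SUM of a double array from the solver
 * group (commnum == 0 above) to the single IO task (commnum == 2).
 * "local_data" and "n" stand in for the real array on each solver task;
 * "result" must have room for n doubles on the IO task. */
void sum_array_to_io(MPI_Comm intercomm, int commnum, int rank,
                     const double *local_data, double *result, int n)
{
    if (commnum == 0) {
        /* solver tasks: root is rank 0 of the remote (IO) group */
        MPI_Reduce((void *)local_data, NULL, n, MPI_DOUBLE, MPI_SUM,
                   0, intercomm);
    } else if (commnum == 2) {
        /* IO task: it is the only task in its group, so it is the root */
        int root = (rank == 0) ? MPI_ROOT : MPI_PROC_NULL;
        MPI_Reduce(NULL, result, n, MPI_DOUBLE, MPI_SUM, root, intercomm);
    }
}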