OpenMPI reduce using MINLOC

jth*_*cie 7 c++ parallel-processing mpi

I am currently working on some MPI code for a graph-theory problem in which a number of nodes can each contain an answer and the length of that answer. To get everything back to the master node I am doing an MPI_Gather for the answers, and I am attempting to use an MPI_MINLOC operation to determine which node had the shortest solution. Right now the data type that stores the length and the node ID is defined as (following the example shown on numerous sites such as http://www.open-mpi.org/doc/v1.4/man3/MPI_Reduce.3.php):

struct minType
{
    float len;
    int index;
};

On every node I am initializing the local copy of this struct in the following manner:

int commRank;
MPI_Comm_rank (MPI_COMM_WORLD, &commRank);
minType solutionLen;
solutionLen.len = 1e37;
solutionLen.index = commRank;

At the end of execution I have an MPI_Gather call that successfully pulls down all of the solutions (I have printed them out to verify them), followed by the call:

MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);

My understanding is that the arguments are supposed to be:

  1. The source of the data
  2. The destination for the result (only meaningful on the specified root node)
  3. The number of items sent by each node
  4. The data type (MPI_FLOAT_INT appears to be defined per the link above)
  5. The operation (MPI_MINLOC also appears to be defined)
  6. The ID of the root node within the specified communicator
  7. The communicator to wait on.

When my code makes it to the reduce operation I get this error:

[compute-2-19.local:9754] *** An error occurred in MPI_Reduce
[compute-2-19.local:9754] *** on communicator MPI_COMM_WORLD
[compute-2-19.local:9754] *** MPI_ERR_ARG: invalid argument of some other kind
[compute-2-19.local:9754] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 9754 on
node compute-2-19.local exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

I will admit to being completely stumped on this. In case it matters, I am compiling with OpenMPI 1.5.3 (built with gcc 4.4) on a Rocks cluster based on CentOS 5.5.

Wal*_*ter 4

I don't think you are allowed to use the same buffer for both input and output (the first two arguments). The man page says:

When the communicator is an intracommunicator, you can perform a reduce operation in place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process sendbuf. In this case, the input data is taken at the root from the receive buffer, where it will be replaced by the output data.