I have two programs: a "master" which spawns "workers" that perform some calculations, and I want the master to get the results from the workers and store the sum. I am trying to use MPI_Reduce to collect the results from the workers, and the workers use MPI_Reduce to send them to the master over the intercommunicator. I am not sure if this is correct. Here are my programs:
Master:
#include <mpi.h>
#include <iostream>
using namespace std;
int main(int argc, char *argv[]) {
    int world_size, universe_size, *universe_sizep, flag;
    int rc, send, recv;
    // intercommunicator
    MPI_Comm everyone;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    if (world_size != 1) {
        cout << "Top heavy with management" << endl;
    }

    MPI_Attr_get(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &universe_sizep, &flag);
    if (!flag) {
        cout << "This MPI does not support UNIVERSE_SIZE. How many processes total?";
        cout << "Enter the universe size: ";
        cin >> universe_size;
    } else {
        universe_size = *universe_sizep;
    }
    if (universe_size == 1) {
        cout << "No room to start workers" << endl;
    }

    MPI_Comm_spawn("so_worker", MPI_ARGV_NULL, universe_size-1,
                   MPI_INFO_NULL, 0, MPI_COMM_SELF, &everyone,
                   MPI_ERRCODES_IGNORE);

    send = 0;
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, 0, everyone);
    // store result of recv ...
    // other calculations here
    cout << "From spawned workers recv: " << recv << endl;

    MPI_Finalize();
    return 0;
}
Worker:
#include <mpi.h>
#include <iostream>
using namespace std;
int main(int argc, char *argv[]) {
    int rc, send, recv;
    int parent_size, parent_id, my_id, numprocs;
    // parent intercomm
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL) {
        cout << "No parent!" << endl;
    }
    MPI_Comm_remote_size(parent, &parent_size);
    MPI_Comm_rank(parent, &parent_id);
    //cout << "Parent is of size: " << size << endl;
    if (parent_size != 1) {
        cout << "Something's wrong with the parent" << endl;
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    cout << "I'm child process rank " << my_id << " and we are " << numprocs << endl;
    cout << "The parent process rank " << parent_id << " and we are " << parent_size << endl;

    // get value of send
    send = 7; // just an example
    recv = 0;
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, parent_id, parent);
    if (rc != MPI_SUCCESS)
        cout << my_id << " failure on mpi_reduce in WORKER" << endl;

    MPI_Finalize();
    return 0;
}
I compiled both and ran them like this (mpic++ on OS X):
mpic++ so_worker.cpp -o so_worker
mpic++ so_master.cpp -o so_master
mpirun -n 1 so_master
Is this the correct way to run a master that spawns the workers?
In the master I always get 0 back from MPI_Reduce. Can I use MPI_Reduce across intercommunicators, or should I use MPI_Send from the workers and MPI_Recv in the master? I'm really not sure why it doesn't work.
Any help would be appreciated. Thanks!
MPI_Comm_get_parent returns the parent intercommunicator that encompasses the original process and all the spawned ones. In this case, calling MPI_Comm_rank(parent, &parent_id) does not return the rank of the parent but rather the rank of the current process in the local group of the intercommunicator:
I'm child process rank 0 and we are 3
The parent process **rank 0** and we are 1
I'm child process rank 1 and we are 3
The parent process **rank 1** and we are 1
I'm child process rank 2 and we are 3
The parent process **rank 2** and we are 1
(Observe how the highlighted values differ; one would expect the rank of the parent process to be the same in every line, wouldn't one?)
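To see this, here is a minimal standalone sketch (hypothetical, not part of the original programs) that a worker could run. On an intercommunicator, MPI_Comm_rank and MPI_Comm_size always refer to the local group (the spawned workers), while MPI_Comm_remote_size reports the size of the remote group (the parent):
#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char *argv[]) {
    MPI_Comm parent;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    if (parent != MPI_COMM_NULL) {
        int local_rank, local_size, remote_size;
        MPI_Comm_rank(parent, &local_rank);          // rank within the local (spawned) group
        MPI_Comm_size(parent, &local_size);          // size of the local (spawned) group
        MPI_Comm_remote_size(parent, &remote_size);  // size of the remote (parent) group
        cout << "local rank " << local_rank << " of " << local_size
             << "; parent group has " << remote_size << " process(es)" << endl;
    }
    MPI_Finalize();
    return 0;
}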
That is why the MPI_Reduce() call does not succeed: the worker processes all specify different values for the root rank. Since there was originally a single master process, its rank in the remote group of parent is 0, and hence all workers should specify 0 as the root in MPI_Reduce:
//
// Worker code
//
rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, 0, parent);
That is only half of the problem, though. The other half is that rooted collective operations (e.g. MPI_REDUCE) operate a bit differently with intercommunicators. One first has to decide which of the two groups will host the root. Once the root group is identified, the root process has to pass MPI_ROOT as the value of root in MPI_REDUCE, and all other processes in the root group must pass MPI_PROC_NULL; that is, they do not take part in the rooted operation at all. Since the master code is written so that there can be only one process in the master's group, it suffices to change the MPI_Reduce call in the master code to:
//
// Master code
//
rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, MPI_ROOT, everyone);
Note that the master does not participate in the reduction operation itself; for example, the value of its sendbuf (&send in this case) is irrelevant, since the root does not send data to be reduced. It merely collects the result of the reduction performed over the values from the processes in the remote group.
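For completeness, here is a hedged sketch (not from the original code) of how the master side would look if the root group contained more than one process, e.g. if the spawn had been performed collectively over a multi-process MPI_COMM_WORLD instead of MPI_COMM_SELF. Exactly one process passes MPI_ROOT and every other process in the root group passes MPI_PROC_NULL:
// Hypothetical master-side code for a root group with several processes,
// assuming `everyone` came from a collective MPI_Comm_spawn over
// MPI_COMM_WORLD and `send`/`recv` are declared as before.
int master_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &master_rank);
if (master_rank == 0) {
    // the designated root collects the reduced value from the remote group
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, MPI_ROOT, everyone);
} else {
    // all other processes in the root group must pass MPI_PROC_NULL
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, MPI_PROC_NULL, everyone);
}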