Embedded call to mpirun from Python

JbG*_*bGT 7 c++ python mpi

I am trying to run some parallel optimization using PyOpt. The tricky part is that, inside my objective function, I want to run a C++ code using mpi.

My python script is the following:

#!/usr/bin/env python    
# Standard Python modules
import os, sys, time, math
import subprocess


# External Python modules
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    myrank = comm.Get_rank()
except ImportError:
    raise ImportError('mpi4py is required for parallelization')

# Extension modules
from pyOpt import Optimization
from pyOpt import ALPSO

# Predefine the BashCommand
RunCprogram = "mpirun -np 2 CProgram" # Parallel C++ program


######################### 
def objfunc(x):

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1])))

    # Run CProgram 
    os.system(RunCprogram) #where the mpirun call occurs

    g = [0.0]*2
    g[0] = x[0]**2 - x[1] + 1
    g[1] = 1 - x[0] + (x[1]-4)**2

    time.sleep(0.01)
    fail = 0
    return f,g, fail

# Instantiate Optimization Problem 
opt_prob = Optimization('Thermal Conductivity Optimization',objfunc)
opt_prob.addVar('x1','c',lower=5.0,upper=1e-6,value=10.0)
opt_prob.addVar('x2','c',lower=5.0,upper=1e-6,value=10.0)
opt_prob.addObj('f')
opt_prob.addCon('g1','i')
opt_prob.addCon('g2','i')

# Solve Problem (DPM-Parallelization)
alpso_dpm = ALPSO(pll_type='DPM')
alpso_dpm.setOption('fileout',0)
alpso_dpm(opt_prob)
print(opt_prob.solution(0))

I run the code using:

mpirun -np 20 python Script.py

However, I get the following error:

[user:28323] *** Process received signal ***
[user:28323] Signal: Segmentation fault (11)
[user:28323] Signal code: Address not mapped (1)
[user:28323] Failing at address: (nil)
[user:28323] [ 0] /lib64/libpthread.so.0() [0x3ccfc0f500]
[user:28323] *** End of error message ***

I believe the two different mpirun calls (the one launching the python script and the one inside the script) are conflicting with each other. How can I fix this?

Thanks!!

fra*_*cis 3

See Calling mpi binaries in serial as subprocess of mpi application: the safest way to go is to use MPI_Comm_spawn(). For instance, take a look at this manager-worker example.
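
As a rough illustration only, a minimal sketch of the spawn approach through mpi4py could look like the following; it assumes the worker binary CProgram from the question, and that the worker cooperates by disconnecting from its parent communicator (a matching worker sketch is given at the end of this answer):

from mpi4py import MPI

# spawn 2 copies of the worker from this process; each copy calls
# MPI_Init and sees its own MPI_COMM_WORLD of size 2
child = MPI.COMM_SELF.Spawn('CProgram', args=[], maxprocs=2)

# ... data could be exchanged with the workers over the
# intercommunicator `child` here ...

# returns once the workers have disconnected on their side
child.Disconnect()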

A quick workaround is to use subprocess.Popen as indicated by @EdSmith. Note, however, that the default behavior of subprocess.Popen is to use the environment of the parent process; my guess is that the same holds for os.system(). Unfortunately, mpirun adds some environment variables, depending on the MPI implementation, such as OMPI_COMM_WORLD_RANK or OMPI_MCA_orte_ess_num_procs. To see these variables, type import os ; print(os.environ) both in the mpi4py code and in a plain python shell. These environment variables are likely to make the child process fail, so I had to add a line to get rid of them... which is rather dirty... It boils down to:

    args = shlex.split(RunCprogram)
    env = os.environ
    # remove all environment variables with "MPI" in their name... rather dirty...
    new_env = {k: v for k, v in env.items() if "MPI" not in k}

    # pass the tokenized command list instead of using shell=True,
    # which sidesteps the usual shell security issues
    p = subprocess.Popen(args, env=new_env, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    p.wait()
    result = "process myrank " + str(myrank) + " got " + p.stdout.read().decode()
    print(result)

Complete test code, run with mpirun -np 2 python opti.py:

#!/usr/bin/env python    
# Standard Python modules
import os, sys, time, math
import subprocess
import shlex


# External Python modules
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    myrank = comm.Get_rank()
except ImportError:
    raise ImportError('mpi4py is required for parallelization')

# Predefine the BashCommand
RunCprogram = "mpirun -np 2 main" # Parallel C++ program


######################### 
def objfunc(x):

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1])))

    # Run CProgram 
    #os.system(RunCprogram) # where the mpirun call used to occur
    args = shlex.split(RunCprogram)
    env = os.environ
    # strip the mpirun-inherited variables so the child MPI job starts cleanly
    new_env = {k: v for k, v in env.items() if "MPI" not in k}

    p = subprocess.Popen(args, env=new_env, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    p.wait()
    result = "process myrank " + str(myrank) + " got " + p.stdout.read().decode()
    print(result)



    g = [0.0]*2
    g[0] = x[0]**2 - x[1] + 1
    g[1] = 1 - x[0] + (x[1]-4)**2

    time.sleep(0.01)
    fail = 0
    return f,g, fail

print(objfunc([1.0, 0.0]))

Basic worker, compiled with mpiCC main.cpp -o main:

#include "mpi.h"

int main(int argc, char* argv[]) { 
    int rank, size;

    MPI_Init (&argc, &argv);    
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);  
    MPI_Comm_size (MPI_COMM_WORLD, &size);  

    if(rank==0){
        std::cout<<" size "<<size<<std::endl;
    }
    MPI_Finalize();

    return 0;

}
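
If the MPI_Comm_spawn() route is taken instead, the worker additionally has to look up its parent intercommunicator and disconnect from it, so that Disconnect() can return on the python side. A sketch of such a worker, under the same assumptions as the spawn example above:

#include "mpi.h"
#include <iostream>

int main(int argc, char* argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // MPI_COMM_NULL means the program was started by plain mpirun,
    // not spawned by a parent process
    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    if (rank == 0) {
        std::cout << " size " << size << std::endl;
    }

    if (parent != MPI_COMM_NULL) {
        // matches child.Disconnect() on the python side
        MPI_Comm_disconnect(&parent);
    }

    MPI_Finalize();
    return 0;
}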