I want to use multiprocessing's Pool.map() to divide up work. The following code works fine:
import multiprocessing

def f(x):
    return x*x

def go():
    pool = multiprocessing.Pool(processes=4)
    print pool.map(f, range(10))

if __name__ == '__main__':
    go()
However, when I use it in a more object-oriented way, it doesn't work. The error message it gives is:
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup
__builtin__.instancemethod failed
This happens when the following is my main program:
import someClass

if __name__ == '__main__':
    sc = someClass.someClass()
    sc.go()
and the following is my someClass class:
import multiprocessing

class someClass(object):
    def __init__(self):
        pass

    def f(self, x):
        return x*x

    def go(self):
        pool = multiprocessing.Pool(processes=4)
        print pool.map(self.f, range(10))
Does anyone know what the problem might be, or an easy way around it?
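One common workaround, sketched below under the assumption that the goal is just to run the method across a pool: on Python 2, multiprocessing can only pickle functions defined at module level, not bound methods, so a top-level helper that receives the instance and calls the method avoids the error. (On Python 3, bound methods pickle fine and the original code works as written.) The helper name `call_f` is my own invention for illustration:

```python
import multiprocessing

class someClass(object):
    def f(self, x):
        return x * x

# Module-level helper: picklable on both Python 2 and Python 3.
# It unpacks an (instance, argument) pair and calls the method;
# the instance itself is a plain object, so it pickles fine.
def call_f(args):
    instance, x = args
    return instance.f(x)

def go():
    sc = someClass()
    pool = multiprocessing.Pool(processes=4)
    results = pool.map(call_f, [(sc, x) for x in range(10)])
    pool.close()
    pool.join()
    return results

if __name__ == '__main__':
    print(go())
```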
I'm not familiar with Python at all; I usually write Ruby or JS. But I need to write a benchmarking script on a system that runs Python. What I want to do is create a small script that takes a file size and a thread count and writes random buffers. This is what I ended up with after two hours of fiddling:
from multiprocessing import Pool
import os, sys

def writeBuf(buf):
    def write(n):
        f = open(os.path.join(directory, 'n' + str(n)), 'wb')
        try:
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        finally:
            f.close()
    return write

if __name__ == '__main__':
    targetDir = sys.argv[1]
    numThreads = int(sys.argv[2])
    numKiloBytes = int(sys.argv[3])
    numFiles = int(102400 / numKiloBytes)
    buf = os.urandom(numKiloBytes * 1024)
    directory = os.path.join(targetDir, str(numKiloBytes) + 'k')
    if not os.path.exists(directory):
        os.makedirs(directory)
    with Pool(processes=numThreads) as pool:
        pool.map(writeBuf(buf), range(numFiles))
But it throws the error: AttributeError: Can't pickle local object 'writeBuf.<locals>.write'
I previously tried write without the closure, but then I got an error when I tried to define the function inside the __name__ == '__main__' block. Omitting this …