scikit - Random Forest Regressor - AttributeError: 'Thread' object has no attribute '_children'

Hap*_*per 7 python flask scikit-learn

I get the following error whenever I set the n_jobs parameter of my random forest regressor to anything > 1. With n_jobs=1 everything works fine.

AttributeError: 'Thread' object has no attribute '_children'

I'm running this code inside a Flask service. Interestingly, it does not happen when the same code runs outside of the Flask service. I have only reproduced this on a freshly installed Ubuntu box; on my Mac it works fine.

Here is a thread that discusses this issue, but it doesn't seem to get beyond a workaround: 'Thread' object has no attribute '_children' - django + scikit-learn

Any ideas?

Thanks everyone!

Here is my test code:

    @test.route('/testfun')
    def testfun():
        from sklearn.ensemble import RandomForestRegressor
        import numpy as np

        train_data = np.array([[1,2,3], [2,1,3]])
        target_data = np.array([1,1])

        model = RandomForestRegressor(n_jobs=2)
        model.fit(train_data, target_data)
        return "yey"

Stack trace:


    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1836, in __call__
        return self.wsgi_app(environ, start_response)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1820, in wsgi_app
        response = self.make_response(self.handle_exception(e))
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1403, in handle_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1817, in wsgi_app
        response = self.full_dispatch_request()
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1477, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1381, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
        rv = self.dispatch_request()
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/home/vagrant/flask.global-relevance-engine/global_relevance_engine/routes/test.py", line 47, in testfun
        model.fit(train_data, target_data)
      File "/usr/local/lib/python2.7/dist-packages/sklearn/ensemble/forest.py", line 273, in fit
        for i, t in enumerate(trees))
      File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 574, in __call__
        self._pool = ThreadPool(n_jobs)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 685, in __init__
        Pool.__init__(self, processes, initializer, initargs)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 136, in __init__
        self._repopulate_pool()
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 199, in _repopulate_pool
        w.start()
      File "/usr/lib/python2.7/multiprocessing/dummy/__init__.py", line 73, in start
        self._parent._children[self] = None
    AttributeError: 'Thread' object has no attribute '_children'

Kob*_*ohn 7

The problem

This is probably due to a bug in multiprocessing.dummy that exists prior to Python 2.7.5 and 3.3.2 (see here and here).
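To see the bug in isolation, here is a minimal sketch (independent of Flask and sklearn, and assuming one of the affected interpreter versions above): creating a ThreadPool from a plain worker thread fails, because DummyProcess.start() touches a _children attribute that an ordinary Thread doesn't have - which is exactly the situation inside a Flask request handler.

    # Minimal reproduction sketch (assumes an affected interpreter, e.g. Python < 2.7.5).
    import threading
    from multiprocessing.pool import ThreadPool  # imported here, in the main thread

    def build_pool():
        # ThreadPool spawns DummyProcess workers; on affected versions their start() runs
        #     self._parent._children[self] = None
        # and this plain worker thread has no _children attribute, so it raises
        # AttributeError: 'Thread' object has no attribute '_children'
        pool = ThreadPool(2)
        pool.close()
        pool.join()

    worker = threading.Thread(target=build_pool)
    worker.start()
    worker.join()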

Solution A - Upgrade Python

See the comments for confirmation that a newer Python version worked for the OP.

Solution B - Modify dummy

If you can't upgrade but do have access to .../py/Lib/multiprocessing/dummy/__init__.py, edit the start method of the DummyProcess class as follows (it should be around line 73):

    if hasattr(self._parent, '_children'):  # add this line
        self._parent._children[self] = None  # indent this existing line

Solution C - Monkey patch

DummyProcess is where the bug lives. Let's trace where it appears in the code you import, to make sure we patch it in the right place.

  • RandomForestRegressor
  • inherits from: ForestRegressor
  • which inherits from: BaseForest
  • which is created in: sklearn.ensemble.forest
  • which imports: Parallel from sklearn.externals.joblib
  • which imports: ThreadPool from multiprocessing.pool
  • which imports and stores: Process from multiprocessing.dummy
  • which is assigned from: DummyProcess, also in multiprocessing.dummy

The presence of DummyProcess in that chain guarantees that it has already been imported by the time RandomForestRegressor has been imported. Moreover, I believe we can reach the DummyProcess class before any instances of it are created, so we can modify the class once instead of having to hunt down instances to patch.

    # Let's make it available in our namespace:
    from sklearn.ensemble import RandomForestRegressor
    from multiprocessing import dummy as __mp_dummy

    # Now we can define a replacement and patch DummyProcess:
    def __DummyProcess_start_patch(self):  # pulled from an updated version of Python
        assert self._parent is __mp_dummy.current_process()  # modified to avoid further imports
        self._start_called = True
        if hasattr(self._parent, '_children'):
            self._parent._children[self] = None
        __mp_dummy.threading.Thread.start(self)  # modified to avoid further imports
    __mp_dummy.DummyProcess.start = __DummyProcess_start_patch

Unless I'm missing something, every DummyProcess instance created from now on will use the patched method, so the error should not occur.

For anyone using sklearn more broadly, I believe you can turn this around and make it work for all of sklearn rather than focusing on one module: import and patch DummyProcess as above before doing any sklearn imports, and sklearn will then use the patched class from the start. A sketch of that ordering follows below.
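As a rough sketch of that ordering (this restates the patch above with the imports rearranged; it is an illustration, not code from the original answer):

    # Sketch: patch DummyProcess.start first, then import sklearn,
    # so everything in sklearn picks up the patched class from the start.
    from multiprocessing import dummy as _mp_dummy

    def _patched_start(self):
        assert self._parent is _mp_dummy.current_process()
        self._start_called = True
        if hasattr(self._parent, '_children'):
            self._parent._children[self] = None
        _mp_dummy.threading.Thread.start(self)

    _mp_dummy.DummyProcess.start = _patched_start

    # Only now import sklearn (directly or indirectly):
    from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier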


Original answer:

While writing my comment I realized I may have found your problem - I think your Flask environment is using an older version of Python.

The reason is that in recent versions of Python's multiprocessing, the line that raises your error is protected by a condition:

    if hasattr(self._parent, '_children'):
        self._parent._children[self] = None

It looks like this bug was fixed during Python 2.7 (in 2.7.5, I believe). Maybe your Flask service is running on an earlier 2.7, or on 2.6?

Can you check your environment? If you can't update the interpreter, maybe we can find a way to patch multiprocessing so it doesn't crash.
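One quick way to check, as a sketch that reuses the test blueprint from the question's snippet (the route name is just a placeholder): add a throwaway route that reports which interpreter the Flask service is actually running on, and compare it with the python you use outside the service.

    # Hypothetical diagnostic route - the route name is an example, not from the post.
    import sys

    @test.route('/pyversion')
    def pyversion():
        # Returns e.g. "2.7.3 (default, ...)" so it can be compared with the
        # interpreter version seen outside the Flask service.
        return sys.version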