We are running a spark-submit command on a Python script that uses Spark to parallelize object detection in Python with Caffe. The script itself runs perfectly fine as a pure-Python script, but it returns an import error when used with the Spark code. I know the Spark code is not the problem, because it runs perfectly well on my home machine but not on AWS. I'm not sure whether this has to do with the environment variables; it's as if they aren't being picked up.
These environment variables are set:
SPARK_HOME=/opt/spark/spark-2.0.0-bin-hadoop2.7
PATH=$SPARK_HOME/bin:$PATH
PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
PYTHONPATH=/opt/caffe/python:${PYTHONPATH}
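For context, here is a minimal sketch of the kind of driver script involved; the file names, model files, and function below are hypothetical stand-ins, not the actual code:

from pyspark import SparkContext
import caffe  # succeeds on the driver, where PYTHONPATH includes /opt/caffe/python

sc = SparkContext(appName="caffe-detection")

def detect(image_path):
    # the function is pickled by cloudpickle and re-imports caffe on the executor,
    # which is where the failure below occurs
    import caffe
    net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)
    return image_path, net is not None

results = sc.parallelize(["img1.jpg", "img2.jpg"]).map(detect).collect()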
The error:
16/10/03 01:36:21 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 172.31.50.167): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 161, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 54, in read_command
command = serializer._read_with_length(file)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 664, in subimport
__import__(name)
ImportError: ('No module named caffe', <function subimport at 0x7efc34a68b90>, ('caffe',))
Does anyone know why this would be a problem?
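One quick way to narrow it down is to probe the executors directly. This is only a diagnostic sketch, assuming the same SparkContext sc as the failing job:

def probe(_):
    try:
        import caffe
        return "ok: " + caffe.__file__
    except ImportError as exc:
        return "failed: " + str(exc)

print(sc.parallelize(range(4), 4).map(probe).collect())

If every partition reports a failure, the worker Python processes simply cannot see /opt/caffe/python, regardless of what the driver's shell exports.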
This package from Yahoo handles what we are trying to do by shipping Caffe as a jar dependency and then using it from Python again. But I haven't found any resources on how to build it and import it myself.
You probably haven't compiled the caffe Python wrapper in your AWS environment. For reasons that completely escape me (and several other people, https://github.com/BVLC/caffe/issues/2440), pycaffe is not available as a pypi package, so you have to compile it yourself. If you are in an AWS EB environment, you should follow the compile/make instructions here, or automate them with ebextensions: http://caffe.berkeleyvision.org/installation.html#python
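Once pycaffe is actually built on every worker node, also note that environment variables exported in the driver's shell are not automatically propagated to the executor processes. One way to pass the path explicitly is via Spark's executorEnv configuration; this is a sketch, assuming the script constructs its own SparkContext rather than relying on spark-submit defaults:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("caffe-detection")
        # make the compiled pycaffe package visible to the executors' Python workers
        .set("spark.executorEnv.PYTHONPATH",
             "/opt/caffe/python:/opt/spark/spark-2.0.0-bin-hadoop2.7/python"))
sc = SparkContext(conf=conf)

The same value can also be set in conf/spark-env.sh on each node, or passed on the command line with --conf spark.executorEnv.PYTHONPATH=... when calling spark-submit.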