How can I ship C-compiled modules (e.g., python-Levenshtein) to every node in a Spark cluster?
I know that I can ship Python files in Spark using a standalone Python script (example code below):
from pyspark import SparkContext
sc = SparkContext("local", "App Name", pyFiles=['MyFile.py', 'MyOtherFile.py'])
But how can I ship modules that aren't plain ".py" files?
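For pure-Python packages, the same mechanisms also accept .zip and .egg archives, so shipping can look like this (a minimal sketch; deps.zip and extra_deps.zip are hypothetical archives of pure-Python modules):

from pyspark import SparkContext

# .zip and .egg archives are accepted wherever .py files are;
# workers put the archive on sys.path before running tasks.
sc = SparkContext("local", "App Name", pyFiles=['deps.zip'])
sc.addPyFile('extra_deps.zip')  # archives can also be added after startup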
I am running a Spark program on a large cluster (on which I do not have administrative privileges). numpy is not installed on the worker nodes, so I bundled numpy with my program, but I get the following error:
Traceback (most recent call last):
File "/home/user/spark-script.py", line 12, in <module>
import numpy
File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 170, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 6, in <module>
ImportError: cannot import name multiarray
The script itself is actually very simple:
from pyspark import SparkConf, SparkContext
sc = SparkContext()
sc.addPyFile('numpy.zip')
import numpy
a = sc.parallelize(numpy.array([12, 23, 34, 45, 56, 67, 78, 89, 90]))
print a.collect()
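For reference, numpy.zip was presumably built by zipping an installed numpy tree, roughly along these lines (a hypothetical reconstruction; the actual build step was not shown, and the path is purely illustrative):

import shutil

# Archive the installed numpy/ package directory into numpy.zip,
# so the archive contains a top-level numpy/ folder.
shutil.make_archive('numpy', 'zip', '/usr/local/lib/python2.7/dist-packages', 'numpy')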
I understand that the error occurs because numpy …
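The likely culprit is that numpy is not pure Python: the module that fails to import, multiarray, is a compiled C extension (a shared library), and Python's zipimport cannot load compiled extensions from inside a zip archive. A quick check on a machine where numpy is installed (a minimal sketch, assuming the older numpy build from the traceback):

import numpy.core.multiarray

# On this numpy/Python 2.7 combination this prints a path ending in
# multiarray.so — a compiled C extension, which a plain .zip of the
# package cannot supply to the workers.
print numpy.core.multiarray.__file__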