kos*_*gin · 6 · Tags: ivy, apache-spark, pyspark, graphframes, spark-submit
I have been fighting with this all day. I can install and use the package (graphframes) from the Spark shell or a connected Jupyter notebook, but I want to move it to a Kubernetes-based Spark environment with spark-submit. My Spark version is 3.0.1. I downloaded the latest available .jar file from spark-packages (graphframes-0.8.1-spark3.0-s_2.12.jar) and put it into the jars folder. I build my image from a variant of the standard Spark Dockerfile. My spark-submit command looks like this:
$SPARK_HOME/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--conf spark.executor.instances=$2 \
--conf spark.kubernetes.container.image=myimage.io/repositorypath \
--packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 \
--jars "local:///opt/spark/jars/graphframes-0.8.1-spark3.0-s_2.12.jar" \
path/to/my/script/script.py
But it fails with this error:
Ivy Default Cache set to: /opt/spark/.ivy2/cache
The jars for the packages stored in: /opt/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
graphframes#graphframes added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5-1.0.xml (No such file or directory)
Has anyone seen something like this? Do you know what I am doing wrong here?
Rid*_*wan · 12
Adding this configuration to spark-submit worked for me. It moves Ivy's cache and home to /tmp, which is writable inside the container; the FileNotFoundException above appears to happen because the default location under /opt/spark/.ivy2 is not writable by the user the stock Spark image runs as:
spark-submit \
--conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
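Applied to the command from the question, that would look roughly as follows (a sketch combining the two; master, image, and paths are copied from the question unchanged):

$SPARK_HOME/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--conf spark.executor.instances=$2 \
--conf spark.kubernetes.container.image=myimage.io/repositorypath \
--conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
--packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 \
--jars "local:///opt/spark/jars/graphframes-0.8.1-spark3.0-s_2.12.jar" \
path/to/my/script/script.py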
OK, I solved my problem. I am not sure whether it works for other packages, but it let me run graphframes in the setup described above. First, rename the downloaded jar for convenience:
# Rename the jar for convenience
mv ./graphframes-0.8.1-spark3.0-s_2.12.jar ./graphframes.jar
# Extract jar contents
jar xf graphframes.jar
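Extracting works because the graphframes jar from spark-packages ships the Python package inside it. A quick sanity check (not from the original post) before copying:

# List the Python sources bundled in the jar; expect entries such as
# graphframes/__init__.py and graphframes/graphframe.py
jar tf graphframes.jar | grep '^graphframes/'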
Now the first point: I put all the packages I use into a single dependencies folder, which I later submit to Kubernetes in zipped form. The logic behind this folder is explained in another question of mine, which I again answered myself; see there. Now copy the graphframes folder from the contents extracted in the previous step into your dependencies folder:
cp -r ./graphframes $SPARK_HOME/path/to/your/dependencies
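For completeness, a sketch of zipping that folder so the Python package reaches the driver and executors; the archive name and the --py-files usage are illustrative, and the exact packaging is the one described in the linked question:

# Zip the contents of the dependencies folder so that graphframes/
# sits at the archive root and `import graphframes` resolves
cd $SPARK_HOME/path/to/your/dependencies && zip -r ../dependencies.zip .

# The archive can then be handed to spark-submit, e.g.:
#   --py-files local:///opt/spark/path/to/dependencies.zip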
Finally, the spark-submit command now references only the renamed jar, with no --packages option:
$SPARK_HOME/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--conf spark.executor.instances=$2 \
--conf spark.kubernetes.container.image=docker.io/path/to/your/image \
--jars "local:///opt/spark/jars/graphframes.jar" \ ...
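As a quick end-to-end check, a tiny stand-in for script.py can be used (a sketch; the smoke_test.py name and the toy graph are illustrative, not from the original post):

cat > smoke_test.py <<'EOF'
from pyspark.sql import SparkSession
from graphframes import GraphFrame  # resolved from the extracted/zipped Python package

spark = SparkSession.builder.appName("graphframes-smoke-test").getOrCreate()

# Toy graph: vertices need an `id` column, edges need `src` and `dst`
v = spark.createDataFrame([("a", "Alice"), ("b", "Bob")], ["id", "name"])
e = spark.createDataFrame([("a", "b", "follows")], ["src", "dst", "relationship"])

g = GraphFrame(v, e)  # fails here if the graphframes jar is missing on the JVM side
g.inDegrees.show()

spark.stop()
EOF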
I am in a rush right now, but in the near future I will edit this post and add a link to a short article about handling dependencies in PySpark. Hope it is useful to someone :)