Fra*_*son 8 apache-spark kubernetes
I have been trying to simply run the SparkPi example on Kubernetes with Spark 2.4.0, but it behaves quite differently from what the documentation describes.
I followed the guide. I built a vanilla docker image with the docker-image-tool.sh script and pushed it to my registry.
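A sketch of that build-and-push step, for reference; the registry address my-repo:10001 is taken from the pod spec below, and the explicit version tag is illustrative:

# Build the stock Spark image and push it to the private registry
# (run from the unpacked Spark 2.4.0 distribution directory).
./bin/docker-image-tool.sh -r my-repo:10001 -t 2.4.0 build
./bin/docker-image-tool.sh -r my-repo:10001 -t 2.4.0 push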
I launch the job from my spark folder with a command like this:
bin/spark-submit \
--master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=<spark-image> \
--conf spark.kubernetes.namespace=mynamespace \
--conf spark.kubernetes.container.image.pullSecrets=myPullSecret \
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
Apart from the namespace and pullSecrets options, this is almost identical to the documentation. I need those options because of restrictions in a multi-user kubernetes environment. Even so, I also tried the default namespace and got the same results.
What happens is that the pod gets stuck in the Failed state, and two unusual things occur:
1. MountVolume.SetUp failed for volume "spark-conf-volume" : configmaps "spark-pi-1547643379283-driver-conf-map" not found. This means k8s could not mount the config map into /opt/spark/conf, where a properties file is expected. A config map with exactly that name does exist, so I don't see why k8s can't mount it (see the kubectl check after the log below).
2. The driver container launches with empty variables. Container log:
CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -cp ':/opt/spark/jars/*' -Xms -Xmx -Dspark.driver.bindAddress=10.11.12.13
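To double-check symptom 1, a couple of diagnostic commands; the config map name is taken from the error message above, and the matching driver pod name is my assumption:

# Does the config map actually exist in the job's namespace?
kubectl -n mynamespace get configmap spark-pi-1547643379283-driver-conf-map

# What do the volume mount events on the driver pod say?
kubectl -n mynamespace describe pod spark-pi-1547643379283-driver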
You can control some of these variables directly with properties, e.g. spark.kubernetes.driverEnv.SPARK_DRIVER_CLASS, but that should not be necessary (in this case the class is already given with --class).
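For illustration only, such an override would be passed as extra --conf flags on the same spark-submit command. This is a workaround sketch, not something the documentation requires, and the 1g memory value is an arbitrary assumption:

# Sketch: force-feed the env vars that arrive empty via
# spark.kubernetes.driverEnv.* (values here are illustrative).
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.driverEnv.SPARK_DRIVER_MEMORY=1g \
  --conf spark.kubernetes.driverEnv.SPARK_DRIVER_CLASS=org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar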
For clarity, the following environment variables are empty: SPARK_DRIVER_MEMORY, SPARK_DRIVER_CLASS and SPARK_DRIVER_ARGS. SPARK_CLASSPATH is also missing the container-local jar I specified on the command line (spark-examples_2.11-2.4.0.jar).
It seems that even if we solve the config map mounting problem, it won't help with SPARK_DRIVER_MEMORY, since the properties file contains no equivalent configuration parameter for it.
How do I fix the config map mounting problem, and how do I work around these empty environment variables?
The kubernetes yaml config was generated by Spark, but I'm posting it here in case it helps:
pod-spec.yaml
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "spark-pi-1547644451461-driver",
    "namespace": "frank",
    "selfLink": "/api/v1/namespaces/frank/pods/spark-pi-1547644451461-driver",
    "uid": "90c9577c-1990-11e9-8237-00155df6cf35",
    "resourceVersion": "19241392",
    "creationTimestamp": "2019-01-16T13:13:50Z",
    "labels": {
      "spark-app-selector": "spark-6eafcf5825e94637974f39e5b8512028",
      "spark-role": "driver"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "spark-local-dir-1",
        "emptyDir": {}
      },
      {
        "name": "spark-conf-volume",
        "configMap": {
          "name": "spark-pi-1547644451461-driver-conf-map",
          "defaultMode": 420
        }
      },
      {
        "name": "default-token-rfz9m",
        "secret": {
          "secretName": "default-token-rfz9m",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "spark-kubernetes-driver",
        "image": "my-repo:10001/spark:latest",
        "args": [
          "driver",
          "--properties-file",
          "/opt/spark/conf/spark.properties",
          "--class",
          "org.apache.spark.examples.SparkPi",
          "spark-internal"
        ],
        "ports": [
          {
            "name": "driver-rpc-port",
            "containerPort": 7078,
            "protocol": "TCP"
          },
          {
            "name": "blockmanager",
            "containerPort": 7079,
            "protocol": "TCP"
          },
          {
            "name": "spark-ui",
            "containerPort": 4040,
            "protocol": "TCP"
          }
        ],
        "env": [
          {
            "name": "SPARK_DRIVER_BIND_ADDRESS",
            "valueFrom": {
              "fieldRef": {
                "apiVersion": "v1",
                "fieldPath": "status.podIP"
              }
            }
          },
          {
            "name": "SPARK_LOCAL_DIRS",
            "value": "/var/data/spark-368106fd-09e1-46c5-a443-eec0b64b5cd9"
          },
          {
            "name": "SPARK_CONF_DIR",
            "value": "/opt/spark/conf"
          }
        ],
        "resources": {
          "limits": {
            "memory": "1408Mi"
          },
          "requests": {
            "cpu": "1",
            "memory": "1408Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "spark-local-dir-1",
            "mountPath": "/var/data/spark-368106fd-09e1-46c5-a443-eec0b64b5cd9"
          },
          {
            "name": "spark-conf-volume",
            "mountPath": "/opt/spark/conf"
          },
          {
            "name": "default-token-rfz9m",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Never",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "kube-worker16",
    "securityContext": {},
    "imagePullSecrets": [
      {
        "name": "mypullsecret"
      }
    ],
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Failed",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-01-16T13:15:11Z"
      },
      {
        "type": "Ready",
        "status": "False",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-01-16T13:15:11Z",
        "reason": "ContainersNotReady",
        "message": "containers with unready status: [spark-kubernetes-driver]"
      },
      {
        "type": "ContainersReady",
        "status": "False",
        "lastProbeTime": null,
        "lastTransitionTime": null,
        "reason": "ContainersNotReady",
        "message": "containers with unready status: [spark-kubernetes-driver]"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-01-16T13:13:50Z"
      }
    ],
    "hostIP": "10.1.2.3",
    "podIP": "10.11.12.13",
    "startTime": "2019-01-16T13:15:11Z",
    "containerStatuses": [
      {
        "name": "spark-kubernetes-driver",
        "state": {
          "terminated": {
            "exitCode": 1,
            "reason": "Error",
            "startedAt": "2019-01-16T13:15:23Z",
            "finishedAt": "2019-01-16T13:15:23Z",
            "containerID": "docker://931908c3cfa6c2607c9d493c990b392f1e0a8efceff0835a16aa12afd03ec275"
          }
        },
        "lastState": {},
        "ready": false,
        "restartCount": 0,
        "image": "my-repo:10001/spark:latest",
        "imageID": "docker-pullable://my-repo:10001/spark@sha256:58e319143187d3a0df14ceb29a874b35756c4581265f0e1de475390a2d3e6ed7",
        "containerID": "docker://931908c3cfa6c2607c9d493c990b392f1e0a8efceff0835a16aa12afd03ec275"
      }
    ],
    "qosClass": "Burstable"
  }
}
config-map.yml
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "spark-pi-1547644451461-driver-conf-map",
    "namespace": "frank",
    "selfLink": "/api/v1/namespaces/frank/configmaps/spark-pi-1547644451461-driver-conf-map",
    "uid": "90eda9e3-1990-11e9-8237-00155df6cf35",
    "resourceVersion": "19241350",
    "creationTimestamp": "2019-01-16T13:13:50Z",
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": "spark-pi-1547644451461-driver",
        "uid": "90c9577c-1990-11e9-8237-00155df6cf35",
        "controller": true
      }
    ]
  },
  "data": {
    "spark.properties": "#Java properties built from Kubernetes config map with name: spark-pi-1547644451461-driver-conf-map\r\n#Wed Jan 16 13:14:12 GMT 2019\r\nspark.kubernetes.driver.pod.name=spark-pi-1547644451461-driver\r\nspark.driver.host=spark-pi-1547644451461-driver-svc.frank.svc\r\nspark.kubernetes.container.image=aow-repo\\:10001/spark\\:latest\r\nspark.kubernetes.container.image.pullSecrets=mypullsecret\r\nspark.executor.instances=5\r\nspark.app.id=spark-6eafcf5825e94637974f39e5b8512028\r\nspark.app.name=spark-pi\r\nspark.driver.port=7078\r\nspark.kubernetes.resource.type=java\r\nspark.master=k8s\\://https\\://10.1.2.2\\:6443\r\nspark.kubernetes.python.pyFiles=\r\nspark.kubernetes.executor.podNamePrefix=spark-pi-1547644451461\r\nspark.kubernetes.namespace=frank\r\nspark.driver.blockManager.port=7079\r\nspark.jars=/opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar\r\nspark.submit.deployMode=cluster\r\nspark.kubernetes.submitInDriver=true\r\n"
  }
}
Spark on Kubernetes has a bug here.
During Spark job submission to the Kubernetes cluster, the Spark Driver Pod is created first: https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L130.
Only after that are all the other resources created (e.g. the Spark Driver Service), including the ConfigMap: https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L135.
This is done so that the Spark Driver Pod can be set as the ownerReference of all those resources (which cannot be done before the owner Pod itself exists): https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L134.
That delegates the deletion of all those resources to Kubernetes itself, which makes collecting unused resources in the cluster much easier: to clean up, all that needs to be deleted is the Spark Driver Pod. But it also means there is a window in which Kubernetes instantiates the Spark Driver Pod before the ConfigMap is ready, and that is what causes your problem.
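One way to observe this ordering on a live cluster is to compare creation timestamps while a job is being submitted; this is only a sketch, with the resource names taken from the dumps above:

# The driver pod is created before its conf map, so for a short window
# the kubelet can try to mount a config map that does not exist yet.
kubectl -n frank get pod spark-pi-1547644451461-driver \
  -o jsonpath='{.metadata.creationTimestamp}{"\n"}'
kubectl -n frank get configmap spark-pi-1547644451461-driver-conf-map \
  -o jsonpath='{.metadata.creationTimestamp}{"\n"}'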
This is still the case in 2.4.4.
I think the problem was mainly that my docker "latest" tag pointed to an image of a previous Spark version (v2.3.2). Something seems to have changed between versions in how the container receives its arguments from spark-submit and kubernetes. The remaining problems I have launching spark pipelines seem to be related to serviceAccounts (probably material for another question).
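A quick way to verify what a moving "latest" tag actually contains; this assumes the stock Spark image layout, where spark-submit lives under /opt/spark/bin:

# Print the Spark version baked into the image behind the "latest" tag.
docker run --rm --entrypoint /opt/spark/bin/spark-submit \
  my-repo:10001/spark:latest --version

Building and pushing with an explicit version tag (e.g. -t 2.4.0, as in the sketch near the top) avoids this class of stale-tag problem.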