Spark UI History Server on Kubernetes?

JDe*_*Dev 5 apache-spark kubernetes

I launch applications on a Kubernetes cluster via spark-submit, and I can only see the Spark UI by accessing http://driver-pod:port.

How can I start a Spark UI History Server on the cluster? And how can I make all running Spark jobs register with that History Server?

Is this possible?

Qas*_*raz 6

Yes, it is possible. In short, you need to ensure the following:

  • Make sure all your applications write their event logs to a shared location (a filesystem, s3, hdfs, etc.).
  • Deploy a History Server in your cluster that has access to that event-log location.

Now, Spark (by default) only reads event logs from a filesystem path, so I will elaborate on this case using the spark operator:

  • Create a PVC with a volume type that supports ReadWriteMany mode, for example an NFS volume. The following snippet assumes you already have a storage class configured for NFS (nfs-volume):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-pvc
  namespace: spark-apps
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-volume
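To create and verify the claim, something like the following should work (the filename `spark-pvc.yaml` is just an example name for the manifest above, and the commands assume `kubectl` is pointed at the right cluster):

```shell
# Create the namespace (if it does not exist yet) and the claim.
kubectl create namespace spark-apps
kubectl apply -f spark-pvc.yaml

# Confirm the claim is Bound before launching applications.
kubectl get pvc spark-pvc -n spark-apps
```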
  • Make sure all your Spark applications have event logging enabled and pointed at the correct path:
  sparkConf:
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "file:/mnt"
  • Mount the event-log volume into each application pod (you could also use the operator's mutating webhook to centralize this). A sample manifest with the above configuration looks like this:
---
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-java-pi
  namespace: spark-apps

spec:
  type: Java
  mode: cluster

  image: gcr.io/spark-operator/spark:v2.4.4
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar"

  imagePullPolicy: Always
  sparkVersion: 2.4.4
  sparkConf:
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "file:/mnt"
  restartPolicy:
    type: Never
  volumes:
    - name: spark-data
      persistentVolumeClaim:
        claimName: spark-pvc
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 2.4.4
    serviceAccount: spark
    volumeMounts:
      - name: spark-data
        mountPath: /mnt
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 2.4.4
    volumeMounts:
      - name: spark-data
        mountPath: /mnt

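After submitting the application, you can confirm that event logs actually land on the shared volume (the manifest filename and driver pod name below are illustrative; the operator derives the driver pod name from the application name):

```shell
# Submit the SparkApplication defined above.
kubectl apply -f spark-java-pi.yaml

# Once the driver is running, event log files should appear under /mnt.
kubectl exec -n spark-apps spark-java-pi-driver -- ls /mnt
```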
  • Deploy a Spark History Server that mounts the same shared volume. You will then be able to browse the events in the History Server UI:
apiVersion: apps/v1
kind: Deployment

metadata:
  name: spark-history-server
  namespace: spark-apps

spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark-history-server

  template:
    metadata:
      name: spark-history-server
      labels:
        app: spark-history-server

    spec:
      containers:
        - name: spark-history-server
          image: gcr.io/spark-operator/spark:v2.4.0

          resources:
            requests:
              memory: "512Mi"
              cpu: "100m"

          command:
            -  /sbin/tini
            - -s
            - --
            - /opt/spark/bin/spark-class
            - -Dspark.history.fs.logDirectory=/data/
            - org.apache.spark.deploy.history.HistoryServer

          ports:
            - name: http
              protocol: TCP
              containerPort: 18080

          readinessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http

          livenessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http

          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: spark-pvc
          readOnly: true

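Before wiring up permanent access, you can sanity-check the server with a quick port-forward (using the Deployment name defined above):

```shell
# Forward local port 18080 to the History Server.
kubectl port-forward deployment/spark-history-server 18080:18080 -n spark-apps

# While the forward is running, browse http://localhost:18080
```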

Feel free to configure an Ingress or Service to access the UI.
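For example, a minimal Service exposing the UI port might look like the sketch below (the label selector matches the Deployment above; the Service name is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spark-history-server
  namespace: spark-apps
spec:
  selector:
    app: spark-history-server
  ports:
    - name: http
      port: 18080
      targetPort: http
```

An Ingress can then route an external hostname to this Service on port 18080.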

You can also use Google Cloud Storage, Azure Blob Storage, or AWS S3 as the event-log location. To do this you need to install some extra jars, so I would recommend taking a look at the lightbend spark history server image and charts.
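For instance, with the required jars on the classpath, the S3 variant of the configuration is a sketch like the following (the bucket name is a placeholder, and credentials would have to be supplied separately, e.g. via IAM or Hadoop s3a properties):

```yaml
sparkConf:
  "spark.eventLog.enabled": "true"
  # s3a:// paths require hadoop-aws and a matching aws-sdk jar in the image
  "spark.eventLog.dir": "s3a://my-spark-logs/events"
```

The History Server would then point `spark.history.fs.logDirectory` at the same `s3a://` path instead of a mounted volume.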