I am trying to submit a Spark 2.3 job to a Kubernetes cluster from Scala, using the Play framework.
I have also tried a simple Scala program without the Play framework.
The job gets submitted to the k8s cluster, but stateChanged and infoChanged are never invoked. I would also like to be able to get handle.getAppId.
I am submitting the job with spark-submit, as described here:
$ bin/spark-submit \
--master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=<spark-image> \
local:///path/to/examples.jar
Here is the code that runs the job:
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}
import scala.util.control.NonFatal

def index = Action {
  try {
    val spark = new SparkLauncher()
      .setMaster("my k8 apiserver host")
      .setVerbose(true)
      .addSparkArg("--verbose")
      .setMainClass("myClass")
      .setAppResource("hdfs://server/inputs/my.jar")
      .setConf("spark.app.name", "myapp")
      .setConf("spark.executor.instances", "5")
      .setConf("spark.kubernetes.container.image", "mydockerimage")
      .setDeployMode("cluster")
      .startApplication(new SparkAppHandle.Listener() {
        def infoChanged(handle: SparkAppHandle): Unit = {
          System.out.println("Spark App Id [" + handle.getAppId
            + "] Info Changed. State [" + handle.getState + "]")
        }
        def stateChanged(handle: SparkAppHandle): Unit = {
          System.out.println("Spark App Id [" + handle.getAppId
            + "] State Changed. State [" + handle.getState + "]")
          // Note: System.exit here would bring down the whole Play server JVM
          if (handle.getState.toString == "FINISHED") System.exit(0)
        }
      })
    Ok(spark.getState().toString())
  } catch {
    case NonFatal(e) =>
      println("failed with exception: " + e)
      InternalServerError("failed with exception: " + e)
  }
}
SparkLauncher allows you to run a spark-submit command programmatically. It spawns spark-submit as a separate child process of the JVM. You need to wait in your client's main function until the driver is launched in K8s and you receive the listener callbacks. Otherwise the JVM's main thread exits, killing the client without reporting anything.
 -----------------------                       -----------------------
 |      User App       |     spark-submit      |      Spark App      |
 |                     |  -------------------> |                     |
 |         ------------|                       |-------------        |
 |         |           |        hello          |            |        |
 |         | L. Server |<----------------------| L. Backend |        |
 |         |           |                       |            |        |
 |         -------------                       -----------------------
 |               |                                        ^
 |               v                                        |
 |       -------------                                    |
 |       |            |      <per-app channel>            |
 |       | App Handle |<------------------------------------
 |       |            |
 -----------------------
I added a j.u.c.CountDownLatch implementation that prevents the main thread from exiting until appState.isFinal is reached.
import java.util.concurrent.CountDownLatch
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

// Named Launcher so it does not shadow org.apache.spark.launcher.SparkLauncher
object Launcher {
  def main(args: Array[String]): Unit = {
    val countDownLatch = new CountDownLatch(1)
    val handle = new SparkLauncher()
      .setMaster("k8s://http://127.0.0.1:8001")
      .setAppResource("local:/{PATH}/spark-examples_2.11-2.3.0.jar")
      .setConf("spark.app.name", "spark-pi")
      .setMainClass("org.apache.spark.examples.SparkPi")
      .setConf("spark.executor.instances", "5")
      .setConf("spark.kubernetes.container.image", "spark:spark-docker")
      .setConf("spark.kubernetes.driver.pod.name", "spark-pi-driver")
      .setDeployMode("cluster")
      .startApplication(new SparkAppHandle.Listener() {
        def infoChanged(handle: SparkAppHandle): Unit = {}
        def stateChanged(handle: SparkAppHandle): Unit = {
          val appState = handle.getState()
          println(s"Spark App Id [${handle.getAppId}] State Changed. State [${handle.getState}]")
          if (appState != null && appState.isFinal) {
            countDownLatch.countDown() // release once the driver reaches a final state
          }
        }
      })
    countDownLatch.await() // keep the main thread alive until the Spark driver exits
  }
}
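For the Play setup in the question, the same idea can be made non-blocking by completing a Scala Promise from the listener instead of parking a thread on a latch. This is only a minimal sketch, assuming the method lives inside your controller and that Play's Action.async with an implicit ExecutionContext is available; launchAndAwait and buildLauncher are hypothetical helper names, not part of the SparkLauncher API.

import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}
import scala.concurrent.{Future, Promise}

// Hypothetical helper: completes a Future when the driver reaches a final state.
def launchAndAwait(launcher: SparkLauncher): Future[SparkAppHandle.State] = {
  val done = Promise[SparkAppHandle.State]()
  launcher.startApplication(new SparkAppHandle.Listener() {
    def infoChanged(handle: SparkAppHandle): Unit = {}
    def stateChanged(handle: SparkAppHandle): Unit = {
      val state = handle.getState
      if (state != null && state.isFinal) done.trySuccess(state)
    }
  })
  done.future
}

// Usage in a controller, where buildLauncher() returns a configured SparkLauncher:
// def index = Action.async {
//   launchAndAwait(buildLauncher()).map(state => Ok(s"Driver finished: $state"))
// }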