Ade*_*nde 6 java hadoop-yarn apache-spark
I set up my YARN cluster and my Spark cluster on the same machine, but now I need to run a Spark job on YARN in client mode.
Here is my job's sample configuration:
SparkConf sparkConf = new SparkConf(true).setAppName("SparkQueryApp")
        .setMaster("yarn-client") // "yarn-cluster" or "yarn-client"
        .set("es.nodes", "10.0.0.207")
        .set("es.nodes.discovery", "false")
        .set("es.cluster", "wp-es-reporting-prod")
        .set("es.scroll.size", "5000")
        .setJars(JavaSparkContext.jarOfClass(Demo.class))
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.default.parallelism", String.valueOf(cpus * 2))
        .set("spark.executor.memory", "10g")
        .set("spark.num.executors", "40")
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.dynamicAllocation.minExecutors", "10")
        .set("spark.dynamicAllocation.maxExecutors", "50")
        .set("spark.logConf", "true");
This doesn't seem to work when I try to run my Spark job with

java -jar spark-test-job.jar

I got this exception:
405472 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to
server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
sleepTime=1 SECONDS)
406473 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to
server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
...
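For what it's worth, 0.0.0.0:8032 is the out-of-the-box default for yarn.resourcemanager.address, so I assume the client launched with java -jar never picks up my yarn-site.xml (it is not on the classpath the way it would be under spark-submit with HADOOP_CONF_DIR set). A minimal sketch of what I have been considering, using Spark's spark.hadoop.* passthrough to override the ResourceManager address directly in code (10.0.0.207:8032 is an assumption; substitute the actual RM host and port):

```java
import org.apache.spark.SparkConf;

// Sketch only, not a confirmed fix. Any "spark.hadoop.*" property is copied
// into the underlying Hadoop Configuration, so this overrides the
// yarn.resourcemanager.address default of 0.0.0.0:8032 for the client.
SparkConf sparkConf = new SparkConf(true)
        .setMaster("yarn-client")
        // Assumed RM location -- replace with your ResourceManager host:port
        .set("spark.hadoop.yarn.resourcemanager.address", "10.0.0.207:8032");
```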
Any help?