Adi*_*kar 5 java apache-spark spark-streaming
I am running my Spark Streaming application on a yarn-cluster using spark-submit. It works fine when I run it in local mode, but when I try to run it on the yarn-cluster with spark-submit, it runs for a while and then exits with the exception below.
Diagnostics: Exception from container-launch.
Container id: container_1435576266959_1208_02_000002
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Any help would be appreciated.
I found the solution.

In my Spark Streaming application I had set SparkConf.setMaster("local[*]") in the code, while also passing --master yarn-cluster to spark-submit.

The two master settings conflicted, so the application stayed in the "ACCEPTED" state and then exited.
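As a minimal sketch of the fix, the SparkConf should not hard-code a master URL when the application is meant to be submitted to YARN; the master is supplied on the spark-submit command line instead. (The application name and batch interval below are illustrative, not taken from the original post.)

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingApp {
    public static void main(String[] args) throws Exception {
        // Do NOT call setMaster("local[*]") here; let spark-submit's
        // --master flag decide where the application runs.
        SparkConf conf = new SparkConf().setAppName("MyStreamingApp");

        JavaStreamingContext jssc =
                new JavaStreamingContext(conf, Durations.seconds(10));

        // ... define input streams and transformations here ...

        jssc.start();
        jssc.awaitTermination();
    }
}
```

It can then be submitted with the master chosen at launch time, e.g. `spark-submit --master yarn-cluster --class StreamingApp app.jar`, and the same jar still works locally with `--master "local[*]"`.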