I'm trying to connect to a Spark standalone cluster from an application, running everything on a single machine. I start the standalone master with:
bash start-master.sh
Then I start a worker with:
bash spark-class org.apache.spark.deploy.worker.Worker spark://PC:7077 -m 512m
(I allocated 512 MB to it.)
On the master's web UI at:
http://localhost:8080
I can see that both the master and the worker are running.
Then I try to connect to the cluster from my application with:
JavaSparkContext sc = new JavaSparkContext("spark://PC:7077", "myapplication");
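For context, the same connection can be made through a SparkConf object, which also lets the driver advertise an explicit address instead of relying on hostname resolution. This is only a sketch against the Spark 1.x Java API; the host values are placeholders for whatever your master's web UI actually shows:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ConnectToStandalone {
    public static void main(String[] args) {
        // Use the exact master URL shown at the top of the web UI
        // (http://localhost:8080) -- Spark's Akka layer requires an
        // exact hostname match, not just an equivalent address.
        SparkConf conf = new SparkConf()
                .setAppName("myapplication")
                .setMaster("spark://PC:7077")           // placeholder: copy from the web UI
                .set("spark.driver.host", "127.0.0.1"); // placeholder: an address the worker can reach

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build RDDs and run jobs here ...
        sc.stop();
    }
}
```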
When I run the application, it crashes with the following error messages:
14/11/01 22:53:26 INFO client.AppClient$ClientActor: Connecting to master spark://PC:7077...
14/11/01 22:53:26 INFO spark.SparkContext: Starting job: collect at App.java:115
14/11/01 22:53:26 INFO scheduler.DAGScheduler: Got job 0 (collect at App.java:115) with 2 output partitions (allowLocal=false)
14/11/01 22:53:26 INFO scheduler.DAGScheduler: Final stage: Stage 0(collect at App.java:115)
14/11/01 22:53:26 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/11/01 22:53:26 INFO scheduler.DAGScheduler: Missing parents: List()
14/11/01 22:53:26 INFO scheduler.DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at App.java:109), which has no missing parents
14/11/01 22:53:27 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (ParallelCollectionRDD[0] at parallelize at App.java:109)
14/11/01 22:53:27 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/11/01 22:53:42 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/11/01 22:53:46 INFO client.AppClient$ClientActor: Connecting to master spark://PC:7077...
14/11/01 22:53:57 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/11/01 22:54:06 INFO client.AppClient$ClientActor: Connecting to master spark://PC:7077...
14/11/01 22:54:12 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/11/01 22:54:26 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
14/11/01 22:54:26 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/11/01 22:54:26 INFO scheduler.DAGScheduler: Failed to run collect at App.java:115
Exception in thread "main" 14/11/01 22:54:26 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
14/11/01 22:54:26 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
Any ideas what is going on?
P.S. I'm using the prebuilt Spark distribution, spark-1.1.0-bin-hadoop2.4.
Thanks.
Make sure that both the standalone worker and the Spark driver connect to the master at the exact address listed in its web UI / printed in its startup log message. Spark uses Akka for some of its control-plane communication, and Akka is very picky about hostnames, so these need to match exactly.
There are a few options for controlling which hostnames / network interfaces the driver and master bind to. Probably the simplest is to set the SPARK_LOCAL_IP environment variable to control the address that the master / driver binds to. See http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/connectivity_issues.html for an overview of the other settings that affect network address binding.
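As a sketch of the environment-variable route on a single machine (the script paths and the 127.0.0.1 address are assumptions; adjust them for your install and for the address your master actually prints):

```shell
#!/bin/sh
# Bind master, worker, and driver to the same address so the Akka
# hostnames match exactly. 127.0.0.1 is fine for a one-machine setup.
export SPARK_LOCAL_IP=127.0.0.1

# Start the master, then point the worker at the URL the master logs.
# In spark-1.1.0-bin-hadoop2.4 these scripts live under sbin/ and bin/.
./sbin/start-master.sh
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://127.0.0.1:7077 -m 512m
```

The application would then connect with the same URL, e.g. new JavaSparkContext("spark://127.0.0.1:7077", "myapplication").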