I have a Spark cluster set up with one master and three workers. I also have Spark installed on a CentOS VM. I'm trying to run a Spark shell from my local VM that connects to the master and lets me execute simple Scala code. So, this is the command I run on my local VM:
bin/spark-shell --master spark://spark01:7077
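For reference, the same connection can also be made while requesting the executor resources explicitly, in case the defaults are not what you expect (a sketch; the memory and core values here are only illustrative, not taken from my setup):

bin/spark-shell --master spark://spark01:7077 --executor-memory 512m --total-executor-cores 6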
The shell gets to the point where I can enter Scala code. It says executors have been granted (x3, one per worker). If I look at the master's UI, I can see one running application, the Spark shell. All workers are ALIVE, with 2/2 cores used and 512 MB (of 5 GB) allocated to the application. So, I try to execute the following Scala code:
sc.parallelize(1 to 100).count
Unfortunately, that command does not work. The shell just prints the same warnings endlessly:
INFO SparkContext: Starting job: count at <console>:13
INFO DAGScheduler: Got job 0 (count at <console>:13) with 2 output partitions (allowLocal=false)
INFO DAGScheduler: Final stage: Stage 0(count at <console>:13) with 2 output partitions (allowLocal=false)
INFO DAGScheduler: Parents of final stage: List()
INFO DAGScheduler: Missing parents: List()
INFO DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at <console>:13), which has no missing parents
INFO …
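For completeness, this is the same job expressed as a minimal standalone application that could be submitted instead of using the shell (a sketch only; the object name is made up, and the master URL is taken from the spark-shell command above):

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: runs the same count job as a self-contained app.
// "CountTest" is a placeholder name; the master URL matches the one used with spark-shell.
object CountTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("CountTest")
      .setMaster("spark://spark01:7077")
    val sc = new SparkContext(conf)
    // Distribute 1..100 across the executors and count the elements.
    println(sc.parallelize(1 to 100).count())
    sc.stop()
  }
}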