I am learning Spark and want to run the simplest possible cluster consisting of two physical machines. I have done all the basic setup and it seems fine. The output of the start-up script looks like this:
[username@localhost sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.master.Master-1-localhost.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.out
username@192.168.???.??: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
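For context, sbin/start-all.sh takes the worker list from conf/slaves. Judging by the output above, mine resolves to these two entries (the second address masked the same way as above):

localhost
192.168.???.??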
So there are no errors here, and it appears that the master is running along with both workers. However, when I open the web UI at 192.168.???.??:8080, it lists only one worker - the local one. My problem is similar to the one described here: Spark Clusters: worker info doesn't show on web UI, but there is nothing unusual in my /etc/hosts file. All it contains is:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
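If it matters: my understanding is that for the two machines to register with each other by hostname, each host would need entries mapping both hostnames to routable LAN addresses, along the lines of this sketch (hostnames and addresses are placeholders, not my actual values):

192.168.1.10 master-host
192.168.1.11 worker-host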
What am I missing? Both machines run Fedora Workstation x86_64.
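In case it is relevant: from the standalone docs I gather that the master's bind address can be set explicitly in conf/spark-env.sh rather than left to hostname resolution. A minimal sketch would be (the IP is a placeholder for the master's LAN address, not my actual value):

# conf/spark-env.sh on the master - bind to the LAN address instead of localhost
export SPARK_MASTER_IP=192.168.1.10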