Value of HADOOP_CONF_DIR from the cluster

nis*_*013 6 hadoop-yarn apache-spark

I set up a cluster (YARN) using Ambari, with 3 VMs as hosts.

Where can I find the value for HADOOP_CONF_DIR?

# Run on a YARN cluster
# (`yarn-cluster` can also be `yarn-client` for client mode; an inline
# comment after a trailing backslash would break the command)
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000
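On an Ambari-managed cluster the client configuration is usually deployed to /etc/hadoop/conf on every host, but the exact path varies by installation. As a hedged sketch (the `find_hadoop_conf` helper and the candidate paths are illustrative, not part of Spark or Ambari), you can probe a few common locations for core-site.xml:

```shell
# Sketch: print the first candidate directory that contains core-site.xml.
# The candidate paths are common defaults, not guaranteed for every install.
find_hadoop_conf() {
  for d in "$@"; do
    if [ -f "$d/core-site.xml" ]; then
      printf '%s\n' "$d"
      return 0
    fi
  done
  return 1
}

# Typical usage on a cluster node:
# export HADOOP_CONF_DIR="$(find_hadoop_conf /etc/hadoop/conf /usr/local/hadoop/etc/hadoop)"
```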

小智 9

Install Hadoop as well. In my case, I installed it in /usr/local/hadoop.

Set the Hadoop environment variable:

export HADOOP_INSTALL=/usr/local/hadoop

Then set the conf directory:

export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
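Before running spark-submit, it can help to verify that the directory you exported actually contains the client config files Spark reads. A small hedged sketch (the `check_conf` helper is illustrative, not part of Hadoop):

```shell
# Sketch: warn about any standard client config file missing from a directory.
check_conf() {
  dir="$1"
  for f in core-site.xml hdfs-site.xml yarn-site.xml; do
    [ -f "$dir/$f" ] || echo "missing: $dir/$f"
  done
}

# Typical usage after the exports above:
# check_conf "$HADOOP_CONF_DIR"
```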

Atu*_*man 4

Set it in /etc/spark/conf/spark-env.sh:

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
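The `${HADOOP_CONF_DIR:-/etc/hadoop/conf}` form is standard shell parameter expansion: the default on the right is used only when the variable is unset or empty, so any value already set in the environment wins. For example:

```shell
# ${VAR:-default} keeps an existing value and falls back otherwise.
unset HADOOP_CONF_DIR
echo "${HADOOP_CONF_DIR:-/etc/hadoop/conf}"   # prints /etc/hadoop/conf

HADOOP_CONF_DIR=/custom/conf
echo "${HADOOP_CONF_DIR:-/etc/hadoop/conf}"   # prints /custom/conf
```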