bdp*_*ish 6 hadoop accumulo hadoop-yarn
I'm trying to run a MapReduce job using Hadoop, YARN, and Accumulo.
I get the output below and can't track down the problem. It looks like a YARN issue, but I'm not sure what it is looking for. I do have an nmPrivate folder at $HADOOP_PREFIX/grid/hadoop/hdfs/yarn/logs. Is that the folder it says it cannot find?
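For context, the "Could not find any valid local directory" message is raised by the NodeManager when none of its configured local directories is usable (missing, unwritable, or out of space); the nmPrivate directory is created under each of those local dirs, not under the log dirs. A minimal sketch of the relevant yarn-site.xml settings, assuming placeholder paths that must exist and be writable by the YARN user:

```xml
<!-- yarn-site.xml: local and log dirs for the NodeManager.
     The paths below are placeholders, not the poster's actual layout. -->
<configuration>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <!-- nmPrivate/ is created inside each of these directories -->
    <value>/grid/hadoop/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/grid/hadoop/yarn/logs</value>
  </property>
</configuration>
```

If the directories exist, also check free disk space: the NodeManager marks a local dir as bad once utilization crosses `yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage` (90% by default).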
14/03/31 08:48:46 INFO mapreduce.Job: Job job_1395942264921_0023 failed with state FAILED due to: Application application_1395942264921_0023 failed 2 times due to AM Container for appattempt_1395
942264921_0023_000002 exited with exitCode: -1000 due to: Could not find any valid local directory for nmPrivate/container_1395942264921_0023_02_000001.tokens
.Failing this attempt.. Failing the application.
小智 1
When I tested spark-submit on YARN in cluster mode:
spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi /usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar 100
I got the same error:
Application application_1532249549503_0007 failed 2 times due to AM Container for appattempt_1532249549503_0007_000002 exited with exitCode: -1000 Failing this attempt.Diagnostics: java.io.IOException: Resource file:/usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar changed on src filesystem (expected 1531576498000, was 1531576511000
One common suggestion for this kind of error is to adjust core-site.xml or other Hadoop configuration. In the end, I fixed the error by setting the fs.defaultFS property in $HADOOP_HOME/etc/hadoop/core-site.xml.
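The "changed on src filesystem" diagnostic means the jar's timestamp differed between when YARN registered the resource and when it localized it, which can happen when the default filesystem resolves inconsistently. A minimal core-site.xml sketch of the fix described above, assuming a placeholder NameNode address:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hostname and port are placeholders; use your NameNode's address -->
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```

With fs.defaultFS pointing at HDFS, every node resolves the application jar against the same filesystem, so the localized timestamp matches the registered one.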