HDFS write fails with "CreateSymbolicLink error (1314): A required privilege is not held by the client."

Syl*_*iel 5 java hadoop mapreduce hdfs

Trying to run one of the example MapReduce programs that ship with Apache Hadoop. When the MapReduce job runs, the exception below is thrown. I tried hdfs dfs -chmod 777 / but that did not solve the problem.

15/03/10 13:13:10 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with
ToolRunner to remedy this.
15/03/10 13:13:10 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
15/03/10 13:13:10 INFO input.FileInputFormat: Total input paths to process : 2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: number of splits:2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1425973278169_0001
15/03/10 13:13:12 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
15/03/10 13:13:12 INFO impl.YarnClientImpl: Submitted application application_1425973278169_0001
15/03/10 13:13:12 INFO mapreduce.Job: The url to track the job: http://B2ML10803:8088/proxy/application_1425973278169_0001/
15/03/10 13:13:12 INFO mapreduce.Job: Running job: job_1425973278169_0001
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 running in uber mode : false
15/03/10 13:13:18 INFO mapreduce.Job:  map 0% reduce 0%
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 failed with state FAILED due to: Application application_1425973278169_0001 failed 2 times due to AM Container for appattempt_1425973278169_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://B2ML10803:8088/proxy/application_1425973278169_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1425973278169_0001_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.

Stack trace:

ExitCodeException exitCode=1: CreateSymbolicLink error (1314): A required privilege is not held by the client.

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Shell output:

1 file(s) moved.

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/03/10 13:13:18 INFO mapreduce.Job: Counters: 0
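
For reference, the bundled example jobs are normally launched with the hadoop jar command, roughly as below; the jar version and the HDFS input/output paths are placeholders, not taken from the run above:

    hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.0.jar wordcount /input /output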

Mar*_*usz 13

Win 8.1 + Hadoop 2.7.0 (built from source)

  1. Run the command prompt in administrator mode

  2. Execute etc\hadoop\hadoop-env.cmd

  3. Run sbin\start-dfs.cmd

  4. Run sbin\start-yarn.cmd

  5. Now try to run your job (the same sequence is sketched below as a single session)
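
Put together, that looks roughly like this in one elevated command-prompt session; the install path C:\hadoop-2.7.0 is a placeholder for wherever your build lives:

    REM run from a command prompt opened with "Run as administrator"
    cd /d C:\hadoop-2.7.0
    etc\hadoop\hadoop-env.cmd
    sbin\start-dfs.cmd
    sbin\start-yarn.cmd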


Dar*_*ero 6

I ran into exactly the same problem recently. I tried reformatting the namenode, but it did not work, and I believe it would not have fixed the problem permanently anyway. Thanks to the reference from @aoetalks, I solved this on Windows Server 2012 R2 by looking into Local Group Policy.

In conclusion, try the following steps:

  1. Open the Local Group Policy editor (press Win+R to open "Run..." and type gpedit.msc)
  2. Expand "Computer Configuration" - "Windows Settings" - "Security Settings" - "Local Policies" - "User Rights Assignment"
  3. Find "Create symbolic links" on the right and check whether your user is included. If not, add your user to it.
  4. This only takes effect at the next logon, so log out and log back in (a quick check is sketched right after this list).
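
After logging back in, one way to confirm the privilege actually landed on your account (assuming a stock Windows command prompt) is:

    whoami /priv | findstr /i SeCreateSymbolicLinkPrivilege

If SeCreateSymbolicLinkPrivilege shows up in the output (even listed as Disabled), the account holds the right that error 1314 complains about.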

If this still does not work, it is probably because you are using an Administrator account. In that case you have to disable "User Account Control: Run all administrators in Admin Approval Mode" in the same Local Group Policy console (under "Local Policies", in "Security Options"), and then restart the computer for the change to take effect.
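
If you want to check the current state of that UAC policy first, the registry value behind it is, as far as I know, EnableLUA; it can be queried from an elevated prompt, and 0x1 means Admin Approval Mode is still on:

    reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA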

Reference: https://superuser.com/questions/104845/permission-to-make-symbolic-links-in-windows-7