I have a Hadoop FileSystem implementation that uses native libraries via JNI.
Obviously, I have to make the shared object available independently of the currently executed job. But I cannot find a way to tell Hadoop/YARN where it should look for the shared object.
I had partial success with the following solutions while launching the wordcount example on YARN.
Setting export JAVA_LIBRARY_PATH=/path when starting the ResourceManager and NodeManager
This helps the ResourceManager and NodeManager, but the actual job/application fails. Printing LD_LIBRARY_PATH and java.library.path while executing the wordcount example yields the following result:
/logs/userlogs/application_x/container_x_001/stdout
...
java.library.path : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x_001:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
LD_LIBRARY_PATH : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x
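For reference, a minimal sketch of this first approach, assuming the standard Hadoop 2.7 sbin scripts and using /path as a placeholder for the directory that holds libjni-xtreemfs.so (both are assumptions about the setup):

# placeholder /path: directory containing libjni-xtreemfs.so
export JAVA_LIBRARY_PATH=/path
export LD_LIBRARY_PATH=/path:$LD_LIBRARY_PATH
# start the daemons with this environment so it is inherited by the services
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager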
Setting yarn.app.mapreduce.am.env="LD_LIBRARY_PATH=/path"
This did help for some jobs. The actual map/reduce tasks do work (at least I get the correct results), but the invocation does fail with the error no jni-xtreemfs in java.library.path.
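For reference, a sketch of how that property can be passed per job via the generic -D option (the example jar path, /path, and the input/output directories are placeholders):

# yarn.app.mapreduce.am.env sets environment variables for the MR ApplicationMaster container
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount \
    -D yarn.app.mapreduce.am.env="LD_LIBRARY_PATH=/path" \
    /input /output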
Somehow the first application/job does work:
/logs/userlogs/application_x/container_x_001/stdout
...
java.library.path : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x_001:/path:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
LD_LIBRARY_PATH : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x_001:/path
But the second and the following ones do fail:
/logs/userlogs/application_x/container_x_002/stdout
...
java.library.path : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x_002:/opt/hadoop-2.7.1/lib/native:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
LD_LIBRARY_PATH : /tmp/hadoop-u/nm-local-dir/usercache/u/appcache/application_x/container_x_002/opt/hadoop-2.7.1/lib/native
The stack trace further down shows that the error occurs while executing YarnChild:
2015-08-03 15:24:03,851 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.UnsatisfiedLinkError: no jni-xtreemfs in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
at java.lang.Runtime.loadLibrary0(Runtime.java:849)
at java.lang.System.loadLibrary(System.java:1088)
at org.xtreemfs.common.libxtreemfs.jni.NativeHelper.loadLibrary(NativeHelper.java:54)
at org.xtreemfs.common.libxtreemfs.jni.NativeClient.<clinit>(NativeClient.java:41)
at org.xtreemfs.common.libxtreemfs.ClientFactory.createClient(ClientFactory.java:72)
at org.xtreemfs.common.libxtreemfs.ClientFactory.createClient(ClientFactory.java:51)
at org.xtreemfs.common.clients.hadoop.XtreemFSFileSystem.initialize(XtreemFSFileSystem.java:191)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Supplying libjni-xtreemfs.so via the command line argument -files
This does work. I assume the .so is copied to the tmp directory. But this is not a viable solution, because it would require the users to supply the path to the .so on every call.
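For reference, a sketch of such a call with the -files generic option (jar path, library path, and input/output directories are placeholders):

# -files ships the .so through the distributed cache into each container's
# working directory, which is already on java.library.path (see the logs above)
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount \
    -files /path/to/libjni-xtreemfs.so \
    /input /output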
Does anybody know how I can set LD_LIBRARY_PATH or java.library.path globally, or can suggest which configuration options I might have missed? I'd be very thankful!
Short answer: put the following into your mapred-site.xml:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Djava.library.path=$PATH_TO_NATIVE_LIBS</value>
</property>
Explanation: The job/application is not executed by YARN itself but rather by a mapred (map/reduce) container, whose configuration is controlled by the mapred-site.xml file. Specifying a custom Java option there causes the actual workers to spin up with the correct path.
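Note that mapred.child.java.opts is the older, job-wide property name; on Hadoop 2.x the per-task equivalents are mapreduce.map.java.opts and mapreduce.reduce.java.opts, which could be set like this (a sketch, same placeholder as above):

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Djava.library.path=$PATH_TO_NATIVE_LIBS</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Djava.library.path=$PATH_TO_NATIVE_LIBS</value>
</property>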