The code below is a libhdfs test program.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include "hdfs.h"

int main(int argc, char **argv)
{
    /* Connect to the HDFS namenode. */
    hdfsFS fs = hdfsConnect("hdfs://labossrv14", 9000);
    const char* writePath = "/libhdfs_test.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile)
    {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char* buffer = "Hello, libhdfs!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile))
    {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}
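For reference, a typical compile command for a libhdfs program like this looks roughly as follows; this is only a sketch, and the HADOOP_HOME/JAVA_HOME paths are assumptions based on the Hadoop 2.5.2 and Oracle JDK 8 locations that appear in the CLASSPATH below, not the exact command I used:

# Sketch: compile and link against libhdfs and libjvm (paths are assumptions).
HADOOP_HOME=/home/junzhao/hadoop/hadoop-2.5.2
JAVA_HOME=/usr/lib/jvm/java-8-oracle
gcc libhdfs_test.c -o libhdfs_test \
    -I$HADOOP_HOME/include \
    -L$HADOOP_HOME/lib/native -lhdfs \
    -L$JAVA_HOME/jre/lib/amd64/server -ljvm
# At run time the shared libraries must also be found:
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH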
It took a lot of effort to get this code to compile, but when I run the program it does not work. The error messages are as follows.
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=labossrv14, port=9000, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/libhdfs_test.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /libhdfs_test.txt for writing!
I have been experimenting with this following the official documentation, and I suspect the problem is an incorrect CLASSPATH. My CLASSPATH is shown below; it combines the classpath generated by "hadoop classpath --glob" with the lib paths of the JDK and JRE.
export CLASSPATH=/home/junzhao/hadoop/hadoop-2.5.2/etc/hadoop:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/usr/lib/jvm/java-8-oracle/lib:/usr/lib/jvm/java-8-oracle/jre/lib:$CLASSPATH
Does anyone have a good solution? Thanks!
I read through the tutorial and some previously asked questions again, and finally found that the problem is caused by JNI not expanding the wildcards in CLASSPATH. So I simply put all of the jars into the CLASSPATH explicitly, and the problem was solved. Since the command "hadoop classpath --glob" also generates wildcards, this explains why the official documentation says:

It is not valid to use wildcard syntax for specifying multiple jars. It may be useful to run "hadoop classpath --glob" or "hadoop classpath --jar" to generate the correct classpath for your deployment.

I had misunderstood this paragraph yesterday.
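To make the fix concrete, here is a small sketch of how such an expanded CLASSPATH can be built; the HADOOP_HOME path is taken from the export above, but the loop itself is only an illustration, not the exact commands I ran:

# Sketch: list every Hadoop jar explicitly, since the JVM started through
# JNI does not expand the '*' wildcards emitted by "hadoop classpath --glob".
HADOOP_HOME=/home/junzhao/hadoop/hadoop-2.5.2
CLASSPATH=$HADOOP_HOME/etc/hadoop
for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
    CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH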
See also Hadoop C++ HDFS test running Exception and Can JNI be made to honour wildcard expansion in the classpath?