I am new to Scala. How do I read a file from HDFS using Scala (without using Spark)? When I googled it, I only found the write side of HDFS. This is the write code I have:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import java.io.PrintWriter

/**
 * @author ${user.name}
 */
object App {
  //def foo(x : Array[String]) = x.foldLeft("")((a,b) => a + b)
  def main(args: Array[String]) {
    println("Trying to write to HDFS...")
    val conf = new Configuration()
    //conf.set("fs.defaultFS", "hdfs://quickstart.cloudera:8020")
    conf.set("fs.defaultFS", "hdfs://192.168.30.147:8020")
    val fs = FileSystem.get(conf)
    val output = fs.create(new Path("/tmp/mySample.txt"))
    val writer = new PrintWriter(output)
    try {
      writer.write("this is a test")
      writer.write("\n")
    }
    finally {
      writer.close()
      println("Closed!")
    }
    println("Done!")
  }
}
Please help me. How do I read or load a file from HDFS using Scala?
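Reading mirrors the write example: `fs.open(new Path(...))` returns an `FSDataInputStream`, which is an ordinary `java.io.InputStream`, so standard Java/Scala I/O applies. A minimal sketch, reusing the `fs.defaultFS` address and the `/tmp/mySample.txt` path from the write example above (adjust both to your cluster):

```scala
import java.io.{BufferedReader, InputStreamReader}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ReadApp {
  def main(args: Array[String]) {
    val conf = new Configuration()
    conf.set("fs.defaultFS", "hdfs://192.168.30.147:8020")
    val fs = FileSystem.get(conf)

    // fs.open returns an FSDataInputStream, a plain java.io.InputStream
    val in = fs.open(new Path("/tmp/mySample.txt"))
    val reader = new BufferedReader(new InputStreamReader(in, "UTF-8"))
    try {
      // Read line by line until EOF (readLine returns null at end of stream)
      Iterator.continually(reader.readLine())
        .takeWhile(_ != null)
        .foreach(println)
    } finally {
      reader.close() // also closes the underlying HDFS stream
    }
  }
}
```

This needs the same `hadoop-common` and `hadoop-hdfs` jars on the classpath that the write example already uses, and a reachable NameNode, so it can only run against an actual cluster.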
I am running the following Spark Java code.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.hadoop.fs.*;

public class Resource {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("cust data");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        HiveContext hiveSQLContext = new org.apache.spark.sql.hive.HiveContext(jsc.sc());
        DataFrame df = hiveSQLContext.sql("select * from emprecord");
        df.registerTempTable("mytempTable"); // creating temp table
        hiveSQLContext.sql("create table xyz as select * from mytempTable"); // inserting into hive table
        jsc.close();
    }
}
[harsha@hdp-poc1 SparkworkingJars]$ javac -cp $CLASSPATH Resource.java
warning: /home/harsha/SparkworkingJars/spark-core_2.11-1.6.1.jar(org/apache/spark/api/java/JavaSparkContextVarargsWorkaround.class): major version 51 is newer than 50, the …
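That warning means the Spark 1.6 jars on the classpath contain classes compiled for Java 7 (class file major version 51), while the `javac` doing the compiling is from JDK 6, which only supports up to major version 50. Compiling and running with JDK 7 or newer resolves it. A quick way to check which Java the shell is actually resolving to (plain Scala, no Spark jars needed):

```scala
object VersionCheck {
  def main(args: Array[String]): Unit = {
    // Class file major version 51 corresponds to Java 7, 50 to Java 6.
    // If java.version prints 1.6.x, the JDK on the PATH is too old for Spark 1.6's jars.
    println("java.version = " + System.getProperty("java.version"))
    println("java.home    = " + System.getProperty("java.home"))
  }
}
```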