Elv*_*ade 78 eclipse scala apache-spark
I am unable to get a simple Spark job working in Scala IDE (a Maven Spark project) on Windows 7.
The Spark core dependency has been added.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
val sc = new SparkContext(conf)
val logData = sc.textFile("File.txt")
logData.count()
Error:
16/02/26 18:29:33 INFO SparkContext: Created broadcast 0 from textFile at FrameDemo.scala:13
16/02/26 18:29:34 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at com.org.SparkDF.FrameDemo$.main(FrameDemo.scala:14)
at com.org.SparkDF.FrameDemo.main(FrameDemo.scala)
Tak*_*aky 120
Here is a good explanation of your problem along with the solution.
Set your HADOOP_HOME environment variable at the OS level, or programmatically:
System.setProperty("hadoop.home.dir", "full path to the folder containing bin\winutils.exe");
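For context, a minimal sketch of the programmatic route (the object name and the C:\winutils location are assumptions; adjust them to your setup):

import org.apache.spark.{SparkConf, SparkContext}

object DemoDF {
  def main(args: Array[String]): Unit = {
    // Set this before the SparkContext is created: Hadoop's Shell class
    // resolves winutils.exe from hadoop.home.dir in a static initializer
    // (note the Shell.<clinit> frame in the stack trace above).
    // C:\winutils is an assumed location; it must be the folder that
    // CONTAINS bin\winutils.exe, not the bin folder itself.
    System.setProperty("hadoop.home.dir", "C:\\winutils")

    val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
    val sc = new SparkContext(conf)
    println(sc.textFile("File.txt").count())
    sc.stop()
  }
}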
Enjoy!
小智 55
Create a folder C:\winutils\bin, put winutils.exe inside C:\winutils\bin, and set HADOOP_HOME to C:\winutils.
Ani*_*non 22
Follow these steps:
1) Create a bin folder in any directory (it will be used in step 3).
2) Download winutils.exe and place it in the bin directory.
3) Now add System.setProperty("hadoop.home.dir", "PATH/TO/THE/DIR"); to your code.
1) Download winutils.exe from https://github.com/steveloughran/winutils
2) Create a directory in Windows: "C:\winutils\bin"
3) Copy winutils.exe into the bin folder above.
4) Set the environment property in the code:
System.setProperty("hadoop.home.dir", "C:\\winutils");
5) Create a folder "C:\temp" and give it 777 (full) permissions.
6) Add the config property to the Spark session: .config("spark.sql.warehouse.dir", "file:///C:/temp")
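Taken together, the steps above amount to something like the following sketch (assuming Spark 2.x, where SparkSession is available, plus the C:\winutils and C:\temp locations from steps 2 and 5; WinutilsDemo is a hypothetical name):

import org.apache.spark.sql.SparkSession

object WinutilsDemo {
  def main(args: Array[String]): Unit = {
    // Step 4: hadoop.home.dir is the folder that contains bin\winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\winutils")

    // Step 6: point the SQL warehouse at the writable C:\temp from step 5.
    val spark = SparkSession.builder()
      .appName("WinutilsDemo")
      .master("local")
      .config("spark.sql.warehouse.dir", "file:///C:/temp")
      .getOrCreate()

    println(spark.read.textFile("File.txt").count())
    spark.stop()
  }
}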
小智 5
You can also download winutils.exe from GitHub:
https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin
Replace hadoop-2.7.1 with the version you want and place the file in D:\hadoop\bin.
If you do not have access to the environment variable settings on your machine, just add the following line to your code:
System.setProperty("hadoop.home.dir", "D:\\hadoop");
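If you want a clearer failure mode, a small helper can verify the download before setting the property, so a missing binary fails with a readable message instead of the IOException above (HadoopHomeCheck and configureHadoopHome are hypothetical names; D:\hadoop follows this answer):

import java.io.File

object HadoopHomeCheck {
  def configureHadoopHome(home: String = "D:\\hadoop"): Unit = {
    // Hadoop looks for %hadoop.home.dir%\bin\winutils.exe, so check that
    // the binary is actually there before Spark trips over it.
    val winutils = new File(home, "bin\\winutils.exe")
    require(winutils.isFile, s"winutils.exe not found at $winutils")
    System.setProperty("hadoop.home.dir", home)
  }
}

Call HadoopHomeCheck.configureHadoopHome() before constructing the SparkContext.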
On Windows 10 you should add two different entries:
(1) Under System variables, add a new variable HADOOP_HOME with the Hadoop path as its value (e.g. C:\Hadoop).
(2) Add/append a new entry to the "Path" variable: "C:\Hadoop\bin".
The above worked for me.
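To check that both entries are visible to the JVM, a quick sketch (EnvCheck is a hypothetical name; environment-variable changes only reach processes started after the change, so restart your IDE first):

object EnvCheck {
  def main(args: Array[String]): Unit = {
    // (1) HADOOP_HOME should point at the Hadoop folder, e.g. C:\Hadoop.
    println(s"HADOOP_HOME = ${sys.env.getOrElse("HADOOP_HOME", "<not set>")}")

    // (2) Path should contain the bin folder, e.g. C:\Hadoop\bin.
    val path = sys.env.getOrElse("Path", sys.env.getOrElse("PATH", ""))
    val onPath = path.split(';').exists(_.equalsIgnoreCase("C:\\Hadoop\\bin"))
    println(s"C:\\Hadoop\\bin on Path: $onPath")
  }
}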