ele*_*ias — tags: scala, apache-spark, apache-spark-sql
I've read about this issue in other SO posts, but I still don't know what I'm doing wrong. In principle, adding the following two lines:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
should do the trick, but the error persists.

This is my build.sbt:
name := "PickACustomer"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq("com.databricks" %% "spark-avro" % "2.0.1",
"org.apache.spark" %% "spark-sql" % "1.6.0",
"org.apache.spark" %% "spark-core" % "1.6.0")
My Scala code is:
import scala.collection.mutable.Map
import scala.collection.immutable.Vector
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql._
object Foo {

  def reshuffle_rdd(rawText: RDD[String]): RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]] = {...}

  def do_prediction(shuffled: RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]], prediction: (Vector[(Double, Double, String)] => Map[String, Double])): RDD[Map[String, Double]] = {...}

  def get_match_rate_from_results(results: RDD[Map[String, Double]]): Map[String, Double] = {...}

  def retrieve_duid(element: Map[String, (Vector[(Double, Double, String)], Map[String, Double])]): Double = {...}

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
    if (!conf.getOption("spark.master").isDefined) conf.setMaster("local")
    val sc = new SparkContext(conf)

    // This should do the trick
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    val PATH_FILE = "/mnt/fast_export_file_clean.csv"
    val rawText = sc.textFile(PATH_FILE)
    val shuffled = reshuffle_rdd(rawText)

    // PREDICT AS A FUNCTION OF THE LAST SEEN UID
    val results = do_prediction(shuffled.filter(x => retrieve_duid(x) > 1), predict_as_last_uid)
    results.cache()

    case class Summary(ismatch: Double, t_to_last: Double, nflips: Double, d_uid: Double, truth: Double, guess: Double)

    val summary = results.map(x => Summary(x("match"), x("t_to_last"), x("nflips"), x("d_uid"), x("truth"), x("guess")))

    // PROBLEMATIC LINE
    val sum_df = summary.toDF()
  }
}
I always get:

value toDF is not a member of org.apache.spark.rdd.RDD[Summary]

I'm a bit lost. Any ideas?
Move your case class outside of main:
object Foo {

  case class Summary(ismatch: Double, t_to_last: Double, nflips: Double, d_uid: Double, truth: Double, guess: Double)

  def main(args: Array[String]) {
    ...
  }
}
Something about its scope prevents Spark from being able to derive the schema for Summary automatically. FYI, I actually got a different error with sbt:

No TypeTag available for Summary
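To see why scope matters here independently of Spark: the implicits brought in by `sqlContext.implicits._` require an implicit `TypeTag` for `Summary`, and the compiler can only materialize one for a case class declared at a stable, top-level position (such as a member of an object), not for one defined inside a method body. A minimal sketch using plain Scala reflection, no Spark needed (the object name `TagDemo` and the helper `describe` are made up for illustration):

```scala
import scala.reflect.runtime.universe._

object TagDemo {
  // Declared at the object level, so the compiler can materialize a TypeTag for it.
  case class Summary(ismatch: Double, guess: Double)

  // The same implicit-TypeTag requirement that toDF places on the element type.
  def describe[T: TypeTag]: String = typeOf[T].toString

  def main(args: Array[String]): Unit = {
    println(describe[Summary])
    // If Summary were instead declared inside main, the call above would
    // typically fail to compile with "No TypeTag available for Summary".
  }
}
```

Moving the case class up to the object level resolves both the `toDF` and the `No TypeTag available` symptoms, since they are two faces of the same missing implicit.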