Tags: scala, apache-spark, rdd, spark-dataframe, apache-spark-mllib
I am relatively new to Spark and Scala.

I am starting from the following DataFrame (a single column consisting of dense vectors of Doubles):
scala> val scaledDataOnly_pruned = scaledDataOnly.select("features")
scaledDataOnly_pruned: org.apache.spark.sql.DataFrame = [features: vector]
scala> scaledDataOnly_pruned.show(5)
+--------------------+
| features|
+--------------------+
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
+--------------------+
Converting it directly to an RDD yields an instance of org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]:
scala> val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd
scaledDataOnly_rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[32] at rdd at <console>:66
Does anyone know how to convert this DF into an instance of org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]? My various attempts so far have been unsuccessful.

Thanks in advance for any pointers!
Just found out:
val scaledDataOnly_rdd = scaledDataOnly_pruned.map { x: Row => x.getAs[Vector](0) }
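For completeness, here is a self-contained sketch of that line with the imports it relies on. The Spark 2.x variant is my assumption about the newer API (there, Dataset.map no longer returns an RDD, so you would go through .rdd first):

import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.Vector

// Spark 1.x: DataFrame.map returns an RDD, so this yields an RDD[Vector]
val scaledDataOnly_rdd = scaledDataOnly_pruned.map { x: Row => x.getAs[Vector](0) }

// Spark 2.x (assumption): convert to an RDD[Row] first, then extract the vector
// val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd.map(_.getAs[Vector](0))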
EDIT: a more sophisticated way to interpret the fields in a Row.

This worked for me:
import org.apache.spark.mllib.linalg.Vectors

val featureVectors = features.map(row => {
  // Convert each Row into a dense Vector, coercing each field to a Double
  Vectors.dense(row.toSeq.toArray.map({
    case s: String => s.toDouble
    case l: Long => l.toDouble
    case _ => 0.0 // anything else defaults to 0.0
  }))
})
Here features is a Spark SQL DataFrame.
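As a quick sanity check that the conversion produced what MLlib expects, the resulting RDD[Vector] can be fed into, for example, column statistics. This is a hedged usage sketch; Statistics.colStats is a real MLlib call, but its use here is my addition:

import org.apache.spark.mllib.stat.Statistics

// colStats expects an RDD[Vector]; if featureVectors type-checks here,
// the Row-to-Vector conversion succeeded
val summary = Statistics.colStats(featureVectors)
println(summary.mean)      // per-column means
println(summary.variance)  // per-column variances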