Post by Tia*_*ang

How do I convert a Spark DataFrame to an RDD of mllib LabeledPoints?

I'm trying to apply PCA to my data and then run RandomForest on the transformed data. However, PCA.transform(data) gives me a DataFrame, while my RandomForest needs an RDD of mllib LabeledPoints. How can I do this? My code:

    import org.apache.spark.mllib.util.MLUtils
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.tree.RandomForest
    import org.apache.spark.mllib.tree.model.RandomForestModel
    import org.apache.spark.ml.feature.PCA
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.mllib.linalg.Vectors


    val dataset = MLUtils.loadLibSVMFile(sc, "data/mnist/mnist.bz2")

    val splits = dataset.randomSplit(Array(0.7, 0.3))

    val (trainingData, testData) = (splits(0), splits(1))

    val trainingDf = trainingData.toDF() // toDF() needs the SQL implicits in scope outside spark-shell

    val pca = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(100)
      .fit(trainingDf)

    val pcaTrainingData = pca.transform(trainingDf)

    val numClasses = 10
    val categoricalFeaturesInfo = Map[Int, Int]()
    val numTrees = 10 // Use more in practice.
    val featureSubsetStrategy = "auto" // Let the algorithm choose.
    val impurity = …
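The missing step is mapping each row of the projected DataFrame back into a LabeledPoint. A minimal sketch, assuming Spark 2.x, where ml.feature.PCA writes an org.apache.spark.ml.linalg.Vector into the "pcaFeatures" column (on 1.x the column already holds mllib vectors, so Vectors.fromML is not needed):

```scala
import org.apache.spark.ml.linalg.{Vector => MLVector}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Select the label and the projected features, drop to an RDD of Rows,
// and rebuild a LabeledPoint per row.
val labeledPoints = pcaTrainingData
  .select("label", "pcaFeatures")
  .rdd
  .map { row =>
    LabeledPoint(
      row.getDouble(0),                                   // original label
      Vectors.fromML(row.getAs[MLVector]("pcaFeatures"))  // ml -> mllib vector
    )
  }

// labeledPoints: RDD[LabeledPoint], usable by RandomForest.trainClassifier
```

Note this mixes the ml and mllib packages, which is why the explicit vector conversion is needed; Vectors.fromML exists from Spark 2.0 onward.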

scala pca apache-spark rdd apache-spark-mllib

Score: 10 · Answers: 1 · Views: 10k
