Posts by PAR*_*DER

Computing standard error estimates, Wald chi-square statistics, and p-values for logistic regression in Spark

I am trying to build a logistic regression model on some sample data.

The only model output I can get is the weights of the features used to build the model.

I cannot find a Spark API that gives the standard errors of the estimates, the Wald chi-square statistics, the p-values, and so on.

I am pasting my code below as an example:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.evaluation.{BinaryClassificationMetrics, MulticlassMetrics}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}


    val sc = new SparkContext(new SparkConf().setAppName("SparkTest").setMaster("local[*]"))

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    val data: RDD[String] = sc.textFile("C:/Users/user/Documents/spark-1.5.1-bin-hadoop2.4/data/mllib/credit_approval_2_attr.csv")


    val parsedData = data.map { line =>
      val parts = line.split(',').map(_.toDouble)
      LabeledPoint(parts(0), Vectors.dense(parts.tail))
    }

    //Splitting the data
    val splits: Array[RDD[LabeledPoint]] = parsedData.randomSplit(Array(0.7, 0.3), seed = 11L)
    val training: RDD[LabeledPoint] = splits(0).cache()
    val test: RDD[LabeledPoint] = splits(1)



    // Run training algorithm to build …
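A note on a possible workaround (not part of the question's RDD-based mllib API): in Spark 2.0+, the DataFrame-based spark.ml GeneralizedLinearRegression with a binomial family fits a logistic regression, and its training summary exposes standard errors, Wald t-values, and p-values. A minimal sketch, assuming a DataFrame trainingDF with "label" and "features" columns (the name trainingDF is hypothetical):

import org.apache.spark.ml.regression.GeneralizedLinearRegression

// Logistic regression expressed as a GLM: binomial family with a logit link.
val glr = new GeneralizedLinearRegression()
  .setFamily("binomial")
  .setLink("logit")
  .setMaxIter(100)

// trainingDF: DataFrame with "label" and "features" columns (assumed).
val glrModel = glr.fit(trainingDF)

// The training summary carries the inference statistics the question asks for.
val summary = glrModel.summary
println("Std. errors:   " + summary.coefficientStandardErrors.mkString(", "))
println("Wald t-values: " + summary.tValues.mkString(", "))
println("p-values:      " + summary.pValues.mkString(", "))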

standard-error logistic-regression pyspark apache-spark-mllib

6 votes · 1 answer · 876 views

Predicting class probabilities with a Gradient Boosted Trees model in Spark using the tree outputs

As is well known, GBTs in Spark currently only give you the predicted label.

I am trying to compute the predicted probability of a class (say, for all the instances falling under a certain leaf).

The code to build the GBT:

import org.apache.spark.SparkContext
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.tree.GradientBoostedTrees
import org.apache.spark.mllib.tree.configuration.BoostingStrategy
import org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
import org.apache.spark.mllib.util.MLUtils

//Importing the data
val data = sc.textFile("data/mllib/credit_approval_2_attr.csv") //using the credit approval data set from UCI machine learning repository

//Parsing the data
val parsedData = data.map { line =>
    val parts = line.split(',').map(_.toDouble)
    LabeledPoint(parts(0), Vectors.dense(parts.tail))
}

//Splitting the data
val splits = parsedData.randomSplit(Array(0.7, 0.3), seed = 11L)
val training = splits(0).cache() 
val test = splits(1)

// Train a GradientBoostedTrees model.
// The defaultParams for …
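One commonly suggested workaround (an assumption, not a documented mllib feature): reconstruct the raw margin from the individual trees via model.trees and model.treeWeights, then push it through a sigmoid. The factor of 2 matches Spark's LogLoss, which is defined over labels in {-1, 1}, so this sketch only applies to a model trained with LogLoss:

import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.tree.model.GradientBoostedTreesModel

// Raw margin: weighted sum of the individual tree predictions.
def margin(model: GradientBoostedTreesModel, features: Vector): Double =
  model.trees.zip(model.treeWeights)
    .map { case (tree, weight) => tree.predict(features) * weight }
    .sum

// For a LogLoss-trained model, P(label = 1) is a sigmoid of twice the margin.
def probabilityOfOne(model: GradientBoostedTreesModel, features: Vector): Double =
  1.0 / (1.0 + math.exp(-2.0 * margin(model, features)))

// Usage over the test split from the code above, assuming `model` is the
// trained GradientBoostedTreesModel (the training step is elided above):
// val probs = test.map(p => (p.label, probabilityOfOne(model, p.features)))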

tree probability prediction boosting apache-spark-mllib

5 votes · 1 answer · 5118 views

Evaluation metrics for binary classification in Spark: AUC and PR curves

I am trying to compute the precision and recall by threshold for LogisticRegressionWithLBFGS using BinaryClassificationMetrics. I have all of that working. Now I am trying to figure out whether I can get a graphical output of the PR and ROC (AUC) curves.

Pasting my code below:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.evaluation.{BinaryClassificationMetrics, MulticlassMetrics}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}



object log_reg_eval_metric {

  def main(args: Array[String]): Unit = {


    System.setProperty("hadoop.home.dir", "c:\\winutil\\")


    val sc = new SparkContext(new SparkConf().setAppName("SparkTest").setMaster("local[*]"))

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    val data: RDD[String] = sc.textFile("C:/Users/user/Documents/spark-1.5.1-bin-hadoop2.4/data/mllib/credit_approval_2_attr.csv")


    val parsedData = data.map { line =>
      val parts = line.split(',').map(_.toDouble)
      LabeledPoint(parts(0), Vectors.dense(parts.tail))
    }

    //Splitting the data
    val splits: Array[RDD[LabeledPoint]] = parsedData.randomSplit(Array(0.7, 0.3), seed = 11L)
    val training: RDD[LabeledPoint] …
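For reference: Spark itself does not render plots. BinaryClassificationMetrics only exposes the curve points, which can be collected and fed to an external plotting tool. A minimal sketch, assuming scoreAndLabels is an RDD of (raw score, label) pairs, e.g. built from the test split after calling model.clearThreshold() so the model returns raw scores instead of labels:

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.rdd.RDD

def dumpCurves(scoreAndLabels: RDD[(Double, Double)]): Unit = {
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  println("Area under ROC: " + metrics.areaUnderROC())
  println("Area under PR:  " + metrics.areaUnderPR())

  // Spark only returns the points; plot them externally (matplotlib, gnuplot, ...).
  metrics.roc().collect().foreach { case (fpr, tpr) => println(s"$fpr\t$tpr") }
  metrics.pr().collect().foreach { case (recall, precision) => println(s"$recall\t$precision") }
}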

logistic-regression auc rdd apache-spark-mllib

5 votes · 1 answer · 4942 views