I am using Spark 1.6.1:

Currently I am using a CrossValidator to train my ML Pipeline with various parameters. After training, I can use the bestModel property of the CrossValidatorModel to get the model that performed best during cross-validation. Are the other models from the cross-validation discarded automatically, or can I also select a model that performed worse than bestModel?

I ask because I am using the F1 score metric for the cross-validation, but I am also interested in the weightedRecall of all of the models, not just the model that performed best during cross-validation.
val folds = 6
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new MulticlassClassificationEvaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(folds)
val cvModel = cv.fit(trainDf)  // trainDf: the training DataFrame (not shown in the original snippet)
val avgF1Scores = cvModel.avgMetrics
val predictedDf = cvModel.bestModel.transform(testDf)
// Here I would like to predict as well with the other models of the cross validation
Tags: cross-validation, apache-spark, apache-spark-mllib, apache-spark-1.6
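For context: in Spark 1.6 the CrossValidatorModel keeps only bestModel and the per-grid-point avgMetrics, so the other fitted models are not retained. One possible workaround is to refit one model per ParamMap and evaluate it with a weightedRecall evaluator yourself. Below is a minimal sketch of that idea, shown with the PySpark API (the Scala API is analogous); trainDf, testDf, pipeline and paramGrid are assumed to correspond to the objects in the snippet above.

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Assumed names: trainDf/testDf are the train/test DataFrames, and
# pipeline/paramGrid mirror the objects built in the snippet above.
recall_evaluator = MulticlassClassificationEvaluator(metricName="weightedRecall")

recalls = []
for param_map in paramGrid:
    model = pipeline.fit(trainDf, param_map)  # one fitted model per grid point
    predictions = model.transform(testDf)
    recalls.append((param_map, recall_evaluator.evaluate(predictions)))

Note that the weightedRecall values obtained this way come from a single fit on the training data evaluated on the test set, not from the cross-validation folds that produce avgMetrics.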
I trained a random forest with pySpark. I would like a csv with one row per grid point in the result. My code is:
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

estimator = RandomForestRegressor()
evaluator = RegressionEvaluator()
paramGrid = ParamGridBuilder().addGrid(estimator.numTrees, [2, 3])\
    .addGrid(estimator.maxDepth, [2, 3])\
    .addGrid(estimator.impurity, ['variance'])\
    .addGrid(estimator.featureSubsetStrategy, ['sqrt'])\
    .build()
pipeline = Pipeline(stages=[estimator])
crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=evaluator,
                          numFolds=3)
cvModel = crossval.fit(result)
So I would like a csv along the lines of:
numTrees | maxDepth | impurityMeasure
2        | 2        | 0.001
2        | 3        | 0.00023
and so on.

What is the best way to do this?
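One approach: cvModel.avgMetrics contains one averaged evaluator score per entry of estimatorParamMaps, in the same order as paramGrid, so zipping the two gives one row per grid point that can be written out with the standard csv module. A minimal sketch under that assumption (the file name grid_metrics.csv is made up, and the metric column is simply the RegressionEvaluator score, labelled impurityMeasure only to match the layout above):

import csv

# paramGrid, estimator and cvModel are the objects built above;
# cvModel.avgMetrics[i] is the averaged evaluator score for paramGrid[i].
rows = [
    {
        "numTrees": params[estimator.numTrees],
        "maxDepth": params[estimator.maxDepth],
        "impurityMeasure": metric,
    }
    for params, metric in zip(paramGrid, cvModel.avgMetrics)
]

with open("grid_metrics.csv", "w") as f:  # hypothetical output path
    writer = csv.DictWriter(f, fieldnames=["numTrees", "maxDepth", "impurityMeasure"])
    writer.writeheader()
    writer.writerows(rows)

Since impurity and featureSubsetStrategy are fixed to a single value in the grid, only numTrees and maxDepth vary across the rows.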