I trained a random forest with PySpark, and I would like to end up with a CSV containing one row per grid point of the results. My code is:
from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

estimator = RandomForestRegressor()
evaluator = RegressionEvaluator()

paramGrid = (ParamGridBuilder()
    .addGrid(estimator.numTrees, [2, 3])
    .addGrid(estimator.maxDepth, [2, 3])
    .addGrid(estimator.impurity, ['variance'])
    .addGrid(estimator.featureSubsetStrategy, ['sqrt'])
    .build())

pipeline = Pipeline(stages=[estimator])

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=evaluator,
                          numFolds=3)

cvModel = crossval.fit(result)
So I would like a CSV like:
numTrees | maxDepth | impurityMeasure
2        | 2        | 0.001
2        | 3        | 0.00023
and so on.
What is the best way to do this?
You have to combine two different pieces of data:
the estimator param maps, extracted with the getEstimatorParamMaps method, and the training metrics, retrieved from avgMetrics. First get the names and values of all parameters declared in the maps:
params = [{p.name: v for p, v in m.items()} for m in cvModel.getEstimatorParamMaps()]
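As a minimal illustration of how that comprehension flattens the param maps (using a stand-in `Param` class here, since real `pyspark.ml.param.Param` objects also expose a `.name` attribute; the grid values are hypothetical):

```python
from collections import namedtuple

# Stand-in for pyspark.ml.param.Param, which exposes a .name attribute
Param = namedtuple("Param", ["name"])

# A grid of two param maps, shaped like getEstimatorParamMaps() output
param_maps = [
    {Param("numTrees"): 2, Param("maxDepth"): 2},
    {Param("numTrees"): 2, Param("maxDepth"): 3},
]

# Same comprehension as above: one plain {name: value} dict per grid point
params = [{p.name: v for p, v in m.items()} for m in param_maps]

print(params)
# [{'numTrees': 2, 'maxDepth': 2}, {'numTrees': 2, 'maxDepth': 3}]
```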
Then zip it with the metrics and convert to a data frame:
import pandas as pd

pd.DataFrame.from_dict([
    {cvModel.getEvaluator().getMetricName(): metric, **ps}
    for ps, metric in zip(params, cvModel.avgMetrics)
])
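Since the goal is a CSV file, the resulting pandas DataFrame can then be written out with `to_csv`. A sketch with made-up params and metric values (the `"rmse"` metric name and the `cv_results.csv` path are illustrative; column order follows dict insertion order):

```python
import pandas as pd

# Hypothetical flattened params and cross-validation metrics,
# shaped like the output of the steps above
params = [{"numTrees": 2, "maxDepth": 2}, {"numTrees": 2, "maxDepth": 3}]
metrics = [0.001, 0.00023]

# One row per grid point: metric column first, then the parameters
df = pd.DataFrame([
    {"rmse": metric, **ps}
    for ps, metric in zip(params, metrics)
])

df.to_csv("cv_results.csv", index=False)
```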