Hyperparameter tuning with PySpark

mir*_*ali 4 apache-spark pyspark apache-spark-mllib

I'm working with a dataset and using linear regression to fit a model. Before wrapping up, I'd like to try hyperparameter tuning to get the best model I can.

I've been running the data through a pipeline: first indexing the string column to numbers, then one-hot encoding it, then assembling all the columns into a vector, and finally scaling that vector before applying linear regression. I'd like to know how to set up a parameter grid to get the hyperparameter ball rolling, so to speak.

import pyspark.ml.feature as ft
from pyspark.ml.feature import StandardScaler
from pyspark.ml.regression import LinearRegression

WD_indexer = ft.StringIndexer(inputCol="Wind_Direction", outputCol="WD-num")
WD_encoder = ft.OneHotEncoder(inputCol="WD-num", outputCol="WD-vec")
featuresCreator = ft.VectorAssembler(
    inputCols=["Dew_Point", "Temperature", "Pressure", "WD-vec",
               "Wind_Speed", "Hours_Snow", "Hours_Rain"],
    outputCol="features")

feature_scaler = StandardScaler(inputCol="features", outputCol="sfeatures")

lr = LinearRegression(featuresCol="sfeatures", labelCol="PM_Reading")

So the pipeline looks like this:

from pyspark.ml import Pipeline
pipeline = Pipeline( stages = [WD_indexer, WD_encoder, featuresCreator, feature_scaler, lr] )

How do I set up the parameter grid for this pipeline?

Thanks

aha*_*jib 5

I know this question was posted two years ago, but it never hurts to keep everyone up to date on newer findings and alternative solutions. As Frank Kane explains in detail here, CrossValidator is very expensive because it has to evaluate every possible combination of the specified hyperparameter values against multiple folds. It is therefore suggested that you use TrainValidationSplit, which evaluates each combination against only a single random train/test split of the data. This can be very useful when you are dealing with very large datasets. Example code from the Spark documentation (look here for more details):

from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
from pyspark.ml.evaluation import RegressionEvaluator

# We use a ParamGridBuilder to construct a grid of parameters to search over.
# TrainValidationSplit will try all combinations of values and determine best model using
# the evaluator.
paramGrid = ParamGridBuilder()\
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .addGrid(lr.fitIntercept, [False, True])\
    .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
    .build()

# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
                           estimatorParamMaps=paramGrid,
                           evaluator=RegressionEvaluator(),
                           # 80% of the data will be used for training, 20% for validation.
                           trainRatio=0.8)

# Run TrainValidationSplit, and choose the best set of parameters.
model = tvs.fit(train)