Can TransformedTargetRegressor be added to a scikit-learn pipeline?

nve*_*gos 6 python machine-learning linear-regression scikit-learn

I'm building a predictive analytics pipeline for some data and am at the model-selection stage. My target variable is skewed, so I'd like to log-transform it to improve the performance of my linear regression estimators. I came across scikit-learn's relatively new TransformedTargetRegressor and thought I could use it as part of a pipeline. My code is attached below.

My initial attempt was to transform y_train before calling gs.fit(), passing np.log1p(y_train). That worked, and I could run nested cross-validation and get back the metrics of interest for all estimators. However, I also want the R^2 and RMSE of the trained model on previously unseen data (a validation set), and I understand that to do this I need to call, e.g., the r2_score function with y_val and preds, where the predictions must already be transformed back to the original scale, i.e., preds = np.expm1(gs.predict(X_val)).

### Create a pipeline
pipe = Pipeline([
    # the transformer stage is populated by the param_grid
    ('transformer', TransformedTargetRegressor(func=np.log1p, inverse_func=np.expm1)),
    ('reg', DummyEstimator())  # Placeholder Estimator
])

### Candidate learning algorithms and their hyperparameters
alphas = [0.001, 0.01, 0.1, 1, 10, 100]
param_grid = [  
                {'transformer__regressor': Lasso(),
                 'reg': [Lasso()], # Actual Estimator
                 'reg__alpha': alphas},
                {'transformer__regressor': LassoLars(),
                 'reg': [LassoLars()], # Actual Estimator
                 'reg__alpha': alphas},
                {'transformer__regressor': Ridge(),
                 'reg': [Ridge()], # Actual Estimator
                 'reg__alpha': alphas},
                {'transformer__regressor': ElasticNet(),
                 'reg': [ElasticNet()], # Actual Estimator
                 'reg__alpha': alphas,
                 'reg__l1_ratio': [0.25, 0.5, 0.75]}]


### Scoring metrics (used by both CV loops, so defined before the grid search)
scoring = ['neg_mean_absolute_error', 'r2', 'explained_variance', 'neg_mean_squared_error']

### Create grid search (Inner CV)
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5, verbose=2, n_jobs=-1,
                  scoring=scoring, refit='r2', return_train_score=True)


### Fit
best_model = gs.fit(X_train, y_train)

### Outer CV
linear_cv_results = cross_validate(gs, X_train, y_train_transformed,
                                   scoring=scoring, cv=5, verbose=3, return_train_score=True)

### Calculate mean metrics
train_r2 = (linear_cv_results['train_r2']).mean()
test_r2 = (linear_cv_results['test_r2']).mean()
train_mae = (-linear_cv_results['train_neg_mean_absolute_error']).mean()
test_mae = (-linear_cv_results['test_neg_mean_absolute_error']).mean()
train_exp_var = (linear_cv_results['train_explained_variance']).mean()
test_exp_var = (linear_cv_results['test_explained_variance']).mean()
train_rmse = (np.sqrt(-linear_cv_results['train_neg_mean_squared_error'])).mean()
test_rmse = (np.sqrt(-linear_cv_results['test_neg_mean_squared_error'])).mean()
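The evaluation step described above — predicting on the validation set and inverting the transform with np.expm1 before scoring — can be sketched as follows; the synthetic data and the plain Ridge model are illustrative stand-ins, not the question's actual setup:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic right-skewed target: expm1 of a linear signal plus noise
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
y = np.expm1(X @ np.array([0.5, 1.0, -0.3]) + rng.normal(scale=0.1, size=200))

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Fit on the log-transformed target...
model = Ridge().fit(X_train, np.log1p(y_train))

# ...then invert the transform before scoring on the original scale
preds = np.expm1(model.predict(X_val))
r2 = r2_score(y_val, preds)
rmse = np.sqrt(mean_squared_error(y_val, preds))
```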

Obviously this snippet doesn't work, because apparently I can't add TransformedTargetRegressor to my pipeline: it has no transform method (I get this TypeError: All intermediate steps should be transformers and implement fit and transform).

Is there a "proper" way to do this, or do I just have to log-transform y_val on the fly whenever I want to call r2_score and the like?

Viv*_*mar 9

No, because scikit-learn's vanilla Pipeline does not change y between steps, nor the number of samples in X and y.

Your use case is a little unclear: there is no need for the separate reg step if the same regressor has already been passed to TransformedTargetRegressor.

Looking at the documentation of TransformedTargetRegressor, its regressor parameter accepts a regressor (which can itself be a pipeline performing some feature-selection operations on X with a regressor as the final stage). The job of TransformedTargetRegressor is then:

fit():

    regressor.fit(X, func(y))

predict():

    inverse_func(regressor.predict(X))
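As a sanity check of those semantics, fitting a TransformedTargetRegressor should match fitting the inner regressor on func(y) and inverting its predictions by hand; the synthetic data here is purely for illustration:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(42)
X = rng.uniform(size=(100, 2))
y = np.expm1(X.sum(axis=1))  # strictly positive, skewed target

# Wrapped: fit/predict handle the transform internally
ttr = TransformedTargetRegressor(regressor=LinearRegression(),
                                 func=np.log1p, inverse_func=np.expm1)
ttr.fit(X, y)

# Manual: the same two operations spelled out
manual = LinearRegression().fit(X, np.log1p(y))
manual_preds = np.expm1(manual.predict(X))

assert np.allclose(ttr.predict(X), manual_preds)
```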

So there is no need to attach the same regressor as an extra step. Your model-selection code can now be:

pipe = TransformedTargetRegressor(regressor=DummyEstimator(),  # placeholder, replaced via param_grid
                                  func=np.log1p,
                                  inverse_func=np.expm1)

### Candidate learning algorithms and their hyperparameters
alphas = [0.001, 0.01, 0.1, 1, 10, 100]
param_grid = [
                {'regressor': [Lasso()],
                 'regressor__alpha': alphas},
                {'regressor': [LassoLars()],
                 'regressor__alpha': alphas},
                {'regressor': [Ridge()],
                 'regressor__alpha': alphas},
                {'regressor': [ElasticNet()],
                 'regressor__alpha': alphas,
                 'regressor__l1_ratio': [0.25, 0.5, 0.75]}
              ]
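Putting it together, the grid search can then be fit on the untransformed y, since the log1p/expm1 round trip happens inside every fit/predict. The data, the Ridge placeholder, and the reduced grids below are illustrative, not part of the original answer:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.normal(size=(150, 4))
y = np.expm1(X @ rng.uniform(0.1, 0.5, size=4))  # skewed target, always > -1

# Ridge here is just an initial placeholder; param_grid swaps it out
pipe = TransformedTargetRegressor(regressor=Ridge(),
                                  func=np.log1p, inverse_func=np.expm1)

param_grid = [{'regressor': [Lasso()], 'regressor__alpha': [0.01, 0.1, 1]},
              {'regressor': [Ridge()], 'regressor__alpha': [0.01, 0.1, 1]}]

gs = GridSearchCV(pipe, param_grid=param_grid, cv=3, scoring='r2')
gs.fit(X, y)  # y stays on the original scale; log1p is applied internally

print(gs.best_params_)    # winning regressor and its alpha
best_r2 = gs.best_score_  # R^2 scored against the original-scale y
```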