python scikit-learn grid-search
I want to 1) tune the hyperparameters for each of several model types, and 2) find the best-performing tuned model type. I would like to use GridSearchCV for this.
I am able to do the following, but I worry that it does not work the way I expect, and I also suspect that nesting GridSearchCV may be unnecessary. Is it okay to nest one GridSearchCV inside another?
One concern with nesting GridSearchCV is that the cross-validation also gets nested, so instead of grid searching on 66% of the training data I would effectively be grid searching on about 43.56% of it (0.66 × 0.66 ≈ 0.4356). Another concern is the added code complexity. (A non-nested alternative is sketched after the example below.)
Here is my nested GridSearchCV example using the iris dataset:
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
iris_raw_data = load_iris()
iris_df = pd.DataFrame(np.c_[iris_raw_data.data, iris_raw_data.target],
                       columns=iris_raw_data.feature_names + ['target'])
iris_category_labels = {0:'setosa', 1:'versicolor', 2:'virginica'}
iris_df['species_name'] = iris_df['target'].apply(lambda l: iris_category_labels[int(l)])
features = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
target = 'target'
X_train, X_test, y_train, y_test = train_test_split(iris_df[features], iris_df[target], test_size=.33)
pipe_knn = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('reduce_dim', KernelPCA(n_components=2)),  # project onto 2 kernel-PCA components
    ('clf', KNeighborsClassifier())
])
params_knn = dict(scaler=[None, StandardScaler()],
                  reduce_dim=[None, KernelPCA(n_components=2)],
                  clf__n_neighbors=[2, 5, 15])
grid_search_knn = GridSearchCV(pipe_knn, param_grid=params_knn)
pipe_svc = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('reduce_dim', KernelPCA(n_components=2)),  # project onto 2 kernel-PCA components
    ('clf', SVC())
])
params_svc = dict(scaler=[None, StandardScaler()],
                  reduce_dim=[None, KernelPCA(n_components=2)],
                  clf__C=[0.1, 1, 10, 100])
grid_search_svc = GridSearchCV(pipe_svc, param_grid=params_svc)
pipe_rf = Pipeline(steps=[
    ('clf', RandomForestClassifier())
])
params_rf = dict(clf__n_estimators=[10, 50, 100],
                 clf__min_samples_leaf=[2, 5, 10])
grid_search_rf = GridSearchCV(pipe_rf, param_grid=params_rf)
pipe_meta = Pipeline(steps=[('subpipes', pipe_knn)])
params_meta = dict(subpipes=[grid_search_svc, grid_search_knn, grid_search_rf])
grid_search_meta = GridSearchCV(pipe_meta, param_grid=params_meta)
grid_search_meta.fit(X_train, y_train)
print(grid_search_meta.best_estimator_)
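For comparison, here is a minimal sketch of the non-nested alternative I had in mind: fit each GridSearchCV separately on the same training split and compare their best cross-validated scores to pick the winning model family. It reuses the pipe_* and params_* objects defined above; the dictionary keys are just labels I made up for illustration.
# Non-nested alternative: one independent grid search per model family.
candidate_searches = {
    'knn': GridSearchCV(pipe_knn, param_grid=params_knn),
    'svc': GridSearchCV(pipe_svc, param_grid=params_svc),
    'rf': GridSearchCV(pipe_rf, param_grid=params_rf),
}

best_name, best_search = None, None
for name, search in candidate_searches.items():
    search.fit(X_train, y_train)  # only one level of CV, so the training folds are not shrunk further
    print(name, search.best_score_, search.best_params_)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print('best model family:', best_name)
print('held-out test accuracy:', best_search.score(X_test, y_test))
This keeps each grid search working on the full training split, at the cost of a short loop; whether the nested version above behaves equivalently is exactly what I am unsure about.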