How do I find the optimal value of K in K-Nearest Neighbors?

Bab*_*154 1 python machine-learning knn scikit-learn

I'm learning ML on Udemy, and below is the code the instructor used in his lecture. I'm not entirely happy with it, though, because it produces many k values with nearly identical error rates, and I have to manually inspect the results to find the k values where the difference in error rate is negligible.

Is there another way to find the optimal k value (`n_neighbors`)?

from sklearn.neighbors import KNeighborsClassifier
import numpy as np

# Record the test-set error rate for each candidate k
error_rate = []
for i in range(1, 40):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error_rate.append(np.mean(pred_i != y_test))

Then a plot is used to show the error rate against the K value:

import matplotlib.pyplot as plt

plt.figure(figsize=(10, 6))
plt.plot(range(1, 40), error_rate, color='blue', linestyle='dashed', marker='o',
         markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
plt.show()

[Plot: Error Rate vs. K Value]
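As an aside, the smallest error rate in a loop like the one above can be picked out programmatically with `np.argmin` instead of reading it off the plot. A minimal self-contained sketch, using a hypothetical iris train/test split in place of the course's `X_train`/`X_test`:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data split (the course uses its own X_train/X_test)
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# Same loop as in the question: error rate per candidate k
k_values = list(range(1, 40))
error_rate = []
for i in k_values:
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error_rate.append(np.mean(pred_i != y_test))

# Index of the first minimum error rate -> corresponding k
best_k = k_values[int(np.argmin(error_rate))]
print(best_k)
```

Note that `np.argmin` returns the first index attaining the minimum, so ties between k values are broken in favor of the smallest k.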

小智 5

sklearn provides `GridSearchCV` and other similar algorithms that perform cross-validation and find the best hyperparameters:


from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X = iris.data
y = iris.target

# Search k from 1 to 99 and both weighting schemes
k_range = list(range(1, 100))
weight_options = ["uniform", "distance"]
param_grid = dict(n_neighbors=k_range, weights=weight_options)

knn = KNeighborsClassifier()
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)

print(grid.best_score_)
print(grid.best_params_)
print(grid.best_estimator_)


# 0.9800000000000001
# {'n_neighbors': 13, 'weights': 'uniform'}
# KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
#                     metric_params=None, n_jobs=None, n_neighbors=13, p=2,
#                     weights='uniform')


All of the hyper-parameter optimizers can be found here: https://scikit-learn.org/stable/modules/classes.html#hyper-parameter-optimizers
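One of the alternatives listed at that link is `RandomizedSearchCV`, which samples parameter combinations instead of trying all of them, which is cheaper when the grid is large. A minimal sketch on the same iris data; the specific settings (`n_iter=20`, `random_state=0`) are illustrative, not from the original answer:

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()

# Distributions/lists to sample from, rather than an exhaustive grid
param_dist = {
    "n_neighbors": randint(1, 100),        # k drawn uniformly from [1, 100)
    "weights": ["uniform", "distance"],
}

knn = KNeighborsClassifier()
search = RandomizedSearchCV(knn, param_dist, n_iter=20, cv=10,
                            scoring="accuracy", random_state=0)
search.fit(iris.data, iris.target)

print(search.best_score_)
print(search.best_params_)
```

With 99 candidate k values and 2 weighting schemes, the full grid has 198 combinations; here only 20 are evaluated, trading a small risk of missing the exact optimum for a roughly 10x speedup.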