I am using sklearn's built-in iris dataset for clustering. With KMeans I set the number of clusters in advance, but that is not the case for DBSCAN. How do I train the model without setting the number of clusters up front?

What I have tried:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#%matplotlib inline
from sklearn.cluster import DBSCAN, MeanShift
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.metrics import accuracy_score, confusion_matrix
iris = load_iris()
X = iris.data
y = iris.target
dbscan = DBSCAN(eps=0.3,min_samples=10)
dbscan.fit(X,y)
I'm stuck!
DBSCAN is a clustering algorithm and, as such, it does not use the labels y. It is true that you can call its fit method as .fit(X, y), but, according to the docs:

y : Ignored
    Not used, present here for API consistency by convention.
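To see that y really is ignored, here is a quick sketch (reusing the eps and min_samples values from the question) showing that fitting with and without y produces identical cluster labels:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Fit once passing y and once without it; DBSCAN is deterministic,
# and y is ignored, so the resulting labels are identical.
labels_with_y = DBSCAN(eps=0.3, min_samples=10).fit(X, y).labels_
labels_without_y = DBSCAN(eps=0.3, min_samples=10).fit(X).labels_

print(np.array_equal(labels_with_y, labels_without_y))  # True
```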
Another characteristic of DBSCAN is that, in contrast to algorithms like KMeans, it does not take the number of clusters as input; instead, it estimates that number on its own.

Having clarified that, let's adapt the documentation demo to the iris data:
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
X, labels_true = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
# Compute DBSCAN
db = DBSCAN(eps=0.5, min_samples=5)  # default parameter values
db.fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
      % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
      % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
      % metrics.silhouette_score(X, labels))
Result:
Estimated number of clusters: 2
Estimated number of noise points: 17
Homogeneity: 0.560
Completeness: 0.657
V-measure: 0.604
Adjusted Rand Index: 0.521
Adjusted Mutual Information: 0.599
Silhouette Coefficient: 0.486
Let's plot the results:
# Plot result
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]

for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = [0, 0, 0, 1]

    class_member_mask = (labels == k)

    # Core samples are plotted larger, non-core samples smaller.
    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)

plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
And that's it.
As with all clustering algorithms, the usual supervised-learning notions (train/test split, predicting on unseen data, cross-validation, etc.) do not hold here. Unsupervised methods like this can be useful in an initial exploratory data analysis (EDA), to get a general feel for the data, but, as you may have noticed, the results of such an analysis are not necessarily useful for a supervised problem: here, despite the 3 labels present in our iris dataset, the algorithm discovered only 2 clusters.

...which, of course, may change depending on the model parameters. Experiment...