I am trying to compute the Davies-Bouldin index in Python.
Here are the steps the code below tries to reproduce.
5 Steps:
- For each cluster, compute the euclidean distance between each point and its centroid
- For each cluster, compute the mean of these distances
- For each pair of clusters, compute the euclidean distance between their centroids
Then,
- For each pair of clusters, sum their mean intra-cluster distances and divide by the distance separating their centroids (= the Davies-Bouldin index of that pair)
Finally,
- Compute the mean of all these ratios to get the Davies-Bouldin index of the whole clustering
Code
def daviesbouldin(X, labels, centroids):
    import numpy as np
    from scipy.spatial.distance import pdist, euclidean
    nbre_of_clusters = len(centroids) # Get the number of clusters
    distances = [[] for e in range(nbre_of_clusters)] # Store intra-cluster distances by cluster
    distances_means = [] # Store the mean of these distances
    DB_indexes = [] # Store the Davies-Bouldin index of each pair of clusters
    second_cluster_idx = [] # Store index of the second cluster of each pair
    first_cluster_idx = 0 # Set index of first cluster of each pair to 0

    # Step 1: Compute euclidean distances between each point of a cluster and its centroid
    for cluster in range(nbre_of_clusters):
        for point in range(X[labels == cluster].shape[0]):
            distances[cluster].append(euclidean(X[labels == cluster][point], centroids[cluster]))

    # Step 2: Compute the mean of these distances
    for e in distances:
        distances_means.append(np.mean(e))

    # Step 3: Compute euclidean distances between each pair of centroids
    ctrds_distance = pdist(centroids)

    # Tricky step 4: Compute the Davies-Bouldin index of each pair of clusters
    for i, e in enumerate(e for start in range(1, nbre_of_clusters) for e in range(start, nbre_of_clusters)):
        second_cluster_idx.append(e)
        if second_cluster_idx[i - 1] == nbre_of_clusters - 1:
            first_cluster_idx += 1
        DB_indexes.append((distances_means[first_cluster_idx] + distances_means[e]) / ctrds_distance[i])

    # Step 5: Compute the mean of all DB_indexes
    print("DAVIES-BOULDIN Index: %.5f" % np.mean(DB_indexes))
Regarding the arguments:
- X is the data
- labels are the labels computed by a clustering algorithm (e.g. kmeans)
- centroids are the coordinates of each cluster's centroid (e.g. cluster_centers_)
Also note that I am using Python 3.
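For context, here is a minimal usage sketch of how I call it, assuming the daviesbouldin function above is in scope and scikit-learn is installed; the make_blobs data is only an illustration:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data: 500 points grouped into 4 synthetic blobs (illustration only)
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Fit k-means, then feed its labels and centroids to the function above
kmeans = KMeans(n_clusters=4, random_state=0).fit(X)
daviesbouldin(X, kmeans.labels_, kmeans.cluster_centers_)  # prints the index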
Question 1: Is the computation of the euclidean distance between each pair of centroids correct (Step 3)?
Question 2: Is my implementation of Step 4 correct?
Question 3: Do I need to normalise the intra- and inter-cluster distances?
Further explanations on Step 4
Let's say we have 10 clusters. The loop should compute the DB index of each pair of clusters.
At the first iteration:
- it sums the mean intra-cluster distance of cluster 1 (index 0 of distances_means) and the mean intra-cluster distance of cluster 2 (index 1 of distances_means)
- it divides this sum by the distance separating the two centroids (index 0 of ctrds_distance)
At the second iteration:
- it sums the mean intra-cluster distance of cluster 1 (index 0 of distances_means) and the mean intra-cluster distance of cluster 3 (index 2 of distances_means)
- it divides this sum by the distance separating the two centroids (index 1 of ctrds_distance)
And so on...
With 10 clusters, the complete iteration process should look like this:
intra-cluster distance of cluster + intra-cluster distance of cluster / distance between their centroids (storage num):
0 + 1 / 0
0 + 2 / 1
0 + 3 / 2
0 + 4 / 3
0 + 5 / 4
0 + 6 / 5
0 + 7 / 6
0 + 8 / 7
0 + 9 / 8
1 + 2 / 9
1 + 3 / 10
1 + 4 / 11
1 + 5 / 12
1 + 6 / 13
1 + 7 / 14
1 + 8 / 15
1 + 9 / 16
2 + 3 / 17
2 + 4 / 18
2 + 5 / 19
2 + 6 / 20
2 + 7 / 21
2 + 8 / 22
2 + 9 / 23
3 + 4 / 24
3 + 5 / 25
3 + 6 / 26
3 + 7 / 27
3 + 8 / 28
3 + 9 / 29
4 + 5 / 30
4 + 6 / 31
4 + 7 / 32
4 + 8 / 33
4 + 9 / 34
5 + 6 / 35
5 + 7 / 36
5 + 8 / 37
5 + 9 / 38
6 + 7 / 39
6 + 8 / 40
6 + 9 / 41
7 + 8 / 42
7 + 9 / 43
8 + 9 / 44
The problem here is that I am not quite sure the indices of distances_means match the indices of ctrds_distance.
In other words, I am not sure that the first inter-cluster distance computed corresponds to the distance between cluster 1 and cluster 2, that the second corresponds to the distance between cluster 1 and cluster 3, and so on, following the pattern above.
In short: I am afraid I am dividing pairs of intra-cluster distances by inter-cluster distances that do not correspond to them.
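A small sketch that can be used to check this doubt (it is not part of the function above): scipy's pdist returns the condensed distance matrix with pairs stored in the order (0,1), (0,2), ..., (0,n-1), (1,2), ..., (n-2,n-1), which is exactly the "storage num" pattern in the table, and squareform lets you verify each entry against the full matrix:

import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

# Toy centroids, only to inspect the ordering of pdist's output
centroids = np.random.rand(10, 2)
condensed = pdist(centroids)      # 45 values for 10 clusters
full = squareform(condensed)      # 10 x 10 symmetric distance matrix

# pdist stores pairs as (0,1), (0,2), ..., (0,9), (1,2), ..., (8,9)
for storage_num, (i, j) in enumerate(combinations(range(10), 2)):
    assert condensed[storage_num] == full[i, j]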
Here is a shorter, faster, corrected version of the naive Davies-Bouldin index implementation above.
import numpy as np
from scipy.spatial.distance import euclidean

def DaviesBouldin(X, labels):
    n_cluster = len(np.bincount(labels))
    cluster_k = [X[labels == k] for k in range(n_cluster)]    # points of each cluster
    centroids = [np.mean(k, axis=0) for k in cluster_k]       # centroid of each cluster
    variances = [np.mean([euclidean(p, centroids[i]) for p in k])
                 for i, k in enumerate(cluster_k)]             # mean intra-cluster distance per cluster
    db = []
    for i in range(n_cluster):
        for j in range(n_cluster):
            if j != i:
                db.append((variances[i] + variances[j]) / euclidean(centroids[i], centroids[j]))
    return np.max(db) / n_cluster
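One caveat worth noting about the return line: in the original Davies & Bouldin formulation, the index is the mean over clusters of each cluster's worst (maximum) ratio, rather than a single global max divided by the number of clusters. Below is a hedged sketch of that per-cluster-max variant (the function name is mine, chosen for illustration); recent scikit-learn versions also provide sklearn.metrics.davies_bouldin_score, which can serve as a cross-check:

import numpy as np
from scipy.spatial.distance import euclidean

def davies_bouldin_per_cluster_max(X, labels):
    # Same ingredients as DaviesBouldin above: per-cluster scatter and centroids
    n_cluster = len(np.bincount(labels))
    cluster_k = [X[labels == k] for k in range(n_cluster)]
    centroids = [np.mean(k, axis=0) for k in cluster_k]
    variances = [np.mean([euclidean(p, centroids[i]) for p in k])
                 for i, k in enumerate(cluster_k)]

    # For each cluster keep only the largest ratio over the other clusters, then average those maxima
    worst = []
    for i in range(n_cluster):
        worst.append(max((variances[i] + variances[j]) / euclidean(centroids[i], centroids[j])
                         for j in range(n_cluster) if j != i))
    return np.mean(worst)

# Cross-check (requires scikit-learn >= 0.20):
# from sklearn.metrics import davies_bouldin_score
# davies_bouldin_score(X, labels)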
Answering my own questions:
Note that you can find innovative variations that try to improve this index, notably the "New Version of Davies-Bouldin Index", which replaces the euclidean distance with cylindrical distance.