How to automatically determine the optimal number of clusters with hierarchical cluster analysis in Python?

yic*_*hun 7 python cluster-analysis hierarchical-clustering

I want to use hierarchical cluster analysis to automatically determine the optimal number of clusters (K), and then apply this K to K-means clustering in Python.

After reading many articles, I know that several methods suggest plotting a graph to determine K, but is there any way to output the actual number automatically in Python?

Tri*_*ops 7

Hierarchical clustering methods determine the optimal number of clusters from the dendrogram. Plot the dendrogram with code similar to the following:

# General imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Special imports
from scipy.cluster.hierarchy import dendrogram, linkage

# Load data, fill in appropriately
X = []

# Labels for the data points, fill in appropriately
labelList = []

# How to cluster the data; 'single' linkage merges the pair of clusters
# with the minimal distance between them
linked = linkage(X, 'single')

# Plot dendrogram
plt.figure(figsize=(10, 7))
dendrogram(linked,
            orientation='top',
            labels=labelList,
            distance_sort='descending',
            show_leaf_counts=True)
plt.show()

In the dendrogram, find the largest vertical gap between merge levels and draw a horizontal line through its middle. The number of vertical lines that this horizontal line intersects is the optimal number of clusters (when affinity is computed with the method set in the linkage call).

See an example here: https://stackabuse.com/hierarchical-clustering-with-python-and-scikit-learn/
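
Once you have chosen a cut height from the dendrogram, SciPy can turn it into flat cluster labels directly. Here is a minimal sketch using scipy.cluster.hierarchy.fcluster; the cut_height value is a placeholder you would read off your own dendrogram:

from scipy.cluster.hierarchy import fcluster

# Cut the dendrogram with a horizontal line at the chosen height;
# every vertical line it crosses becomes one flat cluster.
cut_height = 200  # placeholder: read this off your dendrogram
labels = fcluster(linked, t=cut_height, criterion='distance')
num_clusters = labels.max()
print(f"Number of clusters at height {cut_height} = {num_clusters}")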

I would also like to know how to read the dendrogram automatically and extract that number.
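
One way to automate that reading (my own sketch, not part of the linked tutorial) is to look for the largest gap between successive merge distances in the linkage matrix and cut the tree inside that gap. The helper estimate_k_from_linkage below is hypothetical:

import numpy as np

# Hypothetical helper: estimate K from the largest vertical gap between
# successive merge distances in a SciPy linkage matrix Z.
def estimate_k_from_linkage(Z):
    merge_distances = Z[:, 2]        # merge heights, in increasing order
    gaps = np.diff(merge_distances)  # vertical differences between merges
    largest_gap_index = int(np.argmax(gaps))
    # Cutting inside the largest gap undoes every merge above it; with
    # n - 1 merges in total, (n - 1) - largest_gap_index clusters remain.
    return len(merge_distances) - largest_gap_index

k = estimate_k_from_linkage(linked)
print(f"Estimated number of clusters = {k}")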

EDIT: There is a way to do this with the scikit-learn package. See the following example:

#==========================================================================
# Hierarchical Clustering - Automatic determination of number of clusters
#==========================================================================

# General imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from os import path

# Special imports
from scipy.cluster.hierarchy import dendrogram, linkage
import scipy.cluster.hierarchy as shc
from sklearn.cluster import AgglomerativeClustering

# %matplotlib inline

print("============================================================")
print("       Hierarchical Clustering demo - num of clusters       ")
print("============================================================")
print(" ")


folder = path.dirname(path.realpath(__file__)) # set current folder

# Load data
customer_data = pd.read_csv( path.join(folder, "hierarchical-clustering-with-python-and-scikit-learn-shopping-data.csv"))
# print(customer_data.shape)
print("In this data there should be 5 clusters...")

# Retain only the last two columns
data = customer_data.iloc[:, 3:5].values

# # Plot dendrogram using SciPy
# plt.figure(figsize=(10, 7))
# plt.title("Customer Dendrograms")
# dend = shc.dendrogram(shc.linkage(data, method='ward'))

# plt.show()


# Initialize the hierarchical clustering method; to let the algorithm determine
# the number of clusters itself, set n_clusters=None and compute_full_tree=True.
# The best distance threshold value for this dataset is distance_threshold=200.
# Ward linkage requires Euclidean distances, so no affinity/metric argument is
# needed (the 'affinity' parameter was removed in recent scikit-learn versions).
cluster = AgglomerativeClustering(n_clusters=None, linkage='ward', compute_full_tree=True, distance_threshold=200)

# Cluster the data
cluster.fit_predict(data)

print(f"Number of clusters = {1+np.amax(cluster.labels_)}")

# Display the clustering, assigning cluster label to every datapoint 
print("Classifying the points into clusters:")
print(cluster.labels_)

# Display the clustering graphically in a plot
plt.scatter(data[:,0],data[:,1], c=cluster.labels_, cmap='rainbow')
plt.title(f"SK Learn estimated number of clusters = {1+np.amax(cluster.labels_)}")
plt.show()

print(" ")

[Clustering results plot]

The data was taken from here: https://stackabuse.s3.amazonaws.com/files/hierarchical-clustering-with-python-and-scikit-learn-shopping-data.csv
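
To close the loop on the original question (feeding the estimated K into K-means), here is a minimal sketch assuming scikit-learn's KMeans and the data and cluster objects from the example above:

from sklearn.cluster import KMeans

# Use the cluster count found by the hierarchical step as K for K-means
k = 1 + np.amax(cluster.labels_)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans_labels = kmeans.fit_predict(data)

plt.scatter(data[:, 0], data[:, 1], c=kmeans_labels, cmap='rainbow')
plt.title(f"K-means with K = {k}")
plt.show()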