Inconsistent KMeans results between Apache Spark and scikit-learn

Clo*_*ave 4 python k-means scikit-learn apache-spark pyspark

I am using PySpark to cluster a dataset. To find the number of clusters I run the clustering over a range of values (2, 20) and compute the WSSE (within-cluster sum of squared errors) for each value of k. This is where I found something unusual. As I understand it, the WSSE decreases monotonically as you increase the number of clusters, but the results I got say otherwise. I am showing the WSSE only for the first few values of k:

Results from Spark

For k = 002 WSSE is 255318.793358
For k = 003 WSSE is 209788.479560
For k = 004 WSSE is 208498.351074
For k = 005 WSSE is 142573.272672
For k = 006 WSSE is 154419.027612
For k = 007 WSSE is 115092.404604
For k = 008 WSSE is 104753.205635
For k = 009 WSSE is 98000.985547
For k = 010 WSSE is 95134.137071

If you look at the WSSE values for k=5 and k=6, you will see that the WSSE has increased. I turned to sklearn to check whether I get similar results. The code I used for Spark and sklearn is in the appendix at the end of the post. I tried to use the same values for the parameters of the Spark and sklearn KMeans models. Below are the results from sklearn, and they are as I expected - monotonically decreasing.

Results from sklearn

For k = 002 WSSE is 245090.224247
For k = 003 WSSE is 201329.888159
For k = 004 WSSE is 166889.044195
For k = 005 WSSE is 142576.895154
For k = 006 WSSE is 123882.070776
For k = 007 WSSE is 112496.692455
For k = 008 WSSE is 102806.001664
For k = 009 WSSE is 95279.837212
For k = 010 WSSE is 89303.574467

I am not sure why my WSSE values from Spark increase. I tried a different dataset and saw similar behaviour there too. Am I going wrong somewhere? Any clues would be great.


Appendix

The dataset is located here.

Reading the data and setting up variables

# get data
import pandas as pd
url = "https://raw.githubusercontent.com/vectosaurus/bb_lite/master/3.0%20data/adult_comp_cont.csv"

df_pandas = pd.read_csv(url)
df_spark = sqlContext.createDataFrame(df_pandas)  # sqlContext is assumed to be available (e.g. in a Spark shell/notebook)
target_col = 'high_income'
numeric_cols = [i for i in df_pandas.columns if i != target_col]

k_min = 2  # 2 is inclusive
k_max = 21 # 21 is exclusive; the loop will fit up to k = 20

max_iter = 1000
seed = 42    

This is the code I used to get the sklearn results:

from sklearn.cluster import KMeans as KMeans_SKL
from sklearn.preprocessing import StandardScaler as StandardScaler_SKL

ss = StandardScaler_SKL(with_std=True, with_mean=True)
ss.fit(df_pandas.loc[:, numeric_cols])
df_pandas_scaled = pd.DataFrame(ss.transform(df_pandas.loc[:, numeric_cols]))

wsse_collect = []

for i in range(k_min, k_max):
    km = KMeans_SKL(random_state=seed, max_iter=max_iter, n_clusters=i)
    _ = km.fit(df_pandas_scaled)
    wsse = km.inertia_
    print('For k = {i:03d} WSSE is {wsse:10f}'.format(i=i, wsse=wsse))
    wsse_collect.append(wsse)

This is the code I used to get the Spark results:

from pyspark.ml.feature import StandardScaler, VectorAssembler
from pyspark.ml.clustering import KMeans

standard_scaler_inpt_features = 'ss_features'
kmeans_input_features = 'features'
kmeans_prediction_col = 'prediction'


assembler = VectorAssembler(inputCols=numeric_cols, outputCol=standard_scaler_inpt_features)
assembled_df = assembler.transform(df_spark)

scaler = StandardScaler(inputCol=standard_scaler_inpt_features, outputCol=kmeans_input_features, withStd=True, withMean=True)
scaler_model = scaler.fit(assembled_df)
scaled_data = scaler_model.transform(assembled_df)

wsse_collect_spark = []

for i in range(k_min, k_max):
    km = KMeans(featuresCol=kmeans_input_features, predictionCol=kmeans_prediction_col,
                k=i, maxIter=max_iter, seed=seed)
    km_fit = km.fit(scaled_data)
    wsse_spark = km_fit.computeCost(scaled_data)
    wsse_collect_spark.append(wsse_spark)
    print('For k = {i:03d} WSSE is {wsse:10f}'.format(i=i, wsse=wsse_spark))

UPDATE

Following @Michail N's answer, I changed the tol and maxIter values for the Spark KMeans model. I re-ran the code but saw the same behaviour repeat. But since Michail mentioned

Spark MLlib, in fact, implements K-means||

I increased initSteps by a factor of 50 and re-ran the process; a minimal sketch of the change is shown below, followed by the results I got.
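A sketch of the change, assuming "a factor of 50" applied to the PySpark default of initSteps=2, i.e. initSteps=100 (the exact value is my assumption):

from pyspark.ml.clustering import KMeans

# Same loop as in the appendix, with initSteps raised so that K-means||
# evaluates more candidate centroids before the iterative refinement starts.
for i in range(k_min, k_max):
    km = KMeans(featuresCol=kmeans_input_features, predictionCol=kmeans_prediction_col,
                k=i, maxIter=max_iter, seed=seed, initSteps=100)  # default initSteps is 2
    km_fit = km.fit(scaled_data)
    wsse_spark = km_fit.computeCost(scaled_data)
    print('For k = {i:03d} WSSE is {wsse:10f}'.format(i=i, wsse=wsse_spark))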

For k = 002 WSSE is 255318.718684
For k = 003 WSSE is 212364.906298
For k = 004 WSSE is 185999.709027
For k = 005 WSSE is 168616.028321                                               
For k = 006 WSSE is 123879.449228                                               
For k = 007 WSSE is 113646.930680                                               
For k = 008 WSSE is 102803.889178                                               
For k = 009 WSSE is 97819.497501                                                
For k = 010 WSSE is 99973.198132                                                
For k = 011 WSSE is 89103.510831                                                
For k = 012 WSSE is 84462.110744                                                
For k = 013 WSSE is 78803.619605                                                
For k = 014 WSSE is 82174.640611                                                
For k = 015 WSSE is 79157.287447                                                
For k = 016 WSSE is 75007.269644                                                
For k = 017 WSSE is 71610.292172                                                
For k = 018 WSSE is 68706.739299                                                
For k = 019 WSSE is 65440.906151                                                
For k = 020 WSSE is 66396.106118

The increase in WSSE from k=5 to k=6 is gone. The problem still shows up if you look at, for example, k=13 and k=14 and elsewhere, but at least I now know where it comes from.

Mic*_*l N 6

There is nothing wrong with the WSSE not decreasing monotonically. In theory the WSSE must decrease monotonically only if the clustering is optimal - meaning, the clustering that achieves the best WSSE among all possible clusterings with k centers.

The problem is that K-means is not necessarily able to find the optimal clustering for a given k. Its iterative process can converge from a random starting point to a local minimum, which may be good but is not optimal.

There are methods like K-means++ and K-means|| whose selection algorithms are more likely to choose diverse, well-separated centroids and lead more reliably to a good clustering, and Spark MLlib in fact implements K-means||. However, all of them still have an element of randomness in the selection and cannot guarantee an optimal clustering.
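As a small illustration of my own (not from the original answer) of where that initialization choice lives in the API: PySpark exposes it through the KMeans initMode parameter, which defaults to 'k-means||'. The snippet uses a tiny synthetic DataFrame and assumes Spark 2.x, where KMeansModel.computeCost is still available (later releases expose the cost as summary.trainingCost instead):

from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.getOrCreate()

# Toy 2-D points packed into the single vector column that KMeans expects.
toy_df = spark.createDataFrame(
    [(Vectors.dense([float(i % 13), float(i % 7)]),) for i in range(200)],
    ["features"])

# 'k-means||' is the default init mode; 'random' picks starting centroids at random.
for mode in ('k-means||', 'random'):
    model = KMeans(k=4, initMode=mode, seed=1).fit(toy_df)
    print(mode, model.computeCost(toy_df))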

The random starting set of centroids chosen for k=6 perhaps led to a particularly suboptimal clustering, or it may have stopped early, before it reached its local optimum.
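To see how much the starting point alone can matter, here is a short sketch of my own (synthetic data, not the question's dataset) that fits sklearn's KMeans with a single random initialization per run, changing only the seed; the WSSE (inertia_) differs between runs because each restart can settle into a different local minimum:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data purely for illustration.
X, _ = make_blobs(n_samples=2000, centers=8, random_state=0)

for s in range(5):
    km = KMeans(n_clusters=6, init='random', n_init=1, random_state=s)
    km.fit(X)
    print('seed = {s} WSSE = {w:.2f}'.format(s=s, w=km.inertia_))

This is also one likely reason the sklearn curve in the question decreases smoothly: sklearn's KMeans by default repeats the fit several times (n_init, historically 10) and keeps the lowest-WSSE solution, whereas the Spark loop in the question fits a single model per k.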

You can improve on this by changing the KMeans parameters manually. The algorithm has a threshold, tol, which controls the minimum amount of cluster-centroid movement considered significant; a lower value means the K-means algorithm will let the centroids keep moving for longer.

Increasing the maximum number of iterations with maxIter also keeps it from stopping too early, at the cost of possibly more computation.

So my advice is to re-run your clustering with

 ...
 # increase from the default 20
 max_iter = 40
 # decrease from the default 0.0001
 tol = 0.00001
 km = KMeans(featuresCol=kmeans_input_features, predictionCol=kmeans_prediction_col,
             k=i, maxIter=max_iter, seed=seed, tol=tol)
 ...