How do I initialize the cluster centers for K-means in Spark MLlib?

Har*_*hit 5 apache-spark apache-spark-mllib

Is there a way to initialize the cluster centers when running K-Means in Spark MLlib?

I tried the following:

model = KMeans.train(
    sc.parallelize(data), 3, maxIterations=0,
    initialModel = KMeansModel([(-1000.0,-1000.0),(5.0,5.0),(1000.0,1000.0)]))

Neither initialModel nor setInitialModel exists in spark-mllib_2.10.

zer*_*323 7

An initial model can be set in Scala since Spark 1.5+ using setInitialModel, which takes a KMeansModel:

import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}
import org.apache.spark.mllib.linalg.Vectors

val data = sc.parallelize(Seq(
    "[0.0, 0.0]", "[1.0, 1.0]", "[9.0, 8.0]", "[8.0,  9.0]"
)).map(Vectors.parse(_))

val initialModel = new KMeansModel(
   Array("[0.6,  0.6]", "[8.0,  8.0]").map(Vectors.parse(_))
)

val model = new KMeans()
  .setInitialModel(initialModel)
  .setK(2)
  .run(data)
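Note that the number of centers in the initial model has to match the k you set via setK (two in this example); otherwise KMeans will reject the initial model.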

In PySpark 1.6+ pass the initialModel argument to the train method:

from pyspark.mllib.clustering import KMeansModel, KMeans
from pyspark.mllib.linalg import Vectors

data = sc.parallelize([
    "[0.0, 0.0]", "[1.0, 1.0]", "[9.0, 8.0]", "[8.0,  9.0]"
]).map(Vectors.parse)

initialModel = KMeansModel([
    Vectors.parse(v) for v in ["[0.6,  0.6]", "[8.0,  8.0]"]])
model = KMeans.train(data, 2, initialModel=initialModel)
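To confirm that the custom centers were actually picked up, you can inspect the fitted centers and the resulting assignments. A minimal sanity check, reusing data and model from the snippet above:

print(model.clusterCenters)           # fitted centers, close to the initial guesses [0.6, 0.6] and [8.0, 8.0]
print(model.predict(data).collect())  # cluster index per point, e.g. [0, 0, 1, 1]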

If either of these approaches doesn't work, it means you are using an earlier version of Spark.
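If you are not sure which version you are on, checking it from the shell is enough (sc here is the SparkContext handle available in the PySpark shell):

print(sc.version)  # needs 1.5+ for the Scala setInitialModel, 1.6+ for the PySpark initialModel argument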