mai*_*iky 12
Tags: lucene, indexing, cluster-analysis, k-means, mahout
I have read that I can create Mahout vectors from a Lucene index, and that these vectors can then be used with Mahout's clustering algorithms: http://cwiki.apache.org/confluence/display/MAHOUT/Creating+Vectors+from+Text

I want to apply the k-means clustering algorithm to the documents in my Lucene index, but it is not clear to me how to apply this algorithm (or hierarchical clustering) to extract meaningful clusters of these documents.

On this page, http://cwiki.apache.org/confluence/display/MAHOUT/k-Means, it says the algorithm accepts two input directories: one for the data points and one for the initial clusters. Are my data points the documents? How do I "declare" that these documents (or their vectors) are the input, so the algorithm just takes them and performs the clustering?

Sorry for my poor grammar.

Thanks.
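For reference, here is the kind of command the "Creating Vectors from Text" wiki page linked above describes for turning a Lucene index into Mahout vectors. This is a sketch only: the paths and the field names (`body`, `id`) are placeholders for whatever your own index uses, and the exact options may differ between Mahout releases.

```shell
# Sketch: generate Mahout vectors from an existing Lucene index.
# All paths and field names below are placeholders, not real values.
bin/mahout lucene.vector \
  --dir /path/to/lucene/index \   # directory containing the Lucene index
  --field body \                  # indexed field to vectorize (needs term vectors)
  --idField id \                  # field whose value labels each document
  --dictOut /tmp/dictionary.txt \ # mapping from terms to vector dimensions
  --output /tmp/vectors           # SequenceFile of vectors, ready for clustering
```

The `--output` directory is what you would later pass to the clustering job as its data points.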
If you have the vectors, you can run KMeansDriver. Here is its help output:
Usage:
[--input <input> --clusters <clusters> --output <output> --distance <distance>
--convergence <convergence> --max <max> --numReduce <numReduce> --k <k>
--vectorClass <vectorClass> --overwrite --help]
Options
--input (-i) input The Path for input Vectors. Must be a
SequenceFile of Writable, Vector
--clusters (-c) clusters The input centroids, as Vectors. Must be a
SequenceFile of Writable, Cluster/Canopy.
If k is also specified, then a random set
of vectors will be selected and written out
to this path first
--output (-o) output The Path to put the output in
--distance (-m) distance The Distance Measure to use. Default is
SquaredEuclidean
--convergence (-d) convergence The threshold below which the clusters are
considered to be converged. Default is 0.5
--max (-x) max The maximum number of iterations to
perform. Default is 20
--numReduce (-r) numReduce The number of reduce tasks
--k (-k) k The k in k-Means. If specified, then a
random selection of k Vectors will be
chosen as the Centroid and written to the
clusters output path.
--vectorClass (-v) vectorClass The Vector implementation class name.
Default is SparseVector.class
--overwrite (-w) If set, overwrite the output directory
--help (-h) Print out help
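Putting the options above together, an invocation might look like the following sketch. The paths are placeholders; the flags themselves come from the help text above. Note that when `--k` is given, the `--clusters` path is seeded with k randomly chosen vectors, so you do not need to prepare initial centroids yourself.

```shell
# Sketch: run k-means over the vectors, using the options from the help above.
# Paths are placeholders for your own HDFS locations.
bin/mahout kmeans \
  --input /tmp/vectors \     # SequenceFile of input vectors
  --clusters /tmp/clusters \ # initial centroids; seeded randomly because -k is set
  --output /tmp/kmeans-out \ # where the clustering results are written
  --k 20 \                   # choose 20 random vectors as initial centroids
  --max 10 \                 # run at most 10 iterations
  --overwrite                # overwrite the output directory if it exists
```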
Update: copy the result directory from HDFS to the local filesystem, then use the ClusterDumper utility to see the clusters and the list of documents in each cluster.
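The two steps in the update above might look like this sketch. The directory names (`clusters-10`, `clusteredPoints`) are illustrative: the final `clusters-N` directory depends on how many iterations actually ran, and ClusterDumper's option names vary between Mahout releases, so check `bin/mahout clusterdump --help` for your version.

```shell
# Sketch: fetch results from HDFS, then dump clusters in readable form.
# Directory names are placeholders for your own job's output.
hadoop fs -get /tmp/kmeans-out ./kmeans-out   # copy result dir to the local fs
bin/mahout clusterdump \
  --seqFileDir ./kmeans-out/clusters-10 \     # final clusters-N directory
  --pointsDir ./kmeans-out/clusteredPoints \  # cluster-to-document assignments
  --output clusters.txt                       # human-readable dump of the clusters
```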