I have a set of documents from which I create a feature matrix. I then compute the cosine similarity between the documents and feed the cosine distance matrix into the DBSCAN algorithm. My code is below.
import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine
from sklearn.cluster import DBSCAN
# Initialize some documents
doc1 = {'Science':0.8, 'History':0.05, 'Politics':0.15, 'Sports':0.1}
doc2 = {'News':0.2, 'Art':0.8, 'Politics':0.1, 'Sports':0.1}
doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1}
doc4 = {'Science':0.1, 'Weather':0.2, 'Art':0.7, 'Sports':0.1}
doc5 = {'Science':0.2, 'Weather':0.7, 'Art':0.8, 'Sports':0.9}
doc6 = {'Science':0.2, 'Weather':0.8, 'Art':0.8, 'Sports':1.0}
collection = [doc1, doc2, doc3, doc4, doc5, doc6]
df = pd.DataFrame(collection)
# Fill missing values with zeros
df.fillna(0, inplace=True)
# Get Feature Vectors
feature_matrix = df.to_numpy()
print(feature_matrix.tolist())
# Get cosine distance between pairs
sims = pairwise_distances(feature_matrix, metric='cosine')
# Fit DBSCAN
db = DBSCAN(min_samples=1, metric='precomputed').fit(sims)
Now, as shown in sklearn's DBSCAN demo, I plot the clusters. That is, instead of X I pass in sims, which is my cosine distance matrix.
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
#print(labels)
# Plot result
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = [0, 0, 0, 1]
    class_member_mask = (labels == k)
    xy = sims[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)
    xy = sims[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
My first question is: is it correct to substitute sims for X, since X represents coordinate values in sklearn's demo while sims represents cosine distance values? My second question is: can I paint a given point red, e.g. the point whose feature_matrix row is [0.8, 0.0, 0.0, 0.0, 0.2, 0.9, 0.7]?
First, a comment on terminology:
There are two kinds of matrices that measure how close the objects in a dataset are to each other:
A distance matrix describes the pairwise distances between objects in the dataset.
A similarity matrix describes the pairwise similarities between objects in the dataset.
Usually, when two objects are close to each other, their distance is small but their similarity is large, so a distance matrix and a similarity matrix are in some sense opposites of each other. For the cosine metric, for example, the relationship between the distance matrix D and the similarity matrix S can be written as D = 1 - S.
Since the sims array in the example above contains pairwise distances, it might be more appropriate to call it dists.
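The relationship D = 1 - S can be checked directly in sklearn; as a minimal sketch (the small matrix here is just illustrative data):

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.metrics.pairwise import cosine_similarity

# Three toy feature vectors
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

S = cosine_similarity(X)                     # pairwise cosine similarities
D = pairwise_distances(X, metric='cosine')   # pairwise cosine distances

# sklearn's cosine distance is exactly 1 minus the cosine similarity
print(np.allclose(D, 1 - S))  # True
```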
My first question is: is it correct to substitute sims for X, since X represents coordinate values in sklearn's demo while sims represents cosine distance values?
No. If you want to plot the data on a two-dimensional plane, the plotting function needs a two-dimensional array of coordinates as input. A distance matrix is not enough.
If your data has more than two dimensions, you can obtain a two-dimensional representation of it with some dimensionality reduction technique. Sklearn contains many useful dimensionality reduction algorithms in the sklearn.manifold and sklearn.decomposition modules. The choice of algorithm usually depends on the nature of the data and may require some experimentation.
In sklearn, most dimensionality reduction methods accept feature (or coordinate) vectors as input. Some also accept distance or similarity matrices (this needs to be checked in the documentation; a good hint is the keyword precomputed being mentioned somewhere). Care should also be taken not to feed a similarity matrix to a method that expects a distance matrix, and vice versa.
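One such method that does accept a precomputed distance matrix is sklearn's MDS (via its dissimilarity='precomputed' option); a minimal sketch, using random data in place of the document features:

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.manifold import MDS

# Illustrative stand-in for a document feature matrix: 6 docs, 7 features
rng = np.random.RandomState(0)
features = rng.rand(6, 7)

# Pairwise cosine distances between the documents
dists = pairwise_distances(features, metric='cosine')

# MDS can embed a precomputed distance matrix into 2-D coordinates,
# which can then be passed to a plotting function
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(dists)
print(coords.shape)  # (6, 2)
```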
My second question is: can a given point be painted red? For example, I want to change the point representing the feature_matrix row [0.8, 0.0, 0.0, 0.0, 0.2, 0.9, 0.7] to red.
Question 2 is somewhat different, being mostly a matter of matplotlib.
I assume it is known in advance which points will be painted red. In the code below there is an array called red_points, which should contain the indices of the red points. So if, for example, doc2 and doc5 should be painted red, you would set red_points = [1, 4] (indexing starts from zero).
For visualizing the clusters, the dimensionality reduction is done with principal component analysis (PCA), one of the most straightforward methods for this kind of task. Note that I do not compute a distance matrix at all, but apply both DBSCAN and PCA directly to feature_matrix.
import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine
from sklearn.cluster import DBSCAN
# Initialize some documents
doc1 = {'Science':0.8, 'History':0.05, 'Politics':0.15, 'Sports':0.1}
doc2 = {'News':0.2, 'Art':0.8, 'Politics':0.1, 'Sports':0.1}
doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1}
doc4 = {'Science':0.1, 'Weather':0.2, 'Art':0.7, 'Sports':0.1}
doc5 = {'Science':0.2, 'Weather':0.7, 'Art':0.8, 'Sports':0.9}
doc6 = {'Science':0.2, 'Weather':0.8, 'Art':0.8, 'Sports':1.0}
collection = [doc1, doc2, doc3, doc4, doc5, doc6]
df = pd.DataFrame(collection)
# Fill missing values with zeros
df.fillna(0, inplace=True)
# Get Feature Vectors
feature_matrix = df.to_numpy()
# Fit DBSCAN
db = DBSCAN(min_samples=1).fit(feature_matrix)
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
# Plot result
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Perform dimensional reduction of the feature matrix with PCA
X = PCA(n_components=2).fit_transform(feature_matrix)
# Select which points will be painted red
red_points = [1, 4]
for i in red_points:
    labels[i] = -2
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = [0, 0, 0, 1]
    if k == -2:
        # Red for selected points
        col = [1, 0, 0, 1]
    class_member_mask = (labels == k)
    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)
    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
The figure on the left shows the case where red_points is empty; the one on the right is for red_points = [1, 4].