Multivariate Outlier Removal With Mahalanobis Distance

Shu*_*agi 2 machine-learning


I have data with outliers. How can I find the Mahalanobis distance and use it to remove the outliers?

Yah*_*hya 12

I found @Nipun Wijerathne's answer incomplete and somewhat messy, so I decided to provide an MCVE for future readers (you will actually find the MCVE at the end :D), but first let me lay out some general guidelines:

  1. Practically speaking, if you have a lot of features and fewer samples (i.e. inputs), the Mahalanobis algorithm tends to give misleading results (you can try it yourself; see also the sketch after this list), so the more features you have, the more samples you should provide.
  2. The covariance matrix must be symmetric and positive definite for the algorithm to work, so you should check that before proceeding!
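
To see the first point in its extreme form, here is a small sketch (synthetic data, my own example) showing that with fewer samples than features the covariance matrix is singular, so the inverse that the Mahalanobis distance needs does not even exist:

import numpy as np

rng = np.random.default_rng(0)
few_examples = rng.normal(size=(3, 5))    # 3 samples, 5 features
cov = np.cov(few_examples, rowvar=False)  # 5x5 matrix, but of rank at most 2
print(np.linalg.matrix_rank(cov))         # 2 -> np.linalg.inv(cov) raises LinAlgError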

As mentioned earlier, the Euclidean metric fails to find the correct distance because it only measures the ordinary straight-line distance. So in a multi-dimensional space of variables, two points may look as if they are at the same distance from the mean, while in reality one of them is far away from the data cloud (i.e. it is an extreme value).

(figure: Euclidean vs. Mahalanobis distance)


The solution is the Mahalanobis distance, which does something similar to feature scaling by using the eigenvectors of the variables instead of the original axes.
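
As a quick numeric illustration (a synthetic cloud, my own example rather than part of the original answer), two points at the same Euclidean distance from the mean can have wildly different Mahalanobis distances:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
cloud = np.column_stack([x, x + rng.normal(scale=0.1, size=1000)])  # strongly correlated variables
m = cloud.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(cloud, rowvar=False))
for p in (m + np.array([1.0, 1.0]), m + np.array([1.0, -1.0])):  # along vs. across the cloud
    d = p - m
    print(np.linalg.norm(d), np.sqrt(d.dot(inv_cov).dot(d)))
# same Euclidean distance (~1.41), but the point across the cloud has a
# Mahalanobis distance an order of magnitude larger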

The Mahalanobis distance is given by the following formula:

    D_M(x) = sqrt((x - m)^T C^(-1) (x - m))

where:

  • x is the vector of the observation (i.e. one row of the dataset),
  • m is the vector of the mean values of the independent variables (i.e. the mean of each column),
  • C^(-1) is the inverse covariance matrix of the independent variables.
Refresher:

The covariance represents the direction of the relationship between two variables (i.e. positive, negative, or zero), so it shows how strongly one variable changes along with the others.
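
For example (toy vectors, my own illustration):

import numpy as np

a = np.array([1, 2, 3, 4])
print(np.cov(a, 2 * a)[0, 1])                   # positive: the variables move together
print(np.cov(a, -2 * a)[0, 1])                  # negative: they move in opposite directions
print(np.cov(a, np.array([5, 5, 5, 5]))[0, 1])  # zero: no linear relationship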


Implementation

Consider this example of a 6x3 dataset, where each row represents a sample/example and each column represents a feature of that example:

(figure: a 6x3 matrix representing the data)

First, we need to create a covariance matrix of the features of each sample, and that is why we set the parameter rowvar to False in the numpy.cov function, so that each column now represents a variable:

(figure: the covariance matrix of the features)

covariance_matrix = np.cov(data, rowvar=False)  
# data here looks similar to the above 2D / Matrix example in the pictures

Then we find the inverse of the covariance matrix:

inv_covariance_matrix = np.linalg.inv(covariance_matrix)

But before proceeding, we should check, as mentioned above, that the matrix and its inverse are symmetric and positive definite. For this we use the Cholesky decomposition algorithm which, fortunately, is already implemented in numpy.linalg.cholesky:

def is_pos_def(A):
    if np.allclose(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False 
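
A minimal usage sketch, assuming the covariance_matrix and inv_covariance_matrix computed above:

if not (is_pos_def(covariance_matrix) and is_pos_def(inv_covariance_matrix)):
    raise ValueError("covariance matrix (or its inverse) is not symmetric positive definite")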

Next, we find the mean m of the variables on each feature (I should say dimension) and save them in an array like this:

vars_mean = []
for i in range(data.shape[0]):
    vars_mean.append(list(data.mean(axis=0)))  # axis=0 means each column in the 2D array

(figure: the resulting vars_mean matrix, the column means repeated per row)

Note that I repeated each row just to take advantage of matrix subtraction, as demonstrated below.

Next, we find x - m (i.e. the differential); since vars_mean is already vectorized, all we need to do is:

diff = data - vars_mean
# here we subtract the mean of feature from each feature of each example

(figure: the resulting diff matrix)
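
As a side note (an alternative not used in the original answer), NumPy broadcasting gives the same diff without building vars_mean at all:

diff = data - data.mean(axis=0)  # broadcasting subtracts the column means from every row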

Finally, apply the formula like this:

md = []
for i in range(len(diff)):
    md.append(np.sqrt(diff[i].dot(inv_covariance_matrix).dot(diff[i]))) 

Note the following:

  • The dimensions of the inverse of the covariance matrix are: number_of_features x number_of_features
  • The dimensions of the diff matrix are the same as the original data matrix: number_of_examples x number_of_features
  • Thus, each diff[i] (i.e. row) is 1 x number_of_features
  • So, according to the matrix multiplication rule, the matrix resulting from diff[i].dot(inv_covariance_matrix) will be 1 x number_of_features, and when we multiply it again by diff[i], numpy automatically treats the latter as a column matrix, i.e. number_of_features x 1, so the final result becomes a single value! (i.e. no need for a transpose; see also the vectorized sketch after this list)
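
For what it's worth, the whole loop can also be vectorized with np.einsum (an equivalent sketch, assuming the same diff and inv_covariance_matrix as above):

md = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_covariance_matrix, diff))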

To detect the outliers, we should specify a threshold; we do so by taking the mean of the Mahalanobis distance results plus or minus the extremeness degree k, where k = 2.0 * std for extreme values and 3.0 * std for the very extreme values, according to the 68–95–99.7 rule (the image below, from the same linked source, is for illustration):

(figure: the 68–95–99.7 rule)
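
In code, that thresholding step looks like this (a sketch, with md being the list of distances computed above):

m = np.mean(md)
k = 2.0 * np.std(md)  # use 3.0 * np.std(md) for the very extreme case
outliers = [i for i, d in enumerate(md) if abs(d - m) > k]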


Putting It All Together

import numpy as np


def create_data(examples=50, features=5, upper_bound=10, outliers_fraction=0.1, extreme=False):
    '''
    This method for testing (i.e. to generate a 2D array of data)
    '''
    data = []
    magnitude = 4 if extreme else 3
    for i in range(examples):
        if (examples - i) <= round((float(examples) * outliers_fraction)):
            data.append(np.random.poisson(upper_bound ** magnitude, features).tolist())
        else:
            data.append(np.random.poisson(upper_bound, features).tolist())
    return np.array(data)


def MahalanobisDist(data, verbose=False):
    covariance_matrix = np.cov(data, rowvar=False)
    if is_pos_def(covariance_matrix):
        inv_covariance_matrix = np.linalg.inv(covariance_matrix)
        if is_pos_def(inv_covariance_matrix):
            vars_mean = []
            for i in range(data.shape[0]):
                vars_mean.append(list(data.mean(axis=0)))
            diff = data - vars_mean
            md = []
            for i in range(len(diff)):
                md.append(np.sqrt(diff[i].dot(inv_covariance_matrix).dot(diff[i])))

            if verbose:
                print("Covariance Matrix:\n {}\n".format(covariance_matrix))
                print("Inverse of Covariance Matrix:\n {}\n".format(inv_covariance_matrix))
                print("Variables Mean Vector:\n {}\n".format(vars_mean))
                print("Variables - Variables Mean Vector:\n {}\n".format(diff))
                print("Mahalanobis Distance:\n {}\n".format(md))
            return md
        else:
            print("Error: Inverse of Covariance Matrix is not positive definite!")
    else:
        print("Error: Covariance Matrix is not positive definite!")


def MD_detectOutliers(data, extreme=False, verbose=False):
    MD = MahalanobisDist(data, verbose)
    # one popular way to specify the threshold
    #m = np.mean(MD)
    #t = 3. * m if extreme else 2. * m
    #outliers = []
    #for i in range(len(MD)):
    #    if MD[i] > t:
    #        outliers.append(i)  # index of the outlier
    #return np.array(outliers)

    # or according to the 68–95–99.7 rule
    std = np.std(MD)
    k = 3. * std if extreme else 2. * std
    m = np.mean(MD)
    up_t = m + k
    low_t = m - k
    outliers = []
    for i in range(len(MD)):
        if (MD[i] >= up_t) or (MD[i] <= low_t):
            outliers.append(i)  # index of the outlier
    return np.array(outliers)


def is_pos_def(A):
    if np.allclose(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False


data = create_data(15, 3, 10, 0.1)
print("data:\n {}\n".format(data))

outliers_indices = MD_detectOutliers(data, verbose=True)

print("Outliers Indices: {}\n".format(outliers_indices))
print("Outliers:")
for ii in outliers_indices:
    print(data[ii])

Result

data:
 [[ 12   7   9]
 [  9  16   7]
 [ 14  11  10]
 [ 14   5   5]
 [ 12   8   7]
 [  8   8  10]
 [  9  14   8]
 [ 12  12  10]
 [ 18  10   6]
 [  6  12  11]
 [  4  12  15]
 [  5  13  10]
 [  8   9   8]
 [106 116  97]
 [ 90 116 114]]

Covariance Matrix:
 [[ 980.17142857 1143.62857143 1035.6       ]
 [1143.62857143 1385.11428571 1263.12857143]
 [1035.6        1263.12857143 1170.74285714]]

Inverse of Covariance Matrix:
 [[ 0.03021777 -0.03563241  0.0117146 ]
 [-0.03563241  0.08684092 -0.06217448]
 [ 0.0117146  -0.06217448  0.05757261]]

Variables Mean Vector:
 [[21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8]]

Variables - Variables Mean Vector:
 [[ -9.8 -17.6 -12.8]
 [-12.8  -8.6 -14.8]
 [ -7.8 -13.6 -11.8]
 [ -7.8 -19.6 -16.8]
 [ -9.8 -16.6 -14.8]
 [-13.8 -16.6 -11.8]
 [-12.8 -10.6 -13.8]
 [ -9.8 -12.6 -11.8]
 [ -3.8 -14.6 -15.8]
 [-15.8 -12.6 -10.8]
 [-17.8 -12.6  -6.8]
 [-16.8 -11.6 -11.8]
 [-13.8 -15.6 -13.8]
 [ 84.2  91.4  75.2]
 [ 68.2  91.4  92.2]]

Mahalanobis Distance:
 [1.3669401667524865, 2.1796331318432967, 0.7470525416547134, 1.6364973119931507, 0.8351423113609481, 0.9128858131134882, 1.397144258271586, 0.35603382066414996, 1.4449501739129382, 0.9668775289588046, 1.490503433100514, 1.4021488309805878, 0.4500345257064412, 3.239353067840299, 3.260149280200771]

Outliers Indices: [13 14]

Outliers:
[106 116  97]
[ 90 116 114]

  • An MCVE? Wow. Impressive. You seem to have put a lot of work into this. I hope many people recognize the effort. (2 upvotes)

Nip*_*hne 5

Euclidean distance fails in multivariate data when there is covariance between the variables (i.e. between X, Y and Z in your case).

Therefore, what the Mahalanobis distance does is the following (a sketch follows the list):

  1. It transforms the variables into an uncorrelated space.

  2. It makes the variance of each variable equal to 1.

  3. It then calculates the simple Euclidean distance.
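
A small sketch of those three steps (synthetic data; using the whitening matrix C^(-1/2) is my own illustration, not from the original answer):

import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=500)
C = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T    # C^(-1/2): decorrelates and scales to unit variance
Z = (X - X.mean(axis=0)) @ W
print(np.round(np.cov(Z, rowvar=False), 2))  # ~ identity: uncorrelated, variance 1
d_white = np.linalg.norm(Z, axis=1)          # plain Euclidean distance in the whitened space
diff = X - X.mean(axis=0)
d_maha = np.sqrt(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C), diff))
print(np.allclose(d_white, d_maha))          # True: the two distances coincide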

We can calculate the Mahalanobis distance for each data sample as follows:

    D_M(x_i) = sqrt((x_i - m)^T C^(-1) (x_i - m))

Here, I provide the Python code, with comments added so that you can understand it:

import numpy as np

data = np.array([[1, 2, 3, 4, 5, 6, 7, 8],
                 [1, 4, 9, 16, 25, 36, 49, 64],
                 [1, 4, 9, 16, 25, 16, 49, 64]])

def MahalanobisDist(data):
    covariance_xyz = np.cov(data)  # calculate the covariance matrix (each row is a variable)
    inv_covariance_xyz = np.linalg.inv(covariance_xyz)  # take the inverse of the covariance matrix
    x, y, z = data  # the X, Y and Z variables as 1-D arrays
    xyz_mean = np.mean(x), np.mean(y), np.mean(z)
    x_diff = np.array([x_i - xyz_mean[0] for x_i in x])  # difference between each X sample and the X mean
    y_diff = np.array([y_i - xyz_mean[1] for y_i in y])  # difference between each Y sample and the Y mean
    z_diff = np.array([z_i - xyz_mean[2] for z_i in z])  # difference between each Z sample and the Z mean
    diff_xyz = np.transpose([x_diff, y_diff, z_diff])

    md = []
    for i in range(len(diff_xyz)):
        md.append(np.sqrt(np.dot(np.dot(np.transpose(diff_xyz[i]),inv_covariance_xyz),diff_xyz[i]))) #calculate the Mahalanobis Distance for each data sample
    return md

def MD_removeOutliers(data):
    MD = MahalanobisDist(data)
    threshold = np.mean(MD) * 1.5 # adjust 1.5 accordingly
    outliers = []
    for i in range(len(MD)):
        if MD[i] > threshold:
            outliers.append(i) # index of the outlier
    return np.array(outliers)

print(MD_removeOutliers(data))

Hope this helps.

References

  1. http://mccormickml.com/2014/07/21/mahalanobis-distance/

  2. http://kldavenport.com/mahalanobis-distance-and-outliers/

  3. https://www.youtube.com/watch?v=3IdvoI8O9hU&t=540s