Is it possible to explicitly set the list of possible classes for an sklearn SVM?

Jos*_*lle 5 python classification svm scikit-learn

I have a program that uses sklearn's SVC class. More precisely, I'm using the OneVsRestClassifier class, which in turn uses the SVC class. My problem is that the predict_proba() method sometimes returns a vector that is too short. This is because the classes_ attribute is missing a class, which happens whenever a label is absent from the training data.

Consider the example below (code shown below). Suppose the possible classes are 1, 2, 3 and 4. Now suppose the training data happens to contain no data labeled with class 3. That's fine, except that when I call predict_proba() I want a vector of length 4, but instead I get a vector of length 3. That is, predict_proba() returns [p(1) p(2) p(4)], but I want [p(1) p(2) p(3) p(4)], where p(3) = 0.

I suppose clf.classes_ is implicitly defined by the labels seen during training, which in this case is incomplete. Is there any way to explicitly set the possible class labels? I know a simple workaround is to take the predict_proba() output and manually build the array I want, but that's inconvenient and might slow my program down.

# Python 2.7.6

from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
import numpy as np

X_train = [[1], [2], [4]] * 10
y = [1, 2, 4] * 10
X_test = [[1]]

clf = OneVsRestClassifier(SVC(probability=True, kernel="linear"))
clf.fit(X_train, y)

# calling predict_proba() gives: [p(1) p(2) p(4)]
# I want: [p(1) p(2) p(3) p(4)], where p(3) = 0
print(clf.predict_proba(X_test))

The workaround I have in mind builds a new list of probabilities one element at a time through repeated append() calls (see the code below). This seems slow compared to having predict_proba() return what I want automatically. I don't yet know whether it slows my program down significantly, since I haven't tried it. Either way, I'd like to know if there is a better way.

def workAround(probs, classes_, all_classes):
    """
    probs: list of probabilities, output of predict_proba (but 1D)
    classes_: clf.classes_
    all_classes: all possible classes; superset of classes_
    """
    all_probs = []
    i = 0  # index into probs and classes_

    for cls in all_classes:
        if i < len(classes_) and cls == classes_[i]:  # guard against running past the end of classes_
            all_probs.append(probs[i])
            i += 1
        else:
            all_probs.append(0.0)

    return np.asarray(all_probs)
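For completeness, here is a minimal, self-contained usage sketch of the workaround above (the function is repeated so the snippet runs on its own; the probability values are made up for illustration, and both classes_ and all_classes are assumed to be sorted, as clf.classes_ is):

```python
import numpy as np

def workAround(probs, classes_, all_classes):
    """Pad probs (aligned with classes_) with zeros for classes absent from classes_."""
    all_probs = []
    i = 0  # index into probs and classes_
    for cls in all_classes:
        if i < len(classes_) and cls == classes_[i]:
            all_probs.append(probs[i])
            i += 1
        else:
            all_probs.append(0.0)  # class never seen during training
    return np.asarray(all_probs)

probs = [0.7, 0.2, 0.1]       # one row of predict_proba output (illustrative values)
classes_ = [1, 2, 4]          # clf.classes_ (label 3 was missing at training time)
all_classes = [1, 2, 3, 4]    # the full label set
print(workAround(probs, classes_, all_classes))  # [0.7 0.2 0.  0.1]
```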

Fra*_*urt 4

As mentioned in the comments, scikit-learn does not provide a way to explicitly set the possible class labels.

I NumPyfied your workaround:

import sklearn
import sklearn.svm
import numpy as np
np.random.seed(3) # for reproducibility

def predict_proba_ordered(probs, classes_, all_classes):
    """
    probs: list of probabilities, output of predict_proba 
    classes_: clf.classes_
    all_classes: all possible classes (superset of classes_)
    """
    proba_ordered = np.zeros((probs.shape[0], all_classes.size), dtype=float)  # np.float was removed in NumPy 1.24
    sorter = np.argsort(all_classes) # http://stackoverflow.com/a/32191125/395857
    idx = sorter[np.searchsorted(all_classes, classes_, sorter=sorter)]
    proba_ordered[:, idx] = probs
    return proba_ordered

# Prepare the data set
all_classes = np.array([1,2,3,4]) # explicitly set the possible class labels.
X_train = [[1], [2], [4]] * 3
print('X_train: {0}'.format(X_train))
y = [1, 2, 4] * 3 # Label 3 is missing.
print('y: {0}'.format(y))
X_test = [[1], [2], [3]]
print('X_test: {0}'.format(X_test))

# Train
clf = sklearn.svm.SVC(probability=True, kernel="linear")
clf.fit(X_train, y)
print('clf.classes_: {0}'.format(clf.classes_))

# Predict
probs = clf.predict_proba(X_test)  # as label 3 isn't in the training set, probs has 3 columns, not 4
proba_ordered = predict_proba_ordered(probs, clf.classes_, all_classes)
print('proba_ordered: {0}'.format(proba_ordered))

Output:

X_train: [[1], [2], [4], [1], [2], [4], [1], [2], [4]]
y: [1, 2, 4, 1, 2, 4, 1, 2, 4]
X_test: [[1], [2], [3]]
clf.classes_: [1 2 4]
proba_ordered: [[ 0.81499201  0.08640176  0.          0.09860622]
                [ 0.21105955  0.63893181  0.          0.15000863]
                [ 0.08965731  0.49640147  0.          0.41394122]]
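The searchsorted reindexing can be checked in isolation with dummy probabilities (a standalone sketch of the same function, with made-up values):

```python
import numpy as np

def predict_proba_ordered(probs, classes_, all_classes):
    """Scatter probs (columns aligned with classes_) into an array over all_classes."""
    proba_ordered = np.zeros((probs.shape[0], all_classes.size), dtype=float)
    sorter = np.argsort(all_classes)
    # For each seen class, find its column index in all_classes
    idx = sorter[np.searchsorted(all_classes, classes_, sorter=sorter)]
    proba_ordered[:, idx] = probs
    return proba_ordered

probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.6, 0.2]])  # two samples, classes 1, 2 and 4
out = predict_proba_ordered(probs, np.array([1, 2, 4]), np.array([1, 2, 3, 4]))
print(out)               # the column for class 3 is all zeros
print(out.sum(axis=1))   # rows still sum to 1
```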

Note that you can explicitly set the possible class labels in sklearn.metrics (e.g. in sklearn.metrics.f1_score) using the labels parameter:

labels : array
Integer array of labels.

Example:

# Score
y_pred = clf.predict(X_test)
y_true = np.array([1,2,3])
precision = sklearn.metrics.precision_score(y_true, y_pred, labels=all_classes, average=None)
print('precision: {0}'.format(precision))
recall = sklearn.metrics.recall_score(y_true, y_pred, labels=all_classes, average=None)
print('recall: {0}'.format(recall))
f1_score = sklearn.metrics.f1_score(y_true, y_pred, labels=all_classes, average=None)
print('f1_score: {0}'.format(f1_score))
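The same labels parameter also works for sklearn.metrics.confusion_matrix, another place where an absent class would otherwise silently shrink the output; a minimal sketch with made-up predictions:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 2, 4]
y_pred = [1, 2, 2]
# Without labels=, class 3 would not get a row or column at all.
cm = confusion_matrix(y_true, y_pred, labels=[1, 2, 3, 4])
print(cm)  # 4x4 matrix; the row and column for class 3 are all zeros
```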

Note that, as of this writing, you will run into problems trying to use sklearn.metrics.roc_auc_score() when the ground truth contains no positive examples for a given label.