sklearn: is there an estimator that filters samples?

Kor*_*rem 5 python scikit-learn

I am trying to implement my own Imputer. Under certain conditions, I want to filter out some of the training samples (which I consider to be of low quality).

However, the transform method returns only X, not y. y itself is a numpy array, and moreover, when I use GridSearchCV, the y that my transform method receives is None, so I cannot seem to find a way to do this.

Just to clarify: I know perfectly well how to filter arrays. What I cannot find is a way to fit sample filtering on the y vector into the current API.

I would really like to do this from a BaseEstimator implementation so that I can use it with GridSearchCV (my filter has some parameters). Am I missing a different way to implement sample filtering (not via BaseEstimator, but still compatible with GridSearchCV)? Is there any way to do it within the current API?
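For reference, the filtering itself is trivial outside the API; a minimal sketch (the "quality" score here is a made-up stand-in for the asker's low-quality criterion) shows both how easy the masking is and where the API gets in the way:

```python
import numpy as np

# Toy data: 5 samples, 3 features; a hypothetical per-sample "quality" score.
X = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [0., 0., 0.],
              [7., 8., 9.],
              [1., 1., 1.]])
y = np.array([0, 1, 0, 1, 1])
quality = X.sum(axis=1)

# Boolean-mask filtering keeps X and y aligned -- easy by itself, but a
# transformer's transform(X) has no channel through which to return the
# filtered y, which is exactly the problem described above.
mask = quality >= 3.0
X_filtered, y_filtered = X[mask], y[mask]
```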

Kor*_*rem 10

I found a solution with three parts:

  1. The line if idx == id(self.X): in transform. Since fit stores the training X, this identity check ensures that samples are filtered only on the training set.
  2. Overriding fit_transform to make sure the transform method receives y rather than None.
  3. Overriding Pipeline so that transform is allowed to return y as well.

Here is sample code demonstrating it. It probably does not cover every detail, but I think it addresses the main problem with the API.

from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import GaussianNB
# cross_validation and grid_search are the old (pre-0.18) module names;
# in modern scikit-learn the equivalents live in sklearn.model_selection.
from sklearn import cross_validation
from sklearn.grid_search import GridSearchCV
from sklearn.externals import six

class SampleAndFeatureFilter(BaseEstimator, TransformerMixin):
    def __init__(self, perc = None):
        self.perc = perc

    def fit(self, X, y=None):
        # Remember the training X so that transform can recognise it by identity.
        self.X = X
        sum_per_feature = X.sum(0)
        sum_per_sample = X.sum(1)
        # Keep only features/samples whose sums reach the given percentile.
        self.featurefilter = sum_per_feature >= np.percentile(sum_per_feature, self.perc)
        self.samplefilter = sum_per_sample >= np.percentile(sum_per_sample, self.perc)
        return self

    def transform(self, X, y=None, copy=None):
        # Identity check: filter samples only when transforming the training set.
        idx = id(X)
        X = X[:, self.featurefilter]
        if idx == id(self.X):
            X = X[self.samplefilter, :]
            if y is not None:
                y = y[self.samplefilter]
            return X, y
        return X

    def fit_transform(self, X, y=None, **fit_params):
        if y is None:
            return self.fit(X, **fit_params).transform(X)
        else:
            return self.fit(X, y, **fit_params).transform(X,y)

class PipelineWithSampleFiltering(Pipeline):
    def fit_transform(self, X, y=None, **fit_params):
        Xt, yt, fit_params = self._pre_transform(X, y, **fit_params)
        if hasattr(self.steps[-1][-1], 'fit_transform'):
            return self.steps[-1][-1].fit_transform(Xt, yt, **fit_params)
        else:
            return self.steps[-1][-1].fit(Xt, yt, **fit_params).transform(Xt)

    def fit(self, X, y=None, **fit_params):
        Xt, yt, fit_params = self._pre_transform(X, y, **fit_params)
        self.steps[-1][-1].fit(Xt, yt, **fit_params)
        return self

    def _pre_transform(self, X, y=None, **fit_params):
        fit_params_steps = dict((step, {}) for step, _ in self.steps)
        for pname, pval in six.iteritems(fit_params):
            step, param = pname.split('__', 1)
            fit_params_steps[step][param] = pval
        Xt = X
        yt = y
        for name, transform in self.steps[:-1]:
            if hasattr(transform, "fit_transform"):
                res = transform.fit_transform(Xt, yt, **fit_params_steps[name])
                if len(res) == 2:
                    Xt, yt = res
                else:
                    Xt = res
            else:
                Xt = transform.fit(Xt, yt, **fit_params_steps[name]) \
                              .transform(Xt)
        return Xt, yt, fit_params_steps[self.steps[-1][0]]

if __name__ == '__main__':
    X = np.random.random((100, 30))
    y = np.random.randint(0, 2, 100)  # np.random.random_integers is deprecated
    pipe = PipelineWithSampleFiltering([('flt', SampleAndFeatureFilter()), ('cls', GaussianNB())])
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size = 0.3, random_state = 42)
    kfold = cross_validation.KFold(len(y_train), 10)
    clf = GridSearchCV(pipe, cv = kfold, param_grid = {'flt__perc':[10,20,30,40,50,60,70,80]}, n_jobs = 1)
    clf.fit(X_train, y_train)


eic*_*erg 5

The scikit-learn transformer API is designed to change the features of the data (their nature, and possibly their number/dimensionality), but not the number of samples. As of the existing scikit-learn versions, any transformer that drops or adds samples does not conform to the API (though this might be added in the future, if deemed important).

So, given this, it seems you will have to work around the standard scikit-learn API.
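One common way to work around it is to filter X and y together before the data enters any scikit-learn estimator, e.g. inside an explicit cross-validation loop. A sketch under that assumption (filter_samples is an illustrative helper mirroring the percentile idea from the question, using the modern sklearn.model_selection API):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import KFold

def filter_samples(X, y, perc):
    # Illustrative filter: drop samples whose feature sum falls below
    # the given percentile, keeping X and y aligned.
    sums = X.sum(axis=1)
    mask = sums >= np.percentile(sums, perc)
    return X[mask], y[mask]

rng = np.random.RandomState(42)
X = rng.random_sample((100, 30))
y = rng.randint(0, 2, 100)

scores = []
for train_idx, test_idx in KFold(n_splits=5).split(X):
    # Filter only the training fold; the test fold is left untouched,
    # which is exactly what the id(X) trick above tries to guarantee.
    X_tr, y_tr = filter_samples(X[train_idx], y[train_idx], perc=20)
    clf = GaussianNB().fit(X_tr, y_tr)
    scores.append(clf.score(X[test_idx], y[test_idx]))
```

The drawback is that the filtering parameter is no longer tunable via GridSearchCV; tuning it requires wrapping the loop above in your own parameter search.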