Stratified splitting of a Pandas DataFrame into training, validation, and test sets

use*_*212 6 python machine-learning dataframe pandas deep-learning

The following heavily simplified DataFrame represents a much larger DataFrame containing medical diagnoses:

medicalData = pd.DataFrame({'diagnosis':['positive','positive','negative','negative','positive','negative','negative','negative','negative','negative']})
medicalData

    diagnosis
0   positive
1   positive
2   negative
3   negative
4   positive
5   negative
6   negative
7   negative
8   negative
9   negative

For machine learning, I need to randomly split this DataFrame into three sub-frames like this:

trainingDF, validationDF, testDF = SplitData(medicalData,fractions = [0.6,0.2,0.2])

Here the fractions array specifies the share of the full data that goes into each sub-frame; the data in the sub-frames must be mutually exclusive, and the fractions must sum to 1. In addition, the proportion of positive diagnoses in each subset must be roughly the same.
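To make the stratification requirement concrete: in the toy frame above, 30% of the rows are positive, and each of the three subsets should keep roughly that proportion. A quick way to inspect the class balance (not part of the original question):

medicalData['diagnosis'].value_counts(normalize=True)
# negative    0.7
# positive    0.3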

Answers to this question suggest using the pandas sample method or scikit-learn's train_test_split function, but none of those solutions seems to generalize well to n splits, and none provides a stratified split.

sta*_*010 10

Here is a Python function that splits a Pandas DataFrame into train, validation, and test DataFrames using stratified sampling. It performs the split by calling scikit-learn's train_test_split() twice.

import pandas as pd
from sklearn.model_selection import train_test_split

def split_stratified_into_train_val_test(df_input, stratify_colname='y',
                                         frac_train=0.6, frac_val=0.15, frac_test=0.25,
                                         random_state=None):
    '''
    Splits a Pandas dataframe into three subsets (train, val, and test)
    following fractional ratios provided by the user, where each subset is
    stratified by the values in a specific column (that is, each subset has
    the same relative frequency of the values in the column). It performs this
    splitting by running train_test_split() twice.

    Parameters
    ----------
    df_input : Pandas dataframe
        Input dataframe to be split.
    stratify_colname : str
        The name of the column that will be used for stratification. Usually
        this column would be for the label.
    frac_train : float
    frac_val   : float
    frac_test  : float
        The ratios with which the dataframe will be split into train, val, and
        test data. The values should be expressed as float fractions and should
        sum to 1.0.
    random_state : int, None, or RandomStateInstance
        Value to be passed to train_test_split().

    Returns
    -------
    df_train, df_val, df_test :
        Dataframes containing the three splits.
    '''

    if frac_train + frac_val + frac_test != 1.0:
        raise ValueError('fractions %f, %f, %f do not add up to 1.0' % \
                         (frac_train, frac_val, frac_test))

    if stratify_colname not in df_input.columns:
        raise ValueError('%s is not a column in the dataframe' % (stratify_colname))

    X = df_input # Contains all columns.
    y = df_input[[stratify_colname]] # Dataframe of just the column on which to stratify.

    # Split original dataframe into train and temp dataframes.
    df_train, df_temp, y_train, y_temp = train_test_split(X,
                                                          y,
                                                          stratify=y,
                                                          test_size=(1.0 - frac_train),
                                                          random_state=random_state)

    # Split the temp dataframe into val and test dataframes.
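    # The test fraction is rescaled relative to the remaining (val + test) data:
    # e.g. with frac_val=0.2 and frac_test=0.2, relative_frac_test is 0.5.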
    relative_frac_test = frac_test / (frac_val + frac_test)
    df_val, df_test, y_val, y_test = train_test_split(df_temp,
                                                      y_temp,
                                                      stratify=y_temp,
                                                      test_size=relative_frac_test,
                                                      random_state=random_state)

    assert len(df_input) == len(df_train) + len(df_val) + len(df_test)

    return df_train, df_val, df_test

Below is a complete working example.

Consider a dataset that has a label you want to stratify on. This label has its own distribution in the original dataset, say 75% foo, 15% bar, and 10% baz. Now let's split the dataset into train, validation, and test subsets with a 60/20/20 ratio, where each split retains the same distribution of labels. See the illustration below:

[figure: the 75/15/10 label distribution of the original dataset preserved in each of the 60/20/20 train/validation/test splits]

Here is the example dataset:

df = pd.DataFrame( { 'A': list(range(0, 100)),
                     'B': list(range(100, 0, -1)),
                     'label': ['foo'] * 75 + ['bar'] * 15 + ['baz'] * 10 } )

df.head()
#    A    B label
# 0  0  100   foo
# 1  1   99   foo
# 2  2   98   foo
# 3  3   97   foo
# 4  4   96   foo

df.shape
# (100, 3)

df.label.value_counts()
# foo    75
# bar    15
# baz    10
# Name: label, dtype: int64

Now let's call the split_stratified_into_train_val_test() function from above to get train, validation, and test DataFrames following a 60/20/20 ratio.

df_train, df_val, df_test = \
    split_stratified_into_train_val_test(df, stratify_colname='label', frac_train=0.60, frac_val=0.20, frac_test=0.20)

Together, the three DataFrames df_train, df_val, and df_test contain all of the original rows, and their sizes follow the ratios above.

df_train.shape
#(60, 3)

df_val.shape
#(20, 3)

df_test.shape
#(20, 3)

Moreover, each of the three splits has the same distribution of labels, namely 75% foo, 15% bar, and 10% baz:

df_train.label.value_counts()
# foo    45
# bar     9
# baz     6
# Name: label, dtype: int64

df_val.label.value_counts()
# foo    15
# bar     3
# baz     2
# Name: label, dtype: int64

df_test.label.value_counts()
# foo    15
# bar     3
# baz     2
# Name: label, dtype: int64
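The two-call idea also generalizes to an arbitrary list of fractions by calling train_test_split() repeatedly, peeling off one stratified piece at a time. A minimal sketch (not part of the original answer; split_stratified is a hypothetical name):

from sklearn.model_selection import train_test_split

def split_stratified(df, stratify_colname, fractions, random_state=None):
    # Split df into len(fractions) mutually exclusive pieces, each stratified
    # on stratify_colname. The fractions are expected to sum to 1.0.
    pieces = []
    remaining = df
    remaining_frac = 1.0
    for frac in fractions[:-1]:
        # Take frac of the original data out of what is still left.
        piece, remaining = train_test_split(
            remaining,
            train_size=frac / remaining_frac,
            stratify=remaining[stratify_colname],
            random_state=random_state)
        pieces.append(piece)
        remaining_frac -= frac
    pieces.append(remaining)  # the last piece is whatever is left over
    return pieces

# e.g. df_train, df_val, df_test = split_stratified(df, 'label', [0.6, 0.2, 0.2])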


cs9*_*s95 8

np.array_split

If you want to generalize to n splits, np.array_split is your friend (it works well with DataFrames).

import numpy as np

fractions = np.array([0.6, 0.2, 0.2])
# shuffle your input
df = df.sample(frac=1) 
# split into 3 parts
train, val, test = np.array_split(
    df, (fractions[:-1].cumsum() * len(df)).astype(int))
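A quick sanity check of the resulting sizes, assuming df is the 100-row example frame from the previous answer (note that this approach shuffles the rows but does not stratify them, and the exact counts can be off by one because the split indices are computed from floats):

print(len(train), len(val), len(test))
# 60 20 20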

train_test_split

A more roundabout solution that uses train_test_split for a stratified split: 40% of the data is split off first and then halved into validation and test sets, giving 60/20/20 overall.

y = df.pop('diagnosis').to_frame()
X = df

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=0.4)

X_test, X_val, y_test, y_val = train_test_split(
        X_test, y_test, stratify=y_test, test_size=0.5)

where X is a DataFrame of your features and y is a single-column DataFrame of your labels.
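If you need full DataFrames (features plus label) again rather than separate X/y pieces, one option, assuming the splits keep the original index (which train_test_split does), is to join each pair back together:

df_train = X_train.join(y_train)
df_val   = X_val.join(y_val)
df_test  = X_test.join(y_test)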