eri*_*rik 86 python arrays optimization numpy
What is a good way to randomly split a NumPy array into training and test/validation datasets? Something similar to the cvpartition
or crossvalind
functions in Matlab.
pbe*_*kes 103
If you want to split the data set once into two parts, you can use numpy.random.shuffle
, or numpy.random.permutation
if you need to keep track of the indices:
import numpy
# x is your dataset
x = numpy.random.rand(100, 5)
numpy.random.shuffle(x)
training, test = x[:80,:], x[80:,:]
Or:
import numpy
# x is your dataset
x = numpy.random.rand(100, 5)
indices = numpy.random.permutation(x.shape[0])
training_idx, test_idx = indices[:80], indices[80:]
training, test = x[training_idx,:], x[test_idx,:]
There are many ways to repeatedly partition the same data set for cross-validation. One strategy is to resample from the data set, with repetition (note that numpy.random.randint draws with replacement, so the training and test sets may overlap):
import numpy
# x is your dataset
x = numpy.random.rand(100, 5)
training_idx = numpy.random.randint(x.shape[0], size=80)
test_idx = numpy.random.randint(x.shape[0], size=20)
training, test = x[training_idx,:], x[test_idx,:]
Finally, sklearn contains several cross-validation methods (k-fold, leave-n-out, ...). It also includes more advanced "stratified sampling" methods that create a partition of the data balanced with respect to some features, for example to make sure that the same proportion of positive and negative examples is present in the training and test sets.
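As a minimal sketch of those sklearn helpers, the k-fold strategy mentioned above can be used like this (variable names are illustrative; KFold lives in sklearn.model_selection in current versions):

```python
import numpy as np
from sklearn.model_selection import KFold

# x is your dataset
x = np.random.rand(100, 5)

# 5 folds: each iteration uses a different 20-row slice as the test set
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(x):
    training, test = x[train_idx, :], x[test_idx, :]
    # with 100 rows and 5 splits: 80 training rows, 20 test rows per fold
```

Unlike the resampling approach above, k-fold guarantees that every row appears in the test set exactly once across the folds.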
小智 45
There is another option that just entails using scikit-learn. As scikit's wiki describes, you can use the following instructions:
import numpy as np
from sklearn.model_selection import train_test_split
data, labels = np.arange(10).reshape((5, 2)), range(5)
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size=0.20, random_state=42)
This way you can keep the labels in sync with the data you are trying to split into training and test sets.
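train_test_split also accepts a stratify parameter, which preserves the class proportions of the labels in both halves. A small sketch (the data and class counts here are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(20).reshape((10, 2))
labels = np.array([0] * 8 + [1] * 2)  # imbalanced: 8 of class 0, 2 of class 1

# stratify=labels keeps the 4:1 class ratio in both halves
d_train, d_test, l_train, l_test = train_test_split(
    data, labels, test_size=0.5, random_state=42, stratify=labels)
```

Without stratify, a small random split like this can easily end up with all samples of the minority class on one side.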
off*_*tus 35
Just a note. In case you want train, test, AND validation sets, you can do this:
from sklearn.cross_validation import train_test_split
X = get_my_X()
y = get_my_y()
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.5)
These parameters will give 70% to training, and 15% each to the test and validation sets. Hope this helps.
小智 11
Since the sklearn.cross_validation
module was deprecated, you can use:
import numpy as np
from sklearn.model_selection import train_test_split
X, y = np.arange(10).reshape((5, 2)), range(5)
X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=0.2, random_state=42)
You may also consider stratified division into training and testing sets. Stratified division also generates the training and testing sets at random, but in such a way that the original class proportions are preserved. This makes the training and testing sets better reflect the properties of the original dataset.
import numpy as np

def get_train_test_inds(y, train_proportion=0.7):
    '''Generates indices, making random stratified split into training set and testing sets
    with proportions train_proportion and (1-train_proportion) of initial sample.
    y is any iterable indicating classes of each observation in the sample.
    Initial proportions of classes inside training and
    testing sets are preserved (stratified sampling).
    '''
    y = np.array(y)
    train_inds = np.zeros(len(y), dtype=bool)
    test_inds = np.zeros(len(y), dtype=bool)
    values = np.unique(y)
    for value in values:
        value_inds = np.nonzero(y == value)[0]
        np.random.shuffle(value_inds)
        n = int(train_proportion * len(value_inds))
        train_inds[value_inds[:n]] = True
        test_inds[value_inds[n:]] = True
    return train_inds, test_inds
y = np.array([1, 1, 2, 2, 3, 3])
train_inds, test_inds = get_train_test_inds(y, train_proportion=0.5)
print(y[train_inds])
print(y[test_inds])
This code outputs:
[1 2 3]
[1 2 3]
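For reference, sklearn ships an equivalent of this hand-rolled stratified split: StratifiedShuffleSplit from sklearn.model_selection. A minimal sketch on the same toy labels (the variable names are illustrative):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([1, 1, 2, 2, 3, 3])
x = np.arange(12).reshape((6, 2))  # dummy features to split alongside y

# one random 50/50 split that preserves class proportions
sss = StratifiedShuffleSplit(n_splits=1, train_size=0.5, random_state=0)
train_idx, test_idx = next(sss.split(x, y))
# with two samples per class, each half gets exactly one of each class
```

This mirrors the output of get_train_test_inds above, while also supporting repeated splits via n_splits.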