Cross-validation on the MNIST dataset using PyTorch and sklearn

Kim*_*men 7 scikit-learn cross-validation mnist pytorch k-fold

I'm new to PyTorch and am trying to implement a feed-forward neural network to classify the MNIST dataset. I ran into some problems when trying to use cross-validation. My data has the following shapes: x_train: torch.Size([45000, 784]) and y_train: torch.Size([45000]).

I tried using KFold from sklearn.

from sklearn.model_selection import KFold

kfold = KFold(n_splits=10)

This is the first part of my training method, where I split the data into folds:

for train_index, test_index in kfold.split(x_train, y_train):
    x_train_fold = x_train[train_index]
    x_test_fold = x_test[test_index]
    y_train_fold = y_train[train_index]
    y_test_fold = y_test[test_index]
    print(x_train_fold.shape)
    for epoch in range(epochs):
        ...

The indices for the y_train_fold variable look correct, just [ 0 1 2 ... 4497 4498 4499], but they are not for x_train_fold, which gets [ 4500 4501 4502 ... 44997 44998 44999]. The same goes for the test folds.

For the first iteration I would expect the variable x_train_fold to be the first 4500 pictures, in other words to have shape torch.Size([4500, 784]), but it has shape torch.Size([40500, 784]).

Any hints on how to do this?

kHa*_*hit 8

I think you're confused!

Ignore the second dimension for a while. When you have 45000 points and use 10-fold cross-validation, what is the size of each fold? 45000 / 10, i.e. 4500.

That means each of your folds will contain 4500 data points, and one of those folds will be used for testing while the rest are used for training, i.e.

For testing: 1 fold => 4500 data points => size: 4500
For training: remaining folds => 45000 - 4500 data points => size: 45000 - 4500 = 40500
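The arithmetic above can be checked directly with sklearn's KFold (the array here is just a dummy stand-in for x_train, with one feature column to keep it small):

```python
import numpy as np
from sklearn.model_selection import KFold

x = np.zeros((45000, 1))  # dummy stand-in for x_train: 45000 samples
kfold = KFold(n_splits=10)

# Look at the first split only
train_index, test_index = next(iter(kfold.split(x)))
print(len(train_index))  # 40500 training points
print(len(test_index))   # 4500 test points
```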

Thus, in the first iteration, the first 4500 data points (corresponding to those indices) will be used for testing and the rest for training. (Check the image below.)

Given that your data is x_train: torch.Size([45000, 784]) and y_train: torch.Size([45000]), your code should look like this:

for train_index, test_index in kfold.split(x_train, y_train):  
    print(train_index, test_index)

    x_train_fold = x_train[train_index] 
    y_train_fold = y_train[train_index] 
    x_test_fold = x_train[test_index] 
    y_test_fold = y_train[test_index] 

    print(x_train_fold.shape, y_train_fold.shape) 
    print(x_test_fold.shape, y_test_fold.shape) 
    break 

[ 4500  4501  4502 ... 44997 44998 44999] [   0    1    2 ... 4497 4498 4499]
torch.Size([40500, 784]) torch.Size([40500])
torch.Size([4500, 784]) torch.Size([4500])

So, when you say

I would expect the variable x_train_fold to be the first 4500 pictures... shape torch.Size([4500, 784]).

you're wrong. That size corresponds to x_test_fold. In the first iteration, based on 10 folds, x_train_fold will have 40500 points, so its size should be torch.Size([40500, 784]).

[K-fold cross-validation diagram]


Kim*_*men 8

I think I've got it working now, but I feel the code is a bit messy with 3 nested loops. Is there any simpler way to do it, or is this approach OK?

Here is my training code with cross-validation:

def train(network, epochs, save_Model = False):
    total_acc = 0
    for fold, (train_index, test_index) in enumerate(kfold.split(x_train, y_train)):
        ### Dividing data into folds
        x_train_fold = x_train[train_index]
        x_test_fold = x_train[test_index]
        y_train_fold = y_train[train_index]
        y_test_fold = y_train[test_index]

        train = torch.utils.data.TensorDataset(x_train_fold, y_train_fold)
        test = torch.utils.data.TensorDataset(x_test_fold, y_test_fold)
        train_loader = torch.utils.data.DataLoader(train, batch_size = batch_size, shuffle = False)
        test_loader = torch.utils.data.DataLoader(test, batch_size = batch_size, shuffle = False)

        for epoch in range(epochs):
            print('\nEpoch {} / {} \nFold number {} / {}'.format(epoch + 1, epochs, fold + 1 , kfold.get_n_splits()))
            correct = 0
            network.train()
            for batch_index, (x_batch, y_batch) in enumerate(train_loader):
                optimizer.zero_grad()
                out = network(x_batch)
                loss = loss_f(out, y_batch)
                loss.backward()
                optimizer.step()
                pred = torch.max(out.data, dim=1)[1]
                correct += (pred == y_batch).sum()
                if (batch_index + 1) % 32 == 0:
                    print('[{}/{} ({:.0f}%)]\tLoss: {:.6f}\t Accuracy:{:.3f}%'.format(
                        (batch_index + 1)*len(x_batch), len(train_loader.dataset),
                        100.*batch_index / len(train_loader), loss.data, float(correct*100) / float(batch_size*(batch_index+1))))
        total_acc += float(correct*100) / float(batch_size*(batch_index+1))
    total_acc = (total_acc / kfold.get_n_splits())
    print('\n\nTotal accuracy cross validation: {:.3f}%'.format(total_acc))
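One way to flatten this structure (a sketch, not part of the original post; the tensors here are small dummy stand-ins) is to keep a single TensorDataset and let each DataLoader draw its fold through a SubsetRandomSampler, which avoids copying the fold tensors out by hand:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, SubsetRandomSampler
from sklearn.model_selection import KFold

x_train = torch.randn(100, 784)          # dummy stand-in for the real x_train
y_train = torch.randint(0, 10, (100,))   # dummy stand-in for the real y_train
dataset = TensorDataset(x_train, y_train)

kfold = KFold(n_splits=5)
for fold, (train_index, test_index) in enumerate(kfold.split(x_train)):
    # The sampler restricts each loader to its fold's indices,
    # so no per-fold tensor copies are needed.
    train_loader = DataLoader(dataset, batch_size=32,
                              sampler=SubsetRandomSampler(train_index))
    test_loader = DataLoader(dataset, batch_size=32,
                             sampler=SubsetRandomSampler(test_index))
    # ... run the epoch/batch loops with these loaders ...
```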

  • @kHarshit Shouldn't the model's weights be re-initialized after each fold? Also, since the optimizer uses the model's parameters, doesn't a new optimizer instance need to be created for each fold? (2 upvotes)
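Following up on that comment: a common pattern is to construct a fresh model and a fresh optimizer at the top of each fold, so that no learned weights leak between folds and the optimizer is bound to the new parameters. A minimal sketch (the `Net` class, data, and hyperparameters here are placeholders, not from the original post):

```python
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

class Net(nn.Module):
    """Placeholder feed-forward net for the sketch."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)
    def forward(self, x):
        return self.fc(x)

x_train = torch.randn(100, 784)          # dummy data
y_train = torch.randint(0, 10, (100,))

kfold = KFold(n_splits=5)
for fold, (train_index, test_index) in enumerate(kfold.split(x_train)):
    network = Net()                      # fresh weights for every fold
    optimizer = torch.optim.SGD(network.parameters(), lr=0.01)  # bound to the new parameters

    # One illustrative training step on this fold
    out = network(x_train[train_index])
    loss = nn.functional.cross_entropy(out, y_train[train_index])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```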