'DataLoader' object is not subscriptable

tha*_*ngi 1 python tensorflow pytorch

I am running a Python program with PyTorch. I use my own dataset rather than torch.data.dataset; the data is loaded from pickle files produced by feature extraction. However, I get the following error:

Traceback (most recent call last):
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 326, in <module>
    fire.Fire(demo)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
    target=component.__name__)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 304, in demo
    train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs,batch_size=batch_size,seed=seed)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 172, in train
    n_epochs=n_epochs,
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 37, in train_epoch
    loader=np.asarray(list(loader))
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataset.py", line 257, in __getitem__
    return self.dataset[self.indices[idx]]
TypeError: 'DataLoader' object is not subscriptable

The code is:

train_set1 = Owndata()

train1, test1 = train_set1.get_splits()
# prepare data loaders
train_dl = torch.utils.data.DataLoader(train1, batch_size=32, shuffle=True)
test_dl = torch.utils.data.DataLoader(test1, batch_size=1024, shuffle=False)
test_set1 = Owndata()
'''print('test_set# ',test_set)'''  
if valid_size:
    valid_set = Owndata()
    indices = torch.randperm(len(train_set1))
    train_indices = indices[:len(indices) - valid_size]
    valid_indices = indices[len(indices) - valid_size:]
    train_set1 = torch.utils.data.Subset(train_dl, train_indices)
    valid_set = torch.utils.data.Subset(valid_set, valid_indices)
else:
    valid_set = None
model = DenseNet(
    growth_rate=growth_rate,
    block_config=block_config,
    num_classes=10,
    small_inputs=True,
    efficient=efficient,
)
train(model, train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs, batch_size=batch_size, seed=seed)

Any help is appreciated! Thanks a lot in advance!

Szy*_*zke 6

The line that raises the error is not in the code you posted; it is inside your train function, which you have not shown.

You are confusing two things:

  • torch.utils.data.Dataset objects are indexable (dataset[5] works fine, for example). A Dataset is a simple object that defines how to fetch a single data sample.
  • torch.utils.data.DataLoader is not indexable; it can only be iterated over, and it usually yields batches of samples drawn from the Dataset above. It can load data in parallel via num_workers. This is what you are trying to index, while you should be indexing the dataset instead; see the sketch after this list.
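
A minimal sketch of the difference, using a toy TensorDataset (none of the names below come from the question's code):

import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy dataset: 100 samples with 8 features each, plus integer labels.
features = torch.randn(100, 8)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(features, labels)

# A Dataset is indexable: this returns the (features, label) pair of sample 5.
x5, y5 = dataset[5]

# A DataLoader is only iterable: it yields batches built from the Dataset.
loader = DataLoader(dataset, batch_size=32, shuffle=True)
# loader[0]  # would raise TypeError: 'DataLoader' object is not subscriptable
for batch_x, batch_y in loader:
    print(batch_x.shape, batch_y.shape)  # e.g. torch.Size([32, 8]) torch.Size([32])
    break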

See the PyTorch documentation on data to get a better idea of how these classes work.
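
Applied to the snippet in the question, the likely fix is to build the Subset from the Dataset (train1), not from the DataLoader (train_dl), and only create DataLoaders afterwards. A rough sketch, assuming Owndata, get_splits and valid_size behave as in the question (they are not shown, so this is an interpretation, not a verified fix):

import torch

train_set1 = Owndata()                   # custom Dataset from the question (assumed)
train1, test1 = train_set1.get_splits()  # assumed to return two Dataset objects

if valid_size:
    # Plain Python ints index any custom Dataset safely.
    indices = torch.randperm(len(train1)).tolist()
    train_indices = indices[:len(indices) - valid_size]
    valid_indices = indices[len(indices) - valid_size:]
    # Subset must wrap a Dataset, not a DataLoader.
    train_subset = torch.utils.data.Subset(train1, train_indices)
    valid_subset = torch.utils.data.Subset(train1, valid_indices)
else:
    train_subset, valid_subset = train1, None

# Build DataLoaders only after the Dataset-level splitting is done.
train_dl = torch.utils.data.DataLoader(train_subset, batch_size=32, shuffle=True)
test_dl = torch.utils.data.DataLoader(test1, batch_size=1024, shuffle=False)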