I am trying to train the BertPunc model from this git link on the train2012 data used there: https://github.com/nkrnrnk/BertPunc. When I run it on a server with 4 GPUs enabled, I get the following error:
StopIteration: Caught StopIteration in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/stenoaimladmin/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/stenoaimladmin/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stenoaimladmin/notebooks/model_BertPunc.py", line 16, in forward
x = self.bert(x)
File "/home/stenoaimladmin/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stenoaimladmin/anaconda3/lib/python3.8/site-packages/pytorch_pretrained_bert/modeling.py", line 861, in forward
sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask,
File "/home/stenoaimladmin/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stenoaimladmin/anaconda3/lib/python3.8/site-packages/pytorch_pretrained_bert/modeling.py", line 727, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration
From the link https://github.com/huggingface/transformers/issues/8145, this seems to happen when data is moved back and forth between multiple GPUs.
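To illustrate what I understand from that issue (this is my own minimal sketch, not code from the repo): on torch >= 1.5 the replicas that nn.DataParallel creates on each device no longer yield anything from parameters(), so a module that calls next(self.parameters()) inside forward() hits an exhausted generator:

import torch
import torch.nn as nn

class Probe(nn.Module):
    # Mimics what pytorch_pretrained_bert does internally: it asks its own
    # parameters for their dtype inside forward().
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # Inside a DataParallel replica on torch >= 1.5, parameters() yields
        # nothing, so next() raises StopIteration, which parallel_apply then
        # reports as "Caught StopIteration in replica N on device N".
        dtype = next(self.parameters()).dtype
        return self.linear(x.to(dtype))

model = nn.DataParallel(Probe()).cuda()
out = model(torch.randn(8, 4).cuda())  # fails on a multi-GPU machine with torch >= 1.5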
According to this git link: https://github.com/interpretml/interpret-text/issues/117, we would need to downgrade PyTorch from 1.7, which I am currently using, to 1.4. Downgrading is not an option for me, because I have other scripts that rely on torch 1.7. What should I do to overcome this error?
I cannot post the entire code here because it is too long, but this is the snippet that gives me the error:
bert_punc, optimizer, best_val_loss = train(bert_punc, optimizer, criterion, epochs_top,
    data_loader_train, data_loader_valid, save_path, punctuation_enc, iterations_top, best_val_loss=1e9)
Here is my DataParallel code:
bert_punc = nn.DataParallel(BertPunc(segment_size, output_size, dropout)).cuda()
I tried changing the DataParallel line to move training to just 1 GPU (out of the 4 currently available), but that gave me an out-of-memory problem, so I had to revert the code to the default.
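For reference, the single-GPU attempt was just a variant of the nn.DataParallel line shown above, roughly like this (device_ids=[0] pins everything to the first GPU; dropping nn.DataParallel entirely also works):

bert_punc = nn.DataParallel(BertPunc(segment_size, output_size, dropout), device_ids=[0]).cuda()
# or, without DataParallel at all:
# bert_punc = BertPunc(segment_size, output_size, dropout).cuda()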
Here is the link to all the scripts I am using: https://github.com/nkrnrnk/BertPunc. Please advise.
Change
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
to
extended_attention_mask = extended_attention_mask.to(dtype=torch.float32) # fp16 compatibility
For more details, see https://github.com/vid-koci/bert-commonsense/issues/6
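For orientation, this is roughly how the edited region of pytorch_pretrained_bert/modeling.py (around line 727, in BertModel.forward) ends up looking. The surrounding lines are paraphrased from memory and may differ slightly between versions, and hard-coding torch.float32 assumes you are not running the model in fp16:

extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
# was: .to(dtype=next(self.parameters()).dtype), which breaks inside DataParallel replicas
extended_attention_mask = extended_attention_mask.to(dtype=torch.float32)
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0

The original line only exists to pick a dtype matching the model weights, so pinning it to torch.float32 is harmless for the default full-precision setup; if you do train in fp16, you would hard-code torch.float16 instead.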