Tri*_*ran · tags: python, nlp, deep-learning, fast-ai
I trained a text classification (NLP) model with fastai on Google Colab (GPU). I can load the exported model with load_learner without any error, but when I switch the device to CPU and call predict, I get "RuntimeError: _th_index_select not supported on CPUType for Half". Is there any way to run predictions on the CPU?
from fastai import *
from fastai.text import *
from sklearn.metrics import f1_score

# Run inference on the CPU
defaults.device = torch.device('cpu')

# Custom F1 metric the learner was trained with; it must be defined
# before load_learner so the exported object can be unpickled
@np_func
def f1(inp, targ): return f1_score(targ, np.argmax(inp, axis=-1))

path = Path('/content/drive/My Drive/Test_fast_ai')
learn = load_learner(path)
learn.predict("so sad")
RuntimeError Traceback (most recent call last)
<ipython-input-13-3775eb2bfe91> in <module>()
----> 1 learn.predict("so sad")
11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1504 # remove once script supports set_grad_enabled
1505 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1506 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1507
1508
RuntimeError: _th_index_select not supported on CPUType for Half
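For what it's worth, the failure seems to come from the embedding lookup itself rather than from fastai: on the PyTorch builds Colab shipped at the time, index_select/embedding was not implemented for half-precision tensors on CPU. A minimal sketch that reproduces it outside fastai (note: newer PyTorch versions do support half-precision embeddings on CPU, so this may run cleanly there):

import torch

# Half-precision weights, as saved by a learner trained with to_fp16()
emb = torch.nn.Embedding(10, 4).half()
idx = torch.tensor([1, 2])

# On CPU with older PyTorch this raises the same
# "_th_index_select not supported on CPUType for Half" error
out = emb(idx)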
Answer (小智, score 1):

I had the same problem. Are you training the model with to_fp16()? I solved it by removing that call from the learner. For example, when I trained with the command line below, I got the same RuntimeError when using the model for prediction in a CPU environment.
learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy]).to_fp16()
To fix it, I simply removed the trailing .to_fp16() and everything worked fine.
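If you still want the speed and memory benefits of mixed precision during training, an alternative that avoids retraining in full precision is to convert the learner back to fp32 before exporting. This is a sketch, not the answerer's exact code: it assumes fastai v1's Learner.to_fp32() and export(), and the data_clas / fit_one_cycle call are placeholders for your own pipeline.

# Train in mixed precision on the GPU as before
learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5,
                                  metrics=[accuracy]).to_fp16()
learn_c.fit_one_cycle(1, 1e-2)   # placeholder training call

# Cast the weights back to full precision before exporting,
# so the exported model can run on a CPU-only machine
learn_c.to_fp32()
learn_c.export()

If re-exporting is not an option, casting the already-loaded model back to float32 on the CPU side should also get past the Half error (untested sketch, using the standard PyTorch Module.float()):

learn = load_learner(path)
learn.model = learn.model.float()   # convert half-precision weights to float32
learn.predict("so sad")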