Tags: roberta

AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment

I am trying to use a RoBERTa transformer with a pretrained model, but I keep getting this error:

    ImportError:
    AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment.
    Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/
    and follow the ones that match your environment.

Here is my code:

# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary

import csv
import urllib.request
from transformers import AutoTokenizer, AutoModelForSequenceClassification

task = 'sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)

# download label mapping
labels = []
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
    csvreader …
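The error itself is not a bug in the code above: `transformers` raises it when no deep-learning backend (PyTorch here) is importable from the environment the script runs in. A minimal sketch for checking which backends the current interpreter can actually see (the package names checked are the usual ones, not anything specific to this question):

```python
import importlib.util

# AutoModelForSequenceClassification needs PyTorch (or another backend)
# importable from *this* interpreter. find_spec reports visibility without
# actually importing the package.
for pkg in ("torch", "tensorflow"):
    spec = importlib.util.find_spec(pkg)
    print(f"{pkg}: {'found' if spec is not None else 'NOT found'}")

# If torch is "NOT found", install it into the same environment, e.g.
#   pip install torch
# following the platform-specific command from
# https://pytorch.org/get-started/locally/
```

A common pitfall is installing `torch` with one interpreter (say, system `pip`) while running the script with another (a virtualenv or conda environment); running the check above inside the failing environment makes the mismatch visible.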

python pytorch roberta

26 votes · 2 answers · 20k views

Fine-tuning an LM vs. prompt engineering an LLM

Is it possible to fine-tune a much smaller language model like RoBERTa on, say, a customer-service dataset and get results as good as those obtained by prompting GPT-4 with parts of that dataset?

Can a fine-tuned RoBERTa model learn to follow instructions conversationally, at least for a small domain like this?

Are there any papers or articles that explore this question empirically?

language-model roberta roberta-language-model gpt-4 large-language-model

3 votes · 1 answer · 2130 views