python pre-trained-model pytorch huggingface-transformers huggingface-trainer
I'm trying to fine-tune without an evaluation dataset. To do so, I'm using the following code:
```python
from sklearn.metrics import accuracy_score, f1_score
from transformers import EvalPrediction, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir=resume_from_checkpoint,
    evaluation_strategy="epoch",
    per_device_train_batch_size=1,
)

def compute_metrics(pred: EvalPrediction):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    f1 = f1_score(labels, preds, average="weighted")
    acc = accuracy_score(labels, preds)  # accuracy_score takes no `average` argument
    return {"accuracy": acc, "f1": f1}

trainer = Trainer(
    model=self.nli_model,
    args=training_args,
    train_dataset=tokenized_datasets,
    compute_metrics=compute_metrics,
)
```
However, I get:
```
ValueError: Trainer: evaluation requires an eval_dataset
```
I thought the Trainer did not run evaluation by default; at least, that is the impression I got from the documentation…
I set `evaluation_strategy="no"` and `do_eval=False` in `TrainingArguments`, and after that I could call `trainer.train()` without passing any evaluation dataset. Note that I tested this on `transformers` version 4.31.0.
Viewed: 3638 times