I am a bit unsure how to proceed with the above topic.
The baseline is a model created with Huggingface's libraries as an AutoModelForCausalLM model, using PEFT with the LoRA method, and the adapter weights were subsequently merged into the base model.
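For context, the merge step for the baseline was done roughly along these lines (a minimal sketch; the model ID and paths below are placeholders rather than my actual values):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model_id = "my-org/base-model"   # placeholder base model ID
lora_adapter_dir = "/path/to/lora"    # placeholder LoRA adapter checkpoint
merged_model_dir = "/path/to/merged"  # placeholder output directory

# Attach the trained LoRA adapter to the base model and fold its
# weights into the base weights, yielding a plain transformers model.
base = AutoModelForCausalLM.from_pretrained(base_model_id)
peft_model = PeftModel.from_pretrained(base, lora_adapter_dir)
merged = peft_model.merge_and_unload()
merged.save_pretrained(merged_model_dir)
```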
I now want to fine-tune the model further without losing its original properties - in this case via instruction fine-tuning / prefix tuning.

My approach is as follows:
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    Trainer,
    TrainingArguments,
    default_data_collator,
)
from peft import PeftConfig, PeftModel

# Load the merged baseline model in 8-bit for the second training stage
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    use_cache=False if gradient_checkpointing else True,
    device_map="auto",
    load_in_8bit=True,
)

# Wrap the model with a new PEFT config for this fine-tuning run
model = create_peft_config(model)

output_dir = "/tmp"
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_train_batch_size,
    bf16=bf16,
    learning_rate=lr,
    num_train_epochs=epochs,
    gradient_checkpointing=gradient_checkpointing,
    gradient_accumulation_steps=2,
    logging_dir=f"{output_dir}/logs",
    logging_strategy="steps",
    logging_steps=10,
    optim="adafactor",
    save_strategy="epoch",
    save_total_limit=3,
    evaluation_strategy="epoch",
    load_best_model_at_end=False,
    no_cuda=False,
    auto_find_batch_size=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_train,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
    eval_dataset=dataset_eval,
    data_collator=default_data_collator,
)

trainer.train()

# Save only the adapter weights from this run
trainer.model.save_pretrained(output_dir)

del model
del trainer

# Reload the base model in fp16 and attach the freshly trained adapter
peft_config = PeftConfig.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    load_in_8bit=False,
    return_dict=True,
    device_map="auto",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(
    model,
    output_dir,
    torch_dtype=torch.float16,
    …
```
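`create_peft_config` is my own helper; for the prefix-tuning variant it would look roughly like the sketch below (the number of virtual tokens and other settings are illustrative, and `prepare_model_for_kbit_training` assumes a reasonably recent peft version):

```python
from peft import (
    PrefixTuningConfig,
    TaskType,
    get_peft_model,
    prepare_model_for_kbit_training,
)

def create_peft_config(model):
    # Prefix tuning trains only the virtual prefix tokens; the merged
    # base weights (including the earlier LoRA deltas) stay frozen.
    peft_config = PrefixTuningConfig(
        task_type=TaskType.CAUSAL_LM,
        num_virtual_tokens=20,  # illustrative value
    )
    model = prepare_model_for_kbit_training(model)  # prep the 8-bit loaded model for training
    model = get_peft_model(model, peft_config)
    model.print_trainable_parameters()
    return model
```

With this setup only the prefix parameters receive gradients, so the weights merged in from the first LoRA stage are not modified during the second training run.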