Cannot import name 'flash_attn_func' from 'flash_attn'

Ham*_*d K 3 pytorch huggingface-transformers llama

Trying to load a llama2 model:

from transformers import AutoModelForCausalLM

# Load the model with 4-bit quantization applied via bitsandbytes
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device_map
)

with this bnb_config:

BitsAndBytesConfig {
  "bnb_4bit_compute_dtype": "bfloat16",
  "bnb_4bit_quant_type": "nf4",
  "bnb_4bit_use_double_quant": true,
  "llm_int8_enable_fp32_cpu_offload": false,
  "llm_int8_has_fp16_weight": false,
  "llm_int8_skip_modules": null,
  "llm_int8_threshold": 6.0,
  "load_in_4bit": true,
  "load_in_8bit": false,
  "quant_method": "bitsandbytes"
}
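For reference, a config printing those values would typically be built like this (a minimal sketch; the exact construction in the original script isn't shown):

import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)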

I get this error:

RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
cannot import name 'flash_attn_func' from 'flash_attn' (/opt/conda/lib/python3.10/site-packages/flash_attn/__init__.py)

Any help would be appreciated.

小智 6

I ran into the same error while fine-tuning a llama2 model; the solution was to revert to an earlier version of transformers.

pip install transformers==4.33.1 --upgrade

This should work.
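After reinstalling, you can confirm which version actually gets imported before loading the model again (a quick sanity check, assuming the same Python environment):

import transformers
print(transformers.__version__)  # expect 4.33.1 after the downgrade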