Hugging Face AutoTokenizer can't load from a local path

San*_*ndy 7 huggingface-transformers

I am trying to run the language-model fine-tuning script (run_language_modeling.py) from the Hugging Face examples with my own tokenizer (I just added a few tokens, see the comments in the code below). I'm having trouble loading the tokenizer. I think the problem is with AutoTokenizer.from_pretrained('local/path/to/directory').

Code:

from transformers import *

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# special_tokens = ['<HASHTAG>', '<URL>', '<AT_USER>', '<EMOTICON-HAPPY>', '<EMOTICON-SAD>']
# tokenizer.add_tokens(special_tokens)
tokenizer.save_pretrained('../twitter/twittertokenizer/')
tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')

Error message:

OSError                                   Traceback (most recent call last)
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
    248                 resume_download=resume_download,
--> 249                 local_files_only=local_files_only,
    250             )

/z/huggingface_venv/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
    265         # File, but it doesn't exist.
--> 266         raise EnvironmentError("file {} not found".format(url_or_filename))
    267     else:

OSError: file ../twitter/twittertokenizer/config.json not found

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
<ipython-input-32-662067cb1297> in <module>
----> 1 tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')

/z/huggingface_venv/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    190         config = kwargs.pop("config", None)
    191         if not isinstance(config, PretrainedConfig):
--> 192             config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
    193 
    194         if "bert-base-japanese" in pretrained_model_name_or_path:

/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    192         """
    193         config_dict, _ = PretrainedConfig.get_config_dict(
--> 194             pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs
    195         )
    196 

/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
    270                     )
    271                 )
--> 272             raise EnvironmentError(msg)
    273 
    274         except json.JSONDecodeError:

OSError: Can't load '../twitter/twittertokenizer/'. Make sure that:

- '../twitter/twittertokenizer/' is a correct model identifier listed on 'https://huggingface.co/models'

- or '../twitter/twittertokenizer/' is the correct path to a directory containing a 'config.json' file

If I change AutoTokenizer to BertTokenizer, the code above works. Also, I can run the script without problems when I load the tokenizer by its shortcut name rather than by path. But the script run_language_modeling.py uses AutoTokenizer, so I'm looking for a way to make that work.

Any ideas? Thanks!

den*_*ger 2

The problem is that you are not providing anything that indicates which tokenizer class should be instantiated.

For reference, see the rules defined in the Hugging Face documentation. Specifically, since you are using BERT:

contains bert: BertTokenizer (BERT model)

Otherwise, as you mentioned, you have to specify the exact tokenizer class yourself.
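The name-based dispatch rule described in the docs can be sketched as a simple substring match. This is a simplified illustration under my own assumptions, not the actual transformers source; note that more specific patterns must be checked first, since for example "roberta" itself contains "bert":

```python
# Simplified sketch (NOT the real transformers implementation) of the
# documented dispatch rules: the first matching substring in the model
# name or path decides which tokenizer class is instantiated.
PATTERNS = [
    ("distilbert", "DistilBertTokenizer"),
    ("albert", "AlbertTokenizer"),
    ("roberta", "RobertaTokenizer"),  # must precede "bert": "roberta" contains it
    ("bert", "BertTokenizer"),
    ("gpt2", "GPT2Tokenizer"),
]

def guess_tokenizer_class(name_or_path):
    """Return the tokenizer class name implied by a model name or path."""
    lowered = name_or_path.lower()
    for pattern, cls in PATTERNS:
        if pattern in lowered:
            return cls
    return None  # no hint in the name -> AutoTokenizer cannot decide

print(guess_tokenizer_class("bert-base-uncased"))             # BertTokenizer
print(guess_tokenizer_class("../twitter/twittertokenizer/"))  # None
```

This is why the hub name 'bert-base-uncased' resolves fine while the local path '../twitter/twittertokenizer/' does not: the path contains no recognizable model-family hint.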

  • Do you mean that AutoTokenizer.from_pretrained cannot determine which tokenizer class to instantiate? Why does AutoTokenizer.from_pretrained('bert-base-uncased') work but AutoTokenizer.from_pretrained('local/path') doesn't? Thank you! (3 upvotes)
  • As I said, the choice made by AutoTokenizer is based on the *name* you provide. Since a pretrained model name indicates which model to pick (i.e., 'bert-base-uncased' refers to a BERT model, etc.), you have to store your local model in a folder whose name *similarly* indicates the model used, e.g. '/path/to/bert-derivative'. (2 upvotes)
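Note also what the traceback actually complains about: config.json is missing from the directory, because tokenizer.save_pretrained() writes tokenizer files but not the model's config.json, which AutoConfig reads to infer the architecture. With transformers installed, saving the base model's config into the same directory (AutoConfig.from_pretrained('bert-base-uncased').save_pretrained(dir)) should let AutoTokenizer resolve the class. The stdlib-only sketch below just illustrates the missing file; the "model_type" key is the field AutoConfig uses for the lookup:

```python
import json
import pathlib
import tempfile

# Stand-in for the saved tokenizer directory from the question.
tok_dir = pathlib.Path(tempfile.mkdtemp()) / "twittertokenizer"
tok_dir.mkdir(parents=True)

# Minimal stand-in for the config.json that save_pretrained omits;
# a real config saved via AutoConfig would contain many more fields.
(tok_dir / "config.json").write_text(json.dumps({"model_type": "bert"}))

loaded = json.loads((tok_dir / "config.json").read_text())
print(loaded["model_type"])  # bert
```

Either workaround (renaming the folder so the path hints at the model family, or placing a config.json next to the tokenizer files) addresses the same root cause: AutoTokenizer has no other signal about which class to build.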