What do I need to download to make nltk.tokenize.word_tokenize work?

pet*_*bel 16 python nltk

I'm going to use nltk.tokenize.word_tokenize on a cluster where my account is limited by a space quota. At home, I downloaded all the NLTK resources with nltk.download(), but as I found out, that takes ~2.5 GB.

That seems like overkill to me. Could you suggest what the minimal (or close to minimal) dependencies for nltk.tokenize.word_tokenize are? So far I've seen nltk.download('punkt'), but I'm not sure whether it is sufficient or how large it is. What exactly should I run to make it work?

Tul*_*nde 27

You are right. You need the Punkt Tokenizer Models. It is 13 MB, and nltk.download('punkt') should do the trick.

  • Or from the terminal: `python -m nltk.downloader punkt`. Also note that the 13 MB is the zipped file; unpacked it is ~36 MB. (10 upvotes)
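
Since the question is about a quota-limited cluster account, here is a minimal sketch (the download directory below is just a placeholder, adjust it to a path under your quota) of fetching only the Punkt model into a directory you control and telling NLTK where to find it:

import nltk

# Hypothetical path under the quota-limited home directory; adjust as needed.
custom_dir = '/home/youruser/nltk_data'

# Download only the Punkt model (~13 MB zipped, ~36 MB unpacked) into that directory.
nltk.download('punkt', download_dir=custom_dir)

# Make NLTK search the custom directory (alternatively, set the NLTK_DATA
# environment variable before starting Python).
nltk.data.path.append(custom_dir)

from nltk.tokenize import word_tokenize
print(word_tokenize('This is a sentence.'))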

alv*_*vas 7

In short:

nltk.download('punkt')

would suffice.


In long:

If you are only going to use NLTK for tokenization, there is no need to download all the models and corpora available in NLTK.

Actually, if you were just using word_tokenize(), you would not really need any of the resources from nltk.download(). If we look at the code, the default word_tokenize(), which is basically the TreebankWordTokenizer, shouldn't need any additional resources:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data/
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import word_tokenize
>>> from nltk.tokenize import TreebankWordTokenizer
>>> tokenizer = TreebankWordTokenizer()
>>> tokenizer.tokenize('This is a sentence.')
['This', 'is', 'a', 'sentence', '.']

But:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import sent_tokenize
>>> sent_tokenize('This is a sentence. This is another.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

>>> from nltk import word_tokenize
>>> word_tokenize('This is a sentence.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
    return [token for sent in sent_tokenize(text, language)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

But that doesn't seem to be the case. If we look at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L93, it seems that word_tokenize() implicitly calls sent_tokenize(), which requires the punkt model.
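
Given that word_tokenize() pulls in sent_tokenize() under the hood, a defensive sketch (not part of the NLTK API, just a common idiom) is to check for the Punkt model and download it lazily:

import nltk

# Fetch the Punkt model only if it is not already on nltk.data.path.
try:
    nltk.data.find('tokenizers/punkt')
except LookupError:
    nltk.download('punkt')

from nltk import word_tokenize
print(word_tokenize('This is a sentence.'))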

I'm not sure whether this is a bug or a feature, but it seems that the old idiom might become obsolete given the current code:

>>> from nltk import sent_tokenize, word_tokenize
>>> sentences = 'This is a foo bar sentence. This is another sentence.'
>>> tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(sentences)]
>>> tokenized_sents
[['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'], ['This', 'is', 'another', 'sentence', '.']]

And it can simply be:

>>> word_tokenize(sentences)
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.', 'This', 'is', 'another', 'sentence', '.']

But we see that word_tokenize() flattens what used to be a list of lists of strings into a single flat list of strings.


Alternatively, you could try the new tokenizer that was added to NLTK, toktok.py, based on https://github.com/jonsafari/tok-tok, which requires no pre-trained models.
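
For example (assuming an NLTK version recent enough to ship toktok.py; the exact import path may differ across versions):

from nltk.tokenize.toktok import ToktokTokenizer

# ToktokTokenizer is purely rule-based, so no nltk.download() call is needed.
toktok = ToktokTokenizer()
print(toktok.tokenize('This is a foo bar sentence.'))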