Failed to load english.pickle with nltk.data.load

Mar*_*tin 134 python nltk jenkins

When trying to load the punkt tokenizer...

import nltk.data
tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')

...a LookupError is raised:

> LookupError: 
>     *********************************************************************   
> Resource 'tokenizers/punkt/english.pickle' not found.  Please use the NLTK Downloader to obtain the resource: nltk.download().   Searched in:
>         - 'C:\\Users\\Martinos/nltk_data'
>         - 'C:\\nltk_data'
>         - 'D:\\nltk_data'
>         - 'E:\\nltk_data'
>         - 'E:\\Python26\\nltk_data'
>         - 'E:\\Python26\\lib\\nltk_data'
>         - 'C:\\Users\\Martinos\\AppData\\Roaming\\nltk_data'
>     **********************************************************************
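As the traceback shows, NLTK walks a fixed list of directories when resolving a resource. You can inspect that list programmatically, which helps when deciding where to place or download the data:

```python
import nltk.data

# nltk.data.path holds the directories NLTK searches, in order, when
# resolving a resource such as 'tokenizers/punkt/english.pickle'.
for directory in nltk.data.path:
    print(directory)
```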

ric*_*rdr 252

I had this same problem. Go into a python shell and type:

>>> import nltk
>>> nltk.download()

An installation window will then appear. Go to the 'Models' tab and select 'punkt' from the 'Identifier' column. Then click 'Download' and it will install the necessary files. It should then work!

  • Note that in some versions there is no 'Models' tab; you can instead go to 'Download' and try to get the package 'punkt', or use any 'list' option to list the available packages. (4 upvotes)
  • It installed into an nltk_data directory in my home folder. Should I copy this punkt directory into any nltk library folder? Please help. (2 upvotes)

Nar*_*ula 88

import nltk
nltk.download('punkt')

from nltk import word_tokenize, sent_tokenize

Use the tokenizer :)


jji*_*ing 26

This is what worked for me just now:

# Do this in a separate python interpreter session, since you only have to do it once
import nltk
nltk.download('punkt')

# Do this in your ipython notebook or analysis script
from nltk.tokenize import word_tokenize

sentences = [
    "Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow.",
    "Professor Plum has a green plant in his study.",
    "Miss Scarlett watered Professor Plum's green plant while he was away from his office last week."
]

sentences_tokenized = []
for s in sentences:
    sentences_tokenized.append(word_tokenize(s))

sentences_tokenized is a list of lists of tokens:

[['Mr.', 'Green', 'killed', 'Colonel', 'Mustard', 'in', 'the', 'study', 'with', 'the', 'candlestick', '.', 'Mr.', 'Green', 'is', 'not', 'a', 'very', 'nice', 'fellow', '.'],
['Professor', 'Plum', 'has', 'a', 'green', 'plant', 'in', 'his', 'study', '.'],
['Miss', 'Scarlett', 'watered', 'Professor', 'Plum', "'s", 'green', 'plant', 'while', 'he', 'was', 'away', 'from', 'his', 'office', 'last', 'week', '.']]

The sentences are taken from the example ipython notebook accompanying the book "Mining the Social Web, 2nd Edition".


cgl*_*cgl 13

From the bash command line, run:

$ python -c "import nltk; nltk.download('punkt')"


Ros*_*iya 11

This worked for me:

>>> import nltk
>>> nltk.download()

On Windows you will also get the NLTK downloader window.

[screenshot: NLTK Downloader]


小智 8

A plain nltk.download() will not solve this issue. I tried the following and it worked for me:

Create a tokenizers folder inside the nltk_data folder and copy your punkt folder into the tokenizers folder.

This will work. The folder structure needs to be as shown in the image. [image: nltk_data folder structure]
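After copying the files manually, you can check whether NLTK can actually see them; `nltk.data.find` raises the same `LookupError` as in the question if the resource is still missing:

```python
import nltk.data

# find() resolves a resource against every directory on nltk.data.path
try:
    location = nltk.data.find('tokenizers/punkt')
    print('punkt found at:', location)
except LookupError:
    print('punkt is still missing from every directory on nltk.data.path')
```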


Abh*_*eet 6

On Jenkins this can be fixed by adding code like the following to the Virtualenv Builder under the Build tab:

python -m nltk.downloader punkt

[screenshot: Jenkins Virtualenv Builder configuration]


小智 6

In Spyder, go to your active shell and download nltk using the following two commands:

import nltk
nltk.download()

You should then see the NLTK Downloader window open. Go to the 'Models' tab in this window, click 'punkt', and download it.

[screenshot: NLTK Downloader window]


Je *_* Je 6

I ran into a similar issue when downloading multiple packages to an assigned folder, and I had to append the data path manually:

A single download works, as follows:

import os as _os
from nltk.corpus import stopwords
from nltk import download as nltk_download

nltk_download('stopwords', download_dir=_os.path.join(get_project_root_path(), 'temp'), raise_on_error=True)

stop_words: list = stopwords.words('english')


This code works, which means that nltk remembers the download path passed to the download function. On the other hand, if I download a subsequent package, I get an error similar to the one the user described:

Multiple downloads raise an error:

import os as _os

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

from nltk import download as nltk_download

nltk_download(['stopwords', 'punkt'], download_dir=_os.path.join(get_project_root_path(), 'temp'), raise_on_error=True)

print(stopwords.words('english'))
print(word_tokenize("I am trying to find the download path 99."))



The error:

> Resource punkt not found. Please use the NLTK Downloader to obtain the resource:
>
>     import nltk
>     nltk.download('punkt')

Now, if I append the nltk data path to my download path, it works:

import os as _os

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

from nltk import download as nltk_download
from nltk.data import path as nltk_path


nltk_path.append( _os.path.join(get_project_root_path(), 'temp'))


nltk_download(['stopwords', 'punkt'], download_dir=_os.path.join(get_project_root_path(), 'temp'), raise_on_error=True)

print(stopwords.words('english'))
print(word_tokenize("I am trying to find the download path 99."))


This works... I am not sure why it works in one case but not the other, but the error message seems to imply that it doesn't check the download folder the second time. Note: this was on Windows 8.1 / Python 3.7 / NLTK 3.5.


Jig*_*oya 5

nltk ships with pre-trained tokenizer models. A model is downloaded from a predefined web source and stored in the installed nltk package's path when you execute one of the following function calls.

E.g. 1: tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')

E.g. 2: nltk.download('punkt')

If you call either of the above in your code, make sure you have an internet connection that is not behind a firewall.

I would like to share a better alternative that avoids the network requirement, with a deeper understanding of the issue.

Please follow the steps below to use English word tokenization with nltk.

Step 1: First download the "english.pickle" model from the web.

Go to the link "http://www.nltk.org/nltk_data/" and click "download" at the option "107. Punkt Tokenizer Models".

Step 2: Extract the downloaded "punkt.zip" file, find the "english.pickle" file inside it, and place it on the C: drive.

Step 3: Copy and paste the following code and execute it.

from nltk.data import load
from nltk.tokenize.treebank import TreebankWordTokenizer

sentences = [
    "Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow.",
    "Professor Plum has a green plant in his study.",
    "Miss Scarlett watered Professor Plum's green plant while he was away from his office last week."
]

tokenizer = load('file:C:/english.pickle')
treebank_word_tokenize = TreebankWordTokenizer().tokenize

wordToken = []
for sent in sentences:
    subSentToken = []
    for subSent in tokenizer.tokenize(sent):
        subSentToken.extend([token for token in treebank_word_tokenize(subSent)])

    wordToken.append(subSentToken)

for token in wordToken:
    print(token)

If you face any problems, please let me know.