How to treat a phrase containing stop words as a single token with Python nltk.tokenize

fol*_*ion 5 python tokenize nltk stop-words

A string can be tokenized with nltk.tokenize and then cleaned up by removing unnecessary stop words. But how can a phrase that contains stop words be kept as a single token, while the other stop words are still removed?

For example:

Input: "Trump is the President of the United States."

Output: ['Trump', 'President of the United States']

How can I get a result that removes only "is" and the first "the", but keeps "of" and the second "the"?

glh*_*lhr 3

You can use nltk's multi-word expression tokenizer, MWETokenizer, which merges multi-word expressions into single tokens. Create it with a lexicon of multi-word expressions and add further entries like this:

from nltk.tokenize import MWETokenizer
mwetokenizer = MWETokenizer([('President','of','the','United','States')], separator=' ')
mwetokenizer.add_mwe(('President','of','France'))

Note that MWETokenizer takes a list of already-tokenized text as input and retokenizes it. So first tokenize the sentence, e.g. with word_tokenize(), then feed the result to the MWETokenizer:

from nltk.tokenize import word_tokenize
sentence = "Trump is the President of the United States, and Macron is the President of France."
mwetokenized_sentence = mwetokenizer.tokenize(word_tokenize(sentence))
# ['Trump', 'is', 'the', 'President of the United States', ',', 'and', 'Macron', 'is', 'the', 'President of France', '.']

Then filter out the stop words to get the final filtered, tokenized sentence:

from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
filtered_sentence = [token for token in mwetokenizer.tokenize(word_tokenize(sentence)) if token not in stop_words]
print(filtered_sentence)

Output:

['Trump', 'President of the United States', ',', 'Macron', 'President of France', '.']
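To see what MWETokenizer is doing under the hood, the same merge-then-filter idea can be sketched in plain Python without nltk. This is only an illustrative sketch with hypothetical names (merge_mwes, a hand-rolled stop-word set); nltk's MWETokenizer implements the same greedy left-to-right matching, and nltk's real English stop-word list is much larger.

```python
def merge_mwes(tokens, mwes, separator=' '):
    """Greedily merge known multi-word expressions into single tokens.

    Hypothetical helper illustrating what nltk's MWETokenizer does:
    scan left to right, and whenever an MWE starts at the current
    position, join it into one token and skip past it.
    """
    result, i = [], 0
    while i < len(tokens):
        match = None
        for mwe in mwes:
            if tuple(tokens[i:i + len(mwe)]) == mwe:
                # Prefer the longest expression matching at this position
                if match is None or len(mwe) > len(match):
                    match = mwe
        if match:
            result.append(separator.join(match))
            i += len(match)
        else:
            result.append(tokens[i])
            i += 1
    return result

# Small stand-in for nltk's English stop-word list
stop_words = {'is', 'the', 'of', 'and', 'a'}
mwes = [('President', 'of', 'the', 'United', 'States'),
        ('President', 'of', 'France')]

tokens = ['Trump', 'is', 'the', 'President', 'of', 'the',
          'United', 'States', '.']
merged = merge_mwes(tokens, mwes)
filtered = [t for t in merged if t not in stop_words]
print(filtered)  # ['Trump', 'President of the United States', '.']
```

Because the merge runs before the stop-word filter, the "of" and "the" inside the expression are already fused into one token and therefore survive, while the free-standing "is" and "the" are removed.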