Lemmatizing Italian sentences for frequency counting

TPP*_*PPZ 5 nlp stemming nltk lemmatization python-2.7

I would like to lemmatize some Italian text in order to perform frequency counting over the words and to further investigate the output of this lemmatized content.

I would prefer lemmatizing over stemming because I could extract the meaning of a word from its context in the sentence (e.g. distinguishing between a verb and a noun) and obtain words that actually exist in the language, as opposed to roots of words that typically don't carry any meaning.

I found out that this library called pattern (pip2 install pattern) is supposed to complement nltk in order to perform lemmatization for the Italian language. However, I am not sure the approach below is correct, because each word is lemmatized by itself, not in the context of the sentence it belongs to.

Probably I should make pattern responsible for tokenizing a sentence (and thus also annotating each word with the metadata about whether it is a verb/noun/adjective etc.), then retrieve the lemmatized words, but I have not been able to do this, and I am not even sure it is currently possible?
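Something like the sketch below is what I have in mind (untested; I am assuming parse can be fed the whole text with tokenize=True, and that the lemma ends up in the last slot of each token, as it does in the snippet further down):

from pattern.it import parse

it_text = "Ieri sono andato in due supermercati."
# parse the whole sentence so pattern can use the sentence context,
# then read the lemma off the last slot of each token
parsed = parse(
    it_text,
    tokenize=True,  # let pattern do the tokenization itself
    tag=True,       # annotate each word with its part of speech
    chunk=False,
    lemmata=True    # append the lemma to each token
)
for sentence in parsed.split():
    for token in sentence:
        word, lemma = token[0], token[-1]
        print("{} -> {}".format(word, lemma))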

Also: in Italian some articles are rendered with an apostrophe, so for example "l'appartamento" ("the flat" in English) is actually two words: "lo" and "appartamento". Right now I cannot find a way to split these two joined words with either nltk or pattern, so I cannot count word frequencies the correct way.

import nltk
import string
import pattern.it  # "import pattern" alone does not expose the pattern.it submodule

# dictionary of Italian stop-words
it_stop_words = nltk.corpus.stopwords.words('italian')
# Snowball stemmer with rules for the Italian language
ita_stemmer = nltk.stem.snowball.ItalianStemmer()

# the following function is just to get the lemma
# out of the original input word (but right now
# it may be loosing the context about the sentence
# from where the word is coming from i.e.
# the same word could either be a noun/verb/adjective
# according to the context)
def lemmatize_word(input_word):
    in_word = input_word#.decode('utf-8')
    # print('Something: {}'.format(in_word))
    word_it = pattern.it.parse(
        in_word, 
        tokenize=False,  
        tag=False,  
        chunk=False,  
        lemmata=True 
    )
    # print("Input: {} Output: {}".format(in_word, word_it))
    the_lemmatized_word = word_it.split()[0][0][4]
    # print("Returning: {}".format(the_lemmatized_word))
    return the_lemmatized_word

it_string = "Ieri sono andato in due supermercati. Oggi volevo andare all'ippodromo. Stasera mangio la pizza con le verdure."

# 1st tokenize the sentence(s)
word_tokenized_list = nltk.tokenize.word_tokenize(it_string)
print("1) NLTK tokenizer, num words: {} for list: {}".format(len(word_tokenized_list), word_tokenized_list))

# 2nd remove punctuation and everything lower case
word_tokenized_no_punct = [x.lower() for x in word_tokenized_list if x not in string.punctuation]
print("2) Clean punctuation, num words: {} for list: {}".format(len(word_tokenized_no_punct), word_tokenized_no_punct))

# 3rd remove stop words (for the Italian language)
word_tokenized_no_punct_no_sw = [x for x in word_tokenized_no_punct if x not in it_stop_words]
print("3) Clean stop-words, num words: {} for list: {}".format(len(word_tokenized_no_punct_no_sw), word_tokenized_no_punct_no_sw))

# 4.1 lemmatize the words
word_tokenize_list_no_punct_lc_no_stowords_lemmatized = [lemmatize_word(x) for x in word_tokenized_no_punct_no_sw]
print("4.1) lemmatizer, num words: {} for list: {}".format(len(word_tokenize_list_no_punct_lc_no_stowords_lemmatized), word_tokenize_list_no_punct_lc_no_stowords_lemmatized))

# 4.2 snowball stemmer for Italian
word_tokenize_list_no_punct_lc_no_stowords_stem = [ita_stemmer.stem(i) for i in word_tokenized_no_punct_no_sw]
print("4.2) stemmer, num words: {} for list: {}".format(len(word_tokenize_list_no_punct_lc_no_stowords_stem), word_tokenize_list_no_punct_lc_no_stowords_stem))

# difference between stemmer and lemmatizer
print(
    "For original word(s) '{}' and '{}' the stemmer: '{}' '{}' (count 1 each), the lemmatizer: '{}' '{}' (count 2)"
    .format(
        word_tokenized_no_punct_no_sw[1],
        word_tokenized_no_punct_no_sw[6],
        word_tokenize_list_no_punct_lc_no_stowords_stem[1],
        word_tokenize_list_no_punct_lc_no_stowords_stem[6],
        word_tokenize_list_no_punct_lc_no_stowords_lemmatized[1],
        word_tokenize_list_no_punct_lc_no_stowords_lemmatized[1]
    )
)

This gives the following output:

1) NLTK tokenizer, num words: 20 for list: ['Ieri', 'sono', 'andato', 'in', 'due', 'supermercati', '.', 'Oggi', 'volevo', 'andare', "all'ippodromo", '.', 'Stasera', 'mangio', 'la', 'pizza', 'con', 'le', 'verdure', '.']
2) Clean punctuation, num words: 17 for list: ['ieri', 'sono', 'andato', 'in', 'due', 'supermercati', 'oggi', 'volevo', 'andare', "all'ippodromo", 'stasera', 'mangio', 'la', 'pizza', 'con', 'le', 'verdure']
3) Clean stop-words, num words: 12 for list: ['ieri', 'andato', 'due', 'supermercati', 'oggi', 'volevo', 'andare', "all'ippodromo", 'stasera', 'mangio', 'pizza', 'verdure']
4.1) lemmatizer, num words: 12 for list: [u'ieri', u'andarsene', u'due', u'supermercato', u'oggi', u'volere', u'andare', u"all'ippodromo", u'stasera', u'mangiare', u'pizza', u'verdura']
4.2) stemmer, num words: 12 for list: [u'ier', u'andat', u'due', u'supermerc', u'oggi', u'vol', u'andar', u"all'ippodrom", u'staser', u'mang', u'pizz', u'verdur']
For original word(s) 'andato' and 'andare' the stemmer: 'andat' 'andar' (count 1 each), the lemmatizer: 'andarsene' 'andarsene' (count 2)
  • How can I effectively lemmatize some sentences with pattern using its tokenizer? (assuming the lemmas get recognized as nouns/verbs/adjectives etc.)
  • Is there a python alternative to pattern for Italian lemmatization with nltk?
  • How can I split articles that are bound to the next word by an apostrophe?

Ado*_*nis 7

I'll try my best to answer your question, knowing that I do not know much about Italian!

1) As far as I know, handling the apostrophe is mainly the tokenizer's responsibility, and as such the nltk Italian tokenizer seems to have failed here.

3) A simple thing you can do is split the tokens on the apostrophe (although you would probably have to use the re package for more complicated patterns), an example:

# split every token on the apostrophe...
word_tokenized_no_punct_no_sw_no_apostrophe = [x.split("'") for x in word_tokenized_no_punct_no_sw]
# ...then flatten the resulting list of lists back into a flat token list
word_tokenized_no_punct_no_sw_no_apostrophe = [y for x in word_tokenized_no_punct_no_sw_no_apostrophe for y in x]

It yields:

['ieri', 'andato', 'due', 'supermercati', 'oggi', 'volevo', 'andare', 'all', 'ippodromo', 'stasera', 'mangio', 'pizza', 'verdure']
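If you want to be stricter about what counts as an elided article, and also restore the vowel that the elision dropped, an re-based variant could look like this (just a sketch: the ELIDED mapping is my guess, it is not exhaustive, and a form like l' is genuinely ambiguous between lo and la):

import re

# hypothetical, non-exhaustive mapping of elided articles to their full forms
ELIDED = {
    "l": "lo",      # could equally be "la"; picking one loses gender information
    "un": "una",    # un' is the elided feminine article
    "all": "allo",  # could equally be "alla"
    "dell": "dello",
    "nell": "nello",
}

def split_apostrophe(token):
    # turn "all'ippodromo" into ["allo", "ippodromo"]; leave other tokens alone
    m = re.match(r"^([a-z]+)'(\w+)$", token)
    if not m:
        return [token]
    article, rest = m.groups()
    return [ELIDED.get(article, article), rest]

tokens = [t for tok in word_tokenized_no_punct_no_sw for t in split_apostrophe(tok)]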

2) An alternative to pattern is treetagger; granted, it is not the easiest install of all (you need the python package and the tool itself, but after that part it works on both Windows and Linux).

A simple example applied to your sentence above:

import treetaggerwrapper  # python wrapper around the separately installed TreeTagger binary
from pprint import pprint

it_string = "Ieri sono andato in due supermercati. Oggi volevo andare all'ippodromo. Stasera mangio la pizza con le verdure."
tagger = treetaggerwrapper.TreeTagger(TAGLANG="it")
tags = tagger.tag_text(it_string)
pprint(treetaggerwrapper.make_tags(tags))

The pprint yields:

[Tag(word=u'Ieri', pos=u'ADV', lemma=u'ieri'),
 Tag(word=u'sono', pos=u'VER:pres', lemma=u'essere'),
 Tag(word=u'andato', pos=u'VER:pper', lemma=u'andare'),
 Tag(word=u'in', pos=u'PRE', lemma=u'in'),
 Tag(word=u'due', pos=u'ADJ', lemma=u'due'),
 Tag(word=u'supermercati', pos=u'NOM', lemma=u'supermercato'),
 Tag(word=u'.', pos=u'SENT', lemma=u'.'),
 Tag(word=u'Oggi', pos=u'ADV', lemma=u'oggi'),
 Tag(word=u'volevo', pos=u'VER:impf', lemma=u'volere'),
 Tag(word=u'andare', pos=u'VER:infi', lemma=u'andare'),
 Tag(word=u"all'", pos=u'PRE:det', lemma=u'al'),
 Tag(word=u'ippodromo', pos=u'NOM', lemma=u'ippodromo'),
 Tag(word=u'.', pos=u'SENT', lemma=u'.'),
 Tag(word=u'Stasera', pos=u'ADV', lemma=u'stasera'),
 Tag(word=u'mangio', pos=u'VER:pres', lemma=u'mangiare'),
 Tag(word=u'la', pos=u'DET:def', lemma=u'il'),
 Tag(word=u'pizza', pos=u'NOM', lemma=u'pizza'),
 Tag(word=u'con', pos=u'PRE', lemma=u'con'),
 Tag(word=u'le', pos=u'DET:def', lemma=u'il'),
 Tag(word=u'verdure', pos=u'NOM', lemma=u'verdura'),
 Tag(word=u'.', pos=u'SENT', lemma=u'.')]

Before lemmatizing, it also tokenized all'ippodromo into al and ippodromo rather nicely (which is hopefully correct). Now we just need to remove the stop words and the punctuation and we are set.
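Wiring that into your frequency count could then look something like this (a sketch that reuses it_stop_words and string from your original snippet, and tags from the treetagger example above):

import string
from collections import Counter

# keep only the lemmas of real words: drop punctuation and stop words
lemmas = [
    t.lemma for t in treetaggerwrapper.make_tags(tags)
    if t.lemma not in string.punctuation and t.lemma not in it_stop_words
]
print(Counter(lemmas))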

Documentation on installing TreeTaggerWrapper for python

  • Thanks, this works. On linux the path to the installation directory can be specified directly in the python code, e.g. `treetaggerwrapper.TreeTagger(TAGLANG="it", TAGDIR='/abs-path/to/tree-tagger-linux-3.2.1/')`, or as an environment variable as described at http://treetaggerwrapper.readthedocs.io/en/latest/#configuration Also, to get the lemmas out of the `Tag` named tuples: `tags_str = tagger.tag_text(it_string)`, then `tags = treetaggerwrapper.make_tags(tags_str)`, then `lemmas = map((lambda x: x.lemma), tags)` (3 upvotes)