nltk sentence tokenizer: treat new lines as sentence boundaries

Cen*_*tAu 19 python nlp tokenize nltk

I am using nltk's PunktSentenceTokenizer to tokenize a text into a set of sentences. However, the tokenizer does not seem to treat a new paragraph or new lines as a new sentence.

>>> from nltk.tokenize.punkt import PunktSentenceTokenizer
>>> tokenizer = PunktSentenceTokenizer()
>>> tokenizer.tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
['Sentence 1 \n Sentence 2.', 'Sentence 3.']
>>> tokenizer.span_tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
[(0, 24), (25, 36)]

I would like new lines to be treated as sentence boundaries. Is there any way to do this? (I also need to keep the offsets.)

Juc*_*uca 15

Well, I ran into the same problem, and what I did was split the text on '\n'. Something like this:

# in my case, when it had '\n', I called it a new paragraph, 
# like a collection of sentences
paragraphs = [p for p in text.split('\n') if p]
# and here, sent_tokenize each one of the paragraphs
for paragraph in paragraphs:
    sentences = tokenizer.tokenize(paragraph)
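The split-based approach above drops the character offsets the question asks for. A minimal sketch of one way to keep them (`sentence_spans` is a hypothetical helper, not part of NLTK): run `span_tokenize` on each paragraph and shift the resulting spans back into the coordinates of the original text.

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer

def sentence_spans(text):
    """Tokenize paragraph by paragraph, shifting each paragraph's
    spans back into coordinates of the original text."""
    tokenizer = PunktSentenceTokenizer()
    spans = []
    offset = 0
    for paragraph in text.split('\n'):
        for start, end in tokenizer.span_tokenize(paragraph):
            spans.append((offset + start, offset + end))
        offset += len(paragraph) + 1  # +1 accounts for the '\n' delimiter
    return spans

text = 'Sentence 1 \n Sentence 2. Sentence 3.'
for start, end in sentence_spans(text):
    print((start, end), repr(text[start:end]))
```

Because each paragraph is tokenized on its own, the '\n' itself can never end up inside a sentence, which is exactly the boundary behavior the question wants.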

Here is a simplified version of what I have in production, but the general idea is the same. And sorry about the comments and docstrings in Portuguese; this was made for "educational purposes" aimed at a Brazilian audience.

def paragraphs(self):
    if self._paragraphs is not None:
        for p in self._paragraphs:
            yield p
    else:
        raw_paras = self.raw_text.split(self.paragraph_delimiter)
        gen = (Paragraph(self, p) for p in raw_paras if p)
        self._paragraphs = []
        for p in gen:
            self._paragraphs.append(p)
            yield p

Full code: https://gitorious.org/restjor/restjor/source/4d684ea4f18f66b097be1e10cc8814736888dfb4:restjor/decomposition.py#Lundefined

  • Although I think I could just replace the newlines with dots. That might work. (2 upvotes)
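The commenter's idea happens to play well with the offset requirement: since '\n' and '.' are both a single character, spans computed on the substituted string index correctly into the original text. A minimal sketch, assuming the default (untrained) `PunktSentenceTokenizer`:

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer

text = 'Sentence 1 \n Sentence 2. Sentence 3.'

# Replacing '\n' with '.' keeps the string the same length,
# so every (start, end) span stays valid in the original text.
tokenizer = PunktSentenceTokenizer()
spans = list(tokenizer.span_tokenize(text.replace('\n', '.')))
for start, end in spans:
    print((start, end), repr(text[start:end]))
```

One caveat: the inserted dot may be attached to the preceding sentence (e.g. 'Sentence 1 .'), so the extracted sentences can carry a stray period that was not in the source text.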