Tags: regex, r, text-mining, opennlp, stringi
I want to break a string into sentences:
library(NLP) # NLP_0.1-7
string <- as.String("Mr. Brown comes. He says hello. i give him coffee.")
I would like to show two different ways of doing it. The first comes from the package openNLP:
library(openNLP) # openNLP_0.2-5
sentence_token_annotator <- Maxent_Sent_Token_Annotator(language = "en")
boundaries_sentences <- annotate(string, sentence_token_annotator)
string[boundaries_sentences]
[1] "Mr. Brown comes." "He says hello." "i give him coffee."
The second comes from the package stringi:
library(stringi) # stringi_0.5-5
stri_split_boundaries(string, opts_brkiter = stri_opts_brkiter('sentence'))
[[1]]
[1] "Mr. " "Brown comes. "
[3] "He says hello. i give him coffee."
After the second approach I have to clean up the sentences, removing the extra whitespace or breaking the resulting strings into sentences again. Can I tune the stringi functions to improve the quality of the result?
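For example, trimming the padding with stri_trim_both() is easy, but of course it cannot repair the wrong breaks:

pieces <- stri_split_boundaries(string, opts_brkiter = stri_opts_brkiter('sentence'))[[1]]
stri_trim_both(pieces)
[1] "Mr."                               "Brown comes."
[3] "He says hello. i give him coffee."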
Moreover, on big data openNLP is (very) slow compared to stringi. Is there a way to combine stringi (-> fast) and openNLP (-> good quality)?
Text boundary analysis (here: sentence boundary analysis) in ICU, and hence in stringi, is governed by the rules described in the Unicode standard UAX #29; see also the ICU User Guide on this topic. There we read:

"[The Unicode rules] cannot detect cases such as '...Mr. Jones...'; more sophisticated tailoring would be required to detect such cases."
In other words, this cannot be done without a custom dictionary of non-breaking words, which is in fact what openNLP implements. Thus, a few possible scenarios for utilizing stringi to perform this task include:

- use stri_split_boundaries and then write a function that decides which wrongly split tokens should be joined back (a sketch follows this list),
- apply your own splitting rules with stri_split_regex,
- and so on.
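A minimal sketch of the first scenario could look as follows; the abbreviation list is only a hand-made illustration, a real dictionary of non-breaking words would have to be much larger:

library(stringi) # stringi_0.5-5

# Split on ICU sentence boundaries, trim the padding, then re-join a
# fragment with its predecessor whenever the predecessor ends in one of
# the known abbreviations (the `abbrevs` default is an illustration only).
split_sentences <- function(x, abbrevs = c("Mr.", "Mrs.", "Dr.", "St.")) {
    tokens <- stri_trim_both(stri_split_boundaries(x,
        opts_brkiter = stri_opts_brkiter('sentence'))[[1]])
    out <- character(0)
    for (tok in tokens) {
        if (length(out) > 0 && any(stri_endswith_fixed(out[length(out)], abbrevs))) {
            # the previous fragment ended in an abbreviation: glue this one back
            out[length(out)] <- stri_c(out[length(out)], tok, sep = " ")
        } else {
            out <- c(out, tok)
        }
    }
    out
}

split_sentences("Mr. Brown comes. He says hello. i give him coffee.")
## [1] "Mr. Brown comes."                  "He says hello. i give him coffee."

Note that "i give him coffee." is not separated at all here: ICU does not place a boundary before a lowercase letter in the first place, so there is nothing for the joining step to fix.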
This may be a viable regex solution:
string <- "Mr. Brown comes. He says hello. i give him coffee."
stringi::stri_split_regex(string, "(?<!\\w\\.\\w.)(?<![A-Z][a-z]\\.)(?<=\\.|\\?|\\!)\\s")
## [[1]]
## [1] "Mr. Brown comes." "He says hello." "i give him coffee."
It does not perform so well on this one, though:
string <- "Mr. Brown comes! He says hello. i give him coffee. i will got at 5 p. m. eastern time. Or somewhere in between"