Count the common words between two strings

Jai*_*ain 5 string r text-mining data-analysis

I have two strings:

a <- "Roy lives in Japan and travels to Africa"
b <- "Roy travels Africa with this wife"

I want to find the words these strings have in common.

The answer should be 3:

  • "Roy"

  • "travels"

  • "Africa"

are the common words.

Here is what I have tried:

stra <- as.data.frame(t(read.table(textConnection(a), sep = " ")))
strb <- as.data.frame(t(read.table(textConnection(b), sep = " ")))

Take the unique values to avoid double counting:

stra_unique <-as.data.frame(unique(stra$V1))
strb_unique <- as.data.frame(unique(strb$V1))
colnames(stra_unique) <- c("V1")
colnames(strb_unique) <- c("V1")

common_words <-length(merge(stra_unique,strb_unique, by = "V1")$V1)

I need this for datasets of over 2000 and 1200 strings, so the total number of string pairs I have to evaluate is 2000 × 1200. Is there any fast way to do this without using loops?

Ale*_*lds 7

You can use strsplit and intersect from base R:

> a <- "Roy lives in Japan and travels to Africa"
> b <- "Roy travels Africa with this wife"
> a_split <- unlist(strsplit(a, split=" "))
> b_split <- unlist(strsplit(b, split=" "))
> length(intersect(a_split, b_split))
[1] 3
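The question also asks about scaling this to roughly 2000 × 1200 string pairs. One possible sketch (an extension, not part of the answer above) is to tokenize every string once up front and then intersect the precomputed word sets with outer, so each string is split only a single time rather than once per pair:

```r
# Tokenize each string exactly once; splitting is the expensive step,
# so precompute the unique word set per string.
set_a <- lapply(strsplit(c("Roy lives in Japan and travels to Africa",
                           "Another example sentence"), split = " "), unique)
set_b <- lapply(strsplit(c("Roy travels Africa with this wife",
                           "Yet another example"), split = " "), unique)

# Count the common words for every pair without an explicit loop:
# rows index set_a, columns index set_b.
counts <- outer(seq_along(set_a), seq_along(set_b),
                Vectorize(function(i, j) length(intersect(set_a[[i]], set_b[[j]]))))
counts[1, 1]
# [1] 3
```

The second strings in set_a and set_b are made-up filler to show the pairwise matrix shape; only the first pair comes from the question.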


akr*_*run 6

Perhaps use intersect with str_extract_all from the stringr package. For multiple strings, you can pass them as a list or as a vector:

 library(stringr)
 vec1 <- c(a,b)
 Reduce(`intersect`,str_extract_all(vec1, "\\w+"))
 #[1] "Roy"     "travels" "Africa" 

For a faster option, consider stringi:

 library(stringi)
 Reduce(`intersect`,stri_extract_all_regex(vec1,"\\w+"))
 #[1] "Roy"     "travels" "Africa" 

To get the count:

 length(Reduce(`intersect`,stri_extract_all_regex(vec1,"\\w+")))
 #[1] 3

Or using base R:

  Reduce(`intersect`,regmatches(vec1,gregexpr("\\w+", vec1)))
  #[1] "Roy"     "travels" "Africa" 
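Because Reduce folds intersect across the whole list, the same base-R pattern extends unchanged to three or more strings. A small illustration (the third sentence is invented for the example):

```r
# Reduce(intersect, ...) intersects pairwise left to right, so it finds
# words common to ALL strings in the vector, however many there are.
vec <- c("Roy lives in Japan and travels to Africa",
         "Roy travels Africa with this wife",
         "Roy often travels to Africa in spring")
Reduce(intersect, regmatches(vec, gregexpr("\\w+", vec)))
# [1] "Roy"     "travels" "Africa"
```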