Fast subsetting / lookup / filtering in a large dataset

Vic*_*yuk 3 performance r subset dplyr data.table

I need to repeatedly look up the "closest" row in a large (many-GB) table containing factor and numeric columns. Using dplyr, it looks like this:

df <- data.frame(factorA = rep(letters[1:3], 100000),
                 factorB = sample(rep(letters[1:3], 100000),
                                  3*100000, replace = FALSE),
                 numC = round(rnorm(3*100000), 2),
                 numD = round(rnorm(3*100000), 2))

closest <- function(ValueA, ValueB, ValueC, ValueD) {
  df_sub <- df %>%
    filter(factorA == ValueA,
           factorB == ValueB,
           numC >= 0.9 * ValueC,
           numC <= 1.1 * ValueC,
           numD >= 0.9 * ValueD,
           numD <= 1.1 * ValueD)

  if (nrow(df_sub) == 0) stop("Oh-oh, no candidates.")

  minC <- df_sub[which.min(abs(df_sub$numC - ValueC)), "numC"]

  df_sub %>%
    filter(numC == minC) %>%
    slice(which.min(abs(numD - ValueD))) %>%
    as.list() %>%
    return()
}

Here is a benchmark of the above:

> microbenchmark(closest("a", "b", 0.5, 0.6))
Unit: milliseconds
                        expr      min       lq     mean   median       uq      max neval
 closest("a", "b", 0.5, 0.6) 25.20927 28.90623 35.16863 34.59485 35.25468 108.3489   100

What is the best way to optimize this function for speed? There is free RAM to spare even with df in memory, but given how many times this function gets called, I'd like it to be as fast as possible.

Would using a data.table instead of dplyr help?


Here are the two optimizations I've tried so far:

dt <- as.data.table(df)

closest2 <- function(ValueA, ValueB, ValueC, ValueD) {
  df_sub <- df %>%
    filter(factorA == ValueA,
           factorB == ValueB,
           dplyr::between(numC, 0.9 * ValueC, 1.1 * ValueC),
           dplyr::between(numD, 0.9 * ValueD, 1.1 * ValueD))

  if (nrow(df_sub) == 0) stop("Oh-oh, no candidates.")

  minC <- df_sub[which.min(abs(df_sub$numC - ValueC)), "numC"]

  df_sub %>%
    filter(numC == minC) %>%
    slice(which.min(abs(numD - ValueD))) %>%
    as.list() %>%
    return()
}

closest3 <- function(ValueA, ValueB, ValueC, ValueD) {

  dt_sub <- dt[factorA == ValueA & 
                 factorB == ValueB & 
                 numC %between% c(0.9 * ValueC, 1.1 * ValueC) &
                 numD %between% c(0.9 * ValueD, 1.1 * ValueD)]

  if (nrow(dt_sub) == 0) stop("Oh-oh, no candidates.")

  dt_sub[abs(numC - ValueC) == min(abs(numC - ValueC))][which.min(abs(numD - ValueD))] %>%
    as.list() %>%
    return()
}

Benchmarks:

> microbenchmark(closest("a", "b", 0.5, 0.6), closest2("a", "b", 0.5, 0.6), closest3("a", "b", 0.5, 0.6))
Unit: milliseconds
                         expr      min       lq     mean   median       uq       max neval cld
  closest("a", "b", 0.5, 0.6) 25.15780 25.62904 36.52022 34.68219 35.27116 155.31924   100   c
 closest2("a", "b", 0.5, 0.6) 22.14465 22.46490 27.81361 31.40918 32.04427  35.79021   100  b 
 closest3("a", "b", 0.5, 0.6) 13.52094 13.77555 20.04284 22.70408 23.41452 142.73626   100 a  

Can this be optimized further?

Fra*_*ank 5

If you can call it on many tuples of values at once, rather than sequentially one at a time...

set.seed(1)
DF <- data.frame(factorA = rep(letters[1:3], 100000),
                 factorB = sample(rep(letters[1:3], 100000),
                                  3*100000, replace = FALSE),
                 numC = round(rnorm(3*100000), 2),
                 numD = round(rnorm(3*100000), 2))

library(data.table)
DT = data.table(DF)

f = function(vA, vB, nC, nD, dat = DT){

  rs <- dat[.(vA, vB, nC), on=.(factorA, factorB, numC), roll="nearest",
    .(g = .GRP, r = .I, numD), by=.EACHI][.(seq_along(vA), nD), on=.(g, numD), roll="nearest", mult="first",
    r]

  dat[rs]
}

# example usage
mDT = data.table(vA = c("a", "b"), vB = c("c", "c"), nC = c(.3, .5), nD = c(.6, .8))

mDT[, do.call(f, .SD)]

#    factorA factorB numC numD
# 1:       a       c  0.3 0.60
# 2:       b       c  0.5 0.76

Compared with the other solutions, which have to run row by row...

# check the results match
library(magrittr)
dt = copy(DT)
mDT[, closest3(vA, vB, nC, nD), by=.(mr = seq_len(nrow(mDT)))]

#    mr factorA factorB numC numD
# 1:  1       a       c  0.3 0.60
# 2:  2       b       c  0.5 0.76

# check speed for a larger number of comparisons

nr = 100
system.time( mDT[rep(1:2, each=nr), do.call(f, .SD)] )
#    user  system elapsed
#    0.07    0.00    0.06

system.time( mDT[rep(1:2, each=nr), closest3(vA, vB, nC, nD), by=.(mr = seq_len(nr*nrow(mDT)))] )
#    user  system elapsed
#   10.65    2.30   12.60

How it works

For each tuple in .(vA, vB, nC), we look up rows where vA and vB match exactly, and then "roll" to the nearest value of nC. This doesn't exactly match the OP's rule (which searches within the range nC*[0.9, 1.1]), but that rule can easily be applied afterwards. For each match, we grab the tuple's "group number" .GRP, the row numbers of the matched rows .I, and the values of numD on those rows.
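To see what the first rolling join does in isolation, here is a minimal sketch on toy data (column names grp, x, y are made up for illustration, not taken from the answer):

```r
library(data.table)
d <- data.table(grp = "a", x = c(1, 2, 4), y = c(10, 20, 40))

# exact match on grp, then roll x to the nearest available value:
# 2.9 is closer to 2 than to 4, so the row with y = 20 is matched
d[.("a", 2.9), on = .(grp, x), roll = "nearest"]
#    grp   x  y
# 1:   a 2.9 20
```

Note that the join column x displays the value from the lookup table (2.9), while y comes from the matched row.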


Then we join on the group number and nD, matching exactly on the former and rolling to the nearest value on the latter. If there are several equally close matches, we take the first one, thanks to mult="first".
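The mult="first" tie-breaking can be sketched the same way (toy data and made-up column names, for illustration only):

```r
library(data.table)
d <- data.table(g = 1L, numD = c(0.5, 0.5, 0.7), r = c(7L, 9L, 11L))

# the rows with r = 7 and r = 9 are equally near numD = 0.5;
# mult = "first" keeps only the first of them
d[.(1L, 0.5), on = .(g, numD), roll = "nearest", mult = "first", r]
# [1] 7
```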


Then we can take the row numbers matched for each tuple and look them up in the original table.


Performance

So, as is typical in R, the vectorized solution comes with a big performance advantage.


If you can only pass ~5 tuples at a time (as in the OP's case) rather than 200, this approach probably still has benefits over similar which.min-based approaches thanks to binary search, as suggested by @F.Privé in a comment.


As mentioned in @HarlanNelson's answer, adding indices to the table may improve performance further. See his answer and ?setindex.
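For reference, a secondary index on the filter columns can be added like this (a sketch, assuming the DT defined above; see ?setindex for details):

```r
library(data.table)

# precompute the sort order by these columns without physically reordering DT
setindex(DT, factorA, factorB)

# list the indices attached to DT, e.g. "factorA__factorB"
indices(DT)

# subsequent == subsets such as DT[factorA == "a" & factorB == "b"]
# can then use the index instead of a full vector scan
```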


Fixing the issue of numC rolling to a single value

Thanks to the OP for identifying this problem:

DT2 = data.table(id = "A", numC = rep(c(1.01,1.02), each=5), numD = seq(.01,.1,.01))
DT2[.("A", 1.011), on=.(id, numC), roll="nearest"]
#    id  numC numD
# 1:  A 1.011 0.05

Here we see one row, but we should see five. One fix (though I'm not sure why it works) is converting to integer:

DT3 = copy(DT2)
DT3[, numC := as.integer(numC*100)]
DT3[, numD := as.integer(numD*100)]
DT3[.("A", 101.1), on=.(id, numC), roll="nearest"]
#    id numC numD
# 1:  A  101    1
# 2:  A  101    2
# 3:  A  101    3
# 4:  A  101    4
# 5:  A  101    5