multidplyr and group_by() and filter()

Jus*_*kis 8 r dplyr multidplyr

I have the following data frame. My goal is to find all IDs that have differing USAGE but the same TYPE.

ID <- rep(1:4, each = 3)
USAGE <- c("private", "private", "private", "private", "taxi", "private",
           "taxi", "taxi", "taxi", "taxi", "private", "taxi")
TYPE <- c("VW", "VW", "VW", "VW", "MER", "VW",
          "VW", "VW", "VW", "VW", "VW", "VW")
df <- data.frame(ID, USAGE, TYPE)

If I run

df %>% group_by(ID, TYPE) %>% filter(n_distinct(USAGE)>1)
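For this sample data, only ID 4 qualifies: its rows share the TYPE "VW" but include both "taxi" and "private" USAGE. The printed result should look roughly like this (exact formatting depends on your R and dplyr versions):

#> # A tibble: 3 x 3
#> # Groups:   ID, TYPE [1]
#>      ID USAGE   TYPE
#> 1     4 taxi    VW
#> 2     4 private VW
#> 3     4 taxi    VW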

I get the expected result. But my original data frame has more than 2 million rows, so I would like to use all of my cores when running this operation.

I tried this code with multidplyr:

f1 <- partition(df, ID)
f2 <- f1 %>% group_by(ID, TYPE) %>% filter(n_distinct(USAGE)>1)
f3 <- collect(f2)

But then the following messages appear:

Warning message: group_indices_.grouped_df ignores extra arguments
after

f1 <- partition(df, ID)
and

Error in checkForRemoteErrors(lapply(cl, recvResult)) : 
  4 nodes produced errors; first error: Evaluation error: object 'f1' not found.
after

f2 <- f1%>% group_by(ID, TYPE) %>% filter(f1, n_distinct(USAGE)>1)

What is the correct way to implement the whole operation with multidplyr? Thanks a lot.

And*_*ēza 5

You should include all grouping variables in your call to partition(). That way, each core has all the data it needs to perform the calculation for a given group.

library(tidyverse)
library(multidplyr)

fast <- df %>%
  partition(ID, TYPE) %>%
  group_by(ID, TYPE) %>%
  filter(n_distinct(USAGE) > 1) %>%
  collect()

Verification

You will still get the warning about group_indices, but the result is the same as that of the original dplyr approach.

slow <- df %>%
  group_by(ID, TYPE) %>%
  filter(n_distinct(USAGE) > 1)

fast == slow
#        ID USAGE TYPE
# [1,] TRUE  TRUE TRUE
# [2,] TRUE  TRUE TRUE
# [3,] TRUE  TRUE TRUE
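One caveat: == compares element by element, so this check assumes collect() returned the rows in the same order as the sequential pipeline. A safer, order-independent check is to sort both results before comparing, as in this small sketch using dplyr's arrange() and base all.equal():

# Drop the grouping, put both results in the same row order,
# then compare their contents.
all.equal(
  fast %>% ungroup() %>% arrange(ID, USAGE, TYPE) %>% as.data.frame(),
  slow %>% ungroup() %>% arrange(ID, USAGE, TYPE) %>% as.data.frame()
)
# [1] TRUE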

Benchmarking

Now for the big question: is it faster? Defining the cluster ourselves lets us make sure all cores are used.

library(microbenchmark)
library(parallel)

cluster <- create_cluster(cores = detectCores())

fast_func <- function(df) {
  df %>%
    partition(ID, TYPE, cluster = cluster) %>%
    group_by(ID, TYPE) %>%
    filter(n_distinct(USAGE) > 1) %>%
    collect()
}

slow_func <- function(df) {
  df %>%
    group_by(ID, TYPE) %>%
    filter(n_distinct(USAGE) > 1)
}

microbenchmark(fast_func(df), slow_func(df))
# Unit: milliseconds
# expr       min        lq      mean    median        uq       max neval cld
# fast_func(df) 41.360358 47.529695 55.806609 50.529851 61.459433 133.53045   100   b
# slow_func(df)  4.717761  6.974897  9.333049  7.796686  8.468594  49.51916   100  a 

In this case, parallel processing is actually slower. The typical run of fast_func takes 56 milliseconds, while slow_func takes 9. That is due to the overhead associated with managing the flow of data across the cluster. But you said your data has millions of rows, so let's try that.

# Embiggen the data
df <- df[rep(seq_len(nrow(df)), each=2000000),] %>% tbl_df()

microbenchmark(fast_func(df), slow_func(df), times = 10)
# Unit: seconds
# expr       min        lq      mean    median        uq       max neval cld
# fast_func(df) 43.067089 43.781144 50.754600 49.440864 55.308355 65.499095    10   b
# slow_func(df)  1.741674  2.550008  3.529607  3.246665  3.983452  7.214484    10  a 

Even with the embiggened dataset, fast_func is still slower! There are times when running in parallel saves enormous amounts of time, but a simple grouped filter is not necessarily one of them.
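To confirm that the cost really is the data shuffle rather than the filter itself, a rough check (a sketch reusing the cluster defined above) is to time the partition/collect round trip with no computation in between:

# Time only the cost of shipping data to the workers and back.
round_trip <- function(df) {
  df %>%
    partition(ID, TYPE, cluster = cluster) %>%
    collect()
}

microbenchmark(round_trip(df), times = 10)

If the round trip alone accounts for most of fast_func's runtime, then the overhead, not the filtering, dominates.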