R mcparallel mapReduce

pav*_*vel 3 multicore r mapreduce

I am looking for a parallel version of the aggregate() function, and it looks like http://cran.r-project.org/web/packages/mapReduce/mapReduce.pdf combined with http://cran.r-project.org/web/packages/multicore/multicore.pdf is exactly what I want.

So, as a test, I created a dataset with 10M records:

blockSize <- 5000
records <- blockSize * 2000
df <- data.frame(id=1:records, value=rnorm(records))
df$period <- round(df$id/blockSize)
# now I want to aggregate by period and return the mean of every block:
x <- aggregate(value ~ period, data=df, function(x) { mean(x) })
# with mapReduce it can be done like this:
library(multicore)
library(mapReduce)
jobId <- mcparallel(mapReduce(map=period, mean(value), data=df))
y <- collect(jobId)

But somehow it still does not use all 4 CPU cores on my laptop:

$ top
02:00:35 up 3 days, 18:14,  3 users,  load average: 1,61, 1,20, 0,79
Tasks: 237 total,   2 running, 235 sleeping,   0 stopped,   0 zombie
%Cpu0  : 17,4 us,  5,1 sy,  0,0 ni, 74,3 id,  0,0 wa,  0,0 hi,  3,2 si,  0,0 st
%Cpu1  : 13,4 us,  6,9 sy,  0,0 ni, 79,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  : 21,3 us, 32,3 sy,  0,0 ni, 46,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 17,0 us, 36,0 sy,  0,0 ni, 47,0 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
KiB Mem:   3989664 total,  3298340 used,   691324 free,    27248 buffers
KiB Swap:  7580668 total,  1154164 used,  6426504 free,   320360 cached

PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
459 myuser    20   0 1850m 1,8g 1120 R  99,1 46,4   0:37.43 R
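For reference, the kind of explicit per-period splitting I was hoping mapReduce would handle for me would look roughly like this, using mclapply from multicore (a rough, untested sketch; mc.cores = 4 matches my laptop, and the variable names are only for illustration):

library(multicore)

# split the values into one chunk per period and let mclapply fork workers
chunks <- split(df$value, df$period)
means  <- mclapply(chunks, mean, mc.cores = 4)
y2 <- data.frame(period = as.numeric(names(chunks)), value = unlist(means))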

I am using R 2.15.1:

R version 2.15.1 (2012-06-22) -- "Roasted Marshmallows"
Copyright (C) 2012 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i686-pc-linux-gnu (32-bit)

What am I doing wrong, and how can I use multiple cores to aggregate a large dataset in R?

Thanks.

mne*_*nel 5

How do you aggregate a huge dataset in R?

Use data.table:

library(data.table)


DT <- data.table(df)
setkey(DT, period)

DT[, list(value = mean(value)), by = period]

This will not use multiple cores, but it will be very fast and memory efficient.
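
To see the difference on the 10M-row df from the question, a comparison along these lines should do (just a sketch; timings will vary with your machine):

library(data.table)

DT <- data.table(df)   # same setup as above
setkey(DT, period)

system.time(aggregate(value ~ period, data = df, mean))       # base R
system.time(DT[, list(value = mean(value)), by = period])     # data.table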