Simulating data efficiently with data.table

Mik*_*han 15 memory performance r data.table

I am trying to simulate a new dataset from two smaller datasets. It is important to me that the marginal counts from those smaller datasets are preserved in the final dataset. Hopefully this reproducible example explains what I mean.

Set up fake data

library(data.table) # 1.10.5
set.seed(123)
meanVal <- 40

demoDat

Here I simulate some age and gender data. Every Location will always have 2 gender levels and 100 age levels.

demoDat <- CJ(with(CJ(letters,letters[1:5]), paste0(V1,V2)), c("M","F"), 0:99)
setnames(demoDat, c("Location","Gender","Age"))
demoDat[, Val := rpois(.N, meanVal)]


       Location Gender Age Val
    1:       aa      F   0  36
    2:       aa      F   1  47
    3:       aa      F   2  29
   ---                        
25998:       ze      M  97  45
25999:       ze      M  98  38
26000:       ze      M  99  39

timeDat

This code simulates the time dimension of the data. In this case the dates are a week apart, but the real data does not have to follow that pattern; some weeks may be missing.

timeDat <- with(demoDat, CJ(unique(Location), seq(from=as.Date("2016-01-01"),by=7,length.out = 52)))
setnames(timeDat, c("Location","Date"))
totals <- demoDat[, .(Val=sum(Val)), by=.(Location)]
timeDat[totals, Val := rmultinom(1:.N, i.Val, prob=rep(1,.N)), by=.EACHI,on=.(Location)]

      Location       Date Val
   1:       aa 2016-01-01 176
   2:       aa 2016-01-08 143
   3:       aa 2016-01-15 143
  ---                        
6758:       ze 2016-12-09 165
6759:       ze 2016-12-16 142
6760:       ze 2016-12-23 156

Quick reconciliation

For each Location, the Val column should sum to the same total in the demoDat and timeDat datasets.

timeDat[, sum(Val), by=.(Location)][order(-V1)][1:5]
#    Location   V1
# 1:       jb 8229
# 2:       xb 8223
# 3:       ad 8179
# 4:       nc 8176
# 5:       gd 8173
demoDat[, sum(Val), by=.(Location)][order(-V1)][1:5]
#    Location   V1
# 1:       jb 8229
# 2:       xb 8223
# 3:       ad 8179
# 4:       nc 8176
# 5:       gd 8173

Desired final dataset

Next, I want to create a dataset with Age, Gender, and Date variables. However, I must preserve the marginal totals of Val from both the demoDat and timeDat datasets.

I have a strategy to accomplish this, but it uses quite a bit of RAM. Is there another strategy I could use that expands the data one group at a time, perhaps with .EACHI?

Expand both datasets and merge

This is the expensive part of the operation: expanding each dataset so that the number of rows equals sum(Val). That can get costly when sum(Val) > 500,000,000, especially since the operation is repeated for the second dataset. I am hoping to use .EACHI so that only the data within each group gets expanded, which should drastically reduce the memory footprint.

library(pryr)
memUsed <- mem_used() 
demoDatBig <- demoDat[rep(1:.N, Val), .(Location, Gender, Age, ID=rowid(Location))]
timeDatBig <- timeDat[rep(1:.N, Val), .(Location, Date, ID=rowid(Location))]
demoDatBig[timeDatBig, Date := i.Date, on=.(Location, ID)]
finalBigDat <- demoDatBig[, .(Val=.N), by=.(Location, Gender, Age, Date)]
mem_used() - memUsed
# 47 MB

So this operation takes 47 MB of RAM, but that grows dramatically as meanVal increases. I would like it to use only as much RAM as the same operation needs for the single largest Location/ID group. I think this is possible with .EACHI, but I am not sure how.
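
As an illustration of the group-at-a-time idea, here is a minimal sketch that simply loops over Locations rather than using .EACHI; the names expandOneLocation and finalDatByLoc are made up for this example. Each iteration only ever expands a single Location, so the peak allocation is bounded by the largest Location's sum(Val) instead of the grand total.

expandOneLocation <- function(loc) {
  # expand only this Location's rows from each source table
  dBig <- demoDat[Location == loc][rep(1:.N, Val), .(Location, Gender, Age, ID=rowid(Location))]
  tBig <- timeDat[Location == loc][rep(1:.N, Val), .(Location, Date, ID=rowid(Location))]
  # line the two expansions up by ID, then collapse straight back to counts
  dBig[tBig, Date := i.Date, on=.(Location, ID)]
  dBig[, .(Val=.N), by=.(Location, Gender, Age, Date)]
}
finalDatByLoc <- rbindlist(lapply(unique(demoDat$Location), expandOneLocation))

The collapsed per-Location summaries that accumulate in the list stay small, so this trades one large allocation for many bounded ones.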

Resulting data.table

       Location Gender Age       Date Val
    1:       aa      F   0 2016-01-01  36
    2:       aa      F   1 2016-01-01  47
    3:       aa      F   2 2016-01-01  29
    4:       aa      F   3 2016-01-01  40
    5:       aa      F   4 2016-01-01  24
   ---                                   
32430:       ze      M  96 2016-12-16   7
32431:       ze      M  96 2016-12-23  34
32432:       ze      M  97 2016-12-23  45
32433:       ze      M  98 2016-12-23  38
32434:       ze      M  99 2016-12-23  39

The solution should hopefully pass these tests:

#### Test 1
test1 <- finalBigDat[, .(Val = sum(Val)), by=.(Location, Gender, Age)]
test1[demoDat, ValCheck := i.Val, on=.(Location, Gender, Age)]
test1[Val != ValCheck]
#Empty data.table (0 rows) of 5 cols: Location,Gender,Age,Val,ValCheck

#### Test 2
test2 <- finalBigDat[, .(Val = sum(Val)), by=.(Location, Date)]
test2[timeDat, ValCheck := i.Val, on=.(Location, Date)]
test2[Val != ValCheck]
#Empty data.table (0 rows) of 4 cols: Location,Date,Val,ValCheck

Results

I reviewed both solutions and tracked the memory usage and system timings of each. Both solutions are fantastic and a huge upgrade over what I currently have. @swihart's solution scales incredibly well to large meanVal, so I am choosing it as the accepted answer. Heather's answer will help in situations where meanVal is not as large. meanVal values of all sizes come up regularly, so I need both. (A sketch of the measurement bookkeeping follows the table.)

    meanVal            Ans             Time   Mem (MB)    Rows
 1:      40     Mike.Gahan   0.6245470 secs   44.54293   32434
 2:      40 Heather Turner   0.6391492 secs   38.65355 1352000
 3:      40        swihart  11.1602619 secs   66.97550 1352000
 4:     400     Mike.Gahan    2.593275 secs  437.23832   32611
 5:     400 Heather Turner    1.303993 secs   38.79871 1352000
 6:     400        swihart   11.736836 secs   66.97550 1352000
 7:    4000     Mike.Gahan   30.390986 secs 4364.51501   32629
 8:    4000 Heather Turner    6.279249 secs   38.79871 1352000
 9:    4000        swihart   11.427965 secs   66.97550 1352000
10:   20000     Mike.Gahan   -------did not finish----------
11:   20000 Heather Turner    23.78948 secs   36.30617 1352000
12:   20000        swihart    11.53811 secs   66.97550 1352000
13:   30000     Mike.Gahan   -------did not finish----------
14:   30000 Heather Turner    537.6459 secs   57.15375 1352000
15:   30000        swihart   11.970013 secs   66.97474 1352000
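
For reference, a rough sketch of the bookkeeping behind the table above (not the exact script that produced it); runCandidateSolution() is a hypothetical stand-in for running one of the three approaches:

library(pryr)
startMem  <- mem_used()
startTime <- Sys.time()
res <- runCandidateSolution()                        # hypothetical stand-in
list(Time = Sys.time() - startTime,                  # elapsed seconds
     Mem  = as.numeric(mem_used() - startMem)/2^20,  # MB
     Rows = nrow(res))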

swi*_*art 1

I ran your approach for different sizes of meanVal and saw the scaling problem with the method that generates demoDatBig and timeDatBig. I have an approach (at the bottom of this post) that generates cartDat, a Cartesian cross of the dates and gender-age groups, which is robust to increases in meanVal and sum(Val), as shown in this table of object.size() results for the data.tables under discussion:

| meanVal  | sum(Val) | demoDatBig (MB)  | timeDatBig (MB)  | cartDat (MB)  |
|----------|----------|------------------|------------------|---------------|
|      40  |     1e6  |            27.8  |            15.9  |          67.1 |
|     400  |     1e7  |           277.6  |           158.7  |          67.1 |
|   4,000  |     1e8  |         2,776.8  |         1,586.8  |          67.1 |
|  40,000  |     1e9  |        27,770.3  |        15,868.7  |          67.1 |

The key to my approach is to generate a Cartesian cross between the un-expanded source data.tables demoDat and timeDat, and then use an "iterative multivariate hypergeometric sampling" (IMHS) scheme to preserve the margins of both source data.tables. To have IMHS functionality in R, I took the R package BiasedUrn from CRAN and recompiled it so that it can handle 52 colors (dates, in our application). If the maximum number of dates for a given location needs adjusting, let me know and I will recompile. Hence the R package BiasedUrn52, which is on GitHub.
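
To illustrate a single IMHS step with the data above (a toy example, not the full routine, which is at the bottom of this post): draw one gender-age group's total, say the 36 cases of the aa/F/0 group, across the 52 weekly date totals of location aa with equal odds, then subtract the draw from those totals before the next group is sampled.

library(BiasedUrn52)  # devtools::install_github("swihart/BiasedUrn52")
dateTotals <- timeDat[.("aa"), Val, on=.(Location)]       # the 52 remaining date totals for "aa"
draws <- rMFNCHypergeo(1, dateTotals, 36, rep(1/52, 52))  # 36 = demoDat Val for aa/F/0
sum(draws)                # 36: the gender-age margin is preserved
all(draws <= dateTotals)  # TRUE: no date can exceed its remaining total
dateTotals <- dateTotals - draws                          # what the next group draws from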

My solution passes test1 and test2 and preserves the margins. However, it seems to spread the gender-age margins across more dates than the OP procedure does. Allow me to elaborate.

If we take the first 5 rows of demoDat:

> head(demoDat,5)
   Location Gender Age Val
1:       aa      F   0  36
2:       aa      F   1  47
3:       aa      F   2  29
4:       aa      F   3  40
5:       aa      F   4  50

and the first 6 rows of finalBigDat:

> head(finalBigDat,6)
   Location Gender Age       Date Val
1:       aa      F   0 2016-01-01  36
2:       aa      F   1 2016-01-01  47
3:       aa      F   2 2016-01-01  29
4:       aa      F   3 2016-01-01  40
5:       aa      F   4 2016-01-01  24
6:       aa      F   4 2016-01-08  26

we see that all 36 cases of the F-0 gender-age group are attributed to 2016-01-01, whereas the 50 cases of the F-4 group are spread across 2016-01-01 (24) and 2016-01-08 (26), but no other dates (50 = 24 + 26).

The IMHS approach distributes the margins across more dates (I am not sure whether that is required; please let me know). For example, take the 36 in the F-0 group: instead of putting all 36 on 2016-01-01 as finalBigDat does, IMHS spreads them across many more dates (see seq.Draws):

> cartDat[Location=='aa' & Gender=="F" & Age==0,
+         c("Location", "Gender", "Age", "Date", "seq.Draws"),
+         with=FALSE]
    Location Gender Age       Date seq.Draws
 1:       aa      F   0 2016-01-01         1
 2:       aa      F   0 2016-01-08         0
 3:       aa      F   0 2016-01-15         1
 4:       aa      F   0 2016-01-22         1
 5:       aa      F   0 2016-01-29         0
 6:       aa      F   0 2016-02-05         0
 7:       aa      F   0 2016-02-12         0
 8:       aa      F   0 2016-02-19         0
 9:       aa      F   0 2016-02-26         0
10:       aa      F   0 2016-03-04         0
11:       aa      F   0 2016-03-11         0
12:       aa      F   0 2016-03-18         0
13:       aa      F   0 2016-03-25         3
14:       aa      F   0 2016-04-01         1
15:       aa      F   0 2016-04-08         0
16:       aa      F   0 2016-04-15         0
17:       aa      F   0 2016-04-22         1
18:       aa      F   0 2016-04-29         1
19:       aa      F   0 2016-05-06         0
20:       aa      F   0 2016-05-13         2
21:       aa      F   0 2016-05-20         0
22:       aa      F   0 2016-05-27         0
23:       aa      F   0 2016-06-03         0
24:       aa      F   0 2016-06-10         0
25:       aa      F   0 2016-06-17         1
26:       aa      F   0 2016-06-24         2
27:       aa      F   0 2016-07-01         0
28:       aa      F   0 2016-07-08         0
29:       aa      F   0 2016-07-15         0
30:       aa      F   0 2016-07-22         1
31:       aa      F   0 2016-07-29         0
32:       aa      F   0 2016-08-05         1
33:       aa      F   0 2016-08-12         1
34:       aa      F   0 2016-08-19         1
35:       aa      F   0 2016-08-26         1
36:       aa      F   0 2016-09-02         1
37:       aa      F   0 2016-09-09         2
38:       aa      F   0 2016-09-16         0
39:       aa      F   0 2016-09-23         1
40:       aa      F   0 2016-09-30         0
41:       aa      F   0 2016-10-07         2
42:       aa      F   0 2016-10-14         3
43:       aa      F   0 2016-10-21         0
44:       aa      F   0 2016-10-28         1
45:       aa      F   0 2016-11-04         1
46:       aa      F   0 2016-11-11         1
47:       aa      F   0 2016-11-18         0
48:       aa      F   0 2016-11-25         0
49:       aa      F   0 2016-12-02         2
50:       aa      F   0 2016-12-09         1
51:       aa      F   0 2016-12-16         1
52:       aa      F   0 2016-12-23         1
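
A quick check of the printed draws confirms that this redistribution still respects the demoDat margin for the group:

cartDat[Location=="aa" & Gender=="F" & Age==0, sum(seq.Draws)]
# [1] 36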

Noting this difference in distribution between the OP approach and the IMHS cartDat approach is just an aside. The margins are preserved, as shown below.

The timeDat margins are preserved:

> cartDat[, sum(seq.Draws), by=.(Location, Date)]
      Location       Date  V1
   1:       aa 2016-01-01 176
   2:       aa 2016-01-08 143
   3:       aa 2016-01-15 143
   4:       aa 2016-01-22 154
   5:       aa 2016-01-29 174
  ---                        
6756:       ze 2016-11-25 169
6757:       ze 2016-12-02 148
6758:       ze 2016-12-09 165
6759:       ze 2016-12-16 142
6760:       ze 2016-12-23 156
> timeDat
      Location       Date Val
   1:       aa 2016-01-01 176
   2:       aa 2016-01-08 143
   3:       aa 2016-01-15 143
   4:       aa 2016-01-22 154
   5:       aa 2016-01-29 174
  ---                        
6756:       ze 2016-11-25 169
6757:       ze 2016-12-02 148
6758:       ze 2016-12-09 165
6759:       ze 2016-12-16 142
6760:       ze 2016-12-23 156

as are the demoDat margins:

> cartDat[, sum(seq.Draws), by=.(Location, Gender, Age)]
       Location Gender Age V1
    1:       aa      F   0 36
    2:       aa      F   1 47
    3:       aa      F   2 29
    4:       aa      F   3 40
    5:       aa      F   4 50
   ---                       
25996:       ze      M  95 48
25997:       ze      M  96 41
25998:       ze      M  97 45
25999:       ze      M  98 38
26000:       ze      M  99 39
> demoDat
       Location Gender Age Val
    1:       aa      F   0  36
    2:       aa      F   1  47
    3:       aa      F   2  29
    4:       aa      F   3  40
    5:       aa      F   4  50
   ---                        
25996:       ze      M  95  48
25997:       ze      M  96  41
25998:       ze      M  97  45
25999:       ze      M  98  38
26000:       ze      M  99  39

Here is the IMHS cartDat approach, along with some tests:

#Cartesian cross of demoDat and timeDat
devtools::install_github("swihart/BiasedUrn52")
library(BiasedUrn52)
setkey(timeDat, Location)
setkey(demoDat, Location, Gender, Age)
cartDat <- demoDat[timeDat, allow.cartesian=TRUE]
setkeyv(cartDat, key(demoDat))
cartDat
cartDat[,group:=.GRP,by=c("Gender", "Age") ]
cartDat[,demoDat.Val:=Val]
cartDat[,timeDat.Val:=i.Val]
setcolorder(cartDat, c("Location", 
                       "group",
                       "Gender",
                       "Age",
                       "Val",
                       "demoDat.Val",
                       "Date",
                       "timeDat.Val",
                       "i.Val"))

#Define Iterative Multivariate Hypergeometric Sampling function
#  Val   = the demoDat (gender-age) margin, constant within each group
#  i.Val = the remaining timeDat (date) totals, updated after every group is drawn
#  group = gender-age group index within the Location
imhs <- function(.N, Val, i.Val, group){

  grp.ind <- unique(group)
  num.grp <- max(group)
  grp.size <- as.numeric(table(group))  # number of dates per group (52 here)

  draws <- rep(NA, length(group))
  for(grp in grp.ind){

    if(grp==1){
      # draw the first group's total across the dates, with equal odds for every date
      draws[group==1] = rMFNCHypergeo(1, 
                                      i.Val[group==1], 
                                      Val[group==1][1], 
                                      rep(1/grp.size[grp.ind==1],grp.size[grp.ind==1])
      )
      # remove what was drawn from the date totals available to the next group
      i.Val[group==2]= i.Val[group==1]-draws[group==1]
    }else{
      draws[group==grp] = rMFNCHypergeo(1, 
                                        i.Val[group==grp], 
                                        Val[group==grp][1], 
                                        rep(1/grp.size[grp.ind==grp],grp.size[grp.ind==grp])
      )
      if(grp<=num.grp){
        i.Val[group==(grp+1)]= i.Val[group==grp]-draws[group==grp]
      }
    }

  }

  list(i.Val, draws)
}


# run it the data.table way:
cartDat[,
        c("seq.Val", "seq.Draws") := imhs(.N, demoDat.Val, timeDat.Val, group),        
        by=c("Location") ]

# take a look:
cartDat

# reconciliation
demoDat[, sum(Val), by=.(Location)][order(-V1)]
cartDat[, sum(seq.Draws), by=.(Location)][order(-V1)]

# do the checks for the margins:
cartDat[, sum(seq.Draws), by=.(Location, Date)]
timeDat
cartDat[, sum(seq.Draws), by=.(Location, Gender, Age)]
demoDat


# such different sizes due to distributing across more dates:
nrow(demoDat)
nrow(cartDat)
nrow(cartDat[seq.Draws != 0])
nrow(finalBigDat)
nrow(cartDat[seq.Draws != 0])/nrow(finalBigDat)

# attain and print object sizes for cartDat
print(object.size(cartDat), units = "Mb")
print(object.size(cartDat[seq.Draws!=0]), units="Mb")

# attain and print object sizes for demoDatBig, timeDatBig, finalBigData
print(object.size(demoDatBig), units = "Mb")
print(object.size(timeDatBig), units = "Mb")
print(object.size(finalBigDat), units = "Mb")



## (OP) The solution would pass these tests:
finalBigDat2 <- cartDat

#### Test 1 (change to sum(seq.Draws))
test1 <- finalBigDat2[, .(Val = sum(seq.Draws)), by=.(Location, Gender, Age)]
test1[demoDat, ValCheck := i.Val, on=.(Location, Gender, Age)]
test1[Val != ValCheck]
#Empty data.table (0 rows) of 5 cols: Location,Gender,Age,Val,ValCheck

#### Test 2 (change to sum(seq.Draws))
test2 <- finalBigDat2[, .(Val = sum(seq.Draws)), by=.(Location, Date)]
test2[timeDat, ValCheck := i.Val, on=.(Location, Date)]
test2[Val != ValCheck]
#Empty data.table (0 rows) of 4 cols: Location,Date,Val,ValCheck