Changing the number of cores during a parallel computation in R

Meg*_*ron 5 parallel-processing r

I am using the parallel package in R to run parallelized code via mclapply, with a predefined number of cores passed as an argument.

If I have a job that will run for several days, is there a way to write (or wrap) my mclapply call so that it uses fewer cores during the server's peak hours and ramps usage back up during off-peak hours?
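For reference, my current call looks roughly like the sketch below (my_slow_function and my_inputs are placeholders standing in for the real workload):

library(parallel)

#placeholder worker function and inputs, standing in for the real job
my_slow_function=function(x) {Sys.sleep(1); x^2}
my_inputs=as.list(1:100)

#the core count is fixed once, up front, for the entire multi-day run
results=mclapply(my_inputs,my_slow_function,mc.cores=8)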

cry*_*111 3

I guess the easiest solution is to split the data into smaller chunks and run mclapply on each chunk separately. You can then set the number of cores for each mclapply call individually. This should work best for computations whose per-element runtime varies little.

I put together a simple example of what this could look like:

library(parallel)
library(lubridate)

#you would have to come up with your own function
#for the number of cores to be used
determine_cores=function(hh) {
  #hh will be the hour of the day
  if (hh>17|hh<9) {
    return(4)
  } else {
    return(2)
  }
}

#prepare some sample data
set.seed(1234)
myData=lapply(seq(1e-1,1,1e-1),function(x) rnorm(1e7,0,x))

#calculate SD with mclapply WITHOUT splitting the data into chunks
#we need this for comparison
compRes=mclapply(myData,function(x) sd(x),mc.cores=4)

set.seed(1234)
#this will hold the results of the separate mclapply calls
res=list()
#starting position within myData
chunk_start_pos=1
calc_flag=TRUE

while(calc_flag) {
  #use the function defined above to determine how many cores we may use
  core_num=determine_cores(lubridate::hour(Sys.time()))
  #determine end position of data chunk
  chunk_end_pos=chunk_start_pos+core_num-1
  if (chunk_end_pos>=length(myData)) {
    chunk_end_pos=length(myData)
    calc_flag=FALSE
  }
  message("Calculating elements ",chunk_start_pos," to ",chunk_end_pos)
  #mclapply call on data chunk
  #store data in res
  res[[length(res)+1]]=mclapply(myData[chunk_start_pos:chunk_end_pos],
                                function(x) sd(x),
                                mc.preschedule=FALSE,
                                mc.cores=core_num)
  #advance the start position to the first element after this chunk
  chunk_start_pos=chunk_end_pos+1
}

#let's compare the results
all.equal(compRes,unlist(res,recursive=FALSE))
#TRUE
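If you need this pattern more than once, you could fold the loop into a small helper that takes the core-count function as an argument. The sketch below is just a repackaging of the loop above; the name chunked_mclapply and its signature are my own invention, not part of the parallel package:

library(parallel)
library(lubridate)

#apply FUN over X in chunks, re-reading the permitted core count
#(via cores_fun) right before each chunk is dispatched
chunked_mclapply=function(X,FUN,cores_fun,...) {
  res=list()
  start=1
  while (start<=length(X)) {
    n_cores=cores_fun(lubridate::hour(Sys.time()))
    end=min(start+n_cores-1,length(X))
    res[[length(res)+1]]=mclapply(X[start:end],FUN,
                                  mc.preschedule=FALSE,
                                  mc.cores=n_cores,...)
    start=end+1
  }
  unlist(res,recursive=FALSE)
}

#equivalent to the loop above:
#res2=chunked_mclapply(myData,sd,determine_cores)
#all.equal(compRes,res2) should also be TRUE

One thing to keep in mind: with this approach the job only adapts at chunk boundaries, so if a single chunk takes hours it will not react to load changes until that chunk finishes.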