maj*_*jom · Tags: api, parallel-processing, r, geturl, httr
I am querying Freebase to get genre information for roughly 10,000 films.
After reading about how to optimize scraping with getURL() in R, I tried to execute the requests in parallel. However, I failed — see below. Besides parallelization, I have also read that httr might be a better alternative to RCurl.
My questions are: a) is it possible to speed up the API calls by using a parallel version of the loop below (on a WINDOWS machine)? b) are there alternatives to getURL, such as GET in the httr package?
library(RCurl)
library(jsonlite)
library(foreach)
library(doSNOW)
df <- data.frame(film=c("Terminator", "Die Hard", "Philadelphia", "A Perfect World", "The Parade", "ParaNorman", "Passengers", "Pink Cadillac", "Pleasantville", "Police Academy", "The Polar Express", "Platoon"), genre=NA)
f_query_freebase <- function(film.title){
    request <- paste0("https://www.googleapis.com/freebase/v1/search?",
                      "filter=", paste0("(all alias{full}:", "\"", film.title, "\"", " type:\"/film/film\")"),
                      "&indent=TRUE",
                      "&limit=1",
                      "&output=(/film/film/genre)")
    temp <- getURL(URLencode(request), ssl.verifypeer = FALSE)
    data <- fromJSON(temp, simplifyVector=FALSE)
    genre <- paste(sapply(data$result[[1]]$output$`/film/film/genre`[[1]],
                          function(x){as.character(x$name)}), collapse=" | ")
    return(genre)
}
# Non-parallel version
# ----------------------------------
for (i in df$film){
    df$genre[which(df$film==i)] <- f_query_freebase(i)
}
# Parallel version - Does not work
# ----------------------------------
# Set up parallel computing
cl <- makeCluster(2)
registerDoSNOW(cl)
foreach(i=df$film) %dopar% {
    df$genre[which(df$film==i)] <- f_query_freebase(i)
}
stopCluster(cl)
# --> I get the following error: "Error in { : task 1 failed", further saying that it cannot find the function "getURL".
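For reference, that error is typical of foreach: the doSNOW workers are fresh R sessions, so they have not loaded RCurl (hence "cannot find the function getURL"), and assignments into df inside %dopar% happen in each worker's own copy and are lost. A sketch of a pattern that avoids both problems (my assumption, untested against the since-retired Freebase API) loads the packages on the workers via `.packages` and collects the loop's return values instead of assigning from inside it:

```r
library(RCurl)
library(jsonlite)
library(foreach)
library(doSNOW)

# df and f_query_freebase are assumed to be defined as in the question;
# foreach auto-exports f_query_freebase from the calling environment.
cl <- makeCluster(2)
registerDoSNOW(cl)

# .packages loads RCurl/jsonlite inside each worker session;
# the per-iteration return values are combined with c() into one vector.
df$genre <- foreach(i = df$film, .combine = c,
                    .packages = c("RCurl", "jsonlite")) %dopar% {
    f_query_freebase(i)
}

stopCluster(cl)
```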
This doesn't give you parallel requests within a single R session; however, I've used it to run >1 simultaneous request across multiple R sessions (i.e. in parallel), so it may be useful.
You need to split the process into a few parts:
Note: this happened to run on Windows, so I used powershell. On a Mac this could be written in bash.
Use a single powershell script to start several instances of the R process (here we split the work across 3 processes):
For example, save a plain-text file with a .ps1 file extension; you can double-click it to run it, or schedule it with Task Scheduler / cron:
start powershell { cd C:\Users\Administrator\Desktop; Rscript extract.R 1; TIMEOUT 20000 }
start powershell { cd C:\Users\Administrator\Desktop; Rscript extract.R 2; TIMEOUT 20000 }
start powershell { cd C:\Users\Administrator\Desktop; Rscript extract.R 3; TIMEOUT 20000 }
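Since the note above mentions bash on a Mac, a rough equivalent of that .ps1 script might look like this (my sketch, assuming the same extract.R sits on the Desktop):

```shell
#!/bin/bash
# Launch three background R processes, each with a different argument,
# then block until all of them have finished.
cd ~/Desktop
Rscript extract.R 1 &
Rscript extract.R 2 &
Rscript extract.R 3 &
wait
```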
What does it do? Each line starts a new powershell instance that changes to the working directory, runs extract.R, and passes the R script a single argument (1, 2, or 3). Each R process can then look like this:
# Get the command line argument
arguments <- commandArgs(trailingOnly = TRUE)
process_number <- as.numeric(arguments[1])

api_calls <- read.csv("api_calls.csv")

# Work out which API calls this R process should make
# (e.g. process 1 takes rows 1, 4, 7, ...; process 2 takes rows 2, 5, 8, ...)
indices <- seq(process_number, nrow(api_calls), 3)
api_calls_for_this_process_only <- api_calls[indices, ] # this subsets 1/3 of the API calls
# (the other two processes will take care of the remaining calls)

# Now, make the API calls as usual using rvest/jsonlite or whatever you use for that
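The snippet above stops at making the API calls; one way to gather everything back together afterwards (my own addition, with hypothetical file names) is to have each process write its results to its own file, keyed by its process number, and combine the files once all three have finished:

```r
# In extract.R, after the API calls: write this process's results to
# its own file so the three processes never write to the same path.
write.csv(api_calls_for_this_process_only,
          paste0("results_", process_number, ".csv"),
          row.names = FALSE)

# In a separate script, run once all three processes are done:
files <- list.files(pattern = "^results_[0-9]+\\.csv$")
combined <- do.call(rbind, lapply(files, read.csv))
write.csv(combined, "results_all.csv", row.names = FALSE)
```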