Jam*_*igh 7 | tags: html, r, web-scraping, rvest
I'm working on a web-scraping project to search for specific wines and return a list of local wines of that variety. The problem I'm running into is multi-page results. The code below is a basic example of what I'm working with:
library(rvest)

url2      <- "http://www.winemag.com/?s=washington+merlot&search_type=reviews"
htmlpage2 <- read_html(url2)
names2    <- html_nodes(htmlpage2, ".review-listing .title")
Wines2    <- html_text(names2)
For this particular search there are 39 pages of results. I know the URL changes to http://www.winemag.com/?s=washington%20merlot&drink_type=wine&page=2, but is there an easy way to have the code loop through all of the returned pages and compile the results from all 39 pages into a single list? I know I could write out all of the URLs manually, but that seems like overkill.
hrb*_*str 16
You can also do something like this with purrr::map_df() if you want all of the info as a data.frame:
library(rvest)
library(purrr)

url_base <- "http://www.winemag.com/?s=washington merlot&drink_type=wine&page=%d"

map_df(1:39, function(i) {

  # simple but effective progress indicator
  cat(".")

  pg <- read_html(sprintf(url_base, i))

  data.frame(wine        = html_text(html_nodes(pg, ".review-listing .title")),
             excerpt     = html_text(html_nodes(pg, "div.excerpt")),
             rating      = gsub(" Points", "", html_text(html_nodes(pg, "span.rating"))),
             appellation = html_text(html_nodes(pg, "span.appellation")),
             price       = gsub("\\$", "", html_text(html_nodes(pg, "span.price"))),
             stringsAsFactors = FALSE)

}) -> wines
dplyr::glimpse(wines)
## Observations: 1,170
## Variables: 5
## $ wine (chr) "Charles Smith 2012 Royal City Syrah (Columbia Valley (WA)...
## $ excerpt (chr) "Green olive, green stem and fresh herb aromas are at the ...
## $ rating (chr) "96", "95", "94", "93", "93", "93", "93", "93", "93", "93"...
## $ appellation (chr) "Columbia Valley", "Columbia Valley", "Columbia Valley", "...
## $ price (chr) "140", "70", "70", "20", "70", "40", "135", "50", "60", "3...
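Note that rating and price come back as character columns. If you want to work with them numerically, here is a minimal follow-up sketch; it assumes the wines data frame produced by the map_df() call above and adds dplyr as a dependency:

library(dplyr)

wines <- wines %>%
  mutate(rating = as.numeric(rating),   # "96"  -> 96
         price  = as.numeric(price))    # "140" -> 140

# e.g. the ten highest-rated results under $30
wines %>%
  filter(price < 30) %>%
  arrange(desc(rating)) %>%
  head(10)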
Alternatively, you can lapply across a vector of the URLs, which you can build by pasting the base URL onto a sequence of page numbers:
library(rvest)

wines <- lapply(paste0('http://www.winemag.com/?s=washington%20merlot&drink_type=wine&page=', 1:39),
                function(url) {
                  url %>%
                    read_html() %>%
                    html_nodes(".review-listing .title") %>%
                    html_text()
                })
The results are returned as a list with one element per page.
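If a single flat vector of titles is more convenient than a per-page list, a small sketch using base R's unlist() on the result above:

wines_vec <- unlist(wines)   # collapse the 39 per-page character vectors into one
length(wines_vec)            # total number of titles scraped
head(wines_vec)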