R WebCrawler - XML content does not seem to be XML:

Ann*_*lee 10 xml statistics r

I took the following code from the rNomads package and modified it a bit.

When I first ran it, I got:

> WebCrawler(url = "www.bikeforums.net")
[1] "www.bikeforums.net"
[1] "www.bikeforums.net"

Warning message:
XML content does not seem to be XML: 'www.bikeforums.net' 

Here is the code:

require("XML")

# cleaning workspace
rm(list = ls())

# This function recursively searches for links in the given url and follows every single link.
# It returns a list of the final (dead end) URLs.
# depth - How many links to return. This avoids having to recursively scan hundreds of links. Defaults to NULL, which returns everything.
WebCrawler <- function(url, depth = NULL, verbose = TRUE) {

  doc <- XML::htmlParse(url)
  links <- XML::xpathSApply(doc, "//a/@href")
  XML::free(doc)
  if(is.null(links)) {
    if(verbose) {
      print(url)
    }
    return(url)
  } else {
    urls.out <- vector("list", length = length(links))
    for(link in links) {
      if(!is.null(depth)) {
        if(length(unlist(urls.out)) >= depth) {
          break
        }
      }
      urls.out[[link]] <- WebCrawler(link, depth = depth, verbose = verbose)
    }
    return(urls.out)
  }
}


# Execution
WebCrawler(url = "www.bikeforums.net")

Any suggestions as to what I am doing wrong?

UPDATE

Hi everybody,

I started this bounty because I think the R community needs a function like this that can crawl web pages. The solution that wins the bounty should show a function that takes two arguments:

WebCrawler(url = "www.bikeforums.net", xpath = "\\title" )
  • As output I would like a data frame with two columns: the link to the website, and, if the example xpath expression matches, a column with the matched expression.
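For illustration only, a minimal sketch of the requested interface could look like the following. This is not a submitted solution: it checks a single page rather than crawling, and the function name and column names are my own assumptions.

```r
require("XML")

# Hedged sketch: fetch one page, evaluate the given XPath expression,
# and return the requested two-column data frame (link, match).
# A full solution would call this for every link found by the crawler.
CrawlPage <- function(url, xpath) {
  doc  <- XML::htmlParse(url)
  hits <- XML::xpathSApply(doc, xpath, XML::xmlValue)
  XML::free(doc)
  data.frame(link  = url,
             match = if (length(hits) > 0) hits[[1]] else NA_character_,
             stringsAsFactors = FALSE)
}

# Example (network access required):
# CrawlPage("http://www.bikeforums.net", "//title")
```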

I highly appreciate your replies!

dim*_*_ps 2

In your function, insert the following code after the line links <- XML::xpathSApply(doc, "//a/@href"):

links <- XML::xpathSApply(doc, "//a/@href")
links1 <- links[grepl("http", links)] # As @Floo0 pointed out this is to capture non relative links
links2 <- paste0(url, links[!grepl("http", links)]) # and to capture relative links
links <- c(links1, links2)
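For illustration, the normalization above can be wrapped in a small helper (the function name is mine) and checked on a made-up pair of links:

```r
# Hypothetical helper wrapping the answer's normalization: absolute
# links (containing "http") are kept, relative links are prefixed
# with the page URL.
normalize_links <- function(url, links) {
  absolute <- links[grepl("http", links)]
  relative <- paste0(url, links[!grepl("http", links)])
  c(absolute, relative)
}

normalize_links("http://www.bikeforums.net",
                c("/forum/index.php", "http://example.com"))
# returns: "http://example.com" "http://www.bikeforums.net/forum/index.php"
```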

And also keep in mind that there are URLs like http://www......

Also, you are not updating your urls.out list. The way you have it, it always stays an empty list whose length equals that of links.
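To make that last point concrete: indexing urls.out by position (seq_along) rather than by the link string fills the preallocated slots. A self-contained toy version, with the recursive WebCrawler() call replaced by a stand-in:

```r
links <- c("http://a.example", "http://b.example")

# Preallocate as in the question, then fill slot i on iteration i.
urls.out <- vector("list", length = length(links))
for (i in seq_along(links)) {
  urls.out[[i]] <- links[i]  # stand-in for WebCrawler(links[i], ...)
}

length(urls.out)  # still 2, and both slots are now filled
```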