YouTube comment scraper returns limited results

tim*_*ham 23 r youtube-api web-scraping

Task:

I want to scrape all the YouTube comments from a given video.

I successfully modified the R code from a previous question (Scraping Youtube comments in R).

Here is the code:

library(RCurl)
library(XML)
x <- "https://gdata.youtube.com/feeds/api/videos/4H9pTgQY_mo/comments?orderby=published"
html = getURL(x)
doc  = htmlParse(html, asText=TRUE) 
txt  = xpathSApply(doc,
  "//body//text()[not(ancestor::script)][not(ancestor::style)][not(ancestor::noscript)]",
  xmlValue)

To use it, just replace the video ID (i.e. "4H9pTgQY_mo") with the one you need.

The problem:

The problem is that it does not return all of the comments. In fact, it always returns a vector with 283 elements, regardless of how many comments the video has.

Can anyone shed some light on what is going wrong here? It is incredibly frustrating. Thank you.

nru*_*ell 6

I was able to (mostly) achieve this using the latest version of the YouTube Data API and the R package httr. The basic approach I took was to send multiple GET requests to the appropriate URL, grabbing the data in batches of 100 (the maximum the API allows) - i.e.

base_url <- "https://www.googleapis.com/youtube/v3/commentThreads/"
api_opts <- list(
  part = "snippet",
  maxResults = 100,
  textFormat = "plainText",
  videoId = "4H9pTgQY_mo",  
  key = "my_google_developer_api_key",
  fields = "items,nextPageToken",
  orderBy = "published")

where key is, of course, your actual Google Developer API key.
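As an aside, if you'd rather not paste the key directly into your script, one option is to read it from an environment variable instead - just a convenience, assuming you've set a variable named YT_API_KEY (the name is arbitrary) beforehand:

api_opts$key <- Sys.getenv("YT_API_KEY")  # returns "" if the variable is unset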

The initial batch is retrieved like this:

init_results <- httr::content(httr::GET(base_url, query = api_opts))
##
R> names(init_results)
#[1] "nextPageToken" "items"
R> init_results$nextPageToken
#[1] "Cg0Q-YjT3bmSxQIgACgBEhQIABDI3ZWQkbzEAhjVneqH75u4AhgCIGQ="       
R> class(init_results)
#[1] "list"

The second element - items - is the actual result set from the first batch: it is a list of length 100, since we specified maxResults = 100 in the GET request. The first element - nextPageToken - is what we use to make sure each request returns the appropriate sequence of results. For example, we can get the next 100 results like this:

api_opts$pageToken <- gsub("\\=","",init_results$nextPageToken)  # strip the '=' characters before reusing the token
next_results <- httr::content(
    httr::GET(base_url, query = api_opts))
##
R> next_results$nextPageToken
#[1] "ChYQ-YjT3bmSxQIYyN2VkJG8xAIgACgCEhQIABDI3ZWQkbzEAhiSsMv-ivu0AhgCIMgB"

By supplying the nextPageToken returned by the previous request as the pageToken of the current request, we get a new nextPageToken to use in the request for the next batch of results.
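To make the pattern concrete, here is a minimal sketch of the full pagination loop, assuming the base_url and api_opts objects defined above. It stops when the response carries no nextPageToken (the API omits the token on the final page); the class below uses a different stopping condition (no growth in unique results):

all_items <- list()
repeat {
  res <- httr::content(httr::GET(base_url, query = api_opts))
  all_items <- c(all_items, res$items)
  if (is.null(res$nextPageToken)) break   # no token means this was the last page
  api_opts$pageToken <- gsub("\\=", "", res$nextPageToken)
}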


This is simple enough, but having to keep changing the value of nextPageToken by hand after every request we send is obviously tedious. Instead, I thought this would be a nice use case for a simple R5 (reference) class:

yt_scraper <- setRefClass(
  "yt_scraper",
  fields = list(
    base_url = "character",
    api_opts = "list",
    nextPageToken = "character",
    data = "list",
    unique_count = "numeric",
    done = "logical",
    core_df = "data.frame"),

  methods = list(
    # fetch one batch of up to 100 comment threads
    scrape = function() {
      opts <- api_opts
      if (nextPageToken != "") {
        opts$pageToken <- nextPageToken
      }

      res <- httr::content(
        httr::GET(base_url, query = opts))

      nextPageToken <<- gsub("\\=","",res$nextPageToken)
      data <<- c(data, res$items)
      unique_count <<- length(unique(data))
    },

    # keep requesting new pages until a scrape adds no new unique items
    scrape_all = function() {
      while (TRUE) {
        old_count <- unique_count
        scrape()
        if (unique_count == old_count) {
          done <<- TRUE
          nextPageToken <<- ""
          data <<- unique(data)
          break
        }
      }
    },

    initialize = function() {
      base_url <<- "https://www.googleapis.com/youtube/v3/commentThreads/"
      api_opts <<- list(
        part = "snippet",
        maxResults = 100,
        textFormat = "plainText",
        videoId = "4H9pTgQY_mo",  
        key = "my_google_developer_api_key",
        fields = "items,nextPageToken",
        orderBy = "published")
      nextPageToken <<- ""
      data <<- list()
      unique_count <<- 0
      done <<- FALSE
      core_df <<- data.frame()
    },

    reset = function() {
      data <<- list()
      nextPageToken <<- ""
      unique_count <<- 0
      done <<- FALSE
      core_df <<- data.frame()
    },

    # flatten the raw API items into a data.frame of the core comment fields
    cache_core_data = function() {
      if (nrow(core_df) < unique_count) {
        sub_data <- lapply(data, function(x) {
          data.frame(
            Comment = x$snippet$topLevelComment$snippet$textDisplay,
            User = x$snippet$topLevelComment$snippet$authorDisplayName,
            ReplyCount = x$snippet$totalReplyCount,
            LikeCount = x$snippet$topLevelComment$snippet$likeCount,
            PublishTime = x$snippet$topLevelComment$snippet$publishedAt,
            CommentId = x$snippet$topLevelComment$id,
            stringsAsFactors=FALSE)
        })
        core_df <<- do.call("rbind", sub_data)
      } else {
        message("\n`core_df` is already up to date.\n")
      } 
    }
  )
)

Which can be used like this:

rObj <- yt_scraper()
##
R> rObj$data
#list()
R> rObj$unique_count
#[1] 0
##
rObj$scrape_all()
##
R> rObj$unique_count
#[1] 1673
R> length(rObj$data)
#[1] 1673
R> ##
R> rObj$cache_core_data()
R> head(rObj$core_df)
                                                           Comment              User ReplyCount LikeCount              PublishTime
1                    That Andorra player was really Ruud..<U+feff>         Cistrolat          0         6 2015-03-22T14:07:31.213Z
2                          This just in; Karma is a bitch.<U+feff> Swagdalf The Obey          0         1 2015-03-21T20:00:26.044Z
3                                          Legend! Haha B)<U+feff>  martyn baltussen          0         1 2015-01-26T15:33:00.311Z
4 When did Van der sar ran up? He must have run real fast!<U+feff> Witsakorn Poomjan          0         0 2015-01-04T03:33:36.157Z
5                           <U+003c>b<U+003e>LOL<U+003c>/b<U+003e>           F Hanif          5        19 2014-12-30T13:46:44.028Z
6                                          Fucking Legend.<U+feff>        Heisenberg          0        12 2014-12-27T11:59:39.845Z
                            CommentId
1   z123ybioxyqojdgka231tn5zbl20tdcvn
2   z13hilaiftvus1cc1233trvrwzfjg1enm
3 z13fidjhbsvih5hok04cfrkrnla2htjpxfk
4   z12js3zpvm2hipgtf23oytbxqkyhcro12
5 z12egtfq5ojifdapz04ceffqfrregdnrrbk
6 z12fth0gemnwdtlnj22zg3vymlrogthwd04

As I alluded to above, this gets almost, but not quite all of the comments - 1673 out of roughly 1790. For some reason, it does not seem to capture users' nested replies, and I'm not quite sure how to specify this within the API framework.
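If anyone wants to experiment with the replies, the v3 API does expose a separate comments endpoint that accepts a parentId parameter; below is a rough sketch (which I haven't run against this video) of how the CommentId values collected above could be fed into it:

# sketch: fetch the plain-text replies to a single top-level comment
get_replies <- function(comment_id, api_key) {
  res <- httr::content(httr::GET(
    "https://www.googleapis.com/youtube/v3/comments",
    query = list(
      part = "snippet",
      parentId = comment_id,     # id of the top-level comment
      textFormat = "plainText",
      maxResults = 100,
      key = api_key)))
  vapply(res$items, function(x) x$snippet$textDisplay, character(1))
}
##
# e.g. for the comment with ReplyCount = 5 in the output above:
# get_replies("z12egtfq5ojifdapz04ceffqfrregdnrrbk", "my_google_developer_api_key")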


I had previously set up a Google Developer account for using the Google Analytics API, but if you haven't done this yet, it should be pretty straightforward. Here's an overview - you don't need to set up OAuth or anything like that, just create a project and a new public API access key.