A website has a list of URLs; I need to write a loop to visit each URL and scrape two tables

Problem description (votes: 0, answers: 2)

Ultimately, I am trying to scrape tables from several different URLs (all within the same parent site) in R.

First, I am assuming I have to scrape the individual game links under "Playoff Series" from https://www.basketball-reference.com/playoffs/NBA_2017.html. The xpath for that table of links is //*[@id="all_all_playoffs"].
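For reference, a minimal sketch of that first step might look like the following (this is only an untested illustration using rvest, assuming the links are exposed in the HTML under that node):

library(rvest)

# grab every link inside the playoff-series table and keep only the boxscore URLs
playoff_page <- read_html("https://www.basketball-reference.com/playoffs/NBA_2017.html")

game_links <- playoff_page %>%
  html_nodes(xpath = '//*[@id="all_all_playoffs"]//a') %>%
  html_attr("href") %>%
  .[grepl("boxscores", .)]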

Then, I want to scrape the tables from each individual game link (like this one: https://www.basketball-reference.com/boxscores/201705170BOS.html). The tables I want are the "Basic Box Score Stats" for each team.

(I plan to repeat this for different years, so typing out every URL, as I have done below, is not efficient.)

So far, I have only figured out how to scrape the tables from one URL (one game) at a time:

library(rvest)

games <- c("201705190BOS","201705190BOS","201705210CLE","201705230CLE","201705250BOS")
urls <- paste0("https://www.basketball-reference.com/boxscores/", games, ".html")

get_table <- function(url) {
  # the team ids ("cle", "bos") are hard-coded, so this only works for games
  # between these two teams; both tables are pulled with a single xpath union
  url %>%
    read_html() %>%
    html_nodes(xpath = '//*[@id="div_box_cle_basic"]/table[1] | //*[@id="div_box_bos_basic"]/table[1]') %>%
    html_table()
}

results <- sapply(urls, get_table)
r web-scraping url xpath
2 Answers
0 votes

This works for me, give it a try!

library(rvest)

page <- read_html('https://www.basketball-reference.com/playoffs/NBA_2017.html')

#get all links in the playoff section
playoffs <- page %>%
  html_node('#div_all_playoffs') %>%
  html_nodes('a') %>%
  html_attr('href')

#limit to those that are actually links to boxscores
playoffs <- playoffs[grep('boxscore', playoffs)]

#loop to scrape each game
allGames <- list()
for(j in 1:length(playoffs)){
  box <- read_html(paste0('https://www.basketball-reference.com/', playoffs[j]))

  #tables are named based on which team is there, get all html id's to find which one we want
  atrs <- box %>%
    html_nodes('div') %>%
    html_attr('id')

  #limit to only names that include "basic" and "all"
  basicIds <- atrs[grep('basic', atrs)] %>%
    .[grep('all', .)]

  #loop to scrape both tables (1 for each team)
  teams <- list()
  for(i in 1:length(basicIds)){
    #grab table for team
    table <- box %>%
      html_node(paste0('#',basicIds[i])) %>%
      html_node('.stats_table') %>%
      html_table()

    #parse table into starters and reserves tables
    startReserve <- which(table[,1] == 'Reserves')

    starters <- table[2:(startReserve-1),]
    colnames(starters) <- table[1,]

    reserves <- table[(startReserve + 1):nrow(table),]
    colnames(reserves) <- table[startReserve,]

    #extract team name
    team <- gsub('all_box_(.+)_basic', '\\1', basicIds[i])

    #make named list using team name
    assign(team, setNames(list(starters, reserves), c('starters', 'reserves')))
    teams[[i]] <- team
  }

  #find game identifier
  game <- gsub('/boxscores/(.+).html', '\\1', playoffs[j])

  #make list of both teams, name list using game identifier
  assign(paste0('game_',game), setNames(list(eval(parse(text=teams[[1]])), eval(parse(text=teams[[2]]))), c(teams[[1]], teams[[2]])))

  #add to allGames
  allGames <- append(allGames, setNames(list(eval(parse(text = paste0('game_', game)))), paste0('game_', game)))

}

#clean up everything but allGames
rm(list = ls()[-grep('allGames', ls())])

The output is a list of lists. That isn't ideal, but the data you want is inherently hierarchical: each game has 2 teams, and each team has 2 tables (starters and reserves). So the final object looks like this:

-allGames

----Game1

-----Team1

----------starters

----------reserves

-----Team2

----------starters

----------reserves

----Game2 ...

For example, to display the table with the Cleveland starters' data from the last game of the Finals:

> allGames$game_201706120GSW$cle$starters
          Starters    MP FG FGA  FG% 3P 3PA  3P% FT FTA   FT% ORB DRB TRB AST STL BLK TOV PF PTS +/-
2     LeBron James 46:13 19  30 .633  2   5 .400  1   4  .250   2  11  13   8   2   1   2  3  41 -13
3     Kyrie Irving 41:47  9  22 .409  1   2 .500  7   7 1.000   1   1   2   6   2   0   4  3  26  +4
4       J.R. Smith 40:49  9  11 .818  7   8 .875  0   1  .000   0   3   3   1   0   2   0  2  25  -2
5       Kevin Love 29:55  2   8 .250  0   3 .000  2   5  .400   3   7  10   2   0   1   0  2   6 -23
6 Tristan Thompson 29:52  6   8 .750  0   0       3   4  .750   4   4   8   3   1   1   3  1  15  -7
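If you would rather end up with a single flat data frame instead of nested lists, here is a minimal sketch of one way to collapse allGames (my own addition, assuming the dplyr package; columns are coerced to character to avoid type conflicts when binding):

library(dplyr)

# flatten allGames (game -> team -> starters/reserves) into one data frame
flat <- bind_rows(lapply(names(allGames), function(g) {
  bind_rows(lapply(names(allGames[[g]]), function(tm) {
    bind_rows(lapply(c("starters", "reserves"), function(role) {
      tbl <- allGames[[g]][[tm]][[role]]
      names(tbl)[1] <- "Player"   # first column header is "Starters" or "Reserves"
      tbl %>%
        mutate(across(everything(), as.character)) %>%
        mutate(game = g, team = tm, role = role)
    }))
  }))
}))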

0 votes

Are you hoping to automatically parse the game IDs for all of the games on the site? If so, you will need to build a separate scraper to collect the game IDs before feeding them into the table parser.

Here is how I would do it:

  1. Pick a start date, then iteratively ping the site for each day. You can use readLines to pull back the HTML string for each date: https://www.basketball-reference.com/boxscores/?month=11&day=4&year=2017

     So just iterate over the month, day, and year in the link.

  2. From the link above, find the items hyperlinked as "Final", i.e. where the HTML text reads: <a href="/boxscores/201711040DEN.html">Final</a>

You can use a regular expression to parse each line and search for something like this:

grep('.*<a href=\"/boxscores/.*.html\">Final</a>.*', [object], value = TRUE) %>%
 gsub('.*<a href=\"(/boxscores/.*.html)\">Final</a>.*', '\\1', .)

That will build the game links, which you can then feed into the parser above.
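Putting the two steps together with that regular expression, a rough sketch might look like this (the date range is only an example, not something from the answer above):

library(magrittr)

# iterate over a range of dates, pull each daily scoreboard page with readLines,
# and regex out the /boxscores/ links that sit behind a "Final" anchor
dates <- seq(as.Date("2017-11-01"), as.Date("2017-11-07"), by = "day")

game_links <- unlist(lapply(dates, function(d) {
  url <- sprintf("https://www.basketball-reference.com/boxscores/?month=%d&day=%d&year=%d",
                 as.integer(format(d, "%m")), as.integer(format(d, "%d")), as.integer(format(d, "%Y")))
  html <- readLines(url, warn = FALSE)
  grep('.*<a href=\"/boxscores/.*.html\">Final</a>.*', html, value = TRUE) %>%
    gsub('.*<a href=\"(/boxscores/.*.html)\">Final</a>.*', '\\1', .)
}))

readLines is used here to stay close to the answer; read_html from rvest would work just as well.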
