Trouble scraping a table

Question · 2 votes · 2 answers

I have been trying for a long time to scrape a table from here, without success. The table I am trying to scrape is titled "Team Per Game Stats". I believe that once I can grab one element of that table, I can iterate over the columns I want from the list and eventually end up with a pandas dataframe.

Here is my code so far:

from bs4 import BeautifulSoup
import requests

# url that we are scraping
r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
# Let's look at what the request content looks like
print(r.content)

# use BeautifulSoup on the content from the request
c = r.content
soup = BeautifulSoup(c, 'html.parser')
print(soup)

# using prettify() in Beautiful Soup indents the HTML the way it appears in the web page
# This can make reading the HTML a little bit easier
print(soup.prettify())

# get the wrapper div for the 'Team Per Game Stats' table by its id
team_per_game = soup.find(id="all_team-stats-per_game")
print(team_per_game)

Any help would be greatly appreciated.

python web-scraping beautifulsoup
2 Answers
6 votes

The page uses a trick to keep search engines and other automated web clients (including scrapers) from finding the table data: the table is stored inside an HTML comment:

<div id="all_team-stats-per_game" class="table_wrapper setup_commented commented">

<div class="section_heading">
  <span class="section_anchor" id="team-stats-per_game_link" data-label="Team Per Game Stats"></span><h2>Team Per Game Stats</h2>    <div class="section_heading_text">
      <ul> <li><small>* Playoff teams</small></li>
      </ul>
    </div>      
</div>
<div class="placeholder"></div>
<!--
   <div class="table_outer_container">
      <div class="overthrow table_container" id="div_team-stats-per_game">
  <table class="sortable stats_table" id="team-stats-per_game" data-cols-to-freeze=2><caption>Team Per Game Stats Table</caption>

...

</table>

      </div>
   </div>
-->
</div>

Note the setup_commented and commented classes on the opening div. Your browser executes the JavaScript code included in the page, which loads the text from those comments and replaces the placeholder div with the contents, as new HTML for the browser to display.

Here is how you can extract the comment text:

from bs4 import BeautifulSoup, Comment
import requests

r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
soup = BeautifulSoup(r.content, 'lxml')
placeholder = soup.select_one('#all_team-stats-per_game .placeholder')
comment = next(elem for elem in placeholder.next_siblings if isinstance(elem, Comment))
table_soup = BeautifulSoup(comment, 'lxml')

From there you can carry on parsing the table HTML.
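For example, a minimal sketch (my addition, not part of the original answer) of getting to the pandas dataframe the question asked for: pandas.read_html parses any table it finds in a string of HTML, so you can feed it the comment text directly.

import pandas as pd

# read_html returns a list of DataFrames, one per <table> in the input;
# the comment holds exactly one table, so take the first
df = pd.read_html(str(comment))[0]
print(df.head())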

This particular website has published terms of use and a page on data use, which you should read if you intend to use their data. Specifically, their terms state, under section 6. Site Content:

You may not frame, capture, harvest, or collect any part of the Site or Content without SRL's advance written consent.

Scraping the data would fall under that heading.


1 vote

Just to complete Martijn Pieters' answer (and without lxml):

from bs4 import BeautifulSoup, Comment
import requests

r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
# html.parser ships with the standard library, so no lxml dependency
soup = BeautifulSoup(r.content, 'html.parser')
placeholder = soup.select_one('#all_team-stats-per_game .placeholder')
# the table lives in the HTML comment that follows the placeholder div
comment = next(elem for elem in placeholder.next_siblings if isinstance(elem, Comment))
table = BeautifulSoup(comment, 'html.parser')
rows = table.find_all('tr')
for row in rows:
    cells = row.find_all('td')
    if cells:  # skip header rows, which contain only <th> cells
        print([cell.text for cell in cells])

Partial output:

[u'New Orleans Pelicans', u'71', u'240.0', u'43.6', u'91.7', u'.476', u'10.1', u'29.4', u'.344', u'33.5', u'62.4', u'.537', u'18.1', u'23.9', u'.760', u'11.0', u'36.0', u'47.0', u'27.0', u'7.5', u'5.5', u'14.5', u'21.4', u'115.5']
[u'Milwaukee Bucks*', u'69', u'241.1', u'43.3', u'90.8', u'.477', u'13.3', u'37.9', u'.351', u'30.0', u'52.9', u'.567', u'17.6', u'22.8', u'.773', u'9.3', u'40.1', u'49.4', u'26.0', u'7.4', u'6.0', u'14.0', u'19.8', u'117.6']
[u'Los Angeles Clippers', u'70', u'241.8', u'41.0', u'87.6', u'.469', u'9.8', u'25.2', u'.387', u'31.3', u'62.3', u'.502', u'22.8', u'28.8', u'.792', u'9.9', u'35.7', u'45.6', u'23.4', u'6.6', u'4.7', u'14.5', u'23.5', u'114.6']
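
To round this off with the dataframe the question was after, here is a sketch (my addition; it assumes the column names sit in a thead row whose first cell is the Rk rank column, which the body rows render as a th rather than a td):

import pandas as pd

# header cells look like ['Rk', 'Team', 'G', 'MP', ...]; the Rk column is a
# <th> in each body row, so the <td> cells line up with header[1:]
header = [th.get_text() for th in table.find('thead').find_all('th')]
data = [[cell.get_text() for cell in row.find_all('td')]
        for row in table.find('tbody').find_all('tr')]
df = pd.DataFrame(data, columns=header[1:])
print(df.head())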