Unable to extract an HTML table

Question · votes: -2 · answers: 1

I want to collect information from a table on a given website using Beautiful Soup and Python 3.

I have also tried an XPath approach, but I still cannot find a way to get at the data.

from urllib.request import urlopen
from bs4 import BeautifulSoup

coaches = 'https://www.badmintonengland.co.uk/coach/find-a-coach'
coachespage = urlopen(coaches)
soup = BeautifulSoup(coachespage, features="html.parser")
data = soup.find_all("tbody", {"id": "JGrid-az-com-1031-tbody"})

def crawler(table):
    for mytable in table:  
        try:
            rows = mytable.find_all('tr')
            for tr in rows:
                cols = tr.find_all('td')
                for td in cols:
                    return(td.text)
        except:
            raise ValueError("no data")


print(crawler(data))
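For what it's worth, the parsing logic itself is sound on static HTML; the problem is that this particular table is filled in by JavaScript, so `urlopen` never sees it. A minimal sketch against an inline table (the row contents here are made up for illustration) shows `find` and `find_all` working as expected:

```python
from bs4 import BeautifulSoup

# Hypothetical static snapshot of the kind of markup the page renders
html = """
<table><tbody id="JGrid-az-com-1031-tbody">
  <tr><td>Jane Doe</td><td>Level 2</td></tr>
  <tr><td>John Smith</td><td>Level 1</td></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, "html.parser")
tbody = soup.find("tbody", {"id": "JGrid-az-com-1031-tbody"})
# collect every cell's text in document order
cells = [td.get_text(strip=True) for td in tbody.find_all("td")]
print(cells)  # ['Jane Doe', 'Level 2', 'John Smith', 'Level 1']
```

Note also that `return(td.text)` in the original `crawler` exits on the very first cell; collecting into a list, as above, returns all of them.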
Tags: python, html, web-scraping, beautifulsoup
1 Answer

1 vote

If you use selenium to make the selections and then call `pd.read_html` on `driver.page_source` to grab the tables, the JavaScript is allowed to run and populate the values.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time

url = 'https://www.badmintonengland.co.uk/coach/find-a-coach'
driver = webdriver.Chrome()
driver.get(url)
# open the distance dropdown (find_element_by_css_selector was removed in Selenium 4)
ele = driver.find_element(By.CSS_SELECTOR, '.az-triggers-panel a')
driver.execute_script("arguments[0].scrollIntoView();", ele)
ele.click()
# pick the "any distance" option once it appears
option = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "comboOption-az-com-1015-8")))
option.click()
driver.find_element(By.CSS_SELECTOR, '.az-btn-text').click()

time.sleep(5)  # crude pause; seek a better wait condition for the page update
tables = pd.read_html(driver.page_source)
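`pd.read_html` returns a list of DataFrames, one per `<table>` element it finds in the HTML, so the result usually needs indexing. A small offline sketch (the column names and rows are invented, not taken from the real page) shows the shape of what comes back:

```python
from io import StringIO
import pandas as pd

# Hypothetical snapshot of a rendered results table
html = """
<table>
  <tr><th>Coach</th><th>Distance</th></tr>
  <tr><td>Jane Doe</td><td>3 miles</td></tr>
  <tr><td>John Smith</td><td>12 miles</td></tr>
</table>
"""
# read_html parses every <table> and returns a list of DataFrames
tables = pd.read_html(StringIO(html))
df = tables[0]
print(df.shape)  # (2, 2) - the <th> row becomes the header
```

Recent pandas versions expect a file-like object such as `StringIO` for literal HTML strings, hence the wrapper.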