Want to scrape all member profile links to get member details

Question (votes: 1, answers: 2)
from bs4 import BeautifulSoup
import requests
r = requests.get('http://medicalassociation.in/doctor-search')
soup = BeautifulSoup(r.text,'lxml')

link = soup.find('table',{'class':'tab-gender'})
link1 = link.find('tbody')
link2 = link1.find('tr')[3:4]
link3 = link2.find('a',class_='user-name')
print(link3.text)

This code doesn't return the link. I want to extract the "View Profile" link for each member.
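A likely culprit in the code above: `find('tr')` returns a single `Tag`, so slicing it with `[3:4]` does not select the fourth row; `find_all('tr')` returns a list that can be indexed. A minimal sketch on an inline HTML snippet (the rows and hrefs here are illustrative, not from the real page):

```python
from bs4 import BeautifulSoup

html = """
<table class="tab-gender"><tbody>
  <tr><td><a class="user-name" href="/p/1">A</a></td></tr>
  <tr><td><a class="user-name" href="/p/2">B</a></td></tr>
  <tr><td><a class="user-name" href="/p/3">C</a></td></tr>
  <tr><td><a class="user-name" href="/p/4">D</a></td></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table', {'class': 'tab-gender'})

# find() returns one Tag; find_all() returns a list of Tags.
rows = table.find('tbody').find_all('tr')
fourth = rows[3]  # index (not a slice) to get a single row Tag
link = fourth.find('a', class_='user-name')
print(link['href'])
```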

python web-scraping beautifulsoup
2 Answers
0 votes

The following worked for me over several test runs. Just use requests and select() with a class selector.

import requests
from bs4 import BeautifulSoup as bs

r = requests.get('http://medicalassociation.in/doctor-search')
soup = bs(r.content, 'lxml')    
results = [item['href'] for item in soup.select(".user-name")]
print(results)
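The href values scraped this way may be site-relative. A small sketch (stdlib `urllib.parse` only; the sample paths are made up, not from the live page) to turn them into absolute URLs:

```python
from urllib.parse import urljoin

base = 'http://medicalassociation.in/doctor-search'
# Stand-in hrefs, playing the role of whatever select() returned.
results = ['/profile/123', 'profile/456', 'http://example.com/abs']

# urljoin resolves each href against the page URL; already-absolute
# URLs pass through unchanged.
absolute = [urljoin(base, href) for href in results]
print(absolute)
```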

0 votes

requests.get() does not render JavaScript, so those elements never appear in the response. You can use a WebDriver instead, grab page_source, and then extract the information.

from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("http://medicalassociation.in/doctor-search")
soup = BeautifulSoup(driver.page_source,'html.parser')

for a in soup.find_all('a', class_="user-name"):
    # a.text is always a string, so check the href attribute instead
    if a.get('href'):
        print(a['href'])

driver.quit()
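Whichever approach fetches the links, the same profile URL can appear in more than one table row. A minimal stdlib sketch to drop duplicates while keeping first-seen order (sample data, not from the live page):

```python
links = [
    '/profile/1',
    '/profile/2',
    '/profile/1',  # duplicate row
    '/profile/3',
]

# dict.fromkeys keeps insertion order (Python 3.7+) and drops repeats.
unique_links = list(dict.fromkeys(links))
print(unique_links)  # -> ['/profile/1', '/profile/2', '/profile/3']
```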