Scraping a web page using href links

Question  votes: 0  answers: 2

I am scraping this page ("http://mahaprantikssksamaj.com/ssk-samaj-maharashtras.aspx"). I store the valid URLs, then for each valid URL I request the page it redirects to and scrape the data from that next page.

The page data is stored in tables, and I get this error: "AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?" My code is here:

from bs4 import BeautifulSoup
import requests

r = requests.get('http://mahaprantikssksamaj.com/ssk-samaj-maharashtras.aspx')
soup = BeautifulSoup(r.text, 'html.parser')
for i in range(36):
    print(i)
    url = 'http://mahaprantikssksamaj.com/ssk-prantik-members.aspx?id={}'.format(i)
    r = requests.get(url)
    web = BeautifulSoup(r.content, "html.parser")
    table = web.findAll("table", id="DGORG")
    print(table)
    table_body = table.find('tbody')
    rows = table_body.find_all('tr')
    for tr in rows:
        cols = tr.find_all('td')
        for td in cols:
            print(td)

print(table) gives this output:

  <div class="memcss">
  <table  border="1" style="width:90%;padding:10px;margin:0px 0px 20px 
  20px;box-shadow:2px 2px 2px #000000">
  <tr>
  <td colspan="2" style="text-align:center"><h5>Mr. Jaydeo Mahadeosa 
  Pawar</h5></td>
  </tr>
  <tr>
  <td colspan="2" style="text-align:center"><h6>Secretory</h6></td>
  </tr>
  <tr>
  <td style="width:25%;height:30px;text-align:right">Address : </td>
  <td> Pune</td>
  </tr>
  <tr>
  <td style="width:20%;height:30px;text-align:right">City : </td>
  <td> Pune</td>
  </tr>
  <tr>
  <td style="width:20%;height:30px;text-align:right">Mobile : </td>
  <td> </td>
  </tr>
  </table>
  </div>

  </td>
  </tr><tr>
  <td>

I'm trying to store only the name, designation, address, and mobile number in a CSV file. Can anyone point out where I'm going wrong? Thanks.

python web-scraping beautifulsoup
2 Answers

1 vote

To get the content of every table reachable through the view members links on the landing page, you can follow this approach:

from bs4 import BeautifulSoup
from urllib.parse import urljoin
import requests

link = "http://mahaprantikssksamaj.com/ssk-samaj-maharashtras.aspx"

res = requests.get(link)
soup = BeautifulSoup(res.text, 'html.parser')
for item in soup.select("a[style$='text-decoration:none']"):
    req = requests.get(urljoin(link,item.get("href")))
    sauce = BeautifulSoup(req.text,"html.parser")
    for elem in sauce.select(".memcss table tr"):
        data = [item.get_text(strip=True) for item in elem.select("td")]
        print(data)

The output looks like:

['Shri. Narsinhasa Narayansa Kolhapure']
['Chairman']
['Address :', 'Ahamadnagar']
['City :', 'Ahamadnagar']
['Mobile :', '2425577']
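Since the goal was to save the name, designation, address, and mobile number to a CSV file, the scraped rows can be grouped and written out as below. This is a minimal sketch, not part of the original answer: the sample `rows` list mirrors the output above, and it assumes each member table yields exactly five rows (name, designation, address, city, mobile).

```python
import csv

# Hypothetical sample data in the shape produced by the scraping loop above.
rows = [
    ['Shri. Narsinhasa Narayansa Kolhapure'],
    ['Chairman'],
    ['Address :', 'Ahamadnagar'],
    ['City :', 'Ahamadnagar'],
    ['Mobile :', '2425577'],
]

# Assumption: each member table contributes 5 rows, so group rows in chunks of 5.
records = []
for i in range(0, len(rows), 5):
    chunk = rows[i:i + 5]
    records.append({
        'Name': chunk[0][0],
        'Designation': chunk[1][0],
        'Address': chunk[2][1],
        'City': chunk[3][1],
        # The mobile cell may be empty, so guard against a missing value.
        'Mobile': chunk[4][1] if len(chunk[4]) > 1 else '',
    })

with open('members.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(
        f, fieldnames=['Name', 'Designation', 'Address', 'City', 'Mobile'])
    writer.writeheader()
    writer.writerows(records)
```

In the real script, `rows` would be accumulated inside the scraping loop instead of hard-coded.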

0 votes
from bs4 import BeautifulSoup
import requests

r = requests.get('http://mahaprantikssksamaj.com/ssk-samaj-maharashtras.aspx')
soup = BeautifulSoup(r.text, 'html.parser')
for i in range(36):
    print(i)
    url = 'http://mahaprantikssksamaj.com/ssk-prantik-members.aspx?id={}'.format(i)
    r = requests.get(url)
    web = BeautifulSoup(r.content, "html.parser")
    table = web.find("table", id="DGORG")
    print(table)
    rows = table.find_all('tr')
    for tr in rows:
        cols = tr.find_all('td')
        for td in cols:
            print(td)

Changes

Use table = web.find("table", id="DGORG") instead of table = web.findAll("table", id="DGORG"). find returns a single Tag, while findAll returns a ResultSet (a list of tags), which has no find method.

When we inspect the website in the browser, it shows a tbody inside the table, but that tbody may not exist in the actual page source (browsers insert it automatically). To confirm, check the page source directly; that is why the table_body = table.find('tbody') step is removed and the rows are taken straight from the table.
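The find/findAll distinction can be seen on a toy document. This is an illustrative sketch, not from the original answer; the HTML string is made up:

```python
from bs4 import BeautifulSoup

html = '<table id="DGORG"><tr><td>cell</td></tr></table>'
soup = BeautifulSoup(html, 'html.parser')

result_set = soup.find_all('table', id='DGORG')  # ResultSet: a list of matches
single_tag = soup.find('table', id='DGORG')      # Tag (or None if no match)

print(type(result_set).__name__)  # ResultSet
print(type(single_tag).__name__)  # Tag

# Calling .find on the ResultSet raises the AttributeError from the question;
# calling it on the single Tag works as expected.
rows = single_tag.find_all('tr')
print(len(rows))  # 1
```

Either index into the ResultSet (result_set[0]) or use find from the start, as the corrected code does.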
