Unable to parse certain names and their related URLs from a webpage

Problem description | Votes: 0 | Answers: 4

I've written a Python script using requests and BeautifulSoup to parse profile names, and the links leading to those profiles, from a webpage. The content appears to be generated dynamically, but it is present in the page source. So I tried the following, but unfortunately I got nothing.

SiteLink

My attempt so far:

import requests
from bs4 import BeautifulSoup

URL = 'https://www.century21.com/real-estate-agents/Dallas,TX'

headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,bn;q=0.8',
    'cache-control': 'max-age=0',
    'cookie': 'JSESSIONID=8BF2F6FB5603A416DCFBAB8A3BB5A79E.app09-c21-id8; website_user_id=1255553501;',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}

def get_info(link):
    res = requests.get(link,headers=headers)
    soup = BeautifulSoup(res.text,"lxml")
    for item in soup.select(".media__content"):
        profileUrl = item.get("href")
        profileName = item.select_one("[itemprop='name']").get_text()
        print(profileUrl,profileName)

if __name__ == '__main__':
    get_info(URL)

How can I get the content from that page?

python python-3.x web-scraping
4 Answers
1 vote

The required content is available in the page source. The site is very good at discarding requests that keep using the same user-agent, so I used fake_useragent to supply a random user-agent with each request. It works as long as you don't use it too often.

Working solution:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from fake_useragent import UserAgent

URL = 'https://www.century21.com/real-estate-agents/Dallas,TX'

def get_info(s, link):
    # Rotate the user-agent for every request so repeated hits are less likely to be dropped
    s.headers["User-Agent"] = ua.random
    res = s.get(link)
    soup = BeautifulSoup(res.text, "lxml")
    # Each profile card exposes its link and name through itemprop attributes
    for item in soup.select(".media__content a[itemprop='url']"):
        profileUrl = urljoin(link, item.get("href"))
        profileName = item.select_one("span[itemprop='name']").get_text()
        print(profileUrl, profileName)

if __name__ == '__main__':
    ua = UserAgent()
    with requests.Session() as s:
        get_info(s,URL)

Partial output:

https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/Stewart-Kipness-2657107a Stewart Kipness
https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/Andrea-Anglin-Bulin-2631495a Andrea Anglin Bulin
https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/Betty-DeVinney-2631507a Betty DeVinney
https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/Sabra-Waldman-2657945a Sabra Waldman
https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/Russell-Berry-2631447a Russell Berry
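If the site still drops the occasional request, one option is a small retry wrapper that draws a fresh random user-agent on each attempt. A minimal sketch; the fetch helper, max_tries, and the status check are my own illustrative choices, not part of the answer above:

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

ua = UserAgent()

def fetch(s, link, max_tries=3):
    # Try up to max_tries times, rotating the user-agent before each attempt
    for _ in range(max_tries):
        s.headers["User-Agent"] = ua.random
        res = s.get(link)
        if res.status_code == 200:
            return BeautifulSoup(res.text, "lxml")
    # Surface the last failure if every attempt was rejected
    res.raise_for_status()

with requests.Session() as s:
    soup = fetch(s, 'https://www.century21.com/real-estate-agents/Dallas,TX')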

1 vote

The page content is not rendered via JavaScript; your code worked fine for me. There are only two issues: how you locate profileUrl, and handling the NoneType exception. You have to target the a tag to get the data.

You should try this:

import requests
from bs4 import BeautifulSoup

URL = 'https://www.century21.com/real-estate-agents/Dallas,TX'

headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,bn;q=0.8',
    'cache-control': 'max-age=0',
    'cookie': 'JSESSIONID=8BF2F6FB5603A416DCFBAB8A3BB5A79E.app09-c21-id8; website_user_id=1255553501;',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}

def get_info(link):
    res = requests.get(link,headers=headers)
    soup = BeautifulSoup(res.text,"lxml")
    results = []
    for item in soup.select(".media__content"):
        a_link = item.find('a')
        if a_link:
            result = {
                    'profileUrl': a_link.get('href'),
                    'profileName' : a_link.get_text()
                }
            results.append(result)  # append only when an <a> tag was found
    return results

if __name__ == '__main__':
    info = get_info(URL)
    print(info)
    print(len(info))

OUTPUT:

[{'profileName': 'Stewart Kipness',
  'profileUrl': '/CENTURY-21-Judge-Fite-Company-14501c/Stewart-Kipness-2657107a'},
  ....,
 {'profileName': 'Courtney Melkus',
  'profileUrl': '/CENTURY-21-Realty-Advisors-47551c/Courtney-Melkus-7389925a'}]

941
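The profileUrl values above are relative paths. If absolute links are needed, they can be resolved against the page URL with urllib.parse.urljoin; a minimal sketch building on get_info above:

from urllib.parse import urljoin

info = get_info(URL)
for entry in info:
    # Resolve the relative href against the page it was scraped from
    entry['profileUrl'] = urljoin(URL, entry['profileUrl'])
print(info[:2])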

1 vote

It looks like you can also construct the URL yourself (although it seems easier just to grab it):

import requests
from bs4 import BeautifulSoup as bs

URL = 'https://www.century21.com/real-estate-agents/Dallas,TX'

headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,bn;q=0.8',
    'cache-control': 'max-age=0',
    'cookie': 'JSESSIONID=8BF2F6FB5603A416DCFBAB8A3BB5A79E.app09-c21-id8; website_user_id=1255553501;',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}

r = requests.get(URL, headers = headers)
soup = bs(r.content, 'lxml')
items = soup.select('.media')
ids = []
names = []
urls = []
for item in items:
    if item.select_one('[data-agent-id]') is not None:
        anId = item.select_one('[data-agent-id]')['data-agent-id']
        ids.append(anId)
        name = item.select_one('[itemprop=name]').text.replace(' ', '-')
        names.append(name)
        # The office segment is hardcoded here, so this pattern only holds
        # for agents belonging to that one office
        url = 'https://www.century21.com/CENTURY-21-Judge-Fite-Company-14501c/' + name + '-' + anId + 'a'
        urls.append(url)

results = list(zip(names, urls))
print(results)
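Since the office segment (CENTURY-21-Judge-Fite-Company-14501c) is hardcoded, the constructed URLs are only correct for agents of that one office. A safer variant, assuming each card carries the a[itemprop='url'] anchor used in the first answer, reads the real href instead:

from urllib.parse import urljoin

absolute_urls = []
for item in items:
    link = item.select_one("a[itemprop='url']")  # selector borrowed from the first answer
    if link is not None:
        # Join the card's own href against the page URL instead of rebuilding it
        absolute_urls.append(urljoin(URL, link['href']))
print(absolute_urls[:5])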

0 votes

Please try:

profileUrl = "https://www.century21.com" + item.select("a")[0].get("href")