I want the links and the full content of each link

Problem description · Votes: 1 · Answers: 1

I searched a newspaper website for a keyword (cybersecurity) and the results showed about 10 articles. I want my code to take each link, follow it, fetch the full article, and repeat this for all 10 articles on the page. (I don't want the summary, I want the full article.)

import urllib.request
import ssl
import time
from bs4 import BeautifulSoup

ssl._create_default_https_context = ssl._create_unverified_context
pages = [1]
for page in pages:
    data = urllib.request.urlopen("https://www.japantimes.co.jp/tag/cybersecurity/page/{}".format(page))
    soup = BeautifulSoup(data, 'html.parser')

    for article in soup.find_all('div', class_="content_col"):
        link = article.p.find('a')
        print(link.attrs['href'])

        # BUG: `links` is never defined anywhere, so this loop raises NameError as written
        for link in links:
            headline = link.h1.find('div', class_= "padding_block")
            headline = headline.text
            print(headline)
            # BUG: find_all() returns a list, and a list has no .text attribute
            content = link.p.find_all('div', class_= "entry")
            content = content.text
            print(content)

            print()

        time.sleep(3)

This does not work.

date = link.li.find('time', class_= "post_time")

It shows the error:

AttributeError: 'NoneType' object has no attribute 'find'
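That error means one of the chained lookups (`link.li`) returned `None` before `.find` was called, because the element being searched has no `<li>` child. A minimal sketch reproducing and guarding against it, using an invented HTML fixture rather than the real page:

```python
from bs4 import BeautifulSoup

# Invented fixture: a content_col div that contains no <li> at all
html = '<div class="content_col"><p><a href="/x">link</a></p></div>'
soup = BeautifulSoup(html, "html.parser")
article = soup.find("div", class_="content_col")

li = article.find("li")  # no <li> exists, so this returns None
print(li)                # None
# li.find("time", class_="post_time") would now raise:
# AttributeError: 'NoneType' object has no attribute 'find'

# Guard before chaining another .find:
date = li.find("time", class_="post_time") if li is not None else None
print(date)              # None
```

The same guard applies to any chained BeautifulSoup lookup: check each intermediate result for `None` before calling `.find` on it.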

This code works and scrapes all the article links. I want code added that will also fetch the headline and content of each article link.

import urllib.request
import ssl
import time
from bs4 import BeautifulSoup

ssl._create_default_https_context = ssl._create_unverified_context
pages = [1]
for page in pages:

    data = urllib.request.urlopen("https://www.japantimes.co.jp/tag/cybersecurity/page/{}".format(page))

    soup = BeautifulSoup(data, 'html.parser')

    for article in soup.find_all('div', class_="content_col"):
        link = article.p.find('a')
        print(link.attrs['href'])
        print()
        time.sleep(3)
web-scraping beautifulsoup

1 Answer

2 votes

Try the following script. It will fetch all the titles along with their content. Set `pages` to the maximum number of listing pages you want to traverse.

import requests
from bs4 import BeautifulSoup

url = 'https://www.japantimes.co.jp/tag/cybersecurity/page/{}'

pages = 4  # maximum number of listing pages to traverse

for page in range(1, pages + 1):
    res = requests.get(url.format(page))
    soup = BeautifulSoup(res.text, "lxml")
    # Each article link on the listing page
    for item in soup.select(".content_col header p > a"):
        # Follow the link and parse the full article page
        resp = requests.get(item.get("href"))
        sauce = BeautifulSoup(resp.text, "lxml")
        title = sauce.select_one("header h1").text
        content = [elem.text for elem in sauce.select("#jtarticle p")]
        print(f'{title}\n{content}\n')
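To see what those selectors extract without hitting the network, here is a sketch that runs the same extraction against a small hand-made HTML snippet. The fixture is invented to mimic the assumed article structure, and `html.parser` stands in for `lxml` so no extra dependency is needed:

```python
from bs4 import BeautifulSoup

def parse_article(html):
    """Extract the headline and body paragraphs using the answer's selectors."""
    sauce = BeautifulSoup(html, "html.parser")
    title = sauce.select_one("header h1").text
    content = [elem.text for elem in sauce.select("#jtarticle p")]
    return title, content

# Invented fixture mimicking the assumed page structure
sample = """
<header><h1>Sample headline</h1></header>
<div id="jtarticle"><p>First paragraph.</p><p>Second paragraph.</p></div>
"""

title, content = parse_article(sample)
print(title)    # Sample headline
print(content)  # ['First paragraph.', 'Second paragraph.']
```

Splitting the parsing into a function like this also makes it easy to test against saved HTML before running the full crawl.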