BeautifulSoup spaghetti code, plus an additional question

Problem description · votes: -2 · answers: 1

I have some code that pulls links from a few news sites. I only want to pull links that mention the city name Gdańsk. The URLs do not always use the correct spelling, though, so I need to match gdańsk, gdansk and so on. I also want to pull links from several different sites. I can add more words and more sites, but that forces me to write more and more loops. Could you point me to how I can make the code shorter and more efficient?

Second question: I export the collected links to a CSV file, where I want to gather them so I can analyse them later. I read that if I replace "w" with "a" in csv = open(plik, "w"), the writes should be appended to the file. Instead, nothing happens. With just "w" it overwrites the file each time, and appending is what I need now.
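
(For reference, a minimal sketch of how append mode is supposed to behave, using the same dupa.csv path as the code below; this is an illustration, not part of the original question. Opening with "a" never truncates the file, each run adds rows at the end, and a with block guarantees the buffer is flushed and the file closed, which matters because writes to a file object that is never closed may not reach disk.)

plik = "dupa.csv"
with open(plik, "a", encoding="utf-8") as f:   # "a" appends; "w" would truncate first
    f.write("Data wpisu: 01-01-2019\n")        # placeholder row for illustration only
    f.write("http://example.com/some-link\n\n")
# leaving the with block closes the file and flushes any buffered writes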

import requests
from bs4 import BeautifulSoup as bs

from datetime import datetime
def data(timedateformat='complete'):
    formatdaty = timedateformat.lower()

    if timedateformat == 'rokmscdz':
        return (str(datetime.now())).split(' ')[0]
    elif timedateformat == 'dzmscrok':
        return ((str(datetime.now())).split(' ')[0]).split('-')[2] + '-' + ((str(datetime.now())).split(' ')[0]).split('-')[1] + '-' + ((str(datetime.now())).split(' ')[0]).split('-')[0]


a = requests.get('http://www.dziennikbaltycki.pl')
b = requests.get('http://www.trojmiasto.pl')

zupa = bs(a.content, 'lxml')
zupka = bs(b.content, 'lxml')


rezultaty1 = [item['href'] for item in zupa.select(" [href*='Gdansk']")]
rezultaty2 = [item['href'] for item in zupa.select("[href*='gdansk']")]
rezultaty3 = [item['href'] for item in zupa.select("[href*='Gdańsk']")]
rezultaty4 = [item['href'] for item in zupa.select("[href*='gdańsk']")]

rezultaty5 = [item['href'] for item in zupka.select("[href*='Gdansk']")]
rezultaty6 = [item['href'] for item in zupka.select("[href*='gdansk']")]
rezultaty7 = [item['href'] for item in zupka.select("[href*='Gdańsk']")]
rezultaty8 = [item['href'] for item in zupka.select("[href*='gdańsk']")]

s = set()

plik = "dupa.csv"
csv = open(plik,"a")


for item in rezultaty1:
    s.add(item)
for item in rezultaty2:
    s.add(item)
for item in rezultaty3:
    s.add(item)
for item in rezultaty4:
    s.add(item)
for item in rezultaty5:
    s.add(item)
for item in rezultaty6:
    s.add(item)
for item in rezultaty7:
    s.add(item)
for item in rezultaty8:
    s.add(item)



for item in s:
    print('Data wpisu: ' + data('dzmscrok'))
    print('Link: ' + item)
    print('\n')
    csv.write('Data wpisu: ' + data('dzmscrok') + '\n')
    csv.write(item + '\n'+'\n')
Tags: python, html, web-scraping, beautifulsoup

1 Answer · 0 votes

Ideally, for performance and to cut the looping down even further, you could parse the page results and normalize them by replacing all special characters with their ASCII equivalents (see: Replacing special characters with ASCII equivalent).
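
(A rough sketch of that idea, added here for illustration rather than taken from the original answer: the standard-library unicodedata module can strip diacritics, so 'Gdańsk' and 'gdańsk' both collapse to a plain ASCII form and a single lowercase variant is enough to match against.)

import unicodedata

def to_ascii(text):
    # decompose accented characters (ń -> n + combining accent), then drop the accents
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')

print(to_ascii('Gdańsk').lower())  # -> 'gdansk'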

You can avoid the repetition by changing the code to loop over the Gdansk variations and then merging the results into a single set, which also removes duplicates. I have modified the code below and split it into a few functions.

import requests
from bs4 import BeautifulSoup as bs
from datetime import datetime

def extract_links(content):
    # Return a list of hrefs that mention any variation of the city Gdansk
    variations = ['Gdansk', 'gdansk', 'Gdańsk', 'gdańsk']
    result = []
    for x in variations:
        # quote the attribute value so the selector stays valid for every variation
        result = [*result, *[item['href'] for item in content.select(f"[href*='{x}']")]]
    return result

def data(timedateformat='complete'):
    formatdaty = timedateformat.lower()

    if timedateformat == 'rokmscdz':
        return (str(datetime.now())).split(' ')[0]
    elif timedateformat == 'dzmscrok':
        return ((str(datetime.now())).split(' ')[0]).split('-')[2] + '-' + ((str(datetime.now())).split(' ')[0]).split('-')[1] + '-' + ((str(datetime.now())).split(' ')[0]).split('-')[0]

def get_links_from_urls(*urls):
    # Request webpages then loop over the results to
    # create a set of links that we will write to our file.
    result = []
    for rv in [requests.get(url) for url in urls]:
        zupa = bs(rv.content, 'lxml')
        result = [*result, *extract_links(zupa)]
    return set(result)

def main():
    # use Python's context manager to open the CSV file and write out the rows
    plik = "dupa.csv"

    with open(plik, 'a') as f:
        for item in get_links_from_urls('http://www.dziennikbaltycki.pl', 'http://www.trojmiasto.pl'):
            print('Data wpisu: ' + data('dzmscrok'))
            print('Link: ' + item)
            print('\n')
            f.write(f'Data wpisu: {data("dzmscrok")},{item}\n')

main()
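
(A side note that is not part of the original answer: since the goal is a CSV file to analyse later, the standard-library csv module handles delimiters and quoting for you. A minimal sketch, reusing the get_links_from_urls and data functions defined above and assuming a two-column layout of date and link:)

import csv

with open("dupa.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for item in get_links_from_urls('http://www.dziennikbaltycki.pl', 'http://www.trojmiasto.pl'):
        # one properly quoted CSV row per link
        writer.writerow([data('dzmscrok'), item])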

Hope this helps; let me know in the comments if you have any questions.
