How do I iterate over the pages and get the link and title of each news article?


I scraped 10 pages of content from this site: https://nypost.com/search/China+COVID-19/page/1/?orderby=relevance (and the pages that follow it).

I expected all 100 links and titles to end up stored in pagelinks and title. However, only 10 links and 10 titles are saved.

How can I scrape all 10 pages and save the article links and titles?

Any help would be appreciated!

from time import time, sleep
from random import randint
from warnings import warn

import requests
from bs4 import BeautifulSoup as bs
from IPython.display import clear_output


def scrap(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    request = 0
    urls = [f"{url}{x}" for x in range(1,11)]
    params = {
       "orderby": "relevance",
    }
    for page in urls:
        response = requests.get(url=page,
                                headers=user_agent,
                                params=params) 
        # controlling the crawl-rate
        start_time = time() 
        #pause the loop
        sleep(randint(8,15))
        #monitor the requests
        request += 1
        elapsed_time = time() - start_time
        print('Request:{}; Frequency: {} request/s'.format(request, request/elapsed_time))
        clear_output(wait = True)

        #throw a warning for non-200 status codes
        if response.status_code != 200:
            warn('Request: {}; Status code: {}'.format(request, response.status_code))

        #Break the loop if the number of requests is greater than expected
        if request > 72:
            warn('Number of request was greater than expected.')
            break


        #parse the content
        soup_page = bs(response.text) 
        #select all the articles for a single page
        containers = soup_page.findAll("li", {'class': 'article'})

        #scrape the links of the articles
        pagelinks = []
        for link in containers:
            url = link.find('a')
            pagelinks.append(url.get('href'))

        #scrape the titles of the articles
        title = []
        for link in containers:
            atitle = link.find(class_ = 'entry-heading').find('a')
            thetitle = atitle.get_text()
            title.append(thetitle)

    print(pagelinks)
    print(title)
python loops web-scraping beautifulsoup web-crawler
1 Answer

Put pagelinks = [] outside the for page in urls: loop, not inside it. By creating it inside the for page in urls: loop, you overwrite the pagelinks list on every page iteration, so at the end you are left with only the 10 links from the last page. The same applies to title = [].
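In other words, the accumulator list has to be created once, before the loop, and only appended to inside it. A minimal, self-contained sketch of that pattern (the fake page data is purely illustrative):

urls = ["page1", "page2", "page3"]                   # stand-in page list, for illustration only
collected = []                                       # created once, before the loop
for page in urls:
    items = [f"{page}-link-{i}" for i in range(10)]  # pretend each page yields 10 links
    collected.extend(items)                          # accumulate across pages instead of overwriting
print(len(collected))                                # 30: the list keeps everything from every page

The full corrected function: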

def scrap(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    request = 0
    urls = [f"{url}{x}" for x in range(1,11)]
    params = {
       "orderby": "relevance",
    }
    pagelinks = []
    title = []
    for page in urls:
        response = requests.get(url=page,
                                headers=user_agent,
                                params=params) 
        # controlling the crawl-rate
        start_time = time() 
        #pause the loop
        sleep(randint(8,15))
        #monitor the requests
        request += 1
        elapsed_time = time() - start_time
        print('Request:{}; Frequency: {} request/s'.format(request, request/elapsed_time))
        clear_output(wait = True)

        #throw a warning for non-200 status codes
        if response.status_code != 200:
            warn('Request: {}; Status code: {}'.format(request, response.status_code))

        #Break the loop if the number of requests is greater than expected
        if request > 72:
            warn('Number of request was greater than expected.')
            break


        #parse the content
        soup_page = bs(response.text) 
        #select all the articles for a single page
        containers = soup_page.findAll("li", {'class': 'article'})

        #scrape the links of the articles

        for link in containers:
            url = link.find('a')
            pagelinks.append(url.get('href'))

        for link in containers:
            atitle = link.find(class_ = 'entry-heading').find('a')
            thetitle = atitle.get_text()
            title.append(thetitle)
    print(title)
    print(pagelinks)
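
To call the function, the base URL has to stop right before the page number, because scrap() appends 1 through 10 to it. A usage sketch, assuming the search URL given in the question:

# Base URL reconstructed from the question; scrap() appends the page numbers 1..10 to it
# and sends orderby=relevance as a query parameter.
scrap("https://nypost.com/search/China+COVID-19/page/")

If you want to work with the results rather than only print them, replace the two print calls at the end of scrap() with return pagelinks, title, so the caller can do links, titles = scrap(...) and, for example, pair them up with zip(titles, links).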