Web scraping: how do I extract the link that matches a keyword from HTML when the URL itself contains no keyword?

Asked · votes: 0 · answers: 1

I am trying to extract job descriptions from a web page when they match certain keywords, and that part works, but I also want to extract the link in the HTML that corresponds to each description. The problem is that the link appears before the description, and the URL itself does not contain the keywords being searched for. How can I extract the link that belongs to a job description found via the keywords?

Here is my code:

import re, requests, time, os, csv, subprocess

from bs4 import BeautifulSoup


def get_jobs(url):
    keywords = ["KI", "AI", "Big Data", "Data", "data", "big data", "Analytics", "analytics", "digitalisierung", "ML",
                "Machine Learning", "Daten", "Datenexperte", "Datensicherheitsexperte"]
    headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'}

    html = requests.get(url, headers=headers, timeout=5)

    time.sleep(2)

    soup = BeautifulSoup(html.text, 'html.parser')

    jobs = soup.find_all('p', text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))

    # links = jobs.find_all('a')

    jobs_found = []
    for word in jobs:
        jobs_found.append(word)
    with open("jobs.csv", 'a', encoding='utf-8') as toWrite:
        writer = csv.writer(toWrite)
        writer.writerows(jobs_found)
        # subprocess.call('./Autopilot3.py')
        print("Matched Jobs have been collected.")


get_jobs('https://www.auftrag.at//tenders.aspx')
python html web-scraping beautifulsoup
1 Answer

0 votes

Looking at the page source, I see that the link is always two levels above the description. You could therefore use the find_parent() function to get the a tag for each job found.
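The idea can be sketched on a small, hypothetical HTML fragment that mirrors this structure (the a tag being an ancestor of the matched p, two levels up):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mirroring the tender page's nesting:
# the <a> wraps the <p> that holds the job description.
html = """
<a href="ETender.aspx?id=123&action=show">
  <div>
    <p>Big Data Analyst gesucht</p>
  </div>
</a>
"""

soup = BeautifulSoup(html, "html.parser")
p = soup.find("p")
# find_parent() walks up the ancestors until it reaches an <a> tag
print(p.find_parent("a").get("href"))  # ETender.aspx?id=123&action=show
```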

In your code you have:

jobs = soup.find_all('p',text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))

After that, add:

for i in jobs:
    print(i.find_parent('a').get('href'))

This will print the links. Note that these links are relative, not absolute. You will have to prepend the site root to reach the actual page. For example, if you find a link such as ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show, you must prepend https://www.auftrag.at/ to it, giving the final link https://www.auftrag.at/ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show

If you need them, you can collect the links in a list just like the job descriptions. The full code (without saving the links to the CSV) would be:

import re, requests, time, os, csv, subprocess

from bs4 import BeautifulSoup


def get_jobs(url):

    keywords = ["KI", "AI", "Big Data", "Data", "data", "big data", "Analytics", "analytics", "digitalisierung", "ML",
                "Machine Learning", "Daten", "Datenexperte", "Datensicherheitsexperte"]
    headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'}

    html = requests.get(url, headers=headers, timeout=5)

    time.sleep(2)

    soup = BeautifulSoup(html.text, 'html.parser')

    jobs = soup.find_all('p',text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))

    # links = jobs.find_all('a')


    jobs_found = []
    links = []
    for word in jobs:
        jobs_found.append(word)
        links.append(word.find_parent('a').get('href'))
    with open("jobs.csv", 'a', encoding='utf-8') as toWrite:
        writer = csv.writer(toWrite)
        writer.writerows(jobs_found)
        # subprocess.call('./Autopilot3.py')
        print("Matched Jobs have been collected.")

    return soup, jobs


soup, jobs = get_jobs('https://www.auftrag.at//tenders.aspx')

If you want to build the full URL, just change the line:

links.append(word.find_parent('a').get('href'))

to:

links.append("//".join(["//".join(url.split("//")[:2]),word.find_parent('a').get('href')]))
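As an alternative sketch, the standard library's urllib.parse.urljoin resolves a relative href against the page URL, which avoids hand-rolled string splitting (the base and href below are illustrative, taken from the examples above):

```python
from urllib.parse import urljoin

# Resolve a relative href against the page it was scraped from.
base = "https://www.auftrag.at/tenders.aspx"
href = "ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show"
print(urljoin(base, href))
# https://www.auftrag.at/ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show
```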