Scraping the top 100 job results from Indeed with BeautifulSoup in Python

Question · votes: 1 · answers: 3

I am new to web scraping in Python and I want to scrape the top 100 job results from Indeed, but I can only get the first page of results, i.e. the top 10. I am using the BeautifulSoup framework. Here is my code; can anyone help me fix this?

import urllib2
from bs4 import BeautifulSoup
import json

URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka"
soup = BeautifulSoup(urllib2.urlopen(URL).read(), 'html.parser')

results = soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'})

for x in results:
    company = x.find('span', attrs={"class": "company"})
    print 'company:', company.text.strip()

    job = x.find('a', attrs={'data-tn-element': "jobTitle"})
    print 'job:', job.text.strip()
Tags: python, web-scraping, beautifulsoup
3 Answers

1 vote

Page through the results by changing the start value in the URL. You can increment it in a loop and append the variable to the URL:

https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start=0

https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start=1
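Note that Indeed pages by record offset rather than page number: the other two answers on this page both advance start in steps of 10, one page of results at a time. A minimal sketch of the offsets that cover the top 100 results:

# start is a record offset: page 1 -> start=0, page 2 -> start=10, ..., page 10 -> start=90
for offset in range(0, 100, 10):
    print('https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start={}'.format(offset))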

For example:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

results = []
url = 'https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start={}'
with requests.Session() as s:
    for page in range(10):  # 10 pages of 10 results = top 100
        res = s.get(url.format(page * 10))  # start is a record offset, so step in 10s
        soup = bs(res.content, 'lxml')
        titles = [item.text.strip() for item in soup.select('[data-tn-element=jobTitle]')]
        companies = [item.text.strip() for item in soup.select('.company')]
        data = list(zip(titles, companies))
        results.append(data)
newList = [item for sublist in results for item in sublist]  # flatten the per-page lists
df = pd.DataFrame(newList)
df.to_json(r'C:\Users\User\Desktop\data.json')
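A small optional tweak (my addition, not part of the original answer): naming the DataFrame columns before writing the file makes the JSON keys meaningful; the names here are hypothetical:

df = pd.DataFrame(newList, columns=['job_title', 'company'])  # hypothetical column names
df.to_json(r'C:\Users\User\Desktop\data.json')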

1 vote

You can do this by wrapping your code in a range loop:

from bs4 import BeautifulSoup
import json
import urllib2

URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start="

for i in range(0, 100, 10):  # start=0, 10, ..., 90 covers the first 10 pages
    soup = BeautifulSoup(urllib2.urlopen(URL + str(i)).read(), 'html.parser')
    results = soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'})
    for x in results:
        company = x.find('span', attrs={"class":"company"})
        print 'company:', company.text.strip()

        job = x.find('a', attrs={'data-tn-element': "jobTitle"})
        print 'job:', job.text.strip()  
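This answer, like the question, is written in Python 2 (urllib2 and print statements). A minimal Python 3 port of the same loop, assuming the page structure is otherwise unchanged:

from bs4 import BeautifulSoup
import urllib.request  # urllib2 became urllib.request in Python 3

URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start="

for i in range(0, 100, 10):
    soup = BeautifulSoup(urllib.request.urlopen(URL + str(i)).read(), 'html.parser')
    for x in soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'}):
        company = x.find('span', attrs={"class": "company"})
        print('company:', company.text.strip())
        job = x.find('a', attrs={'data-tn-element': "jobTitle"})
        print('job:', job.text.strip())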

1 vote

Try the code below. It follows the Next link from page to page until it reaches the last page. If you only want roughly the first 100 records, replace while True: with while page_num < 100:.

from bs4 import BeautifulSoup
import pandas as pd
import re
import requests  # needed for requests.Session() below

headers = {'User-Agent':
           'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

page = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka"
company_name = []
job_title = []
page_num = 10  # offset for the next page; the first request uses the base URL (start=0)
session = requests.Session()
while True:
    pageTree = session.get(page, headers=headers)
    pageSoup = BeautifulSoup(pageTree.content, 'html.parser')
    jobs = pageSoup.find_all("a", {"data-tn-element": "jobTitle"})
    companies = pageSoup.find_all("span", {"class": "company"})
    for company, job in zip(companies, jobs):
        company_name.append(company.text.replace("\n", ""))
        job_title.append(job.text)
    if pageSoup.find("span", text=re.compile("Next")):  # a Next link means more pages remain
        page = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start={}".format(page_num)
        page_num += 10  # Indeed pages in steps of 10 records
    else:
        break

print(company_name)
print(job_title)
df = pd.DataFrame({"company_name":company_name,"job_title":job_title})
print(df.head(1000))
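One caveat (my observation, not from the original answer): zipping two separate find_all lists assumes every card has both a title and a company, and the lists can drift out of alignment when a field is missing. Iterating per job card, as the question's own code does, keeps each pair together:

for card in pageSoup.find_all("div", {"class": "jobsearch-SerpJobCard"}):
    job = card.find("a", {"data-tn-element": "jobTitle"})
    company = card.find("span", {"class": "company"})
    if job and company:  # skip cards missing either field
        job_title.append(job.text.strip())
        company_name.append(company.text.replace("\n", ""))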