Python web scraping: cycling through tabs


Looking for help iterating through all of the tabs on a website to capture all of the relevant information.

On the site below there are tabs labeled 5x5, 5x10, 5x15, 10x10, and so on. I am not sure how to write a loop in my script so that it cycles through the tabs. Any help is much appreciated.
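(Judging from the URL in the script below, each tab appears to map to the `size` query parameter, so one way to "cycle through the tabs" is to build one URL per size. A minimal sketch; the size list is an assumption based on the tabs named above:)

```python
# Base store URL taken from the script below; the sizes list is an
# assumption based on the tab labels mentioned in the question.
base_url = ('https://www.lifestorage.com/storage-units/florida/'
            'orlando/32810/610-near-lockhart/')
sizes = ['5x5', '5x10', '5x15', '10x10']  # extend with the remaining tabs

# One URL per tab; feed this list into the existing "for my_url in urls" loop.
urls = [f'{base_url}?size={size}' for size in sizes]
```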

Below is the Python script:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import csv

urls = [
    'https://www.lifestorage.com/storage-units/florida/orlando/32810/610-near-lockhart/?size=5x5'
]

filename = 'life_storage.csv'

f = open(filename, 'a+')
csv_writer = csv.writer(f) 

headers = ['unit_size', 'unit_type', 'description', 'online_price', 'reg_price', 'store_address', 'store_city', 'store_state', 'store_postalcode' ]

##unit_size = 5'x10' without the '
##unit_type = climate controlled or not (this could be blank if non-climate)
##description = the level it's on and the type of access
##online_price = the $##/mo text
##reg_price = the scratched-off $## text

csv_writer.writerow(headers)

for my_url in urls:
    uClient = uReq(my_url)
    page_html = uClient.read()
    uClient.close()
    page_soup = soup(page_html, 'html.parser')   


    store_locator = page_soup.findAll("div", {"itemprop": "address"})
    containers = page_soup.findAll("ul", {"id": "spaceList"})

    for container in containers:
        for store_location in store_locator:
            store_address1 = store_location.find("span", {"itemprop": "streetAddress"})
            store_address = store_address1.text
            store_city1 = store_location.find("span", {"itemprop": "addressLocality"})
            store_city = store_city1.text
            store_state1 = store_location.find("span", {"itemprop": "addressRegion"})
            store_state = store_state1.text
            store_postalcode1 = store_location.find("span", {"itemprop": "postalCode"})
            store_postalcode = store_postalcode1.text
            title_container = container.find("div", {"class": "storesRow"})
            unit_size = title_container.text
            unit_container = container.find("div", {"class": "storesRow"})
            unit_type = unit_container.strong.text
            description_container = container.find("ul", {"class": "features"})
            description = description_container.text
            online_price_container = container.find("div", {"class": "priceBox"})
            online_price =  online_price_container.strong.text
            reg_price_container = container.find("div", {"class": "priceBox"})
            reg_price = reg_price_container.i.text

        csv_writer.writerow([unit_size, unit_type, description, online_price, reg_price, store_address, store_city, store_state, store_postalcode])

f.close()

Below are the snippets from the HTML body that are relevant to the loop:

//////////\\\\\\\Description BOX



<div class="storesRow">
    <strong>
<a href="/reservation/choose/?store=610&amp;type=1"> 5' x 5'<sup>*</sup> - Climate Controlled </a>
</strong>
    <ul class="features">
        <li>Indoor access</li>
        <li>Ground Level</li>
    </ul>
</div>



//////////\\\\\\\\\PRICE BOX

<div class="priceBox">
<strong>
        $25/mo
        <i> $27</i>
</strong>
<em class="pOnly ">Phone &amp; online only</em>
<div class="specialsMessage">
</div>
</div>


//////////\\\\\\\\\ADDRESS BOX


<div itemprop="address" itemscope="" itemtype="https://schema.org/PostalAddress">
<em>
<i class="fa fa-map-marker"></i>
<span itemprop="streetAddress">7244 Overland Rd </span>
<span itemprop="addressLocality">Orlando</span>,

        <span itemprop="addressRegion">FL</span>
<span itemprop="postalCode">32810</span>
</em>
</div>

Current output: (screenshot omitted)

Desired output: (screenshot omitted)

python html web-scraping beautifulsoup
1 Answer

Your indentation is wrong: writerow() should be inside the inner for loop.

But getting the correct text out of the items will likely take some more work. See the code.
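(The answer's code block did not survive the page conversion. Below is a sketch of what the fix likely looks like, with writerow() moved inside the loop and the text cleaned up. The inline HTML fragment stands in for the downloaded page and is an assumption based on the snippets in the question; io.StringIO stands in for the CSV file:)

```python
import csv
import io
from bs4 import BeautifulSoup

# Inline fragment standing in for page_html (assumed structure,
# reconstructed from the snippets shown in the question).
page_html = """
<ul id="spaceList">
  <li>
    <div class="storesRow">
      <strong><a href="#">5' x 5'<sup>*</sup> - Climate Controlled</a></strong>
      <ul class="features"><li>Indoor access</li><li>Ground Level</li></ul>
    </div>
    <div class="priceBox"><strong> $25/mo <i>$27</i></strong></div>
  </li>
</ul>
"""

page_soup = BeautifulSoup(page_html, 'html.parser')

f = io.StringIO()            # stands in for open('life_storage.csv', 'a+')
csv_writer = csv.writer(f)

space_list = page_soup.find("ul", {"id": "spaceList"})
for item in space_list.find_all("li", recursive=False):
    title = item.find("div", {"class": "storesRow"}).strong
    unit = " ".join(title.text.split())          # collapse whitespace
    features = item.find("ul", {"class": "features"})
    description = ", ".join(li.text for li in features.find_all("li"))
    price_box = item.find("div", {"class": "priceBox"}).strong
    reg_price = price_box.i.text.strip()         # the struck-through price
    online_price = price_box.text.replace(reg_price, "").split()[0]
    # writerow() inside the loop: one row per unit, not one per page
    csv_writer.writerow([unit, description, online_price, reg_price])

print(f.getvalue())
```

The same loop body slots into the question's script in place of the nested store_locator loop; the address fields can be read once per page and appended to each row.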
