Python scraper advice

I've been working on a scraper for a while, and am very close to getting it to run as intended. My code is as follows:

import urllib.request
from bs4 import BeautifulSoup


# Crawls main site to get a list of city URLs
def getCityLinks():
    city_sauce = urllib.request.urlopen('https://www.prodigy-living.co.uk/') # Enter url here 
    city_soup = BeautifulSoup(city_sauce, 'html.parser')
    the_city_links = []

    for city in city_soup.findAll('div', class_="city-location-menu"):
        for a in city.findAll('a', href=True, text=True):
            the_city_links.append('https://www.prodigy-living.co.uk/' + a['href'])
    return the_city_links

# Crawls each of the city web pages to get a list of unit URLs
def getUnitLinks():
    for the_city_links in getCityLinks():
        unit_sauce = urllib.request.urlopen(the_city_links)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        for unit_href in unit_soup.findAll('a', class_="btn white-green icon-right-open-big", href=True):
            yield('https://www.prodigy-living.co.uk/' + unit_href['href'])

the_unit_links = []
for link in getUnitLinks():
    the_unit_links.append(link)

# Soups returns all of the html for the items in the_unit_links


def soups():
    for the_links in the_unit_links:
        try:
            sauce = urllib.request.urlopen(the_links)
            for things in sauce:
                soup_maker = BeautifulSoup(things, 'html.parser')
                yield(soup_maker)
        except:
            print('Invalid url')

# Below scrapes property name, room type and room price

def getPropNames(soup):
    try:
        for propName in soup.findAll('div', class_="property-cta"):
            for h1 in propName.findAll('h1'):
                print(h1.text)
    except:
        print('Name not found')

def getPrice(soup):
    try:
        for price in soup.findAll('p', class_="room-price"):
            print(price.text)
    except:
        print('Price not found')


def getRoom(soup):
    try:
        for theRoom in soup.findAll('div', class_="featured-item-inner"):
            for h5 in theRoom.findAll('h5'):
                print(h5.text)
    except:
        print('Room not found')

for soup in soups():
    getPropNames(soup)
    getPrice(soup)
    getRoom(soup)

When I run this, it returns all of the prices for all of the URLs collected. However, it doesn't return the names or the rooms, and I'm not sure why. I'd really appreciate any pointers on this, or ways to improve my code - been learning Python for a few months now!

I think the links you are scraping eventually redirect you to another website, in which case your scraping functions are useless! For instance, the link for a room in Birmingham redirects you to another site.
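As a rough sketch of how you might filter those out up front (the `is_on_site` helper is my own, not from the code above), you can compare each href's hostname against the site's own before fetching it:

```python
from urllib.parse import urlparse

base_host = urlparse("https://www.prodigy-living.co.uk/").netloc

def is_on_site(url):
    # Relative hrefs have an empty netloc, so they count as on-site.
    host = urlparse(url).netloc
    return host in ("", base_host)

print(is_on_site("/student-accommodation/birmingham"))              # True
print(is_on_site("http://www.iqstudentaccommodation.com/penworks")) # False
```

Note this only catches hrefs that point off-site in the HTML itself; a link that HTTP-redirects after the request would need a check on `response.url` (or `response.history`) once you have fetched it.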

Also, be careful when using the find and find_all methods in BS. The first returns only one tag (for when you want a single property name), while find_all() will return a list (for, e.g., multiple room prices and types).
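A minimal illustration of that difference, on a made-up HTML snippet shaped like the tags scraped above:

```python
from bs4 import BeautifulSoup

html = """
<div class="property-cta"><h1>Example House</h1></div>
<p class="room-price">£120</p>
<p class="room-price">£150</p>
"""
soup = BeautifulSoup(html, "html.parser")

# find() returns the first matching Tag (or None if nothing matches).
name = soup.find("div", class_="property-cta").h1.get_text()

# find_all() always returns a list, even when only one tag matches.
prices = [p.get_text() for p in soup.find_all("p", class_="room-price")]

print(name)    # Example House
print(prices)  # ['£120', '£150']
```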

Anyway, I have simplified your code a bit, and this is how I came across your problem. Perhaps you'd like to take some inspiration from it:

import requests
from bs4 import BeautifulSoup

main_url = "https://www.prodigy-living.co.uk/"

# Getting individual cities url
response = requests.get(main_url)
soup = BeautifulSoup(response.text, "html.parser")
city_tags = soup.find("div", class_ = "footer-city-nav") # Bottom of page, not loaded dynamically
cities_links = [main_url+tag["href"] for tag in city_tags.find_all("a")] # Links to cities


# Getting the individual links to the apts
indiv_apts = []

for link in cities_links[0:4]:
    print("At link:", link)
    response = requests.get(link)
    soup = BeautifulSoup(response.text, "html.parser")
    links_tags = soup.find_all("a", class_ = "btn white-green icon-right-open-big")

    for url in links_tags:
        indiv_apts.append(main_url+url.get("href"))

# Now defining your functions
def GetName(tag):
    print(tag.find("h1").get_text())

def GetType_Price(tags_list):
    for tag in tags_list:
        print(tag.find("h5").get_text())
        print(tag.find("p", class_ = "room-price").get_text())

# Now scraping each of the apts - name, price, room.
for link in indiv_apts[0:2]:
    print("At link:", link)
    response = requests.get(link)
    soup = BeautifulSoup(response.text, "html.parser")
    property_tag = soup.find("div", class_ = "property-cta")
    rooms_tags = soup.find_all("div", class_ = "featured-item")
    GetName(property_tag)
    GetType_Price(rooms_tags)

You will see that at the second element of the list, you get an AttributeError, because you are no longer on your site's pages. Indeed:

>>> print(indiv_apts[1])
https://www.prodigy-living.co.uk/http://www.iqstudentaccommodation.com/student-accommodation/birmingham/penworks-house?utm_source=prodigylivingwebsite&utm_campaign=birminghampagepenworksbutton&utm_medium=referral # You will not scrape the expected link right at the beginning
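One way to avoid building those doubled URLs in the first place is `urllib.parse.urljoin` from the standard library (a sketch, not part of the code above): it resolves relative hrefs against the base URL but leaves absolute hrefs untouched.

```python
from urllib.parse import urljoin

main_url = "https://www.prodigy-living.co.uk/"

# A relative href is resolved against the base URL.
print(urljoin(main_url, "/student-accommodation/birmingham"))
# https://www.prodigy-living.co.uk/student-accommodation/birmingham

# An absolute href is returned unchanged, instead of being
# concatenated onto the base as in the output above.
print(urljoin(main_url, "http://www.iqstudentaccommodation.com/penworks"))
# http://www.iqstudentaccommodation.com/penworks
```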

Next time, come with a precise problem to solve, or otherwise have a look at the Code Review section.

On find vs find_all: https://www.crummy.com/software/BeautifulSoup/bs4/doc/#calling-a-tag-is-like-calling-find-all

Finally, I believe your question is also answered here:

Cheers :)