How to click Next button while scraping webpage
I'm trying to scrape the following website, http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth, and I need to be able to click the Next button to move on to the next page. I've tried a couple of different approaches; the two code snippets below are the ones I tried, but neither works, and I really don't know how to get to the next page.
# Scraping
def parse(self, response):
    item = GameItem()
    saved_name = ""
    item["Category"] = response.css("span.titletext::text").extract()
    for game in response.css("tr[class^=deckdbbody]"):
        saved_name = game.css("a.card_popup::text").extract_first() or saved_name
        item["card_name"] = saved_name.strip()
        if item["card_name"] != None:
            saved_name = item["card_name"].strip()
        else:
            item["card_name"] = saved_name
        item["Condition"] = game.css("td[class^=deckdbbody].search_results_7 a::text").get()
        item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
        item["Price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
        yield item
    next_page = response.css('#content > div:last-of-type > a\(\@href\):last-of-type').get()
    if next_page is not None:
        yield response.follow(next_page, self.parse)
# Scraping
def parse(self, response):
    item = GameItem()
    saved_name = ""
    item["Category"] = response.css("span.titletext::text").extract()
    for game in response.css("tr[class^=deckdbbody]"):
        saved_name = game.css("a.card_popup::text").extract_first() or saved_name
        item["card_name"] = saved_name.strip()
        if item["card_name"] != None:
            saved_name = item["card_name"].strip()
        else:
            item["card_name"] = saved_name
        item["Condition"] = game.css("td[class^=deckdbbody].search_results_7 a::text").get()
        item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
        item["Price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
        yield item
    next_page = response.css('table+ div a:nth-child(8)::attr(href)').get()
    if next_page is not None:
        yield response.follow(next_page, self.parse)
You cannot locate an element by its text with a CSS expression. That's why I'd strongly recommend using XPath for this part:
next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
if next_page is not None:
    yield response.follow(next_page, self.parse)
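
For context, here is a minimal, self-contained sketch of how that XPath-based pagination could slot into the parse method above. It assumes the same page structure as the question, yields plain dicts instead of the original GameItem so it runs on its own, and the spider name and class are made up for the example:

import scrapy


class StarCitySpider(scrapy.Spider):
    # Hypothetical spider for illustration; not the asker's actual spider.
    name = "starcity"
    start_urls = [
        "http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth",
    ]

    def parse(self, response):
        category = response.css("span.titletext::text").getall()
        saved_name = ""
        for game in response.css("tr[class^=deckdbbody]"):
            # Rows without their own card link reuse the last card name seen.
            saved_name = game.css("a.card_popup::text").get() or saved_name
            yield {
                "Category": category,
                "card_name": saved_name.strip(),
                "Condition": game.css("td[class^=deckdbbody].search_results_7 a::text").get(),
                "stock": game.css("td[class^=deckdbbody].search_results_8::text").get(),
                "Price": game.css("td[class^=deckdbbody].search_results_9::text").get(),
            }

        # Find the pagination link by its visible text, which CSS cannot do.
        next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

The key point is that contains(., "- Next>>") matches the anchor by its visible text, and response.follow resolves the (possibly relative) href and re-enters parse on the next page until no such link is found.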