Splinter - Element is not clickable because another element <p> obscures it
I'm trying to get some thumbnail src values from a website and click each link, so that I can fetch the larger images later. For this I'm using Splinter and BeautifulSoup.
This is the HTML of the first element I need to get:
To do that, I have the following code:
executable_path = {"executable_path": "/path/to/geckodriver"}
browser = Browser("firefox", **executable_path, headless=False
def get_player_images():
url = f'https://www.premierleague.com/players'
# Initiate a splinter instance of the URL
browser.visit(url)
browser.find_by_tag('div[class="table playerIndex"]')
soup = BeautifulSoup(browser.html, 'html.parser')
for el in soup:
td = el.findAll('td')
for each_td in td:
link = each_td.find('a', href=True)
if link:
print (link['href'])
image = each_td.find('img')
if image:
print(image['src'])
# run
get_player_images()
But I run into 2 problems after the browser opens:
I only get the src for the first two players. After that, the photos are missing and I don't understand why:
/players/19970/Max-Aarons/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p232980.png
/players/13279/Abdul-Rahman-Baba/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p118335.png
/players/13286/Tammy-Abraham/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/3512/Adam-Smith/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/10905/Che-Adams/overview
....
Also, if I try to click the href link, using:
if link:
    browser.click_link_by_partial_href(link['href'])
I get the error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <a class="playerName" href="/players/19970/Max-Aarons/overview"> is not clickable at point (244,600) because another element <p> obscures it
What am I doing wrong? I'm running into a lot of trouble with Selenium.
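For reference, the ElementClickInterceptedException usually means the link is covered by another element (a cookie banner or similar overlay) or sits outside the viewport. A common workaround, sketched below and not tested against this particular page, is to scroll the link into view and click it with JavaScript via Splinter's execute_script; click_obscured_link is a hypothetical helper, not part of Splinter:

def click_obscured_link(browser, href):
    # Scroll the target link to the centre of the viewport, then click it
    # with JavaScript, which is not blocked by overlapping elements.
    script = (
        f"var el = document.querySelector('a[href=\"{href}\"]');"
        "el.scrollIntoView({block: 'center'});"
        "el.click();"
    )
    browser.execute_script(script)

# e.g. click_obscured_link(browser, '/players/19970/Max-Aarons/overview')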
The player data is loaded dynamically via JavaScript. You can use the requests module to get the information.
For example:
import json
import requests

url = 'https://footballapi.pulselive.com/football/players?pageSize=30&compSeasons=274&altIds=true&page={page}&type=player&id=-1&compSeasonId=274'
img_url = 'https://resources.premierleague.com/premierleague/photos/players/250x250/{player_id}.png'
headers = {'Origin': 'https://www.premierleague.com'}

for page in range(1, 10):  # <--- increase this to desired number of pages
    data = requests.get(url.format(page=page), headers=headers).json()

    # uncomment this to print all data:
    # print(json.dumps(data, indent=4))

    for player in data['content']:
        print('{:<50} {}'.format(player['name']['display'], img_url.format(player_id=player['altIds']['opta'])))
Prints:
Ethan Ampadu https://resources.premierleague.com/premierleague/photos/players/250x250/p199598.png
Joseph Anang https://resources.premierleague.com/premierleague/photos/players/250x250/p447879.png
Florin Andone https://resources.premierleague.com/premierleague/photos/players/250x250/p93284.png
André Gomes https://resources.premierleague.com/premierleague/photos/players/250x250/p120250.png
Andreas Pereira https://resources.premierleague.com/premierleague/photos/players/250x250/p156689.png
Angeliño https://resources.premierleague.com/premierleague/photos/players/250x250/p145235.png
Faustino Anjorin https://resources.premierleague.com/premierleague/photos/players/250x250/p223332.png
Michail Antonio https://resources.premierleague.com/premierleague/photos/players/250x250/p57531.png
Cameron Archer https://resources.premierleague.com/premierleague/photos/players/250x250/p433979.png
Archie Davies https://resources.premierleague.com/premierleague/photos/players/250x250/p215061.png
Stuart Armstrong https://resources.premierleague.com/premierleague/photos/players/250x250/p91047.png
Marko Arnautovic https://resources.premierleague.com/premierleague/photos/players/250x250/p41464.png
Kepa Arrizabalaga https://resources.premierleague.com/premierleague/photos/players/250x250/p109745.png
Harry Arter https://resources.premierleague.com/premierleague/photos/players/250x250/p48615.png
Daniel Arzani https://resources.premierleague.com/premierleague/photos/players/250x250/p200797.png
... and so on.
Note: to get smaller thumbnails, change 250x250 to 40x40 in the image URL.
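If the goal is to save the portraits to disk rather than just print the URLs, the same API response can feed a small download loop. Below is a minimal sketch along the lines of the code above; the player_photos directory and the file-name scheme are arbitrary choices of mine:

import os
import requests

url = 'https://footballapi.pulselive.com/football/players?pageSize=30&compSeasons=274&altIds=true&page={page}&type=player&id=-1&compSeasonId=274'
img_url = 'https://resources.premierleague.com/premierleague/photos/players/250x250/{player_id}.png'
headers = {'Origin': 'https://www.premierleague.com'}

os.makedirs('player_photos', exist_ok=True)  # output directory (arbitrary name)

for page in range(1, 3):  # first two pages as a demo
    data = requests.get(url.format(page=page), headers=headers).json()
    for player in data['content']:
        player_id = player['altIds']['opta']
        resp = requests.get(img_url.format(player_id=player_id), headers=headers)
        if resp.status_code == 200:  # skip players whose photo request fails
            fname = '{}_{}.png'.format(player['name']['display'].replace(' ', '_'), player_id)
            with open(os.path.join('player_photos', fname), 'wb') as f:
                f.write(resp.content)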