soup.find_all and find_elements_by_class return no elements found
I tried two ways of finding the betting odds, but got no results. I came up with nothing.
Here is the code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import pandas as pd
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.optibet.lv/sport/wcg/CS:GO-5541")
odds = driver.find_elements_by_class('event-block-row__odd event-block-row__odd_clickable event-block-row__odd_without-middle-odd')
if odds is not None:
    print('found odds element')
    print(odds)
That didn't work; it just printed 'found odds element'. Then I tried changing the class name to odds = driver.find_elements_by_class('odd__value'), to no avail.
After that I tried using BeautifulSoup:
from selenium import webdriver
from bs4 import BeautifulSoup
url = "https://www.optibet.lv/sport/wcg/CS:GO-5541"
driver = webdriver.Chrome()
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
containers = soup.find_all("div", class_="event-block-row__odd event-block-row__odd_clickable event-block-row__odd_without-middle-odd")
print (len(containers))
This returns '0'. I'm out of ideas and not very experienced. Any help?
Switch to the iframe before looking up the class, then loop over the list.
driver.get("https://www.optibet.lv/sport/wcg/CS:GO-5541")
driver.implicitly_wait(10)
driver.switch_to.frame(driver.find_element_by_css_selector("#iFrameResizer0"))
odds = driver.find_elements_by_class_name('odd__value')
if odds is not None:
    print('found odds element')
    for odd in odds:
        print(odd.text)
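If you are on a recent Selenium 4 release, the find_elements_by_* helpers have been removed; here is a minimal sketch of the same iframe approach with the current By API and an explicit wait (the frame selector and class name are reused from the answer above, the 15-second timeout is just an assumption):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.optibet.lv/sport/wcg/CS:GO-5541")
wait = WebDriverWait(driver, 15)
# Wait for the odds iframe to appear, then switch into it.
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "#iFrameResizer0")))
# Wait until at least one odd value has been rendered inside the frame.
odds = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME, "odd__value")))
for odd in odds:
    print(odd.text)
driver.quit()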
Many sites have scraping protection and, on top of that, your site is very heavy. You can try this, but BeautifulSoup has its limits:
import urllib.request
import bs4 as bs
url_1 = 'https://www.optibet.lv/sport/wcg/CS:GO-5541'
sauce_1 = urllib.request.urlopen(url_1).read()
soup_1 = bs.BeautifulSoup(sauce_1, 'lxml')
for table in soup_1.find_all('div', class_='event-block-row__odd event-block-row__odd_clickable event-block-row__odd_without-middle-odd'):
    print(table.text)
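Because the odds are rendered client-side inside that iframe, a plain urllib.request fetch will most likely not contain them at all, so the loop above comes back empty on this site. If you still prefer BeautifulSoup for the parsing, one option (a sketch only; the frame selector and class name are reused from the question and the first answer) is to let Selenium render the frame and then hand its HTML to BeautifulSoup:
from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import time

driver = webdriver.Chrome()
driver.get("https://www.optibet.lv/sport/wcg/CS:GO-5541")
time.sleep(10)  # crude wait for the client-side rendering; an explicit wait would be cleaner
# The odds live inside the embedded frame, so parse the frame's HTML, not the top page.
driver.switch_to.frame(driver.find_element(By.CSS_SELECTOR, "#iFrameResizer0"))
soup = BeautifulSoup(driver.page_source, "html.parser")
# select() matches on individual classes, so it is less brittle than passing the whole
# space-separated class string to find_all().
for odd in soup.select("div.event-block-row__odd"):
    print(odd.get_text(strip=True))
driver.quit()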