Python Selenium iterate table of links clicking each link

This has been asked before, but I'm still struggling to get it to work.

The web page has a table of links, and I want to loop through it and click each link.

Here is my code so far:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=r'C:\Users\my_path\chromedriver_96.exe')
driver.get(r"https://www.fidelity.co.uk/shares/ftse-350/")

try:
    element = WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.CLASS_NAME, "table-scroll")))

    table = element.find_elements_by_xpath("//table//tbody/tr")
 
    for row in table[1:]:
        print(row.get_attribute('innerHTML'))
        # link.click()

finally:
    driver.close()

Sample output:

            <td>FOUR</td>
            <td><a href="/factsheets/4IMPRINT-GROUP/GB0006640972-GBP/?id=GB0006640972GBP&amp;idType=isin&amp;marketCode=&amp;idCurrencyid=" target="_parent">4imprint Group plc</a></td>
            <td>Media &amp; Publishing</td>
        

            <td>888</td>
            <td><a href="/factsheets/888-HOLDINGS/GI000A0F6407-GBP/?id=GI000A0F6407GBP&amp;idType=isin&amp;marketCode=&amp;idCurrencyid=" target="_parent">888 Holdings</a></td>
            <td>Hotels &amp; Entertainment Services</td>
        

            <td>ASL</td>
            <td><a href="/factsheets/ABERFORTH-SMALLER-COMPANIES-TRUST/GB0000066554-GBP/?id=GB0000066554GBP&amp;idType=isin&amp;marketCode=&amp;idCurrencyid=" target="_parent">Aberforth Smaller Companies Trust</a></td>
            <td>Collective Investments</td>


How do I click each href and then move on to the next one?

Many thanks.

EDIT: I went with this solution (Prophet's solution with a few minor tweaks):

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException
import time
from selenium.webdriver.common.action_chains import ActionChains


driver = webdriver.Chrome(executable_path=r'C:\Users\my_path\chromedriver_96.exe')
driver.get(r"https://www.fidelity.co.uk/shares/ftse-350/")
actions = ActionChains(driver)
#close the cookies banner
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.ID, "ensCloseBanner"))).click()
#wait for the first link in the table
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table//tbody/tr/td/a")))
#extra wait to make all the links loaded
time.sleep(1)
#get the total links amount
links = driver.find_elements_by_xpath('//table//tbody/tr/td/a') 

for index, val in enumerate(links):
    try:
        #get the links again after getting back to the initial page in the loop
        links = driver.find_elements_by_xpath('//table//tbody/tr/td/a')
        #scroll to the n-th link, it may be out of the initially visible area
        actions.move_to_element(links[index]).perform()
        links[index].click()
        #scrape the data on the new page and get back with the following command
        driver.execute_script("window.history.go(-1)") #you can alternatevely use this as well: driver.back()
        WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table//tbody/tr/td/a")))
        time.sleep(2)
    except StaleElementReferenceException:  
        pass

To do what you want here, you first need to close the cookie banner at the bottom of the page.
Then you can iterate over the links in the table. Since clicking each link opens a new page, after scraping the data there you will have to get back to the main page to grab the next link. You can't simply put all the links into a list up front and iterate over that list, because once you navigate to another page every element Selenium located on the initial page becomes stale.
Your code could look something like this:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
import time


driver = webdriver.Chrome(executable_path=r'C:\Users\my_path\chromedriver_96.exe')
driver.get(r"https://www.fidelity.co.uk/shares/ftse-350/")
actions = ActionChains(driver)
#close the cookies banner
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.ID, "ensCloseBanner"))).click()
#wait for the first link in the table
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table//tbody/tr/td/a")))
#extra wait to make all the links loaded
time.sleep(1)
#get the total links amount
links = driver.find_elements_by_xpath('//table//tbody/tr/td/a') 
for index, val in enumerate(links):
    #get the links again after getting back to the initial page in the loop
    links = driver.find_elements_by_xpath('//table//tbody/tr/td/a')
    #scroll to the n-th link, it may be out of the initially visible area
    actions.move_to_element(links[index]).perform()
    links[index].click()
    #scrape the data on the new page and get back with the following command
    driver.execute_script("window.history.go(-1)") #you can alternatevely use this as well: driver.back()
    WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//table//tbody/tr/td/a")))
    time.sleep(1)
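
A different way around the staleness problem (not what the answer above does, just an alternative sketch) is to read every link's href once and then open each URL directly with driver.get(), since plain URL strings can never go stale. get_attribute('href') returns the resolved absolute URL, so the relative /factsheets/... paths are not a problem:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=r'C:\Users\my_path\chromedriver_96.exe')
driver.get(r"https://www.fidelity.co.uk/shares/ftse-350/")

#wait for the table links, then read the href of every anchor once
WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.XPATH, "//table//tbody/tr/td/a")))
urls = [a.get_attribute('href')
        for a in driver.find_elements_by_xpath("//table//tbody/tr/td/a")]

#plain strings never go stale, so nothing needs to be re-located in the loop
for url in urls:
    driver.get(url)
    #scrape whatever you need from the factsheet page here
    print(driver.title)

driver.quit()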

You basically have to do the following:

  1. Click on the cookies button if it is available.
  2. Get all the links on the page.
  3. Iterate over the list of links and click each one (scrolling to the web element first and performing the click on the list item), then navigate back to the original screen.

Code:

driver = webdriver.Chrome(driver_path)
driver.maximize_window()
wait = WebDriverWait(driver, 30)

driver.get("https://www.fidelity.co.uk/shares/ftse-350/")

try:
    wait.until(EC.element_to_be_clickable((By.ID, "ensCloseBanner"))).click()
    print('Click on the cookies button')
except:
    print('Could not click on the cookies button')
    pass

driver.execute_script("window.scrollTo(0, 750)")

try:
    all_links = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//table//tbody/tr/td/a")))
    print("We have got to deal with", len(all_links), 'links')
    j = 0
    for link in range(len(all_links)):
        links = wait.until(EC.presence_of_all_elements_located((By.XPATH, f"//table//tbody/tr/td/a")))
        driver.execute_script("arguments[0].scrollIntoView(true);", links[j])
        time.sleep(1)
        links[j].click()
        # here write the code to scrape something once the click is performed
        time.sleep(1)
        driver.execute_script("window.history.go(-1)")
        j = j + 1
        print(j)
except:
    print('Bot Could not execute all the links properly')
    pass

Imports:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import time

PS: To handle stale element references you have to define the list of web elements again inside the loop.
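
If you want to avoid the fixed time.sleep() calls, a possible refinement (my own suggestion, not part of the answers above) is to wait for the element you just clicked to become stale, which confirms the new page has replaced the old one, and then re-locate the full list after navigating back. A minimal sketch, reusing the driver already created above:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

LINKS_XPATH = "//table//tbody/tr/td/a"
wait = WebDriverWait(driver, 20)

total = len(wait.until(EC.presence_of_all_elements_located((By.XPATH, LINKS_XPATH))))
for i in range(total):
    #re-locate inside the loop: previous references are stale after navigating
    links = wait.until(EC.presence_of_all_elements_located((By.XPATH, LINKS_XPATH)))
    link = links[i]
    driver.execute_script("arguments[0].scrollIntoView(true);", link)
    link.click()
    #the clicked element going stale confirms the factsheet page has loaded
    wait.until(EC.staleness_of(link))
    #scrape the data on the new page here, then go back for the next link
    driver.back()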