How to force a script to execute or render in the browser from Python when scraping?

I am working on data scraping and machine learning. I am new to both Python and scraping. I am trying to scrape this particular website:

https://www.space-track.org/

From what I observed in the browser, several scripts run between the login and the next page, and those scripts are what fetch the table data. I can log in successfully and then retrieve the next page through the session; what I am missing is the data those in-between scripts fetch. I need the data from the `satcat` table, and I also need to implement pagination. Below is my code:

    import time
    from requests_html import HTMLSession

    url = 'https://www.space-track.org/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) '
                      'Gecko/20100101 Firefox/66.0'
    }
    login_data = {
        'identity': '',
        'password': '',
        'btnLogin': 'LOGIN',
    }

    session = HTMLSession()

    # GET the login page first so the server sets the CSRF cookie
    preLogin = session.get(url + 'auth/login', headers=headers)
    time.sleep(3)

    # the login form expects the token from the CSRF cookie to be echoed back
    csrf = session.cookies.get('spacetrack_csrf_cookie')
    login_data['spacetrack_csrf_token'] = csrf

    login = session.post(url + 'auth/login', data=login_data,
                         headers=headers, allow_redirects=True)
    time.sleep(1)
    print('login landed on:', login.url)

    # fetch the page after login and try to render its JavaScript
    time.sleep(3)
    postLogin = session.get(url)
    postLogin.html.render(sleep=5, keep_page=True)

As you can see, I have already used the requests_html library to render the HTML, but I still have not managed to get the data. This is the URL that is called internally from the JS and that returns my data:

https://www.space-track.org/master/loadSatCatData
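Since the table data comes from that XHR endpoint, one option (instead of rendering) is to call it directly with the already-authenticated session. Below is a minimal sketch; the payload parameter names (`iDisplayStart`/`iDisplayLength`, the DataTables paging convention) are an assumption, so check the actual request in the browser's Network tab before relying on them:

```python
import requests

BASE = 'https://www.space-track.org/'

def satcat_payload(page, page_length=25):
    # DataTables-style paging parameters (assumed; verify in the Network tab)
    return {
        'iDisplayStart': (page - 1) * page_length,
        'iDisplayLength': page_length,
    }

def load_satcat(session, page=1):
    # `session` must already hold the login cookies from the flow above
    resp = session.post(BASE + 'master/loadSatCatData',
                        data=satcat_payload(page))
    resp.raise_for_status()
    return resp.json()
```

With the session from the login code above, `load_satcat(session, page=2)` would fetch the second page, assuming the endpoint really accepts these parameters and returns JSON.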

Can anyone help me with how to scrape that data, or how to execute that JavaScript?

Thanks :)

You can go with Selenium. It has a function, browser.execute_script(), which will let you execute scripts in the page. Hope this helps :)