How to loop through a dropdown menu on an ASPX dynamic website and scrape the data using Python Requests and BeautifulSoup
I deleted my previous post because it was unclear.
This is my first post on Stack Overflow. Before asking, I read the posts "request using python to asp.net page" and "Data Scraping, aspx", and they got me most of the way, but I still need a little help.
I want to scrape the website http://up-rera.in/, which is a dynamic ASPX website. Inspecting the page shows that the content is actually served from a different link: http://upreraportal.cloudapp.net/View_projects.aspx
My question is how to loop over every entry in the dropdown menu and click Search to get the page content for each one. For example, I am able to select Agra and scrape its page details.
Since I am still in the learning stage, I would like to avoid Selenium for now and stick with Requests.
Could anyone point me in the right direction and help me fix the code below?
import requests
from bs4 import BeautifulSoup
import os
import time
import csv
final_data = []
url = "http://upreraportal.cloudapp.net/View_projects.aspx"
headers= {'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Content-Type':'application/x-www-form-urlencoded',
'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
formfields={'__VIEWSTATE':'9VAv5iAKM/uLKHgQ6U91ShYmoKdKfrPqrxB2y86PhSY8pOPAulcgfrsPDINzwmvXGr+vdlE7FT6eBQCKtAFsJPQ9gQ9JIBTBCGCIjYwFuixL3vz6Q7R0OZTH2cwYmyfPHLOqxh8JbLDfyKW3r3e2UgP5N/4pI1k6DNoNAcEmNYGPzGwFHgUdJz3LYfYuFDZSydsVrwSB5CHAy/pErTJVDMmOackTy1q6Y+TNw7Cnq2imnKnBc70eldJn0gH/rtkrlPMS+WP3CXke6G7nLOzaUVIlnbHVoA232CPRcWuP1ykPjSfX12hAao6srrFMx5GUicO3Dvpir+z0U1BDEjux86Cu5/aFML2Go+3k9iHiaS3+WK/tNNui5vNAbQcPiZrnQy9wotJnw18bfHZzU/77uy22vaC+8vX1cmomiV70Ar33szSWTQjbrByyhbFbz9PHd3IVebHPlPGpdaUPxju5xkFQIJRnojsOARjc76WzTYCf479BiXUKNKflMFmr3Fp5S3BOdKFLBie1fBDgwaXX4PepOeZVm1ftY0YA4y8ObPxkJBcGh5YLxZ4vJr2z3pd8LT2i/2fyXJ9aXR9+SJzlWziu9bV8txiuJHSQNojr10mQv8MSCUAKUjT/fip8F3UE9l+zeQBOC++LEeQiTurHZD0GkNix8zQAHbNpGLBfvgocXZd/4KqqnBCLLwBVQobhRbJhbQJXbGYNs6zIXrnkx7CD9PjGKvRx9Eil19Yb5EqRLJQHSg5OdwafD1U+oyZwr3iUMXP/pJw5cTHMsK3X+dH4VkNxsG+KFzBzynKPdF17fQknzqwgmcQOxD6NN6158pi+9cM1UR4R7iwPwuBCOK04UaW3V1A9oWFGvKLls9OXbLq2DS4L3EyuorEHnxO+p8rrGWIS4aXpVVr4TxR3X79j4i8OVHhIUt8H+jo5deRZ6aG13+mXgZQd5Qu1Foo66M4sjUGs7VUcwYCXE/DP/NHToeU0hUi0sJs7+ftRy07U2Be/93TZjJXKIrsTQxxeNfyxQQMwBYZZRPPlH33t3o3gIo0Hx18tzGYj2v0gaBb+xBpx9mU9ytkceBdBPnZI1kJznArLquQQxN3IPjt6+80Vow74wy4Lvp7D+JCThAnQx4K8QbdKMWzCoKR63GTlBwLK2TiYMAVisM77XdrlH6F0g56PlGQt/RMtU0XM1QXgZvWr3KJDV8UTe0z1bj29sdTsHVJwME9eT62JGZFQAD4PoiqYl7nAB61ajAkcmxu0Zlg7+9N9tXbL44QOcY672uOQzRgDITmX6QdWnBqMjgmkIjSo1qo/VpUEzUXaVo5GHUn8ZOWI9xLrJWcOZeFl0ucyKZePMnIxeUU32EK/NY34eE6UfSTUkktkguisYIenZNfoPYehQF9ASL7t4qLiH5jca4FGgZW2kNKb3enjEmoKqbWDFMkc8/1lsk2eTd/GuhcTysVSxtvpDSlR0tjg8A2hVpR67t2rYm8iO/L1m8ImY48=',
'__VIEWSTATEGENERATOR':'4F1A7E70',
'__EVENTVALIDATION':'jVizPhFNJmo9F/GVlIrlMWMsjQe1UKHfYE4jlpTDfXZHWu9yAcpHUvT/1UsRpbgxYwZczJPd6gsvas8ilVSPkfwP1icGgOTXlWfzykkU86LyIEognwkhOfO1+suTK2e598vAjyLXRf555BXMtCO+oWoHcMjbVX2cHKtpBS1GyyqyyVB8IchAAtDEMD3G5bbzhvof6PX4Iwt5Sv1gXkHRKOR333OcYzmSGJvZgLsmo3qQ+5EOUIK5D71x/ZENmubZXvwbU0Ni6922E96RjCLh5cKgFSne5PcRDUeeDuEQhJLyD04K6N45Ow2RKyu7HN1n1YQGFfgAO3nMCsP51i7qEAohXK957z3m/H+FasHWF2u05laAWGVbPwT35utufotpPKi9qWAbCQSw9vW9HrvN01O97scG8HtWxIOnOdI6/nhke44FSpnvY1oPq+BuY2XKrb2404fKl5EPR4sjvNSYy1/8mn6IDH0eXvzoelNMwr/pKtKBESo3BthxTkkx5MR0J42qhgHURB9eUKlsGulAzjF27pyK4vjXxzlOlHG1pRiQm/wzB4om9dJmA27iaD7PJpQGgSwp7cTpbOuQgnwwrwUETxMOxuf3u1P9i+DzJqgKJbQ+pbKqtspwYuIpOR6r7dRh9nER2VXXD7fRfes1q2gQI29PtlbrRQViFM6ZlxqxqoAXVM8sk/RfSAL1LZ6qnlwGit2MvVYnAmBP9wtqcvqGaWjNdWLNsueL6DyUZ4qcLv42fVcOrsi8BPRnzJx0YiOYZ7gg7edHrJwpysSGDR1P/MZIYFEEUYh238e8I2EAeQZM70zHgQRsviD4o5r38VQf/cM9fjFii99E/mZ+6e0mIprhlM/g69MmkSahPQ5o/rhs8IJiM/GibjuZHSNfYiOspQYajMg0WIGeKWnywfaplt6/cqvcEbqt77tIx2Z0yGcXKYGehmhyHTWfaVkMuKbQP5Zw+F9X4Fv5ws76uCZkOxKV3wj3BW7+T2/nWwWMfGT1sD3LtQxiw0zhOXfY1bTB2XfxuL7+k5qE7TZWhKF4EMwLoaML9/yUA0dcXhoZBnSc',
'ctl00$ContentPlaceHolder1$DdlprojectDistrict':'Agra',
'ctl00$ContentPlaceHolder1$txtProject':'',
'ctl00$ContentPlaceHolder1$btnSearch':'Search'}
# In the form fields above I hard-coded Agra, so I can only scrape one city.
# How do I loop over all the cities?
r = requests.post(url, data=formfields, headers=headers)
data=r.text
soup = BeautifulSoup(data, "html.parser")
get_list = soup.find_all('option')  # gets a list of all <option> tags
for element in get_list:
    cities = element["value"]
    # final_data.append(cities)
    # print(final_data)
get_details = soup.find_all("table", attrs={"id": "ContentPlaceHolder1_GridView1"})
for details in get_details:
    text = details.find_all("tr")[1:]
    for tds in text:
        td = tds.find_all("td")[1]
        rera = td.find_all("span")
        rnumber = ""
        for num in rera:
            rnumber = num.text
        print(rnumber)
Try the code below. It will give you all the results you are after; it only needed a small tweak. I scraped the different district names from the dropdown menu and used them in a loop, so you can fetch all the data one by one. Other than adding a few lines, I didn't really change anything. Your code would read better if you wrapped it in a function.
By the way, I have moved the two huge strings into two variables that are scraped fresh from the landing page, so you no longer have to worry about them, and the code becomes a bit slimmer. ASP.NET validates each POST against the __VIEWSTATE and __EVENTVALIDATION tokens it issued with the page, so fetching fresh values with a GET request is more reliable than hard-coding them.
Here is the corrected code:
import requests
from bs4 import BeautifulSoup

url = "http://upreraportal.cloudapp.net/View_projects.aspx"

response = requests.get(url).text
soup = BeautifulSoup(response, "lxml")

# Scrape the two huge hidden-field values from the landing page
VIEWSTATE = soup.select("#__VIEWSTATE")[0]['value']
EVENTVALIDATION = soup.select("#__EVENTVALIDATION")[0]['value']

for title in soup.select("#ContentPlaceHolder1_DdlprojectDistrict [value]")[:-1]:
    search_item = title.text
    # print(search_item)

    headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
               'Content-Type': 'application/x-www-form-urlencoded',
               'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

    formfields = {'__VIEWSTATE': VIEWSTATE,  # put the scraped value in this variable
                  '__VIEWSTATEGENERATOR': '4F1A7E70',
                  '__EVENTVALIDATION': EVENTVALIDATION,  # put the scraped value in this variable
                  'ctl00$ContentPlaceHolder1$DdlprojectDistrict': search_item,  # this is where the city name changes on each iteration
                  'ctl00$ContentPlaceHolder1$txtProject': '',
                  'ctl00$ContentPlaceHolder1$btnSearch': 'Search'}

    res = requests.post(url, data=formfields, headers=headers).text
    soup = BeautifulSoup(res, "html.parser")
    get_details = soup.find_all("table", attrs={"id": "ContentPlaceHolder1_GridView1"})
    for details in get_details:
        text = details.find_all("tr")[1:]
        for tds in text:
            td = tds.find_all("td")[1]
            rera = td.find_all("span")
            rnumber = ""
            for num in rera:
                rnumber = num.text
            print(rnumber)
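Following the suggestion above about wrapping the code in a function, here is a minimal sketch of how the same logic could be organised and extended to save the results instead of just printing them. The function names and the output file name (rera_numbers.csv) are my own choices for illustration, and it assumes the first <option> in the dropdown is a placeholder entry, which may need adjusting:

import csv
import requests
from bs4 import BeautifulSoup

URL = "http://upreraportal.cloudapp.net/View_projects.aspx"
HEADERS = {'Content-Type': 'application/x-www-form-urlencoded',
           'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

def scrape_district(session, district, viewstate, eventvalidation):
    # Post the same form the Search button submits, for an arbitrary district name
    formfields = {'__VIEWSTATE': viewstate,
                  '__VIEWSTATEGENERATOR': '4F1A7E70',
                  '__EVENTVALIDATION': eventvalidation,
                  'ctl00$ContentPlaceHolder1$DdlprojectDistrict': district,
                  'ctl00$ContentPlaceHolder1$txtProject': '',
                  'ctl00$ContentPlaceHolder1$btnSearch': 'Search'}
    res = session.post(URL, data=formfields, headers=HEADERS).text
    soup = BeautifulSoup(res, "html.parser")
    table = soup.find("table", attrs={"id": "ContentPlaceHolder1_GridView1"})
    rows = []
    if table is None:  # a district may return no results
        return rows
    for tr in table.find_all("tr")[1:]:
        tds = tr.find_all("td")
        if len(tds) > 1:  # skip pager/footer rows with a different shape
            for span in tds[1].find_all("span"):
                rows.append([district, span.text])
    return rows

def main():
    session = requests.Session()
    landing = BeautifulSoup(session.get(URL).text, "html.parser")
    viewstate = landing.select_one("#__VIEWSTATE")['value']
    eventvalidation = landing.select_one("#__EVENTVALIDATION")['value']
    # assumes the first <option> is a "select a district" placeholder
    districts = [o.text for o in landing.select("#ContentPlaceHolder1_DdlprojectDistrict option")[1:]]
    with open("rera_numbers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["district", "rera_number"])
        for district in districts:
            writer.writerows(scrape_district(session, district, viewstate, eventvalidation))

if __name__ == "__main__":
    main()

Using a requests.Session here also keeps any cookies the server sets between the initial GET and the subsequent POSTs, which some ASP.NET sites require.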