How can I match each member to its role in Python with bs4 + lxml when scraping a site?
I want to parse the member/role cards in Python from the NFT-collectible site: https://imaginaryones.com
Here is my code:
find_title = soup.find("title")
find_all_card_titles = soup.findAll("div", class_="v-card__title")
find_all_names = soup.findAll("h4", class_="member__subtitle")
find_all_links_url = soup.findAll("a")
print('Name -', find_title.text)
for names in find_all_names:
for roles in find_all_card_titles:
print(names.text, '-', roles.text)
I get something like this:
Clement - Creator
Clement - Biz / Strategist
Clement - Artist / Partnerships
Clement - PM / Community
Clement - Tech / Contracts
David - Creator
David - Biz / Strategist
David - Artist / Partnerships
David - PM / Community
David - Tech / Contracts
etc...
But I need results like this:
Clement - Creator
David - Biz / Strategist
Gregory - Artist / Partnerships
etc...
How can I do that?
Try:
import requests
from bs4 import BeautifulSoup

url = "https://imaginaryones.com"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

for member in soup.select(".team__list .member__subtitle"):
    # for each member name, take the closest preceding card title (that member's role)
    print(member.text, "-", member.find_previous(class_="v-card__title").text)
Prints:
Clement - Creator
David - Biz / Strategist
Gregory - Artist / Partnerships
Caleb - PM / Community
Jerome - Tech / Contracts
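If the name and role elements appear in parallel order (one role card per member), a zip-based sketch would also work. This assumes the .v-card__title elements also sit inside .team__list and that both selectors match exactly one element per team member:

import requests
from bs4 import BeautifulSoup

url = "https://imaginaryones.com"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

names = soup.select(".team__list .member__subtitle")
roles = soup.select(".team__list .v-card__title")

# pair the i-th name with the i-th role; only valid if both lists are parallel
for name, role in zip(names, roles):
    print(name.text, "-", role.text)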