Neither find nor find_all works
I'm trying to scrape the name of every favorite on the profile page of a user we choose. But with this code I get the error "ResultSet object has no attribute 'find_all'", and if I try find instead I get the opposite error telling me to use find_all. I'm a beginner and I don't know what to do. (To test the code you can use the username "Kineta"; she's an admin, so anyone can access her profile page.)
Thanks for your help!
from bs4 import BeautifulSoup
import requests
usr_name = str(input('the user you are searching for '))
html_text = requests.get('https://myanimelist.net/profile/'+usr_name)
soup = BeautifulSoup(html_text.text, 'lxml')
favs = soup.find_all('div', class_='fav-slide-outer')
favs_title = favs.find_all('span', class_='title fs10')
print(favs_title)
Your program throws the exception because you are calling .find_all on a ResultSet (favs_title = favs.find_all(...)), and ResultSet doesn't have a .find_all method. Instead, you can use a CSS selector to select all the required elements directly:
import requests
from bs4 import BeautifulSoup
url = "https://myanimelist.net/profile/Kineta"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
for t in soup.select(".fav-slide .title"):
    print(t.text)
Prints:
Kono Oto Tomare!
Yuukoku no Moriarty
Kaze ga Tsuyoku Fuiteiru
ACCA: 13-ku Kansatsu-ka
Fukigen na Mononokean
Kakuriyo no Yadomeshi
Shirokuma Cafe
Fruits Basket
Akatsuki no Yona
Colette wa Shinu Koto ni Shita
Okobore Hime to Entaku no Kishi
Meteor Methuselah
Inu x Boku SS
Vampire Juujikai
Mirako, Yuuta
Forger, Loid
Osaki, Kaname
Miyazumi, Tatsuru
Takaoka, Tetsuki
Okamoto, Souma
Shirota, Tsukasa
Archiviste, Noé
Fang, Li Ren
Fukuroi, Michiru
Sakurayashiki, Kaoru
James Moriarty, Albert
Souma, Kyou
Hades
Yona
Son, Hak
Mashima, Taichi
Ootomo, Jin
Collabel, Yuca
Masuda, Toshiki
Furukawa, Makoto
Satou, Takuya
Midorikawa, Hikaru
Miki, Shinichiro
Hino, Satoshi
Hosoya, Yoshimasa
Kimura, Ryouhei
Ono, Daisuke
KENN
Yoshino, Hiroyuki
Toriumi, Kousuke
Toyonaga, Toshiyuki
Ooishi, Masayoshi
Shirodaira, Kyou
Hakusensha
EDIT: To get the Anime/Manga/Character favorites separately:
import requests
from bs4 import BeautifulSoup
url = "https://myanimelist.net/profile/Kineta"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
anime_favorites = [t.text for t in soup.select("#anime_favorites .title")]
manga_favorites = [t.text for t in soup.select("#manga_favorites .title")]
char_favorites = [t.text for t in soup.select("#character_favorites .title")]
print("Anime Favorites")
print("-" * 80)
print(*anime_favorites, sep="\n")
print()
print("Manga Favorites")
print("-" * 80)
print(*manga_favorites, sep="\n")
print()
print("Character Favorites")
print("-" * 80)
print(*char_favorites, sep="\n")
Prints:
Anime Favorites
--------------------------------------------------------------------------------
Kono Oto Tomare!
Yuukoku no Moriarty
Kaze ga Tsuyoku Fuiteiru
ACCA: 13-ku Kansatsu-ka
Fukigen na Mononokean
Kakuriyo no Yadomeshi
Shirokuma Cafe
Manga Favorites
--------------------------------------------------------------------------------
Fruits Basket
Akatsuki no Yona
Colette wa Shinu Koto ni Shita
Okobore Hime to Entaku no Kishi
Meteor Methuselah
Inu x Boku SS
Vampire Juujikai
Character Favorites
--------------------------------------------------------------------------------
Mirako, Yuuta
Forger, Loid
Osaki, Kaname
Miyazumi, Tatsuru
Takaoka, Tetsuki
Okamoto, Souma
Shirota, Tsukasa
Archiviste, Noé
Fang, Li Ren
Fukuroi, Michiru
Sakurayashiki, Kaoru
James Moriarty, Albert
Souma, Kyou
Hades
Yona
Son, Hak
Mashima, Taichi
Ootomo, Jin
Collabel, Yuca
find and find_all do work; you just have to use them correctly. You can't call them on a list (such as the 'favs' variable in your example). You can always iterate through the list with a for loop and call 'find' or 'find_all' on each element.
I prefer to keep it a bit simpler, but you can pick whichever way you like, since I'm not sure my way is any more efficient:
from bs4 import BeautifulSoup
import requests
usr_name = str(input('the user you are searching for '))
html_text = requests.get('https://myanimelist.net/profile/'+usr_name)
soup = BeautifulSoup(html_text.text, 'lxml')
favs = soup.find_all('div', class_='fav-slide-outer')
for fav in favs:
    tag = fav.span
    print(tag.text)
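As described above, find() and find_all() also work inside the loop, since each iteration yields a single Tag. A minimal sketch with inline sample HTML (so it runs without a network request; the class names are taken from the question's code):

```python
from bs4 import BeautifulSoup

# Sample markup standing in for the profile page.
html = """
<div class="fav-slide-outer"><span class="title fs10">Kono Oto Tomare!</span></div>
<div class="fav-slide-outer"><span class="title fs10">Shirokuma Cafe</span></div>
"""
soup = BeautifulSoup(html, "html.parser")
favs = soup.find_all("div", class_="fav-slide-outer")  # ResultSet (list-like)
titles = []
for fav in favs:                                   # iterate over each Tag
    span = fav.find("span", class_="title fs10")   # find() is valid on a Tag
    titles.append(span.text)
print(titles)
```

The key point: find()/find_all() are methods of a single Tag, not of the ResultSet that find_all() returns.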
If you need more information on how to use the bs4 functions correctly, I suggest checking out their docs here.
I took a look at the page and changed the code a bit, so you should get all the results you need:
from bs4 import BeautifulSoup
import requests
usr_name = str(input('the user you are searching for '))
html_text = requests.get('https://myanimelist.net/profile/'+usr_name)
soup = BeautifulSoup(html_text.text, 'lxml')
favs = soup.find_all('li', class_='btn-fav')
for fav in favs:
    tag = fav.span
    print(tag.text)
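Since site markup like this can change, a slightly more defensive variant of the loop above skips any element without a child <span> instead of crashing on None. A sketch with inline sample HTML (the 'btn-fav' class name comes from the answer above; the real page may differ):

```python
from bs4 import BeautifulSoup

# Sample markup: one <li> deliberately has no <span> child.
html = """
<li class="btn-fav"><span>Fruits Basket</span></li>
<li class="btn-fav"></li>
<li class="btn-fav"><span>Yona</span></li>
"""
soup = BeautifulSoup(html, "html.parser")
names = []
for fav in soup.find_all("li", class_="btn-fav"):
    tag = fav.span            # None when the <li> has no <span> child
    if tag is not None:       # guard against missing markup
        names.append(tag.text)
print(names)
```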
I think the problem here isn't the code, but the way you were searching for the results and how the site is structured.