I have been trying this code for 3 days now. The code is for a Wikipedia search.
# Run the code and type the topic you want to be searched.
import wikipedia
import random

print("Type the topic you want to be searched.")
print("If an error occurs, I am sorry but I got no error handler in here.")
print("Which means you just have to re-run the code.")
while True:
    try:
        print("Enter a topic to be searched for in Wikipedia:-")
        x = input()
        results = wikipedia.summary(x, sentences=2)
        page = wikipedia.page(x)
        print(results)
        print(f"Link for the page of this information:- {page.url}")
        print("=============================+" * 5)
    except wikipedia.DisambiguationError as e:
        # On an ambiguous title, pick one of the suggested options at random
        redirectedpage = random.choice(e.options)
        result = wikipedia.summary(redirectedpage, sentences=2)
        errorpage = wikipedia.page(redirectedpage)
        print(result)
        print(errorpage.url)
        print("=============================+" * 5)
        continue
But I get an error... not exactly an error, rather a warning, and I cannot figure out what it actually means:
Warning (from warnings module):
File "C:\Users\Me\AppData\Roaming\Python\Python39\site-packages\wikipedia\wikipedia.py", line 389
lis = BeautifulSoup(html).find_all('li')
GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 389 of the file C:\Users\Me\AppData\Roaming\Python\Python39\site-packages\wikipedia\wikipedia.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
Can you help?
The wikipedia package you are using internally fetches Wikipedia pages as HTML and parses them with BeautifulSoup to produce the output you see.
The issue is that the library's author did not explicitly tell BeautifulSoup which parser to use for the document type being parsed, which here is HTML.
This is not really a problem, because BeautifulSoup picks a suitable HTML parser automatically, but yes, on a different system the results could differ (though that is unlikely).
The only ways to fix this at the source are to ask the author to fix the library, or to stop using it and fetch the pages yourself with requests and BeautifulSoup.