Is there a simple way to read lines from a text file into this Beautiful Soup Python script?
How can I read the lines from a text file into this script, instead of listing the URLs in the script itself? Thanks.
from bs4 import BeautifulSoup
import requests
url = "http://www.url1.com"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, 'html.parser')
categories = soup.find_all("a", {"class":'navlabellink nvoffset nnormal'})
for category in categories:
    print(url + "," + category.text)
My text.file contents, separated by newlines:
http://www.url1.com
http://www.url2.com
http://www.url3.com
http://www.url4.com
http://www.url5.com
http://www.url6.com
http://www.url7.com
http://www.url8.com
http://www.url9.com
file1 = open('text.file', 'r')
Lines = file1.readlines()
file1.close()

# Strip the newline character from each line
for count, line in enumerate(Lines):
    print("Line{}: {}".format(count, line.strip()))
You just need to replace your url variable with each line from the file. To read the URLs from a.txt, you can use this script:
import requests
from bs4 import BeautifulSoup

with open('a.txt', 'r') as f_in:
    for line in map(str.strip, f_in):
        if not line:
            continue
        response = requests.get(line)
        data = response.text
        soup = BeautifulSoup(data, 'html.parser')
        categories = soup.find_all("a", {"class": 'navlabellink nvoffset nnormal'})
        for category in categories:
            print(line + "," + category.text)
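If you want to check the file-reading part on its own, without making any HTTP requests, here is a minimal sketch; the file name a.txt and its contents are just placeholders for testing:

```python
import os

# Write a small sample file with a blank line and stray whitespace.
with open('a.txt', 'w') as f_out:
    f_out.write("http://www.url1.com\n\n  http://www.url2.com  \n")

# Same pattern as in the scraper: strip each line and skip empty ones.
with open('a.txt', 'r') as f_in:
    urls = [line for line in map(str.strip, f_in) if line]

print(urls)  # → ['http://www.url1.com', 'http://www.url2.com']

os.remove('a.txt')  # clean up the sample file
```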
For this example, let's assume your file is named urls.txt. In Python, opening a file and reading its contents is very easy.
with open('urls.txt', 'r') as f:
    urls = f.read().splitlines()
# Your list of URLs is now in the urls list!
The 'r' after 'urls.txt' just tells Python to open the file in read mode. If you don't need to modify the file, it's best to open it read-only. f.read() returns the entire contents of the file, but it includes the newline characters (\n), so splitlines() strips those out and builds a list for you.
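One detail worth noting: splitlines() keeps an empty string for every blank line in the file, so you may want an extra filter (like the `if not line: continue` check in the other answer). A quick demonstration, using a hard-coded string in place of the file contents:

```python
# splitlines() splits on newlines and drops the trailing \n characters,
# but it keeps an empty string for every blank line.
content = "http://www.url1.com\nhttp://www.url2.com\n\nhttp://www.url3.com\n"
urls = content.splitlines()
print(urls)
# → ['http://www.url1.com', 'http://www.url2.com', '', 'http://www.url3.com']

# Filter out the blanks if the file may contain empty lines:
urls = [u for u in urls if u]
print(urls)
# → ['http://www.url1.com', 'http://www.url2.com', 'http://www.url3.com']
```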