How to capture iterated output variable into list for analysis
I am trying to parse the HTML text of several web pages for sentiment analysis. With help from the community, I have been able to iterate over many urls, generate a sentiment score for each one using the textblob library's sentiment analysis, and successfully print one score per url. What I cannot manage is to collect the many outputs my return variable produces into a list, so that I can analyse them further later by calculating an average from the stored numbers and displaying my results in a chart.
The code with the print function:
import requests
import json
import urllib
from bs4 import BeautifulSoup
from textblob import TextBlob
#you can add to this
urls = ["http://www.thestar.com/business/economy/2015/05/19/canadian-consumer-confidence-dips-but-continues-to-climb-in-us-report.html",
"http://globalnews.ca/news/2012054/canada-ripe-for-an-invasion-of-u-s-dollar-stores-experts-say/",
"http://www.cp24.com/news/tsx-flat-in-advance-of-fed-minutes-loonie-oil-prices-stabilize-1.2381931",
"http://www.marketpulse.com/20150522/us-and-canadian-gdp-to-close-out-week-in-fx/",
"http://www.theglobeandmail.com/report-on-business/canada-pension-plan-fund-sees-best-ever-annual-return/article24546796/",
"http://www.marketpulse.com/20150522/canadas-april-inflation-slowest-in-two-years/"]
def parse_websites(list_of_urls):
    for url in list_of_urls:
        html = urllib.urlopen(url).read()
        soup = BeautifulSoup(html)

        # kill all script and style elements
        for script in soup(["script", "style"]):
            script.extract()    # rip it out

        # get text
        text = soup.get_text()

        # break into lines and remove leading and trailing space on each
        lines = (line.strip() for line in text.splitlines())
        # break multi-headlines into a line each
        chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
        # drop blank lines
        text = '\n'.join(chunk for chunk in chunks if chunk)

        #print(text)
        wiki = TextBlob(text)
        r = wiki.sentiment.polarity
        print r
parse_websites(urls)
Output:
>>>
0.10863027172
0.156074203574
0.0766585497835
0.0315555555556
0.0752548359411
0.0902824858757
>>>
But when I use the return variable to build a list so I can work with the values, I get no output. The code:
import requests
import json
import urllib
from bs4 import BeautifulSoup
from textblob import TextBlob
#you can add to this
urls = ["http://www.thestar.com/business/economy/2015/05/19/canadian-consumer-confidence-dips-but-continues-to-climb-in-us-report.html",
"http://globalnews.ca/news/2012054/canada-ripe-for-an-invasion-of-u-s-dollar-stores-experts-say/",
"http://www.cp24.com/news/tsx-flat-in-advance-of-fed-minutes-loonie-oil-prices-stabilize-1.2381931",
"http://www.marketpulse.com/20150522/us-and-canadian-gdp-to-close-out-week-in-fx/",
"http://www.theglobeandmail.com/report-on-business/canada-pension-plan-fund-sees-best-ever-annual-return/article24546796/",
"http://www.marketpulse.com/20150522/canadas-april-inflation-slowest-in-two-years/"]
def parse_websites(list_of_urls):
    for url in list_of_urls:
        html = urllib.urlopen(url).read()
        soup = BeautifulSoup(html)

        # kill all script and style elements
        for script in soup(["script", "style"]):
            script.extract()    # rip it out

        # get text
        text = soup.get_text()

        # break into lines and remove leading and trailing space on each
        lines = (line.strip() for line in text.splitlines())
        # break multi-headlines into a line each
        chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
        # drop blank lines
        text = '\n'.join(chunk for chunk in chunks if chunk)

        #print(text)
        wiki = TextBlob(text)
        r = wiki.sentiment.polarity
        r = []
        return [r]
parse_websites(urls)
Output:
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
>>>
How can I make this work so that I can use the numbers, adding and subtracting them from a list like [r1, r2, r3, ...]?
Thank you in advance.
From your code below, you are asking python to return an empty list:
r = wiki.sentiment.polarity
r = []  #create empty list r
return [r]  #return empty list
If I understand your question correctly, all you have to do is:
my_list = []  #create empty list
for url in list_of_urls:
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html)
    for script in soup(["script", "style"]):
        script.extract()    # rip it out
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    text = '\n'.join(chunk for chunk in chunks if chunk)
    wiki = TextBlob(text)
    r = wiki.sentiment.polarity
    my_list.append(r)  #add r to list my_list

print my_list
[r1, r2, r3, ...]
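Once the scores are in a list, the averaging and charting you mentioned follow directly. A minimal sketch, reusing the scores printed above and assuming matplotlib is installed (it is not part of the original code):

import matplotlib.pyplot as plt

# the six polarity scores printed by the working version above
scores = [0.10863027172, 0.156074203574, 0.0766585497835,
          0.0315555555556, 0.0752548359411, 0.0902824858757]

average = sum(scores) / len(scores)  # plain float division: every score is a float
print average

plt.bar(range(len(scores)), scores)  # one bar per url, in list order
plt.axhline(average, color='red')    # horizontal line marking the average
plt.ylabel('sentiment polarity')
plt.show()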
Alternatively, you could create a dictionary with the urls as keys:
my_dictionary = {}
r = wiki.sentiment.polarity
my_dictionary[url] = r

print my_dictionary
{'url1': r1, 'url2': r2, ...}
print my_dictionary['url1']
r1
A dictionary might make more sense for you, because using the url as the key makes it easier to retrieve, edit, and delete each "r".
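For example, retrieving, editing, and deleting by key look like this; a minimal sketch with placeholder keys and scores, assuming the dictionary was filled in the loop as above:

# placeholder entries standing in for real urls and polarity scores
scores = {'url1': 0.10, 'url2': 0.15, 'url3': 0.07}

print scores['url2']   # retrieve: 0.15
scores['url2'] = 0.2   # edit the score for one url in place
del scores['url3']     # delete one entry entirely

# an average over whatever urls remain
print sum(scores.values()) / len(scores)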
I am new to Python, so if this doesn't make sense, hopefully someone else will correct me...