R sentiment analysis with phrases in dictionaries

I am doing sentiment analysis on a set of tweets I have, and I would now like to know how to add phrases to the positive and negative dictionaries.

I have read in the file of phrases I want to test, but when running the sentiment analysis it does not give me a result.

When reading through the sentiment algorithm I can see that it matches words against the dictionaries, but is there a way to scan for both words and phrases?

Here is the code:

score.sentiment = function(sentences, pos.words, neg.words, .progress='none')
{
  require(plyr)  
  require(stringr)  
  # we got a vector of sentences. plyr will handle a list  
  # or a vector as an "l" for us  
  # we want a simple array ("a") of scores back, so we use  
  # "l" + "a" + "ply" = "laply":  
  scores = laply(sentences, function(sentence, pos.words, neg.words) {
    # clean up sentences with R's regex-driven global substitute, gsub():
    sentence = gsub('[[:punct:]]', '', sentence)
    sentence = gsub('[[:cntrl:]]', '', sentence)
    sentence = gsub('\\d+', '', sentence)    
    # and convert to lower case:    
    sentence = tolower(sentence)    
    # split into words. str_split is in the stringr package    
    word.list = str_split(sentence, '\\s+')    
    # sometimes a list() is one level of hierarchy too much    
    words = unlist(word.list)    
    # compare our words to the dictionaries of positive & negative terms
    pos.matches = match(words, pos.words)
    neg.matches = match(words, neg.words)   
    # match() returns the position of the matched term or NA    
    # we just want a TRUE/FALSE:    
    pos.matches = !is.na(pos.matches)   
    neg.matches = !is.na(neg.matches)   
    # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
    score = sum(pos.matches) - sum(neg.matches)    
    return(score)    
  }, pos.words, neg.words, .progress=.progress )  
  scores.df = data.frame(score=scores, text=sentences)  
  return(scores.df)  
}
analysis=score.sentiment(Tweets, pos, neg)
table(analysis$score)

This is the result I get:

0
20

Whereas what I am after is the standard table this function provides, such as

-2 -1 0 1 2 
 1  2 3 4 5 

for example.

Does anyone know how to run this on phrases? Note: the TWEETS file is a file of sentences.

The function score.sentiment seems to work. If I try a very simple setup,

Tweets = c("this is good", "how bad it is")
neg = c("bad")
pos = c("good")
analysis=score.sentiment(Tweets, pos, neg)
table(analysis$score)

I get the expected result,

> table(analysis$score)

-1  1 
 1  1 

How are you feeding your 20 tweets to the method? Judging by the result you posted, namely 0 20, I would say the problem is that none of your 20 tweets contain any of your positive or negative words, although of course you would already have noticed that. It would be easier to help you if you posted more details about your list of tweets and your positive and negative words.
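
A quick way to check that (just a sketch, assuming Tweets, pos and neg are defined exactly as in your code) is to count how many tweets contain at least one dictionary term at all:

# Rough check: how many tweets mention any positive or negative term?
# paste(..., collapse='|') turns each word list into one regex of alternatives.
clean = tolower(gsub('[[:punct:]]', '', Tweets))
has.pos = grepl(paste(tolower(pos), collapse='|'), clean)
has.neg = grepl(paste(tolower(neg), collapse='|'), clean)
sum(has.pos | has.neg)  # 0 here would explain why every score is 0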

In any case, your function seems to work fine.

Hope this helps.

Edit after clarification in the comments:

Actually, to solve your problem you need to tokenize your sentences into n-grams, where n corresponds to the maximum number of words in the n-grams you are using in your positive and negative lists. You can see how to do this, for example, in this SO question. For completeness, and because I tested it myself, here is an example of what you could do. I simplified it to bigrams (n=2) and used the following input:

Tweets = c("rewarding hard work with raising taxes and VAT. #LabourManifesto", 
              "Ed Miliband is offering 'wrong choice' of 'more cuts' in #LabourManifesto")
pos = c("rewarding hard work")
neg = c("wrong choice")

You can create a bigram tokenizer like this,

library(tm)
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=2,max=2))

and test it,

> BigramTokenizer("rewarding hard work with raising taxes and VAT. #LabourManifesto")
[1] "rewarding hard"       "hard work"            "work with"           
[4] "with raising"         "raising taxes"        "taxes and"           
[7] "and VAT"              "VAT #LabourManifesto"

Then in your method you just need to replace this line,

word.list = str_split(sentence, '\\s+')

with this

word.list = BigramTokenizer(sentence)

Of course it would be nicer if you renamed word.list to ngram.list or something similar.

The result is as expected,

> table(analysis$score)

-1  0 
 1  1

Just decide on your n-gram size, add it to Weka_control, and you should be fine.
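
For instance, if your lists mix single words with phrases of up to three words, a tokenizer like this (same Weka_control pattern as above; UniToTrigramTokenizer is just an illustrative name and I have not run this particular size) would let both words and phrases match in one pass:

# Tokenize into unigrams, bigrams and trigrams, so single words ("bad")
# and phrases ("rewarding hard work") can both be matched against the lists.
UniToTrigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=1, max=3))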

Hope this helps.