Document Sentiment Magnitude != sum(Sentence Magnitude)

I'm currently running some tests with the Google Cloud Natural Language API to analyze news articles. Early on I was curious how the document magnitude is calculated, and a search here turned up

Google Cloud Natural Language API - How is document magnitude calculated?

which says it is the sum of the magnitudes of the constituent sentences.

In my own testing, I found that this is not the case. Am I doing something wrong?


For clarity, I'm running Python 3.7.3 in a conda environment, with google-cloud-language installed from conda-forge.

from google.cloud import language
from google.cloud.language import enums, types

client = language.LanguageServiceClient()
text = "..."  # the news article text to analyze

document = types.Document(content=text, type=enums.Document.Type.PLAIN_TEXT)
sentiment = client.analyze_sentiment(document=document)

# Sum the magnitudes of the individual sentences
test_mag = 0
for sent_obj in sentiment.sentences:
    test_mag += sent_obj.sentiment.magnitude

print(sentiment.document_sentiment.magnitude)
print(test_mag)
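
Since magnitude comes back as a float, a small discrepancy could in principle just be rounding. A minimal check to rule that out (the 0.01 tolerance here is my own assumption, not anything documented):

import math

# Treat differences below an assumed tolerance as floating-point noise
if math.isclose(sentiment.document_sentiment.magnitude, test_mag, abs_tol=0.01):
    print("equal up to rounding")
else:
    print("genuinely different aggregation")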

Another thread suggests it may sometimes be just the absolute sum, but not always:

Google Natural Language Sentiment Analysis Aggregate Scores
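
To test that reading, here is a minimal sketch reusing the sentiment response from above; interpreting "absolute sum" as the sum of the absolute per-sentence scores is my own assumption, not documented behavior:

# Compare the document magnitude against two candidate aggregations
sum_of_magnitudes = sum(s.sentiment.magnitude for s in sentiment.sentences)
sum_of_abs_scores = sum(abs(s.sentiment.score) for s in sentiment.sentences)

print("document magnitude:", sentiment.document_sentiment.magnitude)
print("sum of sentence magnitudes:", sum_of_magnitudes)
print("sum of |sentence scores|:", sum_of_abs_scores)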

"The way the aggregation works is breaking down the input text into smaller components, often ngrams, which is likely why the documentation talks about aggregation, however, the aggregation isn't a simple addition, one can't sum individual sentiment values of each entity to get a total score."

I assume the same applies to the score and magnitude calculations.