NLTK: corpus-level BLEU vs sentence-level BLEU score
I imported nltk in Python to compute BLEU scores on Ubuntu. I understand how the sentence-level BLEU score works, but I don't understand how the corpus-level BLEU score works.
Below is my code for the corpus-level BLEU score:
import nltk
hypothesis = ['This', 'is', 'cat']
reference = ['This', 'is', 'a', 'cat']
BLEUscore = nltk.translate.bleu_score.corpus_bleu([reference], [hypothesis], weights = [1])
print(BLEUscore)
For some reason, the BLEU score for the above code is 0. I was expecting a corpus-level BLEU score of at least 0.5.
Here is my code for the sentence-level BLEU score:
import nltk
hypothesis = ['This', 'is', 'cat']
reference = ['This', 'is', 'a', 'cat']
BLEUscore = nltk.translate.bleu_score.sentence_bleu([reference], hypothesis, weights = [1])
print(BLEUscore)
Here the sentence-level BLEU score is 0.71, which is what I expect given the brevity penalty and the missing word "a". However, I don't understand how the corpus-level BLEU score works.
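For reference, here is how I sanity-checked the 0.71 by hand, as unigram precision times the brevity penalty:

import math

# Unigram precision: all 3 hypothesis tokens ('This', 'is', 'cat')
# appear in the reference, so p1 = 3/3 = 1.0.
p1 = 3 / 3

# Brevity penalty: hypothesis length c = 3 < reference length r = 4,
# so BP = exp(1 - r/c).
bp = math.exp(1 - 4 / 3)

print(bp * p1)  # ~0.7165, i.e. the 0.71 above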
Any help would be appreciated.
Let's take a look:
>>> help(nltk.translate.bleu_score.corpus_bleu)
Help on function corpus_bleu in module nltk.translate.bleu_score:
corpus_bleu(list_of_references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=None)
Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all
the hypotheses and their respective references.
    Instead of averaging the sentence level BLEU scores (i.e. macro-average
precision), the original BLEU metric (Papineni et al. 2002) accounts for
the micro-average precision (i.e. summing the numerators and denominators
for each hypothesis-reference(s) pairs before the division).
...
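To make the micro- vs. macro-average distinction concrete, here is a small sketch with made-up two-sentence data; weights=(1,) restricts it to unigram precision so the arithmetic stays simple:

from nltk.translate.bleu_score import corpus_bleu, sentence_bleu

# Two hypothesis-reference pairs, one reference each.
refs1, hyp1 = [['the', 'cat', 'sat']], ['the', 'cat', 'sat']    # p1 = 3/3, BP = 1
refs2, hyp2 = [['a', 'dog', 'barked', 'loudly']], ['a', 'dog']  # p1 = 2/2, BP = exp(1 - 4/2)

# Macro-average: mean of per-sentence scores, (1.0 + 0.3679) / 2 ~ 0.684.
macro = (sentence_bleu(refs1, hyp1, weights=(1,)) +
         sentence_bleu(refs2, hyp2, weights=(1,))) / 2

# Micro-average (what corpus_bleu does): pool the numerators and
# denominators first, so p1 = (3+2)/(3+2) = 1.0, and the brevity
# penalty uses the total lengths: exp(1 - 7/5) ~ 0.670.
micro = corpus_bleu([refs1, refs2], [hyp1, hyp2], weights=(1,))

print(macro, micro)  # the two averages differ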
You are in a better position than me to understand the description of the algorithm, so I won't try to "explain" it to you. If the docstring isn't clear enough, take a look at the source itself. Or find it locally:
>>> nltk.translate.bleu_score.__file__
'.../lib/python3.4/site-packages/nltk/translate/bleu_score.py'
TL;DR:
>>> import nltk
>>> hypothesis = ['This', 'is', 'cat']
>>> reference = ['This', 'is', 'a', 'cat']
>>> references = [reference] # list of references for 1 sentence.
>>> list_of_references = [references] # list of references for all sentences in corpus.
>>> list_of_hypotheses = [hypothesis] # list of hypotheses that corresponds to list of references.
>>> nltk.translate.bleu_score.corpus_bleu(list_of_references, list_of_hypotheses)
0.6025286104785453
>>> nltk.translate.bleu_score.sentence_bleu(references, hypothesis)
0.6025286104785453
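As a side note, this nesting is exactly what went wrong in the question's corpus_bleu call: with [reference] as list_of_references, each token string inside reference is treated as a separate reference sentence, so none of the hypothesis tokens can match a token-level unigram and the score collapses to 0. A minimal sketch of the wrong vs. right nesting:

from nltk.translate.bleu_score import corpus_bleu

hypothesis = ['This', 'is', 'cat']
reference = ['This', 'is', 'a', 'cat']

# Wrong nesting (the question's call): one bracket level too few.
print(corpus_bleu([reference], [hypothesis], weights=[1]))    # 0

# Right nesting: wrap the references for each sentence in a list.
print(corpus_bleu([[reference]], [hypothesis], weights=[1]))  # ~0.7165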
(Note: You have to pull the latest version of NLTK on the develop branch in order to get a stable version of the BLEU score implementation.)
In long:
Actually, if there is only one reference and one hypothesis in the entire corpus, both corpus_bleu() and sentence_bleu() should return the same value, as shown in the example above.
In the code, we see that sentence_bleu is actually a duck-type of corpus_bleu:
def sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25),
                  smoothing_function=None):
    return corpus_bleu([references], [hypothesis], weights, smoothing_function)
And if we look at the parameters of sentence_bleu:
def sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25),
                  smoothing_function=None):
    """
    :param references: reference sentences
    :type references: list(list(str))
    :param hypothesis: a hypothesis sentence
    :type hypothesis: list(str)
    :param weights: weights for unigrams, bigrams, trigrams and so on
    :type weights: list(float)
    :return: The sentence-level BLEU score.
    :rtype: float
    """
The input for sentence_bleu's references is a list(list(str)).
So if you have a sentence string, e.g. "This is a cat", you have to tokenize it to get a list of strings, ["This", "is", "a", "cat"], and since BLEU allows multiple references, it has to be a list of lists of strings. For example, if you have a second reference, "This is a feline", your input to sentence_bleu() would be:
references = [ ["This", "is", "a", "cat"], ["This", "is", "a", "feline"] ]
hypothesis = ["This", "is", "cat"]
sentence_bleu(references, hypothesis)
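Note that with multiple references, each n-gram count is clipped against the maximum count found in any one reference, and the brevity penalty uses the reference length closest to the hypothesis length. A quick check with the inputs above, restricted to unigrams:

from nltk.translate.bleu_score import sentence_bleu

references = [["This", "is", "a", "cat"], ["This", "is", "a", "feline"]]
hypothesis = ["This", "is", "cat"]

# 'cat' only appears in the first reference, but clipping takes the
# max count over references, so unigram precision is still 3/3.
print(sentence_bleu(references, hypothesis, weights=(1,)))  # ~0.7165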
As for corpus_bleu()'s list_of_references parameter, it's basically a list of whatever sentence_bleu() takes as references:
def corpus_bleu(list_of_references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25),
                smoothing_function=None):
    """
    :param references: a corpus of lists of reference sentences, w.r.t. hypotheses
    :type references: list(list(list(str)))
    :param hypotheses: a list of hypothesis sentences
    :type hypotheses: list(list(str))
    :param weights: weights for unigrams, bigrams, trigrams and so on
    :type weights: list(float)
    :return: The corpus-level BLEU score.
    :rtype: float
    """
Other than looking at the doctests within nltk/translate/bleu_score.py, you can also take a look at the unittest at nltk/test/unit/translate/test_bleu_score.py to see how to use each of the components within bleu_score.py.
By the way, since sentence_bleu is imported as bleu in nltk.translate.__init__.py (https://github.com/nltk/nltk/blob/develop/nltk/translate/init.py#L21), using
from nltk.translate import bleu
is equivalent to:
from nltk.translate.bleu_score import sentence_bleu
In code:
>>> from nltk.translate import bleu
>>> from nltk.translate.bleu_score import sentence_bleu
>>> from nltk.translate.bleu_score import corpus_bleu
>>> bleu == sentence_bleu
True
>>> bleu == corpus_bleu
False