Transformers summarization with Python Pytorch - how to get longer output?
I am using the AI-powered summarization from https://github.com/huggingface/transformers/tree/master/examples/summarization - state-of-the-art results.
Should I train it myself to get summary output longer than what is used in the original huggingface github training script?:
python run_summarization.py \
--documents_dir $DATA_PATH \
--summaries_output_dir $SUMMARIES_PATH \ # optional
--no_cuda false \
--batch_size 4 \
--min_length 50 \
--max_length 200 \
--beam_size 5 \
--alpha 0.95 \
--block_trigram true \
--compute_rouge true
When I run inference with
--min_length 500 \
--max_length 600 \
I get a good output for the first 200 tokens, but the rest of the text is
. . . [unused7] [unused7] [unused7] [unused8] [unused4] [unused7] [unused7] [unused4] [unused7] [unused8]. [unused4] [unused7] . [unused4] [unused8] [unused4] [unused8]. [unused4] [unused4] [unused8] [unused4] . . [unused4] [unused6] [unused4] [unused7] [unused6] [unused4] [unused8] [unused5] [unused4] [unused7] [unused4] [unused4] [unused7]. [unused4] [unused6]. [unused4] [unused4] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused6] [unused4] [unused4] [unused4]. [unused4]. [unused5] [unused4] [unused8] [unused7] [unused4] [unused7] [unused9] [unused4] [unused7] [unused4] [unused7] [unused5] [unused4] [unused5] [unused4] [unused6] [unused4]. . . [unused5]. [unused4] [unused4] [unused4] [unused6] [unused5] [unused4] [unused4] [unused6] [unused4] [unused6] [unused4] [unused4] [unused5] [unused4]. [unused5] [unused4] . [unused4] [unused4] [unused8] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused8] [unused4] [unused6]
The short answer is: yes, probably.
To explain this in more detail, we have to look at the paper behind the implementation: in Table 1, you can clearly see that most of their generated headlines are much shorter than what you are trying to initialize. While that alone might not be an indicator that you couldn't generate anything longer, we can go even deeper and look at the meaning of the [unusedX] tokens, as described by BERT dev Jacob Devlin:
Since [the [unusedX] tokens] were not used they are effectively randomly initialized.
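For illustration, you can check that the [unusedX] tokens are just reserved placeholder slots in BERT's vocabulary. This is a minimal sketch, assuming the standard bert-base-uncased checkpoint and the transformers tokenizer API; the exact ids depend on the vocabulary your model uses:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# [unused0] .. [unused98] sit right after [PAD] at the start of the vocab;
# they were never seen during pre-training, so their embeddings keep
# whatever random initialization they got.
print(tokenizer.convert_tokens_to_ids("[PAD]"))      # 0
print(tokenizer.convert_tokens_to_ids("[unused7]"))  # 8 for this checkpoint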
Further, the summarization paper describes:
Position embeddings in the original BERT model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder.
This is a strong indicator that past a certain length, they are likely falling back to the default initialization, which is unfortunately random. The question is whether you can still salvage the previous pre-training and simply fine-tune towards your objective, or whether it is better to start over from scratch.
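Roughly, the trick the paper describes could be sketched like this. This is a minimal sketch under stated assumptions, not the authors' code: it assumes a plain BertModel from the transformers library, and new_max_len is a hypothetical target length:

import torch
import torch.nn as nn
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
old_pos_emb = model.embeddings.position_embeddings   # nn.Embedding(512, hidden_size)

new_max_len = 1024                                    # hypothetical target length
new_pos_emb = nn.Embedding(new_max_len, old_pos_emb.embedding_dim)

with torch.no_grad():
    # copy the 512 pre-trained position vectors ...
    new_pos_emb.weight[: old_pos_emb.num_embeddings] = old_pos_emb.weight
    # ... while positions 512..1023 keep their random initialization and must
    # be learned during fine-tuning, together with the other encoder parameters.

model.embeddings.position_embeddings = new_pos_emb
model.config.max_position_embeddings = new_max_len
# Depending on the transformers version, other places that assume a maximum of
# 512 positions (e.g. a cached position_ids buffer) may also need updating.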