Extracting Features from BertForSequenceClassification

Hi everyone, I'm currently trying to develop a contradiction-detection model. By using and fine-tuning a BERT model I've already gotten fairly good results, but I think I can achieve better accuracy with some additional features. I based my approach on this tutorial. After fine-tuning, my model looks like this:

==== Embedding Layer ====

bert.embeddings.word_embeddings.weight                  (30000, 768)
bert.embeddings.position_embeddings.weight                (512, 768)
bert.embeddings.token_type_embeddings.weight                (2, 768)
bert.embeddings.LayerNorm.weight                              (768,)
bert.embeddings.LayerNorm.bias                                (768,)

==== First Transformer ====

bert.encoder.layer.0.attention.self.query.weight          (768, 768)
bert.encoder.layer.0.attention.self.query.bias                (768,)
bert.encoder.layer.0.attention.self.key.weight            (768, 768)
bert.encoder.layer.0.attention.self.key.bias                  (768,)
bert.encoder.layer.0.attention.self.value.weight          (768, 768)
bert.encoder.layer.0.attention.self.value.bias                (768,)
bert.encoder.layer.0.attention.output.dense.weight        (768, 768)
bert.encoder.layer.0.attention.output.dense.bias              (768,)
bert.encoder.layer.0.attention.output.LayerNorm.weight        (768,)
bert.encoder.layer.0.attention.output.LayerNorm.bias          (768,)
bert.encoder.layer.0.intermediate.dense.weight           (3072, 768)
bert.encoder.layer.0.intermediate.dense.bias                 (3072,)
bert.encoder.layer.0.output.dense.weight                 (768, 3072)
bert.encoder.layer.0.output.dense.bias                        (768,)
bert.encoder.layer.0.output.LayerNorm.weight                  (768,)
bert.encoder.layer.0.output.LayerNorm.bias                    (768,)

==== Output Layer ====

bert.pooler.dense.weight                                  (768, 768)
bert.pooler.dense.bias                                        (768,)
classifier.weight                                           (2, 768)
classifier.bias                                                 (2,)
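
(For reference, a parameter listing like the one above can be printed by iterating over the model's named parameters; a minimal sketch, assuming the fine-tuned checkpoint loads as a BertForSequenceClassification:)

from transformers import BertForSequenceClassification

# 'bert-base-uncased' stands in for the fine-tuned checkpoint path
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Print each parameter's name and shape, as in the listing above
for name, param in model.named_parameters():
    print(f'{name:55} {tuple(param.shape)}')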

My next step is to take the [CLS] token from this model, combine it with some hand-crafted features, and feed them into a different model (an MLP) for classification. Any hints on how to do this?

You can use the pooled output of the BERT model (the contextual embedding of the [CLS] token fed through the pooling layer):

from transformers import BertModel, BertTokenizer

# Replace 'bert-base-uncased' with the path to your saved model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

# Tokenize a batch of sentences into padded PyTorch tensors
inputs = tokenizer(['this is a sample', 'different sample'],
                   padding=True, return_tensors='pt')
outputs = model(**inputs)

print(outputs.keys())
# pooler_output has shape [batch_size, 768]
print(outputs.pooler_output.shape)
pooled = outputs.pooler_output
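
To combine the pooled output with your hand-crafted features, you can concatenate the two tensors along the feature dimension and train a small MLP on the result. A minimal sketch, assuming pooled from above and a hypothetical batch of 4 hand-crafted features per sentence (the feature count, hidden size, and dropout are placeholders you would tune yourself):

import torch
import torch.nn as nn

# Hypothetical hand-crafted features: 4 values per sentence
handcrafted = torch.tensor([[0.1, 0.2, 0.3, 0.4],
                            [0.5, 0.6, 0.7, 0.8]])

# Concatenate along the feature dimension -> [batch_size, 768 + 4];
# detach() keeps MLP training from backpropagating into BERT
combined = torch.cat([pooled.detach(), handcrafted], dim=1)

# Small MLP classifier over the combined features
mlp = nn.Sequential(
    nn.Linear(768 + 4, 256),
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(256, 2),  # 2 classes: contradiction / no contradiction
)

logits = mlp(combined)  # shape [batch_size, 2]

If you want the raw [CLS] vector instead of the pooled one (which has passed through an extra dense + tanh layer), outputs.last_hidden_state[:, 0] gives you that.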