Train the last n% of BERT's layers in PyTorch using the HuggingFace library (train the last 5 of 12 BertLayers)
BERT's architecture is something like encoder -> 12 BertLayer -> pooling. I want to train the last 40% of the layers of the BERT model. I can freeze all the layers with:
# freeze all parameters
from transformers import AutoModel

bert = AutoModel.from_pretrained('bert-base-uncased')
for param in bert.parameters():
    param.requires_grad = False
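As a quick sanity check, a minimal sketch that counts how many parameters are still trainable (it should report 0 trainable after the loop above):

# count trainable vs. total parameters after freezing
trainable = sum(p.numel() for p in bert.parameters() if p.requires_grad)
total = sum(p.numel() for p in bert.parameters())
print(f'trainable: {trainable} / total: {total}')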
But I want to train the last 40% of the layers. When I do len(list(bert.parameters())), it gives me 199, so let's say 79 parameters is roughly 40%. Can I do something like:
for param in list(bert.parameters())[-79:]:  # 199 params in total: 79 is 40%
    param.requires_grad = False
I think it will freeze the first 60% of the layers.
Also, can anyone tell me which layers this will actually freeze, with respect to the architecture?
You are probably looking for named_parameters.
for name, param in bert.named_parameters():
    print(name)
Output:
embeddings.word_embeddings.weight
embeddings.position_embeddings.weight
embeddings.token_type_embeddings.weight
embeddings.LayerNorm.weight
embeddings.LayerNorm.bias
encoder.layer.0.attention.self.query.weight
encoder.layer.0.attention.self.query.bias
encoder.layer.0.attention.self.key.weight
...
named_parameters will also show you that you have not frozen the first 60% but rather the last 40%:
for name, param in bert.named_parameters():
    if param.requires_grad:
        print(name)
Output:
embeddings.word_embeddings.weight
embeddings.position_embeddings.weight
embeddings.token_type_embeddings.weight
embeddings.LayerNorm.weight
embeddings.LayerNorm.bias
encoder.layer.0.attention.self.query.weight
encoder.layer.0.attention.self.query.bias
encoder.layer.0.attention.self.key.weight
encoder.layer.0.attention.self.key.bias
encoder.layer.0.attention.self.value.weight
...
You can freeze the first 60% with:
for name, param in list(bert.named_parameters())[:-79]:
    print('I will be frozen: {}'.format(name))
    param.requires_grad = False
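If you would rather freeze by layer than by a raw parameter count (as the title suggests: train only the last 5 of 12 BertLayers), here is a minimal sketch that matches on the parameter names shown above, assuming the bert-base-uncased naming:

# train only encoder layers 7-11 (the last 5 of 12) plus the pooler;
# everything else (embeddings, encoder layers 0-6) is frozen
trainable_prefixes = tuple('encoder.layer.{}.'.format(i) for i in range(7, 12)) + ('pooler.',)

for name, param in bert.named_parameters():
    param.requires_grad = name.startswith(trainable_prefixes)

This avoids reasoning about raw parameter indices entirely: whether a parameter is trained is decided directly by the module name it belongs to.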