How to add a new special token to the tokenizer?

I want to build a multi-class classification model for which I have conversational data as input for the BERT model (using bert-base-uncased).

QUERY: I want to ask a question.
ANSWER: Sure, ask away.
QUERY: How is the weather today?
ANSWER: It is nice and sunny.
QUERY: Okay, nice to know.
ANSWER: Would you like to know anything else?

Apart from this I have two more inputs.

I was wondering whether I should add special tokens to the conversation to make it more meaningful to the BERT model, such as:

[CLS]QUERY: I want to ask a question. [EOT]
ANSWER: Sure, ask away. [EOT]
QUERY: How is the weather today? [EOT]
ANSWER: It is nice and sunny. [EOT]
QUERY: Okay, nice to know. [EOT]
ANSWER: Would you like to know anything else? [SEP]

However, I was unable to add the new [EOT] special token.
Or should I use the [SEP] token for this instead?

EDIT: Steps to reproduce

from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

print(tokenizer.all_special_tokens) # --> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
print(tokenizer.all_special_ids)    # --> [100, 102, 0, 101, 103]

num_added_toks = tokenizer.add_tokens(['[EOT]'])
model.resize_token_embeddings(len(tokenizer))  # --> Embedding(30523, 768)

tokenizer.convert_tokens_to_ids('[EOT]')  # --> 30522

text_to_encode = '''QUERY: I want to ask a question. [EOT]
ANSWER: Sure, ask away. [EOT]
QUERY: How is the weather today? [EOT]
ANSWER: It is nice and sunny. [EOT]
QUERY: Okay, nice to know. [EOT]
ANSWER: Would you like to know anything else?'''

enc = tokenizer.encode_plus(
  text_to_encode,
  max_length=128,
  add_special_tokens=True,
  return_token_type_ids=False,
  return_attention_mask=False,
)['input_ids']

print(tokenizer.convert_ids_to_tokens(enc))

Result:

['[CLS]', 'query', ':', 'i', 'want', 'to', 'ask', 'a', 'question', '.', '[', 'e', '##ot', ']', 'answer', ':', 'sure', ',', 'ask', 'away', '.', '[', 'e', '##ot', ']', 'query', ':', 'how', 'is', 'the', 'weather', 'today', '?', '[', 'e', '##ot', ']', 'answer', ':', 'it', 'is', 'nice', 'and', 'sunny', '.', '[', 'e', '##ot', ']', 'query', ':', 'okay', ',', 'nice', 'to', 'know', '.', '[', 'e', '##ot', ']', 'answer', ':', 'would', 'you', 'like', 'to', 'know', 'anything', 'else', '?', '[SEP]']

Since the purpose of the [SEP] token is to act as a separator between two sentences, using the existing [SEP] token to separate the QUERY and ANSWER sequences fits your objective.
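As a minimal sketch of that approach (the turns list below is only illustrative, re-using the conversation from the question and the tokenizer defined above), the turns can be joined with the existing [SEP] token so that nothing new has to be added to the vocabulary:

# Reuse the existing [SEP] special token between turns.
turns = [
  "QUERY: I want to ask a question.",
  "ANSWER: Sure, ask away.",
  "QUERY: How is the weather today?",
  "ANSWER: It is nice and sunny.",
]

text_to_encode = " [SEP] ".join(turns)

enc = tokenizer.encode_plus(
  text_to_encode,
  max_length=128,
  add_special_tokens=True,      # adds [CLS] at the start and a final [SEP]
  return_token_type_ids=False,
  return_attention_mask=False,
)['input_ids']

print(tokenizer.convert_ids_to_tokens(enc))
# The literal "[SEP]" strings are mapped to the existing special token,
# so the turns are separated without learning any new embeddings.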

You could also try adding distinct tokens to mark the beginning and end of each QUERY or ANSWER, such as <BOQ> and <EOQ> for the beginning and end of a QUERY, and likewise <BOA> and <EOA> for an ANSWER, as sketched below.
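A minimal sketch of that variant, assuming you do want the model to learn dedicated boundary embeddings (the token names <BOQ>, <EOQ>, <BOA>, <EOA> are only illustrative):

# Register the boundary markers as additional special tokens so the
# tokenizer keeps them as single pieces, then resize the embeddings.
num_added = tokenizer.add_special_tokens(
  {'additional_special_tokens': ['<BOQ>', '<EOQ>', '<BOA>', '<EOA>']}
)
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize('<BOQ> QUERY: How is the weather today? <EOQ>'))
# The boundary markers are not split, e.g.
# ['<BOQ>', 'query', ':', 'how', 'is', 'the', 'weather', 'today', '?', '<EOQ>']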

Sometimes, reusing existing tokens works better than adding new tokens to the vocabulary, because learning useful embeddings for new tokens requires a large number of training iterations and a lot of data.

However, if your application requires adding new tokens, they can be added as follows:

num_added_toks = tokenizer.add_tokens(['[EOT]'], special_tokens=True)  # this line is updated: note special_tokens=True
model.resize_token_embeddings(len(tokenizer))

# The tokenizer has to be saved if it is to be reused
tokenizer.save_pretrained(<output_dir>)
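After adding [EOT] with special_tokens=True, re-encoding the text shows it as a single piece. A short check, assuming <output_dir> is replaced with a real path:

print(tokenizer.tokenize('QUERY: How is the weather today? [EOT]'))
# --> ['query', ':', 'how', 'is', 'the', 'weather', 'today', '?', '[EOT]']

# The saved tokenizer can later be reloaded from the same directory:
# tokenizer = AutoTokenizer.from_pretrained(<output_dir>)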