How to use another pretrained BERT model with the ktrain text classifier?
How can we use a different pretrained model with the text classifier in the ktrain library? When using:

model = text.text_classifier('bert', (x_train, y_train), preproc=preproc)

this uses the multilingual pretrained model. However, I would also like to try a monolingual model, namely the Dutch 'wietsedv/bert-base-dutch-cased', which is also used in other ktrain implementations, for example.

However, it does not work when trying to use it in the text classifier:

model = text.text_classifier('bert', (x_train, y_train), preproc=preproc, bert_model='wietsedv/bert-base-dutch-cased')

or

model = text.text_classifier('wietsedv/bert-base-dutch-cased', (x_train, y_train), preproc=preproc)

Does anyone know how to do this? Thanks!
There are two text classification APIs in ktrain. The first is the text_classifier API, which can be used with a select number of transformer and non-transformer models. The second is the Transformer API, which can be used with any model from the transformers library, including the one you listed.

The latter is explained in detail in this tutorial notebook and this medium article.

For example, you can replace MODEL_NAME with any model you want in the example below:

Example:
# load text data
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
test_b = fetch_20newsgroups(subset='test', categories=categories, shuffle=True)
(x_train, y_train) = (train_b.data, train_b.target)
(x_test, y_test) = (test_b.data, test_b.target)
# build, train, and validate model (Transformer is wrapper around transformers library)
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased' # replace this with model of choice
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(5e-5, 4)
learner.validate(class_names=t.get_classes()) # class_names must be string values
# Output from learner.validate()
#                         precision    recall  f1-score   support
#
#            alt.atheism       0.92      0.93      0.93       319
#          comp.graphics       0.97      0.97      0.97       389
#                sci.med       0.97      0.95      0.96       396
# soc.religion.christian       0.96      0.96      0.96       398
#
#               accuracy                           0.96      1502
#              macro avg       0.95      0.96      0.95      1502
#           weighted avg       0.96      0.96      0.96      1502