What model is used in the default transformers feature-extraction pipeline?

I am looking here at the feature extraction pipeline.

I initialize it as follows:

from transformers import pipeline 
pipe = pipeline("feature-extraction") 
features = pipe("test")

and I get a lot of features back. What model is used by default, and how can I initialize this pipeline to use a specific pre-trained model?

>>> len(features)
1
>>> features
[[[0.4122459590435028, 0.10175584256649017, 0.09342928230762482, -0.3119196593761444, -0.3226662278175354, -0.16414110362529755, 0.06356583535671234, -0.03167172893881798, -0.010002809576690197, -1.1153486967086792, -0.3304346203804016, 0.1727224737405777, -0.0904250368475914, -0.04243310168385506, -0.4745883047580719, 0.09118127077817917, 0.4240476191043854, 0.2237153798341751, 0.12108077108860016, -0.16883963346481323, 0.055300742387771606, -0.07225772738456726, 0.4521999955177307, -0.31655701994895935, 0.05917530879378319, -0.0343029648065567, 0.4157347083091736, 0.10791877657175064, -0
...etc
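For context on why there are so many numbers: the pipeline returns a nested list shaped `[batch, num_tokens, hidden_size]`, one embedding vector per token. A small sketch of how to inspect and pool that structure (using a hand-made nested list as a stand-in for the real pipeline output, so the shapes here are illustrative only):

```python
import numpy as np

# Stand-in for pipe("test") output: a nested list shaped
# [batch, num_tokens, hidden_size] (here 1 x 3 x 4 for illustration;
# the real DistilBERT hidden size is 768).
features = [[[0.1, 0.2, 0.3, 0.4] for _ in range(3)]]

arr = np.array(features)
print(arr.shape)  # (1, 3, 4)

# Mean-pool over the token axis to get one vector per input sentence.
sentence_embedding = arr.mean(axis=1)
print(sentence_embedding.shape)  # (1, 4)
```

With real pipeline output the middle dimension is the number of tokens after tokenization (including special tokens like [CLS] and [SEP]), which is why even a one-word input produces several vectors.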

The documentation tells me:

All models may be used for this pipeline. See a list of all models, including community-contributed models on huggingface.co/models.

but it is not clear to me where in the linked page I can choose the model to initialize. The API documentation is very terse.

Unfortunately, as you say, the pipelines documentation is rather sparse. However, the source code specifies which model is used by default, see here. Specifically, for feature extraction the default model is distilbert-base-cased.

For how to actually use the model, see my related answer here. You can simply specify the model and tokenizer arguments like this:

from transformers import pipeline

# Feature extraction pipeline, specifying the checkpoint identifier
pipe = pipeline('feature-extraction', model='bert-base-cased', tokenizer='bert-base-cased')