Does the TfidfVectorizer implicitly threshold its fitted output for large datasets?

I am trying to use sklearn's TfidfVectorizer to output the tf-idf scores of a list of inputs consisting of unigrams and bigrams.

Here is the gist of what I am doing:

from sklearn.feature_extraction.text import TfidfVectorizer

# comprehensive_unigrams and comprehensive_bigrams are lists in their own right
comprehensive_ngrams = comprehensive_unigrams + comprehensive_bigrams  # list of unigrams and bigrams
print("Length of input list: ", len(comprehensive_ngrams))
vectorizer = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
vectorizer.fit(comprehensive_ngrams)
vocab = vectorizer.vocabulary_
print("Length of learned vocabulary: ", len(vocab))
term_document_matrix = vectorizer.transform(comprehensive_ngrams).toarray()
print("Term document matrix shape is: ", term_document_matrix.shape)

This snippet outputs the following:

Length of input list: 12333

Length of learned vocabulary: 6196

Term document matrix shape is: (12333, 6196)

The vocabulary_ dictionary that TfidfVectorizer emits, which maps input terms to positional indices, is shorter than the number of unique inputs fed to it. For smaller datasets (around 50 elements), this does not seem to be a problem: once fitted, the vocabulary produced by TfidfVectorizer is the same size as the input.
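
The same behavior can be reproduced on a toy input (the strings below are made up for illustration): fewer vocabulary entries come out than input strings go in.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy reproduction: 4 input strings, but the fitted vocabulary has only 3 entries
docs = ["apple", "pie", "apple", "apple pie"]
v = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
v.fit(docs)
print(len(docs), len(v.vocabulary_))  # 4 3
print(v.vocabulary_)                  # {'apple': 0, 'apple pie': 1, 'pie': 2}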

What am I missing?

Make sure comprehensive_ngrams is a list of unique words, i.e.:

assert len(set(comprehensive_ngrams)) == len(comprehensive_ngrams)
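
If that assertion fails, one way to deduplicate while preserving order (a sketch, assuming the entries are plain strings) is:

# dict.fromkeys keeps the first occurrence of each entry and preserves insertion order
comprehensive_ngrams = list(dict.fromkeys(comprehensive_ngrams))
assert len(set(comprehensive_ngrams)) == len(comprehensive_ngrams)

Note that with lowercase=True, strings that differ only in case will still be merged by the vectorizer even after this deduplication.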