Using CountVectorizer to count word occurrences for my own vocabulary in Python
Doc1: ['And that was the fallacy. Once I was free to talk with staff members']
Doc2: ['In the new, stripped-down, every-job-counts business climate, these human']
Doc3: ['Another reality makes emotional intelligence ever more crucial']
Doc4: ['The globalization of the workforce puts a particular premium on emotional']
Doc5: ['As business changes, so do the traits needed to excel. Data tracking']
Here is a sample of my vocabulary:
my_vocabulary = ['was the fallacy', 'free to', 'stripped-down', 'ever more', 'of the workforce', 'the traits needed']
The point is that every entry in my vocabulary is a bi-gram or a tri-gram. My vocabulary includes all possible bi-grams and tri-grams in my document set; I have only shown you a sample here. Because of the application, this is how my vocabulary has to be. I am trying to use CountVectorizer as follows:
from sklearn.feature_extraction.text import CountVectorizer
doc_set = [Doc1, Doc2, Doc3, Doc4, Doc5]
vectorizer = CountVectorizer(vocabulary=my_vocabulary)
tf = vectorizer.fit_transform(doc_set)
I was expecting to get something like this:
print tf:
(0, 126) 1
(0, 6804) 1
(0, 5619) 1
(0, 5019) 2
(0, 5012) 1
(0, 999) 1
(0, 996) 1
(0, 4756) 4
where the first column is the document ID, the second column is the word's ID in the vocabulary, and the third column is that word's occurrence count in that document. But tf is empty. I know that at the end of the day I could write code that loops over all the words in the vocabulary, counts the occurrences, and builds the matrix, but can I use CountVectorizer with the input I have and save time? Am I doing something wrong here? If CountVectorizer is not the right way to do this, any recommendation would be appreciated.
You can build a vocabulary of all possible bi-grams and tri-grams by specifying the ngram_range parameter in CountVectorizer. After fit_transform, you can inspect the vocabulary and the frequencies with the get_feature_names() and toarray() methods; the latter returns a frequency matrix with one row per document. More info: http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction
from sklearn.feature_extraction.text import CountVectorizer
Doc1 = 'And that was the fallacy. Once I was free to talk with staff members'
Doc2 = 'In the new, stripped-down, every-job-counts business climate, these human'
Doc3 = 'Another reality makes emotional intelligence ever more crucial'
Doc4 = 'The globalization of the workforce puts a particular premium on emotional'
Doc5 = 'As business changes, so do the traits needed to excel. Data tracking'
doc_set = [Doc1, Doc2, Doc3, Doc4, Doc5]
vectorizer = CountVectorizer(ngram_range=(2, 3))
tf = vectorizer.fit_transform(doc_set)
vectorizer.vocabulary_
vectorizer.get_feature_names()
tf.toarray()
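As a quick sanity check (a sketch, not part of the original answer): vectorizer.vocabulary_ maps each learned n-gram to its column index, so you can read off the count of a single n-gram directly. For instance, 'ever more' occurs once in Doc3:

# Look up the column for one n-gram and read its count in Doc3 (row 2 of doc_set).
col = vectorizer.vocabulary_['ever more']
print(tf.toarray()[2, col])  # expected: 1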
As for what you were trying to do, it works if you fit the CountVectorizer on your vocabulary and then transform the documents. Note that ngram_range=(2, 3) is also applied to the vocabulary strings themselves during fitting, which is why sub-n-grams such as 'was the' and 'the fallacy' show up in the learned vocabulary below.
my_vocabulary = ['was the fallacy', 'more crucial', 'particular premium', 'to excel', 'data tracking', 'another reality']
vectorizer = CountVectorizer(ngram_range=(2, 3))
vectorizer.fit_transform(my_vocabulary)
tf = vectorizer.transform(doc_set)
vectorizer.vocabulary_
Out[26]:
{'another reality': 0,
'data tracking': 1,
'more crucial': 2,
'particular premium': 3,
'the fallacy': 4,
'to excel': 5,
'was the': 6,
'was the fallacy': 7}
tf.toarray()
Out[25]:
array([[0, 0, 0, 0, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0]], dtype=int64)
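A variant worth knowing (a sketch, assuming the same doc_set as above): if you pass both vocabulary and ngram_range to CountVectorizer, the columns are exactly your original entries and no sub-n-grams are added. With a fixed vocabulary nothing is learned during fit; ngram_range just makes the analyzer emit bi-grams and tri-grams so your entries can actually match:

from sklearn.feature_extraction.text import CountVectorizer

# Fixed vocabulary plus ngram_range: columns are exactly these six entries.
my_vocabulary = ['was the fallacy', 'free to', 'stripped-down',
                 'ever more', 'of the workforce', 'the traits needed']
vectorizer = CountVectorizer(vocabulary=my_vocabulary, ngram_range=(2, 3))
tf = vectorizer.fit_transform(doc_set)
print(tf.toarray())

One caveat: the default tokenizer splits on hyphens, so the text 'stripped-down' is analyzed as the bi-gram 'stripped down', and the vocabulary entry 'stripped-down' never matches; Doc2's row stays all zeros unless you change that entry to 'stripped down' or supply a custom token_pattern.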