Clustering text documents using h2o4gpu K-Means in Python

I am interested in clustering text documents using h2o4gpu. For reference, I followed , but changed the code to use h2o4gpu.

from sklearn.feature_extraction.text import TfidfVectorizer
import h2o4gpu

documents = ["Human machine interface for lab abc computer applications",
         "A survey of user opinion of computer system response time",
         "The EPS user interface management system",
         "System and human system engineering testing of EPS",
         "Relation of user perceived response time to error measurement",
         "The generation of random binary unordered trees",
         "The intersection graph of paths in trees",
         "Graph minors IV Widths of trees and well quasi ordering",
         "Graph minors A survey"]

vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(documents)

true_k = 2
model = h2o4gpu.KMeans(n_gpus=1, n_clusters=true_k, init='k-means++',
                       max_iter=100, n_init=1)
model.fit(X)

However, when I run the code sample above, I get the following error:

Traceback (most recent call last):
  File "dev.py", line 20, in <module>
    model.fit(X)
  File "/home/greg/anaconda3/lib/python3.6/site-packages/h2o4gpu/solvers/kmeans.py", line 810, in fit
    res = self.model.fit(X, y)
  File "/home/greg/anaconda3/lib/python3.6/site-packages/h2o4gpu/solvers/kmeans.py", line 303, in fit
    X_np, _, _, _, _, _ = _get_data(X, ismatrix=True)
  File "/home/greg/anaconda3/lib/python3.6/site-packages/h2o4gpu/solvers/utils.py", line 119, in _get_data
    data, ismatrix=ismatrix, dtype=dtype, order=order)
  File "/home/greg/anaconda3/lib/python3.6/site-packages/h2o4gpu/solvers/utils.py", line 79, in _to_np
    outdata = outdata.astype(dtype, copy=False, order=nporder)
ValueError: setting an array element with a sequence.

I have searched for h2o4gpu.feature_extraction.text.TfidfVectorizer, but did not find it in h2o4gpu. That said, is there a way to solve this problem?


X = TfidfVectorizer(stop_words='english').fit_transform(documents)

returns a sparse matrix object, scipy.sparse.csr_matrix.
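As a quick check (a minimal sketch with a toy document list, not part of the original question), you can confirm that the vectorizer output is sparse rather than a dense ndarray:

```python
from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy documents just to inspect the output type of fit_transform
docs = ["human machine interface",
        "a survey of graph minors",
        "the intersection graph of trees"]

X = TfidfVectorizer(stop_words='english').fit_transform(docs)

print(type(X))             # a scipy.sparse CSR matrix, not a dense ndarray
print(X.shape)             # (n_documents, n_vocabulary_terms)
print(sparse.issparse(X))  # True
```

This is exactly the input type that h2o4gpu's _to_np helper chokes on in the traceback above.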

Currently, H2O4GPU only supports a dense representation for KMeans. This means you have to convert X into a 2D vanilla Python list or a 2D NumPy array, padding the missing elements with 0:

vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(documents)
X_dense = X.toarray()

true_k = 2
model = h2o4gpu.KMeans(n_gpus=1, n_clusters=true_k, init='k-means++',
                       max_iter=100, n_init=1)
model.fit(X_dense)

should do the trick. This is not the best solution for NLP, since it can require significantly more memory, but we do not yet have sparse support for KMeans on the roadmap.
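If memory does become a problem, one common workaround (my suggestion, not something h2o4gpu provides) is to project the sparse TF-IDF matrix down to a small dense LSA representation with scikit-learn's TruncatedSVD before clustering, so the dense array stays small regardless of vocabulary size:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

X = TfidfVectorizer(stop_words='english').fit_transform(documents)

# Reduce the sparse TF-IDF matrix to a few dense LSA components;
# the result is a small float ndarray with one row per document.
svd = TruncatedSVD(n_components=5, random_state=42)
X_dense = svd.fit_transform(X)

print(X_dense.shape)  # (9, 5)
```

X_dense can then be passed to h2o4gpu.KMeans(...).fit(X_dense) in place of the full dense matrix from toarray().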