MultiGPU KMeans clustering with RAPIDS freezes

I am new to Python and Rapids.AI, and I am trying to recreate SKLearn KMeans in a multi-GPU setting (I have 2 GPUs) using Dask and RAPIDS (I am using the RAPIDS Docker image, which also installs Jupyter Notebook).

The code shown below (I also include a sample of the Iris dataset) freezes, and the Jupyter notebook cell never finishes. I tried the %debug magic and the Dask dashboard, but I could not reach any firm conclusion (the only lead might be the device_m_csv.iloc calls, but I am not sure). It is also possible that I am missing a wait(), compute() or persist() somewhere (honestly, I am not sure in which cases each of them should be used).

I will explain the code so that it is easier to follow:

Sorry I cannot provide more data, but I have no way to obtain it. I will gladly provide anything else that is needed to clear up doubts.

Where do you think the problem is, or what do you think it is?

Thank you very much.

%%time

# Import libraries and show their versions
import numpy as np; print('NumPy Version:', np.__version__)
import pandas as pd; print('Pandas Version:', pd.__version__)
import sklearn; print('Scikit-Learn Version:', sklearn.__version__)
import nvstrings, nvcategory
import cupy; print('cuPY Version:', cupy.__version__)
import cudf; print('cuDF Version:', cudf.__version__)
import cuml; print('cuML Version:', cuml.__version__)
import dask; print('Dask Version:', dask.__version__)
import dask_cuda; print('DaskCuda Version:', dask_cuda.__version__)
import dask_cudf; print('DaskCuDF Version:', dask_cudf.__version__)
import matplotlib; print('MatPlotLib Version:', matplotlib.__version__)
import seaborn as sns; print('SeaBorn Version:', sns.__version__)
#import time
#import warnings

from dask import delayed
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster, wait
from dask_ml.cluster import KMeans as skmKMeans
from dask_cuda import LocalCUDACluster

from sklearn import metrics
from sklearn.cluster import KMeans as skKMeans
from sklearn.metrics import adjusted_rand_score as sk_adjusted_rand_score, silhouette_score as sk_silhouette_score
from cuml.cluster import KMeans as cuKMeans
from cuml.dask.cluster.kmeans import KMeans as cumKMeans
from cuml.metrics import adjusted_rand_score as cu_adjusted_rand_score

# Configure matplotlib library
import matplotlib.pyplot as plt
%matplotlib inline

# Configure seaborn library
sns.set()
#sns.set(style="white", color_codes=True)
%config InlineBackend.figure_format = 'svg'

# Configure warnings
#warnings.filterwarnings("ignore")


####################################### KMEANS #############################################################
# Create local cluster
cluster = LocalCUDACluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# Identify number of workers
n_workers = len(client.has_what().keys())

# Read data in host memory
device_m_csv = dask_cudf.read_csv('./DataSet/iris.csv', header = 0, delimiter = ',', chunksize='2kB') # Get complete CSV. Chunksize is 2kb for getting 2 partitions
#x = host_data.iloc[:, [0,1,2,3]].values
device_m_data = device_m_csv.iloc[:, [0, 1, 2, 3]] # Get data columns
device_m_labels = device_m_csv.iloc[:, 4] # Get labels column

# Plot data
#sns.pairplot(device_csv.to_pandas(), hue='variety');

# Define variables
label_type = { 'Setosa': 1, 'Versicolor': 2, 'Virginica': 3 } # Dictionary mapping label names to numeric codes

# Create KMeans
cu_m_kmeans = cumKMeans(init = 'k-means||',
                     n_clusters = len(device_m_labels.unique()),
                     oversampling_factor = 40,
                     random_state = 0)
# Fit data in KMeans
cu_m_kmeans.fit(device_m_data)

# Predict data
cu_m_kmeans_labels_predicted = cu_m_kmeans.predict(device_m_data).compute()

# Check score
#print('Cluster centers:\n',cu_m_kmeans.cluster_centers_)
#print('adjusted_rand_score: ', sk_adjusted_rand_score(device_m_labels, cu_m_kmeans.labels_))
#print('silhouette_score: ', sk_silhouette_score(device_m_data.to_pandas(), cu_m_kmeans_labels_predicted))

# Close local cluster
client.close()
cluster.close()

A sample of the Iris dataset:


EDIT 1

@Corey, here is the output using your code:

NumPy Version: 1.17.5
Pandas Version: 0.25.3
Scikit-Learn Version: 0.22.1
cuPY Version: 6.7.0
cuDF Version: 0.12.0
cuML Version: 0.12.0
Dask Version: 2.10.1
DaskCuda Version: 0+unknown
DaskCuDF Version: 0.12.0
MatPlotLib Version: 3.1.3
SeaBorn Version: 0.10.0
Cluster centers:
           0         1         2         3
0  5.006000  3.428000  1.462000  0.246000
1  5.901613  2.748387  4.393548  1.433871
2  6.850000  3.073684  5.742105  2.071053
adjusted_rand_score:  0.7302382722834697
silhouette_score:  0.5528190123564102

I modified your reproducible example slightly and was able to produce output on a recent RAPIDS nightly.

Here is the output of the script:

(cuml_dev_2) cjnolet@deeplearn ~ $ python ~/kmeans_mnmg_reproduce.py 
NumPy Version: 1.18.1
Pandas Version: 0.25.3
Scikit-Learn Version: 0.22.2.post1
cuPY Version: 7.2.0
cuDF Version: 0.13.0a+3237.g61e4d9c
cuML Version: 0.13.0a+891.g4f44f7f
Dask Version: 2.11.0+28.g10db6ba
DaskCuda Version: 0+unknown
DaskCuDF Version: 0.13.0a+3237.g61e4d9c
MatPlotLib Version: 3.2.0
SeaBorn Version: 0.10.0
/share/software/miniconda3/envs/cuml_dev_2/lib/python3.7/site-packages/dask/array/random.py:27: FutureWarning: dask.array.random.doc_wraps is deprecated and will be removed in a future version
  FutureWarning,
/share/software/miniconda3/envs/cuml_dev_2/lib/python3.7/site-packages/distributed/dashboard/core.py:79: UserWarning: 
Port 8787 is already in use. 
Perhaps you already have a cluster running?
Hosting the diagnostics dashboard on a random port instead.
  warnings.warn("\n" + msg)
bokeh.server.util - WARNING - Host wildcard '*' will allow connections originating from multiple (or possibly all) hostnames or IPs. Use non-wildcard values to restrict access explicitly
Cluster centers:
           0         1         2         3
0  5.883607  2.740984  4.388525  1.434426
1  5.006000  3.428000  1.462000  0.246000
2  6.853846  3.076923  5.715385  2.053846
adjusted_rand_score:  0.7163421126838475
silhouette_score:  0.5511916046195927

And here is the modified script that produced this output:

    # Import libraries and show their versions
    import numpy as np; print('NumPy Version:', np.__version__)
    import pandas as pd; print('Pandas Version:', pd.__version__)
    import sklearn; print('Scikit-Learn Version:', sklearn.__version__)
    import nvstrings, nvcategory
    import cupy; print('cuPY Version:', cupy.__version__)
    import cudf; print('cuDF Version:', cudf.__version__)
    import cuml; print('cuML Version:', cuml.__version__)
    import dask; print('Dask Version:', dask.__version__)
    import dask_cuda; print('DaskCuda Version:', dask_cuda.__version__)
    import dask_cudf; print('DaskCuDF Version:', dask_cudf.__version__)
    import matplotlib; print('MatPlotLib Version:', matplotlib.__version__)
    import seaborn as sns; print('SeaBorn Version:', sns.__version__)
    #import time
    #import warnings

    from dask import delayed
    import dask.dataframe as dd
    from dask.distributed import Client, LocalCluster, wait
    from dask_ml.cluster import KMeans as skmKMeans
    from dask_cuda import LocalCUDACluster

    from sklearn import metrics
    from sklearn.cluster import KMeans as skKMeans
    from sklearn.metrics import adjusted_rand_score as sk_adjusted_rand_score, silhouette_score as sk_silhouette_score
    from cuml.cluster import KMeans as cuKMeans
    from cuml.dask.cluster.kmeans import KMeans as cumKMeans
    from cuml.metrics import adjusted_rand_score as cu_adjusted_rand_score
    # Configure matplotlib library
    import matplotlib.pyplot as plt

    # Configure seaborn library
    sns.set()
    #sns.set(style="white", color_codes=True)
    # Configure warnings
    #warnings.filterwarnings("ignore")


    ####################################### KMEANS #############################################################
    # Create local cluster
    cluster = LocalCUDACluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    # Identify number of workers
    n_workers = len(client.has_what().keys())

    # Read data in host memory
    from sklearn.datasets import load_iris

    loader = load_iris()

    #x = host_data.iloc[:, [0,1,2,3]].values
    device_m_data = dask_cudf.from_cudf(cudf.from_pandas(pd.DataFrame(loader.data)), npartitions=2) # Get data columns
    device_m_labels = dask_cudf.from_cudf(cudf.from_pandas(pd.DataFrame(loader.target)), npartitions=2)

    # Plot data
    #sns.pairplot(device_csv.to_pandas(), hue='variety');

    # Define variables
    label_type = { 'Setosa': 1, 'Versicolor': 2, 'Virginica': 3 } # Dictionary mapping label names to numeric codes

    # Create KMeans
    cu_m_kmeans = cumKMeans(init = 'k-means||',
                     n_clusters = len(np.unique(loader.target)),
                     oversampling_factor = 40,
                     random_state = 0)
    # Fit data in KMeans
    cu_m_kmeans.fit(device_m_data)

    # Predict data
    cu_m_kmeans_labels_predicted = cu_m_kmeans.predict(device_m_data).compute()

    # Check score
    print('Cluster centers:\n',cu_m_kmeans.cluster_centers_)
    print('adjusted_rand_score: ', sk_adjusted_rand_score(loader.target, cu_m_kmeans_labels_predicted.values.get()))
    print('silhouette_score: ', sk_silhouette_score(device_m_data.compute().to_pandas(), cu_m_kmeans_labels_predicted))

    # Close local cluster
    client.close()
    cluster.close()

Could you provide the output of those library versions? I would also recommend running the modified script to see whether it runs successfully for you. If it does not, we can dig further into whether this is related to Docker, to the RAPIDS version, or to something else.

If you have access to the command prompt that is running your Jupyter notebook, it may help to enable logging by passing verbose=True when constructing the KMeans object. This can help us isolate where it is getting stuck.
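As a minimal sketch, reusing the cumKMeans alias and the constructor arguments from the scripts above (only the verbose flag is new here):

    # Hypothetical sketch: verbose=True makes cuML log progress to the
    # terminal that launched the notebook server, which helps locate
    # the step where the computation hangs.
    cu_m_kmeans = cumKMeans(init = 'k-means||',
                            n_clusters = 3,
                            oversampling_factor = 40,
                            random_state = 0,
                            verbose = True)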

The Dask documentation is very good and extensive, though I admit the amount of flexibility and functionality it provides can be a little overwhelming at times. I find it helps to think of Dask as an API for distributed computing that gives the user control over several different layers of execution, each layer offering more fine-grained control.

compute(), wait() and persist() are concepts that come from the way the tasks underlying a series of distributed computations are scheduled on a set of workers. Common to all of these computations is an execution graph that represents remote tasks and their interdependencies. At some point, this execution graph gets scheduled on the set of workers. Dask provides two APIs, depending on whether the tasks underlying the graph are scheduled immediately (eagerly) or whether the computation needs to be triggered manually (lazily).

Both of these APIs build the execution graph as tasks are created that depend on the results of other tasks. The former uses the dask.futures API for immediate asynchronous execution, whose results you may sometimes want to wait() on before doing other operations. The dask.delayed API is used for lazy execution and requires calling methods like compute() or persist() to begin the computation.
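Here is a small self-contained sketch contrasting the two styles, using plain Python functions on a local cluster purely for illustration:

    from dask.distributed import Client, wait
    import dask

    client = Client()  # illustrative local cluster

    # Eager (futures) API: submit() schedules the task immediately and
    # returns a future; wait() blocks until it has finished executing.
    future = client.submit(sum, [1, 2, 3])
    wait(future)
    print(future.result())       # 6

    # Lazy (delayed) API: delayed() only records the task in a graph;
    # nothing runs until compute() (or persist()) is called.
    lazy_total = dask.delayed(sum)([1, 2, 3])
    print(lazy_total.compute())  # 6

    client.close()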

Most of the time, users of libraries like RAPIDS are more concerned with manipulating their data than with how those manipulations get scheduled on a set of workers. The dask.dataframe and dask.array objects are built on top of the delayed and futures APIs. Most users interact with these data structures rather than with delayed and futures objects directly, but it is not a bad idea to know about them in case you ever need to do some data transformations outside of what the distributed dataframe and array objects provide.

Both dask.dataframe and dask.array build lazy execution graphs where possible, and provide a compute() method to materialize the graph and return the result to the client. They both also provide a persist() method to start the computation asynchronously in the background. wait() is useful if you want to start computing in the background but do not want to return the results to the client.
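For example, here is a sketch of these three calls on a toy dask.dataframe built from pandas (the same pattern applies to dask_cudf collections):

    import pandas as pd
    import dask.dataframe as dd
    from dask.distributed import Client, wait

    client = Client()

    ddf = dd.from_pandas(pd.DataFrame({'x': range(1000)}), npartitions=4)

    # persist() starts executing the lazy graph in the background and
    # returns immediately with a collection backed by futures.
    ddf = ddf.persist()
    wait(ddf)  # optionally block until every partition has finished

    # compute() materializes the result and returns it to the client
    # as a regular local object -- here, a float.
    print(ddf.x.mean().compute())

    client.close()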

I hope this helps.