TensorFlow: How to verify that it is running on GPU

I am looking for a simple way to verify that my TF graph is actually running on the GPU.

P.S. It would also be nice to verify whether the cuDNN library is being used.

There are a few ways to see where your operations are placed.

  1. Add RunOptions and RunMetadata to the session call and view the placement of ops and computations in TensorBoard (see the sketch after this list). See the code here: https://www.tensorflow.org/get_started/graph_viz

  2. Specify the log_device_placement option in the session ConfigProto. This logs to the console which device each operation is placed on. https://www.tensorflow.org/api_docs/python/tf/ConfigProto

  3. Use nvidia-smi in the terminal to check GPU utilization.
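
For option 1, here is a minimal sketch using the TF 1.x API, following the linked graph_viz guide; the log directory '/tmp/tf_logs' and the tag 'run_1' are placeholder names:

import tensorflow as tf

# A small graph whose placement we want to inspect.
a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
c = tf.matmul(a, b)

# Request a full trace of the run so device placement is recorded.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(c, options=run_options, run_metadata=run_metadata)
    # Write the graph and the run metadata so TensorBoard's graph view
    # can show which device each op was placed on.
    writer = tf.summary.FileWriter('/tmp/tf_logs', sess.graph)
    writer.add_run_metadata(run_metadata, 'run_1')
    writer.close()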

When you import TF in Python

import tensorflow as tf

you will get these logs, which indicate that the CUDA libraries are being used:

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally

Also, when you build the graph and run a session with log_device_placement set in the ConfigProto, you will get these logs showing that it found a GPU device (a minimal snippet follows these logs):

I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.759
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 4.94GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
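
As a minimal sketch of option 2 (TF 1.x API; op and tensor names are arbitrary), the following should additionally print per-op placement lines similar to "MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0" to the console:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
c = tf.matmul(a, b, name='MatMul')

# log_device_placement=True makes the session log, for every op,
# which device it was assigned to (e.g. /gpu:0 or /cpu:0).
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))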

There is a related TensorFlow upstream issue. Basically it says that the Python API does not expose this information yet.

The C++ API, however, does. E.g. there is tensorflow::KernelsRegisteredForOp(). I wrote a small Python wrapper around it and then implemented supported_devices_for_op here (in this commit).
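
For illustration only, calling such a wrapper might look like the snippet below; supported_devices_for_op and its import path are hypothetical names based on the description above, not part of the public TensorFlow Python API, so this will not run against a stock install:

# Hypothetical: supported_devices_for_op is the custom wrapper described above,
# backed by tensorflow::KernelsRegisteredForOp() in C++; the import path is assumed.
from tensorflow.python.framework.kernels import supported_devices_for_op

# Would list the device types with a registered kernel for a given op,
# e.g. something like ['CPU', 'GPU'] for MatMul.
print(supported_devices_for_op('MatMul'))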