Can TensorFlow schedule operations to all available GPUs automatically?
We have read the TensorFlow paper on scheduling. It is supposed to be able to simulate executing the graph ahead of time and find the "right" device on which to place each operation.
However, we tested with tf.Session(config=tf.ConfigProto(log_device_placement=True)), without specifying any device to run on, and found that every operation was placed on the first GPU.
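Here is a minimal sketch of the kind of test we ran, using the TF 1.x graph API; the model itself is just an illustrative stand-in, and we used an Adam optimizer, which is where the Adam/* ops in the log below come from:

import tensorflow as tf

# A toy graph with no explicit device placement anywhere.
x = tf.Variable(tf.random_normal([1000, 1000]))
loss = tf.reduce_sum(tf.matmul(x, x))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)  # produces the Adam/* ops seen below

# log_device_placement=True makes the placer log every op-to-device assignment.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())  # tf.initialize_all_variables() on pre-0.12 releases
    sess.run(train_op)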
The log looks like this:
Adam/epsilon: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/epsilon: /job:localhost/replica:0/task:0/gpu:0
Adam/beta2: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta2: /job:localhost/replica:0/task:0/gpu:0
Adam/beta1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta1: /job:localhost/replica:0/task:0/gpu:0
Adam/learning_rate: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/learning_rate: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/read: /job:localhost/replica:0/task:0/gpu:0
Variable_3/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/read: /job:localhost/replica:0/task:0/gpu:0
Variable_2/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
Variable_1/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_1/Adam_1: /job:localhost/replica:0/task:0/gpu:0
Variable is also placed on the GPU. I am fairly sure the scheduler is not good enough yet, and that the best practice for users is to specify explicitly which operations should run on the CPU or GPU, especially when there are multiple GPUs. Is that correct?
As of v0.9, TensorFlow places all operations on the first GPU you have, so what you are observing is 100% expected. Now, if your question is "Could TensorFlow automatically distribute my graph on my 4 GPUs without my intervention?", the answer as of August 2016 is no.
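In the meantime, if you want something other than the default placement, you can pin operations to devices yourself with tf.device. A minimal sketch (the device strings, shapes, and ops here are purely illustrative):

import tensorflow as tf

# Explicitly pin ops to devices; allow_soft_placement falls back to another
# device if a kernel is unavailable on the requested one.
with tf.device('/gpu:1'):
    a = tf.random_normal([1000, 1000])
    b = tf.matmul(a, a)

with tf.device('/cpu:0'):
    total = tf.reduce_sum(b)

config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(total))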
If you are trying to harness the power of all the GPUs available on your local machine, check out this variation of the cifar10 tutorial. The next level up would be replicated training with distributed TensorFlow, but that is probably overkill for what you are trying to do.
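For reference, the multi-GPU cifar10 example boils down to in-graph replication: one "tower" per GPU computes gradients on its own batch, and the gradients are averaged on the CPU before a single update is applied. Here is a stripped-down sketch of that pattern using the TF 1.x graph API, assuming four local GPUs and a throwaway linear model in place of a real model and input pipeline:

import tensorflow as tf

NUM_GPUS = 4  # assumption: four local GPUs, as in the question

def tower_loss(images, labels):
    # Hypothetical model: a single linear layer, just to show the pattern.
    w = tf.get_variable('w', [784, 10])
    logits = tf.matmul(images, w)
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

opt = tf.train.AdamOptimizer(1e-3)
tower_grads = []
for i in range(NUM_GPUS):
    with tf.device('/gpu:%d' % i):
        with tf.variable_scope('model', reuse=(i > 0)):
            # Stand-ins for a real per-tower input pipeline.
            images = tf.random_normal([32, 784])
            labels = tf.random_uniform([32], 0, 10, dtype=tf.int32)
            tower_grads.append(opt.compute_gradients(tower_loss(images, labels)))

# Average each variable's gradients across towers on the CPU, then apply once.
with tf.device('/cpu:0'):
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars]
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0), grads_and_vars[0][1]))
    train_op = opt.apply_gradients(averaged)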
With all the virtualization going on these days, the question of which device a particular operation gets assigned to may soon become irrelevant.