TensorFlow placement algorithm

I'd like to know when TensorFlow's placement algorithm (as described in the whitepaper) actually comes into play. All the examples of distributed TensorFlow I've seen so far seem to use `tf.device()` to manually specify where nodes should execute.

The dynamic placement algorithm described in Section 3.2.1 of the TensorFlow whitepaper was not included in the open-source release. Instead, the "simple placer" (whose implementation can be found in `simple_placer.cc`) is used, but it requires some explicit annotations (via `tf.device()`) to yield an efficient placement. Higher-level constructs like `tf.train.replica_device_setter()` wrap `tf.device()` to specify common policies such as "shard the variables across parameter servers, and otherwise put all ops on the worker device," and we use this extensively in distributed training.

In practice, we have found that a small set of annotations usually yields a more efficient placement than the dynamic placer would determine, but improving the placement algorithm remains an active area of research.