What's the difference between tensorflow dynamic_rnn and rnn?
There are several classes in tf.nn that relate to RNNs. In the examples I find on the web, tf.nn.dynamic_rnn and tf.nn.rnn seem to be used interchangeably, or at least I cannot figure out why one would be used instead of the other. What is the difference?
They are almost the same, but there is a slight difference in the structure of their inputs and outputs. From the documentation:
tf.nn.dynamic_rnn

This function is functionally identical to the function rnn above, but performs fully dynamic unrolling of inputs.

Unlike rnn, the input inputs is not a Python list of Tensors, one for each frame. Instead, inputs may be a single Tensor where the maximum time is either the first or second dimension (see the parameter time_major). Alternatively, it may be a (possibly nested) tuple of Tensors, each of them having matching batch and time dimensions. The corresponding output is either a single Tensor having the same number of time steps and batch size, or a (possibly nested) tuple of such tensors, matching the nested structure of cell.output_size.
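The input-layout difference the docs describe can be illustrated without TensorFlow: tf.nn.rnn expects a Python list of per-time-step Tensors of shape [batch, depth], while tf.nn.dynamic_rnn takes a single Tensor of shape [batch, max_time, depth] (or time-major with time_major=True). Below is a minimal pure-Python sketch of the slicing that converts one layout into the other (plain nested lists stand in for Tensors; the helper name is illustrative, not a TensorFlow API):

```python
# One batch of 2 sequences, each with max_time=3 steps of depth 2,
# in the single-Tensor [batch, time, depth] layout dynamic_rnn takes.
batch_major = [
    [[1, 2], [3, 4], [5, 6]],   # sequence 0: 3 time steps of depth 2
    [[7, 8], [9, 0], [1, 1]],   # sequence 1
]

def unstack_time(inputs):
    """Turn one [batch, time, depth] array into a list of [batch, depth]
    slices, one per time step -- the list-of-Tensors layout rnn expects."""
    max_time = len(inputs[0])
    return [[seq[t] for seq in inputs] for t in range(max_time)]

per_step = unstack_time(batch_major)
# per_step[0] is the depth-2 input of every sequence at t=0.
print(per_step[0])  # [[1, 2], [7, 8]]
```

With the static API you would build this list yourself (TensorFlow's tf.unstack does the equivalent on Tensors); dynamic_rnn keeps the single Tensor and indexes into it inside its loop.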
For more details, explore the source.

From Denny Britz's RNNs in Tensorflow, a Practical Guide and Undocumented Features, published on August 21, 2016:
tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you're unable to pass in longer sequences (> 200) than you've originally specified.

tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.
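The static-vs-dynamic distinction Britz describes can be sketched in plain Python (no TensorFlow; the "cell" here is a toy accumulator, and the function names are illustrative only):

```python
def build_static_unrolled(num_steps):
    """Static unrolling: the number of steps is fixed at construction
    time, like tf.nn.rnn -- one step is baked in per time step."""
    steps = [lambda state, x: state + x for _ in range(num_steps)]

    def run(inputs):
        if len(inputs) != num_steps:
            raise ValueError("sequence length must match the unrolled length")
        state = 0
        for step, x in zip(steps, inputs):
            state = step(state, x)
        return state

    return run

def run_dynamic(inputs):
    """Dynamic unrolling: loop at execution time, like the while-loop
    inside tf.nn.dynamic_rnn -- any sequence length works."""
    state = 0
    for x in inputs:  # length is decided when the data arrives
        state = state + x
    return state

static_rnn = build_static_unrolled(num_steps=3)
print(static_rnn([1, 2, 3]))         # works: 6
print(run_dynamic([1, 2, 3, 4, 5]))  # any length works: 15
# static_rnn([1, 2, 3, 4, 5]) would raise ValueError: too many steps
```

The static version also pays its construction cost up front (one step object per time step), which mirrors why building a 200-step static graph is slow while the dynamic loop defers that work to execution.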