Tensorflow, tf.train.batch, no result

I'm new to tf.train.batch, so I wrote a small example to test it. When I run the code, I get no result and the process just keeps running.

Have you run into the same situation before? Thanks a lot!

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import tensorflow as tf


a = [[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]
b = [1,2,3,4]
input_queue = tf.train.slice_input_producer([a, b],num_epochs=None,shuffle=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(4):
        x, y = tf.train.batch([a, b], batch_size=2)
        x_, y_ = sess.run([x, y])
        print(x_, y_)

    coord.request_stop()
    coord.join(threads)

In addition, the function tf.train.slice_input_producer does work. When I leave out tf.train.batch, the code becomes:

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import tensorflow as tf


a = [[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]
b = [1,2,3,4]
input_queue = tf.train.slice_input_producer([a, b],num_epochs=None,shuffle=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(4):
        print(sess.run(input_queue))

    coord.request_stop()
    coord.join(threads)

The result is:

[array([1, 2, 3, 4]), 1]
[array([1, 2, 3, 4]), 2]
[array([1, 2, 3, 4]), 3]
[array([1, 2, 3, 4]), 4]

I think the main problem is that you didn't specify enqueue_many=True, so the whole list is treated as a single example and just gets repeated. Note also that your code creates tf.train.batch inside the loop, after start_queue_runners, so the queue runner that fills the batching queue is never started and sess.run blocks forever. You can read more in the official documentation. Here is a working example:

import tensorflow as tf

a = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
b = [1, 2, 3, 4]

# enqueue_many=True treats the first dimension of a and b as separate
# examples, so the queue is filled with 4 rows instead of one big tensor.
a, b = tf.train.batch([a, b], batch_size=1, num_threads=1, capacity=4, enqueue_many=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess, coord)
    for i in range(4):
        print(sess.run([a, b]))
    coord.request_stop()
    coord.join(threads)
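
For completeness, here is a minimal sketch (an illustrative TF 1.x example, not from the original post) of how the slice_input_producer pipeline from the question can be combined with tf.train.batch: the batching op is built before start_queue_runners so its queue runner actually gets started, and enqueue_many is not needed because slice_input_producer already emits one slice at a time. The names x, y, x_batch, y_batch are just placeholders for this sketch.

import tensorflow as tf

a = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
b = [1, 2, 3, 4]

# slice_input_producer yields one row of a and one element of b per step.
x, y = tf.train.slice_input_producer([a, b], num_epochs=None, shuffle=False)
# Build the batching op before the session, so start_queue_runners
# also starts the runner that fills this batching queue.
x_batch, y_batch = tf.train.batch([x, y], batch_size=2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(2):
        print(sess.run([x_batch, y_batch]))
    coord.request_stop()
    coord.join(threads)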