How to loop over data fed with a placeholder in TensorFlow only once, using the new Dataset API

I have started using the new Dataset API, and there is one thing I want to do that isn't described in the documentation (https://www.tensorflow.org/programmers_guide/datasets#training_workflows).

My data fits in memory, so I want to load it into TensorFlow to make training efficient. I currently see two ways to do this.

One is to load the data directly into the graph, like this:

import time
import tensorflow as tf

dataset = tf.contrib.data.Dataset.from_tensor_slices((X, Y))
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# loop over epochs
for _ in range(5):
    # Initialize an iterator over the training dataset.
    sess.run(iterator.initializer)
    # loop over all the batches
    while True:
        s = time.time()
        try:
            sess.run(next_element)
        except tf.errors.OutOfRangeError:
            print("Finish epoch")
            break

The other is to feed the data through placeholders, so that the data is not stored in the graph:

features_placeholder = tf.placeholder(features.dtype, features.shape)
labels_placeholder = tf.placeholder(labels.dtype, labels.shape)

dataset = tf.contrib.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# loop over epochs
for _ in range(5):
    # Initialize an iterator over the training dataset.
    sess.run(iterator.initializer, feed_dict={features_placeholder: X, labels_placeholder: Y})
    # loop over all the batches
    while True:
        s = time.time()
        try:
            sess.run(next_element)
        except tf.errors.OutOfRangeError:
            print("Finish epoch")
            break

The second one is, I think, the better way to save memory, but I don't want to feed the data at every epoch; that is a needless loss of performance.

Is there a way to initialize the iterator only once while still using placeholders?

Something like this:

sess.run(iterator.initializer, feed_dict={features_placeholder: X, labels_placeholder: Y})

# loop over epochs
for _ in range(5):
    # Re-initialize the iterator over the training dataset (no feed needed).
    sess.run(iterator.initializer)
    # loop over all the batches
    while True:
        s = time.time()
        try:
            sess.run(next_element)
        except tf.errors.OutOfRangeError:
            print("Finish epoch")
            break

That way we would keep the performance of the first solution while saving memory like the second one.

Note:

One solution is to define the number of epochs with the dataset.repeat() method, but with it we kind of lose track of where we are in the training.

I want to check the evolution of the loss after each epoch (one pass over all the data).

First, I would suggest quantifying the performance overhead of feeding X and Y each time the iterator is initialized. For primitive types like tf.int32 and tf.float32, it is often possible to feed a value without copying any data, in which case the overhead should be negligible. Even when a copy is required, it entails a single memcpy(), which can be quite fast. (On the other hand, feeding a tf.string tensor can be more expensive, because it requires multiple small copies to convert between the Python and C++ string representations.)
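
A quick way to measure that overhead, as a sketch that assumes the placeholder-based setup from the question (features_placeholder, labels_placeholder, iterator) and an active sess:

import time

start = time.time()
# Initialization is where the feed happens, so time this call alone.
sess.run(iterator.initializer,
         feed_dict={features_placeholder: X, labels_placeholder: Y})
print("Iterator init with feed took %.6f s" % (time.time() - start))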

Assuming the feeding does turn out to be a significant overhead, you can make it a one-time cost by storing the input data in a tf.Variable. For example:

placeholder_X = tf.placeholder(X.dtype, X.shape)
var_X = tf.Variable(placeholder_X)
placeholder_Y = tf.placeholder(Y.dtype, Y.shape)
var_Y = tf.Variable(placeholder_Y)

dataset = tf.contrib.data.Dataset.from_tensor_slices((var_X, var_Y))
iterator = dataset.make_initializable_iterator()

# ...

# The contents of `X` and `Y` will be copied once, in this call.
sess.run(tf.global_variables_initializer(), feed_dict={
    placeholder_X: X, placeholder_Y: Y})

for _ in range(5):
  # The iterator will be initialized from the variables with no copy.
  sess.run(iterator.initializer)

  # ...
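
For completeness, one way to fill in the rest of the loop, reusing the while/try pattern from the question (a sketch; it assumes next_element is created from this iterator and that sess is an active session):

next_element = iterator.get_next()

for _ in range(5):
  # The iterator is re-initialized from the variables, with no copy.
  sess.run(iterator.initializer)
  # loop over all the batches of the current epoch
  while True:
    try:
      sess.run(next_element)
    except tf.errors.OutOfRangeError:
      print("Finish epoch")
      break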

I don't think you need to initialize the iterator at every epoch. You can do it once, before the training loop. But you then also need to tell the dataset, when defining it, to repeat and to reshuffle on each iteration:

features_placeholder = tf.placeholder(features.dtype, features.shape)
labels_placeholder = tf.placeholder(labels.dtype, labels.shape)

dataset = (tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
           .shuffle(buffer_value, reshuffle_each_iteration=True)
           .repeat()
           .batch(batch_num))
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# Initialize the iterator over the training dataset, once.
sess.run(iterator.initializer, feed_dict={features_placeholder: X, labels_placeholder: Y})
# loop over epochs
for _ in range(5):
    # loop over all the batches of one epoch
    for _ in range(1000):
        s = time.time()
        sess.run(next_element)

Note that since repeat() is on, you need to compute the exact number of iterations needed to make one pass over the data per epoch, and use that number in your inner loop.
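
A minimal sketch of that computation, assuming X is a NumPy array and batch_num is the batch size used above:

import math

num_examples = X.shape[0]
# With .repeat() applied before .batch(), batches can cross epoch
# boundaries, so this count is exact only when num_examples divides
# evenly by batch_num; otherwise it approximates one pass over the data.
steps_per_epoch = int(math.ceil(num_examples / float(batch_num)))

for epoch in range(5):
    for _ in range(steps_per_epoch):
        sess.run(next_element)
    print("Finished epoch %d" % epoch)  # e.g. evaluate the loss here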