Tensors are from different graphs

I'm new to TensorFlow and am trying to create an input pipeline from tfrecords. Below is the snippet I use to create batches and feed my estimator:

def generate_input_fn(image,label,batch_size=BATCH_SIZE):
    logging.info('creating batches...')    
    dataset = tf.data.Dataset.from_tensors((image, label)) #<-- dataset is 'TensorDataset'
    dataset = dataset.repeat().batch(batch_size)
    iterator=dataset.make_initializable_iterator()
    iterator.initializer
    return iterator.get_next()

At the line iterator=dataset.make_initializable_iterator() I get:

ValueError: Tensor("count:0", shape=(), dtype=int64, device=/device:CPU:0) must be from the same graph as Tensor("TensorDataset:0", shape=(), dtype=variant).

I suspect I'm accidentally mixing tensors from different graphs, but I can't work out how, or on which line of code. I also don't know which tensor is count:0 and which is TensorDataset:0.

Can anyone help me debug this?

Error log:

      File "task.py", line 189, in main
    estimator.train(input_fn=lambda:generate_input_fn(image=image_data, label=label_data),steps=3,hooks=[logging_hook])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 352, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 809, in _train_model
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 668, in _get_features_and_labels_from_input_fn
    result = self._call_input_fn(input_fn, mode)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 760, in _call_input_fn
    return input_fn(**kwargs)
  File "task.py", line 189, in <lambda>
    estimator.train(input_fn=lambda:generate_input_fn(image=image_data, label=label_data),steps=3,hooks=[logging_hook])
  File "task.py", line 152, in generate_input_fn
    iterator=dataset.make_initializable_iterator()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 107, in make_initializable_iterator
    initializer = gen_dataset_ops.make_iterator(self._as_variant_tensor(),
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1399, in _as_variant_tensor
    self._input_dataset._as_variant_tensor(),  # pylint: disable=protected-access
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1156, in _as_variant_tensor
    sparse.as_dense_types(self.output_types, self.output_classes)))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_dataset_ops.py", line 1696, in repeat_dataset
    output_types=output_types, output_shapes=output_shapes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 350, in _apply_op_helper
    g = ops._get_graph_from_inputs(_Flatten(keywords.values()))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 5284, in _get_graph_from_inputs
    _assert_same_graph(original_graph_element, graph_element)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 5220, in _assert_same_graph
    original_item))
ValueError: Tensor("count:0", shape=(), dtype=int64, device=/device:CPU:0) must be from the same graph as Tensor("TensorDataset:0", shape=(), dtype=variant).

If I modify the function to:

image_placeholder=tf.placeholder(image.dtype,shape=image.shape)
label_placeholder=tf.placeholder(label.dtype,shape=label.shape)
dataset = tf.data.Dataset.from_tensors((image_placeholder, label_placeholder))

i.e. I add placeholders, then I get this output:

INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
2018-03-18 01:56:55.902917: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Killed

When you call estimator.train(input_fn), a new graph is created from the graph defined in the estimator's model_fn and the graph defined in input_fn.

So if either of these functions references tensors created outside its own scope, those tensors will not be part of the same graph and you will get this error.
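
To see the rule in isolation, here is a minimal standalone sketch (illustrative names only, not taken from your code) that triggers the same kind of ValueError by mixing tensors from two graphs:

import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(1, name="a")   # this op lives in g1

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant(2, name="b")   # this op lives in g2
    total = a + b                  # ValueError: ... must be from the same graph ...

This is consistent with your error message: TensorDataset:0 comes from the from_tensors() call built on tensors created before the Estimator made its training graph, while count:0 comes from the repeat() call that is built inside that new graph.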


The easy solution is to make sure that every tensor you define is created inside input_fn or model_fn.

For example:

def generate_input_fn(batch_size):
    # Create the images and labels tensors here
    images = tf.placeholder(tf.float32, [None, 224, 224, 3])
    labels = tf.placeholder(tf.int64, [None])

    dataset = tf.data.Dataset.from_tensors((images, labels))
    dataset = dataset.repeat()
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(1)
    iterator = dataset.make_initializable_iterator()

    return iterator.get_next()
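
Note that with this placeholder approach you still have to run the iterator's initializer and feed the placeholders (for example from a SessionRunHook), otherwise nothing is ever fed into the pipeline. Since your data already lives in tfrecords, an alternative is to build the whole pipeline inside input_fn with tf.data.TFRecordDataset, so no tensor is created outside the Estimator's graph. A rough sketch, assuming a hypothetical feature spec and a fixed 224x224x3 image shape that you would need to adapt to how your records were written:

def generate_input_fn(filenames, batch_size=BATCH_SIZE):
    # All of these ops are created when the Estimator calls input_fn,
    # so they all belong to the Estimator's graph.
    def _parse(serialized):
        # Hypothetical feature spec -- adjust keys, dtypes and shape
        # to match how the tfrecords were actually written.
        features = tf.parse_single_example(serialized, {
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
        image = tf.decode_raw(features['image'], tf.uint8)
        image = tf.reshape(image, [224, 224, 3])
        return image, features['label']

    dataset = tf.data.TFRecordDataset(filenames)
    dataset = dataset.map(_parse)
    dataset = dataset.repeat()
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(1)
    iterator = dataset.make_one_shot_iterator()   # no initializer needed
    return iterator.get_next()

You would then call it as estimator.train(input_fn=lambda: generate_input_fn(['train.tfrecord']), steps=3), where 'train.tfrecord' stands in for your actual file paths.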