Prediction from tensorflow model fails

I am very new to tensorflow and am trying to learn how to save and load a previously trained model. I created a simple model using an Estimator and trained it:

classifier = tf.estimator.Estimator(model_fn=bag_of_words_model)

# Train
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"words": x_train},       # x_train is 2D numpy array of shape (26, 5)
    y=y_train,                  # y_train is a 1D pandas Series of length 26
    batch_size=1000,
    num_epochs=None,
    shuffle=True)

classifier.train(input_fn=train_input_fn, steps=300)
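
For context, the body of bag_of_words_model is not shown here. Purely to illustrate the Estimator API, this is a hypothetical, minimal model_fn of roughly that shape; the vocabulary size, embedding size, number of classes and output key are all assumptions, not my actual code:

def bag_of_words_model(features, labels, mode):
    # features["words"] is a (batch, 5) int64 tensor of word ids
    embedded = tf.contrib.layers.embed_sequence(
        features["words"], vocab_size=1000, embed_dim=16)   # (batch, 5, 16)
    averaged = tf.reduce_mean(embedded, axis=1)              # bag of words: average the word embeddings
    logits = tf.layers.dense(averaged, units=2)              # assume 2 classes
    predictions = {"output": tf.argmax(logits, axis=1)}      # key name is an assumption
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode, predictions=predictions,
            export_outputs={"predict": tf.estimator.export.PredictOutput(predictions)})
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    return tf.estimator.EstimatorSpec(mode, loss=loss)        # EVAL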

Then I try to save the model:

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.placeholder(tf.int64, shape=(None, 5))}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

full_model_dir = classifier.export_savedmodel(export_dir_base="E:/models/", serving_input_receiver_fn=serving_input_receiver_fn)
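
As a small aside, export_savedmodel returns the timestamped export directory, so it can be reused when loading. A tiny sketch (the decode is an assumption; the return value may come back as a bytes path on Python 3):

export_dir = full_model_dir.decode("utf-8") if isinstance(full_model_dir, bytes) else full_model_dir
print(export_dir)  # e.g. E:/models/<some time stamp>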

I actually copied the serving_input_receiver_fn from this similar question. I don't really understand what is going on in that function, but it stores my model in E:/models/<some time stamp>. I now try to load this saved model:

from tensorflow.contrib import predictor

classifier = predictor.from_saved_model("E:\models\<some time stamp>")
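
As a sanity check, here is a small sketch of how the loaded predictor's expected input and output keys can be listed (the contrib Predictor exposes feed_tensors and fetch_tensors; the values shown in the comments are illustrative assumptions, not my actual output):

print(classifier.feed_tensors)   # e.g. {'predictor_inputs': <tf.Tensor 'words:0' shape=(?, 5) dtype=int64>}
print(classifier.fetch_tensors)  # e.g. {'output': <tf.Tensor ...>}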

The model loads perfectly. After this, I am stuck on how to use this classifier object to get predictions on new data. I have followed the guide here to achieve it, but couldn't do it :(. Here is what I did:

predictions = classifier({'predictor_inputs': x_test})["output"]  # x_test is a 2D numpy array shaped like x_train above

I get the following error:

2019-01-10 12:43:38.603506: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Restoring parameters from E:\models\1547101005\variables\variables
Traceback (most recent call last):
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
     [[{{node Placeholder}} = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 85, in <module>
    predictions = classifier({'predictor_inputs': x_test})["output"]
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
    return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
     [[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153)  = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Caused by op 'Placeholder', defined at:
  File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 82, in <module>
    classifier = predictor.from_saved_model("E:\models\1547101005")
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
    config=config)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
    loader.load(self._session, tags.split(','), export_dir)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 197, in load
    return loader.load(sess, tags, import_scope, **saver_kwargs)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 350, in load
    **saver_kwargs)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 278, in load_graph
    meta_graph_def, import_scope=import_scope, **saver_kwargs)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
    **kwargs))
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
     [[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153)  = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

It says that I have to feed a value for the placeholder (I think the one defined in serving_input_receiver_fn). I don't know how to do this without using a tensorflow Session object.
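
For reference, a small diagnostic sketch that lists every placeholder op in the restored graph (the contrib Predictor exposes its graph; the expectation in the comment is my reading of the error, not verified output):

graph = classifier.graph
for op in graph.get_operations():
    if op.type == "Placeholder":
        # Presumably this shows both 'words' (fed via 'predictor_inputs') and a
        # second, anonymous 'Placeholder' created for features, which is the
        # one the error complains about.
        print(op.name, op.outputs[0].dtype, op.outputs[0].shape)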

Feel free to ask for more information if required.

After gaining some vague understanding of serving_input_receiver_fn, I figured out that features cannot be a placeholder, since that creates 2 placeholders (one for serialized_tf_example and another for features). I modified the function as follows (the change is only to the features variable):

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.tile(serialized_tf_example, multiples=[1, 1])}  # Changed this
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

When I try to predict the output from the loaded model, I no longer get any error. It works! The only issue is that the output is incorrect (for which I'm posting a new question :)).
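
For completeness, a sketch of an even simpler variant of the same idea (not verified against my model): reuse the single placeholder directly as the feature, so the exported graph contains exactly one placeholder and nothing is left unfed at serving time.

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": serialized_tf_example}   # same tensor, no extra placeholder or tile op
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)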