How to export Estimator's best model?
I'm training a simple CNN with a custom Estimator on TFRecords, and I'm trying to export the model with the best validation loss from the train_and_evaluate phase.
According to the documentation for tf.estimator.BestExporter, I should provide a function that returns a ServingInputReceiver, but after doing so the train_and_evaluate phase crashes with NotFoundError: model/m01/eval; No such file or directory.
It seems that with the BestExporter the evaluation results are not saved, as if there were no exporter at all. I have tried different ServingInputReceivers, but I keep getting the same error.
The first one is defined here:
feature_spec = {
'shape': tf.VarLenFeature(tf.int64),
'image_raw': tf.FixedLenFeature((), tf.string),
'label_raw': tf.FixedLenFeature((43), tf.int64)
}
def serving_input_receiver_fn():
serialized_tf_example = tf.placeholder(dtype=tf.string,
shape=[120, 120, 3],
name='input_example_tensor')
receiver_tensors = {'image': serialized_tf_example}
features = tf.parse_example(serialized_tf_example, feature_spec)
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
and the second one here:
def serving_input_receiver_fn():
feature_spec = {
'image': tf.FixedLenFeature((), tf.string)
}
return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
Here are my exporter and training procedure:
exporter = tf.estimator.BestExporter(
name="best_exporter",
serving_input_receiver_fn=serving_input_receiver_fn,
exports_to_keep=5)
train_spec = tf.estimator.TrainSpec(
input_fn=lambda: imgs_input_fn(train_path, True, epochs, batch_size))
eval_spec = tf.estimator.EvalSpec(
input_fn=lambda: imgs_input_fn(eval_path, perform_shuffle=False, batch_size=1),
exporters=exporter)
tf.estimator.train_and_evaluate(ben_classifier, train_spec, eval_spec)
This is a gist with the output.
What is the correct way to define a ServingInputReceiver for BestExporter?
Could you try the code shown below:
INPUT_FEATURE = 'image'  # name of the model's input feature
INPUT_SHAPE = 784        # flattened input size expected by the model (example value); adjust to your own model

def serving_input_receiver_fn():
    """
    This is used to define inputs to serve the model.
    :return: ServingInputReceiver
    """
    receiver_tensors = {
        # The size of the input image is flexible.
        INPUT_FEATURE: tf.placeholder(tf.float32, [None, None, None, 1]),
    }
    # Convert the given inputs to match what the model expects.
    features = {
        # Reshape the given images.
        INPUT_FEATURE: tf.reshape(receiver_tensors[INPUT_FEATURE], [-1, INPUT_SHAPE])
    }
    return tf.estimator.export.ServingInputReceiver(receiver_tensors=receiver_tensors,
                                                    features=features)
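Note that this receiver takes decoded float image tensors directly rather than serialized tf.Example protos, so it does not need tf.parse_example at all; the client simply sends image arrays under the 'image' key. INPUT_FEATURE and INPUT_SHAPE above stand in for your own input name and shape.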
Then use tf.estimator.BestExporter as shown below:
best_exporter = tf.estimator.BestExporter(
serving_input_receiver_fn=serving_input_receiver_fn,
exports_to_keep=1)
exporters = [best_exporter]
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={input_name: eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_spec = tf.estimator.EvalSpec(
input_fn=eval_input_fn,
throttle_secs=10,
start_delay_secs=10,
steps=None,
exporters=exporters)
# Train and evaluate the model.
tf.estimator.train_and_evaluate(classifier, train_spec=train_spec, eval_spec=eval_spec)
For more details, see this link:
https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_keras_estimator.py
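After train_and_evaluate completes, BestExporter writes the best SavedModel under <model_dir>/export/<exporter_name>/<timestamp>. As a minimal sketch of querying that export, assuming TF 1.x, the raw-tensor receiver above with the 'image' key, and a hypothetical model directory, you could load it with tf.contrib.predictor:

import glob
import os

import numpy as np
from tensorflow.contrib import predictor

# Hypothetical paths: adjust model_dir and the exporter name to your setup.
model_dir = 'model/m01'
export_base = os.path.join(model_dir, 'export', 'best_exporter')

# Each export lands in a timestamped subdirectory; pick the newest one.
latest_export = sorted(glob.glob(os.path.join(export_base, '*')))[-1]
predict_fn = predictor.from_saved_model(latest_export)

# Feed a batch of images matching the receiver's 'image' placeholder
# (shape and dtype here are assumptions; adjust to your model).
batch = np.zeros((1, 120, 120, 1), dtype=np.float32)
print(predict_fn({'image': batch}))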