Tensorflow Serving - No versions of servable <model> found under base path

I am currently trying to use TensorFlow Serving to serve a trained "textsum" model. I am using TF 0.11, which, from what I have read, automatically calls export_meta_graph when saving, creating the exported .ckpt and .ckpt.meta files.

Under the textsum/log_root directory I have several files. One is model.ckpt-230381 and another is model.ckpt-230381.meta.
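For reference, files named this way are what tf.train.Saver writes when it saves a checkpoint. A minimal sketch of that (the variables here are placeholders, not the actual textsum graph):

```python
import tensorflow as tf

# Toy variable standing in for the real textsum parameters.
w = tf.Variable(tf.zeros([2, 2]), name='w')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Writes log_root/model.ckpt-230381 (the weights) and
    # log_root/model.ckpt-230381.meta (the MetaGraphDef written via
    # export_meta_graph) -- the same pair of files listed above.
    saver.save(sess, 'log_root/model.ckpt', global_step=230381)
```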

From my understanding, this is the location I should be able to point to when setting up the model to be served. I issued the following commands:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=model  --model_base_path=tf_models/textsum/log_root/

After running the above command, I receive the following message:

W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:204] No versions of servable model found under base path tf_models/textsum/log_root/

Running inspect_checkpoint on the checkpoint file, I see this:

> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
> seq2seq/output_projection/w (DT_FLOAT) [256,335906]
> seq2seq/output_projection/v (DT_FLOAT) [335906]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Bias (DT_FLOAT) [128]
> seq2seq/decoder/attention_decoder/AttnW_0 (DT_FLOAT) [1,1,512,512]
> seq2seq/decoder/attention_decoder/AttnV_0 (DT_FLOAT) [512]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/decoder/attention_decoder/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder1/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> global_step (DT_INT32) []
> seq2seq/encoder1/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Bias (DT_FLOAT) [256]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Matrix (DT_FLOAT) [512,512]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Bias (DT_FLOAT) [512]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Matrix (DT_FLOAT) [640,128]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Matrix (DT_FLOAT) [768,256]
> seq2seq/embedding/embedding (DT_FLOAT) [335906,128]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]

Am I misunderstanding what needs to happen for the export? Any ideas as to why the model cannot be found?

Although I am still working on exporting the textsum model for TensorFlow Serving, my problem seems to have been that I assumed the files the model saves above were the same files created when exporting a model. Based on the responses I received on GitHub, that does not appear to be the case: I do in fact have to run the export of the model itself. At that point TF Serving should be able to see the model.
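For anyone hitting the same message: as I understand it, the model server scans --model_base_path for numbered version subdirectories produced by an export step, so bare checkpoint files are never picked up. Below is a minimal sketch of such an export, modeled on the TF Serving basic tutorial of that era (the session_bundle Exporter); the export path, signature names, and input/output tensors here are placeholders and would have to be replaced with the actual textsum decode tensors:

```python
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter


def export_for_serving(sess, saver, input_tensor, output_tensor,
                       export_path='tf_models/textsum/export', version=1):
    """Writes a versioned export (e.g. export/00000001/) that the
    model server can then discover under --model_base_path."""
    model_exporter = exporter.Exporter(saver)
    model_exporter.init(
        sess.graph.as_graph_def(),
        named_graph_signatures={
            'inputs': exporter.generic_signature({'article': input_tensor}),
            'outputs': exporter.generic_signature({'abstract': output_tensor})})
    model_exporter.export(export_path, tf.constant(version), sess)
```

The session would first restore the trained weights (e.g. saver.restore(sess, 'log_root/model.ckpt-230381')). After an export like this, the directory given to --model_base_path should contain a numeric version subdirectory, which is what the "No versions of servable ... found" check appears to be looking for.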