Convert LSTM Graph with TFLite Fails

Folks, I get an error every time I try to convert my LSTM graph to TFLite:

user@user:~/tensorflow/tensorflow$ bazel run --config=opt   //tensorflow/contrib/lite/toco:toco --   --input_file=/home/user/model/rnn/lstm_graph_mobilnet_v2_100_128.pb   --output_file=/home/user/model/rnn/lstm_graph_mobilnet_v2_100_128.tflite   --input_format=TENSORFLOW_GRAPHDEF   --output_format=TFLITE   --inference_type=FLOAT   --input_shape=1,10,2560   --input_array=input/x_input   --output_array=output/y_pred
WARNING: ignoring http_proxy in environment.
.......................
WARNING: /home/user/.cache/bazel/_bazel_user/9944cfee49d745019014aac0edc80315/external/protobuf_archive/WORKSPACE:1: Workspace name in /home/user/.cache/bazel/_bazel_user/9944cfee49d745019014aac0edc80315/external/protobuf_archive/WORKSPACE (@com_google_protobuf) does not match the name given in the repository's definition (@protobuf_archive); this will cause a build error in future versions
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (84 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
  bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 88.490s, Critical Path: 35.68s
INFO: Build completed successfully, 1 total action

INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/user/model/rnn/lstm_graph_mobilnet_v2_100_128.pb' '--output_file=/home/users/model/rnn/lstm_graph_mobilnet_v2_100_128.tflite' '--input_format=TENSORFLOW_GRAPHDEF' '--output_format=TFLITE' '--inference_type=FLOAT' '--input_shape=1,10,2560' '--input_array=input/x_input' '--output_array=output/y_pred'
2018-07-10 16:38:59.794308: F tensorflow/contrib/lite/toco/tooling_util.cc:822] Check failed: d >= 1 (0 vs. 1)

At inference time, the batch size is 1, with 10 inputs, each of length 2560.

Why is my dimension d reported as 0 when the check requires d >= 1 (0 vs. 1)?
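For context, the crash comes from a shape sanity check in toco's tooling_util.cc: the converter walks the dimensions of each array and aborts if any dimension is less than 1, which typically means a shape (often a dynamic time axis) that toco could not infer statically. A minimal Python sketch of that check (hypothetical function name; the real implementation is C++):

```python
def required_buffer_size(shape):
    """Hypothetical mirror of toco's shape check in tooling_util.cc:
    every dimension must be >= 1, otherwise the converter aborts with
    'Check failed: d >= 1 (0 vs. 1)'."""
    size = 1
    for d in shape:
        if d < 1:
            raise ValueError("Check failed: d >= 1 (%d vs. 1)" % d)
        size *= d
    return size

print(required_buffer_size([1, 10, 2560]))  # 25600, a fully static shape
# A dimension the converter could not resolve (left as 0) trips the
# same check:
# required_buffer_size([1, 0, 2560])  -> ValueError
```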

Is there an example project that converts an RNN to TFLite?

This worked for me: LSTM pb to tflite

I ran into a similar problem. I use the tflearn API on top of TensorFlow, and I hit some errors when converting the TensorFlow model to the tflite format.

I retrained the model after removing the dropout parameter from the lstm layer, and the model could then be converted to the tflite format.

Code before:

net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.embedding(net, input_dim=len(train_x[0]), output_dim=64)
net = tflearn.lstm(net, 16, dropout=0.4)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax', name='output_layer')
net = tflearn.regression(net)

Code after:

net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.embedding(net, input_dim=len(train_x[0]), output_dim=64)
net = tflearn.lstm(net, 16)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax', name='output_layer')
net = tflearn.regression(net)

But I don't think removing the dropout parameter is a good idea; it is just a hack. This hack only works if your model performs well enough without dropout.
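One way to see why stripping dropout does not change what the converted model computes: dropout is only active during training, and at inference time it reduces to the identity, so an inference-only graph loses nothing functional when the dropout op is removed before conversion. A minimal sketch (hypothetical helper, not the tflearn implementation):

```python
import random

def dropout(x, keep_prob, training):
    # At inference (training=False) dropout is the identity, which is
    # why removing the op before conversion does not change inference
    # results - only training-time regularization is lost.
    if not training:
        return x
    # During training, each unit is kept with probability keep_prob
    # and scaled by 1/keep_prob (inverted dropout).
    return [v / keep_prob if random.random() < keep_prob else 0.0
            for v in x]

print(dropout([1.0, 2.0, 3.0], 0.6, training=False))  # [1.0, 2.0, 3.0]
```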