Converting .tflite to .pb

How do I convert a .tflite (serialized flat buffer) to a .pb (frozen model)? The documentation only covers the conversion in one direction (from .pb to .tflite).

The use case is: I have a trained model that has been converted to .tflite, but unfortunately I have no details about the model and I would like to inspect the graph. How can I do that?

I don't think there is a way to restore a .tflite back to a .pb, since some information is lost in the conversion. I did find an indirect way to see what is inside a .tflite model: read back each of its tensors.

import tensorflow as tf

# tf.contrib.lite.Interpreter in TF 1.x; use tf.lite.Interpreter in newer releases
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# get_tensor_details() returns one dict per tensor, so there is no need to
# guess the number of tensors by trial and error
for i, detail in enumerate(interpreter.get_tensor_details()):
    print(i, detail['name'], detail['shape'])

You will see something like the listing below. Since only a limited set of operations is currently supported, reverse-engineering the network architecture is not too difficult. I have also put some tutorials on my Github. (A sketch for reading the weight values back as well follows the listing.)
0 MobilenetV1/Logits/AvgPool_1a/AvgPool [   1    1    1 1024]
1 MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd [   1    1    1 1001]
2 MobilenetV1/Logits/Conv2d_1c_1x1/Conv2D_bias [1001]
3 MobilenetV1/Logits/Conv2d_1c_1x1/weights_quant/FakeQuantWithMinMaxVars [1001    1    1 1024]
4 MobilenetV1/Logits/SpatialSqueeze [   1 1001]
5 MobilenetV1/Logits/SpatialSqueeze_shape [2]
6 MobilenetV1/MobilenetV1/Conv2d_0/Conv2D_Fold_bias [32]
7 MobilenetV1/MobilenetV1/Conv2d_0/Relu6 [  1 112 112  32]
8 MobilenetV1/MobilenetV1/Conv2d_0/weights_quant/FakeQuantWithMinMaxVars [32  3  3  3]
9 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6 [  1  14  14 512]
10 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias [512]
11 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/weights_quant/FakeQuantWithMinMaxVars [  1   3   3 512]
12 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D_Fold_bias [512]
13 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6 [  1  14  14 512]
14 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/weights_quant/FakeQuantWithMinMaxVars [512   1   1 512]
15 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6 [  1  14  14 512]
16 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise_Fold_bias [512]
17 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/weights_quant/FakeQuantWithMinMaxVars [  1   3   3 512]
18 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D_Fold_bias [512]
19 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6 [  1  14  14 512]
20 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/weights_quant/FakeQuantWithMinMaxVars [512   1   1 512]
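
Beyond names and shapes, the actual weight values of constant tensors can be read back as NumPy arrays with get_tensor(). A minimal sketch, assuming a quantized MobileNet file like the one dumped above (the file name is hypothetical; index 8 is the first convolution's weight tensor in that dump):

import tensorflow as tf

# Hypothetical model file; adjust the path and tensor index for your model.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_quant.tflite")
interpreter.allocate_tensors()

# Read the weights of Conv2d_0 (tensor index 8 in the dump above) as a NumPy array.
weights = interpreter.get_tensor(8)
print(weights.shape, weights.dtype)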

I found the answer here.

We can use the Interpreter to analyze the model; the code looks like this:

import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
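
The same details dictionaries also expose the dtype and the quantization parameters (scale, zero point), which is useful when the model is quantized. A small sketch, reusing the model file name from the snippet above:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Each details dict also carries dtype and (scale, zero_point) quantization info.
for d in interpreter.get_input_details() + interpreter.get_output_details():
    print(d['name'], d['shape'], d['dtype'], d['quantization'])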

Netron is the best analysis/visualization tool I have found; it understands many formats, including .tflite.
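
If you prefer to stay in Python, Netron also ships as a pip package. A minimal sketch, assuming `pip install netron` and the model file name from the earlier snippet:

import netron

# Starts a local web server and opens the model graph in the browser.
netron.start("converted_model.tflite")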

I have done this with TOCO, using tf 1.12:

tensorflow_1.12/tensorflow/bazel-bin/tensorflow/contrib/lite/toco/toco \
  --output_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_format=TFLITE \
  --input_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_array="" \
  --output_array="" \
  --input_shape=1,450,450,3 \
  --dump_graphviz=./

(You can remove the dump_graphviz option.)
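
To sanity-check the result, the recovered GraphDef can be loaded and its ops listed. A minimal sketch in TF 1.x style, assuming the output file name from the command above:

import tensorflow as tf

# Load the GraphDef written by toco and list its ops (TF 1.x API).
graph_def = tf.GraphDef()
with tf.gfile.GFile("coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)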