How to get toco to work with shape=[None, 24, 24, 3]

I'm trying to get a graph to classify small images. Everything seems to work, but as soon as I try to convert it to tflite, it no longer works.

The problem seems to be in toco?

If I use input_nodes = x (where x is tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, 3], name="ipnode")), I get the following error:

Traceback (most recent call last):
  File "create_model.py", line 217, in <module>
    tflite_model = tf.contrib.lite.toco_convert(input_graph_def, [input_nodes], [output_nodes])
  File "/.../tensorflow/lib/python2.7/site-packages/tensorflow/contrib/lite/python/convert.py", line 243, in toco_convert
    *args, **kwargs)
  File "/.../tensorflow/lib/python2.7/site-packages/tensorflow/contrib/lite/python/convert.py", line 212, in build_toco_convert_protos
    input_array.shape.dims.extend(map(int, input_tensor.get_shape()))
TypeError: __int__ returned non-int (type NoneType)
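
A minimal sketch of my own (assuming TF 1.x) that reproduces the same error; the None batch dimension is what the int() call in build_toco_convert_protos chokes on:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 24, 24, 3])
# the batch Dimension holds None, so int() on it fails exactly like in
# build_toco_convert_protos (convert.py line 212 in the traceback above)
dims = map(int, x.get_shape())  # TypeError: __int__ returned non-int (type NoneType)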

If I use

input_nodes = tf.placeholder(tf.float32,shape=[1, IMAGE_SIZE, IMAGE_SIZE, 3], name=input_node_names)

it runs through, but then somehow it crashes on Android:

Caused by: java.lang.NullPointerException: Internal error: Cannot allocate memory for the interpreter: tensorflow/contrib/lite/kernels/conv.cc:191 input->dims->size != 4 (0 != 4)Node 0 failed to prepare.

        at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:75)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:54)
        at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:114)
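
Before shipping the .tflite file to Android, the input tensor that actually ended up in the converted model can also be inspected from Python; a hedged sketch, assuming tf.contrib.lite.Interpreter is available in this TF version:

import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(
    model_path="model_files/model_converted_button.tflite")
# inspect the input metadata before allocating; if the converter picked up
# a disconnected placeholder, the reported shape will not be [1, 24, 24, 3]
for detail in interpreter.get_input_details():
    print(detail['name'], detail['shape'])
# allocate_tensors() runs the same "prepare" step that fails on Android,
# so the crash above should be reproducible here too
interpreter.allocate_tensors()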

OS: Mac OS 10.13.6 (17G65), Python version: 2.7.15, TensorFlow version: 1.10.1, TensorFlow Lite version (Android): 1.10.0


So I'm wondering what I might be doing wrong. Searching around online turned up the following, but nothing that clearly tells me what the problem is (or I don't understand well enough what I need to do to fix it):

https://github.com/tensorflow/tensorflow/issues/18437 (it says to use fixed sizes; how do I change to a fixed size? see the sketch after these links)

https://github.com/tensorflow/tensorflow/issues/19982#issuecomment-397956218 https://github.com/tensorflow/tensorflow/issues/21336
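
Regarding the "fixed size" advice in the first issue, my reading (a guess, not something the issue spells out) is that it simply means giving the batch dimension a concrete value instead of None, either when the graph is built or at conversion time:

import tensorflow as tf

IMAGE_SIZE = 24

# option 1: build the export graph with a fully defined shape from the start
x = tf.placeholder(tf.float32, shape=[1, IMAGE_SIZE, IMAGE_SIZE, 3], name="ipnode")

# option 2: keep None for training and only pin the shape when converting,
# e.g. via the --input_shape flag of the toco command-line tool
# (used in the solution further down)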


The script that creates the graph and converts it:

from __future__ import division, print_function, absolute_import
# library for optimising inference
from tensorflow.python.tools import optimize_for_inference_lib
from tensorflow.python.tools import freeze_graph
import tensorflow as tf
# Higher level API tflearn
import tflearn
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
from tflearn.data_utils import image_preloader
import numpy as np

# Data loading and preprocessing
#helper functions to download the CIFAR 10 data and load them dynamically

# from tflearn.datasets import cifar10
# (X, Y), (X_test, Y_test) = cifar10.load_data()
# X, Y = shuffle(X, Y)
# Y = to_categorical(Y,10)
# Y_test = to_categorical(Y_test,10)

IMAGE_FOLDER = 'datasets/button_images'
TRAIN_DATA = 'datasets/training_data.txt'
TEST_DATA = 'datasets/test_data.txt'
VALIDATION_DATA = 'datasets/validation_data.txt'

IMAGE_SIZE=24

train_proportion=0.7
test_proportion=0.2
validation_proportion=0.1

import glob
import os.path
import random
import math

# classes = filter(lambda f: not f.startswith('.'), os.listdir(IMAGE_FOLDER))
# classes.sort(key=str.lower)
classes = ['close', 'pause', 'play', 'stop', 'other']

nrOfClasses = len(classes)
print('Classes: ' + str(classes))

filesDepth2 = glob.glob(IMAGE_FOLDER + '/*/*')
images = filter(lambda f: not os.path.isdir(f), filesDepth2)
random.shuffle(images)

dir_path = os.path.dirname(os.path.realpath(__file__))
def createDataFile(images, skipPercentage, percentage, dataFile):
    total = len(images)
    fr = open(dataFile, 'w')
    start = int(math.ceil(skipPercentage * total))
    end = int(math.ceil((skipPercentage + percentage) * total))
    images_subset = images[start:end]
    for filename in images_subset:
        startClass = len(IMAGE_FOLDER) + 1
        endClass = filename.index('/', startClass)
        className = filename[startClass:endClass]
        fullPath = dir_path + '/' + filename
        classNameInt = classes.index(className) if className in classes else -1
        if classNameInt != -1:
            fr.write(fullPath + ' ' + str(classNameInt) + '\n')
    fr.close()

createDataFile(images, 0.0, train_proportion, TRAIN_DATA)
createDataFile(images, train_proportion, test_proportion, TEST_DATA)
createDataFile(images, train_proportion + test_proportion, validation_proportion, VALIDATION_DATA)

# TODO maybe use grayscale=True
X_train, Y_train = image_preloader(TRAIN_DATA, image_shape=(IMAGE_SIZE,IMAGE_SIZE),mode='file', categorical_labels=True,normalize=True)
X_test, Y_test = image_preloader(TEST_DATA, image_shape=(IMAGE_SIZE,IMAGE_SIZE),mode='file', categorical_labels=True,normalize=True)
X_val, Y_val = image_preloader(VALIDATION_DATA, image_shape=(IMAGE_SIZE,IMAGE_SIZE),mode='file', categorical_labels=True,normalize=True)


# input image
x = tf.placeholder(tf.float32,shape=[None, IMAGE_SIZE, IMAGE_SIZE, 3] , name="ipnode")
# input class
y_ = tf.placeholder(tf.float32,shape=[None, nrOfClasses] , name='input_class')


# AlexNet architecture
input_layer = x
network = conv_2d(input_layer, IMAGE_SIZE, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = fully_connected(network, nrOfClasses, activation='linear')
y_predicted = tf.nn.softmax(network , name="opnode")

#loss function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_predicted+np.exp(-nrOfClasses)), reduction_indices=[1]))
#optimiser -
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
#calculating accuracy of our model
correct_prediction = tf.equal(tf.argmax(y_predicted,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))


#TensorFlow session
sess = tf.Session()
#initialising variables
init = tf.global_variables_initializer()
sess.run(init)
#tensorboard for better visualisation
writer =tf.summary.FileWriter('tensorboard/', sess.graph)
epoch=30 # run for more iterations according to your hardware's power
#change batch size according to your hardware's power. For GPUs use batch sizes in powers of 2 like 2,4,8,16...
batch_size=32
no_itr_per_epoch=len(X_train)//batch_size
n_test=len(X_test) #number of test samples


# Commencing training process
for iteration in range(epoch):
    print("Iteration no: {} ".format(iteration))

    previous_batch=0
    # Do our mini batches:
    for i in range(no_itr_per_epoch):
        current_batch=previous_batch+batch_size
        x_input=X_train[previous_batch:current_batch]
        x_images=np.reshape(x_input,[batch_size,IMAGE_SIZE,IMAGE_SIZE,3])

        y_input=Y_train[previous_batch:current_batch]
        y_label=np.reshape(y_input,[batch_size,nrOfClasses])
        previous_batch=previous_batch+batch_size

        _,loss=sess.run([train_step, cross_entropy], feed_dict={x: x_images,y_: y_label})
        #if i % 100==0 :
            #print ("Training loss : {}" .format(loss))



    x_test_images=np.reshape(X_test[0:n_test],[n_test,IMAGE_SIZE,IMAGE_SIZE,3])
    y_test_labels=np.reshape(Y_test[0:n_test],[n_test,nrOfClasses])
    Accuracy_test=sess.run(accuracy,
                           feed_dict={
                        x: x_test_images ,
                        y_: y_test_labels
                      })
    # Accuracy of the test set
    Accuracy_test=round(Accuracy_test*100,2)
    print("Accuracy ::  Test_set {} %  " .format(Accuracy_test))



#####################
#####################

# saving the graph
saver = tf.train.Saver()
model_directory='model_files/'
tf.train.write_graph(sess.graph_def, model_directory, 'savegraph.pbtxt')
saver.save(sess, 'model_files/model.ckpt')

#################
## Freeze the graph
#################
MODEL_NAME = 'button'
input_graph_path = 'model_files/savegraph.pbtxt'
checkpoint_path = 'model_files/model.ckpt'
input_saver_def_path = ""
input_binary = False
input_node_names = "ipnode"
output_node_names = "opnode"

input_nodes = tf.placeholder(tf.float32,shape=[1, IMAGE_SIZE, IMAGE_SIZE, 3], name=input_node_names)
output_nodes = y_predicted

restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_frozen_graph_name = 'model_files/model_frozen_' + MODEL_NAME + '.pb'
output_optimized_graph_name = 'model_files/model_optimized_' + MODEL_NAME + '.pb'
output_converted_graph_name = 'model_files/model_converted_' + MODEL_NAME + '.tflite'
clear_devices = True

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, checkpoint_path, output_node_names,
                          restore_op_name, filename_tensor_name,
                          output_frozen_graph_name, clear_devices, "")

#################
## optimize graph
#################

input_graph_def = tf.GraphDef()
with tf.gfile.Open(output_frozen_graph_name, "r") as f:
    data = f.read()
    input_graph_def.ParseFromString(data)

output_graph_def = optimize_for_inference_lib.optimize_for_inference(
    input_graph_def,
    [input_node_names], # an array of the input node(s)
    [output_node_names], # an array of output nodes
    tf.float32.as_datatype_enum)

# save optimized graph
f = tf.gfile.FastGFile(output_optimized_graph_name, "w")
f.write(output_graph_def.SerializeToString())

#################
## convert graph
#################

input_graph_def = tf.GraphDef()
with tf.gfile.Open(output_optimized_graph_name, "r") as f:
    data = f.read()
    input_graph_def.ParseFromString(data)

tflite_model = tf.contrib.lite.toco_convert(input_graph_def, [input_nodes], [output_nodes])
open(output_converted_graph_name, "wb").write(tflite_model)

sess.close()

Update 1 (issues with TocoConverter from_frozen_graph / from_session)

Using the following:

converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
    output_frozen_graph_name, [input_node_names], [output_node_names])
tflite_model = converter.convert()
open(output_converted_graph_name, "wb").write(tflite_model)

I get the error:

Traceback (most recent call last):
  File "create_model.py", line 214, in <module>
    output_frozen_graph_name, [input_node_names], [output_node_names])
  File "/Users/.../tensorflow/lib/python2.7/site-packages/tensorflow/contrib/lite/python/lite.py", line 229, in from_frozen_graph
    raise ValueError("Please freeze the graph using freeze_graph.py.")
ValueError: Please freeze the graph using freeze_graph.py.

While checking why it is not considered a frozen graph, there seems to be an op of type VariableV2 with the name is_training. But I can't see where that could be coming from (see graph image here).
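
A quick hedged sketch for locating that op by dumping every variable node left in the frozen .pb (file name taken from the script above); as a guess, tflearn itself may be the source, since its layers can register a global is_training flag:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.Open('model_files/model_frozen_button.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# freeze_graph should have turned every variable into a Const,
# so anything printed here is what trips up from_frozen_graph
for node in graph_def.node:
    if node.op == 'VariableV2':
        print(node.name, node.op)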


If I use:

with tf.Session(graph=graph) as sess:
    converter = tf.contrib.lite.TocoConverter.from_session(sess, [x], [y_predicted])
    tflite_model = converter.convert()
    open(output_converted_graph_name, "wb").write(tflite_model)

the operation completes and I get a tflite file, and I can also use that file on Android without it crashing.

But it returns completely different (wrong) results. When testing against the frozen or optimized graph (pb file), the results are correct. Link to my test script (label_images.py adjusted for tflite)
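
The mismatch can also be checked from Python by pushing the same input through both the frozen graph and the tflite file; a hedged sketch of that comparison (node and file names from the script above, random data standing in for a real normalized test image):

import numpy as np
import tensorflow as tf

img = np.random.rand(1, 24, 24, 3).astype(np.float32)  # stand-in for a real image

# run the frozen graph
graph_def = tf.GraphDef()
with tf.gfile.Open('model_files/model_frozen_button.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        pb_out = sess.run('opnode:0', feed_dict={'ipnode:0': img})

# run the tflite model
interpreter = tf.contrib.lite.Interpreter(
    model_path='model_files/model_converted_button.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], img)
interpreter.invoke()
lite_out = interpreter.get_tensor(out['index'])

print(pb_out, lite_out)  # these should agree if the conversion is faithful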

All current scripts here: gist on github


Solution

Not sure why, but when I switched to using the command-line tool and used only the frozen graph (not the optimized graph), it works fine.

#################
## convert graph
#################

from subprocess import call
call([
    "toco",
    "--graph_def_file=" + output_frozen_graph_name,
    "--input_format=TENSORFLOW_GRAPHDEF",
    "--output_format=TFLITE",
    "--output_file=" + output_converted_graph_name,
    "--input_shape=1," + str(IMAGE_SIZE) + "," + str(IMAGE_SIZE) + ",3",
    "--input_type=FLOAT",
    "--input_array=" + input_node_names,
    "--output_array=" + output_node_names,
    "--inference_type=FLOAT",
    "--inference_input_type=FLOAT"
])

If I use the frozen and optimized graph, it says: ValueError: Unable to parse input file 'model_files/model_optimized_button.pb'.

All working scripts here: gist on github

TOCO does not accept None values in the input tensor shape.

Instead of using toco_convert, the recommended approach is TocoConverter.from_frozen_graph(). It automatically assigns a batch size of 1 to any model whose batch size is None. The last lines of the provided code should look like this:

converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
  output_optimized_graph_name, [input_node_names], [output_node_names])
tflite_model = converter.convert()
open(output_converted_graph_name, "wb").write(tflite_model)