Add Tensorflow pre-processing to existing Keras model (for use in Tensorflow Serving)
I want to include my custom pre-processing logic in my exported Keras model, for use in Tensorflow Serving.
My pre-processing performs string tokenization and uses an external dictionary to convert each token to an index for input to the Embedding layer:
from keras.preprocessing import sequence
token_to_idx_dict = ... #read from file
# Custom Pythonic pre-processing steps on input_data
tokens = [tokenize(s) for s in input_data]
token_idxs = [[token_to_idx_dict[t] for t in ts] for ts in tokens]
tokens_padded = sequence.pad_sequences(token_idxs, maxlen=maxlen)
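(The tokenize helper and the dictionary load are elided above; here is a minimal sketch, assuming naive whitespace tokenization and a tab-delimited vocab file, neither of which is part of the original code:)
def tokenize(s):
    # assumption: simple whitespace tokenization
    return s.split()

token_to_idx_dict = {}
with open('vocab.txt') as f:  # assumed format: one "token<TAB>index" pair per line
    for line in f:
        token, idx = line.rstrip('\n').split('\t')
        token_to_idx_dict[token] = int(idx)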
Model architecture and training:
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(128, activation='sigmoid'))
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
model.fit(x_train, y_train)
Since the model will be served with Tensorflow Serving, I want to incorporate all of the pre-processing logic into the model itself (encoded in the exported model file).
Q: How can I do this using only the Keras library?
I found this guide, which explains how to combine Keras and Tensorflow, but I'm still not sure how to export everything as one model.
I know Tensorflow has built-in string splitting, file I/O, and dictionary lookup operations.
Pre-processing logic using Tensorflow operations:
# Get input text
input_string_tensor = tf.placeholder(tf.string, shape=[1])
# Split input text by whitespace
splitted_string = tf.string_split(input_string_tensor, " ")
# Read index lookup dictionary
token_to_idx_dict = tf.contrib.lookup.HashTable(tf.contrib.lookup.TextFileInitializer("vocab.txt", tf.string, 0, tf.int64, 1, delimiter=","), -1)
# Convert tokens to indexes
token_idxs = token_to_idx_dict.lookup(splitted_string)
# Pad zeros to fixed length
token_idxs_padded = tf.pad(token_idxs, ...)
Q: How can I use these Tensorflow pre-defined pre-processing operations together with my Keras layers, both to train the model and to export it as a "black box" for use in Tensorflow Serving?
I figured it out, so I'm going to answer my own question here.
Here's the gist:
First, (in a separate code file) I trained the model with Keras using only my own pre-processing functions, then exported the Keras model's weights file and my token-to-index dictionary.
Then, I copied just the Keras model architecture, set its input to be the pre-processed tensor output, loaded the weights file from the previously trained Keras model, and sandwiched it between the Tensorflow pre-processing operations and the Tensorflow exporter.
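Step 1 is only described above, not shown; a minimal sketch of the training-side export (the file names here are placeholders of my choosing, not the originals) might look like:
# After training with the Pythonic pre-processing from the question:
model.save_weights('model_weights.h5')  # weights only; the architecture is re-declared in step 2
# Persist the token-to-index dictionary in the two-column format that
# tf.contrib.lookup.TextFileInitializer expects (one "token<TAB>index" pair per line)
with open('token_to_idx.tsv', 'w') as f:
    for token, idx in token_to_idx_dict.items():
        f.write('%s\t%d\n' % (token, idx))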
Final product:
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from tensorflow.contrib.session_bundle import exporter
from tensorflow.contrib.lookup import HashTable, TextFileInitializer
# Initialize Keras with Tensorflow session
sess = tf.Session()
K.set_session(sess)
# Token to index lookup dictionary
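# (assumed file format, matching the delimiter below: one "token<TAB>index" pair per line)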
token_to_idx_path = '...'
token_to_idx_dict = HashTable(TextFileInitializer(token_to_idx_path, tf.string, 0, tf.int64, 1, delimiter='\t'), 0)
maxlen = ...
# Pre-processing sub-graph using Tensorflow operations
input = tf.placeholder(tf.string, name='input')
sparse_tokenized_input = tf.string_split(input)
tokenized_input = tf.sparse_tensor_to_dense(sparse_tokenized_input, default_value='')
token_idxs = token_to_idx_dict.lookup(tokenized_input)
token_idxs_padded = tf.pad(token_idxs, [[0,0],[0,maxlen]])
token_idxs_embedding = tf.slice(token_idxs_padded, [0,0], [-1,maxlen])
# Initialize Keras model
model = Sequential()
e = Embedding(max_features, 128, input_length=maxlen)
e.set_input(token_idxs_embedding)
model.add(e)
model.add(LSTM(128, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
# Load weights from previously trained Keras model
weights_path = '...'
model.load_weights(weights_path)
K.set_learning_phase(0)
# Export model in Tensorflow format
# (Official tutorial: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_basic.md)
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_dir = '...'
model_version = 1
model_exporter.export(model_dir, tf.constant(model_version), sess)
# Input example
with sess.as_default():
    token_to_idx_dict.init.run()
    sess.run(model.output, feed_dict={input: ["this is a raw input example"]})
The accepted answer is super helpful, however it uses an outdated Keras API, as @Qululu mentioned, and an outdated TF Serving API (Exporter), and it does not show how to export the model so that its input is the original tf placeholder (as opposed to Keras model.input, which comes after the pre-processing). Here is a version that works as of TF v1.4 and Keras 2.1.2:
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import InputLayer
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants

sess = tf.Session()
K.set_session(sess)
K._LEARNING_PHASE = tf.constant(0)
K.set_learning_phase(0)
max_features = 5000
max_lens = 500
dict_table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.TextFileInitializer(
        "vocab.txt", tf.string, 0,
        tf.int64, tf.contrib.lookup.TextFileIndex.LINE_NUMBER,
        vocab_size=max_features, delimiter=" "),
    0)
x_input = tf.placeholder(tf.string, name='x_input', shape=(None,))
sparse_tokenized_input = tf.string_split(x_input)
tokenized_input = tf.sparse_tensor_to_dense(sparse_tokenized_input, default_value='')
token_idxs = dict_table.lookup(tokenized_input)
token_idxs_padded = tf.pad(token_idxs, [[0,0],[0, max_lens]])
token_idxs_embedding = tf.slice(token_idxs_padded, [0,0], [-1, max_lens])
model = Sequential()
model.add(InputLayer(input_tensor=token_idxs_embedding, input_shape=(None, max_lens)))
# ...REST OF MODEL...
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
sess.run(init_op)  # initialize variables *before* loading weights, so the trained values are not overwritten
model.load_weights("model.h5")
x_info = tf.saved_model.utils.build_tensor_info(x_input)
y_info = tf.saved_model.utils.build_tensor_info(model.output)
prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(inputs={"text": x_info}, outputs={"prediction":y_info}, method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
builder = saved_model_builder.SavedModelBuilder("/path/to/model")
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature,
    },
    legacy_init_op=legacy_init_op)
builder.save()
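For completeness, a raw string can then be sent straight to the served model. A minimal gRPC client sketch against the "text" input defined above; the host, port, and model name are assumptions:
from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'  # assumption: the name the model is served under
request.inputs['text'].CopyFrom(
    tf.contrib.util.make_tensor_proto(['this is a raw input example'], shape=[1]))
result = stub.Predict(request, 10.0)  # 10-second timeout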
UPDATE: Doing pre-processing for inference with Tensorflow is a CPU op, and is not carried out efficiently if the model is deployed on a GPU server. The GPU stalls really badly and throughput is very low. Therefore, we ditched this in favor of doing the pre-processing efficiently in the client process.