ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'
I am trying to convert my TensorFlow (2.0) model to TensorFlow Lite format. My model has two input layers, as follows:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import Lambda, Input, add, Dot, multiply, dot
from tensorflow.keras.backend import dot, transpose, expand_dims
from tensorflow.keras.models import Model
r1 = Input(shape=[None, 1, 512], name='flatbuffer_data') # I want to take a variable number of
# 512-float embeddings from my flatbuffer: if the flatbuffer has 4 embeddings, then it would
# be inferred as shape=[4, 1, 512]; if it has 100 embeddings, then it is [100, 1, 512].
r2 = Input(shape=[1, 512], name='query_embedding')
#Example code
minus_r1 = Lambda(lambda x: -x, name='invert_value')(r1)
subtracted = add([r2, minus_r1], name='embeddings_diff')
out1 = tf.argsort(subtracted)
out2 = tf.sort(subtracted)
model = Model([r1, r2], [out1, out2])
Then I do some tensor operations on the layers and save the model as follows (there is no training, hence no trainable parameters, just some linear algebra operations that I want to port to Android):
model.save('combined_model.h5')
I get my TensorFlow .h5 model, but when I try to convert it to TensorFlow Lite I get the following error:
import tensorflow as tf
model = tf.keras.models.load_model('combined_model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
#Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/aspiring1/.virtualenvs/faiss/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 446, in convert
"invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'.
I know that we had dynamic and static shape inference in TensorFlow 1.x using placeholders. Is there an analogue in TensorFlow 2.x? I would also appreciate a solution in TensorFlow 1.x.
Some answers and blog posts I've read that might help:
Understanding tensorflow shapes
Using the first link above I also tried to create a TensorFlow 1.x graph and save it using the SavedModel format, but I didn't get the desired results.
You can find my code here: tensorflow 1.x gist code
Full code: https://drive.google.com/file/d/1MN4-FX_-hz3y-UAuf7OTj_XYuVTlsSTP/view?usp=sharing
Why doesn't this work?
I know that we had dynamic and static shape inference in tensorflow 1.x using tensorflow placeholders. Is there an analogue here in tensorflow 2.x?
It all still works the same way. The problem, I think, is that tf.lite doesn't handle dynamic shapes. It seems to pre-allocate all of its tensors once and re-use them (I could be wrong).
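For the first part of that question, here is a minimal sketch of how the same unknown-shape declaration looks without placeholders in TF 2.x. This is only an illustration using the shapes from the question; it does not change what the converter will accept:

import tensorflow as tf

# In TF 2.x the shape you would have given a tf.compat.v1.placeholder goes into
# a tf.TensorSpec on a tf.function's input_signature. The None still only
# declares the unknown dimension; it does not make tf.lite handle it.
@tf.function(input_signature=[
    tf.TensorSpec([None, 1, 512], tf.float32, name='flatbuffer_data'),
    tf.TensorSpec([1, 512], tf.float32, name='query_embedding'),
])
def embeddings_diff(flatbuffer_data, query_embedding):
    subtracted = query_embedding - flatbuffer_data
    return tf.argsort(subtracted), tf.sort(subtracted)

# A converter can also be built from the traced function instead of the Keras model:
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [embeddings_diff.get_concrete_function()])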
So, first, the extra dimension: [None, None, 1, 512]. A keras.Input always includes a batch dimension, which tf.lite can handle being unknown (this restriction seems to be relaxed in tf-nightly). But lite seems to prefer a batch dimension of 1. If you switch to:
r1 = Input(shape=[4], batch_size=None, name='flatbuffer_data')
r2 = Input(shape=[4], batch_size=1, name='query_embedding')
The conversion goes through, but it still fails when you try to execute the tflite model, because the model wants all the unknown dimensions to be 1:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()
i.get_input_details()
i.set_tensor(0, tf.constant([[0.,0,0,0],[1,1,1,1],[2,2,2,2]]))
i.set_tensor(1, tf.constant([[0.,0,0,0]]))
ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 1 for dimension 0 of input 0.
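A side note I haven't verified against this model: the Python interpreter also has resize_tensor_input, which can sometimes be used to grow a fixed input before allocation:

# Untested sketch: resize the first input (assumed here to be 'flatbuffer_data';
# check get_input_details() to confirm the order) before allocating tensors.
# Whether this succeeds depends on the ops in the converted graph and the TF version.
i = tf.lite.Interpreter(model_content=tflite_model)
inputs = i.get_input_details()
i.resize_tensor_input(inputs[0]['index'], [3, 4])
i.allocate_tensors()
i.set_tensor(inputs[0]['index'], tf.constant([[0., 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]))
i.set_tensor(inputs[1]['index'], tf.constant([[0., 0, 0, 0]]))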
With tf-nightly you can convert the model as written, but that also fails to run, because the unknown dimension is assumed to be 1:
r1 = Input(shape=[None, 4], name='flatbuffer_data')
r2 = Input(shape=[1, 4], name='query_embedding')
...
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()
print(i.get_input_details())
i.set_tensor(0, tf.constant([[[0.,0,0,0],[1,1,1,1],[2,2,2,2]]]))
i.set_tensor(1, tf.constant([[[0.,0,0,0]]]))
ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 1 for dimension 1 of input 0.
The solution? Not quite. Almost.
I think you need to give that array a size bigger than you expect it to need, and pass an int telling your model how many elements to slice out:
n = Input(shape=(), dtype=tf.int32, name='num_inputs')
r1 = Input(shape=[1000, 4], name='flatbuffer_data')
r2 = Input(shape=[4], name='query_embedding')
#Example code
x = tf.reshape(r1, [1000,4])
x = tf.gather(x, tf.range(tf.squeeze(n)))
minus_r1 = Lambda(lambda x: -x, name='invert_value')(x)
subtracted = add([r2, minus_r1], name='embeddings_diff')
out1 = tf.argsort(subtracted, name='argsort')
out2 = tf.sort(subtracted, name="sorted")
model = Model([r1, r2, n], [out1, out2])
Now this works:
import numpy as np

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()
for d in i.get_input_details():
    print(d)
a = np.zeros([1000, 4], dtype=np.float32)
a[:3] = [
    [0., 0, 0, 0],
    [1, 1, 1, 1],
    [2, 2, 2, 2]]
i.set_tensor(0, tf.constant(a[np.newaxis,...], dtype=tf.float32))
i.set_tensor(1, tf.constant([[0.,0,0,0]]))
i.set_tensor(2, tf.constant([3], dtype=tf.int32))
i.invoke()
print()
for d in i.get_output_details():
    print(i.get_tensor(d['index']))
[[ 0. 0. 0. 0.]
[-1. -1. -1. -1.]
[-2. -2. -2. -2.]]
[[0 1 2 3]
[0 1 2 3]
[0 1 2 3]]
The OP tried this in the Java interpreter and got:
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
So we're not quite done yet.
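If I had to guess at a next step, it would be something like the sketch below, under the assumption that the dynamic-sized tensor the delegate complains about comes from the tf.gather whose size depends on n: keep every tensor at the full padded size and mask the padded rows instead of gathering, so the graph stays static and the caller slices out the first n rows of the output itself.

import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, add
from tensorflow.keras.models import Model

n = Input(shape=(), dtype=tf.int32, name='num_inputs')
r1 = Input(shape=[1000, 4], name='flatbuffer_data')
r2 = Input(shape=[4], name='query_embedding')

x = tf.reshape(r1, [1000, 4])
# Zero out the padded rows instead of gathering them away, so every tensor in
# the graph keeps a static [1000, 4] shape and nothing is dynamically sized.
count = tf.reshape(n, [])
valid = tf.cast(tf.range(1000)[:, None] < count, tf.float32)
minus_r1 = Lambda(lambda t: -t, name='invert_value')(x * valid)
subtracted = add([r2, minus_r1], name='embeddings_diff')
out1 = tf.argsort(subtracted, name='argsort')
out2 = tf.sort(subtracted, name='sorted')
model = Model([r1, r2, n], [out1, out2])

The outputs then stay [1000, 4], so rows from index n onward are filler the caller has to drop; whether the delegate is satisfied by this I haven't checked.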