XLA in TF2 IteratorGetNext: unsupported op error
I am trying to run a .pb TensorFlow 2 model with XLA.
However, I receive the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
Stacktrace:
Node: __inference_predict_function_3130, function:
Node: IteratorGetNext, function: __inference_predict_function_3130
[Op:__inference_predict_function_3130]
The error occurs regardless of the model; it also happens when I apply a model directly after training it. I suppose I am doing something fundamentally wrong, or TF2 does not properly support XLA. The same code runs fine without XLA.
Does anyone know how to solve this?
I am working on Ubuntu 18.04 with Python 3.8 and TF 2.4.1 in an Anaconda environment.
My code:
import tensorflow as tf
import numpy as np
import h5py
import sys

model_path_compile = 'model_Input/pbFolder'
data_inference_mat = 'model_Input/data_inference/XXXX.MAT'

# Load the input image from the .MAT file and convert it to float32.
with h5py.File(data_inference_mat, 'r') as dataset:
    try:
        image_set = dataset['polar'][()].astype(np.uint16).T
        image = np.cast[np.float32](image_set)
        image /= 16384
    except KeyError:
        print('-----------------------ERROR--------------')

# Add a batch dimension and run the model on the XLA CPU device.
x = np.expand_dims(image, axis=0)
model_compile = tf.keras.models.load_model(model_path_compile)
with tf.device("device:XLA_CPU:0"):
    y_pred = model_compile.predict(x)
Full error:
2021-07-19 16:09:02.521211: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-07-19 16:09:02.521416: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-19 16:09:02.522638: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-07-19 16:09:03.357078: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-07-19 16:09:03.378059: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2400000000 Hz
Traceback (most recent call last):
File "/media/ric/DATA/Software_Workspaces/MasterThesisWS/AI_HW_deploy/XLA/Tf2ToXLA_v2/TF2_RunModel.py", line 24, in <module>
y_pred = model_compile.predict(x)
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1629, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 894, in _call
return self._concrete_stateful_fn._call_flat(
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
outputs = execute.execute(
File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
Stacktrace:
Node: __inference_predict_function_3130, function:
Node: IteratorGetNext, function: __inference_predict_function_3130
[Op:__inference_predict_function_3130]
After several days of work and trying various approaches, I finally found a workaround that suits my purpose.
Since I only want the LLVM IR of a single model execution, I can use an alternative TensorFlow function, model.predict_step. It runs only once and therefore does not use the IteratorGetNext op, which avoids the original error.
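A minimal sketch of that workaround, reusing model_path_compile and x from the question. The TF_XLA_FLAGS setting follows the "tf_xla_enable_xla_devices not set" hint in the log above, and the XLA_FLAGS dump directory is my assumption for capturing the compiler output; neither flag is part of the original answer:

import os

# Assumption: in TF 2.4 the XLA_CPU device is only created when this flag is
# set (the log above otherwise prints "tf_xla_enable_xla_devices not set").
# Both variables must be set before TensorFlow is imported.
os.environ['TF_XLA_FLAGS'] = '--tf_xla_enable_xla_devices'
# Assumption: dump XLA compiler artifacts (HLO and, for the CPU backend,
# LLVM IR .ll files) into this directory.
os.environ['XLA_FLAGS'] = '--xla_dump_to=/tmp/xla_dump'

import tensorflow as tf

model = tf.keras.models.load_model(model_path_compile)
with tf.device("device:XLA_CPU:0"):
    # predict_step performs a single forward pass on one batch and does not
    # build the tf.data pipeline that Model.predict uses, so no
    # IteratorGetNext op ends up in the executed function.
    y_pred = model.predict_step(tf.convert_to_tensor(x))

Note that predict_step takes one batch of data directly, so the batch dimension added by np.expand_dims in the question is still required.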