A question about boolean lists in a custom layer in tf.keras
I am trying to build a custom output layer for my model so that the angle outputs are limited to the range [-90, 90]. The code is as follows:
class OutputLayer(Layer):
    def __init__(self):
        super(OutputLayer, self).__init__()

    def call(self, inputs, **kwargs):
        if_larger_than_90 = (inputs > 90)
        if_smaller_than_minus_90 = (inputs < -90)
        outputs = inputs - 180.0 * if_larger_than_90 + 180.0 * if_smaller_than_minus_90
        return outputs
When I try to run it, I get the following error:
Traceback (most recent call last):
File "E:/Studium/Thesis/Transfer Learning.py", line 78, in <module>
main()
File "E:/Studium/Thesis/Transfer Learning.py", line 73, in main
metrics = a_new_model.evaluate(data_gen)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 833, in evaluate
use_multiprocessing=use_multiprocessing)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 456, in evaluate
sample_weight=sample_weight, steps=steps, callbacks=callbacks, **kwargs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 396, in _model_iteration
distribution_strategy=strategy)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 610, in _process_inputs
training_v2_utils._prepare_model_with_inputs(model, adapter.get_dataset())
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 185, in _prepare_model_with_inputs
inputs, target, _ = model._build_model_with_inputs(dataset, targets=None)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2622, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2709, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 842, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\sequential.py", line 270, in call
outputs = layer(inputs, **kwargs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 842, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:
E:/Studium/Thesis/Transfer Learning.py:19 call *
outputs = inputs - 180.0 * if_larger_than_90 + 180.0 * if_smaller_than_minus_90
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\ops\math_ops.py:924 r_binary_op_wrapper
x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\ops.py:1184 convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\ops.py:1242 convert_to_tensor_v2
as_ref=False)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\ops.py:1296 internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py:52 _default_conversion_function
return constant_op.constant(value, dtype, name=name)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\constant_op.py:227 constant
allow_broadcast=True)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\constant_op.py:265 _constant_impl
allow_broadcast=allow_broadcast))
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\tensor_util.py:449 make_tensor_proto
_AssertCompatible(values, dtype)
C:\ProgramData\Miniconda3\envs\TF_2G\lib\site-packages\tensorflow_core\python\framework\tensor_util.py:331 _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected bool, got 180.0 of type 'float' instead.
Process finished with exit code 1
So is arithmetic like int * bool not allowed in TensorFlow? If so, how can I achieve the same result another way?
You can cast the boolean values to floats:
if_larger_than_90 = tf.keras.backend.cast(inputs > 90, "float32")
That said, constraining the network this way seems a bit odd to me. It would be better to construct a loss that keeps the outputs in range, or to clip them outside the network. But if it works for you, fine.
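Putting the cast into the original layer, a corrected version might look like this (a minimal sketch; `tf.cast` is equivalent to `tf.keras.backend.cast` here, and the comparison thresholds are taken from the question):

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer


class OutputLayer(Layer):
    """Wraps angles that fall outside [-90, 90] back into range
    by subtracting or adding 180."""

    def call(self, inputs, **kwargs):
        # Comparison ops produce bool tensors; cast them to the input
        # dtype before mixing them into float arithmetic.
        larger = tf.cast(inputs > 90.0, inputs.dtype)
        smaller = tf.cast(inputs < -90.0, inputs.dtype)
        return inputs - 180.0 * larger + 180.0 * smaller
```

For example, `OutputLayer()(tf.constant([135.0, -135.0, 30.0]))` wraps 135 to -45 and -135 to 45 while leaving 30 untouched. If a hard clamp (rather than a wrap-around) is acceptable, `tf.clip_by_value(inputs, -90.0, 90.0)` would avoid the boolean arithmetic entirely.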