ValueError: Dimension 2 in both shapes must be equal, but are 512 and 511. Shapes are [?,384,512] and [?,384,511]
I'm trying to build a U-Net convolutional neural network for image segmentation, but when I fit the model on my input data I get an incompatible-shapes error.
print(x_data.shape)
print(x_test.shape)
print(y_data.shape)
print(y_test.shape)
>>
(4, 767, 1022, 3)
(4, 767, 1022, 3)
(4, 767, 1022, 3)
(4, 767, 1022, 3)
>>>>
model = sm.Unet('resnet34', classes=1, activation='sigmoid')
model.compile(
'Adam',
loss=sm.losses.bce_jaccard_loss,
metrics=[sm.metrics.iou_score],
)
>>>>
model.fit(
x=x_data,
y=y_data,
batch_size=16,
epochs=100,
validation_data=(x_test, y_test),
)
>>
Epoch 1/100
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-27-6cf659e4ef4f> in <module>()
4 batch_size=16,
5 epochs=100,
----> 6 validation_data=(x_test, y_test),
7 )
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:747 train_step
y_pred = self(x, training=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:386 call
inputs, training=training, mask=mask)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:508 _run_internal_graph
outputs = node.layer(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/merge.py:183 call
return self._merge_function(inputs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/merge.py:522 _merge_function
return K.concatenate(inputs, axis=self.axis)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:2881 concatenate
return array_ops.concat([to_dense(x) for x in tensors], axis)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:1654 concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py:1222 concat_v2
"ConcatV2", values=values, axis=axis, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal
compute_device)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal
op_def=op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1975 __init__
control_input_ops, op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1815 _create_c_op
raise ValueError(str(e))
ValueError: Dimension 2 in both shapes must be equal, but are 512 and 511. Shapes are [?,384,512] and [?,384,511]. for '{{node functional_3/decoder_stage3_concat/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](functional_3/decoder_stage3_upsampling/resize/ResizeNearestNeighbor, functional_3/relu0/Relu, functional_3/decoder_stage3_concat/concat/axis)' with input shapes: [?,384,512,64], [?,384,511,64], [] and with computed input tensors: input[2] = <3>.
What exactly is the problem, given that I have already checked that all the input shapes match? What am I overlooking, and how do I fix it?
I have already tried
import keras
keras.backend.set_image_data_format('channels_first')
as suggested here: https://github.com/titu1994/Image-Super-Resolution/issues/27, but the problem persists.
I'm using Google Colab.
A bit late to the party, but your problem is that your input width and height are not divisible by 32. The ResNet34 encoder halves the spatial dimensions five times (2^5 = 32), so a dimension that is not a multiple of 32 gets rounded on the way down; when the decoder upsamples back, its tensor ends up one pixel wider than the encoder skip connection (512 vs. 511 in your traceback) and the concat fails. Make sure the values you feed the U-Net are divisible by 32 and your problem will be solved.
You don't need to change the Colab environment or set the channel order to channels_first.
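For example, your 767×1022 inputs can be zero-padded up to 768×1024, the next multiples of 32. Here is a minimal sketch with NumPy, assuming the x_data/y_data/x_test/y_test arrays from your question (the helper name pad_to_multiple is illustrative, not from any library):

import numpy as np

def pad_to_multiple(arr, multiple=32):
    # Zero-pad height (axis 1) and width (axis 2) up to the next multiple.
    _, h, w, _ = arr.shape
    pad_h = (-h) % multiple  # 767  -> 1 extra row  -> 768
    pad_w = (-w) % multiple  # 1022 -> 2 extra cols -> 1024
    return np.pad(arr, ((0, 0), (0, pad_h), (0, pad_w), (0, 0)))

x_data = pad_to_multiple(x_data)  # (4, 768, 1024, 3)
y_data = pad_to_multiple(y_data)
x_test = pad_to_multiple(x_test)
y_test = pad_to_multiple(y_test)

Padding (rather than resizing) keeps the masks pixel-aligned with the images; just pad images and masks the same way, and crop the padding off the predictions afterwards if needed.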