ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

I am trying to solve a classification problem. I don't know why I am getting this error:

ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

This is the main code:

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dropout, Dense

model = createModel()
filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  # problem here

def createModel():
    input_shape = (1, 11, 3840)
    model = Sequential()
    # C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
                     data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', data_format="channels_first",
                     activation='relu'))  # unsure whether to remove the padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', data_format="channels_first",
                     activation='relu'))  # unsure whether to remove the padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])

    return model

The error:

    history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), #end=75),#It take the first 75%
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
    return self.fit(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 696, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
        outputs = model.train_step(data)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:747 train_step
        y_pred = self(x, training=True)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs,
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:191 assert_input_compatibility
        raise ValueError('Input ' + str(input_index) + ' of layer ' +

    ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

Keras actually always hides the 0-th dimension, the so-called batch dimension. Wherever you write input_shape = (A, B, C), the batch dimension should not be mentioned there: (A, B, C) should be the shape of a single object (an image, in your case). For example, if you write input_shape = (1, 11, 3840), it actually means that the data fed in for training or prediction should be a numpy array with a shape like (7, 1, 11, 3840), i.e. 7 objects in the training batch. That 7 is the batch size, the number of objects trained in parallel.

So if a single one of your objects (e.g. an image) has the shape (11, 3840), then you must write input_shape = (11, 3840) everywhere, without mentioning the batch size.

Why does Keras hide the 0-th, batch dimension? Because Keras has to handle batches of different sizes: today you can feed 7 objects, tomorrow 9, and the same network works for both. But the shape of a single object, (11, 3840), must not change, and the training data produced by generate_arrays_for_training() should always have the shape (BatchSize, 11, 3840), where BatchSize can vary; for example, you can generate one batch of 179 object-images, each of shape (11, 3840).
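As a rough sketch of this convention, with a hypothetical dummy_batch_generator standing in for generate_arrays_for_training (this only illustrates the variable batch dimension; the channel dimension discussed below still has to be added for the Conv2D layers):

import numpy as np

def dummy_batch_generator():
    while True:
        batch_size = np.random.randint(1, 10)                 # batch size may vary from step to step
        X = np.random.rand(batch_size, 11, 3840)              # (BatchSize, 11, 3840)
        y = np.eye(2)[np.random.randint(0, 2, batch_size)]    # one-hot labels, (BatchSize, 2)
        yield X, y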

If the images going through all the layers are supposed to be 3-dimensional with 1 channel, then you have to expand the dimensions of the generated training data with np.expand_dims, i.e. X = np.expand_dims(X, 0), so that your training X data has the shape (1, 1, 11, 3840), e.g. a batch of 1 object; only then can you keep input_shape = (1, 11, 3840).
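For illustration, assuming one sample already has the shape (1, 11, 3840) (placeholder data):

import numpy as np

X = np.random.rand(1, 11, 3840)   # a single sample: 1 channel, 11 x 3840
X = np.expand_dims(X, 0)          # prepend the batch dimension
print(X.shape)                    # (1, 1, 11, 3840) -> a batch containing 1 object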

I also see that you write data_format="channels_first" everywhere. By default all these functions use channels_last; to avoid writing this argument everywhere, you can reshape the data produced by generate_arrays_for_training() just once: if X has the shape (1, 1, 11, 3840), do X = X.transpose(0, 2, 3, 1), and your channels become the last dimension.

Transposing moves one dimension to another position. In your case, though, because you have only 1 channel, instead of transposing you can simply reshape: an X of shape (1, 1, 11, 3840) can be reshaped with X = X.reshape(1, 11, 3840, 1), which turns it into the shape (1, 11, 3840, 1). This is only needed if you don't want to write "channels_first" everywhere; if you don't care about tidying up your code, then no transposing/reshaping is needed at all!
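A small check of both routes on placeholder data, showing that they coincide when there is only one channel:

import numpy as np

X = np.random.rand(1, 1, 11, 3840)   # (batch, channels, height, width), channels_first
X_t = X.transpose(0, 2, 3, 1)        # move the channel axis to the end
print(X_t.shape)                     # (1, 11, 3840, 1), channels_last

# With a single channel, a plain reshape produces the same layout:
X_r = X.reshape(1, 11, 3840, 1)
print(np.array_equal(X_t, X_r))      # True, because there is only one channel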

I remember that in the past Keras somehow disliked dimensions of size 1 and basically tried to remove them in several different functions, i.e. if Keras saw an array of shape (1, 2, 1, 3, 1, 4) it would almost always try to reshape it to (2, 3, 4), so np.expand_dims() was effectively ignored. In that case the only solution may be to generate batches of at least 2 images.

You can also read my other answer; although it is somewhat unrelated, it can help you understand how training/prediction works in Keras. In particular, you can read the last paragraph there.

Update: the problem appears to be solved with the following modifications (a combined sketch is shown after the list):

  1. In the data-generation function, expand_dims has to be applied twice, i.e. X = np.expand_dims(np.expand_dims(X, 0), 0).

  2. The data-generation function also needs X = X.transpose(0, 2, 3, 1).

  3. In the network code, the input shape is set to input_shape = (11, 3840, 1).

  4. In the network code, all data_format = "channels_first" arguments have been removed.
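
Putting the four changes together, a minimal sketch of the relevant pieces (the raw sample here is placeholder data; loading the real data and the rest of createModel() are omitted):

import numpy as np
from tensorflow.keras.layers import Conv2D

# 1.-2. Inside the data-generation function:
X = np.random.rand(11, 3840)                   # placeholder for one raw sample
X = np.expand_dims(np.expand_dims(X, 0), 0)    # -> (1, 1, 11, 3840)
X = X.transpose(0, 2, 3, 1)                    # -> (1, 11, 3840, 1), channels_last

# 3.-4. In createModel(): channels_last input shape, no data_format argument
first_conv = Conv2D(16, (5, 5), strides=(2, 2), padding='same',
                    activation='relu', input_shape=(11, 3840, 1))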