Python Keras: how to change the size of the input after a convolution layer going into an LSTM layer
I have a problem with the connection between a convolution layer and an LSTM layer. The data has shape (75, 5): 75 time steps with 5 data points per time step. What I want to do is run a convolution over the (75 x 5) input, take the resulting convolved (75 x 5) data, and feed it into the LSTM layer. However, this does not work, because the output of the convolution layer carries a filter dimension that I don't need. The convolution layer therefore outputs shape (1, 75, 5), while the LSTM layer expects input of shape (75, 5). How can I use only the first filter?
model = Sequential()
model.add(Convolution2D(1, 5, 5, border_mode='same', input_shape=(1, 75, 5)))
model.add(Activation('relu'))
model.add(LSTM(75, return_sequences=False, input_shape=(75, 5)))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(loss='mse', optimizer='rmsprop')
This is the error that appears:
File "/usr/local/lib/python3.4/dist-packages/keras/layers/recurrent.py", line 378, in __init__
super(LSTM, self).__init__(**kwargs)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/recurrent.py", line 97, in __init__
super(Recurrent, self).__init__(**kwargs)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 43, in __init__
self.set_input_shape((None,) + tuple(kwargs['input_shape']))
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 138, in set_input_shape
', was provided with input shape ' + str(input_shape))
Exception: Invalid input shape - Layer expects input ndim=3, was provided with input shape (None, 1, 75, 5)
You can add a Reshape() layer between the two so that the dimensions become compatible; see the sketch after the documentation excerpt below.
http://keras.io/layers/core/#reshape
keras.layers.core.Reshape(dims)
Reshape an output to a certain shape.
Input shape
Arbitrary, although all dimensions in the input shape must be fixed. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
Output shape
(batch_size,) + dims
Arguments
dims: target shape. Tuple of integers, does not include the samples dimension (batch size).
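For example, here is a minimal sketch of how the fix might look. It keeps the old Keras API used in the question; the import paths and the Reshape((75, 5)) call are assumptions based on the documentation quoted above, and the exact Reshape signature (a tuple versus separate integers) may differ between Keras versions.

from keras.models import Sequential
from keras.layers.core import Activation, Dense, Dropout, Reshape
from keras.layers.convolutional import Convolution2D
from keras.layers.recurrent import LSTM

model = Sequential()
# A single 5x5 filter with 'same' padding, so the output stays (1, 75, 5)
model.add(Convolution2D(1, 5, 5, border_mode='same', input_shape=(1, 75, 5)))
model.add(Activation('relu'))
# Drop the size-1 filter axis: (1, 75, 5) -> (75, 5), which gives the LSTM
# the 3D (batch, timesteps, features) layout it expects
model.add(Reshape((75, 5)))
# The LSTM no longer needs its own input_shape; it is inferred from the Reshape output
model.add(LSTM(75, return_sequences=False))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(loss='mse', optimizer='rmsprop')

Because the convolution uses only one filter, the Reshape just removes the size-1 filter axis, so no information is lost. If you later use more than one filter, you would have to decide how to fold the filter dimension into the feature dimension instead (for example, reshaping to (75, 5 * n_filters) after reordering the axes).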