How to Split the Input into different channels in Keras
I have 20-channel data with 5000 values per channel (150,000+ records in total, stored on disk as .npy files).
I am following the Keras fit_generator tutorial at https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly.html to read the data; each record is read as a (5000, 20) float32 numpy array.
The network I have in mind has a parallel convolutional branch per channel, with the branches concatenated at the end, so the data has to be fed to the branches in parallel.
Reading just a single channel from each record and feeding it into a single-branch network works:
def __data_generation(self, list_IDs_temp):
    'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
    # Initialization
    if(self.n_channels == 1):
        X = np.empty((self.batch_size, *self.dim))
    else:
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
    y = np.empty((self.batch_size), dtype=int)

    # Generate data
    for i, ID in enumerate(list_IDs_temp):
        # Store sample
        d = np.load(self.data_path + ID + '.npy')
        d = d[:, self.required_channel]
        d = np.expand_dims(d, 2)
        X[i,] = d

        # Store class
        y[i] = self.labels[ID]

    return X, keras.utils.to_categorical(y, num_classes=self.n_classes)
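For reference, I plug this generator into training roughly as in the linked tutorial; the DataGenerator class name, the partition/labels dictionaries and the parameter values below are the tutorial's and are placeholders here, not my exact settings:

params = {'dim': (5000, 1),
          'batch_size': 32,
          'n_classes': 2,
          'n_channels': 1,
          'shuffle': True}

# Generators built from ID lists and the label dictionary, as in the tutorial:
training_generator = DataGenerator(partition['train'], labels, **params)
validation_generator = DataGenerator(partition['validation'], labels, **params)

# Train on batches produced on the fly:
model.fit_generator(generator=training_generator,
                    validation_data=validation_generator,
                    use_multiprocessing=True,
                    workers=6)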
However, when I read the whole record and try to slice out the channels inside the network with a Lambda layer, I get an error.
Reading the whole record:
X[i,] = np.load(self.data_path + ID + '.npy')
Using the Lambda slicing-layer implementation from https://github.com/keras-team/keras/issues/890 and calling it with:
input = Input(shape=(5000, 20))
slicedInput = crop(2, 0, 1)(input)
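The crop helper from that issue is roughly the following (paraphrased from the thread, so treat the exact body as an approximation): a small factory that returns a Lambda layer slicing one dimension.

from keras.layers import Lambda

def crop(dimension, start, end):
    # Returns a Lambda layer that slices a tensor on the given dimension
    # from start to end, e.g. crop(2, 0, 1)(x) is roughly x[:, :, 0:1].
    def func(x):
        if dimension == 0:
            return x[start:end]
        if dimension == 1:
            return x[:, start:end]
        if dimension == 2:
            return x[:, :, start:end]
        if dimension == 3:
            return x[:, :, :, start:end]
        if dimension == 4:
            return x[:, :, :, :, start:end]
    return Lambda(func)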
The model compiles and the layer sizes shown are what I expect.
But as soon as data is fed to this network I get:
ValueError: could not broadcast input array from shape (5000,20) into shape (5000,1)
Any help would be appreciated.
As said in the Github thread you are referencing, a Lambda layer can return only one output, so the proposed crop(dimension, start, end) returns only a single "Tensor on a given dimension from start to end".
I believe what you want to achieve can be done like this:
import keras
from keras.layers import Dense, Concatenate, Input, Lambda
from keras.models import Model

num_channels = 20
input = Input(shape=(5000, num_channels))

branch_outputs = []
for i in range(num_channels):
    # Slicing the ith channel (last axis), giving shape (batch, 5000);
    # the Lambda is applied immediately, so each branch uses its own i:
    out = Lambda(lambda x: x[:, :, i])(input)

    # Setting up your per-channel layers (replace with actual sub-models):
    out = Dense(16)(out)
    branch_outputs.append(out)

# Concatenating together the per-channel results:
out = Concatenate()(branch_outputs)

# Adding some further layers (replace or remove with your architecture):
out = Dense(10)(out)

# Building model:
model = Model(inputs=input, outputs=out)
model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])

# --------------
# Generating dummy data:
import numpy as np
data = np.random.random((64, 5000, num_channels))
targets = np.random.randint(2, size=(64, 10))

# Training the model:
model.fit(data, targets, epochs=2, batch_size=32)
# Epoch 1/2
# 32/64 [==============>...............] - ETA: 1s - loss: 37.1219 - acc: 0.1562
# 64/64 [==============================] - 2s 27ms/step - loss: 38.4801 - acc: 0.1875
# Epoch 2/2
# 32/64 [==============>...............] - ETA: 0s - loss: 38.9541 - acc: 0.0938
# 64/64 [==============================] - 0s 4ms/step - loss: 36.0179 - acc: 0.1875
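Since you mention per-channel convolutional branches, here is a minimal sketch (under my own assumptions about layer sizes and number of classes, not your actual architecture) that keeps the channel axis when slicing, so each branch receives a (5000, 1) tensor that Conv1D can consume:

import keras
from keras.layers import Input, Lambda, Conv1D, GlobalMaxPooling1D, Dense, Concatenate
from keras.models import Model

num_channels = 20
n_classes = 2          # assumption: replace with your actual number of classes
inp = Input(shape=(5000, num_channels))

branch_outputs = []
for i in range(num_channels):
    # Keep the last axis so each branch sees shape (batch, 5000, 1):
    channel = Lambda(lambda x: x[:, :, i:i + 1])(inp)

    # Per-channel convolutional sub-model (placeholder layers, not your design):
    branch = Conv1D(8, kernel_size=7, activation='relu')(channel)
    branch = GlobalMaxPooling1D()(branch)
    branch_outputs.append(branch)

merged = Concatenate()(branch_outputs)
out = Dense(n_classes, activation='softmax')(merged)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])

With the slicing done inside the model like this, the generator should simply return the full records, i.e. allocate X as (batch_size, 5000, 20) and drop the d[:, self.required_channel] / np.expand_dims lines. That also removes the broadcast error, which comes from copying a (5000, 20) record into an array pre-allocated for a single channel.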