Error while attempting to use an HDF5 dataset with Keras
I get the following error when trying to use an HDF5 dataset with Keras. It looks like Sequential.fit() ends up with a slice key that has no 'stop' attribute while carving out the validation-data slice. I don't know whether this is a problem with the format of my HDF5 dataset or something else. Any help would be appreciated.
Traceback (most recent call last):
  File "autoencoder.py", line 73, in <module>
    validation_split=0.2)
  File "/home/ben/.local/lib/python2.7/site-packages/keras/models.py", line 672, in fit
    initial_epoch=initial_epoch)
  File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1143, in fit
    x, val_x = (slice_X(x, 0, split_at), slice_X(x, split_at))
  File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 301, in slice_X
    return [x[start:stop] for x in X]
  File "/home/ben/.local/lib/python2.7/site-packages/keras/utils/io_utils.py", line 71, in __getitem__
    if key.stop + self.start <= self.end:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
from keras.models import Sequential
from keras.layers import Convolution2D, Deconvolution2D, MaxPooling2D, UpSampling2D
from keras.callbacks import CSVLogger, ProgbarLogger, ModelCheckpoint
from keras.utils.io_utils import HDF5Matrix

# Load training inputs and targets directly from the HDF5 file
training_input = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_input_rotated')
training_target = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_target_rotated')
# Model definition
autoencoder = Sequential()
autoencoder.add(Convolution2D(32, 3, 3, activation='relu', border_mode='same',input_shape=(64, 64, 3)))
autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
autoencoder.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same'))
autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
autoencoder.add(Convolution2D(128, 3, 3, activation='relu', border_mode='same'))
autoencoder.add(Deconvolution2D(64, 3, 3, activation='relu', border_mode='same',output_shape=(None, 16, 16, 64),subsample=(2, 2)))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Deconvolution2D(32, 3, 3, activation='relu', border_mode='same',output_shape=(None, 32, 32, 32),subsample=(2, 2)))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Deconvolution2D(3, 3, 3, activation='sigmoid', border_mode='same',output_shape=(None, 64, 64, 3),subsample=(2, 2)))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.summary()
# Callback configuration (start_time is defined earlier in the script, not shown here)
csv_logger = CSVLogger('../../runs/training_' + start_time + '.log')
prog_logger = ProgbarLogger()
checkpointer = ModelCheckpoint(filepath='../../runs/model_' + start_time + '.hdf5', verbose=1, save_best_only=False)
# Training call
history = autoencoder.fit(
    x=training_input,
    y=training_target,
    batch_size=256,
    nb_epoch=1000,
    verbose=2,
    callbacks=[csv_logger, prog_logger, checkpointer],
    validation_split=0.2)
I didn't fix the underlying error, but I worked around it by passing validation_data instead of validation_split in the fit call.
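From the traceback, validation_split makes Keras evaluate slice_X(x, split_at), i.e. x[split_at:], so HDF5Matrix.__getitem__ receives a slice whose stop is None and then tries key.stop + self.start, which raises the TypeError. A minimal sketch of the validation_data workaround, reusing the model and callbacks defined above; the h5py length lookup, the 80/20 split point, and the shuffle='batch' option are my own assumptions for illustration, not part of the original script:

import h5py
from keras.utils.io_utils import HDF5Matrix

path = "../../media/patches/data_rotated.h5"

# Look up the number of samples so the split point can be computed up front
with h5py.File(path, 'r') as f:
    n_samples = f['training_input_rotated'].shape[0]
split = int(n_samples * 0.8)

# Carve out contiguous training and validation ranges with explicit start/end,
# so fit() never has to slice the HDF5Matrix objects itself
train_x = HDF5Matrix(path, 'training_input_rotated', start=0, end=split)
train_y = HDF5Matrix(path, 'training_target_rotated', start=0, end=split)
val_x = HDF5Matrix(path, 'training_input_rotated', start=split, end=n_samples)
val_y = HDF5Matrix(path, 'training_target_rotated', start=split, end=n_samples)

history = autoencoder.fit(
    x=train_x,
    y=train_y,
    batch_size=256,
    nb_epoch=1000,
    verbose=2,
    callbacks=[csv_logger, prog_logger, checkpointer],
    validation_data=(val_x, val_y),
    shuffle='batch')  # HDF5 data needs in-order reads; 'batch' shuffles whole chunks

Keeping the training and validation ranges contiguous means every read stays within the bounds HDF5Matrix was given, which is what avoids the open-ended slice that triggered the error.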