
Dataset.from_generator cannot replicate functionality of numpy arrays as input to 1D Convnet

I am feeding many time series, each of length 100 with 3 features, into a 1D Convnet. I have too many of them to fit into numpy arrays, so I need to use Dataset.from_generator().

The problem is that when I train the model on the dataset, it gives the error:

expected conv1d_input to have 3 dimensions, but got array with shape (100, 3)

The code below demonstrates the problem. The generator produces each element as the expected (100, 3) array. Why does the model not recognize the generator output as valid?

Many thanks for your help. Julian

import numpy as np
import tensorflow as tf
def create_timeseries_element():
    # returns a random time series of 100 intervals, each with 3 features,
    # and a random one-hot array of 5 entries
    data = np.random.rand(100,3)
    label = np.eye(5, dtype='int')[np.random.choice(5)]
    return data, label

def data_generator():
    d, l = create_timeseries_element()
    yield (d, l)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(128, 9, activation='relu', input_shape=(100, 3)),
    tf.keras.layers.Conv1D(128, 9, activation='relu'),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(256, 5, activation='relu'),
    tf.keras.layers.Conv1D(256, 5, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

x_train = []
y_train = []
for _ in range(1000):
    d, l = create_timeseries_element()
    x_train.append(d)
    y_train.append(l)
x_train = np.array(x_train)
y_train = np.array(y_train)

# train model with numpy arrays - this works
model.fit(x=x_train, y=y_train)

ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32),
                                      output_shapes=(tf.TensorShape([100, 3]), tf.TensorShape([5])))
# train model with dataset - this fails
model.fit(ds)

The model expects a batch (or list) of samples. You can provide that by simply batching the dataset when you create it, as follows:

ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32),
                                      output_shapes=(tf.TensorShape([100, 3]), tf.TensorShape([5])))
ds = ds.batch(16)
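
With ds.batch(16) the dataset yields rank-3 elements of shape (batch, 100, 3) and (batch, 5), which is what the Conv1D input expects. A minimal sketch of training on it, assuming TF 2.x eager execution and the single-yield generator from the question (so the only batch has size 1):

for d, l in ds.take(1):
    print(d.shape, l.shape)   # (1, 100, 3) (1, 5)
model.fit(ds)                 # no longer raises the dimension error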

Alternatively, you can take a different approach when preparing the samples. In that case you need to expand the sample dimensions so that each sample is itself a batch (you can also pass a list of samples), and you have to make the following changes to the dataset's output_shapes and to the create_timeseries_element() function:

def create_timeseries_element():
    # returns a random time series of 100 intervals, each with 3 features,
    # and a random one-hot array of 5 entries
    # Expand dimensions to create a batch of single sample
    data = np.expand_dims(np.random.rand(100, 3), axis=0)
    label = np.expand_dims(np.eye(5, dtype='int')[np.random.choice(5)], axis=0)
    return data, label

ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32), output_shapes=(tf.TensorShape([None, 100, 3]), tf.TensorShape([None, 5])))
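
As mentioned above, instead of expand_dims you can also yield a stack of several samples so that each yield is already a multi-sample batch. A hedged sketch of that variant; the batch size of 4 and the use of np.stack are illustrative choices, not part of the original answer:

def data_generator():
    # build one batch of 4 raw samples instead of expanding a single sample
    samples = [np.random.rand(100, 3) for _ in range(4)]
    labels = [np.eye(5, dtype='int')[np.random.choice(5)] for _ in range(4)]
    yield np.stack(samples), np.stack(labels)

ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32),
                                    output_shapes=(tf.TensorShape([None, 100, 3]), tf.TensorShape([None, 5])))
model.fit(ds)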

The changes above provide only one batch (one sample, in the first solution) per epoch of the dataset. You can generate as many batches (samples, in the first solution) as you need (e.g. 25) by passing an argument to the data_generator function when defining the dataset, as follows:

def data_generator(count=1):
    for _ in range(count):
        d, l = create_timeseries_element()
        yield (d, l)

ds = tf.data.Dataset.from_generator(data_generator, args=[25], output_types=(tf.float32, tf.int32), output_shapes=(tf.TensorShape([None, 100, 3]), tf.TensorShape([None, 5])))
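
With args=[25], from_generator calls data_generator(25), so the dataset yields 25 single-sample batches and each epoch runs 25 steps. A short usage sketch; the epoch count of 5 is an arbitrary illustration:

model.fit(ds, epochs=5)   # 25 steps per epoch, one expanded (1, 100, 3) sample per step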