Why does the official Keras LSTM classifier example use real-valued training targets?

As expected, the official example in the Keras documentation trains a stacked LSTM classifier with categorical_crossentropy as the loss function: https://keras.io/getting-started/sequential-model-guide/#examples

But the y_train values are generated with numpy.random.random(), which outputs real numbers, not the 0/1 class labels that are typical for classification.

Are the y_train values promoted to 0/1 values under the hood?

Can this loss function even be trained against real values between 0 and 1?

How is the accuracy computed?

Confusing, isn't it?

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np

data_dim = 16
timesteps = 8
num_classes = 10

# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32))  # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))

# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))

model.fit(x_train, y_train,
          batch_size=64, epochs=5,
          validation_data=(x_val, y_val))

For this example, y_train and y_val are no longer one-hot encoded; instead, each entry is a per-class probability. So categorical cross-entropy still applies: one-hot encoding can be seen as a special case of a probability vector.

y_train[0]
array([0.30172708, 0.69581121, 0.23264601, 0.87881279, 0.46294832,
       0.5876406 , 0.16881395, 0.38856604, 0.00193709, 0.80681196])
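
To see why, here is a minimal NumPy sketch of the textbook formula loss = -sum(y_true * log(y_pred)) (an illustration, not the Keras implementation): it accepts any probability-vector target, and a one-hot target is simply the special case where all of the mass sits on one class.

import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # -sum(y_true * log(y_pred)); clip predictions to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred), axis=-1)

y_pred = np.array([0.1, 0.7, 0.2])    # e.g. a softmax output over 3 classes

one_hot = np.array([0.0, 1.0, 0.0])   # hard, one-hot target
soft    = np.array([0.2, 0.5, 0.3])   # soft probability target

print(categorical_crossentropy(one_hot, y_pred))  # -log(0.7) ≈ 0.357
print(categorical_crossentropy(soft, y_pred))     # weighted sum of -log(y_pred) terms

Either kind of target gives a finite, differentiable loss, which is why model.fit accepts the random dummy targets above without complaint. As for the accuracy metric, Keras' categorical accuracy compares argmax(y_true) with argmax(y_pred), so it also remains well-defined for soft targets.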