
Building a quick GRU model for stock prediction

I am a beginner with RNNs and want to build a running Gated Recurrent Unit (GRU) model for stock prediction.

I have a numpy array for my training data with the following shape:

train_x.shape
(1122,20,320)
`1122` represents the total number of timestamps I have
`20` is the number of timestamps I want to predict the future from
`320` is the number of features (different stocks)

My train_y.shape is (1122,), containing a binary variable encoded as 1 and 0: 1 means buy, 0 means sell.
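
As a quick sanity check on those shapes, here is a minimal sketch with placeholder arrays (the random values and variable names are purely illustrative, not the real data):

import numpy as np

train_x = np.random.randn(1122, 20, 320).astype("float32")  # (samples, timesteps, features)
train_y = np.random.randint(0, 2, size=(1122,))              # binary labels: 1 = buy, 0 = sell

print(train_x.shape)  # (1122, 20, 320)
print(train_y.shape)  # (1122,)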

With that in mind, I started experimenting with my GRU model:

 def GRU_model(train_x,train_y,test_x,test_y):

    model = Sequential()
    model.add(layers.Embedding(train_x.shape[0],50,input_length=320))
    model.add(layers.GRU(50, return_sequences=True,input_shape=(train_x.shape[1],1),activation='tanh'))
    model.add(layers.GRU(50, return_sequences=True,input_shape=(train_x.shape[1],1),activation='tanh'))
    model.add(layers.GRU(50, return_sequences=True,input_shape=(train_x.shape[1],1),activation='tanh'))
    model.add(layers.GRU(50,activation='tanh'))
    model.add(Dense(units=2))
    model.compile(optimizer=SGD(lr=0.01,decay=1e-7,momentum=0.9,nesterov=False),loss='mean_squared_error')
    
    model.fit(train_x,train_y,epochs=EPOCHS,batch_size=BATCH_SIZE)

    GRU_predict = model.predict(validation_x)

    return model,GRU_predict



my_gru_model,my_gru_predict = GRU_model(train_x,train_y,validation_x,validation_y)
ValueError: Input 0 of layer gru_42 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 20, 320, 50)

Obviously the dimensions I am feeding into the model are not correct, but I don't understand how they need to fit together so the model can run smoothly.

So if you have 1122 data samples, each with 20 timesteps and 320 features per timestep, and you want to teach your model to make a binary decision between buying and selling, try something like this:

import tensorflow as tf
tf.random.set_seed(1)

model = tf.keras.Sequential()
model.add(tf.keras.layers.GRU(50, return_sequences=True, input_shape=(20, 320), activation='tanh'))
model.add(tf.keras.layers.GRU(50,activation='tanh'))
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,decay=1e-7,momentum=0.9,nesterov=False),loss='binary_crossentropy')
print(model.summary())

train_x = tf.random.normal((1122, 20, 320))
train_y = tf.random.uniform((1122,), maxval=2, dtype=tf.int32)
model.fit(train_x, train_y, epochs=5, batch_size=16)
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 gru (GRU)                   (None, 20, 50)            55800     
                                                                 
 gru_1 (GRU)                 (None, 50)                15300     
                                                                 
 dense (Dense)               (None, 1)                 51        
                                                                 
=================================================================
Total params: 71,151
Trainable params: 71,151
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/5
71/71 [==============================] - 5s 21ms/step - loss: 0.7050
Epoch 2/5
71/71 [==============================] - 2s 22ms/step - loss: 0.6473
Epoch 3/5
71/71 [==============================] - 1s 21ms/step - loss: 0.5513
Epoch 4/5
71/71 [==============================] - 1s 21ms/step - loss: 0.3640
Epoch 5/5
71/71 [==============================] - 1s 20ms/step - loss: 0.1258
<keras.callbacks.History at 0x7f4eac87e610>

Note that you only have one output node, since your model is supposed to make a binary decision. This is also the reason why you have to use the loss function binary_crossentropy.
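
To turn that single sigmoid output into buy/sell decisions at inference time, you can threshold the predicted probabilities, for example at 0.5 (a minimal sketch; here `validation_x` is assumed to be an array of shape (n, 20, 320) like your training data):

import numpy as np

# Predicted probabilities in [0, 1], shape (n, 1): one value per sample.
probs = model.predict(validation_x)

# Threshold at 0.5: 1 = buy, 0 = sell.
decisions = (probs > 0.5).astype(int).ravel()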

Regarding the GRU layers: a GRU expects input of shape (batch_size, timesteps, features), but batch_size is inferred during training and is therefore omitted from input_shape. Since the next GRU also expects this shape, you use the argument return_sequences=True in the first GRU, which returns a sequence of shape (batch_size, 20, 50), i.e. one 50-dimensional hidden-state output for each of the 20 input timesteps. Also, you do not need an Embedding layer; it is usually used to map sequences of integers representing text into n-dimensional vector representations.
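
If it helps to see those shapes concretely, here is a small sketch (dummy batch, illustrative only) showing what return_sequences changes, and why the Embedding layer produced the 4D tensor from your error message:

import tensorflow as tf

x = tf.random.normal((4, 20, 320))  # a dummy batch: (batch_size, timesteps, features)

# With return_sequences=True you get one 50-dim hidden state per timestep ...
print(tf.keras.layers.GRU(50, return_sequences=True)(x).shape)  # (4, 20, 50)

# ... without it, only the hidden state of the last timestep.
print(tf.keras.layers.GRU(50)(x).shape)  # (4, 50)

# An Embedding maps every integer to a 50-dim vector, which adds a fourth axis;
# that is where the (None, 20, 320, 50) in your error message came from.
ids = tf.zeros((4, 20, 320), dtype=tf.int32)
print(tf.keras.layers.Embedding(1122, 50)(ids).shape)  # (4, 20, 320, 50)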