Keras Q-learning model performance doesn't improve when playing CartPole

I am trying to train a deep Q-learning Keras model to play CartPole-v1, but it does not seem to be getting any better. I don't think this is a bug; rather, I lack the knowledge of how to use Keras and OpenAI Gym properly. I am following this tutorial (https://adventuresinmachinelearning.com/reinforcement-learning-tutorial-python-keras/), which shows how to train an agent to play NChain-v0 (which I was able to follow), and now I am trying to apply what I learned there to a more complex environment: CartPole-v1. Here is the code:

###import libraries
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam


###prepare environment
env = gym.make('CartPole-v1') #our environment is CartPole-v1


###make model
model = Sequential()
model.add(Dense(128, input_shape=(env.observation_space.shape[0],), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(env.action_space.n, activation='linear'))
model.compile(loss='mse', optimizer=Adam(), metrics=['mae'])


###train model
def train_model(n_episodes=500, epsilon=0.5, decay_factor=0.999, gamma=0.95):
    G_array = []
    for episode in range(n_episodes):
        observation = env.reset()
        observation = observation.reshape(-1, env.observation_space.shape[0])
        epsilon *= decay_factor
        G = 0
        done = False
        while done != True:
            if np.random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = np.argmax(model.predict(observation))
            new_observation, reward, done, info = env.step(action) #It keeps going left! Why though?
            new_observation = new_observation.reshape(-1, env.observation_space.shape[0])
            target = reward + gamma*np.max(model.predict(new_observation))
            target_vector = model.predict(observation)[0]
            target_vector[action] = target
            model.fit(observation, target_vector.reshape(-1, env.action_space.n), epochs=1, verbose=0)
            observation = new_observation
            G += reward
        G_array.append(G)

    return G_array

G_array = train_model()
print(G_array)

The output of 'G_array' (the total reward for each game) is as follows:

[14.0, 16.0, 18.0, 12.0, 16.0, 14.0, 17.0, 11.0, 11.0, 12.0, 11.0, 15.0, 13.0, 12.0, 12.0, 19.0, 13.0, 9.0, 10.0, 10.0, 11.0, 11.0, 14.0, 11.0, 10.0, 9.0, 10.0, 10.0, 12.0, 9.0, 15.0, 19.0, 11.0, 11.0, 10.0, 11.0, 13.0, 12.0, 13.0, 16.0, 12.0, 14.0, 9.0, 12.0, 20.0, 10.0, 12.0, 11.0, 9.0, 13.0, 13.0, 11.0, 13.0, 11.0, 24.0, 12.0, 11.0, 9.0, 9.0, 11.0, 10.0, 16.0, 10.0, 9.0, 9.0, 19.0, 10.0, 11.0, 13.0, 11.0, 11.0, 14.0, 23.0, 8.0, 13.0, 12.0, 15.0, 14.0, 11.0, 24.0, 9.0, 11.0, 11.0, 11.0, 10.0, 12.0, 11.0, 11.0, 10.0, 13.0, 18.0, 10.0, 17.0, 11.0, 13.0, 14.0, 12.0, 16.0, 13.0, 10.0, 10.0, 12.0, 22.0, 13.0, 11.0, 14.0, 10.0, 11.0, 11.0, 14.0, 14.0, 12.0, 18.0, 17.0, 9.0, 13.0, 12.0, 11.0, 11.0, 9.0, 16.0, 9.0, 18.0, 15.0, 12.0, 16.0, 13.0, 10.0, 13.0, 13.0, 17.0, 11.0, 11.0, 9.0, 9.0, 12.0, 9.0, 10.0, 9.0, 10.0, 18.0, 9.0, 11.0, 12.0, 10.0, 10.0, 10.0, 12.0, 12.0, 20.0, 13.0, 19.0, 9.0, 14.0, 14.0, 13.0, 19.0, 10.0, 18.0, 11.0, 11.0, 11.0, 8.0, 10.0, 14.0, 11.0, 16.0, 11.0, 13.0, 13.0, 9.0, 16.0, 11.0, 12.0, 13.0, 12.0, 11.0, 10.0, 11.0, 21.0, 12.0, 22.0, 12.0, 10.0, 13.0, 15.0, 19.0, 11.0, 10.0, 10.0, 11.0, 22.0, 11.0, 9.0, 26.0, 13.0, 11.0, 13.0, 13.0, 10.0, 10.0, 11.0, 12.0, 18.0, 9.0, 11.0, 13.0, 12.0, 13.0, 13.0, 12.0, 10.0, 11.0, 12.0, 12.0, 17.0, 11.0, 13.0, 13.0, 21.0, 12.0, 9.0, 14.0, 10.0, 15.0, 12.0, 12.0, 14.0, 11.0, 10.0, 14.0, 12.0, 12.0, 11.0, 8.0, 24.0, 9.0, 13.0, 10.0, 14.0, 10.0, 12.0, 13.0, 12.0, 13.0, 13.0, 14.0, 9.0, 17.0, 16.0, 9.0, 16.0, 14.0, 11.0, 9.0, 10.0, 15.0, 11.0, 9.0, 14.0, 12.0, 10.0, 13.0, 10.0, 10.0, 16.0, 15.0, 11.0, 8.0, 9.0, 9.0, 10.0, 9.0, 21.0, 13.0, 13.0, 10.0, 10.0, 11.0, 27.0, 13.0, 15.0, 11.0, 11.0, 12.0, 9.0, 10.0, 16.0, 10.0, 13.0, 13.0, 12.0, 12.0, 11.0, 17.0, 14.0, 9.0, 15.0, 26.0, 9.0, 9.0, 13.0, 9.0, 8.0, 12.0, 9.0, 10.0, 11.0, 9.0, 10.0, 9.0, 11.0, 9.0, 10.0, 12.0, 13.0, 13.0, 11.0, 11.0, 10.0, 15.0, 11.0, 11.0, 13.0, 10.0, 10.0, 12.0, 10.0, 10.0, 12.0, 9.0, 15.0, 29.0, 11.0, 9.0, 18.0, 11.0, 13.0, 13.0, 16.0, 13.0, 15.0, 10.0, 11.0, 18.0, 9.0, 9.0, 11.0, 15.0, 11.0, 11.0, 10.0, 25.0, 10.0, 9.0, 11.0, 15.0, 15.0, 11.0, 11.0, 11.0, 13.0, 9.0, 11.0, 9.0, 13.0, 12.0, 12.0, 14.0, 11.0, 14.0, 8.0, 10.0, 13.0, 10.0, 10.0, 10.0, 9.0, 13.0, 9.0, 12.0, 10.0, 11.0, 9.0, 11.0, 12.0, 20.0, 9.0, 10.0, 14.0, 9.0, 12.0, 13.0, 11.0, 11.0, 11.0, 10.0, 15.0, 14.0, 14.0, 12.0, 13.0, 12.0, 11.0, 10.0, 12.0, 12.0, 9.0, 11.0, 9.0, 11.0, 13.0, 10.0, 11.0, 11.0, 11.0, 12.0, 13.0, 13.0, 12.0, 8.0, 11.0, 13.0, 9.0, 12.0, 10.0, 10.0, 15.0, 12.0, 11.0, 10.0, 17.0, 10.0, 14.0, 9.0, 10.0, 10.0, 10.0, 12.0, 10.0, 10.0, 12.0, 10.0, 15.0, 10.0, 10.0, 9.0, 10.0, 10.0, 10.0, 19.0, 9.0, 10.0, 11.0, 10.0, 11.0, 11.0, 13.0, 10.0, 11.0, 12.0, 11.0, 12.0, 13.0, 11.0, 8.0, 12.0, 12.0, 14.0, 14.0, 11.0, 9.0, 11.0, 9.0, 12.0, 9.0, 8.0, 9.0, 12.0, 8.0, 10.0, 11.0, 13.0, 12.0, 12.0, 10.0, 11.0, 12.0, 10.0, 12.0, 13.0, 9.0, 9.0, 10.0, 15.0, 14.0, 16.0, 8.0, 19.0, 10.0]

This clearly shows that the model is not improving at all over the 500 episodes. Please forgive me, as I am a complete beginner with Keras and OpenAI Gym (especially Keras). Any help is appreciated. Thank you.

UPDATE: After some debugging, I recently noticed that the model tends to move left, i.e. to choose action 0, most of the time. Does this mean I should add some if statements to modify the reward system (for example, increase the reward when the pole angle is less than 5 degrees)? In fact, I am trying that right now, but to no avail so far.
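
For reference, the pole angle in CartPole-v1 is the third entry of the observation vector (in radians), so a shaping experiment like the one described might look roughly like the sketch below; the 5-degree threshold and the 0.5 bonus are purely illustrative guesses, not values from the original post:

import math

def shape_reward(reward, raw_observation, max_angle_deg=5.0, bonus=0.5):
    # raw_observation is the unreshaped array returned by env.step();
    # index 2 is the pole angle in radians in CartPole-v1.
    # max_angle_deg and bonus are illustrative values only.
    if abs(raw_observation[2]) < math.radians(max_angle_deg):
        reward += bonus
    return reward

It would be called as reward = shape_reward(reward, new_observation) right after env.step(...) and before the reshape.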

Reinforcement learning is very noisy, and your batch size of 1 makes it even noisier. You could try keeping a memory buffer of past episodes/updates, for example a deque() from collections, and then randomly sample from that buffer according to a given batch size for each update. I found this repo very useful (it includes a replay/memory buffer as well as an RL agent, which is exactly what you need): https://github.com/udacity/deep-reinforcement-learning/tree/master/dqn

That said, RL takes a long time to converge. Unlike traditional deep learning, where the loss drops quickly at the start, in RL the reward often does not increase for a long time and then suddenly starts climbing.
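
A minimal sketch of that replay idea, reusing the model and np from the code in the question; the buffer size, batch size, and the helper names remember/replay are illustrative, not taken from the linked repo:

import random
from collections import deque

replay_buffer = deque(maxlen=2000)   # stores (observation, action, reward, new_observation, done) tuples

def remember(observation, action, reward, new_observation, done):
    # observation / new_observation are the (1, 4)-shaped arrays already used in train_model
    replay_buffer.append((observation, action, reward, new_observation, done))

def replay(batch_size=32, gamma=0.95):
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)               # random sampling decorrelates consecutive steps
    states = np.concatenate([s for s, a, r, s2, d in batch])       # shape (batch_size, 4)
    next_states = np.concatenate([s2 for s, a, r, s2, d in batch])
    targets = model.predict(states)                                # current Q-value estimates, shape (batch_size, 2)
    next_q = model.predict(next_states)
    for i, (s, a, r, s2, d) in enumerate(batch):
        targets[i, a] = r if d else r + gamma * np.max(next_q[i])  # no bootstrapping past a terminal state
    model.fit(states, targets, epochs=1, verbose=0)                # one batched update instead of batch size 1

Inside the while loop of train_model you would then call remember(observation, action, reward, new_observation, done) after env.step(...) and call replay() once per step (or once per episode), instead of fitting on every single transition with batch size 1.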