Model for OpenAI gym's Lunar Lander not converging

I am trying to use deep reinforcement learning with Keras to train an agent to learn how to play the Lunar Lander OpenAI Gym environment. The problem is that my model is not converging. Here is my code:

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

def get_random_action(epsilon):
    # Explore: returns True with probability epsilon.
    return np.random.rand(1) < epsilon

def get_reward_prediction(q, a):
    # Predict the value of taking action a in state q by feeding the
    # state vector concatenated with a one-hot action into the network.
    qs_a = np.concatenate((q, table[a]), axis=0)
    x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
    x[0] = qs_a
    guess = model.predict(x[0].reshape(1, x.shape[1]))
    r = guess[0][0]
    return r

results = []
epsilon = 0.05                 # exploration probability
alpha = 0.003                  # learning rate
gamma = 0.3                    # discount factor
environment_parameters = 8     # size of LunarLander's observation vector
num_of_possible_actions = 4
obs = 15                       # warm-up episodes that act purely at random
mem_max = 100000               # replay memory capacity
epochs = 3                     # epochs per call to model.fit
total_episodes = 15000

possible_actions = np.arange(0, num_of_possible_actions)
# Identity-style lookup table: row a is the one-hot encoding of action a.
table = np.zeros((num_of_possible_actions, num_of_possible_actions))
table[np.arange(num_of_possible_actions), possible_actions] = 1

env = gym.make('LunarLander-v2')
env.reset()

i_x = np.random.random((5, environment_parameters + num_of_possible_actions))
i_y = np.random.random((5, 1))

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=i_x.shape[1]))
model.add(Dense(i_y.shape[1]))

opt = optimizers.Adam(lr=alpha)

model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])

total_steps = 0
i_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
i_y = np.zeros(shape=(1, 1))

mem_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
mem_y = np.zeros(shape=(1, 1))
max_steps = 40000

for episode in range(total_episodes):
    g_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
    g_y = np.zeros(shape=(1, 1))
    q_t = env.reset()
    episode_reward = 0

    for step_number in range(max_steps):
        if episode < obs:
            a = env.action_space.sample()
        else:
            if get_random_action(epsilon):
                a = env.action_space.sample()
            else:
                actions = np.zeros(shape=num_of_possible_actions)

                for i in range(4):
                    actions[i] = get_reward_prediction(q_t, i)

                a = np.argmax(actions)

        # env.render()
        qa = np.concatenate((q_t, table[a]), axis=0)

        s, r, episode_complete, data = env.step(a)
        episode_reward += r

        if step_number == 0:
            # The first step initializes the episode buffers and seeds the
            # replay memory; later steps append to them.
            g_x[0] = qa
            g_y[0] = np.array([r])
            mem_x[0] = qa
            mem_y[0] = np.array([r])
        else:
            g_x = np.vstack((g_x, qa))
            g_y = np.vstack((g_y, np.array([r])))

        if episode_complete:
            # Walk backwards through the episode, turning each immediate
            # reward into a discounted return: G_t = r_t + gamma * G_{t+1}
            # (the final step's reward is left as-is).
            for i in range(1, g_y.shape[0]):
                g_y[(g_y.shape[0] - 1) - i][0] += gamma * g_y[(g_y.shape[0] - 1) - i + 1][0]

            if mem_x.shape[0] == 1:
                mem_x = g_x
                mem_y = g_y
            else:
                mem_x = np.concatenate((mem_x, g_x), axis=0)
                mem_y = np.concatenate((mem_y, g_y), axis=0)

            if len(mem_x) >= mem_max:
                # Replay memory is full: drop the oldest rows to make room.
                for l in range(len(g_x)):
                    mem_x = np.delete(mem_x, 0, axis=0)
                    mem_y = np.delete(mem_y, 0, axis=0)

        q_t = s

        if episode_complete and episode >= obs:
            if episode%10 == 0:
                model.fit(mem_x, mem_y, batch_size=32, epochs=epochs, verbose=0)

        if episode_complete:
            results.append(episode_reward)
            break

I can run it for tens of thousands of episodes and my model still won't converge. It starts out by decreasing the average policy change over roughly 5000 episodes while increasing the average reward, but then it hits a wall, and after that the average reward per episode actually goes down. I've tried messing with the hyperparameters, but I haven't gotten anywhere with that. I'm trying to model my code after the DeepMind DQN paper.
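For reference, the loop above labels every state/action pair with a discounted Monte Carlo return assembled at the end of each episode, whereas the DQN paper regresses each transition toward a bootstrapped one-step target. Below is a minimal sketch of that target written against the same model.predict interface used above; the encode_input helper is a hypothetical stand-in for the state-plus-one-hot-action concatenation the code builds inline, so treat this as an illustration of the idea rather than a drop-in patch:

import numpy as np

def encode_input(state, action, num_actions=4):
    # Stand-in for the question's inline concatenation of the state
    # vector with a one-hot action row from `table`.
    one_hot = np.zeros(num_actions)
    one_hot[action] = 1
    return np.concatenate((state, one_hot)).reshape(1, -1)

def dqn_target(model, reward, next_state, done, gamma, num_actions=4):
    # One-step bootstrapped target from the DQN paper:
    #   y = r                               if the episode ended here
    #   y = r + gamma * max_a Q(s', a)      otherwise
    if done:
        return reward
    q_next = [model.predict(encode_input(next_state, a, num_actions))[0][0]
              for a in range(num_actions)]
    return reward + gamma * max(q_next)

The full paper additionally samples minibatches from replay memory and computes the max against a periodically frozen target network, both of which this sketch omits.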

You may want to change the get_random_action function so that it decays epsilon with each episode. After all, assuming your agent can eventually learn an optimal policy, at some point you won't want to take random actions at all, right? Here's a slightly different version of get_random_action that will do this for you:

def get_random_action(epsilon, total_episodes, episode):
    # Linearly anneal the exploration probability from epsilon down to 0
    # over the course of training.
    explore_prob = epsilon - (epsilon * (episode / total_episodes))
    return np.random.rand(1) < explore_prob

In this modified version of your function, epsilon decreases slightly with each episode; with epsilon = 0.05 and total_episodes = 15000, for example, the exploration probability falls linearly from 0.05 in the first episode to 0 in the last one. This may help your model converge.

There are a number of ways to decay a parameter like this. For more information, take a look at this Wikipedia article.
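For instance, here is a sketch of one common alternative, multiplicative (exponential) decay; the decay_rate and min_epsilon defaults are illustrative values of my own, not tuned recommendations:

import numpy as np

def get_random_action_exp(epsilon, episode, decay_rate=0.999, min_epsilon=0.01):
    # Multiplicative schedule: epsilon * decay_rate**episode, clipped at
    # a floor so the agent never stops exploring entirely.
    explore_prob = max(min_epsilon, epsilon * decay_rate ** episode)
    return np.random.rand(1) < explore_prob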

I recently implemented this successfully: https://github.com/tianchuliang/techblog/tree/master/OpenAIGym

Basically, I let the agent run randomly for 3000 frames while collecting those frames as initial training data (states) and labels (rewards); after that, I trained my neural network model every 100 frames and let the model decide what action to take for the best score. A rough sketch of that loop is given below.
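Here is a minimal sketch of that collect-then-train structure, reusing the question's state-plus-one-hot-action input encoding; the network size, the greedy action rule, and the 20000-frame budget are my own assumptions, since the linked repo may implement these details differently:

import numpy as np
import gym
from keras.models import Sequential
from keras.layers import Dense

env = gym.make('LunarLander-v2')
num_actions = env.action_space.n

def encode(state, action):
    # Same input scheme as the question: state vector plus one-hot action.
    one_hot = np.zeros(num_actions)
    one_hot[action] = 1
    return np.concatenate((state, one_hot)).reshape(1, -1)

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=8 + num_actions))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

xs, ys = [], []
state = env.reset()
for frame in range(20000):
    if frame < 3000:
        action = env.action_space.sample()  # warm-up: act at random
    else:
        # Greedy action under the current reward model.
        action = int(np.argmax([model.predict(encode(state, a))[0][0]
                                for a in range(num_actions)]))
    next_state, reward, done, _ = env.step(action)
    xs.append(encode(state, action)[0])
    ys.append(reward)
    if frame >= 3000 and frame % 100 == 0:
        # Refit on everything collected so far.
        model.fit(np.array(xs), np.array(ys), verbose=0)
    state = env.reset() if done else next_state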

Take a look at my GitHub; it may help. Oh, and my training iterations are also on YouTube: https://www.youtube.com/watch?v=wrrr90Pevuw , https://www.youtube.com/watch?v=TJzKbFAlKa0 , and https://www.youtube.com/watch?v=y91uA_cDGGs