Training DQN Agent with MultiDiscrete action space in gym

I want to train a DQN Agent with Keras-rl. My environment has both MultiDiscrete action and observation spaces. I am adapting the code from this video: https://www.youtube.com/watch?v=bD6V3rcr_54&t=5s

Here is my code:

from gym import Env
from gym.spaces import MultiDiscrete

class ShowerEnv(Env):
    def __init__(self, max_machine_states_vec, production_rates_vec, production_threshold, scheduling_horizon, operations_horizon=100):
        """
        Returns:
        self.action_space is a vector with the maximum production rate for each machine, a binary call-to-maintenance and a binary call-to-schedule
        """
        num_machines = len(max_machine_states_vec)
        assert len(max_machine_states_vec) == len(production_rates_vec), "Machine states and production rates have different cardinality"
        # Action space: the production rate from 0 to N for each machine, plus the binary maintenance and scheduling choices
        self.action_space = MultiDiscrete(production_rates_vec + num_machines*[2] + [2])
        # Observation space: states 0,...,L for each machine, plus the scheduling state including "ns" (None = "ns")
        self.observation_space = MultiDiscrete(max_machine_states_vec + [scheduling_horizon+2])
        # Code going on...
.
.
.
.
from functools import reduce
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_model(states, actions):
    actions_number = reduce(lambda a, b: a*b, env.action_space.nvec)
    model = Sequential()
    model.add(Dense(24, activation='relu', input_shape=(1, states[0])))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actions_number, activation='linear'))
    return model
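For context, the `reduce` over `nvec` above just computes the total number of joint actions (the product of the sizes of each sub-action). A minimal check with plain NumPy (the `[3, 2, 2]` sizes here are made up for illustration, not taken from the question's environment):

```python
from functools import reduce
import numpy as np

nvec = np.array([3, 2, 2])  # hypothetical MultiDiscrete sub-action sizes
actions_number = reduce(lambda a, b: a*b, nvec)
# equivalent to np.prod(nvec): 3 * 2 * 2 = 12 joint actions
print(actions_number)  # 12
```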

from rl.agents import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
    return dqn
.
.
.
.
states = env.observation_space.shape
actions_number = reduce(lambda a,b: a*b, env.action_space.nvec)

model = build_model(states, actions)
model.summary()

dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)

After initializing with 2 elements, hence 5 actions, I get the following error:

ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(None, 1, 32), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case [2 2 2 2 2]

How can I solve this? I am fairly sure it is because I do not fully understand how to adapt the code from the video to a MultiDiscrete action space. Thanks :)

I ran into the same problem and unfortunately could not get gym.spaces.MultiDiscrete to work with the DQNAgent in Keras-rl.

Solution:

Use the stable-baselines3 library with the A2C agent instead. It is very easy to implement, and A2C supports MultiDiscrete action spaces out of the box.