How to use a custom Openai gym environment with Openai stable-baselines RL algorithms?
I have been trying to use the custom OpenAI gym environment for the fixed-wing UAV from https://github.com/eivindeb/fixed-wing-gym by testing it with the OpenAI stable-baselines algorithms, but I have been running into issues for several days now. My baseline is the CartPole example "Multiprocessing: Unleashing the Power of Vectorized Environments" from https://stable-baselines.readthedocs.io/en/master/guide/examples.html#multiprocessing-unleashing-the-power-of-vectorized-environments, since I need to supply arguments to the environment and I am trying to use multiprocessing, which I believe this example provides.
I modified the baseline example as follows:
import gym
import numpy as np
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import ACKTR, PPO2
from gym_fixed_wing.fixed_wing import FixedWingAircraft


def make_env(env_id, rank, seed=0):
    """
    Utility function for multiprocessed env.

    :param env_id: (str) the environment ID
    :param num_env: (int) the number of environments you wish to have in subprocesses
    :param seed: (int) the initial seed for RNG
    :param rank: (int) index of the subprocess
    """
    def _init():
        env = FixedWingAircraft("fixed_wing_config.json")
        #env = gym.make(env_id)
        env.seed(seed + rank)
        return env
    set_global_seeds(seed)
    return _init
if __name__ == '__main__':
    env_id = "fixed_wing"
    #env_id = "CartPole-v1"
    num_cpu = 4  # Number of processes to use
    # Create the vectorized environment
    env = SubprocVecEnv([lambda: FixedWingAircraft for i in range(num_cpu)])
    #env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])

    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)

    obs = env.reset()
    for _ in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        env.render()
The error I keep getting is the following:
Traceback (most recent call last):
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/fixed-wing-gym/gym_fixed_wing/ACKTR_fixedwing.py", line 38, in <module>
    model = PPO2(MlpPolicy, env, verbose=1)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 104, in __init__
    self.setup_model()
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 134, in setup_model
    n_batch_step, reuse=False, **self.policy_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 660, in __init__
    feature_extraction="mlp", **_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 540, in __init__
    scale=(feature_extraction == "cnn"))
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 221, in __init__
    scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 117, in __init__
    self._obs_ph, self._processed_obs = observation_input(ob_space, n_batch, scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/input.py", line 51, in observation_input
    type(ob_space).__name__))
NotImplementedError: Error: the model does not support input space of type NoneType
I am not sure what to actually pass in as the env_id to the make_env(env_id, rank, seed=0) function. I also think the VecEnv setup for parallel processes is incorrect.

I am coding in Python v3.6 with the PyCharm IDE on Ubuntu 18.04.

Any suggestions at this point would be very helpful!

Thank you.
You have created your custom environment, but you have not registered it with the openai gym interface. That is what env_id refers to: every environment in gym is set up by calling its registered name.

So basically, all you need to do is follow the setup instructions here: create the appropriate __init__.py and setup.py scripts, and follow the same file structure.
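As a rough sketch of what that registration boilerplate looks like (the environment id "FixedWing-v0", the kwargs key, and the version number below are assumptions for illustration, not the repository's actual values):

```python
# file: gym_fixed_wing/__init__.py
# Registers the environment under an id that gym.make() understands.
from gym.envs.registration import register

register(
    id="FixedWing-v0",                                       # hypothetical id
    entry_point="gym_fixed_wing.fixed_wing:FixedWingAircraft",
    kwargs={"config_path": "fixed_wing_config.json"},        # kwarg name is a guess
)

# file: setup.py
# Minimal packaging script so `pip install -e .` works.
from setuptools import setup, find_packages

setup(
    name="gym_fixed_wing",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["gym"],
)
```

Constructor arguments registered via kwargs are passed through automatically on every gym.make("FixedWing-v0") call, which is how you supply your config file while still using the registry.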
Finally, install your package locally with pip install -e . from the environment's directory.
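Separately, note that your uncommented SubprocVecEnv line passes lambda: FixedWingAircraft, which returns the class object itself rather than a constructed environment, so the policy sees no observation_space and you get exactly the "NoneType" error in your traceback. A dependency-free sketch of the difference (FakeEnv is a stand-in class, not the real one):

```python
# Stand-in for a gym env (hypothetical): real envs set observation_space in __init__.
class FakeEnv:
    def __init__(self, config):
        self.observation_space = "Box(4,)"

broken_factory = lambda: FakeEnv               # returns the class object itself
working_factory = lambda: FakeEnv("cfg.json")  # returns a constructed instance

print(getattr(broken_factory(), "observation_space", None))  # None -> "NoneType" space
print(working_factory().observation_space)                   # Box(4,)
```

Once the environment is registered, your commented-out make_env(env_id, i) factories (which call gym.make(env_id) inside _init and return real instances) are the right thing to pass to SubprocVecEnv.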