Define action values in keras-rl
I have a custom environment in keras-rl with the following configuration in the constructor:
def __init__(self, data):
    # Start at the first episode
    self.episode = 1
    # Store the input data
    self.data = data
    # Observation bounds: a single unbounded value
    self.low = numpy.array([-numpy.inf])
    self.high = numpy.array([+numpy.inf])
    self.observation_space = spaces.Box(self.low, self.high, dtype=numpy.float32)
    # Three possible actions (intended to be 0, 1 and 2)
    self.action_space = spaces.Discrete(3)
    self.currentObservation = 0
    self.limit = len(data)
    # Value returned by the environment
    self.reward = None
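For reference, `spaces.Discrete(3)` is documented to sample actions uniformly from {0, 1, 2}. A NumPy-only sketch of the equivalent sampling (not the gym internals):

```python
import numpy as np

rng = np.random.default_rng(42)
# Discrete(3) draws uniformly from {0, 1, 2}; equivalent to integers over [0, 3)
actions = [int(rng.integers(0, 3)) for _ in range(10)]
print(actions)
```

Every sampled action falls in {0, 1, 2}, so values like 4 should never come out of the space itself.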
As you can see, my agent takes one of 3 actions, and depending on the action a different reward is computed in the step() function below:
def step(self, action):
    assert self.action_space.contains(action)
    # Initialize the reward
    self.reward = 0
    # Gain available at the current observation
    self.possibleGain = self.data.iloc[self.currentObservation]['delta_next_day']
    # Action 1: reward is the gain minus the operation cost
    if action == 1:
        self.reward = self.possibleGain - self.operationCost
    # Action 2: reward is the negated gain minus the operation cost
    elif action == 2:
        self.reward = -self.possibleGain - self.operationCost
    # Action 0: no reward
    elif action == 0:
        self.reward = 0
    # Finish the episode
    self.done = True
    self.episode += 1
    self.currentObservation += 1
    if self.currentObservation >= self.limit:
        self.currentObservation = 0
    # Return the state, the reward, and whether the episode is done
    return self.getObservation(), self.reward, self.done, {}
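The reward branching above can be isolated as a standalone function for testing (the names `possible_gain` and `operation_cost` are hypothetical stand-ins for the attributes in the snippet):

```python
def compute_reward(action, possible_gain, operation_cost):
    # Action 1: take the gain, pay the operation cost
    if action == 1:
        return possible_gain - operation_cost
    # Action 2: take the negated gain, pay the operation cost
    if action == 2:
        return -possible_gain - operation_cost
    # Action 0: do nothing, no reward and no cost
    return 0.0

print(compute_reward(1, 5.0, 1.0))  # → 4.0
print(compute_reward(2, 5.0, 1.0))  # → -6.0
print(compute_reward(0, 5.0, 1.0))  # → 0.0
```

Note that only actions 0, 1 and 2 are meaningful here, which is why an action of 4 would silently fall through to the no-reward branch if the assert were removed.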
The problem is that if I print the actions at each episode, they are 0, 2 and 4. I want them to be 0, 1 and 2. How can I force the agent in keras-rl to recognize only these 3 actions?
I'm not sure why self.action_space = spaces.Discrete(3) gives you the actions 0, 2, 4. Since I can't reproduce your error with the snippet you posted, I'd suggest defining your actions with the following:
self.action_space = gym.spaces.Box(low=np.array([1]), high=np.array([3]), dtype=np.int64)
This is what I get when I sample from that action space:
actions = gym.spaces.Box(low=np.array([1]), high=np.array([3]), dtype=np.int64)
for i in range(10):
    print(actions.sample())
[1]
[3]
[2]
[2]
[3]
[3]
[1]
[1]
[2]
[3]
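As a sanity check without gym, sampling whole numbers over [1, 3] inclusive can be mimicked with plain NumPy (a sketch of the behaviour, not the Box internals):

```python
import numpy as np

rng = np.random.default_rng(0)
# An integer Box over [1, 3] yields whole numbers from low to high inclusive;
# Generator.integers with endpoint=True draws from the same range.
samples = rng.integers(low=1, high=3, endpoint=True, size=10)
print(samples)
```

Every draw lands in {1, 2, 3}, matching the sampled values shown above.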
Hope this helps!