I'm new to gym/gymnasium. I tried to write a simple Q-learning program, but for some (strange) reason it won't let me skip the rendering part (which takes forever)...
Here is my program:
import gymnasium as gym
import numpy as np
env = gym.make("MountainCar-v0", render_mode="human")
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 25000
SHOW_EVERY = 500
DISCRETE_OS_SIZE = [20] * len(env.observation_space.low)
discrete_os_win_size = (env.observation_space.high - env.observation_space.low) / DISCRETE_OS_SIZE
q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE + [env.action_space.n]))
def get_discrete_state(state):
    discrete_state = (state - env.observation_space.low) / discrete_os_win_size
    return tuple(discrete_state.astype(int))

for episode in range(EPISODES):
    if episode % SHOW_EVERY == 0:
        render = True
    else:
        render = False
    print("Episode:", episode)
    discrete_state = get_discrete_state(tuple(env.reset()[0].astype(int)))
    done = False
    while not done:
        action = np.argmax(q_table[discrete_state])
        new_state, reward, terminated, truncated, _ = env.step(action)
        done = truncated or terminated
        new_discrete_state = get_discrete_state(new_state)
        # Rendering the episode
        # (Even removing this part does not help)
        if render:
            env.render()
        if not done:
            # Updating the Q-table
            max_future_q = np.max(q_table[new_discrete_state])
            current_q = q_table[discrete_state + (action, )]
            new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)
            q_table[discrete_state + (action, )] = new_q
        # If the car made it to the goal
        elif new_state[0] >= env.unwrapped.goal_position:
            q_table[discrete_state + (action, )] = 0
            print("MADE IT ON EPISODE:", episode)
        discrete_state = new_discrete_state
env.close()
What I have tried:
- Removing the env.render() part: does not work.
- Printing discrete_state and manually replacing it with the default value (13, 10): kind of works (those episodes don't render, but then neither do the ones where render is True).

In the gymnasium documentation it says:

By convention, if render_mode is:
- "human": The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn't need to be called. Returns None.
As long as you set render_mode to 'human', the environment renders itself inside every step() call, so your render flag and the explicit env.render() have no effect: every step of every episode is drawn.
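If the intent is to watch only every SHOW_EVERY-th episode, one common workaround (just a sketch, not the only option) is to create the training environment without a render_mode and open a separate, human-rendered environment only for the episodes you want to see. The show_episode helper below is a hypothetical name, and it assumes the q_table and get_discrete_state from the question are already defined:

import gymnasium as gym
import numpy as np

# Training environment created WITHOUT render_mode, so step() never draws a window
env = gym.make("MountainCar-v0")

def show_episode(q_table):
    # Hypothetical helper: replay one greedy episode in a separate, human-rendered env
    show_env = gym.make("MountainCar-v0", render_mode="human")
    state, _ = show_env.reset()
    done = False
    while not done:
        # Pick the greedy action from the learned Q-table
        action = int(np.argmax(q_table[get_discrete_state(state)]))
        state, _, terminated, truncated, _ = show_env.step(action)
        done = terminated or truncated
    show_env.close()

In the training loop you would then drop the render flag and the env.render() call, and instead call show_episode(q_table) whenever episode % SHOW_EVERY == 0.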