Deep reinforcement learning problem: loss decreases but the agent doesn't learn


I hope someone can help me. I'm implementing a basic vanilla policy gradient algorithm for the CartPole-v1 gymnasium environment, but I can't figure out what I'm doing wrong. No matter what I try, during the training loop the loss decreases (so the model is actually learning something), but the total episode reward also decreases, until it settles at about 9-10 steps (which I guess is roughly the minimum number of steps it takes for the pole to fall). So it's learning to do the wrong thing!

I don't know whether it has to do with the signs, the way I compute the loss, the optimizer... I have no idea.

For the discounted reward I'm using:

$ Q_{k,t} = \sum_{i=t}^{T} \gamma^{i-t} r_i $
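
For example, with $\gamma = 0.99$ and episode rewards $r = (1, 1, 1)$, the reward-to-go values are $Q = (1 + 0.99 + 0.99^2,\ 1 + 0.99,\ 1) = (2.9701,\ 1.99,\ 1)$.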

And for the loss:

$ L = -\sum_{k,t} Q_{k,t} \log \pi_{\theta}(a_t \mid s_t) $
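
For reference, this is roughly how that loss can be written with torch.distributions.Categorical (a minimal sketch, separate from my actual code below):

import torch

# Minimal REINFORCE loss sketch: probs is (T, n_actions), actions is a
# LongTensor of shape (T,), returns holds the discounted rewards-to-go Q_{k,t}.
def reinforce_loss(probs, actions, returns):
    dist = torch.distributions.Categorical(probs=probs)
    log_probs = dist.log_prob(actions)      # log pi(a_t | s_t), shape (T,)
    return -(log_probs * returns).sum()     # L = -sum_t Q_t * log pi(a_t | s_t)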

The code is a mix of Maxim Lapan's Deep RL Hands-On book, Karpathy's Pong example (blog and code), and my own tweaks.

Here is my code:

import gymnasium as gym
import torch
from torch import nn
import torch.nn.functional as F
from torch.nn.init import xavier_uniform_
import numpy as np

GAMMA = 0.99
LEARNING_RATE = 0.001
BATCH_SIZE = 4
DEVICE = torch.device('mps')


class XavierLinear(nn.Linear):
    def __init__(self, in_features: int, out_features: int, bias: bool = True, device=None, dtype=None) -> None:
        super().__init__(in_features, out_features, bias, device, dtype)
        xavier_uniform_(self.weight)


class VPG(nn.Module):
    def __init__(self, input_size, output_size):
        super(VPG, self).__init__()
        self.net = nn.Sequential(
            XavierLinear(input_size, 128),
            nn.ReLU(),
            XavierLinear(128, output_size), 
        )

    def forward(self, x):
        return F.softmax(self.net(x), dim=0)


def run_episode(model, env):
    obs = env.reset()[0]
    obs = torch.Tensor(env.reset()[0]).to(DEVICE)
    te = tr = False
    rewards, outputs, actions = [], [], []
    while not (te or tr):
        probs = model(obs)
        action = probs.multinomial(1).item()
        obs, r, te, tr, _ = env.step(action)
        obs = torch.Tensor(obs).to(DEVICE)
        if (te or tr):
            r = 0
        rewards.append(r)
        outputs.append(probs)
        actions.append(action)
    return torch.Tensor(rewards).to(DEVICE), torch.concatenate(outputs).reshape(len(rewards), 2), actions

def discount_rewards(rewards):
    discounted_r = torch.zeros_like(rewards)
    additive_r = 0
    for idx in range(len(rewards)-1, -1, -1):
        to_add = GAMMA * additive_r
        additive_r = to_add + rewards[idx]
        discounted_r[idx] = additive_r
    return discounted_r.to(DEVICE)

def loss_function(discounted_r, probs, actions):
    logprobs = torch.log(probs)
    selected = logprobs[range(probs.shape[0]), actions]
    # discounted_r = (discounted_r - discounted_r.mean()) / discounted_r.std()
    weighted = selected * discounted_r
    return -weighted.sum()

# The actual training loop:

episode_total_reward = 0
batch_losses = torch.Tensor().to(DEVICE)
batch_actions = []
batch_disc_r = torch.Tensor().to(DEVICE)
batch_probs = torch.Tensor().to(DEVICE)
best_ep_reward = 0
losses, ep_total_lenghts = [], [0]

episodes = 0
TARGET_REWARD = 100

env = gym.make("CartPole-v1")
model = VPG(env.observation_space.shape[0],
            2).to(DEVICE)
optim = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

while np.array(ep_total_lenghts)[-100:].mean() < TARGET_REWARD:
    rewards, probs, actions = run_episode(model, env)
    discounted_r = discount_rewards(rewards)
    episode_total_reward = rewards.shape[0]
    ep_total_lenghts.append(episode_total_reward)
    episodes += 1
    batch_actions += actions
    batch_disc_r = torch.concatenate([batch_disc_r, discounted_r])    
    batch_probs = torch.concatenate([batch_probs, probs])    

    if episodes % BATCH_SIZE == 0:
        loss = loss_function(batch_disc_r, batch_probs, batch_actions)
        losses.append(loss.item())
        model.zero_grad()
        loss.backward()
        optim.step()
        batch_actions = []
        batch_disc_r = torch.Tensor().to(DEVICE)
        batch_probs = torch.Tensor().to(DEVICE)
        print(f"Episode {episodes}. Loss: {loss}. Reward: {episode_total_reward}")
print(f"Success in {episodes} episodes. Loss: {loss}. Reward: {episode_total_reward}")

What I've tried: flipping the sign in the loss function, changing the rewards (non-terminal steps = 0 and terminal step = -1), updating the weights manually (adding the gradients, subtracting them...). In every case I get the same result: the loss decreases, but the agent doesn't learn to keep the pole up.

What I expected: the loss decreases and the total episode reward (number of steps played) increases.

1 Answer

The only thing that clearly stands out to me right now is in

VPG.forward

You are taking the softmax over dim=0, but that is usually the batch dimension. You want the softmax over the action space in order to decide which action to take (sampling from the probabilities, using an eps-greedy policy, etc.). So try changing it to dim=-1, like this:

class VPG(nn.Module):
    def __init__(self, input_size, output_size):
        super(VPG, self).__init__()
        self.net = nn.Sequential(
            XavierLinear(input_size, 128),
            nn.ReLU(),
            XavierLinear(128, output_size), 
        )

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)  # softmax over action space
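
To see the difference, here is a small standalone check (just an illustration, not part of your code) showing that dim=0 normalizes down the batch while dim=-1 normalizes over the actions of each state:

import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0],
                       [3.0, 0.5]])   # batch of 2 states, 2 actions each

print(F.softmax(logits, dim=0))    # columns sum to 1: normalized across the batch
print(F.softmax(logits, dim=-1))   # rows sum to 1: a probability over actions per state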

You also reset the environment twice, which doesn't need to happen, although that isn't the cause of the effect you are seeing.
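
A minimal fix for that (only the first lines of run_episode change; the rest stays as in the question):

def run_episode(model, env):
    obs, _ = env.reset()                    # reset once instead of twice
    obs = torch.Tensor(obs).to(DEVICE)
    # ... rest of the function unchanged ...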
