The A2C (Advantage Actor-Critic) Algorithm
1. Introduction to the A2C algorithm
(1). Where the high variance comes from
In the vanilla policy gradient (REINFORCE), the gradient is weighted by a Monte Carlo estimate of the return over a whole trajectory. The main sources of the high variance are: the return is estimated from a handful of sampled trajectories, so a single unusually good or bad trajectory can dominate the update; randomness from both the stochastic policy and the environment dynamics compounds over every step of the trajectory; and the gradient is scaled by the raw cumulative reward, whose magnitude can differ wildly between episodes.
(2). Why A2C can mitigate the high-variance problem
A2C introduces the advantage function and, in effect, splits the policy-gradient weighting into two ingredients: the advantage function, which measures how much better the current action is than the alternatives in that state, and the policy itself, which gives the probability distribution over actions. Because each action's log-probability is weighted by its advantage rather than by the raw return, the magnitude of the gradient estimates is kept under control, the variance of the policy gradient drops, and training becomes more stable.
(3). How is the advantage function computed?
To tackle this high variance we can introduce a baseline: when estimating the expected gradient, subtract a baseline from the cumulative reward. This shrinks the gradient, so the gradient-descent steps become gentler and training becomes more stable. If we subtract a baseline from Q(s_t, a_t), the most natural and best choice is the state value V(s_t), and this is exactly what yields the advantage function.
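A quick way to see that such a baseline does not bias the gradient: since b(s) = V(s_t) does not depend on the action, its contribution to the policy-gradient expectation vanishes,

E_{a ~ π_θ(·|s)} [ ∇_θ log π_θ(a|s) · b(s) ] = b(s) · Σ_a π_θ(a|s) ∇_θ log π_θ(a|s) = b(s) · ∇_θ Σ_a π_θ(a|s) = b(s) · ∇_θ 1 = 0,

so subtracting V(s_t) only reduces the variance of the estimate, never its mean.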
The advantage function is computed as:
A(s, a) = Q(s, a) - V(s)
where Q(s, a) is the Q-value of taking action a in state s and V(s) is the value of state s. The advantage function therefore measures how much better taking action a in state s is compared with the other actions.
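As a concrete illustration, the sketch below (an assumed helper, not part of the repository shown later) estimates the advantage from an n-step rollout in exactly this spirit: a discounted, bootstrapped return plays the role of Q(s_t, a_t), and the critic's V(s_t) is subtracted from it. The _discount_rewards and a2c_loss methods in storage.py below do the same thing.

import torch

def estimate_advantage(rewards, values, final_value, dones, gamma=0.99):
    """rewards, values, dones: tensors of shape [rollout_size, num_envs];
    final_value: critic's estimate V(s_T) of the state after the rollout, shape [num_envs]."""
    returns = torch.zeros_like(rewards)
    R = final_value
    for t in reversed(range(rewards.shape[0])):
        # stop bootstrapping across episode boundaries: if the episode ended at
        # step t, nothing that comes afterwards contributes to the return
        R = rewards[t] + gamma * (1.0 - dones[t]) * R
        returns[t] = R
    # returns[t] estimates Q(s_t, a_t); subtracting the baseline V(s_t) gives A(s_t, a_t)
    return returns - values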
(4). The core of A2C: the parallel architecture
The A2C algorithm usually adopts a synchronous parallel setup. Several workers run at the same time, each holding its own copy of the actor and critic networks. The workers interact with their environment instances independently and collect data, and the resulting updates are merged into a single global network, whose parameters are then synchronized back to every worker.
2. The A2C algorithm
(1). How the A2C algorithm works
These workers are synchronous: in each training round the global network waits until every worker has finished its current episode, then gathers the gradients uploaded by the workers and averages them, updates the parameters of the main network with this single averaged gradient, and finally pushes the new parameters to all workers at once. At any moment the workers therefore follow exactly the same policy, and they are all updated at the same time. Since the workers are identical, A2C effectively boils down to two networks: a global network responsible for the parameter updates and one network that interacts with the environments to collect experience; the difference is that the latter runs many environments in parallel and can therefore gather several decorrelated, independent streams of experience.
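The self-contained toy sketch below (the dummy environment and all names are illustrative, not taken from the repository later in this post) shows this synchronous structure: every worker steps in lockstep, the collected experience is pooled into one batch, and a single set of weights is updated with one averaged gradient.

import torch
import torch.nn as nn
from torch.distributions import Categorical

num_envs, obs_dim, num_actions, rollout_size, gamma = 8, 4, 2, 5, 0.99

body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
actor, critic = nn.Linear(64, num_actions), nn.Linear(64, 1)
params = list(body.parameters()) + list(actor.parameters()) + list(critic.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

def dummy_step(actions):                      # stands in for a vectorized environment
    obs = torch.randn(num_envs, obs_dim)      # batched observation, one row per worker
    return obs, (actions == 0).float()        # batched (dummy) rewards

obs = torch.randn(num_envs, obs_dim)
for update in range(50):
    log_probs, values, rewards, entropy = [], [], [], 0.0
    for step in range(rollout_size):          # all workers advance together
        h = body(obs)
        dist = Categorical(logits=actor(h))
        action = dist.sample()
        obs, reward = dummy_step(action)
        log_probs.append(dist.log_prob(action))
        values.append(critic(h).squeeze(-1))
        rewards.append(reward)
        entropy = entropy + dist.entropy().mean()
    with torch.no_grad():                     # bootstrap with V of the last state
        R = critic(body(obs)).squeeze(-1)
    returns = []
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    advantage = torch.stack(returns) - torch.stack(values)
    loss = (-(torch.stack(log_probs) * advantage.detach()).mean()   # policy loss
            + 0.5 * advantage.pow(2).mean()                          # value loss
            - 0.02 * entropy)                                        # entropy bonus
    optimizer.zero_grad()
    loss.backward()                           # one averaged gradient ...
    optimizer.step()                          # ... applied to the single global network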
Beyond the parallel architecture, the A2C loss function consists of three parts: the policy loss, the value loss and the entropy loss. The policy loss is the same as in the basic Actor-Critic algorithm: the policy network is updated with the policy gradient. The value loss is the mean-squared error used to update the value network. The entropy loss encourages exploration by keeping the policy stochastic. The A2C loss is:
loss = policy_loss + self.value_coeff * value_loss - self.entropy_coeff * entropy
The advantage function mainly affects the policy loss; value_coeff and entropy_coeff are hyperparameters that balance the policy loss, the value loss and the entropy loss against each other.
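A minimal sketch of how the three terms are combined (the function name is illustrative; it mirrors the a2c_loss method of storage.py below, with tensors of shape [rollout_size, num_envs]):

import torch

def a2c_loss(log_probs, values, returns, entropy, value_coeff=0.5, entropy_coeff=0.02):
    advantage = returns - values
    # actor: make actions with a positive advantage more likely; detach() keeps the
    # critic from being updated through the policy term
    policy_loss = (-log_probs * advantage.detach()).mean()
    # critic: mean-squared error between the predicted values and the actual returns
    value_loss = advantage.pow(2).mean()
    # entropy enters with a minus sign, so minimizing the loss rewards exploration
    return policy_loss + value_coeff * value_loss - entropy_coeff * entropy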
(2). Code implementation of A2C
The implementation follows https://github.com/rpatrik96/pytorch-a2c.git.
The full code is listed below. The GitHub version dates from 2019, so some of its libraries and functions are no longer supported; the listing below swaps in current libraries and adds comments, while keeping the same directory layout as the GitHub repository.
1.main.py
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

from agent import ICMAgent
from runner import Runner
from utils import get_args

import wandb
import os

# constants

if __name__ == '__main__':
    # wandb_key
    os.environ["WANDB_API_KEY"] = '*'

    """Argument parsing"""
    args = get_args()
    wandb.init(project="A2C", name="A2C", config=args)

    """Environment"""
    # create the atari environments
    # NOTE: this wrapper automatically resets each env if the episode is done
    # make_atari_env builds num_envs parallel Atari environments and returns a
    # vectorized environment (VecEnv) that manages all game instances at once
    env = make_atari_env(args.env_name, n_envs=args.num_envs, seed=args.seed)
    # VecFrameStack wraps the vectorized env so that step()/reset() automatically
    # return stacked frames as the observation
    env = VecFrameStack(env, n_stack=args.n_stack)

    """Agent"""
    # The agent: stores the basic hyperparameters (frame stack, number of parallel
    # envs, action-space size, learning rate), builds the A2C network, initializes
    # the per-environment LSTM buffers and creates the optimizer.
    agent = ICMAgent(args.n_stack, args.num_envs, env.action_space.n, lr=args.lr)

    """Train"""
    # The runner: stores the training hyperparameters (env, agent, number of parallel
    # envs, frame stack, rollout size, number of updates, gradient clipping, loss
    # coefficients, logging options, CUDA flag, seed), sets up the logger, resets the
    # LSTM buffers and runs the training loop on the ICMAgent network.
    runner = Runner(agent, env, args.num_envs, args.n_stack, args.rollout_size,
                    args.num_updates, args.max_grad_norm, args.value_coeff,
                    args.entropy_coeff, args.tensorboard, args.log_dir,
                    args.cuda, args.seed)
    runner.train()
2.agent.py
import torch
import torch.nn as nn
import torch.optim as optim

from model import A2CNet


class ICMAgent(nn.Module):
    def __init__(self, n_stack, num_envs, num_actions, in_size=288, feat_size=256, lr=1e-4):
        """
        Container class of an A2C and an ICM network, the baseline for experimenting
        with other curiosity-based methods.

        :param n_stack: number of frames stacked
        :param num_envs: number of parallel environments
        :param num_actions: size of the action space of the environment
        :param in_size: dimensionality of the input tensor
        :param feat_size: number of the features
        :param lr: learning rate
        """
        super().__init__()

        # constants
        self.n_stack = n_stack          # number of stacked frames
        self.num_envs = num_envs        # number of parallel environments
        self.num_actions = num_actions  # size of the action space
        self.in_size = in_size          # dimensionality of the input tensor
        self.feat_size = feat_size      # number of features
        self.is_cuda = torch.cuda.is_available()

        # networks
        # A2CNet initializes the weights and biases, builds the convolutional feature
        # extractor for the stacked frames, and adds the LSTM layer that captures
        # temporal patterns and long-term dependencies in the input sequence
        self.a2c = A2CNet(self.n_stack, self.num_actions, self.in_size)

        if self.is_cuda:
            self.a2c.cuda()

        # init LSTM buffers with the number of the environments, so that the recurrent
        # state is tracked consistently across all parallel environments
        self.a2c.set_recurrent_buffers(num_envs)

        # optimizer
        self.lr = lr
        self.optimizer = optim.Adam(self.a2c.parameters(), self.lr)
3.model.py
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical


def init(module, weight_init, bias_init, gain=1):
    """
    :param module: module to initialize
    :param weight_init: initialization scheme
    :param bias_init: bias initialization scheme
    :param gain: gain for weight initialization
    :return: initialized module
    """
    weight_init(module.weight.data, gain=gain)
    bias_init(module.bias.data)
    return module


class ConvBlock(nn.Module):
    def __init__(self, ch_in=4):
        """
        A basic block of convolutional layers, consisting:
            - 4 Conv2d
            - LeakyReLU (after each Conv2d)
            - currently also an AvgPool2d (I know, a place for me is reserved in hell for that)

        :param ch_in: number of input channels, which is equivalent to the number of
                      frames stacked together
        """
        super().__init__()

        # constants
        self.num_filter = 32        # number of filters (conv kernels) used for feature extraction
        self.size = 3               # kernel size
        self.stride = 2             # how many pixels the kernel moves per step
        self.pad = self.size // 2   # padding added around the input

        # orthogonal initialization for the weights, constant (zero) initialization for the
        # biases, with the gain recommended for LeakyReLU activations
        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.constant_(x, 0),
                               nn.init.calculate_gain('leaky_relu'))

        # layers
        # ch_in is the number of input channels, i.e. the number of stacked frames.
        # The first conv maps ch_in channels to num_filter channels; the remaining three
        # keep the channel count at num_filter. Each conv is followed by a LeakyReLU in forward().
        self.conv1 = init_(nn.Conv2d(ch_in, self.num_filter, self.size, self.stride, self.pad))
        self.conv2 = init_(nn.Conv2d(self.num_filter, self.num_filter, self.size, self.stride, self.pad))
        self.conv3 = init_(nn.Conv2d(self.num_filter, self.num_filter, self.size, self.stride, self.pad))
        self.conv4 = init_(nn.Conv2d(self.num_filter, self.num_filter, self.size, self.stride, self.pad))

    def forward(self, x):
        x = F.leaky_relu(self.conv1(x))
        x = F.leaky_relu(self.conv2(x))
        x = F.leaky_relu(self.conv3(x))
        x = F.leaky_relu(self.conv4(x))
        # 2x2 average pooling halves the spatial size of each feature map;
        # needed as the input image is 84x84, not 42x42
        # x is a 4D tensor [batch_size, ch, h, w] at this point
        x = nn.AvgPool2d(2)(x)
        # flatten everything except the batch dimension: the linear heads downstream
        # treat each sample as one feature vector, so the spatial layout no longer matters
        return x.view(x.shape[0], -1)  # retain batch size


class FeatureEncoderNet(nn.Module):
    def __init__(self, n_stack, in_size, is_lstm=True):
        """
        Network for feature encoding

        :param n_stack: number of frames stacked beside each other (passed to the CNN)
        :param in_size: input size of the LSTMCell if is_lstm==True else it's the output size
        :param is_lstm: flag to indicate whether an LSTMCell is included after the CNN
        """
        super().__init__()
        # constants
        self.in_size = in_size      # input size of the LSTMCell
        self.h1 = 288               # number of hidden units
        self.is_lstm = is_lstm      # indicates whether the LSTM is needed

        # layers
        # convolutional block (conv layers + activations + pooling) for feature extraction
        self.conv = ConvBlock(ch_in=n_stack)
        if self.is_lstm:
            # LSTMCell that learns temporal patterns and long-term dependencies in the sequence
            self.lstm = nn.LSTMCell(input_size=self.in_size, hidden_size=self.h1)

    def reset_lstm(self, buf_size=None, reset_indices=None):
        """
        Resets the inner state of the LSTMCell

        :param reset_indices: boolean list of the indices to reset (if True then that column
                              will be zeroed); typically the environments that just reached a
                              terminal state
        :param buf_size: buffer size (needed to generate the correct hidden state size),
                         i.e. the number of parallel environments
        :return:
        """
        if self.is_lstm:
            with torch.no_grad():  # no gradients are tracked while resetting
                if reset_indices is None:
                    # reset the hidden and cell state of every environment to zeros,
                    # shape [buf_size, h1], created on the same device as the LSTM weights
                    self.h_t1 = self.c_t1 = torch.zeros(buf_size, self.h1,
                                                        device=self.lstm.weight_ih.device)
                else:
                    # convert the boolean array into a 0/1 tensor on the LSTM's device,
                    # e.g. [True, False, False, True] -> tensor([1, 0, 0, 1])
                    resetTensor = torch.as_tensor(reset_indices.astype(np.uint8),
                                                  device=self.lstm.weight_ih.device)
                    if resetTensor.sum():
                        # broadcast (1 - reset) over the hidden dimension: rows whose
                        # environment finished are zeroed, the others are kept
                        self.h_t1 = (1 - resetTensor.view(-1, 1)).float() * self.h_t1
                        self.c_t1 = (1 - resetTensor.view(-1, 1)).float() * self.c_t1

    def forward(self, x):
        """
        In: [s_t]
            Current state (i.e. pixels) -> 1 channel image is needed

        Out: phi(s_t)
            Current state transformed into feature space

        :param x: input data representing the current state
        :return:
        """
        # convolutional features of the current state (2D tensor after flattening)
        x = self.conv(x)

        if self.is_lstm:
            x = x.view(-1, self.in_size)
            self.h_t1, self.c_t1 = self.lstm(x, (self.h_t1, self.c_t1))  # h_t1 is the output
            return self.h_t1
        else:
            return x.view(-1, self.in_size)


class A2CNet(nn.Module):
    def __init__(self, n_stack, num_actions, in_size=288, writer=None):
        """
        Implementation of the Advantage Actor-Critic (A2C) network

        :param n_stack: number of frames stacked
        :param num_actions: size of the action space, pass env.action_space.n
        :param in_size: input size of the LSTMCell of the FeatureEncoderNet
        """
        super().__init__()

        self.writer = writer

        # constants
        self.in_size = in_size          # input size of the LSTMCell
        self.num_actions = num_actions  # size of the action space

        # networks
        # orthogonal weight / zero bias initialization for the linear heads
        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.constant_(x, 0))

        # the feature encoder (conv layers + LSTM) turns raw observations into a
        # representation shared by the actor and the critic
        self.feat_enc_net = FeatureEncoderNet(n_stack, self.in_size)
        # actor: maps features to one score per action (the policy logits)
        self.actor = init_(nn.Linear(self.feat_enc_net.h1, self.num_actions))  # estimates what to do
        # critic: maps features to a single scalar, the value of the current state
        self.critic = init_(nn.Linear(self.feat_enc_net.h1, 1))  # estimates how good the current state is

    def set_recurrent_buffers(self, buf_size):
        """
        Initializes LSTM buffers with the proper size, should be called after
        instantiation of the network.

        :param buf_size: size of the recurrent buffer (the number of parallel environments)
        :return:
        """
        self.feat_enc_net.reset_lstm(buf_size=buf_size)

    def reset_recurrent_buffers(self, reset_indices):
        """
        :param reset_indices: boolean numpy array containing True at the indices which
                              should be reset
        :return:
        """
        self.feat_enc_net.reset_lstm(reset_indices=reset_indices)

    def forward(self, state):
        """
        feature: current encoded state

        :param state: current state
        :return:
        """
        # encode the state: state is [batch_size, n_stack, h, w], feature is [batch_size, h1]
        feature = self.feat_enc_net(state)

        # calculate policy and value function
        # policy: [batch_size, num_actions], value: [batch_size, 1]
        policy = self.actor(feature)
        value = self.critic(feature)

        # log histograms of the feature, policy and value tensors
        if self.writer is not None:
            self.writer.add_histogram("feature", feature.detach())
            self.writer.add_histogram("policy", policy.detach())
            self.writer.add_histogram("value", value.detach())

        # torch.squeeze removes the singleton dimension of value
        return policy, torch.squeeze(value), feature

    def get_action(self, state):
        """
        Method for selecting the next action

        :param state: current state
        :return: tuple of (action, log_prob_a_t, entropy, value, feature)
        """

        """Evaluate the A2C"""
        # forward pass: policy logits, value estimate and the encoded features
        policy, value, feature = self(state)

        """Calculate action"""
        # 1. convert the policy logits into probabilities with a softmax over the last dimension
        # 2. sample from the resulting categorical distribution
        action_prob = F.softmax(policy, dim=-1)
        cat = Categorical(action_prob)
        action = cat.sample()

        # cat.log_prob(action): log-probability of the sampled action
        # cat.entropy().mean(): entropy of the action distribution, H(x) = -sum p(x) log p(x);
        # the higher the entropy, the more random the action selection. Adding the negative
        # entropy to the loss encourages the policy to stay exploratory.
        return (action, cat.log_prob(action), cat.entropy().mean(), value, feature)
4.runner.py
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter

from storage import RolloutStorage
import wandb


class Runner(object):

    def __init__(self, net, env, num_envs, n_stack, rollout_size=5, num_updates=None,
                 max_grad_norm=0.5, value_coeff=0.5, entropy_coeff=0.02,
                 tensorboard_log=False, log_path="./log", is_cuda=True, seed=42):
        super().__init__()

        # constants
        self.num_envs = num_envs            # number of parallel environments
        self.rollout_size = rollout_size    # env interactions per policy update
        self.num_updates = num_updates      # total number of policy updates; the default was
                                            # elided in the original post, it is always passed
                                            # in from main.py via --num-updates
        self.n_stack = n_stack              # number of stacked frames
        self.seed = seed
        self.max_grad_norm = max_grad_norm  # threshold for gradient clipping

        # loss scaling coefficients
        self.is_cuda = torch.cuda.is_available() and is_cuda

        # objects
        """Tensorboard logger"""
        self.writer = SummaryWriter(comment="statistics", log_dir=log_path) if tensorboard_log else None

        """Environment"""
        self.env = env

        # rollout storage: zero-initialized buffers for the collected experience
        self.storage = RolloutStorage(self.rollout_size, self.num_envs,
                                      self.env.observation_space.shape[0:-1], self.n_stack,
                                      is_cuda=self.is_cuda, value_coeff=value_coeff,
                                      entropy_coeff=entropy_coeff, writer=self.writer)

        """Network"""
        # the ICMAgent wraps the A2C network (conv feature extractor + LSTM + actor/critic heads)
        self.net = net
        self.net.a2c.writer = self.writer

        if self.is_cuda:
            self.net = self.net.cuda()

        # self.writer.add_graph(self.net, input_to_model=(self.storage.states[0],)) --> not working for LSTMCell

    def train(self):
        """Environment reset"""
        obs = self.env.reset()
        # convert the first observation to a tensor and store it as the initial state
        self.storage.states[0].copy_(self.storage.obs2tensor(obs))
        best_loss = np.inf

        for num_update in range(self.num_updates):
            # collect one rollout; returns the bootstrapped final value and the accumulated entropy
            final_value, entropy = self.episode_rollout()

            self.net.optimizer.zero_grad()

            """Assemble loss"""
            loss = self.storage.a2c_loss(final_value, entropy)
            loss.backward(retain_graph=False)

            # gradient clipping to avoid exploding gradients
            nn.utils.clip_grad_norm_(self.net.parameters(), self.max_grad_norm)

            if self.writer is not None:
                self.writer.add_scalar("loss", loss.item())

            self.net.optimizer.step()

            # the storage keeps a lot of data which lets the computation graph grow out of
            # memory, so it is crucial to reset the buffers after each rollout
            self.storage.after_update()

            if loss < best_loss:
                best_loss = loss.item()
                wandb.log({"best loss": best_loss}, step=num_update)
                print("model saved with best loss: ", best_loss, " at update #", num_update)
                torch.save(self.net.state_dict(), "a2c_best_loss")
            elif num_update % 10 == 0:
                wandb.log({"current loss": loss.item()}, step=num_update)
                print("current loss: ", loss.item(), " at update #", num_update)
                self.storage.print_reward_stats(num_update)
            elif num_update % 100 == 0:
                torch.save(self.net.state_dict(), "a2c_time_log_no_norm")

            if self.writer is not None and len(self.storage.episode_rewards) > 1:
                self.writer.add_histogram("episode_rewards", torch.tensor(self.storage.episode_rewards))

        self.env.close()

    def episode_rollout(self):
        episode_entropy = 0  # accumulates the entropy over the rollout (rollout_size=5 by default)
        for step in range(self.rollout_size):
            """Interact with the environments"""
            # call A2C: get the action, its log-probability, the entropy, the value
            # estimate and the encoded features for the stored state at index `step`
            a_t, log_p_a_t, entropy, value, a2c_features = self.net.a2c.get_action(self.storage.get_state(step))
            episode_entropy += entropy

            # step all parallel environments
            obs, rewards, dones, infos = self.env.step(a_t.cpu().numpy())

            # save episode rewards extracted from the info dicts
            self.storage.log_episode_rewards(infos)

            # store this step's data
            self.storage.insert(step, rewards, obs, a_t, log_p_a_t, value, dones)
            # when done=True, reset the LSTM state of that environment for the next episode
            self.net.a2c.reset_recurrent_buffers(reset_indices=dones)

        # Note:
        # get the estimate of the final reward -- that's why we have the critic.
        # detach (no_grad), as the final value is only used as a bootstrap target
        with torch.no_grad():
            _, _, _, final_value, final_features = self.net.a2c.get_action(self.storage.get_state(step + 1))

        # the final value is used for the return estimate, the accumulated entropy
        # for the entropy-regularization term
        return final_value, episode_entropy
5.storage.py
from collections import deque

import numpy as np
import torch
import wandb


class RolloutStorage(object):

    def __init__(self, rollout_size, num_envs, frame_shape, n_stack, feature_size=288,
                 is_cuda=True, value_coeff=0.5, entropy_coeff=0.02, writer=None):
        """
        :param rollout_size: number of steps after the policy gets updated
        :param num_envs: number of environments to train on parallel
        :param frame_shape: shape of a frame as a tuple
        :param n_stack: number of frames concatenated
        :param is_cuda: flag whether to use CUDA
        """
        super().__init__()

        self.rollout_size = rollout_size         # env interactions per policy update
        self.num_envs = num_envs
        self.n_stack = n_stack                   # number of stacked frames
        self.frame_shape = frame_shape           # shape of the observation space
        self.feature_size = feature_size         # dimensionality of the feature space
        self.is_cuda = is_cuda
        self.episode_rewards = deque(maxlen=10)  # rewards of the last 10 episodes
        self.value_coeff = value_coeff           # weight of the critic (value) loss in the total loss
        self.entropy_coeff = entropy_coeff       # weight of the entropy loss in the total loss
        self.writer = writer

        # initialize the buffers with zeros
        self.reset_buffers()

    def _generate_buffer(self, size):
        """
        Generates a `torch.zeros` tensor with the specified size.

        :param size: size of the tensor (tuple)
        :return: tensor filled with zeros of 'size' on the device specified by self.is_cuda
        """
        if self.is_cuda:
            return torch.zeros(size).cuda()
        else:
            return torch.zeros(size)

    def reset_buffers(self):
        """
        Creates and/or resets the buffers - each of size (rollout_size, num_envs) - storing:
        rewards, states, actions, log probabilities, values, dones

        NOTE: calling this function after a `.backward()` ensures that all data not needed in
        the future (which may `requires_grad()`) gets freed, thus avoiding memory leak
        :return:
        """
        # e.g. with rollout_size=5 and num_envs=4: rewards has shape [5, 4], one column per env
        self.rewards = self._generate_buffer((self.rollout_size, self.num_envs))

        # here the +1 comes from the fact that we need an initial state at the beginning of each
        # rollout, which is the last state of the previous rollout;
        # states has shape [rollout_size + 1, num_envs, n_stack, *frame_shape]
        self.states = self._generate_buffer((self.rollout_size + 1, self.num_envs,
                                             self.n_stack, *self.frame_shape))

        # actions, log_probs, values and dones all have shape [rollout_size, num_envs]
        self.actions = self._generate_buffer((self.rollout_size, self.num_envs))
        self.log_probs = self._generate_buffer((self.rollout_size, self.num_envs))
        self.values = self._generate_buffer((self.rollout_size, self.num_envs))
        self.dones = self._generate_buffer((self.rollout_size, self.num_envs))

    def after_update(self):
        """
        Cleaning up buffers after a rollout is finished and copying the last state to index 0
        :return:
        """
        self.states[0].copy_(self.states[-1])
        self.actions = self._generate_buffer((self.rollout_size, self.num_envs))
        self.log_probs = self._generate_buffer((self.rollout_size, self.num_envs))
        self.values = self._generate_buffer((self.rollout_size, self.num_envs))

    def get_state(self, step):
        """
        Returns the observation of index step as a cloned object, otherwise torch.nn.autograd
        cannot calculate the gradients (indexing is the culprit)

        :param step: index of the state
        :return:
        """
        return self.states[step].clone()

    def obs2tensor(self, obs):
        # 1. reorder dimensions for nn.Conv2d (batch, ch_in, width, height):
        #    transpose((0, 3, 1, 2)) moves the channel dimension from last to second (NHWC -> NCHW)
        # 2. convert the numpy array to a FloatTensor normalized to [0, 1] by dividing by 255
        tensor = torch.from_numpy(obs.astype(np.float32).transpose((0, 3, 1, 2))) / 255.
        return tensor.cuda() if self.is_cuda else tensor

    def insert(self, step, reward, obs, action, log_prob, value, dones):
        """
        Inserts new data into the log for each environment at index step

        :param step: index of the step
        :param reward: numpy array of the rewards
        :param obs: observation as a numpy array
        :param action: tensor of the actions
        :param log_prob: tensor of the log probabilities
        :param value: tensor of the values
        :param dones: numpy array of the dones (boolean)
        :return:
        """
        self.rewards[step].copy_(torch.from_numpy(reward))
        self.states[step + 1].copy_(self.obs2tensor(obs))
        self.actions[step].copy_(action)
        self.log_probs[step].copy_(log_prob)
        self.values[step].copy_(value)
        self.dones[step].copy_(torch.ByteTensor(dones.data))

    def _discount_rewards(self, final_value, discount=0.99):
        """
        Computes the discounted reward while respecting - if the episode is not done - the
        estimate of the final reward from that state (i.e. the value function passed as the
        argument `final_value`)

        :param final_value: estimate of the final reward by the critic
        :param discount: discount factor
        :return:
        """

        """Setup"""
        # placeholder tensor of shape [rollout_size, num_envs] to avoid dynamic allocation
        r_discounted = self._generate_buffer((self.rollout_size, self.num_envs))

        """Calculate discounted rewards"""
        # setup the reward chain: if the rollout has brought the env to finish, we proceed with 0
        # as final reward (there is nothing to gain in that episode), otherwise we use the
        # critic's estimate. masked_scatter copies final_value into the positions where the mask
        # (1 - dones[-1]) is 1, i.e. where the episode is not finished.
        R = self._generate_buffer(self.num_envs).masked_scatter((1 - self.dones[-1]).byte(), final_value)

        # iterate backwards over the rollout
        for i in reversed(range(self.rollout_size)):
            # the reward can only change if we are within the episode, i.e. while done==True we use 0
            # NOTE: this update rule also can handle if a new episode has started during the rollout,
            # in that case an intermediate value will be 0
            # todo: add GAE
            R = self._generate_buffer(self.num_envs).masked_scatter((1 - self.dones[-1]).byte(),
                                                                    self.rewards[i] + discount * R)
            r_discounted[i] = R

        return r_discounted

    def a2c_loss(self, final_value, entropy):
        # calculate the advantage, i.e. how good the estimate of the value of the current state was:
        # the discounted return r_t + gamma * V(s_{t+1}) plays the role of Q(s_t, a_t), and
        # subtracting V(s_t) gives the advantage, so the advantage is a function of state and action
        rewards = self._discount_rewards(final_value)
        advantage = rewards - self.values

        # policy (actor) loss: by the policy gradient theorem, weight the log-probability of the
        # taken action with the advantage and take the negative mean, so that actions with a
        # positive advantage become more likely; detach() keeps gradients from flowing into the
        # critic through the advantage
        policy_loss = (-self.log_probs * advantage.detach()).mean()

        # value (critic) loss: squared difference between the actual returns and the predicted values
        value_loss = advantage.pow(2).mean()

        # the A2C loss is the sum of the actor (policy) and critic (value) losses minus the entropy
        # bonus; MEAN is used instead of SUM because batches can be shorter
        # (e.g. if an env is finished already)
        loss = policy_loss + self.value_coeff * value_loss - self.entropy_coeff * entropy

        if self.writer is not None:
            self.writer.add_scalar("a2c_loss", loss.item())
            self.writer.add_scalar("policy_loss", policy_loss.item())
            self.writer.add_scalar("value_loss", value_loss.item())
            self.writer.add_histogram("advantage", advantage.detach())
            self.writer.add_histogram("rewards", rewards.detach())
            self.writer.add_histogram("action_prob", self.log_probs.detach())

        return loss

    def log_episode_rewards(self, infos):
        """
        Logs the episode rewards

        :param infos: infos output of env.step()
        :return:
        """
        # each info dict contains an 'episode' entry once an episode has finished
        for info in infos:
            if 'episode' in info.keys():
                self.episode_rewards.append(info['episode']['r'])

    def print_reward_stats(self, num_update):
        if len(self.episode_rewards) > 1:
            wandb.log({"mean_reward": np.mean(self.episode_rewards),
                       "median_reward": np.median(self.episode_rewards),
                       "min_reward": np.min(self.episode_rewards),
                       "max_reward": np.max(self.episode_rewards)}, step=num_update)
            print("Mean/median reward {:.1f}/{:.1f}, min/max reward {:.1f}/{:.1f}\n".format(
                np.mean(self.episode_rewards), np.median(self.episode_rewards),
                np.min(self.episode_rewards), np.max(self.episode_rewards)))
6.utils.py
import argparse

import torch


def get_args():
    """
    Function for handling command line arguments

    :return: parsed command line arguments
    """
    parser = argparse.ArgumentParser(description='PyTorch A2C')

    # training
    parser.add_argument('--cuda', action='store_true', default=True, help='CUDA flag')
    parser.add_argument('--tensorboard', action='store_true', default=True, help='log with Tensorboard')
    parser.add_argument('--log-dir', type=str, default="../log/a2c", help='log directory for Tensorboard')
    parser.add_argument('--seed', type=int, default=42, metavar='SEED', help='random seed')
    # gradients whose norm exceeds this threshold are clipped to it
    parser.add_argument('--max-grad_norm', type=float, default=.5, metavar='MAX_GRAD_NORM',
                        help='threshold for gradient clipping')
    parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', help='learning rate')

    # environment
    parser.add_argument('--env-name', type=str, default='PongNoFrameskip-v4', help='environment name')
    parser.add_argument('--num-envs', type=int, default=8, metavar='NUM_ENVS',
                        help='number of parallel environments')
    # number of consecutive frames stacked into one state, so that motion is visible to the agent
    parser.add_argument('--n-stack', type=int, default=4, metavar='N_STACK',
                        help='number of frames stacked')
    # number of env interactions collected before each policy update
    parser.add_argument('--rollout-size', type=int, default=5, metavar='ROLLOUT_SIZE',
                        help='rollout size')
    # total number of policy updates, i.e. the length of training
    # NOTE: the default value is missing in the original post, pass --num-updates explicitly
    parser.add_argument('--num-updates', type=int, metavar='NUM_UPDATES',
                        help='number of updates')

    # model coefficients
    # the two curiosity parameters below belong to the ICM module and are not actually used here:
    # curiosity-coeff weights the intrinsic (curiosity) reward against the extrinsic reward,
    # icm-beta balances the ICM's inverse-model and forward-model losses
    # (intrinsic reward = (1 - beta) * inverse loss + beta * forward loss)
    parser.add_argument('--curiosity-coeff', type=float, default=.015, metavar='CURIOSITY_COEFF',
                        help='curiosity-based exploration coefficient')
    parser.add_argument('--icm-beta', type=float, default=.2, metavar='ICM_BETA',
                        help='beta for the ICM module')
    # weight of the value (critic) loss in the total A2C loss
    # (total loss = policy loss + value_coeff * value loss - entropy_coeff * entropy)
    parser.add_argument('--value-coeff', type=float, default=.5, metavar='VALUE_COEFF',
                        help='value loss weight factor in the A2C loss')
    # weight of the entropy bonus, which keeps the policy exploratory and helps avoid
    # premature convergence to a local optimum
    parser.add_argument('--entropy-coeff', type=float, default=.02, metavar='ENTROPY_COEFF',
                        help='entropy loss weight factor in the A2C loss')

    # Argument parsing
    return parser.parse_args()
3. Summary
A2C is a reinforcement-learning algorithm built on the Actor-Critic framework. It combines the strengths of policy-gradient methods (the actor) and value-function estimation (the critic) by optimizing the policy and the value function at the same time, which improves learning efficiency and performance. Training is accelerated by running many environments in parallel, and averaging the gradient over these parallel rollouts stabilizes the updates. In addition, the entropy regularization term keeps the policy exploratory, which improves the algorithm's ability to generalize.