Post by S26*_*673

In reinforcement learning with policy gradients, what loss or reward is backpropagated?

I wrote a small Python script that solves various Gym environments with policy gradients.

import gym, os
import numpy as np
#create environment
env = gym.make('CartPole-v0') #note the capitalization: 'Cartpole-v0' raises an error
env.reset()
s_size = len(env.reset())
a_size = 2

#import my neural network code
os.chdir(r'C:\---\---\---\Python Code')
import RLPolicy
policy = RLPolicy.NeuralNetwork([s_size, a_size], ['softmax'], learning_rate=0.000001) #a 3-layer network might be ([s_size, 5, a_size], ['tanh','softmax'], learning_rate=1)
#it supports the sigmoid activation function also
print(policy.weights)

DISCOUNT = 0.95 #parameter for discounting future rewards

#first step
action = policy.feedforward(env.reset()) #env.reset must be called, not passed as a function object
state,reward,done,info = env.step(action)

for t in range(3000):
    done = False
    states = [] #lists for …
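For context on what quantity is typically backpropagated, here is a minimal NumPy sketch of the standard REINFORCE formulation: the surrogate loss is -log π(a_t|s_t) · G_t, where G_t is the discounted return. The function names and the use of a mean over the episode are illustrative choices, not part of the asker's RLPolicy module.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.95):
    """Compute G_t = sum_{k>=t} gamma**(k-t) * r_k for each timestep t."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

def policy_gradient_loss(action_probs, actions, returns):
    """Scalar surrogate loss: mean over t of -log pi(a_t|s_t) * G_t.

    action_probs: (T, n_actions) array of softmax outputs
    actions:      (T,) array of chosen action indices
    returns:      (T,) array of discounted returns G_t
    """
    chosen = action_probs[np.arange(len(actions)), actions]
    return -np.mean(np.log(chosen) * returns)
```

Differentiating this scalar with respect to the network weights (by backpropagation) gives the REINFORCE gradient; the reward itself is never backpropagated directly, it only scales the log-probability terms.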

python reinforcement-learning backpropagation policy-gradient-descent

9 votes · 1 answer · 549 views