Max*_*kov 6 python artificial-intelligence q-learning keras tensorflow
Maybe my question looks stupid.
I'm studying the Q-learning algorithm. To understand it better, I tried to rewrite the TensorFlow code of this FrozenLake example into Keras code.
My code:
import gym
import numpy as np
import random
from keras.layers import Dense
from keras.models import Sequential
from keras import backend as K
import matplotlib.pyplot as plt
%matplotlib inline

env = gym.make('FrozenLake-v0')

model = Sequential()
model.add(Dense(16, activation='relu', kernel_initializer='uniform', input_shape=(16,)))
model.add(Dense(4, activation='softmax', kernel_initializer='uniform'))

def custom_loss(yTrue, yPred):
    return K.sum(K.square(yTrue - yPred))

model.compile(loss=custom_loss, optimizer='sgd')

# Set learning parameters
y = .99
e = 0.1
# Create lists to contain total rewards and steps per episode
jList = []
rList = []
num_episodes = 2000

for i in range(num_episodes):
    current_state = env.reset()
    rAll = 0
    d = False
    j = 0
    while j < 99:
        j += 1
        current_state_Q_values = model.predict(np.identity(16)[current_state:current_state+1], batch_size=1)
        action = np.reshape(np.argmax(current_state_Q_values), (1,))
        if np.random.rand(1) < e:
            action[0] = env.action_space.sample()  # random action
        new_state, reward, d, _ = env.step(action[0])
        rAll += reward
        jList.append(j)
        rList.append(rAll)
        new_Qs = model.predict(np.identity(16)[new_state:new_state+1], batch_size=1)
        max_newQ = np.max(new_Qs)
        targetQ = current_state_Q_values
        targetQ[0, action[0]] = reward + y*max_newQ
        model.fit(np.identity(16)[current_state:current_state+1], targetQ, verbose=0, batch_size=1)
        current_state = new_state
        if d == True:
            # Reduce chance of random action as we train the model.
            e = 1./((i/50) + 10)
            break

print("Percent of successful episodes: " + str(sum(rList)/num_episodes) + "%")
When I run it, it does not work well: Percent of successful episodes: 0.052%
plt.plot(rList)
The original TensorFlow code does much better: Percent of successful episodes: 0.352%
plt.plot(rList)
What am I doing wrong?
Besides setting use_bias=False as @Maldus mentioned in the comments, another thing you could try is starting from a higher epsilon value (e.g. 0.5 or 0.75). One trick might be to only decrease the epsilon value when you actually reach the goal, i.e. don't decrease epsilon at the end of every episode. That way your player can keep exploring the map randomly until it starts to converge on a good route, and only then is it a good idea to reduce the epsilon parameter.
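That epsilon schedule can be sketched like this (a minimal illustration only; `update_epsilon`, `reached_goal`, and `successes` are hypothetical names, not from the original code — the decay formula just mirrors the `1./((i/50) + 10)` shape used in the question, driven by successful episodes instead of all episodes):

```python
def update_epsilon(epsilon, reached_goal, successes):
    """Return (new_epsilon, new_success_count) after an episode ends.

    Epsilon is only decayed when the episode reached the goal, so
    failed episodes leave the exploration rate untouched.
    """
    if reached_goal:
        successes += 1
        # Same shape of schedule as in the question's code,
        # but counting only successful episodes.
        epsilon = 1. / ((successes / 50) + 10)
    return epsilon, successes

# Start with a high exploration rate, as suggested above.
epsilon, successes = 0.5, 0

# A failed episode: epsilon stays at 0.5, exploration continues.
epsilon, successes = update_epsilon(epsilon, False, successes)

# A successful episode: epsilon drops toward greedy action selection.
epsilon, successes = update_epsilon(epsilon, True, successes)
```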
I've actually implemented a similar model in Keras in this gist, using convolutional layers instead of dense layers. Managed to get it working within 2000 episodes. Might be of some help to others :)