I am following this tutorial (https://www.tensorflow.org/agents/tutorials/1_dqn_tutorial?hl=en) on implementing the Deep Q-Network algorithm with TF-Agents to solve the cart-pole problem with RL.
I create the q_net:

fc_layer_params = (100,)
q_net = q_network.QNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=fc_layer_params)
When I call q_net.summary(), it shows that the EncodingNetwork layer has 500 parameters:
Model: "QNetwork"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
EncodingNetwork (EncodingNet multiple 500
_________________________________________________________________
dense_1 (Dense) multiple 202
=================================================================
Total params: 702
Trainable params: 702
Non-trainable params: 0
_________________________________________________________________
I would like to know why this layer has 500 parameters, given that for the cart-pole environment the observation spec and action spec are:
Observation Spec:
BoundedArraySpec(shape=(4,), dtype=dtype('float32'), name='observation', minimum=[-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], maximum=[4.8000002e+00 3.4028235e+38 …

Tags: python, artificial-intelligence, reinforcement-learning, neural-network, tensorflow