I want GPU support for Keras/TensorFlow, which is why I installed tensorflow-gpu. I installed it via pip:
pip install --upgrade tensorflow-gpu
This resulted in:
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
> []
I then found this Stack Overflow answer, which suggests uninstalling tensorflow after installing tensorflow-gpu. That resulted in:
Using TensorFlow backend.
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-3d00d838479b> in <module>()
----> 1 from keras import backend as K
2 K.tensorflow_backend._get_available_gpus()
/raid/ntzioras/VirtualEnvironments/DeepLearning/lib/python3.4/site-packages/keras/__init__.py in <module>()
1 from __future__ import absolute_import
2
----> 3 from . import utils
4 from . import activations
5 from . import applications
/raid/ntzioras/VirtualEnvironments/DeepLearning/lib/python3.4/site-packages/keras/utils/__init__.py in <module>()
4 from . import data_utils
5 from . import …

I would like to group my Const variables in a VBA macro like this:
Private Type Company
Public Const CompanyNameColumns As String = "14"
Public Const CompanyNameStartRow As Integer = 5
End Type
I cannot get this code to run. I think the problem is that a Const cannot be defined inside a Type declaration. Is there a workaround?
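One common workaround (a sketch, not taken from the thread): a Type block may only contain data fields, so the constants have to live outside it. They can still be grouped under a single qualifier by placing them in a dedicated standard module; the module name CompanyConstants below is a hypothetical choice.

```vba
' Standard module named "CompanyConstants" (hypothetical name).
' Public Const is legal at module level, so the constants stay
' grouped and addressable via the module name.
Public Const CompanyNameColumns As String = "14"
Public Const CompanyNameStartRow As Integer = 5
```

Elsewhere in the project the constants can then be referenced as `CompanyConstants.CompanyNameStartRow`, which gives the same grouped feel the Type declaration was aiming for.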
I am trying to solve the AI Gym MountainCar problem with my own Q-learning implementation.
After trying different approaches it started to work really well, but after a while (20k episodes × 1000 samples per episode) I noticed that the values stored in my Q-table grew so large in magnitude that it ended up storing -inf values.
During the simulation I used the following code:
for t in range(SAMPLE_PER_EPISODE):
    observation, reward, done, info = env.step(action)

    R[state, action] = reward
    history.append((state, action, reward))

    max_indexes = np.argwhere(Q[state,] == np.amax(Q[state,])).flatten()
    action = np.random.choice(max_indexes)
For learning, I used the following code after each episode:
#train
latest_best = 0
total_reward = 0
for entry in reversed(history):
    Q[entry[0], entry[1]] = Q[entry[0], entry[1]] + lr * (entry[2] + latest_best * gamma)
    latest_best = np.max(Q[entry[0], :])
    total_reward += entry[2]
With that algorithm I got really good results, but the problem is, as described above, that the Q-values very quickly run off to -inf.
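A plausible explanation (my reading, not confirmed in the thread): the update above keeps *adding* lr * (reward + gamma * latest_best) to Q[s, a] without ever subtracting the current estimate, so with MountainCar's constant -1 rewards the entries can only drift downward without bound. The standard tabular Q-learning update includes a -Q(s, a) term inside the correction, which gives the values a fixed point. A minimal sketch (the reset of latest_best to 0 per episode assumes the episode ends in a terminal state):

```python
import numpy as np

def q_update(Q, history, lr=0.1, gamma=0.99):
    """Standard tabular Q-learning update, applied backwards over one episode.

    history is a list of (state, action, reward) tuples; Q is a 2-D array.
    """
    latest_best = 0.0
    for state, action, reward in reversed(history):
        # TD target minus current estimate: this difference is what keeps
        # the values bounded (updates vanish once Q matches the target).
        td_error = reward + gamma * latest_best - Q[state, action]
        Q[state, action] += lr * td_error
        latest_best = np.max(Q[state, :])
    return Q

# Toy episode with constant -1 rewards: the visited entries converge
# toward finite values instead of diverging to -inf.
Q = np.zeros((3, 2))
episode = [(0, 1, -1.0), (1, 0, -1.0), (2, 1, -1.0)]
for _ in range(10000):
    q_update(Q, episode)
print(np.round(Q, 3))
```

Replaying the same episode repeatedly here drives each visited entry toward -1 and leaves everything finite, whereas the add-only version would keep sinking on every pass.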
I think I implemented the Q algorithm incorrectly, but after changing it to the following implementation it no longer works (nearly) as well as before:
#train
latest_best = 0
total_reward = 0
for entry in reversed(history):
    # Here I changed the code
    Q[entry[0],entry[1]] = Q[entry[0],entry[1]] + …

I get the following error when importing numpy:
Traceback (most recent call last):
File "/home/xxx/Projects/Reinforcement-Learning/cardgame/reinforcement_learning_agent.py", line 3, in <module>
import numpy as np
File "/home/xxx/environments/machinelearning/lib/python3.5/site-packages/numpy/__init__.py", line 126, in <module>
from numpy.__config__ import show as show_config
File "/home/xxx/environments/machinelearning/lib/python3.5/site-packages/numpy/__config__.py", line 9, in <module>
os.environ["PATH"] += os.pathsep + extra_dll_dir
File "/usr/lib/python3.5/os.py", line 725, in __getitem__
raise KeyError(key) from None
KeyError: 'PATH'
I can imagine that this has something to do with the fact that I am working in a virtual environment.
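For context (my reading of the traceback, not a confirmed diagnosis): numpy's __config__.py appends a directory to os.environ["PATH"], and indexing os.environ raises KeyError when the variable is entirely absent from the process environment, which can happen depending on how a virtualenv or launcher set up the process. Ensuring PATH exists before the import avoids the crash:

```python
import os

# Reproduce the failure mode: indexing os.environ raises KeyError
# when the variable is absent (unlike os.environ.get).
saved = os.environ.pop("PATH", None)
try:
    os.environ["PATH"] += os.pathsep + "/extra/dll/dir"
    failed = False
except KeyError:
    failed = True
finally:
    if saved is not None:
        os.environ["PATH"] = saved

# Defensive fix before importing numpy: make sure PATH exists.
os.environ.setdefault("PATH", "")
print(failed)  # True: the append crashed while PATH was unset
```

The setdefault line placed before `import numpy` is a workaround; the cleaner fix is making sure whatever starts the interpreter passes a PATH through to it.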