I want to specify which GPU my process runs on. I set it up as follows:
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant(3.0)

with tf.Session() as sess:
    while True:
        print(sess.run(a))
But it still allocates memory on both of my GPUs:
| 0 7479 C python 5437MiB
| 1 7479 C python 5437MiB
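A common workaround (a minimal sketch; the GPU index `0` is an assumption) is to hide the other GPU from the process entirely via `CUDA_VISIBLE_DEVICES`, since `tf.device()` only pins ops to a device and does not stop TensorFlow from initializing, and allocating memory on, every GPU it can see:

```python
import os

# Make only GPU 0 visible to this process. This must be set before
# TensorFlow is imported, because device discovery happens when the
# library initializes.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# import tensorflow as tf  # import only after the variable is set
```

With this in place, `/gpu:0` inside the program refers to the single visible device, and no memory is touched on the hidden GPU.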
When I start training a neural network, it hits CUDA_ERROR_OUT_OF_MEMORY, yet training continues without further errors. Since I want to control how GPU memory is used, I set gpu_options.allow_growth = True. The log is as follows:
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties: …

For certain reasons I want to use an earlier release of TensorFlow (a prebuilt 'tensorflow-**-.whl', not the source code on GitHub). Where can I download previous versions, and how can I find out which CUDA version each of them is compatible with?
(Spring MVC) First, I am not sure whether the code below is correct. If it is correct, then I do not understand how @Autowired works here. If it is wrong, what should I do when more than one class implements the same interface?
public interface UserDao {
    public User findUserByUserName(String username);
}

public class UserDaoImpl1 implements UserDao {
    @Override
    public User findUserByUserName(String username) {
        .......
    }
}

public class UserDaoImpl2 implements UserDao {
    @Override
    public User findUserByUserName(String username) {
        .......
    }
}

@Service
public class UserServiceImpl implements UserService {
    @Autowired
    private UserDao userDao; // how does @Autowired work here?
    @Override
    public User loginCheck(User user) {
        ......
    }
}
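For reference, a sketch of how this ambiguity is usually resolved in Spring (the bean names below are assumptions, following Spring's default of deriving them from the class names): @Autowired injects by type, so with two UserDao implementations registered as beans the container cannot choose one on its own and fails with a NoUniqueBeanDefinitionException; @Qualifier narrows the match to one named bean.

```java
// Sketch (assumes Spring; bean names "userDaoImpl1"/"userDaoImpl2"
// are the defaults derived from the class names).
@Repository("userDaoImpl1")
public class UserDaoImpl1 implements UserDao { /* ... */ }

@Repository("userDaoImpl2")
public class UserDaoImpl2 implements UserDao { /* ... */ }

@Service
public class UserServiceImpl implements UserService {
    // @Autowired matches by type; @Qualifier picks the named bean
    // when several implementations of the interface exist.
    @Autowired
    @Qualifier("userDaoImpl1")
    private UserDao userDao;
}
```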
[xx_xx@xxxx ~]$ python multiply.py
Traceback (most recent call last):
File "multiply.py", line 2, in <module>
import tensorflow as tf
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/__init__.py", line 4, in <module>
from tensorflow.python import *
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 22, in <module>
from tensorflow.python.client.client_lib import *
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/python/client/client_lib.py", line 35, in <module>
from tensorflow.python.client.session import InteractiveSession
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 11, in <module>
from tensorflow.python import pywrap_tensorflow as tf_session
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/home/luohao/.usr/bin/python2.7.10/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, …

In my neural network I create some tf.Variable objects as follows:
weights = {
    'wc1_0': tf.Variable(tf.random_normal([5, 5, 3, 64])),
    'wc1_1': tf.Variable(tf.random_normal([5, 5, 3, 64]))
}
biases = {
    'bc1_0': tf.Variable(tf.constant(0.0, shape=[64])),
    'bc1_1': tf.Variable(tf.constant(0.0, shape=[64]))
}
How can I save only the variables in weights and biases, and not the other variables, after a specific number of iterations?
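A sketch of one way to do this (assuming the TensorFlow 1.x `tf.train.Saver` API): `Saver` accepts a `var_list` argument, and passing it only the `weights` and `biases` variables keeps everything else out of the checkpoint. Building that argument from the two dicts is plain Python:

```python
def build_var_list(weights, biases):
    """Merge the two dicts into the single name -> variable mapping
    that tf.train.Saver's var_list argument expects."""
    var_list = {}
    var_list.update(weights)
    var_list.update(biases)
    return var_list

# With TensorFlow available (names follow the question's dicts;
# train_op, sess, and save_every are assumptions for illustration):
#   saver = tf.train.Saver(var_list=build_var_list(weights, biases))
#   for step in range(num_steps):
#       sess.run(train_op)
#       if step % save_every == 0:   # every `save_every` iterations
#           saver.save(sess, 'checkpoints/model', global_step=step)
```

Restoring with the same `var_list` then touches only those variables, leaving the rest of the graph to be initialized normally.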
After reading about it many times, I am still struggling with the "shift and stitch" trick in FCNs. Could someone give an intuitive explanation?