I just installed the latest version of TensorFlow via pip install tensorflow, and whenever I run a program I get this log message:
W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
Is this bad? How do I fix the error?
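For what it's worth, a quick way to check whether the warning matters in practice (a small sketch, assuming a TensorFlow 2.x build as installed by pip install tensorflow):

import tensorflow as tf

# If this prints an empty list, the missing cudart64_101.dll only means no GPU will be used;
# everything still runs on the CPU, just without GPU acceleration.
print(tf.config.list_physical_devices('GPU'))

If a GPU should be used, the usual fix is installing the CUDA toolkit version the message refers to (cudart64_101.dll belongs to CUDA 10.1) and making sure its bin directory is on PATH.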
I suppose these messages were important the first couple of times, but by now they are just useless noise; they actually make reading and debugging harder.
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:119] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3459] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so.8.0 locally
Is there a way to suppress the ones that merely report success?
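One widely used way to quiet these native log lines (a sketch; the environment variable is read by TensorFlow's C++ logging, so it must be set before the import):

import os
# 1 filters out INFO messages, 2 also filters WARNING, 3 also filters ERROR.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'

import tensorflow as tf

With the value '1', the "successfully opened CUDA library ..." INFO lines disappear while warnings and errors are still shown.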
I am trying to build a classifier using ResNet50 pre-trained weights. The code base is implemented entirely in Keras, the high-level TensorFlow API. The full code is posted at the GitHub link below.
The pre-trained model file is 94.7 MB.
I load the pre-trained file:
from tensorflow.keras.models import Sequential        # imports added here; the original snippet omits them
from tensorflow.keras.applications import ResNet50

# resnet_weight_paths points to the downloaded ResNet50 weights file (defined elsewhere in the original code)
new_model = Sequential()
new_model.add(ResNet50(include_top=False,
                       pooling='avg',
                       weights=resnet_weight_paths))
and fit the model:
# data_generator (a Keras ImageDataGenerator) and IMG_SIZE are defined earlier in the original code
train_generator = data_generator.flow_from_directory(
    'path_to_the_training_set',
    target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=12,
    class_mode='categorical'
)

validation_generator = data_generator.flow_from_directory(
    'path_to_the_validation_set',
    target_size=(IMG_SIZE, IMG_SIZE),
    class_mode='categorical'
)

# train the model (the compile step is not shown in the original snippet)
new_model.fit_generator(
    train_generator,
    steps_per_epoch=3,
    validation_data=validation_generator,
    validation_steps=1
)
In the training dataset I have two folders, dogs and cats, each holding almost 10,000 images. When I run the script, I get the following warnings:
Epoch 1/1
2018-05-12 13:04:45.847298: W tensorflow/core/framework/allocator.cc:101] Allocation of 38535168 exceeds 10% of system memory.
2018-05-12 13:04:46.845021: W tensorflow/core/framework/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory.
2018-05-12 13:04:47.552176: W tensorflow/core/framework/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory.
2018-05-12 13:04:48.199240: W tensorflow/core/framework/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory.
2018-05-12 13:04:48.918930: W tensorflow/core/framework/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory.
2018-05-12 13:04:49.274137: W tensorflow/core/framework/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory.
2018-05-12 13:04:49.647061: W tensorflow/core/framework/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory.
2018-05-12 …
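As an aside, these numbers are consistent with single-layer activations of ResNet50 at this batch size. A back-of-the-envelope sketch, assuming float32 tensors and IMG_SIZE = 224 (an assumption, since IMG_SIZE is not shown above):

# Bytes used by one dense float32 tensor = product of its shape * 4.
# The output of ResNet50's first convolution block for a batch of 12 images at 224x224
# has shape (12, 112, 112, 64):
batch, h, w, c = 12, 112, 112, 64
print(batch * h * w * c * 4)   # 38535168 -- the size of the first allocation warned about

If that assumption holds, the warnings only say that individual activation tensors are large relative to the machine's RAM; reducing batch_size or target_size shrinks them proportionally.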
I installed TensorFlow 1.0.0-rc2 on Windows 7 SP1 x64 Ultimate (Python 3.5.2 | Anaconda custom (64-bit)) with the following command:
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0rc2-cp35-cp35m-win_amd64.whl
When I try to run the test script from https://web.archive.org/web/20170214034751/https://www.tensorflow.org/get_started/os_setup#test_the_tensorflow_installation in Eclipse 4.5 or from the console:
import tensorflow as tf
print('TensorFlow version: {0}'.format(tf.__version__))
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
I get some error messages:
TensorFlow version: 1.0.0-rc2
'Hello, TensorFlow!'
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:943] OpKernel ('op: "BestSplits" device_type: "CPU"') for unknown op: BestSplits
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:943] OpKernel ('op: "CountExtremelyRandomStats" device_type: "CPU"') for unknown op: CountExtremelyRandomStats
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:943] OpKernel ('op: "FinishedNodes" device_type: "CPU"') for unknown op: FinishedNodes
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:943] OpKernel ('op: "GrowTree" device_type: "CPU"') for unknown …

So I was playing around with Google's TensorFlow library, which they released yesterday, and I ran into an annoying bug.
What I did was set up the Python logging facilities as usual, and it turns out that if I import the tensorflow library, all messages in the console start getting doubled. Interestingly, this does not happen if you only use the module-level logging.warn/info/..() functions.
A code example that does not double the messages:
import tensorflow as tf
import logging
logging.warn('test')
And a code example that does double every message:
import tensorflow as tf
import logging
logger = logging.getLogger('TEST')
ch = logging.StreamHandler()
logger.addHandler(ch)
logger.warn('test')
Now, I'm a simple man. I like the functionality of logging, so I use it. The setup with a logger object and an added StreamHandler is something I've seen other people do, and it seems in line with how the module is meant to be used. However, I don't have deep knowledge of the logging library, because it has always just sort of worked.
So any help explaining why the messages get doubled would be much appreciated.
I'm using Ubuntu 14.04.3 LTS with Python 2.7.6, but the bug occurs on every Python 2.7 version I've tried.
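A likely explanation (hedged, since the details depend on the TensorFlow version): importing tensorflow attaches a handler to the root logger, so a record emitted through the named logger is printed once by the StreamHandler added above and a second time after it propagates up to the root logger. A minimal sketch of the usual workaround:

import logging
import tensorflow as tf   # the import is what appears to set up the extra root-level handler

logger = logging.getLogger('TEST')
logger.addHandler(logging.StreamHandler())
logger.propagate = False   # stop records from also reaching the root logger's handler
logger.warn('test')        # now printed only once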
I'm using nosetests to unit-test my TensorFlow code, but it produces so much verbose output that it becomes useless.
The following test:
import unittest
import tensorflow as tf

class MyTest(unittest.TestCase):
    def test_creation(self):
        self.assertEquals(True, False)
produces a huge amount of useless logging when run with nosetests:
FAIL: test_creation (tests.test_tf.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/cebrian/GIT/thesis-nilm/code/deepmodels/tests/test_tf.py", line 10, in test_creation
self.assertEquals(True, False)
AssertionError: True != False
-------------------- >> begin captured logging << --------------------
tensorflow: Level 1: Registering Const (<function _ConstantShape at 0x7f4379131c80>) in shape functions.
tensorflow: Level 1: Registering Assert (<function no_outputs at 0x7f43791319b0>) in shape functions.
tensorflow: Level 1: Registering Print (<function _PrintGrad at 0x7f4378effd70>) in gradient. …

I just installed TensorFlow v2.3 on Anaconda Python. I tried to test the installation with the Python command below:
$ python -c "import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))"
I get the following message:
2020-12-15 07:59:12.411952: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
hello, [[4.]]
From the message, it seems the installation succeeded. But what exactly does This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use …
I'm currently implementing YOLO in TensorFlow and I'm a little surprised by how much memory it takes. On my GPU I can train YOLO with the Darknet framework using a batch size of 64. On TensorFlow I can only use a batch size of 6, and with 8 I already run out of memory. For the test phase I can run with a batch size of 64 without running out of memory.
I'd like to know how to calculate the amount of memory each tensor consumes. Are all tensors kept on the GPU by default? Can I simply compute the total memory consumption as shape * 32 bits?
I also noticed that, since I use momentum, all of my tensors have an accompanying /Momentum tensor. Could that also be using a lot of memory?
I am enlarging my dataset with a method, distorted_inputs, very similar to the one defined in the CIFAR-10 tutorial. Could it be that this part is taking up a huge amount of memory? I believe Darknet does this on the CPU.
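On the arithmetic itself (a sketch, independent of the YOLO code, which is not shown here): a dense float32 tensor occupies the product of its shape times 4 bytes, so the "shape * 32 bits" intuition is right once you divide by 8 to get bytes.

import numpy as np

def tensor_bytes(shape, bytes_per_element=4):
    # Approximate memory of one dense tensor: number of elements * bytes per element (4 for float32).
    return int(np.prod(shape)) * bytes_per_element

# Example with an assumed YOLO-style input batch of 64 RGB images at 448x448:
print(tensor_bytes((64, 448, 448, 3)) / 1e6, "MB")   # ~154 MB for the input batch alone

As for the /Momentum tensors: a momentum optimizer keeps one accumulator of the same shape per trainable variable, so variable memory roughly doubles. During training, though, the intermediate activations kept for backpropagation are usually the dominant cost, which fits the observation that a much larger batch fits at test time.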
I have the following code:
import tensorflow as tf
print("Hello")
The output is:
This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Hello # This is printed about 5 seconds after the message
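If the question is about the several-second pause before Hello appears, a quick check (a sketch; timings vary by machine) is to measure the import itself, since loading TensorFlow's native libraries typically accounts for most of the delay:

import time

start = time.time()
import tensorflow as tf   # loading the native TensorFlow libraries is usually the slow part
print('import took {:.1f}s'.format(time.time() - start))

print('Hello')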
I have a for loop that runs several different deep learning models, and it generates this warning:
WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001B0A8CC90D0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer …

I have created a Keras functional API model and I'm now trying to inspect its layer outputs by creating, for each layer, a sub-model that starts at the original model's input and ends at the layer I choose. I don't understand what the correct way of doing this is without getting:
WARNING:tensorflow:11 out of the last 11 calls to
<function Model.make_predict_function.<locals>.predict_function at 0x7fb8ebc92700>
triggered tf.function retracing.
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly
in a loop, (2) passing tensors with different shapes,
(3) passing Python objects instead of tensors.
For (1), please define your @tf.function outside of the loop.
For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes
that can avoid unnecessary retracing.
For (3), …
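One common way to avoid the retracing in this situation (a sketch under assumptions, since the actual model and loop are not shown): create each sub-model once, outside the loop, and call it directly rather than through predict(), which builds a separate traced predict_function for every model it is called on.

import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the functional-API model from the question.
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(16, activation='relu', name='hidden')(inputs)
outputs = tf.keras.layers.Dense(1, name='out')(x)
model = tf.keras.Model(inputs, outputs)

# Build every sub-model once, before the loop, so nothing is re-created (and re-traced) per iteration.
sub_models = [tf.keras.Model(inputs=model.input, outputs=layer.output)
              for layer in model.layers[1:]]   # skip the Input layer

batch = np.random.rand(4, 32).astype('float32')
for sub in sub_models:
    activation = sub(batch, training=False)   # direct call; no per-model predict_function is traced
    print(activation.shape)

If predict() is really needed (for example, on large datasets), reusing the same sub-model objects across iterations still avoids the repeated tracing the warning complains about.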