TensorFlow memory usage grows steadily inside a very simple "for loop"

zee*_*hen 2 python tensorflow

I have run into a very strange problem with TensorFlow. I have reduced it to the following minimal version:

I am asking because I need to run a series of trainings: I put them in a for loop and use different parameters on each iteration.

To keep the question simple, I write only a single matrix multiplication in TensorFlow and put this "matrix-multiplication training" inside the for loop (you could of course put other, more complex functions in the loop; the conclusion is the same).

I set the number of iterations to 100,000, which means the training example runs 100,000 times. I print the elapsed time in every loop, and I can see that the time per iteration stays constant, so that part is fine. But the memory usage grows very quickly, and eventually I get the error "memory exhausted" (I expected memory usage to stay the same across iterations).

import tensorflow as tf
import numpy as np
import datetime

for i in range(100000):   # I must put the following code in this for loop
    starttime = datetime.datetime.now()
    graph = tf.Graph()    # a brand-new graph is built on every iteration
    with graph.as_default():
        with tf.device("/cpu:0"):
            a = np.arange(100).reshape(1, -1)
            b = np.arange(10000).reshape(100, 100)
            A = tf.placeholder(tf.float32, [1, 100])
            B = tf.placeholder(tf.float32, [100, 100])

            # a new session is opened on every pass and never closed
            sess = tf.InteractiveSession()

            RESULT = tf.matmul(A, B)
            RESULT_o = sess.run(RESULT, feed_dict={A: a, B: b})
    endtime = datetime.datetime.now()
    print(endtime - starttime)
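The timing print only shows elapsed time per iteration; to watch the memory growth directly, one can also log the process's peak resident set size on each pass. A minimal diagnostic sketch using the standard-library resource module (Unix-only; this is an addition for measurement, not part of the original code):

import resource

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

# e.g. inside the loop above, next to the timing print:
#     if i % 100 == 0:
#         print(i, "peak RSS: %.1f MB" % peak_rss_mb())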


I know the reason: in each iteration the program creates new operations, which keeps increasing memory usage. I want to know whether there is any way to release that memory after each iteration. (The memory problem is the same on GPU.)
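One commonly suggested workaround, for cases where the graph really must be rebuilt from scratch on every iteration, is to run each training in its own subprocess, so the OS reclaims all memory when the process exits. A minimal sketch with the standard multiprocessing module; one_training here is a hypothetical stand-in for a real training run, not code from the question:

import multiprocessing as mp

def one_training(run_id):
    # Import inside the worker: each subprocess builds its own TF runtime,
    # and the OS reclaims all of it when the process exits.
    import numpy as np
    import tensorflow as tf
    A = tf.placeholder(tf.float32, [1, 100])
    B = tf.placeholder(tf.float32, [100, 100])
    result = tf.matmul(A, B)
    with tf.Session() as sess:
        a = np.arange(100).reshape(1, -1)
        b = np.arange(10000).reshape(100, 100)
        return sess.run(result, feed_dict={A: a, B: b})[0, 0]

if __name__ == "__main__":
    # maxtasksperchild=1 restarts the worker after every run, so no state
    # (graphs, sessions, allocator pools) carries over between iterations.
    pool = mp.Pool(processes=1, maxtasksperchild=1)
    for i in range(5):
        print(i, pool.apply(one_training, (i,)))
    pool.close()
    pool.join()

This trades some process-startup overhead per run for a hard guarantee that memory is released; the answer below avoids the problem altogether when the graph can be built once.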

Vik*_*ngh 7

Your code should be structured like this:

import tensorflow as tf
import numpy as np

A = tf.placeholder(tf.float32, [1,100])
B = tf.placeholder(tf.float32, [100,100])
result = tf.matmul(A,B)

init_op = tf.global_variables_initializer()

# Later, when launching the model
with tf.Session() as sess:
    # Run the init operation. 
    # This will make sure that memory is only allocated for the variable once.
    sess.run(init_op)

    for i in range(100000):
        a = np.arange(100).reshape(1,-1)
        b = np.arange(10000).reshape(100,100)
        sess.run(result, feed_dict={A: a, B: b})
        if i % 1000 == 0:
            print(i, "processed")

Here, memory is allocated once when the graph is built, and the same graph and session are reused across all subsequent iterations instead of being recreated.
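To map this back to the original goal of using different parameters on each iteration: the per-run values can also be passed in through feed_dict, so the graph is still built only once. A minimal sketch, where scale is a hypothetical per-run parameter standing in for real training settings:

import tensorflow as tf
import numpy as np

# Build the graph once; per-run values arrive through placeholders.
A = tf.placeholder(tf.float32, [1, 100])
B = tf.placeholder(tf.float32, [100, 100])
scale = tf.placeholder(tf.float32, [])   # hypothetical per-run parameter
result = scale * tf.matmul(A, B)

with tf.Session() as sess:
    a = np.arange(100).reshape(1, -1)
    b = np.arange(10000).reshape(100, 100)
    for run_id, s in enumerate([0.1, 0.5, 1.0]):   # one entry per training run
        out = sess.run(result, feed_dict={A: a, B: b, scale: s})
        print(run_id, out[0, :3])

Since the parameter only changes between runs, a placeholder is the simplest way to vary it without adding new nodes to the graph.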