TensorFlow server: I don't want to initialize global variables for every session

Slo*_*oke 15 python tensorflow

EDIT2: The GitHub link below contains possible solutions to the problem of calling a TF model from a process. They include eager execution and a dedicated server process that serves TF model predictions over HTTP requests. I wonder whether a custom server and requests will win me any time compared with initializing global variables each time and calling tf.train.Server, but it seems like a more elegant approach.
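The "dedicated server process" idea from this edit can be sketched without TensorFlow. Everything here (prediction_server, serve_forever, the stub model) is hypothetical scaffolding: the worker loads a model once, then answers requests from a queue until it receives a None sentinel. With a real model, load_model would build the graph, open a single tf.Session and return a closure over model.predict, and the queues could just as well be HTTP endpoints:

```python
import multiprocessing as mp

def prediction_server(load_model, requests, responses):
    """Long-lived worker: load the (expensive) model once, then serve
    prediction requests until a None sentinel arrives."""
    model = load_model()          # done once per process, not per request
    while True:
        item = requests.get()
        if item is None:          # sentinel: shut the server down
            break
        responses.put(model(item))

def serve_forever(load_model):
    """Start the server process and return the queues for talking to it."""
    ctx = mp.get_context("fork")  # 'fork' inherits parent globals cheaply
    requests, responses = ctx.Queue(), ctx.Queue()
    proc = ctx.Process(target=prediction_server,
                       args=(load_model, requests, responses))
    proc.daemon = True
    proc.start()
    return requests, responses, proc
```

Whether this actually beats per-process tf.global_variables_initializer() plus tf.train.Server would need measuring, but the model-load cost is paid exactly once here.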

I will investigate the memory leak, and if it is gone, I will close this question.

EDIT: Added a simple reproducible example of the problem:

https://github.com/hcl14/Tensorflow-server-launched-from-child-process


Background: I am running a TensorFlow server and connecting to it from "forked" processes. Creating (and destroying) processes dynamically is essential for me: I moved the high-load parts of the code there because of a strange memory leak that Python profilers cannot see (threads do not solve the problem). Therefore, I want processes to initialize quickly and start working immediately. Memory is freed only when a process is destroyed.

Experimenting, I found a solution in which the loaded model and the graph are saved into a global variable, then picked up by the child process (which uses 'fork' mode by default), which then calls the server.

Problem: What is strange to me is that after loading the Keras models I cannot lock the graph, which I do not expect to be modified, and I need to run tf.global_variables_initializer() every time a new session is opened in a child process. However, a dummy run in the main process without any session creation works fine. I know that TensorFlow uses the default session in this case, but all the variables on the graph should be initialized after the model has run, so I expected a new session to be able to work with the previously defined graph.

Thus, I think that modifying the model makes Python pickle a lot for the child process ('fork' mode), creating computational and memory overhead.
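For reference, the 'fork' sharing in question can be demonstrated with a stand-in for the model (BIG_MODEL and check_shared are illustrative names, not part of the code above). Under the 'fork' start method the child inherits the parent's globals via copy-on-write, so nothing is pickled just to read them; pages are copied only when written, e.g. when the graph gets modified, which is where the suspected overhead would come from:

```python
import multiprocessing as mp

# Stand-in for a loaded model kept in a module-level global.
BIG_MODEL = {"weights": list(range(1000))}

def child(q):
    # Under 'fork' the child sees the parent's globals directly;
    # nothing is pickled to make BIG_MODEL available here.
    q.put(len(BIG_MODEL["weights"]))

def check_shared():
    ctx = mp.get_context("fork")
    q = ctx.Queue()
    p = ctx.Process(target=child, args=(q,))
    p.start()
    n = q.get()
    p.join()
    return n
```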

Please excuse the large amount of code. The model I use is legacy and a black box, so my problem may be related to it. The TensorFlow version is 1.2 (I cannot upgrade, the model is incompatible), Python 3.6.5.

Also, perhaps my solution is inefficient and there is a better one; I would be very grateful for your advice.

My setup is as follows:

1. The TensorFlow server is started in the main process:

Initializing the server:

def start_tf_server():
    import tensorflow as tf
    cluster = tf.train.ClusterSpec({"local": [tf_server_address]})
    server = tf.train.Server(cluster, job_name="local", task_index=0)    
    server.join() # block process from exiting

In the main process:

p = multiprocessing.Process(target=start_tf_server)
p.daemon=True
p.start() # this process never ends, unless tf server crashes

# WARNING! Graph initialization must be made only after Tf server start!
# Otherwise everything will hang
# I suppose this is because of another session will be 
# created before the server one

# init model graph before branching processes
# share graph in the current process scope
interests = init_interests_for_process()
global_vars.multiprocess_globals["interests"] = interests

2. init_interests_for_process() is a model initializer that loads my legacy models and shares them in a global variable. I do a dummy model pass so that everything is initialized on the graph, and then I want to lock the graph. But it does not work:

def init_interests_for_process():
    # Prevent errors on my GPU and disable tensorflow 
    # complaining about CPU instructions
    import os
    os.environ["CUDA_VISIBLE_DEVICES"]= ""
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    import tensorflow as tf

    from tensorflow.contrib.keras import models

    # create tensorflow graph
    graph = tf.get_default_graph()

    with graph.as_default():

        TOKENIZER = joblib.load(TOKENIZER_FILE)

        NN1_MODEL = models.load_model(NN1_MODEL_FILE)

        with open(NN1_CATEGORY_NAMES_FILE, 'r') as f:
            NN1_CATEGORY_NAMES = f.read().splitlines()

        NN2_MODEL = models.load_model(NN2_MODEL_FILE)

        with open(NN2_CATEGORY_NAMES_FILE, 'r') as f:
            NN2_CATEGORY_NAMES = f.read().splitlines()
        # global variable with all the data to be shared
        interests = {}

        interests["TOKENIZER"] = TOKENIZER
        interests["NN1_MODEL"] = NN1_MODEL
        interests["NN1_CATEGORY_NAMES"] = NN1_CATEGORY_NAMES
        interests["NN2_MODEL"] = NN2_MODEL
        interests["NN2_CATEGORY_NAMES"] = NN2_CATEGORY_NAMES
        interests['all_category_names'] = NN1_CATEGORY_NAMES + \
                                          NN2_CATEGORY_NAMES
        # Reconstruct a Python object from a file persisted with joblib.dump.
        interests["INTEREST_SETTINGS"] = joblib.load(INTEREST_SETTINGS_FILE)

        # dummy run to create graph
        x = tf.contrib.keras.preprocessing.sequence.pad_sequences(
                         TOKENIZER.texts_to_sequences("Dummy string"),
                         maxlen=interests["INTEREST_SETTINGS"]["INPUT_LENGTH"]
                         )
        y1 = NN1_MODEL.predict(x)
        y2 = NN2_MODEL.predict(x)

        # PROBLEM: I want, but cannot lock graph, as child process 
        # wants to run its own tf.global_variables_initializer()
        # graph.finalize()

        interests["GRAPH"] = graph

        return interests

3. Now I spawn the process (actually, the process is spawned from another process, the hierarchy is complicated):

from multiprocessing import Process, Queue

def foo(q):
    result = call_function_which_uses_interests_model(some_data)
    q.put(result)
    return # I've read it is essential for destroying local variables

q = Queue()
p = Process(target=foo, args=(q,))
p.start()
p.join()
result = q.get() # retrieve data

4. Inside this process I call the model:

# retrieve model from global variable
interests = global_vars.multiprocess_globals["interests"]

tokenizer = interests["TOKENIZER"]
nn1_model = interests["NN1_MODEL"]
nn1_category_names = interests["NN1_CATEGORY_NAMES"]
nn2_model = interests["NN2_MODEL"]
nn2_category_names = interests["NN2_CATEGORY_NAMES"]
input_length = interests["INTEREST_SETTINGS"]["INPUT_LENGTH"]

# retrieve graph
graph = interests["GRAPH"]

# open session for server
logger.debug('Trying tf server at ' + 'grpc://'+tf_server_address)
sess = tf.Session('grpc://'+tf_server_address, graph=graph)

# PROBLEM: and I need to run variables initializer:
sess.run(tf.global_variables_initializer())


tf.contrib.keras.backend.set_session(sess)

# finally, make a call to server:
with sess.as_default():        
    x = tf.contrib.keras.preprocessing.sequence.pad_sequences(
                            tokenizer.texts_to_sequences(input_str),
                            maxlen=input_length)
    y1 = nn1_model.predict(x)
    y2 = nn2_model.predict(x)

Everything works if I do not lock the graph and run the variables initializer every time a new process is spawned (except for a memory leak of about 30-90 MB per call, invisible to Python memory profilers). When I do lock the graph, I get errors about uninitialized variables:

FailedPreconditionError (see above for traceback): 
Attempting to use uninitialized value gru_1/bias
       [[Node: gru_1/bias/read = Identity[T=DT_FLOAT, _class=["loc:@gru_1/bias"],
       _device="/job:local/replica:0/task:0/cpu:0"](gru_1/bias)]]

Thanks in advance!

All*_*oie 1

Have you considered TensorFlow Serving? https://www.tensorflow.org/serving/

Generally, you would want to cache sessions, which I believe is the strategy used by TF Serving. That will by far give you the best experience for deploying TF models into a data center.
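A minimal sketch of such session caching, assuming one session per server address within a process (get_session and the factory argument are hypothetical names, not a TF Serving API): the expensive session is built on the first request, and every later call receives the same object:

```python
# Hypothetical per-process session cache.
_SESSION_CACHE = {}

def get_session(address, factory):
    """Return a cached session for `address`, invoking `factory`
    only on first use (cache miss)."""
    if address not in _SESSION_CACHE:
        _SESSION_CACHE[address] = factory(address)
    return _SESSION_CACHE[address]
```

In the question's setup, the factory could plausibly be something like `lambda a: tf.Session(a, graph=graph)`, so tf.global_variables_initializer() would run at most once per process rather than once per call.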

You could also go in the other direction with tf.enable_eager_execution(), which eliminates the need for sessions. Variables are still initialized, although it happens as soon as the Python variable object is created.

But if you really want to create and destroy sessions, you could replace the variables in the graph with constants ("freeze" it). In that case I would also consider disabling graph optimizations, because the first session.run call with a new set of feeds and fetches will by default spend some time optimizing the graph (configured via the RewriterConfig inside the GraphOptions proto).

(Expanded from a comment on the question)