Tags: java, memory-leaks, tensorflow
The following test code leaks memory:
private static final float[] X = new float[]{1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0};

public void testTensorFlowMemory() {
    // create a graph and session
    try (Graph g = new Graph(); Session s = new Session(g)) {
        // create a placeholder x and a const for the dimension to do a cumulative sum along
        Output x = g.opBuilder("Placeholder", "x").setAttr("dtype", DataType.FLOAT).build().output(0);
        Output dims = g.opBuilder("Const", "dims").setAttr("dtype", DataType.INT32).setAttr("value", Tensor.create(0)).build().output(0);
        Output y = g.opBuilder("Cumsum", "y").addInput(x).addInput(dims).build().output(0);
        // loop a bunch to test memory usage
        for (int i = 0; i < 10000000; i++) {
            // create a tensor from X
            Tensor tx = Tensor.create(X);
            // run the graph and fetch the resulting y tensor
            Tensor ty = s.runner().feed("x", tx).fetch("y").run().get(0);
            // close the tensors to release their resources
            tx.close();
            ty.close();
        }
        System.out.println("non-threaded test finished");
    }
}
Is there something obvious I'm doing wrong? The basic flow is to create a graph and a session on that graph, then create a placeholder and a constant so a cumulative sum can be run over a tensor fed in as x. After running the resulting y operation, I close both the x and y tensors to release their memory resources.
One thing I believe has helped narrow it down so far: removing the Tensor ty line eliminates the leak, so the problem seems to be there. Any ideas? Thanks! Also, here is a Github project that demonstrates the issue with both a threaded test (to grow memory faster) and an unthreaded test (to show it is not due to threading). It uses maven and can be run simply with:
mvn test
I believe there is indeed a leak (in particular, a TF_DeleteStatus call missing in the JNI code for a corresponding allocation). Thanks for the detailed instructions to reproduce. I'd encourage you to file an issue at http://github.com/tensorflow/tensorflow/issues, and hopefully it will be fixed before the final 1.2 release.
(Relatedly, there is also a leak outside the loop, since the Tensor object created by Tensor.create(0) is never closed.)
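The general discipline behind both fixes is to guarantee that every native-backed object is closed exactly once, which try-with-resources does automatically (org.tensorflow.Tensor implements AutoCloseable). A minimal sketch of the pattern with no TensorFlow dependency, where FakeTensor is a hypothetical stand-in for Tensor:

```java
// Sketch: try-with-resources guarantees close() is called for every
// resource opened in the loop, even if an exception is thrown.
// FakeTensor is a hypothetical stand-in for org.tensorflow.Tensor.
public class CloseDemo {
    static int openCount = 0;

    static class FakeTensor implements AutoCloseable {
        FakeTensor() { openCount++; }            // simulate a native allocation
        @Override public void close() { openCount--; }  // simulate releasing it
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            try (FakeTensor tx = new FakeTensor();
                 FakeTensor ty = new FakeTensor()) {
                // ... use tx and ty, e.g. feed and fetch a session run ...
            } // both closed here, in reverse order of creation
        }
        System.out.println("open after loop: " + openCount); // prints "open after loop: 0"
    }
}
```

Applied to the question's code, this means wrapping tx and ty (and the Tensor.create(0) fed to the Const builder) in try-with-resources instead of calling close() by hand.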
Update: this has been fixed, and 1.2.0-rc1 should no longer have this problem.
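For a maven project like the one linked above, picking up the fix is a matter of bumping the TensorFlow dependency version. A sketch of the POM fragment, assuming the org.tensorflow:libtensorflow coordinates on Maven Central (your project may use a different TensorFlow artifact):

```xml
<!-- Assumed coordinates; check which TensorFlow artifact your project actually uses. -->
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>libtensorflow</artifactId>
  <version>1.2.0-rc1</version> <!-- or any later release containing the fix -->
</dependency>
```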