Post by Ant*_*kov

Twisted getPage(): process memory grows when requesting a large number of pages

I'm writing a script that constantly (every 30-120 seconds) fetches status information from a large number of URLs (Icecast/Shoutcast server status pages), roughly 500 of them. It works fine, but the resident size of the Python process keeps growing. I'm fairly sure it would grow without bound: I left it running for a few hours and it went from about 30 MB RES at startup to 1.2 GB.
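For reference, here is a minimal sketch of how the growth can be tracked from inside the process itself, using the standard library resource module together with twisted.internet.task.LoopingCall; the 60-second interval and the report_rss helper are my own placeholders, not part of the original script:

from twisted.internet import task
import resource

def report_rss():
    # ru_maxrss is the peak resident set size, in kilobytes on Linux
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print "RSS peak: %d MB" % (peak_kb // 1024)

# fire once a minute for as long as the reactor is running
task.LoopingCall(report_rss).start(60)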

I've simplified the script to the following to make it easier to follow:

from twisted.internet import reactor
from twisted.web.client import getPage
from twisted.enterprise import adbapi

def ok(res, url):
    print "OK: " + str(url)
    # schedule the next fetch of the same URL in 30 seconds
    reactor.callLater(30, load, url)

def error(res, url):
    print "FAIL: " + str(url)
    # retry the URL after 30 seconds even on failure
    reactor.callLater(30, load, url)

def db_ok(res):
    # each database row is expected to hold a URL in column 1
    for item in res:
        if item[1]:
            print "ADDED: " + str(item[1])
            reactor.callLater(30, load, item[1])

def db_error(res):
    print "Database error: " + str(res)
    reactor.stop()

def load(url):
    # fetch the page; ok()/error() re-schedule the next request
    d = getPage(url,
                headers={"Accept": "text/html"},
                timeout=30)
    d.addCallback(ok, url)
    d.addErrback(error, url)


dbpool = adbapi.ConnectionPool("MySQLdb", "host", "user", "passwd", …
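The last line above is cut off in the original post. Judging from the db_ok/db_error callbacks, the missing tail presumably queries the database for the URL list and then starts the reactor; the sketch below is a hypothetical reconstruction only, and the table and column names are placeholders I made up:

# hypothetical continuation of the truncated script above
d = dbpool.runQuery("SELECT id, url FROM stations")  # placeholder table/columns
d.addCallback(db_ok)
d.addErrback(db_error)

reactor.run()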

Tags: python, memory-leaks, cpython, twisted

5 votes · 1 answer · 682 views
