I am using urllib2 to load files from ftp and http servers.

Some servers only support one connection per IP. The problem is that urllib2 does not close the connection immediately. Look at this example program:
from urllib2 import urlopen
from time import sleep

url = 'ftp://user:pass@host/big_file.ext'

def load_file(url):
    f = urlopen(url)
    loaded = 0
    while True:
        data = f.read(1024)
        if data == '':
            break
        loaded += len(data)
    f.close()
    #sleep(1)
    print('loaded {0}'.format(loaded))

load_file(url)
load_file(url)
The code loads two files (here the two files are the same) from an ftp server that only supports a single connection. This prints the following log:
loaded 463675266
Traceback (most recent call last):
  File "conection_test.py", line 20, in <module>
    load_file(url)
  File "conection_test.py", line 7, in load_file
    f = urlopen(url)
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 391, in open
…
I want to run some performance tests against one of our web servers to see how the server handles a large number of persistent connections. Unfortunately, I am not terribly familiar with HTTP or web testing. Here is the Python code I have so far:

import http.client
import argparse
import threading

def make_http_connection():
    conn = http.client.HTTPConnection(options.server, timeout=30)
    conn.connect()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("num", type=int, help="Number of connections to make (integer)")
    parser.add_argument("server", type=str, help="Server and port to connect to. Do not prepend \'http://\' for this")
    options = parser.parse_args()

    for n in range(options.num):
        connThread = threading.Thread(target=make_http_connection, args=())
        connThread.daemon = True
        connThread.start()

    while True:
        try:
            pass
        except KeyboardInterrupt:
            break
My main question is: how do I keep all of these connections alive? I have set a long timeout, but that is a very crude mechanism and I am not even sure it affects the connection. Would requesting a byte or two every once in a while do it?
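For what it is worth, HTTP/1.1 connections are persistent by default, but servers drop idle ones, so each thread could issue a cheap request on a fixed interval. A rough sketch (the path "/" and the 10-second interval are assumptions, and the response must be fully read before the connection can be reused):

import http.client
import time

def keep_alive(server, interval=10):
    conn = http.client.HTTPConnection(server, timeout=30)
    conn.connect()
    while True:
        conn.request("HEAD", "/")    # cheap request to keep the socket busy
        resp = conn.getresponse()
        resp.read()                  # drain the response so the connection can be reused
        time.sleep(interval)         # assumed interval; tune to the server's idle timeout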
(Also, on an unrelated note: is there a nicer way to wait for a keyboard interrupt than the ugly while True: block at the end of my code?)
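On that last point, a common pattern is to put the try/except around the loop rather than inside it, and to sleep in the loop so the main thread does not spin at 100% CPU; just a sketch:

import time

try:
    while True:
        time.sleep(1)    # wake up once a second; Ctrl+C interrupts the sleep
except KeyboardInterrupt:
    pass                 # fall through and let the daemon threads die with the process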