I tried putting Bjoern behind Nginx for easy load balancing and DoS/DDoS mitigation.
To my dismay, I found not only that it drops a disturbing share of connections (anywhere between 20% and 50% of the total), but that it actually appears to be faster when not behind Nginx.
This was tested on a machine with 6 GB of RAM and a dual-core 2 GHz CPU.
My application is this:
import bjoern, redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
val = r.get('test:7')

def hello_world(environ, start_response):
    status = '200 OK'
    res = val
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(res)))]
    start_response(status, response_headers)
    return [res]

# despite the name this is not a hello world as you can see
bjoern.run(hello_world, 'unix:/tmp/bjoern.sock')
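Incidentally, the WSGI callable can be sanity-checked in isolation before involving bjoern or a running Redis. This is a sketch of my own (not from the original post): the Redis lookup is stubbed out with a fixed byte string, and a minimal WSGI environ is built with the standard library.

```python
# Sketch: exercise the WSGI handler directly, with the Redis value stubbed.
# "some cached payload" is a stand-in for whatever r.get('test:7') returns.
from wsgiref.util import setup_testing_defaults

val = b"some cached payload"  # stub for the Redis lookup

def hello_world(environ, start_response):
    status = '200 OK'
    res = val
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(res)))]
    start_response(status, response_headers)
    return [res]

# Build a minimal, spec-compliant environ for a fake GET /
environ = {}
setup_testing_defaults(environ)

captured = {}
def start_response(status, headers):
    # Record what the app hands to the server layer
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(hello_world(environ, start_response))
print(captured['status'], body)
```

If this prints a 200 with the stubbed body, the handler itself is fine and any anomaly lies in the serving/proxying layers.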
The nginx configuration:
user www-data;
worker_processes 2;
worker_rlimit_nofile 52000; # worker_connections * 2
pid /run/nginx.pid;

events {
    multi_accept on;
    worker_connections 18000;
    use epoll;
}

http {
    charset utf-8;
    client_body_timeout 65;
    client_header_timeout 65;
    client_max_body_size 10m;
    default_type application/octet-stream;
    keepalive_timeout 20;
    reset_timedout_connection on;
    send_timeout 65;
    server_tokens off;
    sendfile on;
    server_names_hash_bucket_size 64;
    tcp_nodelay off;
    tcp_nopush on;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
And the virtual host:
upstream backend {
    server unix:/tmp/bjoern.sock;
}

server {
    listen 80;
    server_name _;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_buffering off;
        proxy_redirect off;
        proxy_pass http://backend;
    }
}
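One detail worth flagging about this setup (my observation, not part of the original post): the proxied benchmark below shows no keep-alive requests, so every client request also costs nginx a fresh connection to the backend socket. nginx can pool upstream connections instead; a sketch of that variant, assuming the same socket path:

```nginx
# Sketch (assumption, not the poster's actual config): reuse nginx→backend connections
upstream backend {
    server unix:/tmp/bjoern.sock;
    keepalive 32;              # pool of idle upstream connections per worker
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # don't forward "Connection: close"
        proxy_buffering off;
        proxy_redirect off;
        proxy_pass http://backend;
    }
}
```

Whether this closes the gap for a handler this cheap is an open question, but it removes one per-request cost from the proxied path.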
The benchmark I get with Bjoern behind Nginx via a unix socket looks like this:
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx
Server Hostname: 127.0.0.1
Server Port: 80
Document Path: /
Document Length: 148 bytes
Concurrency Level: 1000
Time taken for tests: 0.983 seconds
Complete requests: 10000
Failed requests: 3
(Connect: 0, Receive: 0, Length: 3, Exceptions: 0)
Non-2xx responses: 3
Total transferred: 3000078 bytes
HTML transferred: 1480054 bytes
Requests per second: 10170.24 [#/sec] (mean)
Time per request: 98.326 [ms] (mean)
Time per request: 0.098 [ms] (mean, across all concurrent requests)
Transfer rate: 2979.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 15 4.8 15 35
Processing: 11 28 19.2 19 223
Waiting: 7 24 20.4 16 218
Total: 16 43 20.0 35 225
Percentage of the requests served within a certain time (ms)
50% 35
66% 38
75% 40
80% 40
90% 79
95% 97
98% 109
99% 115
100% 225 (longest request)
10k requests per second, with fewer failed requests this time, but still...
When Bjoern is hit directly, the benchmark looks like this (after changing bjoern.run(hello_world, 'unix:/tmp/bjoern.sock') to bjoern.run(hello_world, "127.0.0.1", 8000)):
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 148 bytes
Concurrency Level: 100
Time taken for tests: 0.193 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 2380000 bytes
HTML transferred: 1480000 bytes
Requests per second: 51904.64 [#/sec] (mean)
Time per request: 1.927 [ms] (mean)
Time per request: 0.019 [ms] (mean, across all concurrent requests)
Transfer rate: 12063.77 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 4
Processing: 1 2 0.4 2 5
Waiting: 0 2 0.4 2 5
Total: 1 2 0.5 2 5
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 3
98% 4
99% 4
100% 5 (longest request)
50k requests per second, and in this case not a single failed request.
I have already tuned system variables such as somaxconn extensively; otherwise I don't think I would be getting that many requests out of Bjoern alone.
How can Bjoern possibly be so much faster than Nginx?
I am genuinely worried about not being able to use Nginx and benefit from what I outlined in the first line, and I hope you can help me find the culprit.
The short and concise question is: how do I proxy_pass Bjoern through Nginx without losing performance? Or should I stick with Bjoern alone and implement load balancing and DoS/DDoS mitigation some other way?
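For context, tuning of this sort typically means kernel settings along these lines (illustrative values of my own, not the ones actually used on this machine):

```
# /etc/sysctl.conf (illustrative values, apply with `sysctl -p`)
net.core.somaxconn = 4096            # cap on the listen() accept backlog
net.ipv4.tcp_max_syn_backlog = 4096  # half-open connection queue
net.core.netdev_max_backlog = 4096   # per-CPU packet backlog
net.ipv4.ip_local_port_range = 1024 65535  # ephemeral ports for the benchmark client
```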
Answered by 小智 (score 7):
I think the answer is given in the thread below:
https://news.ycombinator.com/item?id=2036661
For example, consider this thought experiment: someone here mentioned that Mongrel2 gets 4000 req/sec. Let's replace the name "Mongrel2" with "server A", because the thought experiment is not limited to Mongrel2 but applies to all servers. I assume he was benchmarking a hello world app on his laptop. Suppose a hypothetical server B gets "only" 2000 req/sec. One might now (incorrectly) conclude that:
Server B is much slower.
In a high-traffic production environment, one should use server A rather than server B.
Now put server A behind HAProxy. HAProxy is known as a high-performance HTTP proxy server with minimal overhead. Benchmark this setup, and watch the req/sec drop to roughly 2000-3000 (when benchmarked on a typical dual-core laptop).
What just happened? Server B seemed slow. But the reality is that both server A and server B are extremely fast, and even a minimal amount of extra work has a significant impact on the req/sec number. In this case, the overhead of the extra context switches and the read()/write() calls into the kernel was already enough to cut the req/sec number in half. Any reasonably complex web application logic will make the number drop so far that the performance differences between the servers become negligible.