Cor*_*son · webserver · http · nginx
I have a Node.js application server sitting behind an Nginx configuration that has been working well. Anticipating some load growth, I figured I would set up another Nginx to serve the static files for the Node.js app server. So, essentially, I now have an Nginx reverse proxy in front of both Nginx and Node.js.

When I reloaded Nginx and let it start serving requests routed (Nginx <-> Nginx) for /publicfile/, I noticed a significant drop in speed. Something that took Nginx <-> Node.js around 3 seconds took Nginx <-> Nginx around 15 seconds!

I'm new to Nginx and have spent most of the day on this, so I finally decided to post for some community help. Thanks!
The nginx.conf of the web-facing Nginx:
http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] $request '
                    '"$status" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format bytes '$body_bytes_sent';
    access_log /var/log/nginx/access.log main;

    # Mime settings
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Compression
    gzip on;
    gzip_comp_level 9;
    gzip_min_length 512;
    gzip_buffers 8 64k;
    gzip_types text/plain text/css text/javascript
               application/x-javascript application/javascript;
    gzip_proxied any;

    # Proxy settings
    #proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;
    real_ip_header CF-Connecting-IP;

    # SSL PCI Compliance
    # - removed for brevity

    # Error pages
    # - removed for brevity

    # Cache
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 3d;

    proxy_http_version 1.1; # recommended with keepalive connections

    # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    upstream backend {
        # my 'backend' server IP address (local network)
        server xx.xxx.xxx.xx:80;
    }

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
The web-facing Nginx server block that forwards static files to the Nginx behind it (on another box):
server {
    listen 80 default;
    access_log /var/log/nginx/nginx.log main;

    # pass static assets on to the app server nginx on port 80
    location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
        proxy_pass http://backend;
    }
}
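One common source of slowness in an nginx-to-nginx proxy chain is that, without upstream keepalive, nginx opens a fresh TCP connection to the backend for every proxied request. Since `proxy_http_version 1.1;` is already set globally, a sketch of enabling keepalive to the backend might look like this (the `keepalive 32` value is an illustrative assumption, not part of the original config):

```nginx
upstream backend {
    server xx.xxx.xxx.xx:80;
    keepalive 32;    # keep up to 32 idle connections to the backend per worker
}

server {
    listen 80 default;

    location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # override the global "upgrade" value so keepalive is honored
    }
}
```

Note that the global `proxy_set_header Connection "upgrade";` in nginx.conf otherwise applies to every proxied request, which prevents connection reuse; clearing it per-location as above (and reserving the upgrade headers for actual WebSocket locations via `$connection_upgrade`) lets keepalive take effect.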
And finally, the "backend" server:
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    sendfile_max_chunk 32;
    # server_tokens off;
    # server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        root /home/admin/app/.tmp/public;
        listen 80 default;
        access_log /var/log/nginx/app-static-assets.log;

        location /publicfile {
            alias /home/admin/APP-UPLOADS;
        }
    }
}
@keenanLawrence mentioned the sendfile_max_chunk directive in the comments above.

After setting sendfile_max_chunk to 512k, I saw a significant speed improvement in static file delivery (from disk) by Nginx.

I experimented with values of 8k, 32k, 128k, and finally 512k. The optimal chunk size seems to vary per server depending on the content being delivered, the available threads, and the request load on the server.
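In the backend http block shown above, the change amounts to replacing the tiny chunk size (a bare `32`, which nginx interprets as 32 bytes, throttling every sendfile() call) with a larger one:

```nginx
http {
    sendfile on;
    sendfile_max_chunk 512k;   # was 32 (i.e. 32 bytes) -- far too small
    # ... rest of the backend configuration unchanged
}
```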
I also noticed another significant performance boost when I changed worker_processes auto; to worker_processes 2;, going from one worker_process per CPU to just 2. In my case this was more efficient, because the Node.js application servers are also running on the same machine and competing for CPU time.
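For reference, that worker change lives in the main (top-level) context of nginx.conf, outside the http block:

```nginx
# main context of nginx.conf
worker_processes 2;   # instead of 'auto' (one worker per CPU core),
                      # leaving CPU headroom for the Node.js processes on the same box
```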