Mik*_*ike 23
Tags: ruby-on-rails, passenger, nginx, unicorn
We just migrated from Passenger to Unicorn to host a couple of Rails apps. Everything works fine, but through New Relic we noticed that requests are queuing for between 100 and 300 ms.
Here is the graph (New Relic response-time graph; image not reproduced here):

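For what it's worth, New Relic's "request queuing" figure is derived from a timestamp the front end stamps onto each request; the agent reports the gap between that timestamp and the moment the Rails app starts working on the request. On an nginx + Unicorn stack that timestamp is usually added in the proxy config, roughly like the sketch below (the header is the one newrelic_rpm looks for, X-Request-Start or X-Queue-Start; the path mirrors the proxy.conf included further down, and $msec inside proxy_set_header needs a reasonably recent nginx — this is a sketch, not the actual Engine Yard file):

# illustrative contents of /etc/nginx/common/proxy.conf (assumed, not the stock file)
proxy_set_header X-Request-Start "t=${msec}";    # timestamp used for the queue-time metric
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;

Anything that delays the request between that stamp and a Unicorn worker picking it up gets counted as queue time, whether it is genuine socket backlog or time lost in the proxy layer.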
I have no idea where this is coming from. Here is our Unicorn conf:
current_path = '/data/actor/current'
shared_path = '/data/actor/shared'
shared_bundler_gems_path = "/data/actor/shared/bundled_gems"
working_directory '/data/actor/current/'

worker_processes 6
listen '/var/run/engineyard/unicorn_actor.sock', :backlog => 1024
timeout 60
pid "/var/run/engineyard/unicorn_actor.pid"
logger Logger.new("log/unicorn.log")
stderr_path "log/unicorn.stderr.log"
stdout_path "log/unicorn.stdout.log"
preload_app true

if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :TERM : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
  sleep 1
end

if defined?(Bundler.settings)
  before_exec do |server|
    paths = (ENV["PATH"] || "").split(File::PATH_SEPARATOR)
    paths.unshift "#{shared_bundler_gems_path}/bin"
    ENV["PATH"] = paths.uniq.join(File::PATH_SEPARATOR)
    ENV['GEM_HOME'] = ENV['GEM_PATH'] = shared_bundler_gems_path
    ENV['BUNDLE_GEMFILE'] = "#{current_path}/Gemfile"
  end
end

after_fork do |server, worker|
  worker_pid = File.join(File.dirname(server.config[:pid]), "unicorn_worker_actor_#{worker.nr}.pid")
  File.open(worker_pid, "w") { |f| f.puts Process.pid }

  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
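(A side note on the before_fork block: it implements Unicorn's usual zero-downtime reload. When a new master is started with USR2, Unicorn renames the old master's pid file to *.oldbin; the hook above then sends TTOU to the old master for each new worker that boots, and TERM once the last one is up. From the shell a reload looks roughly like this; the pid path is the one from the config, and Engine Yard's own deploy tooling presumably wraps it:)

kill -USR2 $(cat /var/run/engineyard/unicorn_actor.pid)   # re-exec a new master alongside the old one
# the before_fork hook above then winds the old master down as the new workers come up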
Our nginx.conf:
user deploy deploy;
worker_processes 6;
worker_rlimit_nofile 10240;
pid /var/run/nginx.pid;

events {
  worker_connections 8192;
  use epoll;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  sendfile on;
  tcp_nopush on;
  server_names_hash_bucket_size 128;
  if_modified_since before;

  gzip on;
  gzip_http_version 1.0;
  gzip_comp_level 2;
  gzip_proxied any;
  gzip_buffers 16 8k;
  gzip_types application/json text/plain text/html text/css application/x-javascript t$
  # gzip_disable "MSIE [1-6]\.(?!.*SV1)";

  # Allow custom settings to be added to the http block
  include /etc/nginx/http-custom.conf;
  include /etc/nginx/stack.conf;
  include /etc/nginx/servers/*.conf;
}
And our app-specific nginx conf:
upstream upstream_actor_ssl {
  server unix:/var/run/engineyard/unicorn_actor.sock fail_timeout=0;
}

server {
  listen 443;
  server_name letitcast.com;

  ssl on;
  ssl_certificate /etc/nginx/ssl/letitcast.crt;
  ssl_certificate_key /etc/nginx/ssl/letitcast.key;
  ssl_session_cache shared:SSL:10m;

  client_max_body_size 100M;
  root /data/actor/current/public;

  access_log /var/log/engineyard/nginx/actor.access.log main;
  error_log /var/log/engineyard/nginx/actor.error.log notice;

  location @app_actor {
    include /etc/nginx/common/proxy.conf;
    proxy_pass http://upstream_actor_ssl;
  }

  include /etc/nginx/servers/actor/custom.conf;
  include /etc/nginx/servers/actor/custom.ssl.conf;

  if ($request_filename ~* \.(css|jpg|gif|png)$) {
    break;
  }

  location ~ ^/(images|javascripts|stylesheets)/ {
    expires 10y;
  }

  error_page 404 /404.html;
  error_page 500 502 504 /500.html;
  error_page 503 /system/maintenance.html;

  location = /system/maintenance.html { }

  location / {
    if (-f $document_root/system/maintenance.html) { return 503; }
    try_files $uri $uri/index.html $uri.html @app_actor;
  }

  include /etc/nginx/servers/actor/custom.locations.conf;
}
We are not under heavy load, so I don't understand why requests are stuck in the queue. As specified in the Unicorn conf, we have 6 unicorn workers.
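Just to make the load point concrete, here is a back-of-the-envelope check (the average response time is an assumption for illustration, not something we measured):

# each Unicorn worker serves one request at a time, so capacity is workers / response time
workers           = 6
avg_response_time = 0.25                         # seconds per request (assumed)
capacity_per_sec  = workers / avg_response_time  # => 24.0 requests per second
observed_per_sec  = 15 / 60.0                    # ~15 requests per minute (see the EDIT below)
puts "utilisation: #{(observed_per_sec / capacity_per_sec * 100).round(1)}%"   # => ~1.0%

Even with a pessimistic guess at response time the workers should be idle almost all of the time, so 100-300 ms of queuing is hard to explain by load alone.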
Any idea where this could be coming from?
Cheers
EDIT:
Average requests per minute: about 15 most of the time, with peaks over 300, but we haven't experienced one since the migration.
CPU load average: 0.2-0.3
I tried with 8 workers; it didn't change anything.
I've also used raindrops to look at what the unicorn workers are up to.
Here is the ruby script:
#!/usr/bin/ruby
# this is used to show or watch the number of active and queued
# connections on any listener socket from the command line
require 'raindrops'
require 'optparse'
require 'ipaddr'

usage = "Usage: #$0 [-d delay] ADDR..."
ARGV.size > 0 or abort usage
delay = false

# "normal" exits when driven on the command-line
trap(:INT) { exit 130 }
trap(:PIPE) { exit 0 }

opts = OptionParser.new('', 24, ' ') do |opts|
  opts.banner = usage
  # to_f so that fractional delays such as -d 0.1 are not truncated to 0
  opts.on('-d', '--delay=delay') { |nr| delay = nr.to_f }
  opts.parse! ARGV
end

socks = []
ARGV.each do |f|
  if !File.exists?(f)
    puts "#{f} not found"
    next
  end
  if !File.socket?(f)
    puts "#{f} ain't a socket"
    next
  end
  socks << f
end

fmt = "% -50s % 10u % 10u\n"
printf fmt.tr('u','s'), *%w(address active queued)

begin
  stats = Raindrops::Linux.unix_listener_stats(socks)
  stats.each do |addr,stats|
    if stats.queued.to_i > 0
      printf fmt, addr, stats.active, stats.queued
    end
  end
end while delay && sleep(delay)
And how I launched it:
./linux-tcp-listener-stats.rb -d 0.1 /var/run/engineyard/unicorn_actor.sock
So it basically checks every 1/10 of a second whether there are requests queued on the socket, and if there are, it outputs:

socket | number of requests being processed | number of requests in the queue

Here is a gist of the results:
https://gist.github.com/f9c9e5209fbbfc611cb1
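(For a quick one-off reading instead of the watch loop, the same raindrops call can be run straight from a shell one-liner; the socket path is the one from the Unicorn conf:)

ruby -rraindrops -e 'p Raindrops::Linux.unix_listener_stats(["/var/run/engineyard/unicorn_actor.sock"])'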
EDIT 2:
I tried reducing the number of nginx workers last night, but it didn't change anything.
For information, we are hosted on Engine Yard and have a High-CPU Medium instance: 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each).
We host 4 Rails apps: this one has 6 workers, we have one with 4, one with 2 and another with one. They have all been experiencing request queuing since we migrated to Unicorn. I don't know if Passenger was cheating, but New Relic didn't log any request queuing when we were using it. We also have a node.js app handling file uploads, a MySQL database and 2 Redis instances.
编辑3:
我们使用的是ruby 1.9.2p290,nginx 1.0.10,unicorn 4.2.1和newrelic_rpm 3.3.3.我明天会尝试没有newrelic,并会告诉你这里的结果,但是对于我们使用带有新遗物的乘客的信息,相同版本的ruby和nginx并没有任何问题.
EDIT 4:
I tried increasing client_body_buffer_size and proxy_buffers with
client_body_buffer_size 256k;
proxy_buffers 8 256k;
but it didn't do the trick.
EDIT 5:
We finally figured it out... drumroll... the winner was our SSL cipher. When we changed it to RC4 we saw request queuing drop from 100-300 ms to 30-100 ms.
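For reference, the relevant knob is the ssl_ciphers directive in the server block above; a cipher string along these lines prefers RC4 (illustrative, not necessarily the exact string we ended up with):

# in the letitcast.com server block (exact cipher list assumed)
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;

RC4 was a popular way to cut TLS CPU cost at the time, although it has since been deprecated for security reasons.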
Mat*_*son 12
I just diagnosed a similar New Relic graph as being entirely the fault of SSL. Try turning it off. We were seeing 400 ms of request queuing time, which dropped to 20 ms without SSL.
Some interesting points on why some SSL providers might be slow: http://blog.cloudflare.com/how-cloudflare-is-making-ssl-fast
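One low-effort way to test that hypothesis without touching the production vhost is to expose the same upstream over plain HTTP on a spare port and compare the timings; a minimal sketch (the port is a placeholder, everything else reuses the config from the question):

server {
  listen 8080;
  server_name letitcast.com;
  root /data/actor/current/public;

  location @app_actor {
    include /etc/nginx/common/proxy.conf;
    proxy_pass http://upstream_actor_ssl;
  }

  location / {
    try_files $uri $uri/index.html $uri.html @app_actor;
  }
}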