I just started learning Sanic because of its fast benchmarks. I made a simple Hello World API and hooked it up to Gunicorn. The performance was quite good, but when I combined it with Nginx it became very bad. I found that with Nginx, each Gunicorn process was limited to 1%–4% CPU. Without Nginx, Gunicorn reached up to 10% per process. I think this is due to a wrong Nginx configuration. Can anyone give me some advice?
Server information:
OS: Ubuntu 18.04
Python version: 3.7.2
Sanic version: 18.12.0
Processor: i3-4130
Sanic + Gunicorn performance:
wrk -t8 -c1000 -d60s --timeout 2s http://127.0.0.1:8080/
Running 1m test @ http://127.0.0.1:8080/
8 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 29.54ms 15.13ms 175.77ms 71.23%
Req/Sec 4.32k 1.29k 19.46k 64.77%
2060010 requests in 1.00m, 249.50MB read
Requests/sec: 34281.64
Transfer/sec: 4.15MB
Sanic + Gunicorn + Nginx performance:
wrk -t8 -c1000 -d60s --timeout 2s http://127.0.0.1:8081/
Running 1m test @ http://127.0.0.1:8081/
8 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 364.78ms 271.20ms 1.39s 67.53%
Req/Sec 370.88 251.66 3.52k 87.12%
177223 requests in 1.00m, 30.42MB read
Requests/sec: 2948.79
Transfer/sec: 518.25KB
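The two wrk runs above can be compared directly; a quick back-of-the-envelope calculation (using only the numbers reported above) shows the scale of the regression:

```python
# Throughput and latency figures copied from the two wrk runs above.
direct_rps, proxied_rps = 34281.64, 2948.79   # requests/sec
direct_lat, proxied_lat = 29.54, 364.78       # avg latency, ms

throughput_drop = direct_rps / proxied_rps    # ~11.6x fewer requests/sec
latency_growth = proxied_lat / direct_lat     # ~12.3x higher average latency

print(f"throughput: {throughput_drop:.1f}x lower, latency: {latency_growth:.1f}x higher")
# prints "throughput: 11.6x lower, latency: 12.3x higher"
```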
Sanic app:
from sanic import Sanic
from sanic.response import json

app = Sanic()
app.config.ACCESS_LOG = False

@app.route("/")
async def test(request):
    return json({"hello": "world"})
Gunicorn command:
gunicorn --bind 127.0.0.1:8080 --workers 8 --threads 4 app:app --worker-class sanic.worker.GunicornWorker --name SanicHelloWorld
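As an aside worth verifying against your Gunicorn version: per Gunicorn's settings documentation, `--threads` only affects the gthread worker type, so with the async `sanic.worker.GunicornWorker` the `--threads 4` flag likely has no effect and the command can be simplified to:

```shell
# --threads dropped: the Gunicorn docs state it only affects the gthread
# worker type, not custom async worker classes such as Sanic's.
gunicorn --bind 127.0.0.1:8080 --workers 8 \
    --worker-class sanic.worker.GunicornWorker \
    --name SanicHelloWorld app:app
```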
Global Nginx config:
worker_processes 8;
worker_rlimit_nofile 400000;
thread_pool sanic_thread_pool threads=32 max_queue=65536;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    multi_accept on;
    worker_connections 25000;
    use epoll;
    accept_mutex off;
}
http {
    access_log off;
    sendfile on;
    sendfile_max_chunk 512k;
    tcp_nopush on;
    tcp_nodelay on;
    server_names_hash_bucket_size 64;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    upstream sanic-test {
        server 127.0.0.1:8080;
    }
}
Nginx config for Sanic + Gunicorn:
server {
    listen 8081;
    listen [::]:8081;
    server_name sanic-test.com www.sanic-test.com;
    location / {
        aio threads=sanic_thread_pool;
        proxy_pass http://127.0.0.1:8080;
    }
}
This is probably because proxy_buffering (http://nginx.org/r/proxy_buffering) defaults to on when you use proxy_pass (http://nginx.org/r/proxy_pass).
Normally, nginx is supposed to control backpressure to the backend, so buffering makes a lot of sense: you don't want your real backend to be exposed to a Slowloris attack vector. Likewise, behind nginx you are supposed to be doing caching and limiting the number of connections to the real backend, so a test where you crank everything up to the maximum without disabling the buffering is simply an unrealistic condition for a real-world scenario, hence the very poor numbers you get in the metrics.
If you just want to see how much performance is affected by simply adding another layer to your HTTP stack, you should set proxy_buffering off; wherever you set proxy_pass. Otherwise, the test should be more realistic: your real backend shouldn't be capable of handling more requests per second than the IO parameters of the storage device behind http://nginx.org/r/proxy_temp_path.
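To measure just the cost of the extra hop, the server block from the question could be adjusted as follows (a sketch reusing the ports from the question; only the proxy_buffering line is new):

```nginx
server {
    listen 8081;
    location / {
        # Disable response buffering so the benchmark measures only the
        # overhead of the additional proxy layer, not buffer/disk IO.
        proxy_buffering off;
        proxy_pass http://127.0.0.1:8080;
    }
}
```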