Recently my company has been working on a socket proxy and stress-testing it. I set up LVS first and the results were decent. Later, while looking for other proxy software, I found that Nginx has a module that adds TCP reverse-proxy support and can health-check the backend servers. So I roughly translated its documentation and tidied up the configuration here for future reference.
Installing and configuring Nginx (HTTP reverse proxying is not covered here)
Below is the configuration for the TCP reverse-proxy module, with comments:
Name nginx_tcp_proxy_module - support TCP proxy with Nginx
Download the module:
https://github.com/yaoweibin/nginx_tcp_proxy_module/downloads
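If it is more convenient, the module source can also be fetched with git instead of the downloads page; the clone path below is just an example location:

$ git clone https://github.com/yaoweibin/nginx_tcp_proxy_module.git /path/to/nginx_tcp_proxy_module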
To install it, patch the Nginx source and build it with the nginx_tcp_proxy_module module:
$ tar -xzvf nginx-1.2.1.tar.gz
$ cd nginx-1.2.1/
$ patch -p1 < /path/to/nginx_tcp_proxy_module/tcp.patch    # apply the patch so Nginx supports tcp_proxy
$ ./configure --add-module=/path/to/nginx_tcp_proxy_module
$ make
$ make install

The configuration in nginx.conf looks like the following (it consists of two main blocks, http and tcp: the http block handles web-application reverse proxying, while the tcp block handles TCP reverse proxying):
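After make install, a quick way to confirm that the module was actually compiled in is to check the build arguments reported by the binary (assuming the default install prefix /usr/local/nginx):

$ /usr/local/nginx/sbin/nginx -V 2>&1 | grep nginx_tcp_proxy_module
# the configure arguments should include --add-module=/path/to/nginx_tcp_proxy_module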
http {
    server {
        listen 80;

        location /status {
            check_status;
        }
    }
}

############ The part above configures the HTTP reverse proxy; below is the configuration for the TCP reverse proxy ############

tcp {
    upstream cluster {
        # simple round-robin (a scheduling algorithm, playing the same role as rr in LVS)
        server 192.168.0.1:80;
        server 192.168.0.2:80;

        # health check (explained in detail below)
        check interval=3000 rise=2 fall=5 timeout=1000;
        #check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
        #check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        #check_http_send "GET / HTTP/1.0\r\n\r\n";
        #check_http_expect_alive http_2xx http_3xx;
    }

    server {
        # listening port of the TCP reverse proxy; note that it cannot share a port with the http block
        listen 8888;
        proxy_pass cluster;
    }
}

That completes the configuration. What follows is the module's own description:

Description
This module actually includes many modules: ngx_tcp_module, ngx_tcp_core_module, ngx_tcp_upstream_module, ngx_tcp_proxy_module, ngx_tcp_websocket_module, ngx_tcp_ssl_module, ngx_tcp_upstream_ip_hash_module. All these modules work together to support TCP proxy with Nginx. I also added other features: ip_hash, upstream server health check, status monitor. The motivation for writing these modules is Nginx's high performance and robustness. At first, I developed this module just as a general TCP proxy. Now it is frequently used for websocket reverse proxying. Note: you can't use the same listening port as the HTTP modules!
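Once Nginx is reloaded with this configuration, a simple sanity check (the addresses and ports are just the ones from the example above, and nc/netcat is assumed to be installed) is to query the check_status page and push a request through the TCP listener:

# view the backend health-check status page served by the http block
$ curl http://127.0.0.1/status

# send a raw request to one of the backends through the TCP proxy on port 8888
$ printf 'GET / HTTP/1.0\r\n\r\n' | nc 127.0.0.1 8888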
Health checks for the backend servers behind the TCP reverse proxy:
check

syntax: *check interval=milliseconds [fall=count] [rise=count] [timeout=milliseconds] [type=tcp|ssl_hello|smtp|mysql|pop3|imap]*

default: *none, if parameters omitted, default parameters are interval=30000 fall=5 rise=2 timeout=1000*

context: *upstream*

description: Add the health check for the upstream servers. At present, the check method is a simple tcp connect.

The parameters' meanings are:

* *interval*: the check request's interval time.
* *fall* (fall_count): After fall_count check failures, the server is marked down. # only after fall_count failed checks is the backend marked as down
* *rise* (rise_count): After rise_count check successes, the server is marked up. # likewise, only after rise_count successful checks is the backend marked as up again
* *timeout*: the check request's timeout. # timeout for a single check request
* *type*: the check protocol type: # which protocol the health check speaks
  1. *tcp* is a simple tcp socket connect and peek one byte.
  2. *ssl_hello* sends a client ssl hello packet and receives the server ssl hello packet.
  3. *http* sends a http request packet, receives and parses the http response to diagnose if the upstream server is alive.
  4. *smtp* sends a smtp request packet, receives and parses the smtp response to diagnose if the upstream server is alive. A response beginning with '2' is treated as an OK response.
  5. *mysql* connects to the mysql server and receives the greeting response to diagnose if the upstream server is alive.
  6. *pop3* receives and parses the pop3 response to diagnose if the upstream server is alive. A response beginning with '+' is treated as an OK response.
  7. *imap* connects to the imap server and receives the greeting response to diagnose if the upstream server is alive.
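To make the parameters concrete, here is a sketch of a tcp block that fronts two MySQL backends and uses the mysql check type; the addresses, the pool name mysql_pool, and the listening port 3307 are made-up placeholders, not part of the module's documentation:

tcp {
    upstream mysql_pool {
        # backend addresses are assumptions, adjust to your environment
        server 192.168.0.11:3306;
        server 192.168.0.12:3306;

        # check every 3s; mark a backend down after 3 failures and up again after 2 successes;
        # type=mysql reads the server greeting instead of doing a bare TCP connect
        check interval=3000 rise=2 fall=3 timeout=1000 type=mysql;
    }

    server {
        listen 3307;         # again, must not clash with any listen port in the http block
        proxy_pass mysql_pool;
    }
}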