HAProxy: why is the time to receive the client request (Tq) so long?

We have an HAProxy setup (v1.5.1) on Amazon EC2 that does two jobs:

  • Traffic routing based on request subdomain
  • Terminating SSL

The ulimit on our server is 128074 and concurrent connections are ~3000.

Our config file looks like this. The problem we are facing is that the Tq time in the HAProxy logs is very long (2-3 seconds). Is there something wrong with the configuration, or something we are missing?

global
    daemon
    maxconn 64000
    tune.ssl.default-dh-param 2048
    log 127.0.0.1 local0 debug

defaults
    mode http
    option abortonclose
    option forwardfor
    option http-server-close
    option httplog
    timeout connect 9s
    timeout client 60s
    timeout server 30s
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth username:nopass

frontend www-http
    bind *:80

    maxconn 64000
    http-request set-header U-Request-Source %[src]
    reqadd X-Forwarded-Proto:\ http

    errorfile 503 /var/www/html/sorry.html

    acl host_A    hdr_dom(host) -f /etc/A.lst
    acl host_B    hdr_dom(host) -f /etc/B.lst
    use_backend www-A         if host_A
    use_backend www-B         if host_B
    log global

frontend www-https 
    bind *:443 ssl crt /etc/ssl/private/my.pem no-sslv3
    http-request set-header U-Request-Source %[src]
    maxconn 64000
    reqadd X-Forwarded-Proto:\ https

    errorfile 503 /var/www/html/sorry.html

    acl host_A        hdr_dom(host) -f /etc/A.lst
    acl host_B        hdr_dom(host) -f /etc/B.lst

    use_backend www-A if host_A
    use_backend www-B if host_B
    log global


backend www-A
    redirect scheme https if !{ ssl_fc }
    server app1 app1.a.mydomain.com:80 check port 80

backend www-B
    redirect scheme https if !{ ssl_fc }
    server app1 app1.b.mydomain.com:80 check port 80

      



1 answer


My first thought was this, from the HAProxy docs:

    If Tq is close to 3000, a packet was probably lost between the client and the proxy. This is very rare on local networks but can happen when clients are on remote networks and send large requests.

... however, that is usually only true when Tq is really close to 3000 milliseconds. I do sometimes see this in the logs on transcontinental links, but it is quite rare. Instead, I suspect you are seeing this:

    Setting "option http-server-close" may display larger request times, since Tq also measures the time spent waiting for additional requests.

This is the more likely explanation.

You can confirm this by locating one of the suspicious log entries and then scrolling back to find the previous entry from the same source IP and port.

Examples from my logs:



Dec 28 20:29:00 localhost haproxy[28333]: x.x.x.x:45062 [28/Dec/2014:20:28:58.623] ...  2022/0/0/12/2034 200 18599 ... 

Dec 28 20:29:17 localhost haproxy[28333]: x.x.x.x:45062 [28/Dec/2014:20:29:00.657] ... 17091/0/0/45/17136 200 19599 ...
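Given lines like these, a quick way to pull out the Tq value is to split the timer field (Tq/Tw/Tc/Tr/Tt) on "/". A sketch, assuming the default syslog prefix and the standard HTTP log format, so the timers land in the tenth whitespace-separated field; the frontend/backend names below are made up to stand in for the "..." in the lines above:

```shell
# Extract Tq (first value in the Tq/Tw/Tc/Tr/Tt group) from an HTTP log line.
# Field position assumes the default log layout; adjust $10 if yours differs.
echo 'Dec 28 20:29:17 localhost haproxy[28333]: x.x.x.x:45062 [28/Dec/2014:20:29:00.657] www-https www-A/app1 17091/0/0/45/17136 200 19599' \
  | awk '{ split($10, t, "/"); print "Tq=" t[1] "ms" }'
```

To scan for all entries from one suspicious client, grep your log for the `ip:port` pair first and pipe the matches through the same awk program.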

      

Both of these requests refer to the same IP address and the same source port - so they are two requests from the same client connection, separated in time by ~17 seconds (I allow longer keepalives than the default on this particular proxy cluster).

The Tq timer (above, the values 2022 ms and 17091 ms) is the "total time to get the client request" - on a client's initial request, this timer stops when the empty line marking the end of the headers is received. But on subsequent requests, this timer also includes the idle time that elapsed after the end of the previous request, until the next request arrived. (If I go back even further, I find still more requests from the same IP/port pair, until I reach the first one, which in this case had a Tq of 0, although that will not always be true.)
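You can check this arithmetic directly from the two log lines above: the second request's accept timestamp is exactly the first request's accept timestamp plus its total time Tt (2034 ms), i.e. the moment the previous transaction finished. A small sanity check using only those logged values:

```python
from datetime import datetime, timedelta

# Accept timestamp and total time Tt (last timer field) of the first log line:
first_accept = datetime.strptime("28/Dec/2014:20:28:58.623", "%d/%b/%Y:%H:%M:%S.%f")
first_total = timedelta(milliseconds=2034)

# Accept timestamp of the second log line:
second_accept = datetime.strptime("28/Dec/2014:20:29:00.657", "%d/%b/%Y:%H:%M:%S.%f")

# On a kept-alive connection, the next request's "accept" time is the moment
# the previous transaction ended, so the two values line up exactly:
assert first_accept + first_total == second_accept
print(second_accept - first_accept)  # -> 0:00:02.034000
```

The second request's Tq of 17091 ms is then just the idle gap between that moment and the arrival of the next request's headers.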

If you can backtrack in the logs and find previous requests from the same client IP and port where all the time adds up, then that is all you are seeing - HAProxy is counting the time spent on an open, kept-alive connection, waiting for the next request from the client... so this behavior is perfectly normal and should not be a cause for concern.

Using option http-server-close keeps the client-side connection open while closing the server-side connection after each request. This gives you the benefit of client-side keep-alive, which optimizes the (generally) higher-latency leg of the path, without tying up server resources with idle connections.
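If the inflated Tq values clutter your logs, HAProxy 1.5 lets you bound the two components separately with real directives of its own; a sketch (the 10s values are illustrative, not taken from the question's config):

```
defaults
    option http-server-close
    # Cap how long a kept-alive client connection may sit idle waiting
    # for the next request (the idle component that inflates Tq):
    timeout http-keep-alive 10s
    # Cap how long a client may take to send a complete request:
    timeout http-request 10s
```

Shortening timeout http-keep-alive trades away some keep-alive benefit for smaller worst-case Tq readings; it does not change how the timer is measured.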

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.4
