Discussion:
maxconn not respecting idle connections?
Claudio Kuenzler
2017-08-08 05:16:17 UTC
Hi,

I've set "hard limits" with maxconn for each backend server but it seems
that established (keep-alive) connections are not accounted for in the
stats. This leads to HAProxy allowing more connections to the backend
server than actually defined with the maxconn value.

Config:

############
frontend app-in
bind *:18382
option httplog
timeout client 1h
timeout server 1h
timeout http-keep-alive 1h
maxconn 96
default_backend app-out

############
backend app-out
balance roundrobin
no option redispatch
option persist
timeout queue 1
timeout connect 5
timeout check 60s
timeout http-keep-alive 4m
cookie SERVERID insert indirect nocache
option httpchk GET /service?wsdl HTTP/1.0\r\nConnection:\ close
server app01-p-18383 10.10.10.11:18383 maxconn 5 maxqueue 1 cookie 1-18383 check fall 1 rise 2
server app01-p-18384 10.10.10.11:18384 maxconn 5 maxqueue 1 cookie 1-18384 check fall 1 rise 2
server app01-p-18385 10.10.10.11:18385 maxconn 5 maxqueue 1 cookie 1-18385 check fall 1 rise 2
server app01-p-18386 10.10.10.11:18386 maxconn 5 maxqueue 1 cookie 1-18386 check fall 1 rise 2
server app01-p-18387 10.10.10.11:18387 maxconn 5 maxqueue 1 cookie 1-18387 check fall 1 rise 2
server app01-p-18388 10.10.10.11:18388 maxconn 5 maxqueue 1 cookie 1-18388 check fall 1 rise 2
server app01-p-18389 10.10.10.11:18389 maxconn 5 maxqueue 1 cookie 1-18389 check fall 1 rise 2
server app01-p-18390 10.10.10.11:18390 maxconn 5 maxqueue 1 cookie 1-18390 check fall 1 rise 2
server app02-p-18383 10.10.10.12:18383 maxconn 5 maxqueue 1 cookie 2-18383 check fall 1 rise 2
server app02-p-18384 10.10.10.12:18384 maxconn 5 maxqueue 1 cookie 2-18384 check fall 1 rise 2
server app02-p-18385 10.10.10.12:18385 maxconn 5 maxqueue 1 cookie 2-18385 check fall 1 rise 2
server app02-p-18386 10.10.10.12:18386 maxconn 5 maxqueue 1 cookie 2-18386 check fall 1 rise 2
server app02-p-18387 10.10.10.12:18387 maxconn 5 maxqueue 1 cookie 2-18387 check fall 1 rise 2
server app02-p-18388 10.10.10.12:18388 maxconn 5 maxqueue 1 cookie 2-18388 check fall 1 rise 2
server app02-p-18389 10.10.10.12:18389 maxconn 5 maxqueue 1 cookie 2-18389 check fall 1 rise 2
server app02-p-18390 10.10.10.12:18390 maxconn 5 maxqueue 1 cookie 2-18390 check fall 1 rise 2

As you can see, each backend server allows a maximum of 5 concurrent
connections. But during a stress-test we saw that there were clearly more
connections going through HAProxy to the backend:

# for ip in 10.10.10.11 10.10.10.12; do for port in 18383 18384 18385 18386 18387 18388 18390; do echo "$ip $port" ; netstat -an | grep $ip | grep $port; done; done
10.10.10.11 18383
tcp 0 0 10.10.10.10:54196 10.10.10.11:18383 ESTABLISHED
tcp 0 0 10.10.10.10:53898 10.10.10.11:18383 ESTABLISHED
tcp 0 0 10.10.10.10:54826 10.10.10.11:18383 ESTABLISHED
tcp 0 0 10.10.10.10:54660 10.10.10.11:18383 ESTABLISHED
tcp 0 0 10.10.10.10:54064 10.10.10.11:18383 ESTABLISHED
tcp 0 0 10.10.10.10:54434 10.10.10.11:18383 ESTABLISHED
10.10.10.11 18384
tcp 0 0 10.10.10.10:48452 10.10.10.11:18384 ESTABLISHED
tcp 0 0 10.10.10.10:49056 10.10.10.11:18384 ESTABLISHED
tcp 0 0 10.10.10.10:49220 10.10.10.11:18384 ESTABLISHED
tcp 0 0 10.10.10.10:48592 10.10.10.11:18384 ESTABLISHED
tcp 0 0 10.10.10.10:48292 10.10.10.11:18384 ESTABLISHED
tcp 0 0 10.10.10.10:48824 10.10.10.11:18384 ESTABLISHED
10.10.10.11 18385
tcp 0 0 10.10.10.10:59128 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:56566 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:56388 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:56704 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:57468 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:56854 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:57342 10.10.10.11:18385 ESTABLISHED
tcp 0 0 10.10.10.10:57072 10.10.10.11:18385 ESTABLISHED
10.10.10.11 18386
tcp 0 0 10.10.10.10:52090 10.10.10.11:18386 ESTABLISHED
tcp 0 0 10.10.10.10:52358 10.10.10.11:18386 ESTABLISHED
tcp 0 0 10.10.10.10:51400 10.10.10.11:18386 ESTABLISHED
tcp 0 0 10.10.10.10:51712 10.10.10.11:18386 ESTABLISHED
tcp 0 0 10.10.10.10:51866 10.10.10.11:18386 ESTABLISHED
tcp 0 0 10.10.10.10:52492 10.10.10.11:18386 ESTABLISHED
10.10.10.11 18387
tcp 0 0 10.10.10.10:40588 10.10.10.11:18387 ESTABLISHED
tcp 0 0 10.10.10.10:40272 10.10.10.11:18387 ESTABLISHED
tcp 0 0 10.10.10.10:41360 10.10.10.11:18387 ESTABLISHED
tcp 0 0 10.10.10.10:40986 10.10.10.11:18387 ESTABLISHED
tcp 0 0 10.10.10.10:40466 10.10.10.11:18387 ESTABLISHED
tcp 0 0 10.10.10.10:40738 10.10.10.11:18387 ESTABLISHED
10.10.10.11 18388
tcp 0 0 10.10.10.10:58976 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:59454 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:58820 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:59598 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:58500 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:58694 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:59218 10.10.10.11:18388 ESTABLISHED
tcp 0 0 10.10.10.10:32996 10.10.10.11:18388 ESTABLISHED
10.10.10.11 18390
tcp 0 0 10.10.10.10:34148 10.10.10.11:18390 ESTABLISHED
tcp 0 0 10.10.10.10:33698 10.10.10.11:18390 ESTABLISHED
tcp 0 0 10.10.10.10:33206 10.10.10.11:18390 ESTABLISHED
tcp 0 0 10.10.10.10:33524 10.10.10.11:18390 ESTABLISHED
tcp 0 0 10.10.10.10:33396 10.10.10.11:18390 ESTABLISHED
10.10.10.12 18383
tcp 0 0 10.10.10.10:52148 10.10.10.12:18383 ESTABLISHED
tcp 0 0 10.10.10.10:52354 10.10.10.12:18383 ESTABLISHED
tcp 0 0 10.10.10.10:52466 10.10.10.12:18383 ESTABLISHED
tcp 0 0 10.10.10.10:52926 10.10.10.12:18383 ESTABLISHED
tcp 0 0 10.10.10.10:52642 10.10.10.12:18383 ESTABLISHED
tcp 0 0 10.10.10.10:53090 10.10.10.12:18383 ESTABLISHED
10.10.10.12 18384
tcp 0 0 10.10.10.10:42608 10.10.10.12:18384 ESTABLISHED
tcp 0 0 10.10.10.10:43184 10.10.10.12:18384 ESTABLISHED
tcp 0 0 10.10.10.10:43342 10.10.10.12:18384 ESTABLISHED
tcp 0 0 10.10.10.10:42906 10.10.10.12:18384 ESTABLISHED
tcp 0 0 10.10.10.10:42402 10.10.10.12:18384 ESTABLISHED
tcp 0 0 10.10.10.10:42724 10.10.10.12:18384 ESTABLISHED
10.10.10.12 18385
tcp 0 0 10.10.10.10:48830 10.10.10.12:18385 ESTABLISHED
tcp 0 0 10.10.10.10:48408 10.10.10.12:18385 ESTABLISHED
tcp 0 0 10.10.10.10:48214 10.10.10.12:18385 ESTABLISHED
tcp 0 0 10.10.10.10:48100 10.10.10.12:18385 ESTABLISHED
tcp 0 0 10.10.10.10:47912 10.10.10.12:18385 ESTABLISHED
tcp 0 0 10.10.10.10:48676 10.10.10.12:18385 ESTABLISHED
10.10.10.12 18386
tcp 0 0 10.10.10.10:49612 10.10.10.12:18386 ESTABLISHED
tcp 0 0 10.10.10.10:49452 10.10.10.12:18386 ESTABLISHED
tcp 0 0 10.10.10.10:48684 10.10.10.12:18386 ESTABLISHED
tcp 0 0 10.10.10.10:48988 10.10.10.12:18386 ESTABLISHED
tcp 0 0 10.10.10.10:48874 10.10.10.12:18386 ESTABLISHED
10.10.10.12 18387
tcp 0 0 10.10.10.10:36182 10.10.10.12:18387 ESTABLISHED
tcp 0 0 10.10.10.10:36652 10.10.10.12:18387 ESTABLISHED
tcp 0 0 10.10.10.10:36804 10.10.10.12:18387 ESTABLISHED
tcp 0 0 10.10.10.10:35876 10.10.10.12:18387 ESTABLISHED
tcp 0 0 10.10.10.10:36070 10.10.10.12:18387 ESTABLISHED
tcp 0 0 10.10.10.10:36392 10.10.10.12:18387 ESTABLISHED
10.10.10.12 18388
tcp 0 0 10.10.10.10:59652 10.10.10.12:18388 ESTABLISHED
tcp 0 0 10.10.10.10:60424 10.10.10.12:18388 ESTABLISHED
tcp 0 0 10.10.10.10:60582 10.10.10.12:18388 ESTABLISHED
tcp 0 0 10.10.10.10:60154 10.10.10.12:18388 ESTABLISHED
tcp 0 0 10.10.10.10:59836 10.10.10.12:18388 ESTABLISHED
10.10.10.12 18390
tcp 0 0 10.10.10.10:52104 10.10.10.12:18390 ESTABLISHED
tcp 0 0 10.10.10.10:53028 10.10.10.12:18390 ESTABLISHED
tcp 0 0 10.10.10.10:52264 10.10.10.12:18390 ESTABLISHED
tcp 0 0 10.10.10.10:52868 10.10.10.12:18390 ESTABLISHED
tcp 0 0 10.10.10.10:52626 10.10.10.12:18390 ESTABLISHED
tcp 0 0 10.10.10.10:52398 10.10.10.12:18390 ESTABLISHED

(10.10.10.10 is obviously HAProxy's IP)

Some of the backend servers even reached 8 concurrent connections, where 6 would have been the expected maximum (5 maxconn + 1 healthcheck).

In the HTML stats I saw that "Sessions Cur" was mostly at 1 or 0, indicating that the already established connections were not being counted. The logged stats (checked to rule out a problem in the HTML display) show the same: srv_conn is nowhere near the actual number of connections:

Aug 7 16:32:09 haproxy haproxy[31083]: client:42724
[07/Aug/2017:16:32:08.168] app-in app-out/app02-p-18389 975/0/0/100/1075
200 798 - - --VN 97/96/2/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42698
[07/Aug/2017:16:32:08.158] app-in app-out/app01-p-18385 983/0/0/110/1093
200 799 - - --DN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42546
[07/Aug/2017:16:32:08.080] app-in app-out/app01-p-18388 1093/0/0/110/1203
200 799 - - --DN 97/96/0/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42706
[07/Aug/2017:16:32:08.455] app-in app-out/app01-p-18389 932/0/0/99/1031 200
799 - - --DN 97/96/0/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42740
[07/Aug/2017:16:32:08.737] app-in app-out/app01-p-18389 913/0/0/102/1015
200 799 - - --DN 97/96/2/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42708
[07/Aug/2017:16:32:08.612] app-in app-out/app01-p-18390 1096/0/0/106/1202
200 798 - - --VN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:09 haproxy haproxy[31083]: client:42626
[07/Aug/2017:16:32:08.776] app-in app-out/app02-p-18387 967/0/0/108/1075
200 800 - - --VN 97/96/0/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:10 haproxy haproxy[31083]: client:42724
[07/Aug/2017:16:32:09.244] app-in app-out/app02-p-18389 947/0/0/98/1045 200
798 - - --VN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:10 haproxy haproxy[31083]: client:42698
[07/Aug/2017:16:32:09.252] app-in app-out/app01-p-18385 1002/0/0/107/1109
200 799 - - --DN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:10 haproxy haproxy[31083]: client:42546
[07/Aug/2017:16:32:09.283] app-in app-out/app01-p-18388 1038/0/0/102/1140
200 799 - - --DN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:10 haproxy haproxy[31083]: client:42740
[07/Aug/2017:16:32:09.752] app-in app-out/app01-p-18389 1071/0/0/100/1171
200 799 - - --DN 97/96/2/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:10 haproxy haproxy[31083]: client:42708
[07/Aug/2017:16:32:09.815] app-in app-out/app01-p-18390 1042/0/0/98/1140
200 798 - - --VN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:32:11 haproxy haproxy[31083]: client:42626
[07/Aug/2017:16:32:09.852] app-in app-out/app02-p-18387 1057/0/0/98/1155
200 800 - - --VN 97/96/1/1/0 0/0 "POST / HTTP/1.1"
Aug 7 16:33:13 haproxy haproxy[31083]: client:42706
[07/Aug/2017:16:32:09.486] app-in app-out/app01-p-18389 64294/0/0/96/64390
200 799 - - --DN 96/96/2/1/0 0/0 "POST / HTTP/1.1"
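
(For what it's worth, the same per-server counter can also be read live from the runtime stats socket, assuming one is configured, e.g. "stats socket /run/haproxy.sock" in the global section; the socket path here is just an example:

echo "show stat" | socat stdio /run/haproxy.sock | awk -F, '$1=="app-out" {print $2, $5}'    # prints svname and scur per server

scur is the same counter shown as "Sessions Cur" in the HTML page.)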


As said before, these are all keep-alive connections. They can sit idle for quite a long time (hence the long timeouts), but that's the application design.
To me it looks like only connections currently transferring data are accounted for in the CUR statistics; idle connections somehow drop out and HAProxy then allows additional connections to the backend, causing it to stall (it cannot handle more than 7 sessions). Am I wrong? Did I misinterpret the purpose of maxconn?
Or was there a major change since the version I use (HAProxy 1.6.3 on Ubuntu Xenial/1.6.3-1ubuntu0.1)? In the changelog I haven't seen a bugfix concerning maxconn.

Thanks for any advice in advance,
ck
Willy Tarreau
2017-08-09 06:29:45 UTC
Hi Claudio,
Post by Claudio Kuenzler
Hi,
I've set "hard limits" with maxconn for each backend server but it seems
that established (keep-alive) connections are not accounted for in the
stats. This leads to HAProxy allowing more connections to the backend
server than actually defined with the maxconn value.
(...)
Post by Claudio Kuenzler
To me it looks like only connections currently transferring data are accounted for in the CUR statistics; idle connections somehow drop out and HAProxy then allows additional connections to the backend, causing it to stall (it cannot handle more than 7 sessions). Am I wrong? Did I misinterpret the purpose of maxconn?
No you're totally right and it's by design. The thing is, you don't want
to leave requests waiting in a server's queue while the server has a ton
of idle connections. The vast majority of servers nowadays have a
dispatching frontend which is mostly insensitive to idle connections, and
only really sees outstanding requests. This has been even more true since
all browsers started to implement the pre-connect feature a few years ago,
establishing idle connections to sites you've recently visited just in
case you'd want to visit them again, resulting in a huge amount of idle
connections on servers. So when using server-side keep-alive we continue
to ensure that the server never has to process more than a given number
of outstanding requests, and idle connections are not accounted for.

Now the question is, does it cause any problem for you, or is it just that it came as a surprise and you were worried that it could cause problems?
The possible alternative would be to have an option to say that idle
connections are accounted for and that some of them will be killed before
passing a new connection to the server, but that will significantly reduce
the efficiency of server-side keep-alive.

If you're really short on server-side connections and want to optimize
them as much as possible, you can try to enable "http-reuse". It will
allow sharing of idle connections between frontend connections so that
a request may be sent over an existing connection. It is the way to
achieve the lowest number of concurrent connections on the server side.
But not all applications support this (most do nowadays), so you need to check (e.g. some try to retrieve the source address once per connection for logging purposes).
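
For example, a minimal sketch of what enabling it could look like in your backend (the "safe" strategy is the most conservative one, available since 1.6):

backend app-out
mode http
http-reuse safe
# existing cookie/httpchk/server lines unchanged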

Regards,
Willy
Claudio Kuenzler
2017-08-09 08:09:47 UTC
Hi Willy,
Post by Willy Tarreau
Now the question is, does it cause any problem for you, or is it just that it came as a surprise and you were worried that it could cause problems?
Yes, unfortunately it does create a problem.

Each backend server (a SOAP API) can only handle up to 7 concurrent
connections. The 8th connection freezes/is waiting for a server response.
In order to satisfy this max connection requirement to the backend
server(s) I wanted to use "maxconn 6".
My hope was that HAProxy would not allow more than 6 concurrent connections
going to each backend server.
From the POV of the SOAP API, an additional connection (7) is added because of
the regular healthcheck from HAProxy.

So now we have the problem that idle connections are not accounted for and
HAProxy keeps letting new connections through.
This causes the SOAP backend servers to freeze up.
Post by Willy Tarreau
The possible alternative would be to have an option to say that idle
connections are accounted for and that some of them will be killed before
passing a new connection to the server, but that will significantly reduce
the efficiency of server-side keep-alive.
Yes, such an option would be really helpful. It should probably be turned off by default, but it would be a great help for scenarios like this.
But none of the idle connections should be killed when new requests are passed to the backend server.
In fact the option could simply count the number of established TCP connections going through HAProxy to the backend server and use this as the CUR value.
The maxconn limit would then be reached by all connections (whether active or idle), resulting in HAProxy returning a 503 error.
Post by Willy Tarreau
If you're really short on server-side connections and want to optimize
them as much as possible, you can try to enable "http-reuse".
That's a good idea, but unfortunately won't work in this case.
Each session, even an idle one, has a bound "ticket" in the SOAP API. Reusing a connection would basically hijack a session already in progress, resulting in data corruption.

I know it's a weird application design and I personally haven't ever seen
anything like this before.

I'm happy to provide my time/support for you to get in place such an
option. Just let me know.

In the meantime I'll probably try to solve this using iptables limits
behind HAProxy (between HAProxy and backend server).
Willy Tarreau
2017-08-09 09:12:57 UTC
Hi Claudio,
Post by Claudio Kuenzler
Hi Willy,
Post by Willy Tarreau
Now the question is, does it cause any problem for you, or is it just that it came as a surprise and you were worried that it could cause problems?
Yes, unfortunately it does create a problem.
Each backend server (a SOAP API) can only handle up to 7 concurrent
connections. The 8th connection freezes/is waiting for a server response.
In order to satisfy this max connection requirement to the backend
server(s) I wanted to use "maxconn 6".
My hope was that HAProxy would not allow more than 6 concurrent connections
going to each backend server.
From the POV of the SOAP API, an additional connection (7) is added because of
the regular healthcheck from HAProxy.
OK.
Post by Claudio Kuenzler
So now we have the problem that idle connections are not accounted for and
HAProxy keeps letting new connections through.
This causes the SOAP backend servers to freeze up.
I see. To be honest, this is the first such report in the 5 years since server-side keep-alive was implemented, so this tends to confirm that this server is not behaving like most others.
Post by Claudio Kuenzler
Post by Willy Tarreau
The possible alternative would be to have an option to say that idle
connections are accounted for and that some of them will be killed before
passing a new connection to the server, but that will significantly reduce
the efficiency of server-side keep-alive.
Yes, such an option would be really helpful. Should probably be turned off
by default, but it would be a great help for such scenarios.
But none of them should be killed if new requests are passed over to the
backend server.
Yes we need to kill them, otherwise you'll end up exactly in the current
situation, except that instead of having the extra connection queued at
the server and waiting there for an idle connection to terminate, it would
be queued into haproxy waiting for such an idle connection to terminate.
Post by Claudio Kuenzler
In fact the option could simply count the number of established TCP connections going through HAProxy to the backend server and use this as the CUR value.
I see, but then basically you'll never be able to send requests there, because the pending (unused) idle connections keep the count at a high value.
Post by Claudio Kuenzler
The maxconn limit would then be reached by all connections (whether active or idle), resulting in HAProxy returning a 503 error.
This is becoming a bit ugly.
Post by Claudio Kuenzler
Post by Willy Tarreau
If you're really short on server-side connections and want to optimize
them as much as possible, you can try to enable "http-reuse".
That's a good idea, but unfortunately won't work in this case.
Each session, even an idle one, has a bound "ticket" in the SOAP API. Reusing a connection would basically hijack a session already in progress, resulting in data corruption.
So in fact this "application" pretends to speak HTTP but is not compatible
with the HTTP spec. You're possibly taking risks by placing HTTP components
between the client and this thing.
Post by Claudio Kuenzler
I know it's a weird application design and I personally haven't ever seen
anything like this before.
I easily believe you ;-)
Post by Claudio Kuenzler
I'm happy to provide my time/support for you to get in place such an
option. Just let me know.
Anyway it will not happen before 1.8 is released and we start to work on
1.9, as it would take time away from the already planned features.
Post by Claudio Kuenzler
In the meantime I'll probably try to solve this using iptables limits
behind HAProxy (between HAProxy and backend server).
It won't really work. There might be something which can work, which is
to chain to a TCP listener. It will enforce the maxconn count at the TCP
level. For example:

listen foo
mode http
...
server app02-1 127.0.0.1:10001
server app02-2 127.0.0.1:10002
server app02-3 127.0.0.1:10003

listen app02-1
mode tcp
bind 127.0.0.1:10001
server app02-1 10.10.10.12:12345 maxconn 6 maxqueue 0

listen app02-2
mode tcp
bind 127.0.0.1:10002
server app02-2 10.10.10.12:12346 maxconn 6 maxqueue 0

...

By the way, looking at your config, I'm now wondering why
you are using HTTP mode in your frontend/backend instead of
TCP mode. You're adding a cookie but you mentioned that the
application doesn't support requests being mixed over connections
so I guess that the stickiness is only ensured at the connection
level and the cookie very likely is useless. Then by using only
"mode tcp" you could have exactly what you need, ie: maxconn
enforced at the TCP level.
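
Just to illustrate, a tcp-mode variant could look roughly like this (only a sketch reusing your addresses; both the frontend and the backend have to be in tcp mode, and in tcp mode maxconn simply counts TCP connections, idle or not):

frontend app-in
mode tcp
bind *:18382
default_backend app-out

backend app-out
mode tcp
balance roundrobin
server app01-p-18383 10.10.10.11:18383 maxconn 5 maxqueue 1 check fall 1 rise 2
server app01-p-18384 10.10.10.11:18384 maxconn 5 maxqueue 1 check fall 1 rise 2
(and so on for the remaining servers)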

Regards,
willy
Claudio Kuenzler
2017-08-09 09:44:48 UTC
Salut Willy,
Post by Willy Tarreau
So in fact this "application" pretends to speak HTTP but is not compatible
with the HTTP spec. You're possibly taking risks by placing HTTP components
between the client and this thing.
Yeah, tell that to Adobe. The applications we're talking about are InDesign Server instances.
Post by Willy Tarreau
Post by Claudio Kuenzler
I'm happy to provide my time/support for you to get in place such an
option. Just let me know.
Anyway it will not happen before 1.8 is released and we start to work on
1.9, as it would take time away from the already planned features.
OK, whenever the time is right, let me know.
Post by Willy Tarreau
Post by Claudio Kuenzler
In the meantime I'll probably try to solve this using iptables limits
behind HAProxy (between HAProxy and backend server).
It won't really work.
It does. I've set a REJECT for connection counts above 7 like this (for each backend server, obviously):
iptables -A OUTPUT -p tcp --syn -d 10.10.10.12/32 --dport 18390 -m connlimit --connlimit-above 7 -j REJECT --reject-with tcp-reset
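
Wrapped in a loop over both servers and all ports it becomes something like this (sketch):

for ip in 10.10.10.11 10.10.10.12; do
  for port in $(seq 18383 18390); do
    iptables -A OUTPUT -p tcp --syn -d $ip/32 --dport $port -m connlimit --connlimit-above 7 -j REJECT --reject-with tcp-reset
  done
done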

The only downside I see with this setup is that the healthcheck from
HAProxy returns an L4TOUT when 7 connections are already in use by the
(application-)client. The 8th connection (healthcheck coming from HAProxy)
is rejected by iptables.

I'm aware that I'm working around the issue here, but so far this is the
closest solution I came up with to meet the application requirements.
Post by Willy Tarreau
There might be something which can work, which is
to chain to a TCP listener. It will enforce the maxconn count at the TCP
listen foo
mode http
...
server app02-1 127.0.0.1:10001
server app02-2 127.0.0.1:10002
server app02-3 127.0.0.1:10003
listen app02-1
mode tcp
bind 127.0.0.1:10001
server app02-1 10.10.10.12:12345 maxconn 6 maxqueue 0
listen app02-2
mode tcp
bind 127.0.0.1:10002
server app02-2 10.10.10.12:12346 maxconn 6 maxqueue 0
...
By the way, looking at your config, I'm now wondering why
you are using HTTP mode in your frontend/backend instead of
TCP mode. You're adding a cookie but you mentioned that the
application doesn't support requests being mixed over connections
so I guess that the stickiness is only ensured at the connection
level and the cookie very likely is useless. Then by using only
"mode tcp" you could have exactly what you need, ie: maxconn
enforced at the TCP level.
This is a very good idea, but I don't think it would work.
I wrote in another mailing list thread before that once a connection was sent to a backend server, all future requests MUST go to the same backend server again (no failover, no balancing - a stable, fixed connection between client and backend server).
To my current knowledge, this is only possible by setting a cookie, which can only be read in http mode. And "balance source" won't help in this case, because the source IP of the client will always be the same.
If I'm mistaken, please let me know.


In your tcp example I just noticed "maxqueue 0". Does this actually disable
the queue? I asked this in my previous mailing list thread :)

cheers,
ck
Lukas Tribus
2017-08-17 10:15:31 UTC
Hello,
Post by Willy Tarreau
There might be something which can work, which is
to chain to a TCP listener. It will enforce the maxconn count at the TCP
level.
Or a simpler workaround: disable HTTP keep-alive towards the backend with "option http-server-close".


cheers,
lukas
