Discussion:
Gzip compression and transfer: chunked
Vladimir Mihailenco
2017-01-23 10:54:10 UTC
Hi,

I am using HAProxy as a load balancer/reverse proxy in front of a Rails/Go
application. I am upgrading from a working HAProxy 1.6 config to 1.7.2,
and it looks like I need to change my existing config, because HAProxy 1.7
truncates responses from the Rails/Rack application.

With HAProxy 1.6 and compression enabled:
- I can load the full HTML (200 KB)
- HTML is not compressed
- Transfer-Encoding: "chunked"
- no Content-Length header

With the same config on HAProxy 1.7:
- only the first 14 KB are available
- no Transfer-Encoding
- Content-Length: 14359

With HAProxy 1.7 and compression disabled:
- full HTML is available
- HTML is not compressed
- Transfer-Encoding: "chunked"
- no Content-Length header

Any recommendations? Should I disable compression in the Rails/Rack app?
Christopher Faulet
2017-01-23 11:06:24 UTC
Hi,

Could you share both configurations, please? And if possible, the
request/response headers for all scenarios. The compression code was
rewritten in 1.7, so it is possible that something was broken.

Headers returned by your backend could be useful too.
--
Christopher
Vladimir Mihailenco
2017-01-24 09:55:54 UTC
This is the config:
https://gist.github.com/vmihailenco/9010ad37f5aeb800095a6b18909ae7d5.
The backends don't have any options. I already tried removing `http-reuse
safe`, but it makes no difference.

HAProxy 1.7 with compression (HTML not fully loaded) -
https://gist.github.com/vmihailenco/05bda6e7a49b6f78cd2f749abb0cf5b3
HAProxy 1.7 without compression (HTML fully loaded) -
https://gist.github.com/vmihailenco/d8732e53acac3769a85b59afd7336bab
HAProxy 1.7 with compression and Rails configured to set Content-Length
via `config.middleware.use Rack::ContentLength` (HTML fully loaded) -
https://gist.github.com/vmihailenco/13a809f486c4e1833ef813a019549180

Christopher Faulet
2017-01-25 09:41:26 UTC
Hi,

Thanks for the details. A few things here puzzle me.

I guess that when you disable compression, it means you comment out the
"compression" lines in the frontend section. In that case, we can see
the response is chunked. Because it is untouched by HAProxy, the chunking
comes from your backend. No problem there.

But when compression is enabled, there is a Content-Length header
and no Transfer-Encoding header. That's really strange, because HAProxy
never adds a Content-Length header anywhere. So I'm tempted to think
it comes from your backend, but that contradicts my previous
remark. And there is no Content-Encoding header, which means
HAProxy did not compress the response.

So, to be sure, if possible, it would be helpful to have a tcpdump of the
data exchanged between HAProxy and your backends (or something similar,
as you prefer). Send it to me in private so as not to flood the ML. In
the meantime, I will try to investigate.
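
For the capture, something like this on the HAProxy machine would record
the traffic to a single backend (the interface, backend address, and port
are placeholders for your setup):

# full-packet capture of the HAProxy <-> backend exchanges
tcpdump -i eth0 -s 0 -w haproxy-backend.pcap host 192.0.2.10 and port 3000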
Post by Vladimir Mihailenco
With HAProxy 1.6 and compression enabled:
- I can load the full HTML (200 KB)
- HTML is not compressed
- Transfer-Encoding: "chunked"
- no Content-Length header
Just to be sure: is there a typo here? You meant "HTML is
compressed", right?
--
Christopher
Vladimir Mihailenco
2017-01-26 10:15:36 UTC
Post by Christopher Faulet
Just to be sure: is there a typo here? You meant "HTML is
compressed", right?

No, it was not compressed with HAProxy 1.6. AFAIK compression is
automatically disabled for chunked responses in HAProxy <1.7 -
https://bogomips.org/rainbows-public/CD8781D3-288B-4B61-85ED-***@gmail.com/.
At least that matches the behavior I saw... Also, the HAProxy 1.7 changelog
says "MAJOR: http: re-enable compression on chunked encoding".
Post by Christopher Faulet
if possible, it would be helpful to have a tcpdump of the data exchanged
between HAProxy and your backends

I can't do it on the existing staging environment, because it constantly
receives requests and I am not good enough with tcpdump to filter out a
single request.

Christopher Faulet
2017-01-31 14:56:24 UTC
Hi Vladimir,

Sorry for my late reply, I was pretty busy these last few days. I
investigated your problem a little: I ran some tests and carefully read
the code. Everything seems to work as expected, and I was not able to
reproduce what you experienced with HAProxy 1.7.2.

First, in HAProxy, except with a specific configuration, we never remove
any "Transfer-Encoding" header, and we never add any "Content-Length"
header. During HTTP header parsing, if a "Transfer-Encoding" header with
the value "chunked" is found, we remove all "Content-Length" headers, if
any. This is always done, with or without the compression filter. Then,
when compression is enabled and the response payload must be compressed
by HAProxy, we remove all "Content-Length" headers and add
"Transfer-Encoding: chunked" if needed.
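
To illustrate with hypothetical headers: a backend response that starts with

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 204800

should therefore, once HAProxy compresses it, reach the client as

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Transfer-Encoding: chunked

with the Content-Length header removed.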

Next, when the response is truncated, there is no "Content-Encoding"
header, so I'm tempted to think that gzip compression is not applied to
the response.

So, if there is a bug, it is well hidden (as always with chunked HTTP
payloads ...*sigh*...), and it will be hard for me to hunt it down without
more information about the exchanges between HAProxy and your backend. The
best would be a full-packet network capture. But if the bug is
reproducible, a good start would be the output of the following command:

curl -H header_1 -H header_2 ... -v --raw --compressed -o raw_response your_backend_url

Be sure to set the same headers as for a request through HAProxy.
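
For example (the backend address and Host header here are hypothetical;
--raw keeps the chunked framing intact, so the chunk sizes stay visible
in raw_response):

curl -v --raw --compressed -H 'Host: www.example.com' -o raw_response http://192.0.2.10:3000/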

If the response contains sensitive information, you can remove it; I
only need the chunk sizes.
--
Christopher
Kristjan Koppel
2017-02-03 13:36:43 UTC
Hi!

I seem to have run into the same (or at least a similar) problem as the one reported by Vladimir Mihailenco a little while ago.

I'm running HAProxy v1.7.2 and my backend server is etcd v2.3.7. The client application uses HTTP/1.0 and I have compression enabled in HAProxy. With this configuration everything worked fine with HAProxy v1.6.10, but after upgrading to v1.7 the response is almost always cut off at about 4 KB (roughly 9 times out of 10, but it seems to be completely random).

If I switch the client to HTTP/1.1 or comment out the compression lines in the HAProxy config, then everything is fine again.

I'm able to reproduce this problem in a test environment with curl as the client and a minimal HAProxy config:

global
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    compression algo gzip
    compression type application/json

listen etcd
    bind :80
    server etcd1 127.0.0.1:2379

Here's a test with a partial response via HAProxy v1.7:

$ curl -0 -v -o /tmp/output http://localhost/v2/keys/test
> GET /v2/keys/test HTTP/1.0
> User-Agent: curl/7.38.0
> Host: localhost
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
< X-Etcd-Index: 4
< X-Raft-Index: 22031
< X-Raft-Term: 2
< Date: Fri, 03 Feb 2017 13:10:43 GMT
<
{ [data not shown]
100  3917    0  3917    0     0   767k      0 --:--:-- --:--:-- --:--:--  956k

And a few tries later, the same command gives a full response:

$ curl -0 -v -o /tmp/output http://localhost/v2/keys/test
> GET /v2/keys/test HTTP/1.0
> User-Agent: curl/7.38.0
> Host: localhost
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
< X-Etcd-Index: 4
< X-Raft-Index: 22174
< X-Raft-Term: 2
< Date: Fri, 03 Feb 2017 13:11:55 GMT
<
{ [data not shown]
100  9381    0  9381    0     0  1733k      0 --:--:-- --:--:-- --:--:-- 1832k

The only difference seems to be the number of bytes in the response body.

And if I send the request to etcd directly:

$ curl -0 -v -o /tmp/output http://localhost:2379/v2/keys/test
> GET /v2/keys/test HTTP/1.0
> User-Agent: curl/7.38.0
> Host: localhost:2379
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
< X-Etcd-Index: 4
< X-Raft-Index: 23406
< X-Raft-Term: 2
< Date: Fri, 03 Feb 2017 13:22:11 GMT
<
{ [data not shown]
100  9381    0  9381    0     0  1878k      0 --:--:-- --:--:-- --:--:-- 2290k

I'll be happy to provide any additional details if needed.
--
Kristjan
Christopher Faulet
2017-02-03 16:02:23 UTC
Hi Kristjan and Brian,

Thanks for this information. It helped me find a bug. I don't know
yet whether this is the same problem that Vladimir experienced. I'll try
to send you a patch very soon. For now, I need to test all the cases to
be sure that everything is fixed.
--
Christopher
Christopher Faulet
2017-02-06 09:36:23 UTC
Hi guys,

Could you check whether the attached patch fixes your bug, please?

If I'm right, the bug is a premature close of the server connection when
the content length cannot be determined (neither a "Content-Length" nor
a "Transfer-Encoding" header is present) and a filter is used (here, the
compression filter).
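
For reference, such a response (no framing headers at all, with the end of
the body signalled only by the connection close) can be simulated with
netcat; a rough sketch with an arbitrary port and payload, noting that the
-l flag syntax differs between netcat variants:

# one-shot HTTP/1.0-style backend: no Content-Length, no
# Transfer-Encoding; closing the connection ends the body
{ printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n'
  head -c 200000 /dev/urandom | base64
} | nc -l 8080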

Vladimir, I don't know whether this will fix your bug, because I need
more information to fully understand your problem; it could be an
unrelated one. But with a bit of luck, it will work.
--
Christopher
Brian Loss
2017-02-06 13:50:05 UTC
It seems to do the trick for my issue. Thanks!

Kristjan Koppel
2017-02-06 15:12:43 UTC
Hi!

This patch fixes my issue as well. Thank you very much!
--
Kristjan
Brian Loss
2017-02-03 15:45:22 UTC
I too am seeing something similar with HAProxy 1.7.2. My test configuration is:

global
    maxconn 5

defaults
    mode http
    option http-server-close
    compression algo gzip
    timeout connect 30s
    timeout client 1m
    timeout server 1m

listen fe
    bind :80
    server primary 127.0.0.1:8080


I get 15085 bytes back (that's the size of the curl output file, so it doesn't include headers) and then the response is truncated. This configuration works fine on HAProxy 1.6.3. With HAProxy 1.7.2, if I comment out "compression algo gzip", I get the full response. Alternatively, if I comment out "option http-server-close", I get the full response. Or, if I leave both of those in, I can add "option http-pretend-keepalive" and once again get the full response, as in the sketch below.
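
For reference, the defaults section with that last workaround applied would look like this (an unverified sketch of my config above with the extra option added):

defaults
    mode http
    option http-server-close
    # workaround: avoids the truncation with compression enabled
    option http-pretend-keepalive
    compression algo gzip
    timeout connect 30s
    timeout client 1m
    timeout server 1m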

My server is Wildfly 9.0.2.Final. I don’t *think* its handling of the “Connection: close” header is broken, given that it worked fine on HAProxy 1.6.3. Let me know if any other details would help.