Discussion:
Simply adding a filter causes read error
f***@yahoo.co.jp
2018-12-06 14:20:07 UTC
Permalink
Hi,
I have haproxy (v1.8.14) in front of several nginx backends; everything works fine until I add compression in haproxy.
My config looks like this:
### Config start #####
global
    maxconn         1000000
    daemon
    nbproc 2

defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s

frontend web
    bind *:8000
    mode http
    default_backend app

backend app
    mode http
    #filter compression
    #filter trace
    server nginx01 10.0.3.15:8080
### Config end #####
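For reference: "filter compression" by itself only declares the filter; actual response compression would also need the "compression" directives plus a client that sends Accept-Encoding. A minimal sketch of such a setup (illustrative only, not the configuration under test):

backend app
    mode http
    filter compression
    # compression only applies to matching types and only when the client
    # sends Accept-Encoding; wrk does not send that header by default
    compression algo gzip
    compression type text/html text/plain
    server nginx01 10.0.3.15:8080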

Lua script used with wrk (a.lua):
local count = 0

request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end

01. wrk test against nginx: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8080
Running 10s test @ http://10.0.3.15:8080
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.83ms   17.50ms 260.52ms   76.48%
    Req/Sec    12.85k     2.12k   17.20k    62.63%
  255603 requests in 10.03s, 1.23GB read
Requests/sec:  25476.45
Transfer/sec:    125.49MB

02. Wrk test against haproxy, no filters: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.58ms  109.48ms   1.33s    97.39%
    Req/Sec     7.83k     1.42k   11.95k    66.15%
  155843 requests in 10.07s, 764.07MB read
Requests/sec:  15476.31
Transfer/sec:     75.88MB
03. Wrk test against haproxy, add filter compression: read error
Change
    #filter compression
===>
    filter compression
wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB
04. Wrk test against haproxy, add filter trace, and update flt_trace.c:
static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        // added below:
        // ignore this filter, to avoid the performance hit of its many prints
        return 0;
And change
    #filter compression
    #filter trace
===>
    #filter compression
    filter trace
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB

Is there any config error? Am I doing something wrong?
Thanks
f***@yahoo.co.jp
2018-12-06 14:30:21 UTC
Permalink
Sorry, please ignore this one with the bad formatting. I will send another one.


f***@yahoo.co.jp
2018-12-07 00:06:56 UTC
Permalink
Hi,
Thanks for the reply. I thought the mail formatting was corrupted.
I tried option http-pretend-keepalive; the read errors seem to be gone, but timeout errors appeared (maybe because of the 1000 connections from wrk).
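For reference, a minimal sketch of where that option would typically go, assuming it was added to the backend section of the config quoted below (the exact placement used here is not shown):

backend app
    mode http
    option http-pretend-keepalive
    filter compression
    server nginx01 10.0.3.15:8080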
Thanks

----- Original Message -----
From: Aleksandar Lazic <al-***@none.at>
To: ***@yahoo.co.jp; "***@formilux.org" <***@formilux.org>
Date: 2018/12/6, Thu 23:53
Subject: Re: Simply adding a filter causes read error

Hi.
Post by f***@yahoo.co.jp
Hi,
I have a haproxy(v1.8.14) in front of several nginx backends, everything works
fine until I add compression in haproxy.
There is a similar thread about this topic.

https://www.mail-archive.com/***@formilux.org/msg31897.html

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive

Regards
Aleks
Post by f***@yahoo.co.jp
### Config start #####
global
    maxconn         1000000
    daemon
    nbproc 2
defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s
frontend web
    bind *:8000
    mode http
    default_backend app
backend app
    mode http
    #filter compression
    #filter trace 
    server nginx01 10.0.3.15:8080
### Config end #####
local count = 0
request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end
01. wrk test against nginx: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8080
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.83ms   17.50ms 260.52ms   76.48%
    Req/Sec    12.85k     2.12k   17.20k    62.63%
  255603 requests in 10.03s, 1.23GB read
Requests/sec:  25476.45
Transfer/sec:    125.49MB
02. Wrk test against haproxy, no filters: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.58ms  109.48ms   1.33s    97.39%
    Req/Sec     7.83k     1.42k   11.95k    66.15%
  155843 requests in 10.07s, 764.07MB read
Requests/sec:  15476.31
Transfer/sec:     75.88MB
03. Wrk test against haproxy, add filter compression: read error
Change
    #filter compression
===>
    filter compression
wrk -c 1000 -s a.lua http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB
static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        // add below
       // ignore this filter to avoid performance down since there are many print
        return 0; 
And change
    #filter compression
    #filter trace
===>
    #filter compression
    filter trace
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB
Is there any config error? Am I doing something wrong?
Thanks
f***@yahoo.co.jp
2018-12-07 07:37:47 UTC
Permalink
Hi
I tested more and found that even with option http-pretend-keepalive enabled,
if I increase the test duration, the read errors still appear.
Running 3m test @ http://10.0.3.15:8000
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.84ms   56.36ms   1.34s    92.83%
    Req/Sec    23.11k     2.55k   50.64k    87.10%
  45986426 requests in 3.33m, 36.40GB read
  Socket errors: connect 0, read 7046, write 0, timeout 0
Requests/sec: 229817.63
Transfer/sec:    186.30MB
thanks

----- Original Message -----
From: "***@yahoo.co.jp" <***@yahoo.co.jp>
To: Aleksandar Lazic <al-***@none.at>; "***@formilux.org" <***@formilux.org>
Date: 2018/12/7, Fri 09:06
Subject: Re: Simply adding a filter causes read error

Hi,
Thanks for the reply, I thought the mail format is corrupted..
I tried option http-pretend-keepalive, seems read error is gone, but timeout error raised(maybe its because the 1000 connections of wrk)
Thanks

----- Original Message -----
From: Aleksandar Lazic <al-***@none.at>
To: ***@yahoo.co.jp; "***@formilux.org" <***@formilux.org>
Date: 2018/12/6, Thu 23:53
Subject: Re: Simply adding a filter causes read error

Hi.
Post by f***@yahoo.co.jp
Hi,
I have a haproxy(v1.8.14) in front of several nginx backends, everything works
fine until I add compression in haproxy.
There is a similar thread about this topic.

https://www.mail-archive.com/***@formilux.org/msg31897.html

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive

Regards
Aleks
Post by f***@yahoo.co.jp
### Config start #####
global
    maxconn         1000000
    daemon
    nbproc 2
defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s
frontend web
    bind *:8000
    mode http
    default_backend app
backend app
    mode http
    #filter compression
    #filter trace 
    server nginx01 10.0.3.15:8080
### Config end #####
local count = 0
request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end
01. wrk test against nginx: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8080
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.83ms   17.50ms 260.52ms   76.48%
    Req/Sec    12.85k     2.12k   17.20k    62.63%
  255603 requests in 10.03s, 1.23GB read
Requests/sec:  25476.45
Transfer/sec:    125.49MB
02. Wrk test against haproxy, no filters: everything is OK
wrk -c 1000 -s a.lua http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.58ms  109.48ms   1.33s    97.39%
    Req/Sec     7.83k     1.42k   11.95k    66.15%
  155843 requests in 10.07s, 764.07MB read
Requests/sec:  15476.31
Transfer/sec:     75.88MB
03. Wrk test against haproxy, add filter compression: read error
Change
    #filter compression
===>
    filter compression
wrk -c 1000 -s a.lua http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB
static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        // add below
       // ignore this filter to avoid performance down since there are many print
        return 0; 
And change
    #filter compression
    #filter trace
===>
    #filter compression
    filter trace
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB
Is there any config error? Am I doing something wrong?
Thanks
f***@yahoo.co.jp
2018-12-07 13:59:51 UTC
Permalink
Hi
Thanks for the reply.
I have a test env with 3 identical servers (8-core CPU and 32 GB memory): one for wrk, one for nginx, and one for haproxy.
The network looks like wrk => haproxy => nginx. I have tuned OS settings like open file limits, etc.
The test HTML file is the default nginx index.html. There are no errors when testing wrk => nginx or wrk => haproxy (no filter) => nginx.
Errors only began to appear when I added a filter.
I've considered that compression itself might be affecting performance, but that can't be it, because the request headers sent by wrk do not accept compression.
I've even changed the following code:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        return 0; // ignore this filter, to avoid the performance hit of its many prints
And test with
    filter trace
This way I think there should be no performance impact, since the filter is ignored at the very beginning.
But still there are read errors.
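As a side note, that claim could also be checked on the haproxy side by capturing the header into the logs; a sketch of the frontend with such a capture added (this assumes logging with option httplog is enabled, which is not shown in the config above):

frontend web
    bind *:8000
    mode http
    # captured headers show up between {} in the httplog line; an empty value
    # would confirm that wrk sends no Accept-Encoding header
    http-request capture req.hdr(Accept-Encoding) len 64
    default_backend app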
Please let me know if you need more information.
Thanks,

----- Original Message -----
From: Aleksandar Lazic <al-***@none.at>
To: ***@yahoo.co.jp; "***@formilux.org" <***@formilux.org>
Date: 2018/12/7, Fri 22:12
Subject: Re: Simply adding a filter causes read error

Hi.
Post by f***@yahoo.co.jp
Hi
I tested more, and found that even with option http-pretend-keepalive enabled,
if I increase the test duration , the read error still appear.
Please can you show us some logs from when the error appears?
Can you also tell us some data about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are reaching some system limits, as compression requires some more OS/HW
resources.

Regards
Aleks
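Regarding the log request above: a minimal logging setup that would record the failing requests (along with haproxy's termination flags) might look like the sketch below; the syslog address is an assumption, not taken from the thread:

global
    log 127.0.0.1:514 local0

defaults
    log global
    option httplog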
Post by f***@yahoo.co.jp
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.84ms   56.36ms   1.34s    92.83%
    Req/Sec    23.11k     2.55k   50.64k    87.10%
  45986426 requests in 3.33m, 36.40GB read
  Socket errors: connect 0, read 7046, write 0, timeout 0
Requests/sec: 229817.63
Transfer/sec:    186.30MB
thanks
    ----- Original Message -----
    *Date:* 2018/12/7, Fri 09:06
    *Subject:* Re: Simply adding a filter causes read error
    Hi,
    Thanks for the reply, I thought the mail format is corrupted..
    I tried option http-pretend-keepalive, seems read error is gone, but timeout
    error raised(maybe its because the 1000 connections of wrk)
    Thanks
        ----- Original Message -----
        *Date:* 2018/12/6, Thu 23:53
        *Subject:* Re: Simply adding a filter causes read error
        Hi.
        > Hi,
        >
        > I have a haproxy(v1.8.14) in front of several nginx backends,
        everything works
        > fine until I add compression in haproxy.
        There is a similar thread about this topic.
        Can you try to add this option in your config and see if the problem is
        gone.
        option http-pretend-keepalive
        Regards
        Aleks
        >
        > ### Config start #####
        > global
        >     maxconn         1000000
        >     daemon
        >     nbproc 2
        >
        > defaults
        >     retries 3
        >     option redispatch
        >     timeout client  60s
        >     timeout connect 60s
        >     timeout server  60s
        >     timeout http-request 60s
        >     timeout http-keep-alive 60s
        >
        > frontend web
        >     bind *:8000
        >
        >     mode http
        >     default_backend app
        > backend app
        >     mode http
        >     #filter compression
        >     #filter trace 
        >     server nginx01 10.0.3.15:8080
        > ### Config end #####
        >
        >
        >
        > local count = 0
        >
        > request = function()
        >     local url = "/?count=" .. count
        >     count = count + 1
        >     return wrk.format(
        >     'GET',
        >     url
        >     )
        > end
        >
        >
        > 01. wrk test against nginx: everything is OK
        >
        > wrk -c 1000 -s a.lua http://10.0.3.15:8080
        >   2 threads and 1000 connections
        >   Thread Stats   Avg      Stdev     Max   +/- Stdev
        >     Latency    34.83ms   17.50ms 260.52ms   76.48%
        >     Req/Sec    12.85k     2.12k   17.20k    62.63%
        >   255603 requests in 10.03s, 1.23GB read
        > Requests/sec:  25476.45
        > Transfer/sec:    125.49MB
        >
        >
        > 02. Wrk test against haproxy, no filters: everything is OK
        >
        > wrk -c 1000 -s a.lua http://10.0.3.15:8000
        >   2 threads and 1000 connections
        >   Thread Stats   Avg      Stdev     Max   +/- Stdev
        >     Latency    73.58ms  109.48ms   1.33s    97.39%
        >     Req/Sec     7.83k     1.42k   11.95k    66.15%
        >   155843 requests in 10.07s, 764.07MB read
        > Requests/sec:  15476.31
        > Transfer/sec:     75.88MB
        >
        > 03. Wrk test against haproxy, add filter compression: read error
        >
        > Change
        >
        >     #filter compression
        > ===>
        >     filter compression
        >
        > wrk -c 1000 -s a.lua http://10.0.3.15:8000
        >   2 threads and 1000 connections
        >   Thread Stats   Avg      Stdev     Max   +/- Stdev
        >     Latency    60.43ms   42.63ms   1.06s    91.54%
        >     Req/Sec     7.86k     1.40k   10.65k    67.54%
        >   157025 requests in 10.11s, 769.87MB read
        >   Socket errors: connect 0, read 20, write 0, timeout 0
        > Requests/sec:  15530.67
        > Transfer/sec:     76.14MB
        >
        >
        > static int
        > trace_attach(struct stream *s, struct filter *filter)
        > {
        >         struct trace_config *conf = FLT_CONF(filter);
        >         // add below
        >        // ignore this filter to avoid performance down since there are
        many print
        >         return 0; 
        >
        > And change
        >     #filter compression
        >     #filter trace
        > ===>
        >     #filter compression
        >     filter trace
        >
        >   2 threads and 1000 connections
        >   Thread Stats   Avg      Stdev     Max   +/- Stdev
        >     Latency    64.88ms   77.91ms   1.09s    98.26%
        >     Req/Sec     7.84k     1.47k   11.57k    67.71%
        >   155800 requests in 10.05s, 763.86MB read
        >   Socket errors: connect 0, read 21, write 0, timeout 0
        > Requests/sec:  15509.93
        > Transfer/sec:     76.04MB
        >
        >
        > Is there any config error? Am I doing something wrong?
        >
        > Thanks
        >