Discussion:
HaProxy for SFTP load balancing
v***@abinnovative.com
2016-10-06 10:26:59 UTC
Hi,

I am facing an issue with HA for SFTP nodes behind HAProxy. I have 2 SFTP
nodes and send files through HAProxy, which passes them to each node
in turn.
But, for example, when the second node is down, it does not send files
only to the first node; instead it still sends one request to the first
node and one to the second. That means every second request fails. How
do I fix this? Please help.


haproxy.cfg (also attached)


OS: Ubuntu Linux
--
Thanks
Vijay
Lukas Tribus
2016-10-06 14:04:33 UTC
Hi Vijay,


Enable health checks by adding the "check" keyword to both of your
server configuration lines.
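Applied to the kind of two-server TCP section posted later in this thread, that looks like the following sketch (server addresses taken from the config further down; not a tested configuration):

```haproxy
    # "check" makes HAProxy probe each server and take a dead one
    # out of rotation instead of still sending it every other request
    server ftp01 172.21.10.100:22 check
    server ftp02 172.21.10.101:22 check
```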


Lukas
v***@abinnovative.com
2016-10-06 14:07:18 UTC
We tried that; it doesn't work.


Vijay
Post by Lukas Tribus
Hi Vijay,
enable health-checks, by adding the "check" keyword to both your
server configuration lines.
Lukas
--
Thanks
Vijay
Andrew Smalley
2016-10-06 14:13:20 UTC
If you want a connect-to-port check, you can use the example below:


listen sftp
    bind 192.168.100.100:8022 transparent
    mode http
    balance leastconn
    option forwardfor if-none
    stick on hdr(X-Forwarded-For,-1)
    stick on src
    stick-table type string len 64 size 10240k expire 30m peers loadbalancer_replication
    server backup 127.0.0.1:9081 backup non-stick
    option http-keep-alive
    option redispatch
    option abortonclose
    maxconn 40000
    server RIP_Name 192.168.100.0:80 weight 100 check port 8022 inter 4000 rise 2 fall 2 minconn 100 maxconn 0 on-marked-down shutdown-sessions

Or, if you wish to use an external check script, something like the
below will work.

listen sftp
    bind 192.168.100.100:8022 transparent
    mode http
    balance leastconn
    option forwardfor if-none
    stick on hdr(X-Forwarded-For,-1)
    stick on src
    stick-table type string len 64 size 10240k expire 30m peers loadbalancer_replication
    server backup 127.0.0.1:9081 backup non-stick
    option external-check
    external-check command /var/lib/loadbalancer.org/check/sftp_check.sh
    option http-keep-alive
    option redispatch
    option abortonclose
    maxconn 40000
    server RIP_Name 192.168.100.0:80 weight 100 check inter 4000 rise 2 fall 2 minconn 100 maxconn 0 on-marked-down shutdown-sessions
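The contents of sftp_check.sh are not shown in the thread. A minimal external check for an SSH/SFTP backend could grab the server's greeting banner and look for the "SSH-2.0-" prefix; the sketch below is an assumption about what such a script might do (the banner-grab approach, the nc timeout, and the function name are all assumptions, not the appliance's actual script). HAProxy invokes an external-check command with the proxy address/port and the server address/port as arguments, and a zero exit status marks the server UP.

```shell
#!/bin/sh
# Hypothetical SFTP health check (a sketch, not the real sftp_check.sh).
# HAProxy calls it as: script <proxy_addr> <proxy_port> <server_addr> <server_port>
# Exit 0 => server UP, non-zero => server DOWN.

looks_like_ssh() {
    # An SSH/SFTP server greets clients with a banner such as "SSH-2.0-OpenSSH_7.4"
    case "$1" in
        SSH-2.0-*) return 0 ;;
        *)         return 1 ;;
    esac
}

if [ "$#" -ge 4 ]; then
    server_addr="$3"
    server_port="$4"
    # Read the first line the server sends; -w 3 bounds the wait at 3 seconds
    banner=$(nc -w 3 "$server_addr" "$server_port" </dev/null 2>/dev/null | head -n 1 | tr -d '\r')
    if looks_like_ssh "$banner"; then
        exit 0
    fi
    exit 1
fi
```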


I hope this helps.



Regards

Andrew Smalley

Loadbalancer.org Ltd.
Post by v***@abinnovative.com
We gave, that doesn't works.
Vijay
Hi Vijay,
enable health-checks, by adding the "check" keyword to both your server
configuration lines.
Lukas
--
Thanks
Vijay
m***@abinnovative.com
2016-10-18 14:16:33 UTC
Hi Andrew,

We need high availability for SFTP.


HAProxy is installed on a server with IP 1.2.3.4.

Several client hostnames are mapped to this IP, as below:

client1.hh.com mapped to IP 1.2.3.4
client2.hh.com mapped to IP 1.2.3.4
client3.hh.com mapped to IP 1.2.3.4


For client1 the associated SFTP servers are sftp1, sftp2 and sftp3.


When a request comes from client1.hh.com, it should be serviced by any of the SFTP servers associated with this client, i.e. sftp1, sftp2 or sftp3.


To achieve this, below is the haproxy.cfg:


listen sftp-server
    bind :2121
    mode tcp
    maxconn 2000
    option redis-check
    retries 3
    option redispatch
    # checking if the request is coming from client1
    acl devclient1 ssl_fc_sni_reg -i devclient1.healthhub.net.in
    # req.ssl_sni ssl_fc_sni_reg
    balance roundrobin

    use_backend srvs_devclient1 if devclient1


backend srvs_devclient1
    balance roundrobin
    server ftp01 172.31.10.247:22 check weight 2
    server ftp02 172.31.10.156:22 check weight 2

But when I try to transfer a file, I get an exception: "connection closed by foreign client".
I am able to transfer files directly to the SFTP server (sftp1), which is up and running, but through HAProxy it does not work.

Kindly suggest how to fetch the server name the request is coming from, so that I can map that particular client to its associated SFTP servers.

Moreover, if any SFTP server is down, HAProxy should route the request to any of the associated SFTP servers which are up.

E.g. if sftp1 is down, HAProxy should be able to route the request to sftp2 or sftp3, which are up and running.

Please assist us in resolving the issue.


Thanks in advance.



Andrew Smalley
2016-10-18 14:50:34 UTC
Hello Malreddy,

Below is a working VIP I have created on our loadbalancer.org appliance
which will do what you want, without the ACL.

Regarding the ACL: you will not be able to do some of this in TCP mode.
"ssl_fc_sni" only works when HAProxy terminates TLS on the frontend, and
SSH/SFTP does not use TLS, so there is no SNI for HAProxy to inspect.

https://www.haproxy.com/doc/aloha/7.0/haproxy/acls.html

listen sftp
    bind 192.168.100.100:8022 transparent
    mode tcp
    balance leastconn
    stick on src
    stick-table type ip size 10240k expire 30m peers loadbalancer_replication
    server backup 127.0.0.1:9081 backup non-stick
    option redispatch
    option abortonclose
    maxconn 40000
    server sftp-1 192.168.100.101:22 weight 100 check port 22 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions



Regards

Andrew Smalley

Loadbalancer.org Ltd.
Post by m***@abinnovative.com
[...]
Igor Cicimov
2016-10-06 21:40:53 UTC
Post by v***@abinnovative.com
when the second node is down. it is not passing files to only first
node. instead, one time to first node and one time to second node. That
means alternatively my second request is getting failures. how to fix this.
please help me asap.
You need

option redispatch
Post by v***@abinnovative.com
haproxy.cfg (Also attached the cfg file)
OS - linux ubuntu
--
Thanks
Vijay
v***@abinnovative.com
2016-10-07 03:55:27 UTC
Great, thanks fellows.

But I see "option redispatch" only works for HTTP proxies, am I right?

I am using TCP.

haproxy.cfg:

listen sftp-server
    bind :2121
    mode tcp
    maxconn 2000
    balance roundrobin
    option tcplog
    option tcp-check
    server ftp01 172.21.10.100:22
    server ftp02 172.21.10.101:22
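Following the advice given earlier in the thread, a variant of this block with health checks and redispatch enabled might look like the sketch below (the inter/rise/fall values are assumptions, not tested settings):

```haproxy
listen sftp-server
    bind :2121
    mode tcp
    maxconn 2000
    balance roundrobin
    option tcplog
    retries 3
    # re-dispatch a failed connection attempt to another server
    option redispatch
    # "check" probes each node so a dead one is removed from rotation
    server ftp01 172.21.10.100:22 check inter 4000 rise 2 fall 2
    server ftp02 172.21.10.101:22 check inter 4000 rise 2 fall 2
```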




Thanks
Vijay
Post by v***@abinnovative.com
when the second node is down. it is not passing files to only
first node. instead, one time to first node and one time to second
node. That means alternatively my second request is getting failures.
how to fix this. please help me asap.
You need
option redispatch
Post by v***@abinnovative.com
haproxy.cfg (Also attached the cfg file)
OS - linux ubuntu
--
Thanks
Vijay
--
Thanks
Vijay
Willy Tarreau
2016-10-07 05:54:15 UTC
Post by v***@abinnovative.com
Great & Thanks Fellaws.
But i see |option redispatch|only works for HTTP proxies, am a right.
No, that's wrong: redispatch works at the TCP connection level, so it is
used with both HTTP and TCP. What makes you think it's only for HTTP?

Willy
Vijay .D.R
2016-10-07 08:52:33 UTC
Igor Cicimov
2016-10-07 09:41:12 UTC
Post by Vijay .D.R
Hi Willy,
I read it in a forum, and I made the changes in haproxy.cfg as below,
but it's not working as expected.
Listen
...
...
Retries 3
Option redispatch
Vijay, did you read Lukas's email? Did you do as he suggested? Without
health checks nothing will work; how do you expect HAProxy to know that
a backend server is down?
Willy Tarreau
2016-10-07 15:16:53 UTC
Hi Igor,
Post by Igor Cicimov
Listen
...
...
Retries 3
Option redispatch
Vijay, did you read Lukas's email? Did you do as he suggested? Without
health check nothing will work, how do you expect for haproxy to know that
a backend server is down?
In fact, with non-deterministic LB algorithms (i.e. round robin), the
algorithm takes care of avoiding the previous server during a retry (if
there's no stickiness, of course). That's not rocket science; it just
tries to improve the situation before the server is detected as down.
But that must not be used as an excuse for avoiding checks.

Cheers,
Willy
Igor Cicimov
2016-10-07 21:19:12 UTC
Post by Willy Tarreau
Hi Igor,
Post by Igor Cicimov
Listen
...
...
Retries 3
Option redispatch
Vijay, did you read Lukas's email? Did you do as he suggested? Without
health check nothing will work, how do you expect for haproxy to know that
a backend server is down?
In fact with non-determinism LB algorithms (ie: round robin), the algorithm
takes care of avoiding the previous server during a retry (if there's no
stickiness of course). That's not rocket science, it just tries to improve
the situation before the server is detected as down. But that must not be
used as an excuse for avoiding checks.
Cheers,
Willy
Thanks Willy, that makes sense of course. I guess the OP has an issue
with the FTP server then, like it is not really down from the HAProxy
point of view but not able to serve requests. Since we were not told
what the issue is or how the failover was tested, this is just an
assumption. A more detailed inspection of the FTP server's health when
the issue occurs might reveal something.
v***@abinnovative.com
2016-10-07 22:52:31 UTC
Hi All,

Thanks. I have changed the options: check port 22, rise, fall, retries,
redispatch, etc. Nothing works.

My client utility is written in Apache Camel, so it retries at least 3
times. I have only two nodes behind HAProxy, so one node fails
alternately and the second retry by Camel succeeds, but HAProxy is not
automatically switching to the running node.

Thanks
Vijay
Post by Igor Cicimov
[...]
--
Thanks
Vijay
Igor Cicimov
2016-10-08 01:04:56 UTC
Post by v***@abinnovative.com
[...]
I think some logs are in order, Vijay :-) Can you please paste the
relevant HAProxy log lines for a failed request and the consecutive
successful one? The flags might tell us what's going on. Don't forget to
obfuscate any sensitive data.
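For reference, TCP-mode log lines like those only appear if logging is configured; a generic sketch (the syslog target and facility are assumptions):

```haproxy
global
    # send logs to the local syslog daemon (address/facility are assumptions)
    log 127.0.0.1:514 local0

listen sftp-server
    bind :2121
    mode tcp
    log global
    # tcplog records the timers and termination-state flags mentioned above
    option tcplog
```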

Cheers,
Igor