Balancing NGINX reverse proxy

Balancing NGINX reverse proxy

anish10dec
Hi,

I have been reading the documentation and also searching this forum for a
while, but could not find an answer to my question.
Currently, I have 2 NGINX nodes acting as a reverse proxy (in a failover
setup using keepalived). The revproxy injects an authentication header for
an online website (transport is HTTPS).

As the number of users grows, the load on the current machine starts to get
uncomfortably high and I would like to be able to spread the load over both
nodes.

What would be the best way to set this up?

I already tried adding both IP addresses to the DNS. But this, rather
predictably, only sent a handful of users to the secondary node.
I now plan to set up an NGINX node in front of these revproxy nodes,
acting as a round-robin load balancer. Will this work? Given that
traffic is over HTTPS, terminating the request will probably put all the
load on the load balancer and therefore not solve my issue.
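
For clarity, the setup I have in mind would be roughly the following (addresses and certificate paths are placeholders, not my actual config); note that TLS is terminated at the balancer, which is exactly my concern:

```nginx
# Hypothetical front balancer: terminates TLS, round-robins to the revproxies
upstream revproxies {
    server 192.0.2.11:443;   # revproxy node 1 (placeholder)
    server 192.0.2.12:443;   # revproxy node 2 (placeholder)
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/site.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/tls/site.key;  # placeholder path

    location / {
        # Traffic is decrypted here, then re-encrypted toward the revproxies
        proxy_pass https://revproxies;
    }
}
```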

Your advice and help are greatly appreciated.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272713#msg-272713

_______________________________________________
nginx mailing list
[hidden email]
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Balancing NGINX reverse proxy

Alex Samad
Hi

If I am reading this right, you currently have too much load on one nginx server and you wish to relieve this by adding another nginx server in front of it?

What I have is 2 nodes, but I use pacemaker instead of keepalived - I like it as a better solution - but that's for another thread.

What you can do with pacemaker is have 1 IP address distributed between multiple machines - up to 16 nodes, I think, from memory.

It uses the Linux iptables module for doing this. It is dependent on the source IPs being distributed; if they are clumped together, the hashing algorithm will not make it any better.
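
If I remember right, the iptables piece is the CLUSTERIP target, roughly along these lines (address and multicast MAC are placeholders; each node gets a different --local-node):

```
# Run on every node; only --local-node changes per machine
iptables -I INPUT -d 192.0.2.100/32 -i eth0 -p tcp --dport 443 \
  -j CLUSTERIP --new --hashmode sourceip \
  --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1
```

Each node hashes the source IP and only answers for its share of the hash buckets, which is why clumped source IPs defeat it.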

How many requests are you getting to overload nginx? I thought it was able to handle very large amounts.

Are your nodes big enough? Do you need more CPUs or memory?

Alex




On 3 March 2017 at 01:40, polder_trash <[hidden email]> wrote:

Re: Balancing NGINX reverse proxy

anish10dec
Alexsamad,
I might not have been clear, allow me to try again:

* Currently 2 NGINX revproxy nodes, 1 active, the other on standby in case
node 1 fails.
* Since I am injecting an authentication header into the request, the HTTPS
request has to be offloaded at the node, which introduces additional load
compared to injecting into non-encrypted requests.
* Current peak load is ~60 concurrent requests at ~100% CPU. Concurrent
requests are expected to more than double, so the revproxy will be the bottleneck.
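
For context, the header injection looks roughly like this (the header name, backend address, and certificate paths below are placeholders, not my actual config):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/site.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/tls/site.key;  # placeholder path

    location / {
        # TLS must be terminated here so the header can be added
        proxy_set_header X-Auth-User $remote_user;  # hypothetical header
        proxy_pass http://192.0.2.20:8080;          # hypothetical backend
    }
}
```

This is why the request cannot simply be passed through encrypted: the header can only be injected after offloading.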

The NGINX revproxies run as VMs and I can ramp up the machine specs a
little, but I do not expect this to completely solve the issue.
Therefore I am looking for some method of spreading the requests over
multiple backend revproxies, without the load balancer frontend having to
deal with SSL offloading.

From the KEMP LoadMaster documentation I found that this technique is called
SSL passthrough. I am currently checking whether NGINX also supports it.

What do you think? Will this solve my issue? Am I on the wrong track?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272729#msg-272729


Re: Balancing NGINX reverse proxy

Alex Samad
Hi

Firstly, I am fairly new to nginx.


From what I understand you have a standard sort of setup.


2 nodes (VMs) with keepalived, allowing nginx to be active/passive.

You have SSL requests; once nginx terminates the SSL, it injects a security header/token and then, I presume, passes the request on to a back end. I presume the nginx-to-application-server leg is non-SSL.

You are having performance issues with the SSL + header-injection part, which seems to be limiting you to approx. 60 requests before you hit 100% CPU. This seems very, very low to me: looking at my prod setup, which is similar to yours, I am seeing 600 connections and request rates ranging from 8-400/sec, all while the CPU stays very low.

We try to use long-lived TCP/SSL sessions, but we also use a thick client, so we have more control.

Not sure about KEMP LoadMaster.

What I described to you was our potential plan for when the load gets too much for the active/passive setup.

It would allow you to take your ~60 sessions and distribute them between 2 and up to 16 nodes (I believe 16 is the max for pacemaker) in an active/active setup.

The 2-node setup would be the same as yours:

router -> VLAN with the 2 nodes -> node A would only process node A's data and node B only node B's data. In theory this has the potential to double your req/sec.



Alex


On 3 March 2017 at 19:33, polder_trash <[hidden email]> wrote:

Re: Balancing NGINX reverse proxy

Peter Booth
So I have a few different thoughts:

1. Yes, nginx does support SSL passthrough. You can configure nginx to stream your request to your SSL backend. I do this when I don't have control of the backend and it has to be SSL. I don't think that's your situation.

2. I suspect that there's something wrong with your SSL configuration and/or your nginx VMs are underpowered. Can you test the throughput when requesting HTTP static resources? Check with webpagetest.org that the SSL expense is only being paid on the first request.

3. It's generally better to terminate SSL as early as possible and have the bulk of your communication be unencrypted.
What spec are your VMs?
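
To illustrate point 1: a passthrough setup with the stream module might look roughly like this (addresses are placeholders); the encrypted bytes are relayed without being decrypted, so the header injection would still have to happen on the backend nodes:

```nginx
# Requires nginx built with the stream module (--with-stream)
stream {
    upstream tls_backends {
        server 192.0.2.11:443;  # revproxy node 1 (placeholder)
        server 192.0.2.12:443;  # revproxy node 2 (placeholder)
    }

    server {
        listen 443;
        # No "ssl" here: TLS is not terminated, just passed through
        proxy_pass tls_backends;
    }
}
```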

Sent from my iPhone

On Mar 5, 2017, at 6:00 PM, Alex Samad <[hidden email]> wrote:


Re: Balancing NGINX reverse proxy

Steve Holdoway
In reply to this post by anish10dec
Hi,


On 03/03/17 03:40, polder_trash wrote:
> Hi,
>
>
> I already tried adding both IP addresses to the DNS. But this, rather
> predictably, only sent a handful of users to the secondary node.
>
This should not be the case (well, for BIND anyway), as it should be
delivering them in a round-robin fashion. Alternatively (BIND again)
you can set up split-horizon DNS to deliver records as a function of
source IP address.
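
For example, a round-robin pair of A records in a BIND zone would look roughly like this (name and addresses are placeholders):

```
; example zone fragment - placeholders, not real records
www    300    IN    A    192.0.2.11
www    300    IN    A    192.0.2.12
```

BIND typically rotates the order of the answers between queries, and the short TTL (300 s here) keeps clients re-resolving rather than pinning to one node.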

Maybe the TTL hadn't expired on existing lookups, or the client is doing
something strange?

Steve

--
Steve Holdoway BSc(Hons) MIITP
https://www.greengecko.co.nz/
Linkedin: https://www.linkedin.com/in/steveholdoway
Skype: sholdowa


Re: Balancing NGINX reverse proxy

anish10dec
Thanks for your answer.
It turned out a monitoring system was also doing DNS lookups in the meantime,
which made it appear that two clients had received the same DNS response.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,273041#msg-273041
