Issues with limit_req_zone.


rnburn
With limit_req_zone rate set to 100/s and burst=50, we have the
observations below.

Scenario 1
==========
No. of requests made by JMeter = 170
# of requests expected to fail = 20
# of requests actually failed = 23

Question: why are 3 more requests failing, and is this much failure
expected?

Scenario 2
==========
No. of requests made by JMeter = 160
# of requests expected to fail = 10
# of requests actually failed = 14

Question: why are 4 more requests failing, and is this much failure
expected?

Scenario 3
==========
No. of requests made by JMeter = 145
# of requests expected to fail = 0
# of requests actually failed = 4

Question: why are 4 requests failing when all were expected to pass, and
is this much failure expected?

Why do the observed numbers vary from the configured limits?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274089,274089#msg-274089

_______________________________________________
nginx mailing list
[hidden email]
http://mailman.nginx.org/mailman/listinfo/nginx
Re: Issues with limit_req_zone.

Francis Daly
On Sun, May 07, 2017 at 04:31:37AM -0400, Vishnu Priya Matha wrote:

Hi there,

> With limit_req_zone rate set to 100/s and burst=50, we have the
> observations below.
>
> Scenario 1
> ==========
> No. of requests made by JMeter = 170
> # of requests expected to fail = 20
> # of requests actually failed = 23
>
> Question: why are 3 more requests failing, and is this much failure
> expected?

Why do you expect 20 to fail?

I expect 0 to fail.

Unless you use "nodelay", in which case the number of failures depends
on how quickly the requests are received.

Note: "100/s" does not mean "accept 100, then accept no more until 1
second has passed". It means something closer to "accept 1, then accept
no more until 0.01 seconds has passed".
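That accounting can be sketched with a toy leaky-bucket model in Python. This is my own simplification of what limit_req does with "nodelay" (nginx's real implementation tracks state per zone key in milliseconds), not the actual code:

```python
def limit_req(arrival_times, rate, burst):
    """Toy model of nginx's limit_req leaky bucket ("nodelay" variant).

    `excess` measures how far ahead of the permitted rate the client is;
    it drains at `rate` per second, and each accepted request adds one.
    A request that would push `excess` past `burst` is rejected.
    """
    excess = 0.0
    last_t = arrival_times[0] if arrival_times else 0.0
    verdicts = []
    for t in arrival_times:
        # Drain the bucket for the time elapsed since the last request.
        excess = max(0.0, excess - (t - last_t) * rate)
        last_t = t
        if excess > burst:              # over the burst allowance: 503
            verdicts.append("rejected")
        else:
            excess += 1.0
            verdicts.append("accepted")
    return verdicts

# rate=100/s, burst=0: a request 0.005s after the last is rejected,
# but one a full 0.01s later is accepted.
print(limit_req([0.0, 0.005, 0.01], rate=100, burst=0))
# ['accepted', 'rejected', 'accepted']
```

The second request arrives before the 0.01s period has drained, so it fails even though fewer than 100 requests arrived in that second.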

        f
--
Francis Daly        [hidden email]

Re: Issues with limit_req_zone.

rnburn
Then how does the burst size play a role here? How is the burst size
calculated?

Since requests_per_sec is 100/s, i.e. 1 request per 0.01 sec, does that
mean burst=50 is also 50 per 0.01 sec, or is it 1 per 0.02 sec?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274089,275256#msg-275256


Re: Issues with limit_req_zone.

Francis Daly
On Mon, Jul 03, 2017 at 02:46:39AM -0400, Vishnu Priya Matha wrote:

Hi there,

> Then how does the burst size play a role here? How is the burst size
> calculated?

"burst" means, roughly, "let this many happen quickly before fully
enforcing the one-per-period rule".

> Since requests_per_sec is 100/s, i.e. 1 request per 0.01 sec, does that
> mean burst=50 is also 50 per 0.01 sec, or is it 1 per 0.02 sec?

100/s means, roughly, "handle 1, then no more until 0.01s has passed".

With that config, "burst=50" means (again, roughly), "handle (up to)
50, then no more until (up to) 0.5s has passed".

You maintain the overall rate, but allow it to be exceeded in the short
term, followed by a quiet period.
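A toy leaky-bucket sketch (my own simplification, assuming "nodelay"; not nginx's actual code) shows the rate=100/s, burst=50 numbers: 51 requests pass at once (one at the rate plus the burst of 50), and a 0.5s quiet period buys back the 50-request burst:

```python
def limit_req(arrival_times, rate, burst):
    """Toy model of nginx's limit_req leaky bucket ("nodelay" variant):
    `excess` drains at `rate`/s, each accepted request adds one, and a
    request that would push `excess` past `burst` is rejected."""
    excess = 0.0
    last_t = arrival_times[0] if arrival_times else 0.0
    verdicts = []
    for t in arrival_times:
        excess = max(0.0, excess - (t - last_t) * rate)
        last_t = t
        if excess > burst:
            verdicts.append("rejected")
        else:
            excess += 1.0
            verdicts.append("accepted")
    return verdicts

# 60 requests at the same instant: 51 accepted, 9 rejected.
print(limit_req([0.0] * 60, rate=100, burst=50).count("accepted"))   # 51

# Same burst, then 60 more after a 0.5s quiet period: 50 of those pass
# too, because 0.5s at 100/s drains the whole burst allowance.
arrivals = [0.0] * 60 + [0.5] * 60
print(limit_req(arrivals, rate=100, burst=50).count("accepted"))     # 101
```

So burst=50 is neither "50 per 0.01 sec" nor "1 per 0.02 sec": it is a one-off credit of 50 extra requests that refills at the configured rate.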

Test it by setting rate=1/s and burst=5.

Send in 10 requests very quickly.

See when they are handled.

Wait a while.

Send in 10 requests, the next one as soon as the previous one gets
a response.

See when they are handled.
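Under a toy leaky-bucket model (again my own simplification, assuming "nodelay" so over-burst requests are rejected rather than delayed), the two experiments come out like this: the instant burst of 10 lets 6 through (one at the rate plus burst=5), while the paced run lets all 10 through:

```python
def limit_req(arrival_times, rate, burst):
    """Toy model of nginx's limit_req leaky bucket ("nodelay" variant):
    `excess` drains at `rate`/s, each accepted request adds one, and a
    request that would push `excess` past `burst` is rejected."""
    excess = 0.0
    last_t = arrival_times[0] if arrival_times else 0.0
    verdicts = []
    for t in arrival_times:
        excess = max(0.0, excess - (t - last_t) * rate)
        last_t = t
        if excess > burst:
            verdicts.append("rejected")
        else:
            excess += 1.0
            verdicts.append("accepted")
    return verdicts

# 10 requests sent at once with rate=1/s, burst=5: 6 pass, 4 fail.
print(limit_req([0.0] * 10, rate=1, burst=5).count("accepted"))             # 6

# 10 requests paced at one per second: the bucket drains fully between
# requests, so every one of them passes.
print(limit_req([float(i) for i in range(10)], rate=1, burst=5).count("accepted"))  # 10
```

Without "nodelay", the real nginx would instead delay the over-rate requests to enforce the 1/s pacing, which is why the second, response-paced experiment behaves so differently from the first.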

        f
--
Francis Daly        [hidden email]