Buffering issues with nginx

Buffering issues with nginx

blason
No matter what configs I try, nginx still keeps buffering my requests.

These are the configs that I apply in my test:
```
        proxy_pass http://localhost:80;
        proxy_request_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_no_cache 1;
        proxy_set_header Host $host;
        proxy_max_temp_file_size 0;
```

In my case I run the nodejs app directly on port 80 and nginx on 8080, then
I run my tests against the proxied and the direct version.

My nodejs app needs to know the exact amount of data that was sent to the
remote end (to calculate transfer speed, and for billing purposes). Ideally,
I'd like to know the number of ACKed bytes of a TCP connection. Node itself
cannot even provide that (it wasn't even possible on Windows until recently).

In my first test I serve a 25MB binary test file. Without
`proxy_max_temp_file_size 0`, nginx would read all 25MB in 5ms and then
continue sending that data to the originator on its own, without me (i.e. my
node app) ever knowing whether all 25MB were delivered or the transfer was
aborted after 1MB.
After I applied all the configs above this started to work better, and since
I proxy in http1.0 mode I can use the connection close event on my node app
as a rough approximation of when the connection completed.

However, when I send for example 500KB of binary data, nginx still reads in
all the data in a couple of milliseconds and closes the connection, and
there is no way for me to know on the node side whether that data was
actually delivered.
For the test I download the data using wget with --limit-rate=10000 to limit
the download on the receiver side to 10KB/s. After 5 seconds I Ctrl+C to
abort the transfer, having loaded just 50KB, while my nodejs side thinks
that everything went well and all 500KB were delivered.


1) How can I make nginx buffer no more than 64KB of data?
2) Can my node app know how much data nginx ended up delivering?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275526#msg-275526

_______________________________________________
nginx mailing list
[hidden email]
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Buffering issues with nginx

Francis Daly
On Mon, Jul 17, 2017 at 02:06:27AM -0400, Dan34 wrote:

Hi there,

> No matter what configs I try, nginx still keeps buffering my requests.

proxy_request_buffering (http://nginx.org/r/proxy_request_buffering)
relates to nginx buffering the request from the client.

proxy_buffering (http://nginx.org/r/proxy_buffering) relates to nginx
buffering the response from the upstream server.

Are you concerned about the request or the response being buffered?

        f
--
Francis Daly        [hidden email]

Re: Buffering issues with nginx

blason
Hi Francis,

> Are you concerned about the request or the response being buffered?

my problem is that the response my node app (the upstream server) generates
is buffered by nginx.
My actual goal is to know the speed and amount of data that my node app sent
to a client. So, when I send a 20MB binary blob from node, I can wait until
the data is sent out and I'll know the approximate speed and size. The
moment I put my node app behind an nginx proxy things fall apart: suddenly
all transfers are instant, and I cannot even tell whether the entire 20MB of
data was delivered or the connection failed in the middle. When I updated
some of these buffering configs things improved, but smaller transfers are
still fully buffered by nginx.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275544#msg-275544


Re: Buffering issues with nginx

Francis Daly
On Mon, Jul 17, 2017 at 09:47:39PM -0400, Dan34 wrote:

Hi there,

> > Are you concerned about the request or the response being buffered?
>
> my problem is that response that my node app (upstream server) generates is
> buffered by nginx.

So you want proxy_buffering (http://nginx.org/r/proxy_buffering) set to
"off" in the location that does the proxy_pass.

> My actual goal is to know speed and amount of data that my node app sent to
> a client.

It may be the case that that is not knowable, in general, in a single
http request.

Depending on the compromises you are willing to make, to accuracy or
convenience, you may be able to come up with something good enough.

> So, when I sent 20MB binary blob from node I can wait till data is
> sent out and I'll know approximate speed and size. The moment I put my node
> app behind an nginx proxy things fall apart: suddenly all transfers are
> instant, and I cannot even know if entire 20MB of data was delivered or if
> the connection failed in the middle.

Yes. That is (part of) what a proxy does. Even without nginx as a
reverse-proxy, your client might be talking through one or more proxy
servers. You will never know whether your response got to the actual
end client, without some extra verification step that only the end
client does.

> When I updated some of these buffering
> configs things improved, but still were failing with smaller uploads that
> are still fully buffered by nginx.

proxy_buffers and proxy_buffer_size can be tuned (lowered, in this case,
probably) to slow down nginx's receive-rate from your upstream.

If you can show one working configuration with a description of how
it does not do what you want it to do, possibly someone can offer some
advice on what to change.

Good luck with it,

        f
--
Francis Daly        [hidden email]

Re: Buffering issues with nginx

Igal @ Lucee.org
In reply to this post by blason

> my problem is that response that my node app (upstream server) generates is
> buffered by nginx.

Generate the following header from node:

    X-Accel-Buffering: no

That will disable nginx's response buffering for that request.


Igal Sapir
Lucee Core Developer
Lucee.org




Re: Buffering issues with nginx

blason
> X-Accel-Buffering: no
> That will disable nginx's buffering for the request.

At first it looked like exactly what I was looking for (after reading the
nginx docs), but after trying it I observed no effect.
In the code that writes headers I added res.setHeader('X-Accel-Buffering',
'no');

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275603#msg-275603


Re: Buffering issues with nginx

blason
In reply to this post by Francis Daly
> Depending on the compromises you are willing to make, to accuracy or
> convenience, you may be able to come up with something good enough.

I have a more or less working solution; nginx breaks it, and I'm trying to
figure out how to fix that.


> Yes. That is (part of) what a proxy does. Even without nginx as a
> reverse-proxy, your client might be talking through one or more proxy
> servers. You will never know whether your response got to the actual
> end client, without some extra verification step that only the end
> client does.

I don't care about any other proxies in between; I care about the bytes that
left my server. Specifically, bytes that left my server and were ACKed by
the next hop (either the final user or some proxy in between). Verification
isn't an option.

> > When I updated some of these buffering configs things improved, but
> > still were failing with smaller uploads that are still fully buffered
> > by nginx.

> proxy_buffers and proxy_buffer_size can be tuned (lowered, in this case,
> probably) to slow down nginx's receive-rate from your upstream.
>
> If you can show one working configuration with a description of how
> it does not do what you want it to do, possibly someone can offer some
> advice on what to change.

I tried proxy_buffering off; and it didn't make a difference. I'm fairly
confident that it's a bug in nginx, or some "feature" that doesn't get
disabled by any config.

Here's full config that I use:

    location / {
        proxy_pass http://localhost:80;
        #proxy_http_version 1.1;
        #proxy_http_version 1.0;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'upgrade';
        #proxy_set_header Connection 'close';
        proxy_buffering off;
        proxy_request_buffering off;
        #proxy_buffer_size 4k;
        #proxy_buffers 3 4k;
        proxy_no_cache 1;
        proxy_set_header Host $host;
        #proxy_cache_bypass $http_upgrade;
        proxy_hide_header X-Powered-By;
        proxy_max_temp_file_size 0;
    }

I run nginx on 8080 for testing, since in my case it's not yet suitable for
live use on 80, and I'm trying to figure out how to fix that.
Here's why I believe there is a bug.

In my case, I wrote test code on the node side that serves some binary
content. I can control the speed at which node serves this content. On the
receiving end (on the other side of the planet) I use wget with
--limit-rate. In the test that I'm trying to fix I send 5MB from nodejs at
20KB/s, and the client requesting that binary data reads it at 10KB/s.
Obviously the overall speed has to be 10KB/s, as it's limited by the client
that requests the data.

What happens is that the entire connection from nginx to node is closed
after node sends all the data to nginx. In my test, 5MB should take
approximately 500s to deliver, but node sees the TCP connection closed 255s
from the start (when there are still 250 more seconds to go and 2.5MB still
stuck on the nginx side). So, no matter what I do, nginx totally breaks my
scenario: it does not obey any of these configs and still buffers 2.5MB.

In case any nginx devs ever read this: I'm running nginx 1.12.1 on Ubuntu.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275605#msg-275605


Re: Buffering issues with nginx

Valentin V. Bartenev-3
On Friday 21 July 2017 07:02:07 Dan34 wrote:
[..]

> I run nginx on 8080, for testing, since it's not suitable for live use on 80
> in my case and I'm trying to figure out how to fix it.
> And here's why I believe that there is a bug.
>
> In my case, I wrote test code on node side that serves some binary content.
> I can control speed at what node serves this content. On receiving end (on
> the other side of the planet) I use wget with --limite-rate. In the test
> that I'm trying to fix I send 5MB from nodejs at 20KB/s speed, client that
> requests that binary data reads it at 10KB/s. Obviously overall speed has to
> be 10KB/s as it's limited by the client that requests the data.
>
> What happens is that entire connection from nginx to node is closed after
> node sends all data to nginx. Basically in my test 5MB will take
> approximately 500s to deliver, but node gets tcp connection closed 255 s
> from start (when there is still 250 more seconds to go and 2.5MB is still
> stuck on nginx side). So, no matter what I do nginx totally breaks my
> scenario, it does not obey any configs and still buffers 2.5MB
>
[..]

No buffering doesn't mean that no buffers are used at all.

In your scenario there are at least 5 buffers involved, and 4 of them
are in the OS kernel.

Here's the list:

1. The write socket buffer in the kernel on the node.js side, where node.js
   writes data.

2. The read socket buffer in the kernel for the node.js connection, from
   which nginx reads data.

3. The heap memory buffer into which nginx reads data from the kernel socket
   buffer (controlled by the proxy_buffers and proxy_buffer_size directives).

   "No buffering" here means that nginx doesn't keep data in these buffers
   for some time, but writes it immediately to the write socket buffer in
   the kernel for the client connection.

4. The write socket buffer in the kernel for the client connection, where
   nginx writes data.

5. The read socket buffer in the kernel for the client connection, from
   which wget reads data.
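
The kernel buffers in this chain can be demonstrated without nginx at all.
A toy Python sketch (my illustration, not anything from nginx): a sender's
send() succeeding only means the data reached a kernel socket buffer, not
that the peer ever read it.

```python
import socket

# Toy illustration: queue data into kernel socket buffers with no reader.
a, b = socket.socketpair()   # a = writer, b = reader that never reads
a.setblocking(False)

queued = 0
try:
    while True:
        # Each send() succeeds until the kernel buffers on both ends fill up.
        queued += a.send(b"x" * 4096)
except BlockingIOError:
    pass

print(f"kernel accepted {queued} bytes before any recv() happened")
a.close()
b.close()
```

The total accepted here is exactly the sum of buffers 1 and 2 from the list
above, with no user-space proxy involved at all.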


   wbr, Valentin V. Bartenev


Re: Buffering issues with nginx

blason
Hello Valentin,

> 1. Write socket buffer in kernel on node.js side where node.js
> writes data.

We can throw this one out of the equation, as I measure my end time by the
event when the socket is closed on the nodejs side (I use http1.0 from
nginx to node to keep this case simple).

> 2. Read socket buffer in kernel for node.js connection from what nginx
> reads data.

SO_RCVBUF shouldn't be over 64KB by default. What does nginx use, and is
there a config that controls it?.. Still, this shouldn't be a big issue;
I'm fine if there is such a constant buffer.


> 3. Heap memory buffer to that nginx reads data from kernel socket buffer
> (controlled by proxy_buffers and proxy_buffer_size directives).
>
> No buffering here means that nginx doesn't keep that data in buffers
> for some time, but writes it immediately to write socket buffer in kernel
> for client connection.

I'm trying to configure these to be skipped or used minimally, e.g. I don't
want any data to be held in these.

> 4. Write socket buffer in kernel for client connection where nginx
> writes data.

SO_SNDBUF shouldn't be over 64KB by default; perhaps nginx changes it as
well. What's the value that nginx uses, and is there a config that
controls it?

> 5. Read socket buffer in kernel for client connection from what wget
> reads data.

We can throw this one out of the equation too, as for my test I use the
final time when wget finishes and prints its stats. There is a highly
unlikely chance that wget actually reads data from the network twice as
fast, but shows a slower speed in its CLI results and "waits" for data even
though it has already been received into local buffers. I don't think this
can happen, but I could check with wireshark just in case.


In short, these could affect my case: SO_RCVBUF and SO_SNDBUF on the nginx
side, plus whatever buffering nginx itself uses for handling data. I ran
the same test with 25MB of data and got an identical result: 12.5MB was
buffered on the nginx side. The things that could affect my case cannot
really add up to 12.5MB and 10 minutes of time.
There is a wild possibility that TCP window scaling resulted in some huge
window on the node->nginx side and ended up storing that 12.5MB in the TCP
window itself, but I'm not sure whether the TCP window is accounted against
SO_RCVBUF or whether RCVBUF is extra on top of TCP internals.

So,.. any ideas how nginx ends up buffering 12.5MB of data?
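
For what it's worth, the per-socket defaults are easy to query. A minimal
Python sketch (values are system-dependent; on Linux a fresh socket starts
from net.core.rmem_default / net.core.wmem_default, and TCP autotuning may
later grow the effective window well past these):

```python
import socket

# Query the kernel's default per-socket buffer sizes on a fresh TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()

print(f"default SO_RCVBUF={rcv}, SO_SNDBUF={snd}")
```

Note that these defaults only apply when the application never calls
setsockopt itself, which (per the next reply) is nginx's default behavior.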

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275608#msg-275608


Re: Buffering issues with nginx

Valentin V. Bartenev-3
On Friday 21 July 2017 13:45:51 Dan34 wrote:
[..]

>
> In short, these could affect my case: SO_RCVBUF, SO_SNDBUF on nginx side and
> whatever buffering nginx uses for handling data. I run that same test with
> 25MB data and I got totally identical result: 12.5MB was buffered on nginx
> side. That stuff that could affect my case cannot really add up to 12.5MB
> and 10 minute of time.
> There is a wild possibility that tcp window scaling resulted in some huge
> window on node->nginx side and ended up storing that 12MB in tcp window
> itself but i'm not sure if TCP window should be accounted into these
> SO_RCVBUF or that RCVBUF is extra data on top of internals of TCP.
>

nginx doesn't set SO_RCVBUF/SO_SNDBUF by default, which usually means the
kernel will use system defaults and auto-scaling.


> So,.. any ideas how come nginx ends up buffering 12.5MB data?
>

You should check tcpdump (or wireshark) to see where the 12.5MB of data
actually got stuck.

  wbr, Valentin V. Bartenev


Re: Buffering issues with nginx

blason
> You should check tcpdump (or wireshark) to see where actually 12.5MB
> of data have been stuck.

Wireshark confirms my assumption: all the data is buffered by nginx.
Moreover, I see some buggy behavior, and I've seen it happen quite often.

This is localhost tcp screenshot: http://i.imgur.com/9Rz6Acs.png

You can see that after 1327 seconds nginx has ACKed 18.5MB (which is
13.9KB/s). node actually writes at 20KB/s to the socket and internally
buffers all unsent data. At this point node stops sending any data, and
after 30 seconds nginx closes the socket (at 1399s).
Then nginx goes on to deliver all the data it had buffered, and when it
finishes sending the 18.5MB it got from node before closing the TCP
connection, it also closes the connection to wget. Wget simply restarts the
file transfer with a new HTTP range request to download starting from
18.5MB; at this point you can see on the screenshot that around 1820s nginx
sends a new GET request to node (that's the range GET).


Here you can see the outgoing packets from node around the time when nginx
closed the socket to node at 1399s: http://i.imgur.com/pdnDIFS.png
You can see that by this time the remote end (wget) had ACKed exactly 14MB
(as I run wget with a 10KB/s rate limit).

So, without any TCP buffers involved, nginx buffers something like 5MB of
data. Moreover, when I review the node->nginx packet capture, nginx was
clearly reading at full speed (the 20KB/s limit on the node side), and at
some point something triggered nginx to stop reading at full speed. This
happened at 391s, at which point nginx had ACKed 7.8MB (which is exactly
20KB/s). At the same time wget had ACKed only 4MB; at this point nginx was
buffering around 4MB and started to slow down its read speed from node.

So the configs have no effect. What else should I check? Effectively, in
this scenario nginx should also read from node at 10KB/s (plus some fixed
buffer), and that doesn't seem to work properly in nginx.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275611#msg-275611


Re: Buffering issues with nginx

blason
I wrote my own proxy, and it appears that the data is all stuck in socket
buffers. If SNDBUF isn't set, the OS will resize it when you try to write
more data than the remote can accept. Overall, in my tests I see this
buffer grow to 2.5MB, while in wireshark I see the difference grow up to
5MB. Since the SO_SNDBUF docs state that the actual internal value is
usually twice what SO_SNDBUF reports, it starts to make sense where this
max 5MB difference goes.
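
The 2x factor is documented Linux behavior: socket(7) says the kernel
doubles whatever value is passed to setsockopt(SO_SNDBUF) to allow space
for bookkeeping overhead, and getsockopt then reports the doubled value. A
quick Python check of this (Linux-specific; other systems may differ):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a 64KB send buffer; per socket(7), Linux doubles the value
# internally, so getsockopt will report roughly 128KB.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
val = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"requested 65536, kernel reports {val}")
s.close()
```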

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275635#msg-275635


Re: Buffering issues with nginx

blason
I added some logging to my proxy test and compared the results with a
wireshark trace at a random point in time (t=511s), and the numbers match
exactly between the logs and wireshark.

This is a log line from my test proxy:

    time: 511s, bytesSent:5571760, down:{ SND:478720 OUTQ:280480 } up:{ RCV:5109117 INQ:3837047 }

SND is SO_SNDBUF, RCV is SO_RCVBUF, OUTQ is SIOCOUTQ, INQ is
SIOCINQ/FIONREAD. "down" is the link between wget and the proxy, and "up"
is the link from the proxy to node on localhost.

At the same time, wireshark shows the up link has 9409056 bytes ACKed and
the down link has 5291529 bytes ACKed, i.e. 9409056-5291529=4117527 bytes
have accumulated inside the proxy process. This is exactly the number of
bytes stuck in the socket queues on the up+down links
(280480 + 3837047 = 4117527).

From the logs it also looks like when INQ and OUTQ reach values close to
SND/RCV, wireshark shows TCP packets with window size = 0 that stop any
transmission.
Perhaps, if I set SND+RCV to 64K, then total buffering should not exceed
128KB inside the proxy buffers.
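
The SIOCOUTQ/SIOCINQ queries behind a log line like that take only a few
lines. A sketch of what such instrumentation might look like in Python
(Linux-only ioctls; the constants match TIOCOUTQ and FIONREAD from the
kernel headers):

```python
import fcntl
import socket
import struct

# Linux ioctl numbers: SIOCOUTQ == TIOCOUTQ, SIOCINQ == FIONREAD.
SIOCOUTQ = 0x5411
SIOCINQ = 0x541B

def queued_bytes(sock):
    """Return (unsent bytes in the send queue, unread bytes in the
    receive queue) for a connected socket."""
    outq = struct.unpack("i", fcntl.ioctl(sock.fileno(), SIOCOUTQ, b"\0" * 4))[0]
    inq = struct.unpack("i", fcntl.ioctl(sock.fileno(), SIOCINQ, b"\0" * 4))[0]
    return outq, inq

# Demo: queue 100 bytes between a connected pair and observe INQ.
a, b = socket.socketpair()
a.sendall(b"x" * 100)
outq, inq = queued_bytes(b)
print(f"receiver has OUTQ={outq} INQ={inq}")
a.close()
b.close()
```

Logging these per connection alongside the wireshark byte counts is exactly
the cross-check described above.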

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275640#msg-275640


Re: Buffering issues with nginx

Francis Daly
On Mon, Jul 24, 2017 at 12:24:43PM -0400, Dan34 wrote:

Hi there,

> I did some logs on my proxy test and compared results with wireshark trace
> at some random point in time (t=511sec)
> And numbers match exactly between logs and wireshark.

Out of interest -- are these buffers especially big because "localhost"
is involved?

As in: if you do a direct wget-to-node connection on 127.0.0.1, do you
see the same problem as with nginx-to-node?

Or: if you make node and nginx be on different machines and do
wget-to-nginx, does that remove (or at least shrink) the problem?


It's not ideal, but if there's a pure-config way to get the "total buffers"
to more closely match the desired size, it might be adequate.

Cheers,

        f
--
Francis Daly        [hidden email]

Re: Buffering issues with nginx

blason
It looks like the buffers are bigger for localhost, but even when it's not
localhost I get 5MB stuck in socket buffers. I was only able to get perfect
results by writing my own proxy in C and doing some obscure nodejs code to
avoid buffering.

In any case, if nginx does not provide a way to control socket buffers I
cannot use it. For example, 'X-Accel-Buffering: no' supposedly disables
buffering (I didn't see any effect from it anyway), so I wanted to add some
kind of headers to tell nginx what buffers to set per connection. In my
case I do regular reverse-proxy stuff with nginx, but on certain
connections I need exact control, and nginx doesn't provide any of that.
haproxy, for example, worked much better for me, but its sndbuf/rcvbuf
settings are global, which is equally unacceptable.

Would it be easy to add headers like X-Accel-Up-Rcvbuf: 12345 and
X-Accel-Down-Sndbuf: 4567, so that on receiving them from the upstream
nginx would configure the sockets used by that connection?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275729#msg-275729


Re: Buffering issues with nginx

blason
In the nginx docs I see an sndbuf option on the listen directive.
Is there something I don't understand about it, or do the nginx developers
not understand the meaning of sndbuf? I do not see the point of setting
sndbuf on a listening socket; it just doesn't make sense to me.
sndbuf/rcvbuf is needed for an upstream proxy connection, or for a
connected socket (accepted from outside), but setting it on a listening
socket seems meaningless.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275730#msg-275730


Re: Buffering issues with nginx

Valentin V. Bartenev-3
On Saturday 29 July 2017 22:41:25 Dan34 wrote:
> In nginx docs I see sndbuf option in listen directive.
> Is there something that I don't understand about it, or developers of nginx
> don't understand meaning of sndbuf... but I do not see a point to set sndbuf
> on a listening socket. It just does not make any sense!
> sndbuf/rcvbuf is needed perhaps for an upstream proxy connection, or for a
> connected socket (from outside), but it just have no meaning to set that on
> a listening socket. Total nonsense.
>

When nginx calls accept(), the SO_RCVBUF/SO_SNDBUF options are inherited
from the listening socket.

  wbr, Valentin V. Bartenev


Re: Buffering issues with nginx

blason
Yes, I tested that and it appears to be the case. However, I don't see
where nginx sets rcvbuf on the upstream socket, as that one cannot be
inherited.
Somehow, even with SND/RCV buffers set to low values and buffering
disabled, I get around 2.5MB stuck on the nginx side. With my own simple
proxy I get perfect results: when the socket buffers are low, no data
accumulates on the proxy side.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275744#msg-275744


Re: Buffering issues with nginx

blason
I've run some tests and I'm pretty sure the reason I was getting 5MB stuck
on the nginx side is that RCVBUF on the upstream socket uses the default
socket buffers, which in my case end up as a 5MB RCV buffer.

I added logging to check that value, and even after I configured sndbuf and
rcvbuf in the listen directive I was getting a 5MB buffer on the upstream
socket. After I configured these buffers on the upstream socket (by
fiddling with the nginx code) I got immediate results, and that 5MB delay
disappeared right away.
Similar to 'X-Accel-Buffering', I added X-Accel-Up-RCVBUF and
X-Accel-Down-SNDBUF headers, and they seem to work as expected.

I'm testing this scenario: the downstream has limited bandwidth and the
upstream (node) can generate data much faster. My goal is to ensure that
the overall read speed from the upstream is limited by the downstream, so
that nginx doesn't try to read faster than the downstream can accept. I'm
OK if nginx buffers some constant amount of data (e.g. not more than 1 sec
of data at the downstream speed).

Even after fixing that, nginx doesn't work as well as the simple
single-threaded vanilla test proxy that I wrote for testing.
That vanilla proxy delivers perfect results for a simple reason: I set the
upstream and downstream buffers to some low value (e.g. 128KB) and then use
a blocking recv and a blocking send in the same thread. This way, whatever
it reads from the upstream it sends right away to the downstream in the
same loop.
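
A minimal Python sketch of that blocking relay loop (my reconstruction, not
the poster's actual C code): because sendall() blocks until the kernel
accepts the data, back-pressure from a slow downstream propagates straight
back to the upstream, and at most one buffer's worth of data ever sits
inside the process.

```python
import socket

BUF = 128 * 1024  # matches the small socket buffers described above

def relay(upstream: socket.socket, downstream: socket.socket) -> int:
    """Blocking relay loop: each chunk read from the upstream is fully
    handed to the kernel for the downstream before the next recv(), so at
    most BUF bytes are held in user space at any moment."""
    total = 0
    while True:
        chunk = upstream.recv(BUF)
        if not chunk:                 # upstream closed the connection
            break
        downstream.sendall(chunk)     # blocks until the kernel accepts it
        total += len(chunk)
    return total
```

With small SO_SNDBUF/SO_RCVBUF values on both sockets, the blocked
sendall() is what throttles the read side, which is the behavior the poster
wanted from nginx.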

Any reason why this wouldn't work with nginx?.. I don't see why it wouldn't
work with async sockets the same way as with a blocking read/send loop in a
vanilla proxy.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275758#msg-275758
