proxy module handling early responses


proxy module handling early responses

Frank Liu
Hi,

When using nginx as a reverse proxy, in the case of a large POST payload, what does nginx do when the upstream server sends a response before nginx has finished sending it the full payload?

One use case is an upstream that enforces a payload limit and sends an HTTP 413 response once the amount of body it has read reaches that limit. Will nginx catch this error, stop sending further data, and return the 413 to the client?

I have seen a Stack Overflow discussion covering a different use case, but I am not sure how nginx behaves in this one.

Regards,
Frank
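
For reference, one rough way to observe the behaviour being asked about is to
stream a large body toward the proxy and poll for an early reply. This is only
a sketch; the address 127.0.0.1:8080 and the /upload location are made-up
examples, not part of any real setup.

    import select
    import socket

    HOST, PORT = "127.0.0.1", 8080          # assumed proxy address
    BODY_SIZE = 100 * 1024 * 1024           # 100 MB test payload
    CHUNK = 64 * 1024

    s = socket.create_connection((HOST, PORT))
    s.sendall(
        b"POST /upload HTTP/1.1\r\n"
        b"Host: upstream.test\r\n"
        b"Content-Length: " + str(BODY_SIZE).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n"
    )

    sent = 0
    response = b""
    try:
        while sent < BODY_SIZE:
            # Stop sending the body as soon as the proxy answers early.
            ready, _, _ = select.select([s], [], [], 0)
            if ready:
                break
            s.sendall(b"x" * CHUNK)
            sent += CHUNK
        response = s.recv(65536)
    except (BrokenPipeError, ConnectionResetError):
        pass                                # proxy closed the connection on us
    finally:
        s.close()

    print("sent %d of %d bytes" % (sent, BODY_SIZE))
    print(response.decode("latin-1"))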



Re: proxy module handling early responses

Maxim Dounin
Hello!

On Tue, Dec 17, 2019 at 06:37:58PM -0800, Frank Liu wrote:

> When using nginx as a reverse proxy, in case of a large POST payload, what
> does nginx do when upstream server sends response before nginx finishes
> posting the full payload?
>
> One use case is upstream enforces some payload limit and sends a HTTP/413
> response when the payload read reaches certain limit. Will nginx catch this
> error, stop sending further, and return the 413 to client?

Exactly.

--
Maxim Dounin
http://mdounin.ru/

Re: proxy module handling early responses

Frank Liu
Our upstream returns an HTTP 413 along with "Connection: close" in the headers and then closes the socket. It seems nginx catches the socket close in the middle of sending the large payload. This triggers an additional 502, and the client ends up seeing both the 413 and the 502 from nginx.
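
For context, the upstream behaviour described above boils down to something
like the sketch below: answer with a 413 and close the socket at once, without
draining the rest of the request body. The port and the 1 MB limit are made up
for illustration.

    import socket

    LIMIT = 1024 * 1024                     # assumed 1 MB body limit

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 9000))           # assumed upstream address
    srv.listen(1)

    conn, _ = srv.accept()
    received = 0
    while received < LIMIT:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)

    # Reply and close immediately.  Request body still in flight makes the
    # TCP stack answer with a reset, so the 413 may never reach nginx and
    # nginx reports an upstream error (502) instead.
    conn.sendall(b"HTTP/1.1 413 Payload Too Large\r\n"
                 b"Content-Length: 0\r\n"
                 b"Connection: close\r\n"
                 b"\r\n")
    conn.close()
    srv.close()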


Re: proxy module handling early responses

Maxim Dounin
Hello!

On Wed, Dec 18, 2019 at 10:09:56AM -0800, Frank Liu wrote:

> Our upstream returns HTTP/413 along with "Connection: close" in the header,
> then closes the socket. It seems nginx catches the socket close in the
> middle of sending the large payload. This triggers additional 502 and
> client gets both 413 and 502 from nginx.

Your upstream server's behaviour is incorrect: it has to continue
reading the data in the socket buffers and in transit (usually this is
called "lingering close", see http://nginx.org/r/lingering_close),
or nginx simply won't get the response.  The client will get a
simple and quite reasonable 502 in such a situation (not "413 and
502").

This problem is explicitly documented in RFC 7230, "6.6.  
Tear-down" (https://tools.ietf.org/html/rfc7230#section-6.6):

   If a server performs an immediate close of a TCP connection, there is
   a significant risk that the client will not be able to read the last
   HTTP response.  If the server receives additional data from the
   client on a fully closed connection, such as another request that was
   sent by the client before receiving the server's response, the
   server's TCP stack will send a reset packet to the client;
   unfortunately, the reset packet might erase the client's
   unacknowledged input buffers before they can be read and interpreted
   by the client's HTTP parser.

   To avoid the TCP reset problem, servers typically close a connection
   in stages.  First, the server performs a half-close by closing only
   the write side of the read/write connection.  The server then
   continues to read from the connection until it receives a
   corresponding close by the client, or until the server is reasonably
   certain that its own TCP stack has received the client's
   acknowledgement of the packet(s) containing the server's last
   response.  Finally, the server fully closes the connection.

If the upstream server fails to do connection teardown properly,
the only option is to fix the upstream server: it should either
implement proper connection teardown or avoid returning
responses without reading the request body first.
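
A minimal sketch, in Python, of the staged teardown described above for an
upstream that wants to return a 413 without reading the whole body: send the
response, half-close the write side, keep reading until the peer closes or a
linger deadline passes, then close.  The function name and timeouts are
assumptions for illustration, not nginx's or any particular server's
implementation.

    import socket
    import time

    def send_413_with_lingering_close(conn, linger_time=5.0):
        conn.sendall(b"HTTP/1.1 413 Payload Too Large\r\n"
                     b"Content-Length: 0\r\n"
                     b"Connection: close\r\n"
                     b"\r\n")
        # Half-close: stop writing but keep reading, so request data still
        # in flight does not trigger a TCP reset that could destroy the 413
        # before the peer has read it.
        conn.shutdown(socket.SHUT_WR)
        conn.settimeout(1.0)
        deadline = time.monotonic() + linger_time
        while time.monotonic() < deadline:
            try:
                if not conn.recv(65536):    # peer closed its side
                    break
            except socket.timeout:
                continue                    # keep draining until the deadline
            except OSError:
                break
        conn.close()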

--
Maxim Dounin
http://mdounin.ru/