proxy module handling early responses


proxy module handling early responses

Frank Liu
Hi,

When using nginx as a reverse proxy with a large POST payload, what does nginx do when the upstream server sends a response before nginx has finished sending the full payload?

One use case is an upstream that enforces a payload limit and sends an HTTP 413 response once the data it has read reaches that limit. Will nginx catch this error, stop sending the rest of the body, and return the 413 to the client?

I saw a Stack Overflow discussion about a different use case, but I'm not sure how nginx behaves here.

Regards,
Frank



Re: proxy module handling early responses

Maxim Dounin
Hello!

On Tue, Dec 17, 2019 at 06:37:58PM -0800, Frank Liu wrote:

> When using nginx as a reverse proxy, in case of a large POST payload, what
> does nginx do when upstream server sends response before nginx finishes
> posting the full payload?
>
> One use case is upstream enforces some payload limit and sends a HTTP/413
> response when the payload read reaches certain limit. Will nginx catch this
> error, stop sending further, and return the 413 to client?

Exactly.

--
Maxim Dounin
http://mdounin.ru/

Re: proxy module handling early responses

Frank Liu
Our upstream returns HTTP 413 along with "Connection: close" in the headers, then closes the socket. It seems nginx catches the socket close in the middle of sending the large payload. This triggers an additional 502, and the client ends up getting both a 413 and a 502 from nginx.
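
Below is a minimal raw-socket sketch (Python, purely illustrative; the port, response bytes and limit handling are made up, not the actual upstream) of the immediate-close pattern being described:

    # Sketch only (illustrative, not the actual upstream): a raw-socket
    # server that answers 413 early and then closes the connection
    # immediately, without draining the rest of the request body.
    import socket

    RESPONSE = (
        b"HTTP/1.1 413 Request Entity Too Large\r\n"
        b"Connection: close\r\n"
        b"Content-Length: 0\r\n"
        b"\r\n"
    )

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8081))   # port is an arbitrary example
    srv.listen(1)

    while True:
        conn, _ = srv.accept()
        conn.recv(4096)             # reads only part of the request; a real
                                    # server would count bytes against its limit
        conn.sendall(RESPONSE)      # early 413 while the body is still arriving
        conn.close()                # immediate close: body data still in flight
                                    # can trigger a TCP RST that destroys the
                                    # response before the proxy has read it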



Re: proxy module handling early responses

Maxim Dounin
Hello!

On Wed, Dec 18, 2019 at 10:09:56AM -0800, Frank Liu wrote:

> Our upstream returns HTTP/413 along with "Connection: close" in the header,
> then closes the socket. It seems nginx catches the socket close in the
> middle of sending the large payload. This triggers additional 502 and
> client gets both 413 and 502 from nginx.

Your upstream server's behaviour is incorrect: it has to continue
reading the data in the socket buffers and in transit (usually this
is called "lingering close", see http://nginx.org/r/lingering_close),
or nginx simply won't get the response.  The client will get a
simple and quite reasonable 502 in such a situation (not "413 and
502").

This problem is explicitly documented in RFC 7230, "6.6.  
Tear-down" (https://tools.ietf.org/html/rfc7230#section-6.6):

   If a server performs an immediate close of a TCP connection, there is
   a significant risk that the client will not be able to read the last
   HTTP response.  If the server receives additional data from the
   client on a fully closed connection, such as another request that was
   sent by the client before receiving the server's response, the
   server's TCP stack will send a reset packet to the client;
   unfortunately, the reset packet might erase the client's
   unacknowledged input buffers before they can be read and interpreted
   by the client's HTTP parser.

   To avoid the TCP reset problem, servers typically close a connection
   in stages.  First, the server performs a half-close by closing only
   the write side of the read/write connection.  The server then
   continues to read from the connection until it receives a
   corresponding close by the client, or until the server is reasonably
   certain that its own TCP stack has received the client's
   acknowledgement of the packet(s) containing the server's last
   response.  Finally, the server fully closes the connection.

If the upstream server fails to do connection teardown properly,
the only option is to fix the upstream server: it should either
implement proper connection teardown or avoid returning responses
without reading the request body first.
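
A rough Python sketch of that staged ("lingering") close, as it would look on the upstream side (illustrative only; the response bytes and the 5-second timeout are assumptions, not a drop-in fix):

    # Sketch of the staged close from RFC 7230, section 6.6, on the
    # upstream side.  The response and timeout are illustrative.
    import socket

    EARLY_RESPONSE = (
        b"HTTP/1.1 413 Request Entity Too Large\r\n"
        b"Connection: close\r\n"
        b"Content-Length: 0\r\n"
        b"\r\n"
    )

    def respond_and_close_in_stages(conn: socket.socket,
                                    linger_timeout: float = 5.0) -> None:
        """Send an early response, then tear the connection down in stages."""
        conn.sendall(EARLY_RESPONSE)
        # Stage 1: half-close -- stop writing, keep reading.
        conn.shutdown(socket.SHUT_WR)
        # Stage 2: keep draining whatever the peer (nginx) is still sending
        # until it closes its side (recv() returns b"") or the timeout hits,
        # so no unread data is left to provoke a TCP reset.
        conn.settimeout(linger_timeout)
        try:
            while conn.recv(65536):
                pass
        except (socket.timeout, OSError):
            pass
        # Stage 3: only now fully close the socket.
        conn.close()

Done this way, the response reaches the proxy before the connection is torn down instead of being clobbered by a reset.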

--
Maxim Dounin
http://mdounin.ru/

Re: proxy module handling early responses

Frank Liu
Hi,

If you read the same RFC, section 6.5, right before the section you mentioned, you can see:
   A client sending a message body SHOULD monitor the network connection
   for an error response while it is transmitting the request.  If the
   client sees a response that indicates the server does not wish to
   receive the message body and is closing the connection, the client
   SHOULD immediately cease transmitting the body and close its side of
   the connection.

In this case, the server sent an HTTP 413 (along with "Connection: close") to indicate it did not wish to receive the message body. Does nginx immediately cease transmitting the body and close its side of the connection?
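
For reference, the client-side behaviour section 6.5 describes might look roughly like the following Python sketch (the host, port, request and chunk sizes are all illustrative assumptions, and nginx's actual implementation is in C and more involved):

    # Rough sketch of the client behaviour quoted above from RFC 7230,
    # section 6.5: stream a large body, but watch for an early response
    # and stop sending if one arrives.
    import select
    import socket

    HOST, PORT = "127.0.0.1", 8081
    CHUNK = b"x" * 8192
    TOTAL_CHUNKS = 10_000                   # deliberately large payload

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(
        b"POST /upload HTTP/1.1\r\n"
        b"Host: upstream.example\r\n"
        b"Content-Length: %d\r\n\r\n" % (len(CHUNK) * TOTAL_CHUNKS)
    )

    response = b""
    try:
        for _ in range(TOTAL_CHUNKS):
            readable, _, _ = select.select([sock], [], [], 0)
            if readable:
                # An early response (e.g. the 413) arrived: cease
                # transmitting the body and half-close our side.
                response = sock.recv(65536)
                sock.shutdown(socket.SHUT_WR)
                break
            sock.sendall(CHUNK)
        # Drain the rest of the response, if any, before closing.
        while True:
            data = sock.recv(65536)
            if not data:
                break
            response += data
    except OSError:
        # If the server closed abruptly, the reset may have destroyed the
        # response before it could be read -- the failure mode discussed
        # in this thread.
        pass
    finally:
        sock.close()

    print(response.split(b"\r\n", 1)[0] or b"(no response received)")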

Thanks!
Frank




Re: proxy module handling early responses

Maxim Dounin
Hello!

On Fri, Jul 10, 2020 at 09:40:52AM -0700, Frank Liu wrote:

> If you read the same RFC, section 6.5, right before the section you
> mentioned, you can see:
>
>    A client sending a message body SHOULD monitor the network connection
>    for an error response while it is transmitting the request.  If the
>    client sees a response that indicates the server does not wish to
>    receive the message body and is closing the connection, the client
>    SHOULD immediately cease transmitting the body and close its side of
>    the connection.
>
> In this case, server sent HTTP/413 (along with Connection: close) to
> indicate it did not wish to receive the message body. Does nginx
> immediately cease transmitting the body and close its side of the
> connection?

It does.  But "immediately" from nginx's point of view can easily
mean "way too late" from the TCP stack's point of view, resulting
in nginx not being able to get the response at all.

To reiterate: if the upstream server fails to do connection
teardown properly, the only option is to fix the upstream server.
This is not something that can be solved on the nginx side.

Everything that can be done on the nginx side is believed to be
already implemented, including sending the client partially
obtained responses with all the bytes nginx was able to read from
the socket (provided nginx was able to read at least the response
headers).

--
Maxim Dounin
http://mdounin.ru/