Revisiting the long-overdue "TODO always gunzip" in ngx_http_gunzip_filter_module.c

J.R.
Recently I was looking into having my upstream server gzip content
that is sent to nginx (which is acting as a reverse proxy) to reduce
local bandwidth. However, I needed to decompress the response so nginx
could do some manipulation; it would then obviously get re-compressed
(typically with brotli) before being sent out to the client.

After finding numerous posts saying that "that feature doesn't exist",
then looking at the source of the gunzip module and finding the "TODO"
saying the exact thing I was looking for, I managed to find this old
post from 2013 where someone made a patch to do the exact thing
needed:

http://mailman.nginx.org/pipermail/nginx-devel/2013-January/003276.html

It's only about 11 lines of code; attached is a newer patch I did
against 1.17.4, since the gunzip source has changed a bit. Basically
the patch adds the option "gunzip_force on;".
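
For reference, here is roughly what the relevant location looks like
with the patch applied (just a sketch of my setup; "gunzip_force" only
exists with the attached patch, "backend" is a placeholder upstream,
and the brotli directives assume the third-party ngx_brotli module):

    location / {
        proxy_pass http://backend;

        # ask the upstream to gzip its responses to cut local bandwidth
        proxy_set_header Accept-Encoding "gzip";

        # decompress unconditionally so later filters can modify the body;
        # stock "gunzip on;" only fires when the client lacks gzip support
        gunzip on;
        gunzip_force on;

        # re-compress towards the client (third-party ngx_brotli module)
        brotli on;
        brotli_comp_level 6;
    }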

I do agree with the TODO comment in the source that having a way for
other modules to request decompression would be great, but having a
configuration option (like this patch) would also be a good feature.

After implementing this patch and configuring the upstream server to
use gzip level 1 (figuring that would maximize speed and minimize CPU
usage), my local bandwidth usage between the two immediately dropped
80-85%, and I swear even the load levels dropped a hair, probably due
to less networking overhead, despite the extra compressing &
decompressing of content.
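
In case it helps anyone, the upstream side of that is just stock gzip
configuration (illustrative values, and this assumes the upstream is
also nginx):

    # on the upstream / backend server
    gzip on;
    gzip_comp_level 1;   # cheapest on CPU, still a big win on the wire
    gzip_proxied any;    # also compress responses sent back to the proxy
    gzip_types text/plain text/css application/json application/javascript;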

I realize there is probably a mile-long TODO list for nginx features,
but something as trivial as this patch has a huge impact on network
usage, and I'm sure a better implementation would still only be a
minimal code change. Just trying to give this a nudge so it's not
forgotten; I came across quite a few posts online asking how to do
such a thing, so the demand is out there.

Jason


Attachment: 002_force_gunzip.patch (2K)

Re: Revisiting the long-overdue "TODO always gunzip" in ngx_http_gunzip_filter_module.c

Maxim Dounin
Hello!

On Sun, Oct 20, 2019 at 12:00:07PM -0500, J.R. wrote:

> Recently I was looking into having my upstream server gzip content
> that is sent to nginx (which is acting as a reverse proxy) to reduce
> local bandwidth. However, I needed to decompress the response so nginx
> could do some manipulation; it would then obviously get re-compressed
> (typically with brotli) before being sent out to the client.
>
> After finding numerous posts saying that "that feature doesn't exist",
> then looking at the source of the gunzip module and finding the "TODO"
> saying the exact thing I was looking for, I managed to find this old
> post from 2013 where someone made a patch to do the exact thing
> needed:
>
> http://mailman.nginx.org/pipermail/nginx-devel/2013-January/003276.html
>
> It's only about 11 lines of code; attached is a newer patch I did
> against 1.17.4, since the gunzip source has changed a bit. Basically
> the patch adds the option "gunzip_force on;".
>
> I do agree with the TODO comment in the source that having a way for
> other modules to request decompression would be great, but having a
> configuration option (like this patch) would also be a good feature.
>
> After implementing this patch and configuring the upstream server to
> use gzip level 1 (figuring that would maximize speed and minimize CPU
> usage), my local bandwidth usage between the two immediately dropped
> 80-85%, and I swear even the load levels dropped a hair, probably due
> to less networking overhead, despite the extra compressing &
> decompressing of content.
>
> I realize there is probably a mile-long TODO list for nginx features,
> but something as trivial as this patch has a huge impact on network
> usage, and I'm sure a better implementation would still only be a
> minimal code change. Just trying to give this a nudge so it's not
> forgotten; I came across quite a few posts online asking how to do
> such a thing, so the demand is out there.

Just in case, my position hasn't changed since then:

http://mailman.nginx.org/pipermail/nginx-devel/2013-January/003284.html

Also note that if you really need to force gunzipping for some reason,
you can do so out of the box by using an additional local proxying
layer with an appropriate "proxy_set_header Accept-Encoding".

--
Maxim Dounin
http://mdounin.ru/

Re: Revisiting the long-overdue "TODO always gunzip" in ngx_http_gunzip_filter_module.c

J.R.
> Also note that if you really need to force gunzipping for some reason,
> you can do so out of the box by using an additional local proxying
> layer with an appropriate "proxy_set_header Accept-Encoding".

Yes, that is how I had it configured before patching: all content
between nginx and the upstream servers was uncompressed, using the
directive you mentioned. But the goal was to have the data between the
two servers compressed to reduce network traffic.

Yes, I also read your comments from 2013, but after looking over the
current gunzip module code, I couldn't find any flag that another
module could set to force decompression. Or did I miss it buried
somewhere?

Thanks for the quick response!

Jason

Re: Revisiting the long-overdue "TODO always gunzip" in ngx_http_gunzip_filter_module.c

Maxim Dounin
Hello!

On Mon, Oct 21, 2019 at 12:49:38PM -0500, J.R. wrote:

> > Also note that if you really need to force gunzipping for some reason,
> > you can do so out of the box by using an additional local proxying
> > layer with an appropriate "proxy_set_header Accept-Encoding".
>
> Yes, that is how I had it configured before patching: all content
> between nginx and the upstream servers was uncompressed, using the
> directive you mentioned. But the goal was to have the data between the
> two servers compressed to reduce network traffic.

Well, it looks like I've failed to explain.  You can have things
compressed between servers and then decompressed on the frontend
server.  To do so, you can configure additional proxying on the
frontend server, for example:

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Accept-Encoding "";
        }
    }

    server {
        listen 8080;
        server_name gunzip.example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Accept-Encoding "gzip";
            gunzip on;
        }
    }

With such a configuration all traffic between the frontend server
and the backend servers can be compressed using gzip.  Yet
everything in the first server block isn't compressed, and can be
processed by sub filter and so on.
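
For example, the first server block could then rewrite the
now-uncompressed body with something like this (hostnames are just
placeholders, and this assumes nginx is built with the sub filter
module):

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Accept-Encoding "";

        # works because the gunzip layer already decompressed the response
        sub_filter 'http://internal.example.com/' 'https://example.com/';
        sub_filter_once off;
    }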

> Yes, I also read your comments from 2013, but after looking over the
> current gunzip module code, I couldn't find any flag that another
> module could set to force decompression. Or did I miss it buried
> somewhere?

It wasn't implemented.  Rather, it is something I think should be
implemented if we want to process/modify compressed responses
within nginx.

--
Maxim Dounin
http://mdounin.ru/

Re: Revisiting the long-overdue "TODO always gunzip" in ngx_http_gunzip_filter_module.c

J.R.
> Well, it looks like I've failed to explain.  You can have things
> compressed between servers and then decompressed on the frontend
> server.  To do so, you can configure additional proxying on the
> frontend server, for example:

Thanks for the sample configuration; that makes sense with the second
proxy. I will have to give that setup a try and see how it compares.

Always appreciate the prompt replies!

Jason