Migrating from Varnish

Migrating from Varnish

Andrei
Hi all,

I've been using Varnish for 4 years now, but quite frankly I'm tired of running it for HTTP traffic alongside Nginx for SSL offloading when Nginx can just handle it all. The main issues I'm running into with the transition are cache purging and setting custom expiry TTLs per zone/domain. My questions are:

- Does anyone have any recent working documentation on supported modules/Lua scripts which can achieve wildcard purges as well as specific URL purges?

- How should I go about defining custom cache TTLs for frontpage, dynamic, and static content requests? Currently I have Varnish configured to set the TTLs based on request headers which are added in the config via regex matches against the host being accessed.

Any other caveats or suggestions I should know about?

--Andrei



Re: Migrating from Varnish

Andrei
To follow up on the purge implementation: I would like to avoid walking the entire cache directory for a wildcard request, as my sites accumulate over 200k objects. I'm wondering if there is a clean way of taking a passive route, where the cache would be invalidated/"refreshed" by subsequent requests. That is, I would send a purge request for https://domain.com/.*, and subsequent requests for cached items would then fetch from the backend and update the cache, if that makes any sense.
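
For illustration, one passive approach with stock nginx (a rough sketch; the hostname, zone name, and paths below are assumptions) is to fold a per-host "generation" value into the cache key. Bumping the value and reloading makes every existing entry for that host unreachable, so the next request repopulates from the backend, while the orphaned files are eventually evicted via inactive=/max_size=:

    # Sketch only: "example.com", zone "one" and the paths are placeholders.
    proxy_cache_path /var/cache/nginx keys_zone=one:100m max_size=10g inactive=7d;

    # Bump a host's generation (and reload nginx) to "wildcard purge" it.
    map $host $cache_generation {
        default       1;
        example.com   3;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_cache one;
            # The generation is part of the key, so old entries simply
            # stop matching and are refetched on the next request.
            proxy_cache_key "$cache_generation$scheme$host$request_uri";
        }
    }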


Re: Migrating from Varnish

Maxim Dounin
In reply to this post by Andrei
Hello!

On Thu, Nov 23, 2017 at 09:00:52AM -0600, Andrei wrote:

> Hi all,
>
> I've been using Varnish for 4 years now, but quite frankly I'm tired of
> using it for HTTP traffic and Nginx for SSL offloading when Nginx can just
> handle it all. One of the main issues I'm running into with the transition
> is related to cache purging, and setting custom expiry TTL's per
> zone/domain. My questions are:
>
> - Does anyone have any recent working documentation on supported
> modules/Lua scripts which can achieve wildcard purges as well as specific
> URL purges?

Cache purging is available in nginx-plus, see
http://nginx.org/r/proxy_cache_purge.
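
For reference, the nginx-plus setup is usually wired up along the lines below (the zone and backend names are placeholders); in the commercial version a PURGE request whose URL ends in "*" acts as a wildcard purge when purger=on is set on the cache path:

    # Sketch of the nginx-plus purge setup; "one" and "backend" are placeholders.
    proxy_cache_path /var/cache/nginx keys_zone=one:100m purger=on;

    map $request_method $purge_method {
        PURGE   1;
        default 0;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_cache one;
            proxy_cache_key $scheme$host$request_uri;
            # Requests using the PURGE method remove the matching entry;
            # a URL ending in "*" purges all entries matching the prefix.
            proxy_cache_purge $purge_method;
        }
    }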

> - How should I go about defining custom cache TTL's for: frontpage,
> dynamic, and static content requests? Currently I have Varnish configured
> to set the ttl's based on request headers which are added in the config
> with regex matches against the host being accessed.

Normal nginx approach is to configure distinct server{} and
location{} blocks for different content, with appropriate cache
validity times.  For example:

    server {
        listen 80;
        server_name foo.example.com;

        location / {
            proxy_pass http://backend;
            proxy_cache one;
            proxy_cache_valid 200 5m;
        }

        location /static/ {
            proxy_pass http://backend;
            proxy_cache one;
            proxy_cache_valid 200 24h;
        }
    }

Note well that by default nginx respects what is returned by the
backend in various response headers, and proxy_cache_valid time
only applies if there are no explicit cache validity time set, see
http://nginx.org/r/proxy_ignore_headers.
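
For instance, a minimal sketch (cache zone and backend names assumed) of forcing locally defined TTLs regardless of what the backend sends:

    location / {
        proxy_pass http://backend;
        proxy_cache one;
        # Ignore the backend's caching headers so that proxy_cache_valid
        # alone controls how long responses are cached.  Ignoring
        # Set-Cookie or Vary as well is possible but needs more care.
        proxy_ignore_headers Cache-Control Expires X-Accel-Expires;
        proxy_cache_valid 200 301 302 10m;
        proxy_cache_valid any 1m;
    }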

--
Maxim Dounin
http://mdounin.ru/

Re: Migrating from Varnish

Andrei
Hello Maxim!

On Nov 23, 2017 17:55, "Maxim Dounin" <[hidden email]> wrote:
> Cache purging is available in nginx-plus, see
> http://nginx.org/r/proxy_cache_purge.

I'm aware of the paid version, but I don't have a budget for it yet, and quite frankly this should be a core feature for any caching service. Are there no viable options for the community release? It's a rather important feature for my transition.

> Note well that by default nginx respects what is returned by the
> backend in various response headers, and proxy_cache_valid time
> only applies if there are no explicit cache validity time set, see
> http://nginx.org/r/proxy_ignore_headers.

So to override the TTLs set by the backend, I would have to use proxy_ignore_headers for all headers which can directly affect the intended TTL?

Thank you for your time!


Re: Migrating from Varnish

Maxim Dounin
Hello!

On Thu, Nov 23, 2017 at 10:24:19AM -0600, Andrei wrote:

> > > - Does anyone have any recent working documentation on supported
> > > modules/Lua scripts which can achieve wildcard purges as well as specific
> > > URL purges?
> >
> > Cache purging is available in nginx-plus, see
> > http://nginx.org/r/proxy_cache_purge.
>
> I'm aware of the paid version, but I don't have  a budget for it yet, and
> quite frankly this should be a core feature for any caching service. Are
> there no viable options for the community release? It's a rather pertinent
> feature to have in my transition

I'm aware of at least one 3rd party module from Piotr Sikora,
https://github.com/FRiCKLE/ngx_cache_purge.  I've never tried to
use it, and AFAIK it doesn't support wildcard purges.  It is
mostly known to developers as a hack that starts segfaulting on
unrelated changes in proxy module, so obviously enough I can't
recommend using it.
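
For completeness, the third-party module is typically configured roughly as below (untested sketch; the zone name and key layout are assumptions, and the key passed to the purge location has to match proxy_cache_key):

    # Rough sketch of FRiCKLE/ngx_cache_purge usage (built with --add-module).
    location / {
        proxy_pass http://backend;
        proxy_cache one;
        proxy_cache_key $scheme$host$uri$is_args$args;
    }

    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        # $1 is the original path; no wildcard support.
        proxy_cache_purge one $scheme$host$1$is_args$args;
    }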

Note though that I personally think that cache purging is
something that should _not_ be present in any caching service, and
I wouldn't recommend using the nginx-plus functionality either.
Proper control of cache validity times is what should be used
instead.  This is what happens in browsers anyway, and trying to
"purge" things there won't work.
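
In that spirit, short TTLs combined with serving stale entries while they are refreshed in the background get close to "always fresh" without a purge API. A hedged sketch with assumed values (proxy_cache_background_update requires nginx 1.11.10 or newer):

    location / {
        proxy_pass http://backend;
        proxy_cache one;
        # Keep entries for a short time only...
        proxy_cache_valid 200 1m;
        # ...and when one expires, serve the stale copy while a single
        # background subrequest refreshes it from the backend.
        proxy_cache_use_stale updating error timeout;
        proxy_cache_background_update on;
        # Collapse concurrent misses into one upstream request.
        proxy_cache_lock on;
    }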

> > Note well that by default nginx respects what is returned by the
> > backend in various response headers, and proxy_cache_valid time
> > only applies if there are no explicit cache validity time set, see
> > http://nginx.org/r/proxy_ignore_headers.
>
> So to override the ttls set by the backend, I would have to use
> proxy_ignore_headers for all headers which can directly affect the intended
> TTL?

Yes, if you want to ignore what the backend set.  In many cases
this might not be a good idea though.

--
Maxim Dounin
http://mdounin.ru/

Re: Migrating from Varnish

ayman
In reply to this post by Andrei
Andrei Wrote:
-------------------------------------------------------
> I'm aware of the paid version, but I don't have  a budget for it yet,
> and
> quite frankly this should be a core feature for any caching service.
> Are
> there no viable options for the community release? It's a rather

https://github.com/FRiCKLE/ngx_cache_purge/

Easy to implement (add).



Re: Migrating from Varnish

Andrei
In reply to this post by Maxim Dounin
Hello,

On Thu, Nov 23, 2017 at 11:52 AM, Maxim Dounin <[hidden email]> wrote:
> I'm aware of at least one 3rd party module from Piotr Sikora,
> https://github.com/FRiCKLE/ngx_cache_purge.  I've never tried to
> use it, and AFAIK it doesn't support wildcard purges.  It is
> mostly known to developers as a hack that starts segfaulting on
> unrelated changes in proxy module, so obviously enough I can't
> recommend using it.


Thanks for mentioning the segfaulting issues; that's definitely not something I want to run into. I saw that module, and some more details on Lua-based approaches in these:


From what I'm seeing available, it looks as though my best bet is Lua, or pushing all the purge requests to a custom backend service that queues and handles the file removals. Redis tracking also comes to mind, since I'm going to be doing live traffic stats, so that's another hop I have to factor in. With these options I'm thinking of doing a cache zone per domain, or perhaps per group of domains. Are there any performance impacts from having, for example, tens or hundreds of cache zones defined?
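
For what it's worth, a sketch of the per-domain-zone idea (zone names and sizes are made up): each keys_zone is its own shared-memory area plus cache-manager bookkeeping, and since nginx 1.7.9 proxy_cache accepts variables, so a map can select the zone per host without duplicating location blocks:

    proxy_cache_path /var/cache/nginx/site_a keys_zone=site_a:50m max_size=5g inactive=7d;
    proxy_cache_path /var/cache/nginx/site_b keys_zone=site_b:50m max_size=5g inactive=7d;

    # Pick the cache zone from the Host header.
    map $host $cache_zone {
        default            site_a;
        ~\.example\.org$   site_b;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_cache $cache_zone;
            proxy_cache_valid 200 5m;
        }
    }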

> Note though that I personally think that cache purging is
> something that should _not_ be present in any caching service, and
> I wouldn't recommend using the nginx-plus functionality either.
> Proper control of cache validity times is what should be used
> instead.  This is what happens in browsers anyway, and trying to
> "purge" things there won't work.

I'm sorry but I strongly disagree here. Every respectable CDN service which offers caching also offers purging. People want their content updated on the edge when changes are made to their application, and they want their applications to be able to talk to the edge services. Take a busy news site, for example: when a tag/post/page is updated, they expect viewers to be able to see it right then and there, not when the cache expires. If they have to wait for the cache to expire, they lose viewers and revenue due to the delay.


> > Note well that by default nginx respects what is returned by the
> > backend in various response headers, and proxy_cache_valid time
> > only applies if there are no explicit cache validity time set, see
> > http://nginx.org/r/proxy_ignore_headers.
>
> So to override the ttls set by the backend, I would have to use
> proxy_ignore_headers for all headers which can directly affect the intended
> TTL?

> Yes, if you want to ignore what the backend set.  In many cases
> this might not be a good idea though.

I understand why it's not always a good idea. I have numerous checks and balances accumulated over the years which I'm now working on porting over. Overriding backend cache headers on a granular level is something I enjoy :)



Re: Migrating from Varnish

Andrei
In reply to this post by ayman
Thanks for the tip. Have you run into any issues as Maxim mentioned?

On Thu, Nov 23, 2017 at 11:53 AM, itpp2012 <[hidden email]> wrote:
> https://github.com/FRiCKLE/ngx_cache_purge/
>
> Easy to implement (add).


Re: Migrating from Varnish

ayman
Andrei Wrote:
-------------------------------------------------------
> Thanks for the tip. Have you run into any issues as Maxim mentioned?
>

Not yet.



Re: Migrating from Varnish

Andrei
Would it be possible to use the Redis module to track the cache? For example, I would like to log each "new" cache hit (i.e. whenever an object is first cached), and include the URL, cache expiration time, and possibly the file it's stored in.
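
There is no built-in variable exposing the on-disk cache file name, but the file name is the MD5 of the cache key, so it can be recomputed from a logged key. One hedged approach (paths and names assumed) is to log only requests that fill or refresh the cache, keyed off $upstream_cache_status, and have a small external consumer push those lines into Redis:

    # Log only cache fills/refreshes; a separate tailer can push these
    # lines into Redis.  The MD5 of the logged key gives the cache file name.
    map $upstream_cache_status $log_cache_fill {
        MISS      1;
        EXPIRED   1;
        default   0;
    }

    log_format cache_fill '$time_iso8601 key=$scheme$host$request_uri '
                          'status=$upstream_cache_status '
                          'backend_cc="$upstream_http_cache_control"';

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_cache one;
            proxy_cache_key $scheme$host$request_uri;
            access_log /var/log/nginx/cache_fill.log cache_fill if=$log_cache_fill;
        }
    }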

On Nov 23, 2017 23:51, "itpp2012" <[hidden email]> wrote:
> > Thanks for the tip. Have you run into any issues as Maxim mentioned?
>
> Not yet.
