Using proxy_store under heavy load.


zepolen
I'm using proxy_store to act as a frontend mirror/cache to Amazon S3 for my site's photos.
Response times are slow, and iotop reports ~10-15MB/s being written to disk by nginx.
The website implements a 'latest' feature, so any new photo will be requested by many users at the same time.
I'm thinking nginx is probably getting a request for a photo it doesn't already have, and while it is retrieving the file from S3, more requests come in for the same file, meaning more round trips and more temp files being created.
Does proxy_cache handle it differently? (i.e., does it know that a URL is currently being retrieved from the backend, and block other requests for that URL until the file has been retrieved?)

Re: Using proxy_store under heavy load.

Igor Sysoev
On Wed, May 20, 2009 at 07:48:27PM +0300, zepolen wrote:

> I'm using proxy_store to act as a frontend mirror/cache to Amazon S3 for my
> site's photos.
> Response times are slow, iotop reports ~10-15MB/s being written to disk by
> nginx.
> The website implements a 'latest' feature, any new photos will be requested
> by many users at the same time.
> I'm thinking nginx is probably getting a request for a photo it doesn't
> already have, and while it is retrieving the file from S3, more requests
> come in for the same file, meaning more round trips and more temp files
> being created.
> Does proxy_cache handle it differently? (i.e., does it know that a URL is
> currently being retrieved from the backend, and block other requests for that
> URL until the file has been retrieved?)

No, proxy_cache does the same.


--
Igor Sysoev
http://sysoev.ru/en/
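[Editor's note: this was true when the thread was written, but nginx 1.1.12 later added proxy_cache_lock, which does exactly the coalescing asked about: concurrent cache misses for the same key are held while a single request fills the cache. A minimal sketch, assuming a hypothetical S3 bucket host and illustrative paths:]

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=photos:10m
                 max_size=10g inactive=7d;

server {
    location /photos/ {
        proxy_pass http://example-bucket.s3.amazonaws.com;
        proxy_cache photos;
        proxy_cache_valid 200 7d;
        # Hold concurrent misses for the same key while one request
        # populates the cache, avoiding the duplicated S3 round trips
        # and temp files described above.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;
    }
}
```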


Re: Using proxy_store under heavy load.

Dave Cheney
In reply to this post by zepolen
I suspect that the worker is spending a lot of time in the write() to the
disk of the cached file. How many workers are you running?

Cheers

Dave
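[For reference, the worker count is set at the top level of nginx.conf; the values below are illustrative, not the poster's configuration:]

```nginx
# One worker per CPU core is a common starting point; more workers let
# nginx keep serving other requests while one worker blocks in write().
worker_processes  4;

events {
    worker_connections  1024;
}
```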

zepolen writes:

> I'm using proxy_store to act as a frontend mirror/cache to Amazon S3 for
> my site's photos.
>
> Response times are slow, iotop reports ~10-15MB/s being written to disk by
> nginx.
>
> The website implements a 'latest' feature, any new photos will be requested
> by many users at the same time.
>
> I'm thinking nginx is probably getting a request for a photo it doesn't
> already have, and while it is retrieving the file from S3, more requests
> come in for the same file, meaning more round trips and more temp files
> being created.
>
> Does proxy_cache handle it differently? (i.e., does it know that a URL is
> currently being retrieved from the backend, and block other requests for
> that URL until the file has been retrieved?)


Re: Using proxy_store under heavy load.

zepolen
On Wed, May 27, 2009 at 4:23 AM, Dave Cheney <[hidden email]> wrote:
> I suspect that the worker is spending a lot of time in the write() to the disk of the cached file. How many workers are you running?
>
>> Response times are slow, iotop reports ~10-15MB/s being written to disk by nginx.
>> I'm thinking nginx is probably getting a request for a photo it doesn't already have, and while it is retrieving the file from s3, more requests come in for the same file, meaning more round trips and more temp files being created.

It turned out to be a typo in the path where nginx was supposed to find
the stored files. As a result every single file was being retrieved from
the backend; worse, it was being written to disk only to be discarded.

It made perfect sense in retrospect, as outgoing eth traffic was also
stuck at about ~15MB/s and incoming had jumped to the same level;
unfortunately I had to wait for the graphs before I could see the
problem.
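[The failure mode above, where the store path does not match the serving root so every request falls through to the backend, can be sketched with a minimal proxy_store mirror. The paths and S3 host here are illustrative assumptions, not the poster's actual setup:]

```nginx
# Serve photos from local disk; on a miss, fetch from S3 and keep a copy.
location /photos/ {
    root /var/www/mirror;          # serves /var/www/mirror/photos/...
    error_page 404 = @fetch;       # miss: fall through to the backend
}

location @fetch {
    internal;
    proxy_pass http://example-bucket.s3.amazonaws.com;
    # This path must resolve to the same file the location above serves;
    # a typo here means every request hits S3 and the stored copy is
    # never found, exactly the behaviour described in this thread.
    proxy_store /var/www/mirror$uri;
    proxy_store_access user:rw group:rw all:r;
}
```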