On Mon, Sep 17, 2012 at 10:52 AM, Jarry <firstname.lastname@example.org> wrote:
> On 16-Sep-12 20:06, Michael Hampicke wrote:
>>> * Each Apache process is consuming 80-100MB of RAM.
>>> * Squid is consuming 666MB of RAM
>>> * memcached is consuming 822MB of RAM
>>> * mysqld is consuming 886MB of RAM
>>> * The kernel is using 110MB of RAM for buffers
>>> * The kernel is using 851MB of RAM for file cache (which benefits squid).
>> As Jarry did not specify which content his apache is serving, I used
>> 12MB of RAM per apache process (as a general rule of thumb). If it's
>> dynamic content generated by a scripting language like php it could be
>> a lot more; 80-100MB of RAM with php in the back should be a good guess.
>> Important thing is:
>> MaxClients x memory footprint per apache process < available memory :-)
>> If you have lots of concurrent requests you may be better suited with
>> something lighter.... like lighttpd. Or start caching of some sort, like
>> Michael does.
> Thank you for all the tips & tweaks. My apache is serving mostly dynamic
> content (drupal cms), and a single apache process has ~35-40MB RES.
> It is on a VPS with 1GB/2GB soft/hard RAM limits, with only apache & mysql
> running. Mysqld needs ~100-200MB, and caching is covered by APC.
> I reduced MaxClients down to 40, so it should never run out of memory.
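The MaxClients rule of thumb quoted above (MaxClients x per-process footprint < available memory) is easy to sketch as a back-of-the-envelope calculation. The figures below are just the ones mentioned in this thread (1GB soft limit, ~200MB for mysqld, ~40MB RES per apache process), not measurements from your box:

```python
# Rough MaxClients sizing: leave headroom for mysqld and the kernel.
# All figures are the ones quoted in this thread; adjust for your VPS.
available_mb = 1024      # VPS soft RAM limit
mysqld_mb = 200          # upper end of the observed mysqld footprint
headroom_mb = 100        # kernel buffers, cron, everything else
per_process_mb = 40      # RES of a single apache+mod_php process

max_clients = (available_mb - mysqld_mb - headroom_mb) // per_process_mb
print(max_clients)
```

With those numbers the safe ceiling comes out well under 40, which is why a caching proxy in front (so fewer heavyweight apache processes are needed at once) matters so much on a small VPS.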
APC is for PHP opcode caching. Memcached is like a database cache
(it depends on how clients choose to use it). Squid is a finished-object
cache.
Each cache reduces resource requirements for a different piece of the
overall application. APC allows mod_php to avoid some reparsing and
recompilation. Memcached allows an application to say "I can
regenerate this data if I _must_, but it's kinda expensive, and I'd
rather not." Squid captures HTTP requests and looks to see if it has a
copy of the object being requested. If it doesn't, it passes the request
on to the backend httpd; if it does, it checks whether the copy has
expired, serving it directly if it is still fresh and refetching it from
the backend if not. (There are
at least three mechanisms that protect clients from stale copies of
dynamically-generated data, and I would be very, very surprised if
drupal didn't leverage all of them, so it should be safe and
beneficial for you to add a squid proxy.)
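The memcached role described above is the classic cache-aside pattern. Here is a minimal self-contained sketch: a plain dict stands in for memcached so it runs anywhere, and `expensive_query` is a hypothetical stand-in for a slow database call; a real deployment would use an actual memcached client instead.

```python
# Cache-aside: "I can regenerate this data if I _must_, but it's expensive."
# A dict stands in for memcached so this sketch is self-contained.
cache = {}
calls = 0

def expensive_query(key):
    # Hypothetical stand-in for a slow database query.
    global calls
    calls += 1
    return key.upper()

def get(key):
    if key in cache:               # cache hit: skip the expensive work
        return cache[key]
    value = expensive_query(key)   # miss: regenerate the data...
    cache[key] = value             # ...and remember it for next time
    return value

print(get("drupal"), get("drupal"), calls)
```

The second `get` never touches the "database", which is exactly the load reduction memcached buys you between the application and mysqld.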
TL;DR, caching is _never_ fully handled by one component. Now go back
and read what I wrote, if you didn't.
> BTW, how's that someone has apache process 10-20MB, and me 40MB?
> I'd like to reduce its size, but do not know how...
APC is going to be part of it. You might also look up some other ways
of performance-tuning mod_php.
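For APC specifically, a few of its real ini knobs are worth checking; the values here are only illustrative, not tuned for your site:

```ini
; php.ini / apc.ini -- illustrative values, tune to your workload
apc.enabled = 1
apc.shm_size = 64M   ; big enough to hold drupal's opcodes without churn
apc.stat = 1         ; set to 0 only if you never edit files in place
```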
Really, though, I'd recommend sticking an HTTP proxy in front of
apache first, so you can reduce the number of processes you need. The
setup I described is what runs rosettacode.org, which gets a fair
amount of traffic: averaging 50k-60k pageviews per week, around 500
pageviews per hour with spikes up to 1100, and a virgin pageview will
(IIRC) involve up to around 12 objects requested from the server.
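Putting squid in front of apache as an accelerator (reverse proxy) takes only a few directives. This is a generic sketch with placeholder hostname and ports, not the actual rosettacode.org config:

```
# squid.conf sketch: squid listens on port 80, apache moves to 8080.
# Hostname and ports are placeholders -- substitute your own.
http_port 80 accel defaultsite=www.example.org
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
cache_peer_access apache allow all
```

Cached objects are then served without ever spawning an apache/mod_php process, which is what lets you keep MaxClients low.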
This setup is stable enough that it only requires enough attention
from me for security updates and an occasional log issue. (There's a
file not being handled by logrotate that I haven't had time to fix.)