Linux Archive > Gentoo > Gentoo User
Old 09-16-2012, 06:55 AM
Jarry
 
Apache forked itself to death...

Hi,
a strange thing happened to my web-server (apache-2.2.22-r1):
it started forking until it used up all ram/swap and stopped
responding. I counted ~60 apache processes running (ps -a),
all sleeping; top showed no load except all memory being used.
Log-files showed nothing suspicious to me, except for a few
"GET / HTTP/1.1 200 40" messages at the time when apache
was already unable to send a reply.

Apparently my apache is not correctly configured if it can
"fork itself to death", but maybe someone can help me. I have
about 1GB of memory for apache. What should I change in my
config so that apache never runs out of memory?

server-info:
Timeouts: connection: 60 keep-alive: 15
MPM Name: Prefork
MPM Information: Max Daemons: 150 Threaded: no Forked: yes
Module Name: prefork.c
31: StartServers 5
32: MinSpareServers 5
33: MaxSpareServers 10
34: MaxClients 150

Jarry

--
__________________________________________________ _____________
This mailbox accepts e-mails only from selected mailing-lists!
Everything else is considered to be spam and therefore deleted.
 
Old 09-16-2012, 01:26 PM
Michael Hampicke
 
Apache forked itself to death...

On 16.09.2012 08:55, Jarry wrote:
> [...] What should I change in my
> config so that apache never runs out of memory?

Hi,

try reducing MaxClients to 64, and StartServers and MinSpareServers to 2,
then observe how things develop. If you then feel apache is too slow to
respond to requests under load, try increasing MinSpareServers one at a
time. But always keep in mind: every fork of apache eats your memory.
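As a sketch, the suggested values would look something like this in the prefork
section of httpd.conf (apache 2.2 directive names; the MaxSpareServers value
here is an assumption, tune it to taste):

```apache
# Sketch of the suggested prefork settings, not a drop-in config.
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers      10
    MaxClients           64
</IfModule>
```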
 
Old 09-16-2012, 01:59 PM
Michael Mol
 
Apache forked itself to death...

On Sun, Sep 16, 2012 at 9:26 AM, Michael Hampicke <gentoo-user@hadt.biz> wrote:
> [...]
> try reducing MaxClients to 64, and StartServers and MinSpareServers to 2,
> then observe how things develop. If you then feel apache is too slow to
> respond to requests under load, try increasing MinSpareServers one at a
> time. But always keep in mind: every fork of apache eats your memory.

And sucks up system entropy. And increases connection latency, if
you've already got a request waiting on that fork to spin up.

I have StartServers, MinSpareServers, MaxSpareServers and MaxClients
all pegged to the same value. And on the server in question, they're
all pegged to '10'.

I have MaxRequestsPerChild set to 20000, so that any leaky processes
get cleaned up.
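A sketch of that "pegged" setup in prefork terms (apache 2.2 directive names;
the exact file layout is an assumption):

```apache
# Sketch of the pegged prefork setup described above.
<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       10
    MaxClients            10
    # Recycle children periodically so leaky processes get cleaned up.
    MaxRequestsPerChild   20000
</IfModule>
```

With all four values equal, apache never forks or reaps workers under load, which makes its memory footprint predictable.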

Because I need to fit a lot of operations into a limited space, I need
to be able to reasonably predict how much RAM is going to be used by
each of my services. A "MaxClients" of 10 may seem small, but that's
what Squid is for; only requests Squid couldn't cache get passed on to
Apache.

The server I'm describing is a VM with 4GB of RAM, and is also running
MySQL, squid and memcached. For those playing with the numbers in
their head, each of these numbers reflects RES (code+data resident in
RAM):

* Each Apache process is consuming 80-100MB of RAM.
* Squid is consuming 666MB of RAM
* memcached is consuming 822MB of RAM
* mysqld is consuming 886MB of RAM
* The kernel is using 110MB of RAM for buffers
* The kernel is using 851MB of RAM for file cache (which benefits squid).

And, not RAM, but potentially of interest for the curious:
* The MySQL db is consuming 3.8GB on disk.
* The Squid cache is about 9.2GB on disk.


--
:wq
 
Old 09-16-2012, 06:06 PM
Michael Hampicke
 
Apache forked itself to death...

> * Each Apache process is consuming 80-100MB of RAM.
> [...]

As Jarry did not specify which content his apache is serving, I used
12MB of RAM per apache process as a general rule of thumb. If it's
dynamic content generated by a scripting language like php, it could
be a lot more; 80-100MB of RAM with php in the back should be a good
guess.

The important thing is:

MaxClients x memory footprint per apache process < available memory :-)
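A back-of-the-envelope check of that rule. The numbers are assumptions for
illustration (~1 GB left for apache, ~40 MB RES per prefork child):

```shell
#!/bin/sh
# Derive a safe MaxClients upper bound from available RAM and the
# per-process footprint. Both values are assumptions, not measured.
AVAILABLE_MB=1024
PER_PROC_MB=40
MAX_CLIENTS=$(( AVAILABLE_MB / PER_PROC_MB ))
echo "safe MaxClients upper bound: $MAX_CLIENTS"
```

In this case anything above 25 risks pushing the box into swap once all workers are busy.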

If you have lots of concurrent requests, you may be better served by
something lighter, like lighttpd. Or add caching of some sort, like
Michael does.
 
Old 09-17-2012, 02:52 PM
Jarry
 
Apache forked itself to death...

On 16-Sep-12 20:06, Michael Hampicke wrote:
> [...]
> MaxClients x memory footprint per apache process < available memory :-)
> [...]


Thank you for all the tips & tweaks. My apache is serving mostly dynamic
content (drupal cms), and a single apache process has ~35-40MB RES.
It is on a VPS with 1GB/2GB soft/hard RAM limits, and only apache & mysql
are running. Mysqld needs ~100-200MB, and caching is covered by APC.
I reduced MaxClients down to 40, so it should never run out of memory.

BTW, how is it that someone else's apache processes are 10-20MB, while
mine are 40MB? I'd like to reduce their size, but do not know how...

Jarry

 
Old 09-17-2012, 03:22 PM
Michael Mol
 
Apache forked itself to death...

On Mon, Sep 17, 2012 at 10:52 AM, Jarry <mr.jarry@gmail.com> wrote:
> [...]
> Thank you for all tips&tweaks. My apache is serving mostly dynamic
> content (drupal cms), and single apache process has ~35-40MB RES
> It is on VPS, with 1GB/2GB soft/hard RAM limits, only apache & mysql
> running. Mysqld needs ~100-200MB, and caching is covered by apc.
> I reduced maxclients down to 40, it should never run out of memory.

APC is for PHP opcode caching. Memcached is more like a database cache
(it depends on how clients choose to use it). Squid is finished-object
caching.

Each cache reduces resource requirements for a different piece of the
overall application. APC allows mod_php to avoid some reparsing and
recompilation. Memcached allows an application to say "I can
regenerate this data if I _must_, but it's kinda expensive, and I'd
rather not." Squid captures HTTP requests, looks to see if it has a
copy of the object being requested. If it does, it looks to see if the
object has expired. If it has, it passes the request on to the backend
httpd. If it hasn't, it returns the object it already has. (There are
at least three mechanisms that protect clients from stale copies of
dynamically-generated data, and I would be very, very surprised if
drupal didn't leverage all of them, so it should be safe and
beneficial for you to add a squid proxy.)
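A minimal sketch of such a squid reverse proxy ("accelerator") in front of
apache. The hostname and ports are assumptions; squid 3.x directive syntax:

```apache
# squid.conf sketch: accept port 80, serve from cache, pass misses
# to an apache listening on 127.0.0.1:8080. Names are placeholders.
http_port 80 accel defaultsite=www.example.org
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
acl our_site dstdomain www.example.org
http_access allow our_site
cache_peer_access apache allow our_site
```

Apache then only needs enough workers for cache misses, which is why a MaxClients of 10 can be enough.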

TL;DR, caching is _never_ fully handled by one component. Now go back
and read what I wrote, if you didn't.

>
> BTW, how's that someone has apache process 10-20MB, and me 40MB?
> I'd like to reduce its size, but do not know how...

APC is going to be part of it. You might also look up some other ways
of performance-tuning mod_php.
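To see where you actually stand before tuning, you can average the resident
size of the running apache children. The process name is an assumption
("apache2" here; on Gentoo it may be "httpd"):

```shell
#!/bin/sh
# Average resident set size (RES) of apache children, in MB.
# ps emits RSS in KB, one line per process; awk averages and converts.
ps -C apache2 -o rss= |
    awk '{ sum += $1; n++ } END { if (n) printf "%d procs, avg %.1f MB\n", n, sum / n / 1024 }'
```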

Really, though, I'd recommend sticking an HTTP proxy in front of
apache first, so you can reduce the number of processes you need. The
setup I described is what runs rosettacode.org, which gets a fair
amount of traffic (averaging 50k-60k pageviews per week, 500 pageviews
per hour with spikes up to 1100; a virgin pageview will (IIRC) involve
around 12 objects requested from the server).

This setup is stable enough that it only requires enough attention
from me for security updates and an occasional log issue. (There's a
file not being handled by logrotate that I haven't had time to fix.)

--
:wq
 
