Linux Archive > Ubuntu > Ubuntu Server Development
Old 04-21-2010, 03:00 PM
"Dr. Nils Jungclaus"
 
Default System Performance under heavy I/O load

Hi,



I am using 8.04 on several (well equipped) servers and experience the
following problem on all of them:



When doing larger I/O jobs like backup, I always get a very poor
interactive response of the system. Interactive in this case means
performance of database requests, web application requests and even
interactive tools like top. The usual setup looks like this:



- postgres DB as database backend

- apache as loadbalancer and certificate handler

- several parallel zope instances using zeo

- sometimes more things like vmware-server, samba, postfix



When I start a backup (over the network using rsync, locally to another HD
using rsync, or to a USB-attached external drive), I get lots of processes
in uninterruptible sleep (state D) in top, and the iowait percentage goes up
to 10 to 20 percent, but the throughput (watched via iostat) is not very
high, far below the rates I get using only one device. The load goes up to
20 or 30, and nothing really gets done by the system. It seems to me that
the system is getting in its own way.



I already tried the following:



- using the deadline/cfq schedulers (cfq with ionice for the backup
processes gives the best results for me, but is still far from the
hardware's capabilities)

- on USB devices, I tried different settings for
/sys/block/*/device/max_sectors
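For reference, the scheduler experiments above can be sketched on the command line like this (device names and paths are examples; the privileged lines are shown as comments because they need root):

```shell
# Print the available I/O schedulers per block device; the active one is
# shown in brackets, e.g. "noop anticipatory deadline [cfq]".
show_schedulers() {
    for f in /sys/block/*/queue/scheduler; do
        [ -r "$f" ] || continue
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done
}
show_schedulers

# Switch one device to deadline (as root):
#   echo deadline > /sys/block/sda/queue/scheduler
# Or keep cfq and demote the backup to the idle I/O class, so database and
# web I/O take precedence:
#   ionice -c3 rsync -a /var/data /mnt/backup/
```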



The hardware is a 24-core Opteron, an Adaptec RAID controller in RAID 10
(reaching up to 500 MB/s read performance), and 64 GB RAM.

Several other servers (16 and 8 cores, 32/16 GB RAM, Dell PERC 6/i RAID)
behave similarly.



Are there any hints on getting better I/O performance / better response
times on such machines?



In my opinion, the kernel should be able to schedule resources so that at
least one of the hardware components becomes the limiting factor. What I
see instead is a more or less idle system: high load, high iowait
percentage, no throughput.



Any hints welcome!



Nils





--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam
 
Old 04-28-2010, 07:25 PM
Preston Hagar
 
Default System Performance under heavy I/O load

On Wed, Apr 21, 2010 at 10:00 AM, Dr. Nils Jungclaus
<Nils.Jungclaus@perfact.de> wrote:
> Hi,
>
> I am using 8.04 on several (well equipped) servers and experience the
> following problem on all of them:
>
> When doing larger I/O jobs like backup, I always get a very poor interactive
> response of the system.
>
> [...]


I have found a couple more things to try with regard to rsync backups
(although my situation is likely less complex, with less powerful servers
and smaller requirements).

One major improvement I have found is to break the rsync command up
into multiple rsync commands. For example, instead of just having:

rsync -av /var/data user@remoteserver:/mnt/backups/

I would do

rsync -av /var/data/www user@remoteserver:/mnt/backups/www
rsync -av /var/data/db user@remoteserver:/mnt/backups/db

and so on. I have found that when the file list for rsync is really big,
it can bog down the system. By breaking up the rsync commands, the
overall backup completes more quickly. The downside is that it adds
complexity and creates the potential to forget a folder.
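A sketch of how that split might be automated, assuming a layout like /var/data/www, /var/data/db, and so on (the paths and remote target are placeholders; the function only prints the commands, so pipe its output to sh to actually run them):

```shell
# Emit one rsync invocation per top-level subdirectory of $1, so each run
# builds a small file list instead of one huge one.
backup_split() {
    src=$1; dest=$2
    for dir in "$src"/*/; do
        [ -d "$dir" ] || continue
        name=$(basename "$dir")
        echo rsync -av "$dir" "$dest/$name"
    done
}

# Demo against a throwaway tree rather than the real /var/data:
demo=$(mktemp -d)
mkdir "$demo/www" "$demo/db"
backup_split "$demo" user@remoteserver:/mnt/backups
```

This also removes the risk of forgetting a folder, since new subdirectories are picked up by the glob automatically.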

Another suggestion is to drop the -z flag if you are using it. Generally,
in my experience, if you are not transferring files over a network,
compression only adds CPU overhead and nothing more.

One last suggestion is to nice the rsync command when it is run. On one
server I manage, a regular rsync backup kept bogging down the server
every time it was run. By adding nice before rsync and adding
--bwlimit=100, I greatly reduced the strain it put on the server. That
said, it was a backup over the network, and I am pretty sure the
--bwlimit had as much to do with the load reduction as the nice command
did.

Anyway, I am not sure if any of these ideas are right for your
situation; I just thought I would pass them along.

Preston

 
Old 04-28-2010, 09:08 PM
Gerald Drouillard
 
Default System Performance under heavy I/O load

On 4/28/2010 3:25 PM, Preston Hagar wrote:
> On Wed, Apr 21, 2010 at 10:00 AM, Dr. Nils Jungclaus
> <Nils.Jungclaus@perfact.de> wrote:
>
>> Hi,
>>
>> I am using 8.04 on several (well equipped) servers and experience the
>> following problem on all of them:
>>
>> [...]
Try:
nice -n16 ionice -c2 -n7 [your rsync command line here]

--
Regards
--------------------------------------
Gerald Drouillard
Technology Architect
Drouillard& Associates, Inc.
http://www.Drouillard.biz


 
Old 06-21-2010, 12:04 PM
"Dr. Nils Jungclaus"
 
Default System Performance under heavy I/O load

(Re-send of the original message from 04-21-2010; the only addition is
that the 24-core Opteron is a 4x6 configuration.)
 
Old 06-21-2010, 02:32 PM
Joseph Salisbury
 
Default System Performance under heavy I/O load

On Mon, Jun 21, 2010 at 8:04 AM, Dr. Nils Jungclaus <Nils.Jungclaus@perfact.de> wrote:









> Hi,
>
> I am using 8.04 on several (well equipped) servers and experience the
> following problem on all of them:
>
> [...]

Hello Nils,

Have you tried experimenting with the readahead settings?


By default, Linux requests the next 256 sectors (128 KB) when doing a
read. In a very sequential environment (like backups), increasing this
value can improve read performance.


You can set the read-ahead on an sd device using the "blockdev" command.
This tells the SCSI layer to read X sectors ahead. This is only valuable
for sequential I/O workloads and can cause performance problems under
heavy random I/O, so check the performance of your other workloads after
making changes.





Syntax:

blockdev --setra X <device name>

e.g.

# blockdev --setra 4096 /dev/sda

(Note: 4096 is just an example value; you will have to test to determine
the optimal value for your system.) The OS will then read ahead X
sectors, and throughput may be higher.

To check the existing read-ahead setting, use:

# blockdev --getra <device name>

Also, have you looked at the vmstat statistics in addition to iostat? You may want to compare the size of your I/Os between the workloads. Maybe you are performing much smaller I/Os when this problem happens?
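If you do experiment with the readahead, it may be worth recording the old value first so it can be restored should the database's random I/O suffer. A minimal sketch, assuming the array appears as /dev/sda (a placeholder) and that the privileged lines are run as root:

```shell
# Save the current readahead, raise it for the backup window, restore after.
# Guarded so it just prints a message on machines without the device.
DEV=/dev/sda
if [ -b "$DEV" ] && command -v blockdev >/dev/null 2>&1; then
    old=$(blockdev --getra "$DEV")      # current value, in 512-byte sectors
    echo "readahead on $DEV is $old sectors"
    # As root:
    #   blockdev --setra 4096 "$DEV"    # larger readahead for the backup
    #   ... run the backup ...
    #   blockdev --setra "$old" "$DEV"  # restore the old value afterwards
else
    echo "no usable $DEV here; nothing to do"
fi
```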




Hope this helps,



Joe



 
