 
05-12-2010, 07:19 AM
Imre Gergely

Virtualization and disk performance

With KVM guest, is that performance measured using virtio? If not, maybe
you could try that.
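As a sketch of what that switch looks like (assuming a libvirt-managed guest; the source path and device names below are placeholders, not taken from the thread), the disk's bus is set to virtio in the domain XML, and the host cache mode can be pinned at the same time:

```xml
<!-- Illustrative libvirt <disk> element: virtio bus, host page cache bypassed.
     /dev/vg0/guest-disk is a placeholder for the real block device. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/guest-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The equivalent bare qemu-kvm flag would be along the lines of `-drive file=/dev/vg0/guest-disk,if=virtio,cache=none`; inside the guest the disk then appears as /dev/vda rather than /dev/sda.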

On 05/12/2010 09:30 AM, David Peall wrote:
> Hi
>
> I've been using KVM for a bunch of VMs on Hardy and now Lucid, and with
> CPU and memory performing quite well it's been no problem. I'm now
> looking at our ageing DB server and wanting to put it in a VM, but the
> disk performance is dismal (or am I doing it wrong?).
> I'm quite comfortable losing 25% or even 33% through
> virtualization, as the benefits are worth it.
>
> Here are the numbers I have so far (using dbench, Ubuntu Lucid and
> ext4):
>
> Bare metal, using a slice for the host OS:
> Throughput 2586.65 MB/sec 10 clients 10 procs max_latency=18.029 ms
> Throughput 3631.62 MB/sec 50 clients 50 procs max_latency=239.773 ms
> Throughput 3635.12 MB/sec 100 clients 100 procs max_latency=458.094 ms
>
> KVM guest using a block device:
> Throughput 1130.52 MB/sec 10 clients 10 procs max_latency=262.047 ms
> Throughput 513.972 MB/sec 50 clients 50 procs max_latency=6561.761 ms
> Throughput 465.593 MB/sec 100 clients 100 procs max_latency=2520.585 ms
>
> I tried VMware just as a comparison, using a vmdk file (not even a block
> device):
> Throughput 1482.44 MB/sec 10 clients 10 procs max_latency=53.682 ms
> Throughput 2049.45 MB/sec 50 clients 50 procs max_latency=492.187 ms
> Throughput 2098.71 MB/sec 100 clients 100 procs max_latency=681.216 ms
>
> Using LVM was worse, and qcow2 was worse still, as expected.
>
> That's a big pill to swallow for KVM.
>
> Any ideas on the best way to get disk performance out of KVM?
>
> Thanks
>
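To put the quoted 100-client figures in perspective, the implied virtualization overhead is easy to work out (a quick sanity check using only numbers from the post above):

```python
# Throughput at 100 clients, taken from the dbench numbers quoted above (MB/sec).
bare_metal = 3635.12
kvm_block = 465.593
vmware_vmdk = 2098.71

def overhead_pct(baseline, virtualized):
    """Percentage of baseline throughput lost under virtualization."""
    return 100.0 * (1.0 - virtualized / baseline)

print(f"KVM (block device): {overhead_pct(bare_metal, kvm_block):.0f}% loss")
print(f"VMware (vmdk file): {overhead_pct(bare_metal, vmware_vmdk):.0f}% loss")
```

So where the post budgets for a 25-33% drop, the measured KVM loss is roughly 87%, against about 42% for VMware.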

--
Imre Gergely
Yahoo!: gergelyimre | ICQ#: 101510959
MSN: gergely_imre | GoogleTalk: gergelyimre
gpg --keyserver subkeys.pgp.net --recv-keys 0x34525305

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam
 
05-12-2010, 07:25 AM
David Peall

Virtualization and disk performance

On Wed, 2010-05-12 at 10:19 +0300, Imre Gergely wrote:
> With KVM guest, is that performance measured using virtio? If not,
> maybe you could try that.
With virtio using a block device I get:
Throughput 45.2551 MB/sec 100 clients 100 procs max_latency=27865.397 ms

--
David Peall
Domain Name Services
 
05-12-2010, 07:55 AM
Serge van Ginderachter

Virtualization and disk performance

Working at ibcn.intec.ugent.be, I am conducting performance tests on different platforms (Xen, KVM, VMware) and on different host OSes (Debian, Ubuntu, CentOS), and I can confirm this. Those tests aren't finished and no report has been written yet (I plan to do that in the coming month, and hopefully publish it somewhere), but I can give some conclusions.



On 12 May 2010 08:30, David Peall <david@dnservices.co.za> wrote:

> That's a big pill to swallow for KVM.
>
> Any ideas what is the best way to get disk performance using KVM.

"Don't use local disks." KVM performs badly at the moment, and I fail to see a solution right now. I wouldn't recommend virtualisation when you need strong disk performance, as for databases. If you really must use virtualisation, *and* need the best performance, consider running Xen on Debian Lenny, or VMware ESX(i) 4.



Some conclusions from my tests:
- Processor-wise they all perform well.
- Memory bandwidth is broadly comparable, but best on VMware.
- Network speed varies, but no conclusive tests so far; tuning for gigabit connections is of course needed.
- Disk access is the major bottleneck on *all* platforms.

About disk access I can also add these first observations:
- VMware performs best, closely followed by Xen (using paravirtualisation, of course).
- KVM (using virtio) performs worst and still has a lot of catching up to do on Xen.
- A recent KVM/Ubuntu combo (Karmic, Lucid) performs worse than CentOS 5.4/KVM (which carries older but patched versions of KVM and vmlinuz).
- Debian Lenny + Xen is far more performant.

Overall, I get a strong feeling that the Ubuntu flavours I tested perform badly, even compared to older versions of other platforms. It seems very obvious that Red Hat's kernel patches are well optimized.


It might be interesting to consult the kernel team, but right now I don't have the numbers compiled and ready to publish. I'll ping the list when they are ready.
--
Kind regards,

Serge van Ginderachter


 
05-12-2010, 11:07 AM
Benoit des Ligneris

Virtualization and disk performance

Hello,
We have studied the performance of virtualized systems quite extensively (mainly open source, but VMware as well). Even if it is an ageing report, you can find the key results here: http://www.slideshare.net/bligneri/comparison-of-open-source-virtualization-technology
(Note: this is only a presentation of a more complete M.Sc. thesis in Computer Science, in French.)

We strongly recommend not virtualizing any file server if you expect some load. Even VMware with a supported hardware and software stack can be a real PITA: we had a very bad experience with one of our customers (40,000+ users and several TB of shared space). Virtualization does not play well with high-performance I/O, and when you pay the premium for your nice SAN, Fibre Channel switches and extra-powerful servers, you can accept a small performance drop; but in our case stability problems were the real trouble, and much worse to diagnose and address.

For databases, the results are in line with the previous point: if intensive I/O is expected and you want the best performance and, once again, the best stability, then... same analysis. Also, even though benchmarks exist for virtualization, I strongly encourage you to use your own load as a benchmark: this provides the most accurate results and limits surprises during the roll-out.
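As a minimal sketch of benchmarking against the intended storage with dbench (the same tool used earlier in the thread; the mount point, duration and client count below are placeholders, not values from the thread):

```
# Hypothetical run: 100 clients for 300 seconds against the candidate
# storage mounted at /srv/dbtest (path is a placeholder).
dbench -D /srv/dbtest -t 300 100
```

dbench can also replay a custom load file via its -c option, which gets closer to "your own load" than the built-in workload; running the actual database workload is of course closer still.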

Ben

On Wed, May 12, 2010 at 3:25 AM, David Peall <david@dnservices.co.za> wrote:

> On Wed, 2010-05-12 at 10:19 +0300, Imre Gergely wrote:
> > With KVM guest, is that performance measured using virtio? If not,
> > maybe you could try that.
>
> With virtio using a block device I get:
> Throughput 45.2551 MB/sec 100 clients 100 procs max_latency=27865.397 ms
>
> --
> David Peall
> Domain Name Services


--
Benoit des Ligneris Ph. D., CEO
Revolution Linux
http://www.rlnx.com/



 
