On 09/14/2011 04:14 PM, Ahmed Kamal wrote:
On Wed 14 Sep 2011 03:44:10 PM EET, Serge E. Hallyn wrote:
Quoting email@example.com (firstname.lastname@example.org):
From: "Serge E. Hallyn"<email@example.com>
Cc: firstname.lastname@example.org, Mark Mims<email@example.com>
Date: 13/09/2011 17:07
Subject: Re: building a list of KVM workloads
Thanks, guys. Unfortunately I'm having a harder time thinking through
how to properly classify these by characteristics. Here is an initial attempt:
* source code hosting (github, gitosis, etc)
* checkpointable (i.e. Mark's single point backup gitosis vms)
- qcow2 or qed based for snapshotting?
* web hosting
* Network performance (hard to generalize)
- various application layers/tiers
* db hosting
* desktop virtualization
- ideally, using spice?
Yes, but I haven't tried it yet, since installation is not 'standard' yet.
- should survive unexpected host reboots?
This is something REALLY important which, as far as I know, is better
managed by Red Hat too :-(. I nearly died when I accidentally typed
'reboot' in the wrong terminal (after which I installed molly-guard
everywhere) and when I noticed there was no clean shutdown of the running VMs.
Note that as of very recently, all your libvirt-managed VMs at least
should cleanly shut down before the host finishes shutting down.
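That clean shutdown is handled by the libvirt-guests service; a sketch of its config follows (the file path and defaults vary by distribution, so treat the values as assumptions, not a recommendation):

```shell
# /etc/default/libvirt-guests on Debian/Ubuntu
# (/etc/sysconfig/libvirt-guests on Red Hat derivatives)

# On host shutdown, ask each guest to shut down cleanly rather than killing it
ON_SHUTDOWN=shutdown
# How long (in seconds) to wait per guest before giving up
SHUTDOWN_TIMEOUT=120
# Leave guests alone on host boot (alternatives: start)
ON_BOOT=ignore
```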
Though I was thinking more of using caching and journaled filesystems,
and perhaps even the fs on the host.
Even worse: that reboot corrupted some of the running Windows VMs...
I did some research on that, but didn't find time to properly test it
and implement the stuff I found (basically, the init scripts used in
Red Hat, as far as I remember).
* Windows workloads?
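On the 'checkpointable' point above: internal snapshots are one reason to pick qcow2, since raw images don't support them. A minimal sketch with qemu-img (image and snapshot names are placeholders):

```shell
qemu-img snapshot -c before-upgrade guest.qcow2   # create an internal snapshot
qemu-img snapshot -l guest.qcow2                  # list snapshots
qemu-img snapshot -a before-upgrade guest.qcow2   # roll back (guest powered off)
qemu-img snapshot -d before-upgrade guest.qcow2   # delete the snapshot
```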
I'll probably put these up on the wiki soon so we can all edit, but in
the meantime if you have any suggestions for improving the grouping or
filling in characteristics, please speak up.
I noticed that most of my load is due to CPU wait: disk I/O, I guess. Most of
the troubles with too much 'wait' are due to Windows VMs.
All my VMs use qcow2. There is an option, when you create the disk
When running kvm by hand, I almost always use raw. The vm-tools which I
use very frequently use qcow2. It's worth publicizing some (new) measurements
of performance with qcow, qed, and raw (both raw backing file and raw
partition), on whose results we can base recommendations for these workloads.
Any votes for which benchmark to use?
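fio would be one obvious candidate for the disk side; a minimal random-write job, run inside each guest, might look like this (the file name, size, and runtime below are just placeholder assumptions):

```shell
# 4k random writes with O_DIRECT, so the guest page cache stays out of the way
fio --name=randwrite --filename=/var/tmp/fio-test --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```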
Likewise, something like kernbench on two identical VMs, one with and
one without, would be interesting. Heck, memory and SMP configurations,
smp=1/2/4/8 and -m 256/512/1024, would be interesting too. Though in
what I've used before, having -m 4096 and doing all work in tmpfs
was nice and quick.
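The smp/memory sweep is easy to drive from a tiny wrapper; this sketch only prints the kvm command lines rather than running them (the disk image name and drive flags are assumptions for illustration):

```shell
#!/bin/sh
# Expand the smp x memory matrix into kvm invocations (printed, not executed)
for smp in 1 2 4 8; do
    for mem in 256 512 1024; do
        echo "kvm -smp $smp -m $mem -drive file=disk.img,if=virtio"
    done
done
```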
Finally, virtio and fake SCSI might have different effects on different
filesystems, so maybe we should compare xfs, jfs, ext4, and ext3.
That's getting to be a lot of things to measure, especially without an
automated system to do so. Heck, we'll see if I end up re-writing one.
manually, to 'preallocate', which is supposed to increase the performance
a lot: -o preallocation=metadata.
From "KVM I/O slowness on RHEL 6":
Hm, this would be worth measuring and publicizing as a part of this. I
always choose not to preallocate, and use cache=none. Just how much
does performance change (in both directions) when I do preallocate, or don't?
So, if you are using Red Hat Enterprise Linux or Fedora Linux as the
operating system for your virtualization server and you plan to use the
QCOW2 format, remember to manually create preallocated virtual disk
images and to use a "none" cache policy (you can also use a "write-back"
policy, but be warned that your guests will be more prone to data loss).
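Putting the article's two recommendations together on the command line, a sketch (the image name, size, and memory value are placeholders):

```shell
# Create the qcow2 image with metadata preallocated up front
qemu-img create -f qcow2 -o preallocation=metadata disk.qcow2 20G

# Attach it with the host page cache bypassed; cache=writeback is the faster
# but less crash-safe alternative the article warns about
kvm -m 1024 -drive file=disk.qcow2,if=virtio,cache=none
```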
If you can confirm this article, then I guess this should be a default
option when creating disk images from the GUI VMManager.
If we can tie the results for certain configurations to particular workloads,
then we could perhaps go a bit further.
I haven't been following this thread closely, but my understanding is
that we're after testing KVM in lots of different situations? If that is
something that can benefit from community contribution perhaps someone
can start a matrix of workload testing needed, and I can bang some drums
to try and get interested members to test. Hopefully this matrix will
include the needed kvm CLI options mentioned, or whatever is needed to
make testing easy.
ubuntu-server mailing list
More info: https://wiki.ubuntu.com/ServerTeam