09-09-2011, 02:03 PM
"Serge E. Hallyn"

building a list of KVM workloads

Hi,

At UDS it was decided we should compile a list of VM workloads we are
interested in, define their characteristics, measure the impact of
various tunables (drive types, caching, KSM, etc), and put out a list
of recommendations based upon those.

Myself, I mostly use KVM for testing (so, cache=none, etc). A start
to a list might be:

bug reproduction/distro-installer test (cache=none, throwaway)
guest desktop (kiosk)
apache server?
compute-intensive (UEC/openstack based distcc node?)
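(As a concrete sketch of the first, throwaway case: such a guest might be thrown
together with something along these lines; the image name, sizes, and ISO path
are only placeholders.)

    qemu-img create -f raw test.img 10G
    kvm -m 1024 -smp 2 \
        -drive file=test.img,if=virtio,cache=none \
        -cdrom oneiric-server-amd64.iso -boot d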

What would your list be?

thanks,
-serge

 
09-09-2011, 02:38 PM
Mark Mims

building a list of KVM workloads

On Fri, 2011-09-09 at 09:03 -0500, Serge E. Hallyn wrote:
> Hi,
>
> At UDS it was decided we should compile a list of VM workloads we are
> interested in, define their characteristics, measure the impact of
> various tunables (drive types, caching, KSM, etc), and put out a list
> of recommendations based upon those.
>
> Myself, I mostly use KVM for testing (so, cache=none, etc). A start
> to a list might be:
>
> bug reproduction/distro-installer test (cache=none, throwaway)
> guest desktop (kiosk)
> apache server?
> compute-intensive (UEC/openstack based distcc node?)
>
> What would your list be?
I typically treat them like cloud instances and waste kvms quite
liberally...
- torrent sandbox (security)
- dav sandbox (security + virtio throughput to libvirt storage pools)
- nfs (virtio again... bridge performance)
- distro tests
- development envs
- irc proxy / bitlbee
- gitosis / private source repos (easy single backup point)

and have used them in production for a whole lot more in the past
- dedicated vpn gateways
- HIPAA-isolated web apps (lawyer said they were isolated enough...
even with ksmd)
- various application layers/tiers (very sensitive to virtual
networking choices)

...almost always behind libvirt (virsh mostly).

>
> thanks,
> -serge
>

--
Mark Mims, Ph.D.
Canonical Ltd.
mark.mims@canonical.com
+1(512)981-6467


 
09-09-2011, 07:20 PM
Douglas Stanley

building a list of KVM workloads

On Fri, Sep 9, 2011 at 10:03 AM, Serge E. Hallyn
<serge.hallyn@canonical.com> wrote:
> Hi,
>
> At UDS it was decided we should compile a list of VM workloads we are
> interested in, define their characteristics, measure the impact of
> various tunables (drive types, caching, KSM, etc), and put out a list
> of recommendations based upon those.
>
> Myself, I mostly use KVM for testing (so, cache=none, etc). A start
> to a list might be:
>
>     bug reproduction/distro-installer test (cache=none, throwaway)
>     guest desktop (kiosk)
>     apache server?
>     compute-intensive (UEC/openstack based distcc node?)
>
> What would your list be?
>

I think the shorter list would be of items I don't use VMs for, and
that would be pretty much an empty list!
I guess the only thing I don't use VMs for... is running VMs... seriously.

I think you should have some items on the list that have a stigma
attached... like "oh, you can never run a db server as a VM". Bull!

> thanks,
> -serge
>



--
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

 
09-09-2011, 11:58 PM
Todd Deshane

building a list of KVM workloads

On Fri, Sep 9, 2011 at 10:03 AM, Serge E. Hallyn
<serge.hallyn@canonical.com> wrote:
> Hi,
>
> At UDS it was decided we should compile a list of VM workloads we are
> interested in, define their characteristics, measure the impact of
> various tunables (drive types, caching, KSM, etc), and put out a list
> of recommendations based upon those.

Do you have a blueprint or link to the discussion on this?

I think I might be able to give some feedback on this if I had some
more background. My experience has been primarily with Xen, but I am
also familiar with using KVM.

Thanks,
Todd

--
Todd Deshane
http://www.linkedin.com/in/deshantm
http://www.xen.org/products/cloudxen.html
http://runningxen.com/

 
09-11-2011, 02:20 PM
"Serge E. Hallyn"

building a list of KVM workloads

Quoting Todd Deshane (todd.deshane@xen.org):
> On Fri, Sep 9, 2011 at 10:03 AM, Serge E. Hallyn
> <serge.hallyn@canonical.com> wrote:
> > Hi,
> >
> > At UDS it was decided we should compile a list of VM workloads we are
> > interested in, define their characteristics, measure the impact of
> > various tunables (drive types, caching, KSM, etc), and put out a list
> > of recommendations based upon those.
>
> Do you have a blueprint or link to the discussion on this?
>
> I think I might be able to give some feedback on this if I had some
> more background. My experience has been primarily with Xen, but I am
> also familiar with using KVM.

Yup, the blueprint is at
https://blueprints.launchpad.net/ubuntu/+spec/server-o-kvm-document-suggested-changes
and the pad is at
http://pad.ubuntu.com/uds-o-server-o-kvm-document-suggested-changes

-serge

 
09-12-2011, 11:53 AM

building a list of KVM workloads

> On Fri, Sep 9, 2011 at 10:03 AM, Serge E. Hallyn
> <serge.hallyn@canonical.com> wrote:
> > Hi,
> >
> > At UDS it was decided we should compile a list of VM workloads we are
> > interested in, define their characteristics, measure the impact of
> > various tunables (drive types, caching, KSM, etc), and put out a list
> > of recommendations based upon those.
> >
> > Myself, I mostly use KVM for testing (so, cache=none, etc). A start
> > to a list might be:
> >
> >     bug reproduction/distro-installer test (cache=none, throwaway)
> >     guest desktop (kiosk)
> >     apache server?
> >     compute-intensive (UEC/openstack based distcc node?)
> >
> > What would your list be?
> >



At my work, we use KVM for many things:

  * DOMINO mail server running on a virtual MS2003 server
  * Phone central SW on a virtual XP desktop
  * some virtual desktops running XP for legacy apps
  * 3 MySQL DB servers for testing applications
  * Arago/OpenEmbedded filesystem builder server
  * Nagios server
  * Alfresco server
  * ISPConfig server (web hosting)
  * Redmine server (git)
  * GLPI server

And the list keeps growing...
And everything keeps working properly (so far ;-)...

 
09-13-2011, 03:07 PM
"Serge E. Hallyn"

building a list of KVM workloads

Thanks, guys. Unfortunately I'm having a harder time thinking through
how to properly classify these by characteristics. Here is an inadequate
attempt:

  * source code hosting (github, gitosis, etc)
    - characteristics?
  * checkpointable (i.e. Mark's single point backup gitosis vms; see the
    sketch just below this list)
    - qcow2 or qed based for snapshotting?
  * web hosting
    - characteristics?
  * Network performance (hard to generalize)
    - vpn
    - various application layers/tiers
    - characteristics?
  * db hosting
    - characteristics?
  * desktop virtualization
    - ideally, using spice?
    - should survive unexpected host reboots?
  * windows workloads?
    - characteristics?
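(As a concrete sketch of the 'checkpointable' case above: with a qcow2 image, an
internal snapshot round-trip could look roughly like the following, taken while
the guest is shut down; the image and snapshot names are just placeholders.)

    qemu-img snapshot -c pre-upgrade gitosis.qcow2    # create internal snapshot
    qemu-img snapshot -l gitosis.qcow2                # list snapshots
    qemu-img snapshot -a pre-upgrade gitosis.qcow2    # roll back to the snapshot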

I'll probably put these up on the wiki soon so we can all edit, but in
the meantime if you have any suggestions for improving the grouping or
filling in characteristics, please speak up.

BTW - using VMs to host VMs *is* in fact doable, at least on AMD. I've
done it in the past just for debugging qemu bugs in several releases on
a remote, borrowed AMD laptop. It might be worth adding to the list,
as it likely will have its own tuning requirements.

thanks,
-serge

 
09-14-2011, 07:49 AM
jurgen.depicker@let.be

building a list of KVM workloads

> From: "Serge E. Hallyn" <serge.hallyn@canonical.com>

> To: ubuntu-server <ubuntu-server@lists.ubuntu.com>

> Cc: jurgen.depicker@let.be, Mark Mims <mark.mims@canonical.com>

> Date: 13/09/2011 17:07

> Subject: Re: building a list of KVM workloads

>

> Thanks, guys. *Unfortunately I'm having a harder time thinking
through

> how to properly classify these by characteristics. *Here is an
inadequate

> attempt:

>

> * * source code hosting (github, gitosis, etc)

> * * - characteristics?

> * * checkpointable (i.e. Mark's single point backup gitosis vms)

> * * - qcow2 or qed based for snapshotting?

> * * web hosting

> * * - characteristics?

> * * Network performance (hard to generalize)

> * * - vpn

> * * - various appliation layer/tiers

> * * - characteristics?

> * * db hosting

> * * - characteristics?

> * * desktop virtualization

> * * - ideally, using spice?

Yes, but i haven't tried yet since installation is
not 'standard' yet.

http://www.linux-kvm.com/content/spice-ubuntu-wiki-available



> * * - should survive unexpected host
reboots?

This is something REALLY important which, as far as
i know, is better managed with RedHat too :-(. *I nearly died when
I accidentally typed 'reboot' in the wrong terminal (after which i installed
molly-guard everywhere) and when i noticed there was no clean shutdown
of the guests; more even: that reboot corrupted some of the running windows-Vms...

I did some research on that, but didn't find time
to properly synthetise it and implement the stuff I found (basically, the
init scripts used in redhat as far as i remember).

https://exain.wordpress.com/2009/05/22/auto-shutdown-kvm-virtual-machines-on-system-shutdown/

http://www.linux-kvm.com/content/stop-script-running-vms-using-virsh

https://help.ubuntu.com/community/KVM/Managing#Suspend%20and%20resume%20a%20Virtual%20Ma chine
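(Along the lines of those stop-script links, a minimal, untested sketch of
asking libvirt to shut every running guest down cleanly; it assumes domain
names contain no spaces.)

    #!/bin/sh
    # Ask libvirt to shut down every running guest, then give them time to finish.
    for vm in $(virsh list | awk 'NR>2 && $2 != "" {print $2}'); do
        virsh shutdown "$vm"
    done
    sleep 120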



>   * windows workloads?
>     - characteristics?
>
> I'll probably put these up on the wiki soon so we can all edit, but in
> the meantime if you have any suggestions for improving the grouping or
> filling in characteristics, please speak up.

I noticed that most of my load is due to CPU wait: disk I/O, I guess. Most
troubles with too much 'wait' are due to Windows VMs.

All my VMs use qcow2. There is an option, when you create the disk images
manually, to 'preallocate', which is supposed to increase performance a lot:
-o preallocation=metadata.

From "KVM I/O slowness on RHEL 6":
http://www.ilsistemista.net/index.php/virtualization/11-kvm-io-slowness-on-rhel-6.html

  So, if you are using Red Hat Enterprise Linux or Fedora Linux as the host
  operating system for your virtualization server and you plan to use the
  QCOW2 format, remember to manually create preallocated virtual disk files
  and to use a “none” cache policy (you can also use a “write-back” policy,
  but be warned that your guests will be more prone to data loss).

If you can confirm this article, then I guess this should be a default option
when creating disk images from the GUI VM Manager.


>
> BTW - using VMs to host VMs *is* in fact doable, at least on AMD. I've
> done it in the past just for debugging qemu bugs in several releases on
> a remote, borrowed AMD laptop. It might be worth adding to the list,
> as it likely will have its own tuning requirements.
>
> thanks,
> -serge

 
09-14-2011, 01:44 PM
"Serge E. Hallyn"

building a list of KVM workloads

Quoting jurgen.depicker@let.be (jurgen.depicker@let.be):
> > From: "Serge E. Hallyn" <serge.hallyn@canonical.com>
> > To: ubuntu-server <ubuntu-server@lists.ubuntu.com>
> > Cc: jurgen.depicker@let.be, Mark Mims <mark.mims@canonical.com>
> > Date: 13/09/2011 17:07
> > Subject: Re: building a list of KVM workloads
> >
> > Thanks, guys. Unfortunately I'm having a harder time thinking through
> > how to properly classify these by characteristics. Here is an inadequate
> > attempt:
> >
> > * source code hosting (github, gitosis, etc)
> > - characteristics?
> > * checkpointable (i.e. Mark's single point backup gitosis vms)
> > - qcow2 or qed based for snapshotting?
> > * web hosting
> > - characteristics?
> > * Network performance (hard to generalize)
> > - vpn
> > - various application layers/tiers
> > - characteristics?
> > * db hosting
> > - characteristics?
> > * desktop virtualization
> > - ideally, using spice?
> Yes, but I haven't tried it yet, since installation is not 'standard' yet.
> http://www.linux-kvm.com/content/spice-ubuntu-wiki-available
>
> > - should survive unexpected host reboots?
> This is something REALLY important which, as far as I know, is better
> managed on RedHat too :-(. I nearly died when I accidentally typed
> 'reboot' in the wrong terminal (after which I installed molly-guard
> everywhere) and when I noticed there was no clean shutdown of the guests;

Note that as of very recently, all your libvirt-managed VMs at least
should cleanly shut down before the host finishes shutting down.

Though I was thinking more of using caching and journaled filesystems,
and perhaps even the fs on the host.

> even worse: that reboot corrupted some of the running Windows VMs...
> I did some research on that, but didn't find time to properly synthesise
> it and implement the stuff I found (basically, the init scripts used in
> RedHat, as far as I remember).
> https://exain.wordpress.com/2009/05/22/auto-shutdown-kvm-virtual-machines-on-system-shutdown/
> http://www.linux-kvm.com/content/stop-script-running-vms-using-virsh
> https://help.ubuntu.com/community/KVM/Managing#Suspend%20and%20resume%20a%20Virtual%20Machine
>
> > * windows workloads ?
> > - characteristics?
> >
> > I'll probably put these up on the wiki soon so we can all edit, but in
> > the meantime if you have any suggestions for improving the grouping or
> > filling in characteristics, please speak up.
>
> I noticed that most of my load is due to cpu wait: disk IO I guess. Most
> troubles with too much 'wait' are due to windows VMs.
>
> All my VMs use qcow2. There is an option, when you create the disk images

When running kvm by hand, I almost always use raw. The vm-tools, which I
use very frequently, use qcow2. It's worth publicizing some (new) measurements
of performance with qcow, qed, and raw (both a raw backing file and a raw LVM
partition), results on which we can base recommendations for these workloads.

Any votes for which benchmark to use?
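(For instance, the image variants to be compared might be prepared roughly like
this; sizes, file names, and the volume group name are made up, and it assumes
a qemu-img new enough to know the qed format.)

    qemu-img create -f raw   guest.raw   10G
    qemu-img create -f qcow2 guest.qcow2 10G
    qemu-img create -f qed   guest.qed   10G
    # raw LVM logical volume instead of a backing file
    lvcreate -L 10G -n guest vg0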

Likewise, something like kernbench on two identical VMs, one with swap and
one without, would be interesting. Heck, memory and smp configurations,
-smp 1/2/4/8 and -m 256/512/1024, would be interesting too. Though we'll ignore
what I've used before, -m 4096 with all work done in tmpfs. That
was nice and quick.

Finally, virtio and fake scsi might have some different effects on the usual
filesystems, so maybe we should compare xfs, jfs, ext4, ext3, and ext2 with
each.
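(A sketch of the kind of invocation matrix that implies, varying one knob at a
time; the disk image path is a placeholder, and real runs would add the usual
networking and display options.)

    # virtio disk, 2 vcpus, 1 GB RAM
    kvm -m 1024 -smp 2 -drive file=guest.img,if=virtio,cache=none
    # the same guest behind emulated scsi instead of virtio
    kvm -m 1024 -smp 2 -drive file=guest.img,if=scsi,cache=none
    # ...and repeated across -smp 1/2/4/8 and -m 256/512/1024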

That's getting to be a lot of things to measure, especially without an
automated system to do system install/setup/test/compile-results <cough>, but
heck, we'll see if I end up re-writing one.

> manually, to 'preallocate', which is supposed to increase the performance
> a lot: -o preallocation=metadata
> From "KVM I/O slowness on RHEL 6":
> http://www.ilsistemista.net/index.php/virtualization/11-kvm-io-slowness-on-rhel-6.html

Hm, this would be worth measuring and publicizing as a part of this. I
always choose not to preallocate, and use cache=none. Just how much
does performance change (in both directions) when I do preallocate, or
use cache=writeback?
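(One crude way to put numbers on that: run the same synchronous write test
inside the guest under each host-side configuration and compare the reported
throughput; the path is arbitrary.)

    # inside the guest: write 1 GiB and force it out through the virtual disk
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm /tmp/ddtest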

> So, if you are using Red Hat Enterprise Linux or Fedora Linux as the host
> operating system for your virtualization server and you plan to use the
> QCOW2 format, remember to manually create preallocated virtual disk files
> and to use a “none” cache policy (you can also use a “write-back” policy,
> but be warned that your guests will be more prone to data loss).
> If you can confirm this article, then I guess this should be a default
> option when creating disk images from the GUI VMManager

If we can tie the results for certain configurations to particular workloads,
then we could perhaps go a bit further.

thanks,
-serge

 
09-14-2011, 02:23 PM
Ahmed Kamal

building a list of KVM workloads


I haven't been following this thread closely, but my understanding is
that we're after testing KVM in lots of different situations? If that is
something that can benefit from community contribution, perhaps someone
can start a matrix of the workload testing needed, and I can bang some drums
to try to get interested members to test. Hopefully this matrix will
include the KVM CLI options mentioned, or whatever else is needed to
make testing easy.


Cheers

 
