03-22-2010, 05:39 PM
"Nikolai K. Bochev"

qcow2 format state in Lucid

Hello list,

I am quite curious about the state of the qcow2 format in Lucid. As I recently discovered, when running a KVM machine with a qcow2 disk under Karmic I get a big performance hit. Not only is disk performance reduced (copying at about 2 MB/s from the host to the guest, and about the same from the network to the guest), but it also hits the guest CPU pretty hard and the guest can become quite unstable. I know this is more of a KVM/QEMU question, but I was wondering if anyone has already played around with Lucid, and with that part in particular. I have quite a few virtual hosts with lots of guests running in production environments, and I want to know whether I should just switch to the raw format or wait until Lucid comes out and keep the qcow2 disks.

Regards

--



Nikolai K. Bochev
System Administrator

Website : GrandstarCO | http://www.grandstarco.com ( Not yet operational )



--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam
 
03-22-2010, 09:59 PM
Dustin Kirkland

qcow2 format state in Lucid

On Mon, Mar 22, 2010 at 12:39 PM, Nikolai K. Bochev
<n.bochev@grandstarco.com> wrote:
> I am quite curious about the state of the qcow2 format in Lucid. As I
> recently discovered, when running a KVM machine with a qcow2 disk under
> Karmic I get a big performance hit. [...] I have quite a few virtual hosts
> with lots of guests running in production environments, and I want to know
> whether I should just switch to the raw format or wait until Lucid comes
> out and keep the qcow2 disks.

I'm using KVM + qcow2 + virtio with Lucid host and Lucid guest quite
extensively.

Can you tell me more about your setup, as I'd like to see if I can
reproduce the problem you're seeing?

Specifically, what is your kvm command line when you're seeing the
problem, and what are you using to measure your performance or disk
throughput?

:-Dustin

 
03-23-2010, 04:37 AM
"Nikolai K. Bochev"

qcow2 format state in Lucid

----- "Dustin Kirkland" <kirkland@canonical.com> wrote:

> I'm using KVM + qcow2 + virtio with Lucid host and Lucid guest quite
> extensively.

I encountered this when I upgraded the hosts from Jaunty to Karmic. The guests vary, but they are mostly either Hardy or Karmic, and the slowdown is the same no matter which guest.

> Can you tell me more about your setup, as I'd like to see if I can
> reproduce the problem you're seeing?

Here is one of my hosts (sorry for the long listing):

system X8DT3
/0 bus X8DT3
/0/0 memory 64KiB BIOS
/0/4 processor Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
/0/4/5 memory 128KiB L1 cache
/0/4/6 memory 1MiB L2 cache
/0/4/7 memory 8MiB L3 cache
/0/8 processor Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
/0/8/9 memory 128KiB L1 cache
/0/8/a memory 1MiB L2 cache
/0/8/b memory 8MiB L3 cache
/0/39 memory System Memory
/0/39/0 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/39/1 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/39/2 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/39/3 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/39/4 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/39/5 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/47 memory System Memory
/0/47/0 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/47/1 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/47/2 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/47/3 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/47/4 memory 2GiB DIMM 1333 MHz (0.8 ns)
/0/47/5 memory DIMM 1333 MHz (0.8 ns) [empty]
/0/55 memory Flash Memory
/0/55/0 memory 4MiB FLASH Non-volatile 33 MHz (30.3 ns)
/0/1 memory
/0/2 memory
/0/100 bridge 5520 I/O Hub to ESI Port
/0/100/1 bridge 5520/5500/X58 I/O Hub PCI Express Root Port 1
/0/100/1/0 eth0 network 82576 Gigabit Network Connection
/0/100/1/0.1 eth1 network 82576 Gigabit Network Connection
/0/100/3 bridge 5520/5500/X58 I/O Hub PCI Express Root Port 3
/0/100/3/0 scsi0 storage MegaRAID SAS 1078
/0/100/3/0/2.0.0 /dev/sda disk 997GB MegaRAID SAS RMB
/0/100/3/0/2.0.0/1 /dev/sda1 volume 1906MiB EXT4 volume
/0/100/3/0/2.0.0/2 /dev/sda2 volume 927GiB Extended partition
/0/100/3/0/2.0.0/2/5 /dev/sda5 volume 927GiB Linux LVM Physical Volume partition
/0/100/3/0/2.1.0 /dev/sdb disk 498GB MegaRAID SAS RMB
/0/100/3/0/2.1.0/1 /dev/sdb1 volume 464GiB Linux LVM Physical Volume partition
/0/100/5 bridge 5520/X58 I/O Hub PCI Express Root Port 5
/0/100/7 bridge 5520/5500/X58 I/O Hub PCI Express Root Port 7
/0/100/8 bridge 5520/5500/X58 I/O Hub PCI Express Root Port 8
/0/100/9 bridge 5520/5500/X58 I/O Hub PCI Express Root Port 9
/0/100/14 generic 5520/5500/X58 I/O Hub System Management Registers
/0/100/14.1 generic 5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers
/0/100/14.2 generic 5520/5500/X58 I/O Hub Control Status and RAS Registers
/0/100/14.3 generic 5520/5500/X58 I/O Hub Throttle Registers
/0/100/16 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.1 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.2 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.3 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.4 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.5 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.6 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/16.7 generic 5520/5500/X58 Chipset QuickData Technology Device
/0/100/1a bus 82801JI (ICH10 Family) USB UHCI Controller #4
/0/100/1a.1 bus 82801JI (ICH10 Family) USB UHCI Controller #5
/0/100/1a.2 bus 82801JI (ICH10 Family) USB UHCI Controller #6
/0/100/1a.7 bus 82801JI (ICH10 Family) USB2 EHCI Controller #2
/0/100/1d bus 82801JI (ICH10 Family) USB UHCI Controller #1
/0/100/1d.1 bus 82801JI (ICH10 Family) USB UHCI Controller #2
/0/100/1d.2 bus 82801JI (ICH10 Family) USB UHCI Controller #3
/0/100/1d.7 bus 82801JI (ICH10 Family) USB2 EHCI Controller #1
/0/100/1e bridge 82801 PCI Bridge
/0/100/1e/3 display MGA G200eW WPCM450
/0/100/1f bridge 82801JIR (ICH10R) LPC Interface Controller
/0/100/1f.3 bus 82801JI (ICH10 Family) SMBus Controller
/0/101 bridge Intel Corporation
/0/102 bridge Intel Corporation
/0/103 bridge Intel Corporation
/0/104 bridge Intel Corporation
/0/105 bridge 5520/5500/X58 Physical Layer Port 0
/0/106 bridge 5520/5500 Physical Layer Port 1
/0/107 bridge Intel Corporation
/0/108 bridge Intel Corporation
/0/109 bridge Intel Corporation
/0/10a bridge Intel Corporation
/0/10b bridge Intel Corporation
/1 vnet0 network Ethernet interface
/2 vnet1 network Ethernet interface
/3 vnet2 network Ethernet interface
/4 vnet3 network Ethernet interface
/5 vnet4 network Ethernet interface
/6 vnet5 network Ethernet interface
/7 vnet6 network Ethernet interface
/8 vnet7 network Ethernet interface
/9 vnet8 network Ethernet interface
/a vnet9 network Ethernet interface
/b vnet10 network Ethernet interface



> Specifically, what is your kvm command line when you're seeing the
> problem, and what are you using to measure your performance or disk
> throughput?


I have around 7 or 8 hosts. Most of them use mdadm, which only affects the overall speed difference; I've encountered the slowdowns regardless of whether I use hardware RAID or mdadm. We run mostly RAID 5 and RAID 10, with the occasional RAID 6, and the slowdowns show up on all of them. On the host described above I have the following guests running:

2053 ? Sl 181:54 /usr/bin/kvm -S -M pc-0.11 -m 512 -smp 2 -name dev -uuid f0366094-81f3-22d9-ec9e-e93d30855a74 -monitor unix:/var/run/libvirt/qemu/dev.monitor,server,nowait -boot c -drive file=/srv/storage/virt/ubuntu-9.10-server-amd64.iso,if=ide,media=cdrom,index=2,format= -drive file=/srv/storage/virt/dev-sda.img,if=virtio,index=0,boot=on,format= -net nic,macaddr=52:54:00:67:39:db,vlan=0,model=e1000,name=e1000.0 -net tap,fd=15,vlan=0,name=tap.0 -net nic,macaddr=52:54:00:21:6b:1d,vlan=1,model=e1000,name=e1000.1 -net tap,fd=16,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus
2274 ? Sl 960:12 /usr/bin/kvm -S -M pc-0.11 -m 2048 -smp 2 -name purplewing -uuid 2020a1fb-bf5e-6d7b-ef14-b6cb5b0d7c2e -monitor unix:/var/run/libvirt/qemu/purplewing.monitor,server,nowait -boot c -drive file=,if=ide,media=cdrom,index=2,format= -drive file=/srv/storage/virt/purplewing-sda.img,if=scsi,index=0,boot=on,format= -net nic,macaddr=52:54:00:0d:2c:c6,vlan=0,model=e1000,name=e1000.0 -net tap,fd=16,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc 127.0.0.1:1 -k en-us -vga cirrus
2458 ? Sl 148:40 /usr/bin/kvm -S -M pc-0.11 -m 2048 -smp 2 -name bluewing -uuid 20709a00-bdcb-d561-de00-b396c531643d -monitor unix:/var/run/libvirt/qemu/bluewing.monitor,server,nowait -boot c -drive file=/srv/storage/virt/bluewing-sda.qcow2,if=ide,index=0,boot=on,format=raw -drive file=,if=ide,media=cdrom,index=2,format= -net nic,macaddr=52:54:00:39:6e:31,vlan=0,model=e1000,name=e1000.0 -net tap,fd=17,vlan=0,name=tap.0 -net nic,macaddr=52:54:00:0f:37:07,vlan=1,model=e1000,name=e1000.1 -net tap,fd=18,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc 127.0.0.1:2 -k en-us -vga cirrus
2469 ? Sl 595:54 /usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 2 -name redwing -uuid ae675ba2-dae6-46e1-cb94-56302696f596 -monitor unix:/var/run/libvirt/qemu/redwing.monitor,server,nowait -boot c -drive file=/srv/storage/virt/redwing-sda.img,if=scsi,index=0,boot=on,format= -net nic,macaddr=52:54:00:0a:ea:74,vlan=0,model=e1000,name=e1000.0 -net tap,fd=18,vlan=0,name=tap.0 -net nic,macaddr=52:54:00:09:9d:f2,vlan=1,name=nic.0 -net tap,fd=19,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc 127.0.0.1:3 -k en-us -vga cirrus
2480 ? Rl 34:38 /usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name whitewing -uuid 62b51468-b022-7a3c-642a-88ca4218ee74 -monitor unix:/var/run/libvirt/qemu/whitewing.monitor,server,nowait -boot c -drive file=/srv/storage/virt/whitewing-sda.qcow2,if=ide,index=0,boot=on,format=raw -drive file=/srv/storage/virt/ubuntu-9.04-server-amd64.iso,if=ide,media=cdrom,index=2,format= -net nic,macaddr=52:54:00:6e:29:3e,vlan=0,model=e1000,name=e1000.0 -net tap,fd=19,vlan=0,name=tap.0 -net nic,macaddr=52:54:00:61:5a:11,vlan=1,model=e1000,name=e1000.1 -net tap,fd=20,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc 127.0.0.1:4 -k en-us -vga cirrus
2486 ? Rl 1893:04 /usr/bin/kvm -S -M pc-0.11 -m 2048 -smp 2 -name yellowwing -uuid ca1df004-80dd-646d-6fd1-36e1c112da2a -monitor unix:/var/run/libvirt/qemu/yellowwing.monitor,server,nowait -boot c -drive file=/srv/storage/virt/yellowwing-sda.qcow2,if=ide,index=0,boot=on,format=raw -drive file=/srv/storage/virt/ubuntu-8.04.3-server-amd64.iso,if=ide,media=cdrom,index=2,format= -net nic,macaddr=52:54:00:7e:d8:5d,vlan=0,model=e1000,name=e1000.0 -net tap,fd=20,vlan=0,name=tap.0 -net nic,macaddr=52:54:00:62:17:5e,vlan=1,model=e1000,name=e1000.1 -net tap,fd=21,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc 127.0.0.1:5 -k en-us -vga cirrus

Don't be fooled by the filenames of the disks, though, as I have experimented and converted them between formats quite a few times.
I've also switched the disk driver from virtio to scsi or even ata a few times, and the network driver from virtio to e1000 or rtl8139, and it mostly makes no difference.
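
For reference, and assuming the qemu-img tool from the qemu-kvm / qemu-utils package is available on the host, it can report what an image actually contains regardless of the file name; a quick check looks something like:

# print the real on-disk format, virtual size and space actually used
qemu-img info /srv/storage/virt/bluewing-sda.qcow2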


I ran into this while preparing a host with just two guests on it: one web server and one file server (Samba). Everything went fine until I started rsyncing files from the network to the Samba guest. I will try to get you exact numbers for the slowdown later today (the host is already in production, so I can't experiment with it right now), but we're talking something along the lines of:

host disk speeds - around 60 MB/s reads and 30 MB/s writes (ext4 filesystem). We have a gigabit network at the office.
guest disk speeds - around 2-7 MB/s for both reads and writes.

I'm using dstat and iotop to watch the disk speeds. The rsync command is:

rsync -avz source target
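
For a rougher but repeatable number, a simple sequential test inside the guest could look like the sketch below (the /tmp/ddtest path is just a placeholder), watched from the host with dstat -d or iotop -o:

# ~1 GiB sequential write; conv=fdatasync forces the data to disk before dd reports a rate
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
# drop the guest page cache so the read actually hits the virtual disk
echo 3 > /proc/sys/vm/drop_caches
# sequential read back
dd if=/tmp/ddtest of=/dev/null bs=1M
rm /tmp/ddtest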

Now, the speeds above would be acceptable if the guest otherwise behaved normally, but unfortunately once you start a longer copy TO the guest (say a 2 GB file), it hits the guest CPU pretty hard (CPU at 100%, load average around 5) and the machine becomes very slow and unresponsive. The host behaves normally.

All of this happens only with the qcow2 disk format. If I switch to raw, the disk speeds increase by a quarter or so and the CPU usage goes back to normal.

At first we thought it was the networking, so I ran iperf between two guests, which did not show the same effect. Then we ran bonnie++ on the guests and saw the slowdowns.

I will try to provide exact speeds and measurements later today.

My only concern is whether this behavior persists in Lucid. At the moment I have no spare hardware to test Lucid on, and I am considering converting all the qcow2 images to raw, but I wanted to keep qcow2 for its snapshot ability.
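
For the record, the conversion itself would just be a qemu-img operation (the file names below are only examples); it is the internal snapshot support that I would lose, since that is qcow2-only:

# convert a qcow2 image to raw while the guest is shut down
qemu-img convert -O raw guest-sda.qcow2 guest-sda.img
# qcow2 only: create and list internal snapshots of an offline image
qemu-img snapshot -c before-upgrade guest-sda.qcow2
qemu-img snapshot -l guest-sda.qcow2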

> :-Dustin

--


Nikolai K. Bochev
System Administrator

Website : GrandstarCO | http://www.grandstarco.com




 
03-23-2010, 06:31 AM
Imre Gergely

qcow2 format state in Lucid

On 03/22/2010 08:39 PM, Nikolai K. Bochev wrote:
>
> Hello list,
>
> I am quite curious about the state of the qcow2 format in Lucid. As I
> recently discovered, when running a KVM machine with a qcow2 disk under
> Karmic I get a big performance hit. [...] I have quite a few virtual hosts
> with lots of guests running in production environments, and I want to know
> whether I should just switch to the raw format or wait until Lucid comes
> out and keep the qcow2 disks.

I did notice some strange slowdowns, but not like this. I have a Karmic host and tried to install the (then) latest Lucid from a nightly alternate ISO. It was extremely slow: it took something like an hour to install the base system, and it just sat there, apparently doing nothing (for example during the kernel install). I've installed a couple of other guests, none of which were this slow.

I didn't investigate any further because I needed that guest right away, so I just installed Karmic and upgraded to Lucid, which was a lot faster.

This probably isn't related, but I just remembered it. I'll try to reproduce it when I get home.

--
Imre Gergely
Yahoo!: gergelyimre | ICQ#: 101510959
MSN: gergely_imre | GoogleTalk: gergelyimre
gpg --keyserver subkeys.pgp.net --recv-keys 0x34525305

 
03-23-2010, 10:19 AM
Loïc Minier

qcow2 format state in Lucid

Hi

In upstream qemu.git:
commit 0aa217e46124e873f75501f7187657e063f5903b
Author: Kevin Wolf <kwolf@redhat.com>
Date: Tue Jun 30 13:06:04 2009 +0200

qcow2: Make cache=writethrough default

The performance of qcow2 has improved meanwhile, so we don't need to
special-case it any more. Switch the default to write-through caching
like all other block drivers.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

You can pass cache=writeback to get the old behavior back, but it's a bit
unsafe: even if the guest believes some I/O has hit the disk, it might
still be sitting in caches on the host.
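
As a sketch (the file name and memory size below are placeholders, not taken from the command lines earlier in the thread), the cache mode is set per -drive, so restoring writeback for a qcow2 disk looks roughly like:

# explicit writeback caching: faster, but data can be lost if the host crashes or loses power
kvm -m 512 -drive file=/srv/storage/virt/guest-sda.qcow2,if=virtio,format=qcow2,cache=writeback

For libvirt-managed guests, the same thing should be expressible via the cache attribute on the disk's <driver> element in the domain XML.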

Bye
--
Loïc Minier

 
