
KVM - lower Disk IO in VM than on Host itself. Why?

Amitz

New Member
Dear all,

a friend of mine is running KVM virtualization on his server and told me that unpacking a large archive took him 9s on the host system itself and 49s in a KVM guest (12 GB RAM, virtio, 3 CPUs) on that same host. That seems strange to me, and I remember from previous discussions that I have read here (and earlier on LET) that there were some tweaks to speed up disk I/O on KVM machines. Do you have any hints that I could forward to my friend to help him out?
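For reference, a minimal sketch of how he could time the same unpack on the host and then inside the guest (the archive path and destination are my guesses at his setup; the sync is included so cached writes are counted):

Code:
# Time the same unpack on host and guest to compare the gap.
# ARCHIVE and DEST are hypothetical -- substitute real paths.
import os
import subprocess
import time

ARCHIVE = "/root/test-archive.tar.gz"
DEST = "/root/unpack-test"

os.makedirs(DEST, exist_ok=True)

start = time.monotonic()
subprocess.run(["tar", "-xzf", ARCHIVE, "-C", DEST], check=True)
subprocess.run(["sync"], check=True)  # flush the page cache so the write cost is counted
print(f"unpack + sync took {time.monotonic() - start:.1f}s")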

Thank you very much in advance!

Kind regards

Amitz 
 

splitice

Just a little bit crazy...
Verified Provider
If the compression is high it could have something to do with CPU overhead, but even so it shouldn't be that different. Is it a tar.gz or similar archive or something strange?
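One way to tell the two apart: time the decompression alone (writing nothing to disk) and then the full extraction. If the first step is already slow in the VM, it's CPU overhead rather than disk I/O. A rough sketch, assuming a gzip-compressed tarball at a hypothetical path:

Code:
# Separate decompression CPU cost from disk write cost.
import subprocess
import time

ARCHIVE = "/root/test-archive.tar.gz"  # hypothetical path

def timed(cmd, **kwargs):
    t0 = time.monotonic()
    subprocess.run(cmd, check=True, **kwargs)
    return time.monotonic() - t0

# 1) Decompress to /dev/null: CPU + reads only, no disk writes.
with open("/dev/null", "wb") as null:
    cpu_only = timed(["gzip", "-dc", ARCHIVE], stdout=null)

# 2) Full extraction: adds the disk-write component.
full = timed(["tar", "-xzf", ARCHIVE, "-C", "/tmp"])

print(f"decompress only: {cpu_only:.1f}s, full extraction: {full:.1f}s")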
 

Enterprisevpssolutions

Article Submitter
Verified Provider
What type of platform are you using?

What is the storage type used for the VM: virtio, SATA, IDE, or SCSI?

What do you have the I/O scheduler set to? If it's RAID with a BBU and writeback cache, make sure you set it to deadline on the host as well as in the VM.

Did you align the partitions on the storage that the VM is on? What are your read/write speeds on the host and in the VM? (A quick way to check is sketched below.)

What kind of array do you have for the host and the VMs? RAID 1, 5, 10, 50, or JBOD?
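Here is a quick sketch for gathering some of those answers (device names and the test file location are placeholders; for proper numbers use a real benchmark tool such as fio):

Code:
# Print the active I/O scheduler per block device, then do a rough
# sequential write test. Not a substitute for a proper benchmark.
import glob
import os
import time

# The active scheduler is the bracketed entry, e.g. "noop [deadline] cfq".
for path in sorted(glob.glob("/sys/block/*/queue/scheduler")):
    dev = path.split("/")[3]
    with open(path) as f:
        print(f"{dev}: {f.read().strip()}")

# Rough sequential write: 256 MiB, fsync'd at the end.
TESTFILE = "/tmp/io-test.bin"  # hypothetical location
chunk = b"\0" * (1 << 20)      # 1 MiB
t0 = time.monotonic()
with open(TESTFILE, "wb") as f:
    for _ in range(256):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())
print(f"sequential write: {256 / (time.monotonic() - t0):.0f} MiB/s")
os.remove(TESTFILE)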
 

splitice

Just a little bit crazy...
Verified Provider
Alignment issues could also factor in; I believe these are logged to kern.log/dmesg as well.
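A sketch of how one could check for that (the 2048-sector / 1 MiB boundary is the usual rule of thumb on 512-byte-sector disks; dmesg may need root):

Code:
# Flag partitions whose start sector is not on a 2048-sector (1 MiB) boundary,
# then grep the kernel ring buffer for alignment complaints.
import glob
import subprocess

for start_path in sorted(glob.glob("/sys/block/*/*/start")):
    part = start_path.split("/")[4]
    with open(start_path) as f:
        start = int(f.read().strip())
    status = "ok" if start % 2048 == 0 else "possibly misaligned"
    print(f"{part}: starts at sector {start} ({status})")

dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in dmesg.splitlines():
    if "align" in line.lower():
        print(line)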
 

kaniini

Beware the bunny-rabbit!
Verified Provider
Enterprisevpssolutions said:
What do you have the I/O scheduler set to? If it's RAID with a BBU and writeback cache, make sure you set it to deadline on the host as well as in the VM.
Actually, if you are using a typical hardware RAID card, you should use 'noop', not deadline. The RAID card does its own request reordering. VMs should always use noop.
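Inside the guest that is a one-line change per disk. A sketch assuming a virtio disk named vda (needs root, and it does not persist across reboots; for that, add elevator=noop to the kernel command line):

Code:
# Show the current scheduler for /dev/vda, switch it to noop, show it again.
SCHED = "/sys/block/vda/queue/scheduler"  # vda is an assumption

with open(SCHED) as f:
    print("before:", f.read().strip())

with open(SCHED, "w") as f:
    f.write("noop")

with open(SCHED) as f:
    print("after: ", f.read().strip())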
 

Enterprisevpssolutions

Article Submitter
Verified Provider
kaniini said:
Actually, if you are using a typical hardware RAID card, you should use 'noop', not deadline. The RAID card does its own request reordering. VMs should always use noop.
This also depends on the drives used and the RAID setup you have. This article can explain more about it: http://davesmisc.wordpress.com/2011/07/23/linux-io-schedulers/ I have used both noop and deadline, with pros and cons for each choice. Proxmox, I believe, sets the I/O scheduler to deadline by default, and most Linux distros set theirs to cfq.
 