# KVM - lower Disk IO in VM than on Host itself. Why?



## Amitz (Aug 2, 2013)

Dear all,

a friend of mine is running KVM virtualization on his server and told me that unpacking a large archive took 9s on the host system itself and 49s in a KVM guest (12GB RAM, virtio, 3 CPUs) on that same host. That seems strange to me, and I remember from previous discussions that I have read here (and earlier on LET) that there were some tweaks to speed up disk IO on KVM machines. Do you have any hints for me that I could forward to my friend to help him out?

Thank you very much in advance!

Kind regards

Amitz


----------



## peterw (Aug 2, 2013)

Switch to virtio drivers.
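
A quick way to verify this from inside the guest (a sketch, not specific to any one distro): virtio-blk disks normally show up as `/dev/vda`, `/dev/vdb`, etc., while emulated IDE/SATA disks appear as `/dev/sda` or `/dev/hda`.

```shell
# Rough check for virtio inside the guest: virtio-blk devices are
# named /dev/vd*, emulated IDE/SATA devices /dev/sd* or /dev/hd*.
if ls /dev/vd[a-z] >/dev/null 2>&1; then
    bus="virtio"
else
    bus="emulated (IDE/SATA), or no local disks visible"
fi
echo "guest disk bus looks like: $bus"
```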


----------



## Amitz (Aug 2, 2013)

Thanks, but as said, virtio is enabled...


----------



## Maximum_VPS (Aug 2, 2013)

Was the KVM LVM group setup within the HOST LVM group?
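
One way to check for this (a sketch; `vgs`/`lvs` need root and the LVM2 tools installed): run these on the host to see whether the guest's disk is an LV, and then again inside the guest to see whether LVM is layered on top a second time, since LVM-on-LVM adds an extra device-mapper layer.

```shell
# List volume groups and logical volumes; run on the host and inside
# the guest. If both show LVM, the guest is stacking LVM on LVM.
vg_out=$(vgs 2>/dev/null || echo "vgs unavailable (not root, or no LVM tools)")
lv_out=$(lvs 2>/dev/null || echo "lvs unavailable (not root, or no LVM tools)")
echo "$vg_out"
echo "$lv_out"
```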


----------



## Amitz (Aug 2, 2013)

@Maximum_VPS:

I will investigate and tell you!


----------



## splitice (Aug 3, 2013)

If the compression is high, it could have something to do with CPU overhead, but even so it shouldn't be that different. Is it a tar.gz or similar archive, or something strange?
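
One way to separate decompression CPU cost from disk writes (a self-contained sketch using a throwaway archive under `/tmp/ioprobe`, which is just an example path): time the decompression streamed to `/dev/null` (CPU only), then time a real extraction (CPU plus disk IO). If the second number blows up only in the guest, the problem is the IO path, not the CPU.

```shell
# Build a small test archive, then compare CPU-only decompression
# against a full extraction that actually hits the disk.
mkdir -p /tmp/ioprobe/src /tmp/ioprobe/out
head -c 1048576 /dev/urandom > /tmp/ioprobe/src/blob
tar czf /tmp/ioprobe/a.tar.gz -C /tmp/ioprobe/src .

time gzip -dc /tmp/ioprobe/a.tar.gz > /dev/null          # CPU only
time tar xzf /tmp/ioprobe/a.tar.gz -C /tmp/ioprobe/out   # CPU + disk writes
```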


----------



## Enterprisevpssolutions (Aug 3, 2013)

What type of platform are you using?

What is the storage type used for the VM: virtio, SATA, IDE, SCSI?

What do you have the I/O scheduler set to? If it's RAID with a BBU and writeback caching, make sure you set it to deadline on the host as well as in the VM.

Did you align the partitions on the storage that the VM is on? What are your read/write speeds on the host and in the VM?

What kind of array do you have for the host and the VMs? RAID 1, 5, 10, 50, JBOD?
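
To gather the scheduler and read/write numbers the post asks for, something like this works on both host and guest (a sketch; `/tmp/ddtest` and the 64MB size are arbitrary examples, and `conv=fdatasync` forces the data to disk so the reported speed isn't just the page cache):

```shell
# Show the active IO scheduler for each disk (the bracketed entry
# in the output, e.g. "noop deadline [cfq]").
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] && echo "$f: $(cat "$f")"
done

# Crude sequential write test; run the same command on host and guest
# and compare the MB/s figures.
dd_out=$(dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "write test: $dd_out"
rm -f /tmp/ddtest
```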


----------



## splitice (Aug 3, 2013)

Alignment issues could also be a factor; I believe these are logged to kern.log/dmesg as well.
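
A quick way to look for both signs (a sketch; `dmesg` may need root, and the exact warning text varies by kernel): partition start sectors that are multiples of 2048 (1MiB) are aligned for the common 4K-sector and RAID-stripe cases.

```shell
# Print partition start sectors (multiples of 2048 = 1MiB-aligned),
# then grep the kernel log for alignment complaints.
starts=$(cat /sys/block/*/*/start 2>/dev/null)
echo "partition start sectors:"
echo "${starts:-none found}"

warn=$(dmesg 2>/dev/null | grep -i "not aligned" || echo "no alignment warnings found")
echo "$warn"
```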


----------



## kaniini (Aug 4, 2013)

Enterprisevpssolutions said:


> What type of platform are you using?
> 
> What is the storage type used for the vm virto,sata,ide,scsi
> 
> ...


Actually, if you are using a typical hardware RAID card, you should use 'noop', not deadline.  The RAID card does its own request reordering.  VMs should _always_ use noop.
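
Setting it looks like this (a sketch; `vda` is a hypothetical device name, substitute the guest's actual disk, and writing to sysfs needs root):

```shell
dev=vda   # hypothetical device name -- substitute your guest's disk
if [ -w "/sys/block/$dev/queue/scheduler" ]; then
    echo noop > "/sys/block/$dev/queue/scheduler"
    result="noop set on $dev"
else
    result="cannot set scheduler for $dev (not root, or device absent)"
fi
echo "$result"

# To make it persistent, add elevator=noop to the kernel command line
# (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub, then update-grub).
```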


----------



## Enterprisevpssolutions (Aug 9, 2013)

kaniini said:


> Actually, if you are using a typical hardware RAID card, you should use 'noop', not deadline.  The RAID card does it's own request reordering.  VMs should _always_ use noop.


This also depends on the drives used and the RAID setup you have. This article explains more about it: http://davesmisc.wordpress.com/2011/07/23/linux-io-schedulers/ I have used both noop and deadline, with pluses and minuses for each. Proxmox, I believe, sets the I/O scheduler to deadline by default, and most Linux distros set theirs to cfq.


----------

