# OpenVZ IO Limits!



## SkylarM (Oct 30, 2013)

Interesting read. Will be a while before it's stable and all that, but has some interesting benefits.

http://blog.openvz.org/45831.html - *per-container disk I/O bandwidth and IOPS limiting.*



> I/O bandwidth limiting was introduced in Parallels Cloud Server, and as of today is available in OpenVZ as well. Using the feature is very easy: you set a limit for a container (in megabytes per second), and watch it obeying the limit.


As well as IOPS limits. If used properly, I think this could be a solid opportunity to keep disk performance more consistent across the board and limit what abusers would potentially be able to do. Looking forward to seeing this in action once it's stable. Basically it takes the existing I/O priority system a step further, which could provide some solid benefits.
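Per the blog post, both limits are set per container through vzctl. A minimal sketch, assuming an OpenVZ node with the new kernel (CTID 101 and the values are illustrative, not from the thread):

```
# Cap container 101 at 10 MB/s of disk bandwidth and 500 IOPS
# (CTID and values are illustrative)
vzctl set 101 --iolimit 10M --save
vzctl set 101 --iopslimit 500 --save

# Setting a limit back to 0 removes it
vzctl set 101 --iolimit 0 --save
vzctl set 101 --iopslimit 0 --save
```

The `--save` flag persists the setting in the container's config so it survives a restart.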


----------



## Francisco (Oct 30, 2013)

The I/O priority settings are garbage since they *require* CFQ. CFQ is such a terrible scheduler to use on a node.
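Francisco's point can be checked directly: I/O priorities (vzctl's `--ioprio`, or `ionice` generally) are only honored when CFQ is the active scheduler on the device. A quick sketch for seeing what a node is actually running (standard Linux sysfs paths; the PID in the comment is hypothetical):

```shell
# Show the active I/O scheduler for each block device; the name in
# [brackets] is the one in use. ioprio/ionice only matter under CFQ.
found=""
for q in /sys/block/*/queue/scheduler; do
  if [ -r "$q" ]; then
    printf '%s: %s\n' "$q" "$(cat "$q")"
    found=yes
  fi
done
: "${found:=none}"   # "none" if no scheduler files are readable (e.g. inside a container)

# To drop a process to the lowest best-effort priority under CFQ
# (PID 1234 is hypothetical):
#   ionice -c 2 -n 7 -p 1234
```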

Francisco


----------



## jarland (Oct 30, 2013)

Step in the right direction. Pretty soon 1000% overselling will be sustainable!


I kid


----------



## earl (Oct 30, 2013)

If your board has a few PCI slots, couldn't you get a couple of 4-port SATA controllers and do an individual RAID 10 on each of them? Speed might not be great, but at least each RAID set is isolated? If you put in 2x 4-port SATA controllers on top of the onboard ports, that's around 12 disks you can have in the box..


----------



## Magiobiwan (Oct 30, 2013)

That's assuming you have ROOM for that many drives. And if all you got were simple SATA controllers and not RAID cards, you'd have to do software RAID for the arrays. I think a SAS HW RAID controller with as many SAS drives as you can fit on it in RAID 10 is the best bet. More drives in the array generally means better IOPS.


----------



## earl (Oct 30, 2013)

Yeah, it would be software RAID.. and while having one big RAID 10 might give good performance, I was thinking more along the lines that if you were to oversell, would it not be best to have isolated RAID configs, say three separate RAID 10 arrays..

Have not played around with SolusVM so not sure how it works, but in Proxmox you can create the VM on whichever of the available disks you choose, so with two disks, when someone is doing a dd test on say disk 1, the VPSes on disk 2 aren't affected, well you get the idea..

You know, SATA controllers are around $30 each or less vs say a decent RAID card that probably goes for $300 and up.. two of those is already $600 plus the cost of the server, so not sure you would get your ROI.
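earl's split-array layout would be built with mdadm under Linux software RAID; a rough sketch, assuming four disks per controller (all device names below are hypothetical):

```
# Two isolated 4-disk RAID 10 arrays, one per SATA controller
# (/dev/sd[b-i] are hypothetical device names -- adjust to your hardware)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```

A dd run hammering /dev/md0 then contends only with tenants on that array, which is the isolation being described.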


----------



## dcdan (Oct 30, 2013)

The folks who like dd tests will be disappointed...


----------



## earl (Oct 30, 2013)

@dcdan

Well, from what I've noticed, especially on LET, if the price is low enough even a horrible dd result can be forgiven..


----------



## mitsuhashi (Oct 30, 2013)

Is this going to be something like I/O limiting on CloudLinux shared hosts? Because that sucks donkey balls and worse for the customer. I'd much rather have a non-limited box where the provider is proactive in kicking out real abusers and letting normal users get on with their lives.

Me, to SSD shared provider: Your I/O seems to be slow. Installs are slow and my pages load slowly. Are you really on SSD?

Provider: We use CloudLinux to limit I/O on shared accounts to 50MB/s, which is 50 times more than other providers let you use. If you need more than that, you must be doing something nasty on my server.

......


----------



## Francisco (Oct 31, 2013)

If it really is pure SSD then 50MB/sec should be more than enough for each user.

It's always possible that it's SSD-cached, or simply bullshit.

Francisco


----------



## mitsuhashi (Oct 31, 2013)

Francisco said:


> simply bullshit


This was my opinion as well. Couldn't get him to admit it, though.


----------



## earl (Oct 31, 2013)

Why advertise SSD when you only get 50MB/s.. the $15/yr VPS I have with RamNode got 1.2GB/s! And that's only cached SSD.. I'm sure it may have been the cache doing its job, and in reality even 1/4 of that speed would be fine, but it still looks nice..


----------



## Nick_A (Oct 31, 2013)

earl said:


> Why advertise SSD when you only get 50MB/s.. the $15/yr VPS I have with RamNode got 1.2GB/s! And that's only cached SSD.. I'm sure it may have been the cache doing its job, and in reality even 1/4 of that speed would be fine, but it still looks nice..


Pretty sure that's a record on our cached nodes ha...


----------



## earl (Oct 31, 2013)

Nick_A said:


> Pretty sure that's a record on our cached nodes ha...


While that DD does look pretty, what really gets me going is that 1Gbps port, it's very niceee..lol.


----------



## Magiobiwan (Oct 31, 2013)

dd doesn't matter much to me anymore. It's all about ioping and IOPS, as THAT is what will determine your "real life" performance.
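For comparison, the two kinds of test being contrasted look something like this (the ioping invocations are shown as comments since the package may not be installed; paths are illustrative):

```shell
# The classic sequential-write "dd test" debated above -- it measures
# streaming bandwidth only, which an SSD cache can easily inflate.
TMPFILE=$(mktemp)
dd if=/dev/zero of="$TMPFILE" bs=1M count=16 conv=fdatasync 2>&1 | tail -n1

# Latency/IOPS view, closer to "real life" load (requires the ioping
# package; device path is hypothetical):
#   ioping -c 10 .        # per-request latency on the current filesystem
#   ioping -R /dev/sda    # seek-rate (IOPS) test, run as root
```

`conv=fdatasync` forces dd to flush to disk before reporting, which makes the bandwidth number slightly more honest.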


----------

