
OpenVZ IO Limits!

SkylarM

Well-Known Member
Verified Provider
Interesting read. It will be a while before it's stable and all that, but it has some nice benefits.

http://blog.openvz.org/45831.html - per-container disk I/O bandwidth and IOPS limiting.

I/O bandwidth limiting was introduced in Parallels Cloud Server, and as of today is available in OpenVZ as well. Using the feature is very easy: you set a limit for a container (in megabytes per second), and watch it obeying the limit.
There are IOPS limits as well. Used properly, I think this could be a solid way to keep disk performance more consistent across the board and limit what abusers can do. It basically takes the existing I/O priority system a step further. Looking forward to seeing this in action once it's stable.
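For reference, the limits are set per container with vzctl. A minimal sketch; CTID 101 and the values are just examples, and the flags need a new enough vzctl and kernel:

Code:
# Cap container 101 at 10 MB/s of disk bandwidth and 500 IOPS, and save to its config.
vzctl set 101 --iolimit 10M --save
vzctl set 101 --iopslimit 500 --save

# Setting both back to 0 removes the limits.
vzctl set 101 --iolimit 0 --iopslimit 0 --save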
 

Francisco

Company Lube
Verified Provider
The I/O priority settings are garbage since they require CFQ. CFQ is such a terrible scheduler to use on a node.
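For anyone wondering, here's a quick way to check (and switch) the scheduler per block device. sda is just an example, and the echo only lasts until reboot:

Code:
# The scheduler in [brackets] is the active one, e.g. "noop deadline [cfq]".
cat /sys/block/sda/queue/scheduler

# Switch to deadline at runtime; use elevator=deadline on the kernel command
# line (or a udev rule) to make it persistent.
echo deadline > /sys/block/sda/queue/scheduler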

Francisco
 

earl

Active Member
If your board has a few PCI slots, couldn't you get a couple of 4-port SATA controllers and do individual RAID 10 on each of them? Speed might not be great, but at least each RAID set is isolated. With two 4-port SATA controllers plus the onboard ports, that's 12 disks you could run.
 

Magiobiwan

Insert Witty Statement Here
Verified Provider
That's assuming you have ROOM for that many drives. And if all you've got are simple SATA controllers rather than RAID cards, you'd have to do software RAID for the arrays. I think a SAS hardware RAID controller with as many SAS drives as you can fit on it in RAID 10 is the best bet. The more drives in the array, the better the IOPS.
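Rough back-of-the-envelope math on that, assuming something like 150 random IOPS per 10k RPM SAS spindle (my assumption, not a measured figure):

Code:
# RAID 10 with N drives: random reads can hit all N spindles,
# random writes effectively hit N/2 mirror pairs.
N=8; PER_DRIVE=150
echo "read IOPS  ~ $((N * PER_DRIVE))"      # ~1200
echo "write IOPS ~ $((N / 2 * PER_DRIVE))"  # ~600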
 

earl

Active Member
Yeah, it would be software RAID. And while one big RAID 10 might give good performance, I was thinking more along the lines of: if you were going to oversell, wouldn't it be best to have isolated RAID configs, say three separate RAID 10 arrays?

I haven't played around with SolusVM so I'm not sure how it works there, but in Proxmox you can create each VM on whichever disk you have available. With two disks, when someone runs a dd test on disk 1, the VPSes on disk 2 aren't affected. You get the idea.

SATA controllers are around $30 each or less, versus a decent RAID card that probably goes for $300 and up. Two of those is already $600, plus the cost of the server, so I'm not sure you'd get your ROI.
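For what it's worth, a minimal mdadm sketch of the isolated-arrays idea, assuming the eight disks show up as /dev/sdb through /dev/sdi (device names, chunk layout, and the mdadm.conf path vary by setup):

Code:
# Two independent 4-disk RAID 10 arrays instead of one big 8-disk array,
# so heavy I/O against md0 never touches the spindles behind md1.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Record the arrays so they assemble on boot (/etc/mdadm/mdadm.conf on Debian/Ubuntu).
mdadm --detail --scan >> /etc/mdadm.conf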
 

earl

Active Member
@dcdan

Well, from what I've noticed, especially on LET, if the price is low enough even horrible dd results can be forgiven.
 
Is this going to be something like I/O limiting on CloudLinux shared hosts? Because that sucks donkey balls and worse for the customer. I'd much rather have a non-limited box where the provider is proactive in kicking out real abusers and letting normal users get on with their lives.

Me, to SSD shared provider: Your I/O seems to be slow. Installs are slow and my pages load slow. Are you really on SSD?

Provider: We use CloudLinux to limit I/O on shared accounts to 50MB/s, which is 50 times more than other providers let you use. If you need more than that, you must be doing something nasty on my server.

......
 

Francisco

Company Lube
Verified Provider
If it really is pure SSD then 50MB/sec should be more than enough for each user.

It's always possible that it's SSD cached, or, simply bullshit :)

Francisco
 

earl

Active Member
Why advertise SSD when you only get 50MB/s? The $15/yr VPS I have with RamNode got 1.2GB/s, and that's only cached SSD. I'm sure it may have been the cache doing its job, and in reality even a quarter of that speed would be fine, but it still looks nice.
 

Nick_A

Provider of the year (2014)
Why advertise SSD when you only get 50MB/s? The $15/yr VPS I have with RamNode got 1.2GB/s, and that's only cached SSD. I'm sure it may have been the cache doing its job, and in reality even a quarter of that speed would be fine, but it still looks nice.
Pretty sure that's a record on our cached nodes ha...
 

Magiobiwan

Insert Witty Statement Here
Verified Provider
dd doesn't matter much to me anymore. It's all about ioping and IOPS, as THAT is what will determine your "real life" performance. 
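For anyone comparing numbers, the usual quick checks from inside a VPS look something like this (the dd figure is easily inflated by caching, which is exactly why the ioping latency/IOPS view is the better signal):

Code:
# Latency: 10 requests against the filesystem under the current directory.
ioping -c 10 .

# Seek-rate / IOPS test using direct I/O, bypassing the page cache.
ioping -R -D .

# The classic dd "benchmark"; conv=fdatasync at least forces the data to disk
# before dd reports a speed.
dd if=/dev/zero of=test.img bs=64k count=16k conv=fdatasync && rm -f test.img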
 