
VPS Slabbing - Who Does it and Admits to It?

scv

Massive Nerd
Verified Provider
I use slabbing in production. Some providers like to do the many-small-slabs setup, which looks good on paper but really kills performance in the real world. I've found a 50/50 split is ideal if you're dealing with lots of small containers. Anything more and you start to get contention between your slabs.

Some hosts like to abuse it (I'm recalling one of Rus Foster's companies doing OpenVZ on 2GB Xen VMs a few years back), but it does have technical merits. The OpenVZ 2.6.32 kernel has been notorious for crashes, and debugging a kernel panic on bare metal is a bit of a pain in the ass, but within KVM it's much simpler: you even get the option of dumping the VM's memory, pausing execution, etc. The mobility argument is valid as well. Slabbing gives you an easier means of remote maintenance on your nodes, whether that's kernel updates or virtual hardware changes.
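To make the debugging point concrete, here's a rough sketch of pausing a slab and grabbing a memory dump through the libvirt Python bindings (the domain name vz-slab-01 and the dump path are just placeholders):

    import libvirt

    # Connect to the local QEMU/KVM hypervisor.
    conn = libvirt.open("qemu:///system")

    # Look up the slab VM by name (placeholder name).
    dom = conn.lookupByName("vz-slab-01")

    # Pause execution so guest state stops changing.
    dom.suspend()

    # Dump the guest's memory to a file for offline analysis,
    # e.g. poking at the OpenVZ kernel with the crash utility.
    dom.coreDump("/var/tmp/vz-slab-01.core", 0)

    # Resume the guest once the dump is taken.
    dom.resume()
    conn.close()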

Like drmike said, OpenVZ doesn't really scale very well with large process counts, and slabbing is a good way of getting around that limitation. There are also a handful of benefits that stem from the virtualization, namely KSM and better I/O scheduling on modern kernels. KSM gives you a few extra gigabytes of squeeze room when you're running many containers inside - it lets you get slightly more density without actually "overselling". Using a modern kernel (3.11+) on your KVM host lets the guest benefit from the host's I/O scheduling improvements, even though OpenVZ itself is still 2.6-based.
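For reference, KSM lives entirely on the KVM host and is driven through sysfs; here's a quick sketch (standard kernel paths, 4 KiB pages assumed) of turning it on and eyeballing the savings:

    # Enable KSM on the KVM host and report merge statistics,
    # using the standard sysfs interface under /sys/kernel/mm/ksm/.
    KSM = "/sys/kernel/mm/ksm"

    def write(name, value):
        with open(f"{KSM}/{name}", "w") as f:
            f.write(str(value))

    def read(name):
        with open(f"{KSM}/{name}") as f:
            return int(f.read())

    write("run", 1)  # start the KSM scanner

    # pages_sharing counts guest pages collapsed into shared copies,
    # so the saving is roughly pages_sharing * page size.
    print(f"~{read('pages_sharing') * 4096 / 2**20:.0f} MiB merged "
          f"across {read('pages_shared')} shared pages")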

edit: Added link to LET regarding Rus Foster. Turns out it was 1.47GB not 2GB per node ;)
 

drmike

100% Tier-1 Gogent
Some good responses by providers, with actually useful/real reasons to consider slabbing - versus the unsavory reasons other folks run such setups elsewhere.

Definitely a few more companies from the responses that I'd be interested in having services with, slabbing or no slabbing.
 

GoodHosting

New Member
Hello,

 

Thought I'd put my two cents into this conversation (probably necro-posting, oh well.)

 

Firstly, I would like to say that at this time we do not use any sort of slabbing in production.  We are currently playing around with slabbing on the back-end and on our development servers, and are having great results.

 

Pros:

- Mobility (much easier to migrate/resize than bare metal)

- Portability (between hypervisor, smart server, bare metal ...)

- Segmentation (ex: you __cannot__ hard limit CPU in KVM, but you can in OpenVZ...)

 

Cons:

- There are ways to oversell with slabbing (this comes down to the integrity of the provider, as "slabbing" by itself doesn't mean the provider is overselling.  Chances are, though, a provider can be overselling without even knowing it, simply by over-assigning CPU to the KVM slabs; see the sketch below.)
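On that last point, here's a rough sketch of the sanity check a provider could run - totalling assigned vCPUs against real host cores via the libvirt Python bindings (the 1:1 threshold is arbitrary):

    import libvirt

    conn = libvirt.open("qemu:///system")

    # Host topology: getInfo() returns [model, memory, cpus, ...].
    host_cpus = conn.getInfo()[2]

    # Sum vCPUs across all slabs; dom.info() returns
    # [state, maxMem, memory, nrVirtCpu, cpuTime].
    assigned = sum(dom.info()[3] for dom in conn.listAllDomains())

    print(f"{assigned} vCPUs assigned on {host_cpus} physical cores")
    if assigned > host_cpus:
        print("CPU is over-assigned; contention between slabs is possible")
    conn.close()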

 

---

 

For us, it boils down to the flat fact that Windows VPSes take far too much idle CPU and spike far too high (100% per core, regardless of "shares" or "quotas" set in cgroups).  There is NO WAY WHATSOEVER to hard limit CPU usage in a KVM-hypervisor powered environment.
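For context, the "quotas" being referred to are presumably the CFS bandwidth knobs; a sketch of what that attempt looks like, capping a Windows slab's cgroup to one core's worth of runtime (the cgroup path is an assumption and varies by distro and libvirt version):

    # Cap a slab's cgroup to one core of CPU time with the cgroup-v1
    # CFS bandwidth controls. The path below is an assumed libvirt-style
    # cgroup name; adjust for your setup.
    CG = "/sys/fs/cgroup/cpu/machine/win01.libvirt-qemu"

    def set_knob(name, value):
        with open(f"{CG}/{name}", "w") as f:
            f.write(str(value))

    set_knob("cpu.cfs_period_us", 100000)  # 100 ms accounting period
    set_knob("cpu.cfs_quota_us", 100000)   # 100 ms runtime/period = 1 core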

 

However, if you shove your micro KVM containers inside an OpenVZ container, and let them fight over their share of a properly quota'd and pinned cpuset (on the VZ 'host'), then you get fewer problems, more space per node (density), and less downtime due to I/O or other bottlenecks for your clients.
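A rough sketch of the "properly quota'd and pinned cpuset" half of that, done directly against the cpuset cgroup controller on the VZ host (the group name, core list, and PID are all placeholders):

    import os

    # Pin a slab to a fixed set of cores and one NUMA node via the
    # cgroup-v1 cpuset controller. All names/values are placeholders.
    CG = "/sys/fs/cgroup/cpuset/vz-slab-01"
    os.makedirs(CG, exist_ok=True)

    def set_knob(name, value):
        with open(f"{CG}/{name}", "w") as f:
            f.write(value)

    set_knob("cpuset.cpus", "0-3")  # four dedicated cores
    set_knob("cpuset.mems", "0")    # keep memory on NUMA node 0
    set_knob("tasks", "12345")      # move the slab's main PID in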

 

---

 

If KVM would fix the above, we wouldn't even have to look into slabbing, as our system already uses large pools of decentralized storage, and we PXE boot all our nodes off an iSCSI target (SAN/iSCSI HBA via initiator).
 

drmike

100% Tier-1 Gogent
Glad to hear more stories, so this concept and its downsides are understood.  Plus it shows openness from some companies... ehhh, transparency...  I applaud the providers who say, Yep, we do.
 

iWF-Jacob

New Member
Verified Provider
I don't really like the idea of slabbing. To me it almost seems morally wrong, not sure why... Perhaps it's almost as if you are "cheating" in terms of what is available and the size and scope of the company.

I would say the closest we get to slabbing - though it certainly isn't slabbing - is running our shared hosting servers as containers on a hypervisor. It actually works pretty great, and allows for quicker scaling, migrations, etc. In addition, it always helps with those license costs!
 

willie

Active Member
GoodHosting said:
For us, it boils down to the flat fact that Windows VPSes take far too much idle CPU and spike far too high (100% per core, regardless of "shares" or "quotas" set in cgroups).  There is NO WAY WHATSOEVER to hard limit CPU usage in a KVM-hypervisor powered environment.

However, if you shove your micro KVM containers inside an OpenVZ container, and let them fight over their share of a properly quota'd and pinned cpuset (on the VZ 'host'), then you get fewer problems, more space per node (density), and less downtime due to I/O or other bottlenecks for your clients.
Oh, that's interesting about CPU limiting.  I wonder if that's why the large cloud places use Xen.

I had been wondering for a while whether it's possible to run KVM under OpenVZ.  Since a KVM guest is essentially a kernel running inside a user process (QEMU), it only makes sense.  Is that what's going on?
 

Magiobiwan

Insert Witty Statement Here
Verified Provider
Given that KVM is basically QEMU with kernel acceleration (QEMU enhanced by KVM), and since QEMU runs in userland, it's *possible* to run KVM under OpenVZ. I know it's possible because some clients constantly try to run Windows XP in QEMU. Now, SHOULD you? Oh dear god no. The performance would likely be trash. Run KVM directly on the host node instead; you'll get MUCH better performance.
 

kaniini

Beware the bunny-rabbit!
Verified Provider
Magiobiwan said:
Given that KVM is basically QEMU with kernel acceleration (QEMU enhanced by KVM), and since QEMU runs in userland, it's *possible* to run KVM under OpenVZ. I know it's possible because some clients constantly try to run Windows XP in QEMU. Now, SHOULD you? Oh dear god no. The performance would likely be trash. Run KVM directly on the host node instead; you'll get MUCH better performance.
It's not possible, unless you give the container access to /dev/kvm.
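For anyone determined to try it anyway, the usual route is vzctl set <VEID> --devnodes kvm:rw --save on the hardware node; a tiny sketch to confirm the device is actually usable from inside the container:

    import os

    # Inside the container: /dev/kvm must exist and be read/write
    # accessible before QEMU can use KVM hardware acceleration.
    path = "/dev/kvm"
    if os.path.exists(path) and os.access(path, os.R_OK | os.W_OK):
        print("KVM device available; QEMU can use hardware acceleration")
    else:
        print("no usable /dev/kvm; QEMU falls back to pure emulation (slow)")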
 