Hello,
Thought I'd put my two cents into this conversation (probably necro-posting, oh well.)
Firstly, I would like to say that at this time we do not use any sort of slabbing in production. We are currently experimenting with slabbing on the back end and on our development servers, and are seeing great results.
Pros:
- Mobility (much easier to migrate/resize than bare metal)
- Portability (between hypervisor, smart server, bare metal ...)
- Resource segmentation (ex: you __cannot__ hard limit CPU in KVM, but you can in OpenVZ...)
Cons:
- There are still ways to oversell with slabbing. This comes down to the integrity of the provider; "slabbing" by itself doesn't mean the provider is overselling. Chances are, though, a provider can be overselling without even knowing it, simply by over-assigning CPU to its KVM servers from the start.
---
For us, it boils down to the flat fact that Windows VPSes consume far too much CPU even at idle, and spike far too high (100% per core, regardless of any "shares" or "quotas" set in cgroups). There is NO WAY WHATSOEVER to hard limit CPU usage in a KVM-hypervisor powered environment.
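For reference, these are the cgroup-v1 knobs being talked about. A minimal sketch, written against a scratch directory so it runs without root — the `qemu-vm1` name and layout are made up, the real files would live under the guest's group in `/sys/fs/cgroup/cpu`:

```shell
# Scratch stand-in for /sys/fs/cgroup/cpu/<guest> (hypothetical layout).
CG="$(mktemp -d)/qemu-vm1"
mkdir -p "$CG"

# cpu.shares is only a *relative weight* (default 1024): it matters under
# contention, so a single spiking guest can still burn 100% of a core.
echo 512 > "$CG/cpu.shares"

# cpu.cfs_quota_us / cpu.cfs_period_us: 50000/100000 would mean "at most
# half a core per period" -- the quota knob that, per the experience
# above, did not reliably contain the Windows spikes in practice.
echo 100000 > "$CG/cpu.cfs_period_us"
echo 50000  > "$CG/cpu.cfs_quota_us"

cat "$CG/cpu.shares" "$CG/cpu.cfs_quota_us"
```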
However, if you shove your micro KVM guests inside an OpenVZ container, and let them fight over their share of a properly quota'd and pinned cpuset (on the VZ 'host'), then you get fewer problems, more space per node (density), and less downtime from I/O and other bottlenecks for your clients.
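On the OpenVZ side, those hard caps are set per container with vzctl. A hedged sketch with a hypothetical container ID 101 — `--cpulimit` and `--cpus` are standard vzctl options, `--cpumask` only exists in newer versions, so check yours:

```shell
# Hypothetical CT 101: hard-cap and pin CPU -- the part KVM alone won't do.
vzctl set 101 --cpulimit 50 --save   # hard cap: at most 50% of one core
vzctl set 101 --cpus 2 --save        # expose only 2 logical CPUs
vzctl set 101 --cpumask 0-3 --save   # pin to cores 0-3 (newer vzctl only)
```

The KVM guests slabbed inside CT 101 then can't spike past whatever the VZ host hands the container, no matter what they do internally.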
---
If KVM fixed the above, we wouldn't even have to look into slabbing, as our system already uses large pools of decentralized storage, and we PXE boot all our nodes off an iSCSI target (SAN/iSCSI HBA via initiator).