# VPS Slabbing - Who Does it and Admits to It?



## drmike (Dec 22, 2013)

My latest boo-ya-kah is looking around to see who is nesting virtualization within virtualization - aka SLABBING.

We have more honest providers around here.  Who confesses to slabbing?

What are the many legitimate uses of such and what are the unsavory reasons for slabbing (other than inflating server count e-penis)?


----------



## Amitz (Dec 22, 2013)

You mean in a sense of "confess or get exposed"?


----------



## drmike (Dec 22, 2013)

Amitz said:


> You mean in a sense of "confess or get exposed"?


It's Sunday and the holiday season.  Unless you roll around in CC's bed, we won't be outing anyone.

Certainly some folks want to say, yeah we do it and defend the practice.  Right?

United Server Slabbers Union 100.


----------



## WebSearchingPro (Dec 22, 2013)

I slab in private at home... 

Edit: All you can do when you have a ... windows machine...


----------



## drmike (Dec 22, 2013)

Oh you call that slabbing @WebSearchingPro?


----------



## Reece-DM (Dec 22, 2013)

Quite an interesting approach to hosting - I can see the benefits of it, but it isn't something I've put into action.


----------



## KuJoe (Dec 22, 2013)

My test OpenVZ nodes are actually a KVM VPS and a VMware ESXi VM. With my personal hatred for KVM I would never want to run our OpenVZ nodes on KVM, and ESXi is too expensive to be cost effective for us to even try.


----------



## Virtovo (Dec 22, 2013)

drmike said:


> My latest boo-ya-kah is looking around to see who is nesting virtualization within virtualization - aka SLABBING.
> 
> We have more honest providers around here.  Who confesses to slabbing?
> 
> What are the many legitimate uses of such and what are the unsavory reasons for slabbing (other than inflating server count e-penis)?


I'd guess one legitimate reason for slabbing is to build your base infrastructure with a single virtualisation type and then float others on top of it.  That makes it easier going forward to adjust your deployment to market trends.  OpenVZ subs dropping?  Simply resize the OpenVZ instance to make more room for KVM services.


----------



## MannDude (Dec 22, 2013)

Is there any benefit other than increasing perceived node count? Isn't it also used to bypass some OpenVZ limit on large nodes?

Explain it to me like I'm 5.


----------



## DomainBop (Dec 22, 2013)

The definitive list of providers who slab and publicly state they do:

1. Ginernet (OpenVZ inside KVM)  confession

2. OVH, "Classic" and "Low Latency" VPS lines, (OpenVZ inside VMWare)  confession (see what virtualization layer FAQ...)

Nobody else admits to doing it. @Moderator, this topic may now be closed.


----------



## drmike (Dec 22, 2013)

MannDude said:


> it also used to bypass some OpenVZ limit on large nodes?


My understanding is that much above 5k processes, OpenVZ performance goes south quickly.  So nodes are commonly slabbed to work around this with multiple process pools.

Stories about some of these slab workarounds involve rather puny servers (e.g. 32GB of RAM) running upwards of four 7-8GB slabs.

It certainly does blow up your public "server" count.  So long as nobody pays attention and notices 4 nodes down at the same time, every time, the game goes on.

I can see slabbing being necessary to facilitate the very small plans (i.e. 128MB and less) in any real quantity.


----------



## KuJoe (Dec 22, 2013)

I think one of the biggest benefits of "slabbing" is mobility. For example: at one point we had a cPanel server that was an OpenVZ VPS. When it outgrew the node it was on, I created a KVM VPS with more resources and just did a vzmigrate of the cPanel OpenVZ VPS from the OpenVZ node to the KVM VPS. KVM, Xen, and VMware allow for similar migrations (although not a simple one-liner), so having an OpenVZ node on one of them might make migrations easier if, say, you need to replace a stick of RAM or you want to upgrade the hardware without downtime. You can just move a single VPS instead of all of the clients hosted on it.
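A minimal sketch of the kind of move described here, assuming a hypothetical container ID (101) and destination host (`kvm-slab.example.com` - both made up). `vzmigrate` only exists on OpenVZ hosts, so the sketch checks for it first:

```shell
# Hypothetical CTID and destination: push one container from the old
# hardware node into a roomier KVM guest that itself runs OpenVZ.
# --online keeps the container running while it is copied over.
ctid=101
target=kvm-slab.example.com
if command -v vzmigrate >/dev/null 2>&1; then
    vzmigrate --online "$target" "$ctid"
else
    echo "vzmigrate not found - this only runs on an OpenVZ host"
fi
```

The point being that only the one container moves; every other client on the source node stays put.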


----------



## SkylarM (Dec 22, 2013)

KuJoe said:


> I think one of the biggest benefits of "slabbing" is mobility. For example: at one point we had a cPanel server that was an OpenVZ VPS. When it outgrew the node it was on, I created a KVM VPS with more resources and just did a vzmigrate of the cPanel OpenVZ VPS from the OpenVZ node to the KVM VPS. KVM, Xen, and VMware allow for similar migrations (although not a simple one-liner), so having an OpenVZ node on one of them might make migrations easier if, say, you need to replace a stick of RAM or you want to upgrade the hardware without downtime. You can just move a single VPS instead of all of the clients hosted on it.


Migrating a "slab" wouldn't necessarily be any faster than doing a mass OpenVZ migrate via a script that migrates each and every VPS in an automated fashion (we upgraded about 200 clients/containers on Friday, entirely automated).



drmike said:


> My understanding is that much above 5k processes, OpenVZ performance goes south quickly.  So nodes are commonly slabbed to work around this with multiple process pools.
> 
> Stories about some of these slab workarounds involve rather puny servers (e.g. 32GB of RAM) running upwards of four 7-8GB slabs.
> 
> ...


I'm not entirely sure how true this is. The limit can't be 5k; our newer and larger servers at capacity are over this "limit" you speak of.

If there IS a process limit and we simply haven't hit it yet, then offering small RAM packages (BlueVM did 96MB packages recently iirc?) would make sense in a slabbed setup, but it's still rather misleading as far as a company boasting about total node count. 

If there is a magical process limit for OpenVZ, then slabbing for these smaller packages makes sense, and MAYBE for larger systems like dual E5's with 128GB+ memory, but it would have zero place on lower-end systems such as E3's with 32GB of memory.

Edit:

This is direct from the OpenVZ site:

> There is a restriction on the total number of processes in the system. More than about 16000 processes start to cause poor responsiveness of the system, worsening when the number grows. Total number of processes exceeding 32000 is very likely to cause hang of the system.
> 
> Note that in practice the number of processes is usually less. Each process consumes some memory, and the available memory and the "low memory" (see “Low memory”) limit the number of processes to lower values. With typical processes, it is normal to be able to run only up to 8000 processes in a system.

http://openvz.org/UBC_primary_parameters

In short, even with silly overselling, you'll hit CPU limitations on E3's in particular long before you hit a process limit (we're nowhere close on our new nodes, and these hold roughly 2-3x what our old nodes are capable of holding).
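A rough way to see where a node sits relative to the numbers quoted above (a sketch; the ~8000 figure is just the OpenVZ guideline, not a hard limit):

```shell
# Count live processes by listing the numeric entries in /proc,
# then compare against the ~8k practical ceiling from the OpenVZ docs.
count=$(ls -d /proc/[0-9]* 2>/dev/null | wc -l)
echo "processes: $count"
if [ "$count" -gt 8000 ]; then
    echo "past the ~8k practical ceiling - splitting into slabs may help"
else
    echo "nowhere near the process limit - CPU will bottleneck first"
fi
```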


----------



## jarland (Dec 22, 2013)

I think one benefit is less impact from a kernel panic on one "node." It depends on your reason for splitting up a node. If it's to inflate numbers and oversell to a stupid degree while making the end user think it's different physical nodes... I might judge you a bit. But if it's to minimize the impact of kernel issues and reboots, and you're willing to take the hit to resource overhead, I don't mind it.


----------



## Echelon (Dec 23, 2013)

'Slabbing' with the purpose of ease of management isn't necessarily bad. 'Slabbing' with the purpose of deception to the client is a headache for everyone involved. Even then, I find that 'Slabbing' is not worth the headaches it can draw up, since you're then having to dig into multiple virtualization methods to try and find issues that can occur over time.

Simply my two cents on the matter


----------



## KS_Phillip (Dec 23, 2013)

We do this for all of our OpenVZ stuff, whether it's directly leased OpenVZ containers or game servers on our Flexible Gaming brand.  It just keeps management and segmentation that much simpler.


----------



## XLvps (Dec 23, 2013)

But for each slab wouldn't you still have to pay for a Control Panel instance?  SolusVM, Virtualizor, etc


----------



## Hxxx (Dec 23, 2013)

XLvps said:


> But for each slab wouldn't you still have to pay for a Control Panel instance?  SolusVM, Virtualizor, etc


Exactly...


----------



## Virtovo (Dec 23, 2013)

XLvps said:


> But for each slab wouldn't you still have to pay for a Control Panel instance?  SolusVM, Virtualizor, etc


Yes, although depending on your reasons for slabbing the cost may be negligible: a 1-2 VPS slave licence for Solus is $2, on top of the standard $10 slave licence.


----------



## WebSearchingPro (Dec 23, 2013)

Virtovo said:


> Yes, although depending on your reasons for slabbing the cost may be negligible: a 1-2 VPS slave licence for Solus is $2, on top of the standard $10 slave licence.


I think it's unreasonable to think of it that way; if you put 32 VPS on a node (32 x 1GB VPS), that would be quite a few micro/mini licenses.


----------



## Virtovo (Dec 23, 2013)

WebSearchingPro said:


> I think it's unreasonable to think of it that way; if you put 32 VPS on a node (32 x 1GB VPS), that would be quite a few micro/mini licenses.


It's $12 per node.  A $2 increase over the normal node pricing of $10.


----------



## WebSearchingPro (Dec 23, 2013)

Virtovo said:


> It's $12 per node.  A $2 increase over the normal node pricing of $10.


Still more expensive, assuming there's absolutely no overselling and dedicated resources.


----------



## scv (Dec 23, 2013)

I use slabbing in production. Some providers like to do the multiple small slab setup which looks good on paper but really kills performance in the real world. I've found a 50/50 split is ideal if you're dealing with lots of small containers. Anything more and you start to get contention between your slabs.

Some hosts like to abuse it (I'm recalling one of Rus Foster's companies doing OpenVZ on 2GB Xen VMs a few years back) but it does have technical merits. OpenVZ 2.6.32 has been notorious for crashes and debugging a kernel panic on bare metal is a bit of a pain in the ass, but within KVM it's much simpler. You even get the option of dumping the VM's memory, pausing execution, etc etc. The argument of mobility is valid as well. Slabbing provides an easier means to remote maintenance of your nodes whether it be kernel updates or virtual hardware changes.

Like drmike said OpenVZ doesn't really scale very well with large process counts, and slabbing is a good way of getting around that limitation. There are also a handful of benefits that stem from the virtualization, namely KSM and better I/O scheduling on modern kernels. KSM gives you a few extra gigs squeeze room when you're running many containers inside - lets you get slightly more density without actually "overselling". Using a modern kernel (3.11+) for your KVM host lets you benefit from I/O scheduling improvements on the guest, despite OpenVZ still being 2.6 based.
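The KSM "squeeze room" mentioned above can be inspected directly; this is a sketch using the kernel's standard sysfs interface for KSM (present on KVM-capable kernels since 2.6.32, but not guaranteed on every build, hence the guard):

```shell
# pages_sharing / pages_shared gives a feel for how much duplicate guest
# memory KSM is merging across the slabs on a KVM host.
ksm=/sys/kernel/mm/ksm
if [ -r "$ksm/run" ]; then
    echo "ksm running:   $(cat "$ksm/run")"           # 1 = actively merging
    echo "pages shared:  $(cat "$ksm/pages_shared")"  # unique pages kept
    echo "pages sharing: $(cat "$ksm/pages_sharing")" # duplicates folded in
else
    echo "no KSM support on this kernel"
fi
```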

edit: Added link to LET regarding Rus Foster. Turns out it was 1.47GB not 2GB per node


----------



## drmike (Dec 23, 2013)

Some good responses by providers, and actually useful/real reasons to consider slabbing - versus the unsavory reasons other folks run such setups elsewhere.

Definitely a few more companies from the responses that I'd be interested in having services with, slabbing or no slabbing.


----------



## GoodHosting (Feb 1, 2014)

Hello,

 

Thought I'd put my two cents into this conversation (probably necro-posting, oh well.)

 

Firstly, I would like to say that at this time we do not use any sort of slabbing in production.  We are currently playing around with slabbing on the back-end and with our development servers, and are having great results.

 

Pros:

- Mobility (much easier to migrate/resize than bare metal)

- Portability (between hypervisor, smart server, bare metal ...)

- Segmentation (ex: you __cannot__ hard limit CPU in KVM, but you can in OpenVZ...)

 

Cons:

- There are ways to oversell with slabbing (this comes down to the integrity of the provider, as "slabbing" doesn't mean the provider is overselling.  Chances are, the provider is overselling without even knowing it from the beginning, by over-assigning CPU to KVM guests.)

 

---

 

For us, it boils down to the flat fact that Windows VPS take far too much idle CPU, and spike far too high (100%/core, regardless of "shares" or "quotas" set in cgroups).  There is NO WAY WHATSOEVER to hard limit CPU usage in a KVM-hypervisor powered environment.

 

However, if you shove your micro KVM guests inside an OpenVZ container and let them fight over their share of a properly quota'd and pinned cpuset (on the VZ 'host'), then you get fewer problems, more space per node (density), and less downtime due to I/O or other bottlenecks for your clients.
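A sketch of the OpenVZ-side hard cap being described, using a hypothetical container ID (9001); `--cpulimit` is OpenVZ's hard ceiling in percent of one core, which is the knob this post argues plain cgroup shares on a KVM host don't provide:

```shell
# Hypothetical CT 9001 acting as the slab that holds the KVM guests.
ctid=9001
if command -v vzctl >/dev/null 2>&1; then
    vzctl set "$ctid" --cpus 4 --save        # expose 4 cpus inside the CT
    vzctl set "$ctid" --cpulimit 400 --save  # hard cap: 400% = 4 full cores
else
    echo "vzctl not found - OpenVZ host tools required"
fi
```

Everything inside the container then contends for those four cores and can never burst past them, whatever the guests do.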

 

---

 

If KVM would fix the above, we wouldn't even have to look into slabbing, as our system already uses large pools of decentralized storage, and we PXE boot all our nodes off an iSCSI target (SAN/iSCSI HBA via Initiator.)


----------



## drmike (Feb 1, 2014)

Glad to hear more stories so this concept and its downsides are understood.  Plus it shows the openness of some companies... ehhh, transparency...  I applaud the providers who say: yep, we do.


----------



## iWF-Jacob (Feb 1, 2014)

I don't really like the idea of slabbing. To me it almost seems morally wrong, not sure why... Perhaps it's almost as if you are "cheating" in terms of what is available and the size and scope of the company.

I would say the closest we get to slabbing, while it certainly isn't, is having shared hosting servers be containers on a hypervisor. It's actually pretty great, works well, and allows for quicker scaling, migrations, etc. In addition, it always helps out on those license costs!


----------



## trewq (Feb 1, 2014)

iWF-Jacob said:


> It's actually pretty great, works well, and allows for quicker scaling, migrations, etc.


Mostly why people would slab.


----------



## willie (Feb 1, 2014)

GoodHosting said:


> For us, it boils down to the flat fact that Windows VPS take far too much idle CPU, and spike far too high (100%/core, regardless of "shares" or "quotas" set in cgroups).  There is NO WAY WHATSOEVER to hard limit CPU usage in a KVM-hypervisor powered environment.
> 
> 
> However, if you shove your micro KVM guests inside an OpenVZ container and let them fight over their share of a properly quota'd and pinned cpuset (on the VZ 'host'), then you get fewer problems, more space per node (density), and less downtime due to I/O or other bottlenecks for your clients.


Oh that's interesting about cpu limiting.  I wonder if that's why the large cloud places use Xen.

I had been wondering for a while if it's possible to run KVM under OpenVZ.  Since KVM is a kernel running in a user process, it only makes sense.  Is that what's going on?


----------



## Magiobiwan (Feb 2, 2014)

Given that KVM is basically QEMU (KVM-accelerated QEMU), and since QEMU runs in userland, it's *possible* to run KVM under OpenVZ. I know it's possible because some clients try to run Windows XP in QEMU constantly. Now, SHOULD YOU? Oh dear god no. The performance would likely be trash. Run KVM directly on the host node instead; you'll get MUCH better performance.


----------



## kaniini (Feb 2, 2014)

Magiobiwan said:


> Given that KVM is basically QEMU (KVM-accelerated QEMU), and since QEMU runs in userland, it's *possible* to run KVM under OpenVZ. I know it's possible because some clients try to run Windows XP in QEMU constantly. Now, SHOULD YOU? Oh dear god no. The performance would likely be trash. Run KVM directly on the host node instead; you'll get MUCH better performance.


It's not possible, unless you give the container access to /dev/kvm.
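A sketch of what granting that access looks like, with a hypothetical container ID (101); `--devnodes` is vzctl's mechanism for exposing a host device node inside a container:

```shell
# Hand the host's /dev/kvm to an OpenVZ container so QEMU inside can use
# the KVM accelerator instead of falling back to pure (slow) emulation.
ctid=101
if command -v vzctl >/dev/null 2>&1; then
    vzctl set "$ctid" --devnodes kvm:rw --save  # expose /dev/kvm inside
    vzctl exec "$ctid" ls -l /dev/kvm           # confirm it's visible
else
    echo "vzctl not found - OpenVZ host tools required"
fi
```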


----------

