amuck-landowner

VPS Slabbing - Who Does it and Admits to It?

drmike

100% Tier-1 Gogent
My latest boo-ya-kah is looking around to see who is nesting virtualization within virtualization - aka SLABBING.

We have more honest providers around here.  Who confesses to slabbing?

What are the many legitimate uses of such and what are the unsavory reasons for slabbing (other than inflating server count e-penis)?
 

drmike

100% Tier-1 Gogent
You mean in a sense of "confess or get exposed"?
It's Sunday and the holiday season.  Unless you roll around in CC's bed we won't be outing anyone ;)

Certainly some folks want to say, yeah we do it and defend the practice.  Right?

United Server Slabbers Union 100.
 

Reece-DM

New Member
Verified Provider
Quite an interesting approach to hosting; I can see the benefits of it, but it isn't something I've put into action.
 

KuJoe

Well-Known Member
Verified Provider
My test OpenVZ nodes are actually a KVM VPS and a VMware ESXi VM. With my personal hatred for KVM I would never want to run our OpenVZ nodes on KVM, and ESXi is too expensive to be cost-effective for us to even try it.
 

Virtovo

New Member
Verified Provider
My latest boo-ya-kah is looking around to see who is nesting virtualization within virtualization - aka SLABBING.

We have more honest providers around here.  Who confesses to slabbing?

What are the many legitimate uses of such and what are the unsavory reasons for slabbing (other than inflating server count e-penis)?
I'd guess one legitimate reason for slabbing is to build your base infrastructure on a single virtualisation type and then float the others on top of it. That makes it easier going forward to adjust your deployment to react to market trends. OpenVZ subs dropping? Simply resize the OpenVZ instance to make more room for KVM services.
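Virtovo's rebalancing idea in rough numbers. This is a minimal sketch, assuming all figures are hypothetical and ignoring host/hypervisor RAM overhead; it only illustrates the arithmetic of shrinking an OpenVZ slab to free room for KVM guests:

```python
# Hypothetical capacity rebalance on a slabbed node: shrink the OpenVZ
# slab and see how many KVM guests the freed RAM could hold.
def rebalance(node_ram_gb, openvz_slab_gb, shrink_by_gb, kvm_plan_gb):
    """Return (new OpenVZ slab size, extra KVM guests that now fit)."""
    assert shrink_by_gb < openvz_slab_gb <= node_ram_gb
    new_slab_gb = openvz_slab_gb - shrink_by_gb
    freed_gb = shrink_by_gb            # RAM handed back for KVM guests
    return new_slab_gb, freed_gb // kvm_plan_gb

# e.g. on a 128GB node: shrink a 96GB OpenVZ slab by 32GB and sell
# 4GB KVM plans with the freed RAM.
```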
 

MannDude

Just a dude
vpsBoard Founder
Moderator
Is there any benefit other than increasing perceived node count? Isn't it also used to bypass some OpenVZ limit on large nodes?

Explain it to me like I'm 5. :)
 

DomainBop

Dormant VPSB Pathogen
The definitive list of providers who slab and publicly state they do:

1. Ginernet (OpenVZ inside KVM) confession

2. OVH, "Classic" and "Low Latency" VPS lines (OpenVZ inside VMware) confession (see what virtualization layer FAQ...)

Nobody else admits to doing it. @Moderator, this topic may now be closed :p
 

drmike

100% Tier-1 Gogent
Isn't it also used to bypass some OpenVZ limit on large nodes?
My understanding is that much above 5k processes, OpenVZ performance goes south quickly. So nodes are commonly slabbed to work around this with multiple process pools.

Stories about some of these slab workarounds involve rather puny servers (e.g. 32GB of RAM) carved into upwards of four 7-8GB slabs.

It certainly does inflate your public "server" count. So long as nobody pays attention and notices four nodes down at the same time, every time, the game goes on.

I can see slabbing being necessary to facilitate the very small plans (e.g. 128MB and less) in any real quantity.
 

KuJoe

Well-Known Member
Verified Provider
I think one of the biggest benefits of "slabbing" is mobility. For example: at one point we had a cPanel server that was an OpenVZ VPS. When it outgrew the node it was on, I created a KVM VPS with more resources and just did a vzmigrate of the cPanel OpenVZ VPS from the OpenVZ node to the KVM VPS. KVM, Xen, and VMware allow for similar migrations (although not as a simple one-liner), so having an OpenVZ node on one of them might make things easier if, say, you need to replace a stick of RAM or want to upgrade the hardware without downtime: you can just move a single VPS instead of all of the clients that are hosted on it.
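A move like KuJoe describes can be scripted. A minimal sketch, assuming the stock OpenVZ tools (`vzlist`, `vzmigrate`) and a hypothetical destination hostname; the command-building is split out so it can be sanity-checked without touching a real node:

```python
import subprocess

def list_ctids(vzlist_output):
    """Parse `vzlist -H -o ctid` output into a list of container IDs."""
    return [line.strip() for line in vzlist_output.splitlines() if line.strip()]

def build_migrate_commands(ctids, dest):
    """One `vzmigrate --online` command per container."""
    return [["vzmigrate", "--online", dest, ctid] for ctid in ctids]

def migrate_all(dest, dry_run=True):
    """Migrate every container on this node to `dest` (hypothetical host)."""
    out = subprocess.check_output(["vzlist", "-H", "-o", "ctid"], text=True)
    for cmd in build_migrate_commands(list_ctids(out), dest):
        if dry_run:
            print(" ".join(cmd))      # show what would run
        else:
            subprocess.check_call(cmd)
```

Only run `migrate_all(..., dry_run=False)` on an actual OpenVZ node; everything else is plain string handling.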
 

SkylarM

Well-Known Member
Verified Provider
I think one of the biggest benefits of "slabbing" is mobility. For example: at one point we had a cPanel server that was an OpenVZ VPS. When it outgrew the node it was on, I created a KVM VPS with more resources and just did a vzmigrate of the cPanel OpenVZ VPS from the OpenVZ node to the KVM VPS. KVM, Xen, and VMware allow for similar migrations (although not as a simple one-liner), so having an OpenVZ node on one of them might make things easier if, say, you need to replace a stick of RAM or want to upgrade the hardware without downtime: you can just move a single VPS instead of all of the clients that are hosted on it.
Migrating a "slab" wouldn't necessarily be any faster than doing a mass OpenVZ migrate via a script that migrates each and every VPS in an automated fashion (we upgraded about 200 clients/containers on Friday, entirely automated).

My understanding is that much above 5k processes, OpenVZ performance goes south quickly. So nodes are commonly slabbed to work around this with multiple process pools.

Stories about some of these slab workarounds involve rather puny servers (e.g. 32GB of RAM) carved into upwards of four 7-8GB slabs.

It certainly does inflate your public "server" count. So long as nobody pays attention and notices four nodes down at the same time, every time, the game goes on.

I can see slabbing being necessary to facilitate the very small plans (e.g. 128MB and less) in any real quantity.
I'm not entirely sure how true this is. The limit can't be 5k; our newer and larger servers at capacity are over this "limit" you speak of.

If there IS a process limit and we simply haven't hit it yet, then offering small RAM packages (BlueVM did 96MB packages recently, iirc?) would make sense in a slabbed setup, but it's still rather misleading as far as a company boasting about total node count.

If there is a magical process limit for OpenVZ, then slabbing for these smaller packages makes sense, and MAYBE on larger systems like dual E5s with 128GB+ memory, but it would have zero place on lower-end systems such as E3s with 32GB of memory.

Edit:

This is direct from the OpenVZ site:

There is a restriction on the total number of processes in the system. More than about 16000 processes start to cause poor responsiveness of the system, worsening when the number grows. Total number of processes exceeding 32000 is very likely to cause hang of the system.

Note that in practice the number of processes is usually less. Each process consumes some memory, and the available memory and the "low memory" (see “Low memory”) limit the number of processes to lower values. With typical processes, it is normal to be able to run only up to 8000 processes in a system.

http://openvz.org/UBC_primary_parameters

In short, even with silly overselling, you'll hit CPU limitations on E3s in particular long before you hit a process limit (we're nowhere close on our new nodes, and these hold roughly 2-3x what our old nodes are capable of holding).
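A back-of-envelope check of that conclusion, using the ceilings quoted from the OpenVZ UBC page. The 60-processes-per-container figure is an assumption for illustration, not a number from the thread:

```python
# Rough container-capacity comparison: process-limited vs RAM-limited.
PRACTICAL_PROCESS_CEILING = 8000   # "normal to be able to run only up to
                                   # 8000 processes" per the OpenVZ UBC docs
AVG_PROCS_PER_CONTAINER = 60       # assumed typical container workload

def max_containers_by_procs(ceiling=PRACTICAL_PROCESS_CEILING,
                            per_ct=AVG_PROCS_PER_CONTAINER):
    """Containers before the practical process ceiling is reached."""
    return ceiling // per_ct

def max_containers_by_ram(node_ram_gb, plan_ram_gb):
    """Raw RAM-limited container count, ignoring overselling."""
    return int(node_ram_gb // plan_ram_gb)
```

Under these assumptions an E3 with 32GB selling 1GB plans is RAM-bound at ~32 containers, far below the ~133 containers the process ceiling would allow, which matches SkylarM's point.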
 

jarland

The ocean is digital
I think one benefit is less impact from a kernel panic on one "node." It depends on your reason for splitting up a node. If it's to inflate numbers and oversell to a stupid degree while making the end user think it's different physical nodes... I might judge you a bit. But if it's to minimize the impact of kernel issues and reboots, and you're willing to take the hit to resource overhead, I don't mind it.
 

Echelon

New Member
Verified Provider
'Slabbing' for ease of management isn't necessarily bad. 'Slabbing' to deceive the client is a headache for everyone involved. Even then, I find that 'slabbing' is not worth the headaches it can draw up, since you're then having to dig into multiple virtualization layers to try and find issues that can occur over time.

Simply my two cents on the matter ;)
 

KS_Phillip

New Member
Verified Provider
We do this for all of our OpenVZ stuff, whether it's directly leased OpenVZ containers or game servers on our Flexible Gaming brand. It just keeps management and segmentation that much simpler.
 

XLvps

New Member
But for each slab, wouldn't you still have to pay for a control panel instance (SolusVM, Virtualizor, etc.)?
 

Virtovo

New Member
Verified Provider
But for each slab, wouldn't you still have to pay for a control panel instance (SolusVM, Virtualizor, etc.)?
Yes, although depending on your reasons for slabbing, the cost may be negligible: a 1-2 VPS slave licence for Solus is $2, and a standard slave licence is $10.
 

WebSearchingPro

VPS Peddler
Verified Provider
Yes, although depending on your reasons for slabbing, the cost may be negligible: a 1-2 VPS slave licence for Solus is $2, and a standard slave licence is $10.
I think it's unreasonable to think of it that way; if you put 32 VPS on a node (32 x 1GB VPS), that would be quite a few micro/mini licenses.
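The licence arithmetic behind these two posts can be sketched out, using the prices Virtovo quoted ($2 for a 1-2 VPS slave, $10 for a standard slave); treat the figures as illustrative, not current SolusVM pricing:

```python
# Monthly slave-licence cost for a slabbed node, using the thread's
# quoted prices. Assumes the micro licence caps at 2 VPS and the
# standard licence is uncapped.
MICRO_PRICE, MICRO_CAP = 2, 2      # $/month, max VPS per micro licence
STANDARD_PRICE = 10                # $/month, unlimited VPS

def cheapest_licence_cost(vps_on_slab):
    """Cheapest licence option for one slab holding `vps_on_slab` VPS."""
    micro_cost = -(-vps_on_slab // MICRO_CAP) * MICRO_PRICE  # ceil division
    return min(micro_cost, STANDARD_PRICE)

def node_licence_cost(slabs, vps_per_slab):
    """Total licence cost for a node split into `slabs` slabs."""
    return slabs * cheapest_licence_cost(vps_per_slab)
```

As WebSearchingPro suggests, a slab with 32 VPS is past the point where micro licences make sense; you'd just pay the standard price per slab.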
 