# Tool for detecting if a VPS node is "slabbed" or not



## kaniini (Jan 29, 2014)

Based on the GVH new Dallas node running in a hypervisor, I decided to begin writing a tool to probe an OpenVZ/LXC environment and see if it is slabbed.

There are a few ways we can do this.  First of all, almost all hypervisors provide basic identification at CPUID leaf 0x40000000.  So, we can sniff that and see if there's a hypervisor running.  This is basically essential, because guest kernels all use CPUID to learn more about the VMM in early boot... it tells them how to make hypercalls and so on.

Once we know more about the basic hypervisor... we can do more advanced things.  For example, it may be possible to bitbang on VMware's bus by using inl() and outl() calls in combination with ioperm().  That would allow us to learn more about the hypervisor (such as its version), etc.

Code here: https://github.com/kaniini/slabbed-or-not

It'd be interesting if people ran it, especially on machines they know _not_ to be slabbed, so I can verify that the code does the right thing on a baremetal environment.


----------



## drmike (Jan 29, 2014)

Bad ass... so for a non git head like me...  Explain how to git this and all that jazz... so I can automate it and include in my standard VPS pre-use testing....


----------



## kaniini (Jan 29, 2014)

Just download https://github.com/kaniini/slabbed-or-not/archive/0.1.tar.gz

Then... cd slabbed-or-not-0.1; make; ./slabbed-or-not


----------



## drmike (Jan 29, 2014)

Well that's simple... 

Funny, a GVH-Jon / GreenValueHost offer created the motivation for the slab-detector...  This is going to be all the rage once other folks notice it

The horrors as people try to mask details...


----------



## drmike (Jan 29, 2014)

First go, I got this:



> [email protected]:~/slabbed-or-not-0.1# ./slabbed-or-not
> Illegal instruction


----------



## drmike (Jan 29, 2014)

Then another VPS..... OpenVZ on this one...



> ~/slabbed-or-not-0.1# make; ./slabbed-or-not
> gcc -o slabbed-or-not slabbed-or-not.c
> Illegal instruction


----------



## kaniini (Jan 29, 2014)

Try version 0.1.1.  It had a late-breaking fix.

Basically, to detect whether we're running on Xen PV or Xen HVM, we execute an illegal instruction.  On Xen PV it will be interpreted as a normal instruction; on HVM it will trigger SIGILL (but that doesn't matter, because the initial check will already have passed).

There's a few things we can do with VMware too like this, but I haven't bothered yet.

edit: https://github.com/kaniini/slabbed-or-not/archive/0.1.1.tar.gz is 0.1.1 if anyone is confused.


----------



## drmike (Jan 29, 2014)

Well that indeed works...   Good work.


----------



## Damian (Jan 29, 2014)

Anyone else feel that this is a case of "OMFG WE'VE FOUND A PROBLEM THAT WE MADE OURSELVES LET'S HAVE A HUGE DEBACLE ABOUT IT" for something that's not actually a problem? It's not like anyone's running on single-core Xeons from 2004; if the server can handle nested virt, what's the big deal?

We don't do this ourselves, so I think I might be missing the point. I'm open to education, of course.


----------



## drmike (Jan 29, 2014)

Well, we had a thread around here where some providers volunteered that they use slabs and gave good reasons why...  Recommended reading for those who haven't disclosed using such, if they need more logic conversationally speaking.

I'm sure we all know providers using such for good and others for wrong/abusive reasons.  I'm not even opening my mouth on this one...  May the furniture fly and land where it may, and best of luck to all fingerbanged by this script.  :popcorn:


----------



## kaniini (Jan 29, 2014)

Damian said:


> Anyone else feel that this is a case of "OMFG WE'VE FOUND A PROBLEM THAT WE MADE OURSELVES LET'S HAVE A HUGE DEBACLE ABOUT IT" for something that's not actually a problem? It's not like anyone's running on single-core Xeons from 2004; if the server can handle nested virt, what's the big deal?
> 
> We don't do this ourselves, so I think I might be missing the point. I'm open to education, of course.


Well, really, I don't care if people use slabs at all... I think you misinterpret my reasoning for releasing this tool.

There are some fairly legitimate use cases where slabbing might be advantageous -- live migrations being one of them.  If you've already put in high-availability infrastructure, it's a way to get some of the advantages of HA for your OpenVZ deployments.  That's fine, really.

But then you have the dodgier hosts which do slabbing solely as a way to overcommit their servers even further (by taking advantage of slabbing to split a physical server up into multiple scheduling domains) -- these are the ones which are less than honest about their use of hypervisors on their OpenVZ deployments.

That said: I _do_ believe that if you are slabbing, then you should be forthright about it.  Then you have nothing to hide, and nothing to fear from this tool, right?

edit: I mean, basically, the only reason I wrote this was because I bought a VPS last night that was so awful, I actually felt compelled to investigate why.  There were a lot of things that seemed off about it -- the CPU info was whack, the hypervisor bit was set, performance was crap.  Of course any reasonable person is going to investigate what hypervisor their container is running under if they see these things.  And it's actually important that this tool exists, because it can explain strange behaviour with a VPS -- think about steal time, for example... that's something you cannot see inside OpenVZ, and which only exists under a hypervisor.


----------



## Damian (Jan 29, 2014)

Good response, I do fully understand it now. The original posts had a "LOL WE'VE GOT THEM NOW" feel to them.


----------



## DomainBop (Jan 29, 2014)

> Funny, GVH-Jon / GreenValueHost offer created the motivation for the slab-detector..


Slabbing would explain how he's going to use that new /24 that's SWIPed to him...  Phoenix is apparently next on the GVH slabathon with another /24 there http://whois.arin.net/rest/org/GVH-8/pft

edit:



> I mean, basically, the only reason why I wrote this was because I bought a VPS last night that was so awful, I actually felt compelled to investigate why.  I mean, there was a lot of things that seemed off about it -- CPU info was whack, hypervisor bit was set, performance was crap.


Ginernet was slabbing (OpenVZ inside KVM, 2 KVMs per server) and there was a noticeable disk performance hit of 50%+ with the SSD drives they were using.  I think they've stopped slabbing now, but they were one of the few hosts who openly admitted to doing it (the ease of live migrations you mentioned was one reason they did it).

In terms of stability that slabbed VPS was a nightmare with frequent reboots but that may have been due to the node and the data centers both being DDoS magnets rather than the slabbing (although KVM's tendency to lock up when it gets hit with a DDoS may have been a contributing factor to the reboots).


----------



## kaniini (Jan 29, 2014)

Damian said:


> Good response, I do fully understand it now. The original posts had a "LOL WE'VE GOT THEM NOW" feel to it.


Hmm... I am not sure why they would... my post with this thread was basically a technical explanation of how the thing works, as well as the motivation for why I would release a polished up tool for it.


----------



## DomainBop (Jan 29, 2014)

> There's a few things we can do with VMware too like this, but I haven't bothered yet.


The OVH Classic and Low Latency VPS lines are OpenVZ running on VMWare


----------



## drmike (Jan 29, 2014)

"...reasonable person is going to investigate what hypervisor their container is running under if they see these things."

Maybe your "average" vpsBoard user does... but don't expect the peasants in sillyville to follow your high standards Lord kaniini.  For they, they starve from lack of knowledge.

Lips shut. Done.  Carry on.


----------



## AuroraZero (Jan 29, 2014)

drmike said:


> "...reasonable person is going to investigate what hypervisor their container is running under if they see these things."
> 
> Maybe your "average" vpsBoard user does... but don't expect the peasants in sillyville to follow your high standards Lord kaniini.  For they, they starve from lack of knowledge.
> 
> Lips shut. Done.  Carry on.


Do not worry man, I make this same mistake all the time as well. I assume that just because I, or in this case we, do it, everyone else does the same. When the truth is the average person does not do it, nor do they have the same level of knowledge in the given field.


----------



## Rallias (Jan 29, 2014)

DomainBop said:


> Ginernet was slabbing (openvz inside kvm, 2 kvm's per server) and there was a noticeable performance hit in disk performance of 50%+ with the SSD drives they were using.


I don't see that kind of IO degradation.



In fact, on nodes that I've seen that have been slabbed properly (xen pv, elevator=noop, openvswitch bridge), I've seen improved performance.


----------



## concerto49 (Jan 29, 2014)

Ronald Barnstoff said:


> I don't see that kind of IO degradation.
> 
> 
> 
> In fact, on nodes that I've seen that have been slabbed properly (xen pv, elevator=noop, openvswitch bridge), I've seen improved performance.


That's the thing. Why does it matter? What's the point here but drama? Does it work? Does it perform? Is it reliable? I think that's what matters to end users. They want a working product that's performant. They want the features.


----------



## kaniini (Jan 29, 2014)

concerto49 said:


> That's the thing. Why does it matter? What's the point here but drama? Does it work? Does it perform? Is it reliable? I think that's what matters to end users. They want a working product that's performant. They want the features.


No drama intended... it's just a tool designed to answer the question of whether or not you're running in a hypervisor.

Isn't the question "can you determine whether you are running in a hypervisor from within a restricted container" interesting enough without a drama angle?  If not that, then is "how much information about the hypervisor can we determine from inside the container" interesting?


----------



## jarland (Jan 29, 2014)

Lol when I saw this earlier I just knew...tonight would be good. It doesn't matter but it won't stop the drama. Popcorn up people.


----------



## Magiobiwan (Jan 29, 2014)

There's already a thread on LET about BuyVM. I need more popcorn. I ran out already.


----------



## kaniini (Jan 29, 2014)

jarland said:


> Lol when I saw this earlier I just knew...tonight would be good. It doesn't matter but it won't stop the drama. Popcorn up people.


In my opinion, the only people who feel threatened by this tool are those who feel it would expose something they are not being transparent about.  Might be worth noting for any future purchase decisions...


----------



## Shados (Jan 29, 2014)

kaniini said:


> Isn't the question "can you determine whether you are running in a hypervisor from within a restricted container" interesting enough without a drama angle?  If not that, then is "can we determine as much information about the hypervisor from inside the container" interesting?


Of course it is; it's a neat technical question.


----------



## DomainBop (Jan 29, 2014)

concerto49 said:


> What's the point here but drama?


My whole point in downloading and running the test was that I wanted to create drama and publicly humiliate my provider, but alas, the server that Oles delivered to me today 1 minute and 53 seconds after ordering is not running under a hypervisor.  Just think of the drama that would have ensued if the test had shown that Oles was trying to pass off slabs as E3 dedis.  

//end troll post



> Isn't the question "can you determine whether you are running in a hypervisor from within a restricted container" interesting enough without a drama angle?



Yes.


----------



## kaniini (Jan 29, 2014)

DomainBop said:


> The OVH Classic and Low Latency VPS lines are OpenVZ running on VMWare


Interesting... I will pick one up and test with it.


----------



## rds100 (Jan 29, 2014)

For those of you who can read Russian, here is an interesting publication worth reading - http://www.xakep.ru/post/58104/


----------



## Nett (Jan 30, 2014)

Nice stuff.  Will use this on small/ridiculous providers.


----------



## kaniini (Jan 30, 2014)

version 0.2 of this tool has been tagged in GIT, downloadable here: https://github.com/kaniini/slabbed-or-not/archive/0.2.zip

This version is basically all about tickling VMware from inside an OpenVZ container.  If that doesn't apply to your situation, it's probably pretty boring.

Interesting aspect: This version could be modified to do evil things to VMware from inside the OpenVZ container on some versions of VMware.  For example, if you're running under VMware Workstation, some very trivial modifications to this tool would allow you to do things like disconnect the virtual HDD, from _inside the OpenVZ container_.

Good news: ESXi 5 _appears_ to no longer allow device enumeration.  Older versions -- no idea.  If it lists devices, you can disconnect them by adding some code.  The hypervisor's hypercall port assumes that if you can access it, you have permission to do these things to the VM.

Bad news: It is not practical to block access to the VMware hypercall port.  Even though ioperm() and iopl() are gated behind the CAP_SYS_RAWIO capability, the hypervisor intercepts accesses to its magic port before the CPU can fault, so you can simply use some inline asm to make the hypercalls.

tl;dr: I wouldn't do slabbing with VMware if I were a provider.


----------



## joepie91 (Jan 30, 2014)

kaniini said:


> tl;dr: I wouldn't do slabbing with VMware if I were a provider.


But you _are_ a provider!


----------



## kaniini (Jan 30, 2014)

joepie91 said:


> But you _are_ a provider!


I meant in that context a provider that used a scheme that would benefit from slabbing (i.e. OpenVZ).


----------



## Nick_A (Jan 30, 2014)

This is great.


----------



## wcypierre (Jan 30, 2014)

kaniini said:


> version 0.2 of this tool has been tagged in GIT, downloadable here: https://github.com/kaniini/slabbed-or-not/archive/0.2.zip
> 
> This version is basically all about tickling VMware from inside an OpenVZ container.  If that doesn't apply to your situation, it's probably pretty boring.
> 
> ...


can you teach me how did you get into that instead?


----------



## SkylarM (Jan 30, 2014)

Wonder when the "BlueVM does slabbing" thread will pop up. They have actual service stability issues, which are likely related to their slabbing setup. Quite a bit of drama for a provider that's been solid.


----------



## manacit (Jan 30, 2014)

SkylarM said:


> Wonder when the "BlueVM does slabbing" thread will pop up. They have actual service stability issues, which are likely related to their slabbing setup. Quite a bit of drama for a provider that's been solid.


Clearly you haven't been on LET lately.


----------



## SkylarM (Jan 30, 2014)

manacit said:


> Clearly you haven't been on LET lately.


is it IN the buyvm thread? I stopped reading it on like page 2.


----------



## NodeBytes (Jan 30, 2014)

This forum is turning into LET.

Slabbing does not equal low quality. It has its uses. Parts of AWS use slabbing because it's efficient and stable.


----------



## DomainBop (Jan 30, 2014)

SkylarM said:


> Wonder when the BlueVM does slabbing thread will pop up. They have actual service stability issues, and is likely related to their slabbing setup. Quite a bit of drama for a provider that's been solid.


Johnston already said they do slab some of their smaller plans and gave the reason for doing it


----------



## MartinD (Jan 30, 2014)

NodeBytes said:


> This forum is turning into LET.
> 
> 
> Slabbing does not equal low quality. It has its uses. Parts of AWS use slabbing because it's efficient and stable.


No-one is saying slabbing is bad..


----------



## drmike (Jan 30, 2014)

MartinD said:


> No-one is saying slabbing is bad..


^--- THIS.

Now go look around and see if providers say they run nested virtualization in their marketing, FAQ, support, etc.

The fact that many providers have so much e-penis statistics dripping from their site, but neglect to mention they are even smarter than the average VPS kid offering (i.e. they are advanced in virtualization or at least know a guy who is) is ummm well, odd.

So who did folks find slabbing?  Yeah BuyVM,  BlueVM....  Who else?  Perhaps it is list time with some official input from folks to explain their use as they are discovered.


----------



## tchen (Jan 30, 2014)

DomainBop said:


> My whole point in downloading and running the test was that I wanted to create drama and publicly humiliate my provider, but alas, the server that Oles delivered to me today 1 minute and 53 seconds after ordering is not running under a hypervisor.  Just think of the drama that would have ensued if the test had shown that Oles was trying to pass off slabs as E3 dedis.
> 
> //end troll post


You've still got a chance.  Run it on the OVZ VPS.  Despite it being posted directly on their website, I'm sure it'd be enough to drum up a multi-pager LET thread or two


----------



## Magiobiwan (Jan 30, 2014)

I suspect the reason providers DON'T publicly say "We slab" is BECAUSE of the public backlash like the one happening in the clusterthread over on LET right now. People assume slabbing == bad without understanding WHY providers might do it. BlueVM slabs the nodes we put Blue0 to Blue1/2s on, because otherwise performance would be horrible. And those nodes have pretty darn good reliability. Performance is good too. Slabbing doesn't always equal bad.


----------



## drmike (Jan 30, 2014)

Magiobiwan said:


> BlueVM slabs the nodes we put Blue0 to Blue1/2s on...


Aren't you glad I posted a thread not too long ago about voluntary slab confessions... and BlueVM stepped up.  Congrats to BlueVM for being upfront, honest and proactive.  Make sure Johnston sees what I just said and rewards support staff instead of screaming at you lads about me.


----------



## kaniini (Jan 30, 2014)

As I said earlier, slabbing is totally fine as long as you are honest about it.


----------



## kaniini (Jan 30, 2014)

wcypierre said:


> can you teach me how did you get into that instead?


VMware's hypercall interface uses I/O port knocking, driven by the inl/outl and inw/outw instructions from unprivileged code (thus OpenVZ cannot trap them).

VMware hypercall 11 allows enumeration of the device tree, you call it and get back 4 bytes of a device table entry at a specified offset.  There are a maximum of 50 devices which may be connected to a VM within VMware.

VMware hypercall 12 allows connection/disconnection of a device tree element.  If you're on a vSphere hypervisor, then you're safe from this as they disabled hypercalls 11/12.  On Workstation though, it is possible to disconnect the disks through hypercall 12.

https://sites.google.com/site/chitchatvmback/backdoor is a listing of known hypercalls.  There's also the open-vm-tools source code, but trying to read that was ultimately a major waste of my time.


----------



## wcypierre (Jan 31, 2014)

kaniini said:


> VMware's hypercall interface uses I/O port knocking, driven by the inl/outl and inw/outw instructions from unprivileged code (thus OpenVZ cannot trap them).
> 
> VMware hypercall 11 allows enumeration of the device tree, you call it and get back 4 bytes of a device table entry at a specified offset.  There are a maximum of 50 devices which may be connected to a VM within VMware.
> 
> ...


hmm........ interesting. Gotta brush up my low-level skills before I get to know how these things actually work


----------



## PwnyExpress (Feb 2, 2014)

I really don't get the point with the drama behind this.

As other posters have pointed out, running OVZ or LXC under an HVM (what you guys call "slabbing"), if done properly, practically gives you free HA on your HVM nodes at the very least. I'm thinking of regular hardware maintenance, where you'd otherwise have to take down the parent node along with your customers' nodes.


----------



## kaniini (Feb 2, 2014)

PwnyExpress said:


> I really don't get the point with the drama behind this.
> 
> 
> As other posters have pointed out, running OVZ or LXC under a HVM (as you guys call "slabbing"), if done properly practically gives you free HA on your HVM nodes at the very least. I'm thinking of regular hardware maintenance where you'll have to take down the parent node along with your customer's nodes.


What drama, exactly?  There is no drama in this thread, it is solely a technical discussion on whether or not nested virtualization can be successfully concealed to an OpenVZ container.


----------



## jmginer (Feb 14, 2014)

Yeah! In the past we deployed OpenVZ servers on KVM nodes.

We published about it here: http://lowendtalk.com/discussion/7977/openvz-inside-kvm

It's a good approach for managing backups...

We don't do it anymore; we prefer to provide better performance, and we're using bacula4host to do backups every 4 hours.

Regards!



DomainBop said:


> Slabbing would explain how he's going to use that new /24 that's SWIPed to him...  Phoenix is apparently next on the GVH slabathon with another /24 there http://whois.arin.net/rest/org/GVH-8/pft
> 
> edit:
> 
> ...


----------



## wlanboy (Feb 19, 2014)

kaniini said:


> As I said earlier, slabbing is totally fine as long as you are honest about it.


Second that.


----------

