amuck-landowner

Tool for detecting if a VPS node is "slabbed" or not

tchen

New Member
My whole point in downloading and running the test was that I wanted to create drama and publicly humiliate my provider, but alas, the server that Oles delivered to me today 1 minute and 53 seconds after ordering is not running under a hypervisor.  Just think of the drama that would have ensued if the test had shown that Oles was trying to pass off slabs as E3 dedis.  

//end troll post
You've still got a chance.  Run it on the OVZ VPS.  Despite it being posted directly on their website, I'm sure it'd be enough to drum up a multi-pager LET thread or two :p
 

Magiobiwan

Insert Witty Statement Here
Verified Provider
I suspect the reason that providers DON'T publicly say "We Slab" is BECAUSE of the public backlash like the one happening in the clusterthread over on LET right now. People assume "slabbing == bad" without understanding WHY providers might do it. BlueVM slabs the nodes we put Blue0 to Blue1/2s on, because otherwise performance would be horrible. And those nodes have pretty darn good reliability. Performance is good too. Slabbing doesn't always equal bad.
 

drmike

100% Tier-1 Gogent
BlueVM slabs the nodes we put Blue0 to Blue1/2s on...
Aren't you glad I posted a thread not too long ago about voluntary slab confessions... and BlueVM stepped up. Congrats to BlueVM on being upfront, honest and proactive. Make sure Jonston sees what I just said and rewards the support staff instead of screaming at you lads about me.
 

kaniini

Beware the bunny-rabbit!
Verified Provider
can you teach me how you got into that instead? :D
VMware's hypercall interface uses I/O port knocking, driven by the unprivileged inl/outl and inw/outw instructions (thus OpenVZ, which is not a hypervisor, cannot trap them).

VMware hypercall 11 allows enumeration of the device tree: you call it and get back 4 bytes of a device table entry at a specified offset. A maximum of 50 devices may be connected to a VM within VMware.

VMware hypercall 12 allows connection/disconnection of a device tree element. If you're on a vSphere hypervisor, you're safe from this, as they disabled hypercalls 11/12. On Workstation, though, it is possible to disconnect the disks through hypercall 12.

https://sites.google.com/site/chitchatvmback/backdoor is a listing of known hypercalls.  There's also the open-vm-tools source code, but trying to read that was ultimately a major waste of my time.
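
For anyone who wants to poke at this themselves, here's a rough sketch of what the knock looks like from userland. This is only an illustration of the calling convention, not the actual tool from this thread: the constants are the ones documented on the page linked above (magic 0x564D5868 in EAX, port 0x5658, command number in ECX, with 0x0A being "get version"; 11 and 12 use the same convention), and I'm assuming the usual behaviour that the IN simply faults (SIGSEGV on Linux) when there is no VMware backdoor underneath.

/* Sketch: knock on the VMware backdoor from userland on an x86/x86_64
 * Linux guest or container.  Assumption: with no VMware underneath, the
 * IN from ring 3 raises a general-protection fault, which Linux delivers
 * as SIGSEGV, so we catch that and report "no backdoor". */
#include <stdio.h>
#include <stdint.h>
#include <signal.h>
#include <setjmp.h>

#define VMW_MAGIC 0x564D5868u      /* "VMXh" */
#define VMW_PORT  0x5658u          /* "VX"   */
#define VMW_CMD_GETVERSION 0x0Au   /* commands 11/12 use the same convention */

static sigjmp_buf no_backdoor;

static void fault(int sig)
{
    (void)sig;
    siglongjmp(no_backdoor, 1);
}

int main(void)
{
    uint32_t eax = VMW_MAGIC, ebx = 0, ecx = VMW_CMD_GETVERSION, edx = VMW_PORT;

    signal(SIGSEGV, fault);
    if (sigsetjmp(no_backdoor, 1)) {
        puts("backdoor port faulted: no VMware hypervisor underneath");
        return 1;
    }

    /* The hypercall itself: IN from the magic port.  Under VMware the
     * hypervisor answers by rewriting EAX/EBX/ECX/EDX; OpenVZ, being a
     * container and not a hypervisor, never sees this instruction. */
    __asm__ __volatile__("inl %%dx, %%eax"
                         : "+a"(eax), "+b"(ebx), "+c"(ecx), "+d"(edx));

    if (ebx == VMW_MAGIC) {
        printf("VMware backdoor answered (version %u): this container is slabbed on VMware\n",
               (unsigned)eax);
        return 0;
    }
    puts("port answered, but not with the magic value: probably not VMware");
    return 1;
}

Compile with gcc on the container itself; no special privileges are needed beyond whatever the distro normally gives a user process.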
 

wcypierre

New Member
(quoting kaniini's explanation of the VMware hypercall interface above)
hmm... interesting. Gotta brush up my low-level skills before I can get my head around how these things actually work :D
 

PwnyExpress

New Member
I really don't get the point of the drama behind this.

As other posters have pointed out, running OVZ or LXC under an HVM (what you guys call "slabbing"), if done properly, practically gives you free HA on your HVM nodes at the very least. I'm thinking of regular hardware maintenance, where you'd otherwise have to take down the parent node along with your customers' nodes.
 

kaniini

Beware the bunny-rabbit!
Verified Provider
(quoting PwnyExpress's post above)
What drama, exactly? There is no drama in this thread; it is solely a technical discussion of whether or not nested virtualization can be successfully concealed from an OpenVZ container.
 

jmginer

New Member
Verified Provider
Yeah! In the past we deployed OpenVZ servers on KVM nodes.

We published it here: http://lowendtalk.com/discussion/7977/openvz-inside-kvm

It's a good setup for managing backups...

We don't do it any more; we prefer to provide better performance, and we now use bacula4host to take backups every 4 hours.

Regards!

Slabbing would explain how he's going to use that new /24 that's SWIPed to him... Phoenix is apparently next on the GVH slabathon, with another /24 there: http://whois.arin.net/rest/org/GVH-8/pft

edit:

Ginernet was slabbing (OpenVZ inside KVM, 2 KVMs per server) and there was a noticeable hit of 50%+ in disk performance with the SSD drives they were using. I think they've stopped slabbing now, but they were one of the few hosts who openly admitted to doing it (the ease of live migrations you mentioned was one reason they did it).

In terms of stability, that slabbed VPS was a nightmare, with frequent reboots, but that may have been due to the node and the data centers both being DDoS magnets rather than the slabbing (although KVM's tendency to lock up when it gets hit with a DDoS may have been a contributing factor to the reboots).
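
For what it's worth, if anyone wants to eyeball that kind of sequential-write gap themselves, something like the sketch below works. It's a hypothetical O_DIRECT write test, not anything from this thread: the file name and sizes are arbitrary, and it only measures raw sequential writes, so don't expect it to reproduce the exact 50% figure; run it on the slabbed VPS and on a comparable bare-metal box and compare the numbers.

/* Hypothetical sequential-write probe: write 256 MiB with O_DIRECT and
 * report MB/s.  Linux-only; some container filesystems may reject
 * O_DIRECT (open() fails with EINVAL), in which case drop that flag. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (1 << 20)        /* 1 MiB per write, O_DIRECT-aligned */
#define TOTAL (256L * CHUNK)   /* 256 MiB total */

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, CHUNK) != 0) { perror("posix_memalign"); return 1; }
    memset(buf, 0xAB, CHUNK);

    int fd = open("slabtest.bin", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long done = 0; done < TOTAL; done += CHUNK) {
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
    }
    fsync(fd);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    unlink("slabtest.bin");

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s sequential write\n", (TOTAL / 1048576.0) / secs);
    free(buf);
    return 0;
}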
 