How To: Determining how many 'VPS neighbors' you have or if you are on an oversold OpenVZ node

Awmusic12635

Active Member
Verified Provider
One of our dev nodes:


[[email protected] ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       6323    1
cpu     3       6323    1
cpuacct 3       6323    1
devices 4       6322    1
freezer 4       6322    1
net_cls 0       1       1
blkio   1       6323    1
perf_event      0       1       1
net_prio        0       1       1
memory  2       6322    1
I think you win
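(A minimal sketch of the trick being used here: the third column of /proc/cgroups is num_cgroups, and OpenVZ creates roughly one cgroup per container in each subsystem, plus the host's root cgroup. The helper name below is hypothetical, not from the thread.)

```shell
# Rough neighbor estimate from a /proc/cgroups-style file: print the
# num_cgroups column (3rd field) for the "cpu" subsystem, minus 1 for
# the host's root cgroup. Counts are per-subsystem and approximate.
count_neighbors() {
  awk '$1 == "cpu" { print $3 - 1 }' "$1"
}

# Inside an OpenVZ container, run it against the live file:
# count_neighbors /proc/cgroups
```

Note this is a rough upper bound: num_cgroups can include cgroups that aren't containers, and providers on newer kernels may not expose the file at all.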
 

SentinelTower

New Member
Wow, this is a nice piece of information. Is there a way for providers to hide this file? However, this only gives us the number of containers; how can we know if the node is oversold?
 

Amitz

New Member
Damn. I just thought "Cool, let me check that on my VMs!" until I realised that these days I only have Xen VMs and dedicated servers... Not a single OVZ left. What a pity.
 

Fusl

New Member
Better not host anything on my test host node at home:


[[email protected]:~] vzctl exec $(vzlist -1 | head -1) "cat /proc/cgroups"
Executing command: cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 19275 1
cpu 3 19275 1
cpuacct 3 19275 1
devices 4 19274 1
freezer 4 19274 1
net_cls 0 1 1
blkio 1 19275 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 19274 1

And I wonder why it's so f***ing hot in there...
 

HalfEatenPie

The Irrational One
Retired Staff
Wow, this is a nice piece of information. Is there a way for providers to hide this file? However, this only gives us the number of containers; how can we know if the node is oversold?
I'm pretty sure general logical reasoning can be applied here.

Assuming you know how beefy the host node is (if they tell you, that is...) and what kind of configuration they have, you can roughly estimate whether it's oversold or not.

As a general rule of thumb, an E3 node should not have 200 VPSes on it.  Or, in matthew's case, several hundred on a single-hard-drive setup >.>
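That rule of thumb can be turned into back-of-envelope arithmetic. A sketch in shell, where every figure is an assumption for illustration (nothing here comes from a real node):

```shell
# Back-of-envelope RAM oversell ratio. All numbers are assumptions:
node_ram_gb=32      # assumed physical RAM on an E3-class node
plan_ram_gb=1       # assumed RAM per advertised plan
containers=200      # container count, e.g. estimated from /proc/cgroups

# Integer ratio of RAM sold to RAM available; anything above 1x means
# the node is overselling memory (CPU and disk can be oversold too).
echo "oversell ratio: $(( containers * plan_ram_gb / node_ram_gb ))x"
```

With those assumed numbers the ratio comes out to 6x, which is well beyond what burst/vSwap can reasonably cover.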

Better not host anything on my test host node at home:


[[email protected]:~] vzctl exec $(vzlist -1 | head -1) "cat /proc/cgroups"
Executing command: cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 19275 1
cpu 3 19275 1
cpuacct 3 19275 1
devices 4 19274 1
freezer 4 19274 1
net_cls 0 1 1
blkio 1 19275 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 19274 1

And I wonder why it's so f***ing hot in there...
That...  What do you even run locally that requires that many OpenVZ VPSes?!!!!  I can't even find a reason to run more than 10 VPSes locally!
 

dcdan

New Member
Verified Provider
OpenVZ w/ ploop on an NFS4 mount - Functional and performance testing for the love of OpenVZ :)
If you don't mind me asking, what OS template were you using?

Also, how many processes do you see on the host node? (ps aux | wc -l)

Thanks
 

Mid

New Member
If you don't mind me asking, what OS template were you using?

Also, how many processes do you see on the host node? (ps aux | wc -l)

Thanks
I am not an admin or hosting guy, just a casual user 

The OS should be Debian/Ubuntu/CentOS, if you're asking about reliability?

processes must be 19273 + x, where x < 100 

:)
 

dcdan

New Member
Verified Provider
I am not an admin or hosting guy, just a casual user 

The OS should be Debian/Ubuntu/CentOS, if you're asking about reliability?

processes must be 19273 + x, where x < 100 

:)
Well there are two reasons why I am curious:

1) Process count. After about 50000 processes, the host node will start locking up. If each VPS runs 3 processes (the absolute minimum: basically just init and two kernel processes), that's already almost 60000.

2) Even a minimal CentOS install times 19273 containers equals about 10 TB of data.

:)
 

HalfEatenPie

The Irrational One
Retired Staff
I'm 100% sure @Fusl knows what he's doing ;)  I mean it was even said that it was for performance testing, so obviously you're going to see some people trying to push the boundaries outside of normal parameters.  
 