How To: Determining how many 'VPS neighbors' you have or if you are on an oversold OpenVZ node

MannDude

Just a dude
vpsBoard Founder
Moderator
First things first, let's give credit where credit is due: I wanted this to have more visible coverage here, and originally stumbled upon this information in a discussion elsewhere which references this site.

Because part of that discussion covers how to determine the number of containers when logged into the host node directly (i.e., as the provider), I am skipping that, as that's not the access a normal end user would have. Instead I'll be focusing on how to determine the container count on an OpenVZ node when logged into your own container as a customer. Please note that this may not work on every OpenVZ container, as @mitgib pointed out; however, I've tested it on a couple of containers myself and have posted the results below.

I am certain someone else can chime in with more technical information, as my understanding of this is limited.

The command


cat /proc/cgroups

Yep, that's it.

Your results should appear similar to this:


root@dev:~# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 295 1
cpu 3 295 1
cpuacct 3 295 1
devices 4 294 1
freezer 4 294 1
net_cls 0 1 1
blkio 1 299 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 294 1


OR

root@other-dev:~# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 2 75 1
cpu 2 75 1
cpuacct 2 75 1
devices 3 74 1
freezer 3 74 1
net_cls 0 1 1
blkio 1 75 1
perf_event 0 1 1


In the first example, the num_cgroups value on the devices line is 294, so roughly 294 containers appear to be on the host node. Seems like a lot, but it's a budget provider that does not advertise 'non-oversold', and performance is fine for what it is. It's just a cheap VPS, so overselling is of course expected. In the second example, you guessed it, roughly 74 containers appear to be on the host node.
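
If you just want the number itself, a one-liner along these lines should work. This is a minimal sketch, assuming the num_cgroups value (the second column of numbers) on the devices line tracks the container count; as the replies below point out, the raw value runs one higher than the actual count and only reflects running containers:


# Print num_cgroups from the devices line, minus 1 for the host's own cgroup
awk '$1 == "devices" {print $3 - 1}' /proc/cgroups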


If your VPS provider is running an older kernel, your results will likely appear like this instead, without the per-subsystem values shown above:


root@old-kernel:~# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
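
You can check which kernel the node runs before bothering; going by the reports later in this thread, a 042stab106.x kernel produced the full table while 042stab093.4 did not:


uname -r
# e.g. 2.6.32-042stab106.6 (newer, full output)
#      2.6.32-042stab093.4 (older, empty table)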


Feel free to run this on your OpenVZ containers and post your results. Just keep in mind that some providers operate large, beefy nodes, so a high container count alone shouldn't automatically make you assume they are overselling or overloading to a large degree.

EDIT: @Mun made an awesome script here to do this for you:
 

MannDude

Just a dude
vpsBoard Founder
Moderator
If I have missed anything or am incorrect, please let me know. Any corrections made will be added to the original post.
 

DomainBop

Dormant VPSB Pathogen
I only have 3 OVZ VPSes and 2 of them are on the same node in São Paulo, so...

Iniz (kernel 2.6.32-042stab106.6... big beefy dual X5670 with lots of RAM):

cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled
cpuset    3    237    1
cpu    3    237    1
cpuacct    3    237    1
devices    4    236    1
freezer    4    236    1
net_cls    0    1    1
blkio    1    237    1
perf_event    0    1    1
net_prio    0    1    1
memory    2    236    1

Host1Plus... doesn't work, old kernel 2.6.32-042stab093.4
 

Munzy

Active Member
I knew this was going to be made eventually, so I built it with some hopeful ideas and made it at least somewhat "OMG"-proof...


wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php
https://www.qwdsa.com/converse/threads/container-counter.131/

Let me have your suggestions......

Sample:


############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9

----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
179
----------------------------------------------------------------------------
This is from Catalysthost Dallas.
 

DomainBop

Dormant VPSB Pathogen
This test should work with newer versions of LXC and Linux-VServer (used by Edis) too, shouldn't it?
 

rds100

New Member
Verified Provider
Something is not quite right:

# cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled
cpuset    3    4    1
cpu    3    4    1
cpuacct    3    4    1
devices    4    3    1
freezer    4    3    1
net_cls    0    1    1
blkio    1    4    1
perf_event    0    1    1
net_prio    0    1    1
memory    2    3    1


This node has exactly two containers on it.
 

MannDude

Just a dude
vpsBoard Founder
Moderator
Something is not quite right:


# cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled
cpuset    3    4    1
cpu    3    4    1
cpuacct    3    4    1
devices    4    3    1
freezer    4    3    1
net_cls    0    1    1
blkio    1    4    1
perf_event    0    1    1
net_prio    0    1    1
memory    2    3    1

This node has exactly two containers on it.
What is the provider? Age of container? Size of container?

Completely possible if it's a new order, on a new node or a large VM on a small node.
 

WSWD

Active Member
Verified Provider
Interesting... it's actually off by one on every node I tested this on. The actual number is 1 less than what is listed, every time.
 

WSWD

Active Member
Verified Provider
And of course I just read in the other thread that you need to -1.  :angry:    Back to what I was doing before... lol
 

rds100

New Member
Verified Provider
What is the provider? Age of container? Size of container?

Completely possible if it's a new order, on a new node or a large VM on a small node.
This is our test / QA node, hence I know exactly how many containers are on it.

My point was that the count isn't right: when I subtract 1 from the cgroups count, it still leaves 3, and there are only two containers on the node.
 

Munzy

Active Member
This is our test / QA node, hence I know exactly how many containers are on it.

My point was that the count isn't right: when I subtract 1 from the cgroups count, it still leaves 3, and there are only two containers on the node.
You are looking at the wrong number; it's the second column of numbers (num_cgroups). Your devices line reads "devices 4 3 1", and the second number there, 3, minus 1 gives your two containers.

You can try my script and it should give you the right output.
 

rds100

New Member
Verified Provider
You are right, I was looking at the "cpuset" line and should have looked at the "devices" line. So it works, but it only shows the number of running containers; it doesn't count the stopped ones.
 

drmike

100% Tier-1 Gogent
cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 627 1
cpu 3 627 1
cpuacct 3 627 1
devices 4 626 1
freezer 4 626 1
net_cls 0 1 1
blkio 1 627 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 626 1

One of those companies doing the Lowendspirit stuff.
 

MannDude

Just a dude
vpsBoard Founder
Moderator
cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 627 1
cpu 3 627 1
cpuacct 3 627 1
devices 4 626 1
freezer 4 626 1
net_cls 0 1 1
blkio 1 627 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 626 1

One of those companies doing the Lowendspirit stuff.
:eek:

Well, the LES stuff consists of micro containers, right? Still, a lot of eggs in one basket :)
 

drmike

100% Tier-1 Gogent
:eek:

Well, the LES stuff consists of micro containers, right? Still, a lot of eggs in one basket :)
Yeah, it's micro instances - true lowendboxes.  64-256MB offerings.

Mind you, the industry relies on people buying and things sitting there 99%+ idle. I imagine the LES stuff is a curious mixed bag of use... probably a chunk that uses it for VPN 24/7, while the rest of the purchases are bound to be idle and/or abandoned.

I asked someone I know about a 160+ container node (multiple nodes, actually). Currently all those containers together aren't even maxing out 1 core out of more than a dozen.

I've been brutal about load numbers in the past. That really applies in my mind where a company is selling BIG plans, think 1GB and above. Because when you hit 200 containers @ 1GB of RAM, that's 200GB sold on what is likely a 32GB node at most in most shops, a 6+x oversell to physical RAM.
 

drmike

100% Tier-1 Gogent
root@dev:~# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 295 1
cpu 3 295 1
cpuacct 3 295 1
devices 4 294 1
freezer 4 294 1
net_cls 0 1 1
blkio 1 299 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 294 1
Which provider was this?
 

dcdan

New Member
Verified Provider
One of our dev nodes:

Code:
[root@dev3 ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       6323    1
cpu     3       6323    1
cpuacct 3       6323    1
devices 4       6322    1
freezer 4       6322    1
net_cls 0       1       1
blkio   1       6323    1
perf_event      0       1       1
net_prio        0       1       1
memory  2       6322    1
 

k0nsl

Bad Goy
YourServer.se / Makonix: 256MB box located in Stockholm, Sweden.

Code:
root@lindholm:~# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  2       173     1
cpu     2       173     1
cpuacct 2       173     1
devices 3       174     1
freezer 3       174     1
net_cls 0       1       1
blkio   1       175     1
perf_event      0       1       1
net_prio        0       1       1
root@lindholm:~#
 

lbft

New Member
New high score?


# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 1338 1
cpu 3 1338 1
cpuacct 3 1338 1
devices 4 1337 1
freezer 4 1337 1
net_cls 0 1 1
blkio 1 1341 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 1340 1


Leet.
 