# How To: Determining how many 'VPS neighbors' you have or if you are on an oversold OpenVZ node



## MannDude (May 7, 2015)

First things first, let's give credit where credit is due. I originally stumbled upon this information in a discussion that itself references this site, and I wanted it to have more visible coverage here.

Because part of that discussion covers how to determine the number of containers when logged into the host node directly (i.e. as the provider), I'm skipping that, as that's not the access a normal end user would have. Instead I'll focus on how to determine the container count on an OpenVZ node when logged into your own container as a customer. Please note that this may not work on every OpenVZ container, as @mitgib pointed out; however, I've tested it on a couple of containers myself and have posted the results below.

I am certain someone else can chime in with more technical information as my understanding of this is limited.

*The command*


```
cat /proc/cgroups
```

Yep, that's it.

Your results should appear similar to this:


```
[email protected]:~# cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	3	295	1
cpu	3	295	1
cpuacct	3	295	1
devices	4	294	1
freezer	4	294	1
net_cls	0	1	1
blkio	1	299	1
perf_event	0	1	1
net_prio	0	1	1
memory	2	294	1
```


OR


```
[email protected]:~# cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	75	1
cpu	2	75	1
cpuacct	2	75	1
devices	3	74	1
freezer	3	74	1
net_cls	0	1	1
blkio	1	75	1
perf_event	0	1	1
```


In the first example, the number of containers on the host node _should be_ 294. That seems like a lot, but it's a budget provider that does not advertise 'non-oversold', and performance is fine for what it is. It's just a cheap VPS, so overselling is of course expected. In the second example, you guessed it, 74 containers appear to be on the host node.
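To make the counting rule concrete, here's a minimal shell sketch (the `count_containers` helper name is mine, not from the thread). It pulls the `num_cgroups` column from the `devices` line, which later posts in this thread suggest tracks running containers most closely, and subtracts 1 for the host's own cgroup, as also noted further down:

```shell
# count_containers: hypothetical helper, not an official tool.
# Reads /proc/cgroups-style text on stdin, takes the num_cgroups
# value (3rd column) of the 'devices' line, and subtracts 1 for
# the host node's own cgroup.
count_containers() {
  awk '$1 == "devices" { print $3 - 1 }'
}

# Demo against a trimmed copy of the first sample output (devices = 294):
sample='#subsys_name hierarchy num_cgroups enabled
cpuset 3 295 1
devices 4 294 1'

printf '%s\n' "$sample" | count_containers    # prints 293

# On a live container you would run:
#   count_containers < /proc/cgroups
```

Treat the result as an estimate: as discussed below, it misses stopped containers and is off by one on some kernels.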


If your VPS provider is running an older kernel, your results will likely appear like this and will not output the values above:


```
[email protected]:~# cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
```


Feel free to run this on your OpenVZ containers and post your results. Just keep in mind that some providers operate large, beefy VPS nodes, so a high container count shouldn't automatically make you assume they are overselling or overloading their nodes to a large degree.

*EDIT: *@Mun made an awesome script here to do this for you:


----------



## MannDude (May 7, 2015)

If I have missed anything or am incorrect, please let me know. Any corrections made will be added to the original post.


----------



## DomainBop (May 7, 2015)

I only have 3 OVZ VPS's and 2 of them are on the same node in Sao Paulo so...

Iniz ( kernel 2.6.32-042stab106.6...big beefy dual X5670 with lots of RAM):

```
cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled
cpuset    3    237    1
cpu    3    237    1
cpuacct    3    237    1
devices    4    236    1
freezer    4    236    1
net_cls    0    1    1
blkio    1    237    1
perf_event    0    1    1
net_prio    0    1    1
memory    2    236    1
```

Host1Plus...doesn't work, old kernel  2.6.32-042stab093.4


----------



## Munzy (May 7, 2015)

I knew this was going to be made eventually, so I built it with the hopeful idea of making it at least somewhat "OMG"-proof...


```
wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php
```
https://www.qwdsa.com/converse/threads/container-counter.131/

Let me have your suggestions......

Sample:


```
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9

----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
179
----------------------------------------------------------------------------
```
This is from Catalysthost Dallas


----------



## DomainBop (May 7, 2015)

This test should work with newer versions of LXC and Linux- VServer (used by Edis) too shouldn't it?


----------



## rds100 (May 7, 2015)

Something is not quite right:

```
# cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled
cpuset    3    4    1
cpu    3    4    1
cpuacct    3    4    1
devices    4    3    1
freezer    4    3    1
net_cls    0    1    1
blkio    1    4    1
perf_event    0    1    1
net_prio    0    1    1
memory    2    3    1
```


This node has exactly two containers on it.


----------



## MannDude (May 7, 2015)

rds100 said:


> Something is not quite right:
> 
> 
> # cat /proc/cgroups
> ...


What is the provider? Age of container? Size of container?

Completely possible if it's a new order, on a new node or a large VM on a small node.


----------



## WSWD (May 7, 2015)

Interesting...it's actually one off on every node I tested this on.  The actual number is 1 less than what is listed, every time.


----------



## WSWD (May 7, 2015)

And of course I just read in the other thread that you need to -1.  :angry:    Back to what I was doing before... lol


----------



## rds100 (May 8, 2015)

MannDude said:


> What is the provider? Age of container? Size of container?
> 
> Completely possible if it's a new order, on a new node or a large VM on a small node.


This is our test / QA node, hence I know exactly how many containers are on it.

My point was that the count isn't right: when I subtract 1 from the cgroups count, it still leaves 3, and there are only two containers on the node.


----------



## Munzy (May 8, 2015)

rds100 said:


> This is our test / QA node, hence i know exactly how many containers are on it.
> 
> My point was that the count isn't right: when I subtract 1 from the cgroups count, it still leaves 3, and there are only two containers on the node.


You are looking at the wrong number; it is the second column of numbers.

You can try my script and it should give you the right output.


----------



## rds100 (May 8, 2015)

You are right, I was looking at the "cpuset" line and should have looked at the "devices" line. So it works, but it only shows the number of running containers; it doesn't count the stopped ones.


----------



## drmike (May 8, 2015)

```
cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	3	627	1
cpu	3	627	1
cpuacct	3	627	1
devices	4	626	1
freezer	4	626	1
net_cls	0	1	1
blkio	1	627	1
perf_event	0	1	1
net_prio	0	1	1
memory	2	626	1
```

One of those companies doing the Lowendspirit stuff.


----------



## MannDude (May 8, 2015)

drmike said:


> cat /proc/cgroups
> #subsys_name	hierarchy	num_cgroups	enabled
> cpuset	3	627	1
> cpu	3	627	1
> ...




Well, the LES stuff consists of micro containers, right? Still, a lot of eggs in one basket.


----------



## drmike (May 8, 2015)

MannDude said:


> Well the LES stuff consists of micro containers, right? Still, lot of eggs in one basket


Yeah, it's micro instances - true lowendboxes.  64-256MB offerings.

Mind you, the industry relies on people buying and things sitting there 99%+ idle. I imagine the LES stuff is a curious mixed bag of use... probably a chunk that uses it for VPN 24/7; the rest of the purchases are bound to be idle and/or abandoned.

I asked someone I know about a 160+ container node (multiples, actually). Currently all those containers together aren't even maxing out 1 core out of more than a dozen.

I've been brutal about load numbers in the past. That really applies, in my mind, where a company is selling BIG plans - think 1GB and above. Because once you hit 200 containers @ 1GB of RAM = 200GB, on what's likely a 32GB node at most shops, that's a 6x+ oversell to physical.


----------



## drmike (May 8, 2015)

MannDude said:


> [email protected]:~# cat /proc/cgroups
> #subsys_name	hierarchy	num_cgroups	enabled
> cpuset	3	295	1
> cpu	3	295	1
> ...


Which provider was this?


----------



## MannDude (May 8, 2015)

drmike said:


> Which provider was this?


RamNode 128MB box.


----------



## dcdan (May 8, 2015)

One of our dev nodes:


```
[[email protected] ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       6323    1
cpu     3       6323    1
cpuacct 3       6323    1
devices 4       6322    1
freezer 4       6322    1
net_cls 0       1       1
blkio   1       6323    1
perf_event      0       1       1
net_prio        0       1       1
memory  2       6322    1
```


----------



## k0nsl (May 8, 2015)

YourServer.se / Makonix: 256MB box located in Stockholm, Sweden.


```
[email protected]:~# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  2       173     1
cpu     2       173     1
cpuacct 2       173     1
devices 3       174     1
freezer 3       174     1
net_cls 0       1       1
blkio   1       175     1
perf_event      0       1       1
net_prio        0       1       1
[email protected]:~#
```


----------



## lbft (May 8, 2015)

New high score?


```
# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 1338 1
cpu 3 1338 1
cpuacct 3 1338 1
devices 4 1337 1
freezer 4 1337 1
net_cls 0 1 1
blkio 1 1341 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 1340 1
```


Leet.


----------



## Awmusic12635 (May 8, 2015)

dcdan said:


> One of our dev nodes:
> 
> 
> [[email protected] ~]# cat /proc/cgroups
> ...


I think you win


----------



## SentinelTower (May 8, 2015)

Wow, this is a nice piece of information. Is there a way for providers to hide this file? However, this only gives us the number of containers; how can we know if the node is oversold?


----------



## Amitz (May 8, 2015)

Damn. I just thought "Cool, let me check that on my VMs!" until I realised that these days I only have Xen VMs and dedicated servers... not a single OVZ left. What a pity.


----------



## tmzVPS-Daniel (May 8, 2015)

FINALLY! A way to prove to clients that you do not over-sell. 

- Daniel


----------



## Geek (May 8, 2015)

You're welcome.


----------



## Kalam (May 8, 2015)

Hmm, apparently every single OpenVZ I have from multiple providers is running an older kernel.


----------



## dabtech (May 8, 2015)

XVM Labs


```
[email protected]:~# cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	583	1
cpu	2	583	1
cpuacct	2	583	1
devices	3	582	1
freezer	3	582	1
net_cls	0	1	1
blkio	1	583	1
perf_event	0	1	1
net_prio	0	1	1
```


----------



## Fusl (May 11, 2015)

Better don't host anything on my test host node at home:


```
[[email protected]:~] vzctl exec $(vzlist -1 | head -1) "cat /proc/cgroups"
Executing command: cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 19275 1
cpu 3 19275 1
cpuacct 3 19275 1
devices 4 19274 1
freezer 4 19274 1
net_cls 0 1 1
blkio 1 19275 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 19274 1
```

And I wonder why it's so really f***ing hot there...


----------



## William (May 11, 2015)

DomainBop said:


> This test should work with newer versions of LXC and Linux- VServer (used by Edis) too shouldn't it?


No, it won't work on Vserver and never will. OpenVZ only. No LXC either.


----------



## HalfEatenPie (May 11, 2015)

SentinelTower said:


> Wow, this is a nice piece of information. Is there a way for providers to hide this file? However this only gives us the number of containers, how can we know if the node is oversold ?


I'm pretty sure general logical reasoning can be applied here.

Knowing how beefy the host node is (if they tell you, that is...) and what kind of configuration they have, you can roughly guess whether it's oversold or not. 

As a general rule of thumb, an E3 node should not have 200 VPSes on it.  Or in matthew's case, several hundreds on a single hard drive setup >.>



Fusl said:


> Better don't host anything on my test host node at home:
> 
> 
> [[email protected]:~] vzctl exec $(vzlist -1 | head -1) "cat /proc/cgroups"
> ...


That...  What do you even run locally that requires that many OpenVZ VPSes?!!!!  I can't even find a reason to run more than 10 VPSes locally!


----------



## Fusl (May 11, 2015)

HalfEatenPie said:


> That...  What do you even run locally that requires that many OpenVZ VPSes?!!!!  I can't even find a reason to run more than 10 VPSes locally!


OpenVZ w/ ploop on an NFS4 mount - Functional and performance testing for the love of OpenVZ


----------



## dcdan (May 11, 2015)

Fusl said:


> OpenVZ w/ ploop on an NFS4 mount - Functional and performance testing for the love of OpenVZ


If you don't mind me asking, what OS template were you using?

Also, how many processes do you see on the host node? (ps aux | wc -l)

Thanks


----------



## Mid (May 12, 2015)

Fusl said:


> OpenVZ w/ ploop on an NFS4 mount - Functional and performance testing for the love of OpenVZ


testing 19273 times !

Sorry to say, but please check with a doc whether you have 'OCD'. Seriously.


----------



## Mid (May 12, 2015)

dcdan said:


> If you don't mind me asking, what OS template were you using?
> 
> Also, how many processes do you see on the host node? (ps aux | wc -l)
> 
> Thanks


I am not an admin or hosting guy, just a casual user.

The OS should be Debian/Ubuntu/CentOS, asking for reliability?

Processes must be 19273 + x, where x < 100.


----------



## dcdan (May 12, 2015)

Mid said:


> I am not an admin or hosting guy, just a casual user
> 
> The OS should be Debian/Ubuntu/CentOS, asking for reliability?
> 
> Processes must be 19273 + x, where x < 100


Well there are two reasons why I am curious:

1) Process count. After about 50000 processes the host node will start locking up. If each VPS runs 3 processes (the absolute minimum, basically just init and two kernel processes), that's already almost 60000.

2) Even a minimal CentOS install times 19273 equals ~10 TB of data.
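dcdan's back-of-the-envelope numbers check out. Here's the arithmetic spelled out (the ~550 MB per minimal CentOS install is an assumed figure for illustration, not one stated in the thread):

```shell
containers=19273
procs_per_ct=3    # init + two kernel processes, dcdan's stated minimum

# Total processes, vs. the ~50000 point where the node starts locking up:
echo $(( containers * procs_per_ct ))                    # prints 57819

# Disk footprint, assuming ~550 MB per minimal CentOS install:
mb_per_ct=550
echo "$(( containers * mb_per_ct / 1024 / 1024 )) TB"    # prints "10 TB"
```

(Integer division; with ploop on shared storage the real footprint could differ, but the order of magnitude holds.)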


----------



## HalfEatenPie (May 12, 2015)

I'm 100% sure @Fusl knows what he's doing. I mean, it was even said that it was for performance testing, so obviously you're going to see some people trying to push the boundaries outside of normal parameters.


----------



## dcdan (May 12, 2015)

HalfEatenPie said:


> I'm 100% sure @Fusl knows what he's doing   I mean it was even said that it was for performance testing, so obviously you're going to see some people trying to push the boundaries outside of normal parameters.


By no means did I imply otherwise. Just genuinely curious.


----------



## MannDude (May 13, 2015)

Ah, looks like they're going to treat this as a bug instead of a feature: https://bugzilla.openvz.org/show_bug.cgi?id=3234


----------



## drmike (May 13, 2015)

MannDude said:


> Ah, looks like they're going to treat this as a bug instead of a feature: https://bugzilla.openvz.org/show_bug.cgi?id=3234


Shame, what sorry sports. It was an honesty feature that made at least one piece of data transparent on providers' servers.

Can't have that sort of thing happening   Must mask the real values... 

What's wrong, a bunch of providers ashamed of their load numbers getting out in public?


----------



## MartinD (May 13, 2015)

Could always overcome it with a level of slabbing.


----------



## dcdan (May 13, 2015)

MartinD said:


> Could always overcome it with a level of slabbing.


But then you would not see all those containers in /proc/cgroups? Slabbing implies you are basically virtualizing your OpenVZ nodes, which in turn would effectively "hide" the containers running in the other slabs.


----------



## MartinD (May 13, 2015)

Yeah, that's what I meant - for providers to 'hide' what they're doing and how many containers are on a node.


----------



## Geek (May 13, 2015)

*eyeroll* a bug?  Really?  Don't think I've ever seen that "username" on the OVZ Bugzilla before, interestingly.  Too bad to have such a technique ruined because someone can't hang...


----------



## devonblzx (May 13, 2015)

mustard man does it again...


----------



## MannDude (May 13, 2015)

MustardMan must be a VPS hosting provider based on activity on LXCenter's forum and this comment:



> Seeing the number of containers on a node should ABSOLUTELY POSITIVELY NOT be
> allowed!  If there is enough interest to make it a feature (I highly doubt
> there is) someone can enable fine but NO WAY should container owners ever be
> able to see this kind of information on the node by default.
> ...



I can understand why a _provider_ may not want this information known however I can not fathom why an end-user would want it unknown.

If you believe it should be listed as a feature, and not a bug, please respond to the report here ( https://bugzilla.openvz.org/show_bug.cgi?id=3234).

I don't see how this is any different than checking the CPU information with _"cat /proc/cpuinfo"_, for example. It's simply a transparency feature that would allow end users to help diagnose VPS issues. A good example of its use would be performance benchmarks and reviews. If you continually have piss-poor performance as an end user, using it alongside other commands that reveal host node information could help you determine that you chose a provider who has significantly oversold their node.

Of course, if I had it my way, you would also be able to compare the host node's actual RAM against the RAM allocated to containers, along with disk space stats.


----------



## Munzy (May 13, 2015)

https://twitter.com/mustardman296/followers

https://bugzilla.openvz.org/show_bug.cgi?id=2069

https://bugzilla.openvz.org/show_bug.cgi?id=1990

https://bugzilla.openvz.org/show_bug.cgi?id=2019

http://forum.lxcenter.org/index.php?t=msg&goto=72013&

https://productforums.google.com/forum/#!msg/drive/2YX9UPDuExw/5yAT6HTyKIgJ


----------



## Geek (May 13, 2015)

LOL... the Twitter.

"HyperVM is safe"

Was that before or after what's-his-name hung himself?


----------



## Geek (May 13, 2015)

I'll stick a few notes in the bug report later on tonight or in the morning.


----------



## MannDude (May 13, 2015)

> This node is running software Raid1


Ouch. Hopefully a dev/test node and not a customer packed production node...


----------



## Jonathan (May 13, 2015)

It's worth pointing out that this is a bug in the kernel and is being patched by Parallels, so don't get too used to being able to do this, guys.


----------



## MannDude (May 13, 2015)

KnownHost-Jonathan said:


> It's worth pointing out that this is a bug in the kernel and is being patched by Parallels so don't get too used to being able to do this guys.


Yep, it's being discussed in the last several comments.

I'd like to think of it as more of a useful feature than a bug, but will likely not be able to sway devs and providers to feel the same way.


----------



## drmike (May 13, 2015)

I love the OH SHIT reaction of providers out there....

This approach, per someone else's testing, shows ONLY ACTIVE containers. Containers that are offline are not counted.

So you're not getting the entire view of the fun on a server, just the active contention pool.


----------



## KuJoe (May 14, 2015)

A new kernel was released this morning to patch this and it's released as a security fix so I'm wondering if KernelCare/Ksplice will treat it like so and providers won't even need to reboot to fix it.

_Now for a quick little rant that most of you probably won't agree with..._

Everybody here should know by now that I am all for transparency and honesty (which leads to trust), but at the same time I am all for businesses being allowed to make their own decisions about how transparent or honest they want to be. If clients ask me for the exact number of VPSs on any given node, I won't tell them, because the number is irrelevant and means nothing to them. The only thing it would do is scare away new clients who see a number over 20 and go "OMG, my disk IO will be in the KB/s, my network speed will be slower than dial-up, and the CPUs are probably maxed out 24x7", without even bothering to look at our server status page or read reviews. Current clients could use that number as an excuse where it doesn't belong: "My VPS is running slow now and it will never get better, because that's expected when you have X VPSs on a node; time to cancel", without even bothering to check if there's a hardware issue or something minor that we are working on. Do we advertise that we don't oversell? No we don't; we explain it right in our FAQ.

Luckily there are a ton of clients and knowledgeable people on this forum who understand that overselling != diminished performance. Unfortunately, for every one of those there are thousands who don't understand that, which is why advertising the population of a node is not beneficial to a VPS provider whose business relies on sales from the general public, where the vast majority aren't very technically inclined and will quickly judge a company based on the perceived negatives of the virtualization used (i.e. "OpenVZ is always oversold", and "companyA has X VPSs per node so they are more oversold than companyB", even though the real numbers don't add up). Imagine if there were a website that listed all of the VPS providers and the number of VPSs per node. Now imagine John Q visits that website and sees a provider he used that really sucked performance-wise, with 100 VPSs per node. He looks for another provider and finds a really nice one, but they have 101 VPSs per node, and John remembers how bad the performance was with 100 VPSs per node, so he doesn't even stop to notice that the new provider doesn't sell 2GB plans for $12/year and doesn't use a bargain-bin special OVH server.

The bottom line is that the number of VPSs per node has no impact on anything and cannot be used to quantify anything performance-wise. All you have is a number out of context, and it will mean different things for different providers. If the number were broken down to show how much RAM and disk space each VPS got, you could see how oversold a node is, but even that doesn't give you a view of the server's performance. If the number were broken down by CPU, RAM/swap, or disk IO usage, then you could get an idea of how over/underworked the server is, but you could see that just by using your VPS for a while, gauging the responsiveness, and running benchmarks. I guess you could use the number to keep track of a company's sales or turnover, but that's not something most companies are willing to disclose either.

For us vpsBoarders, that number is a neat metric but nothing else. Instead of wanting to see the number of VPSs per node, try the VPS in the real world and see if it fits your needs, instead of relying on a number to calculate it for you.


----------



## MartinD (May 14, 2015)

KuJoe, thank you


----------



## Geek (May 14, 2015)

The mixed feelings on this are totally cool, and expected, really. I actually agree with a lot of what Joe said.

I just wish we had a happy medium...just some little way of saying "Yes, we run OVZ but we're not being blatant f***ing idiots about the number of containers running on our nodes."  Yada, yada, yada...


----------



## devonblzx (May 14, 2015)

Geek said:


> The mixed feelings on this are totally cool, and expected, really. I actually agree with a lot of what Joe said.
> 
> I just wish we had a happy medium...just some little way of saying "Yes, we run OVZ but we're not being blatant f***ing idiots about the number of containers running on our nodes."  Yada, yada, yada...


Exactly. I've been using OpenVZ since 2006. Sadly, ask a random person on VPSBoard, LET, WHT, or any other community which virtualization technology is the worst, and most of them will say OpenVZ. Then ask them why: "Because the host can oversell" is the common response.

I constantly have to explain to people that every technology can be oversold (and I have been doing so since ~2008). The difference is that OpenVZ's OS-level virtualization (containerization) is more efficient, so it actually performs better when two nodes are equally oversold. The low requirements of a container simply allow for more containers per node than virtual machines.

This has become an issue because the providers that have oversold to the extreme have used OpenVZ and dragged the technology's name through the dirt. Back in the day, Xen didn't allow for memory and disk overselling, and some people still to this day think that virtual machines can't be oversold. Even back then, they could oversell disk I/O, CPU, network I/O, etc.

So if the companies putting 1000 containers on a box are outed, they probably deserve it, because it is companies like that that have destroyed OpenVZ's reputation.


----------



## sleddog (May 14, 2015)

64MB Bandwagonhost, $3.99/year


```
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	3	1334	1
cpu	3	1334	1
cpuacct	3	1334	1
devices	4	1333	1
freezer	4	1333	1
net_cls	0	1	1
blkio	1	1335	1
perf_event	0	1	1
net_prio	0	1	1
memory	2	1333	1
```


VM works great, love it


----------



## Geek (May 14, 2015)

dcdan said:


> But then you would not see all those containers in  /proc/cgroups? Slabbing implies you are basically virtualizing your OpenVZ nodes which in turn would effectively "hide" containers running in the other slabs.


There are actually a few commands you can throw at a container that can tell you whether or not it's nested inside Xen/KVM, but that can be for a rainy day or something.
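Geek doesn't say which commands he means, but one common check along those lines (a sketch of my own, not necessarily his method): hardware hypervisors such as KVM, Xen HVM, and VMware expose a `hypervisor` flag in `/proc/cpuinfo`, which a bare-metal OpenVZ host would not have. It's not bulletproof (older Xen PV setups differ), but it's a quick first test:

```shell
# is_nested: hypothetical helper; checks stdin for the 'hypervisor'
# CPU flag that KVM/Xen-HVM/VMware guests expose in /proc/cpuinfo.
is_nested() {
  grep -qw hypervisor
}

# Simulated cpuinfo flag lines:
printf 'flags : fpu vme sse hypervisor\n' | is_nested && echo "nested (slab?)"
printf 'flags : fpu vme sse\n' | is_nested || echo "looks bare metal"

# On a live container you would run:
#   is_nested < /proc/cpuinfo && echo "nested (slab?)"
```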


----------



## Onra Host (May 14, 2015)

devonblzx said:


> Back in the day, Xen didn't allow for memory and disk overselling and some people still to this day think that virtual machines can't be oversold.  Even back then, they could oversell disk I/O, CPU, network I/O, etc.


Exactly! Most people have no clue that Xen has been able to be oversold for a long time now, in a couple of different ways too. Though you mostly need the know-how to even pull it off, so it's still better than Joe Schmoe overselling his 32GB E3 with 100+ customers, hehe.


----------



## drmike (May 14, 2015)

KuJoe said:


> Luckily there are a ton of clients and knowledgeable people on this forum who understand that overselling != diminished performance, unfortunately for every one of those there are thousands that don't understand that and thus why advertising the population of a node is not beneficial to a VPS provider whose business relies on sales from the general public where the vast majority aren't very technically inclined and will quickly judge a company based on the negatives of the virtualization use...
> 
> The bottom line is that the number of VPSs per node has no impact on anything and cannot be used to quantify anything performance-wise. All you have is a number out of context of anything and will mean different things for different providers. If the number was broken down to show how much RAM and disk space each VPS got, then you could see how oversold a node is but even that doesn't give you a view point of the server's performance. Now if the number was broken down by CPU, RAM/swap, or disk IO usage, then you can get an idea of how over/underworked the server is...


Well, I agree, but I do take some slight issue with this.

OVZ only has industry traction based on offer price. Show me some mega-cheap annuals for $5-10 on KVM/Xen/etc.? They are ALL OpenVZ; you won't find the others. Someone who wanted to offer such on other virtualization would likely do well, if they could manage the oversell and its implications there.

A container count does speak to the level at which a provider is going to monetize their server. There are reasonable loading levels and reasonable income-per-U numbers. Calling this "optimizing resources" is borderline dishonest, depending on who I am conversing with and their actual knowledge level.

With some of these container counts we have seen, 600-1600 on a server, I fail to see how such a node could perform at even an acceptable level. The only math at play there is scores of idle containers online, sitting with no real use - that's the only way such a loading stunt is viable. Even those had better be running on an E5 to make it believable.

When I see those numbers I think what an insane customer base: buyers that just idle. Those customer bases can only be one thing - extremely cheap, laughably cheap annuals, and very small plans at that.

I wonder how many companies charging $10/GB of RAM or more are loading their nodes so heavily that they would be embarrassed if it were made public? Those are the shops I worry about, as people actually host business-type stuff there (whereas the el cheapo stuff is hobby sandboxing).

Arguably nothing can quantify performance other than the overall perception of the customer inside the container, which is counterproductive to the industry and to providers who actually understand the technology. Top-level full-server CPU, RAM/swap, disk IO, disk IOWAIT, etc. could give a point-in-time snapshot, but even that is flawed - you need graphing over long time periods to make sane sense of it and be believable. That, I hope, becomes the normal approach for companies claiming transparency and wanting to run a good shop.


----------



## DomainBop (May 14, 2015)

Geek said:


> There are actually a few commands you can throw at a container that can tell you whether or not it's nested inside Xen/KVM, but that can be for a rainy day or something.


https://github.com/kaniini/slabbed-or-not - detects slabs nested inside Xen/KVM/VMware/Hyper-V/bhyve



> the companies putting 1000 containers on a box are outed, they probably deserved it because it is companies like that that have destroyed OpenVZ's reputation.


Overloaders are a problem, but the inexperienced hosts (_you know, the ones whose experience consists of hosting a Minecraft server for a couple of friends_) who favor OVZ over other methods because they think it is "easier" than Xen/KVM and can be oversold more have probably done more damage to its reputation.


----------



## Geek (May 14, 2015)

I've seen that slabbed-or-not thing; it just runs the same commands I would anyhow. Can't argue with CPU features.


----------



## KuJoe (May 15, 2015)

drmike said:


> You get some of these container counts we have seen 600-1600 on a server and I fail to see how such could perform even at an acceptable level.   Only math at play there is scores of idle containers online, but sitting with no real use - only way such is viable as a loading stunt.  Even those better be running on an E5 to make it believable.


600 32MB containers != 600 4GB containers, so again, without any additional information about the containers, a container count on the node is useless. You are also correct about the idle containers; that's why, if the container count included usage info, it would be much better for getting an idea. The number of containers by itself can only lead to speculation, not real facts, and we all know speculation can hurt a provider more than actual facts. So assuming the performance will be poor based on a number, without anything else to support it, will only be bad news for those providers who utilize OpenVZ and do it correctly, like @Geek said.


----------



## devonblzx (May 15, 2015)

KuJoe said:


> 600 32MB containers != 600 4GB containers so again, without any additional information about the containers having a container count on the node is useless.


I do understand your argument that some context is required, but I think the context is the responsiveness of your server. If you see a lot of wait or steal time, and then you see there are 600 containers running on the same system, you know what the problem is.

Even with 32MB containers, 600 of them is most likely highly oversold.  Let's say you have an E3, 32GB system (probably a common config for most of these places).

Sure, 600x32MB is ~19GB, so the memory isn't oversold (they might even try to put 1,000 containers on it in that case), but what about CPU?  What about disks?  What about network?

That is a 4-physical-core system with 600 containers sharing it (probably each stated to be able to use 1 CPU).  You could potentially have 600 different containers trying to access the CPU or disk at the same time, especially if cronjobs or scheduled tasks kick off simultaneously.

It comes back to the same thing as the "Xen can't be oversold" argument of 2008.  Yes, there is more to overselling than just memory and disk space.
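The back-of-the-envelope math above can be sketched in a few lines. This is a hypothetical illustration only: the node specs (E3-class, 4 cores, 32GB) and the plan sizes are assumptions from the post, not figures from any real provider, and the `oversell_ratios` helper is invented for this sketch.

```python
# Back-of-the-envelope oversubscription check for a hypothetical node.
# All figures are illustrative assumptions (E3-class: 4 cores, 32GB RAM).

def oversell_ratios(containers, ram_mb_each, vcpus_each,
                    node_ram_gb=32, node_cores=4):
    """Return (RAM ratio, CPU ratio) of sold resources to physical ones."""
    ram_ratio = (containers * ram_mb_each) / (node_ram_gb * 1024)
    cpu_ratio = (containers * vcpus_each) / node_cores
    return ram_ratio, cpu_ratio

ram, cpu = oversell_ratios(containers=600, ram_mb_each=32, vcpus_each=1)
print(f"RAM sold/physical: {ram:.2f}x")   # ~0.59x -- memory not oversold
print(f"CPU sold/physical: {cpu:.0f}x")   # 150x -- heavily oversubscribed
```

Which is the post's point in numbers: the memory fits comfortably, while the CPU is sold 150 times over.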


----------



## KuJoe (May 15, 2015)

devonblzx said:


> I do understand your argument; it does require some context, but I think the context is the responsiveness of your server.  If you see a lot of wait or steal time, and then you see there are 600 containers running on the same system, you know what the problem is.


 That's a bold assumption that might not be correct, though, which is my point. If clients see 600 containers and their VPS is slow, they will automatically assume it's because of the 600 containers and not something else, and most likely cancel their service instead of opening a ticket to get it resolved.



devonblzx said:


> Even with 32MB containers, 600 of them is most likely highly oversold.  Let's say you have an E3, 32GB system (probably a common config for most of these places).
> 
> Sure, 600x32MB is ~19GB, so the memory isn't oversold (they might even try to put 1,000 containers on it in that case), but what about CPU?  What about disks?  What about network?
> 
> ...


 Oversold != diminished performance. Having run a production node with over 600 containers on older hardware than an Intel E3, I can attest that it does not mean the VPSs will be slow, be it CPU, disk I/O, or network speed. The provider's abilities and knowledge matter much more for performance than the number of containers per node. Taking the container count and applying it to anything out of context is as accurate as throwing a dart at a wall full of other possible issues: sure, you might be correct once in a while, but you won't be able to confirm it, and assuming you're right won't benefit anybody.


----------



## DomainBop (May 15, 2015)

> Having run a production node with over 600 containers on older hardware than an Intel E3, I can attest that it does not mean the VPSs will be slow, be it CPU, disk I/O, or network speed.


The "slow" mentioned by @KuJoe and the "responsiveness" mentioned by @devonblzx are both very subjective measures, and their definitions will vary from customer to customer.  There really aren't any objective measures that can definitively define "oversold" (_although when nodes run out of disk space and crash, as happened to me with a Dutch provider, it's a good indication the provider is massively overselling resources_), and for some customers' uses, slightly degraded performance on a node due to overselling may be acceptable.

As a customer I have a simple test: does the VPS do a good job of doing what I need it to do? If YES, I renew.  If NO, I find a new provider.


----------



## drmike (May 15, 2015)

I mean, it comes back to the run-around of smoking customers and banking on perception, as many providers wear the virtual face mask while mugging customers with deceptive advertising...

The only benefit to consumers of packing nodes to the gills is the super low price, perhaps.  This is where a $20 VPS package in real shops is selling for < $3 a month on the cheap side.  If some hobbyist is getting a "deal" for their leisure use of such a machine, then I guess that mutual arrangement works.  Problem is, 99% of VPS companies aren't selling that way or angle.  They are instead talking about high performance, enterprise and all that stuff, bragging about how bad ass they are... Then the poor customer quickly feels like a dupe, and if the customer opens their mouth, the provider hides behind the VPS face mask, using the customer's ignorance as a way to legitimize their sub-par performance.

The same mugging mask is worn in a lot of shops to lure customers with bullshit claims of 24/7 support by certified ninjas who don't sleep and live only to serve the customer's whims.  Again, how does one prove or disprove such?  Through spot checks and perception indicators.  Often companies fail on this and BS about something to justify the failure.  Happens so much that it's sad.

Having bought from MANY VPS companies over the years, I can say firsthand that resource contention, overloading, poor controls on other containers, etc. are precisely why I am no longer with a very long list of companies.  Companies I still have services with very well might overload.  The difference is they inherently get it and go beyond the kid with the VPS-biz-in-a-box and no knowledge.

This is why, outside of a handful of proven companies, I won't even try FREE VPS / trial offer / 30-day-refund containers from providers.  Not even worth the waste of my time to see the same failed generic deployments and knowledge level with gear that is meh - not so impressive.

Can you super load xxx - xxxx containers on a server?  Yes.  Will performance suck if any percentage of those folks actually use it during the same period of time?  Yes.  Is overloading common?  VERY.

Oversold to me simply is the following:

    1. Disk ratio sold unreasonable vs. physical available pool size.  This is especially concerning where providers allocate or sell LARGE disk offers.  Small disk allocation shops could be worse, as running out of disk stands to affect, oh, hundreds of customers on a single node.  I even created a command-line tool way back to "reserve" the disk space your provider allocates.  Probably caused a few out-of-disk situations; oh well, bad providers need some ethics time to reflect.

    2. RAM - hitting out of RAM on the server, going to swap on the bare server.  Also other games to boost RAM via SSD and other hacks.

    3. xxx - xxxx containers but just a single Gbit uplink.  Unsure why damn near everyone thinks a single Gbit is acceptable in a highly shared environment.  Imagine a 600-unit apartment building with a single quarter-inch water line.  Gbit isn't even acceptable on my LAN.

    4. Lack of cores - too many shops still running on dual quad cores or those E3's.  Nice idea when the gear is cheap to buy and the dollar cost vs. loading works out; power packing that stuff, meh.

    5. Stacking customers dense like it's just fine.  This EVERYONE-IS-IDLE guarantee eventually isn't true; someone uses.  Some shops shuffle that off to another server, some call it abuse, etc.  CPUs are better these days, but task switching still costs.  Slam 10k processes on most machines and it's just bad.

In other industries there are best practices on such oversubscription.  Ratios.  In VPS land it remains the wild west: whatever one can get away with, with at best a light observation of performance, often only after the customers complain.

Lack of transparency about servers and real-world performance is what is allowing kids and know-nothings to spring up and compete toe-to-toe.  The gaming and masking is what is creating STUPID and ARTIFICIAL competition in the marketplace.  With no way to differentiate, the guy on a 10-year-old server is on the same footing as a company of 500 employees with certifications running the latest E5's with big RAM.

I know my comments here won't win any fans; it's a provider-dense audience.  I am a consumer of services first and foremost.  Customers matter, and the sooner more shops realize this and limit the insane loading, the better for customers, the industry and real businesses.

++++ get to making numbers on servers publicly transparent.  Maybe not container count, but all other server load data.


----------



## KuJoe (May 15, 2015)

drmike said:


> I know my comments here won't win any fans; it's a provider-dense audience.  I am a consumer of services first and foremost.  Customers matter, and the sooner more shops realize this and limit the insane loading, the better for customers, the industry and real businesses.


How is raising prices for the same exact service better for the clients?


----------



## drmike (May 15, 2015)

KuJoe said:


> How is raising prices for the same exact service better for the clients?


This just doesn't add up.  Exact services don't exist unless two know-nothing vanilla installs of OpenVZ sit on identical hardware and network.  The big big big problem with VPS is most are generic in nature and hardly anyone stands out as different, so the assumption is they are all the same.  True, most shops are generic and kind of similar.  Maybe most shops are power overloading nodes.  Shame on them if so.

You are inferring that lowering prices is good.  (Sure, lowering prices is good for more customer buys / less purchase friction / rejection.)  The question is, why do so many companies have to lower prices to basement levels and then lower the numbers even more?  Are we to adopt this and think that a 2GB VPS at $1 a month is reasonable because it's mathematically doable on the latest hardware with mass oversell?

Lower prices ARE NOT better for consumers.  If this were true, every major hosting BIG BUSINESS would have lowered their numbers to compete with lowend-priced shops.  Customers would have demanded lower prices.  Big hosting shops are barely lowering prices.  They are often MUCH higher per month than what a lowend shop charges per YEAR.  Lots of customers just won't buy cheap because it's unsound as a business: from support, to lack of phone support, to lack of auditing, to lack of any ABOUT US info, to the just plain sketchy fly-by-night feeling most give.

Raising prices to sustainable levels SHOULD guarantee real business things - like an actual staffed help desk working in the native tongue and at an acceptable knowledge level.  It should pay for monitoring and proactive staffing.  It might pay for an actual administrator to manage systems.  It should also bear some relationship to the number of containers on a server and resource contention - this point gets foggier as hardware becomes more capable of masking the load and contention, even during usage flare-ups.

The way I see it, a server has potential.  Often that potential is a mirror of the company selling it (the humans, their comforts, self-worth, etc.).  Yes, there are some bargain lines from diverse companies that are doing it for biz diversity or market testing or lead gen - not many though.  The potential in a server is $XXX-$XXXX of monthly income.

If I run a more formal host with sustainable numbers, I might get $10/GB without sales reducing the income.  I can load a 64GB box 1:1 with 64 1GB containers (ignoring load and mixed-use-case issues) and have $640 income from 64 containers.

In contrast, the price-lowering approach will park hundreds of containers in there, upping contention, upping the horror pool when breakage happens, pushing, oh, 150-1000 customers in there, and often not even hitting that $640 income number.

You are inferring that stacking many hundreds of containers is the exact same service for clients as loading a few dozen.  It isn't.  Conceptually the same, sure.

These issues are why more people are moving away from VPS and distrusting it in general.  We dealt with similar can-do approaches with shared hosting years ago.  The difference here is VPS means access to a shell and more indicators of slowness and abuse.  That's why VPS customers are less clingy to brands and more likely to up and leave in a heartbeat --- plus a subset is just more technically able to do what they need (not stuck in a coddled GUI wrapper).

The whole situation reminds me of the concept of dilution.  Think about it in soda pop terms - as we all have probably had a glass - you order a Coke or Pepsi and it arrives at your table flat and tasting barely like it should.  Why?  Often they are diluting the product with more water to lower their commodity cost on the syrup.  A lowly cheap glass of soda probably costs the establishment 9 cents (well, used to a decade ago), and they are skimming as if their commodity price is too high.  In the process of diluting, they are ruining the reputation of the commodity syrup company (Coke or Pepsi) and they are beating the customer.

Never, though, in said restaurants have I seen them take the $1.50-2.50 soda price and lower it to 58 cents just to sell more.  I am entirely sure I've never seen such a place drop the price to 58 cents and then go intentionally diluting the product because they reduced their profitability.

^ --- This is how I feel lots of VPS shops are operating: banking on their customers having paid too little to complain, or having spent so little on an impulse buy that they aren't even using the services.

Non-active / idle customers might be good for node stretching, but they are horrendous customers.  Retention rates are going to be low since they don't need or want the services.  When a company has tons of that, it should revisit its business concept and customer base.


----------



## KuJoe (May 15, 2015)

drmike said:


> This just doesn't add up.  Exact services don't exist unless two know-nothing vanilla installs of OpenVZ on identical hardware and network.  Big big big problem with VPS is most are generic in nature and hardly anyone goes and stands out as different, so assumption is they are all same.   True, most shops are generic and kind of similar.  Maybe most shops are power overloading nodes   Shame on them if so.


I'm talking about the same provider offering the same services for different prices, like you are suggesting. If CompanyA is selling a service for $2/month but needs to sell it for $20/month to avoid having a high number of VPSs per node, then you're still getting the same service from CompanyA but for a higher price. Sure, you might get more resources, but not everybody wants more resources, and they especially don't want to pay a premium for resources when they only need a fraction of them.

A lot of companies offer lower prices so consumers don't have to spend a fortune on services they want/need. I personally enjoy offering an affordable service that most people can fit into their budget, even if it means not collecting a paycheck myself. I'm taking the hit so clients don't have to, and in return I ask that our clients and potential clients don't judge us based on a number that means nothing in the real world.


----------



## drmike (May 15, 2015)

KuJoe said:


> I'm talking about the same provider offering the same services for different prices, like you are suggesting. If CompanyA is selling a service for $2/month but needs to sell it for $20/month to avoid having a high number of VPSs per node, then you're still getting the same service from CompanyA but for a higher price. Sure, you might get more resources, but not everybody wants more resources, and they especially don't want to pay a premium for resources when they only need a fraction of them.


The same provider offering two price points is doable.  One would be dedicated and guaranteed resources and the other would be pot-luck shared.  Typically one is KVM for the dedicated and the other is OpenVZ for the shared.

Ideally more companies get around to rightly priced dedicated-resource offerings - there's a big lack of such.  I like what ServerAxis does with guaranteed IOPS and some of their terms and marketing.  Shared (OpenVZ) is nice, but it's basically the hosting ghetto, like always.  So much destruction has been done to OpenVZ's reputation that I do believe we shall see the death of OpenVZ soon.



> A lot of companies offer lower prices so consumers don't have to spend a fortune on services they want/need. I personally enjoy offering an affordable service that most people can fit into their budget even if it means not collecting a paycheck myself. I'm taking the hit so clients don't have to and in return I ask that our clients and potential clients don't judge us based on a number that means nothing in the real world.


Why not just offer free hosting then?  Strange thing, to work for free.  That's a hobby brand, and nothing wrong with such if so.  The point where the money means not having a paycheck but still putting in the time is rough.  Idealistic or something; who am I to judge.  Probably been there in different ways myself outside of hosting.  Never stuck to that for years on end though.  Man must have paid work, hobbies and free time.  KuJoe has work + free work + a time machine?  You must


----------



## KuJoe (May 16, 2015)

This thread got me thinking about how we can instill confidence in our clients and potential clients without the need for this useless number and I hope I'm on the right track: http://drgn.biz/servers/

For a fun little experiment, try to guess how many VPSs per node using the ranges below and see if you can determine the node population based on actual performance metrics:

More than 200 VPSs (2 nodes)

100-199 VPSs (3 nodes)

50-99 VPSs (4 nodes)

Less than 50 VPSs (3 nodes)

 

_Disclosure: Backups are currently running on some of the nodes so it may skew the performance results a bit._

_Also take note of the server name: ovz = OpenVZ, kvm = KVM, bkup = OpenVZ_


----------



## devonblzx (May 16, 2015)

KuJoe said:


> That's a bold assumption that might not be correct, though, which is my point. If clients see 600 containers and their VPS is slow, they will automatically assume it's because of the 600 containers and not something else, and most likely cancel their service instead of opening a ticket to get it resolved.
> 
> 
> Oversold != diminished performance.


That's why I said "if you see a lot of wait or steal time".  That is the context.  Wait time represents the time your CPU is waiting on the disk; steal time represents the time your virtual processor is waiting on the physical processor.

I think the bold assumption is that, with 4 physical cores and 600 containers, 596 of them will be idle at any given point in time.  Otherwise, it does in fact mean diminished performance, even if it is not very noticeable.
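As a rough sketch of what "a lot of wait or steal time" means in numbers: the first line of `/proc/stat` exposes cumulative CPU tick counters, including the `iowait` (5th) and `steal` (8th) fields. The snippet below turns such a line into percentages. The sample counters are invented for illustration, and on a real VPS you would read `/proc/stat` twice, a second apart, and diff the counters rather than use the lifetime totals.

```python
# Compute iowait and steal percentages from a /proc/stat "cpu" line.
# SAMPLE is a made-up line; on a live system, read the first line of
# open("/proc/stat") instead (twice, and diff, for a current reading).

SAMPLE = "cpu  10000 50 3000 80000 4000 0 100 2500 0 0"

def cpu_pressure(stat_line):
    """Return (%iowait, %steal) from a /proc/stat cpu line."""
    fields = [int(x) for x in stat_line.split()[1:]]
    total = sum(fields)
    # Field order: user nice system idle iowait irq softirq steal ...
    iowait, steal = fields[4], fields[7]
    return 100 * iowait / total, 100 * steal / total

iowait_pct, steal_pct = cpu_pressure(SAMPLE)
print(f"iowait: {iowait_pct:.1f}%  steal: {steal_pct:.1f}%")
```

Tools like `vmstat` and `top` report the same `wa` and `st` figures, which is usually the quicker way to check.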


----------



## drmike (May 17, 2015)

KuJoe said:


> This thread got me thinking about how we can instill confidence in our clients and potential clients without the need for this useless number and I hope I'm on the right track: http://drgn.biz/servers/
> 
> For a fun little experiment, try to guess how many VPSs per node using the ranges below and see if you can determine the node population based on actual performance metrics:
> 
> ...


Interesting public view.  I give you mass credit for that.   Hoping more folks adopt a similar approach.

Let's see if I can gander some guesses and get egg on my face


----------



## Fusl (Jun 2, 2015)

dcdan said:


> Well there are two reasons why I am curious:
> 
> 1) Process count. After about 50000 processes on the host node it will start locking up. If each VPS runs 3 processes (absolute minimum, basically just init and two kernel processes) that's already almost 60000.
> 
> 2) Even a minimal centos install times 19273 equals 10 TB of data


1) OpenVZ runs fine with (currently counting) 638776 processes if correctly tweaked. As a part-time OpenVZ developer I pretty much know how to get the most out of it.  (*edit* Also, the load average with this number of processes is 0.17.)

2) Ever heard of deduplication? Compression? Also, we do have that much storage at home; 10TB would not be an issue at all.


----------



## QHoster.com (Aug 24, 2015)

What is the meaning of "VPS" and "guaranteed resources" if you place 500 VPSes x 512MB RAM on a 32GB RAM node?

What is the difference between offering a 512MB, 4GB, or 8GB RAM VPS when you place 500 VPSes - a number that can at any time cause 150GB of RAM usage - on a 32GB node?

"Who will offer more 'resources' for less $ ?"


----------



## zionvps (Aug 29, 2015)

Arguments aside, I ran this in one of the OpenVZ containers and got this:

 cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       1       1
cpu     3       1       1
cpuacct 3       1       1
devices 4       1       1
freezer 4       1       1
net_cls 0       1       1
blkio   1       1       1
perf_event      0       1       1
net_prio        0       1       1
memory  2       1       1

My node definitely does not have one container.
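Mechanically, the estimate this whole thread is built on is just "the largest `num_cgroups` value among enabled subsystems, minus one for the root cgroup". A minimal sketch of that reading, run against a hardcoded sample modeled on the outputs earlier in the thread (on a live container you would read `/proc/cgroups` itself, and a patched kernel like the one above will simply report 1, giving an estimate of 0):

```python
# Estimate OpenVZ neighbor count from /proc/cgroups-style output by
# taking the largest num_cgroups value among enabled subsystems.
# SAMPLE mirrors the first example in this thread; on a live container
# you would use open("/proc/cgroups").read() instead.

SAMPLE = (
    "#subsys_name\thierarchy\tnum_cgroups\tenabled\n"
    "cpuset\t3\t295\t1\n"
    "cpu\t3\t295\t1\n"
    "cpuacct\t3\t295\t1\n"
    "memory\t2\t294\t1\n"
)

def estimate_containers(text):
    counts = []
    for line in text.splitlines():
        if line.startswith("#"):
            continue
        name, hierarchy, num_cgroups, enabled = line.split()
        if enabled == "1":
            counts.append(int(num_cgroups))
    # Subtract 1 for the host's root cgroup, per the thread's reading.
    return max(counts) - 1

print(estimate_containers(SAMPLE))  # prints 294
```

Note this is an estimate at best: as the posts above point out, other cgroups on the host can inflate the number, and patched kernels hide it entirely.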

Also, your other script's output is:

/# wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php
--2015-08-29 15:26:10--  http://cdn.content-network.net/Mun/apps/container_counter/script.txt
Resolving cdn.content-network.net (cdn.content-network.net)... 2607:5600:5cf::2, 192.254.77.173
Connecting to cdn.content-network.net (cdn.content-network.net)|2607:5600:5cf::2|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2712 (2.6K) [text/plain]
Saving to: `STDOUT'

100%[==============================================================================================================================>] 2,712       --.-K/s   in 0s

2015-08-29 15:26:10 (283 MB/s) - written to stdout [2712/2712]


############################################################################
                                Container Counter
############################################################################
    By: Mun
    Ver: 1.0
    Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
    CPU(s):
----------------------------------------------------------------------------
 AMD xxxx
 AMD xxxx
 AMD xxxx
 AMD xxxx
----------------------------------------------------------------------------
    Kernel:
----------------------------------------------------------------------------
2.6.32-042stab106.4

----------------------------------------------------------------------------
    Container(s) On Node:
----------------------------------------------------------------------------
0
----------------------------------------------------------------------------


----------



## Geek (Aug 29, 2015)

Eh, this was patched out and replaced with a fake output a few months ago.

There are other ways of getting some details about the node if you know where to look.  Hopefully mustard fuck can hold it together if I ever decide to post how to determine if your VPS is running on a blade. Being that it's completely irrelevant, there's really no need...


----------



## Obelus (Aug 12, 2018)

Is there any other way to see how many neighbors we have, since this one is not working anymore?


----------

