# Before updating to the latest SolusVM release...



## Geek (May 7, 2015)

Copy your /usr/local/solusvm/data/config.ini and store it for reversal immediately afterwards.

.....sigh. These are primary UBCs that even _OpenVZ_ considers obsolete, and they could potentially degrade performance on mid/large-sized nodes with plenty of room to breathe.



> Several OpenVZ configuration defaults have been added on a per hypervisor basis. Changes can be made to /usr/local/solusvm/data/config.ini:
> 
> [OPENVZ]
> 
> ...


----------



## Geek (May 7, 2015)

Also, there is now a way for OpenVZ users to identify the *exact* number of containers on a node, even containers that are shut down. The only real way of preventing this from being seen now is if a provider never updates past a certain kernel. And we're at least three kernels into its functionality now, so if you use KC/KS, it's already there.

Hope none of you were lying about overselling, 'cause it's about to get real.


----------



## Francisco (May 7, 2015)

Those are some retarded default UBCs.

Francisco


----------



## SolusVM (May 7, 2015)

Geek said:


> Copy your /usr/local/solusvm/data/config.ini and store it for reversal immediately afterwards.
> 
> .....sigh. These are primary UBCs that even _OpenVZ_ considers to be obsolete and could potentially degrade performance on mid/large sized nodes with plenty of room to breathe.


No need, they are in the .example file only and commented out by default.


----------



## Geek (May 7, 2015)

SolusVM said:


> No need, they are in the .example file only and commented out by default.


That's good to know for when I run it in QA.

Until then, I take little comfort in that just yet. A default SolusVM RHEL6 installation still uses beancounters in odd ways, ships a default CentOS install that's six years old now, conflicts with vztmpl --update, and seems to rely on a vz.conf that still falls back to UBC parameters if the "burst" (vswap calculation) value is not greater than or equal to its physical memory allocation.
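To illustrate the fallback being described: on vswap-capable kernels a container is memory-managed by vswap when its config carries PHYSPAGES/SWAPPAGES limits, and only falls back to the old UBC parameters otherwise. A minimal sketch of a CT config fragment (the parameter names are real OpenVZ ones; the values are purely illustrative):

```
# Illustrative vswap-style limits in a CT config (barrier:limit, in
# 4 KB pages): 512 MB of RAM plus 512 MB of vswap.
PHYSPAGES="0:131072"
SWAPPAGES="0:131072"
```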





Any chance of getting this a little more updated, so that it's... well... recent?


----------



## Munzy (May 7, 2015)

I'm curious how one finds the number of containers running on a box. I have looked through most of the files I know OpenVZ creates, but I can't seem to find anything relating to the number of containers running/sleeping on a node from inside a container, nor any documentation on it.


----------



## Geek (May 7, 2015)

There are no docs out there because it's a fairly new introduction, and I don't think many people (if any) have figured out that it's even possible yet, nor the connection between the two.

http://jetfirenetworks.com/blog/you-can-easily-identify-oversold-openvz/

Or just follow this down the line.


```
[[email protected] ~]# vzlist -a

CTID NPROC STATUS IP_ADDR HOSTNAME
1001 11 running 2607:ff68:104:b ipv6.jetfi.re
1002 11 running - -
1004 11 running - -
1005 11 stopped - -
1006 11 running - -
1007 11 running - -
2001 11 running - -
2002 11 running - -
2003 11 suspended - -
2004 11 running - -
2005 11 running - -
2006 11 running - -
2007 11 running - -
2008 11 running - -
2009 11 running - -
2010 11 running - -
3001 11 running - -
3002 11 running - -
3003 11 running - -
3004 11 running - -
3005 11 running - -
3006 11 running - -
3007 11 running - -
3008 11 running - -
3009 11 running - -
3010 11 running - -
```

Subtract the output header from the wc count:


```
[[email protected] ~]# vzlist -a |wc -l
27

[[email protected] ~]# ssh 2607:ff68:104:b***:: 
[email protected]:ff68:104:b***::'s password:
```
From inside your VPS: exclude the HN, i.e. subtract 1.


```
[[email protected] ~]# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 3 28 1
cpu 3 28 1
cpuacct 3 28 1
devices 5 27 1
freezer 5 27 1
net_cls 0 1 1
blkio 1 28 1
perf_event 0 1 1
net_prio 0 1 1
memory 2 27 1
```


  Have fun.
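Geek's two steps (node-side vzlist count, CT-side cgroup count minus one for the hardware node) can be sketched in a few lines of shell; the sample listing below is illustrative rather than taken from a real node:

```shell
# Sketch: derive a container count from /proc/cgroups-style output.
# num_cgroups is the third column; the devices subsystem carries one
# cgroup per container plus one for the hardware node itself, so from
# inside a CT we subtract 1. A real CT would read /proc/cgroups
# directly; this sample input is illustrative only.
sample='#subsys_name hierarchy num_cgroups enabled
cpuset 3 28 1
devices 5 27 1
freezer 5 27 1
memory 2 27 1'

count=$(printf '%s\n' "$sample" | awk '$1 == "devices" { print $3 - 1 }')
echo "containers visible (excluding HN): $count"
# prints: containers visible (excluding HN): 26
```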


----------



## mitgib (May 7, 2015)

I do not see this working on Ksplice-patched kernels:


```
[[email protected] ~]# uname -a
Linux e5clt19.hostigation.com 2.6.32-042stab081.5 #1 SMP Mon Sep 30 16:52:24 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux
[[email protected] ~]# uptrack-uname -a
Linux e5clt19.hostigation.com 2.6.32-042stab108.1 #1 SMP Thu Apr 23 19:17:11 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[[email protected] ~]# vzlist -a|wc -l
26
[[email protected] ~]# vzctl enter 6931
entered into CT 6931
[email protected] [/]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
[email protected] [/]#
```

So how about KernelCare?


```
[[email protected] ~]# vzlist -a | wc -l
148
[[email protected] ~]# uname -a
Linux e5la20.hostigation.com 2.6.32-042stab104.1 #1 SMP Thu Jan 29 12:58:41 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[[email protected] ~]# /usr/bin/kcarectl --info
kpatch-state: patch is applied
kpatch-for: Linux version 2.6.32-042stab104.1 ([email protected]) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Thu Jan 29 12:58:41 MSK 2015
kpatch-build-time: Mon May  4 22:28:29 2015
kpatch-description: 6;2.6.32-042stab108.1

[[email protected] ~]# vzctl enter 7828
entered into CT 7828
[[email protected] /]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
[[email protected] /]#
```
 

Make that a big fat no to both.


----------



## Nick_A (May 7, 2015)

The latest kernels are glitchy in terms of I/O anyway... We had to revert a ways on some nodes and apply KernelCare patches.


----------



## Munzy (May 7, 2015)

mitgib said:


> I do not see this working on ksplice patched kernels
> 
> 
> [[email protected] ~]# uname -a
> ...



Anything below Linux 2.6.32-042stab102.9 might not work. I tried 95 and it hated me... on all nodes I tested.


----------



## mitgib (May 7, 2015)

Munzy said:


> Anything below Linux 2.6.32-042stab102.9 might not work. I tried 95 and it hated me... on all nodes I tested.


If you view my tests, it failed on 2.6.32-042stab104.1

It did work on a node with 2.6.32-042stab106.4


```
[[email protected] ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       60      1
cpu     3       60      1
cpuacct 3       60      1
devices 4       59      1
freezer 4       59      1
net_cls 0       1       1
blkio   1       60      1
perf_event      0       1       1
net_prio        0       1       1
memory  2       59      1
[[email protected] ~]#
 

[[email protected] ~]# uname -a
Linux e3la03.hostigation.com 2.6.32-042stab106.4 #1 SMP Fri Mar 27 15:19:28 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[[email protected] ~]# /usr/bin/kcarectl --info
kpatch-state: patch is applied
kpatch-for: Linux version 2.6.32-042stab106.4 ([email protected]) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Mar 27 15:19:28 MSK 2015
kpatch-build-time: Mon May  4 22:20:34 2015
kpatch-description: 4;2.6.32-042stab108.1
```


----------



## Nick_A (May 7, 2015)

104.1 works for me. 105 as well.


----------



## Munzy (May 7, 2015)

I knew this was going to be made eventually, so I built it myself, with the hope that it's at least somewhat "OMG"-proof.


```
wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php
```

https://www.qwdsa.com/converse/threads/container-counter.131/

Let me know your suggestions.

Sample:


```
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9
----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
179
----------------------------------------------------------------------------
```

This is from Catalysthost Dallas


----------



## MannDude (May 7, 2015)

Munzy said:


> I knew this was going to be made eventually, so I built it with some hopeful ideas that I made it at least somewhat "OMG" proof.....
> 
> 
> wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php
> ...


Haha. Feel free to x-post here: 

I knew the guide would be made eventually, so I made one myself


----------



## Munzy (May 7, 2015)

```
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9
----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
207
----------------------------------------------------------------------------
```


This is Catalysthost Seattle, btw. Not bashing @ryanarp; he's just one of the few people with up-to-date kernels.


----------



## Munzy (May 7, 2015)

```
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab084.14
----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
76
----------------------------------------------------------------------------
```

http://ninjahawk.net/


----------



## KuJoe (May 8, 2015)

Isn't this a better way than installing PHP?


```
cat /proc/cgroups | grep devices | awk '{ print $3 }'
```
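If it helps, the cat | grep | awk pipeline can be collapsed into a single awk invocation; shown here against an illustrative sample listing rather than a live /proc/cgroups:

```shell
# Single-process equivalent of `cat /proc/cgroups | grep devices | awk ...`:
# match the devices row and print num_cgroups (third column). On a real
# node you would point awk at /proc/cgroups instead of this sample.
sample='#subsys_name hierarchy num_cgroups enabled
cpuset 3 60 1
devices 4 59 1
memory 2 59 1'

printf '%s\n' "$sample" | awk '$1 == "devices" { print $3 }'
# → 59
```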


----------



## KuJoe (May 8, 2015)

So far only one node I've tested has been correct. The rest have been off by as many as 7 containers (the VPS shows 7 fewer containers than are actually on the node).


----------



## Munzy (May 8, 2015)

KuJoe said:


> Isn't this a better way than installing PHP?
> 
> 
> cat /proc/cgroups | grep devices | awk '{ print $3 }'


Ehh, I like coding in PHP when I'm messing around.


----------



## Geek (May 8, 2015)

mitgib said:


> I do not see this working on ksplice patched kernels
> 
> 
> [[email protected] ~]# uname -a
> ...


Hm. Perhaps you do need to be booted into one of those kernels, then. I've had a couple of PMs confirming it works for them.


----------



## dcdan (May 8, 2015)

Both the KernelCare and Ksplice devs have previously stated that they only implement bugfixes and security fixes, not new features. So it's "normal" for this not to work on older kernels.


----------



## Geek (May 8, 2015)

That's a little odd. I'll have to double-check whether there's a cgroup representation for suspended/powered-down/failcnt CTs.



KuJoe said:


> So far only 1 node I've tested has been correct. The rest have been off by as many as 7 containers (VPS shows 7 containers less than actually on the node).


----------



## dcdan (May 8, 2015)

Once you power down a container, the numbers seem to decrease.


----------



## howardsl2 (May 8, 2015)

Some things I have observed, FYI:


The iptables raw table does not load from within containers on kernel versions older than 042stab093.4, where it was fixed.


The iptables string module does not work from within containers on kernel versions older than 042stab092.1, where it was fixed.


I have a RamNode VPS in Atlanta on 042stab090.5, where the string module indeed does not work.


----------



## Nick_A (May 8, 2015)

Feel free to open a ticket @


----------



## howardsl2 (May 8, 2015)

Nick_A said:


> Feel free to open a ticket @


Will try to reach you via tickets - Thanks Nick!


----------

