
Before updating to the latest SolusVM release...

Geek

Technolojesus
Verified Provider
Copy your /usr/local/solusvm/data/config.ini and store it so you can revert immediately afterwards.

.....sigh. These are primary UBCs that even OpenVZ considers obsolete and that could potentially degrade performance on mid- to large-sized nodes with plenty of room to breathe.

Several OpenVZ configuration defaults have been added on a per hypervisor basis. Changes can be made to /usr/local/solusvm/data/config.ini:

[OPENVZ]

;;default_NUMPROC = "900:900"

;;default_NUMTCPSOCK = "1200:1200"

;;default_NUMOTHERSOCK = "1200:1200"

;;default_AVNUMPROC = "10000:10000"
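
To actually apply one of these, copy the line into config.ini without the ;; comment prefix and set the "barrier:limit" pair. A sketch (the value shown is just the shipped default, for illustration, not a recommendation):

```ini
[OPENVZ]
; uncommented, so this barrier:limit pair now applies to newly created containers
default_NUMPROC = "900:900"
```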

Improvement

OpenVZ virtual servers are now force restarted
 
Last edited by a moderator:

Geek

Technolojesus
Verified Provider
Also, there is now a way for OpenVZ users to identify the exact number of containers on a node. The exact number, even containers that are shut down. The only real way to prevent this from being seen now is if a provider never updates past a certain kernel, and we're at least three kernels into its functionality now, so if you use KC/KS, it's already there.

Hope nunya were lying about overselling, cause it's about to get real.
 
Last edited by a moderator:

SolusVM

New Member
Copy your /usr/local/solusvm/data/config.ini and store it so you can revert immediately afterwards.

.....sigh. These are primary UBCs that even OpenVZ considers obsolete and that could potentially degrade performance on mid- to large-sized nodes with plenty of room to breathe.
No need, they are in the .example file only and commented out by default.
 

Geek

Technolojesus
Verified Provider
No need, they are in the .example file only and commented out by default.
That's good to know for when I run it in QA.

Until then, I take little comfort in that just yet, considering a default SolusVM RHEL6 installation still uses beancounters in odd ways, runs a default CentOS install that's six years old now, conflicts with vztmpl --update, and seems to rely on a vz.conf that still falls back to UBC parameters if a "burst" (vswap calculation) value is not greater than or equal to its physical memory allocation.
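
For context, the vswap-era per-container settings look something like this (OpenVZ parameter names; the values are illustrative assumptions, not SolusVM's shipped defaults):

```ini
# VSwap mode: when PHYSPAGES/SWAPPAGES are set, the legacy UBC limits can be
# left unlimited. Values below are examples only.
PHYSPAGES="0:262144"    # 1 GB of RAM (4 KB pages)
SWAPPAGES="0:262144"    # vswap; keep the burst value >= the RAM allocation, per the issue above
```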


Any chances of getting this a little more updated so that it's .... well ... recent?
 
Last edited by a moderator:

Munzy

Active Member
I'm curious how one finds the number of containers running on a box. I have looked through most of the files I know OpenVZ creates, but I can't seem to find anything relating to the number of containers running/sleeping on a node from inside the container, nor any documentation that covers it.
 

Geek

Technolojesus
Verified Provider
There are no docs out there because it's a fairly new introduction, and I don't think many people (if any) have figured out that it's even possible yet, nor the connection between the two.

http://jetfirenetworks.com/blog/you-can-easily-identify-oversold-openvz/

Or just follow this down the line.


Code:
[root@vzn-devqa3 ~]# vzlist -a

CTID NPROC STATUS IP_ADDR HOSTNAME
1001 11 running 2607:ff68:104:b ipv6.jetfi.re
1002 11 running - -
1004 11 running - -
1005 11 stopped - -
1006 11 running - -
1007 11 running - -
2001 11 running - -
2002 11 running - -
2003 11 suspended - -
2004 11 running - -
2005 11 running - -
2006 11 running - -
2007 11 running - -
2008 11 running - -
2009 11 running - -
2010 11 running - -
3001 11 running - -
3002 11 running - -
3003 11 running - -
3004 11 running - -
3005 11 running - -
3006 11 running - -
3007 11 running - -
3008 11 running - -
3009 11 running - -
3010 11 running - -

Subtract the output header from the wc count:

Code:
[root@vzn-devqa3 ~]# vzlist -a |wc -l
27

[root@vzn-devqa3 ~]# ssh 2607:ff68:104:b***:: 
root@2607:ff68:104:b***::'s password:
From inside your VPS: Exclude HN, subtract 1.


Code:
[root@ipv6 ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       28      1
cpu     3       28      1
cpuacct 3       28      1
devices 5       27      1
freezer 5       27      1
net_cls 0       1       1
blkio   1       28      1
perf_event      0       1       1
net_prio        0       1       1
memory  2       27      1

Have fun.
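
The subtract-1 arithmetic above can be wrapped in a tiny helper. A sketch (my own, not from the changelog; it assumes the devices subsystem's num_cgroups column counts one cgroup per container plus the hardware node, which matches the output above):

```shell
# count_from_cgroups FILE: estimate the number of containers on the node
# from a /proc/cgroups-style file, using the devices subsystem's
# num_cgroups column and subtracting 1 for the hardware node itself.
count_from_cgroups() {
    awk '$1 == "devices" { print $3 - 1 }' "$1"
}

# Inside a CT you would run:
#   count_from_cgroups /proc/cgroups
```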
 
Last edited by a moderator:

mitgib

New Member
Verified Provider
I do not see this working on ksplice-patched kernels.


Code:
[root@e5clt19 ~]# uname -a
Linux e5clt19.hostigation.com 2.6.32-042stab081.5 #1 SMP Mon Sep 30 16:52:24 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@e5clt19 ~]# uptrack-uname -a
Linux e5clt19.hostigation.com 2.6.32-042stab108.1 #1 SMP Thu Apr 23 19:17:11 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@e5clt19 ~]# vzlist -a|wc -l
26
[root@e5clt19 ~]# vzctl enter 6931
entered into CT 6931
root@server12 [/]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
root@server12 [/]#

So how about KernelCare?

Code:
[root@e5la20 ~]# vzlist -a | wc -l
148
[root@e5la20 ~]# uname -a
Linux e5la20.hostigation.com 2.6.32-042stab104.1 #1 SMP Thu Jan 29 12:58:41 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@e5la20 ~]# /usr/bin/kcarectl --info
kpatch-state: patch is applied
kpatch-for: Linux version 2.6.32-042stab104.1 (root@kbuild-rh6-x64) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Thu Jan 29 12:58:41 MSK 2015
kpatch-build-time: Mon May  4 22:28:29 2015
kpatch-description: 6;2.6.32-042stab108.1

[root@e5la20 ~]# vzctl enter 7828
entered into CT 7828
[root@491 /]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
[root@491 /]#
 

Make that a big fat no to both
 

Nick_A

Provider of the year (2014)
Latest kernels are glitchy in terms of I/O anyway... We had to revert a ways on some nodes and apply kcare patches.
 

Munzy

Active Member
I do not see this working on ksplice patched kernels [...] So how about KernelCare [...] Make that a big fat no to both

Anything below Linux 2.6.32-042stab102.9 might not work. I tried 95 and it hated me.... on all nodes I tested.
 

mitgib

New Member
Verified Provider
Anything below Linux 2.6.32-042stab102.9 might not work. I tried 95 and it hated me.... on all nodes I tested.
If you view my tests, it failed on 2.6.32-042stab104.1

It did work on a node with 2.6.32-042stab106.4

Code:
[root@server ~]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset  3       60      1
cpu     3       60      1
cpuacct 3       60      1
devices 4       59      1
freezer 4       59      1
net_cls 0       1       1
blkio   1       60      1
perf_event      0       1       1
net_prio        0       1       1
memory  2       59      1
[root@server ~]#
 

Code:
[root@e3la03 ~]# uname -a
Linux e3la03.hostigation.com 2.6.32-042stab106.4 #1 SMP Fri Mar 27 15:19:28 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@e3la03 ~]# /usr/bin/kcarectl --info
kpatch-state: patch is applied
kpatch-for: Linux version 2.6.32-042stab106.4 (root@kbuild-rh6-x64) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Mar 27 15:19:28 MSK 2015
kpatch-build-time: Mon May  4 22:20:34 2015
kpatch-description: 4;2.6.32-042stab108.1
 
Last edited by a moderator:

Munzy

Active Member
I knew something like this was going to be made eventually, so I built it myself with the hope that it's at least somewhat "OMG"-proof.....


wget http://cdn.content-network.net/Mun/apps/container_counter/script.txt -O - | php

https://www.qwdsa.com/converse/threads/container-counter.131/

Let me know your suggestions.

Sample:


############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9

----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
179
----------------------------------------------------------------------------

This is from Catalysthost Dallas
 
Last edited by a moderator:


Munzy

Active Member
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab102.9

----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
207
----------------------------------------------------------------------------


Catalysthost Seattle, btw. Not bashing @ryanarp, he is one of the few people with up-to-date kernels.....
 

Munzy

Active Member
############################################################################
Container Counter
############################################################################
By: Mun
Ver: 1.0
Site: https://www.qwdsa.com/converse/threads/container-counter.131/
############################################################################
CPU(s):
----------------------------------------------------------------------------
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
----------------------------------------------------------------------------
Kernel:
----------------------------------------------------------------------------
2.6.32-042stab084.14

----------------------------------------------------------------------------
Container(s) On Node:
----------------------------------------------------------------------------
76
----------------------------------------------------------------------------

http://ninjahawk.net/
 

KuJoe

Well-Known Member
Verified Provider
Isn't this a better way than installing PHP?

Code:
cat /proc/cgroups | grep devices | awk '{ print $3 }'
 

KuJoe

Well-Known Member
Verified Provider
So far only 1 node I've tested has been correct. The rest have been off by as many as 7 containers (the VPS shows 7 fewer containers than are actually on the node).
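
A guess at the off-by-N (my assumption, not something confirmed in this thread): stopped or suspended containers appear in vzlist -a but may not hold live cgroups, so the in-container count would only see running ones. From the HN you can measure the gap directly; a minimal sketch:

```shell
# count_cts FILE: count containers in saved `vzlist` output,
# skipping the one-line column header.
count_cts() { awk 'NR > 1' "$1" | wc -l; }

# On the HN (as root) you could compare running vs. all:
#   vzlist    > /tmp/running.txt
#   vzlist -a > /tmp/all.txt
#   echo "not running: $(( $(count_cts /tmp/all.txt) - $(count_cts /tmp/running.txt) ))"
```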
 

Geek

Technolojesus
Verified Provider
I do not see this working on ksplice patched kernels [...] So how about KernelCare [...] Make that a big fat no to both
Hm.  Perhaps you do need to be booted into one of those kernels, then.  I've had a couple of PMs that confirm it works for them.
 