
'vzctl destroy' could erase the wrong container ...yay

Geek

Technolojesus
Verified Provider
I'm in the middle of retiring an HWN right now, and I migrated about 20 containers from QA over to the replacement; one was a copy of a personal CT. All went fine until I went to remove a low-numbered container and vzctl destroyed one with a larger number. The one it erased happened to be the copy of my CT. That one, coincidentally, is an older Ploop device. The CTID is not a coincidence. ;)

[attached screenshot: vzctl.png]

[root@vzn-mulva conf]# ls -l /vz/root/1337

ls: cannot access /vz/root/1337: No such file or directory

 

I don't know whether it's Ploop, vzctl, or something drastically wrong with their newest kernel, so I'm blowing it away and starting fresh. Too risky. Just a heads-up for now. Keep an eye on vzctl.log, and FTLOG, if you don't have vzdump set up, at the very least get crackin'.
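
If you need a starting point, a nightly cron entry along these lines works (vzdump flags vary between versions, so double-check against man vzdump on your node):

# /etc/cron.d/vzdump-nightly -- dump every container at 3 AM
# (--compress/--dumpdir syntax differs across vzdump versions)
0 3 * * * root vzdump --all --compress --dumpdir /vz/dump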

Peace.
 

Geek

Technolojesus
Verified Provider
Francisco said:
You aren't supposed to use anything < 100 since things are reserved normally :p

I did read that, but last summer I asked them about it on Twitter. It was merely out of curiosity, but Kir said it wouldn't hurt anything. I suppose even project leads can get it wrong. :)
 

Geek

Technolojesus
Verified Provider
Thank God we can still make containers with 512 GB RAM though. Even on an E3. What a relief.  :D

.. 

root@oversold-bullshit-simulator:~# free -m

             total       used       free     shared    buffers     cached

Mem:        524288         61     524226          0          0         38

-/+ buffers/cache:         22     524265

Swap:       262144          0     262144

root@oversold-bullshit-simulator:~# free -g

             total       used       free     shared    buffers     cached

Mem:           512          0        511          0          0          0

-/+ buffers/cache:          0        511

Swap:          256          0        256
 

coreyman

Active Member
Verified Provider
Geek said:
Thank God we can still make containers with 512 GB RAM though. Even on an E3. What a relief.  :D
Why would we not be able to do that?
 

KuJoe

Well-Known Member
Verified Provider
@Geek, go check the /etc/vz/conf/8.conf.destroyed file and see what VE_ROOT and VE_PRIVATE are set to. I'm curious if there are any extra characters or something that would confuse vzctl on those lines.
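
If it helps, something like this pulls just those two lines and makes any stray control characters visible (cat -A marks line ends and non-printing bytes; the path assumes the stock /etc/vz/conf layout):

grep -E '^VE_(ROOT|PRIVATE)=' /etc/vz/conf/8.conf.destroyed | cat -A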
 

Geek

Technolojesus
Verified Provider
# RAM
PHYSPAGES="0:262144"

# Swap
SWAPPAGES="0:196608"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="10485760:10485760"
DISKINODES="23592960:23592960"
QUOTATIME="0"

# CPU fair scheduler parameter
CPUUNITS="1000"

VE_ROOT="/vz/root/1337"
VE_PRIVATE="/vz/private/1337"
OSTEMPLATE="debian-7.0-x86_64"
ORIGIN_SAMPLE="vswap-jetfire"
IP_ADDRESS=""
KMEMSIZE="262144000:268435456"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
LOCKEDPAGES="unlimited"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
OOMGUARPAGES="0:unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMFILE="unlimited"
QUOTAUGIDLIMIT="25000"
NUMSIGINFO="unlimited"
DCACHESIZE="134217728"
NUMIPTENT="unlimited"
CPULIMIT="200"
CPUS="2"
HOSTNAME="datclouddoe"
IOLIMIT="200m"
DISABLED="no"
AVNUMPROC="unlimited"
NETFILTER="full"
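
(For reference, PHYSPAGES and SWAPPAGES are counted in 4 KiB pages on x86_64, so the limits above work out to:

PHYSPAGES  262144 pages x 4 KiB = 1048576 KiB = 1 GiB RAM
SWAPPAGES  196608 pages x 4 KiB =  786432 KiB = 768 MiB vSwap)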
 

Geek

Technolojesus
Verified Provider
I've always understood that the full path to the directory on the node is required.
 

Geek

Technolojesus
Verified Provider
Why were they set to 1337?
Thank you for pointing this out. I don't know what planet I was on, but I completely missed that. At least I was still testing. Damn... I blew an easy one. For shame.
 

devonblzx

New Member
Verified Provider
Just some added explanation for anyone who comes across this. Make sure to use $VEID in the config file when possible. (This is the default.)

Example:

VE_PRIVATE="/vz/private/$VEID"

This makes vzctl use the directory that matches the container's actual ID, so you're not at risk of pointing at the wrong private/root directory if you copy or reuse the config.
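
A quick way to sweep a node for per-CT configs that hard-code these paths instead of using $VEID (assuming the stock /etc/vz/conf layout):

grep -E '^VE_(ROOT|PRIVATE)=' /etc/vz/conf/*.conf | grep -vF '$VEID'

Any line this prints is a config that would keep the old path if it were ever copied to another CTID.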
 

Geek

Technolojesus
Verified Provider
devonblzx said:
Make sure to use $VEID in the config file when possible. (This is the default.)
That's exactly what happened. CT8 was built following the vzmigrate tests, and I'm 99.9% sure I copied its config from the other CT and forgot to change the VE_ROOT and VE_PRIVATE values. I write vz.conf from scratch, and I did specify $VEID there for both values.

Basically I was blind.  :p

Wonder how hard it would be to put in a check that verifies the VE_ROOT/VE_PRIVATE in the CT's config file match the container passed to "vzctl destroy" before erasing anything? I don't see this being needed often, since most people rely on SolusVM, but for those who administer from the CLI, an extra check couldn't hurt...
 

devonblzx

New Member
Verified Provider
Geek said:
Wonder how hard it would be to put in a check that verifies the VE_ROOT/VE_PRIVATE in the CT's config file match the container passed to "vzctl destroy" before erasing anything?
There is no action script for destroy, which would have been the easy way. I've never had this problem, so I've never attempted such a check, but I'm sure it wouldn't be too hard with your own script.

Just include the config file and verify $VE_PRIVATE is equal to /vz/private/$VEID.

In bash, you source a file with a period:


#!/bin/bash
# Source the container's config, then refuse to continue if
# VE_PRIVATE doesn't point at the directory matching the CTID.
. /etc/vz/conf/$VEID.conf
if [ "$VE_PRIVATE" != "/vz/private/$VEID" ]; then
    exit 1
fi

Of course, you'd have to set VEID somewhere; this is just a quick example.
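
Fleshed out into a hypothetical safe-destroy wrapper you could call instead of vzctl destroy directly (an untested sketch; it assumes the stock /vz layout, and an unset VE_ROOT/VE_PRIVATE will also trip the check, which errs on the safe side):

#!/bin/bash
# safe-destroy: refuse to run 'vzctl destroy' when the CT's config
# resolves to another container's directories.
CTID="$1"
CONF="/etc/vz/conf/${CTID}.conf"

[ -n "$CTID" ] || { echo "usage: $0 <CTID>" >&2; exit 1; }
[ -f "$CONF" ] || { echo "no config found for CT $CTID" >&2; exit 1; }

VEID="$CTID"   # so configs that use $VEID expand to the right paths
. "$CONF"

if [ "$VE_PRIVATE" != "/vz/private/$CTID" ] || [ "$VE_ROOT" != "/vz/root/$CTID" ]; then
    echo "refusing: $CONF resolves to VE_PRIVATE=$VE_PRIVATE VE_ROOT=$VE_ROOT" >&2
    exit 1
fi

vzctl destroy "$CTID"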
 