Earlier in the evening, I restored container 2208 per a client's request. It happens to be a ploop device, one of the few f***ing ones left in production.
2015-03-26T21:29:36-0700 : Opening delta /vz/private/2208/root.hdd/root.hdd
2015-03-26T21:29:36-0700 : Adding delta dev=/dev/ploop42417 img=/vz/private/2208/root.hdd/root.hdd (rw)
2015-03-26T21:29:36-0700 : Mounting /dev/ploop42417p1 at /vz/root/2208 fstype=ext4 data='balloon_ino=12,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,'
2015-03-26T21:29:36-0700 vzctl : CT 2208 : Container is mounted
2015-03-26T21:29:36-0700 vzctl : CT 2208 : Adding IP address(es): XXX.XXX.XXX.XXX
2015-03-26T21:29:39-0700 vzctl : CT 2208 : Setting CPU limit: 100
2015-03-26T21:29:39-0700 vzctl : CT 2208 : Setting CPU units: 1000
2015-03-26T21:29:39-0700 vzctl : CT 2208 : Setting CPUs: 1
All fine and dandy, then this at the last minute:
2015-03-26T21:29:39-0700 : Error in ploop_fname_cmp (ploop.c:1068): stat (deleted)/vz/root/1939/home/virtfs/topnetwo/home/topnetwo: No such file or directory
2015-03-26T21:29:39-0700 vzctl : CT 2208 : Container start in progress...
CT 1939? What the crap?
The usual EXT4 whining:
Mar 26 21:29:21 vzn-divinity kernel: [6601405.552534] EXT4-fs (ploop42417p1): INFO: recovery required on readonly filesystem
Mar 26 21:29:21 vzn-divinity kernel: [6601405.552537] EXT4-fs (ploop42417p1): write access will be enabled during recovery
Mar 26 21:29:22 vzn-divinity kernel: [6601406.571458] EXT4-fs (ploop42417p1): orphan cleanup on readonly fs
Mar 26 21:29:22 vzn-divinity kernel: [6601406.617009] EXT4-fs (ploop42417p1): 5 orphan inodes deleted
Mar 26 21:29:22 vzn-divinity kernel: [6601406.618661] EXT4-fs (ploop42417p1): recovery complete
Mar 26 21:29:22 vzn-divinity kernel: [6601406.618993] EXT4-fs (ploop42417p1): mounted filesystem with ordered data mode. Opts:
Mar 26 21:29:22 vzn-divinity kernel: [6601406.622097] EXT4-fs (ploop42417p1): loaded balloon from 12 (20975624 blocks)
Mar 26 21:29:36 vzn-divinity kernel: [6601421.102028] ploop42417: p1
Mar 26 21:29:36 vzn-divinity kernel: [6601421.144646] ploop42417: p1
Mar 26 21:29:36 vzn-divinity kernel: [6601421.253238] EXT4-fs (ploop42417p1): mounted filesystem with ordered data mode. Opts:
Mar 26 21:29:36 vzn-divinity kernel: [6601421.255557] EXT4-fs (ploop42417p1): loaded balloon from 12 (20975624 blocks)
Mar 26 21:29:36 vzn-divinity kernel: [6601421.263002] CT: 2208: started
Mar 26 21:29:40 vzn-divinity kernel: [6601424.440744] Core dump to |/usr/libexec/abrt-hook-ccpp 11 0 56 0 0 1427430580 e pipe failed
Since I've admittedly fudged a manual CT config before, here's 2208.conf. Normal as far as I can tell, yes?
# -----------------------------------------------------------------------------------------------------
# Copyright 2011 John Edel, Jetfire Networks L.L.C.
# Refinements 2011-2015 Jetfire Networks L.L.C.
# Because you know I'm all about the maps, bout the maps, no Ploop
# -----------------------------------------------------------------------------------------------------
# Resource knobs
KMEMSIZE="268435456"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
LOCKEDPAGES="unlimited"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
OOMGUARPAGES="0:unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMFILE="unlimited"
SWAPPAGES="0:262144"
DISKSPACE="5242880:5242880"
DISKINODES="2621440:2621440"
QUOTATIME="0"
QUOTAUGIDLIMIT="20000"
PHYSPAGES="0:262144"
NUMSIGINFO="unlimited"
DCACHESIZE="134217728"
NUMIPTENT="unlimited"
CPUUNITS="2000"
CPULIMIT="200"
CPUS="2"
HOSTNAME="XXXXX.XXXXX.XXX"
IP_ADDRESS="XXX.XXX.XXX.XXX"
# So says the node
ONBOOT="yes"
VE_ROOT="/vz/root/$VEID"
VE_PRIVATE="/vz/private/$VEID"
ORIGIN_SAMPLE="vswap-jetfire"
OSTEMPLATE="ct2208-vzdump"
# Additional CT capabilities
CAPABILITY="NET_ADMIN:on"
DEVNODES="net/tun:rw"
FEATURES="ppp:on"
DISABLED="no"
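Side note on the config: the CAPABILITY and FEATURES lines are the easiest ones to mangle by hand, since OpenVZ expects them as space-separated `NAME:on` / `NAME:off` pairs. A throwaway lint like the one below (purely illustrative, not a vzctl feature; the sample values are hypothetical) would flag a token with a dropped suffix:

```shell
#!/bin/sh
# Illustrative lint (not part of vzctl): OpenVZ expects CAPABILITY and
# FEATURES entries as space-separated "NAME:on" / "NAME:off" pairs, so
# flag any token that doesn't match that shape.
check_pairs() {
  name=$1; shift
  for tok in "$@"; do
    case $tok in
      *:on|*:off) ;;                        # well-formed pair
      *) echo "suspect $name entry: $tok" ;;
    esac
  done
}
# Hypothetical sample values, one mangled and one well-formed:
check_pairs CAPABILITY NET_ADMINn
check_pairs FEATURES ppp:on
```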
I can think of no reason why restoring one container would report anything about an entirely different one, unless I'm doing something wrong and can't see it.
It looks to be complaining about a cPanel bind mount. The only thing I can think of is that CT 1939's owner removed that particular cPanel account (or perhaps removed it through WHMCS or the like), yet the bind mount might still exist...? Well, it's a theory, at least.
Of course, I can't/won't drop into CT 1939 until I get a green light from the client, so right now I can only speculate. Both containers are up and in use, and I've had no reports of problems from their respective users, so... I'm a little stumped.
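For what it's worth, one way to poke at the stale-bind-mount theory from the node itself, without entering CT 1939, is to scan the node's mount table for entries carrying the kernel's "deleted" marker. This is only a sketch: the live grep is commented out because its output depends on the node, and the demo below runs against a made-up sample mount table (the virtfs paths are taken from the error above; the bind *source* path is my guess at what cPanel's virtfs would use):

```shell
#!/bin/sh
# Sketch: hunt for stale bind mounts from the hardware node, without
# entering the container. On the live node you would grep the real
# mount table, e.g.:
#   grep -e 'deleted' -e '/vz/root/1939' /proc/mounts
# Below is a deterministic demo against a sample mount table; the
# second line mimics a bind mount whose source directory was removed
# (the kernel tags such paths with a "deleted" marker).
cat <<'EOF' | awk '/deleted/ { print "stale mount at:", $2 }'
/dev/ploop42417p1 /vz/root/2208 ext4 rw,relatime 0 0
/vz/root/1939/home/topnetwo(deleted) /vz/root/1939/home/virtfs/topnetwo/home/topnetwo none rw,bind 0 0
EOF
```

If something like that turns up, a lazy unmount of the stale path from the node might clear the error without ever touching the container's innards, but I'd want the client's blessing first.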
Anyone wanna take a guess at this?
-John