
Heads up: OpenVZ updates will probably break your system


Technolojesus
Verified Provider
Well, that may be the right approach, as it may require a lower setting to attempt to truncate unused blocks. Using that logic, I wonder if this would work:


vzctl set VEID --diskspace 1M
So by setting a ridiculously low value, my assumption is that OpenVZ would first attempt to truncate, then check whether the setting is lower than the used disk space, and thus fail to actually change the setting.

This may truncate without altering anything. You'd also want to skip the --save option. Note that I haven't tested any of this; it's only a theory.
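If the theory holds, the check would look something like this untested sketch (VEID is a placeholder for a real container ID; nothing here uses --save, so the on-disk config stays untouched):

Code:
# Untested sketch of the theory above -- try it on a throwaway container first.
vzctl exec VEID df -h /          # note used/free space beforehand
vzctl set VEID --diskspace 1M    # absurdly low value, deliberately no --save
vzctl exec VEID df -h /          # if the set failed as theorized, this is unchanged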
Good point. I'll try to get it in before my UK people start their day.

And another thing: why do all of my simfs-converted ploop devices report 4 GB less disk space than when they were running on simfs? Is that more wasted space somewhere?

Also, dumpe2fs isn't going to fire up unless I use --save ... thoughts?
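For what it's worth, my guess is dumpe2fs only needs a block device to read, so a read-only ploop mount should be enough to point it at (an assumption on my part; the ploopXXXXX device name is a placeholder):

Code:
ploop mount -r root.hdd          # attach read-only; prints the /dev/ploopXXXXX it used
dumpe2fs -h /dev/ploopXXXXXp1    # -h dumps just the superblock summary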
 

Francisco

Company Lube
Verified Provider
May the mods beat me for this necro, but I figured someone google-fu'ing this would like to know.

I had a case today where a user's VPS wouldn't boot, giving ploop-related corruption errors:

Code:
(02:20:41) lv-node28:~/442/root.hdd root: ploop mount root.hdd -r -m /mnt
Adding delta dev=/dev/ploop22693 img=root.hdd (ro)
Error in ploop_mount_fs (ploop.c:1231): Can't mount file system dev=/dev/ploop22693p1 target=/mnt data='(null)': No such device or address
...and...

Code:
(03:35:48) lv-node07:~ root: /usr/src/ploop_userspace/ploop_userspace 442.root.hdd
We process: 442.root.hdd
Ploop file size is: 2403336192
version: 2 disk type: 2 heads count: 16 cylinder count: 204800 sector count: 2048 size in tracks: 51200 size in sectors: 104857600 disk in use: 1953459801 first block offset: 2048 flags: 0
For storing 53687091200 bytes on disk we need 51200 ploop blocks
We have 1 BAT blocks
We have 262128 slots in 1 map
Number of non zero blocks in map: 2291
Please be careful because this disk used now! If you need consistent backup please stop VE
We found GPT table on this disk
We found ext4 signature at first partition block
Set device /dev/nbd0 as read only
Try to found partitions on ploop device
Error: Both the primary and backup GPT tables are corrupt.  Try making a fresh table, and using Parted's rescue feature to recover partitions.
First ploop partition was not detected properly, please call partx/partprobe manually
You could mount ploop filesystem with command: mount -r -o noload /dev/nbd0p1 /mnt
Lovely errors, right? The issue is that OVZ is somehow corrupting its own GPT partition tables.
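If you want to see the damage for yourself before touching anything, a read-only table dump shows it (gdisk here is my suggestion, not from the original report; the device name is a placeholder):

Code:
gdisk -l /dev/ploopXXXXXX    # -l only prints the GPT state, it writes nothing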
Well, here's the fix:

1) Back up the root.hdd; you will be making possibly destructive changes to it

2) Mount the ploop image so it gets attached as /dev/ploop*

Code:
ploop mount root.hdd
3) Install testdisk/photorec

Code:
apt-get install testdisk
4) run testdisk on the drive

Code:
testdisk /dev/ploopXXXXXX
It must be run against the bare ploop device, not the one ending in p1.
Now, just tell it to look for a GPT partition table and do a 'quick search'. Select the first hit (it should be at a 2048 boundary) and tell it to write it to the disk. Exit out, unmount the root.hdd, and tell the VPS to boot. For me it booted without any sign of issues (with the customer's permission we randomly opened files in /etc/ to confirm they were intact, etc.).
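Condensed, the whole recovery looks roughly like this (my recap of the steps above; VEID and the ploop device name are placeholders):

Code:
cp root.hdd root.hdd.bak           # 1) backup -- the next steps can be destructive
ploop mount root.hdd               # 2) attach; note the /dev/ploopXXXXXX it prints
apt-get install testdisk           # 3) pulls in testdisk/photorec
testdisk /dev/ploopXXXXXX          # 4) bare device, not p1; GPT -> quick search -> write
ploop umount -d /dev/ploopXXXXXX   # detach before booting
vzctl start VEID                   # boot the container and verify files are intact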

DON'T BOTHER WITH PLOOP, THE DD PORN ISN'T WORTH IT.

In our case, we ended up with ploop on a good chunk of LV after doing a vzctl update and not re-applying our standard vz.conf. For about 1-2 weeks people were reinstalling/provisioning onto ploop, and we've been moving them off as they ask. We've had others hit random corruption issues for no reason as well.
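For anyone wanting to pin the layout so a package update can't flip it back, the default lives in /etc/vz/vz.conf (my addition; VE_LAYOUT is the stock OpenVZ option for this):

Code:
# /etc/vz/vz.conf -- force new containers onto simfs instead of ploop
VE_LAYOUT="simfs"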

PRAISE THE TESTDISK GODS

Why they chose to write their own fustercluck instead of just retrofitting qcow, I have no idea. Sure, the space trimming is a nice feature, but it obviously doesn't work very well.

Francisco
 