[OpenVZ] Ploop 1.12.2

Geek

Technolojesus
Verified Provider
FYI.

-John

Changes
=======
(since 1.12.1)

Fixes:
* ploop balloon discard: fix wrt 042stab10x kernel (#3156)
* ploop_merge_snapshot_by_guid: fix offline merge with raw base image
* reread_part(): repeat ioctl if EBUSY (#3081)

Improvements:
* check_mount_restrictions(): check for all images
* ploop check dd.xml: lock dd
* ploop check dd.xml: skip check if ploop is used
* check_deltas(): read-only check for non-top deltas

See detailed changelog here:
http://git.openvz.org/?p=ploop;a=shortlog;h=ploop-1.12.2
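
For anyone poking at the affected commands, here's a minimal sketch of what the fixed paths look like in use; the container path is an assumed example, adjust for your layout:

    # the balloon discard fixed for 042stab10x kernels (#3156)
    ploop balloon discard /vz/private/101/root.hdd/DiskDescriptor.xml

    # image check; per this release it now locks dd.xml and skips
    # the check entirely if the ploop device is in use
    ploop check /vz/private/101/root.hdd/DiskDescriptor.xml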
 

Francisco

Company Lube
Verified Provider
I dunno, I'm still on the fence about ploop.

I like the idea on paper but the few people we've put on it have had nothing but issues.

Francisco
 

KuJoe

Well-Known Member
Verified Provider
I want to convert one of my production VPSs over to ploop, but 2+ hours of downtime isn't my cup of tea. I've tried using the "0 downtime" script posted on their forums, but it results in a 30-minute outage and a failed conversion at the end. :(
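
For reference, the stock conversion path looks roughly like the sketch below; the downtime comes from the container having to stay stopped while its data is copied into the new image. CTID 101 is just an example:

    vzctl stop 101
    vzctl convert 101 --layout ploop
    vzctl start 101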
 

Francisco

Company Lube
Verified Provider
I've been using ploop for several months now and haven't experienced any issues yet.
Any customer that comes to us asking for huge inode limits, we offer to put on ploop with customized block sizing. We've only done this a few times and thankfully they've been fine so far.
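
For the curious, creating an image with a custom block size by hand goes roughly like this; size, blocksize, and path are example values (-b takes the cluster block size in 512-byte sectors):

    # 20G image with a 1MiB cluster block (2048 x 512-byte sectors)
    ploop init -s 20G -b 2048 -t ext4 /vz/private/101/root.hdd/root.hdd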

We have a few other users, though, that have had a rough time, and neither we nor they can figure out why. Their FS has gone read-only for no good reason. We've migrated them to development nodes thinking maybe the underlying hardware needed an fsck, and it still happened. The VPS was only ever powered off to do forced fscks.

In the end we simply offered to extract the users data and put them on the old setup. They agreed, I handled it in all of 5 minutes, and they've not had a single issue in the last few months.

I really do like the idea of it; I'm just curious why they insisted on rolling their own format instead of just wrapping QCOW2. We've got some QCOW in use and it's great. Performance is great, the tools work as they should, etc. The biggest thing ploop has over QCOW is the shrinking option, but given @Geek's own experiences, it doesn't work very well anyway.
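
(For anyone who hasn't used it, the shrink in question is just a resize to a smaller size; a rough sketch with an assumed path:)

    # shrink the image in place; the balloon is what makes downsizing possible
    ploop resize -s 10G /vz/private/101/root.hdd/DiskDescriptor.xml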

Francisco
 

Geek

Technolojesus
Verified Provider
I dunno, I'm still on the fence about ploop.

I like the idea on paper but the few people we've put on it have had nothing but issues.

Francisco

Yep, I remain of a similar opinion. I had a little time to fart around with the updated resize on the new kernel and, well, I'm now down 13GB to wasted space on that device. And that's just after a ploop balloon discard. Don't get me started on the snapshots.

I briefly flirted with the idea of Ploop on ZFS but without the quota support, it's just not ideal for what I provide.

@serverian, did you have any luck with your ext4 corruption from earlier this morning?  Some of our Russian friends are surgical with Ploop...  I mean they just tore into it. Pavel Odintsov has a tool to mount ploops in userspace. Googleable by name. Might come in handy with your recovery.  GL.

P.S. -- where did your thread go? I can only find it in a tab I didn't refresh this morning.
 

Francisco

Company Lube
Verified Provider
ZFS + Linux is extremely RAM whorish, for lack of a nicer term. The ARC isn't integrated into Linux's base memory management, which means that when ZFS decides it's going to blow up its RAM usage, it marks that RAM as actually used, even if all it's doing is caching.

While there are some knobs for this (zfs_arc_max & zfs_arc_meta_limit), on Linux it doesn't really respect them. It does for a while, but if something starts pushing (rsyncing a folder full of inodes, for instance), the cache isn't going to age out via LRU; it's just going to push past the limit.

They have plans to fix this, and did merge some patches to make the ARC not thrash like a spoiled child, but it's still not going to handle large workloads very well.
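
For anyone fighting the same thing, the usual way to pin those knobs down is via module options; the values here are example numbers in bytes, not recommendations:

    # /etc/modprobe.d/zfs.conf -- cap the ARC at 4GiB, metadata at 1GiB
    options zfs zfs_arc_max=4294967296 zfs_arc_meta_limit=1073741824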

Francisco
 