# [OpenVZ] Ploop, Fuse, ext4 improvements in new test kernel



## Geek (Nov 19, 2014)

For those who wish to test the Ploop and fuse changes in the upcoming kernel release.

https://openvz.org/Download/kernel/rhel6-testing/042stab101.5


Changelog:

- Rebase to RHEL6.6 kernel 2.6.32-504.el6 (security, bug fixes, and enhancements; see the kernel page linked above)
- ve/net: add a separate field for NETIF_F_VIRTUAL and NETIF_F_VENET (PSBM-29226)
- cpt: get rid of explicit i_size manipulation (PSBM-28962)
- ms/ext4: lock i_mutex when truncating orphan inodes
- ms/net: tcp: use correct net ns in cookie_v4_check() (PSBM-29553)
- rst: verbose debug for rst_restore_process() (#3085, PSBM-25446)
- cpt: put ct after nf_conntrack_hash_check_insert (PSBM-29589)
- cpt: Restore file locks owners (PSBM-29240)
- cpt: Dump whole siginfo_t::_sifields (PSBM-29266)
- cpt: Allow to receive pending data from unix socketpair with closed second end (PSBM-29264)
- ve/sysfs: create /sys/block/devName & /sys/dev/block/MAJ:MIN for CT ploops (PSBM-29112)
- ve/net: venet_set_op() cleanup
- cpt: Correct dump tmpfs having child mounts (PSBM-29265)
- fuse: introduce fuse_release_ff()
- fuse: fix deadlock in fuse_flush() (PSBM-29381)
- fuse: fix erroneous unlock_page() in fuse_send_writepages()
- ext4: update defragmentation codebase
- ext4: cleanup GFP flags inside resize path
- ve/net/ppp: fixed oops in ppp_register_net_channel() (#3114, PSBM-29975)
- nfs: fixed nfs_fattr/nfs_fh leak in nfs_lookup_revalidate() (PSBM-29924)
- cpt: remove unused CPT_TEST_CAPS ioctl code
- cpt: fix TEST_VECAPS ioctl wrt xsave (#3012)
- cpt: Restore deleted delayed files in /tmp directory (PSBM-29582)
- rst: Zeroify ITIMER_PROF and ITIMER_VIRTUAL counting errors (PSBM-29857)
- ext4: fsync/mfsync cleanup
- mm: debug memory allocation causing fs re-entrance
- mm: compaction: restore irq only if lruvec lock is released (PSBM-29961)

See ya when I get over this damn flu.

-JE


----------



## drmike (Nov 19, 2014)

More plooping ;(


----------



## HalfEatenPie (Nov 19, 2014)

hehe.

Ploop.


----------



## Geek (Nov 19, 2014)




----------



## HalfEatenPie (Nov 19, 2014)

And....

I can never unsee that.

@Geek


----------



## DomainBop (Nov 20, 2014)

Ploop is the sound as well as the action!


----------



## drmike (Nov 20, 2014)

All I see from Ploop is more fake ass disk benchmarks blown up by plooping.


----------



## raj (Nov 20, 2014)

I need 284MB/s dd speeds for my 50 UV per month wordpress blog!!


----------



## drmike (Nov 20, 2014)

raj said:


> I need 284MB/s dd speeds for my 50 UV per month wordpress blog!!


That sure is the truth.

I need to break out a fat MySQL disk heavy benchmark for debunking the SSD numbers VPS hosts keep pimping.

Old approaches of dd'ing aren't telling much.
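For anyone wondering why a bare dd run overstates disk speed: without a flush, dd reports success as soon as the data lands in the page cache, so the headline number mostly measures RAM. A minimal sketch (file path and sizes are arbitrary):

```shell
# Naive run: completes once the data is in the page cache,
# so the reported MB/s largely reflects memory bandwidth.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64

# conv=fdatasync forces the data to stable storage before dd
# reports its rate, giving a far more honest figure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync

rm -f /tmp/ddtest
```

Even the fdatasync variant is still a pure sequential-write test; something like fio with a random-I/O, synced workload is much closer to what a busy MySQL box actually does.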


----------



## Geek (Nov 21, 2014)

raj said:


> I need 284MB/s dd speeds for my 50 UV per month wordpress blog!!





drmike said:


> All I see from Ploop is more fake ass disk benchmarks blown up by plooping.


Two things I feel strongly about: figuring out a way to log every instance of a container being accessed with "vzctl enter", and calling out the providers who try to get away with misleading customers. I've already seen the tweets to OVZ about how happy people are with their 400 MB/sec dd output. Admittedly, I wasn't sure how this was being done at first, but it came together in time.

As for benchmarks ... Madonna Mia ... my puny dev box with a single 1TB Barracuda was getting 600 MB/s on Ploop devices. Kinda pisses me off to think about what the kidiots of the industry could do to the growing reputation of containers (again) over a few (more) summers.



drmike said:


> That sure is the truth.
> 
> I need to break out a fat MySQL disk heavy benchmark for debunking the SSD numbers VPS hosts keep pimping.
> 
> Old approaches of dd'ing aren't telling much.


Please do. I'll donate hardware and QA space for the cause. Just this month I bid farewell to someone who wanted to try a KVM with SSD. While I humbly reversed the cancellation by request just a couple of days later, it goes to show you how misleading benchmarks can be with virtualized hosting. While I'm certainly not trying to impugn present company, it frustrates me when I see benchmarks used for sales.


----------



## Geek (Nov 21, 2014)

Single cheapo HDD in QA on a Ploop device. Look at the "Enterprise Pure SSD RAID 10!"  Hurl.


----------



## rds100 (Nov 21, 2014)

While ploop does make the useless dd benchmark even more useless, it also has real world advantages. Ever tried backing up or migrating a VPS with millions of files inside? Takes forever on simfs. With ploop it can work.


----------



## Francisco (Nov 21, 2014)

rds100 said:


> While ploop does make the useless dd benchmark even more useless, it also has real world advantages. Ever tried backing up or migrating a VPS with millions of files inside? Takes forever on simfs. With ploop it can work.


All ploop does is thin-provision the dd. The faster your single-core performance is, the higher the benchmark reports.

Until you have a node crash, etc. FSCK still doesn't appear to be automated, and there have already been many reports of FSCK not working properly, even with writebacks disabled.

We have a few users on ploop, namely the people that *need* 10M inodes. With the move to pure SSDs, though, the time to migrate them isn't all that rough.
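The thin-provisioning point is easy to see outside ploop too: an expanded ploop image, like any sparse file, only allocates blocks that have actually been written, so the size a file claims and the space it occupies diverge. A quick illustration with a plain sparse file (path is arbitrary):

```shell
# Create a 1 GiB file without writing a single data block.
truncate -s 1G /tmp/sparse.img

# Apparent size: the full 1 GiB the file claims to be.
du -h --apparent-size /tmp/sparse.img

# Actual allocation: effectively zero, since nothing was written.
du -h /tmp/sparse.img

rm -f /tmp/sparse.img
```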

Francisco


----------



## Geek (Nov 21, 2014)

cpt/rst of Ploop devices is failing on this kernel and is crashing HWNs.

https://bugzilla.openvz.org/attachment.cgi?id=2186

https://bugzilla.openvz.org/show_bug.cgi?id=3125


----------

