
Sanity check - disk speed testing

D. Strout

Resident IPv6 Proponent
A little while back, I posted about a VPS of mine with URPad. As I mentioned then, I have experienced terrible disk speed at times, and I'm seeing the same right now, so I opened a ticket. I mentioned the disk speeds I was getting, but did not mention how I tested (which was with the standard dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync). This is how the URPad agent responded:

Hi Strout,

Let me know how you are tracking your disk speed. From my testing 1022KB files are downloaded in 0.02 seconds. Which seems to be normal from my end.
++++++++++
[ <=> ] 19,537 --.-K/s in 0.02s 

2014-08-15 09:55:57 (1022 KB/s) - 'index.html.1' saved [19537]
Is it just me, or is this a terrible way to test disk speed? Disk caching would make this appear to work fine no matter how fast the disk actually is, right? I'm 99% sure this is the case, but this guy is supposed to know what he's talking about, right? I did the same test he did, except I downloaded two 1GB test files back to back instead of one 10MB file, and saw the same disk speed issues (it's a 1GB VPS, so it runs out of memory to cache both files). Am I missing something?
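
For anyone who wants to reproduce the kind of test I'm describing, the rough sequence is below (just a sketch; the download URL is a placeholder for whatever large test file you use):

# Sequential write that forces the data to disk before reporting a speed
dd if=/dev/zero of=test.bin bs=64k count=16k conv=fdatasync

# Two back-to-back downloads, each about as large as the VPS's 1GB of RAM,
# so the page cache can't hide slow writes (the URL is a placeholder)
wget -O test1.bin http://example.com/1GB.test
wget -O test2.bin http://example.com/1GB.test

# Clean up
rm -f test.bin test1.bin test2.bin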
 

rds100

New Member
Verified Provider
Well, disk caching is there for a purpose and is used in real-world situations. You are trying to bypass the disk caching with your artificial benchmark, but this is not a real-world usage scenario.
 

D. Strout

Resident IPv6 Proponent
True, but overall the whole server feels very sluggish. Updates took forever, unpacking archives from another server was slow, and even directory listings were slow. That's why I turned to synthetic tests, which confirmed what I was experiencing. I really think the server is overloaded.
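
For what it's worth, a couple of quick checks that put numbers on that sort of sluggishness (assuming ioping can be installed from the distro's repositories; it is not part of a base install):

# Per-request disk latency, measured against the current directory
ioping -c 10 .

# Rough real-world check: time a recursive directory listing
time ls -lR /usr > /dev/null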
 

GIANT_CRAB

New Member
Well, disk caching is there for a purpose and is used in real-world situations. You are trying to bypass the disk caching with your artificial benchmark, but this is not a real-world usage scenario.
That's no excuse for the disk to be shit
 

SGC-Hosting

New Member
Verified Provider
Neither benchmark method is really accurate, but downloading a small file hardly qualifies as a disk benchmark. They could have greatly oversold their disk I/O - especially if it's OpenVZ virtualization. Is this a consistent issue? If it's intermittent, there could be a few other users hammering the disk as well. Surely they have some method of checking the node's load, no?
 

Jonathan

Woohoo
Verified Provider
Neither benchmark method is really accurate, but downloading a small file hardly qualifies as a disk benchmark. They could have greatly oversold their disk I/O - especially if it's OpenVZ virtualization. Is this a consistent issue? If it's intermittent, there could be a few other users hammering the disk as well. Surely they have some method of checking the node's load, no?
I'm getting really tired of seeing this. Just because a node is OpenVZ doesn't mean the disk is any more oversold than a Xen or KVM setup could be. In this case, sure, it probably is, but stop making generalizations that are totally without merit just because they're the same things everyone else is saying.

Do some research, then come back here and explain to me technically why OpenVZ can be oversold more than Xen or KVM when it comes to disk I/O. You won't find any technical reference backing this, so please stop spouting off this garbage.

EDIT: To add some useful information for OP:

Since UrPad does indeed use OpenVZ, it's pretty easy to track things down here. Check "top". Do you see high "wa" values even though you're not doing much I/O? Is your load average artificially inflated even at idle, with nothing going on?

If either of these is true, then something is indeed oversold, be it CPU or disk I/O. The hard part is getting them to admit which, since you can't figure this out yourself from within a VM. My guess is that it is indeed a disk I/O issue here.
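
Concretely, those checks could look something like this from inside the container (a sketch using the standard procps tools; nothing UrPad-specific):

# One-shot snapshot: the "wa" value on the %Cpu(s) line is time spent waiting on I/O
top -bn1 | head -n 5

# Load averages over 1, 5 and 15 minutes, straight from the kernel
cat /proc/loadavg

# Per-second view of iowait (the "wa" column) for ten seconds
vmstat 1 10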
 

SGC-Hosting

New Member
Verified Provider
I'm getting really tired of seeing this. Just because a node is OpenVZ doesn't mean the disk is any more oversold than a Xen or KVM setup could be. In this case, sure, it probably is, but stop making generalizations that are totally without merit just because they're the same things everyone else is saying.
All of them can be, and often are, oversold - I'll never argue that. I was certainly wrong in throwing in that generalization -- just because many low-end providers stuff their OpenVZ nodes doesn't make it a bad hypervisor in any way, nor does it mean all providers are doing the same. I guess I let my KVM fanboyism take control a little bit in my last comment.
 

Jonathan

Woohoo
Verified Provider
All of them can be, and often are, oversold - I'll never argue that. I was certainly wrong in throwing in that generalization -- just because many low-end providers stuff their OpenVZ nodes doesn't make it a bad hypervisor in any way, nor does it mean all providers are doing the same. I guess I let my KVM fanboyism take control a little bit in my last comment.
No problem :) Just bothers me when people throw around nonsensical lies because of what some people choose to do with OpenVZ. Doesn't make it a bad hypervisor by any means. It still has a bit less overhead than Xen/KVM and this will likely never change ;)
 

tonyg

New Member
There is nothing wrong with a synthetic benchmark...as long as the person understands the intricacies of the tests.

Try clearing all the caches before disk tests:

sync && echo 3 > /proc/sys/vm/drop_caches
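
Put together with the dd test from earlier in the thread, a cache-aware run might look like this (a sketch; dropping caches needs root and, as noted later in the thread, isn't possible inside an OpenVZ container):

# Write test that flushes to disk before reporting a speed
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync

# Drop the page cache, dentries and inodes so the next read can't be served from RAM
sync && echo 3 > /proc/sys/vm/drop_caches

# Read the file back, now forced to come from the actual disk
dd if=testfile of=/dev/null bs=64k

# Clean up
rm -f testfile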
 

devonblzx

New Member
Verified Provider
No problem :) Just bothers me when people throw around nonsensical lies because of what some people choose to do with OpenVZ. Doesn't make it a bad hypervisor by any means. It still has a bit less overhead than Xen/KVM and this will likely never change ;)

When you say a bit less, I think you mean a lot less. Try running 1000 KVM/Xen VMs on a host without it crashing. From our testing, an OpenVZ VPS, being native, actually has about 30% better IOPS than Xen and KVM after full optimization, with access to the same resources. This was tested on LVM, raw, and qcow. Xen and KVM were pretty similar in numbers; both had overhead compared to the host. OpenVZ showed pretty much the same numbers as the host.

The only logical reason OpenVZ can be oversold more than the others is that it has less overhead, so it allows you to run more VPSes per host. That isn't necessarily a bad thing: when hosts actually care about their customers, restrict their overselling, and monitor their nodes, that lower overhead turns into higher performance for the end user.

KVM/Xen offer their advantages when a customer needs custom kernel modules, ISO installations, Windows, or a non-Linux Unix, but when it comes to general web hosting, or the most efficient way of splitting up a Linux server, OpenVZ (or Virtuozzo) is the way to go.
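
For reference, random I/O can be compared with a minimal fio run like the one below (assuming fio is installed; this is a generic 4k random-read job, not the exact benchmark behind the numbers above):

# 60-second 4k random read test against a 1GB file, bypassing the page cache
# (drop --direct=1 if the container's filesystem refuses O_DIRECT)
fio --name=randread --rw=randread --bs=4k --size=1G --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting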
 

Shoaib_A

Member
All of them can be, and often are, oversold - I'll never argue that. I was certainly wrong in throwing in that generalization -- just because many low-end providers stuff their OpenVZ nodes doesn't make it a bad hypervisor in any way, nor does it mean all providers are doing the same. I guess I let my KVM fanboyism take control a little bit in my last comment.
Just FYI, OpenVZ/Virtuozzo is a container-based virtualization solution at the OS level. It is not a true hypervisor. Also, I have nothing against OpenVZ/Virtuozzo.
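
If you want to see the difference from inside a VM, a quick check is below (a sketch; systemd-detect-virt ships with systemd, and /proc/user_beancounters only exists inside OpenVZ/Virtuozzo containers):

# Prints "openvz", "kvm", "xen", etc. depending on the environment
systemd-detect-virt

# Present only under OpenVZ/Virtuozzo; shows the container's resource counters
cat /proc/user_beancounters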
 

Jonathan

Woohoo
Verified Provider
When you say a bit less, I think you mean a lot less. Try running 1000 KVM/Xen VMs on a host without it crashing. From our testing, an OpenVZ VPS, being native, actually has about 30% better IOPS than Xen and KVM after full optimization, with access to the same resources. This was tested on LVM, raw, and qcow. Xen and KVM were pretty similar in numbers; both had overhead compared to the host. OpenVZ showed pretty much the same numbers as the host.

The only logical reason OpenVZ can be oversold more than the others is that it has less overhead, so it allows you to run more VPSes per host. That isn't necessarily a bad thing: when hosts actually care about their customers, restrict their overselling, and monitor their nodes, that lower overhead turns into higher performance for the end user.

KVM/Xen offer their advantages when a customer needs custom kernel modules, ISO installations, Windows, or a non-Linux Unix, but when it comes to general web hosting, or the most efficient way of splitting up a Linux server, OpenVZ (or Virtuozzo) is the way to go.
Agreed.

Just FYI, OpenVZ/Virtuozzo is a container-based virtualization solution at the OS level. It is not a true hypervisor. Also, I have nothing against OpenVZ/Virtuozzo.
Yep, we know.
 

D. Strout

Resident IPv6 Proponent
Thanks guys. I finally got URPad to own up to things; they said they terminated a client who was abusing resources. We'll see how that turns out.
 

fm7

Active Member
There is nothing wrong with a synthetic benchmark...as long as the person understands the intricacies of the tests.

Try clearing all the caches before disk tests:

sync && echo 3 > /proc/sys/vm/drop_caches
AFAIK you can't drop caches in OpenVZ :)

The dd test is essential if you usually download huge files at high bitrates, because that workload requires a sustained high sequential write rate / disk throughput (1Gbit -> ~100MB/s, 10Gbit -> ~1GB/s, 40-56Gbit (InfiniBand) -> 4-5GB/s ...)
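
As a rough sketch of that scenario, sizing the write well past a 1GB VPS's RAM keeps the page cache from absorbing it; at 1Gbit line rate the run below would need to sustain roughly 100MB/s to finish in about 20 seconds:

# 2GB sequential write, flushed to disk before the speed is reported
dd if=/dev/zero of=bigfile bs=1M count=2048 conv=fdatasync
rm -f bigfile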
 

Schultz

New Member
The most important question in this thread is: have you submitted a ticket outlining the issues you're having?
 