# IO Ping Score



## cfg.co.in (Nov 5, 2013)

Is this a bad IO ping score?

```
[[email protected] ioping-0.7]# ./ioping .
4.0 kb from . (ext3 /dev/root): request=1 time=173 us
4.0 kb from . (ext3 /dev/root): request=2 time=192 us
4.0 kb from . (ext3 /dev/root): request=3 time=173 us
4.0 kb from . (ext3 /dev/root): request=4 time=226 us
4.0 kb from . (ext3 /dev/root): request=5 time=197 us
4.0 kb from . (ext3 /dev/root): request=6 time=120 us
4.0 kb from . (ext3 /dev/root): request=7 time=165 us
4.0 kb from . (ext3 /dev/root): request=8 time=173 us
4.0 kb from . (ext3 /dev/root): request=9 time=279 us
4.0 kb from . (ext3 /dev/root): request=10 time=178 us
4.0 kb from . (ext3 /dev/root): request=11 time=171 us
4.0 kb from . (ext3 /dev/root): request=12 time=247 us
4.0 kb from . (ext3 /dev/root): request=13 time=220 us
4.0 kb from . (ext3 /dev/root): request=14 time=124 us
4.0 kb from . (ext3 /dev/root): request=15 time=157 us
4.0 kb from . (ext3 /dev/root): request=16 time=435 us
4.0 kb from . (ext3 /dev/root): request=17 time=110 us
4.0 kb from . (ext3 /dev/root): request=18 time=117 us
4.0 kb from . (ext3 /dev/root): request=19 time=174 us
4.0 kb from . (ext3 /dev/root): request=20 time=197 us
4.0 kb from . (ext3 /dev/root): request=21 time=422 us
4.0 kb from . (ext3 /dev/root): request=22 time=142 us

--- . (ext3 /dev/root) ioping statistics ---
22 requests completed in 21.5 s, 5.0 k iops, 19.6 mb/s
min/avg/max/mdev = 110 us / 199 us / 435 us / 83 us

[[email protected] ioping-0.7]# ./ioping -R /dev/root

--- /dev/root (device 2.5 Gb) ioping statistics ---
8.5 k requests completed in 3.0 s, 3.0 k iops, 11.6 mb/s
min/avg/max/mdev = 71 us / 337 us / 3.5 ms / 110 us

[[email protected] ioping-0.7]# ./ioping -RL /dev/root

--- /dev/root (device 2.5 Gb) ioping statistics ---
643 requests completed in 3.0 s, 214 iops, 53.6 mb/s
min/avg/max/mdev = 3.4 ms / 4.7 ms / 8.5 ms / 675 us
```


----------



## Magiobiwan (Nov 5, 2013)

At 5k IOPS I'd say that's pretty darn good.


----------



## budi1413 (Nov 5, 2013)

Yeah, that's really good. What's wrong?


----------



## cfg.co.in (Nov 5, 2013)

Frankly speaking, these tests were done on test servers sitting on a node under heavy I/O abuse.

We just wanted to make sure that containers don't affect each other's performance when one of them is abusing I/O on the node.


----------



## devonblzx (Nov 6, 2013)

1.)  In the last test, why are you using seek rate and sequential together?  Those conflict in my mind.  Seek rate is a good way of testing random I/O.

2.)  For someone who hosts servers, you should know what good IOPS numbers look like.  3k IOPS is the most realistic figure there, since you got it from the seek-rate test, and it's fairly good.  Most SSDs I see are around 10k IOPS, but that's under zero load.  SATA hard drives only manage 100-300 IOPS (300 being a 10,000 RPM drive).

Don't rely on ioping results, since it's a very short benchmark and may not be 100% accurate, but it can be a good baseline.

Are you running SSDs, or just a RAID card with memory caching?
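For reference, the random vs. sequential modes discussed above can be compared directly. This is a sketch assuming the ioping 0.7 flag semantics used earlier in the thread (`-R` for the seek-rate test, `-L` to switch to sequential requests):

```shell
# Sketch assuming ioping 0.7 flag behavior (as used in the original post).
# -R: seek-rate test, back-to-back requests for ~3 s -> reports IOPS
# -L: use sequential requests instead of random ones   -> reports MB/s
./ioping -R  /dev/root   # random seeks: the realistic IOPS number
./ioping -RL /dev/root   # sequential:   streaming throughput, not seeks
./ioping -c 10 .         # default mode: 10 latency probes in the cwd
```

Combining `-R` with `-L`, as in the original post's last test, turns the "seek rate" run into a sequential-throughput run, which is why its 214 IOPS isn't comparable to the 3k IOPS from `-R` alone.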


----------



## MartinD (Nov 6, 2013)

This is what you call psychological marketing.


Edit: or more commonly, spam.


----------



## Deleted (Nov 7, 2013)

Why do people care about this stuff? ioping uses a syscall to measure latency, and you cannot get accurate measurements from userland because of context switching. On top of that, a majority of the 'latency' comes from the disk controller itself, which sits far, far away behind layers of the storage stack.


----------



## MartinD (Nov 7, 2013)

I've been saying that for god knows how long. The same with 'dd' "tests" too. No-one listens.


----------



## Enterprisevpssolutions (Nov 7, 2013)

The reason people still use this is that sites like ServerBear post hosting plans with benchmarks, and clients compare the results. One of the scores is IOPS (https://code.google.com/p/ioping/); the other is an I/O benchmark that uses dd. Some of the sysadmins/clients know the difference, and know it's just a baseline score and not a real-world result. But you have a lot of clients who don't know the difference; they just see the high score, and that is enough for them to choose who to go to.


----------



## Shados (Nov 7, 2013)

Because benchmarking for your actual, application/project-specific performance requirements can't be done with a trivial, generic one-liner script and compared for e-peen with other so-called 'sysadmins'.


----------



## MartinD (Nov 8, 2013)

Enterprisevpssolutions said:


> The reason people still use this is that sites like ServerBear post hosting plans with benchmarks, and clients compare the results. One of the scores is IOPS (https://code.google.com/p/ioping/); the other is an I/O benchmark that uses dd. Some of the sysadmins/clients know the difference, and know it's just a baseline score and not a real-world result. But you have a lot of clients who don't know the difference; they just see the high score, and that is enough for them to choose who to go to.


Yes, thus perpetuating the idea that high scores are 'better' and lower scores mean shit hosts.

What's more tragic is the number of people who actually use these kinds of 'tests' as a benchmark for performance. Sit in the IRC channel and just watch how many times people ask the bot for the 'dd' command. Whatever happened to the days when people made sure services were operating as they should, and sold services (such as VPSes) that just worked? It seems to be a race for the OMGLELHUGEIOPS now instead of reliability.


----------



## drmike (Nov 8, 2013)

MartinD said:


> Yes, thus perpetuating this idea that high scores are 'better' and lower scores mean shit hosts.


Well, I've never been a fan of the e-penis server speedtest death races.  I think they are all *flawed*.  Why?  Because no two environments are the same.  A container may be sitting on a perfectly peppy system, but settings intentionally limit its I/O operations per second.

What I care about as a purchaser:

1. Network throughput (when is everyone going to get with it and allocate gigabit already?)
2. Speed of my standard setup steps (mainly apt-get fetching and installing the heap of requirements).
3. Long term, low iowait.

Lately, I've been really liking DigitalOcean and the performance of their servers.  Noticeably faster than what I see with most VPS providers on my install setups.

Here's the question... As providers, what do you propose, or currently use, to get a good indication of the actual performance and experience of your containers?

(Yes, reliability should be a huge emphasis  )


----------



## peterw (Nov 8, 2013)

dd is an indicator that has to be interpreted. If a company sells "pure SSD" plans and you get 0.9 MB/s, it might not be an SSD plan. But you won't notice any difference between 100 MB/s and 200 MB/s.


----------



## lbft (Nov 8, 2013)

MartinD said:


> Sit in the IRC channel and just watch how many times people ask the bot for the 'dd' command.


It's a heuristic. The "dd test" is useful as long as you understand that it's merely indicative, can be gamed, and is absolutely nothing like typical VPS usage.

I thought a few examples might help illustrate my point, so here are dd results and my subjective interpretation of the speed of each VPS (based mostly on the speed of running my setup script, since that's both resource-intensive and essentially consistent across VPSes, whereas their workloads afterwards aren't necessarily so).


```
1073741824 bytes (1.1 GB) copied, 12.9992 s, 82.6 MB/s
```

This VPS runs great. I'm very happy with it.

```
1073741824 bytes (1.1 GB) copied, 2.99736 s, 358 MB/s
```

This VPS runs significantly faster than most, because it is a pure SSD VPS on a well-managed node. I have a comparatively high-I/O workload that I am in the process of moving here, but for a typical workload I'd notice zero difference from the previous VPS.

```
1073741824 bytes (1.1 GB) copied, 305.249 s, 3.5 MB/s
```

This VPS performs atrociously, and I have already submitted a cancellation request for it, since simple tasks like installing a package take an eternity. iowait is off the charts doing the most mundane things.

```
1073741824 bytes (1.1 GB) copied, 2.84897 s, 377 MB/s
```

This VPS runs very well, but I have submitted a cancellation for it anyway, since the provider is a sleazy bastard who I can't trust with my data (a case where the dd doesn't give the whole picture).

```
1073741824 bytes (1.1 GB) copied, 18.2293 s, 58.9 MB/s
```

This VPS performs a bit below average and is unlikely to get renewed - but that's nothing to do with disk performance, and everything to do with network performance (of which you can see precisely nothing here - another case where the dd doesn't give the whole picture).

If I need a rough idea of how a VPS is most likely to perform, I absolutely will still use that silly dd command, in combination with observing its performance doing stuff. If I need real hard numbers, I'll figure out a benchmark approximating the particular real-world use I have planned for that VPS.
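For completeness, the "silly dd command" behind numbers like those above is typically something along these lines (the exact invocation is my assumption; the thread never spells it out). It writes 1 GiB of zeros, forces the data to disk before reporting, and prints the throughput line quoted in the examples:

```shell
# Common form of the forum "dd test" (assumed invocation, not quoted from the thread).
# bs=64k count=16k writes 16384 * 65536 = 1073741824 bytes (1 GiB).
# conv=fdatasync makes dd flush to disk before reporting, so the page
# cache doesn't inflate the MB/s figure.
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync
rm -f testfile
```

Note that this measures only sequential write throughput; it says nothing about the random-read latency that ioping probes, which is one more reason the two scores shouldn't be compared.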


----------

