
AnyNode OpenVZ 512MB

wlanboy

Content Contributor
Provider: AnyNode
Plan: OpenVZ 512MB VPS
Price: $6 per month (free for 3 months of beta testing)
Location: Chicago, IL

Purchased: 06/2013

Hardware information:

  • cat /proc/cpuinfo
    Code:
    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 26
    model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
    stepping : 5
    cpu MHz : 2266.746
    cache size : 4096 KB
    physical id : 0
    siblings : 3
    core id : 0
    cpu cores : 3
    apicid : 0
    initial apicid : 0
    fpu : yes
    fpu_exception : yes
    cpuid level : 11
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc arch_perfmon rep_good unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt hypervisor lahf_lm
    bogomips : 4533.49
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:

  • cat /proc/meminfo
    Code:
    MemTotal:         524288 kB
    MemFree:          324096 kB
    Cached:           141888 kB
    Active:            96720 kB
    Inactive:          92268 kB
    Active(anon):      18264 kB
    Inactive(anon):    28836 kB
    Active(file):      78456 kB
    Inactive(file):    63432 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:        262144 kB
    SwapFree:         262144 kB
    Dirty:                 8 kB
    Writeback:             0 kB
    AnonPages:         47100 kB
    Shmem:              2604 kB
    Slab:              11192 kB
    SReclaimable:       8208 kB
    SUnreclaim:         2984 kB
  • dd
    Code:
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm -rf test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 14.6437 s, 73.3 MB/s
  • second dd
    Code:
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm -rf test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 15.4999 s, 69.3 MB/s
  • wget
    Code:
     wget cachefly.cachefly.net/100mb.test -O /dev/null
    --2013-07-17 11:29:35--  http://cachefly.cachefly.net/100mb.test
    Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
    Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `/dev/null'
    
    100%[========================================================================================================================================================>] 104,857,600 11.2M/s   in 9.0s
    
    2013-07-17 11:29:44 (11.1 MB/s) - `/dev/null' saved [104857600/104857600]
What services are running?

  • MongoDB
  • Node.js dev area

Support:

First note: This is a beta test of their service. Three free months in exchange for a lot of testing and suggestions.

The first month was rough. They had to fix the Fedora and Ubuntu templates, the network was not as good as it is today, there was a problem with vswap too, and their self-developed VPS panel needed some workflow enhancements.

But the second month went by without any issues. The support guys are friendly but take their time to respond to tickets.

I think that was caused by the huge amount of testing.

Looking back at the first month, I remember creating at least one ticket a day, reinstalling my VPS to test each template, and setting up my Node.js environment again and again to check that everything was working as it should.

Currently I am using the Debian 7 template, a well-made minimal-install template that I really like.
 

Overall experience:

I have another VPS in Chicago, and as far as I can tell the network is as good as the other provider's. So he.net is not as bad as its reputation as an uplink provider (for EU clients) suggests. CPU is OK, but the dd results are lacking sometimes. That is just a number, though; the system feels fast and MongoDB is running well.

I will create a ticket today to see whether some abuser is hammering the disks, and I will update the review with a second dd test afterwards.

Final words: I would recommend them to anyone searching for a friendly provider in Chicago.

traceroute to lemonde.fr:

Code:
2 74.119.218.185.rdns.continuumdatacenters.com (74.119.218.185) 0.633 ms 0.519 ms 0.985 ms
3 gige-g11-16.core1.chi1.he.net (184.105.253.1) 3.977 ms 3.939 ms 3.639 ms
4 10gigabitethernet3-2.core1.atl1.he.net (184.105.223.226) 21.447 ms 21.449 ms 21.462 ms
5 10gigabitethernet4-1.core1.mia1.he.net (72.52.92.54) 37.695 ms 37.679 ms 37.571 ms
6 miami-6k-1.proxad.net (198.32.124.192) 176.709 ms 175.172 ms 173.878 ms
7 washington-6k-1-po4.intf.routers.proxad.net (212.27.56.185) 129.501 ms 130.630 ms *
8 th2-6k-3-po10.intf.routers.proxad.net (212.27.57.13) 129.606 ms 129.679 ms 131.816 ms
9 th2-crs16-1-be1006.intf.routers.proxad.net (212.27.59.205) 130.977 ms 129.926 ms 130.297 ms
10 dedibox-1-p.intf.routers.proxad.net (212.27.58.46) 172.053 ms 172.216 ms 172.116 ms
11 a9k1-1013.dc3.online.net (88.191.1.132) 130.964 ms 130.893 ms 133.018 ms
12 6k1-1046.dc2.online.net (88.191.1.254) 171.867 ms 172.151 ms 171.881 ms

traceroute to dvhn.nl:

Code:
2 74.119.218.185.rdns.continuumdatacenters.com (74.119.218.185) 222.644 ms 222.627 ms 222.521 ms
3 gige-g11-16.core1.chi1.he.net (184.105.253.1) 1.719 ms 1.853 ms 1.797 ms
4 100gigabitethernet7-2.core1.nyc4.he.net (184.105.223.162) 18.825 ms 19.029 ms 18.971 ms
5 10gigabitethernet6-4.core1.lon1.he.net (72.52.92.242) 97.228 ms 97.152 ms 87.345 ms
6 linx-2601.ge-0-1-0.jun1.thn.network.bit.nl (195.66.225.51) 87.560 ms 87.781 ms 87.713 ms
7 806.xe-0-0-0.jun1.bit-2a.network.bit.nl (213.136.1.109) 102.292 ms 102.213 ms 102.551 ms

traceroute to sueddeutsche.de:

Code:
2 74.119.218.185.rdns.continuumdatacenters.com (74.119.218.185) 0.620 ms 0.806 ms 0.725 ms
3 gige-g11-16.core1.chi1.he.net (184.105.253.1) 1.437 ms 1.376 ms 1.284 ms
4 100gigabitethernet7-2.core1.nyc4.he.net (184.105.223.162) 18.466 ms 18.414 ms 18.356 ms
5 10gigabitethernet6-4.core1.lon1.he.net (72.52.92.242) 87.297 ms 87.245 ms 87.323 ms
6 ldngw1.arcor-ip.net (195.66.224.209) 89.882 ms 87.437 ms lndgw2.arcor-ip.net (195.66.224.124) 134.071 ms
7 85.205.25.117 (85.205.25.117) 91.863 ms 85.205.25.113 (85.205.25.113) 91.356 ms 91.416 ms
8 92.79.213.165 (92.79.213.165) 107.949 ms 107.895 ms 107.451 ms
9 92.79.201.226 (92.79.201.226) 155.073 ms 155.196 ms 155.132 ms
10 92.79.203.158 (92.79.203.158) 143.793 ms 143.982 ms 144.786 ms
11 188.111.149.118 (188.111.149.118) 146.464 ms 136.345 ms 143.979 ms
12 195.50.167.227 (195.50.167.227) 145.258 ms 144.939 ms 145.141 ms

traceroute to washingtonpost.com:

Code:
 2  74.119.218.185.rdns.continuumdatacenters.com (74.119.218.185)  0.591 ms  0.700 ms  0.684 ms
 3  gige-g11-16.core1.chi1.he.net (184.105.253.1)  2.681 ms  2.408 ms  2.065 ms
 4  mpr1.ord7.us (206.223.119.86)  5.610 ms  5.748 ms  6.148 ms
 5  xe-2-3-0.cr1.ord2.us.above.net (64.125.22.209)  5.932 ms  6.179 ms  6.000 ms
 6  xe-2-1-0.cr1.lga5.us.above.net (64.125.27.170)  29.777 ms  29.059 ms  29.199 ms
 7  xe-3-2-0.cr1.dca2.us.above.net (64.125.26.101)  28.084 ms  28.052 ms  29.497 ms
 8  xe-1-1-0.mpr3.iad1.us.above.net (64.125.31.113)  28.421 ms  27.027 ms  27.072 ms
 9  64.124.201.150.allocated.above.net (64.124.201.150)  27.891 ms  27.776 ms  27.502 ms
10  208.185.109.100 (208.185.109.100)  27.799 ms  27.189 ms  28.016 ms
 

wlanboy

Content Contributor
I ran a second dd. Pretty much the same results.

Got an answer on my ticket too:

Code:
The low dd speeds are likely due to the fact we're running 4 disk RAID10s per node. 
I'm seeing around 100 MB/s on the node side which is acceptable but still low. 
Our next batch of nodes will be running 8 disk RAID10s with a more powerful controller 
so we should be seeing significant I/O improvement then.
 

Jack

Active Member
I ran a second dd. Pretty much the same results.

Got an answer on my ticket too:


The low dd speeds are likely due to the fact we're running 4 disk RAID10s per node.
I'm seeing around 100 MB/s on the node side which is acceptable but still low.
Our next batch of nodes will be running 8 disk RAID10s with a more powerful controller
so we should be seeing significant I/O improvement then.

I run 4 disks in RAID10, with and without HW controllers, and they do 150-200 MB/s.

What disks are you using to get <100 MB/s?
 

scv

Massive Nerd
Verified Provider
We're using 4x WD Red 3TB in each node. I just ran another set of dd tests and the results are more like what I'd expect, but still lacking.

Code:
[scaveney@n1 ~]$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.46158 s, 166 MB/s
[scaveney@n1 ~]$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.11453 s, 176 MB/s
[scaveney@n1 ~]$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.83297 s, 157 MB/s


As was stated in the ticket wlanboy posted, our next batch of nodes is going to be running a more powerful controller with 8 disks. In addition to that, we have a few tweaks we'd like to make to our nodes, but they'll require a reboot, so we're holding off until our next scheduled maintenance window.
 

SeriesN

Active Member
Verified Provider
Jack,


You need to understand that this is a beta node. Probably everyone is benchmarking and abusing the shit out of it.

I run 4 disks in RAID10, with and without HW controllers, and they do 150-200 MB/s.


What disks are you using to get <100 MB/s?
 

MartinD

Retired Staff
Verified Provider
Ahh, good old 'dd' making an appearance. The 'benchmark' for everyone.

If only people actually knew what they were doing it would be a touch more palatable.

Say after me everyone....

'dd' IS NOT INDICATIVE OF REAL WORLD PERFORMANCE.
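
For anyone wondering what a more workload-like test than a single sequential dd write looks like, a small random-I/O fio run is one common option. The command below is only an illustrative sketch (fio isn't mentioned elsewhere in this thread and the parameters are arbitrary examples), not a benchmark anyone here ran:

Code:
# 4k random read/write mix with direct I/O -- closer to a database-style
# workload than a sequential dd write (illustrative parameters only)
fio --name=randrw --ioengine=libaio --direct=1 --rw=randrw --rwmixread=75 \
    --bs=4k --size=512m --runtime=60 --time_based --group_reporting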
 

notFound

Don't take me seriously!
Verified Provider
Ahh, good old 'dd' making an appearance. The 'benchmark' for everyone.

If only people actually knew what they were doing it would be a touch more palatable.

Say after me everyone....

'dd' IS NOT INDICATIVE OF REAL WORLD PERFORMANCE.
Hehe, I prefer ioping, which is much more indicative of 'real life disk performance'.
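
For reference, a couple of typical ioping invocations look like this (commands only, as a sketch; these were not run on the VPS reviewed in this thread):

Code:
# latency of small random read requests against the current directory, 10 requests
ioping -c 10 .
# seek-rate test using direct I/O, bypassing the page cache
ioping -RD .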
 