
Vultr KVM 512 MB (NY)

wlanboy

Content Contributor
Provider: Vultr.com
Plan: KVM 512 MB VPS
Price: $5 per month
Location: New Jersey

Purchased: 03/2014

This is one of the reviews sponsored by vpsBoard.

I will update each review every two months and add notes on what happened during that time.

MannDude is funding the reviews; we randomly select providers and test their service, their panels, and their support.

If you want to discuss this topic -> start here.

So back to the review of Vultr.

Hardware information:

  • cat /proc/cpuinfo

    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 60
    model name : Vultr Virtual CPU 2
    stepping : 1
    microcode : 0x1
    cpu MHz : 3399.996
    cache size : 4096 KB
    physical id : 0
    siblings : 1
    core id : 0
    cpu cores : 1
    apicid : 0
    initial apicid : 0
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc up rep_good nopl pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm xsaveopt fsgsbase smep erms
    bogomips : 6799.99
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:

  • cat /proc/meminfo
    Code:
    MemTotal:         508948 kB
    MemFree:           49076 kB
    Buffers:           21796 kB
    Cached:           352972 kB
    SwapCached:            4 kB
    Active:           133400 kB
    Inactive:         291260 kB
    Active(anon):       4012 kB
    Inactive(anon):    45964 kB
    Active(file):     129388 kB
    Inactive(file):   245296 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:        901116 kB
    SwapFree:         901020 kB
    Dirty:                 0 kB
    Writeback:             0 kB
    AnonPages:         49912 kB
    Mapped:           214352 kB
    Shmem:                84 kB
    Slab:              26188 kB
    SReclaimable:      20084 kB
    SUnreclaim:         6104 kB
    KernelStack:         568 kB
    PageTables:         2480 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:     1155588 kB
    Committed_AS:     224924 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:        1624 kB
    VmallocChunk:   34359736735 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:         0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:       40952 kB
    DirectMap2M:      483328 kB
  • dd
    Code:
    dd if=/dev/zero of=test bs=16k count=8k conv=fdatasync && rm -rf test
    8192+0 records in
    8192+0 records out
    134217728 bytes (134 MB) copied, 0.310988 s, 432 MB/s
  • wget
    Code:
    wget cachefly.cachefly.net/100mb.test -O /dev/null
    --2014-04-20 13:47:22--  http://cachefly.cachefly.net/100mb.test
    Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
    Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `/dev/null'
    
    100%[===========================================================================================>] 104,857,600 78.0M/s   in 1.3s
    
    2014-04-20 13:47:24 (78.0 MB/s) - `/dev/null' saved [104857600/104857600]
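For anyone repeating the dd write test above, here is a minimal sketch that wraps the same command and captures the throughput line; the /tmp path is a placeholder, and conv=fdatasync makes dd flush to disk before reporting, so the page cache does not inflate the number:

```shell
# Re-run the review's 128 MB write test and grab dd's summary line.
# /tmp/ddtest is a placeholder path; conv=fdatasync forces the data to disk
# before dd prints its stats, so cached writes do not inflate the result.
RESULT=$(dd if=/dev/zero of=/tmp/ddtest bs=16k count=8k conv=fdatasync 2>&1 | tail -n1)
rm -f /tmp/ddtest
echo "$RESULT"
```

The last line of dd's output is the one quoted in the review ("134217728 bytes ... copied, ... MB/s").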
Network:

traceroute dvhn.nl:


2 ethernet21-3-br2.pnj1.choopa.net (108.61.138.65) 0.268 ms 0.280 ms 0.279 ms
3 ae3.ar2.ewr1.us.nlayer.net (69.31.95.5) 1.423 ms 1.466 ms ae7.ar1.nyc3.us.nlayer.net (69.31.34.61) 4.661 ms
4 ae0-315.nyc41.ip4.tinet.net (199.229.230.93) 0.683 ms ae5-40g.cr1.nyc2.us.nlayer.net (69.31.34.133) 0.992 ms ae0-315.nyc41.ip4.tinet.net (199.229.230.93) 0.683 ms
5 ae4-133.nyc20.ip4.tinet.net (199.229.230.13) 1.001 ms 1.059 ms 1.059 ms
6 xe-3-0-2.lon10.ip4.tinet.net (89.149.183.57) 75.177 ms bit-gw.ip4.tinet.net (77.67.75.70) 70.888 ms xe-3-0-2.lon10.ip4.tinet.net (89.149.183.57) 75.185 ms
7 805.xe-0-0-0.jun1.bit-1.network.bit.nl (213.136.1.105) 98.832 ms bit-gw.ip4.tinet.net (77.67.75.70) 75.434 ms 806.xe-0-0-0.jun1.bit-2a.network.bit.nl (213.136.1.109) 98.139 ms
8 * * 806.xe-0-0-0.jun1.bit-2a.network.bit.nl (213.136.1.109) 102.739 ms

traceroute theguardian.co.uk:


2 ethernet21-3-br1.pnj1.choopa.net (108.61.138.61) 0.264 ms 0.278 ms 0.275 ms
3 ae7.ar2.nyc3.us.nlayer.net (69.31.34.77) 2.416 ms ae2.ar2.ewr1.us.nlayer.net (69.31.34.209) 2.002 ms ae7.ar2.nyc3.us.nlayer.net (69.31.34.77) 2.396 ms
4 ae6-40g.cr1.nyc2.us.nlayer.net (69.31.34.135) 1.003 ms ae7-40g.cr1.nyc2.us.nlayer.net (69.31.34.126) 0.994 ms ae6-40g.cr1.nyc2.us.nlayer.net (69.31.34.135) 0.998 ms
5 ae4-133.nyc20.ip4.tinet.net (199.229.230.13) 1.046 ms 1.153 ms 1.066 ms
6 xe-5-0-0.nyc32.ip4.tinet.net (213.200.80.122) 1.188 ms xe-8-3-0.nyc30.ip4.tinet.net (89.149.183.42) 1.209 ms xe-5-0-0.nyc32.ip4.tinet.net (213.200.80.122) 1.180 ms
7 te0-7-0-8.ccr21.jfk07.atlas.cogentco.com (154.54.10.141) 1.606 ms 3.047 ms 1.619 ms
8 be2056.ccr21.jfk02.atlas.cogentco.com (154.54.44.217) 2.017 ms be2059.mpd22.jfk02.atlas.cogentco.com (154.54.1.221) 2.067 ms be2057.ccr22.jfk02.atlas.cogentco.com (154.54.80.177) 1.954 ms
9 be2349.mpd21.lon13.atlas.cogentco.com (154.54.30.178) 71.914 ms be2347.ccr21.lon13.atlas.cogentco.com (154.54.27.142) 73.480 ms be2349.mpd21.lon13.atlas.cogentco.com (154.54.30.178) 73.280 ms
10 be2314.ccr21.lon01.atlas.cogentco.com (154.54.72.254) 73.114 ms 73.165 ms be2316.ccr21.lon01.atlas.cogentco.com (154.54.73.114) 73.181 ms
11 te1-1.mag02.lon01.atlas.cogentco.com (154.54.74.110) 72.340 ms te2-1.mag02.lon01.atlas.cogentco.com (154.54.74.114) 72.214 ms te1-1.mag02.lon01.atlas.cogentco.com (154.54.74.110) 87.701 ms
12 149.11.142.74 (149.11.142.74) 137.738 ms 137.643 ms 137.570 ms

traceroute sueddeutsche.de:


2 ethernet21-3-br2.pnj1.choopa.net (108.61.138.65) 2.270 ms 2.296 ms 2.345 ms
3 ae3.ar2.ewr1.us.nlayer.net (69.31.95.5) 4.727 ms 1.919 ms ae7.ar1.nyc3.us.nlayer.net (69.31.34.61) 3.017 ms
4 network (69.31.34.128) 3.076 ms 3.130 ms xe-3-2-0-dcr1.nyk.cw.net (69.31.94.34) 0.967 ms
5 ae1-xcr1.nyb.cw.net (195.2.10.182) 84.794 ms xe-3-2-0-dcr1.nyk.cw.net (69.31.94.34) 0.974 ms 0.971 ms
6 ae0-xcr1.man.cw.net (195.2.28.169) 86.961 ms 86.207 ms ae3-xcr2.lnd.cw.net (195.2.30.165) 85.081 ms
7 ae3-xcr2.lsw.cw.net (195.2.28.182) 81.032 ms ae3-xcr2.lnd.cw.net (195.2.30.165) 85.144 ms ae0-xcr1.man.cw.net (195.2.28.169) 85.974 ms
8 ae6-xcr1.amd.cw.net (195.2.25.37) 78.328 ms 81.191 ms 81.443 ms
9 ae6-xcr1.amd.cw.net (195.2.25.37) 80.070 ms ae3-xcr2.lsw.cw.net (195.2.28.182) 84.454 ms ae4-xcr1.amt.cw.net (195.2.25.222) 82.533 ms
10 vodafone-gw1.amt.cw.net (208.173.212.2) 91.478 ms ae4-xcr1.amt.cw.net (195.2.25.222) 84.299 ms vodafone-gw1.amt.cw.net (208.173.212.2) 89.809 ms
11 vodafone-gw1.amt.cw.net (208.173.212.2) 91.306 ms 92.79.213.137 (92.79.213.137) 90.071 ms vodafone-gw1.amt.cw.net (208.173.212.2) 91.294 ms
12 92.79.213.137 (92.79.213.137) 90.059 ms 86.088 ms 90.139 ms
13 92.79.201.226 (92.79.201.226) 93.426 ms 188.111.129.246 (188.111.129.246) 89.255 ms 89.246 ms
14 188.111.129.246 (188.111.129.246) 89.261 ms 92.79.201.226 (92.79.201.226) 91.692 ms 91.855 ms
15 92.79.203.158 (92.79.203.158) 92.869 ms 91.068 ms 89.346 ms
16 188.111.149.118 (188.111.149.118) 95.474 ms 92.79.203.158 (92.79.203.158) 92.973 ms 188.111.149.118 (188.111.149.118) 95.261 ms
17 145.253.180.29 (145.253.180.29) 97.403 ms 188.111.149.118 (188.111.149.118) 97.046 ms^C

What services are running?

  • MongoDB cluster node
  • Ruby cron jobs
  • Branch of wlanboy.com
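As a sketch of how such Ruby cron jobs can be wired up (the script path, schedule, and log file below are made-up examples, not details from the review):

```shell
# Hypothetical crontab entry running a Ruby job every hour; the script path
# and log file are placeholders, not details from the review.
CRON_LINE='0 * * * * /usr/bin/ruby /home/wlanboy/jobs/update_stats.rb >> /var/log/ruby-jobs.log 2>&1'
echo "$CRON_LINE"
# To install it (left commented so the sketch has no side effects):
# (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```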
Support:

I wrote several tickets that were all answered within 10 minutes.

Some templates and the frontend itself had some quite serious bugs.

vultr-error1.jpg

After waiting some minutes I started the VNC console (no Java needed!) to see why the instance was not running.

What did I see?

The automated installation was waiting for a response because the source config was broken.

To all KVM providers: Please check your templates!

But all of them were fixed.

Overall experience:

I am enjoying this vps and I do like their control panel.

This starts right at the deployment of a new vps:

vultr1a.jpg

vultr1b.jpg

The locations and the supported operating systems have changed - so here is the updated version:

vultr1-update.jpg

Good to see that there is a fourth primary location - which includes DDoS protection.

vultr2.jpg

After that you have to wait some time...

vultr3.jpg

Controlling the vps is quite easy:

vultr4a.jpg

Upgrades are available through the panel:

vultr4b.jpg

Vultr had a bumpy start.

The project needed some additional time to get everything working as it should.

But now all templates are running and the frontend is doing what it should do.

The vps you get is fast - faster than a DO droplet in the same location.

The support is friendly and fast.

So it is a possible alternative if your favorite vps provider does not offer your desired location.

And the current status report:

vultrkvmstatus1.jpg

5 minutes of network interruptions since day one - not a bad value at all.

I will refresh the uptime report every two months.

If you need any additional information from another location - just reply to this post.

I will spin up an instance in that location and run some tests.

Edit:
The new panels display too much information, so I had to edit all the pictures.
 

DomainBop

Dormant VPSB Pathogen
They actually updated the control panel.
They also made two changes to new deployments of their $5 plan:

1. RAM increased to 768MB from 512MB

2. Storage decreased to 15GB from 20GB

The vps you get is fast - faster than a DO droplet in the same location.
Vultr also offers much more computing power than DO. The Unixbench scores for Vultr are 2x higher than DO's offerings. The 1-CPU Vultr offering has a Unixbench score of around 2300 (compared to 1100-1300 for a 1-core DO) and the 2-CPU offering I tested had a Unixbench score of over 3900 [serverbear benchmark] (a 2-core DO has a Unixbench score of 1600-2000).
 

mojeda

New Member
I have a Vultr storage server and I get some nice download speeds.

A wget pulling from my Vultr server, run on a Backupsy server:

Code:
# wget <snip>ubuntu-14.04-server-i386.iso
--2014-04-20 18:08:12--  <snip>ubuntu-14.04-server-i386.iso
Resolving <snip> (snip)... 108.61.191.172
Connecting to <snip> (snip)|108.61.191.172|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 575668224 (549M) [application/octet-stream]
Saving to: `ubuntu-14.04-server-i386.iso'

100%[============================================================================================>] 575,668,224 73.8M/s   in 7.3s

2014-04-20 18:08:20 (74.7 MB/s) - `ubuntu-14.04-server-i386.iso' saved [575668224/575668224]
 

HalfEatenPie

The Irrational One
Retired Staff
Updated the title to work with the contents of the post!

Anyways, I have a Vultr Japan VPS that I use on a regular basis. I absolutely love it. Granted, initially the routing was ridiculous (Asia -> USA -> Japan), but that got solved a few weeks after I contacted them.
 

splitice

Just a little bit crazy...
Verified Provider
It would be nice to see some more information on the status reports (i.e. what was monitored), as well as details like ping increases and jitter.

Thanks for this review, currently considering getting a couple Vultr VMs.
 

Amitz

New Member
I have been using a server in France for around 21 days now (also another one in NJ) and absolutely cannot complain. Good service and fast support responses (minutes, not hours). Unfortunately, you cannot shrink your instances. The only way is up.
 

wlanboy

Content Contributor
Time for an update:

vultrkvmstatus2.jpg

0 minutes of downtime since the last update.

CPU and I/O are good.

Network too.

List of current locations:

vultr-current-locations.jpg

Currently all bookable.

Next nice feature:

You can manage (save) your own list of startup scripts that run as root after the vps is deployed:

vultr-sartup-scripts.jpg
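A minimal sketch of such a startup script, written to a local file before pasting it into the panel; the package choice and log path are assumptions, not taken from the review:

```shell
# Write a minimal post-deploy script to a local file; Vultr runs the pasted
# script as root after deployment. The package (fail2ban) and the log path
# are illustrative assumptions.
cat > /tmp/vultr-startup.sh <<'EOF'
#!/bin/sh
apt-get update -y
apt-get install -y fail2ban
echo "provisioned on $(date)" >> /root/provision.log
EOF
chmod +x /tmp/vultr-startup.sh
```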

And you can now handle your own snapshots:

vultr-snapshots.jpg

I do like how Vultr expands their feature list.
 

AThomasHowe

New Member
I do like how Vultr expands their feature list.
Take note, DigitalOcean: some providers are delivering a quality service in more locations, and it doesn't take over two years to add a new feature. So many things on their UserVoice are great ideas that have been in progress for coming up on two years in some cases - BSD and custom ISOs, for example. I know those two are down to deeper problems, like the nested kernel system they use, but IPv6 has only just become publicly available... in Singapore...

Vultr London is also a top notch location.
 

eva2000

Active Member
Yeah, confirmed - added a $100 deposit and got +$100. Doh, wish I knew of that coupon beforehand :)

Edit: sweet - the gift code feature in billing works, +$10. Cheers @AThomasHowe
 