# BlueVM OpenVZ 512 MB (CH)



## wlanboy (Sep 13, 2013)

*Provider*: BlueVM
*Plan*: OpenVZ 512 MB VPS
*Price*: $25 per year
*Location*: Zurich, Switzerland

*Purchased*: 08/2013

I have reviewed their KVM line in the US too.

*Hardware information:*


cat /proc/cpuinfo

```
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
stepping : 5
cpu MHz : 2933.530
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm rep_good aperfmperf unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm ida dts
bogomips : 5867.06
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
stepping : 5
cpu MHz : 2933.530
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm rep_good aperfmperf unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm ida dts
bogomips : 5867.06
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
```


cat /proc/meminfo

```
MemTotal:         524288 kB
MemFree:          369364 kB
Cached:            90224 kB
Active:            61968 kB
Inactive:          82412 kB
Active(anon):      14784 kB
Inactive(anon):    39372 kB
Active(file):      47184 kB
Inactive(file):    43040 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:        524288 kB
SwapFree:         476856 kB
Dirty:                 4 kB
Writeback:             0 kB
AnonPages:         54156 kB
Shmem:              2612 kB
Slab:              10528 kB
SReclaimable:       6120 kB
SUnreclaim:         4408 kB
```

dd

```
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm -rf test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 14.6059 s, 73.5 MB/s
```

wget

```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2013-09-13 10:18:40--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[================================================================================================================================>] 104,857,600 43.0M/s   in 2.3s

2013-09-13 10:18:43 (43.0 MB/s) - `/dev/null' saved [104857600/104857600]
```

*What services are running?*


- Control panel tests
- PHP sandbox
- "Can you please host this for me" projects, abandoned after three months
- Playground

*Support:*

No tickets needed yet.

*Overall experience:*

It was a rough start: the node was overrun by people benchmarking the hell out of it. Afterwards it looks like they sorted out the abusers quite well, and after another two days everything was fine. Considering the circumstances (new country, new datacenter, new nodes), they managed the "new location promotion" quite well.

On September 8th, nodes 1, 2, 3 and 5 got an SSD cache, and they added Tinet to the upstream mix (alongside HE.net and Computerline).

Looks like PrivateLayer Inc is ok.

And I always prefer native IPv6.

*Network*:

traceroute dvhn.nl:


```
traceroute to dvhn.nl (213.136.31.225), 30 hops max, 60 byte packets
2 v0215.zrh-eq4-m1.zrh4.computerline.net (91.135.66.49) 0.309 ms 0.297 ms 0.275 ms
3 r0201.zrh-eq4-c2.zrh4.computerline.net (91.135.64.6) 0.319 ms 0.409 ms 0.481 ms
4 v0120.ams-eq3-c1.ams3.computerline.net (91.135.64.153) 11.804 ms 11.882 ms 11.882 ms
5 amsix-501.xe-0-0-0.jun1.bit-2a.network.bit.nl (195.69.144.35) 13.650 ms 13.332 ms 13.621 ms
6 * * *
```


traceroute sueddeutsche.de:


```
traceroute to sueddeutsche.de (195.50.176.88), 30 hops max, 60 byte packets
2 10gigabitethernet1-4.core1.zrh1.he.net (216.66.87.9) 7.598 ms 7.612 ms 7.595 ms
3 10gigabitethernet15-2.core1.fra1.he.net (72.52.92.29) 7.426 ms 7.423 ms 7.433 ms
4 FFMGW3.arcor-ip.net (80.81.192.117) 11.365 ms 11.270 ms 9.509 ms
5 92.79.213.129 (92.79.213.129) 17.332 ms 17.219 ms 17.176 ms
6 188.111.129.46 (188.111.129.46) 15.646 ms 16.098 ms 16.053 ms
7 92.79.201.226 (92.79.201.226) 18.488 ms 20.629 ms 20.526 ms
8 92.79.203.158 (92.79.203.158) 19.841 ms 19.757 ms 19.657 ms
9 188.111.149.114 (188.111.149.114) 26.503 ms 188.111.149.118 (188.111.149.118) 21.042 ms 20.982 ms
10 195.50.167.226 (195.50.167.226) 24.432 ms 24.383 ms 24.667 ms
11 * * *
```

traceroute theguardian.co.uk:


```
traceroute to theguardian.co.uk (77.91.252.10), 30 hops max, 60 byte packets
2 10gigabitethernet1-4.core1.zrh1.he.net (216.66.87.9) 6.617 ms 6.653 ms 6.678 ms
3 10gigabitethernet15-2.core1.fra1.he.net (72.52.92.29) 7.424 ms 7.389 ms 7.400 ms
4 ffm-b2-link.telia.net (213.248.92.33) 7.462 ms 7.468 ms 7.433 ms
5 ffm-bb1-link.telia.net (213.155.133.140) 7.734 ms ffm-bb2-link.telia.net (80.91.246.218) 7.746 ms 46.832 ms
6 ffm-b10-link.telia.net (213.155.134.135) 7.819 ms ffm-b10-link.telia.net (80.91.251.248) 7.785 ms ffm-b10-link.telia.net (80.91.251.250) 7.820 ms
7 ae11.edge4.Frankfurt.Level3.net (4.68.70.105) 7.716 ms 7.839 ms 7.790 ms
8 vlan90.csw4.Frankfurt1.Level3.net (4.69.154.254) 20.896 ms vlan80.csw3.Frankfurt1.Level3.net (4.69.154.190) 20.771 ms vlan90.csw4.Frankfurt1.Level3.net (4.69.154.254) 20.799 ms
9 ae-72-72.ebr2.Frankfurt1.Level3.net (4.69.140.21) 20.749 ms ae-82-82.ebr2.Frankfurt1.Level3.net (4.69.140.25) 37.324 ms ae-62-62.ebr2.Frankfurt1.Level3.net (4.69.140.17) 21.676 ms
10 ae-23-23.ebr2.London1.Level3.net (4.69.148.193) 20.748 ms 21.081 ms ae-22-22.ebr2.London1.Level3.net (4.69.148.189) 20.998 ms
11 ae-58-223.csw2.London1.Level3.net (4.69.153.138) 20.917 ms ae-56-221.csw2.London1.Level3.net (4.69.153.130) 20.861 ms ae-59-224.csw2.London1.Level3.net (4.69.153.142) 20.903 ms
12 ae-21-52.car1.London1.Level3.net (4.69.139.98) 304.245 ms 304.215 ms 208.519 ms
13 GUARDIAN-UN.car1.London1.Level3.net (212.113.8.30) 17.458 ms 17.556 ms 17.584 ms
14 * * *
```

traceroute washingtonpost.com:


```
traceroute to washingtonpost.com (208.185.109.100), 30 hops max, 60 byte packets
2 10gigabitethernet1-4.core1.zrh1.he.net (216.66.87.9) 0.448 ms 0.249 ms 0.248 ms
3 10gigabitethernet15-2.core1.fra1.he.net (72.52.92.29) 13.377 ms 13.220 ms 13.101 ms
4 xe-1-2-0.mpr1.fra4.de.above.net (80.81.194.26) 8.292 ms 8.160 ms 8.652 ms
5 xe-3-3-0.mpr1.cdg11.fr.above.net (64.125.22.193) 15.985 ms 15.986 ms 15.864 ms
6 xe-3-3-0.mpr1.lhr2.uk.above.net (64.125.24.85) 26.145 ms 22.065 ms 22.412 ms
7 xe-5-2-0.cr1.dca2.us.above.net (64.125.26.21) 94.507 ms 94.398 ms 94.242 ms
8 xe-1-1-0.mpr3.iad1.us.above.net (64.125.31.113) 94.531 ms 94.645 ms 94.551 ms
9 64.124.201.150.allocated.above.net (64.124.201.150) 94.434 ms 94.364 ms 94.252 ms
10 208.185.109.100 (208.185.109.100) 94.623 ms 95.074 ms 95.021 ms
```

And really good ping times to the US - just below 100 ms.


----------



## BlueVM (Sep 13, 2013)

@wlanboy - Thanks for the review. Yeah we did have a rough start at that location. Had a couple of people bring over some mysql services they were running on SSDs with other providers. Anyway we helped them optimize their stuff and installed a couple of SSDs for caching, seems to have done the trick.

We're thinking about offering some IPv6 only VPS there pretty cheaply. Any thoughts?


----------



## Pmadd (Sep 13, 2013)

An ipv6 only vps sounds neat.


----------



## wlanboy (Sep 13, 2013)

BlueVM said:


> We're thinking about offering some IPv6 only VPS there pretty cheaply. Any thoughts?


That would be an instant buy if you offer internal networking.

I like to have some workers controlled by RabbitMQ (which is IPv6 ready).

Small 128MB of RAM IPv6 boxes would be great!

I don't know why a lot of vps providers do not offer internal networking.
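For what it's worth, before moving workers to an IPv6-only box I would first check that every service they talk to is actually reachable over v6. A quick stdlib sketch - the hostnames are placeholders:

```python
import socket

def ipv6_reachable(host, port=80, timeout=5):
    """Check whether a host has an AAAA record and accepts a TCP connection over IPv6."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record at all
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next address
    return False

# Example: an IPv6-only worker can only talk to v6-enabled endpoints.
# ipv6_reachable("broker.example.org", 5672)
```

RabbitMQ itself should be fine on such a box as long as the broker's listeners are configured for IPv6; it's the other endpoints (package mirrors, APIs) that tend to be v4-only.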


----------



## wlanboy (Nov 19, 2013)

Time to update the review with a current status report:



One note on the last downtime: they had to restart the node, and the VPS was not started automatically.

The status email went to spam, so neither I nor the hosted pages noticed that my PHP test environment went offline.

So I had a real downtime of 8 hours and 12 minutes.
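A lesson learned here: don't rely on the provider's status emails alone. A minimal external poller, run from a different host, would have caught this much sooner. A stdlib sketch of what I mean - the URL and the alerting channel are placeholders, not anything BlueVM provides:

```python
import urllib.request
import urllib.error

def is_up(url, timeout=10):
    """Return True if the URL answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # DNS failure, refused connection, or timeout

# Run from cron on a *different* box, e.g. every 5 minutes:
# if not is_up("http://example.org/"): push an alert over a second channel
```

The point is simply that the check runs outside the affected node, so it still fires when the node (or its mail) is down.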


----------



## wlanboy (Jan 19, 2014)

Time to update the stats:



Total downtime of 3 hours and 20 minutes.

The event on November 27th was a node restart - I did not have time to start my VPS by hand.


----------



## shinehost (Jan 23, 2014)

Very nice and detailed review!


----------



## twolf (Feb 11, 2014)

Do I understand correctly that you have/had working native IPv6 on that VPS?

When I asked way back (August/September 2013) for my VPS in that location, the answer I received was that the Feathur IPv6 module was still finishing development. And re-activating that support ticket now resulted in a 3rd level tech telling me that "Ipv6 is currently not functional at this time".


----------



## wlanboy (Feb 11, 2014)

twolf said:


> Do I understand correctly that you have/had working native IPv6 on that VPS?
> 
> When I asked way back (August/September 2013) for my VPS in that location, the answer I received was that the Feathur IPv6 module was still finishing development. And re-activating that support ticket now resulted in a 3rd level tech telling me that "Ipv6 is currently not functional at this time".


I did set up a tunnel myself.


----------



## twolf (Feb 11, 2014)

Thanks for the information, although I had hoped for a different answer.

I would prefer native IPv6 to setting up yet another tunnel. And as PrivateLayer itself is IPv6 enabled, it's just sad that BlueVM doesn't seem able to hook their customers up to that.


----------



## wlanboy (Feb 11, 2014)

twolf said:


> Thanks for the information, although I had hoped for a different answer.


Me too.

I am disappointed that BlueVM is still not pushing IPv6.

But at least the HE PoP is quite close: Zurich, CH (216.66.80.98).


```
ping 216.66.80.98 -c 5
PING 216.66.80.98 (216.66.80.98) 56(84) bytes of data.
64 bytes from 216.66.80.98: icmp_req=1 ttl=62 time=0.278 ms
64 bytes from 216.66.80.98: icmp_req=2 ttl=62 time=0.305 ms
64 bytes from 216.66.80.98: icmp_req=3 ttl=62 time=0.300 ms
64 bytes from 216.66.80.98: icmp_req=4 ttl=62 time=0.336 ms
64 bytes from 216.66.80.98: icmp_req=5 ttl=62 time=0.335 ms

--- 216.66.80.98 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.278/0.310/0.336/0.031 ms
```
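With that PoP so close, an HE tunnelbroker tunnel is a workable stopgap for the missing native IPv6. A Debian `/etc/network/interfaces` sketch, assuming a tunnelbroker.net account - the `2001:db8::` addresses and the local IPv4 are documentation placeholders, the real values come from your tunnel details page:

```
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address 2001:db8::2      # "Client IPv6 Address" from tunnelbroker.net
    netmask 64
    endpoint 216.66.80.98    # the Zurich PoP mentioned above
    local 203.0.113.10       # your VPS's public IPv4 (placeholder)
    ttl 255
    gateway 2001:db8::1      # "Server IPv6 Address"
```

One caveat: sit tunnels need kernel support on the host node, which many OpenVZ providers don't enable inside containers, so this may only work if BlueVM allows it.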


----------



## xCubex (Feb 11, 2014)

Cheers for the review, i have seen these around and have wondered, thanks!


----------



## Magiobiwan (Feb 11, 2014)

We'll be able to push native IPv6 in Zurich as soon as Feathur is ready for it. While we've HAD IPv6 available there, we have not been manually assigning blocks to clients, because manually managing /64s would be extremely difficult to track. IPv6 support is coming along nicely in Feathur, however.


----------



## peterw (Feb 13, 2014)

BlueVM said:


> @wlanboy - Thanks for the review. Yeah we did have a rough start at that location. Had a couple of people bring over some mysql services they were running on SSDs with other providers. Anyway we helped them optimize their stuff and installed a couple of SSDs for caching, seems to have done the trick.
> 
> We're thinking about offering some IPv6 only VPS there pretty cheaply. Any thoughts?


Did not see your offer for this.



Magiobiwan said:


> We'll be able to push native IPv6 in Zurich as soon as Feathur is ready for IPv6. While we've HAD IPv6 available there, we have not been manually assigning blocks to clients due to the fact that manually managing /64s would be extremely difficult and hard to track. IPv6 is coming along nicely for Feathur however.


When is soon?


----------



## Magiobiwan (Feb 13, 2014)

peterw said:


> Did not see your offer for this.
> 
> When is soon?


When Justin finishes IPv6 in Feathur. Depending on how busy we end up getting (yay sales), it could be a few weeks. And it would help if people would quit asking to get IPv6 early, because we're not doing manual IPv6 allocations.


----------



## wlanboy (Mar 1, 2014)

There is one thing I don't like about Feathur - it doesn't restart VPSes if the host went down.

I had to restart my VPS myself after every single blip on the node.

But yesterday something else happened:



The host went down and my VPS did not restart.

But this time my VPS was disabled.

Disabled without an explanation and without any notice.

It was enabled again - thanks to the great IRC-based support - but my ticket is still open...

Yup, I am upset, but maybe I just found another Feathur feature.


----------



## BlueVM (Mar 1, 2014)

@wlanboy - OpenVZ should start the containers when the system starts up. That said, none of our systems in Zurich have gone down in the last 24 days, which leads me to believe your VPS may have crashed or been stopped by the abuse-prevention script (for whatever reason). I'd like to look into it further, so if you can PM me your Feathur email or VPS ID I'll investigate.
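For context on the autostart behaviour: in OpenVZ, whether a container comes back after a node reboot is a per-container flag set on the host node, so it's in the provider's hands, not the customer's. A sketch of what that looks like host-side (CTID 101 is a placeholder):

```
# on the host node, as root
vzctl set 101 --onboot yes --save   # mark container 101 to start at node boot
grep ONBOOT /etc/vz/conf/101.conf   # should now show ONBOOT="yes"
```

If that flag is unset, the container stays down after every node restart, which would match the behaviour described above.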


----------



## wlanboy (Mar 1, 2014)

BlueVM said:


> I'd like to look into it further so if you can PM me your Feathur email or VPS ID I'll look into it further.


PM sent.


----------



## peterw (Mar 3, 2014)

@BlueVM You should provide a proper explanation for this fault and improve your communication with your customers.


----------



## wlanboy (Apr 1, 2014)

Time for an update:



Quite a rough month for their CH location.

4 hours and 14 minutes of downtime within 19 days.

And a lot of packet loss too.

I moved every service that needs any real amount of bandwidth off this box.

```
private-layer-inc.10gigabitethernet1-4.core1.zrh1.he.net 28.4% 13 18.8 18.3 16.4 29.8 1.9
```

Please fix the packet loss, and don't ever again reply to a ticket with this statement:



> This has been resolved at this time.
> 
> If you experience any further issues please let us know.


Especially when nothing has actually changed.


----------



## peterw (Apr 1, 2014)

@BlueVM When will the Swiss network be good again?


----------



## AuroraZero (Apr 1, 2014)

@wlanboy Thank you for the very detailed and updated reviews. I am currently watching the CH and NY providers for some upcoming projects and appreciate the honesty you show in these. I realize not every provider can be the best in all locations, but at least some level of competency is needed.

Not saying that @BlueVM and his team are incompetent, far from it. They have been around a long time and have made their way in this sector. It is just nice to see honesty from both provider and user in one spot for a change.


----------



## wlanboy (Apr 13, 2014)

AuroraZero said:


> @wlanboy Thank you the very detailed and updated reviews.


Looks like the problems on this node have gone away:



26 days without any network issue.

PS:

Looks like the


----------



## wlanboy (May 17, 2014)

Time for an update to my last BlueVM box:



CPU is ok, I/O has degraded again, and network throughput is again down around 1 Mbit/s.

March was quite an easy month, but in April the problems came back.

2 hours and 13 minutes of downtime in a single day, across 7 incidents.

I enjoyed the monitoring emails popping up again and again.

But it looks like they got the node back on the road.

Hopefully they fix the upstream issue soon.


----------



## wlanboy (Jun 21, 2014)

Time for an update:



0 minutes of downtime since the last update.

The uptime of the vps is 94 days.

CPU and I/O are ok.

Network within the EU is getting better:


```
--2014-06-21 19:22:39-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 81.17.24.34
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|81.17.24.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===========================================================================================>] 104,857,600 25.2M/s in 3.9s

2014-06-21 19:22:43 (25.4 MB/s) - `/dev/null' saved [104857600/104857600]
```

If you have any routes outside of the EU, it is still bad:


```
Get:26 http://security.debian.org/ .....
Fetched 26.3 MB in 35s (731 kB/s)
```


----------



## wlanboy (Aug 10, 2014)

Time for an update:



1 hour and 19 minutes and 7 seconds of network downtime since the last update.

Uptime of the vps is 44 days.

CPU and I/O are ok.

Network within the EU is getting better:


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2014-08-10 14:47:16-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 81.17.24.34
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|81.17.24.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===========================================================================================>] 104,857,600 18.8M/s in 5.1s

2014-08-10 14:47:21 (19.7 MB/s) - `/dev/null' saved [104857600/104857600]
```

Connections to the US are still bad.


----------



## wlanboy (Aug 15, 2014)

Looks like the whole node is down:



If you go to the ticket section of their customer portal you see this:


----------



## DomainBop (Aug 16, 2014)

wlanboy said:


> Looks like the whole node is down:
> 
> 
> 
> If you go to the ticket section of their customer portal you see this:


BlueVM is the only thing down at PrivateLayer Switzerland.  BlueVM has a history of paying bills late (their corporate status in Colorado has been "noncompliant" since August 1st because they also have a history of not filing returns on time).  It's the weekend and PrivateLayer's billing department is probably closed until Monday...

tl;dr I'll be shocked if the Switzerland nodes are up before Monday.


----------



## wlanboy (Aug 16, 2014)

DomainBop said:


> BlueVM has a history of paying bills late (their corporate status in Colorado has been "noncompliant" since August 1st because they also have a history of not filing returns on time).  It's the weekend and PrivateLayer's billing department is probably closed until Monday...
> 
> tl;dr I'll be shocked if the Switzerland nodes are up before Monday.


tl;dr WTF?

Hopefully that is not true.


----------



## wlanboy (Aug 30, 2014)

Next issue:



> Dear Customer,
> We're aware of an issue preventing payments from some clients being sent. We are looking into this and expect it to be resolved soon. Suspensions and terminations have been halted until this is resolved.
> 
> Thanks you for your patience.


I have canceled the vps, so no more updates on this one.


----------

