amuck-landowner

NanoVZ OpenVZ 128 MB (France)

wlanboy

Content Contributor
Provider: NanoVZ
Plan: OpenVZ 128 MB VPS
Price: € 3 per year
Location: Roubaix, France
Purchased: 01/2015

Hardware information:

  • cat /proc/cpuinfo (1x)

    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 26
    model name : Intel(R) Xeon(R) CPU W3530 @ 2.80GHz
    stepping : 5
    cpu MHz : 2800.140
    cache size : 8192 KB
    physical id : 0
    siblings : 8
    core id : 0
    cpu cores : 4
    apicid : 0
    initial apicid : 0
    fpu : yes
    fpu_exception : yes
    cpuid level : 11
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid
    bogomips : 5600.28
    clflush size : 64
    cache_alignment : 64
    address sizes : 36 bits physical, 48 bits virtual
    power management:
  • cat /proc/meminfo
    Code:
    MemTotal:         131072 kB
    MemFree:          100784 kB
    Cached:            18156 kB
    Buffers:               0 kB
    Active:            17320 kB
    Inactive:           8908 kB
    Active(anon):       5152 kB
    Inactive(anon):     2920 kB
    Active(file):      12168 kB
    Inactive(file):     5988 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:        131072 kB
    SwapFree:         125448 kB
    Dirty:                 4 kB
    Writeback:             0 kB
    AnonPages:          8072 kB
    Shmem:              2604 kB
    Slab:               4048 kB
    SReclaimable:       1124 kB
    SUnreclaim:         2924 kB
  • dd
    Code:
    dd if=/dev/zero of=test bs=16k count=8k conv=fdatasync && rm -rf test
    8192+0 records in
    8192+0 records out
    134217728 bytes (134 MB) copied, 2.47973 s, 54.1 MB/s
  • wget
    Code:
    wget cachefly.cachefly.net/100mb.test -O /dev/null
    --2015-02-25 00:21:23--  http://cachefly.cachefly.net/100mb.test
    Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
    Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `/dev/null'

    100%[========================================================>] 104,857,600 31.8M/s   in 4.0s

    2015-02-25 00:21:27 (25.0 MB/s) - `/dev/null' saved [104857600/104857600]
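A note on the dd benchmark above: it writes 8,192 blocks of 16 KB (128 MB total), and conv=fdatasync forces the data to disk before dd prints its timing, so the figure reflects real disk throughput rather than just the page cache. A smaller, self-contained variant (the file path is a placeholder) can be rerun anywhere:

```shell
# Smaller variant of the dd benchmark: 512 blocks of 16 KB = 8 MB.
# conv=fdatasync flushes the file to disk before dd reports its throughput.
dd if=/dev/zero of=/tmp/ddtest bs=16k count=512 conv=fdatasync 2>&1 | tail -n 1
stat -c %s /tmp/ddtest   # prints 8388608 (512 * 16384 bytes written)
rm -f /tmp/ddtest
```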
Network:

  • 20 NAT IPv4 Ports
  • /80 IPv6 Subnet
  • 100 GB Transfer
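For readers unfamiliar with NAT VPS plans: the host maps a small range of external ports on its shared IPv4 address to each container. The derivation below is a hypothetical scheme for illustration only - the internal IP and the base-port formula are assumptions, not NanoVZ's documented layout:

```shell
# Hypothetical NAT port scheme: derive a container's 20 external ports
# from the last octet of its internal IP. Assumed formula, for illustration.
internal_ip="10.0.0.37"              # placeholder internal NAT address
last_octet="${internal_ip##*.}"      # strip everything up to the last dot
base=$((10000 + last_octet * 20))    # first external port
last=$((base + 19))                  # 20 consecutive ports in total
echo "Forwarded port range: ${base}-${last}"
# -> Forwarded port range: 10740-10759
```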
Code:
traceroute dvhn.nl
2 rbx-g1-a9.fr.eu (213.186.32.253) 0.874 ms 1.045 ms 1.026 ms
3 ams-1-6k.nl.eu (94.23.122.186) 5.531 ms * *
4 amsix-501.xe-0-0-0.jun1.bit-2a.network.bit.nl (80.249.208.35) 7.341 ms 6.905 ms 7.282 ms
Code:
traceroute theguardian.co.uk
2 rbx-g1-a9.fr.eu (213.186.32.253) 0.837 ms 1.042 ms 1.071 ms
3 th2-g1-a9.fr.eu (91.121.215.132) 4.607 ms th2-g1-a9.fr.eu (91.121.131.210) 4.227 ms 4.600 ms
4 * * gsw-1-6k.fr.eu (91.121.215.135) 13.906 ms
5 * * *
6 ae-15-51.car5.London1.Level3.net (4.69.139.70) 7.661 ms 8.227 ms 7.977 ms
7 ae-15-51.car5.London1.Level3.net (4.69.139.70) 8.100 ms 8.618 ms 8.390 ms
8 GUARDIAN-UN.car5.London1.Level3.net (217.163.45.90) 7.910 ms 7.982 ms 7.925 ms
Code:
traceroute sueddeutsche.de
2 rbx-g1-a9.fr.eu (213.186.32.253) 0.874 ms 1.038 ms 1.038 ms
3 ams-1-6k.nl.eu (94.23.122.114) 5.646 ms * *
4 AMDGW1.arcor-ip.net (80.249.209.123) 6.443 ms 6.381 ms 10.108 ms
5 bln-145-254-5-158.arcor-ip.net (145.254.5.158) 19.876 ms 19.829 ms 19.747 ms
6 82.82.24.142 (82.82.24.142) 19.131 ms 19.200 ms 19.218 ms
7 212.204.41.194 (212.204.41.194) 26.180 ms 26.155 ms 26.101 ms
Code:
traceroute washingtonpost.com
2 rbx-g1-a9.fr.eu (213.186.32.253) 0.850 ms 0.996 ms 1.050 ms
3 th2-g1-a9.fr.eu (91.121.131.210) 4.471 ms th2-g1-a9.fr.eu (91.121.215.132) 4.488 ms th2-g1-a9.fr.eu (91.121.131.210) 4.374 ms
4 * * *
5 * * *
6 NEUSTAR-INC.edge3.Paris1.Level3.net (212.73.242.130) 4.416 ms 4.375 ms 4.316 ms
What services are running?

  • Postfix
  • Nginx (both behind CloudFlare's IPv4 proxy)
  • PHP
  • Redis
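For the Nginx-behind-CloudFlare setup listed above, the usual extra step is restoring the real visitor IP with the realip module, since all requests otherwise appear to come from CloudFlare's proxies. A minimal sketch - the set_real_ip_from range below is one example CloudFlare range; CloudFlare publishes the current list and it changes over time:

```nginx
# Restore the real visitor IP for requests arriving via CloudFlare's proxy.
# 173.245.48.0/20 is an example CloudFlare range - use the current published list.
set_real_ip_from 173.245.48.0/20;
real_ip_header CF-Connecting-IP;
```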
Support:
No tickets needed.

Overall experience:
CPU is OK, I/O is OK, and the network connection is good.
Did not have to send a single ticket.

Update status:
nanovzfr1.jpg
3 hours, 29 minutes and 35 seconds of network downtime in the first month.
The node did have some rough times during the first weeks; the network itself has some spikes but is quite solid. Hoping for some calm weeks.
Uptime of the VPS itself is 30 days.
CPU and I/O are OK.
Network is good within the EU.

I will refresh the uptime report every two months.
 

wlanboy

Content Contributor
Time for an update:

nanovzfr2.jpg

9 days, 18 hours and 44 minutes of downtime since the last update.

Uptime of the vps itself is 6 days.

CPU and I/O are good.

The network throughput is getting worse - but not really bad:

Code:
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2015-05-25 14:35:31--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================================================================================================================>] 104,857,600 30.7M/s   in 4.1s

2015-05-25 14:35:35 (24.1 MB/s) - `/dev/null' saved [104857600/104857600]
 

wlanboy

Content Contributor
Time for an update:

nanovzfr3.jpg

3 days, 20 hours and 52 minutes of network downtime since the last update.
Uptime of the vps itself is 8 days.

CPU and I/O are ok.

The network throughput is getting worse - but not really bad:
 

Code:
wget cachefly.cachefly.net/100mb.test -O /dev/null
converted 'http://cachefly.cachefly.net/100mb.test' (ANSI_X3.4-1968) -> 'http://cachefly.cachefly.net/100mb.test' (UTF-8)
--2015-08-08 23:44:54--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null                                100%[==================================================================================>] 100.00M  28.4MB/s   in 4.4s

2015-08-08 23:44:59 (22.9 MB/s) - '/dev/null' saved [104857600/104857600]
 

wlanboy

Content Contributor
Time for an update:

nanovzfr4.jpg

10 days, 11 hours of network downtime since the last update - OVH's VAC (anti-DDoS) system was scrubbing pings.
Uptime of the vps itself is 59 days.

CPU and I/O are ok.

The network throughput is getting worse - but not really bad:

Code:
wget cachefly.cachefly.net/100mb.test -O /dev/null
converted 'http://cachefly.cachefly.net/100mb.test' (ANSI_X3.4-1968) -> 'http://cachefly.cachefly.net/100mb.test' (UTF-8)
--2015-09-28 23:20:05--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null                              100%[===========================================================================>] 100.00M  30.7MB/s   in 4.0s

2015-09-28 23:20:10 (24.8 MB/s) - '/dev/null' saved [104857600/104857600]
 

DomainBop

Dormant VPSB Pathogen
Time for an update:

nanovzfr4.jpg

10 days, 11 hours of network downtime since the last update - OVH's VAC (anti-DDoS) system was scrubbing pings.
Uptime of the vps itself is 59 days.

Are you sure that it was OVH network downtime and not NanoVZ node downtime/reboots (with the provider suspending the VPS during reboots so it wouldn't show any downtime during reboot/maintenance periods)?

For comparison, here are 3 of my servers at OVH which haven't experienced the same type of frequent "network issues" that NanoVZ seems to experience.

An OVH server in Roubaix: in the past 10 months I'm showing a total of 15 minutes of downtime, and the 2 longest outages (8 minutes and 4 minutes) were due to me rebooting the server for upgrades.

JJzVfl2.png

For reference, OVH/RunAbove Strasbourg; almost all of the downtime shown is the VPS being rebooted by me:

8d3UKpP.png

Server (one of the very rare 5 euro Kimsufi KS-1 BHS Atoms) at OVH/BHS Canada:

J4daujF.png

(The 3 performance graphs above are from StatusCake's public monitoring.)

TL;DR: the frequent "network problems" NanoVZ is experiencing on their rented OVH server seem to be limited to their server...downtime on OVH's network is fairly rare.
 

wlanboy

Content Contributor
Time for an update:

Are you sure that it was OVH network downtime and not NanoVZ node downtime/reboots (with the provider suspending the VPS during reboots so it wouldn't show any downtime during reboot/maintenance periods)?
Uptime of the VPS itself is 59 days, so it is network related. I am testing IPv4 and IPv6 via ping, and IPv6 via a webpage check as well; the webpage check shows the best uptime.
I am running a website and a mail server on this VPS. Mails did not get bounced, but the website is unavailable from time to time.
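The checks described above (ping over IPv4 and IPv6, plus an HTTP check) can be sketched as a tiny shell probe. The addresses in the comments are documentation/example placeholders, not the real VPS:

```shell
# Minimal sketch of an uptime probe: run a check command, report up/down.
probe() {
  label="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "$label up"
  else
    echo "$label down"
  fi
}

probe "self-test" true   # always succeeds, for illustration
# Real checks would look like (placeholder addresses, not the actual VPS):
#   probe "ipv4-ping" ping -c 1 -W 2 203.0.113.10
#   probe "ipv6-ping" ping -6 -c 1 -W 2 2001:db8::1
#   probe "http"      curl -fsS --max-time 5 http://example.org/
```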

But I cannot point my finger at the node - no evidence. Still, it is a lot better than the VPS in Düsseldorf (which was moved to Falkenstein without notice).
 

willie

Active Member
 a lot better than the vps in Düsseldorf (which was moved to Falkenstein without notice).

Fwiw, there is some confusion in the above.  Düsseldorf (DE01) was not moved to Falkenstein--it is still in Düsseldorf.  There is a separate node (DE02) in Falkenstein.  Both are now somewhat better than they were in the past.
 

wlanboy

Content Contributor
Fwiw, there is some confusion in the above.  Düsseldorf (DE01) was not moved to Falkenstein--it is still in Düsseldorf.  There is a separate node (DE02) in Falkenstein.  Both are now somewhat better than they were in the past.

My VPS was moved - the node itself was not, of course.
 

willie

Active Member
That's really weird and doesn't sound normal. I had one on DE1 (now have both) and it didn't get moved. You could put in a ticket asking what happened.
 