# BuyVM OpenVZ 128MB (NY)



## wlanboy (May 19, 2013)

*Provider*: BuyVM
*Plan*: OpenVZ 128MB VPS
*Price*: $15 per year
*Location*: Buffalo, NY

*Purchased*: 02/2013

*Hardware information:*


```
cat /proc/cpuinfo

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
stepping : 7
cpu MHz : 2000.354
cache size : 4096 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 8
apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt lahf_lm
bogomips : 4000.70
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
```


cat /proc/meminfo

```
MemTotal:       262144 kB
MemFree:        102004 kB
Buffers:             0 kB
Cached:              0 kB
SwapCached:          0 kB
Active:              0 kB
Inactive:            0 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       262144 kB
LowFree:        102004 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             260 kB
Writeback:           0 kB
AnonPages:           0 kB
Mapped:              0 kB
Slab:                0 kB
PageTables:          0 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:         0 kB
Committed_AS:        0 kB
VmallocTotal:        0 kB
VmallocUsed:         0 kB
VmallocChunk:        0 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
```

dd

```
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm -rf test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 4.75213 s, 226 MB/s
```
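A single dd run like the one above is fairly noisy. A small sketch that repeats the same fdatasync write test and prints dd's summary line for each run (the `dd_bench` helper and its defaults are mine, not from the review):

```shell
# Repeat an fdatasync dd write test and print dd's summary line for each run.
# Arguments: output file, block size, count, repetitions (defaults mirror the review).
dd_bench() {
    file="${1:-ddtest}"; bs="${2:-64k}"; count="${3:-16k}"; reps="${4:-3}"
    i=0
    while [ "$i" -lt "$reps" ]; do
        # conv=fdatasync forces a flush to disk before dd reports its rate;
        # dd prints its stats to stderr, and the last line carries the MB/s figure
        dd if=/dev/zero of="$file" bs="$bs" count="$count" conv=fdatasync 2>&1 | tail -n 1
        rm -f "$file"
        i=$((i + 1))
    done
}
```

For example, `dd_bench /tmp/t 64k 16k 3` reproduces the 1GB test above three times; comparing the three rates tells you more than any single run.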

wget

```
wget cachefly.cachefly.net/100mb.test -O /dev/null

--2013-05-19 20:28:03--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[================================================================================================================================>] 104,857,600 6.89M/s   in 17s

2013-05-19 20:28:20 (5.71 MB/s) - `/dev/null' saved [104857600/104857600]
```

*What services are running?*

- Ruby scripts
- Thin cluster
- Rails app
- lighttpd + php

*Support:*

I have opened two support tickets since February; both got short, helpful answers within minutes:

- One ticket for activating the free 5GB backup space.
- One ticket to reset my security question.

Nothing to complain about - friendly and fast support.

*Overall experience:*

I am a happy customer. The performance is good enough to run some websites, and Ruby compile times were okay. The network for US connections is really good, and routing to Europe is okay (telia -> cogentco -> level3). The VPS is as stable as SecureDragon: 100% hardware uptime for the last 12 weeks, though there are occasional minor connection issues. Ping to Europe is about 120ms.


----------



## Francisco (May 19, 2013)

Thanks 


Francisco


----------



## KuJoe (May 19, 2013)

+1 to BuyVM! Out of the 4 VPSs I still renew, 2 of them are with BuyVM (1 OpenVZ and 1 KVM, both in NV).


----------



## drmike (May 19, 2013)

BuyVM is a staple.  Good guys and reliable servers.

Only nitpick is the network.  Buffalo is flaky.

It was pointed out recently elsewhere that Cogent in the mix has really messed-up routes, especially to Europe. Routes are often something like Cogent ---> Telia --> Cogent. Certainly weird, since Cogent loves to haul traffic end to end to reduce their costs and peering.


----------



## acd (May 20, 2013)

This is the /proc/user_beancounters from my NY ovz128:




```
Version: 2.5
       uid  resource                     held              maxheld              barrier                limit              failcnt
    #####:  kmemsize                  3926739              8141259           2147483646           2147483646                    0
            lockedpages                     0                  427               999999               999999                    0
            privvmpages                 43368               185506                65536                65536                 4445
            shmpages                     1281                 6417                32768                32768                    4
            dummy                           0                    0                    0                    0                    0
            numproc                        38                   98                  500                  500                    0
            physpages                   11825               171083                    0  9223372036854775807                    0
            vmguarpages                     0                    0                32768                32768                    0
            oomguarpages                11825               171083                32768                32768                    0
            numtcpsock                     13                  172              7999992              7999992                    0
            numflock                        2                    8               999999               999999                    0
            numpty                          2                   16               500000               500000                    0
            numsiginfo                      0                   64               999999               999999                    0
            tcpsndbuf                  234536             13373560            214748160            396774400                    0
            tcprcvbuf                  212992             52609728            214748160            396774400                    0
            othersockbuf                48888               169512            214748160            396774400                    0
            dgramrcvbuf                     0                96496            214748160            396774400                    0
            numothersock                   37                   78              7999992              7999992                    0
            dcachesize                      0                    0           2147483646           2147483646                    0
            numfile                      1214                 2355             23999976             23999976                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            numiptent                      34                   35               999999               999999                    0
```


The uname output shows the following, which is a mask over their 2.6.18 OpenVZ kernel:



```
Linux lovecraft 2.6.32-pony6-3 #1 SMP Tue Mar 13 07:31:44 PDT 2012 i686 GNU/Linux
```


----------



## drmike (May 20, 2013)

acd said:


> This is the /proc/user_beancounters from my NY ovz128:
> 
> 
> 
> ...


----------



## Francisco (May 21, 2013)

physpages is set to unlimited? 

Francisco


----------



## drmike (May 21, 2013)

Back to the original review...

```
wget cachefly.cachefly.net/100mb.test -O /dev/null

--2013-05-19 20:28:03-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[================================================================================================================================>] 104,857,600 6.89M/s in 17s
```

6.89M/s... seems slow.

So, I just logged into my container there. They hand out gigabit, with obvious contention between users for bandwidth.

```
100%[===================================================================================================================================================================================================>] 104,857,600 9.95MB/s   in 8.7s
100%[===================================================================================================================================================================================================>] 104,857,600 3.81MB/s   in 24s
100%[===================================================================================================================================================================================================>] 104,857,600 6.29MB/s   in 12s
```

I saw the Cachefly speedtest file start out really fast (22MB/s) and then slow way down.

So, I had to look at where the network was taking us to get this file:

```
 3:  buf-b1-link.telia.net                                 0.822ms asymm  5
 4:  nyk-bb1-link.telia.net                                9.992ms asymm  6
 5:  nyk-b5-link.telia.net                                10.252ms asymm  6
 6:  xe-0.equinix.chcgil09.us.bb.gin.ntt.NET              13.671ms asymm  8
 7:  ae-1.r06.chcgil09.us.bb.gin.ntt.net                  14.212ms asymm  8
 8:  154.54.13.234                                        11.157ms
 9:  te8-2-10G.ar3.DCA3.gblx.net                          22.718ms asymm 10
10:  ae2-20g.ar1.ord6.us.nlayer.net                       15.837ms asymm  9
11:  vip1.G-anycast1.cachefly.net                         14.746ms reached
```

 

Buffalo is East Coast; Chicago is Midwest (central US). Routing this to Chicago = stupid. Unsure who determines where this file gets served from, but New York City surely has a Cachefly POP and likely much better speeds.

I did a bunch of other tests with a file I leave lying everywhere and didn't see the same sort of QoS artifact I saw with the Cachefly test; speeds were much faster and more uniform.


----------



## wlanboy (May 21, 2013)

But that is something Buffalo-specific:


```
2  *****.aggr.buffalo.nwnx.net (********)  1.426 ms  1.452 ms  1.359 ms
 3  te7-4.ccr01.buf02.atlas.cogentco.com (38.122.36.45)  1.225 ms  1.231 ms  1.216 ms
 4  216.156.0.253.ptr.us.xo.net (216.156.0.253)  17.634 ms nyk-bb1-link.telia.net (80.91.246.37)  9.676 ms 216.156.0.253.ptr.us.xo.net (216.156.0.253)  17.634 ms
 5  * * te9-7.ccr01.jfk01.atlas.cogentco.com (154.54.43.6)  10.965 ms
 6  te0-6-0-0.mpd22.jfk02.atlas.cogentco.com (154.54.1.209)  10.863 ms ae8.edge3.Chicago3.Level3.net (4.68.127.245)  13.962 ms  13.897 ms
 7  vlan52.ebr2.Chicago2.Level3.net (4.69.138.190)  104.222 ms vlan60.csw1.NewYork1.Level3.net (4.69.155.62)  90.689 ms te0-1-0-1.ccr21.jfk05.atlas.cogentco.com (154.54.31.2)  11.953 ms
 8  xe-9-0-0.edge1.NewYork1.Level3.net (4.68.111.45)  11.507 ms ae-61-61.ebr1.NewYork1.Level3.net (4.69.134.65)  90.610 ms ae-6-6.ebr2.Washington12.Level3.net (4.69.148.145)  104.601 ms
 9  ae-5-5.ebr2.Washington1.Level3.net (4.69.143.221)  105.962 ms  104.607 ms vlan70.csw2.NewYork1.Level3.net (4.69.155.126)  90.276 ms
```


----------



## drmike (May 21, 2013)

You poor boy @wlanboy   Interesting seeing the CVPS route being so different from BuyVM's...

That was a traceroute to *cachefly.cachefly.net*, correct?

If so, that route is super stupid. Cogent to Telia to XO to Cogent (NYC) to Level3 (Chicago) back to NY on Level3 to Washington DC on Level3 to NY on Level3.

What in hell are they doing with that mess? 90-100ms and thousands of miles of looping to get from Buffalo to NYC end to end...

Funny thing is, this is far from the first time I've seen this sort of misrouting at CC's locations.


----------



## wlanboy (May 21, 2013)

It was a traceroute to the accelerated network in Germany.

Traceroute to france (lemonde.fr):


```
3 207.86.157.13 (207.86.157.13) 0.413 ms 0.503 ms te7-4.ccr01.buf02.atlas.cogentco.com (38.122.36.45) 139.799 ms
4 216.156.0.253.ptr.us.xo.net (216.156.0.253) 16.365 ms 16.172 ms nyk-bb1-link.telia.net (80.91.246.37) 23.097 ms
5 nyk-b5-link.telia.net (213.155.135.19) 9.761 ms 207.88.14.194.ptr.us.xo.net (207.88.14.194) 12.966 ms 13.080 ms
6 cogent-ic-151337-nyk-b5.c.telia.net (213.248.73.198) 10.210 ms te0-0-0-6.ccr22.lpl01.atlas.cogentco.com (130.117.0.254) 83.934 ms cogent-ic-151337-nyk-b5.c.telia.net (213.248.73.198) 10.173 ms
7 te0-6-0-5.ccr22.jfk02.atlas.cogentco.com (154.54.83.165) 10.367 ms be2004.mpd22.ord01.atlas.cogentco.com (154.54.5.9) 24.886 ms te0-3-0-1.ccr22.jfk02.atlas.cogentco.com (154.54.27.205) 10.548 ms
8 te0-1-0-7.mpd22.par01.atlas.cogentco.com (130.117.50.14) 88.567 ms te0-7-0-3.mpd21.par01.atlas.cogentco.com (154.54.43.153) 82.286 ms te0-7-0-3.ccr22.par01.atlas.cogentco.com (154.54.43.162) 84.694 ms
9 te0-7-0-3.mag21.par01.atlas.cogentco.com (130.117.49.86) 89.029 ms te0-7-0-33.mag21.par01.atlas.cogentco.com (154.54.74.162) 89.196 ms te0-0-0-27.mag21.par01.atlas.cogentco.com (154.54.74.138) 81.890 ms
10 te0-0-0-6.ccr21.lpl01.atlas.cogentco.com (154.54.28.94) 97.755 ms snsci.demarc.cogentco.com (149.6.160.50) 90.059 ms 149.6.115.6 (149.6.115.6) 86.183 ms
11 bzn-crs16-1-be1106.intf.routers.proxad.net (212.27.59.101) 82.636 ms 83.436 ms te0-2-0-4.ccr22.lon13.atlas.cogentco.com (154.54.60.105) 96.245 ms
12 dedibox-2-p.intf.routers.proxad.net (212.27.50.162) 89.885 ms 89.009 ms 85.772 ms
```

That is ok: NY -> JFK -> Paris

Traceroute to UK (guardian.co.uk):


```
3 buf-b1-link.telia.net (213.248.96.41) 0.297 ms 207.86.157.13 (207.86.157.13) 0.392 ms 0.482 ms
4 216.156.0.253.ptr.us.xo.net (216.156.0.253) 20.452 ms te0-0-0-32.ccr21.yyz02.atlas.cogentco.com (154.54.43.61) 12.470 ms te0-7-0-26.ccr22.yyz02.atlas.cogentco.com (154.54.27.69) 12.929 ms
5 nyk-b5-link.telia.net (213.155.131.137) 9.779 ms 207.88.14.194.ptr.us.xo.net (207.88.14.194) 12.989 ms te0-3-0-5.ccr22.ymq02.atlas.cogentco.com (154.54.42.230) 14.651 ms
6 xe-10-1-0.edge1.NewYork1.Level3.net (4.68.110.81) 9.881 ms te0-4-0-6.ccr22.lpl01.atlas.cogentco.com (154.54.44.214) 83.957 ms te0-2-0-6.ccr21.lpl01.atlas.cogentco.com (154.54.0.69) 83.789 ms
7 vlan80.csw3.NewYork1.Level3.net (4.69.155.190) 89.539 ms vlan70.csw2.NewYork1.Level3.net (4.69.155.126) 90.123 ms vlan51.ebr1.Chicago2.Level3.net (4.69.138.158) 102.706 ms
8 ae-6-6.ebr1.Chicago1.Level3.net (4.69.140.189) 101.896 ms te0-0-0-0.ccr21.lon01.atlas.cogentco.com (154.54.57.113) 84.143 ms ae-6-6.ebr1.Chicago1.Level3.net (4.69.140.189) 102.027 ms
9 ae-2-2.ebr2.NewYork2.Level3.net (4.69.132.66) 101.460 ms 101.775 ms ae-41-41.ebr2.London1.Level3.net (4.69.137.65) 89.690 ms
10 ae-59-224.csw2.London1.Level3.net (4.69.153.142) 89.788 ms 149.11.142.74 (149.11.142.74) 93.859 ms ae-1-100.ebr1.NewYork2.Level3.net (4.69.135.253) 100.543 ms
11 4.69.201.45 (4.69.201.45) 101.665 ms ae-21-52.car1.London1.Level3.net (4.69.139.98) 88.500 ms 77.91.255.137 (77.91.255.137) 94.644 ms
12 GUARDIAN-UN.car1.London1.Level3.net (212.113.8.30) 79.988 ms 77.91.255.194 (77.91.255.194) 95.004 ms GUARDIAN-UN.car1.London1.Level3.net (212.113.8.30) 79.787 ms
```

There it is again: Chicago.

Traceroute to NL (dvhn.nl):


```
3 te7-4.ccr01.buf02.atlas.cogentco.com (38.122.36.45) 10.252 ms 207.86.157.13 (207.86.157.13) 0.382 ms te7-4.ccr01.buf02.atlas.cogentco.com (38.122.36.45) 10.310 ms
4 216.156.0.253.ptr.us.xo.net (216.156.0.253) 22.182 ms nyk-bb1-link.telia.net (80.91.246.37) 9.718 ms 9.675 ms
5 207.88.14.194.ptr.us.xo.net (207.88.14.194) 13.048 ms 12.966 ms 12.963 ms
6 206.111.2.222.ptr.us.xo.net (206.111.2.222) 13.365 ms 44.589 ms xe-0-1-1.lon10.ip4.tinet.net (141.136.107.153) 83.036 ms
7 xe-5-1-0.lon10.ip4.tinet.net (89.149.187.141) 92.810 ms 80.153 ms bit-gw.ip4.tinet.net (77.67.75.70) 83.355 ms
8 bit-gw.ip4.tinet.net (77.67.75.70) 93.472 ms 90.884 ms 805.xe-0-0-0.jun1.bit-1.network.bit.nl (213.136.1.105) 97.154 ms
9 805.xe-0-0-0.jun1.bit-1.network.bit.nl (213.136.1.105) 108.582 ms 108.431 ms 806.xe-0-0-0.jun1.bit-2a.network.bit.nl (213.136.1.109) 96.056 ms
```

That is ok too.


----------



## drmike (May 21, 2013)

@wlanboy, That traceroute to UK (guardian.co.uk) = retarded routing.

What is super disturbing about that route is the upstream handoffs going on: XO --> COGENT --> Level 3 --> then Level 3 goes to Chicago and gets lost.

Why is that disturbing? ColoCrossing still supposedly has Level 3 in their upstream mix. They should be able to put these packets on Level 3 straight out of Buffalo over to JFK, over the pond, and you're there.

They've monkeyed things up so poorly that you get routing like what you've shown, which is just nuts and makes you wonder if they actually have Level 3 there.

Not to be outdone, the other blend they're serving up to @Francisco and BuyVM out of Buffalo is perhaps even more broken:

```
 3:  buf-b1-link.telia.net                                 0.830ms asymm  5
 4:  216.156.0.253.ptr.us.xo.net                          14.455ms asymm  5
 5:  te0-3-0-3.ccr22.ymq02.atlas.cogentco.com             14.977ms asymm  9
 6:  4.68.127.245                                         13.295ms asymm  8
 7:  te0-3-0-4.mpd21.lon13.atlas.cogentco.com             83.727ms asymm  9
 8:  ae-71-71.ebr1.NewYork1.Level3.net                    89.755ms asymm 12
 9:  ae-2-2.ebr2.NewYork2.Level3.net                     102.046ms asymm 12
10:  ae-59-224.csw2.London1.Level3.net                    88.594ms asymm 11
11:  4.69.201.45                                         102.025ms asymm 12
12:  77.91.255.194                                        96.667ms asymm 13
13:  77.91.255.141                                        91.159ms asymm 15
14:  77.91.255.137                                        90.005ms
15:  GUARDIAN-UN.car1.London1.Level3.net                  92.514ms asymm 10
16:  77.91.255.141                                       103.272ms asymm 15
17:  no reply
18:  77.91.255.194                                       105.729ms asymm 13
```

Figure that mess out.  Looks like:

XO --> Cogent --> Cogent (London) --> Level 3 (NYC) --> Level 3 (London) --> 4.69.201.45 (which appears to be NYC Level 3 --- try a traceroute to that IP for fun) ---> Level 3 (London)

The latency doesn't add up though since we've basically gone from NY state to London then back to NY state then back to London.

Maybe @Francisco can comment on these messed up routes?


----------



## Francisco (May 21, 2013)

buffalooed said:


> @wlanboy, That traceroute to UK (guardian.co.uk) = retarded routing.
> 
> What is super disturbing about that route is we have the upstream handoffs going on.   XO --> COGENT --> Level 3 --> then Level 3 goes to Chicago and gets lost.
> 
> ...


Well, it sounds like the routes are asymmetrical, no?

It's very possible that CC is making some adjustments based on path/etc. for preferred routes. I had the same thing at EGI, but thankfully CC isn't being complete penny pinchers on that.


Francisco


----------



## drmike (May 21, 2013)

Unsure about the symmetry, or maybe I'm missing the point there.

There is no way what is happening in my example should be, and the numbers do say something is off. You're not doing the cross-Atlantic trip twice in that time.

Very odd.

Certainly seems to be the preferred path from CVPS... Wondering if their customers are seeing a whole lot more "Gogent" these days.


----------



## Tux (May 21, 2013)

You forgot that Telia was the first hop.

If you have the sick pleasure of using their services, or an ISP that peers with them like mine (this isn't directed at BuyVM, but still):


```
[email protected]:~$ traceroute mc.nullblock.com
traceroute to mc.nullblock.com (192.227.135.245), 30 hops max, 60 byte packets
<Internal stuff>
6 bbr02atlnga-bue-3.atln.ga.charter.com (96.34.2.72) 23.260 ms 17.677 ms 23.415 ms
7 bbr01atlnga-tge-0-0-0-2.atln.ga.charter.com (96.34.0.38) 24.284 ms 25.730 ms 25.713 ms
8 atl-bb1-link.telia.net (80.239.130.249) 51.881 ms 51.857 ms 51.838 ms
9 ash-bb3-link.telia.net (80.91.252.213) 49.362 ms ash-bb3-link.telia.net (213.155.134.130) 47.064 ms ash-bb3-link.telia.net (80.91.252.217) 51.727 ms
10 nyk-bb1-link.telia.net (213.155.130.78) 55.371 ms nyk-bb1-link.telia.net (213.155.134.126) 59.126 ms nyk-bb1-link.telia.net (213.155.131.226) 62.771 ms
11 buf-b1-link.telia.net (80.91.246.36) 64.149 ms 66.422 ms 64.377 ms
12 giglinx-ic-155660-buf-b1.c.telia.net (213.248.96.42) 66.158 ms 65.961 ms 67.538 ms
13 host.colocrossing.com (75.127.11.234) 88.140 ms 85.302 ms 84.969 ms
14 Node13.weloveservers.net (192.227.135.130) 68.479 ms 64.031 ms 62.395 ms
15 host.colocrossing.com (192.227.135.245) 66.384 ms 63.602 ms 59.914 ms
[email protected]:~$ ping -c 5 mc.nullblock.com
PING mc.nullblock.com (192.227.135.245) 56(84) bytes of data.
64 bytes from host.colocrossing.com (192.227.135.245): icmp_req=2 ttl=47 time=62.2 ms
64 bytes from host.colocrossing.com (192.227.135.245): icmp_req=3 ttl=47 time=61.8 ms
64 bytes from host.colocrossing.com (192.227.135.245): icmp_req=4 ttl=47 time=82.5 ms
64 bytes from host.colocrossing.com (192.227.135.245): icmp_req=5 ttl=47 time=66.9 ms

--- mc.nullblock.com ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4012ms
rtt min/avg/max/mdev = 61.855/68.390/82.562/8.420 ms
```

That being said, I don't plan to ever use ColoCrossing in Buffalo, nor will I really use them at all. I might go back if their routing were much better. This makes single-homed Cogent look like the best thing ever to me.

I'll end my Telia rant now (and hopefully my posts full of traceroutes), but I certainly support BuyVM's decision to stop giving more hardware to ColoCrossing. You should give Ashburn a try, to be honest. I don't see too many LEB providers there other than Amazon (but it's "cloud", which to me is another word for "shit").


----------



## drmike (May 21, 2013)

Tux said:


> You forgot that Telia was the first hop.
> 
> If you have the sick pleasure of using their services or an ISP that peers with them like me (this isn't to BuyVM, but still):
> 
> ...


Thanks for chiming in. I had to chuckle at some of it. Cloud = SHIT. Agreed.

Georgia to Buffalo, NY, in 65ms... slow.

BuyVM holding back on hardware in Buffalo... yeah, they have been. That's not a kick in the sack at BuyVM, but plenty of other "providers" bit on Buffalo deals and can't sell anything there. Demand isn't great at that location.


----------



## Tux (May 22, 2013)

buffalooed said:


> Thanks for chiming in   I had to chuckle at some of it.  Cloud = SHIT.  Agreed.
> 
> Georgia to Buffalo, NY, in 65ms ... slow.
> 
> BuyVM holding back on hardware in Buffalo... Yeah, they've been.  Not a BuyVM kick in the sack, but plenty of other "providers" bit on Buffalo deals and they can't sell anything there.   Demand isn't great at that location.


Yeah. My normal ping to NY is around 37ms (to DigitalOcean NY, that is), so 65ms is very subpar. I wonder if Telia intentionally lags traffic for the lulz (likely) or my ISP is total garbage (also likely). Buffalo should only add around 3-10ms at worst.


----------



## drmike (May 22, 2013)

Buffalo to NYC, for instance, used to be +10ms. More recently those numbers have increased.

Routing from Buffalo outward is seriously messed up lately. 

```
tracepath schools.nyc.gov
 4:  80.91.246.34                                          5.309ms asymm  6
 5:  chi-bb1-link.telia.net                               13.522ms asymm  7
 6:  te0-6-0-6.ccr22.jfk02.atlas.cogentco.com             10.736ms asymm  8
 7:  nyc1-ar3-xe-1-0-0-0.us.twtelecom.net                 28.822ms asymm 10
 8:  165.155.0.33                                         31.483ms asymm 11
 9:  144.232.4.185                                        30.932ms asymm 10
10:  144.232.8.164                                        24.164ms asymm  9
11:  sl-st30-ash-0-1-0-0.sprintlink.net                   31.440ms asymm  9
12:  sl-timewarner-374159-0.sprintlink.net                22.554ms asymm  9
13:  nyc1-ar3-xe-0-0-0-0.us.twtelecom.net                 30.007ms asymm 10
14:  165.155.0.33                                         32.523ms asymm 11
```

See, going to Chicago again. 32ms to get from Buffalo to New York City is a joke. No one wants to handle their hot-potato routing.

Is there a least-cost routing path option?

I am not even paying attention or trying to find goofy routes; that was my first pick, and it's crazy busted.

Compare that to Choopa in New Jersey:

```
tracepath schools.nyc.gov
 2:  ethernet5-7-br2.pnj1.choopa.net                      12.062ms asymm  3
 3:  198.32.160.35                                         0.985ms asymm  5
 4:  fast0-0.iix-igr01.nycl.twtelecom.net                  1.431ms asymm  5
 5:  165.155.0.33                                          3.900ms asymm  7
 6:  165.155.0.33                                          3.726ms asymm  7
```


----------



## acd (May 23, 2013)

buffalooed said:


> So explain what we should note from the beancounter info...


Basically, they don't underallocate you for anything in particular. A lot of hosts might cap your numprocs lower or limit your kmem size or your tcp socks/file handles to something you might actually hit. Buyvm doesn't; they set the params at a reasonable level and really only cap your ram, just as you would expect.

That's not why I attached it to the thread though. I try to find out what beancounters is set to before I pick up an ovz, but most times that is impossible.


You should take some time to learn how to read them; being able to read beancounters is good for you as a user, especially when things are failing for "no apparent reason". The failcnt column will tell you if you're hitting resource limits frequently.
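Checking that column is a one-liner from a shell. A minimal sketch (the `beancounter_fails` helper name is mine; reading /proc/user_beancounters inside a container usually requires root, and an alternate file can be passed in for testing):

```shell
# Print every beancounter resource whose failcnt (the last column) is non-zero.
# Defaults to /proc/user_beancounters; pass another file to test against a copy.
beancounter_fails() {
    awk '$NF ~ /^[0-9]+$/ && $NF + 0 > 0 {
        # rows that open a new uid look like "NNN:  resource ...";
        # the resource name is then the second field, not the first
        name = ($1 ~ /:$/) ? $2 : $1
        printf "%-16s failcnt=%s\n", name, $NF
    }' "${1:-/proc/user_beancounters}"
}
```

Run against the dump above, this would flag privvmpages (failcnt 4445) and shmpages (failcnt 4), which matches acd's point: the only limit really being hit is memory.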


----------



## drmike (May 23, 2013)

acd said:


> Basically, they don't underallocate you for anything in particular. A lot of hosts might cap your numprocs lower or limit your kmem size or your tcp socks/file handles to something you might actually hit. Buyvm doesn't; they set the params at a reasonable level and really only cap your ram, just as you would expect.
> 
> 
> That's not why I attached it to the thread though. I try to find out what beancounters is set to before I pick up an ovz, but most times that is impossible.
> ...


Awesome info. Thanks for taking the time to explain this further. Definitely going to look at my providers and dig more into these various numbers and what each means.


----------



## Francisco (May 23, 2013)

Numproc is capped as a security net for our nodes. We felt 500 processes/threads was a very reasonable number and very rarely does anyone hit it.

It's better that than someone getting forkbombed and us having to fight to get a node back under control.

Francisco


----------



## VPSDATABASE (May 26, 2013)

I can vouch for BuyVM

One of the best VPS providers for the budget.

They have some strong DDoS protection too.


----------



## wlanboy (Sep 9, 2013)

Time to update my review.

*What services are running?*

- Ruby scripts
- Thin cluster
- Rails app
- lighttpd with static pages

*Support:*

Not a single support ticket needed.

*Overall experience:*

I really enjoy my vps in Buffalo. No hassles, no downtimes, no support needed.

Routing to Europe could be way better, but it is good enough for webhosting. I prefer rock-solid hosting over lightning-fast routing when all I have to care about is HTTP traffic.

I am using this VPS for Ruby-based webhosting. The type of webhosting where your friends/family/sports club call you at 5 a.m. because NodePing sent them an email.

I have not gotten any calls yet, so everything is working as it should.

Yes it is OpenVZ - but not oversold:


```
free -m
             total       used       free     shared    buffers     cached
Mem:           128        114         13          0          0         61
-/+ buffers/cache:         52         75
Swap:          128         17        110
```
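The `-/+ buffers/cache` row above is just arithmetic: free + buffers + cached is what applications can still claim, since the page cache is reclaimable. A sketch that computes it straight from /proc/meminfo (the `usable_mb` helper name is mine):

```shell
# Memory applications can still allocate, in MB: MemFree + Buffers + Cached.
# This is the "free" figure in the "-/+ buffers/cache" row of older `free`.
usable_mb() {
    awk '/^(MemFree|Buffers|Cached):/ { kb += $2 }
         END { printf "%d\n", kb / 1024 }' "${1:-/proc/meminfo}"
}
```

Against the `free -m` output above, that works out to roughly the 75 shown in the `-/+ buffers/cache` free column (13 + 0 + 61, give or take per-row rounding).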


----------



## wlanboy (Nov 2, 2013)

Want to add the current status report:



3 hours and 14 minutes of downtime since March the 31st.


----------



## wlanboy (Dec 28, 2013)

A small update after the move:



As you might know, the 13 hours and 44 minutes of downtime can be ignored.


----------



## wlanboy (Feb 13, 2014)

Time to update the stats:



Not a single blip this year.

I/O is good, CPU too but the network throughput could be better.

To be fair - I only feel that network cap when I sync my backups from the vps to the EU located backup space.


----------



## Francisco (Feb 13, 2014)

wlanboy said:


> Time to update the stats:
> 
> 
> 
> ...


Pull from our test file and see if it's doing the same thing at all. It could just be a poor route, but if the speedtest (speedtest.ny.buyvm.net) shows better speeds then open a ticket.

Choopa has been a real improvement all around.

Thanks!

Francisco


----------



## wlanboy (Feb 14, 2014)

Francisco said:


> Pull from our test file and see if it's doing the same thing at all. It could just be a poor route but if the speedtest (speedtest.ny.buyvm.net) shows better speeds then ticket
> 
> 
> Choopa has been a real improvement all around.
> ...


Might be the common problem with scp.

Your speedtest : 1.4 MB/s

My VPS with scp: 850 kbit/s [nj-node09]


----------



## wlanboy (Mar 18, 2014)

Time for an update:



Nothing happened at all.


----------



## wlanboy (Apr 20, 2014)

Time for an update:



So after 102 days without any problems, 2 hours and 44 minutes of network downtime occurred.

Fran tweeted about the DC problems - so I did not need to write any tickets.

Hopefully Choopa gets this sorted.


----------



## HalfEatenPie (May 12, 2014)

Howdy folks!  

Just letting you all know, I split the previous discussion about the frequency and updating of reviews because it deviated away from the actual review content!

If you'd like to continue that discussion, then please head over that way!

Thanks!  


----------



## wlanboy (Jun 15, 2014)

Time for an update:



39 minutes of network downtime since the last review.

CPU and I/O are good.

Network has been getting better over the last weeks.


```
--2014-06-15 18:01:50--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===========================================================================================>] 104,857,600 31.4M/s in 3.2s

2014-06-15 18:01:54 (31.4 MB/s) - `/dev/null' saved [104857600/104857600]
```

Uptime of the vps itself is 217 days.

Hopefully Choopa is able to get their network right (for more than one month); they are on the right track.


----------



## wlanboy (Jul 18, 2014)

Time for an update:



1 hour and 6 minutes of downtime since the last update.

Uptime of the vps itself is 26 days.

CPU and I/O are ok.

Network is great.


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2014-07-18 21:28:14--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===========================================================================================>] 104,857,600 41.7M/s   in 2.4s

2014-07-18 21:28:17 (41.7 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## wlanboy (Sep 28, 2014)

Time for an update:



2 hours and 19 minutes of downtime since the last update.

Uptime of the vps itself is 43 days.

CPU and I/O are ok.

Network is great.


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2014-09-28 10:43:56--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===========================================================================================>] 104,857,600 52.3M/s   in 1.9s

2014-09-28 10:43:58 (52.3 MB/s) - `/dev/null' saved [104857600/104857600]
```
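For context, the downtime figures translate into an uptime percentage. A rough back-of-the-envelope calculation, assuming the window runs from the previous update (Jul 18) to this one (Sep 28), i.e. 72 days:

```shell
# Window between updates: Jul 18 -> Sep 28 = 72 days; reported downtime: 2h 19m.
window_min=$((72 * 24 * 60))   # 103680 minutes
down_min=$((2 * 60 + 19))      # 139 minutes
# awk handles the floating-point division
pct=$(awk -v w="$window_min" -v d="$down_min" 'BEGIN { printf "%.3f", 100 * (1 - d / w) }')
echo "uptime: ${pct}%"         # prints uptime: 99.866%
```

So roughly "three nines" of availability for a $15/year box over this window.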


----------



## wlanboy (Dec 7, 2014)

Time for an update:



2 hours and 47 minutes of downtime since the last update.

Uptime of the vps itself is 8 days.

CPU and I/O are ok.

Network is great.


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2014-12-07 18:06:53--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[===================================================================================>] 104,857,600 42.1M/s   in 2.4s

2014-12-07 18:06:56 (42.1 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## wlanboy (Jan 19, 2015)

Time for an update:



2 hours and 26 minutes of downtime since the last update.

Uptime of the vps itself is 19 days.

CPU and I/O are ok.

Network is great:


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2015-01-19 18:22:00--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================>] 104,857,600 72.6M/s   in 1.4s

2015-01-19 18:22:01 (72.6 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## wlanboy (Feb 15, 2015)

Time for an update:



1 hour and 11 minutes of downtime since the last update.

Uptime of the vps itself is 6 days.

CPU could be better and I/O is ok.

Network is great:


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2015-02-15 22:25:06--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[========================================================>] 104,857,600 49.5M/s   in 2.0s

2015-02-15 22:25:08 (49.5 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## Aldryic C'boas (Feb 15, 2015)

Performance on NJ OVZ is horrible, to be honest.  The NJ builds are identical to the original LV deployment, which was before we started implementing SSD cache properly.  While none of the node's resources are actually oversold, the hardware is just having trouble keeping up (especially during common CRON times) now that the deployment is close to capacity.

We do have some major hardware upgrades going out to NJ within the next couple weeks to address these issues and bring the performance back to ideal levels.  I'll let @Francisco chime in on that one with more info, since he'll be doing the actual buildouts.


----------



## Francisco (Feb 15, 2015)

The NJ caches have been slowly starting to fail, so we've had to kick them out, which has been causing some of the I/O spikes. If there was a long period of downtime, it's likely a kernel panic plus a forced FSCK. It's also quite possible it was a leak at Staminus that they didn't deal with; I know there was one that took them a bit to clean up. We do local filtering as well, but sometimes the floods are just nasty.

The E5 nodes don't have a CPU shortage, but some of the 128's do. Originally the L5520's were just fine for them, but things like buggy rsyslogd's, lots of munin's, etc., are taxing them at peak points.

With that being said, we already got some of the equipment in NJ for the big facelift it's due. The 256MB+ plans are finally going pure SSD, the 128MB's are getting moved up to L56XX processors, and new SSD caches for those too. We'll also be bumping the RAID cache from 128MB to 512MB with newer cards. At this point we're just awaiting a shipment of RAM to LV before I head down to LV to personally prep the gear before it gets shipped to NJ.

Francisco


----------



## wlanboy (Mar 22, 2015)

Time for an update:



57 minutes of downtime since the last update.

Uptime of the vps itself is 7 days.

CPU could be better and I/O is ok.

Network is great:


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2015-03-22 14:30:38--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[========================================================>] 104,857,600 64.5M/s   in 1.6s

2015-03-22 14:30:40 (64.5 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## wlanboy (Jul 2, 2015)

Time for an update:



1 day, 9 hours, and 1 minute of downtime since the last update.

Uptime of the vps itself is 67 days.

CPU could be better and I/O is ok.

Network is great:


```
wget cachefly.cachefly.net/100mb.test -O /dev/null
--2015-07-02 08:03:09--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[=================================================================================================================================================>] 104,857,600 89.9M/s   in 1.1s

2015-07-02 08:03:10 (89.9 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## wlanboy (Nov 18, 2015)

Time for an update:





30 minutes of network downtime since the last update.


CPU could be better and I/O is ok.


----------



## wlanboy (Jan 1, 2016)

Time for an update:





0 minutes of network downtime since the last update.


CPU could be better and I/O is ok.


----------



## wlanboy (May 27, 2016)

1 hour and 36 minutes of network downtime since the last update.


CPU could be better and I/O is ok.


----------



## drmike (Jun 1, 2016)

Glad to see you sticking with these reviews for the long haul, @wlanboy


----------



## wlanboy (Sep 2, 2016)

38 minutes and 33 seconds of network downtime since the last update.


Uptime of the vps is 496 days.


CPU could be better and I/O is ok.


----------

