# NON-CC GVH



## Nett (Jan 28, 2014)

http://lowendtalk.com/discussion/20780/greenvaluehost-no-cc-here-ipv6-availible-5-m-100gb-disk-20tb-bw-powered-by-dual-e5s-ssds


----------



## Aldryic C'boas (Jan 28, 2014)

Well, good on him for branching out.  I don't suppose there's any test IPs floating around?


----------



## Nett (Jan 28, 2014)

Aldryic C said:


> Well, good on him for branching out.  I don't suppose there's any test IPs floating around?


No, and also no benchmarks.


----------



## Aldryic C'boas (Jan 28, 2014)

Eh, those will pop up eventually.  I was more interested in the traceroutes.


----------



## Nett (Jan 28, 2014)

Aldryic C said:


> Eh, those will pop up eventually.  I was more interested in the traceroutes.


I'm going to stress test it.


----------



## john (Jan 28, 2014)

Try 107.158.160.105

Any particular requests for traceroutes?


```
[email protected]:/# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  50.2.186.19 (50.2.186.19)  0.094 ms  0.038 ms  0.033 ms
 2  23.90.60.57 (23.90.60.57)  0.382 ms  0.369 ms  0.359 ms
 3  23.90.60.17 (23.90.60.17)  0.393 ms  0.332 ms  0.244 ms
 4  lag-7-864.ear1.Dallas1.Level3.net (4.31.141.237)  0.689 ms  0.657 ms  0.818 ms
 5  * * *
 6  Google-level3-3x10G.Dallas.Level3.net (4.68.70.166)  47.819 ms  51.984 ms  51.769 ms
 7  * * 72.14.233.67 (72.14.233.67)  13.745 ms
 8  72.14.237.215 (72.14.237.215)  2.083 ms  2.022 ms 72.14.237.221 (72.14.237.221)  22.949 ms
 9  216.239.47.121 (216.239.47.121)  10.313 ms  8.695 ms 209.85.243.178 (209.85.243.178)  8.679 ms
10  216.239.46.59 (216.239.46.59)  19.286 ms  19.166 ms 216.239.46.63 (216.239.46.63)  7.841 ms
11  * * *
12  google-public-dns-a.google.com (8.8.8.8)  8.049 ms  8.372 ms  8.174 ms
```


----------



## D. Strout (Jan 28, 2014)

Pretty good offer, if it holds out. That's a lot of IPs. I ordered the 512MB plan yearly, and I didn't receive any IPv6. I'm logging in now to do some speed tests; I'll post back when I get those in. It takes care of what I need, though, so if the VPS isn't crap, I'm happy 

*Edit:* Server is with Eonix Corp - the folks that run ServerHub. Their Dallas AS (AS62904) is single-homed to Level3. See http://bgp.he.net/AS62904 and http://serverhub.com/products/dedicated/enterprise-dedicated-servers.php

*Edit 2:* Network speed tests are OK, about what I'd expect for the 350Mb/s port speed. Peaks at about 32MB/s to locations in LA, Chicago, and Miami. Overseas to Amsterdam starts slow and builds up to about 12MB/s. As many have mentioned on the LET thread, disk is slow - dd tests show around 25MB/s. Yet strangely, it doesn't seem to show. I ran updates on the OS and they installed very quickly, and the VPS seems fairly snappy. Maybe GVH is somehow capping artificial disk speed tests?
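If anyone wants to cross-check a suspicious dd number against a less synthetic workload, here's a rough sketch (file counts and sizes are arbitrary placeholders, nothing GVH-specific): a cache or cap that only smooths large sequential writes won't hide slowness on a many-small-files copy.

```shell
# Sketch: compare a synthetic sequential write with a many-small-files copy.
workdir=$(mktemp -d)

# 1) The usual synthetic test (256 MB, kept small for illustration).
dd if=/dev/zero of="$workdir/ddtest" bs=64k count=4k conv=fdatasync 2> "$workdir/dd.log"
ddline=$(tail -n1 "$workdir/dd.log")   # last line holds the MB/s figure

# 2) A 500-small-files workload, harder to fake with a write-back cache.
mkdir "$workdir/src"
for i in $(seq 1 500); do
  head -c 4096 /dev/urandom > "$workdir/src/f$i"
done
start=$(date +%s)
cp -r "$workdir/src" "$workdir/dst"
sync
elapsed=$(( $(date +%s) - start ))

echo "dd reported: $ddline"
echo "small-file copy took ${elapsed}s"
rm -rf "$workdir"
```

If the dd figure looks great but the small-file copy crawls (or vice versa), something other than raw disk speed is shaping the number.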


----------



## hellogoodbye (Jan 28, 2014)

```
CPU model : Genuine Intel(R) CPU @ 2.00GHz
Number of cores : 4
CPU frequency : 2000.054 MHz
Total amount of ram : 2024 MB
Total amount of swap : 2024 MB
System uptime : 2:54,
Download speed from CacheFly: 30.8MB/s
Download speed from Coloat, Atlanta GA: 32.9MB/s
Download speed from Softlayer, Dallas, TX: 39.3MB/s
Download speed from Linode, Tokyo, JP: 8.30MB/s
Download speed from i3d.net, Rotterdam, NL: 11.1MB/s
Download speed from Leaseweb, Haarlem, NL: 10.8MB/s
Download speed from Softlayer, Singapore: 5.44MB/s
Download speed from Softlayer, Seattle, WA: 15.8MB/s
Download speed from Softlayer, San Jose, CA: 24.3MB/s
Download speed from Softlayer, Washington, DC: 32.3MB/s
I/O speed : 23.4 MB/s
```


```
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 84.2906 s, 12.7 MB/s
```


```
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  50.2.186.19 (50.2.186.19)  0.080 ms  0.035 ms  0.031 ms
 2  23.90.60.57 (23.90.60.57)  0.409 ms  0.420 ms  0.413 ms
 3  23.90.60.17 (23.90.60.17)  0.439 ms  0.462 ms  0.381 ms
 4  lag-7-864.ear1.Dallas1.Level3.net (4.31.141.237)  1.091 ms  1.037 ms  1.177 ms
 5  ae-1-60.edge2.Dallas1.Level3.net (4.69.145.11)  1.141 ms  0.961 ms *
 6  Google-level3-3x10G.Dallas.Level3.net (4.68.70.166)  2.435 ms  1.492 ms  1.411 ms
 7  72.14.233.67 (72.14.233.67)  1.331 ms 72.14.233.65 (72.14.233.65)  1.164 ms 72.14.233.67 (72.14.233.67)  4.858 ms
 8  72.14.237.219 (72.14.237.219)  1.379 ms 72.14.237.221 (72.14.237.221)  1.448 ms  1.296 ms
 9  216.239.47.121 (216.239.47.121)  8.101 ms  8.106 ms 209.85.243.178 (209.85.243.178)  7.991 ms
10  216.239.46.39 (216.239.46.39)  8.839 ms 216.239.46.63 (216.239.46.63)  8.845 ms  8.777 ms
11  * * *
12  google-public-dns-a.google.com (8.8.8.8)  9.138 ms  9.311 ms  9.413 ms
```
Let me know if there are any other kinds of benchmarks/speed tests to try.


----------



## drmike (Jan 28, 2014)

Someone wondered about the upstream network; man in the middle, perhaps...

whois 50.2.186.19 [IP from hellogoodbye's VPS above]

```
#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html
#

#
# The following results may also be obtained via:
# http://whois.arin.net/rest/nets;q=50.2.186.19?showDetails=true&showARIN=false&ext=netref2
#

# start

NetRange: 50.2.184.0 - 50.2.187.255
CIDR: 50.2.184.0/22
OriginAS: AS30693
NetName: CUST-NETBLK-DAL-50-2-184-0-22-001
NetHandle: NET-50-2-184-0-1
Parent: NET-50-2-0-0-1
NetType: Reassigned
Comment: This space is static assigned to ServerHub Dedicated
Comment: Servers. Please contact ServerHub directly to report abuse. Visit our
Comment: report a problem help page at:
Comment: http://www.serverhub.com/help/policies/report-a-problem.php or submit a
Comment: support ticket at: http://support.serverhub.com
RegDate: 2013-04-23
Updated: 2013-04-23
Ref: http://whois.arin.net/rest/net/NET-50-2-184-0-1

OrgName: ServerHub Dallas
OrgId: SD-106
Address: 8600 Harry Hines Blvd Suite 200
City: Dallas
StateProv: TX
PostalCode: 75235
Country: US
RegDate: 2013-04-23
Updated: 2013-04-23
Ref: http://whois.arin.net/rest/org/SD-106

OrgAbuseHandle: DSS18-ARIN
OrgAbuseName: Dedicated Server Support
OrgAbusePhone: +1-702-968-9305
OrgAbuseEmail: [email protected]serverhub.com
OrgAbuseRef: http://whois.arin.net/rest/poc/DSS18-ARIN

OrgTechHandle: DSS18-ARIN
OrgTechName: Dedicated Server Support
OrgTechPhone: +1-702-968-9305
OrgTechEmail: [email protected]serverhub.com
OrgTechRef: http://whois.arin.net/rest/poc/DSS18-ARIN

# end

# start

NetRange: 50.2.0.0 - 50.3.255.255
CIDR: 50.2.0.0/15
OriginAS: AS30693
NetName: EONIX-NET-50-2-0-0-1-BLK-7
NetHandle: NET-50-2-0-0-1
Parent: NET-50-0-0-0-0
NetType: Direct Allocation
Comment: Dedicated Servers, Cloud, VPS, Web Hosting and so much more.
Comment:
Comment: -This space is statically assigned.-
RegDate: 2010-06-25
Updated: 2013-02-27
Ref: http://whois.arin.net/rest/net/NET-50-2-0-0-1

OrgName: Eonix Corporation
OrgId: EONIX
Address: 2360 Corporate Circle Suite 400
City: Henderson
StateProv: NV
PostalCode: 89074
Country: US
RegDate: 2006-05-31
Updated: 2011-09-24
Ref: http://whois.arin.net/rest/org/EONIX

OrgTechHandle: ADMIN839-ARIN
OrgTechName: Administrator
OrgTechPhone: +1-877-841-3341
OrgTechEmail: [email protected]
OrgTechRef: http://whois.arin.net/rest/poc/ADMIN839-ARIN

OrgAbuseHandle: ADMIN839-ARIN
OrgAbuseName: Administrator
OrgAbusePhone: +1-877-841-3341
OrgAbuseEmail: [email protected]
OrgAbuseRef: http://whois.arin.net/rest/poc/ADMIN839-ARIN

RNOCHandle: ADMIN839-ARIN
RNOCName: Administrator
RNOCPhone: +1-877-841-3341
RNOCEmail: [email protected]
RNOCRef: http://whois.arin.net/rest/poc/ADMIN839-ARIN

RAbuseHandle: ADMIN839-ARIN
RAbuseName: Administrator
RAbusePhone: +1-877-841-3341
RAbuseEmail: [email protected]
RAbuseRef: http://whois.arin.net/rest/poc/ADMIN839-ARIN

RTechHandle: ADMIN839-ARIN
RTechName: Administrator
RTechPhone: +1-877-841-3341
RTechEmail: [email protected]
RTechRef: http://whois.arin.net/rest/poc/ADMIN839-ARIN
```


----------



## drmike (Jan 28, 2014)

hellogoodbye said:


> I/O speed : 23.4 MB/s
> dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
> 
> 
> ...


Ahh those speeds suck...

How is the network?


----------



## hellogoodbye (Jan 28, 2014)

drmike said:


> Ahh those speeds suck...
> 
> How is the network?


Did you mean this command, or something else?


```
wget -O /dev/null http://cachefly.cachefly.net/100mb.test
--2014-01-28 23:29:50--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================================>] 104,857,600 79.0M/s   in 1.3s

2014-01-28 23:29:51 (79.0 MB/s) - `/dev/null' saved [104857600/104857600]
```


----------



## DomainBop (Jan 28, 2014)

hellogoodbye said:


> CPU model : Genuine Intel(R) CPU @ 2.00GHz
> Number of cores : 4
> CPU frequency : 2000.054 MHz
> Total amount of ram : 2024 MB
> ...


Your VPS has 2GB RAM/2GB vRAM.  The advertised specs are 2GB "Guaranteed" RAM/4GB vRAM

As for the disk speeds...that's a new node with "LSI HARDWARE RAID-10 SSD Cached Disk Space"?  That server has problems...


----------



## hellogoodbye (Jan 28, 2014)

DomainBop said:


> Your VPS has 2GB RAM/2GB vRAM.  The advertised specs are 2GB "Guaranteed" RAM/4GB vRAM
> 
> As for the disk speeds...that's a new node with "LSI HARDWARE RAID-10 SSD Cached Disk Space"?  That server has problems...


I noticed the vSwap RAM discrepancy too, opened a ticket about it and currently waiting for a response. Doesn't seem like I'm the only one with low I/O either judging from the comments on the LET post.


----------



## SkylarM (Jan 28, 2014)

hellogoodbye said:


> I noticed the vSwap RAM discrepancy too, opened a ticket about it and currently waiting for a response. Doesn't seem like I'm the only one with low I/O either judging from the comments on the LET post.


I bet he doesn't know how Solus handles setting up vSwap in the package creation. He couldn't quite figure out how to set 100TB to 100TB instead of 97TB either.

(Hint: in a Solus package, vSwap is calculated as Burst minus Guaranteed. So 2048+4096=6144, and 6144 is what "burst" should be set to in the package via Solus.)
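For anyone wanting to sanity-check that hint, the arithmetic can be sketched as follows (values in MB, matching this offer's advertised 2 GB guaranteed / 4 GB vSwap):

```shell
# SolusVM derives vSwap as Burst minus Guaranteed, so to deliver
# 2048 MB RAM + 4096 MB vSwap the package's "burst" field must be 6144.
guaranteed=2048
vswap_wanted=4096

burst=$(( guaranteed + vswap_wanted ))   # value to enter as "burst" in Solus
vswap_actual=$(( burst - guaranteed ))   # what the container actually gets

echo "burst to configure: ${burst} MB"
echo "resulting vSwap:    ${vswap_actual} MB"
```

Setting burst to 4096 instead, as apparently happened here, would leave the guest with only 2048 MB of vSwap, which matches the ~2GB swap in the benchmark output above.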


----------



## kaniini (Jan 28, 2014)

I got one of these just now for my own amusement.

I am not a huge believer in the 'dd test' as a gold standard, but it seems really off for a box with RAID-10 SSDs.


```
[[email protected] ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 23.5692 s, 45.6 MB/s
```

Also, the CPU info is kind of whack.  I think it is a prerelease engineering sample E5-2620 (they are floating around on eBay):


```
[[email protected] ~]# cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 45
model name	: Genuine Intel(R) CPU @ 2.00GHz
stepping	: 5
cpu MHz		: 2000.054
cache size	: 15360 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm rep_good aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes avx hypervisor lahf_lm ida arat epb xsaveopt pln pts dts
bogomips	: 4000.10
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 45
model name	: Genuine Intel(R) CPU @ 2.00GHz
stepping	: 5
cpu MHz		: 2000.054
cache size	: 15360 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm rep_good aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes avx hypervisor lahf_lm ida arat epb xsaveopt pln pts dts
bogomips	: 4000.10
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:
```

CPU Family 6, Model 45 (this matches production E5-2620), but Stepping is 5 instead of 7.  At least, all of my E5's have stepping 7 (I just checked a handful of nodes to be sure).
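The family/model/stepping comparison can be pulled in one go; this is a small sketch using the standard Linux /proc/cpuinfo field names, with fallbacks for kernels or architectures that don't expose them:

```shell
# Read the identification fields being compared. Production E5-2620 reports
# family 6, model 45, stepping 7; stepping 5 on the same model points at a
# pre-release/engineering-sample part.
family=$(awk -F': ' '/^cpu family/ {print $2; exit}' /proc/cpuinfo 2>/dev/null)
model=$(awk -F': ' '/^model\t/ {print $2; exit}' /proc/cpuinfo 2>/dev/null)
step=$(awk -F': ' '/^stepping/ {print $2; exit}' /proc/cpuinfo 2>/dev/null)

# Fall back gracefully where the fields are absent (e.g. non-x86).
family=${family:-unknown}
model=${model:-unknown}
step=${step:-unknown}

echo "family=$family model=$model stepping=$step"
```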

Edit: it also seems like the network is laggy.  I wonder if someone is already running DDoS scripts on this node.  It feels like outbound DDoS to me.


----------



## Shados (Jan 29, 2014)

I bought a yearly 512MB one earlier and would like to contribute some observations of my own, but...:

*[attached screenshots did not survive the archive]*

EDIT: Times in GMT+11/AEDT.


----------



## trewq (Jan 29, 2014)

Shados said:


> Times in GMT+11/AEDT.


Same time zone *high five*


----------



## DomainBop (Jan 29, 2014)

kaniini said:


> I am not a huge believer in the 'dd test' as a gold standard, but it seems really off for a box with RAID-10 SSDs.


It's SSD-cached, not "pure" SSD, but it's still really off for what a new, almost-empty node with HW RAID-10 should be.


----------



## GIANT_CRAB (Jan 29, 2014)

RAID 10 floppy disks + SATA disks cached???????


----------



## Nett (Jan 29, 2014)

CD's or USB 2.0's maybe


----------



## joepie91 (Jan 29, 2014)

Bookmarked. Let's see how long this offer will last for.


----------



## Francisco (Jan 29, 2014)

Dat slabbin'


----------



## kaniini (Jan 29, 2014)

Francisco said:


> Dat slabbin'


Yeah I noticed the hypervisor bit was set later.  Going to see if I can find out more.  It's probably KVM I bet.


----------



## GVH-Jon (Jan 29, 2014)

Someone is hammering the I/O on tx3. The I/O performance should not be that low at all. I'll have everything looked at and resolved shortly, and the node will be more closely monitored.

tx3 is _single slabbed_ with a _Xen PV hypervisor_ for performance & optimization purposes.


----------



## kaniini (Jan 29, 2014)

Performance & optimization purposes?  Whaaaaaat?

He isn't lying though -- it is running as a Xen PV hypervisor...


```
[[email protected] ~]# ./cpuid2
Xen found... checking CPUID leaf 0x40000001
Detected Xen v4.3 [PV]
```

If anyone wants the code, let me know. I think I will put this on GitHub so people can detect whether they're being slabbed or not.
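Without kaniini's cpuid2 tool, the same check can be approximated from a shell. This is a hedged sketch using standard Linux interfaces (/sys/hypervisor and the CPUID "hypervisor" flag), not his actual code:

```shell
# On a Xen guest, /sys/hypervisor/type reads "xen"; the "hypervisor"
# flag in /proc/cpuinfo only indicates that *some* hypervisor is present.
if [ -r /sys/hypervisor/type ]; then
  hv=$(cat /sys/hypervisor/type)
elif grep -qw hypervisor /proc/cpuinfo 2>/dev/null; then
  hv="present (type unknown from CPUID bit alone)"
else
  hv="none detected"
fi
echo "hypervisor: $hv"
```

Reading CPUID leaf 0x40000000 directly, as cpuid2 does, additionally recovers the hypervisor signature and version.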


----------



## hellogoodbye (Jan 29, 2014)

GVH-Jon said:


> Someone is hammering the I/O on tx3. The I/O performance should not be that low at all. I'll have everything looked at and resolved shortly, and the node will be more closely monitored.
> 
> tx3 is _single slabbed_ with a _Xen PV Hypervisor _for performance & optimization purposes.


Oh good! I hope it's fixed soon. I just checked right now, and it's still more or less the same:


```
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 84.3402 s, 12.7 MB/s
```


```
CPU model : Genuine Intel(R) CPU @ 2.00GHz
Number of cores : 4
CPU frequency : 2000.054 MHz
Total amount of ram : 2024 MB
Total amount of swap : 2024 MB
System uptime : 11:21,
Download speed from CacheFly: 52.0MB/s
Download speed from Coloat, Atlanta GA: 41.7MB/s
Download speed from Softlayer, Dallas, TX: 46.1MB/s
Download speed from Linode, Tokyo, JP: 6.92MB/s
Download speed from i3d.net, Rotterdam, NL: 11.1MB/s
Download speed from Leaseweb, Haarlem, NL: 11.0MB/s
Download speed from Softlayer, Singapore: 5.52MB/s
Download speed from Softlayer, Seattle, WA: 29.9MB/s
Download speed from Softlayer, San Jose, CA: 28.1MB/s
Download speed from Softlayer, Washington, DC: 29.6MB/s
I/O speed : 15.3 MB/s
```


----------



## GVH-Jon (Jan 29, 2014)

kaniini said:


> Performance & optimization purposes?  Whaaaaaat?


The sysadmin who set up tx3 couldn't get flashcache working for some reason, so in order to make use of the SSD attached to the mobo we decided to single-slab the hardware node into the virtualization we found most suitable: Xen PV.

We're still working on getting things worked out, though; a lot to do today.


----------



## zzrok (Jan 29, 2014)

GVH-Jon said:


> The sysadmin who set up tx3 couldn't get flashcache working for some reason, so in order to make use of the SSD attached to the mobo we decided to single-slab the hardware node into the virtualization we found most suitable: Xen PV.
> 
> We're still working on getting things worked out, though; a lot to do today.


If you couldn't figure that out, why the hell are you selling the product?  Get your shit in order before you start selling.


----------



## kaniini (Jan 29, 2014)

GVH-Jon said:


> The sysadmin who set up tx3 couldn't get flashcache working for some reason, so in order to make use of the SSD attached to the mobo we decided to single-slab the hardware node into the virtualization we found most suitable: Xen PV.
> 
> We're still working on getting things worked out, though; a lot to do today.


By flashcache do you mean on the RAID card, or is this mdraid + flashcache?

Either way, I can help you do this correctly for a customary fee.


----------



## SPINIKR-RO (Jan 29, 2014)

kaniini said:


> By flashcache do you mean on the RAID card, or is this mdraid + flashcache?
> 
> Either way, I can help you do this correctly for a customary fee.


Please, he has 18 staff to deal with things like this. Shame on you, trying to belittle Jon by offering to assist... psh

Anyway, his secretary likely autopenned the above post; he's likely beachside by now.


----------



## drmike (Jan 29, 2014)

kaniini said:


> By flashcache do you mean on the RAID card, or is this mdraid + flashcache?
> 
> Either way, I can help you do this correctly for a customary fee.


I doubt he knows, unless the person doing the work conveyed this. I am certain he could use real technical help as needed.



SPINIKR-RO said:


> Anyways his secretary likely autopened the above post, hes likely beach side by now.


Technically, I think he's in class. It's a school day.


----------



## hellogoodbye (Jan 29, 2014)

Another update, I/O speeds are looking a little better now!


```
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.9726 s, 97.9 MB/s
```


```
CPU model :  Genuine Intel(R) CPU  @ 2.00GHz
Number of cores : 4
CPU frequency :  2000.054 MHz
Total amount of ram : 2024 MB
Total amount of swap : 4096 MB
System uptime :   19:55,
Download speed from CacheFly: 50.2MB/s
Download speed from Coloat, Atlanta GA: 43.0MB/s
Download speed from Softlayer, Dallas, TX: 48.0MB/s
Download speed from Linode, Tokyo, JP: 9.04MB/s
Download speed from i3d.net, Rotterdam, NL: 10.8MB/s
Download speed from Leaseweb, Haarlem, NL: 3.24MB/s
Download speed from Softlayer, Singapore: 4.19MB/s
Download speed from Softlayer, Seattle, WA: 19.2MB/s
Download speed from Softlayer, San Jose, CA: 28.0MB/s
Download speed from Softlayer, Washington, DC: 31.5MB/s
I/O speed :  84.4 MB/s
```


----------



## Virtovo (Jan 29, 2014)

hellogoodbye said:


> Another update, I/O speeds are looking a little better now!
> 
> 
> dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
> ...


Looking promising.  Once everyone is finished hammering the node it may rise even further.


----------



## DomainBop (Jan 29, 2014)

GVH-Jon said:


> The sysadmin who set up tx3 couldn't get flashcache working for some reason, so in order to make use of the SSD attached to the mobo we decided to single-slab the hardware node into the virtualization we found most suitable: Xen PV.
> 
> We're still working on getting things worked out, though; a lot to do today.


A. the sys admin is obviously incompetent.  Did you find him on fiverr?

B. Why did you post an offer before you fixed the server (i.e. why did you intentionally sell your customers a piece of crap)?  Do you think it's fair to your customers that you advertised something as "SSD Cache" when the reality is it performs like a PATA drive?

If you want to build a long-term business that will be sustainable you need to start thinking more of your customers and how to offer them a reliable product, and less about how much $$$ you can make in the shortest period of time.

TL;DR version: look to Prometeus/RamNode, etc for inspiration on how to run a hosting business rather than trying to emulate Fabozo and his crew of circus performers.


----------



## Hxxx (Jan 29, 2014)

DomainBop said:


> A. the sys admin is obviously incompetent.  Did you find him on fiverr?
> 
> B. Why did you post an offer before you fixed the server (i.e. why did you intentionally sell your customers a piece of crap)?  Do you think it's fair to your customers that you advertised something as "SSD Cache" when the reality is it performs like a PATA drive?
> 
> ...


This above comment is so dumb. You always have to throw the same shit?


----------



## drmike (Jan 29, 2014)

DomainBop said:


> B. Why did you post an offer before you fixed the server (i.e. why did you intentionally sell your customers a piece of crap)?  Do you think it's fair to your customers that you advertised something as "SSD Cache" when the reality is it performs like a PATA drive?


This is a good summary and lesson for GVH-Jon. Buy server, config and install, load test... test some more... pound the network a bit... reboot, retest... thrash disk for a bit... repeat...

3 days later, maybe you can have confidence in selling on that server.

It's a shame that his first non-CC offer underimpressed up to this point. SSD cache + RAID controller = not free. But the numbers have gone up, which is good.
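That burn-in routine could be scripted as something like the sketch below. The iteration count and sizes here are tiny for illustration (a real burn-in would run for days), and the CacheFly URL is simply the one used earlier in this thread:

```shell
# Repeat disk (and optionally network) tests over time and keep a log,
# rather than trusting a single run on an idle node.
ITERATIONS=${ITERATIONS:-3}
LOG=${LOG:-burnin.log}
: > "$LOG"

for i in $(seq 1 "$ITERATIONS"); do
  # Disk: synthetic sequential write; throughput is on dd's last stderr line.
  disk=$(dd if=/dev/zero of=burnin.tmp bs=64k count=1k conv=fdatasync 2>&1 | tail -n1)
  echo "$(date -u +%FT%TZ) run $i disk: $disk" >> "$LOG"
  rm -f burnin.tmp
  # Network (optional, needs outbound connectivity):
  # wget -O /dev/null http://cachefly.cachefly.net/100mb.test
  sleep 1
done

echo "runs logged: $(wc -l < "$LOG")"
```

A log like this makes it obvious whether a good number was a one-off or whether performance holds up under repetition.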



hrr1963 said:


> This above comment is so dumb. You always have to throw the same shit?


I resemble that comment.


----------



## Virtovo (Jan 29, 2014)

DomainBop said:


> A. the sys admin is obviously incompetent.  Did you find him on fiverr?
> 
> B. Why did you post an offer before you fixed the server (i.e. why did you intentionally sell your customers a piece of crap)?  Do you think it's fair to your customers that you advertised something as "SSD Cache" when the reality is it performs like a PATA drive?
> 
> ...


Exactly.


----------



## D. Strout (Jan 30, 2014)

Anyone have one of these with _working_ IPv6? I didn't get any allocated at first. I ticketed and got them (along with one of the 25 free year + IPs slots), but they don't work: ping6, traceroute6, and wget -6 all fail. I sent in another ticket, but I'm curious whether anyone else is having this issue.


----------



## Kadar (Jan 30, 2014)

> Greetings Valued Clients,
> 
> 
> 
> ...


----------



## SPINIKR-RO (Jan 30, 2014)

Heh, well at least he is not selling the customers off again.

Not really sure what the upstream provider has to do with not being able to fix an issue with the server. I assume all of the "cloud" stuff is a big VM provided by the upstream, and GVH was looking to them to fix whatever the server's issue was.



> We're going to be discontinuing our offerings of cloud hosting in all locations with the exception of Los Angeles


Just to be clear, the entire argument as to why GVH sold their customers was to move everything to the 'cloud' and now no more cloud!


----------



## MannDude (Feb 1, 2014)

I suspect the new location was set up in a rush to distance himself from ColoCrossing amidst the recent week's drama. I think it's a good move to diversify, and I think there are more people seeking non-CC hosting now than there are people seeking CC-specific hosting. With that said, it was rushed.

While I've given GVH-Jon shit in the past, and perhaps will again in the future, he takes it well. Other youngsters would get mad, throw public hissy fits, and further damage their credibility; he, on the other hand, does seem to be making some positive progress in actually listening to suggestions.


----------



## iSky (Feb 1, 2014)

Just to make this sound interesting: the payment invoice confirmation and the suspension notice for my services were thrown into my spam box.

LoL, it was the Yahoo filter; I never marked it as spam.

So maybe their email was flagged by Yahoo as spam.


----------



## Wintereise (Feb 1, 2014)

iSky said:


> Just to make this sound interesting: the payment invoice confirmation and the suspension notice for my services were thrown into my spam box.
> 
> ...


Owing to the recent incidents (the alleged compromises), and the fact that Yahoo hasn't been on par with other large email service providers for years now, you should probably seek another provider to host your mail.

They're also impossible to work with from a mail-server administrator's perspective.


----------



## Kadar (Feb 1, 2014)

Part 2 -



> Valued Clients,
> 
> A couple of things to note regarding the srv3 migration today:
> 
> ...


----------



## drmike (Feb 1, 2014)

Who writes this stuff? The pain!

*"Clients will be responsible for checking periodically to see if their website has been migrated (this will be explained further in this email)"*

Why should clients have to frantically monitor their stuff to determine whether a migration has happened? If something is going to take days, I'd expect accounts to be gone through in some order, with techs and support integrated and notifying customers as they go.


----------



## Kadar (Feb 1, 2014)

Did anyone else notice the last octet of the IP is missing? My uptime monitor has been going crazy, with the server going down all day today.


----------



## drmike (Feb 1, 2014)

Kadar said:


> Did anyone else notice the last octet of the IP is missing? My uptime monitor has been going crazy, with the server going down all day today.


Which IP are you referring to?

This is the typo: *https://205.234.159:2087*

It should be:

*https://205.234.159.58:2087*

Is that what you meant?


----------



## Kadar (Feb 1, 2014)

Yeah, I think it's funny. And sadly it's not non-CC: it's using CC in Chicago.


----------



## drmike (Feb 1, 2014)

Kadar said:


> Yeah, I think it's funny. And sadly it's not non-CC: it's using CC in Chicago.


The kid only went non-CC in, hmmm...

Let's see, ahh, the shared hosting was ServerMania / is until that migration happens... which is a CC partner.

The new Texas offer for wild VPS (although somewhat reduced from the totally wild prior) is ServerHub... I might have, ahh, previously said some linkage was going on... But they did that whole NO-CC-HERE photo thing.

Yeah, looks like more and plenty of CC here with GVH.

I am digging the diversion though; almost got me with that NO-CC thing. I know it's hard getting off the nipple and kicking the habit, especially when they make those crazy sales akin to dope peddlers hooking folks on their drugs. I give the kid credit; a half step still counts.


----------



## Nett (Feb 2, 2014)

GVH said:


> as our Buffalo, NY upstream cloud hosting provider has decided to be uncooperative with us.


haha


----------



## Kadar (Feb 3, 2014)

> Valued Clients,
> 
> 
> We are sending out this email to you today to explain the recent and current TX3 downtime, IO, and connectivity issues.
> ...


----------



## EMayes1991 (Feb 3, 2014)

Kadar said:


> > Valued Clients,
> >
> >
> > We are sending out this email to you today to explain the recent and current TX3 downtime, IO, and connectivity issues.
> > ...



I've been lurking here for a long time, and I think this is where I put my foot down and post.

My VPS is in Texas, and it's been one problem after another with these guys. First disk speeds are in the 20MB/s range on SSD-cached nodes, then the network goes down for two hours.

It feels like you are blaming IO speeds on "issues related to the network." I do not think these guys are using iSCSI, so that claim doesn't hold up. We get lies, lies, and more lies. All we have is empty promises and nothing more from GreenValueHost. It's nice how the email throws ServerHub under the bus immediately, while they cannot account for their own actions.

I think Alexander @ HostUS made a large mistake partnering with these guys. I think he will be infected with the ego attitude as well as the poor support and services.


----------



## hellogoodbye (Feb 3, 2014)

Where was this letter posted? If it was sent out as an email, I never received it and I'm on the TX3 node.

(Yes, I've checked my spam folder as well.)


----------



## telephone (Feb 3, 2014)

> Valued Clients,
> 
> 
> We are sending out this email to you today to explain the recent and current TX3 downtime, IO, and connectivity issues.
> ...


----------



## GVH-Jon (Feb 3, 2014)

http://i.imgur.com/IHnjEb5.png

We were not lying when we said that the issues were due to our upstream. This is a screenshot of part of the conversation that we had with our ServerHub account manager.


----------



## GVH-Jon (Feb 3, 2014)

If a confirmation is needed to verify our statements further I can contact our upstream and ask them to make a public statement confirming the facts in our email as true.


----------



## EMayes1991 (Feb 3, 2014)

GVH-Jon said:


> If a confirmation is needed to verify our statements further I can contact our upstream and ask them to make a public statement confirming the facts in our email as true.


Is it true the network made the disks slower?



GVH-Jon said:


> http://i.imgur.com/IHnjEb5.png
> 
> We were not lying when we said that the issues were due to our upstream. This is a screenshot of part of the conversation that we had with our ServerHub account manager.


Sorry, my English is bad; I did not mean this time, but in general.

"This is around the ballpark of the 5th time this month they've done this." That seems rather unprofessional to say to your clients.


----------



## SPINIKR-RO (Feb 3, 2014)

L3 issues have been a very recent occurrence. I have seen a few on the outages list this week and last, I think, just skimming. Though I don't know if it's related or even in the same area.

That said, the problem is obvious: the same issues every time. Though it may be a perfect opportunity for GVH to blame some recent networking issue, nothing says inexperience like 800+ accounts on one server and a statement saying that IO issues are related to an L3 networking incident.

Is this the same 'srv3' that's been in other discussions, or is 'tx3' different? I assume the numbering just starts at three, per location.


----------



## GVH-Jon (Feb 3, 2014)

The low IO was found to be stemming from a client on our tx3 node who was migrating & backing up a very large number of cPanel accounts over to his VPS hosted on our tx3 node simultaneously. As you may already be aware, these take up a lot of processes. In order for the client to get everything done as soon as possible, we did NOT throttle his port speed. All the downtime/network connectivity issues that have been happening severely impacted/interrupted this process, so it had to take longer and longer, and that is how the network issues affected IO. We weren't going to suspend this client, because they were using their VPS for a legitimate purpose and their high usage was only temporary.


----------



## EMayes1991 (Feb 3, 2014)

SPINIKR-RO said:


> Is this the same 'srv3' thats been in other discussions or is 'tx3' different. I assume the number just starts at three, per location.


srv3 is in Buffalo, New York, I think. 3 must be an unlucky number.



GVH-Jon said:


> The low IO was found to be stemming from a client on our tx3 node who was migrating & backing up a very large number of cPanel accounts over to his VPS hosted on our tx3 node simultaneously.


BS right here sir. Why would someone migrate from Tx3 to Tx3?


----------



## GVH-Jon (Feb 3, 2014)

EMayes1991 said:


> BS right here sir. Why would someone migrate from Tx3 to Tx3?


I meant that they were transferring accounts over to their VPS on TX3 from a VPS not hosted with us, while at the same time R1Soft on their cPanel server is making mass backups of all of the existing accounts on the VPS.


----------



## EMayes1991 (Feb 3, 2014)

GVH-Jon said:


> I meant that they were transferring accounts over to their VPS on TX3 from a VPS not hosted with us, while at the same time R1Soft on their cPanel server is making mass backups of all of the existing accounts on the VPS.


OK, thanks for the clarification. I hope you can fix all your problems.

Edit: I just realized that the disk speed has been bad all week... Can anyone back me up and confirm they experienced the same?

It doesn't make sense that the backups have been running for about 5-7 days.


----------



## GVH-Jon (Feb 3, 2014)

EMayes1991 said:


> OK, thanks for the clarification. I hope you can fix all your problems.
> 
> Edit: I just realized that the disk speed has been bad all week... Can anyone back me up?
> 
> It doesn't make sense that the backups have been running for about 5-7 days.


It does if you look at it this way:

A typical person isn't going to stare at a computer screen 24 hours a day. So the network goes down, the server goes offline, the SSH connection drops, and the migration halts. The person notices a few hours later and restarts everything, since the migration halted when the network went down. The migration runs for a few hours, then the network goes down again. The process repeats over and over until the migration is done. That's why a lot of our TX3 customers noticed the I/O speed fluctuating.
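
The halt-and-restart loop described here is avoidable in practice. A minimal sketch of a disconnect-proof, resumable transfer follows; the paths, user, and host (`192.0.2.10` is a documentation address) are hypothetical:

```shell
# Run the migration detached with nohup so a dropped SSH session doesn't
# kill it; rsync --partial lets an interrupted file resume rather than
# restart from zero. Paths, user, and host below are illustrative only.
nohup rsync -az --partial --timeout=60 \
    /backup/cpanel/ user@192.0.2.10:/home/restore/ > migrate.log 2>&1 &
echo "transfer started, log in migrate.log"
```

With this in place, a network drop interrupts the transfer at most briefly instead of silently halting it until someone notices.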


----------



## EMayes1991 (Feb 3, 2014)

GVH-Jon said:


> It does if you look at it this way:
> 
> A typical person isn't going to stare at a computer screen 24 hours a day. So the network goes down, the server goes offline, the SSH connection drops, and the migration halts. The person notices a few hours later and restarts everything, since the migration halted when the network went down. The migration runs for a few hours, then the network goes down again. The process repeats over and over until the migration is done. That's why a lot of our TX3 customers noticed the I/O speed fluctuating.



If that's the case, why did it take so long to let us know that you were having network issues?


----------



## GVH-Jon (Feb 3, 2014)

EMayes1991 said:


> If that's the case, why did it take so long to let us know that you were having network issues?


I admit late notification was a fault on our end, and I sincerely apologize. We'll do our best to be more proactive in notifying our clients of any network related issues in the future, I promise.


----------



## EMayes1991 (Feb 3, 2014)

GVH-Jon said:


> I promise.


OK....


----------



## Nett (Feb 3, 2014)

Kadar said:


> > Valued Clients,
> >
> >
> > We are sending out this email to you today to explain the recent and current TX3 downtime, IO, and connectivity issues.
> > ...


Outages for the whole week LOL.


----------



## Aldryic C'boas (Feb 3, 2014)

It's usually customary to finish crossing a bridge before you start to burn it...


----------



## Virtovo (Feb 3, 2014)

GVH-Jon said:


> It does if you look at it this way:
> 
> A typical person isn't going to stare at a computer screen 24 hours a day. So the network goes down, the server goes offline, the SSH connection drops, and the migration halts. The person notices a few hours later and restarts everything, since the migration halted when the network went down. The migration runs for a few hours, then the network goes down again. The process repeats over and over until the migration is done. That's why a lot of our TX3 customers noticed the I/O speed fluctuating.



Howdy, it's called automated monitoring + alerts. If you need some help with that, let me know via PM.

I hate to say this, but I told you to reconsider your deployment with ServerHub; your BS machine just steamrolled over my advice.


Virtovo said:


> With my recent experiences I'd strongly suggest reconsidering your Dallas location.





> We have personal contacts, an assigned account manager, our own internal team of staff (with at least 4 tech staff available AT ALL TIMES around the clock), etc etc so I'm sure we won't be having any issues.
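
For what it's worth, the "automated monitoring + alerts" Virtovo alludes to can start very small. A sketch of a cron-able TCP probe follows; the host, port, and alert destination in the comment are hypothetical:

```shell
# Probe a TCP port with a 5-second cap; print OK or an ALERT line.
# Bash's /dev/tcp pseudo-device does the connect; no external deps.
check_host() {
    host="$1"; port="$2"
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "OK: $host:$port"
    else
        echo "ALERT: $host:$port unreachable at $(date -u)"
        return 1
    fi
}

# From cron, something like (script path and mail target illustrative):
#   * * * * * /usr/local/bin/check_host 107.158.160.105 22 || mail -s "node down" ops@example.com
```

The nonzero return makes it trivial to chain into mail, SMS gateways, or a status page.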


----------



## concerto49 (Feb 4, 2014)

Serverhub from our usage has always had network issues.


----------



## Virtovo (Feb 4, 2014)

```
ServerHub   Inbox    03:28        Emergency Network Maintenance EXTENSION Notification : : Dallas, TX : : 2/3/2014
ServerHub   Inbox    03/02/2014   Emergency Network Maintenance Notification : : Dallas, TX : : 2/3/2014
ServerHub   Archive  23/01/2014   Emergency Network Maintenance Notification : : Dallas, TX : : 1/23/2014
ServerHub   Archive  21/01/2014   Emergency Network Maintenance Notification : : Dallas, TX : : 1/21/2014
ServerHub   Archive  18/01/2014   Emergency Network Maintenance Notification : : Dallas, TX : : 1/18/2014
```


----------



## iwaswrongonce (Feb 4, 2014)

SPINIKR-RO said:


> Please, he has 18 staff to deal with things like this. Shame, trying to belittle Jon with your offer to assist... psh
> 
> Anyways, his secretary likely auto-penned the above post; he's likely beachside by now.


:rofl:

Christ my sides...still laughing lol.


----------



## DomainBop (Feb 4, 2014)

> We've recently been informed that all of the recent downtime, I/O, and connectivity issues with TX3 are stemming from network related issues. Our upstream provider for our TX3 node, ServerHub, has been having network connectivity issues in their Dallas location the entire month


So, are ServerHub's network problems in Texas also responsible for the IO issues that customers in the Buffalo location have been complaining about?

A quote from Mun on LET:



> I think he is referring to the fact that GVH still has an I/O speed of <20MBps on his NY node and has already moved into a partnership yet still hasn't fixed the server.
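
Figures like "<20MBps" in posts like Mun's usually come from the quick-and-dirty dd write test people run in these threads. A sketch follows; the file name is arbitrary and the size is kept modest:

```shell
# Sequential-write benchmark of the kind behind the quoted figure.
# conv=fdatasync forces the data to disk before dd exits, so the page
# cache doesn't inflate the result; dd reports the MB/s rate at the end.
dd if=/dev/zero of=ddtest.bin bs=64k count=4k conv=fdatasync
rm -f ddtest.bin
```

It only measures sequential writes, so a node can score well here and still feel sluggish under random cPanel I/O, but it's the number most often quoted.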


----------



## Kadar (Feb 4, 2014)

> GreenValueHost, an industry leading provider in premium budget shared web hosting, reseller hosting, virtual private server, and dedicated server hosting solutions, is proud to announce that we have expanded our virtual private server hosting services to Chicago, Illinois in premium datacenter space within the world famous Lakeside Technology Center 1.1m sq ft. carrier-neutral facility.
> 
> 
> We will be utilizing privately maintained datacenter space in the facility and a custom premium network consisting of Internap, nLayer, Cogent, and soon to come, Comcast and Equinix Exchange. Our new VPS nodes in this facility will be brand new HP Generation 8 Xeon E3-1240v2 quad core servers packed with plenty of enterprise SATA storage, accelerated by SSDs with caching from premium Intel SSD drives.
> ...


Premium means HVH out of CC in Illinois




```
NetRange:       205.234.128.0 - 205.234.255.255
CIDR:           205.234.128.0/17
OriginAS:
NetName:        SCN-4
NetHandle:      NET-205-234-128-0-1
Parent:         NET-205-0-0-0-0
NetType:        Direct Allocation
RegDate:        2004-04-29
Updated:        2012-03-02
Ref:            http://whois.arin.net/rest/net/NET-205-234-128-0-1

OrgName:        Server Central Network
OrgId:          SCN-18
Address:        111 W. Jackson Blvd.
Address:        Suite 1600
City:           Chicago
StateProv:      IL
PostalCode:     60604
Country:        US
RegDate:        2002-03-05
Updated:        2013-03-25
Ref:            http://whois.arin.net/rest/org/SCN-18

ReferralServer: rwhois://rwhois.servercentral.net:4321

OrgAbuseHandle: ABUSE1669-ARIN
OrgAbuseName:   Abuse Department
OrgAbusePhone:  +1-312-829-1111
OrgAbuseEmail:
OrgAbuseRef:    http://whois.arin.net/rest/poc/ABUSE1669-ARIN

OrgTechHandle: NETWO1779-ARIN
OrgTechName:   Network Operations
OrgTechPhone:  +1-312-829-1111
OrgTechEmail:
OrgTechRef:    http://whois.arin.net/rest/poc/NETWO1779-ARIN

OrgNOCHandle: NETWO1779-ARIN
OrgNOCName:   Network Operations
OrgNOCPhone:  +1-312-829-1111
OrgNOCEmail:
OrgNOCRef:    http://whois.arin.net/rest/poc/NETWO1779-ARIN

RAbuseHandle: ABUSE1669-ARIN
RAbuseName:   Abuse Department
RAbusePhone:  +1-312-829-1111
RAbuseEmail:
RAbuseRef:    http://whois.arin.net/rest/poc/ABUSE1669-ARIN

RNOCHandle: NETWO1779-ARIN
RNOCName:   Network Operations
RNOCPhone:  +1-312-829-1111
RNOCEmail:
RNOCRef:    http://whois.arin.net/rest/poc/NETWO1779-ARIN

RTechHandle: NETWO1779-ARIN
RTechName:   Network Operations
RTechPhone:  +1-312-829-1111
RTechEmail:
RTechRef:    http://whois.arin.net/rest/poc/NETWO1779-ARIN

NetRange:       205.234.159.0 - 205.234.159.255
CIDR:           205.234.159.0/24
OriginAS:       AS36352
NetName:        SCNET-205-234-159-0-24
NetHandle:      NET-205-234-159-0-1
Parent:         NET-205-234-128-0-1
NetType:        Reallocated
RegDate:        2010-06-09
Updated:        2010-06-09
Ref:            http://whois.arin.net/rest/net/NET-205-234-159-0-1

OrgName:        ColoCrossing
OrgId:          VGS-9
Address:        8469 Sheridan Drive
Address:        ATTN: ARIN
City:           Williamsville
StateProv:      NY
PostalCode:     14221
Country:        US
RegDate:        2005-06-20
Updated:        2012-01-10
Ref:            http://whois.arin.net/rest/org/VGS-9

OrgAbuseHandle: ABUSE3246-ARIN
OrgAbuseName:   Abuse
OrgAbusePhone:  +1-800-518-9716
OrgAbuseEmail:
OrgAbuseRef:    http://whois.arin.net/rest/poc/ABUSE3246-ARIN

OrgNOCHandle: VIALA-ARIN
OrgNOCName:   Vial, Alex
OrgNOCPhone:  +1-716-335-9628
OrgNOCEmail:
OrgNOCRef:    http://whois.arin.net/rest/poc/VIALA-ARIN

OrgTechHandle: NETWO882-ARIN
OrgTechName:   Network Operations
OrgTechPhone:  +1-800-518-9716
OrgTechEmail:
OrgTechRef:    http://whois.arin.net/rest/poc/NETWO882-ARIN

NetRange:       205.234.159.56 - 205.234.159.63
CIDR:           205.234.159.56/29
OriginAS:       AS36352
NetName:        CC-205-234-159-56-29
NetHandle:      NET-205-234-159-56-1
Parent:         NET-205-234-159-0-1
NetType:        Reallocated
RegDate:        2014-01-28
Updated:        2014-01-28
Ref:            http://whois.arin.net/rest/net/NET-205-234-159-56-1

OrgName:        Hudson Valley Host
OrgId:          HVH-9
Address:        610 Route 28
City:           Kingston
StateProv:      NY
PostalCode:     12401
Country:        US
RegDate:        2012-11-07
Updated:        2012-11-28
Ref:            http://whois.arin.net/rest/org/HVH-9

OrgAbuseHandle: HVHAT-ARIN
OrgAbuseName:   Hudson Valley Host Abuse Team
OrgAbusePhone:  +1-800-497-5377
OrgAbuseEmail:
OrgAbuseRef:    http://whois.arin.net/rest/poc/HVHAT-ARIN

OrgTechHandle: HVHA-ARIN
OrgTechName:   Hudson Valley Host Admin
OrgTechPhone:  +1-800-497-5377
OrgTechEmail:
OrgTechRef:    http://whois.arin.net/rest/poc/HVHA-ARIN
```

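The reallocation chain above (Server Central to ColoCrossing to Hudson Valley Host) comes from plain ARIN whois lookups. As a small sketch, the telling fields can be pulled out of that output like this, with a here-doc standing in for a live `whois -h whois.arin.net 205.234.159.58` query:

```shell
# Extract the fields that identify who an IP block was reallocated to.
# The here-doc is an abbreviated copy of the real ARIN output above.
whois_out=$(cat <<'EOF'
NetRange:       205.234.159.56 - 205.234.159.63
OriginAS:       AS36352
OrgName:        Hudson Valley Host
EOF
)
echo "$whois_out" | awk -F': *' '/^(NetRange|OriginAS|OrgName):/ { print $1 "=" $2 }'
```

The OriginAS (AS36352, ColoCrossing) is what gives the game away even when the OrgName is a reseller.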

----------



## GVH-Jon (Feb 4, 2014)

Kadar,

We are using ColoCrossing's Dupont Fabros facility for Shared & Reseller hosting in Chicago, Illinois, however we are using Genesis Adaptive (http://genesisadapative.com) in the Lakeside Technology Center for VPS nodes.


----------



## Kadar (Feb 4, 2014)

Link doesn't work, try again.


----------



## drmike (Feb 4, 2014)

Correct link:

https://portal.genesisadaptive.com/


----------



## Francisco (Feb 4, 2014)

Virtovo said:


> ServerHub
> 
> Inbox03:28
> 
> ...


Did they give a reason for it?

I'm not really surprised that they favor L3 over Cogent for so much of their transit. They'll have to set up weights to try to balance things more.

Francisco


----------



## Francisco (Feb 4, 2014)

drmike said:


> Correct link:
> 
> https://portal.genesisadaptive.com/


Speaking as someone in the exact same shoes, the owner has to work on finishing his site. His 'dedicated servers' tab was full of stub data, and if I hadn't clicked the 'from $xxx' link I would have assumed it was just an anchor tag.

I'm not sure why Jon would order from such a site, unless one of his advisers recommended them?

Their prices are high, but Chicago isn't normally a cheap DC area.

Francisco


----------



## Virtovo (Feb 4, 2014)

Francisco said:


> Did they give reasoning for it?
> 
> 
> I'm not really surprised that things will favor L3 over Cogent for so much of their transit. They'll have to setup
> ...



*LOCATION: Dallas, TX*

*START TIME: 6:35PM EST (11:35PM GMT)*

*END TIME: 9:35PM EST (2:30AM GMT)*

*LENGTH OF MAINTENANCE WINDOW: 3 hours*

We are in the process of performing an emergency maintenance for Dallas to resolve an issue that is currently impacting connectivity. This maintenance may cause some minor interruption during this window.

*LOCATION: Dallas, TX*

*START TIME: 12:10AM EST (5:10AM GMT)*

*END TIME: 12:20AM EST (5:20AM GMT)*

*LENGTH OF MAINTENANCE WINDOW: 20 minutes*

This maintenance is to reboot the backbone in order to re-implement IPv6. We appreciate your understanding.

*LOCATION: Dallas, TX*

*START TIME: 6:00PM EST (11:00PM GMT)*

*END TIME: 8:00PM EST (1:00AM GMT)*

*LENGTH OF MAINTENANCE WINDOW: 2 hours*

We are in the process of performing an emergency maintenance for Dallas to resolve an issue that is currently impacting connectivity. This maintenance may cause intermittent interruption during this window. IPv6 is now fully implemented and once this emergency maintenance window is complete, all will be back to normal.

We appreciate your understanding.

*LOCATION: Dallas, TX*

*START TIME: 6:40PM EST (11:40PM GMT)*

*END TIME: 9:00PM EST (2:00AM GMT)*

*LENGTH OF MAINTENANCE WINDOW: ~2 hours*

We are in the process of performing an emergency maintenance for Dallas to resolve an issue that is currently impacting connectivity. This maintenance may cause some minor interruption during this window.

We appreciate your understanding.

*LOCATION: Dallas, TX*

*START TIME: 10:00PM EST (3:00AM GMT)*

*END TIME: 2:00AM EST (7:00AM GMT)*

*LENGTH OF MAINTENANCE WINDOW: 4 hours*

This is an extension of the previous network maintenance. We are currently receiving a lot of network congestion in Dallas due to a Level 3 peer being unreachable and not responding. The Dallas core network is up but due to the network congestion, there is heavy packet loss. With the outage being on the Level 3 network, we are working closely with L3 engineers to make sure service is restored as quickly as possible. We'll also keep a Network Issue open and updated as much as possible at https://my.serverhub.com/networkissues.php - We sincerely apologize for the inconvenience and highly appreciate your understanding.


----------



## D. Strout (Feb 4, 2014)

In some ways, I feel like Dallas is a second-class location for ServerHub. They don't have test files for it on their VPSB ad landing page, and they mention PhoenixNAP more. In another way, though, it seems very important to them, as they have a separate ASN for the Dallas location. That seems to be part of the problem. The transition to the separate ASN occurred over the past several months, and it was hardly smooth: IPv6 was down for about two months while they moved things over, I observed _painfully_ slow network performance on occasion, and, as has been noted, there have been many "maintenance periods". I am glad my VPS with ServerHub is not a production box - I would have been in trouble. According to the network issue page linked on the previous page, things should be back to normal. I'm not in a position to test either my ServerHub box or my GVH box right now, but I'll post back when things are resolved. Still trying to figure out what ServerHub/Eonix are up to in Dallas.


----------



## GVH-Jon (Feb 4, 2014)

Francisco said:


> Speaking as someone in the exact same shoes, the owner has to work on finishing his site. His 'dedicated servers' tab was full of stub data, and if I hadn't clicked the 'from $xxx' link I would have assumed it was just an anchor tag.
> ...


They are a local company in the Chicago area, and Lance met them in person. We're paying them literally in the ballpark of 3x what we would pay CC for the same exact configuration, but to be honest we think it's totally worth it for the quality they're providing.


----------



## DomainBop (Feb 4, 2014)

GVH-Jon said:


> They are a local company in the Chicago area, and Lance met them in person. We're paying them literally in the ballpark of 3x what we would pay CC for the same exact configuration, but to be honest we think it's totally worth it for the quality they're providing.


ColoCrossing actually dumped 350 East Cermak in favor of Dupont Fabros in 2009.



> Recognizing increased demand, ColoCrossing chose to re-locate its Chicago based datacenter operations in early 2009 from 350 East Cermak (downtown), to Dupont Fabros in Elk Grove Village (8 miles from Chicago O'Hare airport). Since that time, ColoCrossing has been able to provide timely and effective service deployments ranging from dedicated servers, to super high density colocation.



As far as getting the exact same server configuration goes, you should seriously think about moving to E5s/X56xx (with 128GB+ RAM) if you are going to continue to offer high-RAM plans. The move would probably decrease the number of complaints that have been posted on various forums about problems with your service. As an end user, I will not even consider buying a high-RAM plan on a 32GB E3 node.


----------



## D. Strout (Feb 5, 2014)

Level3 is indeed working and network speeds are where I expect them on my GVH VPS. My ServerHub VPS also has Level3 online, but network speeds are still very slow - 3MB/s or less. Both are having some routing issues: IPv4 traceroutes to Google go through Cogent rather than Level3, and instead of hitting Google's Dallas DC, they route up to Chicago - a 26ms trip that should be only 8ms. IPv6 traceroutes go through Level3 to Dallas as expected. No idea why. I have a ticket open at ServerHub about the routing issues and network speeds.


----------



## Nett (Feb 11, 2014)

Just figured out that GVH has a name similar to OVH. lol


----------

