amuck-landowner

Internap public cloud - I'm as mad as Gordon Ramsay

GIANT_CRAB

New Member
Alright, this is my first review on hosting services.

Please forgive me if insufficient details are provided and if I raged too much.

Benchmark: http://serverbear.com/benchmark/2013/07/01/xryRHv8YGsEuRtLE

I first met Mano from the sales staff; he's a pretty cool guy working at Internap.

I talked to him and enjoyed the chat a lot, which gave me a pretty good impression of Internap.

I used it to host a Team Fortress 2 server. (inb4 tf2 is for casuals)

All the players on my server complained about lag and inconsistent ping.

I suspected it was because of the I/O since Internap had a good reputation for their network.

Later on, I benchmarked their public cloud and it was horrible.

Their "superior" network had terrible overseas connectivity from Singapore. (I got around 400 kb/s to Sydney.)

Their I/O was inconsistent (the benchmark only caught the inconsistency once, but I could feel it on the server itself).

Copying 4GB of data took around 10 to 15 minutes. It's really horrible; are they using a SATA setup with RAID 1 on their SAN?
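For scale, those numbers work out to single-digit MB/s, which even one consumer SATA disk should beat. A quick back-of-the-envelope check (the 4GB and 10-15 minutes are from my copy; treating 4GB as 4 GiB is my assumption):

```python
# Back-of-the-envelope throughput for the 4GB copy described above.
GIB = 1024 ** 3  # bytes per GiB

def throughput_mb_s(total_bytes: float, seconds: float) -> float:
    """Average throughput in MB/s (1 MB = 10**6 bytes)."""
    return total_bytes / seconds / 1e6

best = throughput_mb_s(4 * GIB, 10 * 60)   # 10-minute copy
worst = throughput_mb_s(4 * GIB, 15 * 60)  # 15-minute copy
print(f"{worst:.1f} to {best:.1f} MB/s sustained")
```

A single 7200rpm SATA drive typically sustains well over 50 MB/s sequentially, so numbers this low point at the SAN or its interconnect rather than the disks themselves.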

I contacted the support team; they were slow to respond (24 to 48 hours, top lel for their 24/7 sleeping support team) and only kept denying the facts.

They told me "Benchmark numbers will be close to meaningless when it comes to real world applications."

Benchmark numbers cannot be fully relied on, BUT what the hell do you mean by "close to meaningless"?

It's so incompetent.

I requested an escalation to the manager.

I told him that, on the network side, I had an unstable ping with an 8 ms difference within 3 pings; how is that not an issue?
On top of that, I ran MTR over 300 times and got a high standard deviation of 12.
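To put that standard deviation in context, here's a minimal sketch of how RTT jitter can be summarized; the sample values are invented for illustration, not taken from my actual MTR run:

```python
import statistics

# Hypothetical RTT samples in ms for a nearby hop (illustrative only).
rtts = [2.1, 2.3, 14.8, 2.2, 29.5, 2.4, 2.2, 18.9, 2.3, 2.1]

mean = statistics.mean(rtts)
sd = statistics.stdev(rtts)  # sample standard deviation
print(f"mean={mean:.1f} ms, sd={sd:.1f} ms")
# When the SD is larger than the mean itself, latency is wildly
# inconsistent -- exactly what players feel as rubber-banding.
```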

They told me "Apologies, but there is nothing we can do about your concern right now, and we didn't see any packet loss based on your submitted trace route report.
We understand your sensitivity on the latency, but perhaps this might not be a suitable product for you, as much as we want to keep your business. Also, we're not insisting that our network is superior; it's just that there is nothing for us to troubleshoot at this point."

What the hell? You want packet loss on a local network? Having an SD of over 12 on a local network is already insane, and you want packet loss on top of that?

Later, I further escalated this to NOC.

NOC replied "Our NOC advises that there are no problems in the traceroute. The RTTs to intermediate hops should be ignored, as routers on the Internet typically deprioritize ICMP packets directed at them. Since the latency to the final hop is fine, there is no problem."

Yeah yeah, but that doesn't explain the bloody slow transfers (which go over TCP).


Oh gawd, I'm as mad as Gordon Ramsay.

Why the hell did I even approach Internap for their public cloud?

Their cloud is not cheap, and the network SHOULD be good.

Instead, I get these kinds of lousy excuses from Internap. Totally ruined.

A bloody LEB VPS with an SSD would outrun their entire cloud.

Btw, if you look carefully at the benchmark, they are actually using AMD BULLDOZER CPUs for their cloud. Really smart, eh?

Internap is the top troll provider.

Please advise me on what to do next; I'm really pissed off. (If any providers are available in Singapore, feel free to approach me with an offer.)
 

drmike

100% Tier-1 Gogent
Copying 4GB of data took around 10 to 15 minutes. It's really horrible; are they using a SATA setup with RAID 1 on their SAN?

Well, ask them how their servers connect to that SAN.  I'm guessing it is nothing fancy, maybe a few bonded gigabit NICs.

That's the downside of SANs: "slow" interconnects for a "near local" storage solution. Yeah, you can get faster, but the gear costs real cash, and that cost gets multiplied in the VPS server-like model.

Heck, I stopped bothering with storage that's not in the same server years ago. Too expensive, exotic, and, well, slow. Surely it has become faster :)

Mind you, local storage vs. a SAN is comparing two different monsters; it's an apples vs. oranges comparison. The SAN is typically far superior for redundancy, drive swapping, backups, etc.

All the other points though, like a crap network and day-long ticket responses, well, I don't think they are prioritizing this product offering like their others. Shameful.
 

kaniini

Beware the bunny-rabbit!
Verified Provider
IIRC, Internap only provides guaranteed network access inside US and Europe.

Has this changed with their acquisition of Voxel?
 

GIANT_CRAB

New Member
IIRC, Internap only provides guaranteed network access inside US and Europe.

Has this changed with their acquisition of Voxel?
I'm in Asia and their network mix is really bad.

Btw, I requested cancellation and they told me they'll give me a pro-rated refund.

I'm still mad that Internap uses Bulldozers for their cloud; it just proves they don't CARE.
 

andrewboring

New Member
Hey giant_crab,

I'm really sorry to hear that you had a bad experience. I work for Internap and just wanted to jump in on this thread and provide a little info. Buffalooed raises a good point: we do use SAN-backed volume storage for our public cloud offering. Most VPS services (which are fundamentally different from true cloud services) use a local disk chassis, so you get different disk I/O performance. You can also get different disk I/O between different cloud providers, depending on how they are implemented. But our cloud isn't designed to work exactly like a VPS, though you can buy monthly virtual machines. Our cloud instances are really geared more toward compute power and horizontally scaled workloads, because the data is assumed not to reside on the local compute instance.

That configuration will affect your UnixBench results (as you already noticed), which benchmark the "whole system": memory speed, CPU cycles, L2/L3 cache, disk I/O, etc. I don't personally play a lot of FPS games, but from what I've read, TF2 does require higher disk I/O, especially if you're using a lot of custom maps or the replay features (recording video to disk can require a lot of disk throughput!). So our cloud offering is probably not a good fit for that kind of game.

For playing TF2, here's an alternative option: rather than paying a monthly fee for a virtual machine that doesn't give you the disk I/O you need, you might want to try the on-demand physical servers. You can log in to the portal and spin up a dedicated server (E3-1230, 8GB RAM) for $0.39/hr (with SATA disks) or $0.65/hr (2x SSDs). Then install TF2, play for a few hours, and terminate it when you're done. If you play for 8 hours on the SSD system, you've spent about $5 ($0.65/hr * 8 hours). Good for short-term game play and tournaments, though if you leave it on 24x7x365 it will cost you as much as a dedicated server.
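Andrew's hourly math can be sanity-checked in a couple of lines; the $0.39/hr and $0.65/hr rates are from his post, while the 730-hour month is my assumption:

```python
SSD_RATE = 0.65        # $/hr for the 2x SSD config (from the post)
SATA_RATE = 0.39       # $/hr for the SATA config (from the post)
HOURS_PER_MONTH = 730  # assumed average month (8760 h / 12)

session = 8 * SSD_RATE                   # one 8-hour play session
full_month = HOURS_PER_MONTH * SSD_RATE  # left running 24/7
print(f"8h session: ${session:.2f}; full month: ${full_month:.2f}")
```

So ad-hoc sessions cost a few dollars, but leaving the SSD box running a full month lands near $475, dedicated-server territory, which matches Andrew's caveat.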

I know you mentioned a lot of issues with network and support. I don't have any answers for you at the moment, but I can let management know about your post here. Let me know if you have any questions. I may not have all the answers, but I'll try to clarify any issues and hopefully even resolve a few for you.

-Andrew Boring, Internap
 

drmike

100% Tier-1 Gogent
Thanks to @andrewboring!   

It warms my heart to see companies providing support and details out in public like this.

Lots of folks just misunderstand the cloud concept and make wrong comparisons.

Care to comment on the SAN you have implemented and general slowness?   Bonded Gbit I assume.
 

GIANT_CRAB

New Member
Let me know if you have any questions. I may not have all the answers, but I'll try to clarify any issues and hopefully even resolve a few for you.

Hi Andrew, 

Thanks for the reply.

Internap has a mix of good and bad staff. (Probably because it's a big company.)

Some of them are really incompetent and get me all worked up, while some are nice, like you.

Just one question, why is the cloud running on AMD bulldozers?

I mean, really, why would Internap do this?

AMD Bulldozers are clearly not suitable for cloud setups. (But AMD claims it's suitable, wtf.)

Do update me when you get a proper reply.
 

andrewboring

New Member
Thanks to @andrewboring!   

It warms my heart to see companies providing support and details out in public like this.

Lots of folks just misunderstand the cloud concept and make wrong comparisons.

Care to comment on the SAN you have implemented and general slowness?   Bonded Gbit I assume.
Thanks, buffalooed. I once worked for a small hosting company that rebranded a fancy VPS system as "cloud". I feel indirectly responsible for that, so I now try to make up for it by educating others on what "cloud" really is :)

For the SAN itself, it's actually 10Gbit. But given that it is iSCSI, there's some TCP overhead compared to Fibre Channel. It's also serving multiple VMs as root volume storage, so even with some QoS guarantees, the performance characteristics are a little different from, say, a Fibre Channel SAN supporting a small cluster of nodes.

For certain use cases, it gives reasonable performance. For example, I run a Redmine application for my projects on a small cloud instance. MySQL runs fine for the small amount of database transactions it handles. If I needed to make it available company-wide, or even if it just had a lot more activity from the few users using it today, then I would need to start implementing some good caching mechanisms for the higher number of read requests, or just move the database off to a physical server to handle an increase in database writes. Then Apache/Rails would remain on one or more cloud instances to service users directly, since the disk accesses wouldn't be as critical.
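The read-caching idea Andrew mentions can be sketched generically; this is a toy illustration with hypothetical names, not Redmine's or Internap's actual code:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def load_issue(issue_id: int) -> tuple:
    """Stand-in for an expensive database read over the SAN."""
    # In a real app this would be a MySQL query; here it's simulated.
    return (issue_id, f"Issue #{issue_id}")

load_issue(42)  # first call: goes to the "database"
load_issue(42)  # second call: served from memory
print(load_issue.cache_info().hits)  # one cache hit
```

Whether this helps depends on the read/write mix; write-heavy workloads still need the faster backing store, which is why moving the database to a physical server is the other half of the suggestion.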
 

andrewboring

New Member
Hi Andrew, 

Just one question, why is the cloud running on AMD bulldozers?

I mean, really, why would Internap do this?

AMD Bulldozers are clearly not suitable for cloud setups. (But AMD claims it's suitable, wtf.)

Do update me when you get a proper reply.
A few reasons that I'm aware of:

1. The blade system we purchased used AMD. I don't know whether there was an Intel option at the time.

2. The CPU density was pretty awesome (1,024 cores in 8U of rack space is great for scale).

3. The Piledriver core was released after we purchased our gear.

There may have been other factors as well.

Suitable for cloud? For certain workloads. We had one customer who tested the CPU performance for a mobile advertising delivery platform, and it worked beautifully for them. It's probably better for web applications that scale horizontally, rather than a game server that's more reliant on clock speed or compiled to take advantage of Intel-specific features. My spot reading on TF2 servers seems to suggest that TF2 needs more clock speed, so Sandy Bridge and later are probably the better option there.
 