# Colocrossing's Core "Router" is a Switch



## drmike

Some fun info about Colocrossing for today.

The reason Colocrossing can't offer IPv6 and has delayed setting up BGP sessions
(see Crissic's LA departure thread) is that they don't actually have a large
core router.

Instead, they use switches similar to the photo below as their "edge network". These
switches can bring up a BGP session, but they can't hold "an internet's worth of routes".
They top out at around ~10,000 total routes.





So how does Colocrossing get around this? They use multipath across all of their peers.
Multipath round-robins packets down each peer with equal weight. Ever
wonder why you get really ugly traceroutes like this?










```
3 host.colocrossing.com (192.3.94.137) 0.652 ms host.colocrossing.com (192.3.94.133) 0.708 ms host.colocrossing.com (192.3.94.137) 0.826 ms
4 207.86.157.13 (207.86.157.13) 0.288 ms 0.287 ms buf-b1-link.telia.net (213.248.96.41) 0.326 ms
5 nyk-bb1-link.telia.net (80.91.246.37) 9.606 ms 9.604 ms 9.545 ms
6 tinet.yyz02.atlas.cogentco.com (154.54.13.74) 12.793 ms 12.846 ms nyk-b3-link.telia.net (80.91.245.80) 9.808 ms
7 tmobile-ic-302276-war-b1.c.telia.net (213.248.83.118) 27.436 ms xe-4-3-0.atl11.ip4.tinet.net (141.136.108.134) 49.765 ms xe-8-0-0.atl11.ip4.tinet.net (141.136.108.142) 49.862 ms
8 eth2-1.r1.ash1.us.atrato.net (78.152.34.117) 21.874 ms total-server-solutions-gw.ip4.tinet.net (173.241.130.54) 49.968 ms eth2-1.r1.ash1.us.atrato.net (78.152.34.117) 23.592 ms
9 eth3-1.r1.atl1.us.atrato.net (78.152.34.181) 46.245 ms total-server-solutions-gw.ip4.tinet.net (173.241.130.54) 61.983 ms
```


Notice how you have Cogent & Telia on the same hop, as well as Atrato once you start getting to [email protected]? That's because each probe of the traceroute goes out through a different peer in round-robin fashion.

If they were running full routing tables, they would have a single path through a single provider to any given
destination (since that ISP/path would have been picked as the best, assuming they aren't doing cost-based balancing).
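To make the contrast concrete, here's a toy Python sketch of the two behaviours. Everything in it is an illustrative assumption (the peer names, the AS paths, and a strict round robin), not CC's actual configuration:

```python
from itertools import cycle

# Illustrative upstream set; not CC's actual peer list.
PEERS = ["telia", "cogent", "tinet", "atrato"]

def multipath_hops(n_probes, peers=PEERS):
    """Per-packet round robin: each traceroute probe may exit via a different peer."""
    rr = cycle(peers)
    return [next(rr) for _ in range(n_probes)]

def best_path_hops(n_probes, routes):
    """Full table: one best route (shortest AS path here) wins for every probe."""
    best = min(routes, key=lambda r: len(r["as_path"]))
    return [best["peer"]] * n_probes

# Three probes of one traceroute hop, three different upstreams:
print(multipath_hops(3))          # ['telia', 'cogent', 'tinet']

# With a full table, the same three probes all take the single shortest path:
routes = [
    {"peer": "telia",  "as_path": [1299, 46562]},
    {"peer": "cogent", "as_path": [174, 3356, 46562]},
]
print(best_path_hops(3, routes))  # ['telia', 'telia', 'telia']
```

That per-probe alternation is exactly what makes hops show several different carriers' routers on the same line.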

It's pretty sad, and this setup actually causes random speed issues for some users, especially Comcast customers. You'll have good speed on one transfer, then garbage the next.

I'm not sure where they "_invested_" $1,000,000 into their network, but it's obviously a lie.


----------



## Tux

The daycare center in Buffalo has an awful setup... it's over a bonded DSL line.


----------



## zzrok

I don't understand.  If what you say is true, why is hop 5 always the same?  It looks to me like telia is to blame, but I'm not a networking expert.


----------



## Francisco

zzrok said:


> I don't understand.  If what you say is true, why is hop 5 always the same?  It looks to me like telia is to blame, but I'm not a networking expert.


It isn't.

```
3 host.colocrossing.com (192.3.94.137) 1.743 ms host.colocrossing.com (192.3.94.133) 1.162 ms . (172.245.12.225) 1.756 ms
4 buf-b1-link.telia.net (213.248.96.41) 0.422 ms buff-b1-link.telia.net (62.115.34.137) 0.419 ms te7-4.ccr01.buf02.atlas.cogentco.com (38.122.36.45) 10.177 ms
5 te8-8.ccr02.cle04.atlas.cogentco.com (154.54.31.237) 23.280 ms nyk-bb1-link.telia.net (80.91.246.37) 9.988 ms te8-8.ccr02.cle04.atlas.cogentco.com (154.54.31.237) 23.260 ms
```

There's nothing wrong with multipath, and it was going to be what we did in LV, but we have a much smaller deployment than what makes up all of CC's Buffalo setup.
I'd really hope they had full tables but I dunno...

Francisco


----------



## TheLinuxBug

Usually I am on board with your threads, however, what you have said here is a bit confusing and I am not sure if you understand correctly.  If they are indeed doing round robin, your first hop outside their network is what would change, not routes throughout a traceroute.  When it picks a path you would see the route directly after their core "router" would change on a per provider basis.  What you are showing above just looks like poor routing on Telia's behalf.

Please correct me if I am wrong.

Cheers!


----------



## Francisco

TheLinuxBug said:


> Usually I am on board with your threads, however, what you have said here is a bit confusing and I am not sure if you understand correctly.  If they are indeed doing round robin, your first hop outside their network is what would change, not routes throughout a traceroute.  When it picks a path you would see the route directly after their core "router" would change on a per provider basis.  What you are showing above just looks like poor routing on Telia's behalf.
> 
> Please correct me if I am wrong.
> 
> Cheers!


Nope.

If they had a full table in place, formed from all of their upstreams, then you'd never see the same hop hit different upstreams like we do here.

With multipath you're actually taking multiple default routes, and then the switching/routing platform round-robins between them.

Sometimes you'll see all the packets go out the same path, but that's just luck. Remember, CC is pushing a lot of transit and a ton of PPS, so the hashing/round-robining is constantly hammering away.

Francisco


----------



## zzrok

I still don't understand how, in the first post, hop 5 is the same each time, but it is different after that point.  Isn't it up to telia which route is taken after hop 5?  How can CC determine (influence) the route of the packet beyond the first router?


----------



## Francisco

zzrok said:


> I still don't understand how, in the first post, hop 5 is the same each time, but it is different after that point.  Isn't it up to telia which route is taken after hop 5?  How can CC determine the route of the packet beyond the first router?


In a normal BGP setup? Yes. In multipath? Maybe.

Normally multipath is used as an alternative to bonding interfaces. You have 40gbit from a single upstream? You'd set up 4 different BGP sessions with them, and packets would multipath across the links, giving you your 40gbit/sec of upstream without the cost of running 100gbit connections and crap like that.

When you multipath between different providers, though, you end up getting really funny routes since packets will get switched between all members.

Notice my later trace, which shows hop 5 hitting 2 - 3 different providers. The original trace, with hop 5 always hitting Telia, was just luck of the round-robin draw.

Again, I can't confirm what they're using; all I know is I got a BGP session with them for our ASN.

Francisco


----------



## weservit

Actually, when you use multiple Tier 1 carriers, most paths will have the same length. Example:

When you do a BGP lookup for the IP 194.25.0.125 (an IP from Deutsche Telekom) at all Tier 1 carriers, you will get a direct path to AS3320. So when you use multiple Tier 1 carriers like they do (Cogent, Telia), you will get 2 direct routes to AS3320. Without multipath BGP it's very hard to balance traffic across your uplinks without traffic engineering, so you don't make optimal use of your transit capacity.

Multipath BGP doesn't mean that you always get balanced routes: when the AS path is shorter at carrier1 than at carrier2, traffic will go over carrier1. But when the path has the same AS length at carrier1 and carrier2, multipath BGP will balance it between these carriers.
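That selection rule can be sketched in a few lines. This is a deliberate simplification that only compares AS-path length, ignoring the rest of BGP best-path selection (local-pref, MED, etc.); the "direct" entry is an invented example:

```python
def multipath_candidates(routes):
    """Install every route tied on the shortest AS-path length as a multipath group."""
    shortest = min(len(r["as_path"]) for r in routes)
    return [r["peer"] for r in routes if len(r["as_path"]) == shortest]

# Equal lengths, as in the two lookups above: both carriers get installed.
equal = [
    {"peer": "telia",  "as_path": [5580, 46562, 40426]},
    {"peer": "cogent", "as_path": [3257, 46562, 40426]},
]
print(multipath_candidates(equal))   # ['telia', 'cogent']

# One shorter path and the group collapses to a single winner.
uneven = equal + [{"peer": "direct", "as_path": [40426]}]
print(multipath_candidates(uneven))  # ['direct']
```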


----------



## Francisco

True, multipath is a sweet feature.

But I can't see 4 different ISPs all having the same length route to [email protected]

Francisco


----------



## weservit

Francisco said:


> True, multipath is a sweet feature
> 
> 
> But I can't see 4 different ISP's all having the same length route to [email protected]
> 
> 
> Francisco


I did some BGP lookups to [email protected]

Telia:

AS path: 5580 46562 40426 

Cogent:

AS path: 3257 46562 40426

As you can see, the AS length is the same; if one of these had a shorter AS path, you probably wouldn't have a multipath route.

Another advantage of multipath BGP is that you won't have a 100% outage to a specific route when one of your carriers fails. Without multipath BGP, as soon as one carrier fails, BGP has to relearn the routes to find another path, so you will experience a complete loss for a short time, depending on how fast your router relearns the routes.
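A minimal sketch of that failover difference (assumed ECMP-style behaviour with invented carrier names, not any particular vendor's implementation): with multipath, a carrier failure just shrinks the installed group, while a single installed path blackholes until BGP reconverges on an alternative.

```python
def surviving_nexthops(group, failed):
    """Drop a failed carrier from an installed ECMP/multipath next-hop group."""
    return [p for p in group if p != failed]

# Multipath group: losing one carrier still leaves a working next hop.
group = ["telia", "cogent"]
print(surviving_nexthops(group, "cogent"))  # ['telia'] (no full outage)

# Single installed path: until reconvergence, there is nowhere to forward.
single = ["telia"]
print(surviving_nexthops(single, "telia"))  # [] (blackhole until BGP relearns)
```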


----------



## MannDude

So... pardon my ignorance on this one, how does this negatively impact the customer? Does it? Or is this just proof that their 'million dollar network upgrade' didn't happen or is more hot air from CC?



Jon Biloh said:


> I'm not happy that we're not offering IPV6 yet, but it's not as if there is no reason behind it. About six months ago we completed a *million dollar upgrade* to our switch infrastructure installing all new TOR devices and connected each back to our distribution layer by between 20 or 40 gbit. Those devices, Brocade ICX6450, are waiting for a software update to support OSPFv3 (for ipv6). Once that is released, which is currently a few months late, we'll be pushing forward for IPV6. People ask us why not just deliver IPv6 to colocation customers now (which we could, because we use Junipers at the distribution layer) the thing is that wouldn't be fair to our 60% of dedicated customers.


Src: http://lowendtalk.com/discussion/comment/316739/#Comment_316739


----------



## concerto49

The Junipers do support IPv6, though? I don't think that part is the problem. I mean, they could always take a default route from even one carrier and get IPv6 up, IF that was the problem.


----------



## Francisco

MannDude said:


> So... pardon my ignorance on this one, how does this negatively impact the customer? Does it? Or is this just proof that their 'million dollar network upgrade' didn't happen or is more hot air from CC?
> 
> Src: http://lowendtalk.com/discussion/comment/316739/#Comment_316739


It just leads to wonky routes, and possibly wonky speeds if you're on an ISP that's too cheap to pay for ports or unwilling to work out a peering deal (Comcast/Verizon/Rogers/Bell, pretty much).

2 different traces to google.com show hops across all 4 upstreams, with latency at the final destination ranging from 30ms to 63ms.

It's possible BUF is still due a big-ass upgrade that's delayed because of how much of a headache it'll be. I'm pretty sure CC finished BUF up at the start of the year, though.

Do they use their own upstream contracts in any other locations? Maybe CHI? I figure the others are just defaults from Coloat/Quadranet.

Francisco


----------



## drmike

I'm sure glad I'm staying quiet on this one; it's beyond my knowledge base. I don't monkey around at that level of the network.

I've seen the issues in CC's network for eons and posted ample examples of the wonkiness. Never could quite figure out what the gremlin in the machine was.

Thanks to everyone contributing the technical details.


----------



## Tux

Francisco said:


> Do they use their own upstream contracts in any other locations? Maybe CHI? I figure the others are just defaults from coloat/quadranet?


I think this is only the case in Atlanta and perhaps San Jose. Everywhere else (excluding Buffalo) gets the DC blend.


----------



## Francisco

Tux said:


> I think this is only the case in Atlanta and perhaps San Jose. Everywhere else (excluding Buffalo) gets the DC blend.


Yep, I remember Jon mentioning having a lot of nLayer in SJ.

Coloat's in-house blend is nice, so it wouldn't be a bad thing to use.

Francisco


----------



## MannDude

Francisco said:


> Yep I remember Jon mentioning having a lot of nlayer in SJ.
> 
> Coloat's inhouse blend is nice so it wouldn't be a bad thing to use.
> 
> Francisco


Which of Colocrossing's locations are actually Colo@? I see Colo@ operates a lot of the locations CC does, but I'm unsure which of their locations Colocrossing rents from.

I only ask because I know in the past people have listed 'Quadranet' as their DC when they're using Colocrossing in LA, which I guess isn't technically untrue, just a bit misleading. I don't think I've seen someone say "Colo@" in place of CC, though. (Yet)


----------



## drmike

Colo@ in.... Atlanta, Dallas....  I think....


----------



## Francisco

Dallas is the odd one. From what was said, the bandwidth is with Quadranet, whereas the racks are directly with... colo4? colo4dallas? Something like that.

Francisco


----------



## drmike

Los Angeles is Quadranet, but wondering if it's directly with them or Colo@ in the middle...  Let me look....


----------



## drmike

HE's BGP tool shows Colo4 peering, so I assume CC uses Colo4's network in Dallas....

Don't see anything for Los Angeles though (company overlap/offering there)...

So I think it's Colo@ in Atlanta and Los Angeles.


----------



## nunim

MannDude said:


> So... pardon my ignorance on this one, how does this negatively impact the customer? Does it? Or is this just proof that their 'million dollar network upgrade' didn't happen or is more hot air from CC?
> 
> 
> Src: http://lowendtalk.com/discussion/comment/316739/#Comment_316739


They've been promising IPv6 is coming "in a month" since October 2011 at least, probably longer. This is the main reason I won't use any VPS provider in a CC location.






> I'm not happy that we're not offering IPV6 yet, but it's not as if there is no reason behind it. About six months ago we completed a million dollar upgrade to our switch infrastructure installing all new TOR devices and connected each back to our distribution layer by between 20 or 40 gbit. Those devices, Brocade ICX6450, are waiting for a software update to support OSPFv3 (for ipv6). Once that is released, which is currently a few months late, we'll be pushing forward for IPV6. People ask us why not just deliver IPv6 to colocation customers now (which we could, because we use Junipers at the distribution layer) the thing is that wouldn't be fair to our 60% of dedicated customers.



That's pretty silly. They've been saying they're technically capable of rolling out IPv6 for quite some time but wouldn't do it for this reason or that reason. If you have the capability and the demand, why wouldn't you at least satisfy the customers you can? Since when does CC care about what's fair?


----------



## MannDude

nunim said:


> They've been promising IPv6 is coming "in a month" since October 2011 at least, probably longer, this is the main reason I won't use any VPS provider in a CC location.


Well, what I don't understand then is this:



Jon Biloh said:


> People ask us why not just deliver IPv6 to colocation customers now (which we could, because we use Junipers at the distribution layer) the thing is that wouldn't be fair to our 60% of dedicated customers.


Source: http://lowendtalk.com/discussion/comment/316739/#Comment_316739

How is it possible that they could set up IPv6 for colo customers, but not for their clients who are renting servers?


----------



## Francisco

It's possible their panel isn't V6 ready.

I'm not sure why they'd spend so much time/money developing their new panel and not have V6 support listed.

That's such a Solus move to pull.

Francisco


----------



## qps

MannDude said:


> How is it possible that they could setup IPv6 for colo customers, but not their clients who are renting servers?


They said in that thread that the switches they use for their dedicated servers don't support OSPFv3, which is required for IPv6 (if you are going to use OSPF).  They're waiting for a software update to enable this feature.


----------



## Francisco

qps said:


> They said in that thread that the switches they use for their dedicated servers don't support OSPFv3, which is required for IPv6 (if you are going to use OSPF).  They're waiting for a software update to enable this feature.


Yep they mentioned the OSPF part as well.

I always figured they backhauled VLANs from their main router and bound off up there, instead of appending them to VLANs at the switch level? Or am I having a brain fart over what OSPF would be useful for?

Francisco


----------



## qps

Francisco said:


> Yep they mentioned the OSPF part as well.
> 
> 
> I always figured they backhauled VLAN's from their main router and bound off up there, instead of appending it to VLAN's at the switch level? Or am I having a brain fart over what OSPF would be useful for?
> 
> 
> Francisco


Not sure why they don't use iBGP instead.


----------



## Francisco

qps said:


> Not sure why they don't use iBGP instead.


It's possible they somehow have more than 10k routes internally?

It's also most likely because Brocade screws people pretty hard on licensing. OSPF is almost always supported in the base images, I think, whereas BGP always tacks a few grand onto the sticker. Same thing with Junipers.

Francisco


----------



## Kenshin

weservit said:


> I did some BGP lookups to [email protected]
> 
> Telia:
> 
> AS path: 5580 46562 40426
> 
> Cogent:
> 
> AS path: 3257 46562 40426
> 
> As you can see the AS length is the same, when one of these had a shorter AS path you probably wouldn't have a multipath route.
> 
> Another advantage of multipath BGP is that you won't have a 100% outage to a specific route when one of your carriers fails. As soon as one carrier fails BGP has to relearn the routes to find another path, when you don't use multipath BGP you will experience a complete loss for a short time depending on how fast your router will relearn the routes.


That is NOT what multipath is for. Multipath load-balances equal-path links, and by default only when the AS paths match exactly. As Francisco mentioned, this is typically for balancing multiple connections to the same upstream at the same router, to avoid bonding the ports. In CC's case, they must have intentionally disabled matching of the entire AS path to force traffic out two different carriers. Load balancing across multiple carriers is a bad idea: traceroutes become a mess, and because of the latency differences your transfer speeds will suffer drastically, since packets arrive at different timings. However, it is the fastest and easiest way to balance your outgoing traffic.
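That timing problem can be sketched in Python. Everything here is illustrative: the flow string, the per-carrier latencies, and the CRC-based hash are invented stand-ins for the 5-tuple hashing real hardware does per flow:

```python
import zlib
from itertools import cycle

# Assumed per-carrier one-way latencies in ms (invented numbers).
LATENCY_MS = {"telia": 10, "cogent": 25}

def per_packet(n, carriers=("telia", "cogent")):
    """Per-packet round robin: one flow's packets alternate carriers."""
    rr = cycle(carriers)
    return [(i, i + LATENCY_MS[next(rr)]) for i in range(n)]  # (send order, arrival)

def per_flow(flow, n, carriers=("telia", "cogent")):
    """Per-flow hashing: the whole flow is pinned to one carrier."""
    carrier = carriers[zlib.crc32(flow.encode()) % len(carriers)]
    return [(i, i + LATENCY_MS[carrier]) for i in range(n)]

def in_order(packets):
    """Do packets arrive in the order they were sent?"""
    arrivals = [t for _, t in sorted(packets)]
    return all(a <= b for a, b in zip(arrivals, arrivals[1:]))

flow = "198.51.100.7:443->203.0.113.9:51812"
print(in_order(per_packet(6)))      # False: later packets overtake earlier ones
print(in_order(per_flow(flow, 6)))  # True: one path, stable ordering
```

The reordering in the per-packet case is what makes TCP think it's losing segments and back off, which is where the drastic transfer-speed hit comes from.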



Francisco said:


> Yep they mentioned the OSPF part as well.
> 
> 
> I always figured they backhauled VLAN's from their main router and bound off up there, instead of appending it to VLAN's at the switch level? Or am I having a brain fart over what OSPF would be useful for?
> 
> 
> Francisco





qps said:


> Not sure why they don't use iBGP instead.


The reason is simple: if they are indeed using the Brocade ICX 6450, then it's not a mystery. That switch likely does their customer VLAN routing, and it doesn't support BGP, only OSPFv2 (IPv4) and RIPv2 (IPv4). It supports IPv6 in hardware, but static routes only. Based on this, OSPF is critical to their operation: they redistribute a default from their core routers to the Brocade, and VLAN routes from the Brocade back to their core routers. They can't enable IPv6 OSPF since the firmware doesn't support it (yet?), and they need two-way OSPF announcements if they want to route IPv6 properly. I cannot imagine them wanting to configure static routes all over the place just to get IPv6 running.

The use of the Brocade for customer-facing ports also explains why CC doesn't want to offer BGP sessions: the switch doesn't support BGP, and it's where they terminate customer VLANs and routing. To offer a BGP session, they'd have to plug you into a switch/router that has BGP, and assuming these switches are used as top-of-rack switches, I can't imagine them wanting to pull new cross connects just for you. Their Brocades are likely running pure L3 uplinks, to avoid doing L2 spanning tree for redundancy and to isolate VLAN numbers to the switch itself, so they can't just configure your port as a VLAN back to their core routers without making things overly complex.

There will probably be people complaining about why they'd use Brocade if it doesn't support IPv6/BGP, blah blah. Brocade (ex-Foundry) is well known for reliable, high-performance switches (L2 only) at cheap prices, in this case with basic layer 3 slapped on as a bonus. They are used extensively in L2 deployments, especially high port count setups, and the equipment itself is rock solid. They are not known for their L3 deployments, so it's not surprising these lack OSPFv3.

*Disclaimer* I do not work for CC. I just happen to have years of experience running Foundry/Brocade equipment so I'm familiar with the limitations of their platforms and can piece together the puzzle on a network level.


----------



## Francisco

Yep.

We have a SuperX that we use in LV for our switching now. It's garbage for layer 3, but for layer 2? No sweat.

We have most of our blades filled, and then a 10gig link from that to the router.

We never have issues with it, and it's a nice change from the TORs w/ LACP like we used to have.

Francisco


----------



## Kenshin

I used to have Foundry BigIrons as core routers terminating VLANs for L2 TOR Foundry FastIron switches. Great L2 performance; L3 completely iffy, especially at high pps. Now switched to Juniper EX for L3 VLAN routing and HP/Force10 for cheap gigabit L2, with a 10G upgrade if necessary in the future.


----------



## mpkossen

MannDude said:


> So... pardon my ignorance on this one, how does this negatively impact the customer? Does it? Or is this just proof that their 'million dollar network upgrade' didn't happen or is more hot air from CC?
> 
> 
> Src: http://lowendtalk.com/discussion/comment/316739/#Comment_316739


They run OSPF according to the post you linked, not BGP. Also, Jon says in there that they use Brocade, not Juniper. So the OP pulled a switch photo from their website and made up a story around it, I assume. Anyway, nothing else I can make of it.


----------



## wlanboy

At least I learned something about Brocade products.


----------



## Deleted

Kenshin said:


> That is NOT what multipath is for. Multipath load balances equal-path links and by default, only when the AS paths match exactly. As Francisco mentioned this is typically for balancing links with an upstream that you have multiple connections with at the same router to avoid bonding the ports instead. In CC's case, they must have intentionally disabled matching of the entire AS-path to force traffic out two different carriers. Load balancing across multiple carriers is a bad idea due to traceroute being a mess as well as latency difference, your transfer speeds will suffer drastically due to packets arriving at different timings. However, it is the fastest and easiest way to balance your outgoing traffic.
> 
> Reason is simple, if they claim to be using the Brocade ICX 6450 then it's not a mystery. This switch likely does their customer VLAN routing and it doesn't support BGP, only OSPFv2 (ipv4) and RIPv2 (ipv4). It supports IPv6 in hardware, but static routes only. Based on this information, OSPF is critical to their operation because they are redistributing default from their core routers to the Brocade, as well as VLAN routes from the Brocade to their core routers. They can't enable IPv6 OSPF since the firmware doesn't support it (yet?) and they need two way OSPF announcements if they want to route IPv6 properly, I cannot imagine them wanting to configure static routes all over the place just to get IPv6 running.
> 
> The use of the Brocade for customer facing ports also explains why CC doesn't want to offer BGP sessions, the switch doesn't support BGP and it's where they terminate customer VLAN and routing. In order to offer BGP sessions, they have to plug you into a switch/router that has BGP, and assuming these switches are used as top-of-rack switches I cannot imagine them wanting to pull new cross connects just for you. Their Brocades are likely running pure L3 uplinks to avoid doing L2 spanning tree for redundancy as well as to isolate VLAN numbers to the switch itself, so they can't just configure your port as a VLAN back to their core routers without making things overly complex.
> 
> There'll be probably people complaining why the use of Brocade if it doesn't support IPv6/BGP, blah blah. Brocade (ex-Foundry) is well known for reliable/high performance switches (L2 only) at cheap prices, in this case with basic layer 3 slapped on as a bonus. They are used extensively in L2 deployment especially for high port count setups and the equipment itself is rock solid. They are not known for their L3 deployments, so it's not surprising they lack OSPFv3 on these.
> 
> *Disclaimer* I do not work for CC. I just happen to have years of experience running Foundry/Brocade equipment so I'm familiar with the limitations of their platforms and can piece together the puzzle on a network level.


We set up default routing long ago and did not take full tables since we were not really multihomed. Years ago a Sup720 cost an arm and a leg to do full BGP tables, so it was just more worthwhile to take defaults and use multipath with run-of-the-mill switching gear.

Giving users BGP isn't really hard, but our layer 3 customers wanted full tables from us, and we couldn't give people full routing tables with the gear we had at the time, so we had to turn them down. Is it worthwhile to spend $30k+ on a 6503 + Sup720 for a customer that might do $1,000 a month in transit costs? Is it worthwhile to accept all those prefixes if you're single-homed to the same AS (with different links)?

Since I no longer work for CC, I cannot comment on their switching topology. At the beginning we used OSPF for our internal topology, which is great for scaling, VLANs, et al. I do not like Brocade stuff (their firmware is shit); I've always been partial to Juniper/Cisco.


----------



## Kenshin

Monkburger said:


> We setup default routing long ago and did not take full tables since we were not really multihomed, years ago a sup720 costs an arm and a leg to do full BGP tables, so it was just more worthwhile to take defaults and use multihop with run-of-the-mill switching gear.
> 
> Giving users BGP isn't really hard, but any layer3 customers wanted full tables from us, and we can't give people full routing tables with the gear we had at the time, so we had to turn them down. Is it worthwhile to spend $30k+ on a 6503 + sup720 for a customer that might do $1000 a month of transit costs? Is it worthwhile to accept all those prefixes if you're singled homed to the same AS (with different links)?
> 
> Since I no longer work for CC, I cannot comment on their switching topology. At the beginning we used OSPF for our internal topology which is great for scaling, vlans, et al. I do not like broadcade stuff (their firmware is shit), I've always been partial to Juniper/Cisco.


Single-homed, sure, default routing makes perfect sense. Now there's Telia + Cogent, and it doesn't make sense anymore (and I think previously L3?). One's a decent provider globally; the other is decent within their own network only. Assuming there's no change in the equipment, all CC needs to do now is take customer routes from Cogent and default to Telia; that would be a simple and efficient setup without needing to load a full table.

I don't think the full table is the problem here; most people taking service from CC will likely not bother picking up another transit and running cross connects. Most of them just want to route their own IPs with their own ASN, and if CC runs a pure OSPF setup, that's troublesome to arrange alongside.

I'm not surprised they kept the same OSPF setup; like you said, it's easy as hell to set up and maintain. Add VLAN, assign IP subnet, tag to port, redistribute into OSPF, done. It's not a wrong solution, it's just that Brocade's lack of IPv6 OSPF made it difficult for CC to implement IPv6. Brocade's firmware always had issues and they're pretty slow to fix stuff; there were plenty of horror stories on WHT from providers who ran primarily Brocade and got hit by the Brocade/Foundry-specific DoS attack, or who ran into firmware bugs (Sagonet I think, many years ago). The perfect use case for Brocade is pure L2 with VLANs, period. Firmware wouldn't even matter anymore.


----------



## qps

Kenshin said:


> Single homed, sure default routing makes perfect sense. Now there's Telia + Cogent, doesn't make sense anymore (and I think previously L3?). One's a decent provider globally, the other is a decent provider within their own network only. Assuming there's no change in the equipment, all CC needs to do now is to take customer routes from Cogent and default Telia, that would be a simple and efficient setup without needing to load a full table.


The problem here is that their equipment (if it's true they're using EX-series switches to face their providers) isn't capable of much more than default routes. Pretty much all they can do is default everything to one and use the other as a failover, or run things like they have them set up now.


----------



## Kenshin

qps said:


> Problem here is that their equipment (if it is true they are using EX-series switches to face their providers) isn't capable of much more than default routes.  Pretty much all they can do is default everything to one and use the other as a failover, or use things like they have them setup now.


An EX4500 would be 12K FIB, which won't be sufficient, but I built my assumption on them still using the Cisco 6500, which should be able to do a 200k FIB.

But god, this day and age, if they're moving multi-10Gs, with the MX series at a decent price point I don't see why they would need to cut corners on router hardware. A full 10G of Telia would be probably what, $10k/month or so? Cogent another $5k/month? I'm sure there's budget for network hardware in there somewhere.


----------



## Deleted

As I was the principal person responsible for setting up the network at its birth, I am going to go out on a limb and defend taking default routes. Routers/switches with large amounts of memory for routing are not cheap; years and years ago, when the routing table was ~200k routes (with /24's mixed in there), a 6500 series with a decent sup and nice backplane bandwidth was about $40k plus (direct from my old vendor).

On the other hand, a smaller, faster switch (high PPS) was cheaper and could do more for less money (trunking, HSRP, etc). We went the cheaper route. I /wanted/ a fully loaded 6506 with Sup720 (they were $25k new), but we could not justify the cost JUST for full BGP routes. 95% of dedicated customers did not care whether we had full BGP or not, as long as their service worked (and didn't lag).

Back in the 90s when I worked for AboveNet, the motto at IAD4 was 'keep it simple', which is exactly what I used when I designed the existing network: OSPF with eBGP (default routes), though we used communities to alter our localpref in some cases. It worked.

One thing I did NOT agree with was mixing vendor equipment. It's nice to do layer 2 on different vendors (most of the time), but firmware bugs and broken RFC agreements made it difficult. I despised Foundry's gear as total shit made by people who wear leafs as shoes.

It's simple numbers at the end of the day: the needs of the many (simple setup, default routes) outweigh the needs of the few (full-blown routing, MPLS, et al).


----------



## Kenshin

I don't think anyone would have faulted you for using a default route setup when there's low bandwidth usage and what you did fit the business needs at that point of time.

My point is that looking at the prices today and the amount of bandwidth I'd estimate CC doing, justifying the equipment would be far easier compared to years ago. Network hardware prices for Juniper went down heavily since the MX platform stablized, their EX series have also pushed Cisco into a good price fight for performance. $40k was a lot of money years back and you needed 2 for redundancy. $40-60k today can get you a pair of MX80 which will do 2x40G all day long, and plug in nicely to the EX4500 (yey same vendor!).

I hear you on the Brocade/Foundry part. I believe that was the era when RSTP/MSTP was still in development and every vendor had their own implementation that refused to talk to the other guy's. Even today Cisco vs Juniper L2 issues still exist, so it's slightly better now but not totally gone.


----------



## RyanD

buffalooed said:


> Los Angeles is Quadranet, but wondering if directly out there or [email protected] in the middle...  Let me look....


I can easily clear that up for you.

ColoCrossing is only a client in our Atlanta location, as is publicly known. We are not a "man in the middle" in any facility for them; they are only with us in the facility we wholly operate. I think they work solely with facility operators, i.e. us (ATL), ServerCentral (CHI), Quadranet (LA), Centralogic (Buffalo), and I have no idea where they are elsewhere.

We have a private suite in the Penthouse @ Quadranet; it was originally owned by another company that got bought by Quadranet. The only thing that changed for us is we picked up a port of IP transit from them to add into our blend.


----------



## RyanD

Monkburger said:


> As I was the principal person responsible for setting up the network at its birth, I am going to go out on a limb and defend taking default routes. Routers/switches with large amounts of memory for routing are not cheap. Years and years ago, when the routing table was ~200k routes (with /24's mixed in there), a 6500 series with a decent sup and nice backplane bandwidth was $40k plus (direct from my old vendor).
> 
> On the other hand, a smaller, faster switch (high PPS) was cheaper and could do more for less money (trunking, HSRP, etc.), so we went the cheaper route. I /wanted/ a fully loaded 6506 with sup720 (they were $25k new), but we could not justify the cost JUST for full BGP routes. 95% of dedicated customers did not care whether we had full BGP or not, as long as their service worked (and didn't lag).
> 
> Back in the 90s when I worked for AboveNet, the motto at IAD4 was 'keep it simple', which is exactly what I applied when I designed the existing network: OSPF with eBGP (default routes), though we used communities to alter our localpref in some cases. It worked.
> 
> One thing I did NOT agree with was mixing vendor equipment. It's nice to do layer 2 on different vendors (most of the time), but firmware bugs and broken RFC implementations made it difficult. I despised Foundry's gear as total shit made by people who wear leaves as shoes.
> 
> It's simple numbers at the end of the day: the needs of the many (simple setup, default routes) outweigh the needs of the few (full-blown routing, MPLS, et al).


Not trying to discredit you or anything, but those prices are high even by the standards of back then.

I mean, in 2006 when we got our first 6500 with 2 x 16-port 1G (GBIC) blades, 2 x 48-port 1G (copper), and a sup720-3bxl, it was only $19k.

Later, when we added on 2 more SUP720s, they ran about $7k, and WS-X6704s ran about $7k as well. Nowadays the SUP720-3bxl is still a very serviceable sup (we still run about a dozen of them). Convergence times stink, but otherwise they chug right along.

Even if you were looking for cost-effective edge routing, the 6500s w/sup720-3bxl still aren't a bad choice today. You can deploy a 6506 w/8 x 10G and 96 x 1G (copper) for about $8k. If you wanted to load it out with 5 x 4-port 6704s and use CFCs on the 6704s, you could deploy a sup720-3bxl w/20 x 10G for about $13k.

We use 3bxl's on all our 67xx line cards, so it drives cost up a bit, but it's worth it as you scale.


----------



## RyanD

Monkburger said:


> We set up default routing long ago and did not take full tables since we were not really multihomed. Years ago a sup720 cost an arm and a leg to do full BGP tables, so it was just more worthwhile to take defaults and use multihop with run-of-the-mill switching gear.
> 
> Giving users BGP isn't really hard, but any layer 3 customers wanted full tables from us, and we couldn't give people full routing tables with the gear we had at the time, so we had to turn them down. Is it worthwhile to spend $30k+ on a 6503 + sup720 for a customer that might do $1000 a month of transit? Is it worthwhile to accept all those prefixes if you're single-homed to the same AS (with different links)?
> 
> Since I no longer work for CC, I cannot comment on their switching topology. At the beginning we used OSPF for our internal topology, which is great for scaling, vlans, et al. I do not like Brocade stuff (their firmware is shit); I've always been partial to Juniper/Cisco.



Certainly, if you are single-homed to a carrier (even if you have multiple uplinks to them), it is pointless to take full tables unless you really want to mess with AS paths for some odd reason (maybe tag different communities based upon the prefix's destination AS?).

Now, any time you move to more than one carrier, I don't see any reason why you wouldn't want full tables. The granular control you gain, and the ability to manipulate the otherwise 'dumb' decision making of the standard BGP path selection algorithm, lets you greatly improve the quality of your network and the delivery of traffic through it.
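That 'dumb' decision making is just a fixed list of tie-breakers. A simplified sketch (real BGP has more steps, and the peers and attributes here are invented): highest local-pref wins, then shortest AS path, then lowest MED.

```python
# Simplified sketch of BGP best-path selection: local-pref, then AS-path
# length, then MED. Carrier names and path attributes are made up.
from dataclasses import dataclass

@dataclass
class Path:
    peer: str
    local_pref: int = 100
    as_path: tuple = ()
    med: int = 0

def best_path(paths):
    """Pick the best path using the simplified decision process above."""
    return min(paths, key=lambda p: (-p.local_pref, len(p.as_path), p.med))

# With equal local-pref, the shorter AS path wins.
paths = [
    Path("carrierA", as_path=(64501, 64512)),
    Path("carrierB", as_path=(64502, 64503, 64512)),
]
print(best_path(paths).peer)   # carrierA
```

The point about full tables is that this comparison happens per prefix; with only a default route there is nothing to compare, so every destination takes whatever the default (or multipath) gives it.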

We've had Internap's FCP platform in place since 2008, and I don't know what we would do without it. There are other options in the market now, such as Noction, that offer similar functionality, but it takes a huge amount of work off our network team not having to worry about manual path manipulation and performance issues. I am a huge fanboy.


----------



## Deleted

RyanD said:


> Not trying to discredit you or anything, but those prices are high even by the standards of back then.
> 
> I mean, in 2006 when we got our first 6500 with 2 x 16-port 1G (GBIC) blades, 2 x 48-port 1G (copper), and a sup720-3bxl, it was only $19k.
> 
> Later, when we added on 2 more SUP720s, they ran about $7k, and WS-X6704s ran about $7k as well. Nowadays the SUP720-3bxl is still a very serviceable sup (we still run about a dozen of them). Convergence times stink, but otherwise they chug right along.
> 
> Even if you were looking for cost-effective edge routing, the 6500s w/sup720-3bxl still aren't a bad choice today. You can deploy a 6506 w/8 x 10G and 96 x 1G (copper) for about $8k. If you wanted to load it out with 5 x 4-port 6704s and use CFCs on the 6704s, you could deploy a sup720-3bxl w/20 x 10G for about $13k.
> 
> We use 3bxl's on all our 67xx line cards, so it drives cost up a bit, but it's worth it as you scale.


Trust me, that is exactly what they cost when the 720s first came out. We priced a 6506-E with a 3bxl, 2 6724s, and 2 copper gig blades; it was $39,000 USD (I can probably find the email from my vendor).

We looked at the sup2s as well, due to their raw PPS (and competitive pricing); the sup32s were total shit and not as fully featured as the sup2s (which made me laugh). This also included the tier 1 SmartNet contract. I also looked at Force10, as they have very good switching fabric latency (almost an order of magnitude better than the Cisco stuff), but the cost was outrageous.


----------



## RyanD

Monkburger said:


> Trust me, that is exactly what they cost when the 720s first came out. We priced a 6506-E with a 3bxl, 2 6724s, and 2 copper gig blades; it was $39,000 USD (I can probably find the email from my vendor).
> 
> We looked at the sup2s as well, due to their raw PPS (and competitive pricing); the sup32s were total shit and not as fully featured as the sup2s (which made me laugh). This also included the tier 1 SmartNet contract. I also looked at Force10, as they have very good switching fabric latency (almost an order of magnitude better than the Cisco stuff), but the cost was outrageous.


That explains it then: new pricing + SmartNet without much of a discount.

As an aside, I've worked with hardware placement in the Cisco broker channel for the better part of 10 years now, with very large Cisco partners, so I understand the discount levels and what things can actually be purchased for with proper negotiation.

Cisco itself and many purchasers are still stuck in the mindset that they need to buy new, carry huge warranties, have 4-hour replacement, blah blah. Nowadays, equipment is so cheap everything should be treated with the Google/utility model: forget the warranties, buy a bunch more of it, and when it dies, throw it out and replace it. It's significantly cheaper to keep on-site spares and use your own staff for 15-minute replacement than deal with 4-hour replacement support contracts. I've never seen Dell replace something in 4 hours; that 4 hours is always more like a 10-hour total process.
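The spares-versus-contract argument is simple arithmetic. A hedged sketch with entirely made-up dollar figures (none of these are real vendor prices):

```python
import math

# Illustrative cost comparison of the 'buy spares' model versus a per-unit
# 4-hour support contract. All dollar figures are invented for the example.
def spares_cost(unit_price: float, spare_ratio: float, units: int) -> float:
    """Cost of keeping on-site cold spares (no support contract)."""
    return math.ceil(units * spare_ratio) * unit_price

def contract_cost(annual_per_unit: float, units: int, years: int) -> float:
    """Total cost of a per-unit replacement contract over its term."""
    return annual_per_unit * units * years

# Hypothetical fleet: 20 switches at $2k each, 10% spares,
# versus a $500/yr/unit contract over 3 years.
print(spares_cost(2000, 0.10, 20))   # one-time spares outlay
print(contract_cost(500, 20, 3))     # contract spend over the term
```

Under these assumed numbers the spares pool is a fraction of the contract cost, and it also buys the 15-minute swap time the post describes; the balance obviously shifts for small fleets or very expensive chassis.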


----------



## Shados

RyanD said:


> I've never seen Dell replace something in 4 hours; that 4 hours is always more like a 10-hour total process.


Yeah, I can vouch for that too.


----------



## Deleted

I lost all respect for Cisco the day they threatened OpenBSD with a patent lawsuit over VRRP (which is what led to CARP). Their products are overrated.


----------



## VPSCorey

In Cisco's defense, 4 hours is sometimes 30 minutes, though they send cards to us via taxi sometimes. However, that 4-hour warranty can make even Fortune 100 companies cringe every once in a while.

EX4500s are not meant to be BGP routers, if that photo is true; they're fine as TOR switches, though.

FCP and Noction are awesome; I agree with RyanD about that. These systems can figure out, in seconds, things that would take a human a lot of time and research to get correct, and they can improve over time. Networks are getting more complex, and a lot of optimizing is being done via computers now.
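The core loop of such an optimizer can be sketched in a few lines: probe each carrier per destination prefix, then steer traffic toward the best measured path. This is a rough illustration of the idea, not how FCP or Noction actually work internally; the prefixes, carriers, and latencies are all invented.

```python
# Rough sketch of what a route optimizer does: per-prefix latency
# measurements drive the carrier choice. All data here is invented.
measurements = {
    # prefix: {carrier: measured RTT in ms}
    "203.0.113.0/24":  {"carrierA": 42.1, "carrierB": 55.3, "carrierC": 48.0},
    "198.51.100.0/24": {"carrierA": 80.5, "carrierB": 61.2, "carrierC": 75.9},
}

def preferred_carrier(prefix: str) -> str:
    """Choose the lowest-latency carrier for a given prefix."""
    rtts = measurements[prefix]
    return min(rtts, key=rtts.get)

for prefix in measurements:
    print(prefix, "->", preferred_carrier(prefix))
```

In a real deployment the chosen result would be injected back into BGP (e.g. as a more-specific route or a local-pref override) rather than just printed, and the measurements would be refreshed continuously.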



Monkburger said:


> Trust me, that is exactly what they were when the 720's first came out. We priced a 6506-E with 3bxl, 2 6724's and 2 copper gig blades, it was $39,000USD (I can probably find the email from my vendor).
> 
> We looked at the sup2's as well, due to their raw PPS (and competitive pricing), the sup32's were total shit and not as featured as the sup2's (which made me laugh).. This also included the tier1 smartnet contract as well. I also looked at Force10, as they have very good switching fabric latency (almost an order of magnitude better than the cisco stuff), but the cost was outrageous.



Cisco gives discounts the more you buy from them. It's crap for the small guys, though.


----------



## peterw

CC needs some EdgeMax power.


----------

