amuck-landowner

HA in Digital Ocean

Hxxx

Active Member
This has been known for a little while, based on the features enabled in the DO control panel, but today I found this explanation while looking over Twitter. Here is the link to the article; it's so nicely crafted that I prefer to just post the link: https://www.digitalocean.com/community/tutorials/what-is-high-availability


TL;DR: The floating IP feature at DO (which is not a new thing here) is ideal for building a basic HA infrastructure.


* Just creating the post to see if anybody wants to contribute or criticize. We have a few infrastructure enthusiasts in here.
 

drmike

100% Tier-1 Gogent
HA is a really old topic. Most customers (normal ones) have no idea where to even start with this. It's complicated, with a bunch of pieces to plumb together.


To make HA work you need N+1, ideally 3 or more, of everything:

  • Database server with near-real-time replication and the ability to roll back and checksum transactions.
  • Web server with near-real-time replication of the file backend; a distributed filesystem may be the best way to achieve this. Rsync is often the first attempt, but it's hard to get right.
  • Load balancer / gateway up front; there are specialized load-balancer software packages for this. They can be quite heavy and complex and demand a dedicated server. At minimum, Nginx or Varnish in a reverse-proxy / cache mode (see the rough sketch after this list).
  • N+1 = 3 locations minimum. You will need this for sanity. Ideally 3 different geo-locations and 3 different providers with different upstreams in different geo-regions.
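
To make that load balancer / gateway bullet concrete, here is a rough sketch (nothing production grade; the hostnames and health endpoints are made up) of the "only one set live on traffic" idea: a watchdog that probes each backend cluster in priority order and flags when traffic should be switched.

Code:
# Rough active/passive health-check loop a gateway box might run.
# Hostnames and health endpoints below are placeholders, not real systems.
import time
import urllib.request

BACKENDS = [
    "http://cluster1.example.com/health",  # primary cluster
    "http://cluster2.example.com/health",  # hot standby cluster
]

def is_healthy(url, timeout=2.0):
    """Return True if the backend answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_live_backend():
    """Return the first healthy backend in priority order, or None."""
    for url in BACKENDS:
        if is_healthy(url):
            return url
    return None

if __name__ == "__main__":
    active = None
    while True:
        live = pick_live_backend()
        if live != active:
            # In a real deployment this is where the gateway would be
            # repointed (floating IP reassignment, upstream swap, etc.).
            print("switching traffic: %s -> %s" % (active, live))
            active = live
        time.sleep(5)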

Floating IPs are a good concept.  I don't think many of us use them yet.  Documentation for them and actual samples of how to work with them are still lacking or mostly unseen.


[Image: ha-diagram-animated.gif - floating IP failover diagram from the DO article]


I included this graphic from the DO article you posted since it's a decent illustration of the floating IP. It's a bit off, though, for wider interest.


What is off is that the floating IP is literally being tagged to the active / live load balancer. I find this approach dizzying and prone to messes. In a single location you'd have an IP up front, say as a gateway, and behind it your cluster of assets (multiples doing the same thing, but only one set live on traffic).


So always:  192.0.2.100 --> backend cluster #1 or backend cluster #2.


What this approach, as illustrated, is trying to do is achieve failover / redundancy for what should be the gateway doing the core load balancing. That layer is already well developed for HA, with all the fixings to stay reliable and graceful.


The lure of the floating IP is that you can take a failure, keep traffic coming inbound to the same address, and on the backside swap out the crashed asset with a path to the N+1 redundant install. I get that. It also seems to encourage a single public route inbound, which in itself is a single point of failure (i.e. always pointing to the same cluster / location).


A floating IP, to me, makes sense when you have an Anycast setup with different HA clusters in different geolocations, where your inbound route is always the same but the network transparently pushes visitors to the 'closest' cluster, and where you have integrated HA into BGP on the frontside and doubled up on the backside to deal with misrouted traffic that would otherwise hit downed assets.
 

fm7

Active Member
Very true.


BTW, I don't know the current status, but when Floating IPs were launched, DigitalOcean (and Vultr) hadn't implemented "fault domains" (and ideally "upgrade domains"), which are a basic requirement for HA.


“A fault domain is a set of hardware components – computers, switches, and more – that share a single point of failure.” IEEE Computer Magazine, March 2011 issue.


 



                  | Fault Domain #1 | Fault Domain #2
Upgrade Domain #1 | Instance #1     |
Upgrade Domain #2 |                 | Instance #2





In this case, if for example Fault Domain #1 fails, Instance #2 (in Fault Domain #2) will continue to be available. Of course, when the Fabric notices that Instance #1 doesn't respond, it will deploy your application to a new VM in a fault domain different from Fault Domain #2.


http://blog.toddysm.com/2010/04/upgrade-domains-and-fault-domains-in-windows-azure.html
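
A toy sketch of the idea behind the table above (this is not Azure's actual Fabric logic, and the domain counts are only illustrative): spread instances round-robin across fault domains and upgrade domains so that no single rack failure or rolling upgrade takes every copy down.

Code:
# Toy placement illustration; not Azure's real algorithm.
FAULT_DOMAINS = 2      # e.g. two racks with independent power/switching
UPGRADE_DOMAINS = 2    # groups that get rebooted/upgraded one at a time

def place(instance_count):
    """Assign each instance a (fault_domain, upgrade_domain) pair, round-robin."""
    return {
        "Instance #%d" % (i + 1): (i % FAULT_DOMAINS + 1, i % UPGRADE_DOMAINS + 1)
        for i in range(instance_count)
    }

if __name__ == "__main__":
    for name, (fd, ud) in place(2).items():
        print("%s: Fault Domain #%d, Upgrade Domain #%d" % (name, fd, ud))
    # With 2 instances: losing Fault Domain #1 leaves Instance #2 running,
    # and upgrading Upgrade Domain #1 never touches Instance #2.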
 

Hxxx

Active Member
Thanks drmike and fm7 for the contribution. I like these topics; I wish the forum were more about this and less random. HA is an interesting topic with multiple ways to achieve it, and there is still room for improvement in this space.
 

jarland

The ocean is digital
I think a lot of people look for high availability in the underlying stack, at the virtualization level. I think that's a bad thing. Creating true, reliable HA at that level is not only extremely expensive at the infrastructure level, but it also isn't a one-size-fits-all solution for every customer. Not everyone runs a Wordpress blog, for example.


HA should be tailored to the application, and that is why I think application level HA solutions are the best. It's relatively inexpensive, and it fits the situation because you configure your applications for it.


For that reason, I was very glad to see everyone at DO (where I work, if anyone is not aware) on the same page about that. Floating IP addresses offered the right start to application level HA. Do what you need to do on the backend, be it GlusterFS, MySQL master-master replication, etc. Then, use the floating IP to help you navigate away from issues with a particular instance of your application or even a problem with the hypervisor.
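
As a rough sketch of that last step (the token, IP, and Droplet ID are placeholders, and the endpoint is the public v2 floating-IP action as far as I recall, so double-check the docs), repointing a Floating IP at a standby Droplet is a single API call once your health check decides the active instance is gone:

Code:
# Sketch: point a DigitalOcean Floating IP at a standby Droplet via the API.
# Token, IP, and Droplet ID are placeholders.
import json
import urllib.request

API_TOKEN = "YOUR_DO_API_TOKEN"
FLOATING_IP = "203.0.113.10"
STANDBY_DROPLET_ID = 123456

def reassign_floating_ip(ip, droplet_id):
    """Ask the DO v2 API to assign the floating IP to another Droplet."""
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/floating_ips/%s/actions" % ip,
        data=json.dumps({"type": "assign", "droplet_id": droplet_id}).encode(),
        headers={
            "Authorization": "Bearer %s" % API_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Normally triggered by a failed health check on the active Droplet.
    action = reassign_floating_ip(FLOATING_IP, STANDBY_DROPLET_ID)
    print(action.get("action", {}).get("status"))

From there it's the same pattern whether the backend is GlusterFS, MySQL, or anything else: the application layer decides, and the floating IP just follows.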


I think this was the right move, instead of chasing pipe dreams of flawless high availability that are often talked about on forums (VM failover, SAN storage, etc.) but more often than not seem to only provide a new set of problems.
 

fm7

Active Member
Thanks drmike and fm7 for the contribution. I like these topics; I wish the forum were more about this and less random. HA is an interesting topic with multiple ways to achieve it, and there is still room for improvement in this space.

The point here, and I think drmike made it clear, is something along the lines of: if you don't need HA, any solution (or marketing ploy) pretending to offer HA is good :) but if you really need HA, then you can't disregard / downplay drmike's post. There is no such thing as "good enough" HA. Simple as that.
 

drmike

100% Tier-1 Gogent
“A fault domain is a set of hardware components – computers, switches, and more – that share a single point of failure.” IEEE Computer Magazine, March 2011 issue.

A fault domain is simple N+1 redundancy and, well, it goes a tad beyond that. I'd think sitting on different bandwidth, different grid power, etc. would be necessary. Otherwise you're just bumping the fail point upstream in the same facility, which is meh.


N+1 with dual setups in the same local area (say a 150-mile radius) is a decent idea where both ends are well connected, ideally directly. I've done a deployment before with a DC on hot standby and no public internet live to it (all privately connected). We used it as a just-in-case for the worst scenario and mainly for day-to-day backups.

Thanks drmike and fm7 for the contribution. I like these topics; I wish the forum were more about this and less random. HA is an interesting topic with multiple ways to achieve it, and there is still room for improvement in this space.

We can / will cover more related stuff. Glad to have a variety of conversation here.


HA has a lot more options today and more mature solutions. I lost my grip on it as I am not pushing high-traffic stuff like I used to (which needed HA setups).

HA should be tailored to the application, and that is why I think application level HA solutions are the best. It's relatively inexpensive, and it fits the situation because you configure your applications for it.

That's how I feel about HA too. Now, bouncing from virtual servers to dedicated hardware, well, that tends to happen quickly, or should. Otherwise one is going to beat the heck out of a multi-tenant box and/or get not-so-hot, erratic performance out of the virtual HA choke point. That's part of why I run nothing real on VPS instances unless they are slices of a dedi that I am fully in control of and aware of the other usage on. <--- the point here is HA on a site that is already active or quickly ramping up.

... chasing pipe dreams of flawless high availability that are often talked about on forums (VM failover, SAN storage, etc.) but more often than not seem to only provide a new set of problems.

Usually people talking SAN and VM failover are talking closed-source or exotic, large-cost deployments. SANs fail too often and too ugly for me to ever recommend them. Heck, I am no fan of RAID either because of the mass complexity and potentially horrible failures it can create. Give me RAID for more spindles (and please make it SSD today); when it fails, I toss the drives in the dust bin and restore from backup (really, I have no patience for it).


Hardware-based and exotic bought-in solutions for HA will teach you how to burn money. Plenty of nice stuff, but you're locked into a vendor relationship and all the oddness of their solution. Don't like it? You're stuck, because you likely still lack the competence to paste and glue something together from the open source world. Not trying to be that person with the open-source neckbeard bias, but...


Yeah, work from the software up on HA. All those layers we've mentioned, and probably a good bit more, when one wants a legit HA setup that is bulletproof. Lots of automation scripts need to be developed to make it all happen gracefully, too.


Finally, I haven't read this blog in a while, but it used to be a favorite when I was more active in pushing masses of data: http://highscalability.com. Not a how-to as much as a view of what others are doing and of solutions you may not be aware of.
 

fm7

Active Member
A fault domain is simple N+1 redundancy and, well, it goes a tad beyond that. I'd think sitting on different bandwidth, different grid power, etc. would be necessary. Otherwise you're just bumping the fail point upstream in the same facility, which is meh.

I must disagree :).


If an incident compromises an entire facility, you need to think disaster recovery.


For everything else, a true fault domain implementation in a true 2N data center will keep up the infrastructure required to run your HA system (at least for a while).


Not that I think different geolocations are something extreme (*) -- Azure's distributed storage and Online.net's SAN-HA implement it -- but many times HA is required and you don't have external locations (e.g. ships, planes, etc.), so you must be serious about fault domains.


I guess it is important to say that 2N without fault domains precludes HA, which is just the case with Online.net's superb data centers.


(*) In fact, for a while I ran an HA prototype system using SoftLayer servers hosted in Dallas and Washington, DC.
 

DomainBop

Dormant VPSB Pathogen
HA should be tailored to the application, and that is why I think application level HA solutions are the best. It's relatively inexpensive, and it fits the situation because you configure your applications for it.

Going that route is inexpensive compared to other solutions, but is it really HA or is it something else that uses "HA" as a marketing term?  


Availability definitions used by IBM:

High Availability (HA) – Provide service during defined periods, at acceptable or agreed upon levels, and mask unplanned outages from end-users. It employs fault tolerance, automated failure detection, recovery, bypass reconfiguration, testing, and problem and change Management. 


Continuous Operations (CO) -- Continuously operate and mask planned outages from end-users. CO employs non-disruptive hardware and software changes, nondisruptive configuration, and software coexistence. 


Continuous Availability (CA) -- Deliver non-disruptive service to the end user seven days a week, 24 hours a day (there are no planned or unplanned outages).

Some reading material for everyone:


If you skip past the zSystems marketing fluff in this IBM white paper, there is some interesting reading: High Availability with KVM...

The number one best practice when constructing a High Availability environment is to avoid single points of failure in order to minimize the impact of any individual failure. Replicate everything! z13 partitions hosting your KVMs, control units, I/O paths, ports and cards (such as FICON and FCP host adapters and OSA network adapters), FICON directors, fiber channel switches, network routers, disks and data...

From OpenStack:  High Availability Concepts

High availability is implemented with redundant hardware running redundant instances of each service. If one piece of hardware running one instance of a service fails, the system can then failover to use another instance of a service that is running on hardware that did not fail.


A crucial aspect of high availability is the elimination of single points of failure (SPOFs). A SPOF is an individual piece of equipment or software that causes system downtime or data loss if it fails. In order to eliminate SPOFs, check that mechanisms exist for redundancy of:

  • Network components, such as switches and routers
  • Applications and automatic service migration
  • Storage components
  • Facility services such as power, air conditioning, and fire protection...
 

drmike

100% Tier-1 Gogent
If an incident compromises an entire facility, you need to think disaster recovery.


For everything else, a true fault domain implementation in a true 2N data center will keep up the infrastructure required to run your HA system (at least for a while).


Not that I think different geolocations are something extreme (*) -- Azure's distributed storage and Online.net's SAN-HA implement it -- but many times HA is required and you don't have external locations (e.g. ships, planes, etc.), so you must be serious about fault domains.


I guess it is important to say that 2N without fault domains precludes HA, which is just the case with Online.net's superb data centers.


(*) In fact, for a while I ran an HA prototype system using SoftLayer servers hosted in Dallas and Washington, DC.

HA and disaster recovery go together like sock and shoe for a proper business. There's no reason why disaster recovery shouldn't be scripted and strung into the HA process. It's rather necessary to do so; otherwise you have a failed location, downtime, and a manual switchover. Yes, HA and disaster recovery are different things, and neither on its own will get you through a worst-case scenario unscathed. A unified and ironed-out process that addresses both is the winner, but it's not for the faint of heart and is typically costly in development, documentation, training, the idle remote build, etc.


I've never made much of a distinction between HA and disaster recovery. I still feel anyone with actual needs like this must do both, or the learning process and the intent of all this have failed. I realize it's a creeping weed where we could keep asking "what about this other incident / potential / etc.", but even that is due to be addressed in proper planning before implementing any of this. Mentally discover the universe of potential failures, plan the complete ecosystem, and then and only then go about creating and implementing the solution(s).


HA redundancy like in planes and ships exists, yet they still have total and ugly failures that blow out the redundancy planning, although total failures are statistically likely far lower than in civilian solutions without said redundancy. Both are mainly military tools, overbuilt and strategically overplanned (or held to higher standards when civilians are involved, due to the potential for loss of life). There is little one could inject for HA externally, or even disaster recovery, because of the nature of what they are (mobile). It's an interesting point to bring up, though. Definitely a different prism to view the matter through... I think most view HA like this mobile military approach: redundancy just within that single location, with vendor / company support policies in place for when failure occurs.


Digging this twist on HA :)


Tell us @fm7 about your HA prototype.  What did you use for your stack build?
 

wlanboy

Content Contributer
And once you have built up everything correctly on the technical side, you still have to certify the whole package against ISO 27001/27004.
The first one is doable; the second one is a nightmare for your company's processes.


A nice read on Azure: https://blogs.msdn.microsoft.com/cloud_solution_architect/2015/08/05/azure-high-availability/
(and https://msdn.microsoft.com/en-us/library/azure/dn251004.aspx?f=255&MSPPError=-2147217396)


Looks like Azure is offering everything each poster wants, plus that little security add-on.
 

fm7

Active Member
I've never made much of a distinction between HA and disaster recovery.

I see HA and disaster recovery as completely different animals. :)


An enterprise has a very limited number of mission-critical systems and hundreds or thousands of less essential or less used pieces of software and data. In the case of a facility-wide disaster or simple outages, downtime and eventual loss of data are expected for those services, and the recovery would follow plans setting priorities, stages, and so on.

HA redundancy like in planes and ships exists, yet they still have total and ugly failures that blow out the redundancy planning, although total failures are statistically likely far lower than in civilian solutions without said redundancy. Both are mainly military tools, overbuilt and strategically overplanned (or held to higher standards when civilians are involved, due to the potential for loss of life). There is little one could inject for HA externally, or even disaster recovery, because of the nature of what they are (mobile).

Actually, you have civilian use cases in (fixed) exotic locations (e.g. offshore oil rigs, Antarctic laboratories) and, I (boldly) dare say, in almost 100% of industrial facilities you can't transfer the processing to an external location.

Tell us @fm7 about your HA prototype.  What did you use for your stack build?



Not HA in the strict sense, but rather tests with Postgres streaming replication and Tokyo Tyrant master-master replication using SoftLayer's private (out-of-band) network between those data centers.
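
Not my actual code, just a hedged sketch of the kind of check you can run against the primary to confirm the streaming standby is attached (assumes psycopg2 and a monitoring role; the DSN is a placeholder):

Code:
# Sketch: ask the primary which standbys are streaming (pg_stat_replication).
import psycopg2  # assumed available

DSN = "host=primary.example.com dbname=postgres user=monitor password=secret"

def replication_status(dsn):
    """Return (client_addr, state, sync_state) rows from pg_stat_replication."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT client_addr, state, sync_state FROM pg_stat_replication"
            )
            return cur.fetchall()

if __name__ == "__main__":
    rows = replication_status(DSN)
    if not rows:
        print("no standbys attached -- replication looks down")
    for client_addr, state, sync_state in rows:
        print("%s: state=%s, sync=%s" % (client_addr, state, sync_state))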


BTW, regarding SoftLayer's Global IPs:

  • Modification of a global IP address to a new server or VSI can take up to five minutes to take effect. Within the SoftLayer network, the route change will take less than 1 minute to update.
  • Global IP's will not work for local load balancers
  • By itself, Global IP's are not an automatic failover solution due to the lack of health checks; however, it may be used as a component for a failover environment to circumvent DNS propagation.


In the same limited "high availability" way, I currently run Dovecot master-master and LDAP multi-master replication using data centers around the world.
 

drmike

100% Tier-1 Gogent
BTW, regarding SoftLayer's Global IPs:

  • Modification of a global IP address to a new server or VSI can take up to five minutes to take effect. Within the SoftLayer network, the route change will take less than 1 minute to update.
  • Global IP's will not work for local load balancers
  • By itself, Global IP's are not an automatic failover solution due to the lack of health checks; however, it may be used as a component for a failover environment to circumvent DNS propagation.

5 minutes wait :) ahhh what is this?  Just adding a new server or IP to your pool?


Global IPs == Anycast?
 

fm7

Active Member
Global IPs == Anycast?

No. Please take a look at this blog post:


http://blog.softlayer.com/2012/global-ip-addresses-what-are-they-and-how-do-they-work

Excerpts:


Global IP addresses can be provisioned in any data center on the SoftLayer network and moved to another facility if necessary. You can point it to a server in Dallas, and if you need to perform maintenance on the server in Dallas, you can move the IP address to a server in Amsterdam to seamlessly (and almost immediately) transition your traffic.


...


How Do Global IPs Work?



...


We allocate subnets of IP addresses specifically to the Global IP address pool, and we tell all the BBRs (backbone routers) that these IPs are special. When you order a global IP, we peel off one of those IPs and add a static route to your chosen server's IP address, and then tell all the BBRs that route. Rather than the server's IP being an endpoint, the network is expecting your server to act as a router, and do something with the packet when it is received. I know that could sound a little confusing since we aren't really using the server as a router, so let's follow a packet to your Global IP

  1. The external client sends the packet to a local switch
  2. The switch passes it to a router.
  3. The packet traverses a number of network hops (other routers) and enters the Softlayer network at one of the backbone routers (BBR).
  4. The BBR notes that this IP belongs to one of the special Global IP address subnets, and matches the destination IP with the static route to the destination server you chose when you provisioned the Global IP.
  5. The BBR forwards the packet to the DAR (distribution aggregate router), which then finds the FCR (front-end customer router), then hands it off to the switch.
  6. The switch hands the packet to your server, and your server accepts it on the public interface like a regular secondary IP.
  7. Your server then essentially "routes" the packet to an IP address on itself.

Because the Global IP address can be moved to different servers in different locations, whenever you change the destination IP, the static route is updated in our routing table quickly.


...






5 minutes wait :) ahhh what is this?  Just adding a new server or IP to your pool?



David Mytton (replying to Matt Freeman, @nonuby), 4 years ago:






We've been using this for over a month now and it's working well. We've rerouted the IP several times for maintenance and we regularly test the automated failover of the load balancer too. Routing always happens within a few seconds and we don't see any loss of traffic. So everything is working nicely. The big test will be when our primary data centre fails and we have to reroute to another DC. We have tested this with no issues but it's always difficult to replicate that kind of failure (source of the failure, load on their management systems, etc).


https://disqus.com/home/discussion/serverdensity/global_elastic_ips_multi_region_routing/


[Image: networkexpansion.png - the BBRs]

[Image: networkarchitecture1.png - BBR, DAR, FCR]
 