High availability feasible for low cost?

Discussion in 'Questions and Answers' started by threz, Jun 19, 2013.

  1. threz

    threz New Member

    Jun 19, 2013
    After the whole SolusVM hack and a number of hosts experiencing downtime because of it, I started to see posts essentially claiming "if you wanted something reliable, you shouldn't have gone with a low end host."

    That got me thinking. Could you use a combination of hosts in different data centres to host something that's "mission critical" or "highly available" and not break the bank? 

    How would that compare in cost and features to a more expensive host that has a strict SLA?

    I'm definitely not the most experienced in this kind of thing, but I potentially have a (relatively small) project coming up that I'd like to be highly available and at least somewhat fault-tolerant. 

    I'm thinking something along the lines of:

    • Get 6 VPSs from the more reliable low-cost hosts:
      • 2 × 128 MB VPSs as load balancers (HAProxy or similar)
      • 2 × 256 MB VPSs running a LEMP stack or similar as web servers
      • 2 × 512+ MB VPSs running a Galera MySQL cluster
    • Rage4 for DNS, with failover triggered by Uptime Robot
    • Half of the servers above in one geographic area, the other half in another
    • Rage4 geographic zones pointing to each load balancer as the primary for its region
    • Failover protection pointing to the opposite load balancer
    • Load balancers handling both web traffic and MySQL requests
    • Each load balancer favouring its own geographic area
    • The other region's servers used only if the preferred server is under heavy load or unavailable
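    That local-first preference maps onto HAProxy's `backup` keyword. A minimal sketch of one region's balancer config, assuming placeholder addresses and a hypothetical /healthz endpoint (neither is from this thread):

    ```
    # One region's load balancer. The remote web server is marked
    # "backup", so it only receives traffic when the local server
    # is failing its health checks.
    frontend www
        bind *:80
        default_backend webservers

    backend webservers
        balance roundrobin
        option httpchk GET /healthz
        server web-local  10.0.1.10:80 check
        server web-remote 10.0.2.10:80 check backup
    ```

    The MySQL side could be a second frontend/backend pair set up the same way in TCP mode.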

    If this works, I figure it would only cost ~$20/month. Is this feasible? Am I missing something here? What would you do?
    Last edited by a moderator: Jun 19, 2013
  2. WebSearchingPro

    WebSearchingPro VPS Peddler Verified Provider

    May 15, 2013
    I would think this would work, but keep in mind that several providers were hit by the SolusVM issues. If you happened to be unlucky, you could lose both halves of each HA pair at once.

    Also, a high-priced host doesn't make a user exempt from these issues; zero-day exploits are still possible regardless of which host is picked. In that case it's just security through obscurity by way of a less popular panel.

    Another note: if you think about it, an SLA doesn't mean squat. If a server is broken, it's broken, and a 99.999% uptime guarantee isn't going to magically turn it back on.

    Even if you get a refund or whatever, that data was lost, customers were lost and there was downtime.

    /end rant
  3. drmike

    drmike 100% Tier-1 Gogent

    May 13, 2013
    Yes this will work and others are doing similar things.

    The problem with using HAProxy (or anything similar) at your numbers is that you only have 2 of them. Geographically, 2 is not enough; redundancy-wise you need 3, and once you add geography you probably need more like 5-6.

    Best to stick with Nginx since it is proven, small and fast.  

    Install your stack on, say, 512 MB VPSes. Put MySQL on each one (MySQL over internet latency is a horrific idea: it works, but slowly).

    Use Rage4 plus an API-driven monitoring service to prune VPSes from your pool when they go down.
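    The pruning step could be as simple as a cron'd health-check script. A sketch in Python, where the addresses and /healthz path are made up, and the actual Rage4 API call is left as a callback since the real endpoint and authentication belong in their docs:

    ```python
    # Sketch of the "prune when down" logic. POOL entries and the
    # /healthz path are placeholders; the Rage4 call is a stub.
    from urllib.request import urlopen

    POOL = {
        "lb-us": "http://203.0.113.10/healthz",
        "lb-eu": "http://203.0.113.20/healthz",
    }

    def is_up(url, timeout=5, probe=urlopen):
        """True if the node's health endpoint answers HTTP 200."""
        try:
            return probe(url, timeout=timeout).status == 200
        except OSError:  # connection refused, timeout, DNS failure, HTTP error
            return False

    def nodes_to_prune(pool, probe=urlopen):
        """Names of pool members whose health check fails."""
        return [name for name, url in pool.items() if not is_up(url, probe=probe)]

    def prune(pool, disable_record, probe=urlopen):
        """Disable the DNS record of every failed node.

        `disable_record` is a callback standing in for the real
        Rage4 API call (see their API docs for the endpoint)."""
        down = nodes_to_prune(pool, probe=probe)
        for name in down:
            disable_record(name)
        return down
    ```

    Run it every minute from cron, and failed nodes drop out of DNS on the next pass.
    
    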

    The geographic aspect takes a custom setup in Rage4 for something like this, but yes, you can do that custom setup.

    $20 a month? Depends. Probably higher in reality. 512 MB VPSes tend to run around $5 a month × 6 = $30, plus Rage4 overage beyond 250k lookups a month.

    In addition to all that, I'd add one dedicated backup VPS that is not running your public stack, and take steps not to replicate/overwrite things on it in case of hack/compromise/etc.

    A shared file system could be beneficial in such a configuration but RAM probably limits that severely.
  4. threz

    threz New Member

    Jun 19, 2013
    Thanks for the advice. 

    So, basically, I should nix the 256 MB VPSes listed above, replace them with 512s (or higher), and run MySQL directly on each one. To keep them synced, put those in some sort of master-master cluster (Galera or similar), but don't actually send database requests to anything except the local MySQL server?
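    If it helps, a master-master pair along those lines is mostly a my.cnf exercise. A minimal per-node sketch, assuming Galera; the IPs, cluster name, and library path are placeholders (the path varies by distro):

    ```
    [mysqld]
    # Galera requires row-based replication and InnoDB
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2

    # wsrep / Galera settings (addresses are placeholders)
    wsrep_provider        = /usr/lib/libgalera_smm.so
    wsrep_cluster_name    = "my_cluster"
    wsrep_cluster_address = "gcomm://10.0.1.20,10.0.2.20"
    wsrep_node_address    = 10.0.1.20
    wsrep_sst_method      = rsync
    ```

    Queries would still go only to the local node; the cluster just keeps the nodes in sync behind the scenes.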

    With the Rage4 API monitoring service (like Uptime Robot?), can those VPSes be pruned automatically?

    Should I still have the load balancers in front of those servers, or do everything via DNS?
  5. SilverKnightTech

    SilverKnightTech New Member Verified Provider

    May 16, 2013
    Your setup is pretty close but a few items that I personally would change.

    1.  If you have 2 HAProxy instances (1 in each location), then you would need/want at least 2 web servers at each location, so 4 web servers total.

    2.  MySQL (personal thoughts again here): I would get 1 larger VPS, place it in a third location, and have each VPS connect to it. There is a ton of talk all over the place about how many MySQL servers to have, but to me it all depends on how many reads/writes you're getting. If your site is, say, a WP blog and you cache everything out, the writes to the DB server would be very small, so if needed you could store a read-only copy on each of the 4 smaller web server VPSes as the failover. Yes, 5 DBs in the cluster would be harder to set up, but it can be done. This is the big part of the HA: how much data comes from your MySQL server(s). Heck, for this I would even consider using EC2 and their load-balancing DB server solutions.

    3.  This is the biggest part to me. BACKUPS

    Many providers will give you or sell you backup space. Get some in each location, and then get another backup spot with a separate location/provider (BQBackup.com or someone like them).

    Again, these are just my thoughts.


  6. threz

    threz New Member

    Jun 19, 2013
    For #3 - backups are definitely part of my plan. I had just already figured that part (mostly) out, so I didn't include it in this discussion. 

    Thank you for the input guys. Some options to think about. I'm probably a couple months out from rolling this out, so I'll mull it over.