Best Redundancy

HalfEatenPie

The Irrational One
Retired Staff
So you have your server all set up.

MySQL is running, your Apache/nginx configuration is up, and your website is live. Basically, you've covered the bare bones.

How do you create redundancy?  What rules do you use?  Are you content with just one backup system?  Do you do the "three servers" rule and have two backup servers?  

Do you have automated fail-over?  

Basically, what precautions do you use to make sure your website (or services) remains up?
 

BlackoutIsHere

New Member
Verified Provider
Well to be honest I use aggressive cloudflare rules when possible and I usually just backup to S3 with Duplicity. I am not running anything too critical.
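For reference, a Duplicity run along those lines might look like this — the bucket name, prefix, and GPG key ID below are placeholders, not from the post, and the old-style `s3+http://` target assumes the classic boto backend:

```shell
# Encrypted incremental backup of /var/www to S3.
# Credentials come from the environment; KEYID is a hypothetical GPG key.
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
duplicity --encrypt-key KEYID /var/www s3+http://example-backup-bucket/www
```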
 

Slownode

New Member
Keep hashes of all data and keep two backups. Alternate between them each time you back up, so if something goes poof you still have a backup.

RAID != backup is my motto.

I'd rather have 3 lone drives than 2 RAID1 pairs, that's how I roll.
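The alternating scheme can be sketched in shell — the hosts and paths below are hypothetical, and week parity stands in for "each time you backup, you alternate":

```shell
#!/bin/sh
# Pick the backup target by ISO week parity:
# even weeks go to backup-a, odd weeks to backup-b.
week=$(date +%V)
week=${week#0}   # strip a leading zero so arithmetic stays decimal
if [ $((week % 2)) -eq 0 ]; then
    target="backup-a"
else
    target="backup-b"
fi
echo "backing up to $target"

# Record a hash of every file so silent corruption can be spotted later,
# then ship the tree to this week's target. (Placeholders, so commented out.)
# find /srv/data -type f -exec sha256sum {} + > /srv/data.sha256
# rsync -a /srv/data/ "$target.example.net:/backups/data/"
```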
 

shawn_ky

Member
Probably overkill, but I have Tahoe-LAFS running between 10 servers for important files, MySQL dumps sent to email (smaller DBs), and MySQL dumps backed up to a backup directory, along with a script (snapback2) that keeps daily, weekly, and monthly backups rsync'd to another server. From there everything goes to ownCloud and an offsite NAS. I have DNS replicating to 3 slaves across different companies/servers and MySQL replicating to 2 slaves.

If something goes down, getting the data back will be okay, but getting it all back up would be fun, as I'd have to wait for DNS... I would love to have a system set up so that when one server goes down, the next comes up seamlessly...
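The dump-and-ship part of a setup like that fits in two cron lines — the paths and the backup host are placeholders (snapback2 would handle the daily/weekly/monthly rotation itself), and note that `%` has to be escaped in crontab entries:

```shell
# m h dom mon dow  command
# Nightly dump of all databases, kept locally, then pushed offsite.
30 2 * * *  mysqldump --all-databases | gzip > /backup/mysql/all-$(date +\%F).sql.gz
0  3 * * *  rsync -a /backup/ backuphost:/backups/$(hostname)/
```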
 

wlanboy

Content Contributer
Two VPSes on two continents (different provider/different tier) doing master-master replication for MySQL and MongoDB.

Two VPSes (different provider/different tier) rsyncing my Ruby scripts, websites, and crons.

One VPS doing the nasty things (mail server).

One VPS collecting all encrypted backups and rsyncing them to my NAS at home.

The NAS rsyncs with the NAS hosted by my parents (who live about 300 miles away).

I have not had downtime of more than one day per year. So for me the following things are more important:

  • One staging vps (KVM) running web frontend and db to test everything
  • SVN holding every single line of code (including config)
  • SVN managing branches and tags for every important config of a vps

If one of my VPSes breaks, I do the following:

  • Login
  • Create users per script
  • Install servers, libs and ruby/node.js per script
  • svn checkout /etc and /home folders
  • restart services
  • Done
Everything else is handled automatically by cronjobs/rsync.
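The recovery steps above amount to a short script. A minimal sketch, assuming the repository URL and helper script names (they are placeholders, not from the post):

```shell
#!/bin/sh
# Rebuild a broken VPS from scratch; everything of value lives in SVN.
sh create_users.sh        # users per script
sh install_packages.sh    # servers, libs, ruby/node.js per script

# Check configs and home folders out over the live directories;
# --force lets svn populate the existing non-empty paths.
svn checkout --force svn://repo.example/vps1/etc  /etc
svn checkout --force svn://repo.example/vps1/home /home

sh restart_services.sh    # bring everything back up
```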

And yes:

but getting it all back up would be fun as I'd have to wait for DNS...
I am not happy with DNS either.
 

shawn_ky

Member
  • One staging vps (KVM) running web frontend and db to test everything
  • SVN holding every single line of code (including config)
  • SVN managing branches and tags for every important config of a vps
Interesting use of SVN... may need to look at this more...
 

peterw

New Member
How do you create redundancy? What rules do you use?
Using three servers on three different networks in three different countries.

Do you have automated fail-over?
TTL set to 5 minutes and updating DNS records per script.

what precautions do you use to make sure your website (or services) remains up?
Selecting providers carefully and hosting only services I am able to host. I will never host an IRC server, a mail server, or any JS/Py/Java web thing.
 

peterw

New Member
I am using dyndns features.

Any chance seeing the script?
#!/usr/bin/php
<?php
# Usage: php updateip.php 2 test.com
$type   = (int) $argv[1];
$domain = $argv[2];

echo $type . "\n";
echo $domain . "\n";

$servers = array('198.23.100.1', '198.23.100.2', '198.23.100.3');
$status  = array();

# True if a TCP connection to $host:$port succeeds within $timeout seconds.
function availableUrl($host, $port = 80, $timeout = 5) {
    echo $host . "\n";
    $fp = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($fp === false) {
        return false;
    }
    fclose($fp);
    return true;
}

function updateIp($type, $domain, $newip) {
    switch ($type) {
        case 0: # dyndns.org
            $updateurl = "http://[USER]:[PASSWORD]@members.dyndns.org/nic/update?hostname=[DOMAIN]&myip=[IP]";
            break;
        case 1: # no-ip
            $updateurl = "http://[USER]:[PASSWORD]@dynupdate.no-ip.com/nic/update?hostname=[DOMAIN]&myip=[IP]";
            break;
        case 2: # he.net
            $updateurl = "http://[DOMAIN]:[PASSWORD]@dyn.dns.he.net/nic/update?hostname=[DOMAIN]&myip=[IP]";
            break;
        default:
            echo "unknown provider type\n";
            return;
    }

    $updateurl = str_replace("[DOMAIN]", $domain, $updateurl);
    $callerUrl = str_replace("[IP]", $newip, $updateurl);

    echo $callerUrl . "\n";

    # http_get() needs the PECL http extension; file_get_contents works everywhere.
    $context  = stream_context_create(array('http' => array('timeout' => 5)));
    $response = @file_get_contents($callerUrl, false, $context);
    if ($response === false) {
        echo "update request failed\n";
        return;
    }
    echo $response . "\n";
}

foreach ($servers as $ip) {
    $status[] = availableUrl($ip);
}

# Walk the priority list: point DNS at the first server that answers.
if ($status[0]) {
    updateIp($type, $domain, $servers[0]);
} elseif ($status[1]) {
    updateIp($type, $domain, $servers[1]);
} elseif ($status[2]) {
    updateIp($type, $domain, $servers[2]);
}

echo "update done\n";
?>

I have my personal priority list for the servers, so the update path is easy: always use server1, and only if it is not available use server2, and so on.
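Run from cron, the check-and-update script fires every few minutes; with a 5-minute TTL that bounds the failover window. The path below is a placeholder:

```shell
# Check server availability and repoint DNS every 5 minutes (he.net = type 2).
*/5 * * * *  php /root/updateip.php 2 test.com
```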
 

D. Strout

Resident IPv6 Proponent
lol redundancy. I keep local backups of everything with some uptime checking. If anything went down I would be notified and could put something back online within at most an hour. Nothing I have is critical enough to warrant redundancy, or any system more complex than this. So my site is down for an hour - big deal.

If I did need redundancy, though (at least on a site without a DB), what I'd probably do is have a master server on which I make changes. At least two slaves would pull from this, and the public-facing stuff would run off the slaves, probably with round-robin DNS among them. If one slave went down, the DNS server would remove it from the RR rotation, leaving the other slave(s) to run stuff. If all the slaves went down, the DNS would be pointed at the master, which of course would be capable of running the service. But if I hadn't gotten around to bringing the slaves back online by then, I should be kicked out of the sysadmin business.
 

wlanboy

Content Contributer
I like the idea of small, dumb, exchangeable frontend servers.

They don't write anything to the DB (comments are done via Disqus), so they only need a local DB slave.

Still thinking about the best way to round-robin through them. I don't like the idea of VPSes just sitting around waiting for the master to fail.

I've seen it done with Merc and Git... SVN kinda old as hell.
Old but working.
 

NodeBytes

Dedi Addict
Most VMs on my dedi are duplicated to KVM machines, with Route53 health checks to fail over if the dedi doesn't respond.

I'm currently using offloaded MySQL which is backed up daily; I will be moving this to a VM or another dedi soon and adding a backup.

All servers are backed up to storage VPSes from a couple of the providers that offer backup VPSes. Backupsy is a favorite.
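With the AWS CLI, a Route53 health check of that kind can be created roughly like so — the IP, caller reference, and thresholds are placeholders; failover record sets then reference the returned health check ID:

```shell
# HTTP health check against the dedi's public IP.
aws route53 create-health-check \
    --caller-reference dedi-check-1 \
    --health-check-config IPAddress=203.0.113.5,Port=80,Type=HTTP,ResourcePath=/,RequestInterval=30,FailureThreshold=3
```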
 

Quexis

New Member
Verified Provider
I've seen it done with Merc and Git... SVN kinda old as hell.
I use a triangular workflow for this (Git). I push to my server, which has a post-receive hook making it push to a private Bitbucket repository. I have an academic account which allows me to create unlimited private repositories; it's kind of great (and also one of the reasons why I advocate Bitbucket over GitHub).
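The post-receive hook for that workflow is essentially a one-liner; the remote name `bitbucket` is an assumption, standing in for whatever the offsite repository is called:

```shell
#!/bin/sh
# .git/hooks/post-receive on the server:
# mirror every received update to the private Bitbucket repository.
git push --mirror bitbucket
```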
 

dmmcintyre3

New Member
Backups every 30 minutes to 3 continents and a single stable VM. No Cloudflare, CDNs, etc. This gives me plenty of uptime, and I haven't felt the need to add redundant web servers.

DNS is replicated to 4 DDoS-protected VPSes, and I have never had a DNS-related outage when using my own DNS clusters. However, I have had DNS-related outages when using other DNS services (dns.he.net, Cloudflare, afraid.org, etc.).
 

VPSCorey

New Member
Verified Provider
Backups every 30 minutes to 3 continents and a single stable VM. No Cloudflare, CDNs, etc. This gives me plenty of uptime, and I haven't felt the need to add redundant web servers.

DNS is replicated to 4 DDoS-protected VPSes, and I have never had a DNS-related outage when using my own DNS clusters. However, I have had DNS-related outages when using other DNS services (dns.he.net, Cloudflare, afraid.org, etc.).
Expecting a meteor to strike?
 

Maximum_VPS

New Member
Verified Provider
Daily pull to offsite, weekly to cold HDD, for "core services". ATM no failover except DNS :/
 