amuck-landowner

VPS Business Idea

D. Strout

Resident IPv6 Proponent
OK, this might be a terrible idea due to how much work would be involved, but that's for others to decide. And, since I don't have the money or know-how to make this happen, it's for others to implement too if it's a good idea.

With that out of the way, here's the idea: an anycasted VPS provider. In this business model, a VPS provider would have several different locations, distributed globally (or countrily? is that a word?). When a customer buys a VPS from them, they would actually be paying for multiple servers, one in each DC, so it would be, for instance, 5x as expensive as a normal VPS if said provider had 5 locations. And perhaps also a bit extra for the "premium anycast experience". Each server would come with one local IP address, but then there would be a single anycasted IP for the whole group of servers. The local addresses would give the customer separate SSH access to each server. They could then either make different content available by region, or the provider could offer a "synchronization" option to make some or all of the VPSes clones of each other.
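On the routing side, a minimal sketch of what each location might run, assuming every PoP speaks BGP to its upstream router. This is in BIRD 1.x syntax, and the prefix, ASNs, and neighbor address below are documentation/example values, not anything real:

```
# Hypothetical BIRD config fragment for one PoP announcing the shared
# anycast prefix. Every location announces the same prefix, and Internet
# routing delivers each user to the nearest PoP.
protocol static anycast_routes {
    route 203.0.113.0/24 reject;    # originate the anycast prefix locally
}

protocol bgp upstream {
    local as 64512;                 # hypothetical private ASN for the provider
    neighbor 192.0.2.1 as 64511;    # the DC's upstream router (example address)
    export where proto = "anycast_routes";
}
```

The per-server unicast IPs (for SSH) would be announced normally by each DC; only the shared prefix is announced from all locations at once.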

The biggest objection I can see to this is that there are already companies that offer anycast options. Cloudflare, for instance, copies your site to their server and makes it available via anycast. The reason a system like this is better, though, is that it gives you more versatility. With this you can anycast anything, like having a custom DNS setup, synchronized download mirrors, whatever. Part of the fun is seeing what people will do with the setup.
 

Nett

Article Submitter
Verified Provider
Just build a cloud. Multiple SSH logins for "one" server are not good.
 
Last edited by a moderator:

drmike

100% Tier-1 Gogent
What company, Richard?  Sounds interesting.

Strange Friday daydream ideas out of the regulars :)  I like it...
 

Everyday

New Member
Verified Provider
Sounds interesting but also sounds very much like a CDN. Just curious, what advantage do you see in having root access to all the servers within the CDN?
 

fixidixi

Active Member
@D. Strout: I don't think you've considered this:

The way I imagine what you've said is that you are running 5 different VPSes in 5 different DCs, and sometimes these get 'synced'. But that's the issue: the data on those VMs must be in sync the whole time, and that is much more easily solved by some cloud setup.
 

HalfEatenPie

The Irrational One
Retired Staff
@D. Strout: I don't think you've considered this:

The way I imagine what you've said is that you are running 5 different VPSes in 5 different DCs, and sometimes these get 'synced'. But that's the issue: the data on those VMs must be in sync the whole time, and that is much more easily solved by some cloud setup.
I can't stress what @fixidixi said enough.

This is the most important and often overlooked part.  What we're talking about is syncing the operation of each VM to the others at the virtualization level.  I'm assuming he meant that inputting the same commands across all the VMs would yield identical VMs in each geographic location.

First, the biggest issue I can think of is at the application level.  Most commonly, working with databases.  Given the network latency between locations, how would one database know what to fix on the other database?  If one copy of the database is updated in one location, how is the other copy affected if it is called during the "sync" period?  I won't delve too deep into this at the moment (because I don't consider myself an expert, nor can I commit the time to research it in depth), but you get the idea.  The latency between servers is huge compared to something that's right there on the local network, leaving "holes" open where things can get messy.  At the application level, I think it would be a big problem to work with.
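To make the database problem concrete, here is a toy illustration (not a real database) of the conflict being described: two replicas receive the same two writes, but network latency delivers them in different orders, and a naive "apply on arrival" policy leaves the copies permanently diverged:

```python
# Toy model of two database replicas that apply writes in arrival order.

def apply_writes(writes):
    """Apply (key, value) writes in arrival order; last write wins."""
    store = {}
    for key, value in writes:
        store[key] = value
    return store

# Two clients update the same key at nearly the same time.
write_a = ("balance", 100)  # reaches replica 1 first
write_b = ("balance", 250)  # reaches replica 2 first

replica1 = apply_writes([write_a, write_b])  # sees A, then B
replica2 = apply_writes([write_b, write_a])  # sees B, then A

print(replica1)  # {'balance': 250}
print(replica2)  # {'balance': 100}  -- the replicas now disagree
```

Real replication systems need extra machinery (timestamps, version vectors, a single write master) precisely because arrival order alone cannot resolve this.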

Second, the way you implement this is important.  This is very similar to how many computer games tackle multiplayer functions.  

In some games, each "player command" is emulated client-side, meaning the calculations are done on the players' computers, giving a "rough" estimate of what's happening.  The pro is that no "master" computer is required, but the con is that the results aren't 100% identical.  They're similar, but once you include latency they're not exactly the same across all clients.  Another method is to have everything managed by a central server (the server processes the information and then sends the "results" out to all the clients).  This is great because you avoid the conflicts that can arise from the previous method.  But the issue is that it can take too long: the server has to receive the information, perform the calculations, then send the results back out (whereas the first method skips that final step because the data is already there locally).

In relation to your concept (for servers), do we want to set it up so that all of your commands are sent out and processed at each VM?  Or should we set it up so that there's one "master" server and all the other "slave" servers are just replicas of the master VM?  The first option has the potential for tons of issues (especially with updating local repos at each location, the latency, etc.) and could end up requiring maintenance on each server individually (thus defeating the entire purpose of this experiment).  The second falls into the exact same application-level issue I talked about in point 1.

I love the idea, and I'd love for it to work, but there are too many potential problems.  Right now this is being solved at the application level (e.g. MySQL has a replication feature for this, or you just do it via geoDNS and manually configure each VM).  I don't think it's feasible at this time (obviously ignoring major investment and whatnot).
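The MySQL feature referred to here is its built-in asynchronous replication, which is the classic master/slave shape discussed above. A rough sketch of the relevant my.cnf fragments (server IDs and file names are illustrative, not a complete setup):

```
# On the master:
[mysqld]
server-id = 1
log_bin   = mysql-bin     # binary log that replicas read changes from

# On each replica:
[mysqld]
server-id = 2             # must be unique per server
relay_log = relay-bin
read_only = ON            # writes go only to the master
```

The replica then has to be pointed at the master (CHANGE MASTER TO ... on older versions) before replication starts, and because it is asynchronous, the same cross-DC latency window described in point 1 still applies.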

That's my two cents.
 

peterw

New Member
You can create a shared file system, but as Mr. Pie said, there is no solution that covers all applications. What should happen if I add a PEAR module on one VM? Nice idea, but the technology is lacking.
 

Everyday

New Member
Verified Provider
I think the master/slave configuration has merit but needs a lot of work. Maybe number them all, so that if the master is down then slave1 is next in line, like a chain. But a shared file system over that distance will still be a problem.
 

Thelen

New Member
Verified Provider
You can pretty much already do this with AWS, you know: not on an IP basis, but with Route53. And I can't really see any need for a fixed IP, except maybe DNS caching. Interesting idea though!
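For reference, the Route53 approach routes at the DNS layer with latency-based records rather than one anycast IP: each region gets its own record, and Route53 answers with the lowest-latency one. A sketch of a change batch (domain and addresses are documentation placeholders):

```
{
  "Comment": "Hypothetical latency-based records for two regional VPSes",
  "Changes": [
    { "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.", "Type": "A",
        "SetIdentifier": "us-east", "Region": "us-east-1",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.10" }] } },
    { "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.", "Type": "A",
        "SetIdentifier": "eu-west", "Region": "eu-west-1",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "198.51.100.20" }] } }
  ]
}
```

The trade-off versus true anycast is exactly the DNS caching point: clients keep using a cached answer until the TTL expires, whereas an anycast IP reroutes as soon as BGP converges.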
 