I think it will all come down to a few major points:
- Customer Service
- Whether you can come up with your own software as a value-added service
- How strong your performance is going to be
- How quickly you can adapt to market changes
- Last but not least, pricing!
For example, DO is selling 8GB of RAM for $80. While that price point is decent, you can find much cheaper dedicated servers with higher bandwidth for the same cost. So the cloud hype only works on people who aren't aware of the industry; the ones who are usually go to the typical small VPS/dedicated server hosting companies, as they tend to get better performance and better overall service.
The following doesn't apply to DO, but rather to high-availability setups like iwStack or Leaseweb Cloud.
The entire point of the cloud (for me, anyway) is redundancy. A single cheap dedicated server with a single hard drive is a single point of failure: the drive dies, a hardware problem crops up, etc., and you're up a creek without a paddle.
For absolutely critical infrastructure, I use real cloud providers, my favorite of which is Leaseweb. They keep my VM on SAN clusters, and if the physical server goes down, they automatically reboot my VM on a working node. This system delivers high availability, and what it's worth depends entirely on how much you value high availability. tldr: I don't have to deal with the hardware. I just deploy my application and I'm done. If there are hardware problems down the line, the system already takes care of them.
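To make the failover idea concrete, here's a minimal sketch of that kind of setup. This is not Leaseweb's actual mechanism; the node names, probe port, and timings are all made up for illustration. The key idea is that the VM's disk lives on shared SAN storage, so any healthy node can boot it:

```python
import socket
import time

# Hypothetical node names; any real cloud's internals differ.
NODES = ["node-a.example.net", "node-b.example.net"]
CHECK_PORT = 22          # port we probe to decide a node is alive
CHECK_INTERVAL = 10      # seconds between health checks

def is_alive(host: str, timeout: float = 3.0) -> bool:
    """Return True if we can open a TCP connection to the node."""
    try:
        with socket.create_connection((host, CHECK_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def boot_vm_on(node: str) -> None:
    """Placeholder for the provider's 'start this VM from SAN storage' call."""
    print(f"booting VM from SAN image on {node}")

active = NODES[0]
while True:
    if not is_alive(active):
        # Active node died. Because the VM's disk is on the SAN, any
        # healthy node can boot it; pick the first one that responds.
        for candidate in NODES:
            if candidate != active and is_alive(candidate):
                active = candidate
                boot_vm_on(active)
                break
    time.sleep(CHECK_INTERVAL)
```

From the customer's side this whole loop is invisible, which is exactly the "I just deploy my application and I'm done" part.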
If I'm running a service that I'm fine with losing and re-configuring (e.g. my dev environment, or let's say a hobby project like a game server), I won't place that much value on high availability. However, if it's something genuinely critical to me, I would (in a heartbeat) put it on a proper high-availability setup.
For the cheaper/budget market, I agree that would be acceptable. However, for individuals who don't mind paying more, the reliability of a real cloud setup that minimizes risk (risk management, basically) is the way to go, even if it's a little more expensive.
Also, I don't see hardware as an issue at all; usually everyone has the same hardware, from "the latest Xeon server!" to "dual L5420s for everyone!". Sure, everyone likes a cheap VPS or dedicated server, and bare-minimum specs are fine enough to run your application. However, the important things that separate the budget/hobby brands from the more "premium" brands are the bandwidth blend, route/BGP optimization (such as Incero's Route bot or Internap's MIRO), and the overall connection to the greater web. A server on the internet isn't a one-size-fits-all situation either. You have to actually look at which peers are available, which networks are available, and where the strongest part of each network is.
Take Vultr's Japan location, for example (since that's a case study I'm very familiar with). Choopa, the parent company of Vultr, offers KVM VPSes out of their Japan location. Now, I enjoy and use Vultr, and I see their brand going a long way. However, their Japan bandwidth blend is very limited. Originally, when they opened up, it was single-homed, and traffic to any network off the Japanese islands was re-routed through the United States. This meant additional latency for, say, a viewer in China: instead of going directly from Japan to China, your packets were sent from Japan to Seattle, to Los Angeles, to Hong Kong, and then into China. Vultr has improved their network blend since then, and I applaud them for that.
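You can put rough numbers on why that detour hurts. The sketch below estimates the best-case round-trip time from great-circle distance, assuming light travels roughly 200 km per millisecond in fiber. The coordinates are approximate, real cable paths are longer than great circles, and switching/queuing delay comes on top, so treat the output as a lower bound only:

```python
from math import radians, sin, cos, asin, sqrt

# Approximate city coordinates (lat, lon); illustrative only.
CITIES = {
    "Tokyo":       (35.68, 139.69),
    "Seattle":     (47.61, -122.33),
    "Los Angeles": (34.05, -118.24),
    "Hong Kong":   (22.32, 114.17),
    "Shanghai":    (31.23, 121.47),
}

FIBER_KM_PER_MS = 200  # light covers roughly 200 km per ms in optical fiber

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def min_rtt_ms(path):
    """Best-case round-trip time along a path of city names."""
    one_way = sum(distance_km(CITIES[a], CITIES[b]) for a, b in zip(path, path[1:]))
    return 2 * one_way / FIBER_KM_PER_MS

direct   = ["Tokyo", "Shanghai"]
rerouted = ["Tokyo", "Seattle", "Los Angeles", "Hong Kong", "Shanghai"]
print(f"direct:   {min_rtt_ms(direct):6.1f} ms minimum RTT")
print(f"rerouted: {min_rtt_ms(rerouted):6.1f} ms minimum RTT")
```

Even this idealized estimate puts the trans-Pacific detour at roughly ten times the direct route's latency, which matches what those China-based viewers experienced.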
You also have to look at the peering offered at the datacenter and see which network blends are available. Certain datacenters are great with domestic traffic but aren't as well tuned for international traffic; one focuses on volume, the other on quality. One example of this is Leaseweb, which offers a "Premium Blend" versus a "Volume Blend". Now, I've never actually experienced their Premium Blend (my dedicated server with them is in a location without it), but essentially the Premium Blend is backed by more expensive uplinks. Certain bandwidth companies will sell you an uplink for $XX.XX/Mbit, while others will do it for $0.XX/Mbit (e.g. Cogent vs Level 3). That doesn't necessarily mean Level 3 is better because it's more expensive, and it doesn't mean Cogent is crap because it's cheaper. Again, it all depends on the design parameters of your service. Cogent is great for volume, which is why services like Netflix (which are very bandwidth intensive) use a ton of Cogent: for bulk traffic, as long as the throughput is fine, the latency doesn't really matter, so you sacrifice latency for volume.
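As a back-of-the-envelope illustration of why the blend shows up on a provider's bill (the per-Mbit prices here are hypothetical placeholders, not any carrier's actual rates):

```python
# Hypothetical per-Mbit/month commit prices; real transit pricing varies
# by market, volume, and contract, and changes over time.
PREMIUM_PER_MBIT = 12.00   # the "$XX.XX/Mbit" end of the spectrum
BUDGET_PER_MBIT = 0.80     # the "$0.XX/Mbit" end of the spectrum

COMMIT_MBIT = 10_000       # a 10 Gbit/s commit

def monthly_cost(premium_share: float) -> float:
    """Monthly transit bill for a blend with the given premium fraction."""
    premium = COMMIT_MBIT * premium_share * PREMIUM_PER_MBIT
    budget = COMMIT_MBIT * (1 - premium_share) * BUDGET_PER_MBIT
    return premium + budget

for share in (0.0, 0.25, 0.75):
    print(f"{share:.0%} premium blend: ${monthly_cost(share):>10,.2f}/month")
```

The gap between a volume-heavy and a premium-heavy blend is an order of magnitude, which is why the cheaper brands lean on the cheaper carriers.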
Now, certain services are sensitive to latency. The most common example is game servers, though other services, such as financial systems and scientific research, are also latency-sensitive. For the sake of simplicity, and since everyone here is familiar with game servers, I'll go with that example. Game servers are sensitive to latency because of the way the game engine processes the actions and events occurring within it. There are, of course, interpolation and other "lag-mitigating" mathematical formulas and systems available, but those mask the issue rather than solve it. The real solution is more peers: networks with better peering or higher-quality bandwidth (as in less overselling and more capacity available on demand). A spike in latency can, in the game server sense, be the difference between life and death. Ever had that experience where you "headshot" someone but died first anyway? More than likely, the other person's killing shot reached the server before yours did, and the server accepted theirs because that information arrived first. Apply this to financial systems and you can see how important this is. So, again: the answer is more peers, backed by enough bandwidth to sustain them.
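Here's a toy model of that "the server accepted the shot that arrived first" behavior. This isn't any real engine's netcode; the player names, timestamps, and latencies are invented. It just shows a server processing events strictly in arrival order, so the lower-latency player wins the trade:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ShotEvent:
    arrival_ms: int                     # when the packet reached the server
    shooter: str = field(compare=False)
    target: str = field(compare=False)

# Two players fire "simultaneously" in real time, but their packets
# arrive at different times because of different network latency.
events = [
    ShotEvent(arrival_ms=1042, shooter="alice", target="bob"),  # 42 ms path
    ShotEvent(arrival_ms=1095, shooter="bob", target="alice"),  # 95 ms path
]

alive = {"alice", "bob"}
heapq.heapify(events)
while events:
    shot = heapq.heappop(events)        # server processes by arrival order
    if shot.shooter in alive and shot.target in alive:
        alive.discard(shot.target)
        print(f"{shot.shooter} kills {shot.target} at t={shot.arrival_ms} ms")
    # A later-arriving shot from an already-dead player is simply dropped.
```

Run it and alice wins purely because her packet got there 53 ms earlier; shaving that path down (better peering, shorter routes) is the only real fix.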
For those well informed, this is like "How to Find a Server 101". Understanding this simple concept is what separates a simple reseller who sells VPSes off their server/rack from a proper specialist who understands the end client's needs. It's 2016. I see LowEndBox working to educate their user base, but most of it is simple tutorials on how to set up a web server with the next "hot" piece of software. People see that, then they see SolusVM, and suddenly a new 13-year-old "web host CEO President Founder" comes into the market, acts cocky as hell, and thinks they own the place. The same questions constantly come up, over and over, from different/new people who have yet to find the search button.
The future of the VPS hosting industry is good if providers and clients alike can understand this concept and easily communicate their needs. That would mean the industry has continued to mature, understands the importance of specialists, and can easily spot and weed out a Johnny Nguyen scenario. A random 16-year-old kid with his parents' credit card doesn't understand that the actions he takes to "set up his own company" (and then later sell out or fail) are one of the reasons some people see the VPS industry as a joke or as a free-for-all wild west. Those people can easily miss the professionals who actually know what they're talking about, like @Francisco or @mitgib, and lump them in with the youngsters (mostly due to the nature of the internet and how almost anyone can claim to be anything on it). The kids may know what function creates what result, but they don't understand the fundamental logic behind it.
Anyways, that's my two cents. This started out as a bitching post/response post and then kinda became an essay. So... GG. My views are a bit pessimistic and quite different from the response the thread originally intended, but I think it's worth a look from a theoretical, ideal perspective.
tldr: I'm an old man who likes to write.