# vpsBoard - Community input and feedback on our infrastructure and future



## TheLinuxBug (Jul 10, 2013)

Hello Everyone,

Over the past month there have been several topics going back and forth, from selling Ad space to finding a way that we can make this place fully community supported. MannDude and I have been going back and forth in private message for a few weeks now on this topic and we thought that it would be something that we should allow the community to have some input on.

Currently vpsBoard's infrastructure is relatively small: the site is hosted on a BuyVM 256MB KVM instance and uses BuyVM's offloaded MySQL service for the database. So far this has been a more than reasonable start; however, as this community grows, this will need to change to keep things moving at a reasonable speed.

My proposal to MannDude was to create a cluster of servers instead of using just one or two large servers. There are probably many ways to do this, but we are discussing using not only the services that are the topic of this board (VPS) but also trying to keep this as low-cost and community-driven as possible. There have recently been a few posts on Varnish here, so as you can imagine that is one of the pieces being discussed.

So, my proposed method was to use several DDoS-protected VPS servers as reverse-proxy entry points, several backend web servers to serve the non-static content, and at least two reasonably sized SSD VPS servers in master-master replication to provide the MySQL services needed by the cluster. The reverse proxies would run Varnish and HAProxy (the latter to provide better session control, as forums are very dependent on session cookies).

When discussing it with MannDude I threw together a quick Visio drawing of what this might look like, to give you a better graphical idea of what I am talking about. You can find the drawing here.

At this point you are probably wondering: what does this have to do with me? Right?

One of the reasons I suggested this clustered method is that it would let providers and community members alike contribute to the infrastructure. For the backend servers, we could allow the community to provide the instances, reducing the cost to MannDude.

Okay, so your next question is likely: "How do we do this while staying provider-neutral?" Using this type of setup would allow people to contribute resources anonymously, as the only IPs members see when coming to the site would be those of our entry proxy servers, not those of the backend resources. Anyone who is willing to contribute is not going to be directly advertised; however, MannDude has said he will give an honorable mention to those who decide to help (we have not yet determined how that would be displayed). Those who contribute also cannot claim to host the site themselves, as it would be a distributed infrastructure.

If one contributor can no longer provide a service, we could remove it from the cluster and keep running without much immediate effect, then replace the node with a newly contributed server as we are able.

All that said, maybe the way I have described is not the best way, and it is certainly not the only way. So while I am bringing this option to you, we are still very much open to suggestions on how this can be done or made better. Please leave feedback, make suggestions, and let's come up with a way to keep this a community initiative instead of having ads plastered everywhere.

Also, if this is of interest to you and you wish to help build or maintain this infrastructure, and you have some skill sets you would be willing to offer *for free* to help us out, please speak up and let us know.


----------



## Kenshin (Jul 10, 2013)

Having master-master MySQL servers across the west/east coast just sounds like a nuclear bomb waiting to blow up. It would make more sense to find a primary central-US server with good routing to the west/east coast reverse proxies, then run slave replication (and backups) for quick failover in case crap happens. Much more deterministic failover than having to deal with master-master breakage.
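For what it's worth, the failover being described is only a few statements on the slave in that era of MySQL. A rough sketch (sequence from memory; RESET SLAVE ALL requires 5.5.16+):

```sql
-- Rough failover sketch for classic MySQL 5.5 master -> slave replication.
-- Assumes the master is dead and the slave has finished applying its relay log.
STOP SLAVE;
RESET SLAVE ALL;            -- forget the old master (plain RESET SLAVE on pre-5.5.16)
SET GLOBAL read_only = 0;   -- start accepting writes
-- ...then repoint the web backends (or the proxies) at this server.
```

The point being that a slave promotion is a known, deterministic procedure, whereas untangling a diverged master-master pair is not.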


----------



## texteditor (Jul 10, 2013)

Rent a dedicated server and then set up one large KVM container that uses all of the available resources, so we get the power of a dedicated server while 'technically' still using a VPS


----------



## shawn_ky (Jul 10, 2013)

I'd like to help, but at this point I don't have the skill set... I'd like to learn more, because this is something I'd like to do myself. I don't really want to depend on one provider. Count me onboard for whatever I can do to help, even if it means buying a premium membership or something to aid the cause...


----------



## drmike (Jul 10, 2013)

Master-Master... Can't say I've tried that. Who has done that in MySQL without an issue?

I won't kick this idea, as I implement the very same general idea 

I see Varnish to HAProxy.  Why both?   Varnish can do whatever HAProxy can for a project like this (I think).

How is the determination being made as to where a user ends up east or west?  GEO-DNS I take it....?


----------



## TheLinuxBug (Jul 10, 2013)

buffalooed said:


> Master-Master... Can't say I've tried that. Who has done that in MySQL without an issue?   I won't kick this idea, as I implement the very same general idea   I see Varnish to HAProxy.  Why both?   Varnish can do whatever HAProxy can for a project like this (I think).   How is the determination being made as to where a user ends up east or west?  GEO-DNS I take it....?



I have done master-master replication with MySQL 5.5, and it actually works pretty well.  I suppose on a forum with a lot of activity *@Kenshin* may be correct that master-slave would be a bit quicker and more reliable.  If replication lags, you could see a new item on one server before it shows up on the other.  In my experience, though, running one server in San Francisco, CA and one in Ashburn, VA, I have only seen this be an issue a few times (and it usually corrects in less than half a minute).  Also, if one of the servers fails it can sometimes be a delicate process to get things resynced, but I haven't had many issues with it.
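For anyone curious, the relevant my.cnf lines for a master-master pair are just a handful. A sketch assuming two masters (server IDs and values are placeholders):

```ini
# my.cnf fragment on master A; mirror on master B with server-id = 2
# and auto_increment_offset = 2 (placeholder values throughout)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
# interleave AUTO_INCREMENT values so simultaneous inserts on both
# masters never collide: A hands out 1,3,5,... and B hands out 2,4,6,...
auto_increment_increment = 2
auto_increment_offset    = 1
```

Each server is then pointed at the other with CHANGE MASTER TO, same as ordinary replication.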

Most forums issue session cookies specific to the server you are accessing, unless you share the session directory between backends (using NFS, SSHFS, etc.).  For redundancy, such remote mounts aren't advised, because if the server exporting the mount dies, well, then you have a huge issue.  With HAProxy you can keep sending a given user to the same backend (using a special cookie it inserts), which lets you keep your session while still distributing load between servers.  If one backend fails, HAProxy sends you to the other one, with the only caveat being that if you have not checked "Remember me" you will have to log in to the forum again.
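To make the cookie trick concrete, the HAProxy side is only a few lines. A sketch, with placeholder backend names and addresses:

```
# haproxy.cfg backend fragment (names and addresses are placeholders)
backend forum
    balance roundrobin
    # insert a SRV cookie naming the chosen backend; "indirect nocache"
    # keeps the cookie out of the backend's view and out of shared caches
    cookie SRV insert indirect nocache
    server web1 10.0.0.11:80 cookie w1 check
    server web2 10.0.0.12:80 cookie w2 check
```

New visitors get round-robined; once the cookie is set, they stick to that backend until it fails a health check.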

Depending on the setup, yes, GEO-DNS would be used to route the client to the correct reverse proxy for their region.

Hopefully that helps clarify things a bit.  If you have any other questions I can answer for you, please let me know.

Cheers!


----------



## Epidrive (Jul 11, 2013)

Why not just go with the new OVH infra? It is certainly a great solution to this, though you would have to wait until it's implemented. (P.S. they now offer DDoS-protected servers for almost the same amount you normally pay per month.)


----------



## TheLinuxBug (Jul 11, 2013)

FrapHost said:


> Why not just go with the new OVH infra? It is certainly a great solution to this, though you would have to wait until it's implemented. (P.S. they now offer DDoS-protected servers for almost the same amount you normally pay per month.)


 

Part of the goal (in my opinion) is to use VPSes (as this is a board about VPS).  Also, we are trying to keep this low-cost, and we will be asking providers if they wish to contribute some resources.  Another consideration is that we would like availability in more than a single location (redundancy) in case something does happen to a server, like the issues we have seen in the past with BuyVM.

Cheers!


----------



## drmike (Jul 11, 2013)

TheLinuxBug said:


> Most forums issue session cookies specific to the server you are accessing.  With HAProxy you can keep sending a given user to the same backend (using a special cookie it inserts), which lets you keep your session while still distributing load between servers.  If one backend fails, HAProxy sends you to the other one, with the only caveat being that if you have not checked "Remember me" you will have to log in to the forum again.


Typically, server determination from a pool of available backends is done via simple hashing: hashing of the requesting party's IP address or of the URL being requested.

There has to be a way to accomplish that inside of Varnish.  It's been years since I dabbled with Varnish.  The fewer pieces involved, the better.
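If memory serves, Varnish 3's built-in client director does exactly this: it picks a backend by hashing client.identity, which defaults to the requester's IP. Something like the following (Varnish 3 syntax; backend addresses are placeholders):

```
# Varnish 3 VCL sketch; backend addresses are placeholders
backend web1 { .host = "10.0.0.11"; .port = "80"; }
backend web2 { .host = "10.0.0.12"; .port = "80"; }

# "client" director: chooses a backend by hashing client.identity,
# which defaults to the requesting IP address
director pool client {
    { .backend = web1; .weight = 1; }
    { .backend = web2; .weight = 1; }
}

sub vcl_recv {
    set req.backend = pool;
}
```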

Rest of the plan sounds pretty good to me.


----------



## manacit (Jul 11, 2013)

This is probably the most over-engineered setup possible for running an extremely simple, and relatively small, forum.

You're changing something that would be as simple as 

Request <-> Internet <-> (Server | Database) - Traditional, quick, easy. 

to

Request <-> Internet <-> Backend Server <-> Internet <-> Database <-> Other Database - Downright unwieldy. 

For what, to save 80ms of latency at most? It's going to cause you nothing but headache and time wasted. Not to mention the security concerns... 

Sure, you want it to be cheap. Know what else is cheap? Hosting it on a VPS. You don't even need a dedicated server at this point, just a VPS with a couple of gigs of RAM. Set up some caching, and it'd be quick (on a nice SSD or even HDD VPS, it'll probably still be quick!).

Hacky HA is only going to cause more problems than it'll fix.

You could pick up a small dedicated server that would host this until it dies, I guarantee it. You could likely even host it on a medium sized VPS for a lot of time to come.


----------



## TheLinuxBug (Jul 11, 2013)

buffalooed said:


> There has to be a way to accomplish that inside of Varnish.  Been years since I dabbled with Varnish.  Less pieces involved, the better always.



If you know of a way to accomplish this I am all ears; I was actually trying to figure this out myself and hadn't had much luck yet (on a setup I already have running for a client's forum).

*@manacit*, part of the goal is redundancy, and you can't accomplish that with a single server.  Also, once again, the other idea is to reduce cost, and purchasing dedicated servers does not help with that.  The specs in my drawing are extremely high-end examples (as I stated, I did it VERY quickly); since the site currently runs on a 256MB VM, the setup once fully configured would probably be made up of 4 backend servers (2 backends for each frontend) with 256-512MB RAM, 10-20GB disk, and ~500-1000GB bandwidth.  The MySQL situation is still up for debate, but I believe it could be done on 1-2 SSD VPSes with 512MB-1GB RAM.

Edit: Also one important thing you may not know is that Varnish caches MySQL requests as well so unless it is brand new content, you will not see as much action to the MySQL server as you might think. 

Cheers!


----------



## manacit (Jul 11, 2013)

You guys are talking about "sticky sessions" - http://mesmor.com/2012/02/15/varnish-client-director-with-sticky-session/

You're turning a ~300-active-user forum into a six-server setup. The only reason you're "saving" money is that you want people to give it to you for free. If you actually had to pay for your setup, it'd cost as much as an OVH server that would end up being more powerful and able to serve more people.

That HA setup is going to go down the trash the minute one host starts acting up and your SQL cluster gets messed up and someone needs to go in and fix it manually, causing the site to query a database that's 120ms away 15 times a page load slowing everything down even more. 

Choose a reliable host and you won't need some hacked up HA setup with donated virtual servers from shovehost sniffing for login data.


----------



## TheLinuxBug (Jul 11, 2013)

*@manacit*, once again, how does a single server provide redundancy?  Honestly, this could run with 1 MySQL server and 2 backend servers in round-robin without issue.  The idea here is to show that expansion is easy and that as a COMMUNITY we can come together to provide the resources needed, so we don't need to turn this place into an advertising outlet.

Edit: Thanks for the link to that article on sticky sessions; however, in my work with vBulletin I was never able to perfect that. Maybe with some help we could make it happen here and remove a step.



manacit said:


> That HA setup is going to go down the trash the minute one host starts acting up and your SQL cluster gets messed up and someone needs to go in and fix it manually, causing the site to query a database that's 120ms away 15 times a page load slowing everything down even more.


Actually, I already have this type of setup in place for a client's forum and it is very reliable.  Also, I think *@**Kenshin* made a valid point that master-slave may be the better way to go.  This is all up for discussion; nothing is set in stone, heck, this hasn't even been started.  Maybe you can offer a better solution that provides the redundancy and high availability needed?  Also, please read my previous reply to you about Varnish caching MySQL requests; that is why there is no need for the "15 times a page load" you are talking about.

Cheers!


----------



## manacit (Jul 11, 2013)

My oh-so-not-subtle point was that you don't need redundancy. Just get a server somewhere that's not a super-budget low-end host that gets DDoSed all the time and is hosted in a facility held together by zipties (not literally; sorry FiberHub, but you need to work on that whole power thing), and you won't go down very often, or at all. Toss CloudFlare in front of that and you're good.

You know what will negate any "redundancy": the minute someone finds out where any of your non-DDoS-protected assets are and takes them down. Or when one of them goes down on its own. If you host it with one SQL server, you still have a single point of failure. If you make some six-servered beast for a small forum, it's not going to be a good time. Sure, one of your load balancers might be able to go down, but when latency gets ahold of you and it's taking 500ms just to run all of the queries for a page load, you might as well just not.


----------



## TheLinuxBug (Jul 11, 2013)

manacit said:


> My oh-so-not-subtle point was that you don't need redundancy. Just get a server somewhere that's not a super-budget low-end host that gets DDoSed all the time and is hosted in a facility held together by zipties (not literally; sorry FiberHub, but you need to work on that whole power thing), and you won't go down very often, or at all. Toss CloudFlare in front of that and you're good.



You mean like when BuyVM has planned maintenance and the site goes down?  CloudFlare has also been excessively flaky, and I believe MannDude is considering removing it altogether for something different at this point.  If someone has the ability to flood out multiple DDoS-protected locations, then there is a bigger problem outside the actual hosting setup, as that would mean a great amount of data being pushed at those locations.  I highly doubt hosting it at OVH would help with that; I am sure that even with their protection, after a certain point they likely null-route you (we will see once people start using it).

Part of the goal here would be to get backend servers in close proximity to the frontends to avoid a large amount of latency; thus my wanting two backends for each frontend.

At the end of the day, there are a lot of things we could do.  Hell, we could sell out just like Chief on LET to some faceless corporation and have them pay all the bills, or we could plaster the whole site with advertising.  However, my goal, once again, is to keep this a COMMUNITY-driven site, and to do that we need to keep costs down.  Right now this whole forum runs on a 256MB KVM from BuyVM with MySQL offload, meaning we do not need a dedicated server's worth of resources to run it.  Even with double the users we could probably handle the load with just an extra KVM somewhere and a reverse proxy in front.

I invite you to come up with a solution better than this that keeps running costs low, keeps advertising off the site, and gives the community a way to contribute.  If you can come up with something better, I am more than happy to listen.

Cheers!


----------



## MannDude (Jul 11, 2013)

For the record, I am a fan of simplicity. I'm still not sold on the idea presented in this thread, but am open to it.

Basically, all I want is for the site to remain online with good performance to the rest of the world. Initially I was going to sell ad space so I could cover the cost of an east coast USA KVM or dedicated server, put Cloudflare Business DDoS protection in front, and call it a day. But in an effort to keep vpsBoard ad-free, and to keep it community supported (not advertiser supported, even if the advertisers are the community), I am open to this option as well.


----------



## manacit (Jul 11, 2013)

You've gone from a six-plus-VPS setup to "an extra KVM"; that's pretty much my point. It's a small site, and there's no need for so much infrastructure. That's how you keep it cheap.

The idea would be to host somewhere slightly more expensive than BuyVM to avoid ending up somewhere like FiberHub which lacks any sort of power redundancy, etc. 

My solution? People have offered to help the site out with a KVM. Use that to host the site, not as one of a million different cobbled together servers. 

Clustering your x near your y is great until your y goes down and then you have to communicate from x to z a bunch, that's not very available.


----------



## TheLinuxBug (Jul 11, 2013)

*@**manacit*, now I am curious.  Have you ever hosted a forum?  Do you understand the use of Varnish?  Do you have the experience to say this is really as complicated as you claim it will be?  I actively manage a setup similar to this for a forum, and I have never seen the doomsday issues you are describing.

Cheers!


----------



## Ruchirablog (Jul 11, 2013)

This has got to be the craziest hosting idea I've ever seen for this type of website. Jeez, dude! You are making this overcomplicated. vpsB doesn't really need this at the moment, nor in the future. Managing this type of cluster is too much work and has too many points of failure. Before jumping to conclusions you should think about the weak points of the current setup. That list is small, and there are many ways to improve on it before investing money, and more importantly time, in implementing an idea like yours.


----------



## manacit (Jul 11, 2013)

Yes, I have hosted a forum (lol), and yes, I have used Varnish (I was the one who linked you to the sticky-sessions article, which I have successfully implemented, actually).

I've set up MySQL clusters before and watched network latency or downtime cause them to go haywire.

What does your setup look like? I'm happy it works for you, but it's total overkill in this situation.


----------



## drmike (Jul 11, 2013)

manacit said:


> Choose a reliable host and you won't need some hacked up HA setup with donated virtual servers from shovehost sniffing for login data.


Well, this sort of HA setup as proposed will work, and does work.  There are MANY HUGE sites that do essentially this.  CloudFlare essentially does this.

The sniffing for login data part, that's a *real concern* though.

We shouldn't be plugging these front-end nodes into just any network anywhere.  I'd start with BuyVM and SecureDragon, since both are reputable and both have DDoS protection services.  But both are purely US West Coast so far.



manacit said:


> Choose a reliable host and you won't need some hacked up HA setup with donated virtual servers from shovehost sniffing for login data.



Reliable hosting is just one part of this.  Choosing Telx or Equinix alone isn't going to make everything run right and provide redundancy and geo-balancing, unless you get setups in multiple geographic locations from them, which means $$$.  Plus the DDoS protection.



TheLinuxBug said:


> Varnish caches MySQL requests as well


Does IPB support that, or is Varnish being run as a MySQL proxy to facilitate it?  Caching at the database layer creates tons of issues unless plugins/mods exist in IPB to handle and control it.



manacit said:


> Toss CloudFlare in front of that, you're good.


CloudFlare, for all its success, has plenty of failings.  See what they do when someone tosses a DDoS at the site for 12 hours.  Their service is good, but whatever the top advertised package is, they'll have you up there paying in no time for being a hassle.




manacit said:


> The minute someone finds out where any of your non-ddos protected assets are and takes them down. Or when one of them goes down. If you host it with one SQL server, you still have a single point of failure. If you make some six-servered beast for a small forum, it's not going to be a good time. Sure, one of your load balancers might be able to go down, but when latency gets ahold of you and it's taking 500ms just to get all of the queries for a page-load, you might as well just not.



Where to start with this one...  How would someone find the front ends?  I suppose they could, but you shouldn't be advertising them, so it's kind of hard.

If a front-end node (Varnish + proxy) goes down, you will need to pull that node from the pool.  That is best done at the DNS level with a real API-enabled monitoring service.

*Databases over the internet = bad and high latency.  Don't even bother if you are thinking that.*

The Varnish + proxy stack should have MySQL and the webserver bundled as well, so each and every location can fully do everything.  It's a many-to-many distribution.  Complicated, yes.



TheLinuxBug said:


> backend which are in close proximity to the frontend


"Close proximity" had better mean the same datacenter.  It had better be < 5ms.  Even that is high as heck and a big delay.



TheLinuxBug said:


> advertising off the site


Unsure why the advertising hate exists.  I HATE ADVERTISING because it is usually off-topic and not relevant to me.  Ads here?  Well, they would be relevant to my interests and much of what I discuss.  So it is not evil.  Plus, ads would be from productive community members.


----------



## drmike (Jul 11, 2013)

> I've set up MySQL clusters before and watched network latency or downtime cause them to go haywire.





^ this.


----------



## TheLinuxBug (Jul 11, 2013)

*@Ruchirablog*, not really; I already have 90% of the config done and available for this.  Granted, we may not need as many servers as I originally proposed; in fact, as stated, this could be reduced to a BuyVM DDoS-protected server with Varnish in front of 2 different 256MB KVMs in round-robin that pull from BuyVM's offloaded MySQL server.  My goal, once again, is to show that expansion actually isn't that hard.  It would also allow multiple people to donate services to help out, and if someone could no longer provide such services, it would not cause an issue for the site.  As I have stated a few times, please make suggestions; I am open to them.  However, please understand that I really do have most of this down pat, and it isn't as hard to set up as you may think with a bit of experience and some trial and error to figure out the idiosyncrasies of IPB.

Edit: Okay everyone, thanks for participating, and please keep your suggestions coming; however, it is time for me to get some sleep.  I don't make any final decisions; everything contributed here will be weighed by MannDude, and he will have the final say on anything that happens.  My aim here is just to allow the community to drive this site instead of advertising (as a preference).

Once again, thanks to everyone who participates in this discussion.

Cheers!


----------



## manacit (Jul 11, 2013)

The idea of the setup is good, sure, but large sites that do this generally have dedicated hardware and full-time people managing the setup and solving issues that arise. 

CloudFlare might not be a magic pill, sure, but it's at least more battle-tested than most (all) of the LEB hosts around here are, not that it matters, since this is a tiny forum. 

The fact that you say Varnish caches MySQL queries is troubling: Varnish is a reverse *HTTP* proxy.

My worry is that people will be able to figure out what is hosted where based on the downtime of LEB hosts. If we have hosts publicly donating things, when something goes down (pulled out of rotation, etc), it wouldn't take a genius to figure out which hosts were down and then just DDoS the crap out of them. We've seen more cunning things happen recently.


----------



## Ruchirablog (Jul 11, 2013)

TheLinuxBug said:


> *@Ruchirablog*, not really; I already have 90% of the config done and available for this.  Granted, we may not need as many servers as I originally proposed; in fact, as stated, this could be reduced to a BuyVM DDoS-protected server with Varnish in front of 2 different 256MB KVMs in round-robin that pull from BuyVM's offloaded MySQL server.  My goal, once again, is to show that expansion actually isn't that hard.  It would also allow multiple people to donate services to help out, and if someone could no longer provide such services, it would not cause an issue for the site.  As I have stated a few times, please make suggestions; I am open to them.  However, please understand that I really do have most of this down pat, and it isn't as hard to set up as you may think with a bit of experience and some trial and error to figure out the idiosyncrasies of IPB.
> 
> Cheers!


Still, this won't be an HA setup, because if the BuyVM DDoS-protected IP and server or the offloaded MySQL goes down, vpsB will go down. Achieving 100% HA isn't that important for a community website; I don't understand why even 99% availability wouldn't suffice for a site like vpsB.


----------



## drmike (Jul 11, 2013)

manacit said:


> it wouldn't take a genius to figure out which hosts were down and then just DDoS the crap out of them.


True to an extent.  But there are multiple front ends.   

They would have to know all the front ends and DDoS multiple networks.

I've run similar solutions for 4+ years without any major issues.

The biggest piece is monitoring the top nodes and pulling them out of the pool quickly when downtime happens.


----------



## drmike (Jul 11, 2013)

Ruchirablog said:


> Still, this won't be an HA setup, because if the BuyVM DDoS-protected IP and server or the offloaded MySQL goes down, vpsB will go down.


True.

For this to work right, each and every front node should have the full stack to serve the site, plus DDoS protection from that provider/network.


----------



## vanarp (Jul 11, 2013)

While I am not against this idea, I am concerned about the amount of time and effort required from a sysadmin point of view. And depending on volunteers for sysadmin/support is probably not a good idea.

I propose taking one step at a time: a bigger KVM with pure SSD running MySQL/Varnish locally, plus CloudFlare, as the first step. See how it goes for a while before bringing more components into the setup.


----------



## manacit (Jul 11, 2013)

buffalooed said:


> True to an extent.  But there are multiple front ends.
> 
> They would have to know all the front ends and DDoS multiple networks.
> 
> ...


My worry is less about the front ends and more about the back ends. If you're hosting the SQL in one place, like many have said, you can put as many DDoS-protected servers up front as you want; the minute someone DDoSes your SQL server, it's toast. Same with any intermediary servers, etc.


----------



## Ruchirablog (Jul 11, 2013)

manacit said:


> CloudFlare might not be a magic pill, sure, but it's at least more battle-tested than most (all) of the LEB hosts around here are, not that it matters, since this is a tiny forum.
> 
> My worry is that people will be able to figure out what is hosted where based on the downtime of LEB hosts. If we have hosts publicly donating things, when something goes down (pulled out of rotation, etc), it wouldn't take a genius to figure out which hosts were down and then just DDoS the crap out of them. We've seen more cunning things happen recently.



It wouldn't take a genius to find the real IP of vpsB even if it's masked behind CloudFlare.

@MannDude, remove the function of uploading profile pics via a URL. It only takes a few seconds to find the real IP of vpsB using a simple service like iplogger.org


----------



## manacit (Jul 11, 2013)

vanarp said:


> I propose taking one step at a time: a bigger KVM with pure SSD running MySQL/Varnish locally, plus CloudFlare, as the first step. See how it goes for a while before bringing more components into the setup.



This is *exactly* what I'm saying. But much more succinct. 

Also, Varnish is nice and all, but you'll likely end up not caching much for logged-in users, which makes it not quite as worthwhile as you'd want.


----------



## blergh (Jul 11, 2013)

texteditor said:


> Rent a dedicated server and then set up one large KVM container that uses all of the available resources, so we get the power of a dedicated server while 'technically' still using a VPS


Why?


----------



## peterw (Jul 11, 2013)

buffalooed said:


> For this to work right each and every front nodey should have full stack to serve the site plus DDoS protection from that provider/network.


That's a lot of money you're talking about.


----------



## drmike (Jul 11, 2013)

peterw said:


> That's a lot of money you're talking about.


Well, perhaps.

1GB VPSes should be doable; 2GB would be overkill.  Separating the software stack on a site like this would be unnecessarily complicated and add unnecessary latency.

Frankly, we need to revisit the affordable DDoS protection options out there.  CNServers seems to be what many use, so SecureDragon + BuyVM would share a common CNServers point of failure.

There aren't many options for DDoS protection, and at affordable rates, even fewer.


----------



## KuJoe (Jul 11, 2013)

http://www.lowendtalk.com/discussion/3825/raymii-org-got-reddited-lwn-ed-and-news-ycomb-ed-on-a-128mb-leb-with-stats

^Why not do this?

Additionally, I'd be happy to donate a VPS from Tampa, Denver, and Portland (DDoS protected) for geographic diversity. While we also use CNServers like BuyVM, we are hosted inside of CNServers, so the network layouts are different.


----------



## TheLinuxBug (Jul 11, 2013)

There are a few different options as I see it for DDoS protection:

Europe:

GinerNet:

http://lowendtalk.com/discussion/11725/ginernet-4-99-month-ssd-openvz-4-gb-ram-ddos-protected#latest 

OVH:

Supposedly, if their DDoS protection is any good, any $3/month OVH OpenVZ VPS should be able to function with DDoS protection.

USA:

BuyVM

ddosprotect.us (Secure Dragon)

Staminus

Really the "issue" if there is one is finding a service that is on the east coast of USA to handle DDOS, but with at least 2 locations it shouldn't be a huge issue. I haven't tried Girenet, but I would be interested to see how they perform.



peterw said:


> That is a lot of money you are talking about.



The largest outlay of money would be for the DDoS-protected frontend servers; the hope is that the backend servers could be contributed.



buffalooed said:


> 1GB VPSes should be doable; 2GB would be plenty, even overkill. Separating the software stack on a site like this would be unnecessarily complicated and add unnecessary latency.



I am completely open to building out the backend servers with Nginx+php-fpm and MySQL on the same box. We could use some decent 2GB SSD-powered servers, and there shouldn't be an issue with having MySQL on the backend instead of separated out.
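
For what it's worth, a combined backend like that is only a short Nginx config away; here is a minimal sketch, assuming a stock Nginx + php-fpm install. The server name, web root, and socket path are placeholders, not the board's actual layout:

```nginx
# Hypothetical single-box backend: Nginx serves static files itself and
# hands PHP off to a local php-fpm socket; MySQL runs on the same server.
server {
    listen 80;
    server_name vpsboard.example;    # placeholder
    root /var/www/vpsboard;          # placeholder web root
    index index.php;

    # Static files first; anything else goes through the forum's front controller
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # placeholder socket path
    }
}
```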

I will say this again for everyone's benefit:

What I listed above is a general IDEA; the goal of this thread is to find the BEST idea, and that could mean changing what I have proposed. This is the information we are seeking in this thread: not "It can't be done" but "This is how we can make it happen".

It seems no providers are participating (well, other than @Kenshin, thanks btw). I was hoping a few of the guys who were offering to help out before would have some input on this as well. I hope you're not holding out just so you get ads plastered all over the place....

Cheers!


----------



## TheLinuxBug (Jul 11, 2013)

KuJoe said:


> Additionally, I'd be happy to donate a VPS from Tampa, Denver, and Portland (DDoS protected) for geographic diversity. While we also use CNServers like BuyVM, we are hosted inside of CNServers, so the network layouts are different.


Thanks for being a team player @KuJoe   

Edit: My post was after yours, but I was still working on it when you posted. I am happy to see you participating here. Thanks again!


----------



## manacit (Jul 11, 2013)

KuJoe said:


> http://www.lowendtalk.com/discussion/3825/raymii-org-got-reddited-lwn-ed-and-news-ycomb-ed-on-a-128mb-leb-with-stats
> 
> ^Why not do this?
> 
> Additionally, I'd be happy to donate a VPS from Tampa, Denver, and Portland (DDoS protected) for geographic diversity. While we also use CNServers like BuyVM, we are hosted inside of CNServers, so the network layouts are different.


That setup is a lot simpler because his site is all static content without databases. A round robin DNS setup with a bunch of HTML files is easy-peasy. 

I still think we shouldn't jump into anything crazy, just get a big KVM that's DDoS protected, run the site on that, and then go up from there if we need it.


----------



## Kenshin (Jul 11, 2013)

I used to run a large local (SG) community forum, and I currently run MySQL replication (master-slave) both for a stable distributed qmail setup and for shared hosting use (a slave for quick failover, plus backups without locking). I've read up a lot on master-master setups but never actually put one into testing or production, simply because for every one success story I read, there are nine others with issues. It might work for the qmail (vpopmail) setup since the data is pretty straightforward, but for forum data it's practically suicide.
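
For anyone curious, the master-slave arrangement described here needs only a few lines of my.cnf on each box. A minimal sketch; the server IDs and the database name are placeholders:

```ini
# --- on the master ---
[mysqld]
server-id    = 1
log_bin      = mysql-bin
binlog_do_db = vpsboard        # replicate only the forum database (placeholder name)

# --- on the slave (shown commented out, since it lives in a separate file) ---
# [mysqld]
# server-id  = 2
# relay_log  = mysql-relay-bin
# read_only  = 1
```

The slave is then pointed at the master with `CHANGE MASTER TO ...` and `START SLAVE`. Master-master is essentially the same config pointed in both directions, which is exactly why conflicting writes become your problem instead of MySQL's.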

If Cloudflare is an issue, then just re-create the Cloudflare component with multiple reverse proxies on DDoS-protected IPs at various locations. Performance of the MySQL server is a concern, and if you take Layer 7 attacks into account I'd rather invest in a dedicated server with SSDs to ensure MySQL or Nginx can handle the load. Dedicated servers aren't even that expensive anymore, and having the spare CPU and I/O would allow not just growth but also tanking of simple L7 attacks.

My proposed setup:

1-2 DDoS-protected IPs running Varnish, ideally west coast + east coast/EU

1x Primary web server located in east coast or central USA. Dedicated server with SSD drives. MySQL master.

1x Secondary web server located in west coast or EU. VPS with SSD drives. MySQL slave.

1x Backup server. Daily backup of web files and database to enable at least 7 day complete roll back.

CDN for static elements to reduce http load and increase loading speed

By speeding up page generation via a dedicated server with SSDs, we can offset the additional latency of the Varnish <-> web server hop, which would carry the main traffic. Moving all static elements to a CDN would speed things up for users like me who are in Asia and always at a disadvantage. My aim is to reduce the overall page completion time, as well as have sufficient resources for growth/L7 attacks.
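
The session cookies mentioned in the opening post are the tricky part of the Varnish tier in a setup like this. A minimal VCL sketch of the idea, in Varnish 3-era syntax; the backend address and the cookie names (`session_id`, `member_id`) are assumptions, not IPB's confirmed names:

```vcl
backend web1 {
    .host = "10.0.0.10";   # placeholder backend web server
    .port = "80";
}

sub vcl_recv {
    # Anyone carrying a session/member cookie is treated as logged in
    # and bypasses the cache entirely.
    if (req.http.Cookie ~ "session_id" || req.http.Cookie ~ "member_id") {
        return (pass);
    }
    # Static assets are safe to cache regardless of cookies.
    if (req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset req.http.Cookie;
        return (lookup);
    }
}
```

HAProxy could then handle session stickiness for the passed (uncached) traffic, along the lines the opening post suggests.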

I left MannDude a message on IRC about offering OnApp CDN (pull-based) for the static elements as long as he can integrate it into IPB. I have my own Asia CDN POP, so technically it's "free" for me to provide, and pricing for US/EU CDN POPs is cheap enough that it won't hurt me much financially.


----------



## TheLinuxBug (Jul 12, 2013)

Sooo... does anyone else have any input on this? I was hoping to see a few more suggestions or ideas.

Cheers!


----------



## Dylan (Jul 12, 2013)

I worry this is going to sound rude, but in my experience one of the biggest problems a lot of IT people face is a tendency to over-engineer things -- and I think that's what's going on here. Resist that urge and keep it simple. This is a relatively small site with a niche audience, and a larger KVM or lower-end dedicated server, with or without a MySQL slave, would be more than sufficient.

Also, you should look into Incapsula if you're displeased with CloudFlare. Similar CDN functionality, but with much more of a focus on security. I've been moving some sites from CF to Incapsula and haven't looked back. The only downside is that they don't offer unlimited bandwidth, and their DDoS protection plan is $299 instead of CloudFlare's $200.


----------



## mikho (Jul 12, 2013)

TheLinuxBug said:


> Hello Everyone,
> 
> 
> Over the past month there have been several topics going back and forth, from selling Ad space to finding a way that we can make this place fully community supported. MannDude and I have been going back and forth in private message for a few weeks now on this topic and we thought that it would be something that we should allow the community to have some input on.
> ...


Took me some time to get to my computer so I could reply to this thread. 

Like many others have already said, this is a little over the top. My concern is not the number or specs of the VPSes/dedicated servers, or even the technique behind it; it's the sysadmin time to manage and fix things when they break down. Trust me, they eventually will.

The board is currently running on a KVM with 256MB RAM and offloaded MySQL (if I remember correctly), and it's working "ok" for most of us. Some interruptions happen, not counting the outages on the providers' part (planned or unplanned). What I'm trying to say is that even technically, your suggestion looks a little "over the top" at this moment with the current number of users.

If some changes should be done to make a transition to another provider easier it could be stuff like:

* moving the MySQL database to a VPS of its own, perhaps an SSD VPS.

* master -> slave replication of the database.

* rsync of files to a backup webserver, ready to kick in if/when there is a change of heart regarding the provider.
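
The dump-and-rsync part of that list fits in a few cron lines. A sketch only; every host, path, and database name below is a placeholder:

```
# /etc/cron.d/vpsboard-backup (sketch)
# 1) Nightly dump; --single-transaction avoids locking InnoDB tables
30 3 * * * root mysqldump --single-transaction vpsboard | gzip > /backup/vpsboard.sql.gz
# 2) Ship web files and the dump to the standby webserver
45 3 * * * root rsync -az --delete /var/www/vpsboard/ backup@standby.example:/var/www/vpsboard/
50 3 * * * root rsync -az /backup/vpsboard.sql.gz backup@standby.example:/backup/
```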

When we have the same number of members as WHT, we might consider a full-blown solution like the one you have suggested.

This is a "simple" forum; we will survive an outage of a couple of hours without too many seizures.

It's worse if something happens like what happened to RamNode and CVPS, where whole nodes were wiped. That's when a backup plan should kick in.

Hope it makes sense; it's 2 AM (again) and I'm almost falling asleep over my keyboard.


----------



## KuJoe (Jul 12, 2013)

Speaking of backups... not sure how these are being handled now, but I'd be willing to chip in here also if needed.


----------



## earl (Jul 13, 2013)

I would just sell AD space and put the site on a dedi...and if it can generate 10k/mo like LEB/LET that would buy a lot of beer and hookers hahaha..


----------



## mikho (Jul 13, 2013)

earl said:


> I would just sell AD space and put the site on a dedi...and if it can generate 10k/mo like LEB/LET that would buy a lot of beer and hookers hahaha..


+1 on the beer, you can keep the hookers


----------



## earl (Jul 13, 2013)

mikho said:


> +1 on the beer, you can keep the hookers


Gee thanks!! extra hooker for me!! hahaha I guess you had your fill.. if I'm not mistaken you are on vacation in Thailand?

I'm just kidding about the hookers, by the way.. me being a good Catholic and all, I don't condone prostitution, lol.. it just sounds like the thing to do when you're rolling in the dough..


----------



## mikho (Jul 13, 2013)

earl said:


> Gee thanks!! extra hooker for me!! hahaha I guess you had your fill.. if I'm not mistaken you are on vacation in Thailand?
> 
> 
> I'm just kidding about the hookers, by the way.. me being a good Catholic and all, I don't condone prostitution, lol.. it just sounds like the thing to do when you're rolling in the dough..


Been back in Sweden for two weeks now.


A bit of a difference, going from 32 degrees Celsius in the shade to 11 degrees Celsius in the sun.


But I'll survive 


Getting back to work on Monday 


A catholic?! Thank God you are not a priest.  j/k......


----------



## titanicsaled (Jul 13, 2013)

Just get another VPS in Europe!


----------



## threz (Jul 15, 2013)

I think there are two issues in this thread that should be dealt with, most likely individually. The way I see it, this thread breaks down into:


1. How to fund the ongoing operations
2. How to provide increased reliability

@TheLinuxBug provided potential solutions for both of those issues in that he proposed that users donate VPSes/servers to keep the place running and that @MannDude configure them in some sort of a cluster. 

I have some issues with both parts of the proposal. Firstly, I think that having users donate servers just adds way too much complexity, uncertainty and overhead for @MannDude or other administrators. How long will the donated server be paid for? How do we know for sure that the user has not retained some control over the server, and will just take it back when they want? Or worse, rummage through the database? 

Under that kind of system, I see @MannDude basically having to juggle and reconfigure servers in a constant rotation of replacing cancelled or offline donated servers. It's just too much work for something very uncertain... and I don't think it will be a successful long-term funding model for the site. 

Honestly, I think that @MannDude should put ads up on the site, as he proposed in a different thread. They may not be the most popular option, but done well (relevant and vetted) I think they could work and would provide the most stable income to fund the continuation of this community, without being dependent solely on the generosity of @MannDude or random users.

If ads are unequivocally off the table, then I think the next best thing would be a donation system. This could work and wouldn't involve ads, but it would mean that the admins would have to set up donation drives and potentially bug users for donations periodically. It seems like more work for more uncertainty, but it would be better than donated servers. 

For the second point about site reliability, I tend to agree with other users that keeping it simple and just eating a small amount of downtime is the easiest and best way. This isn't some critical application that requires high availability or anything like that. Some downtime here and there should be acceptable. 

The problem with reliability is that trying to get rid of that single point of failure means there are no truly simple solutions. The easiest way I can think of would be to set up GeoDNS with failover (non-instant; it would probably take a few minutes) and then keep 2+ VPSes synchronized. 

But, how do you synchronize the databases? Running a synchronous cluster over the internet would create huge latency, and running an asynchronous cluster could introduce some really tricky collisions and other issues. And then you have to monitor the servers with Pingdom or UptimeRobot or similar and integrate that with your DNS service to provide failover... it's all added complexity for what? A few minutes of extra uptime per month? 

I would keep the site on a reliable VPS, funded by the least intrusive ads that @MannDude can manage, and just don't sweat the downtime unless it becomes unbearable. When the site starts getting much bigger, and is bringing in enough money to fund a dedicated server, go that route. Do true IP Failover within the same datacenter. If the site really becomes hugely successful, and there is a desire for true high availability, then look into a proper solution (master/slave with failover?) when there are trusted admins and a guaranteed income to support it. 

But really... we're just talking. This is @MannDude's site and he has the last say in what goes. I urge him to keep it simple and not give in to building something complex just because it sounds cool and _may_ work.


----------

