Current state of vpsBoard 02/04/2017

Dear vpsBoard members and guests:
Over the last year or two, vpsBoard activity and traffic have dwindled. I have had a change of career and interests, and as such am no longer an active member of the web hosting industry.
Due to time constraints and new interests, I no longer wish to continue maintaining vpsBoard. The website will remain only as an archive to preserve and showcase some of the great material, guides, and industry news generated by members, some of whom I remain in contact with to this day and now regard as personal friends.
I want to thank all of our members who helped make vpsBoard the fastest-growing industry forum. In its prime it was a lively source of news, guides, and general off-topic banter and fun.
I wish all members and guests the very best, whether it be with your business or your personal projects.
Showing results for tags 'cluster'.
What we found is that the cloud was not meant to provide the level of IOPS performance we needed to run an aggressive system like CephFS. ...

The problem with CephFS is that, in order to work, it needs a really performant underlying infrastructure, because it needs to read and write a lot of data very fast. If one of the hosts delays writing to the journal, the rest of the fleet waits for that single operation, and the whole file system blocks. When this happens, all of the hosts halt and you have a locked file system; no one can read or write anything, and that basically takes everything down. ...

Recap: What We Learned
- CephFS gives us more scalability and, ostensibly, performance, but it did not work well in the cloud on shared resources, despite tweaking and tuning to try to make it work.
- There is a performance threshold on the cloud; if you need more, you will have to pay a lot more, be punished with latencies, or leave the cloud.
- Moving to dedicated hardware is more economical and reliable for the scale and performance of our application.
- Building an observable system by pulling and aggregating performance data into understandable dashboards helps us spot non-obvious trends and correlations, leading to faster fixes.
- Monitoring some things can be really application specific, which is why we are building our own gitlab-monitor Prometheus exporter. We plan to ship this with GitLab CE soon.

https://about.gitlab.com/2016/11/10/why-choose-bare-metal/
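To make the "application-specific exporter" idea concrete, here is a minimal sketch of a custom Prometheus exporter in the spirit of the gitlab-monitor exporter mentioned in the post. The metric names and sample values are hypothetical (not GitLab's actual metrics), and this uses only the Python standard library: it renders gauges in the Prometheus text exposition format and serves them over HTTP on /metrics, which a Prometheus server could then scrape.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


def collect_metrics():
    """Gather application-specific numbers. In a real exporter these would
    be read from the application, the filesystem, or a database; the names
    and values here are placeholders for illustration only."""
    return {
        "cephfs_journal_write_latency_seconds": 0.004,
        "cephfs_blocked_clients": 0,
    }


def render_exposition(metrics):
    """Render a dict of gauges in the Prometheus text exposition format:
    a '# TYPE <name> gauge' line followed by '<name> <value>'."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_exposition(collect_metrics()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # 9168 is an arbitrary port chosen for this sketch.
    HTTPServer(("", 9168), MetricsHandler).serve_forever()
```

In practice you would point a Prometheus scrape job at this port and build dashboards on top of the collected series, which is the observability workflow the recap describes.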
Many moons ago, I used to work for a company that used some of these when they were first coming out. They were part of a failover system for high-traffic, HA campaigns (like TV advertisements, etc.) and they worked quite well. I'm seeing more of these coming out of the woodwork on the likes of eBay, and specifically newer versions with a lot of nodes crammed into the one chassis. So I'm wondering if any of our providers here are using them and, if so, why? For the most part, the only real reason I can think of is space saving: cramming 16 nodes into an 8-10U chassis would make sense if you're trying to keep your physical footprint low, but aside from that, what benefits are there?