• Announcements

    • MannDude

      Current state of vpsBoard   02/04/2017

      Dear vpsBoard members and guests:

      Over the last year or two, vpsBoard activity and traffic have dwindled. I have had a change of career and interests, and as such am no longer an active member of the web hosting industry.

      Due to time constraints and new interests, I no longer wish to continue to maintain vpsBoard. The web site will remain only as an archive to preserve and showcase some of the great material, guides, and industry news that has been generated by members, some of whom I remain in contact with to this very day and now regard as personal friends.

      I want to thank all of our members who helped make vpsBoard the fastest-growing industry forum. In its prime it was a lively source of activity, news, guides, and just general off-topic banter and fun.

      I wish all members and guests the very best, whether it be with your business or your personal projects.

      -MannDude

dcdan

Verified Provider
  • Content count

    171

Community Reputation

54 Excellent

About dcdan

  • Rank
    VPS Enthusiast

Profile Information

  • Gender
    Not Telling

  1. 5x Dell C1100 servers for sale @ PhoenixNAP

     Dual Xeon L5639 CPUs
     72 GB DDR3 ECC 1066 MHz RAM
     4x 3.5" drive trays with support for 2.5" drives w/o adapter (no drives included)
     Latest BMC ver 1.82
     Rails

     All servers have been working for 2 years without a single hiccup. $200 each ($1000 for all 5). Sorry, cannot split or ship. Local pickup @ PhoenixNAP ONLY on May 26 at 4 PM (see clarification below). Paypal or credit card payments are accepted from established members with at least 12 months of activity on the forum; otherwise cash or wire.

     P.S. We have plenty of 1TB SSD drives in excellent condition to go with the servers if needed (extra $).

     4x EX3200-48T switches for sale @ QuadraNET Los Angeles

     All come with JunOS v12 and rack ears. All switches have been pulled from a working environment. $150 each ($600 for all 4). Sorry, cannot split or ship. Local pickup inside QuadraNET L.A. in person, near our cabs, ONLY on May 30 at 4 PM; exact location will be provided. Paypal or credit card payments are accepted from established members with at least 12 months of activity on the forum; otherwise cash or wire.

     Contact via PM.

     Clarification: This has to be picked up in person. Sorry, we cannot ship or transfer to other clients within the same datacenter. You need to have access to the datacenter and take the hardware straight from our hands (near our cabs) on the dates above. At PhoenixNAP we can carry it outside to the parking lot at 4 PM, but at QuadraNET this is not an option.
  2. Repairing dead mobile drive

    1) Do you have a multimeter?
    2) Have a look in here:
    http://forum.hddguru.com/viewtopic.php?t=18292
    http://forum.hddguru.com/viewtopic.php?t=18075
    http://forum.hddguru.com/viewtopic.php?f=1&t=15310
  3. Nodewatch Settings

    I do not remember seeing any support tickets related to that. Although I do see how this might be possible on a misconfigured system. On nodes with 500 containers Nodewatch uses under 10% of a single core when everything is configured properly.
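
    For anyone who wants to verify that figure on their own node, a minimal sketch (the process name "nodewatch" is an assumption; substitute whatever the daemon is actually called on your install):

    # pidstat ships with the sysstat package; sample CPU usage every 5 s, 3 times
    pidstat -p $(pgrep -f nodewatch | head -1) 5 3
    # or a one-shot look at the cumulative CPU share:
    ps -o pid,pcpu,comm -p $(pgrep -f nodewatch | head -1)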
  4. Nodewatch Settings

    In most cases conntrack-related suspensions trace back to this: http://www.lowendtalk.com/discussion/49275/pptpd-automatic-installer-dos-issue-pptp-ovz-debian-sh-by-dadi-me

    Also, keep in mind that the default value for the conntrack limit per VPS is 65536 or even 32768 (this is set in sysctl.conf), which means you will likely never trigger an alert with the nodewatch limit set to 75000. To verify the system limit set per VPS, run this on the host node: sysctl -a | grep conntrack_max
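
    If that ceiling turns out to be too low, a minimal sketch of raising it on a RHEL-style host node (the 131072 value is purely illustrative; the exact sysctl key can differ by kernel):

    # see which conntrack ceilings the kernel exposes
    sysctl -a 2>/dev/null | grep conntrack_max
    # persist a higher limit and reload
    echo "net.netfilter.nf_conntrack_max = 131072" >> /etc/sysctl.conf
    sysctl -p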
  5. SSH Protection

    Reminded me of how I got beaten up on LET over introducing an automated ssh port change to our brands - same shit, everyone was teaching us how we should stick with port 22, how dumb we are, etc. Since then I have gone quiet on this topic (and many other topics, for that matter), as all you get in return is a beating plus some DDoS for good measure.
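
    For reference, the change itself is a two-liner; a sketch (the port number is just an illustration, pick any unused one):

    # /etc/ssh/sshd_config
    Port 2992          # move sshd off the default port 22

    # then reload sshd so the new port takes effect:
    service sshd reload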
  6. OpenVZ + KVM

    Not sure I fully understand this concept. We were able to run KVM VPS on OpenVZ kernel for a very long time (with no changes to the OS), how is this different?
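
    (For anyone curious, a quick sanity check of whether a given kernel can host KVM guests at all - nothing OpenVZ-specific here:)

    lsmod | grep kvm   # kvm_intel or kvm_amd must be loaded
    ls -l /dev/kvm     # the device node QEMU/libvirt actually opens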
  7. Venom Security Vulnerability

    Time to switch to OpenVZ :lol: Generally, whenever there is an OpenVZ-related exploit, the first thing you hear is "everyone should switch to KVM". For some reason it does not work the other way around.
  8. But then you would not see all those containers in /proc/cgroups? Slabbing implies you are basically virtualizing your OpenVZ nodes which in turn would effectively "hide" containers running in the other slabs.
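
     (You can sanity-check this on a plain OpenVZ host: the num_cgroups column in /proc/cgroups - see the dump in the last post below - grows with the containers the kernel itself can see, so a slabbed setup would show far fewer. The awk line is only a rough proxy, not an exact container count:)

     cat /proc/cgroups
     awk '$1 == "cpuset" { print $3 }' /proc/cgroups   # num_cgroups for the cpuset controller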
  9. By no means did I imply otherwise. Just genuinely curious.
  10. Well, there are two reasons why I am curious:

      1) Process count. After about 50000 processes the host node will start locking up. If each VPS runs 3 processes (the absolute minimum, basically just init and two kernel processes), that's already almost 60000.

      2) Even a minimal CentOS install times 19273 containers equals roughly 10 TB of data :)
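
      The arithmetic, for anyone checking (the ~500 MB per minimal CentOS template is an assumption):

      echo $((19273 * 3))            # 57819 processes - well past the ~50000 lock-up zone
      echo $((19273 * 500 / 1024))   # ~9410 GB, i.e. roughly 10 TB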
  11. If you don't mind me asking, what OS template were you using? Also, how many processes do you see on the host node? (ps aux | wc -l) Thanks
  12. Once you power down the container the numbers seem to decrease.
  13. In our experience Paypal is quite random with these disputes anyway. We only win about 25-30% of them (and then we refund half of those anyway). Even when we mention "Intangible" in the dispute it does not affect the "ruling", even in cases where the customer is obviously not playing nice and proof is provided. To my knowledge we have never tried calling Paypal about these, but that would not be very efficient on $5/year plans... So in a way I welcome this; at least now we *know* we're screwed no matter what :)
  14. Both kernelcare and ksplice devs previously stated that they only implement bugfixes and security fixes, but not new features. So it is "normal" for it to not work on older kernels.
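
      (A quick way to see what a live-patched node actually reports - kcarectl ships with KernelCare and uptrack-uname with Ksplice, per their respective docs:)

      uname -r           # the kernel version you booted
      kcarectl --uname   # the effective, patched version KernelCare applies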
  15. One of our dev nodes:

      [root@... ~]# cat /proc/cgroups
      #subsys_name    hierarchy    num_cgroups    enabled
      cpuset          3            6323           1
      cpu             3            6323           1
      cpuacct         3            6323           1
      devices         4            6322           1
      freezer         4            6322           1
      net_cls         0            1              1
      blkio           1            6323           1
      perf_event      0            1              1
      net_prio        0            1              1
      memory          2            6322           1