amuck-landowner

We have constructed additional Pylons (BuyVM Upgrades)

willie

Active Member
There's no reason for Aldryic to be picking up the payments since he'd have to then go through the trouble of adding it to his bank account and transferring it to me. We don't have a real office where we all gather or anything.


The only time we really see each other is during company vacations :p
We use a lot of video chat where I work and it's fun.  Remote staff fly in every few months.  It's reasonably affordable.

Usually by the time a business reaches your size (actually much earlier), it has company bank accounts.  So this transferring stuff just sounds perplexing.  Maybe the international angle makes it more complicated or something.

We do international payments by wire transfer, but I guess the amounts are usually thousands of dollars and up, so the fees don't eat too much of it.  Probably wouldn't work for VPS. 
 

Francisco

Company Lube
Verified Provider
We have company accounts but still, WU is a pain in the butt :)

Some users do pay by it. They end up paying an extra $5 - $10 in fees on a $5/m VPS.

It sucks but for them it's likely the only way to order.

Francisco
 

SkylarM

Well-Known Member
Verified Provider
Nope. EGI's offices are on the 16th floor of CoreSite, so they were 6 floors away from the racks we had on the 10th floor. One of our workers, Matt, used to handle our onsite support.

Personally I like the L56xx's. They handle well and come close enough to the E5's to make me have to fight to justify upgrading to them. Our KVM boxes aren't busy enough to justify E5's, nor do I want to have to load them higher to try to make the E5's worth it. Loading them higher simply leads us down the same path.

We felt this was the most reasonable route.

Francisco
We've been running L5520's, but have started to transition into the Hex Core L56xx's. Older hardware means less ram, but as you said means less container density which ends up being a net gain. Similar CPU power in the E5's, just more customers which means less CPU access per client which just doesn't seem worth the additional cost to me.

Best of luck with the upgrades, BuyVM just keeps getting better ;)
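The density tradeoff being described here can be put into rough numbers. This is just a sketch; the thread and client counts below are illustrative assumptions, not BuyVM's or EGI's actual loading:

```python
# Hardware threads available per client on a shared node. The argument
# for staying on older hex-cores: an E5 box adds threads, but filling
# it with proportionally more clients can leave each client with LESS
# CPU access than before, despite the faster hardware.
def cpu_per_client(threads, clients):
    return threads / clients

# Illustrative (assumed) numbers only:
older = cpu_per_client(24, 40)  # dual hex-core L56xx, 40 clients -> 0.6
newer = cpu_per_client(32, 60)  # denser E5 box, 60 clients -> ~0.53
```

So unless the newer chip's per-thread speedup outweighs the extra loading, per-client CPU access goes down, which is the "just doesn't seem worth the additional cost" point.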
 

Francisco

Company Lube
Verified Provider
We've been running L5520's, but have started to transition into the Hex Core L56xx's. Older hardware means less ram, but as you said means less container density which ends up being a net gain. Similar CPU power in the E5's, just more customers which means less CPU access per client which just doesn't seem worth the additional cost to me.

Best of luck with the upgrades, BuyVM just keeps getting better ;)
We used to use L5520's but we moved to L5638's for almost all of our OpenVZ nodes last year. The L5520's were nice but it was easy to bog them down during busy times. The L5638's were really worth it and were likely one of the better investments we've made.

The SSD's are the next big investment we're making. I can't remember the last time I saw someone complain about CPU processing. The biggest complaint right now is lag due to iowait spikes so the SSD's will for sure fix that.

Francisco
 

willie

Active Member
I gotta wonder how the dual L5639's compare in power consumption to Ivy Bridge or Haswell CPUs with the same amount of CPU throughput.  If there's a 50 watt difference, that might be $100 per year per node or $300 in a 3-year lifecycle, depending on what you're paying per amp in the data center.  That might justify E5 all by itself.
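That wattage arithmetic can be sketched directly. The $/kWh rate below is an assumed figure; colo power is usually billed per amp or per breaker, so convert for a real comparison:

```python
# Rough lifecycle cost of a constant power delta between two nodes.
# The $/kWh rate is an assumption -- data center power is typically
# billed per amp, so real numbers depend on your colo contract.
def lifecycle_cost(extra_watts, usd_per_kwh, years):
    kwh_per_year = extra_watts * 24 * 365 / 1000  # Wh -> kWh
    return kwh_per_year * usd_per_kwh * years

per_year = lifecycle_cost(50, 0.23, 1)   # 438 kWh -> roughly $100/year
three_yr = lifecycle_cost(50, 0.23, 3)   # roughly $300 over 3 years
```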

I've found Passmark scores to be very accurate for predicting CPU throughput on my own workloads, though my stuff is computation-intensive and not anything like VPS hosting.  Passmark still might be ok as a rough comparison for VPS.
 

Francisco

Company Lube
Verified Provider
We need the cores, though.

It doesn't matter if we have 6 cores that are 4GHz each if some VMs get in a busy loop, etc.

An E5 will be at best the same power usage. You can get 60W E5's but you will pay through the nose for them.

Francisco
 

Francisco

Company Lube
Verified Provider
Can you comfortably say Fiberhub is a good home for your servers?
Yep.

They've had their issues, but they were honest, didn't bullshit us, and have gone to great lengths to address and document them.

They keep in close touch about changes they're making as well as ETA's as they get them.

I'm not happy with the power stuff but I would rather be in a place that has fixed our network than be dealing with shit in SJC still.

Francisco
 

willie

Active Member
We need the cores, though.


It doesn't matter if we have 6 cores that are 4Ghz each if some vm's get in a busy loop, etc.
Is this some kind of deficiency in OpenVZ, that it can let a runaway process hose a hardware core, instead of limiting to some percentage?  Or if you want to allow cpu bursting, could you dynamically throttle if cpu stays above 80% for 2 minutes or something like that?
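The "throttle after sustained usage" policy in this question can be sketched as pure decision logic. The sampling and the actual enforcement (cgroups, `cpulimit`, libvirt tuning) are left out, and the numbers are just the ones from the question:

```python
from collections import deque

class BurstThrottle:
    """Allow CPU bursting, but flag a guest for throttling once its
    usage stays above a threshold for a sustained window -- e.g. the
    'above 80% for 2 minutes' policy suggested here. How you sample
    usage and how you actually clamp the guest are left to the host."""
    def __init__(self, threshold=80.0, window_secs=120, sample_secs=5):
        self.threshold = threshold
        self.needed = window_secs // sample_secs  # consecutive hot samples
        self.samples = deque(maxlen=self.needed)

    def observe(self, cpu_percent):
        """Record one sample; return True when throttling should kick in."""
        self.samples.append(cpu_percent)
        return (len(self.samples) == self.needed
                and all(s > self.threshold for s in self.samples))
```

Any single quiet sample resets the verdict, so short bursts pass through untouched while sustained hogging trips the throttle.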
 

Francisco

Company Lube
Verified Provider
Is this some kind of deficiency in OpenVZ, that it can let a runaway process hose a hardware core, instead of limiting to some percentage?  Or if you want to allow cpu bursting, could you dynamically throttle if cpu stays above 80% for 2 minutes or something like that?
This is KVM.

Sure, weights and such come into play but it still means that users are fighting with a VPS that's acting up or being abusive.

What if 4 VM's act up ripping up CPU or are stuck in some sort of kernel panic/install busy loop? You've eaten 4 whole threads while the other 30 - 40 people on there are having to share just 4 cores.

The E3's are nice but they just aren't neighbour friendly.
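For what it's worth, since KVM guests are ordinary host processes, a hard per-guest cap is also possible with cgroup v2's `cpu.max`, independent of the weights mentioned above. A minimal sketch; the cgroup path is an assumption and writing it requires root:

```python
def cpu_max_value(cores, period_us=100_000):
    """cgroup v2 cpu.max string: '<quota> <period>' in microseconds.
    cores=2 -> '200000 100000', i.e. at most 2 full cores per period."""
    return f"{int(cores * period_us)} {period_us}"

def cap_guest(cgroup_dir, cores):
    # Hypothetical path, e.g. /sys/fs/cgroup/machine.slice/<guest>;
    # needs root and a cgroup v2 mount.
    with open(f"{cgroup_dir}/cpu.max", "w") as f:
        f.write(cpu_max_value(cores))
```

A hard cap stops one runaway guest from eating a whole thread, at the cost of wasting cycles the host could otherwise have given away, which is the usual reason providers prefer weights.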

Francisco
 

willie

Active Member
What if 4 VM's act up ripping up CPU or are stuck in some sort of kernel panic/install busy loop? You've eaten 4 whole threads while the other 30 - 40 people on there are having to share just 4 cores.


The E3's are nice but they just aren't neighbour friendly.
I thought a KVM container was just a giant user program running under timesharing like any other user program under a host kernel.  As such the host kernel should be able to control the total CPU consumption of the client container.  Is Xen better about this?  I remember hearing something to that effect, that this was the reason cloud services like EC2 use Xen.

I can accept that more cores is better on the general principle that more cores means more cycles, and more cycles is better.  But if the granularity of cores is that visible, that means the timesharing abstraction isn't working that well.
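The mental model in this question is basically right: the host scheduler does control total consumption, but the default mechanism is proportional weighting, which only bites under contention, so core granularity stays visible. A sketch of how `cpu.weight`-style sharing divides a contended machine (the guest names and weights are made up):

```python
def fair_share(weights, total_threads):
    """Proportional split a cpu.weight-style scheduler gives runnable
    groups when everyone wants CPU at once. With idle neighbours a
    guest can burst well past this share -- that slack is exactly what
    a runaway VM consumes until others become runnable and reclaim it."""
    total = sum(weights.values())
    return {name: w / total * total_threads for name, w in weights.items()}

# 40 equally weighted guests on an 8-thread box: 0.2 threads each under
# full contention, though any one guest alone can burst to whole cores.
shares = fair_share({f"vm{i}": 100 for i in range(40)}, 8)
```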
 