# I'm being STOLEN FROM! (not really - what is reasonable kvm steal %)



## raindog308 (Sep 19, 2013)

Running top on a KVM VPS, I see a statistic called "%st", which I read is the time "stolen" from my VPS.  Or more precisely, time the VPS "waits for a real CPU while the hypervisor is servicing another virtual processor".  

So does that mean: my VPS was assigned 2 virtual processors (in my KVM's case), so it expects to be able to use those two processors exclusively, but in the real world I'm sharing them with other virtual machines, and if there aren't cycles available, the steal % is the amount of time I spend waiting for those processors?
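For what it's worth, tools like top derive %st from the "steal" counter (the 8th number) on the "cpu" line of /proc/stat, as a share of all jiffies elapsed between two samples. A minimal sketch of that arithmetic - the two sample lines below are made-up values, not real measurements:

```python
def parse_cpu_line(line):
    """Return the jiffy counters from a /proc/stat 'cpu' line."""
    return [int(x) for x in line.split()[1:]]

def steal_percent(sample_a, sample_b):
    """Steal time between two snapshots, as a percentage of total elapsed time."""
    a, b = parse_cpu_line(sample_a), parse_cpu_line(sample_b)
    delta = [later - earlier for earlier, later in zip(a, b)]
    total = sum(delta)
    # /proc/stat field order: user nice system idle iowait irq softirq steal ...
    return 100.0 * delta[7] / total

# Hypothetical snapshots taken some interval apart:
before = "cpu  1000 0 500 8000 100 0 0 400 0 0"
after  = "cpu  1200 0 600 8400 120 0 0 680 0 0"
print(steal_percent(before, after))  # -> 28.0
```

Here 280 of the 1000 elapsed jiffies were steal, so top would show ~28% st over that interval.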

What is a reasonable st %?  I see my BuyVM storage kvm frequently hit 50-60%.  Sometimes CPU is a little sluggish but usually it's fine.


----------



## Aldryic C'boas (Sep 19, 2013)

raindog308 said:


> What is a reasonable st %?  I see my BuyVM storage kvm frequently hit 50-60%.  Sometimes CPU is a little sluggish but usually it's fine.


 Those nodes were pretty beefed up, you shouldn't be feeling any pain on those.  Mind tossing us a ticket next time it occurs?  Anthony's gotten very adept at tracking down KVM CPU abuse - we'll get that straightened out.


----------



## raindog308 (Sep 19, 2013)

So I take it 50-60% is high?


----------



## Aldryic C'boas (Sep 19, 2013)

Fran's the expert here and could better explain what's going on - but on my own VMs it's typically under 7-10%, if that.  So yeah, if you're seeing spikes, it's worth letting us know so bz can go hunting.


----------



## AnthonySmith (Sep 19, 2013)

That's an odd one. I suppose it has something to do with how the credit scheduler is configured, or whether cgroups have been made overly aggressive or, at the other extreme, far too relaxed. I just looked at a KVM VPS I have and it's staying at 0.0; I also checked a VPS on one of my very busy Xen nodes (using the standard credit scheduler) and that one is at 0.0 - 0.5 max. All the others are just at 0.

Sounds like a sure sign of CPU abuse. CPU affinity helps, though I'm not sure how that translates to KVM.
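On the affinity point - on Linux you can pin a process (say, a noisy guest's QEMU PID) to specific host cores straight from the standard library, which keeps its contention off the other cores. A minimal sketch; the PID here is hypothetical (0 means "the calling process"):

```python
import os

pid = 0  # hypothetical target; a real QEMU/KVM PID would go here
allowed = sorted(os.sched_getaffinity(pid))  # cores the process may use now
os.sched_setaffinity(pid, {allowed[0]})      # pin it to one core
print(sorted(os.sched_getaffinity(pid)))     # the new single-core set
```

This is the same mechanism the `taskset` command uses; it only works on Linux.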

Out of interest, what is your %wa like?


----------



## raindog308 (Sep 21, 2013)

There was a long peak but now it's back to normal - less than 10%

Actually, during this peak, the KVM would periodically drop off the network entirely - couldn't ping to/from it or connect in any way except the console.

A ticket fixed it...seems to be running, as submariners say about torpedoes, hot, straight, and normal now.


----------



## Francisco (Sep 21, 2013)

We're actually replacing the E3's in all of our setups sometime next month.

The E3's simply don't provide enough cores for our needs so we're replacing all of them with dual hex core L5639's.

While the 5639's are a lower clock (2.13GHz vs 3.xGHz), there won't be nearly as much, if any, of the contention current users might see. Most of the nodes are fine, but there are a few where Anthony has had to stop clients from running seriously heavy workloads.

We've got an email going out this weekend documenting it all, as well as the SSD upgrades to the OVZ's.

Francisco


----------



## Shados (Sep 21, 2013)

Francisco said:


> We're actually replacing the E3's in all of our setups sometime next month.
> 
> 
> The E3's simply don't provide enough cores for our needs so we're replacing all of them with dual hex core L5639's.
> ...


I have a dedi box with dual L5639's, and I have to say that they are really nice :3.


----------



## Francisco (Sep 21, 2013)

Shados said:


> I have a dedi box with dual L5639's, and I have to say that they are really nice :3.


We use L5638's in our OVZ's and we really like it.

An E5 build would be a bit faster (say 10 - 15%), but an E5 will run me $2000+ for CPUs/board, whereas an L5639 setup is < $500.

I'm all for performance but I see no reason to bleed money for such a small margin.

Francisco


----------



## Reece-DM (Sep 22, 2013)

Francisco said:


> We use L5638's in our OVZ's and we really like it.
> 
> 
> An e5 build would be a bit faster (say 10 - 15%) but an E5 will run me $2000+ for CPU's/board, where as an L5639 setup is < $500.
> ...


Are you missing the costs of drives in that as well?


----------



## Francisco (Sep 22, 2013)

Reece said:


> Are you missing the costs of drives in that as well?


Drive cost is the same between those two.

Francisco


----------



## shovenose (Sep 22, 2013)

Correct me if I'm wrong but won't those use a lot more power than an E5?


----------



## Francisco (Sep 23, 2013)

shovenose said:


> Correct me if I'm wrong but won't those use a lot more power than an E5?


Than an E3? A little bit. It's really not optional, though. To fix the issue we'd have to either start breaking shins far more often or start adding more nodes, and if we start adding more nodes we've got to burn more power anyway.

I'd rather use the dual hex cores since I can safely add bigger plans w/o concern.

An E5, though, is normally 80W+ per socket. You can use 2630L's like we do on the east coast and they'll do 60W each, just like the L5639's.

Both servers use the same RAM.

Francisco


----------



## raindog308 (Oct 3, 2013)

Aldryic C said:


> Those nodes were pretty beefed up, you shouldn't be feeling any pain on those.  Mind tossing us a ticket next time it occurs?  Anthony's gotten very adept at tracking down KVM CPU abuse - we'll get that straightened out.


Just did 

Well...an email, given the WHMCS situation.


----------



## manacit (Oct 3, 2013)

I had a high-CPU process (load average 5-6) running on a FlipHost KVM for a few days (I asked first, and made sure he knew he could tell me to stop or lower my KVM's priority), and I was only seeing 2-6% ni, even when I was at 90 - 100% CPU. That's a lot less than I was expecting, and super reasonable.


----------

