
Ideal Load Average for a KVM node?

SkillerzWeb

New Member
OK, let's say I've got a server with KVM virtualization, fully filled. What would be the ideal CPU load average for good VPS performance?

I usually have loads like 30, 25, 20, etc., and the disk is SSD.

Thank you.
 

MartinD

Retired Staff
Verified Provider
That would depend on the CPU. And RAM. And drive storage array.

And the number of VMs you have. And what they're doing.

What you're asking is..

"Ok lets say i got a car, fully filled.. What would be the ideal average speed to get from A to B?

I usually drive at like, 50mph"

:)
 

HalfEatenPie

The Irrational One
Retired Staff
There are so many factors that contribute to this.  Asking for an actual "number" is... well... it's not going to end well for you.  A load of 10 on an E3 is very different from a load of 10 on a dual E5 server.

The load number itself factors in not only CPU, but also IO, RAM, etc.  

As for calling a node "full", that mostly varies from person to person and depends on what each VM is doing.  My definition is usually worked out on a spreadsheet and depends on all the factors above.  In addition, it looks at how many cores are shared between people and tries to arrive at a "general rule" for load management.  I always like to keep some wiggle room and, of course, room for server maintenance (enough space on the hard drive to install updates and any additional software I might need, etc.).

Or... you could always go the CVPS route and just stack people on a single E3 server until one person complains and move just that person over.  
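(A toy sketch of the spreadsheet approach mentioned above, in Python; every number and VM name here is made up purely for illustration, not a real provisioning rule. The idea is just to track vCPUs sold against the node's physical threads and keep some wiggle room.)

```python
# Hypothetical capacity bookkeeping: vCPUs sold vs. physical threads on the node.
NODE_THREADS = 8          # e.g. a single E3 with hyper-threading
OVERSELL_RATIO = 3.0      # assumed comfort level: 3 vCPUs sold per thread
HEADROOM = 0.10           # keep ~10% back for maintenance and spikes

vms = [                   # hypothetical inventory: (name, vCPUs)
    ("vm-web", 2),
    ("vm-db", 4),
    ("vm-bnc", 1),
]

sold = sum(vcpus for _, vcpus in vms)
capacity = NODE_THREADS * OVERSELL_RATIO * (1 - HEADROOM)

print(f"{sold} vCPUs sold / {capacity:.1f} vCPU budget")
if sold > capacity:
    print("Node is 'full' by this definition - stop adding VMs")
```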
 

SkillerzWeb

New Member
That would depend on the CPU. And RAM. And drive storage array.

And the number of VMs you have. And what they're doing.

What you're asking is..

"Ok lets say i got a car, fully filled.. What would be the ideal average speed to get from A to B?

I usually drive at like, 50mph"

:)
Well, isn't the load average the queue of processes waiting for the CPU to complete? Each CPU has a different amount of processing power, but the waiting list / load average would be common, right? I mean, a 2 GHz core would have a load average of 0.5 for a process, and a 4 GHz core would have a load average of 0.25 for the same process. So how come it isn't possible to have an ideal load average?

@HalfEatenPie - I lol'd really hard at "you could always go the CVPS route and just stack people on a single E3 server until one person complains and move just that person over."

-Thanks-
 

MartinD

Retired Staff
Verified Provider
I mean, a 2 GHz core would have a load average of 0.5 for a process, and a 4 GHz core would have a load average of 0.25 for the same process. So how come it isn't possible to have an ideal load average?
I think your math is flawed a little here. That aside, the 'ideal' load average depends entirely on the processor, the RAM, your storage array and what the VMs are running, like I said. HalfEatenPie also explained that the load on a given processor will differ depending on the task at hand.

'Ideal load' is subjective. You could say an overall load of 10 is 'ideal' and 'okay', but your customer who's doing a lot of complex calculations on intensive, large MySQL DBs might not feel the same way about it. Likewise, that same customer may think a load never above 0.5 is acceptable, whereas another customer who's only running a BNC or a single HTML page on their VM couldn't give two dinglies!

It just depends on the given situation and all manner of other factors, so I don't think there is an 'ideal load'.
 

HalfEatenPie

The Irrational One
Retired Staff
'Ideal load' is subjective. You could say an overall load of 10 is 'ideal' and 'okay', but your customer who's doing a lot of complex calculations on intensive, large MySQL DBs might not feel the same way about it. Likewise, that same customer may think a load never above 0.5 is acceptable, whereas another customer who's only running a BNC or a single HTML page on their VM couldn't give two dinglies!
Hehe.  I run my BNC on a Prometeus 256 MB KVM.  Overpowered for the task? Hell yes.  But...  I love that Prometeus KVM to death. 

Well, isn't the load average the queue of processes waiting for the CPU to complete? Each CPU has a different amount of processing power, but the waiting list / load average would be common, right? I mean, a 2 GHz core would have a load average of 0.5 for a process, and a 4 GHz core would have a load average of 0.25 for the same process. So how come it isn't possible to have an ideal load average?

@HalfEatenPie - I lol'd really hard at "you could always go the CVPS route and just stack people on a single E3 server until one person complains and move just that person over."

-Thanks-
To put it very generally...

You can also get high load from IO abuse.  The CPU probably isn't doing much, but you'll still see high load because there's a large amount of data that needs to be pushed through a small pipe.

You say "I have SSD" but SSD simply states that you have a bigger pipe to transfer data from storage to RAM.  Load due to IO can still happen.  

Also, the load number is relative to the machine you're running it on.  There's a difference in the load values between a dual-core 4.0 GHz server and a quad-core 2.0 GHz server.  That's what we're trying to address here.  Normally you don't calculate load on a per-CPU basis; you count it for the machine as a whole.
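(A minimal sketch of that "whole machine" point, assuming a Linux node and Python's standard library; the function name is just for illustration. It divides the load averages by the logical CPU count so boxes with different core counts can be compared on the same scale.)

```python
import os

def per_core_load():
    """Normalize the 1/5/15-minute load averages by the logical CPU count.

    A raw load of 8 on an 8-thread E3 and a raw load of 24 on a 24-thread
    dual E5 both come out as roughly 1.0 per core.
    """
    one, five, fifteen = os.getloadavg()   # same numbers `uptime` shows
    cores = os.cpu_count() or 1            # logical CPUs (threads if HT is on)
    return {"cores": cores,
            "1min": one / cores,
            "5min": five / cores,
            "15min": fifteen / cores}

print(per_core_load())
```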
 

wlanboy

Content Contributer
I usually explain load with the road analogy.

A single-core CPU is like a road with a single lane in one direction. You want to measure the traffic on that road. A suitable metric would be how many lanes' worth of cars are waiting to get onto the road.

Back to the numbers

  • 0.00 means there's no traffic on the road at all. Between 0.00 and 1.00 there's traffic, but nobody is waiting.
  • 1.00 means the road is exactly at capacity. Everything is still fine, but if anything more shows up, cars will start to queue.
  • Over 1.00 means there are cars waiting. 2.00 means there are enough cars to fill two roads: one lane of cars on the road and one lane waiting to get onto it.
On a multi-processor system the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, and 4.00 on a quad-core system.

Not talking about multicore vs. multiprocessor or hyper-threading.

I/O wait is a completely different story.

I/O wait is the share of processor ticks during which the CPU sits idle while waiting for outstanding I/O requests to complete.
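(For the I/O wait side, here's a rough sketch assuming a Linux node where /proc/stat is available; the 5-second sampling window is arbitrary. It samples the kernel's aggregate CPU tick counters twice and reports what share of the interval was spent in iowait.)

```python
import time

def cpu_ticks():
    """Return (iowait, total) ticks from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        # First line: "cpu  user nice system idle iowait irq softirq steal ..."
        fields = [int(x) for x in f.readline().split()[1:]]
    return fields[4], sum(fields)

io1, total1 = cpu_ticks()
time.sleep(5)                     # arbitrary sampling window
io2, total2 = cpu_ticks()

elapsed = total2 - total1
print(f"iowait over the last 5s: {100.0 * (io2 - io1) / elapsed:.1f}%")
```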
 

George_Fusioned

Active Member
Verified Provider
Load on the node should stay below the number of CPU cores (or threads, if the CPU is HT-capable).
For example: on an E3-based node keep it under 8.00, on a dual E5-2630 node keep it under 24.00, etc.
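(A minimal sketch of that rule of thumb, assuming a Linux node and Python; os.cpu_count() already reports threads when hyper-threading is on, so the 8.00 / 24.00 thresholds above fall out automatically.)

```python
import os

load1, load5, load15 = os.getloadavg()
threads = os.cpu_count() or 1     # logical CPUs, i.e. threads with HT enabled

if load5 > threads:
    print(f"WARNING: 5-min load {load5:.2f} exceeds the {threads}-thread capacity")
else:
    print(f"OK: 5-min load {load5:.2f} on {threads} threads")
```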
 