
CPU Fair Use

NodeBytes

Dedi Addict
I just got my first warning for too much resource usage and I was wondering what you all think is "fair use" on part of a VPS provider. Would a consistent 1.0 load be considered too much usage? What would you consider "fair use" in a shared VPS environment?
 

concerto49

New Member
Verified Provider
A load of 1 is pretty much a full dedicated CPU core; anything higher is even more. Say there are 4 cores in an E3 - that means you've got 1/4 of the server dedicated to you. It depends on the plan you are on, but it wouldn't make sense for a 256MB VPS plan to take 1/4 of the server.
 

HalfEatenPie

The Irrational One
Retired Staff
Well, it depends on what plan you have.

But like concerto49 said, as a rule of thumb you can assume a 1.00 load to be equivalent to one fully used CPU core.  If you're pulling a load of 1 on a 256MB VPS for 24 hours straight, then that's a problem.

Of course this is all at the discretion of your provider and their management habits.  
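
If you want to check where you stand against that rule of thumb, here's a minimal sketch (Python, assuming the core count your VPS reports is what your plan actually includes) that compares the recent load averages to your allotted cores:

Code:
import os

# 1-, 5- and 15-minute load averages as seen from inside the VPS
load1, load5, load15 = os.getloadavg()

# Cores visible to the guest (assumed to equal what the plan includes)
cores = os.cpu_count() or 1

# Rule of thumb from this thread: a sustained load of 1.0 ~= one fully busy core.
if load15 >= cores:
    print(f"15-min load {load15:.2f} >= {cores} core(s): likely to draw a warning")
else:
    print(f"15-min load {load15:.2f} is within the {cores} core(s) allotted")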
 

clarity

Active Member
As a user, I feel that I can max out my plan. If I have 1 core included, I should be able to have a load of 1 all the time.
 

WSWD

Active Member
Verified Provider
As a user, I feel that I can max out my plan. If I have 1 core included, I should be able to have a load of 1 all the time.
If you want a dedicated core, you should pay for it.  If you want to use 1 core all the time, on a 4 core server, for example, you should pay 1/4 of the cost of the server.
 

KuJoe

Well-Known Member
Verified Provider
There is a HUGE difference between dedicated resources and shared resources. In most cases, CPU and network ports are shared resources for VPSs so using 100% of your non-dedicated resource is normally a violation of TOS because you can easily impact the whole node.

As others have said, if you are on a node with a quad core CPU and you share that node with 10 VPSs, chances are you will not be able to hog 1 of those cores 24x7 unless you're paying a premium for a dedicated core.

Now if you're on a node with 24 cores and you are one of 10 VPSs on that node, then there is a good chance you will get 1 core dedicated to you, or at the very least you can run 100% CPU 24x7 without impacting the node too much.

The bottom line is, check with your provider. If they list the CPU as a shared resource, then they are within their rights to limit the usage of that resource if they find you are preventing other clients from receiving a fair share.

As a provider, we have scripts in place to alert us if somebody maxes out a shared resource for X hours, just for this reason. Normally we get a dozen or so alerts a day, and 9 out of 10 times the VPSs that are maxing out the CPU are running scripts that violate our TOS anyway. We used to have clients that purchased a handful of 32MB VPSs from us and ran SETI@home and Folding@home on them, maxing out 3/4 of the node's CPU cores. That is NOT fair to the other clients on that node, left fighting over the remaining 2 CPU cores at all times.
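
I won't describe the scripts in detail, but a hypothetical sketch of the idea (alert when a node stays maxed out for X hours straight) could look like the following; the 90% threshold, one-minute sampling, and print-style alert are all assumptions, not our actual setup:

Code:
import os
import time

# Hypothetical "maxed out for X hours" alert sketch, not any provider's real script.
CORES = os.cpu_count() or 1
THRESHOLD = 0.9 * CORES      # treat the node as "maxed out" above 90% of its cores
SAMPLE_SECONDS = 60          # one sample per minute
ALERT_HOURS = 2              # X hours of sustained load before alerting

maxed_minutes = 0
while True:
    load5 = os.getloadavg()[1]
    if load5 > THRESHOLD:
        maxed_minutes += 1
    else:
        maxed_minutes = 0
    if maxed_minutes >= ALERT_HOURS * 60:
        print(f"ALERT: load {load5:.2f} above {THRESHOLD:.2f} for {ALERT_HOURS}h straight")
        maxed_minutes = 0    # reset so the alert doesn't repeat every minute
    time.sleep(SAMPLE_SECONDS)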
 

scv

Massive Nerd
Verified Provider
If the CPU usage isn't affecting other customers, and isn't a general waste of cycles in a VPS environment (F@H, *coin mining, password cracking), there should be no problem with it. A provider should start to limit this usage once it starts to contend with other containers, though.
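
The thread doesn't say how a provider would actually enforce such a limit; one common mechanism on a Linux host is the CFS bandwidth controller in cgroups. A rough sketch, assuming cgroup v1 mounted at /sys/fs/cgroup/cpu and an existing per-container group (the name "ct101" is made up):

Code:
from pathlib import Path

def cap_container_cpu(cgroup: str, cores: float, period_us: int = 100_000) -> None:
    """Cap a container's CPU using the CFS bandwidth controller (cgroup v1).

    cgroup: name of the container's existing group under /sys/fs/cgroup/cpu.
    cores:  how many cores' worth of CPU time to allow, e.g. 0.25 for a quarter core.
    Requires root on the host.
    """
    base = Path("/sys/fs/cgroup/cpu") / cgroup
    quota_us = int(cores * period_us)               # runtime allowed per period
    (base / "cpu.cfs_period_us").write_text(str(period_us))
    (base / "cpu.cfs_quota_us").write_text(str(quota_us))

# Example: limit the hypothetical container "ct101" to a quarter of one core
# cap_container_cpu("ct101", 0.25)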
 

cfg.co.in

New Member
It's a very open-ended question and a lot of factors come into it.

But in short, you know what you're doing and whether that code, script, or SQL query is worth running on that VPS or not.

But exceptions always exist.
 

NodeBytes

Dedi Addict
This is my main status server that runs Observium and a couple of custom scripts, so I've decided to move it to one of my dedicated servers. 
 

KuJoe

Well-Known Member
Verified Provider
We had to purchase additional cores for our monitoring VPS because Observium loves spiking the CPU.
 

Francisco

Company Lube
Verified Provider
We had to purchase additional cores for our monitoring VPS because Observium loves spiking the CPU.
Munin is the devil for that.

We had quite a few issues with our older 128MB nodes because a ton of users would have their Munin crons all firing at once. We tracked the loads and the node would sit at 1 - 2 load... spike to 10+ for 30 seconds, then fall >_> Drove me mad.

Francisco
 

nunim

VPS Junkie
We had to purchase additional cores for our monitoring VPS because Observium loves spiking the CPU.
I have not experienced this problem, but I'm sure you are monitoring far more servers.  I have an Observium install running on a RamNode 128MB SSD VPS, and it is monitoring about 20 servers.

 

mtwiscool

New Member
In our terms of service we say 10%, but we actually allow 2 vCores at 15% each. We would give full cores, but we fear abuse due to it being free.
 

KuJoe

Well-Known Member
Verified Provider
@KuJoe - Random, but were you using the EdgeRouter Lite?
Yes. I really miss them and would love to get these installed into all of our DCs in the future.

I have not experienced this problem, but I'm sure you are monitoring far more servers.  I have an Observium install running on a RamNode 128MB SSD VPS, and it is monitoring about 20 servers.
You're monitoring a lot more servers than we are. I don't have any graphs on our monitoring servers but we are running more than just Observium on it so that could be the cause but before we installed Observium 1 core and 128MB of RAM was plenty.
 

nunim

VPS Junkie
You're monitoring a lot more servers than we are. I don't have any graphs on our monitoring servers but we are running more than just Observium on it so that could be the cause but before we installed Observium 1 core and 128MB of RAM was plenty.
Well, I'm running a PHP looking glass and Observium, but that's pretty much it.  I've configured Observium to use nginx+php-fpm instead of Apache, which helped me a lot with CPU and memory load. I figured why not monitor the monitoring server? I had initially installed it on a 512MB VPS as per their requirements, but after setting up nginx and monitoring it for a week I realized I could get by with 128MB, although SSD surely helps as I can thrash the DB.
 

peterw

New Member
Munin is the devil for that.


We had quite a few issues with our older 128MB nodes because a ton of users would have their Munin crons all firing at once. We tracked the loads and the node would sit at 1 - 2 load... spike to 10+ for 30 seconds, then fall >_> Drove me mad.


Francisco
This takes me by surprise. I know many tasks run at specific times. Do providers not talk to customers to spread out load times? I had one company send me a suggestion to move some of my crons one hour ahead. I did not have a problem with that.
 

Francisco

Company Lube
Verified Provider
This takes me by surprise. I know many tasks run at specific times. Do providers not talk to customers to spread out load times? I had one company send me a suggestion to move some of my crons one hour ahead. I did not have a problem with that.
It's rare.

I've been looking into adding more tracking for this in Stallion to help users with it.

The problem is people really want things at the :05 or :20 marks for munin :)
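
One hypothetical way to spread those out is to derive a stable per-host minute offset instead of everyone keeping the :05/:20 defaults; the schedule format and munin-cron path below assume a Debian-style setup:

Code:
import hashlib
import socket

def staggered_cron_minutes(step: int = 5) -> str:
    """Pick a per-host offset (0..step-1) from a hash of the hostname so not every
    VPS on a node fires its Munin cron on the exact same minute."""
    offset = int(hashlib.sha256(socket.gethostname().encode()).hexdigest(), 16) % step
    return ",".join(str(m) for m in range(offset, 60, step))

# Prints an /etc/cron.d style line, e.g. "3,8,13,...,58 * * * * munin /usr/bin/munin-cron"
print(f"{staggered_cron_minutes()} * * * * munin /usr/bin/munin-cron")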

Francisco
 

scv

Massive Nerd
Verified Provider
Yes. I really miss them and would love to get these installed into all of our DCs in the future.

I would reconsider that until UBNT has released an updated version of the hardware. Not a single ERL I received lasted a year before failing outright.
 

KuJoe

Well-Known Member
Verified Provider
@scv We've still got 2 that are running strong. Luckily we had redundancy in case of hardware failure.
 