# Limiting the CPU resources a bash script can consume when running?



## MannDude (Jun 8, 2014)

Is it possible to limit the amount of CPU a bash script can consume when it runs? I've a bash script running via a cron job every 4 hours, and when it initiates it eats up the CPU for about 15-20 seconds which causes timeouts and/or extremely slow page loading at the time the script initiates.

Was just wondering if it's possible to limit how much CPU it can consume, and if so, how.

Thanks!


----------



## WebSearchingPro (Jun 8, 2014)

You should be able to use cpulimit.

manpage

Though that may require a workaround, like having the script find its own PID and run cpulimit against it.
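A sketch of that workaround, to paste near the top of the cron script (the 25% cap is made up; the `-p`, `-l` and `-z` flags are from the cpulimit manpage). Note that classic cpulimit throttles only the named PID, not its children, so a CPU-heavy child command may need its own wrapper:

```shell
#!/bin/bash
# Sketch: throttle this shell's own PID ($$) to ~25% of one core.
# -z (--lazy) makes cpulimit exit once the script itself ends.
cpulimit -p $$ -l 25 -z &

# ... rest of the script runs as before, throttled ...
```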


----------



## wcypierre (Jun 8, 2014)

You can limit the process by process name instead of PID by passing "-e" instead of "-p".

http://www.cyberciti.biz/faq/cpu-usage-limiter-for-linux/
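A hypothetical one-liner in that style (the script name `backup.sh` and the 20% cap are made up), run as a watcher alongside the cron job:

```shell
# Attach to the running script by executable name and cap it at 20% of
# one core; -z tells cpulimit to exit once no matching process remains.
cpulimit -e backup.sh -l 20 -z
```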


----------



## drmike (Jun 8, 2014)

Question:  What exactly is gobbling resources? (the main command firing that is tanking things)

I'll assume it is mysqldump, a common culprit in such scenarios. Backups are prone to resource issues due to IO piling up; it's rarer (and less common) for the bottleneck to truly be CPU.

For the IO-related clog/bottleneck, there is ionice ---> man ionice (kind of a useless manpage).

Here's the command I'd be integrating into the backup script:

```
ionice -c3 nice -n19 mysqldump --single-transaction --quick --lock-tables=false (rest of the command particulars)
```

Additionally, I'd step through the script, running a single command at a time, and make sure nothing else is punching the server hard. If something is, wrap it in ionice also.
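If the whole script, not just mysqldump, should run at low priority, one trick is to have it re-exec itself once under ionice and nice, so everything it runs afterwards inherits both. A minimal sketch (the script body here is a stand-in; the idle IO class `-c3` is available to unprivileged users on modern kernels):

```shell
#!/bin/sh
# Sketch: write a stand-in backup script that re-execs itself once
# under idle IO class (ionice -c3) and lowest CPU priority (nice -n19).
cat > /tmp/lowprio-backup.sh <<'EOF'
#!/bin/sh
if [ -z "$LOWPRIO" ]; then
    export LOWPRIO=1                       # guard against looping
    exec ionice -c3 nice -n19 "$0" "$@"    # re-exec at low priority
fi
echo "niceness is now $(nice)"
# mysqldump --single-transaction --quick ... would go here
EOF
chmod +x /tmp/lowprio-backup.sh
/tmp/lowprio-backup.sh
```

When run, the script should report the lowest niceness (19), confirming every later command inherits the low priority.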


----------



## Deleted (Jun 8, 2014)

limits.conf


----------



## tonyg (Jun 8, 2014)

I would audit the script to find out which processes are being fired off that are eating CPU cycles.

As an example, sometimes a script will have separate grep and awk commands, and in many cases only one needs to be run.

Also, liberal use of "cat" can often be streamlined away, keeping pipes to a minimum.

There are way too many pitfalls that can slow down a poorly written script.
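A sketch of that kind of cleanup (the log file and its fields here are made up): a needless cat feeding a grep piped into awk can collapse into a single awk invocation, since awk can do the pattern matching itself:

```shell
# Sample data to work against.
printf 'ok alpha\nfail beta\nok gamma\n' > /tmp/demo.log

# Three processes and two pipes:
cat /tmp/demo.log | grep '^fail' | awk '{print $2}'   # prints "beta"

# One process, same output:
awk '/^fail/ {print $2}' /tmp/demo.log                # prints "beta"
```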


----------



## raindog308 (Jun 8, 2014)

You can use nice 



```
NAME
       nice - run a program with modified scheduling priority

SYNOPSIS
       nice [OPTION] [COMMAND [ARG]...]

DESCRIPTION
       Run COMMAND with an adjusted niceness, which affects process
       scheduling.  With no COMMAND, print the current niceness.
       Nicenesses range from -20 (most favorable scheduling) to 19
       (least favorable).

       -n, --adjustment=N
              add integer N to the niceness (default 10)

       --help display this help and exit

       --version
              output version information and exit

       NOTE: your shell may have its own version of nice, which usually
       supersedes the version described here.  Please refer to your
       shell's documentation for details about the options it supports.
```

Linux also has ionice.


----------



## wcypierre (Jun 9, 2014)

raindog308 said:


> You can use nice
> 
> 
> 
> ...


nice won't help, tbh. nice only affects the scheduler's priority for that particular process, so instead of it running at 1x it might run at 0.25x, which doesn't help much, since our computers are fast enough that the difference isn't noticeable for non-realtime apps. Not to mention the script only runs for 15-20 seconds, so nice will just stretch the execution out past 20 seconds while the spike in CPU usage is still there (which may be more than MannDude wanted). Hence, cpulimit would be the correct way to limit the CPU usage, as it caps it at the user's desired level.


----------



## raindog308 (Jun 9, 2014)

Perhaps I misunderstand nice.  My perception is that it works like this:

- let's say you have 100 processes

- let's say you have 100,000 ticks of the clock in a given period of time

- the processes with a higher priority will get a bigger share of those ticks.

So if you have two processes (one at a low priority, one at high), over a 20 minute period of time, the low priority job will get less CPU than the high priority job.  It's a "how big is your share over time" thing, though I suspect in practice it works best when you have many competing jobs rather than just a couple fighting.  But even in that case, the job with the higher priority will finish sooner (if CPU is the only factor considered).

You're right though that nice will still let a job use all unused resources - it'll just surrender CPU cycles to new jobs with a higher priority.


----------



## wcypierre (Jun 9, 2014)

raindog308 said:


> Perhaps I misunderstand nice.  My perception is that it works like this:
> 
> - let's say you have 100 processes
> 
> ...


yes. literally that's it.


----------



## drmike (Jun 9, 2014)

> Hence, cpulimit would be the correct way to limit the cpu usage as it caps the cpu usage at the user's desired level.


There's an assumption at play here that the running job truly is pegging the CPU and that that's the problem. mysqldump and other common utilities don't really explode CPU to dumb levels; it's the IO, silly.

I guarantee the CPU blow-up is based on an IO bottleneck.  IO always goes boom first (at least on the hardware I've seen so far).

A CPU blow-up by its lonesome wouldn't be a problem here.  An IO-based CPU blow-up, or a conjunction of both as in this instance, will create a clog.  Heck, the job may be forcing RAM to burst to the ceiling and beyond, creating malfunctions where PHP goes loco and the site crashes (we've seen exactly that end result).

This is why I recommended ionice.

It's just prudent to start resource capping (nice'ing) when there's a perception of issues.  It shouldn't hurt the running job, other than a longer run time.

Me, I'd run the script manually in a terminal and monitor the various CPU and RAM consumption, iowait, etc. in another terminal.  Isolate any issues (if there are any) and identify anything that gets up there on consumption.
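A minimal version of that two-terminal setup, sketched with common tools (the script path is a placeholder; vmstat ships with procps, pidstat with sysstat):

```shell
# Terminal 1: run the script with tracing so each command is visible
# as it fires.
bash -x /path/to/backup.sh

# Terminal 2: sample system-wide CPU, memory, swap and iowait (the
# "wa" column) once a second while it runs.
vmstat 1

# Or watch per-process CPU (-u), memory (-r) and disk IO (-d):
pidstat -urd 1
```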


----------

