# SolusVM High RAM Usage



## VPSSoldiers (Aug 25, 2015)

So I have noticed over the past few days that the RAM usage on my hypervisor has been slowly creeping up, and it's gotten to the point where I'm concerned.

                 total       used       free     shared    buffers     cached
    Mem:         72496      72169        326          1       5753      11578

VM usage should only be at most around 25GB, and I can't seem to find where about 30GB has gone. To me it almost seems like the qemu processes are forking but not releasing the RAM from the old processes. I would create a support ticket, but I like to do things myself and they tend to lean towards "let us do it for you", plus I'm not super keen on the idea of someone else poking around my servers that have clients on them. I also have KSM disabled. Well, I've never really dealt with the KSM stuff: one thing said it was disabled and something else showed it was enabled, so I'm not really sure; I guess it's whatever the default is from a fresh CentOS / SolusVM install.
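For what it's worth, on most kernels you can settle the KSM question by reading the sysfs flags directly (paths assumed from a stock CentOS/KVM install, so this is a sketch, not a SolusVM-specific answer):

```shell
# /sys/kernel/mm/ksm/run: 0 = stopped, 1 = running, 2 = unmerge all pages
if [ -r /sys/kernel/mm/ksm/run ]; then
    echo "KSM run flag:  $(cat /sys/kernel/mm/ksm/run)"
    # pages_sharing > 0 means KSM is actively deduplicating guest pages
    echo "pages_sharing: $(cat /sys/kernel/mm/ksm/pages_sharing)"
else
    echo "KSM not available on this kernel"
fi
```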

Has anyone had this happen? Any ideas?


----------



## HalfEatenPie (Aug 25, 2015)

1 link.

http://www.linuxatemyram.com/

(since you only showed the first line and not the buffer/cache values)

Also, OpenVZ or KVM or Xen?


----------



## VPSSoldiers (Aug 25, 2015)

KVM (qemu)

    -/+ buffers/cache:      54898      17598

And I realize that "unused RAM is wasted RAM", but what I don't understand is that this started about two weeks ago: it took about a week to get down to where it is now, and it now seems to be using about 1% more each day, even when no changes are happening.


----------



## VPSSoldiers (Aug 25, 2015)

So I guess another question I should raise: when you set the max usable RAM for a node, does it take the caching into account, or will it freak out (by that I mean consider the node full) when caching + used space reaches that level?


----------



## DomainBop (Aug 25, 2015)

If you want to find the culprit, did you try the usual:

    ps aux | awk '{print $4"\t"$11}' | sort | uniq -c | awk '{print $2" "$1" "$3}' | sort -nr

or:

    cat /proc/meminfo    <-- check the huge pages settings
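A variant of that pipeline (my own sketch, not from the thread) that sums resident memory per command name instead of listing per-process %MEM, which makes a slow leaker easier to spot:

```shell
# Sum RSS (resident set size, converted from kB to MB) per command name,
# largest total first.
ps -eo rss,comm --no-headers \
  | awk '{ sum[$2] += $1 } END { for (c in sum) printf "%8.1f MB  %s\n", sum[c] / 1024, c }' \
  | sort -rn
```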


HalfEatenPie said:


> 1 link.
> 
> http://www.linuxatemyram.com/
> 
> ...



Nice link...what if you have a poorly written PHP program with a memory leak that really is eating your RAM? 



Quote said:


> "unused ram is wasted ram"


RAM is relatively cheap, so I tend to think of unused RAM as insurance for traffic spikes.



Quote said:


> when you set the max useable ram for a node does it take the caching into account or will it freak out (by that I mean will it consider the node full) when the caching + used space reaches that level?


It will look at the used memory excluding buffers/cache (used = used − buffers − cached), i.e. the *54898* in your ( -/+ buffers/cache:      *54898*      17598) line.
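Plugging in the numbers from the first post (a sanity check only; the cache obviously changed a bit between the two `free` runs):

```shell
# free(1) first-row figures from the original post, in MB
total=72496; used=72169; buffers=5753; cached=11578

# "-/+ buffers/cache" used = used - buffers - cached
app_used=$((used - buffers - cached))
echo "app used: ${app_used} MB"            # 54838 MB, close to the 54898 posted later
echo "app free: $((total - app_used)) MB"  # the rest is free or reclaimable cache
```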


----------



## HalfEatenPie (Aug 25, 2015)

DomainBop said:


> If you want to find the culprit, did you try the usual:
> 
> ps aux | awk '{print $4"\t"$11}' | sort | uniq -c | awk '{print $2" "$1" "$3}' | sort -nr
> 
> ...


Haha, well you got me there.  I was simply going off the fact that we only got the first line of the `free -m` output instead of the entire picture.


----------



## VPSSoldiers (Aug 25, 2015)

> ps aux | awk '{print $4"\t"$11}' | sort | uniq -c | awk '{print $2" "$1" "$3}' | sort -nr


Well, that didn't show anything that looked bad; the highest percentage was 2.5.



> cat /proc/meminfo <--check the huge pages settings


    AnonHugePages:  15190016 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
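That AnonHugePages figure works out to roughly 14.5 GB of anonymous memory backed by transparent huge pages, which on a KVM node is mostly qemu guest RAM. A quick way to inspect it (sysfs path assumed from a stock CentOS kernel):

```shell
# Convert the AnonHugePages value from /proc/meminfo (kB) to GB
awk 'BEGIN { printf "AnonHugePages: %.1f GB\n", 15190016 / 1024 / 1024 }'

# Show the current transparent huge page mode; the bracketed word is active
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || echo "THP sysfs entry not present"
```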



> It will look at the used memory: (used=total - buffers - cache) ( -/+ buffers/cache:      *54898*    17598)


That makes me feel better. It's just confusing since the panel shows 76% memory usage, but then again it's not wrong.
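If the panel is computing usage from the buffers/cache-adjusted figure, the 76% checks out:

```shell
# 54898 MB used (after buffers/cache) out of 72496 MB total
awk 'BEGIN { printf "%.0f%%\n", 54898 / 72496 * 100 }'
```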



> RAM is relatively cheap, so I tend to think of unused RAM as insurance for traffic spikes.



I can't remember where I first saw the whole "unused RAM is wasted RAM" statement, but I've heard it for years, and since I've never really had issues like this before, that's the assumption I've been going on. But like I said, the main reason I'm so puzzled is that everything was fine, using minimal RAM, up until about two weeks ago; then over a seven-day span you could watch the amount of free RAM go down day by day.


----------

