# Big RAM VPS - who actually uses them and for what?



## drmike (Dec 31, 2013)

It has been a while since I was last roped into trying one of the abundant 2GB and larger VPS plans.

Last go-round (and I tried multiple providers over a six-month span) all of them failed.  Usually the provider would have a fit as I loaded a test dataset into MySQL and consumed disk I/O + CPU.

While these plans might tempt with the face value of large RAM, they are often crushed by hard-set limits on IOPS, CPU utilization and other "good behavior" limiters.

So, to those of you buying and using these plans: what have you managed to get working, and what success have you had running anything for, say, at least a month?   Looking for use scenarios and no provider names (and if you are a provider, no talking up your own special container without the limitations).


----------



## mikho (Dec 31, 2013)

I think the VPS with the most RAM I have is a 512MB with 1024MB burst.


It's currently down and has been for some time, only because I've been too lazy to submit a ticket about it.


----------



## sundaymouse (Dec 31, 2013)

Most hard-set limits are on OpenVZ, where it is possible for providers to sell XXL containers at low-end prices. It really depends on how much CPU and IO power you *continuously* use.

For example, if you have a 2GB Xen PV VPS (you can get that for about $20/mo somewhere now), with good CPU dispatch policy, if your high CPU+IO load only happens occasionally, I don't think the provider will really mind. At least Linode doesn't mind my occasional ffmpeg encoding (about one hour each week).

But, if you use 1000 IOPS and 2.0 load all the time, probably the only option is to go for a cheap dedi?


----------



## Alto (Dec 31, 2013)

In the past I've tended to use the higher RAM plans where I'm unsure what my usage is going to be and I don't want to be a bad node neighbour; I normally scrap the plans once I have a decent idea of what I really need.


----------



## lbft (Dec 31, 2013)

drmike said:


> While these plans might tempt with the face value of large RAM, they are often crushed by hard-set limits on IOPS, CPU utilization and other "good behavior" limiters.


High RAM oversold-type VPSes (I mean the >=2GB $7 type) are most useful for trading memory for some other resource (disk I/O, CPU usage, etc.)

Cache, cache, cache!

- Tune your database (e.g. MySQL) so your whole DB fits in memory - that way reads don't need to hit disk at all and you only have to use precious IOPS for writes. It's great for a read-heavy workload, but watch your CPU usage. For a heavy random write workload you're likely better off with an SSD VPS.
- Even better, if you have a largely static dataset, load it into an in-memory table at boot and never hit the disk at all.
- For file-based stuff, stick it in a tmpfs - even on OpenVZ I haven't come across a situation where I couldn't make a great big tmpfs (like 90% of the VPS's memory size) if I wanted to, although I think there's something in beancounters that can limit its size if a provider so desired.
- If your data suits it, cache writes in memory too. It's especially great for data that is replaced frequently (e.g. stats, current statuses, etc. for stuff like characters in games and for monitoring systems). You have to be able to live with losing data not yet pushed to disk in case of a power cut or unexpected reboot, though, and I don't know of any off-the-shelf web apps that do this.
- Cache objects generated from the database in shared memory or memcached (or Redis I suppose, but I have no experience there).
- Store user session data in memory.
- Cache chunks of generated HTML (MediaWiki does this for its UI, for example, and it's an integral part of reddit's caching strategy, where it saves them a bunch of latency and CPU usage despite having highly dynamic pages). CloudFlare's Railgun is a similar idea, where they cache chunks of HTML to save transferring it over the network.
- Cache entire rendered pages. I know personally that nginx has useful stuff here like FastCGI caching and proxy caching. Stick the cache on a tmpfs. nginx even has a module to serve files directly from memcached. I'm sure other web servers have similar options, or otherwise stick Varnish in front (may require carefulness on OVZ, since last I heard it was a bit wonky when you mmap everything in the entire universe like Varnish and MongoDB do, although I notice that Varnish seems to have a memory-only storage backend these days).

Even then, unless you specifically design your app to use a crapton of caching and have a sufficiently large dataset, you're going to bump into other limits before you use all your memory on some of the larger offers. And in that case, depending on your data usage pattern, you may run out of disk space first. And then there's the problem that cache invalidation is hard.
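The "whole DB fits in memory" bullet mostly comes down to a couple of `my.cnf` lines. A minimal sketch, assuming MySQL/InnoDB and a dataset somewhere around 1GB; the sizes and the relaxed flush setting here are illustrative choices, not anything specific from this thread:

```ini
[mysqld]
# Make the buffer pool larger than the dataset so reads are served from RAM
# and the VPS's scarce IOPS are spent almost entirely on writes.
innodb_buffer_pool_size = 1536M

# Flush the redo log once per second instead of on every commit: far fewer
# write IOPS, at the cost of losing up to ~1s of transactions on a crash
# (assumption: that trade-off is acceptable for the workload).
innodb_flush_log_at_trx_commit = 2
```

Whether the second setting is acceptable depends entirely on how much you mind losing a second of writes; for throwaway cache-style data it usually is.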

On the other hand, the biggest advantage of high RAM plans isn't using it at all - it's simply that you can choose to never have to worry about memory usage, because you're never ever going to be able to use it all accidentally.

TL;DR: don't configure your DB like it's on a 128MB box, put EVERYTHING in an in-memory data store, and forget about using it all anyway.


----------



## telephone (Dec 31, 2013)

I have quite a few high RAM apps in use (only on KVM):

- Gitlab (eats RAM for breakfast!)

- Jenkins

- Plex with Sick-Beard/SABnzbd

- Turn caching up (MySQL, Nginx, and OPcache)

- XFCE desktop via x2go

The only warning I've received was for Plex transcoding, and that was just a slap on the wrist as I had 5 devices transcoding at once  B)


----------



## SrsX (Dec 31, 2013)

In my mind, for anything over 4GB of RAM I'd rather just get a dedicated server.

The biggest VPS I have is a 3GB one, which runs testing and Skype. I run Skype on it because there are kids out there who will *skype resolve* you and *ddos* you offline, so I'd rather they hit my VPS, which is behind a Voxility firewall - it just pisses them off that they can't, plus it saves me having to call my ISP and ask them to assign me a new static IP.


----------



## bdtech (Dec 31, 2013)

lbft said:


> Cache, cache, cache!
> 
> 
> Cache entire rendered pages. I know personally that nginx has useful stuff here like FastCGI caching and proxy caching. Stick the cache on a tmpfs. nginx even has a module to serve files directly from memcached. I'm sure other web servers have similar options, or otherwise stick Varnish in front (may require carefulness on OVZ, since last I heard it was a bit wonky when you mmap everything in the entire universe like Varnish and MongoDB do, although I notice that Varnish seems to have a memory-only storage backend these days).


Is intentionally tmpfs'ing necessary? If the file is "HIT" enough, and you have free RAM available, Linux should cache it to RAM automatically.
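bdtech's point is easy to see in practice. A throwaway demo (the file name and 64MB size are arbitrary choices): the second read of the same file comes from the page cache rather than the disk, no tmpfs required:

```shell
# Create a scratch file, read it twice, and compare the timings.
dd if=/dev/zero of=/tmp/cachetest.bin bs=1M count=64 2>/dev/null

time cat /tmp/cachetest.bin >/dev/null   # first read: may hit the disk
time cat /tmp/cachetest.bin >/dev/null   # second read: served from RAM (page cache)

rm -f /tmp/cachetest.bin
```

On a cold cache the first read is noticeably slower; run-to-run numbers vary, so none are quoted here.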


----------



## drmike (Dec 31, 2013)

telephone said:


> I have quite a few high RAM apps in use (only on KVM):
> 
> - Gitlab (eats RAM for breakfast!)
> 
> ...


These are good uses.  You mentioned KVM... Take it these are higher priced plans?  Not the < $7 month ones?


----------



## drmike (Dec 31, 2013)

RAM abuse/use: assuming here that you actually have RAM to use in such ways and the server isn't hammering SSDs to mock up RAM....

The only way it really works is if you have other servers in the same facility with super-low latency....

I mean, it will work if your other servers are 50ms away, but that defeats most use cases (performance).


----------



## Darwin (Dec 31, 2013)

lbft said:


> Cache, cache, cache!
> 
> TL;DR: don't configure your DB like it's on a 128MB box, put EVERYTHING in an in-memory data store, and forget about using it all anyway.



This. Cache, cache, and cache. If you know what you are doing, you can develop an app that scales a lot because of cache usage. If you don't develop webapps and use something already baked, you can always put Varnish in front and use a lot of RAM to serve tens of thousands of requests.


----------



## DomainBop (Dec 31, 2013)

I bought a 4GB Xen VPS last week.  The primary reason for buying: 500GB of backup space in a location that's <1ms from my SeFlow dedicated servers and <0.5ms from my IWStack VPS.

I also have a 4GB RAM VPS in Iceland (purchased for the location and the 500GB of disk space), and a few >2GB VPSes in the Netherlands (used for smaller sites and development), but they're all on E5s and none of them are used for high-traffic sites or CPU/IO-intensive applications.  The heavy duty stuff all goes on dedicated servers.  I won't buy any big RAM plan on an E3 because the performance and stability of the ones I've tried has... sucked.


----------



## nunim (Dec 31, 2013)

The only VPS I have with more than 512MB of RAM runs Windows. Even then I could probably get by with 1GB, as I just use it for RDP.


I'm a big fan of optimization: I've got Observium running on a 128MB RamNode SSD VPS, monitoring 35 servers and hosting a frequently used looking glass, and it still hardly runs above 0.2 load with 30MB or so to spare.


----------



## Tux (Dec 31, 2013)

I use big RAM VPSes due to the sheer amount of Java programs running on my machines, though I really only have one 1GB VPS with RamNode doing that. I'm looking into upgrading it to 2GB for a project.


----------



## tchen (Dec 31, 2013)

I'm running a small logstash setup on one of the 3GB RAM series.  There's a Redis instance on the front to queue incoming syslog messages, with the regexing done by another logstash instance (yes, running two), dumping it all into Elasticsearch.  I haven't heard a peep from the provider, so I assume it's not much of a load CPU/IO-wise.
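A two-process pipeline like tchen describes can be sketched as two small Logstash configs. The port, host, and queue key below are made-up placeholders, and the grok pattern is a trivial stand-in for whatever regex work the indexer actually does:

```
# shipper.conf - accepts syslog and queues raw events into Redis
input  { syslog { port => 5514 } }
output { redis  { host => "127.0.0.1" data_type => "list" key => "syslog-queue" } }

# indexer.conf - pops the queue, does the regex work (grok), indexes to Elasticsearch
input  { redis { host => "127.0.0.1" data_type => "list" key => "syslog-queue" } }
filter { grok  { match => [ "message", "%{GREEDYDATA:msg}" ] } }
output { elasticsearch { host => "127.0.0.1" } }
```

Splitting shipper and indexer like this is what lets Redis absorb bursts while the regex-heavy indexer catches up at its own pace.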


----------



## telephone (Dec 31, 2013)

drmike said:


> These are good uses.  You mentioned KVM... Take it these are higher priced plans?  Not the < $7 month ones?



Nope, you just have to jump on the specials  

I've had a myriad of them ranging from 2 GB to 4 GB, and all under $10 a month.

^ I'm down to just one now.


----------



## lbft (Dec 31, 2013)

bdtech said:


> Is intentionally tmpfs'ing necessary? If the file is "HIT" enough, and you have free RAM available, Linux should cache it to RAM automatically.


It's almost certainly not necessary but I'm paranoid about disk I/O usage on VPSes (I don't want to be a noisy neighbour), especially OpenVZ where I can't monitor how many IOPS I'm using with iostat.


----------



## wlanboy (Jan 1, 2014)

Most of my VPSes have 128 MB of RAM.

Enough space for about everything.

Webstack (even VestaCP), mail servers, Ruby workers, MongoDB arbiters, RabbitMQ cluster nodes, etc.

But I do have three high RAM VPSes (all from specials):

- 1 GB OpenVZ - my site's backbone (Redis + MongoDB master + RabbitMQ master + a lot of crons)
- 2 GB KVM - my Java world (Jetty + Hudson + Ivy + JUnit + Ant builder)
- 2 GB KVM - my devbox (Git + a lot of LXC containers as staging environments)

Regarding the last one:

I used my backups to create 1:1 copies of my VPSes as LXC containers.

They cannot all run at the same time, but if I want to test upgrades or new versions of software it is quite easy to do:

1. start the LXC container
2. git pull + schema update
3. run tests
4. feel good and update the real VPS

Saved my ass on some migrations.
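wlanboy's staging loop can be sketched in a few commands. The container name `staging-web`, the `/srv/app` path, and the two scripts are all made-up placeholders, and the guard line makes the sketch a no-op on machines without LXC installed:

```shell
# Sketch of the clone-and-test workflow; names below are hypothetical.
command -v lxc-start >/dev/null || { echo "no LXC on this host"; exit 0; }

lxc-start -n staging-web -d                 # 1. start the cloned container
lxc-attach -n staging-web -- sh -c '
  cd /srv/app &&
  git pull &&                               # 2. pull code, then apply schema update
  sh migrate.sh &&
  sh run_tests.sh                           # 3. run the test suite
'
lxc-stop -n staging-web
# 4. if everything passed, repeat the same update on the real VPS
```

The point of the clone is that step 2's schema update is destructive; running it against a disposable copy first is what makes the "feel good" in step 4 possible.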


----------



## mikho (Jan 1, 2014)

mikho said:


> I think the VPS with the most RAM I have is a 512MB with 1024MB burst.
> 
> 
> It's currently down and has been for some time, only because I've been too lazy to submit a ticket about it.


Forgot all about it until Windows was mentioned. I do have a VMware VM with 3GB RAM @ $5/month for my online stuff.


----------



## willie (Jan 1, 2014)

I've been involved (at work) in using large EC2 instances to host database servers.  They work ok for that, but are a LOT more expensive than the supposed 2GB $7/month plans we see on this board and LEB.

I have a 1GB OpenVZ plan with ipxcore but I got it mostly for its largish disk storage.  I can't think of any times I actually did anything that used much of the ram.  I guess I could imagine running Redis in a vps like that, and see whether it got swapped. 
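The "see whether it got swapped" check doesn't need anything fancy; on Linux, `/proc` already exposes per-process swap usage. Here the current shell's PID (`$$`) stands in as the target; substitute `$(pidof redis-server)` to inspect an actual Redis instance:

```shell
# Show resident (VmRSS) vs swapped-out (VmSwap) memory for a process.
# $$ = this shell's PID; replace with $(pidof redis-server) to check Redis.
grep -E '^(VmRSS|VmSwap)' /proc/$$/status
```

A nonzero `VmSwap` on a supposedly in-memory store means the plan's RAM is not really there for you.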

Most non-work things I do that want a lot of ram, also want enough cpu to drive shared vps hosts into conniptions.  So I do that stuff on dedicated servers.


----------



## MannDude (Jan 1, 2014)

I think a 512MB is the largest I have currently, a Digital Ocean box with VestaCP installed. Quite nice software.

Other than that, most things are 128MB or 256MB. Plenty of RAM to do everything I need/want to do.


----------



## wlanboy (Jan 2, 2014)

What's the limit OpenVZ or KVM can handle without struggling?

- 3 GB of RAM?
- 4 GB of RAM?


----------



## Awmusic12635 (Jan 2, 2014)

wlanboy said:


> What's the limit OpenVZ or KVM can handle without struggling?
> 
> 
> - 3 GB of RAM?
> - 4 GB of RAM?


There isn't a specific limit on where they start struggling. Assuming you have the whole server to yourself you could allocate a full 32GB of RAM to one server in OpenVZ or KVM and it would run just fine.


----------



## Eased (Jan 2, 2014)

I have an application server that eats up 16 vCPU cores and 8GB of RAM no problem, and then some.


----------



## wlanboy (Jan 2, 2014)

Fliphost said:


> There isn't a specific limit on where they start struggling. Assuming you have the whole server to yourself you could allocate a full 32GB of RAM to one server in OpenVZ or KVM and it would run just fine.


Ok - good to know.


At home I am using LXC without any problems on a 16 GB machine (split into 2 VPSes).



Eased said:


> I have an application server that eats up 16 vCPU cores and 8GB of RAM no problem, and then some.


Thank you for sharing the information.


----------



## lbft (Jan 3, 2014)

Eased said:


> I have an application server that eats up 16 vCPU cores and 8GB of RAM no problem, and then some.


You can't do that on the cheap high-RAM VPSes though, you'll get booted for CPU abuse. That's where the challenge comes in - actually doing something useful with a lot of RAM without a lot of CPU or disk I/O.


----------



## Eased (Jan 3, 2014)

lbft said:


> You can't do that on the cheap high-RAM VPSes though, you'll get booted for CPU abuse. That's where the challenge comes in - actually doing something useful with a lot of RAM without a lot of CPU or disk I/O.


@lbft, yes this is true. This server is not hosted with a budget provider, and it's a VMware virtual machine with "dedicated" resources.


----------



## wlanboy (Jan 3, 2014)

Fliphost said:


> There isn't a specific limit on where they start struggling. Assuming you have the whole server to yourself you could allocate a full 32GB of RAM to one server in OpenVZ or KVM and it would run just fine.


So no technical limit, but I doubt I would be able to use any shared 4 GB plan.



lbft said:


> You can't do that on the cheap high-RAM VPSes though, you'll get booted for CPU abuse. That's where the challenge comes in - actually doing something useful with a lot of RAM without a lot of CPU or disk I/O.


Second that. If you are using more than 2 GB of RAM (rule of thumb) you need dedicated CPU resources.


So a no-go for shared OpenVZ offers.


----------



## drmike (Jan 3, 2014)

> Second that. If you are using more than 2 GB of RAM (rule of thumb) you need dedicated CPU resources.
> So a no-go for shared OpenVZ offers.


I'll third that.   That's my rule, and honestly, 2GB is still murky on these shared OpenVZ servers.


----------



## wlanboy (Jan 3, 2014)

drmike said:


> I'll third that.   That's my rule, and honestly, 2GB is still murky on these shared OpenVZ servers.


I was thinking about the "real limit" too. You never use the whole RAM, because you need some room for apt-get and peaks.

So: 1 GB of real RAM usage, 512 MB for cache/database, and 512 MB for peaks and room to live.

This may work on 32 GB servers - so 16 OpenVZ instances; that might work.

But I have to admit that for real-world usage of big boxes running OpenVZ, we have to lower the mark to 1 GB of RAM.


----------



## willie (Jan 3, 2014)

wlanboy said:


> So no technical limit, but I doubt I would be able to use any shared 4 GB plan.
> 
> Second that. If you are using more than 2 GB of RAM (rule of thumb) you need dedicated CPU resources.
> 
> ...


This is not clear at all.  I've used a lot of 17GB and 34GB (I don't know why they size them like that) EC2 instances and I believe they're actually Xen VPS's running inside very large physical nodes, like 128GB or 256GB.  We also had some 256GB, 16 core physical servers that we ran LXC containers in, most pretty small but some of them 10-20GB.  We also used a 60.5GB instance (hi1.4xlarge) for some data analysis, but I get the impression that was a single-tenant server.

I see inceptionhosting has Xen VPS up to 16GB* and they seem more serious than the oversold OpenVZ plans that we see around here.  However once we're up in that price range I'd probably get a dedicated server unless the VPS had cloud-like hourly billing.

* https://inceptionhosting.com/usa-xen-vps-phoenix-miami/  -- for some reason the Inception VPS at other locations are much more expensive and only go to 4GB.


----------



## Magiobiwan (Jan 3, 2014)

Some clients use their Blue4 plans for MySQL stuff. Load the DB into RAM instead of thrashing disk and it works MUCH faster. Some other people use them for VNC. With 2GB RAM (and 2GB vSwap) you can load up a GUI and a browser like Firefox, but that sometimes pushes the CPU boundary a bit.


----------



## Enterprisevpssolutions (Jan 3, 2014)

KVM memory usage is much better than it was years ago. You have the KVM balloon feature, which allows you to oversubscribe memory. I have seen some clients use 8GB, 16GB and 32GB VPS systems, both Windows and Linux. Yes, dedicated servers are great, but when you virtualize a system and allocate all the resources to it, you only lose a few percent of the resources while gaining the benefits of a VPS: backups, HA, snapshots, console access and easier management.


----------



## wlanboy (Jan 6, 2014)

willie said:


> We also had some 256GB, 16 core physical servers that we ran LXC containers in, most pretty small but some of them 10-20GB.  We also used a 60.5GB instance (hi1.4xlarge) for some data analysis, but I get the impression that was a single-tenant server.


Yup, but LXC is not full virtualization.

I am using LXC too, to separate my big KVM into manageable (and moveable) containers.

The only services running on the KVM itself are the OpenSSH server, fail2ban and iptables.

Everything else is inside an LXC container.



Enterprisevpssolutions said:


> Yes, dedicated servers are great, but when you virtualize a system and allocate all the resources to it, you only lose a few percent of the resources while gaining the benefits of a VPS: backups, HA, snapshots, console access and easier management.


Second that.

Containers are moveable, and I have templates for different purposes.

I like the comfort of LXC inside KVM: one-click creation of an LXC container including lighttpd, Ruby with all gems and dependencies, a MongoDB server, etc., with the ssh server, users, keys, etc. already configured. Done in 3 minutes. Recreated in 3 minutes.

Not to mention easy backup/recovery and snapshots.


----------



## peterw (Jan 9, 2014)

LXC is something I have to test. It seems to be a good solution for splitting a 2 GB KVM.

For OpenVZ I see the limit at 512MB of RAM.


----------



## maounique (Jan 9, 2014)

It depends. If no games are allowed, then 1-2 GB is mostly enough for mundane usage; however, throw a few MC servers into an OVZ box and the RAM will be needed.

We have people using all 8 GB of RAM in OVerZold without maxing the CPU. In fact, I move them around so the big-RAM ones go on the servers with high CPU usage and the CPU-intensive ones on those that are low on RAM.

Running a DB in 8 GB of RAM should be mighty fast without loading the CPU (I am not talking about monsters that need 4x 8 real cores and 512 GB of RAM, just the usual heavy DBs).

The main usage is shared hosting and games, though.

We kept having demand for them (heck, people asked for larger-than-8GB iwstack instances to run heavy Windows inside), but OVZ can't do everything, so we launched XenPower: less RAM but more CPU and disk. It's been a massive hit, yet OVerZold usage does not decrease - we barely manage to keep stock, and there are tens of 8GB instances in the cloud.

There is demand for high RAM VPSes. People want HA, snapshots, templates, importing/exporting VMs, installing from their own ISO, creating complex setups with isolated networks, load balancing, external firewalls, IPSec site-to-site, and all with hourly billing, so an 8 GB instance costs pennies a month if it's only up for a few hours.

Actually, to be honest, I think the demand for high RAM VPSes is bigger than for the regular LEBs. I mean, there is stock for the lower ones (which is something new - those were flying off the shelf before) while people ask for the big ones. Biz Xen was hardly selling a year ago in the 1 GB variant; since then we had to create 2 GB ones, and even have a few people asking for a merger of two to run 4 GB at a very high price. At that price a dedi is much cheaper, but it can't beat the power a much bigger server can offer when almost all resources are available to you almost all the time if needed. Not to mention RAID, a much better network, quiet, and few neighbours.

I used to recommend dedis over 8 GB, and iwstack had an 8 GB instance just to have one there in case someone would ever need it, but usage was pretty steep from the start. Now we've had to create 16 GB ones where people run Datacenter edition Windows Server.


----------



## trexos (Jan 14, 2014)

I have one 8GB ram VPS with BlueVM for a friend of mine. He runs a few Minecraft servers.


----------



## Reece-DM (Jan 14, 2014)

We have quite a few people running cPanel with multiple sites on our 3GB offers.

Most are barely touched, so it's a little heavy on the RAM side, but it's not a problem.


----------

