While these plans might look appealing at face value because of the large RAM, they are often crushed by hard limits on IOPS, CPU utilization and other "good behavior" limiters.
High-RAM oversold-type VPSes (I mean the >=2GB $7 type) are most useful for trading memory for some other resource (disk I/O, CPU usage, etc.).
Cache, cache, cache!
- Tune your database (e.g. MySQL) so your whole DB fits in memory - that way reads don't need to hit disk at all and you only have to use precious IOPS for writes. It's great for a read-heavy workload, but watch your CPU usage. For a heavy random write workload you're likely better off with an SSD VPS.
Even better, if you have a largely static dataset, load it into an in-memory table at boot and never hit the disk at all.
- For file-based stuff, stick it in a tmpfs. Even on OpenVZ I haven't come across a situation where I couldn't make a great big tmpfs (like, 90% of the VPS's memory size) if I wanted to, although I think there's something in beancounters that lets a provider limit its size if they want to.
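As a concrete sketch of the "whole DB fits in memory" tip above, here's roughly what that looks like for MySQL/InnoDB. The file path and sizes are hypothetical (sized for a 4GB VPS); scale the buffer pool to your actual dataset:

```ini
# Hypothetical /etc/mysql/my.cnf fragment for a 4GB high-RAM VPS
[mysqld]
# Big enough to hold the entire working set, so reads never touch disk
innodb_buffer_pool_size = 3G
# Flush the log once per second instead of per commit: far fewer write IOPS,
# at the cost of losing up to ~1s of transactions on a crash
innodb_flush_log_at_trx_commit = 2
```

For the largely-static-dataset case, MySQL also has `ENGINE=MEMORY` tables you can populate at boot.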
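The tmpfs trick is a one-liner; a hypothetical /etc/fstab entry (the mount point and size are just examples):

```
# 2G tmpfs for cache files; contents vanish on reboot, so cache-only data
tmpfs  /var/cache/app  tmpfs  size=2g,mode=0755  0  0
```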
- If your data suits it, cache writes in memory too. It's especially great for data that is replaced frequently (e.g. stats, current statuses, etc. for stuff like characters in games and for monitoring systems). You have to be able to live with losing data not yet pushed to disk in case of a power cut/unexpected reboot though, and I don't know of any off-the-shelf web apps that do this.
- Cache objects generated from the database in shared memory or memcached (or Redis I suppose, but I have no experience there).
- Store user session data in memory.
- Cache chunks of generated HTML (MediaWiki does this for its UI, for example, and it's an integral part of reddit's caching strategy, where it saves them a bunch of latency and CPU usage despite having highly dynamic pages). CloudFlare's Railgun is a similar idea where they cache chunks of HTML to save transferring it over the network.
- Cache entire rendered pages. I know personally that nginx has useful stuff here like FastCGI caching and proxy caching; stick the cache on a tmpfs. nginx even has a module to serve files directly from memcached. I'm sure other web servers have similar options, or otherwise stick Varnish in front (may require carefulness on OVZ, since last I heard it was a bit wonky when you mmap everything in the entire universe like Varnish and MongoDB do, although I notice that Varnish seems to have a memory-only storage backend these days).
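And for the full-page cache, a rough nginx FastCGI-cache sketch (the paths, zone name, socket and TTLs are invented examples; point `fastcgi_cache_path` at a tmpfs to keep it off disk):

```nginx
# http{} context: cache lives on a tmpfs, so hits cost RAM, not IOPS
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:16m
                   max_size=512m inactive=10m;

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your PHP-FPM socket
        fastcgi_cache appcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 5m;               # cache good responses for 5 min
    }
}
```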
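The write-caching idea above, sketched in Python (the class and the flush interval are my invention, not from any off-the-shelf app): absorb frequent overwrites in memory and spend write IOPS only every so often, accepting that unflushed data dies with the box.

```python
import time

class WriteBehindCache:
    """Absorb frequently-replaced values (stats, character HP, etc.) in
    memory and persist them at most once per `interval` seconds.
    Anything written since the last flush is lost on a crash or reboot."""

    def __init__(self, flush, interval=30.0):
        self.flush = flush          # callable that persists a dict of dirty keys
        self.interval = interval
        self.data = {}              # authoritative in-memory copy
        self.dirty = {}             # keys changed since the last flush
        self.last_flush = time.monotonic()

    def set(self, key, value):
        self.data[key] = value
        self.dirty[key] = value     # later writes simply overwrite earlier ones
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush(self.dirty)  # one batched write instead of many
            self.dirty = {}
            self.last_flush = time.monotonic()

    def get(self, key):
        return self.data.get(key)
```

The `flush` callable would be, say, one batched multi-row UPDATE, so a hundred HP changes per interval cost one disk write instead of a hundred.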
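The object-caching and HTML-fragment tips all boil down to the same cache-aside pattern. A minimal Python sketch, where a plain dict stands in for the memcached client and `get_user`/`fetch_user_from_db` are made-up names:

```python
cache = {}  # stand-in for a memcached/Redis client; same get/set shape applies

def fetch_user_from_db(user_id):
    """Stand-in for an expensive SQL query or template render."""
    return {"id": user_id, "name": f"user{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    obj = cache.get(key)
    if obj is None:                    # miss: do the expensive work once
        obj = fetch_user_from_db(user_id)
        cache[key] = obj               # a real client would also take a TTL here
    return obj
```

The same shape works whether the value is a DB row, a session, or a rendered sidebar's HTML.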
Even then, unless you specifically design your app to use a crapton of caching and have a sufficiently large data set, you're going to bump into other limits before you use all your memory on some of the larger offers. And in that case, depending on your data usage pattern, you may run out of disk space first. And then there's the problem that cache invalidation is hard.
On the other hand, the biggest advantage of high RAM plans isn't using it at all - it's simply that you can choose to never have to worry about memory usage, because you're never ever going to be able to use it all accidentally.
TL;DR: don't configure your DB like it's on a 128MB box, put EVERYTHING in an in-memory data store, and forget about using it all anyway.