
Best performance tips when resource constraints are not an issue?

MannDude

Just a dude
vpsBoard Founder
Moderator
I know a lot of us here run websites on servers or VPSes that are limited in their resources, and we must make adjustments and tweaks to ensure things run well within those limits. The forum for vpsBoard actually runs quite comfortably within 512MB of RAM, utilizing roughly 150-200MB, which includes PHP/Lighttpd/MySQL and other things.

However, it got me thinking: if you had a similar setup but moved it to something much larger with a lot more spare resources, like a much larger VPS or even a dedicated server, how would you change your setup to ensure things were running as quickly and effectively as possible? I assume heavy caching would be involved, but I figured it'd be a good conversation to start :)
 
- Lots of memory for filesystem buffer cache. 

- Depending on your Linux kernel version, toggle kernel.vsyscall64 to 2 to get a small speedup on gettimeofday() (sketched after this list)

- Use a 32-bit version of *NIX to save memory, as 64-bit has swollen pointers / alignment padding that consume a lot more memory

- Use a PHP opcode cache like eAccelerator (which has a disk-based cache option)
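
A minimal sketch of the vsyscall64 toggle, assuming an older x86_64 kernel (RHEL 5 era) that still exposes this sysctl; newer kernels handle gettimeofday() via the vDSO and don't have this knob:

# check whether the tunable exists on this kernel at all
sysctl kernel.vsyscall64
# 2 selects the faster (potentially less precise) vsyscall path for gettimeofday()
sysctl -w kernel.vsyscall64=2
# persist across reboots
echo "kernel.vsyscall64 = 2" >> /etc/sysctl.conf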
 

drmike

100% Tier-1 Gogent
Well, what I'd be doing with a larger dedicated box is more sandboxing, testing and pre-rollout planning.

Beyond that:

1. Front-end caching for static content --  Varnish or Nginx ---> actual real server (sketched below the list)

2. Clustering.  Multiple actual real servers to avoid downtime, crashes, etc.

3. More caching.  

4. Optimizing MySQL to utilize RAM / push entire dataset to RAM
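
A minimal sketch of item 1, assuming Nginx is the front-end cache; the domain, backend address, and cache sizes are placeholders:

cat > /etc/nginx/conf.d/frontend-cache.conf <<'EOF'
# cache storage on the front-end box (sizes are placeholders)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:64m max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;              # placeholder domain

    location / {
        proxy_pass http://10.0.0.2:8080;  # the "actual real server", over the private pipe (placeholder address)
        proxy_cache static;
        proxy_cache_valid 200 301 10m;    # keep good responses for 10 minutes
        add_header X-Cache-Status $upstream_cache_status;
    }
}
EOF
nginx -t && service nginx reload          # test the config, then reload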

Depends on how much you have to spare, though, and whether it's a virtualized server or not.  I only roll out virtualized boxes these days.  So a small 16GB server might have 3 slices of 8GB, 4GB, and 2-3GB, with the leftover just for spare.

Nice thing is you can deploy instances as needed and crush and perfect things.  Eventually automate the whole thing too :)

I'd still keep VPSes and other services for deployments.  At least for stuff that attracts public stupidity, DDoS targets, etc.

My existing common config uses remote VPSes and a few spread-out dedicated real servers that do the actual heavy lifting.  The front-end VPSes are end nodes handling caching, traffic cleanup, etc.  The whole backend connectivity is invisible to the end user and goes over a private pipe.
 

splitice

Just a little bit crazy...
Verified Provider
Clustering is what I would do instead of one big server.

That aside, php5-fpm + APC (with apc.stat turned off if possible) + nginx + srcache + redis is pretty amazing; basically you hit the memory bandwidth limit on most sites before CPU or disk.
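
A minimal sketch of the APC piece of that stack, assuming the legacy APC extension under php5-fpm on a Debian-style layout; the ini path and shm size are placeholders:

cat > /etc/php5/fpm/conf.d/apc.ini <<'EOF'
extension=apc.so
apc.enabled=1
apc.shm_size=128M  ; shared-memory cache size (older APC builds take a bare number of MB)
apc.stat=0         ; skip the stat() on every include; reload php5-fpm after deploying new code
EOF
service php5-fpm reload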
 

splitice

Just a little bit crazy...
Verified Provider
APC is a memory hog, and doesn't page compiled opcode to disk. I recommend against it.
There is absolutely no reason you would want to save opcode to disk, memory is MUCH faster. Saving and loading from disk would negate most of the performance APC will give you. APC provides a cache of the exact amount of memory you configure it for, it does not "hog memory".  The whole point of it is to cache into memory.

And as anyone with ANY serious experience with PHP knows, parsing is expensive.



And not to be pedantic but: "resource constraints are not an issue"
 
There is absolutely no reason you would want to save opcode to disk, memory is MUCH faster. Saving and loading from disk would negate most of the performance APC will give you. APC provides a cache of the exact amount of memory you configure it for, it does not "hog memory".  The whole point of it is to cache into memory.

And as anyone with ANY serious experience with PHP knows, parsing is expensive.
Nonsense. tmpfs on *NIX-based servers uses both the VFS cache and swap as a backend, and even if you paged it to disk, RTT socket latency is STILL going to be an order of magnitude bigger issue. Paging already-compiled opcode out to disk is a good idea; keeping everything in memory is a pessimistic optimization that just consumes memory (and CPU cycles, due to TLB pressure et al.).

If you want maximum speed from PHP, don't use it (php) from apache, and use HipHop or something to compile it as native machine code. 
 

drmike

100% Tier-1 Gogent
I stay out of specific tuning, especially PHP tuning.

Rule of the short arm is always:

1. Cache in RAM.

2. Spin cachable files out to disk (hoping you have RAM there for disk caching)

I enjoy using RAM.  I hate getting anything near disk, especially cachable items.  I do it in environments where we have literally millions of content blocks to cache with long-to-never expiration.  And we have chunks of available spare RAM (4-12GB worth).

Neither of the comments above really disagrees on this.

Negate the disk bottleneck with SSDs or other non-moving storage with great IOPS.

"If you want maximum speed from PHP, don't use it (php) from apache, and use HipHop or something to compile it as native machine code. "
Thanks for that recommendation.
 

drmike

100% Tier-1 Gogent
PS: You can use chunks of RAM for a RAM drive as well and sync files from RAM to disk, or the other way around, depending.   I've been doing this for decades on all sorts of platforms.
 

MannDude

Just a dude
vpsBoard Founder
Moderator
PS: You can use chunks of RAM for a RAM drive as well and sync files from RAM to disk, or the other way around, depending.   I've been doing this for decades on all sorts of platforms.
Care to elaborate on that?
 

splitice

Just a little bit crazy...
Verified Provider
Seems like a micro-optimization to me, since Linux will by default cache files with the free portion of your RAM. Unless he is referring to using a ramdisk (tmpfs) for short-term writes (e.g. PHP sessions).
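
A minimal sketch of that ramdisk-for-short-term-writes idea, assuming php5-fpm with file-based sessions; the mount point, size, and ini path are placeholders (and sessions are lost on reboot):

# RAM-backed directory for short-term writes such as PHP sessions
mkdir -p /var/lib/php5/sessions-tmpfs
mount -t tmpfs -o size=256m,mode=1733 tmpfs /var/lib/php5/sessions-tmpfs

# point PHP at it, then reload php5-fpm
cat > /etc/php5/fpm/conf.d/sessions.ini <<'EOF'
session.save_handler = files
session.save_path = "/var/lib/php5/sessions-tmpfs"
EOF
service php5-fpm reload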
 

drmike

100% Tier-1 Gogent
Seems like a micro-optimization to me, since Linux will by default cache files with the free portion of your RAM. Unless he is referring to using a ramdisk (tmpfs) for short-term writes (e.g. PHP sessions).
This is true also.  The idea here is that you have ample RAM and a small dataset, so the OS cache knows/does best.

I tend to optimize machines to use the RAM.  That means maxing out MySQL on RAM, plus ample consumption by other cache layers to boost MySQL (i.e. key-based storage).  It's a different approach certainly, but one that is more predictable.
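
A minimal sketch of the "max out MySQL on RAM" part, assuming InnoDB tables on a Debian-style layout; the buffer pool and temp table sizes are placeholders to be matched to the actual dataset and available RAM:

cat > /etc/mysql/conf.d/ram-tuning.cnf <<'EOF'
[mysqld]
# size the buffer pool so the whole working set fits in RAM (placeholder value)
innodb_buffer_pool_size = 8G
# keep temporary tables in memory where possible (placeholder values)
tmp_table_size          = 256M
max_heap_table_size     = 256M
EOF
service mysql restart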

Short-term writes are one thing I write to RAM disks.  Others can be /tmp-style data, including uploaded files.

Mainly though, I put active files/datasets in RAM.  Non-database data.   I also use such for backups, exports, etc., to get chunks done now with the least delay and impact, then let that feed out to disk later as a simple "copy" job, versus holding up other software that isn't so graceful about breakage/delays and hogs resources (sketched below).

Been using this technique since the days of floppy drives at least.
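
A minimal sketch of that work-in-RAM-then-copy-out approach, assuming a tmpfs mount; the mount point, size, paths, and the backup job itself are placeholders (and anything left in tmpfs is lost on reboot or power loss):

# mount a RAM-backed working area
mkdir -p /mnt/ramwork
mount -t tmpfs -o size=2G tmpfs /mnt/ramwork

# do the fast work in RAM first, e.g. dump a backup there (credentials omitted)
mysqldump --all-databases > /mnt/ramwork/backup.sql

# ...then feed it out to disk later as a simple copy job
rsync -a /mnt/ramwork/ /var/backups/staged/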
 
You could also get creative and use your video card memory as a VFS backed filesystem. 

I have one of them in production for squid, and it works /really/ well for small subsets of data (mostly html and others), plus I use it for mysql tmpdir :)
 

sundaymouse

New Member
I would create 8 Xen VPSes with 512MB or 1GB of RAM each on that dedicated server, and make identical configurations. Install HAProxy on the dedicated server itself, and do load balancing.
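
A minimal sketch of that layout, assuming an HAProxy 1.4/1.5-style config on the host; the guest addresses are placeholders, with one "server" line per identical guest:

cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend xen_guests

backend xen_guests
    balance roundrobin
    server vps1 192.168.0.11:80 check   # placeholder guest addresses
    server vps2 192.168.0.12:80 check
    # ...one "server" line for each of the 8 guests
EOF
service haproxy restart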
 

drmike

100% Tier-1 Gogent
You could also get creative and use your video card memory as a VFS backed filesystem. 

I have one of them in production for squid, and it works /really/ well for small subsets of data (mostly html and others), plus I use it for mysql tmpdir :)
Now this is my speed.

Might you have a write-up/how-to?   I often have servers with unused video RAM doing zip.  This is an excellent idea.
 

splitice

Just a little bit crazy...
Verified Provider
APC was pulled from 5.5 since Zend Opcache is built in (which should eventually give better performance). APC development has stalled, but for pre-5.5 it is the best of the caches in my opinion.

APCu carries on the APC API (think user caching) for PHP 5.5; Zend Opcache does the opcode caching (plus optimization).
 