
PHP-CGI vs PHP-FPM on NGINX

tonyg

New Member
I did some testing and found php-cgi less memory-hungry, with fewer errors, under heavy load testing with httperf.

Unfortunately I didn't keep the data from the tests.

KVM VPS with 128MB + Debian 7.x, with nginx compiled from source.
 

coreyman

Active Member
Verified Provider
I did some testing and found php-cgi less memory-hungry, with fewer errors, under heavy load testing with httperf.

Unfortunately I didn't keep the data from the tests.

KVM VPS with 128MB + Debian 7.x, with nginx compiled from source.
What file were you pointing httperf to? How many requests per second were you sending to the server? How many cores did the CPU have?
 

tchen

New Member
Well guys I was doing some server optimization today and wrote this article on my findings with php-cgi vs php-fpm. Any constructive feedback is welcome as I am not an expert on this topic.

Please check it out on my blog - http://www.bitaccel.com/blog/php-cgi-vs-php-fpm-on-nginx/ and let me know what you think about my findings here.
Did you bother to look at the FPM queue backlog while running the tests? The fun part with benchmarking FPM throughput is that unless you're using response-time percentiles (not just raw throughput), you're missing out on a lot of how it works. Default FPM has a limitless queue that will hold all incoming requests while it works through the processing. Nginx's timeout catches everything eventually, but it takes a darn while for the 200s and 500s to settle into a steady state similar to FastCGI's response shape. There are some other optimizations too, of course.
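If you want to actually watch that queue during a run, the FPM status page is the easiest way I know of. A minimal sketch, assuming a php5-fpm pool listening on a unix socket (the socket path and the /status URI are just placeholders):

; FPM pool config
pm.status_path = /status

# nginx, localhost only
location = /status {
    access_log off;
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}

Then curl http://127.0.0.1/status while the benchmark runs and keep an eye on the 'listen queue' and 'max listen queue' numbers.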
 

tonyg

New Member
It was a contact form with no images or CSS. I don't remember the requests per second. It was a one-core VPS.
 

coreyman

Active Member
Verified Provider
It was a contact form with no images or CSS. I don't remember the requests per second. It was a one-core VPS.
What type of CPU? 1 core of a newer Xeon might be way more powerful than two cores of my Atom :)
 

coreyman

Active Member
Verified Provider
Did you bother to look at the FPM queue backlog while running the tests? The fun part with benchmarking FPM throughput is that unless you're using response-time percentiles (not just raw throughput), you're missing out on a lot of how it works. Default FPM has a limitless queue that will hold all incoming requests while it works through the processing. Nginx's timeout catches everything eventually, but it takes a darn while for the 200s and 500s to settle into a steady state similar to FastCGI's response shape. There are some other optimizations too, of course.
No, I did not know about the backlog, and I'm not sure that Tsung measures response times. Being able to serve more requests per second equates to better response times, does it not? What do you mean by "Nginx's timeout catches everything eventually, but it takes a darn while for the 200s and 500s to settle into a steady state similar to FastCGI's response shape"?
 

tonyg

New Member
What type of CPU? 1 core of a newer Xeon might be way more powerful than two cores of my Atom :)
It was a RamNode KVM VPS, which historically are pretty darn powerful.

CPU Model:        QEMU Virtual CPU version (cpu64-rhel6)
CPU Frequency:    3400.022 MHz
 

Rendus

New Member
I'm going to ignore tonyg's response, since it's even more meaningless than the test on the linked blog...

You made a very efficient error generator. That's all that can be taken away from that post, based on the information provided. What was the benchmark hammering? A Wordpress blog? A phpinfo() script? If that script made any connections to a DB, had you increased/modified the number of connections/threads the DB would accept? What was the peak queries per second (QPS) handled by the DB server during the benchmark? 

And really, the only *important* question to answer wasn't even sought out: What was your peak concurrency without errors?

In any case, moving to php-fpm is the right decision, far and away. Spawning processes is going to kill performance.
 

coreyman

Active Member
Verified Provider
I'm going to ignore tonyg's response, since it's even more meaningless than the test on the linked blog...

You made a very efficient error generator. That's all that can be taken away from that post, based on the information provided. What was the benchmark hammering? A Wordpress blog? A phpinfo() script? If that script made any connections to a DB, had you increased/modified the number of connections/threads the DB would accept? What was the peak queries per second (QPS) handled by the DB server during the benchmark? 

And really, the only *important* question to answer wasn't even sought out: What was your peak concurrency without errors?

In any case, moving to php-fpm is the right decision, far and away. Spawning processes is going to kill performance.
Hi Rendus, I figure I updated the post with details about what I was testing while you were writing yours. I'm not sure how to configure Tsung properly to limit the number of queries per second (if you know how, please let me know), and I am not benchmarking a DB server. I am benchmarking a PHP script, so isn't the DB server irrelevant here?
 

tchen

New Member
No, I did not know about the backlog, and I'm not sure that Tsung measures response times. Being able to serve more requests per second equates to better response times, does it not? What do you mean by "Nginx's timeout catches everything eventually, but it takes a darn while for the 200s and 500s to settle into a steady state similar to FastCGI's response shape"?
Let's say we get 20 requests/s. With FPM, it pushes all 20 into the queue. With FCGI, it round-robins across the free child processes; if none are able to accept, it 500s immediately. Let's say your CPU can handle 10 PHP requests at any given time (and no, FPM isn't faster). After that first second, we get 50% failure from FCGI and 0% failure from FPM.

On the second interval, the FPM queue is now 30 long. FCGI still errors out 50% of the time; FPM, 0%.

30 seconds in (or whatever the nginx timeout is), nginx starts to give up on the oldest requests in the queue that haven't returned a result. It sends back 500s and you start to see FPM failure rates ramp up. In some intervals it's going to dump 500s back for more than 10 requests just because it's suffering from backflow problems. Its real throughput is actually 50%, just like FCGI.

The fun part is that I don't think Nginx has a way to tell FPM 'I don't care about this request anymore', so whatever dead requests are in the pool queue will still have to be processed, which kills the response time for anyone still actively waiting.

Edit: to be specific, most people hit the proxy_read_timeout or fastcgi_read_timeout (both default to 60s) depending on how they integrate with the backend.
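For reference, this is roughly where that knob lives when nginx talks to FPM over FastCGI; the socket path is just an example and the timeout shown is simply the stock value:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;   # or 127.0.0.1:9000 over TCP
    fastcgi_read_timeout 60s;                   # how long nginx waits on FPM before giving up on the request
}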
 

tonyg

New Member
@coreyman

The sites that I am running are mostly static, with only a few PHP pages.

If I were running a full PHP site, I would likely be using php-fpm.

Regardless, I got better overall performance with php-cgi for my configuration.
 

coreyman

Active Member
Verified Provider
Let's say we get 20 requests/s. With FPM, it pushes all 20 into the queue. With FCGI, it round-robins across the free child processes; if none are able to accept, it 500s immediately. Let's say your CPU can handle 10 PHP requests at any given time (and no, FPM isn't faster). After that first second, we get 50% failure from FCGI and 0% failure from FPM.

On the second interval, the FPM queue is now 30 long. FCGI still errors out 50% of the time; FPM, 0%.

30 seconds in (or whatever the nginx timeout is), nginx starts to give up on the oldest requests in the queue that haven't returned a result. It sends back 500s and you start to see FPM failure rates ramp up. In some intervals it's going to dump 500s back for more than 10 requests just because it's suffering from backflow problems. Its real throughput is actually 50%, just like FCGI.

The fun part is that I don't think Nginx has a way to tell FPM 'I don't care about this request anymore', so whatever dead requests are in the pool queue will still have to be processed, which kills the response time for anyone still actively waiting.
Well, that doesn't change the fact that php-cgi was causing really high load on the server while php-fpm handled everything gracefully. Thank you for that new information, though; I will incorporate it into my tuning and the article. As we can see from the graphs, after 30 seconds php-fpm was still handling more requests per second than php-cgi.
 

tonyg

New Member
I'm going to ignore tonyg's response, since it's even more meaningless than the test on the linked blog...
@ Rendus

I am truly sorry that my post and the OP's didn't live up to your extremely high standards for forum posts.

Really...may you find it in your heart to forgive us lowly peons.
 

Rendus

New Member
Hey Corey, first I want to apologize for my tone in my reply to you - I didn't mean to take a negative tone, and on re-reading, that's what I get out of what I said.

I've never used Tsung before myself, so I haven't the slightest clue - I tried installing it just now, and I can't get it to do anything but die a terrible, agonizing death... Oh well.

I tend to use siege for concurrency testing. Tsung looks like it's way, way over the top for what you're looking to do with it, although its graph generation is pretty nice - it's a distributed benchmarking platform for throwing thousands of hits per second at a cluster, more than hundreds at some poor Atom.
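For what you're doing, something along these lines is plenty (the URL and numbers are just placeholders):

# 50 concurrent clients, no think time, run for 60 seconds
siege -b -c 50 -t 60S http://example.com/contact.php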

Any PHP script might be hitting an SQL database for who-knows-why - An almost completely static contact page might decide it wants to track the number of times the page is loaded. And it may not open a persistent connection to the DB to do it.

A contact form script should just scream right out of the gate in a decent configuration, if it isn't doing anything particularly stupid like writing to a DB or the filesystem.

I think Tchen is on the right track here - I for some reason thought you were slamming a Wordpress install (and was quietly impressed by the results), but I'll throw in a suggestion that you make sure some sort of bytecode caching is installed and enabled - apc, xcache, etc.
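With the stock php5 packages that usually just means installing APC and dropping an ini snippet like this somewhere PHP loads it (the path and size are only examples, and directive names differ a bit between APC versions):

; e.g. /etc/php5/conf.d/apc.ini
extension=apc.so
apc.enabled=1
apc.shm_size=32M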

To put your numbers into perspective here - I have a pet project server monitor script that, on every single pageload, fopens 6 different multi-megabyte log files, jumps to the end of each, parses the last five lines, performs some operations on them, and belches out a ~100k (ungzipped) page. Without any tweaking at all, this script gets 75-100 peak concurrency with a 2 second response time on a single core, 512MB KVM VPS that's running a variety of other services for real projects.

Edit with some settings on the above config:

php-fpm pool config:

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

access.log and error_log are both enabled, for both nginx and php5-fpm, adding even more IO per request.

Using a Unix socket rather than a TCP port.
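That part is just the two matching lines, i.e. something like this on the FPM side and the nginx side (the socket path is whatever your setup uses; this one is an example):

; FPM pool config
listen = /var/run/php5-fpm.sock

# nginx
fastcgi_pass unix:/var/run/php5-fpm.sock;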

It's pretty much just a stock nginx/php5-fpm install from their respective ppas.

The setting you may want to look into to limit your backlog is:

; Set listen(2) backlog.
; Default Value: 65535 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 65535
 

That'll let 65 thousand outstanding PHP requests sit waiting for your queue to clear out.
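If you'd rather have overload show up as immediate errors instead of a huge queue, cap it at something small, for example (the value is only illustrative):

; FPM pool config
listen.backlog = 128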
 

coreyman

Active Member
Verified Provider
Hey Corey, first I want to apologize for my tone in my reply to you - I didn't mean to take a negative tone, and on re-reading, that's what I get out of what I said.

I've never used Tsung before myself, so I haven't the slightest clue - I tried installing it just now, and I can't get it to do anything but die a terrible, agonizing death... Oh well.

I tend to use siege for concurrency testing. Tsung looks like it's way, way over the top for what you're looking to do with it, although its graph generation is pretty nice - it's a distributed benchmarking platform for throwing thousands of hits per second at a cluster, more than hundreds at some poor Atom.

Any PHP script might be hitting an SQL database for who-knows-why - An almost completely static contact page might decide it wants to track the number of times the page is loaded. And it may not open a persistent connection to the DB to do it.

A contact form script should just scream right out of the gate in a decent configuration, if it isn't doing anything particularly stupid like writing to a DB or the filesystem.

I think Tchen is on the right track here - I for some reason thought you were slamming a Wordpress install (and was quietly impressed by the results), but I'll throw in a suggestion that you make sure some sort of bytecode caching is installed and enabled - apc, xcache, etc.

To put your numbers into perspective here - I have a pet project server monitor script that, on every single pageload, fopens 6 different multi-megabyte log files, jumps to the end of each, parses the last five lines, performs some operations on them, and belches out a ~100k (ungzipped) page. Without any tweaking at all, this script gets 75-100 peak concurrency with a 2 second response time on a single core, 512MB KVM VPS that's running a variety of other services for real projects.

Edit with some settings on the above config:

php-fpm pool config:

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

access.log and error_log are both enabled, for both nginx and php5-fpm, adding even more IO per request.

Using a Unix socket rather than a TCP port.

It's pretty much just a stock nginx/php5-fpm install from their respective ppas.

The setting you may want to look into to limit your backlog is:

; Set listen(2) backlog.
; Default Value: 65535 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 65535

That'll let 65 thousand outstanding PHP requests sit waiting for your queue to clear out.
I'm not bragging about anything or even touching on a bytecode cache; I am writing an article about php-cgi vs php-fpm using the code I have provided. You both have the wrong idea about what I'm doing here.

I'll take a look at siege for concurrency though :)

Even so, in this test you can see from the graphs that one is doing 10 requests per second and the other is doing 100.
 

tchen

New Member
Well, that doesn't change the fact that php-cgi was causing really high load on the server while php-fpm handled everything gracefully. Thank you for that new information, though; I will incorporate it into my tuning and the article. As we can see from the graphs, after 30 seconds php-fpm was still handling more requests per second than php-cgi.
Is your FCGI graph using the same number of children? What's your process recycling set at? Those are the first things I'd look at, but I know there's something off about your benchmarks.

Numerous other people have benchmarked PHP CGI, PHP FCGI, and FPM next to each other to determine whether there was any underlying processing-speed difference, and the answer's been close to nil.

http://vpsbible.com/php/php-benchmarking-phpfpm-fastcgi-spawnfcgi/

http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/

There are some architectural differences, such as the queuing, recycling, and process spawning, which you do hit when encountering 500s, but if you don't account for how it really works, you're drawing entirely wrong conclusions.
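If you want to re-run it apples-to-apples, I'd pin both sides to the same child count and the same recycling threshold, roughly like this (the numbers and the spawn command are placeholders for whatever your init scripts actually do):

; PHP-FPM pool config
pm = static
pm.max_children = 5
pm.max_requests = 500

# php-cgi/FCGI side, e.g. launched by hand or from an init script
PHP_FCGI_CHILDREN=5 PHP_FCGI_MAX_REQUESTS=500 php-cgi -b 127.0.0.1:9000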
 

coreyman

Active Member
Verified Provider
Is your FCGI graph using the same number of children? What's your process recycling set at? Those are the first things I'd look at, but I know there's something off about your benchmarks.

Numerous other people have benchmarked PHP CGI, PHP FCGI, and FPM next to each other to determine whether there was any underlying processing-speed difference, and the answer's been close to nil.

http://vpsbible.com/php/php-benchmarking-phpfpm-fastcgi-spawnfcgi/

http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/

There are some architectural differences, such as the queuing, recycling, and process spawning, which you do hit when encountering 500s, but if you don't account for how it really works, you're drawing entirely wrong conclusions.
Yes, FCGI was using the same number of children.

Looks to me like php-fpm won @vpsbible. The second one you linked is Apache with mod_php vs nginx with php-fpm.
 

tchen

New Member
Yes, FCGI was using the same number of children.

Looks to me like php-fpm won @vpsbible. The second one you linked is Apache with mod_php vs nginx with php-fpm.
Yes, PHP-FPM won in the first. Hardly the 10x win you're making it out to be, though. The second batch, with Apache vs Nginx, is an example of looking at what the bottlenecks are and eliminating the non-relevant ones. Removing .htaccess from Apache takes the web listener out of the equation, and as long as queue depths aren't deep, the two PHP types are comparable (with embedded PHP winning by the same negligible margin as the reverse case at @vpsbible).

I don't know your config, so I can't sit here and bat suggestions around all night. You have more insight into your test rig, so if at the end of the day you want to stand by your conclusion that PHP-FPM is 10x faster than PHP-FCGI, go right ahead. All I can do is tell you that it doesn't smell right and point you to other reports that should make you think twice :)

P.S. I have no hate for FPM, in case you were wondering. I run it by default.
 

coreyman

Active Member
Verified Provider
Yes, PHP-FPM won in the first. Hardly the 10x win you're making it out to be, though. The second batch, with Apache vs Nginx, is an example of looking at what the bottlenecks are and eliminating the non-relevant ones. Removing .htaccess from Apache takes the web listener out of the equation, and as long as queue depths aren't deep, the two PHP types are comparable (with embedded PHP winning by the same negligible margin as the reverse case at @vpsbible).

I don't know your config, so I can't sit here and bat suggestions around all night. You have more insight into your test rig, so if at the end of the day you want to stand by your conclusion that PHP-FPM is 10x faster than PHP-FCGI, go right ahead. All I can do is tell you that it doesn't smell right and point you to other reports that should make you think twice :)

P.S. I have no hate for FPM, in case you were wondering. I run it by default.
Well, I edited my article to include: 'There may be some php-cgi modifications you experts can do that will make php-cgi perform closer to php-fpm, which I am unaware of and couldn't find through my research.'

For me - 

  1. Out of the box, php-fpm performed 100% better with no optimizations.
 