Ed Bloom asked:
I’m currently testing a new nginx/php-fpm stack I have set up on a new VPS with 4GB of RAM:
from my php-fpm process pool config:
pm = static
pm.max_children = 10
I have a simple load.php script which uses the following to simulate a long-running MySQL query:
<?php sleep(5); echo "you see me after 5 seconds"; ?>
I then throw some load at this script as follows:
ab -n 1000 -c 100 http://mydomain.com/load.php
When I tail the nginx access logs I see only a trickle of requests at a time, rather than the high throughput I was expecting.
How exactly does the data flow from nginx to the php backend and back?
If I have 10 static child processes, will the php backend process 10 requests (at 5 seconds each) before accepting another 10 requests? From watching the logs, that does appear to be roughly the case – is this correct?
If this is the case, and I’m running a php app that is fairly heavy at the DB end (and consequently heavy on php process time), should I simply increase the number of child processes to speed up concurrent request handling?
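A back-of-envelope check of that suspicion (the numbers here come from the test setup above, not from any measurement):

```shell
# 10 workers, each request holds a worker for 5 seconds,
# so the pool serves at most 10 / 5 = 2 requests per second.
# Total wall-clock time for the 1000-request ab run at that rate:
echo $(( 1000 * 5 / 10 ))   # prints 500 (seconds)
```

That matches the “trickle” seen in the access logs: roughly 2 requests per second, in batches of 10.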
pm.max_children caps how many php processes can run at once. If all of them are tied up with long-running requests, incoming connections must queue until a child frees up. To resolve this, increase the size of the php-fpm pool to the maximum you can get away with given your resource constraints.
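A sketch of what a larger pool might look like. The numbers below are illustrative, not a recommendation – size the pool from your measured per-child memory usage (check with something like `ps` while under load) against the 4GB available:

```ini
; pool config (e.g. www.conf) -- illustrative values, tune to your memory budget
pm = dynamic
pm.max_children = 50       ; hard cap: ~50 children * ~50 MB each ≈ 2.5 GB
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
```

Using `pm = dynamic` rather than `pm = static` lets php-fpm scale the number of children with load instead of holding the maximum resident at all times.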
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.