Things get more difficult if we are talking about shared hosting on an apache + mod_fcgid + php environment, as there are several different parameters to be tuned in order for large uploads to work.
As far as we could see, there's a lot of people hitting different limits in their applications and looking for solutions, but no handy summary of where to make the necessary changes.
Traditionally the PHP interpreter ran inside the apache process. Apache would load the PHP library (mod_php.so) and use it to parse the PHP based pages. Using a wrapper for the execution of PHP opens new possibilities. This is how the mod_fcgid developers see their approach to the problem:
mod_fcgid is a high performance alternative to mod_cgi or mod_cgid, which starts a sufficient number of instances of the CGI program to handle concurrent requests, and these programs remain running to handle further incoming requests. It is favored by the PHP developers, for example, as a preferred alternative to running mod_php in-process, delivering very similar performance.
Not going into lengthy performance discussions (performance is cheap these days), we see security as the main reason for adopting a mod_fcgid based architecture. In fact, it can be combined with SuExec so that each apache virtual host executes PHP as a different user. This is truly a life saver in terms of containing damage and analyzing evidence from hacking attempts.
A detailed howto can be found here. A very nice tool for web hosting that integrates Apache, PHP and SuExec on RHEL/CentOS can be found here.
Time related parameters
The following PHP variables are involved:
max_execution_time - This sets the maximum CPU time in seconds a script is allowed to run
max_input_time - This sets the maximum time in seconds a script is allowed to parse input data, like POST, GET and file uploads
These variables can be tuned in the vhost-specific php.ini file.
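As a sketch, the relevant lines in a vhost php.ini could look like this (the values are illustrative and depend on the uploads you expect to support):

```ini
; vhost-specific php.ini (values are illustrative)
max_execution_time = 30   ; CPU seconds; this is the PHP default
max_input_time = 600      ; seconds allowed to parse input data (POST, GET, uploads)
```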
The exact path to each php.ini depends on your system.
The following Apache and mod_fcgid variables are also involved and should be appropriately set in the vhost directive in httpd.conf:
Timeout - Apache variable that is used for several different things, including "the length of time to wait for output from a CGI script". This defaults to 300 seconds and applies at the apache level regardless of how PHP or other scripts are configured.
IPCCommTimeout / FcgidIOTimeout - This is specific to mod_fcgid and does NOT override any other settings. The default is 40 seconds.
Note: FcgidIOTimeout replaces the initial IPCCommTimeout for the same purpose. See here.
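Putting the two together, the vhost section in httpd.conf might look like this sketch (the ServerName and DocumentRoot are illustrative):

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
    # Apache-level timeout, including waiting for output from the CGI wrapper
    Timeout 600
    # mod_fcgid I/O timeout (formerly IPCCommTimeout)
    FcgidIOTimeout 600
</VirtualHost>
```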
Thus, if, given the clients' upstream bandwidth and the file sizes to be supported, the expected upload time is up to 10 minutes, you should set in php.ini
max_input_time = 600
and on the corresponding vhost on httpd.conf
IPCCommTimeout 600
For example, if the client wants to upload 50MB over an ADSL line with an announced upstream rate of 1Mbps, the upload time in ideal conditions would be:
t = ( 50 * 1024 * 1024 * 8 ) / (1 * 1000 * 1000 * 0.8 ) ~ 524.29 s
In the previous formula 0.8 roughly accounts for the ADSL overhead and it is assumed that traffic on the ADSL "neighborhood" is low enough not to interfere with this file transfer.
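The estimate above can be reproduced with a few lines of Python (the 50MB file size, 1Mbps line rate and 0.8 efficiency factor are the example's assumptions, not universal constants):

```python
def upload_time_seconds(size_mib, line_rate_mbps, efficiency=0.8):
    """Estimated upload time in ideal conditions."""
    bits = size_mib * 1024 * 1024 * 8                 # file size in bits
    usable_rate = line_rate_mbps * 1_000_000 * efficiency  # usable bits per second
    return bits / usable_rate

print(round(upload_time_seconds(50, 1), 2))  # ~524.29 seconds
```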
As for max_execution_time, it is harder to estimate since it measures CPU time (i.e., it only counts while the process is actually running, not waiting for I/O), but you can start with the default, which is 30 seconds. Depending on the total server load this may or may not have to be changed.
Size related parameters
The following PHP variables are involved:
upload_max_filesize - The maximum size of an uploaded file via PHP (uses HTTP upload)
post_max_size - Sets the maximum size of POST data allowed. This setting also affects file upload. To upload large files, this value must be larger than upload_max_filesize.
memory_limit - the memory limit for individual PHP scripts
Thus, to support files up to, say, 50M you should set something like:
post_max_size = 51M
upload_max_filesize = 50M
The memory_limit value is harder to estimate as it depends on what the script does with the file. For example, for Roundcube there was a popular bug regarding the amount of memory consumed by attachments (see here), but generally applications aren't so demanding.
The PHP documentation recommends a memory_limit larger than post_max_size, so as a rule of thumb starting with 16 + post_max_size (16M is the default PHP value) should be enough. However, we think the documentation is wrong / outdated. To keep things tight, one can perfectly well start with the default value of 16M and see if anything fails. Examining error_log will make it clear if the script runs out of memory:
[warn] mod_fcgid: stderr: PHP Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 4864 bytes)...
During our tests we realized that a simple file management web application could upload large files under the default PHP 16M memory_limit without any problems (tested on RHEL with php 5.1 - see similar comments here, here, and here).
The apache directive LimitRequestBody can also prevent large uploads. However, it defaults to 0 and is usually not present in httpd.conf, which means no limit is enforced by default.
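If you do want an explicit apache-level cap rather than relying on PHP's limits alone, LimitRequestBody takes a byte count; a sketch matching the 51M post_max_size above:

```apache
# 0 (the default) means unlimited; the value is in bytes (51 * 1024 * 1024)
LimitRequestBody 53477376
```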