Secure webspaces with NGINX, PHP-FPM chroots and Let's Encrypt
This article explains how to set up a web hosting environment based on the NGINX web server with PHP-FPM in a chrooted configuration, serving PHP web applications securely and providing automated TLS support for all webspaces with Let's Encrypt.
The steps described here were tested on Debian Stretch but might also work on other distributions with minor changes.
If you are interested in an easy-to-use implementation of the configuration described in this article, have a look here.
Feature overview
- Functional features
  - Several users share one web server
  - Each user has their own directory served by NGINX
  - Each user has at least one domain linked to their web directory
  - Support for serving PHP files
  - Separate PHP process management for each webspace with PHP-FPM
  - PHP OPCache can still be used along with chroots
- Security-related features
  - The user's files may only be accessed by the user and the web server
  - The PHP scripts run as the user who owns the webspace
  - The PHP scripts may only access the files of the webspace owner (chroot)
  - Full TLS support with secure ciphers, an up-to-date configuration and automatic renewal of certificates for any domain with Let's Encrypt
Install the packages
NGINX
NGINX provides Debian repositories for stable and oldstable releases of Debian, which can be added to /etc/apt/sources.list. There are two release branches of NGINX, called stable and mainline. The differences are explained in this announcement. If you do not plan to use a lot of third-party modules along with NGINX, it is probably best to go for the mainline release. If not sure, read the section What Version should I use? in the article linked above.
First add the URLs of the NGINX APT repository to /etc/apt/sources.list
- /etc/apt/sources.list
deb http://nginx.org/packages/mainline/debian/ stretch nginx
deb-src http://nginx.org/packages/mainline/debian/ stretch nginx
Add the key that was used to sign NGINX packages to your APT keyring
root@webhost:~# wget -O- -q http://nginx.org/keys/nginx_signing.key | apt-key add -
Update your repositories and install the web server. APT will automatically choose the latest version available (the one from the NGINX repository).
root@webhost:~# apt-get update
root@webhost:~# apt-get install nginx
PHP-FPM
For this setup we will use the FastCGI Process Manager PHP-FPM. It can be installed from the Debian repositories. The version currently available in Debian is PHP 7.0.
root@webhost:~# apt-get install php7.0-fpm
Configuration
We will configure NGINX to serve each user's content over separate domains like http://<username>.web.example.com. Static files (like images and HTML files) will be served directly by NGINX, while requests for PHP scripts will be passed to each user's PHP-FPM pool to be processed separately. This allows us to

- run the PHP scripts with the uid and gid of the webspace user instead of NGINX's
- run the scripts inside a chroot and prevent scripts from accessing the filesystem outside the user's web directory
- tweak the performance (scheduling scheme, number of PHP processes) for any webspace separately
.--------.
| CLIENT |
'--------'
    |
(internet)
    |
    v
.-------.  u000.web.example.com/*.php via unix socket  .---------------------------------.
| NGINX |--------------------------------------------->| PHP-FPM pool u000               |
|       |                                              | UID/GID: u000                   |
|       |  static files served directly from           | chroot : /home/www/u000/chroot/ |
|       |  /home/www/<username>/chroot/data/           '---------------------------------'
|       |                                              .---------------------------------.
|       |  u001.web.example.com/*.php via unix socket  | PHP-FPM pool u001               |
|       |--------------------------------------------->| UID/GID: u001                   |
'-------'                                              | chroot : /home/www/u001/chroot/ |
                                                       '---------------------------------'
User management
You can either use common unix system users and create them with
root@webhost:~# mkdir /home/www
root@webhost:~# useradd -b /home/www -k /dev/null -m <username>
or, if you need the user information available on different hosts, you could manage users in an external database like MySQL or LDAP and integrate them into the operating system using NSS. How this can be achieved with LDAP is explained in my article “Linux user management with LDAP”.
Directory structure
The user's home directory is located at /home/www/<username>/ and contains the /chroot directory, which serves as the chroot for the user's PHP-FPM pool. The /chroot directory has three subdirectories:

- /tmp will store all temporary files like PHP session data, files uploaded through PHP or files created with PHP's tmpfile() function.
- /log keeps all logfiles related to the user's webspace.
- /data contains the actual files to be served by NGINX.
/home/www/u000/
└── [d-----x--- root u000 ] chroot
    ├── [d---rwx--- root u000 ] data
    ├── [d----wx--- root u000 ] log
    └── [d-----x--- root u000 ] tmp
        ├── [d----wx--- root u000 ] misc
        ├── [d----wx--- root u000 ] session
        ├── [d----wx--- root u000 ] upload
        └── [d----wx--- root u000 ] wsdl
All directories are owned by uid root and the user's initial login group. The above structure can be created with
root@webhost:~# cd /home/www/<username>
root@webhost:/home/www/<username># mkdir chroot
root@webhost:/home/www/<username># mkdir chroot/data
root@webhost:/home/www/<username># mkdir chroot/log
root@webhost:/home/www/<username># mkdir chroot/tmp
root@webhost:/home/www/<username># mkdir chroot/tmp/misc
root@webhost:/home/www/<username># mkdir chroot/tmp/session
root@webhost:/home/www/<username># mkdir chroot/tmp/upload
root@webhost:/home/www/<username># mkdir chroot/tmp/wsdl
root@webhost:/home/www/<username># chown -R root:<username> chroot/
root@webhost:/home/www/<username># chmod 0010 chroot/
root@webhost:/home/www/<username># chmod 0070 chroot/data
root@webhost:/home/www/<username># chmod 0030 chroot/log
root@webhost:/home/www/<username># chmod 0010 chroot/tmp
root@webhost:/home/www/<username># chmod 0030 chroot/tmp/*
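The mkdir/chmod steps can be wrapped in a small helper. This is a hypothetical sketch (the function name is made up; the chown -R root:<username> step is left out here since it requires root and is shown above):

```shell
# make_webspace_dirs -- hypothetical helper that creates the chroot skeleton
# described above with the listed modes. The chown -R root:<username> step
# is intentionally omitted (it requires root; run it separately as shown).
make_webspace_dirs() {  # usage: make_webspace_dirs /home/www/<username>
    root="$1"
    mkdir -p "$root/chroot/data" "$root/chroot/log" \
             "$root/chroot/tmp/misc" "$root/chroot/tmp/session" \
             "$root/chroot/tmp/upload" "$root/chroot/tmp/wsdl"
    # chmod the children first: once chroot/ is 0010, a non-root owner can
    # no longer traverse it (only the group may pass through).
    chmod 0070 "$root/chroot/data"
    chmod 0030 "$root/chroot/log"
    chmod 0030 "$root/chroot/tmp"/*
    chmod 0010 "$root/chroot/tmp"
    chmod 0010 "$root/chroot"
}
```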
PHP-FPM
PHP-FPM will process the user's PHP scripts and allows us to run them chrooted with the uid and gid of the user.
PHP-FPM can be configured in /etc/php/7.0/fpm/. The main configuration is done in php-fpm.conf, which includes /etc/php/7.0/fpm/pool.d/*.conf at its end. This is the place where we can create separate pools for the users. Each user will get their own pool configured in /etc/php/7.0/fpm/pool.d/<username>.conf.
- username.conf

[<username>]
user = $pool
group = $pool

listen = /var/run/php-fpm-$pool.sock
listen.owner = nginx
listen.group = nginx

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.status_path = /php-fpm-status
ping.path = /php-fpm-ping

access.log = /home/www/$pool/chroot/log/php-fpm-pool.log
slowlog = /home/www/$pool/chroot/log/php-fpm-slow.log
request_slowlog_timeout = 15s
request_terminate_timeout = 20s

chroot = /home/www/$pool/chroot/
chdir = /

; Flags & limits
php_flag[display_errors] = off
php_admin_flag[log_errors] = on
php_admin_flag[expose_php] = off
php_admin_value[memory_limit] = 32M
php_admin_value[post_max_size] = 24M
php_admin_value[upload_max_filesize] = 20M
php_admin_value[cgi.fix_pathinfo] = 0
php_admin_value[disable_functions] = apache_child_terminate,apache_get_modules,apache_get_version,apache_getenv,apache_lookup_uri,apache_note,apache_request_headers,apache_reset_timeout,apache_response_headers,apache_setenv,getallheaders,virtual,chdir,chroot,exec,passthru,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,system,chgrp,chown,disk_free_space,disk_total_space,diskfreespace,filegroup,fileinode,fileowner,lchgrp,lchown,link,linkinfo,lstat,pclose,popen,readlink,symlink,umask,cli_get_process_title,cli_set_process_title,dl,gc_collect_cycles,gc_disable,gc_enable,get_current_user,getmygid,getmyinode,getmypid,getmyuid,php_ini_loaded_file,php_ini_scanned_files,php_logo_guid,php_sapi_name,php_uname,sys_get_temp_dir,zend_logo_guid,zend_thread_id,highlight_file,php_check_syntax,show_source,sys_getloadavg,closelog,define_syslog_variables,openlog,pfsockopen,syslog,nsapi_request_headers,nsapi_response_headers,nsapi_virtual,pcntl_alarm,pcntl_errno,pcntl_exec,pcntl_fork,pcntl_get_last_error,pcntl_getpriority,pcntl_setpriority,pcntl_signal_dispatch,pcntl_signal,pcntl_sigprocmask,pcntl_sigtimedwait,pcntl_sigwaitinfo,pcntl_strerror,pcntl_wait,pcntl_waitpid,pcntl_wexitstatus,pcntl_wifexited,pcntl_wifsignaled,pcntl_wifstopped,pcntl_wstopsig,pcntl_wtermsig,posix_access,posix_ctermid,posix_errno,posix_get_last_error,posix_getcwd,posix_getegid,posix_geteuid,posix_getgid,posix_getgrgid,posix_getgrnam,posix_getgroups,posix_getlogin,posix_getpgid,posix_getpgrp,posix_getpid,posix_getppid,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_getsid,posix_getuid,posix_initgroups,posix_isatty,posix_kill,posix_mkfifo,posix_mknod,posix_setegid,posix_seteuid,posix_setgid,posix_setpgid,posix_setsid,posix_setuid,posix_strerror,posix_times,posix_ttyname,posix_uname,setproctitle,setthreadtitle,shmop_close,shmop_delete,shmop_open,shmop_read,shmop_size,shmop_write,opcache_compile_file,opcache_get_configuration,opcache_get_status,opcache_invalidate,opcache_is_script_cached,opcache_reset

; Session
php_admin_value[session.entropy_length] = 1024
php_admin_value[session.cookie_httponly] = on
php_admin_value[session.hash_function] = sha512
php_admin_value[session.hash_bits_per_character] = 6
php_admin_value[session.gc_probability] = 1
php_admin_value[session.gc_divisor] = 1000
php_admin_value[session.gc_maxlifetime] = 1440

; Paths
php_admin_value[include_path] = .
php_admin_value[open_basedir] = /data/:/tmp/misc/:/tmp/upload/:/dev/urandom
php_admin_value[sys_temp_dir] = /tmp/misc
php_admin_value[upload_tmp_dir] = /tmp/upload
php_admin_value[session.save_path] = /tmp/session
php_admin_value[soap.wsdl_cache_dir] = /tmp/wsdl
php_admin_value[sendmail_path] = /bin/sendmail -f -i
php_admin_value[session.entropy_file] = /dev/urandom
php_admin_value[openssl.capath] = /etc/ssl/certs
The pool's name is defined inside the brackets on the first line and is set to the username of the pool's owner. The $pool variable contains that name and can be used within the configuration to define further parameters. user and group define the user and group the scripts will be executed with and are set to $pool (= the username of the webspace owner). The listen directives define the location and ownership of the unix socket which NGINX will use to pass PHP requests to the pool. The pm* directives specify how the pool processes will be managed (spawned and terminated). Three modes can be defined:
- pm = static: pm.max_children processes will be created on startup. static can be used for websites with constant load and no fluctuation.
- pm = ondemand: Processes will be created on request and killed after pm.process_idle_timeout seconds of waiting for more requests. The maximum number of processes is specified with pm.max_children. ondemand should be used for less frequented websites with long idle times, since no idle processes are kept alive unless requests come in.
- pm = dynamic: At least one process will always be available. When PHP-FPM is started, pm.start_servers processes will be spawned. pm.max_children sets the maximum number of processes that will be created by this pool. pm.min_spare_servers and pm.max_spare_servers specify the minimum and maximum number of processes kept alive in idle state. dynamic process management should be used for websites with fluctuating (high) load and almost no idle times.
pm.status_path and ping.path define paths that, when passed to the pool, will return information about the current state of the pool.
access.log specifies where requests to the pool will be logged. Requests taking longer than request_slowlog_timeout to process will be logged to slowlog. Both logs are stored in the user's /log directory. After request_terminate_timeout the worker process will be killed and script execution stops.
chroot sets the directory the pool processes will chroot to; it is set to the user's /chroot directory as explained above. chdir = / changes the working directory to the root of the chroot.
php_flag, php_value, php_admin_flag and php_admin_value can be used to override directives defined in /etc/php/7.0/fpm/php.ini. When the php_admin_* directives are used, it is not possible to overwrite these settings with PHP's ini_set() call. Be aware that all paths defined here are relative to the chroot.
Configuration applying to all webspaces can also be done in /etc/php/7.0/fpm/php.ini.

Be aware that in Debian, garbage collection for session data is disabled by default and either needs to be handled by yourself (e.g. with a cron job) or activated by setting session.gc_probability to a value > 0.
Please see the following links for further details on what might be worth configuring:
Warning: When enabling the PHP OPCache in php.ini (which is recommended for better performance), always enable the option opcache.validate_root to prevent PHP files leaking from one chroot to another through the OPCache. More details can be found in this bug report.
Once the pools are defined, PHP-FPM can be restarted
root@webhost:~# systemctl restart php7.0-fpm
To see the pool processes running, execute
root@webhost:~# systemctl status php7.0-fpm
NGINX
The main configuration of NGINX is done in /etc/nginx/nginx.conf. At the end of that file, further configuration is included from /etc/nginx/conf.d/*.conf. Depending on the host running NGINX, you may want to change some of the initial parameters in nginx.conf.
- nginx.conf
worker_processes 8;
error_log /var/log/nginx/error.log debug;

http {
    keepalive_timeout 65;
    disable_symlinks on;
    server_tokens off;
}

events {
    worker_connections 1024;
}
worker_processes could be set to the number of available CPU cores, and keepalive connections should be enabled; this is the recommended configuration for using NGINX along with SSL. worker_connections should be set to a value suitable for the expected load on the server. disable_symlinks is set to on to avoid following symlinks to locations outside of a user's document root; if you find symlinks useful for some special configuration, you can still enable them for a certain webspace. server_tokens off prevents NGINX from exposing its version in error pages. The log level of the error_log directive is set to debug to get more information while testing the setup and should be set to error when moving to production.
More directives can be found in the NGINX documentation of the core functionality.
Creating webspaces
For each user we create a file in /etc/nginx/conf.d/<username>.conf
.
- username.conf
server {
    listen 0.0.0.0:80;
    listen [::]:80;

    server_name <username>.web.example.com;
    root /home/www/<username>/chroot/data;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm-<username>.sock;
        fastcgi_param SCRIPT_FILENAME /data$fastcgi_script_name;
    }
}
<username> has to be replaced by the username the webspace is associated with. server_name defines a space-separated list of domains under which the webspace will be reachable. You could dedicate one domain, or a part of it, as a generic domain for the webspaces, e.g. web.example.com, and prepend the username as a subdomain, e.g. u000.web.example.com. The root directive defines where static files will be served from and is set to /home/www/<username>/chroot/data. The second location block configures the handling of requests ending with .php. They are passed to the user's PHP-FPM pool, which was configured to provide the unix socket /var/run/php-fpm-<username>.sock with the listen directive in /etc/php/7.0/fpm/pool.d/<username>.conf. /etc/nginx/fastcgi_params maps variables like $remote_addr and $query_string available in NGINX to FastCGI parameters, which will then be available in PHP's $_SERVER['REMOTE_ADDR'] and $_SERVER['QUERY_STRING']. SCRIPT_FILENAME is overwritten in the last line of the block to make PHP-FPM look for the PHP script in /data (relative to the pool's chroot).
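To illustrate the mapping: a request for /test.php on u000's domain results in SCRIPT_FILENAME = /data/test.php, which PHP-FPM resolves inside its chroot, i.e. /home/www/u000/chroot/data/test.php on the host. A throwaway sketch of this mapping (the function is purely illustrative and not part of the setup):

```shell
# map_script -- illustrate how a request URI maps to the script path seen
# by the chrooted PHP-FPM pool and to the corresponding path on the host.
map_script() {  # usage: map_script <username> <uri>
    fpm_path="/data$2"                          # SCRIPT_FILENAME passed by NGINX
    host_path="/home/www/$1/chroot$fpm_path"    # same file, seen from the host
    printf '%s %s\n' "$fpm_path" "$host_path"
}
```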
After creating the NGINX configuration for the webspace user, place a test PHP file in /home/www/<username>/chroot/data/test.php. Make sure to set ownership and permissions correctly. Since users might need to add new files to their webspace on their own, the ownership of served files should be set to the user and group of the webspace owner. While PHP files are processed by the PHP-FPM pool, which also runs as the user of the webspace, files served by NGINX (everything other than .php files) need to be readable by the user running NGINX (set in /etc/nginx/nginx.conf). To allow NGINX to access static files while allowing PHP-FPM to process .php files, either set the file permissions to something like 0644 to allow others (= NGINX) to read the file's content, or add the user running NGINX to the group of the webspace owner and allow user and group to read the file's content (mode 0640). The latter grants fewer permissions and offers better usability for the webspace owner when adding files to their webspace, and is therefore recommended.
- test.php
<?php phpinfo(); ?>
root@webhost:/home/www/<username>/chroot/data# chown <username>:<username> test.php
root@webhost:/home/www/<username>/chroot/data# chmod 0640 test.php
root@webhost:/home/www/<username>/chroot/data# usermod -a -G <username> nginx
Reload NGINX and point your browser to one of the domains listed in server_name.
root@webhost:~# systemctl reload-or-restart nginx
If you don't get a page listing the configuration of the PHP setup, check

- /var/log/nginx/error.log
- /var/log/php7.0-fpm.log
- /home/www/<username>/chroot/log/php-fpm-pool.log

for errors. If everything worked as expected, have a look at the Environment section of the output to see which user executed the script. Also have a look at the PHP Variables section to see what information was passed to PHP-FPM by NGINX.
Chroot binds
Since the user's PHP-FPM pool runs in a chroot, it cannot access any location outside the configured directory (/home/www/<username>/chroot). This is intended and the reason for using chroots in our setup, and it won't be a problem unless PHP scripts use a function that needs to access locations outside the chroot to work.
Some of these widely used functions and required locations are

- /usr/share/zoneinfo for date() functionality
- /dev/urandom for generating random data (needed for creating sessions)
- /dev/null for redirecting purposes
- /etc/ssl/certs and /usr/share/ca-certificates to allow PHP to validate certificates for TLS connections (e.g. fsockopen('https://...'))
- For DNS resolution, either /var/run/nscd/socket to query nscd for resolving, or /etc/resolv.conf and /lib/x86_64-linux-gnu/libnss_dns.so.2 to use certain DNS servers.
When using /var/run/nscd/socket, be aware that it will also be possible to query other services enabled in /etc/nsswitch.conf, like passwd or group. Please read my article about this issue to understand the problem. It might be better to go for the second solution and bind libnss_dns.so and /etc/resolv.conf to avoid this problem.
First, place this file in the /data directory of a PHP-FPM chroot and point your web browser to the script to see the problem in action.
- test_chroot.php
<?php
header('Content-Type: text/plain');
ini_set('display_errors', 1);
error_reporting(E_ALL);

echo "----- DNS + TLS -----\n";
file_get_contents('https://www.example.com') && print("OK");

echo "\n\n----- Timezone database -----\n";
echo "Date: ".date('r');
?>
This test script will produce some error messages like

- php_network_getaddresses: getaddrinfo failed: Name or service not known, indicating that DNS resolution is not working.
- Timezone database is corrupt - this should *never* happen!, since /usr/share/zoneinfo can't be accessed.
- Depending on whether you already fixed DNS resolution: SSL operation failed with code 1. OpenSSL Error messages: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed and Failed to enable crypto, since PHP-FPM can't access the CA certificates in /etc/ssl/certs and /usr/share/ca-certificates.
To solve these problems and make the required locations available within the chroots, bind mounts can be used. For example by invoking
root@webhost:~# mkdir -p /home/www/<username>/chroot/usr/share/zoneinfo
root@webhost:~# mount -o "bind,ro" /usr/share/zoneinfo /home/www/<username>/chroot/usr/share/zoneinfo
the directory /usr/share/zoneinfo will be accessible in two places (/usr/share/zoneinfo and /home/www/<username>/chroot/usr/share/zoneinfo). You can think of bind mounts as a form of hardlinks, without the limitation of only linking within the same filesystem. Active mounts can be listed by running mount without arguments.
The target (mountpoint) needs to exist beforehand. When binding files (e.g. /etc/resolv.conf) the mountpoint also needs to be a file.
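For example, binding /etc/resolv.conf into a user's chroot could look like this (paths follow the layout above):

root@webhost:~# mkdir -p /home/www/<username>/chroot/etc
root@webhost:~# touch /home/www/<username>/chroot/etc/resolv.conf
root@webhost:~# mount -o "bind,ro" /etc/resolv.conf /home/www/<username>/chroot/etc/resolv.conf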
Since each location needs to be linked into every chroot, lots of mount operations are necessary to properly set up the chroots. I wrote a little script to automate the binding and unbinding, as well as the creation and deletion of the mountpoints. It also allows you to create a configuration for systemd to set up the chroot binds on boot. To use it, download the latest version from GitHub (via git clone, or from the GitHub web page) and place it somewhere it can be executed by root.
root@webhost:~# apt-get install git
root@webhost:~# git clone https://github.com/68b32/php-chroot-bind.git
root@webhost:~# cp php-chroot-bind/php-chroot-bind /usr/local/sbin/
root@webhost:~# chmod u+x /usr/local/sbin/php-chroot-bind
Open /usr/local/sbin/php-chroot-bind
and see if the configuration at the top suits your environment
_CHROOTS_CMD="ls -1d /home/www/*/chroot"
_SYSTEMD_UNIT_DIR="/etc/systemd/system"

_BIND="\
/usr/share/zoneinfo \
/dev/urandom \
/dev/null \
/etc/resolv.conf \
/lib/x86_64-linux-gnu/libnss_dns.so.2 \
/usr/share/ca-certificates \
/etc/ssl/certs"

_BIND_LOCAL="../bind.conf"
_CHROOTS_CMD defines the command to list the available PHP-FPM chroots. The command configured by default prints a list like
/home/www/u000/chroot
/home/www/u001/chroot
/home/www/u002/chroot
and complies with the setup described in this article. If you use different paths for your chroots, you could also list all of them in a file and define cat /path/to/chroot.list as _CHROOTS_CMD.
_SYSTEMD_UNIT_DIR defines the location where unit files for systemd are stored. On Debian this is /etc/systemd/system. It will be used to configure systemd to bind everything during boot.
_BIND is the list of paths that should be bound into the chroots and defaults to the list described above.
_BIND_LOCAL points to a file relative to a chroot directory that can contain additional bind locations for that chroot. For example a file /home/www/u001/bind.conf containing
/etc/ldap/ldap.conf
/etc/ldap/tls
would cause /etc/ldap/ldap.conf and /etc/ldap/tls to also get bound to the chroot /home/www/u001/chroot.
After configuration, you can get a list of the active binds for each chroot.
root@webhost:~# php-chroot-bind status
Chroot: /home/www/u000/chroot
 - /usr/share/zoneinfo
 - /dev/urandom
 - /dev/null
 - /etc/resolv.conf
 - /lib/x86_64-linux-gnu/libnss_dns.so.2
 - /usr/share/ca-certificates
 - /etc/ssl/certs
Chroot: /home/www/u001/chroot
 - /usr/share/zoneinfo
 - /dev/urandom
 - /dev/null
 - /etc/resolv.conf
 - /lib/x86_64-linux-gnu/libnss_dns.so.2
 - /usr/share/ca-certificates
 - /etc/ssl/certs
The - in front of a bind path indicates that it is not bound to the chroot. To activate the binds, run the script with the parameter bind. This will create the necessary mountpoints and mount the bind paths read-only into the chroot.
root@webhost:~# php-chroot-bind bind
Chroot: /home/www/u000/chroot
mount: /usr/share/zoneinfo bound on /home/www/u000/chroot/usr/share/zoneinfo.
mount: /dev/urandom bound on /home/www/u000/chroot/dev/urandom.
...
When calling the script with status again, you will see that everything was bound (indicated by +). To unbind, run php-chroot-bind unbind. When running php-chroot-bind unbind clean -do, the mountpoints created by this script will be deleted (run without -do first to see what would be deleted).
The process of creating mountpoints, binding and unbinding can be configured to happen automatically during boot using systemd. To activate this, run php-chroot-bind systemd create and reload systemd.
root@webhost:~# php-chroot-bind systemd create
Created /etc/systemd/system/php-chroots.target
Created /etc/systemd/system/php-chroot-home-www-u000-chroot.target
Created /etc/systemd/system/home-www-u000-chroot-usr-share-zoneinfo.mount
Created /etc/systemd/system/php-chroot-create-mountpoint-file-usr-share-zoneinfo@.service
Created /etc/systemd/system/php-chroot-create-mountpoint-dir-usr-share-zoneinfo@.service
...
root@webhost:~# systemctl daemon-reload
The unit files installed will handle the read-only binding as well as creating proper mountpoints if necessary.
To test, first unbind all active binds and remove created mountpoints.
root@webhost:~# php-chroot-bind unbind clean -do
Now check whether you can bind using systemctl
root@webhost:~# systemctl restart php-chroots.target
root@webhost:~# php-chroot-bind status
The output of php-chroot-bind status should now show all paths as bound. If this did not work, check systemctl for failed units and use systemctl status <unit> to see why a unit didn't succeed.
If everything worked, you can bind and unbind from specific chroots with systemctl. For example

root@webhost:~# systemctl stop php-chroot-home-www-u000-chroot.target

will unbind all binds to chroot /home/www/u000/chroot (use php-chroot-bind status to verify). Use systemctl start php-chroot-home-www-u000-chroot.target to restore the binds for this chroot again.
If you add or remove a chroot, the systemd configuration needs to be updated. To update the configuration, run php-chroot-bind systemd update
root@webhost:~# systemctl stop php-chroot-home-www-u001-chroot.target  # Unbind binds for u001
root@webhost:~# rm -rf /home/www/u001/chroot                           # Delete chroot for u001
root@webhost:~# mkdir -p /home/www/u002/chroot                         # Create new chroot for u002
root@webhost:~# php-chroot-bind systemd update                         # Deletes units for u001 & adds units for u002
root@webhost:~# systemctl restart php-chroots.target                   # Activate binds for all chroots (including new chroot for u002)
To list all unit files installed by php-chroot-bind, run php-chroot-bind systemd list. To remove all unit files generated by php-chroot-bind, run php-chroot-bind systemd clean -do (run without -do first to see what would be deleted).
If everything works as expected, enable the php-chroots.target unit so it is activated just before php7.0-fpm.service on boot.
root@webhost:~# systemctl enable php-chroots.target
Created symlink from /etc/systemd/system/php7.0-fpm.service.wants/php-chroots.target to /etc/systemd/system/php-chroots.target.
Reboot to see whether all binds are set up on boot (check with php-chroot-bind status or mount).
TLS
Running multiple websites secured by TLS has long been associated with costs and administrative effort: buying certificates from trusted certificate authorities (CAs) and replacing them on the host whenever they had to be renewed. Due to a lack of standardization for submitting certificate signing requests (CSRs) to the CA and for verification of domain ownership by the CA, renewing certificates was never easy to automate and often involved manual steps performed by the server administrator.
I'm happy to say that times have changed.
Since December 2015, with Let's Encrypt, the world has a free and open certificate authority which most modern browsers and operating systems include in their default CA list. It also defines an open standard (called ACME) for requesting certificates for a certain domain and validating ownership of that domain, and a lot of tools (so-called ACME clients) have been written to automate the process of requesting and renewing certificates.
So today, there is no longer any justification not to (also) serve any website on the internet via https.
Nevertheless, the global surveillance disclosures by Edward Snowden in 2013 and the subsequent investigations of cryptographic methods and tools by security research groups have shown that just “switching on https” is not enough to provide secure communication between network parties; some more parameters have to be taken into consideration when configuring TLS support for your service.
Create user for certificate management
Since we are going to automate the whole process of requesting and renewing certificates for our server, we will dedicate a separate system account to creating and storing all necessary keys and parameters, as well as running the ACME client to request and renew our certificates.
The following command will create the user letsencrypt with its home directory at /home/letsencrypt. Since the home directory will keep some very crucial and sensitive data for our TLS setup, we have to take care of the permissions from the beginning.
root@webhost:~# useradd -b /home -k /dev/null -m -s /bin/bash letsencrypt
root@webhost:~# chmod 710 /home/letsencrypt
The next steps will be performed with the new user in its home directory. We will create some directories to keep the home directory clean.
root@webhost:~# su letsencrypt
letsencrypt@webhost:/root$ cd
letsencrypt@webhost:~$ mkdir csr crt
letsencrypt@webhost:~$ chmod 700 csr/
letsencrypt@webhost:~$ chmod 710 crt/
Generating RSA keys and Diffie–Hellman parameters
First we generate two RSA keys. One will be used for the RSA key exchange between our web server and the clients connecting via https; the other will be used to register with Let's Encrypt and to authenticate future requests. Along with the creation of these keys, the key length has to be chosen. Making recommendations for a key length is not easy, and a lot of smart people have summarized their insights in several publications. Please consult different sources to make your own informed decision on the key length that suits your needs.
The following commands will generate two RSA keys with a length of 4096 bit.
letsencrypt@webhost:~$ openssl genrsa 4096 > nginx.key
letsencrypt@webhost:~$ openssl genrsa 4096 > letsencrypt.key
letsencrypt.key only needs to be readable when the ACME client is used to request and renew certificates. Since the ACME client will only be executed by the letsencrypt user, the permissions for this file can be set to 0400.

nginx.key needs to be readable by the letsencrypt user when creating certificate signing requests (CSRs) for requesting and renewing certificates, as well as by the user running NGINX to do the key exchange with the clients. We will add the NGINX user to the letsencrypt group and set the permissions of nginx.key to 0440.
letsencrypt@webhost:~$ chmod 0400 letsencrypt.key
letsencrypt@webhost:~$ chmod 0440 nginx.key
root@webhost:~# usermod -a -G letsencrypt nginx
Along with the RSA key for the web server, we create our own Diffie-Hellman group to be used for the Diffie-Hellman key exchange between the clients and NGINX. Creating a dedicated and sufficiently large DH group is strongly advised due to weaknesses found in the commonly and widely used Diffie-Hellman groups. Recommended sizes for the group are at least 2048 bit and at least the size of the RSA key in use. It might take some time to generate the parameters. Set the file permissions to 0440 so that NGINX can read it.
letsencrypt@webhost:~$ openssl dhparam 4096 > nginx.dhparams.pem
letsencrypt@webhost:~$ chmod 440 nginx.dhparams.pem
Since version 1.11.0 of NGINX it is possible to use RSA and ECDSA certificates in parallel. If you would like to improve your security and server performance even more, read this article to the end, and then read my article „ECDSA and RSA certificate in parallel with NGINX and Let's Encrypt“.
Requesting certificates
Certificates are issued for one or several domains. You could either use one certificate including all domains served by the web server, or you can install separate certificates for each webspace, each including all domains configured as the webspace's server_name.
Certificates are requested by generating Certificate Signing Requests which are sent to the CA (here Let's Encrypt). The CA then validates whether the domains listed in the CSR's subject or Subject Alternative Name field are owned by the requester. This is done by asking the requester to host a so-called challenge file at a specific path under the domain(s) the certificate was requested for. The process of requesting and validating the ownership is also described on the Let's Encrypt website.
To generate a CSR for the domains u000.web.example.com, another-domain.com and www.another-domain.com run
letsencrypt@webhost:~$ openssl req -new -sha256 -key nginx.key -subj "/" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:u000.web.example.com, DNS:another-domain.com, DNS:www.another-domain.com")) > csr/u000.csr
This will leave you with a file u000.csr which needs to be submitted to the CA. The name of this file is arbitrary, but since it will also be used to regularly renew the certificates, it should be related to the webspace whose domains are included in the CSR.
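Before submitting the CSR, it is worth verifying that the SAN entries actually made it into the request. In the following sketch, a throwaway key and a minimal config file stand in for nginx.key and /etc/ssl/openssl.cnf so the example is self-contained; on the real host, run the last command against csr/u000.csr.

```shell
# Build a SAN CSR with a throwaway key and inspect its SAN field.
cd "$(mktemp -d)"
openssl genrsa -out test.key 2048 2>/dev/null
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
[dn]
[SAN]
subjectAltName = DNS:u000.web.example.com, DNS:another-domain.com, DNS:www.another-domain.com
EOF
openssl req -new -sha256 -key test.key -subj "/" -reqexts SAN -config san.cnf -out test.csr
# list the domains contained in the request
openssl req -in test.csr -noout -text | grep -A1 "Subject Alternative Name"
```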
To submit the CSR to the CA, an ACME client must be used. There are several ACME implementations written in different languages out there. Check out this list of client implementations to get an overview.
For the purpose of requesting new certificates and renewing them once in a while I like to utilize the ACME Tiny implementation by Daniel Roesler. It's just about 200 lines of simple Python code and can be understood easily by reading through it. Since every client needs to handle your Let's Encrypt account key, you need to trust the ACME client in use.
To install ACME Tiny, clone the repository from GitHub
root@webhost:~# apt-get install git python
letsencrypt@webhost:~$ git clone https://github.com/diafygi/acme-tiny.git
letsencrypt@webhost:~$ chmod u+x acme-tiny/acme_tiny.py
Before actually using the client to request the certificate, NGINX needs to be prepared for hosting the challenge files as described above.
First, create the directory where the ACME client will store the challenge files.
letsencrypt@webhost:~$ mkdir acme
letsencrypt@webhost:~$ chmod 0710 acme
Then create a new configuration file /etc/nginx/acme.conf
- acme.conf
location /.well-known/acme-challenge/ {
    alias /home/letsencrypt/acme/;
    try_files $uri =404;
}
and include it at the end of each webspace configuration (/etc/nginx/conf.d/<username>.conf) for which certificates should be retrieved.
- username.conf
server {
    listen 0.0.0.0:80;
    listen [::]:80;

    server_name ...;
    ...
    include /etc/nginx/acme.conf;
}
Reload NGINX and you should be set to submit your CSR.
root@webhost:~# systemctl reload nginx
letsencrypt@webhost:~$ ./acme-tiny/acme_tiny.py --account-key letsencrypt.key --csr csr/u000.csr --acme-dir /home/letsencrypt/acme/ > crt/u000.crt
Parsing account key...
Parsing CSR...
Registering account...
Registered!
Verifying u000.web.example.com...
u000.web.example.com verified!
Verifying another-domain.com...
another-domain.com verified!
Verifying www.another-domain.com...
www.another-domain.com verified!
Signing certificate...
Certificate signed!
letsencrypt@webhost:~$ chmod 640 crt/u000.crt
If this worked out, you received a signed certificate stored in crt/u000.crt. u000.crt can now be specified in NGINX's ssl_certificate directive to be used as the certificate.
Automating certificate renewal
Since certificates issued by Let's Encrypt are only valid for three months, they need to be replaced quite often. To renew a certificate, the same CSR can be reused. So a simple way to keep certificates up to date is to regularly check whether they are about to expire and, if so, submit the CSR again to replace the old certificate.
Some ACME clients have this feature integrated, but since I chose ACME Tiny for this setup to keep the trusted base small, this functionality needs to be implemented manually.
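The renewal check can be sketched as a small loop: walk over all certificates and flag those that expire within two days. This is only an illustration of the idea, not the script itself; a short-lived self-signed stand-in certificate is generated so the example runs anywhere, while on the real host the loop would run over /home/letsencrypt/crt/*.crt and resubmit the matching CSRs.

```shell
# Stand-in data: one certificate that expires within the two-day window.
cd "$(mktemp -d)"
mkdir -p crt csr
openssl req -x509 -newkey rsa:2048 -nodes -keyout k.pem \
  -subj "/CN=u000" -days 1 -out crt/u000.crt 2>/dev/null
touch csr/u000.csr

for crt in crt/*.crt; do
  name=$(basename "$crt" .crt)
  # -checkend exits non-zero if the certificate expires within the window
  if ! openssl x509 -checkend $((60*60*24*2)) -in "$crt" -noout >/dev/null; then
    echo "would renew $name using csr/$name.csr"
  fi
done
# → would renew u000 using csr/u000.csr
```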
You can download my small script from GitHub to do the job.
letsencrypt@webhost:~$ git clone https://github.com/68b32/acme-tiny-renew.git
letsencrypt@webhost:~$ chmod u+x acme-tiny-renew/acme-tiny-renew
Open the script and check whether the configuration suits your environment.
_LETS_ENCRYPT_DIR="/home/letsencrypt"
_ACME_TINY="/home/letsencrypt/acme-tiny/acme_tiny.py"
_EXPIRY=$((60*60*24*2))
According to the configuration above it expects
- all certificates in use at $_LETS_ENCRYPT_DIR/crt/*.crt
- for each certificate a CSR at $_LETS_ENCRYPT_DIR/csr/*.csr
- the Let's Encrypt account key at $_LETS_ENCRYPT_DIR/letsencrypt.key
_ACME_TINY points to the ACME Tiny Python script installed before.
The _EXPIRY variable defines the time in seconds the certificate must at least remain valid before it will be renewed. The default is two days, which is enough if you run this script every day.
Be aware that NGINX needs to be reloaded after certificates were replaced in order to load them. The script will do this if any certificate was replaced, but since it will be executed as user letsencrypt, we need to allow that user to perform the reload operation. This can easily be done with sudo.
root@webhost:~# apt-get install sudo
root@webhost:~# visudo
and add the line
letsencrypt ALL= NOPASSWD: /bin/systemctl reload nginx
at the end to allow the user letsencrypt to execute the command systemctl reload nginx without a password.
To run this script every day, you can either configure a cronjob or use a systemd timer. The latter is done by creating two files in /etc/systemd/system
- renew-nginx-certs.service
[Unit]
Description=Renew certificates used by NGINX when they are about to expire
Wants=nginx.service
After=nginx.service

[Service]
Type=oneshot
User=letsencrypt
ExecStart=/home/letsencrypt/acme-tiny-renew/acme-tiny-renew
- renew-nginx-certs.timer
[Unit]
Description=Renew certificates used by NGINX regularly

[Timer]
OnBootSec=0
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target
To start the timer, run
root@webhost:~# systemctl start renew-nginx-certs.timer
To start the timer on boot, run
root@webhost:~# systemctl enable renew-nginx-certs.timer
Created symlink from /etc/systemd/system/timers.target.wants/renew-nginx-certs.timer to /etc/systemd/system/renew-nginx-certs.timer.
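If you prefer cron over systemd timers, an equivalent crontab entry for the letsencrypt user could look like the following sketch (the time of day here is an arbitrary choice):

```
# run the renewal check once a day
30 3 * * * /home/letsencrypt/acme-tiny-renew/acme-tiny-renew
```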
Configuring NGINX
The general TLS configuration for NGINX can be kept separately in /etc/nginx/tls.conf and be included into the webspace configurations (/etc/nginx/conf.d/<username>.conf) when TLS support is desired. This general configuration will include everything except the path to the webspace's certificate, since the paths will differ if the certificates were generated for each webspace separately. If one certificate including all domains from all webspaces is used, it could also be included in the main TLS configuration.
- tls.conf
listen 0.0.0.0:443 ssl;
listen [::]:443 ssl;

keepalive_timeout 70;

ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;

ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1;
resolver_timeout 2s;

ssl_dhparam /home/letsencrypt/nginx.dhparams.pem;
ssl_certificate_key /home/letsencrypt/nginx.key;
#ssl_certificate /home/letsencrypt/crt/all-domains.crt;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

# Ciphersuite "Intermediate compatibility" by Mozilla OpSec team
# See https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28default.29
ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

add_header Strict-Transport-Security max-age=15768000;
The listen directives enable NGINX's ssl module on port 443 for the IPv4 and IPv6 sockets available on the host.
keepalive_timeout activates keepalive connections, which saves CPU time and network resources. This is especially important for HTTPS connections, since they require many more network round trips and much more CPU time than plain HTTP.
ssl_session_cache shared:SSL:20m creates and utilizes a cache for TLS session parameters shared between all worker processes. It implicitly disables the cache built into OpenSSL. One megabyte can store about 4000 sessions. Using only the shared cache is recommended in NGINX's SSL module documentation.
ssl_session_timeout sets the time after which session parameters are deleted from the session cache.
ssl_stapling enables OCSP stapling, which reduces the load on the OCSP servers of the CA and provides validity information about your certificate to the clients. It also improves the client's privacy, since the client is not forced to tell the CA which certificate it is about to validate, which would reveal the domain the client is about to connect to.
ssl_stapling_verify enables verification of the OCSP responses NGINX gets from the CA's OCSP server.
resolver specifies one or more DNS servers to use for resolving the OCSP responder's hostnames as provided in the certificate. If you run a local DNS server, you can use that; otherwise you might want to use some publicly available DNS servers. Along with the DNS resolvers, resolver_timeout sets the maximum time a lookup may take.
ssl_dhparam points to the Diffie-Hellman parameters generated earlier.
ssl_certificate_key points to the RSA key of the server.
If you use one certificate containing all domains for all webspaces, point ssl_certificate to this certificate. Otherwise skip this line and set the directive in the webspace configuration (/etc/nginx/conf.d/<username>.conf).
ssl_protocols lists the protocols allowed when negotiating the parameters for the encrypted connection. Since the old SSL protocols have known flaws, they should not be used anymore.
ssl_prefer_server_ciphers set to on will prefer the ciphersuites defined with ssl_ciphers over the client's ciphers.
ssl_ciphers specifies a list of enabled ciphers in the form of a string understood by OpenSSL. Selecting the allowed ciphers properly is crucial for the security of the connections, since not all ciphers are considered to provide a sufficient level of security. Check out this ciphersuite guidance document for suggestions made by smart people. The list used here is the intermediate compatibility list found in the Mozilla Wiki and might not work with older clients like Windows XP.
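Before deploying a cipher string, you can preview which ciphers the local OpenSSL resolves from it. A shortened list is used in this sketch for readability; the same command works with the full string from tls.conf.

```shell
# Expand a cipher string into the concrete ciphers OpenSSL knows.
openssl ciphers -v 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305'
```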
The last line adds the Strict-Transport-Security header to the HTTP response and tells the client not to access the server unencrypted for the given period of time (here about six months). As a result, any links inside the website will automatically be turned into secure https links by the browser, and the user won't be able to access the website if a secure connection cannot be established (e.g. if the server's certificate cannot be validated by the browser). Keep in mind that this is different from enforcing the use of https over http, since Strict Transport Security only applies when https is already in use. It is still possible to request websites via unencrypted http on port 80 if this is not prevented by some extra configuration.
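A quick check that the max-age value above really amounts to about six months:

```shell
# 15768000 seconds expressed in days
echo $((15768000 / 86400))  # → 182
```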
To activate TLS for a specific webspace, the file needs to be included in the webspace configuration (/etc/nginx/conf.d/<username>.conf). If you use a separate certificate for each webspace, also specify its path with the ssl_certificate directive.
- username.conf
server {
    listen 0.0.0.0:80;
    listen [::]:80;

    server_name ...;

    include /etc/nginx/tls.conf;
    ssl_certificate /home/letsencrypt/crt/<username>.crt;
    ...
}
Save the configuration and reload NGINX. Then point your browser to the webspace using https. If this works without any errors, you can test your SSL configuration.
Enforce HTTPS
If you got your TLS configuration working, you can configure NGINX to redirect all unencrypted requests for a webspace to their HTTPS equivalent. To achieve this, create a new server block in the webspace's configuration (/etc/nginx/conf.d/<username>.conf) listening on port 80 (for plain HTTP connections) that returns the HTTP status code 301 along with the https URL to be used instead. The listen lines for port 80 must then be removed from the other block, which will now only serve TLS connections via port 443.
- username.conf
server {
    server_name ...;

    listen 0.0.0.0:80;
    listen [::]:80;

    return 301 https://$host$request_uri;
}

server {
    server_name ...;

    include /etc/nginx/tls.conf;
    ssl_certificate /home/letsencrypt/crt/<username>.crt;
    ...
}
Be aware that the server_name directive in both blocks must include the same domains to make this work. If $server_name is used instead of $host, requests will always be forwarded to the first domain listed in server_name.