PREVIOUSLY ON PART II: we spoke about TLS configuration: client capabilities, key exchange (DHE and ECDHE, and how they compare to plain RSA), Server Name Indication (SNI), SSL stripping and mixed content, ending up with HSTS and the preload list. I closed that post speaking about CSP (Content Security Policy).

The rest of the first day we were already very tired. Our trainer spoke about cookies, and how to explicitly instruct the browser to treat them as secure (knowing that secure cookies can anyway be overwritten by insecure cookies and that subdomains can overwrite parent-domain cookies): there are specific prefixes for the cookie name (__Secure- and __Host-) that make it explicitly secure. There are two kinds of attacks related to cookies. The first is CSRF (Cross-Site Request Forgery), where an insecure site can create a weakness: an attacker can add scripts that force your browser to make specific requests to a web server in which you are authenticated. To avoid that we can use SameSite cookies, which the browser refuses to send on cross-site requests, together with a token that validates each request (meaning that you add something in the cookie to be sent to your server and that validates every request). We can moreover add a cryptographic signature, to be sure that the cookie itself has not been tampered with.
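As a concrete illustration (the cookie name and value are invented), a session cookie hardened with the __Host- prefix could be set like this:

```http
Set-Cookie: __Host-session=abc123; Secure; Path=/; HttpOnly; SameSite=Lax
```

The browser accepts the __Host- prefix only if the cookie has the Secure flag, Path=/ and no Domain attribute, which blocks both insecure overwrites and overwrites from subdomains.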
The second is the CRIME attack (Compression Ratio Info-leak Made Easy), a man-in-the-browser attack: using the size of the compressed payload, the attacker tries to guess the content of the session-id cookie by slightly changing it. Meaning that if you change one letter of the session id and the compressed result gets bigger, that cannot be your session id: you keep trying only the guesses that do not grow the length of your compressed payload. The Wikipedia page has a slightly different explanation, anyway nice to read. The direct fix is to disable compression, since compressing secrets together with attacker-controlled data is exactly what CRIME exploits; the trainer also mentioned SRI (Subresource Integrity), i.e. putting an integrity SHA hash into your script or link tags, to check that subresources have not been tampered with. CRIME comes from the same researchers as BEAST.
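The compression side channel behind CRIME can be sketched in a few lines of Python. This is a toy model, not the actual attack: the secret cookie value and the guesses are made up, and we simply compress attacker-controlled text in the same stream as the secret and compare output sizes:

```python
import zlib

SECRET = "sessionid=s3cr3tvalue"  # made-up secret the attacker wants to guess

def compressed_len(guess):
    # attacker-controlled data is compressed in the same stream as the
    # secret, as happened with TLS/SPDY compression in the real attack
    payload = (SECRET + "; injected=" + guess).encode()
    return len(zlib.compress(payload))

# a guess matching a prefix of the secret compresses at least as well
# (often strictly better) than a guess that diverges on the last character
print(compressed_len("sessionid=s"), compressed_len("sessionid=z"))
```

By comparing lengths character by character, the attacker recovers the secret one position at a time.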

There was another session of explanations after the exercises, but I didn’t pay too much attention to that. It was about TLS 1.3. But the second block of exercises was interesting: it was the configuration of a server to add certificates. The server was NGINX (all the needed configuration was in /etc/nginx/sites.conf in our setup). We can set up the previously created key and certificate with two directives in the config file:
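They are the ssl_certificate and ssl_certificate_key directives; assuming the key and certificate from the earlier exercises are called server.key and server.crt and live in /etc/nginx/ (file names and paths are placeholders for whatever you used):

```nginx
# placeholders: point these at your own certificate and private key
ssl_certificate     /etc/nginx/server.crt;
ssl_certificate_key /etc/nginx/server.key;
```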


and after restarting the server (service nginx restart) the certificate will be used. You can scan your server with SSL Labs to get a security grade. Now we want to go a step further and obtain a valid certificate from Let’s Encrypt, a free certificate authority. For doing that we used acme-tiny, available on GitHub (it has a very well written howto), which makes the process automatic. The only thing acme-tiny needs to obtain a certificate is a place that it can write into and that is served by your server. Let’s say that this location is /var/www/.well-known and you want nginx to serve it, by adding the following in sites.conf:

location /.well-known/ {
    alias /var/www/.well-known/;
}

The directory should be readable by everyone (especially the server process), so

chmod -R o+rx /var/www/.well-known

and you should be able to write into it; note that the acme-challenge subdirectory must exist first, so:

mkdir -p /var/www/.well-known/acme-challenge
echo "it works!" > /var/www/.well-known/acme-challenge/test

The file just created should be world-readable, in which case you should be able to fetch it with

wget http://SERVER/.well-known/acme-challenge/test -O -

and you are ready to obtain the certificate from Let’s Encrypt. If you cannot fetch the test file from your server, you won’t be able to prove ownership of your domain, so Let’s Encrypt won’t issue you a certificate.
The steps to obtain a certificate are the following: we need another key, specific to Let’s Encrypt (the account key):

openssl genrsa -out letsencrypt.key 2048

and now we can use acme-tiny to do the job of requesting the certificate. acme-tiny needs to know your keys and the location for the challenge files (the next command is one line):

python acme_tiny.py --account-key letsencrypt.key --csr server.csr --acme-dir /var/www/.well-known/acme-challenge/ > server.crt
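The server.csr above is the certificate signing request from the earlier exercises; if you need to recreate it, a minimal sketch (the key file name and the domain in -subj are assumptions, replace them with your own):

```shell
# server.key is assumed from the earlier exercise; create it if missing
[ -f server.key ] || openssl genrsa -out server.key 2048
# www.example.com is a placeholder for your real host name
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"
```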

Now your certificate is in the file server.crt (you can check the issuer with openssl x509 -in server.crt -noout -issuer). The certificate issued by Let’s Encrypt is not signed directly by the root CA (whose private key must stay protected offline) but by an intermediate certificate, which we need to download:

wget <URL of the Let’s Encrypt intermediate certificate> -O intermediate.der

then we need to convert it into PEM format

openssl x509 -in intermediate.der -inform DER -out intermediate.crt -outform PEM

because NGINX needs the whole certificate chain to be stored in the same file, with the server certificate first. We can do that by typing

cat server.crt intermediate.crt > server-all.crt
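As a quick sanity check (assuming the two files above exist), the combined file should now contain two certificates, with the server certificate first:

```shell
# no-op if the chain file has not been created yet
if [ -f server-all.crt ]; then
    grep -c "BEGIN CERTIFICATE" server-all.crt      # expect 2
    openssl x509 -in server-all.crt -noout -subject # shows the first (server) certificate
fi
```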

and if you change your ssl_certificate directive in the sites.conf file to point to server-all.crt you have finally configured your server to use the Let’s Encrypt certificate (remember to restart nginx).

Now, to have a fully configured server, we still need to create (non-default) DHE key-exchange parameters

openssl dhparam -out dh-2048.pem 2048

and then configure them in the server, together with the TLS protocol versions the server accepts, the cipher suites accepted from clients, and the ECDHE curve, by inserting the following directives in the sites.conf file:

ssl_dhparam dh-2048.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES128-SHA";
ssl_ecdh_curve prime256v1;
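
Putting everything together, the TLS part of the server block would look roughly like this (a sketch: the listen directive, paths and the domain name are assumptions, and the cipher list is the shortened one from above):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;   # placeholder domain

    # certificate chain (server + intermediate) and key from the steps above
    ssl_certificate     /etc/nginx/server-all.crt;
    ssl_certificate_key /etc/nginx/server.key;

    # key exchange, protocol and cipher hardening
    ssl_dhparam dh-2048.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES128-SHA";
    ssl_ecdh_curve prime256v1;

    # location used by acme-tiny for the Let's Encrypt challenge
    location /.well-known/ {
        alias /var/www/.well-known/;
    }
}
```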

After that, SSL Labs should give you an A+ grade and your server uses all the crypto features you may need. Well done! As a note, ssl_ciphers MUST include all the ciphers you want to support; I put just those two, but the minimal recommended list provided in the course included more than 20 ciphers.
That’s enough for this post. Stay tuned!