A lot of things are going to change in the next month. I was (as always) very busy, so I didn’t even find time to write about the usual conferences I attended, like Scala Central 11. But today and tomorrow I am following a course on TLS and encryption, and I want to write down what I learnt for future reference. The course really is titled like that, “The Best TLS Training in the World”: it is not my idea to call these posts that, as you can see on the course page.
The need for security and encryption is perhaps obvious, because everyone has something they want to keep secret (bank accounts, medical records, a lover…). Information security rests on three goals (the acronym is CIA, or CAI): Confidentiality (what is secret stays secret long enough that it is useless by the time it is no longer secret), Authenticity (the person we are speaking with is the one we intend to speak with) and Integrity (what we say is not altered by anyone). But when we transmit data, every point that routes or relays it can read it if it travels in the clear.
Moreover, there are only a few sites for long-haul transmission: if you think about data starting its journey in Europe (or rather, in the UK, not Europe anymore) to land in the USA, there are just two sites in the south of the UK from which internet cables go underwater to reach the American coast. Those cables are huge transmission points: anyone able to tap them could basically read all the traffic between the UK and America.
This explains why encryption is important, and yet it is still commonly not used. To be honest, all the core internet protocols are insecure and generally rely on trust: BGP (Border Gateway Protocol) routes or WPAD (Web Proxy Auto-Discovery Protocol) can be hijacked, DNS can be poisoned, ARP can be spoofed. This happened because when networking was created, security was not a requirement.
So today any point of access to the internet can be monitored to read traffic. MITM attacks are Man-In-The-Middle attacks, with an attacker sitting between a client and a server, usually as close to the client as possible, for example directly connected to its wifi or as the first hop. There was a time when telecoms, while providing your internet connection, were also injecting advertisements into your content. Google’s Transparency Reports then started showing how badly companies were doing in terms of security, forcing some of them to react.
There are indeed some tools to evaluate the robustness of a website: Firesheep, for example, is a Firefox extension that listens to your connection and collects all the session tokens it can extract, so that you can use them to act as the logged-in users they belong to. SSLStrip instead downgrades the connection to http, so that when possible the communication is sent unencrypted. Used together with Firesheep, it can get you session tokens even from https connections. There are free Certificate Authorities that help grow the number of companies using encrypted connections by issuing free certificates (for example Let’s Encrypt). And there are websites that analyse your servers for good crypto practice, like SSL Labs, the de facto standard of the industry.
ATTACKING CRYPTO: we measure the strength of crypto by its number of bits (a 128-bit key has 2^128 possible values). Two aspects are important: 1. performance degrades as bits increase; 2. crypto is not a never-ending defence, meaning that what you are keeping secret can still be decrypted given enough time, but we want that amount of time to be so huge that we simply do not need to care about it.
There are several weak points from which to attack crypto: public trust (the PKI), bugs, the key exchange… so nobody simply brute-forces all the possible keys, because it is far too expensive. This is also the main reason why it is better not to write our own protocol or encryption: the standard ones are proven to work properly, and the standards are strong.
One note: SSLv2 was created in a couple of weeks by one engineer at Netscape in 1994. TLS was essentially just a rename, in 1999. Now TLS is the standard for connectivity, and the SSL/TLS ecosystem includes lots of companies (browser vendors, embedded device producers…).
An encrypted conversation starts with authentication (I am speaking with whom I think I am), then there is a key exchange (involving both client and server). This completes the handshake. The two parties can now communicate over a secure channel with encryption and integrity. Ideally there should also be Forward Secrecy, meaning that if the long-term keys are compromised, past conversations are not compromised with them: even with the long-term keys in hand, the session keys stay hidden. Note that a plain RSA key exchange does not provide forward secrecy, while an ephemeral Diffie-Hellman exchange does.
There are two algorithms for generating keys: RSA (older) and ECDSA (newer); you can support both and pick by looking at the capabilities of the client. RSA keys should be a minimum of 2048 bits, while the ECDSA minimum is 256, but ECDSA is stronger for the same number of bits.
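The difference in key size is easy to see by generating one of each with the openssl CLI (a sketch; the file names are my own):

```shell
# RSA key at the 2048-bit minimum
openssl genrsa -out rsa.key 2048

# ECDSA key on the P-256 curve (the 256-bit minimum)
openssl ecparam -genkey -name prime256v1 -noout -out ecdsa.key

# The ECDSA key file is far smaller for comparable strength
wc -c rsa.key ecdsa.key
```

The ECDSA file comes out at a few hundred bytes against well over a kilobyte for RSA, which also translates into cheaper operations during the handshake.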
Best practice is to ensure that keys have restricted access, are passphrase protected and are not used forever (and that they are revoked whenever there is a chance they have been compromised).
Certificates can be DV (domain validated), the easiest kind, or EV (extended validation), which proves that the domain is connected to a legal entity. The certificate must list the domain names it covers (possibly using wildcards). One suggestion is to never share certificates: if you use the same certificate on two servers, you also have to share the keys, you share vulnerabilities, and when you revoke the certificate you have to revoke it on all the servers at the same time.
Certificates have a lifetime: years ago computational power and attack strategies were not what they are now, and a certificate was expected to stay secure for a long time. Now the maximum is 3 years, but it is better to change them every year. Moreover, since it is quite easy to automate the renewal process, the more often the better.
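As an example of what “automatic renewal” looks like in practice, this is a hypothetical crontab entry for Let’s Encrypt’s certbot client (a sketch, not from the course material):

```shell
# Try renewal twice a day; certbot only actually renews certificates that
# are approaching expiry, so running it frequently is safe and recommended.
0 3,15 * * * certbot renew --quiet
```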
One problem when issuing a new certificate is that clients in another time zone can see the certificate as only valid in the future. Another problem is that OCSP (Online Certificate Status Protocol) responders take time to update the status of new certificates. To address these two problems you can have Dual-CA deployments, using multiple CAs for resilience. In any case, each certificate should have a complete Chain of Trust, meaning that it must be linked to the CA certificate through one or more intermediate certificates. This closes the first block of the course.
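The chain-of-trust idea can be sketched with openssl itself, by building a toy root CA, signing a leaf certificate with it, and then verifying the chain (all names here are hypothetical, and a real deployment would have intermediates between root and leaf):

```shell
# Toy root CA: self-signed certificate plus its private key
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Toy Root CA"

# Leaf key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout leaf.key -out leaf.csr -subj "/CN=www.example.test"

# Sign the leaf with the toy CA
openssl x509 -req -sha256 -in leaf.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out leaf.crt -days 1

# Walk the chain of trust back to the root
openssl verify -CAfile ca.crt leaf.crt   # prints: leaf.crt: OK
```

If the leaf cannot be linked back to a trusted root, `openssl verify` fails, which is exactly the check browsers perform for every https connection.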
The first group of exercises started by creating an RSA key:
openssl genrsa -out server.key 2048
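A quick sanity check on the freshly generated key never hurts (my own addition, not part of the exercise):

```shell
# Verify the key's internal consistency; prints "RSA key ok" on success
openssl rsa -in server.key -check -noout
```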
then create a Certificate Signing Request (CSR) and verify it
openssl req -new -key server.key -out server.csr
openssl req -text -in server.csr -noout
or, as an alternative, create a CSR for multiple hostnames through a configuration file. Finally, create the self-signed certificate
openssl x509 -req -sha256 -in server.csr -signkey server.key -out server.crt
Please note that to create a certificate covering multiple domains you must explicitly instruct the openssl command to use the extensions and the configuration file.
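For completeness, this is a sketch of what that configuration file and the commands could look like (the hostnames and file names are hypothetical; it reuses the server.key generated in the first step):

```shell
# Hypothetical config: one CN plus the full list of hostnames as SANs
cat > san.cnf <<'EOF'
[req]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[dn]
CN = www.example.test

[v3_req]
subjectAltName = DNS:www.example.test, DNS:api.example.test
EOF

# CSR using the configuration file
openssl req -new -key server.key -out server.csr -config san.cnf

# The extensions must be passed again at signing time, or they are dropped
openssl x509 -req -sha256 -in server.csr -signkey server.key \
  -out server.crt -extfile san.cnf -extensions v3_req

# Check that both names made it into the certificate
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The last command is the important check: if you forget `-extfile`/`-extensions` when signing, the certificate silently covers only the CN.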
That is enough for this first post. Let’s continue tomorrow with the second part and the second day of training.