Additional Security Measures

This page discusses additional security measures a system administrator can take beyond what is covered on the Apache TLS Configuration page.

Basic Firewall Configuration

When you purchase a virtual machine from a company such as Linode (I highly recommend them, btw) and select a CentOS 7 image, what you typically get is an environment with the firewall turned off.

It is a good idea to turn it on, blocking every port except the ports you specifically need to have open. This section will give you basic instructions on how to do that, using command line utilities.

For more extensive documentation on the firewall in RHEL/CentOS 7, please see the documentation at FirewallD - Fedora Project.

Port Selection

The first thing you need to do is determine what ports the server actually needs. I usually write this down on a 3x5 index card on my desk.

You obviously are going to need to have the port open that the SSH daemon is using. By default that is TCP port 22 but you may have changed that. I do.

For a web server, you will need to add TCP ports 80 and 443 to the list.

If you are running a nameserver (not recommended on the same host as a web server) you need to add both UDP port 53 and TCP port 53 to the list. Etc.

Firewall Shell Script

Create a shell script to configure your firewall. Using a shell script both keeps a log of the individual commands and allows you to easily deploy the same set of rules on a different server if you need to.


#!/bin/sh
# Configure firewalld rules -- run as root

/bin/systemctl start firewalld

# Set public zone as default (should already be but...)
/bin/firewall-cmd --set-default-zone=public

# Open TCP port 1414 for sshd
/bin/firewall-cmd --zone=public --add-port=1414/tcp  --permanent

# Open TCP ports 80 and 443 for apache
/bin/firewall-cmd --zone=public --add-port=80/tcp  --permanent
/bin/firewall-cmd --zone=public --add-port=443/tcp --permanent

# Reload the firewall
/bin/firewall-cmd --reload

The script starts and configures the firewall but it does not enable it at system boot.

If I made a typo and the port needed for the SSH daemon ends up blocked, I can log in to my account with my host provider and reboot the server from there. Since the firewall is not enabled at system boot, it will not start during the reboot and I can connect again to fix the typo.

Test and Enable Firewall

Run the shell script as the root user, taking careful note of any errors. It probably will give you a warning telling you the default zone is already set to public.
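
Once the script has run, you can ask FirewallD which ports it now has open. A quick check, assuming the public zone used above:

```shell
/bin/firewall-cmd --zone=public --list-ports
```

If a port you added is missing from the output, fix the script before going any further.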

Close your connection to the server. If at all possible, use a port scan utility such as nmap to make sure only the ports you specified are open. If you used a high port number for the SSH daemon, it likely will not show up in the port scan unless you do a lengthy scan. That is normal.

Assuming everything went according to plan, SSH back in to your server and set the firewall to start at system boot:

/bin/systemctl enable firewalld.service

It is not a bad idea to reboot the server just to make sure the firewall starts at system boot.
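
After the reboot, it does not hurt to confirm both that the firewall is running and that it will start at boot:

```shell
/bin/systemctl is-active firewalld.service
/bin/systemctl is-enabled firewalld.service
```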

Important Note

FirewallD has a concept it calls services, so that you do not have to specify the ports to enable directly. For example:

/bin/firewall-cmd --zone=public --add-service=http --permanent

That would enable TCP port 80; the corresponding https service enables TCP port 443.

I do not like the concept. I do not always run services on the default ports. I always change the port used for the SSH daemon, for example, and if I need MariaDB to be available to other servers, I change the port it is running on too.

The services get in the way of port based administration. For example, if I open up TCP port 1414 for SSH and then specifically close port 22, the firewall may open port 22 again anyway because the fricken ssh service is enabled in it.

Just specifying the ports I actually need keeps things simple, and I like to keep things simple. I like a port to be closed when I tell the firewall to close it; I do not want to have to care what service name was used to open it.

But that is just me. Otherwise, I really like FirewallD. I just think that whole named services thing was a solution to a problem that does not exist.

DANE Certificate Pinning

The PKI system currently puts way too much trust in the hands of the Certificate Authorities. This trust can be, and has been, exploited in several ways:

Rogue Certificate Authorities
There have been cases where Certificate Authorities have intentionally issued inappropriately signed certificates. Browsers see the certificate is signed and then trust the certificate, allowing for abuse of the client trust in the system.
Stolen Signing Keys
There have been cases where the private key used by a Certificate Authority to sign certificates has been stolen. This allows fraudulently signed certificates to be used that are then trusted by browsers, allowing for abuse of the client trust in the system.
Improper Validation
The attacker manages to convince a Certificate Authority that the CSR they submitted is valid and the Certificate Authority issues a signed certificate. This is the most common method and usually involves DV certificates. All an attacker has to do is take over an e-mail account to validate.

There are methods of dealing with those issues, but the methods currently in use by mainstream browsers simply are not adequate.

Certificate Revocation Lists

When it is discovered that a certificate has been fraudulently signed, the Certificate Authority can revoke the signature. The problem is that communicating the revocation to clients is not a trivial task.

The way this currently is done is through a Certificate Revocation List maintained by the Certificate Authority. These lists are huge and it is simply not practical to distribute them to the clients. The client instead makes a connection to a server to ask if a particular certificate has been revoked. Usually this is done through OCSP.

Certificate revocation is an important component of PKI but it is not good enough to adequately solve the problem:

The Time Problem
The time it takes from when a fraudulent certificate is issued, discovered, and added to a Certificate Revocation List can be considerable. The attacker can do a significant amount of damage in that time.
The Availability Problem
Far too frequently, when a client attempts to find out the current status of a certificate, it is not able to get a response from the revocation list. The default behavior in these cases is to simply accept the certificate.
The Third Party Problem
When a fraudulently signed certificate is in use, you are at the mercy of the Certificate Authority to get it revoked. If they do not revoke it in a timely manner, your server and your customers remain at risk to MITM attacks. This is particularly problematic when the Certificate Authority itself is acting in bad faith.

I dream of a world where every web site uses OCSP stapling, a world where browsers by default will reject secure connections that do not use it. But I dream about a lot of things that will never happen.

Even if OCSP stapling solved the Availability problem, it would not solve the Time problem.

Certificate Transparency can help solve the Time problem, but it is not yet widely deployed, and the Certificate Authorities still have to act on the information that a fraudulent certificate is in use.

The bottom line is that relying upon Certificate Revocation Lists to protect against fraudulently issued certificates is reactive security that depends on the actions of a third party, the Certificate Authority.

Good security is proactive security that is not dependent upon a third party taking action.

Public Key Pinning is proactive security. With Public Key Pinning, the client can check whether or not the certificate is appropriate to accept for the specified domain name even if it has not been revoked.

HTTP Public Key Pinning

HPKP is a solution to the problem, but it is not a very good solution to the problem.

The problem HPKP tries to solve plagues x.509 certificate validation regardless of the protocol, yet HPKP only addresses it for the HTTP protocol. A proper solution to a protocol agnostic problem should likewise be protocol agnostic.

HPKP is a ‘Trust on first use’ approach to key pinning. There is no mechanism in place by which the client can confirm the key pins it is being sent are authentic. Even worse, it is a completely automated ‘Trust on first use’ without any interaction from the end user.

‘Trust on first use’ systems should be avoided as much as possible. ‘Validate on every use’ systems are the way security should be implemented whenever possible.

If a MITM attack is already taking place when a client without the key pins already cached connects, the client will not be protected by HPKP. In fact HPKP opens up a new vector for DoS attacks as a MITM attack could provide bogus key pins the client would blindly accept, causing it to reject the legitimate certificate from the legitimate web site in the future.

To partially mitigate the issues caused by ‘Trust on first use’, it is recommended to instruct browsers to cache the key pins for a relatively long time; 60 days is fairly typical.

Assuming the key pins the client initially received are legitimate, they remain in the cache without being deleted for the specified period of time, and the user visits the website with the same browser with some frequency — then the system works fairly well. That however is a lot of assumptions.

The length of time the browser is instructed to cache the key pins for also creates its own issues.

It reduces the flexibility of legitimate changes in the signed certificate. Before a new certificate can be put into use, the key pin must have been part of the HPKP header for at least the length of time that browsers are instructed to cache the key pins. Otherwise some clients will reject the new certificate as fraudulent.

This is why a proper implementation of HPKP must include the key pin from a backup key. In the event the private key on the server has been compromised, the new certificate needs to be generated from a private key that browsers already know they can trust. Otherwise they will reject it.
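
For illustration, an HPKP header carrying a current pin and a backup pin looks something like the following; the base64 values here are placeholders, and max-age=5184000 is the 60 days mentioned above:

```
Public-Key-Pins: pin-sha256="PRIMARYpinBASE64placeholderVALUE+Q="; pin-sha256="BACKUPpinBASE64placeholderVALUE0w="; max-age=5184000; includeSubDomains
```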

Unfortunately, maintaining a backup private key that your clients are predisposed to trust means that if a hacker or disgruntled employee manages to steal that backup key, they could use it to acquire a fraudulently signed certificate and successfully pull off a MITM attack, the very thing HPKP is supposed to mitigate.

HPKP is better than nothing, but it is a conceptually flawed solution to the problem. It should not have been implemented by browsers.

DANE — The Right Approach

DANE uses the DNS system to store a fingerprint of the x.509 signed certificate. It is protocol agnostic: you can store the fingerprint for any x.509 certificate regardless of the protocol the certificate is being used with.

A simple DNS query specifying the port, the protocol (either TCP or UDP), and the hostname will retrieve the fingerprint if it exists (example.com here is a placeholder):

dig TLSA _443._tcp.example.com. +short

3 0 1 7A8180703597047DC3B3F6CCA7234766202EFEB98F02F6C1C26DEA59BC1D9AB2

If you know dig you can figure out from that example that the fingerprint is kept in a record type called TLSA.

The TLSA record is one component of DANE, the component that makes it protocol agnostic.

The other component to DANE is DNSSEC. DNSSEC provides cryptographic signatures to DNS responses allowing the requesting client to verify that the response has not been altered in any way and is the authoritative response for the zone.

This allows DANE to be a ‘Validate on every use’ approach to Public Key Pinning. The right way to do it.

No need for client caching of the key pin. No interference with certificate management. No need to maintain a backup private key. Things just work better when the solution to a problem is conceptually correct from the start.

Unfortunately, even though DANE is clearly the conceptually superior solution, very few clients have implemented support for it. I highly recommend implementing DANE but for the time being, it is important to also implement HPKP. Even though it is a conceptually flawed solution, it is better than nothing and many clients support it.

The TLSA Record

The TLSA Resource Record type is defined in RFC 6698 updated by RFC 7218.

The OWNER portion of the record (the name to query in the DNS request) specifies the port, protocol (TCP or UDP), and the domain name where the pinned certificate is being used.

When a client encounters a server on the Internet that uses an x.509 certificate, the client can then easily make a request in the DNS system to see if there is a valid TLSA record it can use to validate the certificate.

The OWNER portion of the record begins with an underscore, followed by the port number, and then followed by a period. As HTTPS uses port 443, the OWNER portion of a TLSA record for a secure web server would begin with _443.

The next part specifies TCP or UDP, again preceded by an underscore and ending with a period. As HTTPS uses TCP, this would be _tcp. so the beginning of our OWNER record now looks like _443._tcp.

A benefit of both of those portions beginning with an underscore is that an underscore is not allowed as the first character in the OWNER of an A or AAAA record, so it avoids collisions.

Finally, the OWNER has the FQDN of the server, and ends with a dot.
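
Putting those pieces together is mechanical. A sketch in shell, using example.com as a stand-in domain:

```shell
# example.com is a placeholder for your actual domain.
port=443
proto=tcp
host=example.com
owner="_${port}._${proto}.${host}."
echo "$owner"   # prints _443._tcp.example.com.
```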

The RDATA portion of the record consists of four fields:

  1. Certificate Usage (value of 0, 1, 2, or 3)
  2. TLSA Selector (Value of 0 or 1)
  3. TLSA Matching Type (Value of 0, 1, or 2)
  4. Certificate Association Data (The data)
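
Assembling those four fields into the presentation format is again mechanical. A sketch, using a hypothetical mail.example.com SMTP service and the sample SubjectPublicKeyInfo hash from this page:

```shell
# Hypothetical mail.example.com SMTP service: usage 3 (pin the certificate
# itself), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256).
usage=3
selector=1
mtype=1
hash=b58a3ca324f56df4c6b59a018fdf0c4b1213ed2c8f0f728961717c678035d382
record="_25._tcp.mail.example.com. IN TLSA ${usage} ${selector} ${mtype} ${hash}"
echo "$record"
```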

For the Certificate Usage field, a value of 0 indicates we are pinning a Certificate Authority Trust Anchor rather than the actual signed certificate for the domain. The actual certificate for the domain has to have the certificate we are pinning in its chain to be considered valid. That is usually not what we want, as it is less secure than pinning the actual certificate.

A value of 1 indicates we are pinning a certificate signed by a Certificate Authority. Do not use this to secure a port 25 (SMTP) service.

A value of 2 indicates we are pinning a certificate signed by a trust anchor that is not a part of an official Certificate Authority (e.g. a corporate trust anchor for an internal network)

A value of 3 indicates we are pinning the certificate itself; who signed the certificate does not matter, as DANE validation does not depend upon validating the signature. This is usually what we want, even with certificates signed by a Certificate Authority.

For the TLSA Selector field, a value of 0 indicates the Certificate Association Data field is based upon the full certificate.

A value of 1 indicates the Certificate Association Data field is based upon the SubjectPublicKeyInfo.

Which you use does not really matter. I personally tend to use 0 when the certificate is signed by a CA and use a 1 when the certificate is self-signed, just so I can easily tell them apart in my DNS zone files.

Please note that which you use impacts how you generate the fingerprint.

A value of 0 requires the fingerprint be generated from the entire certificate; if you get a new certificate for the same public key, the fingerprint will not match.

A value of 1 is generated from the public key itself and not the certificate; you can generate a different certificate from the same key and it will still match.

If you are using Let’s Encrypt, the latter is better because they generate a new certificate every three months but keep the keys the same.

For the TLSA Matching Type field, a value of 0 indicates the Certificate Association Data field contains the entire certificate (or SubjectPublicKeyInfo) rather than a hash. Please do not do that, it is an incredible waste of bandwidth.

A value of 1 indicates the Certificate Association Data field contains a 256 bit SHA2 hash. This is almost always what we want.

A value of 2 indicates the Certificate Association Data field contains a 512 bit SHA2 hash. It may sound more secure, but it really is just added bloat to the DNS response. There are no known collisions for the 256 bit SHA2 hash, let alone an algorithm to produce one that would do it in a valid Certificate Authority signed x.509 certificate with a matching CN field.
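
In other words, matching type 1 is nothing more than a SHA-256 digest over the selected bytes. A sketch with stand-in input (a real record hashes the DER certificate for selector 0, or the DER SubjectPublicKeyInfo for selector 1):

```shell
# Stand-in bytes; substitute the DER form of the certificate or the
# SubjectPublicKeyInfo for a real record.
hash=$(printf 'stand-in DER bytes' | sha256sum | cut -d' ' -f1)
echo "$hash"
```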

To generate a 256 bit SHA2 hash of the full certificate for the Certificate Association Data field:

/usr/bin/libressl x509 -noout -fingerprint -sha256 < /path/to/certificate.crt |tr -d : |cut -d"=" -f2

That will give output that is something like the following:

7A8180703597047DC3B3F6CCA7234766202EFEB98F02F6C1C26DEA59BC1D9AB2

That is what we would use with a TLSA selector value of 0 and a TLSA Matching Type of 1.

Alternatively (what I do with the self-signed certs I use for SMTP) to generate a 256 bit SHA2 hash of the SubjectPublicKeyInfo:

/usr/bin/libressl x509 -in /path/to/certificate.crt -noout -pubkey |\
  /usr/bin/libressl pkey -pubin -outform DER |\
  /usr/bin/libressl dgst -sha256 -binary |\
  hexdump -ve '/1 "%02x"'

That will give output that is something like the following:

b58a3ca324f56df4c6b59a018fdf0c4b1213ed2c8f0f728961717c678035d382

That is what we would use with a TLSA selector value of 1 and a TLSA Matching Type of 1.

We can now make our TLSA records in our DNS zone file. Remember to make one for each sub-domain using the certificate (example.com here stands in for your actual domain):

_443._tcp.example.com.       IN   TLSA   ( 3 0 1 7A8180703597047DC3B3F6CCA7234766202EFEB98F02F6C1C26DEA59BC1D9AB2 )
_443._tcp.www.example.com.   IN   TLSA   ( 3 0 1 7A8180703597047DC3B3F6CCA7234766202EFEB98F02F6C1C26DEA59BC1D9AB2 )
_25._tcp.example.com.        IN   TLSA   ( 3 1 1 b58a3ca324f56df4c6b59a018fdf0c4b1213ed2c8f0f728961717c678035d382 )

One of the nice things about DANE is that it allows you to pin different key pairs to different services. If someone stole the private key I use for HTTPS, it would be bad, but they would not be able to use that key to MITM my SMTP service, because the fingerprint would not match what I have specified for port 25. The flawed HPKP does not give that kind of security.

A TLSA Resource Record is just like any other Resource Record; it does not itself depend upon DNSSEC.

However without properly implemented DNSSEC, the client has no mechanism to verify the record is legitimate and will not use it to verify the x.509 certificate. DNSSEC is thus a critical component of DANE certificate pinning.
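
You can see whether your resolver validated a response by requesting DNSSEC data and checking for the ad (authenticated data) flag; example.com is again a placeholder:

```shell
dig +dnssec TLSA _443._tcp.example.com.
# The "ad" flag in the response header indicates the validating
# resolver authenticated the answer with DNSSEC.
```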

Myths about DANE and DNSSEC

Myth: DANE Seeks To Replace Certificate Authorities

That myth appears to have started in the blogosphere, with bloggers who had absolutely nothing to do with creating the DANE specification.

Replacement of Certificate Authorities is not mentioned anywhere in the RFCs that define DANE. In fact they include specific support for Certificate Authorities as part of the specification.

DANE is a method of x.509 certificate pinning. That is all it is and that is all it tries to be.

Myth: DNSSEC Is Difficult To Set Up

If you are running your own authoritative nameserver, you will need to read some documentation.

It took less than a week from when I started reading the documentation until I had it correctly deployed on my own authoritative nameservers. It does bring in some new concepts but it is not difficult.

Myth: DNSSEC Is Pointless, Few Clients Support It

It is true that few end user applications support it, but many caching DNS servers do, and they count as clients. They protect the users behind them, even if the end user applications do not support DNSSEC themselves.

Google’s Public DNS, Comcast DNS, and many others already implement it on their caching nameservers.

That does not protect the clients from modifications to the DNS response data between the client and the caching DNS server, only DNSSEC support in the client itself can do that.

It does however protect the client from fraudulent DNS data between the caching nameserver and the zone’s authoritative nameserver, but only if the authoritative nameserver for the zone implements DNSSEC. Security benefits exist now, despite poor end user application support.

These myths are often perpetuated by people who have both the intelligence and technical savvy to know better.

My suspicion is that some of the people perpetuating the myths have nefarious reasons for spreading the FUD. It is harder to supplement your income with stolen credentials and identity theft in an Internet where DNSSEC prevents you from being able to modify DNS responses.

BREACH Vulnerability

CVE-2013-3587, also known as BREACH, is an attack that can allow the discovery of secrets submitted by a user to a secure server that supports HTTP compression.

It is a variation of the CRIME attack against TLS compression, but it attacks HTTP compression instead.

Static content does not pose a vulnerability, but secrets can be discovered with dynamic content that responds to data sent to the server from the client.

If you enable HTTP compression in PHP (usually set by the zlib.output_compression PHP setting) you should make sure any content submitted by the user has not been manipulated by a third party before allowing PHP to compress the output.

The easiest way to do this is to check the HTTP_REFERER header. If the header is either missing or does not reflect your web server, then you can not be sure the content has not been modified by a third party.

While the accuracy of that header is not something that can be trusted, it is set by the client making the request. This attack is carried out using a cross-site request where the attacker tricks the victim into submitting data to your server by referencing a resource on your server from a third party website. An attacker can not modify that header unless they already have taken over the user’s browser, in which case this attack is irrelevant.

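
A hypothetical sketch of such a check in PHP; the variable names are mine, and it assumes compression is normally enabled via zlib.output_compression:

```php
<?php
// Hypothetical sketch: turn off output compression for any request that
// did not clearly originate from our own site.
$host    = isset($_SERVER['HTTP_HOST'])    ? $_SERVER['HTTP_HOST']    : '';
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$refHost = parse_url($referer, PHP_URL_HOST);

if (!$refHost || $refHost !== $host) {
    // Missing or third-party Referer: serve this response uncompressed.
    ini_set('zlib.output_compression', 'Off');
}
```

Because it changes zlib.output_compression, this has to run before any output is sent.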

By including that with all your PHP generated content, you can mitigate this attack when performed using a cross-site reference technique, such as an embedded resource on a website the attacker controls that interacts with your server.

It is important to note this must be run before the first byte or headers are sent to the client, otherwise it will not work.

It is also a good idea to employ CSRF tokens submitted by POST with any forms, whether or not you allow PHP to compress the output.