Public Key Cryptography simplifies authentication. We can use a public key to authenticate firmware updates signed with the private key. Everything seems pretty clear at this point.
But the security of the entire scheme is dependent on the private key being secure and secret. When using public key cryptography, the only thing a valid signature tells us is which private key created the signature. There is no way to guarantee a malfeasant employee didn't steal the key. So, to make a stronger attestation of who signed the firmware image, we need to build some policy.
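To make the mechanics concrete, here's a toy sketch of the sign/verify split using textbook RSA. The primes and exponents are tiny illustrative values, and there's no padding scheme, so this is emphatically not production crypto; a real implementation would use a vetted library with RSA-PSS or Ed25519. The point is just that the private exponent signs and the public values verify:

```python
# Toy textbook RSA sign/verify -- tiny illustrative primes, no padding.
# NOT secure; real firmware signing uses vetted libraries and 2048+ bit keys.
import hashlib

p, q = 61, 53
n = p * q                          # modulus (public)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (keep secret!)

def sign(firmware: bytes) -> int:
    # Hash the image, reduce into the group, sign with the PRIVATE key.
    h = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(h, d, n)

def verify(firmware: bytes, sig: int) -> bool:
    # Anyone holding only the PUBLIC values (n, e) can check the signature.
    h = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(sig, e, n) == h

image = b"firmware v1.2.3"
s = sign(image)
assert verify(image, s)             # the genuine image passes
assert not verify(image, (s + 1) % n)  # a corrupted signature fails
```

Note that a tampered image would also fail verification (with overwhelming probability), which is exactly the property we rely on. But nothing in this math tells us *who* ran `sign()` -- only that whoever did held `d`.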
This blog post is a bit more philosophical than technical in content. Sorry for that... we'll be back to technical details soon, I promise.
Keeping Your Keys Secure
Your entire security model depends on keeping your private key safe. There are several minimal precautions you should take:
- wrap the private key with a passphrase
- limit physical and network access to the device that stores the private key
- do not use the server for anything other than signing the firmware image
- keep the software on the server up to date
- maintain a physical air gap between the server and the rest of your infrastructure
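The first precaution deserves a closer look. "Wrapping" a key with a passphrase means encrypting it under a symmetric key derived from that passphrase. The sketch below shows just the key-derivation step with Python's standard library; real tooling (e.g. `openssl pkey -aes256` or `ssh-keygen -p`) derives a key like this and then encrypts the private key under it with AES:

```python
# Sketch of the key-derivation step behind passphrase wrapping.
# Tools like openssl and ssh-keygen derive a symmetric key this way,
# then encrypt the private key file under it.
import hashlib

def derive_wrapping_key(passphrase: bytes, salt: bytes,
                        iters: int = 200_000) -> bytes:
    # PBKDF2-HMAC-SHA256: the iteration count slows brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iters, dklen=32)

salt = b"\x00" * 16  # in practice: os.urandom(16), stored beside the key

k1 = derive_wrapping_key(b"correct horse battery staple", salt)
k2 = derive_wrapping_key(b"correct horse battery staple", salt)
k3 = derive_wrapping_key(b"password123", salt)

assert k1 == k2       # same passphrase + salt -> same wrapping key
assert k1 != k3       # different passphrase -> different key
assert len(k1) == 32  # 256-bit key, suitable for e.g. AES-256
```

The iteration count is what makes offline guessing expensive, which is why a stolen-but-wrapped key still buys you time, provided the passphrase is strong.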
Most of these are basic hygiene. An air gap can be inconvenient, but it is the best way to keep network-based attacks to a minimum. There are some other public-key cryptography basics to consider as well. For example, you should never use the same key for encrypting messages as you do for signing: policy around each type of key varies widely, so keep them separate.
The nuclear option is to include a hardware security module (HSM) in the server. These are available in various flavors from vendors like Thales, SafeNet, Cavium, and IBM. By generating your key pair in the HSM, then exporting only the public key for use in authentication, your keys are much harder to tamper with. This is not a panacea: HSMs like the IBM 4758 have had exploitable flaws in their firmware.
Performing all signature operations in the HSM ensures the private key is never disclosed. Furthermore, most HSMs let you mark keys as non-exportable, meaning the key can't be removed from the device. This raises the bar considerably, diminishing the probability of key theft.
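The contract an HSM offers can be modeled in a few lines: the key is generated inside the module, only the signing *operation* is exposed, and export is refused by policy. This toy class is purely illustrative (it uses a keyed hash as a stand-in for a real signature, and real HSMs are driven through standard APIs like PKCS#11, where non-exportability is the `CKA_EXTRACTABLE = false` attribute):

```python
# A toy model of the HSM contract: key generated inside, signing done
# inside, export refused. A real HSM is accessed via PKCS#11 or similar.
import hashlib
import secrets

class ToyHSM:
    def __init__(self):
        # Key material is created inside and never handed out.
        self._private_key = secrets.token_bytes(32)
        self._exportable = False  # CKA_EXTRACTABLE = false, in PKCS#11 terms

    def sign(self, firmware: bytes) -> bytes:
        # Only the operation is exposed, never the key. (Stand-in: a keyed
        # hash; a real HSM would perform RSA or ECDSA here.)
        return hashlib.sha256(self._private_key + firmware).digest()

    def export_key(self) -> bytes:
        if not self._exportable:
            raise PermissionError("key is marked non-exportable")
        return self._private_key

hsm = ToyHSM()
sig = hsm.sign(b"firmware v1.2.3")  # signing works...
try:
    hsm.export_key()                # ...but extraction does not
except PermissionError as err:
    print("export refused:", err)
```

An attacker who compromises the host can still *use* the key while they have access, but they can't walk away with it, which changes the shape of the incident considerably.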
Danger, Danger! Rabbit hole ahead!
Of course, this is a rabbit hole to head down. You still need to authenticate whoever is actually using the signing server. Even an HSM requires that a user identify themselves with a password. Strict physical access controls can prevent someone from tampering with the server, but you still need an authorized user to log in to sign the firmware.
Along with passwords, a second or third factor of authentication helps. In the past, I've done this with fingerprint sensors and a one-time password token or smart card to log into the machine. This isn't perfect: a determined attacker could go after the authorized signer (say, with a Glock or blackmail). But unless you're defending against a nation state, this is as deep as we reasonably need to go.
This is starting to sound a bit James Bond-like. Let's reel it in a bit.
The Human Element
Above all, how do you trust the people who set up the signing server? How do you guarantee they won't become malfeasant? How do you know the employees allowed to authorize signatures won't go rogue? The answers are simple: you can't and you don't. But by raising the bar so high that outright private key theft is the only option, maybe this becomes an acceptable risk. Cut down the number of people who can access the signing server, and you cut down the number of people who can pose a threat.
One interesting approach I've seen is to require two people, each holding part of a key, to agree before signing a release. Using Shamir's Secret Sharing scheme, you can build systems that provably require a threshold number of people to collude before any malicious signing is possible. This is too complex to implement for your typical organization, though.
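For the curious, the core of Shamir's scheme fits in a handful of lines. This minimal 2-of-3 sketch over a prime field splits a secret so that any two shares reconstruct it while one share alone reveals nothing (the threshold, share count, and example secret are all illustrative choices):

```python
# Minimal 2-of-3 Shamir's Secret Sharing over a prime field.
# Any k shares reconstruct the secret; fewer reveal nothing.
import secrets

P = 2**127 - 1  # a Mersenne prime, larger than any secret we'll split

def split(secret: int, n: int = 3, k: int = 2):
    # Random polynomial of degree k-1 whose constant term is the secret;
    # each share is a point (x, f(x)) on that polynomial.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 0xC0FFEE  # stand-in for the wrapped signing key material
shares = split(secret)
assert reconstruct(shares[:2]) == secret  # e.g. releng + security officer
assert reconstruct(shares[1:]) == secret  # any other pair works too
```

Hand one share each to three officers and require any two to convene for a release, and a single rogue employee can no longer sign anything alone.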
The simplest, and actually most effective, approach I've seen is to give access to the keys only to long-tenured employees. If they're well respected, that's a reasonable indicator that someone is trustworthy enough to hold the keys to the kingdom. But good luck picking these people out of the crowd!
We went down a bit of a rabbit hole today. Your public-key cryptography based system is only as secure as your private key. There are bare-bones, fundamental actions you should take, and other layers you can add on top of those, depending on the types of threats you want to protect against.
But does your threat model need to consider state actors? Do you need to worry about someone blackmailing your engineers? Are you likely to have a trusted engineer go rogue and create a malicious firmware load? I bet not, and it's only constructive to consider these threats after you've covered the bare-bones basics. I've sat in far too many meetings where people debated the finer points of these threats. Meanwhile, I pulled the unwrapped key off an unprotected Windows share...
Back to our Scheduled Programming
Next time, we'll talk a bit about techniques using hardware to simplify authenticating software. We'll look at the specifics of what Intel has done with Skylake to authenticate EFI loads, and a case study with Broadcom's secure microcontrollers.