
A Pragmatic Look at Trust

As I write this at the end of 2017, I think it's safe to say this was the year of the breach. Major data breaches achieved public notoriety. Huge corporations realized that trusting the supply chain is a real problem (though they are still grappling with what this really means). We are all perhaps a lot less innocent than we were even a year ago. Information security, as with any form of security, is a holistic practice. We need to solve these problems from the ground up. The rubber meets the road where business and cryptography cross paths.

Identity is a crucial piece of this puzzle. Any embedded device could have an identity. A service in the cloud should have a verifiable identity. The components of a motherboard in a server could have an identity. Users have an identity, perhaps verified using two-factor authentication. From smartphones to high-performance computing nodes, everything has an identity these days. It's best not to overlook the potential of establishing identities, and, in turn, of trusting them.

Really Establishing Trust

Trust looks different depending on the angle you approach it from. For most, trust implies a business relationship. For example, as a vendor I might choose to trust another business. This is common for things like SSL certificates: you rely on a third party to do the due diligence of attesting to another party's identity.

But how do we verify this trust relationship exists? Computers can't read contracts (don't buy the lies of the smart contract folks). But we can represent the business relationship with some simple cryptography.

Trust is a Business Decision

Alice is providing Bob a subassembly for Bob's ultra-fancy new IoT widget. This device is a crucial part of the widget's functionality. It is good fortune that Alice and Bob both have talented and pragmatic engineers. The architecture teams put their heads together to learn how elliptic curve (EC) cryptography could help solve their trust challenges.

Before the engineers can do their good work, though, Alice and Bob sign a contract agreeing to trust each other. After the dust settles from the endless legal calls and audits, they reach an agreement. Alice and Bob can now build on that legal relationship.

As a part of finalizing the integration of Alice's part into Bob's product, Alice and Bob swap keys out of band. Maybe two engineers meet in a dingy bar (with luck this is not the case). More likely, they arrange a phone call to exchange and verify each other's PGP keys. This forms the foundation for communicating the parameters for verifying this trust. Keep that PGP private key safe, though!

Alice's team generates an EC key pair. They send Bob's team a PGP-signed representation of their EC public key, perhaps in the form of a certificate signing request (CSR). The private key never leaves the hardware security module (HSM) that generated the key pair. (Right?)
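To make this concrete, here's a minimal sketch of Alice's side using Python's pyca/cryptography package. The names and curve choice are illustrative, and in reality the key would be generated inside the HSM rather than in application code:

```python
# Sketch only: in production, Alice's key pair is born inside an HSM
# and the private key is never visible to code like this.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Alice's "master" EC key pair (curve choice is illustrative)
alice_key = ec.generate_private_key(ec.SECP256R1())

# Wrap the public half in a CSR so Bob has something standard to sign
alice_csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"Alice Subassembly Root"),
    ]))
    .sign(alice_key, hashes.SHA256())
)

# This PEM blob is what gets PGP-signed and sent to Bob's team
csr_pem = alice_csr.public_bytes(serialization.Encoding.PEM)
```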

Bob's engineers also generate an EC key pair. They, too, keep the private key in their HSM. They receive (and verify) Alice's CSR using the PGP keys swapped out of band. At this point, Bob will sign Alice's CSR, making it a certificate. They send this back, also signed with PGP, to Alice's engineers. We have established a root of trust!*
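Bob's side, continuing the same sketch: after checking the PGP signature out of band, Bob verifies the CSR and issues a certificate. The names, validity period, and absence of extensions are all simplifications:

```python
# Bob's side of the sketch: verify the CSR, then issue a certificate.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

bob_key = ec.generate_private_key(ec.SECP256R1())  # lives in Bob's HSM
bob_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Bob Widget Root")])

csr = x509.load_pem_x509_csr(csr_pem)  # csr_pem arrived PGP-signed from Alice
assert csr.is_signature_valid          # Alice holds the matching private key

now = datetime.datetime.utcnow()
alice_cert = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(bob_name)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .sign(bob_key, hashes.SHA256())   # Bob's signature makes it a certificate
)
```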

Alice and Bob have an agreement to protect their respective private keys. Rather than using the EC key pair she just used to establish trust, Alice instead issues child keys. This keeps the private key of the "master" key pair in as few places as possible. Alice needs to generate a key pair for each of the factories where she makes her part. There are some challenges around this: Bob needs to ensure that Alice doesn't go rogue and issue a key to some evildoer. With luck, the business relationship is what protects against this, and continuous auditing of Alice's processes will be a part of it. Each subassembly in Alice's factory is given a unique identity, which is then shown to be trustworthy by Alice signing it with a key pair that Bob knows he can trust.
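A rough sketch of that hierarchy, under the same assumptions as before (invented names, keys generated in software rather than in HSMs):

```python
# Two-level hierarchy sketch: master -> per-factory key -> per-unit identity.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def issue_cert(subject_cn, subject_key, issuer_cn, issuer_key, days=3650):
    """Sign subject_key's public half with issuer_key (sketch only)."""
    cn = lambda name: x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, name)])
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(cn(subject_cn))
        .issuer_name(cn(issuer_cn))
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .sign(issuer_key, hashes.SHA256())
    )

master_key = ec.generate_private_key(ec.SECP256R1())   # stays at headquarters

# One child key per factory; compromise of one factory burns only that key
factory_key = ec.generate_private_key(ec.SECP256R1())
factory_cert = issue_cert(u"Alice Factory 7", factory_key,
                          u"Alice Subassembly Root", master_key)

# Every subassembly off the line gets its own signed identity
unit_key = ec.generate_private_key(ec.SECP256R1())
unit_cert = issue_cert(u"Subassembly SN 0001", unit_key,
                       u"Alice Factory 7", factory_key)
```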

We're not done yet, though at this point Bob's widgets have everything they need to establish that they should trust Alice's subassembly.

At manufacturing, Bob needs to boot his widget for the first time. As the new device opens its eyes, it reaches out to the various assemblies. Now it can verify who it is talking to. The widget reaches the point of provisioning its relationship with Alice's subassembly. Alice's subassembly hands Bob's widget an identity certificate. Using Bob's public key baked into his firmware, Bob's device is able to check the subassembly's identity. As a part of this authentication, the device and subassembly can also exchange keys. This allows the device and subassembly to secure and authenticate future communications.
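Here's what that first-boot check might look like, in sketch form. The sign_challenge callback is a hypothetical stand-in for whatever bus transaction asks the subassembly to sign a nonce:

```python
# First-boot sketch: the widget checks the subassembly's certificate
# against the public key baked into its firmware, then challenges the
# subassembly to prove it holds the matching private key.
import os
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def authenticate_subassembly(baked_in_pubkey, cert_pem, sign_challenge):
    """sign_challenge() is a hypothetical stand-in for a bus transaction."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        # Was this certificate signed by a key we already trust?
        baked_in_pubkey.verify(cert.signature, cert.tbs_certificate_bytes,
                               ec.ECDSA(cert.signature_hash_algorithm))
        # Does the peer actually hold the private key, or just a copied cert?
        nonce = os.urandom(32)
        cert.public_key().verify(sign_challenge(nonce), nonce,
                                 ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return True
```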

* The astute reader will note that the real cryptographic root of trust for Alice and Bob's business relationship is the PGP key. By exchanging this PGP key in person (or via a hard-to-tamper-with medium), you can have some confidence that secrets exchanged with that key are authentic.

A Case Study: Apple does it Right

Apple deserves recognition for their trust model in their devices. The Secure Enclave Processor (SEP) is the root of all identity for the device and for the user. The tight security model for the SEP makes this a reliable and trustworthy source of identity.

But it doesn't stop there. The SEP has a separate root of trust, distinct from that of the Application Processor (AP). Even with a compromised AP, third parties can continue to trust secrets held in the SEP. Key policy and identity policy (i.e. biometrics) both fall within the SEP's purview. Simple "yes/no/decrypted output" answers are all the AP gets when asking the SEP to assert these things. There's no path to export a private key from the SEP, reducing the odds someone could exploit errors in key attribute enforcement. All good design choices.

Finally, there's the concern of trusting third-party subassemblies. The Touch ID and Face ID sensors each carry their own unique identities. A "trust ceremony" occurs in the factory, where each device swaps its identity with the SEP. This creates a difficult-to-compromise channel between the two. Like marriage, until death do us part... or shattered phone digitizer do us part (looking at you, millennials). While the details are not public, it is likely a scheme similar to that used by Alice and Bob. The end result is that you have authenticated and encrypted communications with the sensor. It's much easier to build a physical security model around this.

By managing the trust between the sensor and the SEP this way, we get another benefit for users. Replacing a Touch ID sensor or Face ID camera doesn't have to happen in a controlled environment. Instead, anyone with the right tools can cause a new device to pair with the old. The exchange of keys between the two devices can happen anywhere. All through the power of Diffie-Hellman key exchange.
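Apple hasn't published the pairing protocol, so what follows is only a generic sketch of how two already-authenticated parts could derive a shared pairing secret with ephemeral ECDH:

```python
# Generic (non-Apple) sketch: two already-authenticated parts derive a
# shared pairing secret with ephemeral ECDH, then run it through a KDF.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

sensor_key = ec.generate_private_key(ec.SECP256R1())
sep_key = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the peer's public key...
sensor_shared = sensor_key.exchange(ec.ECDH(), sep_key.public_key())
sep_shared = sep_key.exchange(ec.ECDH(), sensor_key.public_key())
assert sensor_shared == sep_shared  # ...and arrives at the same secret

# Never use the raw shared secret directly; derive the pairing key
pairing_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"sensor-pairing-v1").derive(sensor_shared)
```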

Many Trusts, Many Roots

In our previous example, Alice and Bob established trust between each other. Alice can now make devices that Bob can identify with ease. But Bob's widget needs a microprocessor to run its logic. Luckily for Bob, Maxim Integrated has a great part that he plans to use. Maybe it's a MAX32552, but his NDA won't let him confirm that.

Trust in Secure Microcontrollers

Bob and Maxim establish a trust relationship (as Alice and Bob did before). Bob's and Maxim's incentives align well. Maxim wants to keep their platform locked down for PCI-PTS compliance. Bob wants to keep his users' data secure.

Maxim devices have a root of trust baked into them at the factory, in the form of a factory-programmed public key; only Maxim holds the private key of the pair. Maxim "attests" that they trust the key that Bob sent them out of band. That attestation allows Bob to load his custom firmware into the Maxim part. The Maxim part verifies this using the public key in the device's ROM. Now it's up to Bob to keep his product (and keys) secure, though; Maxim has done all they can.
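Stripped of the NDA'd details, a verified-boot check boils down to something like this sketch (generic ECDSA over the whole image; a real ROM verifies headers, applies rollback protection, and so on):

```python
# Sketch of what a ROM bootloader's check boils down to: a public key
# fixed in ROM verifies a signature over the firmware image before the
# ROM transfers control to it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def rom_boot_check(rom_pubkey, firmware_image: bytes, signature: bytes) -> bool:
    try:
        rom_pubkey.verify(signature, firmware_image,
                          ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False  # refuse to boot a tampered image
    return True       # safe to jump into the verified firmware
```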

Bob's firmware contains the keys needed to verify Alice's subassembly (and those of any other partner). The Maxim ROM bootloader ensures nobody has tampered with Bob's firmware image at load (and boot) time. At last, we have all the pieces to establish a chain of trust.

In Bob's case, there is now a pretty straightforward set of trust relationships:

  • Maxim has indicated it trusts Bob's key. This allows Bob's firmware to run on their microcontroller.
  • Bob's firmware contains the information needed to verify trust of Alice's subassembly (and others).
  • Bob's firmware is able to authenticate Alice's subassembly.
  • Bob's firmware is now able to trust any data delivered by Alice's subassembly.

All this allows Bob's devices to verify the business relationship with Alice.

Key Hygiene

There are a lot of points where Bob needs a key pair. In fact, at almost any point where there is a trust relationship to verify, Bob ought to use a unique key pair. A lazy user might suggest that Bob just reuse the same key pair everywhere. But a basic tenet of key management is that you never reuse keys for different purposes. Different purposes, different keys.
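In sketch form, Bob's keyring might look like the following. The purpose labels are invented, but the point is that each purpose gets its own pair, so any one key can be rotated or revoked without disturbing the others:

```python
# Sketch: one key pair per purpose, never shared across purposes.
from cryptography.hazmat.primitives.asymmetric import ec

bob_keys = {
    "firmware-signing":   ec.generate_private_key(ec.SECP256R1()),
    "partner-trust":      ec.generate_private_key(ec.SECP256R1()),
    "device-identity-ca": ec.generate_private_key(ec.SECP256R1()),
}
```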

A strong rationale for this is key revocation. Revocation of trust is an unfortunate but necessary feature in these times. Sometimes a business relationship goes sour.

A Breakdown of Trust

Bob discovers that Alice's employees stole one of the factory keys. Alice has not held up her end of the contract: the private key was not in an HSM. Bob has now moved his business to Carol and established trust with her business. Going forward, for new devices, Bob does not want to trust Alice's devices at all. Bob releases Firmware Version 1.1 (signed using his firmware signing key, itself signed by Maxim). The major change: this release removes the key used to verify Alice's subassembly.

In its place is the public key needed to verify Bob's business relationship with Carol. This way, there's no chance of mistaken identity. The public key for trusting Alice's devices is no longer available, so Bob has no chance of trusting a non-trustworthy Alice-made subassembly. In the most brutal sense, Bob revoked his trust of Alice. By tying the key to only trusting Alice, we reduce the possibility of human error, or for that matter, of an untrustworthy part sneaking into the supply chain.
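A sketch of what that trust-anchor table might look like inside Bob's firmware (the key here is generated on the fly purely as a stand-in for Carol's real root):

```python
# Sketch of the trust-anchor table baked into Bob's firmware.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def pubkey_pem(key):
    return key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)

carol_root = ec.generate_private_key(ec.SECP256R1())  # stand-in only

# Firmware v1.0 carried Alice's anchor; v1.1 simply ships without it,
# so no code path exists that could trust an Alice-signed part.
TRUST_ANCHORS_V1_1 = {
    "subassembly-vendor": pubkey_pem(carol_root),  # Carol replaces Alice
}
```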

A Root of Trust Carries a Long Way

While manufacturing his device, Bob also has it generate a unique identity key pair. The private key never leaves the device; the public key is extracted by Bob during manufacturing. With appropriate physical security measures in the manufacturing cell, this has an interesting implication. By having the device generate a CSR with its unique key, Bob can sign this CSR with his own trusted key. The device can then take this certificate and use it to certify its own identity.
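The device-side half of that provisioning step, in sketch form (the serial number and naming are invented):

```python
# Device-side sketch of identity provisioning in the manufacturing
# cell. The private key stays on-device; only the CSR crosses the link.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

device_key = ec.generate_private_key(ec.SECP256R1())  # born on the device

device_csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"Widget SN 00042"),
    ]))
    .sign(device_key, hashes.SHA256())
)
# Bob's manufacturing HSM signs this CSR (exactly as he signed Alice's)
# and hands back a certificate the device stores for later use.
```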

The implications of this are astounding. Given that we can trust the code running on the device and the data from Alice's subassembly (or Carol's, after the business relationship with Alice breaks down), Bob can now use this certificate to identify the device. Not only that, Bob can use this certificate to authenticate and secure communication with the device. Of course, this all hinges on Bob's device storing its private key in a secure way.

This becomes the foundation for establishing a trusted, secure channel to the device.

Failures Happen (Often)

Bugs happen. They manifest in hardware, in software, or in broken architectures. I've seen many trust models that were simply not trustworthy. Careless use of unauthenticated data, broken fall-back schemes, and JTAG failing open can all cripple a trust model. Usually this is how a device falls. Improper range checking, a misused buffer, and improper memory protection are the low-hanging fruit for a bored or malicious individual.

In times past, a trust envelope might have been an actual envelope, i.e. a Gore anti-tamper mesh. This technology is still relevant for many applications. For one, not all devices have reliable roots of trust.* Some devices are also of such high value that a physical attack is worth the high cost to an adversary. HAIPEs built for the government make use of extensive physical countermeasures for this reason. This not only ensures that key material doesn't fall into the wrong hands, but keeps algorithms as secret as possible. Of course, by Kerckhoffs's principle this shouldn't matter. But don't let pragmatism get in the way of a fun physical security model!

Last but not least, there are limits to the assertions about identity that one can make purely using cryptography. The strongest assertion any asymmetric crypto scheme allows you to make is that someone likely in possession of the private key calculated a value. This is where all these physical tamper-resistance requirements become important. Once you get the key material somewhere safe, you need to make sure it's difficult for an adversary to extract it.

* Have a look at my Twitter feed for a teardown of a modern device with an anti-tamper mesh. Kudos to our friends at Clover, by the way; they have one of the better anti-tamper schemes I've seen recently. On my old Flickr page, you can also find a teardown of an IBM 4758 FIPS 140 Level 4 device. IBM used Gore anti-tamper meshes for this product.

Those Devil-y Details

Of course, in all the examples of Alice, Bob, and Carol, we haven't talked about mechanisms. To trust a child key, you need to know all its parent keys. This means that each device needs to store and verify this chain of keys, starting from the key's parent and working its way up to the root. This can be time consuming, but it isn't done too often.
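A minimal sketch of that chain walk, leaf to root. Real verification also has to check validity periods, basic constraints, key usage, and revocation; this only checks signatures:

```python
# Minimal chain walk: every certificate's signature must verify under
# its issuer's public key, and the last link must verify under a trust
# anchor baked into the device.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ec

def verify_chain(leaf, intermediates, anchor_pubkey):
    chain = [leaf] + list(intermediates)
    issuer_keys = [c.public_key() for c in chain[1:]] + [anchor_pubkey]
    for cert, issuer_key in zip(chain, issuer_keys):
        try:
            issuer_key.verify(cert.signature, cert.tbs_certificate_bytes,
                              ec.ECDSA(cert.signature_hash_algorithm))
        except InvalidSignature:
            return False
    return True

# e.g., with the earlier sketches: verify_chain(unit_cert,
#     [factory_cert, alice_cert], bob_key.public_key())
```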

Storage and representation of these certificates is also a challenge. Nowhere have I seen more go wrong than with X.509 implementations. If you end up using X.509, do yourself a favor and have a glance at mbedTLS; it has some well-tested code for managing certificate chains. Something custom isn't always out of the question, but there is always risk in rolling your own trust systems.

Legal contracts aren't foolproof. They are one way of managing trust, enforced through the courts and large sums of money. The smart contract folks (and others) would say this is not adequate for all cases. Of course, Bob is a business building a product at this stage and is trying to capture a market. So maybe this all works well enough for what he needs.
