
A Treatise on Voting Machines

The Voting Machine Village, round two, took place at DEFCON this past weekend. While I generally view this as stunt hacking, there's something to be said for how important it is for raising awareness. The Wall Street Journal published a piece this weekend about the village and vendors' responses to it. The response from Election Systems & Software (ES&S) is what I'd define as ignorant. In fact, if I were one of their customers or potential customers, I'd be downright worried to hear such a response. We're off to a good season for next year's Pwnies, it seems.

But let's leave sour grapes aside. The intersection of the physical world and information security is continually a disaster. Every industry seems hell-bent on learning, the hard way, the same lessons others have learned before them. It turns out that just adding computers to a process will not automatically make it better. Voting is complex, emotional, and an opportunity to seize broad-reaching influence and power. That seems like something we should invest, heavily, in protecting.

The Information Security Cycle of Violence

For whatever reason, most industries insist on learning information security from first principles. Nobody likes to look at how others have succeeded in protecting their users (or failed catastrophically). It seems paying tuition is the way to go.

Figure 1: The Victim

I had the pleasure of getting my hands on a Diebold AccuVote TSx voting machine earlier this year (NB: if you want one, eBay is your friend). Of course, I'm not the first to have this pleasure: many have trod this turf before (for earlier hardware, see here; it is stark that nothing has improved in the intervening decade). The AccuVote TSx has minimal physical security features. The device mostly relies on tamper-evident seals around its various doors, and on scrutineers and polling officials checking those seals periodically. That works great on election day.

ES&S's response yesterday was quick to point this out, and in fact there's a lot of truth to the statement. There's no way anyone could lay the machine bare, as was done at DEFCON, on election day: wiring up a JTAG probe and using it to modify the software or data would leave a pretty obvious trail of destruction. You'd surely be noticed, and you'd leave plenty of evidence of tampering. A fair point, but only for election day.

When a vendor sees this and belabors the point, they're already headed down a slippery slope. It betrays a short-sighted world view: one where they address only the immediate threat in front of them. They seek to make their adversary look unrealistic and absurd, while sweeping under the rug the other 360-or-so days a year the device sits in storage, unattended. A classic smear technique.

This is a concrete example of a poor vendor response, likely made in full awareness that there is a deeper problem. Perhaps ES&S is working from talking points they put together in 2003. Or maybe the people crafting the response have no background in risk or information security and are just worried about keeping their customers feeling safe, band-aid style. Either way, this fallacy has been oft repeated by vendors, and it has always led to embarrassment or worse; usually it buys nothing more than short-term protection of a revenue stream.

To save ES&S, Diebold and other companies from the difficulty of learning this from first principles, I've laid out what is going to happen.

Stage 1: We are Physically Secure When Used as Designed, Honest!

This vendor antipattern repeats itself often. They'll see some effort (cf. Bitfi or the Voting Machine Village) where adversaries pop the device open and analyze the architecture. The vendor response focuses only on the normal use case, not the case where the device is beyond the watchful eyes of its owner (or poll workers, in this case). This is a typical engineer's-eye view of security -- nobody will use a device outside its normal operating parameters. So much so that they might well leave a lot on the table -- on the FR4, even.

Figure 2: The victim, laid bare. Note the useful silkscreen all over the board, handy to figure out how the device is supposed to work. Also of note, every part is well-documented and broadly used, from the SM501 system controller, to the Intel PXA CPU, through to the touchscreen digitizer. Not one of these parts provides a real security model.

Analysts and attackers will have an easy time finding JTAG ports and noisy UARTs. These are handy for dumping the firmware and analyzing how the device works. Someone will write tools to analyze the firmware image, and perhaps even get arbitrary code running on the device. The vendor's response will be along the lines of "this is not a normal environment" or "this would not be feasible under controlled conditions" and so forth. This is, of course, accurate, but not everything in the real world is controlled.
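
To make that concrete, here is a minimal sketch in C -- entirely hypothetical, not lifted from any vendor's firmware -- of the kind of leftover debug console that makes a noisy UART so valuable to an attacker: a dump command that happily reads out the flash image. It's simulated here on a host, with a stand-in array playing the role of flash.

    /* Hypothetical leftover factory debug console. A command like "dump"
     * turns a forgotten UART into a free firmware-extraction tool.
     * Simulated on a host with a stand-in "flash" array. */
    #include <stdio.h>
    #include <string.h>

    static const unsigned char flash_image[4096] = { 0xde, 0xad, 0xbe, 0xef };

    static void cmd_dump(unsigned long offset, unsigned long len)
    {
        /* No authentication, no thought to who might be asking -- typical
         * of bring-up code that was never meant to ship. */
        for (unsigned long i = 0; i < len && offset + i < sizeof(flash_image); i++) {
            printf("%02x", flash_image[offset + i]);
            if ((i + 1) % 16 == 0)
                printf("\n");
        }
        printf("\n");
    }

    int main(void)
    {
        char line[128];

        /* On the real device, this loop would read from the UART. */
        while (fgets(line, sizeof(line), stdin) != NULL) {
            unsigned long off, len;
            if (sscanf(line, "dump %lx %lx", &off, &len) == 2)
                cmd_dump(off, len);
            else if (strncmp(line, "quit", 4) == 0)
                break;
            else
                printf("unknown command\n");
        }
        return 0;
    }

An attacker who finds a console like this needs nothing more than a serial adapter and patience to walk the entire address space.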

Eventually, several enterprising hackers will come up with a way to implant the device such that the result is indistinguishable from the original firmware. Maybe it relies on an implant that plugs into an audio port and takes advantage of some factory test-and-bring-up feature that was accidentally left enabled in the firmware. Perhaps they'll find a port that can be pried open without damaging a tamper-evident sticker. Or maybe they'll find a hidden debug pattern that can be tapped on the screen, granting a user unfettered access to the internals of the machine. Cue the vendor scrambling to respond.
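
For illustration, a hidden debug gesture can be as small as a hard-coded sequence check buried in the touchscreen driver. A hypothetical sketch in C -- nothing here comes from an actual voting machine:

    /* Hypothetical hidden debug unlock: tapping the screen corners in a
     * magic order drops the machine into a maintenance shell. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    enum corner { TOP_LEFT, TOP_RIGHT, BOTTOM_RIGHT, BOTTOM_LEFT };

    static const enum corner unlock_sequence[] = {
        TOP_LEFT, TOP_LEFT, BOTTOM_RIGHT, TOP_RIGHT, BOTTOM_LEFT
    };

    static size_t progress;

    /* Called by the touchscreen driver for every corner tap. Returns
     * true once the full magic sequence has been entered. */
    static bool debug_unlock_on_tap(enum corner c)
    {
        if (c == unlock_sequence[progress])
            progress++;
        else
            progress = 0; /* wrong tap: start over */

        if (progress == sizeof(unlock_sequence) / sizeof(unlock_sequence[0])) {
            progress = 0;
            return true; /* caller drops into the maintenance shell */
        }
        return false;
    }

    int main(void)
    {
        /* Simulate the magic sequence being tapped. */
        const enum corner taps[] = { TOP_LEFT, TOP_LEFT, BOTTOM_RIGHT,
                                     TOP_RIGHT, BOTTOM_LEFT };
        for (size_t i = 0; i < sizeof(taps) / sizeof(taps[0]); i++) {
            if (debug_unlock_on_tap(taps[i]))
                printf("maintenance shell unlocked\n");
        }
        return 0;
    }

A few lines of logic, invisible from the outside, defeating every seal on the chassis.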

It is also easy to forget that these techniques can be used to modify the device outside of normal election circumstances. Voting machines are locked up in a room for most of the year, usually somewhere run by people who might not be the most physical-security minded. A determined adversary could figure out how to get access to the room and modify the devices slowly over the course of the year. They might even procure replacement tamper-evident seals to install after they've finished their work. (Do you know the difference between an authentic Diebold tamper-evident seal and a fake? The instructions on how to verify the seals range from non-existent to vague, at best.) Having this much time means that installing a software implant is not out of the question, and the devices can still be left in good-to-go shape for election day.

The final fallacy of this stage is assuming that attackers are not sophisticated enough (or well-funded enough) to build alternative firmware for such a device. Even smartphone vendors consider this part of their threat model today.
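
For contrast, the bar smartphone vendors clear is conceptually simple: refuse to boot firmware that isn't signed by the vendor. Here is a minimal sketch of that check using libsodium's Ed25519 detached signatures (the key handling and names are placeholders for illustration, not any vendor's actual scheme; build with -lsodium):

    /* Minimal secure-boot-style check: refuse to run a firmware image
     * unless its detached Ed25519 signature verifies against a public
     * key that, on a real device, would be baked into boot ROM or fuses. */
    #include <sodium.h>
    #include <stdio.h>

    static unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES];

    static int firmware_is_trusted(const unsigned char *image,
                                   unsigned long long len,
                                   const unsigned char sig[crypto_sign_BYTES])
    {
        return crypto_sign_verify_detached(sig, image, len, vendor_pk) == 0;
    }

    int main(void)
    {
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        unsigned char sig[crypto_sign_BYTES];
        const unsigned char image[] = "vote-counting firmware v1.0";

        if (sodium_init() < 0)
            return 1;

        /* Stand-in for the vendor's offline signing step. */
        crypto_sign_keypair(vendor_pk, sk);
        crypto_sign_detached(sig, NULL, image, sizeof(image), sk);

        printf("boot %s\n",
               firmware_is_trusted(image, sizeof(image), sig) ? "allowed"
                                                              : "refused");

        /* Flip one byte -- the implant scenario. Boot must refuse. */
        sig[0] ^= 0xff;
        printf("boot %s\n",
               firmware_is_trusted(image, sizeof(image), sig) ? "allowed"
                                                              : "refused");
        return 0;
    }

The point isn't the particular cipher; it's that a one-byte change to the image or its signature turns into a refusal to boot, which is exactly what makes a firmware implant expensive.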

Stage 2: We Need to be Physically Secure

Eventually, egg on their face, the vendor will realize that tamper evidence is not enough: tamper responsiveness and real physical security are necessary. The next iteration of the device will destroy the ability to update election data if it detects tampering, eliminate all external ports, maybe even pot the critical electronics in epoxy, and wrap all of that in a tamper-responsive envelope. Most PCI-PTS certified PIN-entry devices (PEDs) do something like this, and cost-effectively.
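
What "tamper responsiveness" means in practice: the moment the envelope is breached, the device destroys the secrets that make it trustworthy, rather than merely leaving a broken sticker behind. A sketch of the idea in C (the key-store layout and interrupt wiring are invented for illustration; real PCI-PTS devices implement this in dedicated, battery-backed hardware):

    /* Sketch of tamper response: an interrupt from the tamper mesh
     * zeroizes key material before an attacker can reach it. */
    #include <stdint.h>
    #include <stdio.h>

    #define KEY_STORE_BYTES 64

    /* Stand-in for battery-backed secure RAM holding device keys. */
    static volatile uint8_t secure_ram[KEY_STORE_BYTES];

    /* Latched once tampered; the device refuses to operate afterward. */
    static volatile int tamper_latched;

    /* Wired to the tamper-mesh interrupt line; must not depend on the
     * main firmware being intact, or even running. */
    static void tamper_isr(void)
    {
        /* volatile writes, so the compiler cannot optimize the wipe away */
        for (int i = 0; i < KEY_STORE_BYTES; i++)
            secure_ram[i] = 0;
        tamper_latched = 1;
    }

    static int device_may_accept_ballots(void)
    {
        return !tamper_latched;
    }

    int main(void)
    {
        printf("before tamper: %d\n", device_may_accept_ballots()); /* 1 */
        tamper_isr(); /* simulate the mesh being broken */
        printf("after tamper:  %d\n", device_may_accept_ballots()); /* 0 */
        return 0;
    }

Once the keys are gone, a stolen or implanted unit can no longer masquerade as a trustworthy one; the worst case degrades from "silently altered election" to "dead machine".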

Of course, voting machines are nowhere near this level of physical security today.

Figure 3: in which a JTAG probe is used to read and write the software image

Better software verification regimes also need to take over. By eliminating as much I/O as possible, software engineers can focus on hardening the attack surface that remains. Third-party security audits are a must, because even the best security engineer has blind spots and needs help finding them.

In making these changes, the vendor will raise the bar dramatically. Attackers will need to resort to increasingly complex and esoteric mechanisms to extract firmware, load arbitrary code, or even get an intact device into their hands for analysis. This is a very positive change for customers, and will increase their confidence in the vendor. Of course, it will also increase the cost of each device. Eliminating the potential PR nightmare of trivial implantation, and reducing the weight of the question "how trustworthy is the result?", will be positive outcomes, though.

Stage 3: Physical Security is Not Enough

After all this investment, the vendor will learn that the human is truly their enemy. It's almost impossible to eliminate this problem outright, so they'll have to mitigate it by reducing the harm that people, careless or malicious, can do.

This enemy runs deep, and in some cases is the very set of people you need to get your product out the door:

  • Disgruntled developers running off with source code, giving it away or selling it to the highest bidder;
  • A debug device (or five) making it into the wild, letting attackers get a glimpse into the internals of the device (ask Apple how they feel about this);
  • Poll workers colluding to make sure their preferred party is elected.

And those are just a few ideas off the top of my head. These are difficult problems to address. By building a device with a secure identity, one that requires zero-trust provisioning, you might be able to cordon off the risk of your source code being released into the wild, or of debug devices getting into the hands of an adversary. Of course, that requires a well-defined security testing regime, a good understanding of how to 'secure' the systems development life cycle (SDLC), and general good engineering hygiene. This isn't a perfect solution, but it is a good way to eliminate low-hanging fruit.
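
To sketch what a secure identity buys you: each device holds a unique key pair provisioned at manufacture, with the public half enrolled by the election authority, so the unit can later prove which device it is by signing a fresh challenge. A rough illustration with libsodium (the flow and names are my assumptions, not any vendor's protocol; build with -lsodium):

    /* Sketch of device-identity attestation: the election authority
     * issues a random challenge; the device signs it with its unique,
     * factory-provisioned key; the authority verifies the signature
     * against the enrolled public key. */
    #include <sodium.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char device_pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char device_sk[crypto_sign_SECRETKEYBYTES];
        unsigned char challenge[32];
        unsigned char sig[crypto_sign_BYTES];

        if (sodium_init() < 0)
            return 1;

        /* Factory provisioning: the key pair is generated in the device
         * and the public half is recorded by the election authority. */
        crypto_sign_keypair(device_pk, device_sk);

        /* The authority issues a fresh random challenge... */
        randombytes_buf(challenge, sizeof(challenge));

        /* ...the device signs it... */
        crypto_sign_detached(sig, NULL, challenge, sizeof(challenge),
                             device_sk);

        /* ...and the authority checks it against the enrolled key. */
        if (crypto_sign_verify_detached(sig, challenge, sizeof(challenge),
                                        device_pk) == 0)
            printf("device identity verified\n");
        else
            printf("unknown or cloned device\n");
        return 0;
    }

A cloned or counterfeit unit without the provisioned key can't answer the challenge, which shrinks the set of devices worth attacking to the ones actually in inventory.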

And of course, once you have reached this level of enlightenment, you'll also understand that you're in a long-term game of whack-a-mole. Attackers will find flaws in your device, and you will need to respond in a measured and well-thought-out manner.

The Human is the Enemy

The DEFCON Voting Machine Village is critical PR to raise awareness of the challenges in this space. I'd argue it's intended to get voting machine vendors past Stage 1 of the security cycle and push them to Stage 2, where they actually build secure (and auditable) devices. We all know that the real weakness is the human factor, though. Electronic voting is a mature industry -- these devices have been around for well over a decade, and have been in broad use for longer. There is very little excuse for where things are today. Many other industries (such as payment technology) have invested extensively in low-cost, repeatable, and usable active tamper detection. The voting equipment industry could stand to learn some lessons from that space.

Why we aren't holding voting machine vendors to the security standards we have come to expect from smartphones and other consumer electronics is beyond me. Elections are critical to democracy, so why are people's duckface selfies, tweets, and Facebook tags deemed more security-critical than the process by which officials are elected to office?

Coda: On License Violations

Finally, ES&S, I believe you must be insane:

In the letter, ES&S also warned election officials ahead of the conference that unauthorized use of its software violated the company’s licensing agreements, according to a copy of the letter viewed by The Wall Street Journal.

Claiming that the Voting Machine Village violates your licensing terms, as a defense? So maybe you manage to shut down the Voting Machine Village. Will that stop someone who bought the device from a scrapper, and never agreed to the license terms, from attacking it? What about a determined adversary who does so and creates a modified firmware load that alters election results? Are they going to respect your license as they load code into the flash on your device? That sounds like a license violation too -- I hope you have some way to detect it, to save elections from your insecure hardware.

Or maybe voters will have to agree to a EULA before they cast their vote -- one that doesn't guarantee the machine will correctly count the vote, but does make them promise not to violate your software license terms.
