Secure Design of Safety Critical Software-Controlled Devices

As a licensed P.Eng., I recently received a draft of a document on engineering practices for the design of safety critical software. It seemed well thought out and well written, but one of the last statements caught me by surprise: it said, in effect, that your design should prevent Stuxnet-like attacks.

I think it’s great that the profession is starting to take this stuff seriously, but I was a bit shocked at the implication of this document. Primarily I was concerned because Stuxnet was nothing like an ordinary attack: it was a nation-state-backed attack with huge funding and resources, going after a military/industrial target. Defending against it would be like defending your office building against a laser-guided bomb.

What does that mean for us practicing engineering in, say, the automotive industry? Do we just get to brush the whole cybersecurity thing aside because a Stuxnet-like attack against our facility has absolutely zero chance of ever being launched? It doesn’t seem like that’s what the document was trying to say.

In fact, is Stuxnet even a relevant example for “Safety Critical Software”? Did Stuxnet actually affect any safety critical systems, like a Safety PLC? All it did was damage equipment. Was anyone hurt? Was anyone in danger of being hurt? I guess I just disagree with the use of Stuxnet as an example here.

That’s not to say cybersecurity doesn’t need attention from our profession. In particular, consider the case of so-called “Safety PLCs”. Here you have a device, typically sitting on some kind of network, that is directly responsible for safety critical functions. There are at least two pieces of software installed on a Safety PLC: the firmware (the operating system, communication stack, and runtime of the device) and the user-configurable logic that you download when you’re programming the system. Compromising either of these certainly poses a grave risk to human safety.

The user-configurable logic is supposed to be password protected. This is one step up from normal PLCs, which rarely offer any kind of authentication. Now, when TUV, or whoever, certifies these devices, they are definitely checking that once the device is locked, you can’t update the user-configurable logic without a password. In fact, TUV has access to the design, so I *hope* someone there is well versed in cybersecurity and knows what they’re talking about. For instance, if it were found that authentication was being done in the programming software (the client application) and not on the device itself, the security would be completely useless, because someone could easily bypass it. But let’s assume that nobody building these things is doing anything that blatantly wrong.
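To make that last point concrete, here’s a minimal, entirely hypothetical sketch in C (not any vendor’s actual protocol): if the only password check lives in the programming software, an attacker who crafts the wire-protocol message directly never sees the prompt, whereas a check enforced in the PLC firmware itself can’t be skipped.

```c
/* Hypothetical sketch only -- not any vendor's actual protocol. It shows why a
 * password check that lives only in the programming software (the client) is
 * no security at all: an attacker who crafts the wire-protocol message
 * directly never sees the password prompt. */

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_WRITE_LOGIC 0x10

typedef struct {
    uint8_t  type;
    char     credential[32];   /* proof of authorization, if the device checks it */
    uint8_t  logic[256];       /* the user-configurable safety program */
    uint16_t logic_len;
} message_t;

/* Broken design: the firmware assumes the client already verified a password,
 * so it flashes whatever arrives. The client-side prompt is pure theatre. */
static void firmware_handle_broken(const message_t *msg)
{
    if (msg->type == MSG_WRITE_LOGIC)
        printf("flashing %u bytes of safety logic (no check!)\n",
               (unsigned)msg->logic_len);
}

/* Minimum acceptable design: the device itself verifies the credential,
 * ideally via challenge-response rather than a cleartext password. */
static bool device_credential_ok(const char *credential)
{
    return strcmp(credential, "expected-proof") == 0;   /* placeholder check */
}

static void firmware_handle_checked(const message_t *msg)
{
    if (msg->type != MSG_WRITE_LOGIC)
        return;
    if (!device_credential_ok(msg->credential)) {
        printf("write rejected on the device\n");
        return;
    }
    printf("flashing %u bytes of safety logic\n", (unsigned)msg->logic_len);
}

int main(void)
{
    /* An attacker's hand-crafted message: no password was ever entered. */
    message_t forged = { .type = MSG_WRITE_LOGIC, .credential = "", .logic_len = 64 };

    firmware_handle_broken(&forged);    /* succeeds -- the protection is bypassed */
    firmware_handle_checked(&forged);   /* rejected -- the check can't be skipped */
    return 0;
}
```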

So even if the client/device protocol is rock solid, is the firmware bulletproof? I think that’s unlikely. Every time security researchers run even the most basic of security audits against PLCs, they turn up plenty of obvious exploits. You have to realize that most of these devices are just embedded computers running off-the-shelf real-time operating systems like VxWorks, typically with the standard VxWorks network services, certain versions of which have known vulnerabilities, and the firmware is unfortunately rarely updated in the field. That’s not even counting deliberate backdoors and debug code left in the software. Does TUV do a security audit of every design? Maybe, but I doubt it. Even if they do, are they doing repeated security audits against older devices? When new vulnerabilities are found in these embedded operating systems that affect existing devices, are they requiring recalls? I’ve never heard of such a thing.
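For a sense of what “the most basic of security audits” even means here, the sketch below (plain C sockets, a placeholder target address, a port list of common industrial/embedded services) just checks which network services a controller exposes. UDP 17185, the VxWorks WDB debug agent, is the classic example of debug code left enabled in shipped firmware; it needs a protocol-aware probe, so it’s only noted in a comment. This is an illustration, not a pentest tool, and should only ever be pointed at equipment you own.

```c
/* A sketch of the most basic kind of audit: checking which well-known network
 * services an embedded controller exposes. The target address is a
 * placeholder; the TCP ports are common industrial/embedded services. UDP
 * 17185 (the VxWorks WDB debug agent) is noted in a comment because it needs
 * a protocol-aware probe rather than a bare connect(). */

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if a plain TCP connect() to ip:port succeeds, 0 otherwise. */
static int tcp_port_open(const char *ip, uint16_t port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int is_open = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return is_open;
}

int main(void)
{
    const char *target = "192.168.1.10";               /* placeholder PLC address */
    const uint16_t ports[] = { 21, 23, 80, 102, 502 }; /* FTP, Telnet, HTTP, ISO-TSAP, Modbus/TCP */

    for (size_t i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
        printf("tcp/%u %s\n", (unsigned)ports[i],
               tcp_port_open(target, ports[i]) ? "open" : "closed");

    /* UDP 17185: if the WDB debug agent answers here, the device can typically
     * be read from, written to, and rebooted with no authentication at all. */
    return 0;
}
```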

I’m pretty sure the only reason we haven’t heard about this yet is that even the security researchers who’ve woken up to PLCs and SCADA systems in the past two years still have no idea that Safety PLCs exist. I don’t think it’ll be too long before they find out.

Designing a device like this to be intrinsically secure against hacking seems almost impossible to me. Whatever software module is responsible for receiving the user-configurable safety program and storing it in the device’s persistent memory is necessarily network-facing. Any vulnerability in that communication module could be exploited, giving an attacker the ability to bypass all the security on the device and write whatever user-configurable safety program they want.
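As an illustration of the class of bug I mean (hypothetical code, not taken from any real device), here is the time-honoured way a network-facing download handler goes wrong: trusting a length field supplied by the peer when copying the incoming safety program into a fixed buffer.

```c
/* A deliberately simplified, hypothetical illustration of the class of flaw
 * described above: the network-facing module that receives the safety program
 * trusts a length field supplied by the peer. One such bug in the
 * communication firmware is enough to corrupt memory, bypass the password
 * logic, and ultimately let an attacker write arbitrary safety logic. */

#include <stdint.h>
#include <string.h>

#define MAX_LOGIC_SIZE 4096

static uint8_t logic_staging[MAX_LOGIC_SIZE];

/* Vulnerable version: copies however many bytes the frame claims to contain. */
void receive_logic_block_vulnerable(const uint8_t *payload, uint32_t claimed_len)
{
    /* claimed_len comes straight off the wire; a hostile peer can make it
     * larger than the staging buffer and overwrite adjacent firmware state. */
    memcpy(logic_staging, payload, claimed_len);
}

/* Hardened version: validate the attacker-controlled length before copying. */
int receive_logic_block_checked(const uint8_t *payload, uint32_t claimed_len,
                                uint32_t bytes_received)
{
    if (claimed_len > MAX_LOGIC_SIZE || claimed_len > bytes_received)
        return -1;                 /* reject the frame rather than trust it */
    memcpy(logic_staging, payload, claimed_len);
    return 0;
}
```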

So if this document says we have to design our Safety Critical systems to withstand a Stuxnet-like attack, and all the Safety PLCs we could use are likely vulnerable via communication module exploits, and even firewalls and air gaps don’t seem to defend against this kind of attack (just ask the employees of the Natanz facility in Iran), then where does that leave us? Is a P.Eng. supposed to just throw up their hands and say, “Sorry, it can’t be done”? Not likely.
