Michal Zalewski, a security researcher at Google, recently wrote a guest editorial for ZDNet entitled “Security Engineering: Broken Promises”. The article lays out a series of issues with the security industry, specifically looking at an inability to provide any suitable frameworks for software assurance or code security.
> We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and spare for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else’s code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.
There is nothing fundamentally wrong with this sentiment. I would tend to agree, although I would also argue that we do have success stories to share; we simply choose not to. It doesn’t make for good copy to reflect on the security incidents that didn’t happen because certain folks did the right things. In fact, it can be quite difficult to build an argument for a direct correlation between the actions taken, the frameworks that supported those actions, and the absence of security incidents against a given system.
To be sure, there are many non-events: incidents that never occur because SDL processes are adhered to, and software that is not exploited because developers employed methodologies such as threat modeling and tools such as fuzzers to identify exposures before they could be exploited in the wild.
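To make the fuzzing idea concrete, here is a minimal sketch of the technique in Python. The parser and its input format are hypothetical stand-ins, and this is a toy random fuzzer, not a production tool like the coverage-guided fuzzers used in real SDL practice; it simply hammers a target function with random strings and records any input that raises an unhandled exception.

```python
import random
import string

def parse_record(data: str) -> tuple[str, int]:
    """Hypothetical target: expects input of the form 'name:count'."""
    name, _, count = data.partition(":")
    return name, int(count)  # raises ValueError on malformed input

def fuzz(target, iterations: int = 1000, seed: int = 0) -> list[str]:
    """Feed random printable strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    crashers = []
    for _ in range(iterations):
        length = rng.randint(0, 20)
        sample = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(sample)
        except Exception:
            crashers.append(sample)
    return crashers

crashers = fuzz(parse_record)
print(f"{len(crashers)} of 1000 random inputs raised an exception")
```

Even a crude loop like this surfaces the kind of exposure (unvalidated input reaching `int()`) that is cheap to fix before release and expensive to discover afterward, which is precisely the non-event argument being made above.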
And even when software, riddled as it may be with vulnerabilities and exposures, fails to lead to an exploit because operational security professionals have implemented the people, processes, and controls to limit, negate, or entirely survive an incident, we look the other way. We don’t talk about these successes; we don’t know how. As Michal further explains:
> In the end, regardless of the number of elegant, competing models introduced, all attempts to understand and evaluate the security of real-world software using algorithmic foundations seem to be bound to fail. This leaves developers and security experts with no method to make authoritative statements about the quality of produced code. So, what are we left with?
We are left with the pursuit of the unattainable. We are left to grapple with inelegant imperfection. We are left incomplete, since we cannot measure the immeasurable, which leads one to believe that what we cannot measure is terribly flawed. What we must learn to accept is that security, as it pertains to both the development of software and its operational use, is ultimately more survivable than we like to believe.
We must also learn to accept that an inability to measure or even understand something doesn’t mean it isn’t real. As Nietzsche stated:
> The irrationality of a thing is no argument against its existence, rather a condition of it.
We are blessed with an awesome number of highly qualified, extremely intelligent and talented individuals moving in the right direction. It is not easy, nor will improvements happen quickly or radically. We can have faith, though, that incremental improvements across the operational value chain will evolve security postures to where they need to be at any given moment in time.
- What is will be until it is no longer
- There is no way to achieve perfection, nor can we enjoy (or even define) total and complete security
- That which is fragile, such as the interconnected nature of the Internet, will tend to endure longer than that which appears hardened, such as GEMSOS, which has all but faded into obscurity
> The failure of GEMSOS does not mean that there is not a need for a GEMSOS. High assurance systems must still address issues of complexity and cost-effectiveness, besides just being secure, in order to become disruptive. With past attempts of high assurance technologies, the concepts behind them were ok. The implementations were not.
I didn’t say that GEMSOS failed, nor did I say there wasn’t a need. I only highlighted that it has become obscure. We DO need secure systems, but we also want shiny lights and flashy things; we need the freedom to compute and the power to do so. Sometimes this is at odds with secure, properly segmented system design, and sometimes we find balance. But none of this means such systems aren’t needed, or that attempts to implement them are failures.