Yesterday, the Intercept published an article on a project led by Peiter and Sarah Zatko, the founders of the Cyber Independent Testing Lab. The lab has developed a testing protocol to evaluate the potential for security issues within software. Much of the excitement (or fear) about this project stems from its founders: Peiter Zatko (aka Mudge) and Sarah have a track record of great work dating back to the 1990s. The entire piece is worth a read, and it highlights some common issues that affect software development and our understanding of security.
In very general terms, the first phase of their analysis examines the potential for vulnerabilities in the code.
During this sort of examination, known as “static analysis” because it involves looking at code without executing it, the lab is not looking for specific vulnerabilities, but rather for signs that developers employed defensive coding methods.
In other words, the analysis looks for indications of the habits and practices of developers who understand secure development practice. This is roughly comparable to a code smell: not necessarily a problem in itself, but often an indicator of where issues might exist.
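To make the idea concrete, here is a toy sketch of this kind of check. It is a hypothetical, drastically simplified illustration (the lab's actual methodology analyzes compiled binaries and is far more sophisticated): a scan that flags calls to classic unbounded libc string functions, the sort of pattern a developer with defensive habits avoids.

```python
import re

# Hypothetical "smell" list: classic libc calls with no bounds checking.
RISKY_CALLS = {"gets", "strcpy", "strcat", "sprintf"}

def risky_call_sites(c_source: str):
    """Return (line_number, function) pairs for risky-looking calls."""
    hits = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for func in RISKY_CALLS:
            # \b prevents matching bounded cousins like snprintf.
            if re.search(rf"\b{func}\s*\(", line):
                hits.append((lineno, func))
    return sorted(hits)

sample = """
char buf[16];
strcpy(buf, user_input);            /* unbounded: flagged */
snprintf(buf, sizeof buf, "%s", s); /* bounded: not flagged */
"""
print(risky_call_sites(sample))  # → [(3, 'strcpy')]
```

A real analyzer reasons about data flow and binary hardening features rather than grepping source, but the underlying logic is the same: the presence of these patterns doesn't prove a vulnerability, it signals where one is likely.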
Modern compilers of Linux and OS X not only add protective features, they automatically swap out bad functions in code with safer equivalent ones when available. Yet some companies still use old compilers that lack security features.
We'll return to this point later in this post, but it cannot be emphasized enough: organizations creating software need to be using a current toolkit. It takes time to update this infrastructure - and to the suits in an organization, this often feels like lost time - but organizations shortchange it at their peril.
The lab is also looking at the number of external software libraries a program calls on and the processes it uses to call them. Such libraries make life more convenient for programmers, because they allow them to repurpose useful functions written by other coders, but they also increase the amount of potentially vulnerable code, increasing what security experts refer to as the "attack surface."
As the article highlights, third party libraries are not necessarily an issue, but any issue in a third party library can potentially be an issue within apps that use the library. To use a metaphor that is poetically but not technically accurate: let's say you're going out with your friends, and one of your friends says, "Hey - can I bring my new boyfriend Johnny?" And you say, sure, why not. But then, later that night, Johnny turns out to be a real jackass - drinking too much, not tipping, talking all the time, laughing at his own jokes.
Third-party libraries are like Johnny - not necessarily a problem, but when they are, they can be very unpleasant to deal with.
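The attack-surface point is easy to demonstrate: even one dependency typically drags in many more, and every one of them is code you now implicitly trust. The sketch below uses Python's standard-library modulefinder (standing in here for the binary analysis the lab actually performs) to count the modules a trivial one-import program pulls in.

```python
import os
import tempfile
from modulefinder import ModuleFinder

# A trivial "application" with a single, innocuous-looking dependency.
app = 'import json\nprint(json.dumps({"ok": True}))\n'
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(app)
    path = f.name

# Statically trace every module the script would load.
finder = ModuleFinder()
finder.run_script(path)
os.unlink(path)

deps = sorted(finder.modules)
print(f"one import pulled in {len(deps)} modules, e.g. {deps[:5]}")
```

One `import json` brings along its submodules and their dependencies in turn - each a potential Johnny. The same dynamic holds, at much larger scale, for the linked libraries in compiled software.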
The people running the evaluation lab are also clear on what their tests show, and what they don't show.
Software vendors will no doubt object to the methods they’re using to score their code, arguing that the use of risky libraries and old compilers doesn’t mean the vendors’ programs have actual vulnerabilities. But Sarah disagrees.
"If they get a really good score, we’re not saying there are no vulnerabilities," says Sarah. But if they get a really low score, "we can guarantee that ... they’re doing so many things wrong that there are vulnerabilities [in their code]."
The potential for risk articulated here runs counter to what people want, and it's one of the reasons that many people balk at reading security analyses. People want an absolute; they want a guarantee - but vulnerabilities can exist anywhere. Secure coding practices are not new, and they are not arcane knowledge - but up until this point, many vendors have not made securing their work a priority.
However, the lede is thoroughly buried in this piece. We get this gem near the end.
They’ve examined about 12,000 programs so far and plan to release their first reports in early 2017. They also plan to release information about their methodology and are willing to share the algorithms they use for their predictive fuzzing analysis if someone wants them.
We should have no illusions about the contents of this data set. We would likely see a small number of companies doing very well, a large number of companies in a very crowded middle, and a number of companies (looking at you, legacy enterprise software vendors who insist on maintaining backwards compatibility) with pretty abysmal numbers. This is reality. Bad decisions get made in software development all the time, often for reasons that feel defensible - even logical - at the time. But over time, if this technical debt never gets paid down, these problems fester and grow.
To all the marketing people who ignored developer input in order to meet a PR-driven deadline: this is on you.
To all the salespeople who promised features and fabricated a timeline without consulting your development team: this is on you.
To all the CxOs who supported marketing and sales over the best advice of your dev team in order to "hit numbers": this is on you.
To all the developers who never said no for the right reasons, and just put your head down and delivered: this is on you as well.
We all have a level of responsibility here. But now, we need to fix it.
The piece closes with a quotation from Mudge that is arguably the subtext for many of the ongoing conversations about security:
"We’ve been begging people to give a shit about security for a decade ...[But] there’s very little incentive if they’ve already got a product to change a product."
I'm saying this partly tongue-in-cheek, but I'd love to see 2017 be the year of the feature freeze, where we all agree to get our acts together. Companies could give their development teams the time to pay down technical debt. People could get their privacy policies in order and up to date. Organizations could take some time to figure out business plans that aren't predicated on selling data. Consumers could get weaned off the artificial expectation that online services built through the time and talent of countless people should be free.
We can have a tech infrastructure that isn't broken. We can make better choices. If this project is part of the work that pushes us there, then full speed ahead.