Saturday, March 17, 2018

We Need To Build Security In

The Old Priorities

For a long time, the basic priorities for software were:
  • Functional: does it work right?
  • Performance: is it fast enough?

The first is obvious. If software doesn't work right, it isn't going to be useful. It has to be a correct design, correctly implemented.

Once the first has been achieved, the second becomes critically important. As systems scale up, performance has become an enormous driver. Do everything you can to make it fast, as long as it still works right (and there's an argument to be made for putting performance first, then making it work right subject to maintaining performance).

This is often expressed as "first make it work, then make it fast".

Failure to achieve either of these can mean failure in the marketplace.

In a few cases, security was also a requirement. But often, it wasn't. Or it was a distant third priority or an afterthought, always on the losing side of compromises for the first two.

The result is that we've sacrificed security on the three-legged altar of time to market, convenience, and performance. We've built everything with the assumption that everyone out there is well-behaved, using things only as they were intended to be used.

I've got some bad news for you, sunshine: there are bad people out there. They're all too happy to slip into our insecure systems and have their way. These are people who actively search out ways to abuse, confuse, and misuse systems for their own purposes.

They have a variety of motivations and goals, with a variety of resources at their disposal, from the lone script kiddie just wanting to impress his friends to criminals, terrorists, and state-sponsored cyberespionage and cyberwarfare groups.

The potential consequences of these attacks range from minor annoyances to financial disaster to service outages to outright physical destruction, and in scope from personal to national. They can ruin lives.

We see real cases of this daily in the news, in data breaches; botnet recruiting (taking over legitimate machines for use as bots); DDOS attacks; identity theft; account takeovers; social media fake news, fake accounts, and fake followers; ATM and POS skimming, siphoning, and jackpotting; all manner of large and small financial attacks and scams; ransomware of critical data systems; industrial espionage; and other attacks and disruptions.

Just Google any of those terms if you want some depressing reading. Every new technology just seems to bring a whole new raft of attack opportunities.

At the risk of sounding overly alarmist, we've built an incredibly fragile house of cards, completely permeable to bad actors. The Big Bad Wolf doesn't even need to huff or puff. All he has to do is inhale to bring it down.

It's equivalent to doing all your banking by storing your money in grocery bags outside your front door.

And yet our lives increasingly depend on these systems. We've made ourselves completely vulnerable. We've left ourselves completely exposed.

Security Needs To Be The New Top Priority

Especially with the adoption of ubiquitous network connectivity over the past decade, that needs to change. Security needs to be the primary requirement, and the other two need to compromise to support it:
  • Security: is it secure?
  • Functional: does it work right?
  • Performance: is it fast enough?

Now correctness and performance need to be subject to security. Does it work right, and still maintain security? Do everything you can to make it fast, as long as it's still secure and still works right.

First make it secure, then make it work, then make it fast. And make sure it stays secure.

That means design and implementation decisions need to be made in ways that favor security. There are choices and ways of doing things that lead to insecure software. Make the choices that lead to secure software.

Security has been a wholly overlooked critical area of software engineering. In retrospect, that's irresponsible.

In other types of engineering, safety is the analogous property. In automobile or aircraft design, safety is a critical area. Imagine what would happen to a car company that ignored safety.

We need to add a fourth leg to that altar: security, time to market, convenience, and performance.

Build Security In

Here I'm adopting Gary McGraw's mantra: build security in. That means you address security first, then achieve proper functioning and performance while maintaining it.

I'll temper that with Bruce Schneier's key point: security is a trade-off. That means there's no such thing as absolute security, and you get security by giving something up.

I look at the combination of the two like this: we must focus on security from the start, but we have to realize that it can only get us so far within the context of the larger environment, and we're going to have to give up something in functional convenience and performance.

I'm not a security expert. I'm a student of security, so that I can become a practitioner. That's what we all need to do, become students of security so that we can become practitioners, looking to experts like McGraw and Schneier to guide us in the appropriate practices.

Real security engineering requires you to think from both sides of the fence. You need to think like a good guy defender ("white hat") and a bad guy attacker ("black hat").

On the white hat side, you need to know the proper security practices to follow. On the black hat side, you need to know what attacks will be arrayed against you; otherwise you end up creating the software version of the Maginot Line, an ineffective defense against the actual attack.

Security isn't something you bolt on after the fact. There is no "security layer". It has to be built in from the beginning. It has to be interwoven throughout, part of the raw fabric.

And just because one part is secure doesn't mean that all the rest is safe. It's all too easy to undermine the security by not maintaining vigilance system-wide, throughout all uses of the system and the data it produces, in all environments and contexts, over its entire life.

Security is easy to get wrong and hard to get right, and easy to get wrong again once you get it right. There are a lot of details. Understanding those details and how they all fit together takes effort. That's why you have to study the literature and learn how to apply the techniques properly.

Some of the recommendations may seem arbitrary. For instance, a recommendation not to use a particular library function, because it's been the source of many security vulnerabilities in the past. You can say, well, I'm going to use it correctly in my code so that doesn't happen.

But what about a year from now, when you've moved on to another project, or you've left the company, and someone else comes in and has to make some changes to add a new feature? Or they lift your code out to a different context. They may not notice the potential for a problem and end up making your formerly safe code unsafe.

Borrowing a line from the top 10 security design flaws document in the reading list below, designing for security should take into account that code typically evolves over time, resulting in the risk that gaps in security are introduced in later stages of the software life-cycle.

What Causes Vulnerabilities?

Vulnerabilities are problems that can be exploited by attackers. They are the unlocked doors that allow entry. Not all software problems result in security vulnerabilities. But software problems are a rich ground for finding vulnerabilities. What causes them?

We can look at software correctness in two dimensions, design and implementation. Each can be either correct or incorrect. Adopting McGraw's terminology, "flaws" are problems in design. "Bugs" are problems in implementation.

Note that I'm lumping requirements in with design, so incorrect requirements implies incorrect design. You could treat requirements as a third independent dimension that can be correct or incorrect, but the results are really the same for this discussion.

This gives us four quadrants into which software may fall:
  • Correct design, correct implementation: problem-free.
  • Correct design, incorrect implementation: buggy.
  • Incorrect design, correct implementation: flawed.
  • Incorrect design, incorrect implementation: flawed and buggy.

Software is problem-free in only one quadrant: correct design (free of flaws), and correct implementation of that design (free of bugs).

It's very important to realize that in the two quadrants where only one dimension is correct, you are still doomed to have software problems. You can have a correct design, but an incorrect implementation. Or you can have a perfect, bug-free implementation, but of an incorrect design.

If you treat requirements as a third dimension, that produces a cube of eight octants, and the discussion generalizes to the same thing. Software is problem-free in only one octant: correct requirements, correct design to meet those requirements, and correct implementation of that design. If the requirements are incorrect, no matter how perfect the design and implementation, the software has problems.

So for simplicity, we can collapse it down to the two-dimensional discussion. Just be aware that if you get the requirements wrong, the design is by definition incorrect (since it is designed for the wrong thing, no matter how perfectly done).

What all this means is that there are many opportunities to create a problem, and a potential vulnerability.

That's part of what I mean when I say security is hard to get right, and easy to get wrong. The other part is that there are lots of subtle details, and getting any single one wrong risks undermining all the rest.

That's what real engineering is about, dealing with all that, being rigorous and thorough and getting it all right top to bottom, beginning to end. That's what it means to be a responsible professional. Yeah, it's complicated. Yeah, it's hard work.

Are the odds really as bad as just a 1 in 4 chance of getting it right, or even 1 in 8? That may be abusing probability and statistics to overstate the situation, but it does show that the odds are against you.

And if you aren't testing for security vulnerabilities, you can bet that those bad people are. They're out there actively searching for your systems and probing them for vulnerabilities. They will find them. Then all you've done is added to the problem.

The tools for evaluating and implementing security are useful to both defenders and attackers. Regardless of how you use those tools to improve security (if at all), adversaries are using them to pick your systems apart.

That's why you need to learn how to use them, and why you have to put on the black hat and think that way. Where attackers will use the results to attack your system, you can use those same results to feed back into the development process to improve the design and implementation of the system from a security standpoint.

Next Steps

The first step is awareness. That's what this post is about. The second step is learning. The third step is putting the knowledge into practice. The fourth step is maintaining continuous vigilance.

It starts with us, the developers. It also ends with us, because no one else is going to do it.

Reading List

This is the reading list I've accumulated for the second step, learning, that I'm working my way through. There's some overlap here with my reading list from Testing Is How You Avoid Looking Stupid. Once again, the market in used books helps keep the cost down.

Interestingly, most of these are over 10 years old. Yet they remain as timely as ever. The same vulnerabilities still show up repeatedly. But their potential impact on our real daily lives has grown significantly. These are no longer abstract problems.

There are two nice starting points. They help set the background necessary to appreciate the others:
Here's the remainder of the list, in no particular order, which will no doubt lead to many others:
For a little perspective on the nature of vulnerabilities, see C.A.R. "Tony" Hoare's presentation on null references, what he calls his "billion dollar mistake" (though perhaps karma and cost balance out, since he also invented the quicksort algorithm, among many other brilliant contributions to computer science).

In addition to McGraw's and Schneier's websites, several good sources for security-related news and information:
  • Risks Digest, Forum on Risks to the Public in Computers and Related Systems, ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator. This is where it all starts for me, fascinating reading (in the way watching a train wreck is fascinating).
  • CMU SEI Cybersecurity, Carnegie Mellon University Software Engineering Institute cybersecurity main page.
  • CMU SEI CERT Division, CMU SEI Computer Emergency Response Team.
  • Krebs On Security, Brian Krebs.
  • Threatpost.
  • Open Web Application Security Project (OWASP).
  • Others? Probably, but also be aware that this is a topic ripe for abuse, so CHECK YOUR SOURCES AND CORROBORATE YOUR INFORMATION. 'Nuff said.
