March 8, 2011, 11:10 p.m.
posted by oxy
Item 59: Establish a threat model
So you're convinced that you need to take security in your application seriously. Now what? Reading through the various security resources available (books, articles, papers on the Internet, and such) tends to leave developers with a really depressed outlook on the whole idea: no matter the cryptographic algorithm, no matter the security protocol being discussed, there always seems to be a way through it. Is there nothing we can do that will render the application secure?
First, remember that security is more than just prevention (see Item 58), and that many of the attacks against a security protocol or cryptographic algorithm assume an attacker with an unlimited amount of time to attack a given system. No security code you write will ever defend against an attacker who has that kind of advantage over you, so don't expect it to.
Second, however, we need to accept the sobering reality that building the "perfect security system" is like chasing the Holy Grail: it will soak up a lot of time that ultimately produces little in the way of tangible benefit. As the old saying goes, "Don't spend a million dollars to protect a dime." So we need to do the cold-hearted analysis to figure out precisely what kind of resources we're willing to commit to securing the application. Ideally, it will be a reasonable amount of both time and money, but a large part of "how much we need to spend" frequently comes out of "how much a successful attack would cost us," something we'll get to in just a bit.
Once we've figured out the dollar amount and/or person-hours we're willing to commit to securing the application, we next have to turn our attention to what to secure the application against. An attacker can come at the system in many different ways, and trying to cover them all is likely to be infeasible and to spread our efforts too thin to be of much effect. Instead, we need to figure out precisely which parts of the system have the greatest exposure and liability, and protect those first. There are two ways to go about this.
The first way, the one most developers choose by default, is to simply rely on developer intuition about which parts of the system are the most vulnerable and/or the largest liabilities in the event of a successful attack. As Item 10 points out, however, most developers' intuition sucks and shouldn't be trusted—not for optimization, and not for security.
The second way is to take a measured, planned approach to all this.
We've learned the hard way over decades of software development projects that to simply turn developers loose without some kind of guiding model, a grand vision for the system as a whole, is a Really Bad Idea. This is where we get the "design-first" methodologies like the Rational Unified Process, because as implementers we really need to be able to see the big picture in order to get all the details right the first time—otherwise, we risk huge costs in correcting those details later.
(Note that most agile methodology proponents, who are frequently misquoted as being anti-design, will in fact suggest that some kind of design is still necessary—for example, in Extreme Programming the design comes during the task implementation, and particularly when refactoring is necessary. The Extreme Programming crowds just argue against "design for tomorrow" and instead stress "design for today.")
Similarly, with respect to security, we need some kind of big picture of the security landscape vis-à-vis our application. We need to know where the largest vulnerabilities are, and what the damage would be if those vulnerabilities were exposed. This information in turn helps us prioritize which vulnerabilities we need to be concerned with and which ones we can safely ignore. This assessment is commonly called a threat model, and as its name implies, it serves much the same purpose as an object design model does.
Numerous security authors have suggested methodologies for developing threat models. Schneier, for example, has "attack trees" that hierarchically map the vulnerabilities into a large, almost UML-like diagram that can be expanded or contracted as necessary [Schneier01, 318–333]. It tends to give a good big-picture view of the various ways into the system, and the hierarchical arrangement allows for some easier estimation of whole categories of attacks.
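To make the attack-tree idea concrete, here is a minimal sketch in Python. The structure and the cost-rollup rule (OR nodes take the cheapest child, AND nodes sum their children) follow Schneier's general approach, but the node names, costs, and API are all illustrative assumptions, not anything from [Schneier01]:

```python
# A toy attack tree. Leaves carry an estimated cost to the attacker; interior
# nodes combine children with an OR gate (any child suffices) or an AND gate
# (all children are required). Rolling costs up the tree gives a rough
# "cheapest cost to attacker" for the root goal -- one way to compare whole
# categories of attacks at a glance. All figures here are made up.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    name: str
    cost: float = 0.0          # attacker's cost, for leaf attacks only
    gate: str = "OR"           # "OR": any child works; "AND": all are needed
    children: List["AttackNode"] = field(default_factory=list)

    def attacker_cost(self) -> float:
        """Cheapest cost for the attacker to achieve this node's goal."""
        if not self.children:
            return self.cost
        costs = [child.attacker_cost() for child in self.children]
        return min(costs) if self.gate == "OR" else sum(costs)

# Hypothetical goal: deface the Web site.
root = AttackNode("Deface Web site", gate="OR", children=[
    AttackNode("Exploit unpatched server flaw", cost=500),
    AttackNode("Steal admin credentials", gate="AND", children=[
        AttackNode("Phish an administrator", cost=200),
        AttackNode("Defeat the one-time-password token", cost=5000),
    ]),
])

print(root.attacker_cost())  # cheapest route is the unpatched flaw: 500.0
```

The payoff of the hierarchy is exactly what the text describes: contracting a subtree to its rolled-up cost lets you estimate an entire category of attacks without re-examining every leaf.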
Other writers have suggested a simpler model, in which developers sit around a conference table brainstorming all the different ways an attacker could attack the system. Each item on the brainstormed list is then given a threat value, the product of two things: the financial damage that would occur if the threat were successfully carried out, and the probability of the threat actually occurring. (This model is also used in risk assessment studies; since security is largely about managing risk, in this case the risk of an intrusion, many of the same principles apply.) For example, suppose we've determined that an attacker gaining root privileges on the Web server and defacing the site would cost two person-hours of work to undo (we keep good backups) and possibly $10,000 in lost revenue. Assuming good system administrators make $60K/year, or roughly $30/hour, the liability of that vulnerability is $10,000 plus 2 hours at $30/hr, or $10,060 total. We estimate the chance of this vulnerability actually materializing at about 10%, since we are careful to run the Web server in a least-privileged account (thus forcing the attacker to engage in some kind of luring attack elsewhere to get those root privileges; see Item 60). The total security risk here is therefore roughly $1,006, which is the most we should spend closing this hole; anything more isn't worth it.
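The arithmetic above is simple enough to sketch directly. This reproduces the figures from the example (the function name and the 2,000-hour work year are my assumptions, not part of the original model):

```python
# Expected loss ("threat value") for one brainstormed threat: the damage if
# the attack succeeds, times the probability that it occurs. The result is a
# ceiling on what it's worth spending to close the hole.

def threat_value(damage_dollars: float, probability: float) -> float:
    """Expected loss from one threat; spend no more than this fixing it."""
    return damage_dollars * probability

# $60K/year administrator, assuming a ~2,000-hour work year -> ~$30/hour.
admin_hourly_rate = 60_000 / 2_000

# Defacement example from the text: $10,000 lost revenue plus two
# person-hours of cleanup labor.
damage = 10_000 + 2 * admin_hourly_rate

# We judged the attack to have about a 10% chance of materializing.
risk = threat_value(damage, 0.10)

print(f"damage=${damage:,.0f}, risk=${risk:,.0f}")  # damage=$10,060, risk=$1,006
```

Running the same function over the whole brainstormed list and sorting by the result is precisely the prioritization the threat model exists to give you.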
Not all security assessments are as easy to calculate as this; some damages are extremely hard to estimate, such as the damage resulting from bad publicity if and when the company is hacked and consumers' personal data is suddenly posted on hacker Web sites. Remember that the values assigned here are purely for threat-modeling purposes and serve only as a guide to estimate which vulnerabilities need to be addressed before others. Calculate the damages in abstract "Badness" units, if that makes the estimation any easier.
The point here is that without some kind of threat model, it's impossible to know how much time and energy should go into fixing any one of the innumerable potential security vulnerabilities a given enterprise system opens up. For example, do you trust your system administrators? Do you trust your users? Do you trust anybody and everybody on the corporate network inside the firewall? If you think the answer is yes to all of these questions, allow me to remind you of risks like disgruntled system administrators, socially engineered users, and Java instructors visiting your company's site to teach classes (or any other visitor, for that matter) who plug into the network "just to get mail" but are in fact being paid by your competition to snoop around, eavesdrop on conversations, and generally try to sniff out confidential data. Not that I've ever been approached by a company's competitor to do this, but raising this point in class almost always raises a few eyebrows and makes people start to think, which is, of course, the whole point of suggesting it. Industrial espionage is alive and well, folks.
If you suddenly think the answer is no to some or all of those questions, the next step is to figure out what you intend to do about it, and that's precisely what the threat model tries to tell you: which threats do you actively defend against, and which threats do you just shrug your shoulders at and admit there's not much you can do about? After all, if heavily armed commando teams storm the data center and start physically pulling the hard drives out of the servers, there's not much your software can do to stop them, is there?