Signs of Triviality

Opinions, mostly my own, on the importance of being and other things.
[homepage] [index] [jschauma@netmeister.org] [@jschauma] [RSS]

Know Your Enemy - An Introduction to Threat Modeling

December 5th, 2016

The following is a write-up of my talk "Know Your Enemy - an Introduction to Threat Modeling", given at ConFoo Vancouver 2016 on December 5th, 2016. The slides are available from slideshare or here.


Sooooo... threat modeling. That sounds incredibly boring. Probably has to do with 'risk management'. Le yawn. "Chapter one. In order to manage risk we must first understand risk. How do you spot risk? How do you avoid risk and what makes it so risky?"

If we're honest, this is pretty much how we approach risk management: it's something that's on our resume, but if put to the test, we'd really have to cram and read up on it. How do we evaluate risk?

For the most part, people suck at accurately evaluating risk. For example, those of you who came here by plane most likely had some family member making the recommendation that you "have a safe flight". But who got a message saying to have a safe drive to the airport? Who gets told to have a safe trip to the grocery store?

When we're evaluating risk (consciously or unconsciously), we're influenced by many factors, including whether or not we feel we are in control -- we e.g. perceive a risk to be lower when we are engaging intentionally in the risky behavior -- and whether or not we have an objective with a high reward -- we're not only more likely to accept a higher risk for a higher reward, we actually will rationalize it to be lower than it actually is.

That is, human psychology affects how we understand risk. The more easily you can think of something happening, the more likely you think it is to actually happen. Which shapes how we build defenses.

If we hear in the news that Evil Governments(tm) can break your strongest encryption by using scary-sounding "0-days", then we begin to think that this is our biggest threat, and we start to pour resources into hardening our TLS stack (primarily by hitting SSLLabs.com until we get an "A+" rating, because this trivial, visual reward satisfies our desire to have instant approval: we're now safe!).

But I was going to talk about "Threat Modeling", wasn't I? Instead, I'm rambling on about human factors and psychological biases. But don't worry, there's a reason for all this: everything we do in trying to protect our infrastructure, our services, has to do with human components, with actual people.

One of the most important lessons to keep in mind when you begin to evaluate a threat model is that your attackers are people, too. They have their own motivations and are, just like you in your defense capabilities, restricted by constraints or resources.

This means that attackers also constantly perform a cost-benefit analysis to evaluate whether or not a given attack vector is suitable for them. For example, an attacker may perceive the risk of being discovered to be too high compared to the reward, the value of the assets they're after. That is, they threat model, too -- only their threats are the defense's wins. Two sides to every story.

But what exactly is a threat model?

So here's my talk in a single Venn Diagram. It looks complex, but don't worry, we'll get back to this, and by the end I hope you'll have a much better understanding. For now, notice that we specifically differentiate between threats we can and threats we cannot defend against; threats we care about, and threats we do not care about.

Wait, what? Can't we just go and Secure All The Things?1!

Well, it turns out that when you are operating under the wrong threat model, you may actually make things worse. Best case scenario, you are wasting your sparse resources - worst case scenario, you are actively decreasing the security of your users.

For example, 2FA requires you to enter a code from e.g. your cell phone to log into a service, or the service sends a code to an email address of yours. By and large, this kills a whole range of attack vectors. In fact, using this method of authentication as a single factor is significantly better than using a password.
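As an aside, there's no magic in the app-based variant of these codes. Here's a minimal sketch of an RFC 6238 time-based one-time password (TOTP) generator using only the Python standard library; the secret value and parameters below are illustrative, not anything a particular service mandates:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Sketch of an RFC 6238 time-based one-time password.

    The moving factor is the current Unix time divided into
    30-second windows; both sides derive the same code from
    the shared secret without any message being sent.
    """
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative secret; a real deployment shares a random per-user secret.
print(totp(b"12345678901234567890"))
```

Note that SMS-delivered codes skip all of this and simply trust the phone network to deliver a random value, which is exactly where the interception concern discussed below comes from.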

But some people are still not willing to use 2FA. While the perceived inconvenience may be a factor, there are also some users who do not want to share their cell phone number or alternate email address with a social service.

Now the set of people for whom this is a legitimate concern -- the people who are targeted by governments capable of e.g. intercepting SMS passcodes sent to that phone or of breaking into the service to get the cell phone number -- is rather small. But infosec nerds complaining on Twitter that "2FA with SMS is insecure" only harm overall adoption.

For most people, this kind of informal cost-benefit weighing is the correct way to threat model everyday decisions. However, it'd be nice if we had something a little bit more formal, an approach we can follow beyond passwords and magical amulets.

This formal process of identifying assets, vulnerabilities, threat actors, and defensive capabilities in order to determine a risk score helps us *snoooore*, you're asleep. I know, I know. Let's simplify:

There, much better.

Threat modeling is conceptual; it's a bit abstract and takes some practice, but ultimately leads to a changed frame of mind that helps you cut through a whole bunch of bullshit and focus on what actually matters. And one of the best parts is that you can do this without having to read code, without port scanning services, without calculating checksums or reviewing cipher specs. You can do this offline, just by the power of brainery.

Threat modeling is conceptual. It does not tell you "SSLv2 is bad, mmkay?" -- although SSLv2 is bad, turn that shit off! -- but rather "weak protocols may be an issue if your adversaries have the capabilities and means to break them".

Performing the abstract thought exercise of threat modeling restricts your scope and helps you break your larger system into better understood components, since you can't just have a single threat model for your entire system.

Let us consider an example. Here's a web service. With a bunch of web servers, a database, an authentication system, some message queues. We sprinkle some ZooKeeper on it (why not?), add a build cluster with Continuous Integration and Deployment services, a code repository, and integrate with GitHub, Jira, and Slack.

We probably have some firewalls and network ACLs, holes punched into the firewalls, and oh, by the way, all of this is probably running on somebody else's hardware aka "The Cloud", and despite all that the easiest way in is still somebody phishing your employees.

Are we secure yet?

How on earth do we secure all these things? Of course it's impossible to accomplish absolute security here. So let's try to simplify, to focus. What is it we're trying to protect? What are our assets?

Let us consider your actual end-user data as your primary asset. Where do we store this data? Let's claim this data is stored in our databases. To ensure it's not accessed without authorization, we require some sort of authN/authZ system that allows us to cross the trust boundary.

Trust boundaries show where a level of trust changes to either elevated or lowered levels of trust. Identifying your trust boundaries helps you clarify which components likely have similar attack vectors, and thus similar threat models.

That is, even though we likely have additional components interacting with the database, many of them are likely within these trust boundaries and so will fall in line with our threat model here.

These perimeters we put up, these trust boundaries segregate threat models. Your web servers are not a primary target; they are a stepping stone to the end-user data. Getting onto the web server requires crossing one trust boundary, getting from there to the actual data another.

Focusing only on the data as the primary asset, we can now identify different vulnerabilities in our system to see how somebody could gain access. One method of categorizing vulnerabilities is the STRIDE model, which maps threats to system properties.
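For reference, STRIDE's mapping of threat categories to the security properties they violate is small enough to write down in full; sketched here as a Python lookup table:

```python
# STRIDE: each threat category maps to the security property it violates.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

# Given a threat you've identified, the table tells you which
# property of the system is under attack -- and vice versa.
print(STRIDE["Information disclosure"])
```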

Focusing on one property, we can then identify the different vulnerabilities. As you can tell, some of these shift the focus onto another system component, such as the authN/authZ system. We can then further drill down into the specific vulnerabilities of that system, and so on. In this manner, our threat model can help us both zoom in and out of the various components.

But identifying vulnerabilities alone is not enough; we also need to assign them a threat score to figure out which ones are more important to us to defend against:

One method to calculate threat (or risk) scores for each attack vector and vulnerability is called DREAD, which lets you assign a numeric value to each of the attack's Damage, Reproducibility, Exploitability, Affected Users, and Discoverability. (I tend to add a second 'D': 'Detection - how hard is it for the defender to discover the compromise'.)

You can then assign values to each vulnerability based on these factors and the value of the asset. For example, pick a number between 1 and 10 for each factor, average the DREAD+D factors, and multiply the result by the asset's value. The resulting scores should give you an idea of which areas you should focus on.
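That arithmetic can be sketched in a few lines. The exact weighting scheme is up to you; this version simply averages the six DREAD+D factors and scales by asset value, as described above:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability, detection,
                asset_value):
    """Average the six DREAD+D factors (each 1-10) and
    weight the result by the value of the asset at risk."""
    factors = (damage, reproducibility, exploitability,
               affected_users, discoverability, detection)
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each factor must be between 1 and 10")
    return sum(factors) / len(factors) * asset_value

# Illustrative: a damaging, fairly reproducible vulnerability
# affecting many users of a high-value asset.
print(dread_score(8, 6, 7, 9, 5, 4, asset_value=10))  # -> 65.0
```

The absolute numbers mean little on their own; what matters is that scoring every vulnerability the same way gives you a consistent ranking to prioritize by.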

Enumerating and understanding the vulnerabilities is helpful: it provides clarity and deepens your understanding of the system. But we also need to remain aware that not all vulnerabilities are equally easy to exploit.

The DREAD exercise requires you to understand how difficult vulnerabilities are to exploit and what capabilities or resources might be required. Gaining access to a user's credentials by phishing them, for example, takes a different skill level from breaking (or bribing your way) into a data center to then stage a side-channel attack such as observing the power differential as cryptographic operations are performed.

And this gets to the heart of a proper threat model: you need to understand whom you're protecting your assets from. You need to know your adversaries.

Looking at these rough categories, you will realize that defending against each will require different steps, as each has different objectives and methods of operating.

Yep, here's the obligatory XKCD slide. I think it's great because it shows so succinctly that there are certain threats you can't well protect against. Strong cryptography only works against cryptographic attacks; sufficiently motivated adversaries will find the weakest part of the system -- frequently the human component -- and attack that.

And this is but one example of what we know from observation time and again: attackers will go for the lowest-hanging fruit. Attackers will continue to employ the cheapest, most effective attack until it ceases to be that. Nobody is going to burn a $1M 0-day if they can compromise your infrastructure with a few simple PHP or SQL code injections.

Your adversaries are people, too. (I repeat myself. I'm told repetition is a good way to learn something, so I'll probably repeat this again in a little bit.) They also have something to lose, and they will act rationally within their moral and economic frames.

Attackers will continue to pursue a given angle -- an attack vector -- for as long as the value they derive from it is larger than the cost to them. The more time and effort they have to spend on an exploit, the less likely they are to continue to use it. Raising the cost of an attack is therefore a great way to reduce Wile E. Coyote's ROI.

But it's not the only way. Sometimes your smartest move is not to implement bigger and better defenses, but to lower the value of the asset.

For example, reducing the lifetime of your TLS keys to a few days means that an attacker will need to establish persistent access rather than gaining access once; regularly deleting users' private data or anonymizing it before storing it means that attackers will find less value in compromising the system.
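A minimal sketch of the anonymize-before-storing idea: replace user identifiers with a keyed hash, so a stolen database dump does not by itself reveal who is who. The function name and key handling here are illustrative only; a real deployment needs careful key management:

```python
import hashlib
import hmac

def pseudonymize(user_id: str, key: bytes) -> str:
    """Replace a user identifier with a keyed HMAC-SHA256 digest
    before storing it. Without the key, the stored value cannot
    be linked back to the original identifier, lowering the value
    of the stored data to an attacker."""
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()

# The same (user, key) pair always yields the same token, so the
# service can still join records -- but a database dump alone can't.
print(pseudonymize("alice", b"illustrative-secret-key"))
```

Using a keyed HMAC rather than a plain hash matters: with a bare SHA-256, an attacker could simply hash candidate identifiers and compare.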

Alright, so let's look at this Venn diagram again, building it up as we go along:

First of all, there are threats that we know about. So far so good.

Of those threats, there are some that we care about, and some we do not care about. In our earlier example, we do not care about the physical value of the hardware in question. That is, if an opportunistic robber manages to scurry off with the servers in question, we don't really care about the monetary loss of the hardware.

Next, there are some threats that we can defend against. This is the tricky part: we have to be honest with ourselves and decide, consciously, what our defensive capabilities are.

In our earlier example, we understand that it's possible that a capable adversary may legally force Amazon to install a backdoor in the hypervisor that our VMs are running on, but we implicitly accept this threat as something we cannot defend against given our current capabilities and infrastructure requirements.

Accepting certain threats as outside the realm of what we can defend against does not mean we simply throw our hands in the air and give up, but rather that we are being realistic and decide not to waste time and money on efforts that would ultimately be futile.

So with all that knowledge, we finally identify a set of threats we decide to defend against. Necessarily, that is a subset of all the threats we know about, lying in the intersection of the threats we care about and those we can defend against. Unfortunately, we also try to defend against threats that we can't realistically defeat -- which is where we waste time and money.

As you can tell, a proper threat model can become complex. The decisions you make are not easy. Knowing which threats you cannot defend against, even though you'd like to, or which threats you can defend against, but decide not to (for example because the cost of doing so is offset by the potential loss in usability), requires an in-depth understanding of the threat landscape and a realistic, not idealistic, assessment of your own capabilities.

To summarize our threat modeling process:

The most difficult part in threat modeling is retaining your focus. As we've seen in our examples, you can zoom in and out on various components, and while you frequently outline your threat model in abstract terms, you may need to go into specifics as you translate it into concrete recommendations.

When you try to perform these assessments, here are some good rules to keep in mind:

Your adversaries are people, too. They will make rational decisions as they pursue their goals. Understand their goals and objectives, and you can better focus your defenses. Know your enemy.

(I said I was going to repeat myself, didn't I?)

You can't secure all the things. You need to prioritize. Focus on what matters.

Is the threat you perceive a realistic problem for you? Is it one you can defend against?

This is a critical question. If answered honestly, it may drastically shift your focus: does a new vulnerability allow any script kiddie to execute code on your servers as root? Sure, fix that quickly.

On the other hand, does this vulnerability require privileged insider capabilities before it can be successfully executed? Does it require nation state compute capabilities or the deployment of a complicated backdoor? Are you in a position to defend against that?

If not, patching this 0-day is not likely to make a difference -- focus instead on e.g. insider threats and detection.

Know your enemy.

Attackers will go for the lowest hanging fruit.

There are two ways to make an attack less profitable for the attacker: raise the cost of the attack or lower the value of the asset. At times, the latter can be a lot easier and more efficient.

Know your enemy.

Know your enemy. Understand their motives.
Know your vulnerabilities. Rank your threats.
Know your defensive capabilities. Be realistic.

Prioritize what matters.

Sometimes you don't have to outrun the bear. Sometimes you do. Know when.


Related content and additional links:


[Crazy Like A Fox - Infosec Ideas That Just Might Work] [Index] [OpSec 101 - A Choose Your Own Adventure for Devs, Ops, and other Humans]