Signs of Triviality

Opinions, mostly my own, on the importance of being and other things.

The Value of a Bug Bounty Program

July 11th, 2016

Bug Bounty programs are all the rage. Instinctively, I feel that they are well worth your money, but, well aware that my department is a cost center, I've had trouble "doing the math" when focusing purely on the static dollar figures assigned to vulnerabilities.

How much does it cost you to run a successful, large scale bug bounty program? Let's assume an average payout of ~$800 / report, with around 3 reports / day. (Ballpark numbers via Twitter and Yahoo.) That's $876K per year. Add to that the staff needed to triage reports, at an average of $125K salary (not counting the overhead cost to the company). For the sample volume, you'd probably need at least 2.5 full-time employees, i.e. around $310K.

Let's assume that reproduction of a bug and verification of a fix is included in the bug bounty staff's work. Development of a fix, though, is likely to happen on another engineer's time, so add an estimated 4 staff-hours (~$240) per bug:

3 bugs / day * 365 days * $240 = $262,800

So you're looking at around $876K + $310K + $262K = ~$1.5M / year cost for a successful (large scale) program [see also]. Is it worth it?
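As a sanity check, the arithmetic above can be sketched in a few lines of Python. All figures are the ballpark assumptions from the text, not measured data:

```python
# Ballpark annual cost of a large-scale bug bounty program.
REPORTS_PER_DAY = 3
AVG_PAYOUT = 800            # USD per report
TRIAGE_FTES = 2.5
TRIAGE_SALARY = 125_000     # USD per FTE, excluding company overhead
FIX_HOURS = 4               # engineer time to develop a fix per bug
FIX_HOURLY_RATE = 60        # ~$240 per bug / 4 hours

payouts = REPORTS_PER_DAY * 365 * AVG_PAYOUT                  # bounty payouts
triage = TRIAGE_FTES * TRIAGE_SALARY                          # triage staff
fixes = REPORTS_PER_DAY * 365 * FIX_HOURS * FIX_HOURLY_RATE   # fix development

total = payouts + triage + fixes
print(f"${total:,.0f} / year")   # → $1,451,300 / year
```

The exact hourly rate and FTE count are knobs you'd tune to your own org; the point is that payouts are only about 60% of the true cost.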

Look at the price tag you're attaching to a given vulnerability. Is a remote code execution (RCE) vulnerability "worth" $15K? As so often in infosec, the answer is a clear and resounding "it depends": Cleaning up after an attacker exfiltrated all your users' personal data is likely to cost you more than tenfold the $15K you paid the bounty hunter, I'd wager. On the other hand, it seems a bit steep for allowing some bozo to run whoami(1) as nobody on a box that has no access to anything of value.

With this in mind, I see a fallacy lurking: The value of the program is defined as the cost of the program. Your bug bounty cost you $1.5M / year, so you think you got $1.5M "worth" of vulnerabilities.

If this were so, then you should be able to bill each of your departments what they cost the company in bug bounty payouts due to vulnerabilities introduced or not fixed. But this effort to increase their skin in the game -- an important concept! -- will inevitably backfire:

Departments do not carry the cost of the engineers handling the reports; they would share only the total payout for all the vulnerabilities you take in, while the staff-hours spent on fixing the bugs are absorbed into regular salaries. Immediate incident-response costs are by and large carried by yet another team -- the information security staff -- as well.

Suppose you split your approximately 1K vulnerabilities per year across 20 different products: you'd end up billing each around:

1000 vulns/year / 20 products * $800 / vuln = $40K / year / product
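The same back-of-the-envelope split, in Python (the counts are the illustrative figures above):

```python
# Naive chargeback: split total payouts evenly across products.
vulns_per_year = 1000
products = 20
avg_payout = 800   # USD per vulnerability

per_product = vulns_per_year // products * avg_payout
print(f"${per_product:,} / year / product")   # → $40,000 / year / product
```

An even split is of course the worst case for this argument; weighting by actual per-product report counts only makes the bill smaller for most products.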

Depending on the size of your organization and each product's budget, it'd be much easier -- and, on paper, make financial sense -- for them to swallow that cost, rather than to invest dedicated staff and training into detection or developing fixes.

It's too easy to "accept the risk" when all you have is a static dollar figure with competing product interests.

The issue at hand is that we're dealing with possible damage, probable costs, abstract risk. Risk increases with time, and exploitation becomes almost a certainty.

The probability of a vulnerability being discovered and exploited increases with time: given enough time, P(E|V) → 1. This is notably different from regular bugs, where the potential damage done by a bug rarely increases the longer it remains unnoticed. We should "price" our vulnerabilities accordingly.
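As a toy illustration (my own assumption, not a model from the article or the literature), you can picture this as an exponential approach to certainty under a constant daily chance of discovery:

```python
import math

def p_exploit(days, daily_rate=0.01):
    """Toy model: probability that a vulnerability has been discovered
    and exploited after `days`, assuming a constant daily discovery
    rate. Tends to 1 as time grows -- the P(E|V) -> 1 intuition."""
    return 1 - math.exp(-daily_rate * days)

for d in (30, 180, 365, 3650):
    print(f"{d:5d} days: {p_exploit(d):.3f}")
```

The actual rate is unknowable per bug; what matters is the shape of the curve, which is monotonically increasing and asymptotic to 1.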

[Figure: The Honeymoon Effect, from "Familiarity Breeds Contempt" by Matt Blaze et al.]

Now suppose we assign a flexible price tag to our vulnerabilities. Let's say you linearly (exponentially, if you want to ruin your org) increase the "cost" of a vulnerability based on how long it takes to fix it (after discovery, or beyond a certain SLA). Add a lead-up cost based on how long the vulnerability lay dormant (i.e. the period from when it was introduced into the code base until it was discovered).
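A minimal sketch of such a time-weighted price tag; the parameter names and rates are purely illustrative assumptions, not the author's figures:

```python
def vuln_cost(payout, days_to_fix, days_dormant,
              sla_days=30, daily_penalty=50, dormancy_rate=5):
    """Illustrative time-weighted vulnerability cost (USD):
    the bounty payout, plus a linear penalty for every day the fix
    slips past the SLA, plus a lead-up cost for every day the bug
    sat undiscovered in the code base."""
    overdue = max(0, days_to_fix - sla_days)
    return payout + overdue * daily_penalty + days_dormant * dormancy_rate

# A $800 bounty, fixed 15 days past SLA, dormant for 200 days:
print(vuln_cost(800, days_to_fix=45, days_dormant=200))
# 800 + 15*50 + 200*5 = 2550
```

Swapping the linear `overdue * daily_penalty` term for an exponential one is the "ruin your org" variant mentioned above.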

You now have a more realistic "cost" to associate with a vulnerability to determine the value of your Bug Bounty program, as you're comparing your payout to what the vulnerability is worth to you.

Does that help you, though? You're still a cost center. You're still dealing with probabilities and negative eventualities, which humans are experts at dismissing.

After toying around with the numbers for a bit, and after observing how bug bounty reports flow in, are responded to, and get resolved, I've come to the conclusion that the primary value of a bug bounty program is not the "cost" saved on individual vulnerabilities disclosed, but the cumulative, trending data you collect.

Data is your friend. Bug bounty programs are chock-full of data. They give you a neat overview of your attack surface. They tell you exactly what low hanging fruit you're presenting, as well as what classes of vulnerabilities you should be focusing on eliminating altogether.

[Figure: Pareto chart of the top 10 vulnerabilities by cost as a percentage of total payout. This shows you immediately which issues to focus on. Note the absence of "CBC padding oracle in certain TLS stacks" and the like.]
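Producing such a breakdown from your own report data is straightforward. Here is a sketch using made-up report data and vulnerability classes:

```python
from collections import Counter

# Hypothetical report log: (vulnerability class, payout in USD).
reports = [("XSS", 500), ("XSS", 750), ("SQLi", 5000),
           ("CSRF", 300), ("XSS", 400), ("SSRF", 2000),
           ("SQLi", 3000), ("IDOR", 800)]

# Total payout per class.
totals = Counter()
for vclass, payout in reports:
    totals[vclass] += payout

# Pareto view: classes by payout, with cumulative share of the total.
grand_total = sum(totals.values())
cumulative = 0.0
for vclass, amount in totals.most_common():
    cumulative += 100 * amount / grand_total
    print(f"{vclass:5s} ${amount:6,d}  cumulative {cumulative:5.1f}%")
```

Run over a year of real reports, the first two or three rows of this output are your priority list.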

No matter how flawed the "cost" assigned to a given vulnerability may be, the trending data over long time periods will tell you whether you're spending money where it matters.

How many reports do you get about successful code injection attacks? How many about a successful (not theoretical!) attack exploiting weak ciphers in your TLS stack?

Compare and contrast those answers to how much time and effort you spend on eliminating avoidable, silly anti-patterns versus trying to protect against protocol downgrade attacks that require an active MitM to succeed.

True, your bug bounty program does not tell you how many nation states are actively trying to compromise your infrastructure, but is that really within your threat model? And if it is, is your response proportional to the risk?

Attackers will always go for the low hanging fruit, and they will continue to pick this fruit until it's gone. The cheapest successful attack will continue to be used until it stops being either cheap or successful.

Your bug bounty data will give you the areas in which you need to improve most before you can even begin to worry about all the other stuff. It shows how effective you are at eliminating attack vectors and what to prioritize. And that is the biggest value of the program.


