Signs of Triviality

Opinions, mostly my own, on the importance of being and other things.

Behavioral Economics in Information Security

June 5th, 2021

Over the last few months, I've read a few books on behavioral economics, which, unsurprisingly, got me thinking about how that aligns with information security and the incentives we provide and are subject to.

Conclusion: most information security efforts do not make economic sense.

I know, not what you wanted to hear. Not what I want to be the case, either. But here we are, thanks in large part to the overall short-sightedness of the tech industry and, I suppose, America's particular flavor of cut-throat capitalism, where quarterly earnings and short-term gains overrule long-term vision, stability, and any notion of acting in the interest of your users. Let me explain...


[A New Yorker cartoon showing a businessman sitting around a post-apocalyptic fire with children, saying "Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."]

A common mantra in #infosec that tends to make me throw up in my mouth at least a little every time I hear it goes something like this: "We're here to support the business." And what is the prime directive of "the business"? To "increase shareholder value". Ergo, our work must make economic sense, or it will be dismissed.

No, not every company is inherently evil, out to intentionally harm users. But show me a conflict between resolving a revenue decline and protecting user privacy or pushing a security fix, and I'll show you an executive who will "accept the risk" faster than you can say "synergize monetization alignment". That's MBA 101, not rocket science.

Information security is a cost center, and developer productivity is a zero-sum game.

Information security is a cost center. We get in the way. All the protections and controls we ask teams to implement incur a penalty, e.g., in productivity, time to market (🤢), delayed features, or fewer bugs fixed.

Developer productivity is a zero-sum game: time spent on fixing security tickets cannot be used to improve the product. Aligning your product with a new patch cycle, changing the API calls to, say, rotate certs, and debugging the impact of the latest additional endpoint protection agent on your memory- or I/O-sensitive, high-performance service... all of these conflict directly with your other priorities, goals, and deadlines.

Good security organizations try to minimize this trade-off by doing as much of that work as possible for other teams, for example by providing libraries interfacing with the security-critical services they run. But this requires a level of maturity and investment that's difficult to reach, and it can never alleviate the burden on developers completely.

Business incentives tend to conflict with the long-term vision needed to overhaul our fragile infrastructure.

"The business" tends to favor quick wins; meaningful changes in information security tend to require long term vision. The few significant security wins I've managed to snatch from the claws of inertia, change aversion, and the ever revolving doors of middle and senior management were multi-year efforts to fundamentally change how an organization or infrastructure worked, and the biggest battle was always to keep the organization's focus, to justify the project and its significant cost and impact on other teams' productivity over and over.

But long-term vision is hard to sustain if everybody switches jobs every 3 years, which gets us back to incentives, albeit in a roundabout way: real change is hard, and a fundamental shift in security posture often pays off imperceptibly and only after years. At the same time, we regularly ask teams to juggle busy work with a notably low return on investment for them.

Attackers don't have to be right just once, but they do get to choose from a multitude of vectors at each step.

Another old infosec mantra goes something like this: "Defenders have to be right all the time, but attackers have to be right only once." But this isn't quite true. We know that successful attacks need to exploit an at times surprisingly large and complex chain of vulnerabilities. We still need to close each of those avenues, however, as the attacker can, at any point, pivot, explore, and then utilize a different vector.

Now consider the value proposition of many of our common defenses, such as patching libraries or applications after a CVE publication. The cost to implement is usually high, and what do we get in return? A chance at disrupting a possible attack that may then still succeed via a different path. That sure doesn't feel like a big ROI for your engineering hours. Not to mention that this is thankless work, since at no time can you point at these changes you rolled out and say "See, that was what stopped the attack!"
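
To put a (made-up) number on that feeling, here's a minimal back-of-the-envelope sketch in Python; all of the probabilities and dollar amounts below are invented for illustration, not taken from any real incident data:

    # Back-of-the-envelope sketch, with entirely made-up numbers: the certain
    # cost of patching one vulnerability versus the expected loss that patch
    # actually prevents.

    patch_cost = 20_000          # engineering time spent patching (a certain cost)

    p_targeted       = 0.05     # chance this system is seriously attacked this year
    p_vector_used    = 0.20     # chance the attacker relies on *this* vulnerability
    p_no_other_pivot = 0.50     # chance the attacker cannot simply pivot elsewhere
    breach_cost      = 500_000  # assumed cost of a successful breach

    expected_loss_prevented = (p_targeted * p_vector_used
                               * p_no_other_pivot * breach_cost)

    print(f"certain cost of patching: ${patch_cost:,}")
    print(f"expected loss prevented:  ${expected_loss_prevented:,.0f}")
    # With these numbers: $20,000 spent for an expected $2,500 of loss avoided.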

This calculation only makes economic sense if the risk, the possible cost as a result of not making the change, is comparatively large -- and carried by the party responsible for making the changes. But what is that cost? And: who carries it?

Even spectacular data breaches carry proportionally negligible cost for most executives and leadership.

The total cost to recover from a fatal compromise can be huge. In theory, at least, it ought to be so high as to be an existential threat for the company. But, as software developers know all too well, in theory there is no difference between theory and practice, but in practice there is. As a fun exercise, go ahead and make a list of the largest data breaches just in the last 5 years or so, and then try to find out what happened to these companies afterwards.

Did they fold and no longer exist? Did their CEO get canned? Did their stock plummet, never to recover until they were bought by their competitor with better cyber defenses? Rarely. Or are they still thriving, in some cases glibly shrugging off any impact, with executives continuing to receive their obscene bonus packages? Mostly.

To be sure, suffering a major data breach can lead to lost revenue with stocks falling (albeit usually only temporarily) and even long-term costs, such as when the FTC imposes fines and sanctions. (Honestly, it's the only way you get the buy-in from executives you need.) But those again get absorbed into your business-as-usual budget.

So who actually carries the cost? The executive leadership does not appear to get blamed so long as they -- retroactively -- claim to "take the security and privacy of our users seriously". The complexity of attacks and common infosec ennui work against accountability: if all experts agree that "There are two types of companies: those that have been hacked, and those who don't know they have been hacked." (John T. Chambers), and if the press reports breathlessly on the hooded hacker geniuses using some sort of "dark web", then who can possibly defend against such dark spells and unknowable wizardry?

Of course middle management shouldn't take the fall for absent cyber security leadership, nor be sacrificed for not patching the one vulnerability the attackers happened to exploit; had they patched that vulnerability, the attackers would likely have exploited a different one.

Do you want to know who really carries the cost? It's your users.

The economic incentives of not patching in the face of a low probability of significant loss encourage risk-seeking behavior.

This is the trend in "the business": a distinct lack of accountability, and thus no strong incentive to do all the low-level grunt work that's necessary. Realistically, the probability that any individual software upgrade or code fix we ask a team to make will be in the attack path of a fatal breach is low; the cost of not patching is effectively nil to the team or to each individual executive leader. All too often the reward for doing the right thing boils down to a virtual pat on the back, such as ranking highest on the chart showing compliance with security requirements during an all-hands meeting. Hooray.

On the other hand, if you spend your engineers' time and effort on shipping the next feature or on delivering a new product to market, you might get a promotion, some extra stock options, and maybe a new job title that you can use to negotiate an even higher salary at your next gig in 6 months' time, when somebody else will inherit the unpatched mess you left behind.

The decision then boils down to:

  • (a) spend resources for a small chance of averting an unlikely scenario that costs you little and provides no direct gain; best case scenario: nothing happens, but you couldn't do the other things you wanted to; worst case scenario: you spend all your resources, but still get pwned in a different way
  • (b) spend resources on increasing your chances of a meaningful gain

Within Prospect Theory, (a) is a scenario where we are facing a certain loss (the time and effort spent on patching) versus a low probability of a large loss. This encourages risk-seeking behavior, and so favors not patching.

And since we suffer from overconfidence bias in our ability to deliver on our promised features or the new product, we view (b) as a sure thing. Another strike against patching.
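
To make that concrete, here is a toy version of the (a)-versus-(b) calculation from the decision maker's point of view -- a sketch with entirely invented numbers, including an overconfidence-inflated estimate of the odds that the new feature ships on time:

    # Toy model of the (a)-vs-(b) decision above, from the perspective of the
    # person allocating the engineers. All numbers are invented for illustration.

    # Option (a): spend the quarter patching and doing security busy work.
    patching_cost          = 1.0   # normalized cost of the engineering time
    p_breach               = 0.02  # perceived probability of a fatal breach
    p_patch_actually_helps = 0.5   # chance the patched vector is the one attackers use
    breach_cost_to_me      = 3.0   # cost *to the decision maker*; most of the
                                   # damage is externalized to the users anyway

    value_a = -patching_cost - (p_breach * (1 - p_patch_actually_helps) * breach_cost_to_me)

    # Option (b): spend the quarter shipping the new feature instead.
    feature_gain       = 2.0       # bonus, promotion, stock, a shinier job title
    p_success_believed = 0.9       # overconfidence bias: what we think our odds are
                                   # (a realistic estimate might be closer to 0.6)

    value_b = (p_success_believed * feature_gain) - (p_breach * breach_cost_to_me)

    print(f"perceived value of (a), patch the things : {value_a:+.2f}")
    print(f"perceived value of (b), ship the feature : {value_b:+.2f}")
    # With numbers like these, (b) wins comfortably -- and the unpatched mess
    # is somebody else's problem in six months anyway.

None of these numbers are real, of course; the point is simply that as long as the breach cost a decision maker personally faces stays small and the belief in delivery stays inflated, (b) wins on paper.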

The uncertain gains of investing significant effort into long-term, fundamental changes lead to risk-averse behavior favoring the status quo.

"But Jan," you say, "this is missing the big picture. Infosec should instead focus on fundamental changes so that this sort of busy work won't be necessary. Reproducible builds; short-lived, immutable containers; Zero Trust networks; auto-renewing authentication principals; that sort of stuff."

I agree. Those are the projects you should pursue. But those are long, looooong term projects, taking years. Those are expensive projects, and you need to once again ask yourself what the value proposition is. Implementing these controls when you start from scratch isn't easy, but it's doable. Trying to move a 25-year-old infrastructure with warts and scars from when we still used telnet instead of SSH to adopt modern principles requires immense up-front investment and long-term vision.

When it comes to investing in radical change like that, with a high cost but significant uncertainty about the gain, people tend to be risk-averse, instead betting on the safe thing. Carrying over a generation's worth of tech debt is a hidden cost, already absorbed in existing budgets, but it's familiar, comfortable, the status quo.

[Screengrab from Wu-Tang Clan's C.R.E.A.M. video]

So now what? If we're supposed to "support the business", how do we get the business to support us? Can information security be anything but a cost center?

I honestly don't know. I strongly believe that many of the modern information security paradigms noted above do promise immense value outside the realm of security. Reproducible builds increase reliability, lower the risk of deviation from your known good build, and facilitate rapid deployment; short-lived, immutable containers help clarify your requirements and increase the resilience of your infrastructure; Zero Trust networks can be a boon to productivity when, e.g., access controls can be automatically provisioned based on system and service identities. Good ops is good security.

But finding a value-add for every objective we bring is not going to be possible. We'll always have to chase new vulnerabilities, monitor deviation from our baselines, and handle the disappointingly large number of exceptions to our best practices. It's the nature of the game, and the only winning move is not to play.

At the end of the day, our primary responsibility is not to "support the business", but to protect our users. We're lucky when those two align, but should be clear that at times they conflict. The burden of working day in and day out to push that large boulder up the hill only to watch it roll back down again is significant.

Perhaps the recent spread of ransomware will change the cost- and risk calculations. When companies and executives feel the sting of paying actual money right there and then on the spot, I suspect things quickly become quite a bit more real. Cyber insurance companies, previously happily pitching their liability coverage by painting bleak pictures of, you guessed it, Evil Hooded Hax0rs from the Dark Web, suddenly experience a strong "no, not like that" moment.

While ransomware is but one unique threat with distinct defenses, it is possible that in its wake we'll see a reduced overall cyber risk appetite, if only for the near future. It's a silver lining I'm willing to take.

June 5th, 2021

