February 28th, 2016

The following is an approximate transcript of the talk I gave today at BSidesSF 2016. The slides are also available here and on slideshare; there is a video recording as well.

Hey, who here owns a computer? Have you powered it on? And used it? Oh boy, you all suck at #infosec, don't you?

Those were the first three Golden Rules of Computer Security, formulated by Robert Morris. Who here knows who Robert Morris was? No, not the guy who wrote that worm. His father. Ok, good. Who here is Robert Morris? (You never know at #infosec events. Gotta cover my bases.)

Hello, my name is Jan, I'm from the Internet and I'm here to help. You guys ever been on the internet? They have some craaazy stuff there on that internet! For example: Twitter. You know about this? Have you heard about this?

So I went on the Twitter the other day, but it turns out -- and you're not going to believe this -- it turns out that most people express opinions on Twitter that do NOT represent their employer's. I know, right? Wtf? What's next, you're telling me that RTs do not mean endorsements? That's just awful. Who here is on Twitter? Yeah, you're not helping. Neither am I. Vote Quimby.

So instead of rage-quitting Twitter, I figured I'd take a page out of all y'alls book and quickly note that the opinions expressed in this talk do not, in fact, represent those of my employer. They were nice enough to pay for my trip out here, though, so thank you Yahoo. We're hiring. Vote Quimby.

Aaaanyway, so I came all the long way from New York to tell you all something you already know. And I only have 20 minutes, so buckle up, this is going to be a fast ride.

I'm from New York, and I work in #infosec. I'm cynical, but I repeat myself. You and I know that everything is terrible, right? Here are a few of the awful things we encounter every day:

GitHub Dorking. It's a thing.
Developers do the darndest things, and users are just morons who don't know what they want, and the internet of unpatchable shit is going to bring about the collapse of civilization.

Welcome to the Internet of Things, where "smart" means "dumb".

I know, I know, skewering the Internet of Unpatchable Crap is like shooting fish in a barrel.

Or like sweeping Shodan for infosec fail.

This is all awful. And a lot of fun. But that mindset is not helping. And I think that this has consequences.

We've reached a point where, at leading industry conferences, people paid to play pretend for a living tell us how to do our job! That is awful. But it is a reflection of how we operate. Every industry gets the conference it deserves.

All too often, we in infosec pretend that we know better than everybody else. We tell people how to run their services, and we roll our eyes at business decisions that do not align with our view of the world. Just because we're good at understanding risk, or cryptography, or internet security does not make us experts at running a business, at deploying a scalable internet infrastructure, or at writing good code. We're not helping, because we do not live in the same world as those whom we are criticizing.

We do not live in the "real world". We live in a world where Hanlon's Razor seems dull, where attributing causes to malice seems significantly more plausible and more likely than attributing them to stupidity. (There's a reason that at Yahoo the infosec team is called "The Paranoids".)

And Snowden sure enough didn't help. Now all of us tinfoil-hat crack junkies spouting conspiracy theories have been proven right. Seriously, what in May 2013 looked completely bonkers and nuts is now something we all have to take seriously. Thanks a lot, Obama. I mean: Snowden. But that is not the "real world".

Who lives in the real world?
Journalists, who are trying to protect their sources but who to this day simply do not have the right tools to do so.

Your nurses and doctors and hospital administrators live in the real world, a world where they suddenly have to know how to pay ransom in bitcoins to unlock their files.

Your store clerk lives in the real world, who has to ask his supervisor for "the key" to void a transaction on the cash register, and who doesn't care if you sign your credit card receipt with "I do not authorize these charges".

Parents live in the real world. Parents, who are scared shitless by the Big and Scary Internet with all its Social Media and cyberbullying and evil cyber kidnappers and cyber molesters and just generally too much cyber all around.

Lawyers live in the real world. Lawyers, who cannot communicate confidentially with their clients, whose private communications are leaked, and whom we tell to "use PGP".

You know who else lives in the real world? Your librarians, who would like to anonymize and delete all records of all loans, who want to run Tor relays, who organize classes in how to use the internet, but who have no budget, have to use decades-old computers from way back when they were called "Pee Cees" running Windows 95, and who still do an awesome job at it. (Not everything is awful all the time.)

Seriously, librarians are awesome. Don't fuck with librarians.

All those people live in a very different world from ours, and yes, by and large their world -- where it concerns information security -- is also awful, but they don't even know it. And then we come along. What do we do when we think that things are awful? Why, we rub some crypto on it! That always works. Crypto never has any problems. We are not helping when we tell them to use stronger passwords or to check that the website has a green lock.

But I said this was going to be an optimistic infosec talk, didn't I?
Pick any number of examples, and hopefully you will find they likely have a few things in common. Specifically, they illustrate something we have -- or should have! -- known for a long time.

Quick show of hands: who here knows who Adi Shamir is? Good. You don't? How did you get in? Anyway, Adi Shamir -- the S in RSA -- coined Three Laws of Security, included in his Turing Award lecture in 2002:

14 years later, we still have not fully learned these.

In infosec, we all too often are asking for the impossible. We're asking for absolutely secure systems when we should look at ways to increase the cost of attack instead. You -- we! -- need to be willing to make compromises. This is difficult, because in cryptography, a system that's not 100% secure is, by definition, insecure. But in the so-called "real world", cryptography is not the (only) solution, and just raising the cost of an attack is often sufficient. Since you can't fix all the things all the time, you need to prioritize. Knock out the low-hanging fruit, spend your bitcoins where it matters. To halve your vulnerability, you have to double your expenditure. How do you halve your vulnerability? The easiest way, and the biggest bang for your buck, is to reduce your attack surface.

Fred Brooks, just another Turing Award winner. If you haven't, you should read The Mythical Man-Month, which includes the essay "No Silver Bullet". In this essay, Brooks outlines what most software engineers know: all systems have two types of complexity, essential (or inherent) complexity and accidental complexity. Essential complexity cannot be reduced; it is inherent in the problem being solved. (Although sometimes it's worth revisiting whether or not you're solving the right problem to begin with.) As software engineers, our objective is to reduce the accidental complexity, to make everything as simple as possible (but not simpler). As infosec practitioners, we face the same objective, only here complexity equals attack surface.
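Shamir's rule of thumb about expenditure turns into simple back-of-the-envelope arithmetic. The dollar figures below are made up purely for illustration:

```python
def cost_to_reduce(base_cost, halvings):
    """Shamir's rule of thumb: each halving of vulnerability doubles the cost.

    Reducing vulnerability by a factor of 2**halvings therefore
    multiplies your security spend by 2**halvings.
    """
    return base_cost * 2 ** halvings

# Starting from a (hypothetical) $100k budget, each successive
# halving of vulnerability doubles the bill:
for k in range(4):
    print(f"{k} halvings: ${cost_to_reduce(100_000, k):,}")
```

Which is exactly why reducing attack surface is the cheap move: it lowers vulnerability without buying the next, exponentially priced halving.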
Consider a web service using TLS. I hear that's a thing. The essential complexity consists of exposing port 443 to the internet and having to speak TLS. You can't get around that. But you don't have to have 25 different ways of doing that, using 15 different TLS implementations and allowing every god damn cipher ever invented on 13 different web servers in 6 different languages. Instead, ensure that all traffic gets handled via a unified stack, offering only the ciphers you know you need. This reduces your overall (accidental) complexity and, rather significantly, your attack surface. An example:

Here, we graph the use over time of some TLS ciphers we want to get rid of. As we de-prioritize one, the other ones become more popular. Eventually, you reach a point where you can drop the unsafe ciphers altogether. Et voilà:

Getting this deployed everywhere is an exercise in eliminating accidental complexity and attack surface. But it's going to cost you. As with most things, the breakdown follows Pareto's Principle: 80% of effects come from 20% of the causes.

Take your bug bounty intake. You do have a bug bounty, don't you? I know, I know, it's a lot of work, but it also gives you some really useful metrics -- if you're willing to track and analyze the data. Bug bounty programs show you exactly what the low-hanging fruit is. But your data might lie to you. Or rather: your data will not tell you anything if you don't know the right questions to ask.

So here we see a ton of obviously low-hanging fruit, and you might think we should focus on fixing those first. But data is tricky. You have to ask the right questions. Should you just focus on those vulnerabilities which you see exploited the most often? Hopefully you also have some sort of prioritization of the types of vulnerabilities you have. Some vulns are more equal than others, are they not? Let's pretend that those priorities are reflected in your payouts.
Then you get a rather different story:

Behold, a Pareto chart. (Note: I left off the labels so all y'all don't get on my case about getting a sub-average payout for that super leet self-XSS you found.) This graph nicely shows you which types of vulnerabilities you want to focus on to eliminate risk, but it doesn't tell you much about the cost.

Can you tell I like Pareto charts? Looking at these charts, you can start to figure out how to spend your money to reduce your attack surface, and I think that Shamir's initial estimate of having to double your expenditure to halve your vulnerability is actually a bit off. I wouldn't be surprised if that, too, falls more into a Pareto distribution.

What do we in infosec spend most of our time on? Crypto. Breaking it, eye-rolling over it, ridiculing lay people who don't know how to use it, implementing it, and telling other people not to implement it. But take another look at the charts. Note the vulnerabilities.

Note how none of them say "RSA key factored" or "Quantum Computing Attack" or even "downgrade to export cipher". None of them have anything to do with cryptography. We think that we're facing "Nation-State" attackers MitM'ing us -- because that's exciting -- but in infosec our day-to-day routine, and where we probably gain the biggest wins, is in reducing our attack surface by eliminating code injection, deploying secure defaults, and raising the cost of an attack in incremental ways.

I think we spend, proportionally speaking, waaay too much time obsessing about how "APT" or "nation state" attackers can break our crypto. Cryptography is typically bypassed, not penetrated.

One of the world's most capable attackers is -- well, claims to be -- stymied by a 6-digit PIN code. Trust the math.

Here's how the US government does risk management: number of deaths versus budget allocation. The item shooting off the chart is the budget spent on "terrorism". That is awful.
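A Pareto chart like the bounty one is just payouts sorted descending with a running cumulative share. The vulnerability classes and dollar amounts below are entirely invented for illustration, not actual bounty data:

```python
# Hypothetical bug bounty payouts by vulnerability class (made-up numbers).
payouts = {
    "XSS": 180_000,
    "SQLi": 95_000,
    "CSRF": 40_000,
    "SSRF": 30_000,
    "Open redirect": 12_000,
    "Clickjacking": 8_000,
    "Self-XSS": 500,
}

total = sum(payouts.values())
cumulative = 0
# Sort by payout, largest first, and print the cumulative share:
# the head of this table is where your money (and risk) concentrates.
for vuln, paid in sorted(payouts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += paid
    print(f"{vuln:15s} ${paid:>8,d}  cumulative {100 * cumulative / total:5.1f}%")
```

Even with these toy numbers, the top two classes swallow roughly three quarters of the payout: the 80/20 shape shows up almost immediately.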
But we're not much better. We spend so much time worrying about APT breaking our crypto, but in the meantime we get slaughtered by SQLi and XSS. I know, it's exciting to think that we're defending against "APT". But do you know what really is a persistent threat?

16 years after the ILOVEYOU virus, we still haven't figured out how to let people read email without getting the whole company compromised. Virtually every major data breach started with an email very much like this virus from 2000. We still haven't fixed that.

And rubbing some more crypto on it isn't going to help here. You can't solve this problem with cryptography. Cryptography is typically bypassed, not penetrated. Trust the math. Be very, very skeptical of the implementation. But most of all, don't fucking upload your keys to GitHub!

I guess it's kind of weird to come to an infosec conference and end up arguing that we should perhaps deprioritize crypto flaws, and yes, I am aware that we're looking at a new OpenSSL vulnerability about to be announced. No, seriously, on Tuesday there's a new OpenSSL release. And no, I don't mean to suggest that that's not important, that we shouldn't be paying attention, or that we shouldn't quickly address it. But.

There's always going to be a new OpenSSL vulnerability. There's always going to be a new Shellshock, or OpenSSH UseRoaming bug, or a glibc getaddrinfo RCE. The internet is always going to be on fire. I recommend we stop chasing after the latest thing that set the Internet on Fire and focus instead on building the infrastructure to fix our shit at scale.

In order to be able to properly prioritize how you spend your limited resources, you have to understand your threat model. In order to understand your threat model, you actually have to have a threat model. No, really!

Here's a Threat Model Venn Diagram. What's really important is the blue circle. And the green one. And the yellow one. And the red one.
Ok, all the circles are important. It's a threat model.

You can't do this work alone. You need help. Help others help you; guide them. But remember: just because you're good at infosec does not mean that you fully understand all the needs and requirements of other people.

You can't go up to a team and say "Just make it secure, mmkay?". Remember, you very likely do not fully understand their needs and requirements. So try to listen to them as much as you try to teach them.

Measure your impact. Prioritize. Be helpful. Guide others in taking responsibility, teach them about information security principles, but also hear them out. Listen. Understand their priorities.

...and stop with the fucking Sun Tzu quotes.