Signs of Triviality

Opinions, mostly my own, on the importance of being and other things.

OpSec 101 - A Choose Your Own Adventure for Devs, Ops, and other Humans

December 7th, 2016

The following is a write-up of my talk "OpSec 101 - A Choose Your Own Adventure for Devs, Ops, and other Humans", given at ConFoo Vancouver 2016 on December 7th, 2016. The slides are available from slideshare or here.


So... "OPSEC". That sounds serious. All cyber and hackers and spies, right? Only for people who are criminals or who are targeted by governments. Are you one of those? If so, sorry, I can't help you. But the world isn't quite so black and white. There are many different threats that having a little bit of opsec can help defend against. And even though you probably aren't specifically targeted by major governments, I'm still going to try to amp up your overall level of paranoia just a little bit.

Before we embark on our little 'Choose your own adventure' game, let's briefly cover what OPSEC really is (in our context):

Anybody remember John McAfee? I know it feels like an eternity ago. John McAfee, Anti-Virus millionaire, was chilling in Belize, when he found himself wanted for questioning in the murder of his neighbor. Proclaiming his innocence, he fled and was on the run for a while. Nobody knew where he was. Until a reporter for Vice posted this photo accompanying their exclusive interview with the headline "We Are With John McAfee Right Now, Suckers".

The funny thing about photos, though, is that they embed all sorts of metadata, frequently including the GPS coordinates of where the photo was taken.

So... opsec lesson #1: if you're on the run and you really, really, want to give interviews to stroke your ego, better strip the EXIF data from the photos. Or, you know, not give interviews.
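Stripping that metadata is mechanical. Here's a minimal sketch, assuming the widely available exiftool utility ('photo.jpg' is a placeholder filename):

```shell
#!/bin/sh
# Sketch: inspect and strip metadata from a photo before publishing it.
# Assumes 'exiftool' is installed; 'photo.jpg' is a placeholder name.
PHOTO="photo.jpg"

if command -v exiftool >/dev/null 2>&1 && [ -f "${PHOTO}" ]; then
    exiftool -gps:all "${PHOTO}"    # show any embedded GPS coordinates
    exiftool -all= "${PHOTO}"       # strip all metadata (a *_original backup is kept)
else
    echo "need exiftool and ${PHOTO} to do anything useful"
fi
```

Many photo-sharing sites strip EXIF data on upload these days, but you shouldn't bet your freedom on it.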

Turns out, this sort of thing happens all the time. El Chapo's son tweeted a photo of the two having dinner while El Chapo was on the run this year -- with geo-location data attached. If you have location tagging enabled on your account, every time you tweet, you tell the world where you are.

But it's not just criminals (suspected or actual), who have a tendency to let down their OPSEC:

Anybody remember this? The Washington Post published an article about "Travel Sentry" luggage locks, which can be opened by a master key in the possession of the US Transportation Security Administration (TSA).

Now the funny thing is that if you have a reasonably good picture of a key, you can reproduce it. And so the photo of the master keys printed alongside the article very quickly led to... a github repository containing the CAD files you'd need to reproduce these keys.

Ok, so these were just some examples of 'opsec fail'. You can find a million more on your own. But, aside from making us laugh and feel smug, what else is a common theme here?

I think an important aspect is that most of the time this is completely normal human behavior. That is, all of the incidents we deride are outcomes of quick, easy-to-understand decisions. Most of the time, nobody was tricked into revealing information they didn't want to reveal. It kinda just happened.

It happened because the people involved were not fully conscious of what information they might be revealing, and of what others might do with that information.

And this reveals another thing about opsec: it's hard. You depend on other people's opsec, too. And the difficulty of maintaining strong opsec increases with the capabilities of your adversaries (see my earlier talk on knowing your enemy).

But why should we care? We're not on the run, we're not a target, right? Why or how is any of this relevant to us? All we do is come to work, geek around a bit on computers, and Make The World A Better Place(tm), right?

But if you're working at a place that is actually making the world a better place, then you do have adversaries. Or do you work at a totally unimportant place that doesn't matter at all? Exactly.

And if you're in a privileged position -- which, as developers who write the code or sysadmins who have root on many systems, you are -- then you are a target. Ah, healthy paranoia.

Ok, so now that I've sufficiently scared you, let's begin our game. Don't worry: I'm not going to tell you how to go off the grid or enter a witness protection program, but we'll cover aspects such as physical security of your laptop, mobile devices, social media interactions, passwords, etc.

All of these kind of overlap, and much of this is what may be referred to as "common sense", but as the US election has dramatically illustrated, it turns out that that is not actually all that common - at least not south of the border, eh.

Let us begin!

You've just convinced your boss to send you to ConFoo in Vancouver, Canada. Exciting! Canada, nice people, a sane democracy with social values, moose... it's gonna be awesome!

"Just keep up your OPSEC, eh?" your boss says as you run down the hallway. "No prob," you think, and with your laptop in your hand you're on your way out the door.

But WHOOPS, you got so excited, you really have to pee, so you make a quick pit stop at the bathroom. Hmm, the last time you went in there with your laptop, things got a bit weird, so you...

This is one example of considering physical access to your laptop. Now I'm not advocating turning around, locking your laptop in a safe, and almost peeing your pants, but I've seen enough unlocked laptops sitting outside bathrooms to know that this is a feasible attack vector, one that can be exploited within 30 seconds with one of these fun devices.

Alright, so you're done with your bathroom business and you return to your desk. You quickly fire off some emails, tweet about going to ConFoo -- thereby revealing to anybody on the internet what dates you're not going to be at home in case they want to stop by -- when you realize you haven't eaten anything.

You run off to the snack room...

Trick question: most of these are good ideas!

Ever wandered around your office around lunch time? How many workstations are unlocked? And not just unlocked, but logged into your Facebook account, with restricted Google Spreadsheets open, ssh keys loaded in your agent, and generally ready to take over the world.

The best way to defend against this is to have your central configuration management system set workstations and laptops to auto-lock after a certain period of inactivity. Note, however, that clever developers deploy all sorts of counter-measures, such as tools that automatically jiggle the mouse. (Don't do that -- if auto-locking gets in your way, talk to your infosec team and help them find a solution for your use case!)

In addition, I recommend committing to muscle memory a keyboard shortcut to lock your screen whenever you leave your desk, or using hot corners, an Alfred/Quicksilver shortcut, or...
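As a sketch of what such a centrally managed policy might push out (GNOME and macOS settings shown; the 300-second idle delay is an assumed example value, not a universal recommendation):

```shell
#!/bin/sh
# Sketch: enforce screen auto-lock from a script, the way a config-management
# system might. Platform-specific; 300 seconds is an assumed policy value.
IDLE_SECONDS=300

if command -v gsettings >/dev/null 2>&1; then
    # GNOME: blank after idle, and require a password to unlock.
    gsettings set org.gnome.desktop.session idle-delay "${IDLE_SECONDS}" 2>/dev/null \
        || echo "gsettings present, but no desktop session to configure"
    gsettings set org.gnome.desktop.screensaver lock-enabled true 2>/dev/null
elif command -v defaults >/dev/null 2>&1; then
    # macOS: require the password immediately once the screensaver kicks in.
    defaults write com.apple.screensaver askForPassword -int 1
    defaults write com.apple.screensaver askForPasswordDelay -int 0
else
    echo "no known screen-lock settings tool on this system"
fi
```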

You're back at your desk. Just one more thing to wrap up before you're on your way. You just have to unwedge this database on this old box that you haven't touched in a year, and you forgot the old password for the mysql user. Only Bob knows the password, but he's on vacation - blast!

You...

Ah yup, this is what happens when you provide a password on the command line: even if the application munges argv to hide it from 'ps', the full command still lands in your shell history. And most Unix systems leave shell history files and home directories readable by others.

Know yourself a umask, change system defaults, clear out your shell history!
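To make that advice concrete, here's a minimal sketch (the filenames are demo placeholders; the commented-out commands show the real-world equivalents):

```shell
#!/bin/sh
# Sketch: tighten default permissions so casual snooping, like reading a
# coworker's shell history, fails. Filenames are demo placeholders.
umask 077                        # new files readable/writable by owner only

demo=$(mktemp -d)
touch "${demo}/history"
perms=$(ls -l "${demo}/history" | cut -c1-10)
echo "created with permissions: ${perms}"

# For real accounts, the equivalent cleanup would be:
#   chmod 600 ~/.bash_history ~/.mysql_history
#   chmod 700 ~
# And in bash, HISTCONTROL=ignorespace keeps any command that starts with a
# space (e.g. " mysql -u root -p...") out of the history file entirely.

rm -rf "${demo}"
```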

Alright, you saved the day with Bob's carelessly leaked password and you even fixed /etc/profile to restrict permissions in the future. Good for you! You're ready to go - you earned your trip!

At the airport, you're facing "security lines". The worst. Since you're coming from the US, you have to perform these silly rituals to appease the TSA with no meaningful increase in overall security so you:

Another trick question. You can choose any one of these. I just dislike all options.

But one thing you may want to do is shut off your phone, since border agents may confiscate it. (Border security is its own whole thing, but more on mobile devices in a bit.) If the phone is powered off, they can't easily go through it, and the first unlock after boot requires your passcode, so they can't simply press your finger onto it either.

Ok, you got your $7 bottle of water, your $15 tasteless rubber-chicken caesar wrap, and about 30 minutes to kill. You unlock your phone by...

Ok, let's talk about mobile device unlocking. As so often, the question is what type of threat you're trying to protect against (see my earlier talk).

In this context, we're primarily concerned with losing the phone or accidentally leaving it unattended, so a simple 4-digit PIN is probably fine (so long as it's not an obvious code), although defaulting to a 6-digit PIN won't hurt you much, either.

Besides, nobody ever said you have to use your finger to unlock the phone...

Fingerprint unlocking is, by the way, totally reasonable and convenient. You can use multiple fingerprints, you can grant others access to your phone without them having to know your passcode, etc.

The threats you sometimes read about on the internet with regards to fingerprint unlocking are usually that somebody gets your fingerprint and creates a fake mold, that somebody hacks off your finger, or that somebody forces your finger onto the phone (law enforcement officers may be allowed to do one of these).

The other things you may want to do with your phone are to set it to auto-lock (much like your laptop) and to set a PIN for your SIM (if none is set, the carrier default is often 1111). Finally, take a look at what access you allow from the lock screen.

Ok, after playing with the security settings on your phone, you finally board, fold yourself into the middle seat, and, after take-off, get out your laptop and...

Privacy screens are awesome. They work really well, and are particularly great when working in public places or open offices, or when attending conferences. Take a look around you. See anybody with a laptop? Can you see what they're doing, what websites they're browsing, what code they're working on?

This is what this looks like:

With privacy screens, you can still goof off at work and during boring conference talks -- everybody's a winner!

Ok, back on our plane, you put away the laptop. You read a bit on your phone, but then realize -- OH NO! -- that you're almost out of battery. You...

Easy, right? Now the thing about USB is that in most cases your USB cable allows not just charging but also data transfer. That means that anything you plug your phone into might get access to your data or might upload data to your phone.

Enter the USB Condom. There are other brands and names, including e.g. SyncStop. They cost less than $10 and are small enough that you can just bring them with you anywhere. Alternatively, you can purchase charge-only USB cables.

Ok, cool, you finally made it to the hotel. You're checked in, chilling in the hotel lobby bar, ready to check your mail. You flip open your laptop and...

Clearly option [c] is correct here. But you're all a bunch of geeks, so you probably did use your laptop.

Auto-connecting to wifi access points you've previously connected to can be very convenient, but you probably want to make a habit of periodically pruning your known access points. Otherwise, it's trivial for an attacker to put up a wifi access point named 'linksys' or 'attwifi' in the hotel bar and MitM just about any guest.
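As a sketch of that pruning habit on macOS, where it's easily scripted ('en0' is the usual wifi interface; the network name in the comment is just an example):

```shell
#!/bin/sh
# Sketch: review and prune remembered wifi networks on macOS.
# 'en0' is the usual wifi device; adjust for your hardware.
IFACE="en0"

if command -v networksetup >/dev/null 2>&1; then
    # List every access point your laptop will happily auto-join:
    networksetup -listpreferredwirelessnetworks "${IFACE}"
    # Then drop the ones you no longer trust, e.g.:
    #   networksetup -removepreferredwirelessnetwork "${IFACE}" "attwifi"
else
    # Linux with NetworkManager: list and delete saved connections instead.
    echo "not macOS; try: nmcli connection show / nmcli connection delete <name>"
fi
```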

If you do connect to an unsecured, untrusted or unknown network, immediately connect to your VPN. This is not (only) to protect your employer or organization, but also to protect you: by using a VPN, you are ensuring that your DNS lookups are not hijacked, you lower the odds of being MitM'd, and you're not telling everybody around you what sites you're visiting etc.

(Depending on your level of paranoia, however, you may want to note that you're revealing to the operator of the VPN (oftentimes your employer) all the websites you're visiting...)

The next day, you're ready for ConFoo to start. You know you're going to listen to all the keynotes and actually pay attention to the talks, so you...

Leaving your laptop in the hotel safe is of course a pretty good idea. Consider the numeric code: maybe do not select the last four digits of the US social security number that you had to provide to the Canadian government when you applied for your eTA, but otherwise, you're good.

More generally speaking, what is the risk of physical access to your laptop when you have it shut down? Assuming you do not care only about the monetary value of the device, you want to make sure that nobody can get to the data on the laptop.

For this, you should have whole disk encryption enabled, so that without your passphrase you can't get to the data. Unless...

...the Evil Room Service enters, installs a keylogger in your firmware, captures your passphrase the next time you enter it, and then can access the data when you leave the laptop unattended.

As mentioned before, physical access grants an attacker capabilities that are near impossible to defend against. But you can raise the stakes a bit by locking your firmware. Doing so does not impact your day-to-day operations at all, but makes it significantly harder for somebody else to monkey around with your device.
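It's worth verifying that these protections are actually on rather than assuming so. A hedged sketch (platform-specific commands; the macOS firmware check needs admin rights):

```shell
#!/bin/sh
# Sketch: confirm disk encryption (and, on a Mac, the firmware lock) are
# actually enabled. Platform-specific; adjust for your own setup.
if command -v fdesetup >/dev/null 2>&1; then
    fdesetup status                          # macOS: "FileVault is On." (or Off)
    sudo firmwarepasswd -check 2>/dev/null   # macOS: is a firmware password set?
elif command -v lsblk >/dev/null 2>&1; then
    # Linux: 'crypt' entries in the block-device tree indicate LUKS volumes.
    lsblk -o NAME,TYPE | grep crypt || echo "warning: no encrypted volumes found"
else
    echo "no known encryption-status tool on this system"
fi
checked=yes
```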

Alright, laptop (reasonably) secured and locked in the safe, you head to the after party and begin networking. You start chatting with a bunch of amazing people, and exchange business cards. Turns out, you met the CTO of this company you admire, and she really liked your talk, and maybe you want to come in for an interview the next day. You:

Oh, boy, so many options here, right? One thing to keep in mind is that whatever information you're sharing may reveal something about your intentions: following or engaging with thought leaders on social media can bubble up in your coworkers' or boss's feeds; geo-tagged pictures may tell others that you're interviewing somewhere.

But, github... It's awesome, right? You can share code! And configurations! And ssh keys! And rc files with passwords!

Do a search in your local issue tracking system for the words "github" and "security incident" or "password".

A few defensive mechanisms you can employ here also happen to be good development practices: separate code and config, config and credentials; tighten your .gitignore file and use pre-commit hooks to avoid pushing sensitive materials into a repository.
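As an illustrative sketch of the pre-commit idea (the patterns here are examples only; dedicated scanners such as gitleaks or git-secrets are far more thorough):

```shell
#!/bin/sh
# Sketch of a git pre-commit hook that refuses commits containing obvious
# secrets. Install as .git/hooks/pre-commit and make it executable.
hook=$(mktemp)
cat > "${hook}" <<'EOF'
#!/bin/sh
# Scan the staged diff for private keys, AWS key IDs, or password literals.
if git diff --cached | grep -E -q 'BEGIN .*PRIVATE KEY|AKIA[0-9A-Z]{16}|password[[:space:]]*='; then
    echo "refusing to commit: possible secret in staged changes" >&2
    exit 1
fi
EOF
chmod +x "${hook}"
echo "sample hook written to ${hook}"
```

Pair this with a .gitignore that excludes the usual suspects (*.pem, .env, credential files), so secrets never get staged in the first place.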

Ok, second day of the conference. Even though you know there's no such thing as multitasking, you attempt it anyway: you bring your laptop to the talks and join your company slack so you can "catch up on work" while you're listening.

You're all signed in to slack when you open another browser tab to log into Twitter, so you can tweet about how awesome this talk is that you're attending. You...

Okay, so I'm sure we've all been here before or have at least seen others accidentally type a password into a chat system. It's not the end of the world, but you do have some egg on your face.

But this scenario now hits on a couple of different things: information leaking via social media (again), password hygiene, and chat log persistence.

The former is such a broad field that I can't reasonably cover it in this talk, but you should be aware that if you're browsing the web with the same browser that you're logged into social media accounts with, those services can track you across other sites. One way to combat that is to separate your profiles: use one browser for social media, one for everything else. (You can even go further and use one browser profile or container per site or app.)

No talk on end-user security would be complete without hyping the benefits of a password manager (I happen to like 1Password). You do use a password manager, right?

But let's talk about chat systems:

It's important to remember that chat logs are stored elsewhere, and that anything you type in there, intentionally or not, is logged. And it may be logged by third parties, and on other people's servers if you're outsourcing the chat. Or it may be logged by fourth parties, if your third party provider integrates a fourth party bot, for example. Or it may be logged by somebody in the room. Or a company administrator.

This concept extends beyond just your general secrets: The internet does not forget. Anything. Ever. Assume your chat logs are available. Just like email.

Alright, the talk is finally over. Your batteries are empty, and you have to close your laptop anyway, so you might as well pay attention to the final summary slides:

OpSec is hard. A lot of this is non-obvious and requires internalizing a certain amount of paranoia. It's too easy to become defeatist and feel like since you can't be 100% secure 100% of the time, why should you bother? But perhaps instead, you can try to focus on growing a security culture within your organization:

Here are a few things you can provide to all your employees: privacy screens, laptop webcam covers, FIDO U2F security keys (particularly useful with Google Apps), a USB Condom, a password manager license, a license for Little Snitch, and perhaps an RFID wallet.

You can get all of these things together for less than $100. And the important thing here is not that all your employees use all of these all the time, but rather that you're conveying that security is important, that opsec should be on their minds. If employees only use some of these some of the time, your organization has benefited.

Beyond these, you have a number of ways to strengthen your security posture by tightening system defaults and configurations, hopefully by way of a central configuration management system.

So keep calm, stay paranoid, and always remember to practice safe secs.
