
Of Users and Groups -- oh, and Trust

[Image: trust fall]

I finished another chapter of my book on System Administration the other day, this one covering general concepts of multi-user systems. I also recently gave a presentation at DevOpsDays NYC (video), which touched on the topic of implicit and explicit trust hierarchies within an organization. Having clearly had ample time to think about the topic of Users and Groups, let me share some of what I've come to believe are perhaps-not-as-obvious-as-you-might-think aspects, hiding in plain sight. First, the entirely obvious:

At its core, all computer problems can be reduced to human problems.

I know, it's not particularly insightful or revolutionary, but it's rather satisfying to see how widely this concept applies. Humans are not only terrible at expressing their ideas in ways that computers understand[1]; they are also bad at understanding what problem they are actually trying to solve. Any time a user requests that a new application be installed, that a hole be punched into your firewall to allow communication between new endpoints, or that they be granted privileged access, the best response you can give is to simply ask: "What is the problem you're trying to solve?" Finding the answer to that question tends to yield simpler solutions.

This lack of understanding of the nature of the problem becomes particularly significant when it comes to the security implications that derive from operating in a multi-user environment.

Mapping People to Users

In a multi-user environment, we have, effectively, two kinds of "trust"[2]: We have trust in our software that permissions are applied as defined -- which really is only "trust" insofar as we assume that somebody has correctly expressed the requirements to the computer -- and we have human-to-human trust: we believe that certain users will not intentionally perform malicious acts and that they are sufficiently skilled not to do so accidentally, either. (The latter can sometimes be enforced by software, but more often than not a trusted user has -- and requires -- the power to make spectacular mistakes.)

[Image: user<->people mapping]

It is easy to assume that different "users" refer to distinct people, and ideally we would want the mapping between the two sets ("persons", "accounts") to be bijective: every person maps to exactly one account, and every account maps back to exactly one person. Unfortunately, that is not always possible. Consider that we actually have different kinds of users: we have accounts mapping actual humans to user-IDs on a host, we have so-called system accounts (non-interactive accounts used for services running as unprivileged users), and we have role accounts (semi- or completely interactive accounts accessed by multiple people for specific tasks). That is, we can identify mappings of zero, one, or more people to a given account.
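To make this a little more concrete, here is a minimal sketch (in Python; not part of the original text) that classifies the entries in a host's passwd database into these three kinds of accounts. The UID cutoff and the list of role account names are assumptions that vary between systems and organizations:

    #!/usr/bin/env python3
    # Minimal sketch: guess which kind of account each passwd entry represents.
    # SYSTEM_UID_MAX and ROLE_ACCOUNTS are assumptions; adjust for your site.
    import pwd

    SYSTEM_UID_MAX = 999                               # common Linux convention
    ROLE_ACCOUNTS = {"deploy", "oracle", "www-admin"}  # hypothetical role accounts

    def classify(entry):
        if entry.pw_name in ROLE_ACCOUNTS:
            return "role account (shared by several people)"
        if entry.pw_uid <= SYSTEM_UID_MAX or \
                entry.pw_shell in ("/sbin/nologin", "/usr/sbin/nologin", "/bin/false"):
            return "system account (maps to no person)"
        return "user account (should map to exactly one person)"

    for entry in pwd.getpwall():
        print("%-16s uid=%-6d %s" % (entry.pw_name, entry.pw_uid, classify(entry)))

Running something like this on a few different hosts tends to show just how fuzzy the boundaries between these categories are in practice.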

Generally speaking, access to any resource constitutes "trust", but things become more interesting when we decide whom to grant root or sudo(8) privileges. Now here's the problem: we humans trust based on (perceived or projected) character features, and we too easily dismiss the possibility of accidental damage. The Principle of Least Privilege just seems rather unfriendly towards our co-workers, whom we supposedly trust implicitly. We tend to favor wider access due to our social nature. This in turn makes it more difficult for system administrators or security engineers to enforce stricter rules, even though said trust is inherited indirectly and often in non-obvious ways across all of these different types of users.[3]

Trust models across environments

The nature of our environment exerts social pressure on our decision-making process, and we like to assume that other people's environments are, if not identical, at least similar to our own. But the differences can have significant implications for the trust model:

In a small commercial environment, such as a start-up, the good will of all users and their intent to collaborate are implied. In such environments, all users frequently do require the same or at least similar access to all systems, as responsibilities are shared and job duties overlap. Getting access to one system tends to simultaneously get you access to all systems.

And why shouldn't it? After all, we are all in this together, and we only hire people we can trust. (Also, we only hire "rock stars" and "ninjas" who are immune to accidental failure.) But while we may trust all our employees right now (when there are 5, or 10, or perhaps even 150 of them), the larger the environment grows, the more urgent the need for privilege separation becomes. Not all users should have access to all systems, even if we may believe that they are all good people with no ill intentions.

Trust Does Not Scale

[Image: cows]

Eventually, the number of users in our group necessarily increases beyond Dunbar's Number, and we create sub-groups[4]. These smaller groups then develop internal trust (and, to some degree, begin to distrust the "others", an apparent necessity of the human us-vs-them group identification), but for those in charge of managing access to all systems across all groups, it now becomes much easier to put access restrictions in place. Trust is no longer implicit, and putting common restrictions in place is no longer seen as near-hostile towards the "culture" of the environment.

Now let's take a step back and consider an academic environment, or the kind of general-access systems an ISP or a large public library might operate. All of a sudden, your trust model is warped: you cannot assign any trust to your users. Not only may your users not wish to collaborate with each other, they may have actively conflicting goals or have to be assumed hostile to each other (and, in some cases, to your organization).

[Image: groups<->machines mapping]

Back to our small environment, we feel a certain social pressure to be more open. But here's another problem: the infrastructure we put in place now cannot (easily) be changed once we've grown beyond our threshold of group trust. Trust does not scale.

Groups created when a company is in its infancy are retained, and users are added as the company grows. But group membership may imply privileged access, as groups are mapped by roles to large sets of hosts.

What's more, it's inherently more difficult to revoke access than it is to grant it. Logic and practical thought would require us to bootstrap our systems on a strict least-privilege basis, but our human nature prevents us from doing that[5], and so I have found myself facing every large-scale system administrator's déjà vu: addressing the "too many users are in group X and thus have sudo(8) everywhere" issue over and over again.
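As an illustration of that recurring exercise, here is a short sketch (Python; not from the original text) that enumerates who actually ends up in the usual administrative groups -- both the members listed explicitly in the group database and those who inherit membership through their primary group. The group names are an assumption; substitute whatever your sudoers rules actually reference:

    #!/usr/bin/env python3
    # Short sketch: list everybody who is a member of an administrative group
    # and therefore may effectively have sudo(8) everywhere.
    # ADMIN_GROUPS is an assumption; use the groups your sudoers file grants.
    import grp
    import pwd

    ADMIN_GROUPS = ["wheel", "sudo", "admin"]           # hypothetical, site-specific

    for name in ADMIN_GROUPS:
        try:
            group = grp.getgrnam(name)
        except KeyError:
            continue                                    # group does not exist here
        members = set(group.gr_mem)                     # explicit members in /etc/group
        members.update(u.pw_name for u in pwd.getpwall()
                       if u.pw_gid == group.gr_gid)     # primary-group members
        print("%s (%d): %s" % (name, len(members), ", ".join(sorted(members))))

Run periodically, the output of such a script tends to only ever grow -- which is precisely the point.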

[Image: Yertle the Turtle]

The computer and software systems we build grow increasingly complex (and, often, worse: complicated) with time, and the assumptions we made when the system was first conceived have long since broken down. What I've come to find really interesting lately are not the technical aspects of these problems but how they relate to, and may be traced back to, how people identify (within) groups. It really is turtles^Wpeople all the way down.

January 25th, 2013


[1] Computers will do what you tell them to do, not what you meant. Internalizing this lies at the core of debugging a misbehaving program.

[2] Obligatory Reflections on trusting trust link.

[3] The funny part is that all (reasonable) systems engineers will agree on the benefits of the Principle of Least Privilege, but -- especially in small environments -- convenience (some may say: laziness) and social pressure easily lead to the entirely expected responses: "Well, we trust the people we hire." or "This is not considered a security risk in our environment."

[4] Actually, for productive team cohesion, Dunbar's number is much too high. Your team most likely consists of fewer than 20 people, and your company may well have a "no more than X reports for one manager" rule. This is the size at which core trust is retained, but up to around Dunbar's number an organization (or a branch of a given "org") still feels a sense of "we" and inherits trust. Beyond that number, trust is no longer implicit and can more easily be questioned.

[5] Yes, the other major factor is convenience: having an open system is easier and faster to set up. This suggests the old dichotomy of security versus usability holds, but I've long argued that this is only the case if "security" is tacked on after the fact. It's certainly possible to build secure solutions from the get-go, but that is, admittedly, more work. At which point human nature kicks in again, and we favor short-term returns for low effort over the higher rewards in scalability we'd earn for investing more labor up front.

