Signs of Triviality

Opinions, mostly my own, on the importance of being and other things.
[homepage] [index] [@jschauma] [RSS]

CPU Pinning and CPU Sets

December 3rd, 2020

This is a blog version of a video lecture segment of my Advanced Programming in the UNIX Environment class. I wrote this up as a blog post because I was not able to find many succinct, written posts on this particular topic, so perhaps this is useful to somebody else searching for this information.

[Image: CPU with a red pin on it]

On a system with multiple CPUs, it may at times be useful or desirable to control the placement of processes on a specific CPU or a selection of CPUs, rather than letting the scheduler place them wherever it sees fit.

On the one hand, this can increase performance by reducing CPU cache misses for processes or threads; on the other, it can be used to ensure that resource-hungry processes do not impact the execution time of other processes.

This can be accomplished in one of two ways: by assigning processor affinity to a process or a process group, or by creating a CPU set and then binding a process or process group to it.

Basic CPU scheduling

To better understand these two methods, let us start by envisioning (in a simplified manner) how processes are scheduled across multiple CPUs.

Let's suppose that we have a system with four CPUs, and a selection of fairly typical processes running on it: a shell together with several commands started by it, a few system dæmons, and a few resource-hungry worker jobs doing some CPU-intensive work.

Now with your usual time-sharing, priority-based scheduling algorithm, any of these processes may be placed on any of the available CPUs. As work is completed and as jobs are preempted and rescheduled, these jobs may be moved from one CPU to another, or new jobs placed on the CPUs, as the scheduler sees fit.

This might look somewhat like so:

[Animation: jobs being distributed across four CPUs]

CPU Pinning / Processor Affinity

But now let's assume that our 'worker' jobs here are all very CPU intensive. By having them get placed on any of the CPUs, you might end up with a fully loaded system, and, depending on their priority, some of your system jobs might not complete as quickly as you'd like.

So let's pick these 'worker' jobs and try to ensure that they don't get placed on just any CPU, but only on CPUs 1 and 2. Doing that is called "CPU pinning", or assigning a processor affinity. When we do that, the workers are correctly placed onto just these CPUs:

[Animation: jobs being arranged based on CPU affinity]

Note that we may still have other jobs on CPUs 1 and 2: the shell and the find command were not evicted from the CPU, and in fact new processes may be placed on CPUs 1 and 2 as needed.

It is only the 'worker' processes that have been bound to the specified CPUs, all other processes can still be placed any way the scheduler sees fit.

CPU Pinning Example

In practice, we can reproduce this setup like so:

Let's create a trivial little program to keep a CPU busy, and run it while keeping an eye on the CPU utilization in a separate window using top -s 1 -1:

$ cat busy.c
int
main(void) {
        int i = 0;

        while (1) {
                i++;
        }
        return i;
}
$ cc -Wall -Werror -Wextra busy.c
$ ./a.out &
$ top -s 1 -1

With that job running in the background, start a few more instances of this worker, and you should find that the scheduler distributes them across all four CPUs. If we then run other commands -- dd(1) or find(1), for example -- they have to share a CPU with one of the other processes:

[Screenshot: the worker commands and the output of top(1)]

But that's not what we wanted -- we wanted to assign the worker jobs to specific CPUs. For that, we can use the schedctl(8) command on NetBSD:

$ schedctl -p $$                  # show current CPU affinity of our shell
  LID:              1
  Priority:         43
  Class:            SCHED_OTHER
  Affinity (CPUs):  <none>
$ ./a.out &
$ sudo schedctl -A 3 -p $!        # assign the last background job to CPU 3
  LID:              1
  Priority:         28
  Class:            SCHED_OTHER
  Affinity (CPUs):  3
$ sudo schedctl -A 1,2 -p $$      # assign our shell to CPUs 1 and 2
  LID:              1
  Priority:         43
  Class:            SCHED_OTHER
  Affinity (CPUs):  1,2
$ ./a.out &                       # this process will be pinned to CPU 1 or 2
$ ./a.out &                       # this process will be pinned to CPU 1 or 2
$ ./a.out &                       # this process will be pinned to CPU 1 or 2

You should see the first a.out process move to CPU 3, while the second, third, and fourth remain pinned to CPUs 1 and 2.

That is, CPU affinity is inherited by a child process from its parent. (However, if you were to change the CPU affinity of the shell to, say, CPU 0, the children would remain pinned to CPUs 1 and 2.)
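For comparison, the same inheritance behavior can be observed on other systems as well; the following is a sketch assuming a Linux box, where the command-line tool is taskset(1) from util-linux rather than schedctl(8):

```shell
# Sketch, assuming Linux with util-linux installed (not NetBSD):
# pin a shell to CPU 0, then have a child shell report its own affinity.
taskset -c 0 sh -c 'taskset -cp $$'
# the child reports an affinity list of "0", inherited from its parent
```

As with schedctl(8), the child did not set any affinity itself; it merely inherited the mask from the shell that taskset(1) pinned.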

Note also that you can still move other processes to any of the four CPUs -- the affinity of the a.out jobs does not prevent other processes from being placed onto the same CPU.

CPU Sets

Now let's look at how we can reserve one or more CPUs specifically and only for a given process or process group.

Let's say we want to take our four CPUs and reserve two of them for our worker jobs and one for our shell. We can do so using "CPU sets".

When you create CPU sets, you will always keep one default set available for any of the leftover processes. So in our example below, all our system processes would end up on CPU 0, while we can then explicitly bind our shell to CPU 3.

As before, child processes are placed on the same CPU set as their parent, so any process created by the shell will also end up on CPU 3, and if we then bind our worker jobs to CPU set 1, then things might look like so:

[Animation: process placement using CPU sets]

CPU Sets Example

In this example, I've extended the little "busy" program a bit to make it easier to kick off multiple CPU intensive jobs and track them by name. You can download the code from here.

By default, only one CPU set exists, comprising all four CPUs. To replicate the setup from our illustration above, we use the psrset(8) command:

$ cc -Wall -Werror -Wextra busy-child.c
$ psrset
system processor set 0: processor(s) 0 1 2 3
$ sudo psrset -c 1 2
$ sudo psrset -c 3
$ psrset
system processor set 0: processor(s) 0
user processor set 1: processor(s) 1 2
user processor set 2: processor(s) 3
$ sudo psrset -e 1 ./a.out 6 &

Now we have three CPU sets: the default set, with CPU 0 only; set 1, comprising CPUs 1 and 2; and set 2, with CPU 3. When we run our six worker jobs, we see them distributed across CPUs 1 and 2, as we had planned.

Note that even though the workers are bound to a given CPU set, we can still explicitly move one of them to a CPU in the default set. The reverse fails, however: if we try to move a process from the default CPU set -- CPU 0 -- to CPU 2, we get an error, because CPU 2 is part of a non-default CPU set and does not accept any jobs that were not explicitly bound to it via the psrset(8) command.

[Screenshot: processes assigned to CPU sets]


Summary

Ok, let's summarize what we've learned:

  • Pinning a process (group) to a CPU can improve performance by, e.g., reducing CPU cache misses; it can also help ensure that resources are used fairly, or prevent a group of processes from interfering with other jobs.
  • "Processor Affinity" or "CPU pinning" lets you assign a process to a specific CPU, but other processes may still be placed on that CPU.
  • Processor Affinity is inherited by a child process from its parent, but changing a parent's affinity does not affect already running children.
  • "CPU Sets" let you reserve CPUs for specific processes; no other processes can be placed on those CPUs.

Lastly, it's worth noting that processor affinity and CPU sets are not standardized; different operating systems implement them differently and with different tools.

On NetBSD, we use the schedctl(8) and psrset(8) tools; see their manual pages for references to the correlating library functions and system calls.
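On Linux, for example (an assumption here, for illustration only), the rough equivalents are taskset(1) from util-linux for pinning, and cgroup cpusets for CPU-set-style reservations; the cgroup details vary by distribution, so the comments below only name the common interfaces:

```shell
# Sketch, assuming Linux with util-linux installed (not NetBSD):
taskset -c 0 sleep 60 &     # start a job pinned to CPU 0
taskset -cp $!              # show the affinity of the last background job
kill $!                     # clean up the example job
# Exclusive CPU reservations are done via cgroup cpusets instead, e.g.
# cpuset.cpus under /sys/fs/cgroup, or systemd's AllowedCPUs= setting.
```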


See also:

[The Secret Language of Coders] [Recommendations To Write (Slightly More) Readable And (Thus) Robust Code]