Wednesday, July 20, 2011

Principles, principals, and protection domains

Recently, I've had some time to think about computer security as I wrap up a book chapter that I'm co-authoring on the topic. I recall my initial exposure to three fundamental security concepts: the principle of least privilege, principals, and protection domains. These three concepts appear throughout practical computer security, have been around for a long time (longer than I have), and are intricately related. This post explores the relationships among the three as they relate to one aspect of my research.

The classic paper by Saltzer and Schroeder [1] was my first introduction to these concepts; I highly recommend this paper for anyone even slightly interested in computer security. Butler Lampson's pages on Protection [2] and Principles [3] also have good information.

Let's start with definitions [1].

Principle of Least Privilege (POLP): Every program and every user of the system should operate using the least set of privileges necessary to complete the job.

Principal: The entity in a computer system to which authorizations are granted; thus the unit of accountability in a computer system.

Protection domain: The set of objects that currently may be directly accessed by a principal.

The logical conclusion of the POLP is that every principal should be as small (fine-grained) as is feasible, and every protection domain should be minimal for its principal. But in general, as things get smaller and more numerous, managing them gets harder.

Lampson has argued against the POLP [3]. His argument runs: if "the price of reliability is the pursuit of utmost simplicity," as Hoare famously said, and if reliability is required for security, then enforcing the finest granularity of privileges is wrong, because it introduces complexity and thus reduces reliability.

So an open challenge in computer security is supporting fine-grained privileges without introducing complexity (or overhead).

Privileges attach to both principals and protection domains. In *nix, the "user" (account) is the nominal principal: all processes running as the same user can access the same persistent objects, but they cannot touch temporary objects that live outside their own process context. Files are accessible with user rights; per-process open file descriptors are not. So who is the principal: the user or the process?
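The split is easy to see from user space. The sketch below (Python, assuming a POSIX system with Python 3) shows that a child process can reach the same file by name, since it runs with the same user's rights, while the parent's open descriptor means nothing to it; the descriptor number 200 and the temporary file are just illustrative choices.

```python
import os
import subprocess
import sys
import tempfile

# A file is a persistent object: any process running with this user's
# rights can open it by name.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("persistent object\n")
    path = f.name

# An open descriptor is process context: pin it at a known number (200)
# in *this* process only.
fd = os.open(path, os.O_RDONLY)
os.dup2(fd, 200)

# The child can open the file by path (user-level access), but descriptor
# 200 is invalid there: subprocess closes inherited descriptors above 2
# by default, and Python descriptors are non-inheritable anyway (PEP 446).
child = (
    "import os\n"
    f"print('file by path:', os.path.exists({path!r}))\n"
    "try:\n"
    "    os.fstat(200)\n"
    "    print('fd 200: valid')\n"
    "except OSError:\n"
    "    print('fd 200: invalid')\n"
)
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout, end="")

os.close(fd)
os.close(200)
os.unlink(path)
```

The child reports the file as reachable by path but the descriptor as invalid: the file lives in the user's protection domain, the descriptor in the process's.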

I think it is really about multiple "contexts" within a single computer system. In one context, the principal is the user, and the protection domain is the set of persistent objects managed by the OS (files, programs). In another context, the principal is the process, and the protection domain is the set of objects "in use" by the process (file descriptors, process address space). The two contexts blur at interfaces like /proc, which expose process context to users, and in the fact that a process exercises its user's privileges when accessing persistent objects.
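Linux's /proc makes the muddling concrete: it projects process context into the persistent file namespace, so an ordinary directory listing shows a process's open descriptors. A minimal sketch, assuming a Linux-style /proc layout:

```python
import os

# /proc/self/fd projects this process's descriptor table (process
# context) into the file system (user-level, persistent context):
# each entry is a symlink from a descriptor number to the object
# it currently names.
fd_dir = "/proc/self/fd"
links = {}
if os.path.isdir(fd_dir):  # Linux only
    for entry in os.listdir(fd_dir):
        try:
            links[int(entry)] = os.readlink(os.path.join(fd_dir, entry))
        except OSError:
            pass  # the descriptor may have closed between listdir and readlink
    for num in sorted(links):
        print(f"fd {num} -> {links[num]}")
else:
    print("no /proc on this system")
```

Run as the owning user, the same listing works for any of that user's processes via /proc/PID/fd, which is exactly the cross-context access described above.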

By thinking in terms of multiple contexts that align with principals, I can more easily think about how user/process principals fit with other principals, such as the coarser-grained machine principal seen in network protocols or finer-grained principals like threads, objects, or even procedure invocations. Each principal has an associated protection domain and can co-exist with overlapping principals.

Pushing the POLP toward its limits is one aspect of my joint work: hardware containers [4,5] treat procedure invocations as fine-grained principals, and we have argued that software can reasonably manage permissions to establish tightly bounded protection domains.

[1] J. H. Saltzer and M. D. Schroeder, “The protection of information in computer systems,” Proceedings of the IEEE, vol. 63, no. 9, pp. 1278-1308, 1975.

[2] B. Lampson, “Protection,” in Proc. 5th Princeton Conf. on Information Sciences and Systems, Princeton, 1971. Reprinted in ACM Operating Systems Review, vol. 8, no. 1, pp. 18-24, Jan. 1974.

[3] B. Lampson, “Practical Principles for Computer Security,” in Software System Reliability and Security, Proceedings of the 2006 Marktoberdorf Summer School.

[4] E. Leontie, G. Bloom, B. Narahari, R. Simha, and J. Zambreno, “Hardware Containers for Software Components: A Trusted Platform for COTS-Based Systems,” in IEEE International Conference on Computational Science and Engineering, Los Alamitos, CA, USA, 2009, vol. 2, pp. 830-836.

[5] E. Leontie, G. Bloom, B. Narahari, R. Simha, and J. Zambreno, “Hardware-enforced fine-grained isolation of untrusted code,” in Proceedings of the First ACM Workshop on Secure Execution of Untrusted Code, Chicago, Illinois, USA, 2009, pp. 11-18.
