TL;DR: It's good for machines but bad for humans.
The Principle of Least Privilege is often bad – at least in the way that it is implemented in many companies today.
Before we go into why, we should clarify what the principle of least privilege actually is.
What Is Least Privilege?
The principle of least privilege applies to access control and asserts that an individual should only have the access privileges required to accomplish a specified job or task. An employee whose job is processing payroll checks, for example, would only have access to that function in a payroll program and not to the customer database as a whole. Likewise, a marketing professional does not require access to employee salary data, an entry-level government employee does not need access to top-secret papers, and a finance specialist does not need to be able to alter application source code in order to accomplish their duties.
The concept of restricting access is familiar to most of us, and we see or practice variations of it every day. Parents use parental controls on their home devices to limit their children's access to harmful content; airline passengers can board a plane but aren’t allowed in the cockpit; students have access to learning systems but not to teachers' grading files; and a parking attendant with a valet key can park your car but not open the locked glove box, console, or trunk.
What Isn't Least Privilege?
Least privilege is sometimes confused with two related security principles, need to know and separation of duties, but they are not the same. Need to know restricts access to specific information based on a demonstrated need for it, while separation of duties splits a critical task between people so that no single person can complete it alone. Need to know is frequently used in conjunction with least privilege to provide more fine-grained access control. For example, sales managers should only have access to their direct reports' personnel files for a limited time in order to complete each employee's annual performance review.
How Is It Then (Ab)used?
This is where most (software) companies go wrong: they combine need to know with the principle of least privilege. When software engineers are prevented from accessing all the code in the company "because the software written outside their department does not concern them", the company starts losing valuable intelligence.
We have to ask ourselves what we are optimising for when we prevent employees from seeing information that is not directly related to their jobs. Are we hiding known vulnerabilities in the code because we don't want hackers to exploit them before we get a chance to fix them? That may initially seem fair, but do we suspect our fellow software engineers of leaking that code to external hackers? If so, then we probably should never have hired them in the first place. Do we want to foster a work environment where we generally mistrust one another? I know I don't want to work for a company where I cannot trust my co-workers.
When talking about need to know, do I need to know how much my coworkers are getting paid? Strictly speaking, I may not need to know it to do my job, but if it comes up during a lunch conversation, then it might be better to have had that out in the open than to have someone be unpleasantly surprised. Ask yourself: whom does keeping salary information private help – the employees or management? If everyone is paid fairly, then there should be no good reason to keep salary information secret.
More Conflation Of Issues
It is often stated that we want to prevent accidental loss of data, and that we therefore must restrict everyone from accessing the data. This, of course, is a conflation of issues. We can, and should, restrict who has write access to data. This applies both to ensuring that nobody accidentally deletes the entire production database, and to only allowing people from outside a team to make pull requests instead of letting them push to main. The former has to do with mitigating risk to production systems; the latter with ensuring that best practices are maintained across teams.
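The pull-request rule above is typically codified in a server-side pre-receive hook. Here is a minimal sketch of the check such a hook could run; the allowlist, the branch name, and the way the pusher's identity is obtained are all assumptions, since real Git servers expose these details in server-specific ways:

```python
# Sketch of the decision a pre-receive hook could make.
# ALLOWED_PUSHERS and PROTECTED_REF are illustrative assumptions.
ALLOWED_PUSHERS = {"release-bot"}
PROTECTED_REF = "refs/heads/main"

def push_allowed(ref: str, pusher: str) -> bool:
    """Direct pushes to the protected branch are limited to the allowlist;
    everyone else is expected to open a pull request instead."""
    if ref != PROTECTED_REF:
        return True
    return pusher in ALLOWED_PUSHERS

# A real hook would read "<old-sha> <new-sha> <ref>" lines from stdin,
# look up the pusher from the hosting environment, and exit non-zero
# whenever push_allowed(...) returns False, which rejects the push.
```

Note that this is exactly the kind of codified mistrust discussed below: the rule is trivial to implement, so the real question is whether you need it at all.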
Ensuring that everyone has to jump through a series of hoops to write to production systems keeps everyone safe from accidental damage – which is important for systems that need real-time availability or systems with no feasible way of making backups.
Codifying that nobody can force push to main ensures that everyone is well behaved even when we don't trust them to do what we agreed to do. And this is where problems start to arise. If we need to put rules into code because we don't trust our coworkers to follow those rules, then it's about time to take a hard look at the company culture. Is this company a safe and friendly place to be?
Implementing rules preventing engineers from creating or sharing code repositories does a really great job of preventing engineers from sharing knowledge with each other. I'm not entirely sure when that is a good thing for a company.
There are some places where least privilege makes a lot of sense. To find them, look at attack surfaces. Consider what attack surfaces a company has, how likely they are to be attacked, and what the impact could be in the event of an attack.
Obvious attack surfaces are servers with publicly available services. These should always run with the least level of privilege needed to perform the services they are designed to perform. Because these services have to be publicly reachable, they are the most exposed to attack.
Another, often overlooked attack surface is employee computers. These usually get exploited by automated drive-by attacks designed to take data hostage, typically by encrypting drives and demanding a ransom. Sometimes – exceedingly rarely by comparison – the attack is as simple as stealing a laptop and copying all the data from it to look for exploitable information such as company financials.
Least Privilege Best Practices
Organisations that want to (or must) implement least privilege should think long and hard about what they hope to accomplish by introducing it. Here are a few good rules of thumb:
- Only apply least privilege to actual attack surfaces.
- Set the level of restrictions to match likelihood and cost of an attack.
- Automate any restricted tasks.
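The second rule of thumb can be made concrete with the classic risk formula: risk = likelihood × impact. A sketch of how that might be used to rank attack surfaces, with entirely illustrative names and scores:

```python
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    likelihood: int  # 1 (rare) .. 5 (constantly probed); illustrative scale
    impact: int      # 1 (nuisance) .. 5 (business-ending); illustrative scale

    @property
    def risk(self) -> int:
        # Classic risk = likelihood x impact. Restrict heavily where this
        # is high; back off where it is low and restrictions only cost
        # productivity.
        return self.likelihood * self.impact

# Example surfaces and scores are assumptions, not measurements.
surfaces = [
    Surface("public web server", likelihood=5, impact=4),
    Surface("employee laptop", likelihood=3, impact=3),
    Surface("internal code repository", likelihood=1, impact=2),
]

for s in sorted(surfaces, key=lambda s: s.risk, reverse=True):
    print(f"{s.name}: risk {s.risk}")
```

The exact scales don't matter much; the point is to rank surfaces explicitly instead of restricting everything equally.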
What This Means
If an employee has access to all code (because it's a non-toxic workplace), then enforce full-disk encryption and consider making them run security software that enables remote shutdown of the computer if it's stolen or (more likely) compromised.
If access to production servers is restricted, then make sure that there are continuous deployment systems set up so productivity isn't negatively impacted by those restrictions.
Be generous when granting access to restricted systems if your employees have a legitimate reason to need access.
Trust your coworkers to work in the best interest of the company at all times.
Never trust computers. If a CD server does not need access to a database, then make sure that it doesn't have that access. Compartmentalise production systems so that each system has read-only access where it does not need write access, and no access at all where it needs none. Make highly granular database access rules, possibly even down to restricting access to specific tables or columns, and apply those rules to deployed production systems. Rotate production keys often. Consider limiting how many times a given key can be used and where it can be used from.
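Column-level restrictions like those described above are easy to generate and apply as part of deployment. A minimal sketch, assuming PostgreSQL-style GRANT syntax; the role, table, and column names are purely illustrative:

```python
def column_grant(role: str, table: str, columns: list[str],
                 privilege: str = "SELECT") -> str:
    """Build a column-level GRANT statement (PostgreSQL-style syntax).

    Granting SELECT on named columns only means the deployed service
    can never read the columns it wasn't granted, even if compromised.
    """
    cols = ", ".join(columns)
    return f"GRANT {privilege} ({cols}) ON {table} TO {role};"

# Example: the order service can read customer identity columns but
# never sees payment-related columns.
print(column_grant("order_service", "customers", ["id", "name", "country"]))
```

Generating these statements in the deployment pipeline keeps the access rules versioned alongside the systems they protect.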
Why Do We (Ab)use It?
Let's have a quick look at the analogies from earlier:
The concept of restricting access is familiar to most of us, and we see or practice variations of it every day. Parents use parental controls on their home devices to limit their children's access to harmful content; airline passengers can board a plane but aren't allowed in the cockpit; students have access to learning systems but not to teachers' grading files; and a parking attendant with a valet key can park your car but not open the locked glove box, console, or trunk.
Let's break those down, one at a time:
- Parents use parental controls because the parent–child relationship is different from the relationship between coworkers.
- Airlines restrict access because the provider–customer relationship is different from the relationship between coworkers.
- Students have access to learning systems but not to teachers' grading files. This one is murky because of all the GDPR rules around sensitive student information, so the analogy probably doesn't apply.
- The parking attendant with a valet key can park your car and actually does have access to the glove box, console, and trunk; so this seems like a false analogy.
Because these analogies don't actually hold up, it is no wonder that the principle of least privilege is also typically misused.
Let's apply this principle where it makes sense: where the likelihood and severity of an incident are really high. And let's back off when it comes to employee productivity and happiness.
Attacks on production servers are really costly for businesses.
Employee churn is really costly for businesses.
Keep both of those in mind before considering introducing the principle of least privilege.
There is a thread about this post on Mastodon.