Ask HN: Why isn't capability-based security more common?

killerstorm | 10 points

The #1 problem is the IT mindset's unwillingness to adopt a default-deny philosophy.

A default-accept philosophy lets millions of holes open up from the start, and you end up spending the entire IT budget locking down things that didn't need it while missing the holes you can't see that actually need closing.

Default-deny is a one-time IT expenditure. You then poke holes to let things through, and if a hole turns out to be dirty, you can plainly see it and plug it.

All of that applies equally to CPU designers.
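
A toy sketch of the difference, with made-up rule sets (nothing OS-specific):

    # Hypothetical policy tables -- purely illustrative, not any real system.
    ALLOWED = {("backup-agent", "read", "/srv/data")}   # default-deny: list what IS permitted
    DENIED  = {("guest", "write", "/etc/passwd")}       # default-accept: list what ISN'T

    def default_deny(subject, action, resource):
        # Anything you never thought about is refused until you poke a hole for it.
        return (subject, action, resource) in ALLOWED

    def default_accept(subject, action, resource):
        # Anything you never thought about slips through.
        return (subject, action, resource) not in DENIED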

egberts1 | 3 hours ago

The more fine-grained you make a capability system, the more the number of permissions an application requires explodes, and the greater the chance that some combination of permissions grants more access than intended.
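
As a toy illustration (the permission names are invented), two grants that each look harmless can compose into something you never meant to allow:

    # Hypothetical fine-grained permissions; names invented for illustration.
    granted = {"contacts.read", "net.connect"}   # each looks harmless when reviewed on its own

    def can_exfiltrate_contacts(perms):
        # The combination yields an ability neither permission implies by itself.
        return {"contacts.read", "net.connect"} <= perms

    assert can_exfiltrate_contacts(granted)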

It also requires rewriting all your apps.

It also might require hardware support to not be significantly slower.

"Just sandbox each app" has much fewer barriers to entry, so people have been doing that instead.

And systems like Android have been working with discrete permissions / capabilities, because they were able to start from scratch in a lot of ways, and didn't need to be compatible with 50 years of applications.

structural | 3 hours ago

I don't know much (if anything) about it, but it can be turned into an interesting thought experiment.

Let’s use Apple as an example, as they tend to do major transitions on a regular basis.

So, let’s say that the top tier has already approved the new security model.

Now, how to do it?

My understanding is that most if not all APIs would have to be changed or replaced. So that's pretty much a new OS that needs new apps (if the APIs change, you cannot simply recompile the apps).
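
Roughly, the change in API shape looks like this (Python used only as an illustration; the path and function names are made up, and the scoped version relies on openat-style dir_fd support, which only exists on some platforms):

    import os

    # Ambient authority: the callee can name any path the process's user can reach.
    def read_config_ambient():
        with open("/etc/myapp/config") as f:   # hypothetical path; nothing scopes the access
            return f.read()

    # Capability style: the caller must hand over a directory handle first, and the
    # callee can only reach names relative to it (os.open's dir_fd maps to openat()).
    def read_config_scoped(config_dir_fd):
        fd = os.open("config", os.O_RDONLY, dir_fd=config_dir_fd)
        with os.fdopen(fd) as f:
            return f.read()

Every call site that used to construct paths now has to receive a handle from somewhere, which is why a recompile alone doesn't get you there.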

Now, if you expose the existing APIs to the new OS/apps, then what's the gain?

And if you don't expose them, then you basically need a VM. I mean, I don’t know Darwin syscalls, but I suspect you might need new syscalls as well.

And so you end up with a brand new OS that lives in a VM and has no apps. So it's likely order(s?) of magnitude more profitable to just harden the existing platforms.

TomaszZielinski | 3 hours ago

I presume this is for compatibility reasons.

Back in the 70s and 80s, computers didn't hold information valuable enough to worry about, and there was no Internet to transmit it over, so adding security mechanisms to operating systems made little sense. Those are the years when today's operating systems were first developed: Unix, DOS, Windows. Since then, many of their architectural decisions have never been revised, in order to avoid breaking backward compatibility. Even where breaking it would buy better security, no one is willing to make that sacrifice.

There are operating system projects focused on security that aren't just Unix-like systems or Windows clones, but they can't replace the existing ones because of network effects (it's impractical to use a system nobody else uses).

Panzerschrek | 5 hours ago

Have a look at Microsoft MSIX.

privatelypublic | 4 hours ago

This is something that needs to be baked into the operating system, and no major OS supports it today. The next best thing is to rely on a "secure environment" where applications can be installed and run, similar to phone apps or browser extensions. This environment would probably use application manifests to list entitlements (aka capabilities), like disk access, network access, etc. But until then, we're stuck with the ambient-authority model.
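
Something like this, as a sketch (the manifest format and entitlement names are invented, not MSIX, Android, or any real scheme):

    # Invented manifest format and entitlement names -- purely illustrative.
    manifest = {
        "app": "photo-editor",
        "entitlements": ["fs.read:Pictures"],   # no network entitlement declared
    }

    def gate(entitlement, action):
        # Default-deny: the runtime only performs actions backed by a declared entitlement.
        if entitlement not in manifest["entitlements"]:
            raise PermissionError(f"{entitlement} not declared by {manifest['app']}")
        return action()

    # gate("net.connect", connect_to_server) would raise, even if the app's code tries it.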

khaledh | 4 hours ago