f2fc62ad033786cc0d3d5fd0ce59ed54f74dee2b — Jakub Pastuszek 3 years ago d206843
2 files changed, 99 insertions(+), 0 deletions(-)

A content/reality-of-linux.md
M content/security-mess/index.md
A content/reality-of-linux.md => content/reality-of-linux.md +69 -0
@@ 0,0 1,69 @@
title = "TBD"

tags = []
categories = []

image = "../wip.jpg"
image_alt = "Work in progress"

# Shared vs vendored dependencies

The fundamental dilemma is this:
Software is built out of reusable parts - this is critical as it allows progress/scalability.
There are 3 ways to reuse software:
* vendoring - n copies of the library code
  * put all the library source code in your source tree and compile it together - this is what most bigger C projects do to some extent (look up blog post about Rust dependency bloat)
  * put all the .so files together - basically the same as static linking, but using dynamic linking: Docker, Nix etc.
* sharing - 1 copy
  * share dependencies across programs and resolve conflicts
* hybrid - you do some of the one and some of the other

In practice we always end up with a hybrid approach:
* C programs often vendor in smaller libraries like base64 encoders, XML parsers, SHA1 functions etc. - due to the lack of proper tooling (e.g. cargo, npm etc.) - while at the same time using OS-provided dynamically linked libraries
* with Docker your image in the end still uses dynamically linked libraries, and they can still change and conflict if you update the base OS layer

Why we do vendoring:
* lack of proper tooling - it is just easier to stick this single-header library in...
* conflicts between versions: program A needs lib version a, program B needs the same lib but in version b to work correctly
* portability - you can just give someone a statically linked binary and it will probably work on their computer without them having to figure out the dependencies
* dependency hell - who wants that?
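The version-conflict point above can be sketched in a few lines. This is a hypothetical toy model (the names `resolve_shared`, `resolve_vendored` and the `reqs` data are illustrative, not any real package manager's API): a shared-library distro must pin one version per library for the whole system, while vendoring sidesteps the conflict by keeping n copies.

```python
# Toy model: one (library, version) requirement per program.
# Hypothetical names; not any real package manager's API.

def resolve_shared(requirements):
    """Pick one version per library, as a shared-library distro must.
    Raises if two programs need different versions of the same library."""
    chosen = {}
    for program, (lib, version) in requirements.items():
        if lib in chosen and chosen[lib] != version:
            raise RuntimeError(
                f"conflict: {program} needs {lib} {version}, "
                f"but {lib} {chosen[lib]} is already pinned")
        chosen[lib] = version
    return chosen

def resolve_vendored(requirements):
    """Give every program its own copy: n copies, no conflicts."""
    return {program: {lib: version}
            for program, (lib, version) in requirements.items()}

reqs = {"A": ("libfoo", "1.2"), "B": ("libfoo", "2.0")}

print(resolve_vendored(reqs))   # each program gets a private libfoo
try:
    resolve_shared(reqs)
except RuntimeError as e:
    print(e)                    # a single shared namespace cannot satisfy both
```

The asymmetry is the whole point: vendoring trades disk space and auditability for the guarantee that resolution always succeeds.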

Why we do shared libraries:
* smaller footprint - this is mostly a historical reason (although, why waste space?)
* security - you only need to update one library to have your system secure, instead of having to recompile everything that depends on it - e.g. all software that depends on a bad OpenSSL version

So with vendoring it is hard to have a secure system - good tools that help you figure out which library versions are compiled into which binaries are necessary to even have a hope of that.
But with sharing, having a consistently non-broken system is a challenge; it may even be impossible for a given mix of software - you may need more than one system, or more than one "view" on the system (like with Nix/Guix).
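To make the "figure out what is compiled into what" point concrete, here is a minimal sketch of the crude core of such tooling: many libraries embed a version string in the compiled binary, so even a naive scan over the bytes can reveal vendored copies. This is an illustrative toy (the `KNOWN` patterns and the fake binary are made up); real scanners work from build manifests and SBOMs, not string grepping.

```python
# Hypothetical sketch: find vendored library versions embedded in a binary.
import re

# Version-string patterns some libraries embed in their .rodata section.
KNOWN = re.compile(rb"(OpenSSL \d+\.\d+\.\d+[a-z]?|zlib \d+\.\d+\.\d+)")

def find_vendored_versions(blob: bytes):
    """Return the sorted set of known library version strings found in blob."""
    return sorted({m.decode() for m in KNOWN.findall(blob)})

# A fake statically linked binary with two vendored libraries inside:
fake_binary = b"\x7fELF...OpenSSL 1.0.2k...zlib 1.2.11..."
print(find_vendored_versions(fake_binary))
# ['OpenSSL 1.0.2k', 'zlib 1.2.11']
```

With sharing, a single `ldd` answers the same question; with vendoring you need an audit over every binary, which is exactly the asymmetry the paragraph above describes.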

# Long Term Support vs rolling

Thanks to sharing of dependencies, distributions that have regular releases handle some of the conflict problems by working with upstream, or by creating patches, so that a single library version is used across all packages.
With rolling systems this is not possible: you always have some packages broken, things move at different paces, and there is never a moment to get them all synchronised.

LTS systems will contain stale software by definition:
* once you get all the packages to work together, you can't just upgrade a library to a newer version if that would break some of the packages
* to handle security you will need to back-port security fixes from newer versions to your version of the software.

LTS systems are insecure:
* backporting always lags behind upstream - first upstream gets fixed, then you need to work out what was fixed and how, and only then backport it to your version - this takes time,
* backporting may not be possible - depending on the extent and nature of the fix - e.g. if the fix addresses a design flaw, you will have to recompile all the software that depends on the library,
* backporting may not happen - package maintainers may not be aware of a security issue that needs backporting.

Rolling systems are broken:
* things are always moving around; there is no moment in time when everything is in sync,
* the more packages you install, the bigger the chance of breakage.

# Nix/Guix workaround

Vendor the libraries by creating different views of the system for different binaries.

## Why is this a hack?

Linux (UNIX in general) was designed with one global namespace: one hierarchical (though not strictly so in practice) file system view. In order to render these full per-program OS views, Nix patches binaries (their library search paths) and maintains environment variables to simulate it.
So this goes against the fundamental design of UNIX.

In Plan 9, by contrast, this would be the natural way of doing things: it maintains a namespace per group of processes and allows composing that namespace from multiple sources.
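The mechanics of the workaround can be sketched abstractly: every library version lives under its own store path, and a "view" for a binary is just a composed search path. This is a hypothetical toy (the `STORE` paths and `view_for` function are illustrative, not real /nix/store contents or Nix's actual machinery, which also patches RPATHs):

```python
# Toy model of per-binary "views": each dependency has its own store path,
# and a process's view of the system is a composed search path.
# Hypothetical paths; not real /nix/store contents.

STORE = {
    "libfoo-1.2": "/nix/store/aaa-libfoo-1.2/lib",
    "libfoo-2.0": "/nix/store/bbb-libfoo-2.0/lib",
}

def view_for(deps):
    """Compose an environment simulating a private global namespace."""
    return {"LD_LIBRARY_PATH": ":".join(STORE[d] for d in deps)}

# Two programs coexist on one system, each seeing only "its" libfoo:
print(view_for(["libfoo-1.2"]))
print(view_for(["libfoo-2.0"]))
```

This is why it is a hack: the kernel still offers one global namespace, and the per-process view is faked in user space out of paths and environment variables rather than being a first-class concept as in Plan 9.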

M content/security-mess/index.md => content/security-mess/index.md +30 -0
@@ 34,3 34,33 @@ For example I can imagine standard library for Rust to be implemented in such a 
We need new OSes!

We can do capabilities in the distributed software that we build - social networks, the web etc.

# Systemd fiasco and home directories



To me this looks like he is trying to work around the fundamentally broken model that UNIX uses: ACLs written to the file system. The only way to get this right, without mountains of complexity, is to use an object-capability system instead of ACLs; but that would not be UNIX anymore.
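The contrast between the two models can be sketched in a few lines. This is a deliberately simplified toy (the `AclFile` and `Capability` classes are illustrative, not any real API): under ACLs every access re-checks an ambient identity against metadata stored with the file, while with object-capabilities possessing an unforgeable reference *is* the permission.

```python
# Toy contrast of ACL vs object-capability access. Hypothetical classes.

class AclFile:
    """ACL model: identity is checked against metadata on every access."""
    def __init__(self, owner, data):
        self.owner, self.data = owner, data
    def read(self, user):
        if user != self.owner:          # ambient identity decides
            raise PermissionError(f"{user} is not {self.owner}")
        return self.data

class Capability:
    """Object-capability model: holding the reference is the permission."""
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data               # no user name involved at all

acl = AclFile("alice", "secret")
print(acl.read("alice"))
# acl.read("bob") would raise PermissionError: the stored owner decides.

cap = Capability("secret")
print(cap.read())   # whoever was handed `cap` may read; no login name needed
```

Note how the capability version never mentions a user name - which is exactly the property the home-directory scenario below wants: being able to decrypt the data is the authority, not matching an owner field stored with each file.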

Also the idea of erasing your LUKS key is kind of pointless, since your RAM will also contain most of your recently opened files in the page cache - so if you can read your LUKS key from RAM, you can also read some of your files from RAM. If you want your files to be really secure, just shut down the computer or suspend to disk (“hibernate”) with encryption of the suspend file - this would be no different from what he proposes (since no user program can run anyway) and also better for CO2 emissions…

Maybe it is time to move away from UNIX, just a thought. If I am not mistaken, UNIX/POSIX did not have technologies like portable storage and encrypted drives to contend with when it was first created. Also the needs for encryption and security were very different back in the old punch-card days (which is where I started).

Yes, this is what I am starting to realize. I think the sooner we understand the fundamental design flaws of UNIX (arguably ACLs being one of them in this particular scenario), the sooner we can move on to something better.

E.g. see http://erights.org/ - or even Plan9, which does this capability-based security (via the 9P protocol) to some extent and was designed to use a remote home directory from day one.

If you watched the video in this post you can see that:

* the UNIX file permissions (owner and group in particular) are in the way of this scenario - as Lennart says, it would be best if mount could just override these values stored with each file; otherwise you need to chown -R the whole directory,
* he then also says that even having a user name is problematic, as it may conflict (he says that adding a domain to disambiguate the names globally may help, but won’t solve the issue),
* fundamentally you don’t even need the user name (login name) in the first place, as the fact that you are capable of decrypting the content of your home folder is enough.

So these are all fundamentals of the ACL system that go against this use-case. I would argue that instead of hacking around this fundamental design - building something that will be very complex, insecure, and still not do exactly what we want - we should either accept Linux for what it is, or move on to something that supports these use-cases. I don’t think you can “migrate” Linux out of the ACL model.