lmao that copy.fail was in linux/crypto/
we all know that "AI" can't find vulnerabilities, and having a legit vulnerability is not proof that "AI" found it. instead it means it was already known beforehand. the question then becomes why someone would choose to release it publicly
the fact that it wasn't responsibly disclosed means the "AI" corp isn't interested in being seen as a legitimate security corp. and it does force people to frantically update without thinking or auditing much
which is what you would want if you had inserted a backdoor into linux/crypto/ during the 7.0 rc cycle
this makes me feel very nihilistic, and i don't need more proof, but it's useful to identify the patterns that may be used to insert backdoors. i absolutely 100% called in advance that there would be vulns declared right after the 7.0 release because of the backdoor i found
i don't think it's worth getting upset over this, because the only way to fix it is to develop not just code that is safe against backdoors (i think a microkernel is the correct approach to start with) but also a governance model with accountability processes so that this can't be hidden (and then a retro process that identifies architectural improvements)
a "blameless postmortem" exists because internal corporate politics seek to use blame to avoid fixing issues. there are ways in which this can be valid in other contexts, but codebases that seek to achieve security for their users generally don't have internal politics to deal with and instead very much do have to worry about trust and accountability for failures
so, it's worth reconsidering whether blame needs to be a bad word. it can be reframed as responsibility. in addition to making it harder for contributors to make mistakes, we also have to consider how to limit the ability of contributors to harm users. this encourages more forms of direct democracy, instead of hyperspecialization and hierarchy as seen in the linux codebase
ideally, a microkernel should also help to reduce the scale of the project to make it possible to be familiar enough with most of the code to review others' work
it's not a magic bullet but rather a bundle of strategies to enable deep trust over the code that runs in ring 0. an additional result is in lowering the barrier to contributions. i suspect receptivity to forks would be a good thing, but also if the project needs to be forked, that could be a sign it's not microkerneling hard enough
grapheneos is much more ambitious and it remains secure partially by limiting hardware support, which further enables it to reduce the amount of trusted code. it also integrates cryptography into the boot process and in other ways generally understands that the operating system's purpose is to act as the trusted computing base.
linux acts like the os is supposed to have a constant stream of sick new features for really specific situations (splice and io_uring in particular), which constantly exposes it to vulns. if you need to run in kernel space to access hardware, you're going to have a conflict between feature support and security eventually. so i think interfaces that enable deep integration with hardware, without needing to pull code that runs in ring 0 into the kernel source tree, are a research goal for the future
this also has a lot of practical utility because it sucks that you essentially can't ship a kernel module to users like other types of code. FUSE is an example of how to expose safe interfaces (although FUSE also necessarily doesn't interface with hardware. need to learn.....so much more about the motherboard to figure this out)
this was inspired by https://www.askbaize.com/blog/linux-compromises-broken-embargoes-and-the-shrinking-patch-window, which i didn't expect to explicitly mention splice and io_uring (both interfaces intentionally created to share memory between the kernel and userspace). this is the kind of thing that imo really can't be done as a one-off, and it indicates that the kernel really shouldn't be managing i/o in the first place (gasp!)
the reason the kernel has to manage i/o is because the kernel is the only thing that can connect to the hardware, which is where i/o happens. the reason this pisses me off is because it also imposes the kernel's idea of how i/o scheduling should work, which application programmers (build tools, package managers) constantly have to work against
it's a well-known meme that databases have to tell the kernel to stop fucking with the pages they allocate and let the db software manage things directly, because the db works differently than the kind of software the page cache was made for. this isn't understood as the serious indictment it is of many os-level abstractions, which to my understanding date back to before unix
i have wanted less monolithic i/o APIs for years because they kept getting in the way of perf. in particular, most programming language async APIs are extremely detrimental to local i/o and are optimized for network traffic in a variety of ways. the kernel imposing very limited mechanisms for i/o scheduling makes it quite difficult to get "close to the metal", or on the other hand, to get "close to the user" by representing a precise set of application-level constraints
it's obviously a huge task to produce an interface that enables high-performance i/o without sharing the same memory space or even the same build system. it can be thought of as the difference between an existential vs universal quantifier. but the result will mean highly specialized application domains don't have to work around assumptions made for other use cases. and highly-secure systems won't incur tradeoffs made for other systems with less stringent requirements
with the establishment of standards (sel4 being the closest to what i want to see), this shouldn't mean code becomes less portable. in fact, i think it should increase portability as OS interfaces become less a product of the monolithic OS version and more a construction of userspace code that can be composed independently of the kernel
the violence of a monolithic kernel is in needing to support every use case. with linux, this has led to compounding degrees of driver support, because the only way to support your hardware at all is to get it into the kernel. like the GPL(v2), it was an effective approach to coerce corporations to contribute code. but it achieved this coercion via monopoly, and eventually corporate use cases became privileged. this is not a problem for linus, who gets free labor and state-of-the-art research into his little project. but it becomes problematic for those who are at odds with corporate or government initiatives
@hipsterelectron meanwhile it's funny to see the extremely corporate Apple kernel go full circle, from NeXT slopping 4.2BSD on top of the Mach microkernel as a rush-to-market MVP 40 years ago to contemporary macOS/iOS carving out chunks to put them back into userspace daemons today
@joe to me it seems like the right architecture, but it requires immense patience, which is why it doesn't surprise me that linux skipped it and began to amass more functionality than e.g. hurd or others. my understanding too is that windows is a "microkernel" in some specific technical respects, but fails to use that to achieve isolation (because it reflects the segregated corporate organization of microsoft, and also microsoft's move-fast-break-things corporate incentive model)
@joe i will also note that OSXFUSE requiring an apple signature was used to extort twitter inc when we needed to make some changes necessary for our git virtual file system. so i will say that DRM is an alluring solution to this but without oversight can be used to harm security (which google and microsoft are imho much more guilty of). reducing the need for elevated permissions seems to be the path of righteousness. the problem then becomes a matter of API design
@joe there may be some potential similarities between cryptosystems and microkernel isolation. i think ARM MTE in grapheneos (and ios which failed to credit them) is an example of this. wish i could speak less vaguely on this, hardware is just so complex aaaaaa
@joe i think sel4 is probably something i wanna study closely (it's why i mention the military applications part above) bc i'm under the impression that's really where a lot of research into this has gone