lmao that copy.fail was in linux/crypto/
we all know that "AI" can't find vulnerabilities, and having a legit vulnerability is not proof that "AI" found it. instead it means it was already known beforehand. why someone would then choose to release it publicly becomes the question at hand
the fact that it wasn't responsibly disclosed means the "AI" corp isn't interested in being seen as a legitimate security corp. and it does force people to frantically update without much thinking or auditing
which is what you would want if you had inserted a backdoor into linux/crypto/ during the 7.0 rc cycle
this makes me feel very nihilistic, and i don't need more proof, but it's useful to identify the patterns which may be used to insert backdoors. i absolutely 100% called in advance that there would be vulns declared right after the 7.0 release because of the backdoor i found
i don't think it's worth getting upset over this because the only way to fix it is to develop not just code which is safe against backdoors (i think a microkernel is the correct approach to start off with) but also a governance model with accountability processes so that this can't be hidden (and then a retro process that identifies architectural improvements)
a "blameless postmortem" exists because internal corporate politics seek to use blame to avoid fixing issues. there are ways in which this can be valid in other contexts, but codebases that seek to achieve security for their users generally don't have internal politics to deal with and instead very much do have to worry about trust and accountability for failures
so, it's worth reconsidering whether blame needs to be a bad word. it can be reframed as responsibility. in addition to making it harder for contributors to make mistakes, we also have to consider how to limit the ability of contributors to harm users. this encourages more forms of direct democracy, instead of hyperspecialization and hierarchy as seen in the linux codebase
ideally, a microkernel should also help to reduce the scale of the project to make it possible to be familiar enough with most of the code to review others' work
it's not a magic bullet but rather a bundle of strategies to enable deep trust over the code that runs in ring 0. an additional result is in lowering the barrier to contributions. i suspect receptivity to forks would be a good thing, but also if the project needs to be forked, that could be a sign it's not microkerneling hard enough
grapheneos is much more ambitious and it remains secure partially by limiting hardware support, which further enables it to reduce the amount of trusted code. it also integrates cryptography into the boot process and in other ways generally understands that the operating system's purpose is to act as the trusted computing base.
linux acts like the os is supposed to have a constant stream of sick new features for really specific situations (splice and io_uring in particular), which constantly exposes it to vulns. if you need to run in kernel space to access hardware, you're going to have a conflict between feature support and security eventually. so i think interfaces which enable deep integration with hardware, without needing to pull code that runs in ring 0 into the kernel source tree, are a research goal for the future
this also has a lot of practical utility because it sucks that you essentially can't ship a kernel module to users like other types of code. FUSE is an example of how to expose safe interfaces (although FUSE also necessarily doesn't interface with hardware. need to learn.....so much more about the motherboard to figure this out)
this was inspired by https://www.askbaize.com/blog/linux-compromises-broken-embargoes-and-the-shrinking-patch-window, which i didn't know would explicitly mention splice and io_uring (both interfaces intentionally created to share memory between the kernel and userspace). this is the kind of thing imo that really can't be done as a one-off and indicates that the kernel really shouldn't be managing i/o in the first place (gasp!)
the reason the kernel has to manage i/o is because the kernel is the only thing that can connect to the hardware, which is where i/o happens. the reason this pisses me off is because it also imposes the kernel's idea of how i/o scheduling should work, which application programmers (build tools, package managers) constantly have to work against
it's a well-known meme that databases have to tell the kernel to stop fucking with the pages they allocate and let the db software manage things directly, because the db works differently than the kind of software the page cache was made for. this isn't understood as the serious indictment it is of many os-based abstractions, which to my understanding date back to before unix
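a tiny sketch of that meme using python's stdlib page-cache hints (assumes linux; O_DIRECT, the heavier hammer dbs actually reach for, needs aligned buffers so it's only mentioned in a comment):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 65536)
# dirty pages can't be evicted, so flush them to disk first
os.fsync(fd)

# POSIX_FADV_DONTNEED asks the kernel to drop this file's pages from the page
# cache; POSIX_FADV_RANDOM disables readahead. databases that want full control
# go further and open with O_DIRECT to bypass the page cache entirely (which in
# turn requires block-aligned buffers and offsets).
rc = os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)

os.close(fd)
os.unlink(path)
print("page cache hints applied")
```

note these are advisory: the kernel is free to ignore them, which is kind of the whole complaint.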
@hipsterelectron return to pickos. return to the database appliance OS