Discussion

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange · last week

One thing has really struck me in most pictures of #RTOS designs that I’ve seen recently: they all have a HAL or device-driver layer at the bottom. The bottom layer is your trusted computing base (TCB): the bits that have to be correct for any of your security guarantees to hold. But device drivers talk to the outside world. Ethernet, SPI, I2C, UARTs, and so on are all parts of your attack surface: the place where an attacker is most likely to start attacking your system.

By combining your external attack surface with your TCB, you guarantee that an attacker who mounts a successful exploit has complete control over the system.

This is why #CHERIoT RTOS makes it trivial to delegate devices to the least-trusted portion of a firmware image. A compartment with exclusive access to a device doesn’t get any privileges other than a CHERI capability (hardware-enforced pointer) authorising access to that device’s MMIO region. If an attacker compromises it, they get access to that device, and maybe a way of trying to attack the next compartment, but it’s just the first step towards compromising the rest of the system, not the end.

Device drivers for single-user devices are just a software-engineering abstraction, not a security boundary: they let you avoid thinking about device-specific details. Device drivers that handle multiplexing have no special privileges other than access to the device. At worst, compromising one allows you to break the isolation that the multiplexing provided.
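As a rough sketch (the device name, register layout, and entry point are invented for illustration; the MMIO_CAPABILITY macro follows the CHERIoT RTOS convention of deriving device capabilities from the board description), a driver compartment can be as small as this:

```cpp
// Rough sketch of a least-privilege UART driver compartment.  The device
// name `uart0`, the register layout, and `uart_write` are illustrative only.
#include <compartment.h>

#include <cstdint>

// Hypothetical MMIO register block matching a `uart0` entry in the board
// description.
struct Uart
{
	uint32_t data;
	uint32_t status;
};

// Exported from this compartment; callers would see a declaration such as
//   int __cheri_compartment("uart_driver") uart_write(char c);
// in a shared header.
int uart_write(char c)
{
	// The only authority this compartment holds: a CHERI capability bounded
	// to the UART's MMIO region.  Compromising the compartment yields access
	// to this device and nothing else.
	volatile Uart *uart = MMIO_CAPABILITY(Uart, uart0);
	while ((uart->status & 0x1) == 0)
	{
		// Spin until the (hypothetical) transmit-ready bit is set.
	}
	uart->data = static_cast<uint32_t>(c);
	return 0;
}
```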

Calicoday :gamedev:
@calicoday@mastodon.gamedev.place replied · last week

@david_chisnall totally off-topic, sorry, but how is it you're able to have such a high character count in a single post? Is it to do with which server you're posting from?

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied · last week

@calicoday

Yes, it’s a per-instance setting. The Mastodon defaults are stupid. They claim it’s to make people write concise thoughts, but it just makes people write 1/14-style threads split across a load of separate posts. It also adds to server load, because the individual subtoots are ActivityPub entities in their own right. And it makes the flow harder to follow, because people reply to something in the middle of the thread rather than waste their own character limit quoting the relevant bit of the parent.

But just because something has been shown to be a bad idea in practice doesn’t mean people stop doing it.

This instance has an 11K character limit. I have once written a post that hit that limit.

Calicoday :gamedev:
@calicoday@mastodon.gamedev.place replied · last week

@david_chisnall 😹 Wow. Yeah, thoughts and language are hard enough without arbitrary limits. Thanks very much.

Miroslav Kravec
@kravemir@fosstodon.org replied · last week

@david_chisnall With drivers isolated in their compartments:

What about DMA done by the device itself?

Similar to how GPUs use DMA for performance.

I've never done serious embedded work, but I've read that (some) Ethernet cards access RAM directly to send packet payloads without burdening the CPU with I/O. Or even split TCP packets themselves, with DMA access to the buffer.

AFAIK, these can run arbitrary code in their own firmware (while they listen directly to the outside world, the network).

RootWyrm 🇺🇦:progress:
@rootwyrm@weird.autos replied · last week

@kravemir @david_chisnall it's a complicated problem, and part of why most hardened systems are completely unsuitable for certain workloads.
E.g., the last thing you'd run on CHERIoT would be a NAS or SAN, doubly so a high-performance router.

It can be done without the corresponding performance hit, but you're automatically getting into what can only be described as true black magic.

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied · last week

@kravemir

What about DMA done by the device itself?

On our chips, the DMA units take capabilities, so they can access memory only if the caller on the CPU could access the same memory.

I've never done serious embedded work, but I've read that (some) Ethernet cards access RAM directly to send packet payloads without burdening the CPU with I/O. Or even split TCP packets themselves, with DMA access to the buffer.

Yup, and these devices on a CHERIoT system have access to memory mediated by capabilities.

On a non-CHERI system you’d have to set up an IOMMU or IOMPU (or, as RISC-V calls it, an IOPMP) to achieve the same thing, and you’d have a worse programming model, but approximately the same structure is possible.
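To make that concrete, here’s a conceptual model (the types and names are invented for illustration; this is not the real CHERIoT hardware or RTOS interface) of the check a capability-taking DMA engine applies before each transfer:

```cpp
// Conceptual model only: these types and names are invented to illustrate
// the check a capability-taking DMA unit performs.
#include <cstddef>
#include <cstdint>

// A simplified view of what a CHERI capability carries alongside an address.
struct CapabilityView
{
	bool     tagValid;    // Set only by hardware; cleared if forged.
	uint64_t base;        // Start of the authorised region.
	uint64_t length;      // Size of the authorised region.
	bool     permitLoad;  // May the holder read through it?
	bool     permitStore; // May the holder write through it?
};

// Before starting a transfer, the DMA engine applies the same checks the CPU
// would: the capability handed over by the driver compartment must be valid,
// must cover the whole buffer, and must carry the right permission.  The DMA
// unit therefore cannot reach any memory the caller could not.
bool dmaTransferAllowed(const CapabilityView &buffer,
                        uint64_t              start,
                        size_t                size,
                        bool                  isWrite)
{
	if (!buffer.tagValid)
	{
		return false;
	}
	// Overflow-safe bounds check: the requested range must lie entirely
	// within [base, base + length).
	if (start < buffer.base || size > buffer.length ||
	    start - buffer.base > buffer.length - size)
	{
		return false;
	}
	return isWrite ? buffer.permitStore : buffer.permitLoad;
}
```

The authority travels with the pointer that the driver compartment hands over, rather than living in a separately programmed IOMMU / IOPMP table.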
