I'm performing some interesting tests inside SmartOS zones. It'll be a blog post, but it won't be out until after EuroBSDCon.
The September 16th, 2025 Jail/Zones Production User Call is up:
We discussed Eurobhyvecon, Package Base Jails and related terminology, and more!
"Don't forget to slam those Like and Subscribe buttons."
You can support Call For Testing on Patreon: https://www.patreon.com/c/CallForTesting
I'm building a new EPYC Rome node for my home data center, trying to figure out if OmniOS can run everything I want. The last component just arrived and the machine is memtesting right now.
Over the years I've run mostly ESXi, and I've also tried to love Proxmox (but didn't). My new node has a twin sister with a Skylake Xeon, running Fedora. If everything goes to plan, the twin can be migrated to OmniOS later too.
Some specs:
- Inter-Tech 4U-4408 storage chassis
- Supermicro H11SSL-i motherboard
- AMD EPYC 7302P CPU (16 cores)
- 8x 16 GB DDR4 3200 ECC RDIMM (Samsung)
- 1x 2 TB M.2 NVMe WD SN700
- 2x 16 TB HDD Toshiba MG09
This motherboard is a little funny. You get two SFF-8643 connectors, but there's no SAS chip, so you can only break them out to SATA. That suits me fine for this build; the chassis backplanes happily take SFF-8643 too. The H11SSL-i motherboard is alright. EPYC Rome also fits an H12SSL-i, which would get me PCIe 4.0, but the price of the motherboard + CPU combo would have doubled and I don't really need it for this build.
The Inter-Tech (German brand) chassis are pretty nice for affordable DIY builds. 4U is sweet because you can use regular ATX PSUs and fit heat sinks with fans that don't sound like jet engines. The heat sink I have now is a "CooNong" (? bless me) that came with the motherboard. It looks like a clone of the style Supermicro sells. It came with the crappiest fan humanity ever laid eyes on, so I promptly fitted a Noctua, like the rest of the fans. As usual, the motherboard warns about fan speeds being too low, expecting high-RPM data center fans, but temps are otherwise fine. At least there's no audible alarm for low fan speeds on this motherboard. The result: you can actually work right next to the machine.
The September 9th, 2025 Jail/Zones Production User Call is up:
We discussed Eurobhyvecon, Jail Descriptors, Capsicum vs. Pledge, the WITHOUT_JAIL build option, the Yggdrasil overlay network IPv6 routing scheme, podman, PkgBase, Netgraph vs. bridge vs. aliased networking performance, rctl and cpusets, the Nitro init system and process supervisor, and much more!
"Don't forget to slam those Like and Subscribe buttons."
Ready for the illumos Cafe friends who will be at EuroBSDCon!
One of my plans for the next few weeks is to take a nice "dip" back into #OpenIndiana, which I haven't tried in a while. I currently have installations of both #SmartOS and #OmniOS, each with its own specific target. I also want to try installing #Tribblix on a PC that's not exactly new, but still perfectly functional.
I have an OmniOS machine, and I'm trying to pass 4 physical disks to a bhyve-branded FreeBSD zone (the disks have a zfs pool with some features that are not supported on #illumos). Without the extra disks the zone/VM boots fine; with them it just halts without logging anything. I collected all the bits I could gather here: https://codefloe.com/aru/gists/issues/1 . Essentially I want an equivalent of qemu-system-x86_64 -drive file=/dev/sdb,if=virtio,format=raw . Any ideas?
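For reference, the usual OmniOS pattern for attaching extra raw disks to a bhyve-brand zone pairs a device entry with a diskN attribute. A rough sketch follows; the freebsd-vm zone name, the c2t1d0p0 device path, and the disk0 attr name are all placeholders to adapt, and worth double-checking against the OmniOS bhyve documentation:

    zonecfg -z freebsd-vm
    add device
    set match=/dev/rdsk/c2t1d0p0
    end
    add attr
    set name=disk0
    set type=string
    set value=/dev/rdsk/c2t1d0p0
    end
    commit
    exit

The rdsk p0 node exposes the whole raw disk, which is the closest analogue to qemu's -drive file=/dev/sdb,if=virtio,format=raw; repeating the device/attr pair as disk1, disk2, and so on should cover all 4 disks.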
In the past couple of days I debugged an issue affecting #rustler in #illumos ( #omnios ). Rustler is an ergonomic way to implement NIFs for #erlang / #elixir, and the NIFs weren't loading at all. This led me down a fairly fun rabbit hole.
I've written a first draft about the debugging experience in #illumos here: https://system-illumination.org/01-rustler.html
In the spirit of the recent #illumoscafe, I'm also taking the chance to start my #illumos site: https://system-illumination.org, where I'll be posting and documenting my learnings to shine a bit more light on this beautiful OS and its tooling.
The above write-up is my first post on this new platform.
Hope you like it!
New #blog post. Let's write a peephole optimizer for #QBE that operates on #AArch64 assembly code. Three years ago, we did this for #AMD64 assembly code. But now that I have Arm machines, we can replicate the effort for another CPU architecture.
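To give a flavour of what such a pass does, here's a hypothetical example of the kind of rewrite it might perform on AArch64 output (a sketch of the general technique, not necessarily one of the patterns in the post):

    // before: the value stored from x0 is immediately reloaded into x1
    str x0, [sp, #16]
    ldr x1, [sp, #16]

    // after: keep the store, but replace the reload with a cheaper register move
    str x0, [sp, #16]
    mov x1, x0

The optimizer scans a small window of the emitted instructions and replaces patterns like this with cheaper equivalents.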
https://briancallahan.net/blog/20250901.html
#compiler #compilers #opensource #freesoftware #unix #bsd #freebsd #openbsd #netbsd #dragonflybsd #linux #illumos #macos #assembler #assembly
Hey #illumos friends, I'm experimenting on a VPS. They assigned me a /64 IPv6 subnet, but I can't use it for my non-global zones. Long story short, I think I need an NDP proxy, but as far as I know, one isn't available on illumos. So I've created a ULA and mapped it using NAT66: map vioif0 myULA/64 -> publicIPv6/128. It works, but the server eventually crashes and reboots. When I look at the /var/adm/messages file, I see:
[...]
2025-09-01T17:47:59.314554+00:00 hostname unix: [ID 836849 kern.notice] #012#015panic[cpu1]/thread=fffffe00040f4c20:
2025-09-01T17:47:59.314564+00:00 hostname genunix: [ID 335743 kern.notice] BAD TRAP: type=e ( #pf Page fault) rp=fffffe00040f3b20 addr=0 occurred in module "unix" due to a NULL pointer dereference
2025-09-01T17:47:59.314569+00:00 hostname unix: [ID 100000 kern.notice] #012
2025-09-01T17:47:59.314574+00:00 hostname unix: [ID 839527 kern.notice] sched:
2025-09-01T17:47:59.314578+00:00 hostname unix: [ID 753105 kern.notice] #pf Page fault
2025-09-01T17:47:59.314582+00:00 hostname unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x0
2025-09-01T17:47:59.314587+00:00 hostname unix: [ID 243837 kern.notice] pid=0, pc=0xfffffffffb887d3b, sp=0xfffffe00040f3c18, eflags=0x10246
2025-09-01T17:47:59.314591+00:00 hostname unix: [ID 619397 kern.notice] cr0: 8005003b cr4: 3606f8
2025-09-01T17:47:59.314596+00:00 hostname unix: [ID 152204 kern.notice] cr2: 0
2025-09-01T17:47:59.314599+00:00 hostname unix: [ID 634440 kern.notice] cr3: 22800000
2025-09-01T17:47:59.314603+00:00 hostname unix: [ID 625715 kern.notice] cr8: 0
[...]
If I disable that map rule, it's stable (but then I can't use IPv6 from the non-global zones).
Any ideas?
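For reference, this is roughly what that rule looks like in /etc/ipf/ipnat.conf (the fd12:3456:789a::/64 ULA and the 2001:db8::1 address are placeholders for my real prefixes):

    # NAT66: rewrite the ULA source prefix used by the zones
    # to the single public address on vioif0 (placeholder addresses)
    map vioif0 fd12:3456:789a::/64 -> 2001:db8::1/128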