Today in #openzfsmastery:
"GEOM lets you get clever with your storage. Don’t be clever. Put your ZFS on top of a partition and leave it the hell alone." #nonotryagain
It appears Linux root-on-ZFS is a mess. No standard way to do it. Kernel updates require recompiling ZFS. Boot environments are a cornucopia of constantly evolving hacks.
If you're actually using #ZFS on root, on #Debian, what's your preferred hack to make that happen? #sysadmin
I suspect #openzfsmastery might need to assume root on extFS and data on ZFS, leaving root-on-ZFS for the advanced user or a terminal chapter. 
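For the data-on-ZFS route, Debian itself is mostly painless. A sketch, assuming contrib is enabled in your apt sources and a hypothetical spare disk at /dev/vdb; zfs-dkms rebuilds the kernel module for you when the kernel updates:

# apt install linux-headers-amd64 zfs-dkms zfsutils-linux
# modprobe zfs
# zpool create data /dev/vdb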
Today in #openzfsmastery, a footnote:
"See, once upon a time the phone company printed huge books that listed everyone with a phone and their phone number. No, phone numbers didn’t change so often, because they were all landlines. But then the dinosaurs knocked the phone lines down, so we went cellular."
I have no idea why anyone buys my books, but it's open for sponsorship: https://sponsor.mwl.io
Today in #openzfsmastery:
"Long-term sysadmins probably have vast depth of brutally-acquired knowledge about storage devices. Study one drive’s geometry so you can calculate the optimal cylinder grouping for each partition one time and that knowledge scorches itself into your brain. You develop a wholly natural desire to treasure and share that kind of trauma—uh, wisdom. Both authors are right there with you, but none of that matters in the age of ZFS so we’ve chosen to ignore geometry and MBRs and floppy drives and discuss only modern hardware."
"The proper amount of swap space for any Unix system is a matter of fierce debate and best settled with a #sysadmin knife fight." #openzfsmastery #nonobadwriter
You can support me doing better at https://sponsor.mwl.io
"The proper amount of swap space for any Unix system is a matter of fierce debate and best settled with a #sysadmin knife fight." #openzfsmastery #nonobadwriter
You can support me doing better at https://sponsor.mwl.io
ICYMI: #openzfsmastery sponsorships are now open. #sysadmin #zfs
The #n4sa2e sponsor books have all been mailed (except for a couple problem cases) and are starting to arrive.
Which means I can probably tell folks about the #openzfsmastery sponsorship.
okay. #n4sa2e book production is complete. Time to get on #openzfsmastery.
Which means seriously getting to grips with #bhyve.
Did some bhyve experimenting a couple weeks ago. Got FreeBSD installed just fine. Debian with ZFS, not so much.
So this week I'm going back, one step at a time. Install base Debian with GRUB: does it work? Then UEFI, then ZFS on a secondary disk, then root on ZFS.
This morning's install ends with a console saying:
grub>
The Debian installer wrote grub to disk, but... didn't configure it? Huh.
Time for some classic #sysadmin headdesking.
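For anyone else staring at a bare grub> prompt: you can usually boot the installed system by hand, then fix GRUB from inside it. A sketch, assuming no separate /boot and the root filesystem on a hypothetical (hd0,gpt2) / /dev/vda2:

grub> ls
grub> set root=(hd0,gpt2)
grub> linux /vmlinuz root=/dev/vda2
grub> initrd /initrd.img
grub> boot

Once it's up, reinstall the boot loader properly:

# grub-install /dev/vda
# update-grub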
Planning to use #freebsd #bhyve and vm-bhyve as a test bed for #openzfsmastery.
My 14.3-p4 host has em0 and em1. em0 is for host management, em1 is part of the vm-public bridge. I have a FreeBSD 14.3 VM installed with a tap0 in the bridge, no problem. em1 and the vm-public bridge do not have IP addresses.
Reboot, log in, start VM.
Ping from the host to the VM? ~45% packet loss.
Wait a few minutes, and packet loss gradually drops to 0.
Seems weird to me. Is this expected? Should I file a bug?
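For reference, the switch setup was roughly this. A sketch, assuming vm-bhyve keeps its VMs in a hypothetical zroot/vm dataset; vm-bhyve names the resulting bridge vm-public:

# sysrc vm_enable="YES"
# sysrc vm_dir="zfs:zroot/vm"
# vm init
# vm switch create public
# vm switch add public em1

You can check the result with vm switch list and ifconfig vm-public.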