Anyone potentially into this? I'll store your HDD in my server for $1/day so you can keep cheap offsite backups, a third or fourth copy of your files. The more you store, the cheaper it gets.
New blog post: PF Firewall on FreeBSD - A Practical Guide
After years of running PF across multiple FreeBSD servers, I've written up the patterns that work: macros, tables, brute-force protection, NAT for jails, and dual-stack filtering.
Covers everything from basic concepts to production configs, plus a sidebar on authpf for bastion hosts.
If you're running FreeBSD and want a firewall that's elegant, powerful, and actually understandable, PF is worth your time.
https://blog.hofstede.it/pf-firewall-on-freebsd-a-practical-guide/
#FreeBSD #PF #Firewall #Security #Jails #SysAdmin #IPv6
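Not from the post itself, but as a taste of the patterns it covers, here's a minimal pf.conf sketch; the interface name, ports, and jail subnet are placeholders, not taken from the guide.

```
# pf.conf sketch: macros, tables, brute-force protection, NAT for jails,
# dual-stack filtering. All names/addresses below are illustrative.
ext_if = "vtnet0"
tcp_services = "{ ssh, http, https }"
table <bruteforce> persist

set skip on lo0
scrub in all

# NAT for jails on a hypothetical internal jail network
nat on $ext_if inet from 10.0.0.0/24 to any -> ($ext_if)

block in all
block in quick from <bruteforce>
pass out all keep state

# Brute-force protection: throttle rapid SSH connections,
# overload offenders into the table blocked above
pass in on $ext_if proto tcp to port ssh \
    keep state (max-src-conn 5, max-src-conn-rate 3/30, \
    overload <bruteforce> flush global)

# Dual-stack service rules: one line per address family
pass in on $ext_if inet  proto tcp to port $tcp_services
pass in on $ext_if inet6 proto tcp to port $tcp_services
```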
New blog post: Running your own Autonomous System on FreeBSD.
Got an AS number and an IPv6 /48 via RIPE, set up a FreeBSD BGP router with FRR and two upstreams, and built GRE/GIF tunnels to bring my own globally routable addresses to servers at different providers.
The interesting part: dual-FIB policy routing lets FreeBSD jails speak from both provider and BGP addresses simultaneously.
https://blog.hofstede.it/running-your-own-as-bgp-on-freebsd-with-frr-gre-tunnels-and-policy-routing/
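For the curious, the dual-FIB idea can be sketched in a few commands; the interface names, addresses, and jail parameters below are placeholders, not the post's actual config.

```sh
# Sketch: dual-FIB policy routing on FreeBSD (all names are illustrative).
# Requires multiple FIBs: set net.fibs=2 in /boot/loader.conf and reboot.
sysctl net.fibs

# FIB 0 keeps the provider's default route;
# FIB 1 routes via the GRE tunnel carrying the BGP-announced space.
route -6 add default -interface gre0 -fib 1

# Run a command under FIB 1 to test the BGP path:
setfib 1 ping -6 2001:db8::1

# Pin a jail's traffic to FIB 1 so it speaks from the BGP addresses:
jail -c name=web path=/jails/web exec.fib=1 persist
```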
I'm putting together a list of big and small issues that make us (the curl project) consider switching away from GitHub for security reporting/advisories again:
https://gist.github.com/bagder/ed3268e8745452a53a999d23b7fa1273
*considering* being the operative word, nothing has been decided and I think it's fair to give it some more time first. And some communication to see what can be done, fixed or adjusted.
To be continued.
@fosdem as is tradition: the photo from the closing talk of #FOSDEM
Of note:
* Next year, the default network of #FOSDEM will be #IPv6 only. A lot of things will break, and this is intentional
* FOSDEM will become political. If you are in policy, politics, a position of power: reach out to us at policy@fosdem.org
@paulehoffman @b0rk @spamvictim @andy
2) Limited public #IPv4 address space forces most organizations into CGNAT. This has lots of challenges (shared IP reputation, scaling/reliability/perf issues, etc.). Those NATs can be fairly costly to operate as well. This also makes troubleshooting hard (e.g., if a compromised or broken client is behind a NAT, it can be hard to chase the problem down, and it can impact all of the other users behind that IP).
(Viet Nam has actually been making some great progress with their #IPv6 transition and unlike some countries just talking about it, they seem to be following through so far: https://blog.apnic.net/2025/08/27/modernizing-viet-nams-internet-infrastructure-security-action/ )
@paulehoffman @b0rk @spamvictim @andy
There are many angles here, so I'll provide one or two.
1) Having a large amount of IPv4 space made address planning and structured addressing easy. For example, MIT used to split up 18.0.0.0/8 in a structured manner: buildings often got a /16. My undergrad dorm didn't *need* 64k IPv4 addresses, but being able to look at the second octet to know where it was turned out to be super convenient.
This is actually one of the huge benefits of IPv6, especially when people treat it as its own thing rather than just as "bigger IPv4". If you get your address plan right then you can have structured addresses. As a large-scale operator this turns out to be super convenient.
For example, if an organization has a /32 then they can slice this up in various ways. For example:
* Have a /48 per site, and then have common structure within each site.
* Have a /36 per function (prod servers, lab/QA, clients, etc) then have a /48 per site within that.
That sort of structure makes IPv6 addresses actually easier to work with than IPv4 -- it's not like anyone managing a network with hundreds of thousands of nodes is typing IP addresses by hand or memorizing them.
While structured addressing sometimes happens in RFC1918 space (e.g., for K8s clusters in net-10), it is much easier to run out of IPv4 space this way, in ways that get you stuck, especially if you ever need to connect multiple environments together. While the ~16.7M addresses in 10.0.0.0/8 sound like a lot, it turns out to be not big enough for structured addressing in large compute environments, or even for unstructured addressing for large ISPs with many tens of millions of subscribers.
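The two /32 layouts above are easy to play with using Python's ipaddress module; the prefix here is the 2001:db8::/32 documentation range as a stand-in for a real allocation.

```python
import ipaddress

# Stand-in for an organization's /32 allocation (documentation prefix)
org = ipaddress.ip_network("2001:db8::/32")

# Layout 1: a /48 per site
sites = list(org.subnets(new_prefix=48))
print(len(sites))        # 65536 possible sites

# Layout 2: a /36 per function, then a /48 per site within each
functions = list(org.subnets(new_prefix=36))
print(len(functions))    # 16 functions
prod_sites = list(functions[0].subnets(new_prefix=48))
print(len(prod_sites))   # 4096 sites per function
print(prod_sites[0])     # 2001:db8::/48
```

The point being: the second and third nibble groups tell you the function and site at a glance, the same way MIT's second octet told you the building.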
@b0rk You are totally hitting the ball out of the park today with "Excellent questions that will draw out uninformed opinions from people who wish they were experts in this area but can type like them". (I say this as someone whose response to questions about IP addresses is "ask my friends like @spamvictim, @andy, @nygren or anyone who has been in the RIPE community for >20 years".)
@shlee i wanna do kinda like what you do with the Posters Union, but for web forums. If there's a community group or a club or a bunch of people that want to start a web forum, I want to host & manage it for them, for free, with the funds coming from donations from folks that aren't involved in that community but are just web nerds that want more forums in general and are sympathetic to the cause. Being a charity helps build a bit of trust and means people's donations are tax deductible.
I HAVE TO AAAA
A hard to swallow pill for some folks out here, but the main issue with #IPv6 is this:
#IPv6 project updates:
- I leased a /48 from https://freerangecloud.com and figured out how to get Bird to announce it from a Vultr VPS (shockingly cheap, $5/mo VPS + $8/year for the prefix)
- Took multiple hours last night and this morning but I finally got a /56 of that delegated into my #homelab over a Wireguard tunnel, including proper routing and return paths.
- I also have my Comcast-provided /60 delegated out to a couple VLANs and am shoving it into Mikrotik static DNS records.
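For a sense of what the Bird side looks like: a minimal BIRD 2.x sketch of announcing a leased prefix over BGP. The ASNs, neighbor address, and prefix below are placeholders, not my actual config.

```
# bird.conf sketch (BIRD 2.x) -- all numbers here are illustrative
protocol static announce {
    ipv6;
    route 2001:db8:1000::/48 unreachable;   # originate the leased prefix
}

protocol bgp upstream {
    local as 65001;                  # placeholder ASN
    neighbor 2001:db8::1 as 64515;   # placeholder BGP session endpoint
    ipv6 {
        import none;                 # take a default route out-of-band
        export filter {
            if net = 2001:db8:1000::/48 then accept;
            reject;                  # never leak anything else
        };
    };
}
```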
My vague concept of a plan is now to get NAT64 and DNS64 working on another Vultr-delegated IPv6-only VLAN and then stick a VM there with some public services. Once I have that working I can think about bringing VMSave back onto #selfhosted infra and retiring that VPS.
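Since DNS64 is on the to-do list: the synthesis it performs is just embedding an IPv4 address into an IPv6 prefix. A tiny sketch, assuming the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052 (a deployment can also use a prefix from its own space):

```python
import ipaddress

# Well-known NAT64 prefix (RFC 6052); the low 32 bits carry the IPv4 address
WKP = ipaddress.ip_network("64:ff9b::/96")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the NAT64 prefix, as DNS64 would
    when a name has no AAAA record."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(WKP.network_address) | int(v4))

print(synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```

An IPv6-only client then talks to that synthesized address, and the NAT64 box translates the flow to IPv4 on the way out.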
Remember when in December I had a bit of a debug session about why I couldn't reach my monitoring server via #IPv6 from a DTAG internet line?
Well, this later motivated @domi to dig a bit deeper and it ended up with @q3k sending some emails to EPIX, DTAG and Lumen about it, as it was obviously a bigger issue of an entire /32 IPv6 prefix from Lumen not being accepted by DTAG.
The issue ended up being escalated multiple times and it seemed to take them quite a while to figure out what was wrong.
But as of today (or recently, today is when I got note of it), it works!
DTAG now accepts the 2a0d:eb00::/32 prefix, which they previously rejected due to, supposedly, an incorrectly documented Lumen AS-SET in the RIPE DB.
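If you want to check this sort of thing yourself, the RIPE DB and the IRRs can be queried directly; the AS-SET name below is a placeholder, but the prefix is the one from this story.

```sh
# Look up the route6 object for the prefix in the RIPE DB:
whois -h whois.ripe.net -T route6 2a0d:eb00::/32

# Expand an upstream's IRR AS-SET into a prefix list with bgpq4,
# roughly what a transit provider's filter generation does
# (AS-SET name is hypothetical):
bgpq4 -6 AS-EXAMPLE
```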
This is certainly not the escalation I expected when I couldn't connect to my server back then.