Sylve is a new and very promising bhyve and jail manager for FreeBSD - it comes with clustering support and a nice, modern web UI similar to the #Proxmox one. I had a closer look at it... and I'm amazed!

#sylve #freebsd #jail #jails #bhyve #vm #virtualization #manager #ipv6 #zfs #opensource #runbsd #blog #devops #go #golang #cluster #freebsdcluster #bhyvecluster

https://gyptazy.com/blog/sylve-a-proxmox-alike-webui-for-bhyve-on-freebsd/

@gyptazy Thank you for the kind words, and also for joining the weekly call and giving your thoughts on the status quo in the world of virtualization and clustering.

Hopefully we make a release soon so more people can take this puppy for a test drive!

Also shout out to Go and Svelte for being such a pleasure to work with and helping us make the best out of the rock solid FreeBSD base!

@dexter @gyptazy @hayzam

As I was not on the call, I'll comment here.

From what I remember, HAST is limited to TWO systems - so relying on it for a multi-node bhyve cluster, with 8 nodes for example, does not seem like a solution.

One can utilize another, lesser-known FreeBSD feature - GEOM GATE - with the ggated(8) and ggatec(8) commands.

It serves a block device over TCP/IP - that means each of the 8 nodes of a bhyve cluster can 'serve' its ZFS ZVOL or disk over TCP/IP, and the 'master' node can consume them all and build a ZFS zpool (or another redundant filesystem) on top - it does not have to be a cluster or distributed filesystem. When the 'master' fails, one of the other cluster members takes over the role and mounts the filesystem backed by these GEOM GATE devices on itself. It will require some scripting - sure - but the tools are there.
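For anyone who wants to picture it, a rough sketch of the moving parts - the node names, network and zvol paths are made up, and this is just the raw idea, not anything Sylve does today:

# on every storage node: export a local zvol and start the GEOM GATE daemon
echo "10.0.0.0/24 RW /dev/zvol/tank/vmstore" >> /etc/gg.exports
ggated

# on the current 'master': attach each remote export as a local ggate device
ggatec create -o rw node1 /dev/zvol/tank/vmstore    # -> /dev/ggate0
ggatec create -o rw node2 /dev/zvol/tank/vmstore    # -> /dev/ggate1
ggatec create -o rw node3 /dev/zvol/tank/vmstore    # -> /dev/ggate2

# build a redundant pool on top of the attached devices
zpool create vmpool raidz ggate0 ggate1 ggate2

On failover, the next 'master' would repeat the ggatec create calls and then 'zpool import vmpool' - that is exactly the scripting part mentioned above.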

Just an idea that can be utilized to make Sylve and/or FreeBSD better.

... and thanks for letting me know that zelta.space exists - need to check that one out :]

Regarding HAST: I haven't seen anyone using it for years, especially when talking to (and even targeting) people moving from VMware. But I get the point of supporting BSD/FreeBSD-native solutions first - just like using Jails, ZFS, pf and, of course, bhyve.

For enterprises, we have mostly two use cases:
* Larger clusters (with external storage)
* Two-node clusters (which were based on vSAN)

So, I think the most important part for larger clusters is supporting external storage like iSCSI and FC. External storage as shared storage might be a bit more complex, because we do not have any great-performing alternative to VMFS (we already briefly discussed the issues of accessing block storage from two nodes ;)). The same applies to HAST, and I think HAST only has a place in 2-node setups, which could maybe also be solved with replication, like Proxmox does with ZFS.

For external storage, NFSv4 performs mostly fine for typical workloads and solves these issues (see also: https://gyptazy.com/blog/nfsv3-vs-nfsv4-storage-on-proxmox-the-latency-clash-that-reveals-more-than-you-think/). With nconnect, pNFS and aggregated links, even at 100G we do not have bandwidth issues - it's just about latency. For today's workloads - low-latency DB queries, AI workloads, etc. - we mostly talk about NVMe-oF, and it plays a more and more important role. Running NVMe-oF with SPDK can even cut the latencies in half.

But these are all upcoming additions, and they can all be supported in parallel without any issues - and the best part: under the hood, we already have everything in place. It's more or less just about making it accessible to ClickOps - and that is the important part. If no one feels comfortable using it, no one will use it - that was the issue in the past and it prevented a higher adoption rate.
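Just to illustrate the NFSv4 side on FreeBSD - a minimal sketch, assuming a 13+ client and a hypothetical export storage:/vmstore (option availability depends on the client version):

mount -t nfs -o nfsv4,minorversion=2,nconnect=8 storage:/vmstore /mnt/vmstore

The nconnect option opens multiple TCP connections to the same server, which helps with bandwidth over aggregated links - the per-request latency discussed above stays the same.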

@vermaden@bsd.cafe @dexter@bsd.network @hayzam@bsd.cafe