Discussion
gyptazy
@gyptazy@gyptazy.com · last month

Sylve is a new and very promising bhyve and jail manager for FreeBSD - coming with clustering support and a pretty nice, modern web UI similar to the #Proxmox one. I had a closer look at it... And I'm amazed!

#sylve #freebsd #jail #jails #bhyve #vm #virtualization #manager #ipv6 #zfs #opensource #runbsd #blog #devops #go #golang #cluster #freebsdcluster #bhyvecluster

https://gyptazy.com/blog/sylve-a-proxmox-alike-webui-for-bhyve-on-freebsd/

Ed Maste
@emaste@mastodon.social replied · last month
@gyptazy I'm happy about the #FreeBSD Foundation's collaboration with Alchemilla on the sponsorship of this project; it's coming along very well! The Foundation's also sponsoring improvements to #libvirt's #bhyve support to help ensure it's a stable foundation to build on.
hayzam
@hayzam@mastodon.bsd.cafe replied · last month
@gyptazy Thank you for the kind words, and also for joining the weekly call and giving your thoughts on the status quo in the world of virtualization and clustering.

Hopefully we make a release soon so more people can take this puppy for a test drive!

Also shout out to Go and Svelte for being such a pleasure to work with and helping us make the best out of the rock solid FreeBSD base!

xyhhx 🔻 (plz hire me)
@xyhhx@nso.group replied · last month
@gyptazy this is cool as fuck
gyptazy
@gyptazy@gyptazy.com replied · last month
@xyhhx@nso.group indeed, indeed - it really is! And I’m still amazed 🥳
gyptazy
@gyptazy@gyptazy.com replied · last month

Finally managed to join @dexter@bsd.network’s bhyve call after a long time. Really happy to see that #sylve plays an important role and that we could discuss upcoming feature implementations. Things are definitely moving in the right direction, with clearly visible progress! Thanks @hayzam@bsd.cafe

#freebsd #bhyve #jail #jails #development

Michael Dexter
@dexter@bsd.network replied · last month
@gyptazy @hayzam The call recording is up:

https://bsd.network/@dexter/115149669231664294

vermaden
@vermaden@mastodon.bsd.cafe replied · last month
@dexter @gyptazy @hayzam

As I was not in the call I would comment here.

From what I remember, HAST is limited to TWO systems - so relying on it for a multi-node bhyve cluster, like 8 nodes for example, does not seem like a solution.

One can use another, lesser-known FreeBSD feature - GEOM GATE - with the ggated(8) and ggatec(8) commands.

It serves a block device over TCP/IP - that means each of the 8 nodes of the bhyve cluster can 'serve' its ZFS ZVOL or disk over TCP/IP, and the 'master' node can use them all to build a ZFS zpool (or another redundant filesystem) on top - it does not have to be a cluster or distributed filesystem. When the 'master' fails, one of the other cluster members takes over the role and mounts the filesystem on these GEOM GATE devices itself. It will require some scripting - sure - but the tools are there.
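
A rough sketch of how that could be wired up with ggated(8)/ggatec(8) - hostnames, ZVOL paths and the pool layout below are just placeholders:

    # On each worker node: allow the master to access the local ZVOL,
    # then start the GEOM GATE daemon.
    #   /etc/gg.exports:
    #   10.0.0.1 RW /dev/zvol/tank/sylve-disk
    ggated

    # On the current 'master': attach each exported device ...
    ggatec create -o rw node1 /dev/zvol/tank/sylve-disk   # -> /dev/ggate0
    ggatec create -o rw node2 /dev/zvol/tank/sylve-disk   # -> /dev/ggate1
    ggatec create -o rw node3 /dev/zvol/tank/sylve-disk   # -> /dev/ggate2

    # ... and build a redundant pool on top of the remote devices.
    zpool create vmpool raidz /dev/ggate0 /dev/ggate1 /dev/ggate2

    # On failover, another node re-attaches the devices with ggatec
    # and imports the pool: zpool import -f vmpool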

Just an idea that can be utilized to make Sylve and/or FreeBSD better.

... and thanks for letting me know that zelta.space exists - need to check that one out :]

gyptazy
@gyptazy@gyptazy.com replied · last month

Regarding HAST: I haven't seen anyone using it for years, especially when talking to, and even targeting, people moving from VMware. But I get the point of supporting BSD/FreeBSD-native solutions first; just like using Jails, ZFS, pf and of course bhyve.

For enterprises, we have mostly two use cases:
* Larger clusters (with external storage)
* Two-node clusters (which were based on vSAN)

So, I think the most important part for larger clusters is support for external storage like iSCSI and FC. External storage as shared storage might be a bit more complex, because we do not have any well-performing alternative to VMFS (we already briefly discussed the issues of accessing block storage from two nodes ;)). The same applies to HAST, and I think HAST only has a place in 2-node setups, which could maybe also be solved with replication, like Proxmox does with ZFS.

For external storage, NFSv4 performs mostly fine for typical workloads and solves those issues (see also: https://gyptazy.com/blog/nfsv3-vs-nfsv4-storage-on-proxmox-the-latency-clash-that-reveals-more-than-you-think/). With nconnect, pNFS and aggregated links, even at 100G we do not have bandwidth issues; it's just about latency. For today's workloads - low-latency DB queries, AI workloads, etc. - we mostly talk about NVMeoF, and it plays an increasingly important role. Running NVMeoF with SPDK can even cut latencies in half.

But these are all upcoming additions, and they can all be supported in parallel without any issues - and best of all, under the hood we already have everything in place. It's more or less just about making it accessible to ClickOps - and that is the important part. If no one feels comfortable using it, no one will use it - that was the issue in the past and it held back wider adoption.
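
As a rough illustration of the NFS side, a mount on a FreeBSD host could look like the sketch below - server name, export path, mountpoint and connection count are placeholders, and the nconnect option assumes a FreeBSD release that supports it:

    # Mount shared VM storage over NFSv4.2, spreading traffic across
    # several TCP connections so a single stream is not the bottleneck.
    mkdir -p /vms
    mount -t nfs -o nfsv4,minorversion=2,nconnect=8 storage01:/export/vms /vms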

@vermaden@bsd.cafe @dexter@bsd.network @hayzam@bsd.cafe

Michael Dexter
@dexter@bsd.network replied · last month
@vermaden @gyptazy @hayzam CC @crest