Discussion
algernon the zellij stan
@algernon@come-from.mad-scientist.club · 2 months ago

With #iocaine 3.0, where the request handler is mandatory, I kept thinking about how to make nixocaine and nam-shub-of-enki play well together. I came up with funky schemes and many nix crimes.

Last night, just as I was going to bed, I realized I don't need any of that. Since nixocaine is already a separate thing, and builds on nothing but the package provided by iocaine, I can simply make it use nam-shub-of-enki as an input too; rather than having a separate NSoE module that integrates with nixocaine, it would all just live in nixocaine.
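In flake terms, the wiring might look something like this (a hedged sketch: the input names and layout are placeholders, and the real nixocaine flake may be structured differently):

```nix
{
  inputs = {
    # Placeholder URLs -- point these at the real iocaine and NSoE repos.
    iocaine.url = "…";
    nam-shub-of-enki = {
      url = "…";
      flake = false;  # assumption: NSoE is consumed as plain source
    };
  };

  outputs = { self, iocaine, nam-shub-of-enki, ... }: {
    # nixocaine builds only on the iocaine package, so it can wire the
    # NSoE source straight into its own module -- no separate NSoE
    # integration module needed.
  };
}
```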

algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

In other news, I'm almost finished adding fakejpeg support to iocaine. Just benchmarked it, and with the iocaine & HTTP overhead, it's comfortably dishing out 100k 2.8MiB 512x512 jpegs / sec.
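Back-of-the-envelope, the quoted rate works out to a striking aggregate bandwidth (assuming every generated JPEG is actually served at the full 2.8 MiB):

```python
# Aggregate throughput for the figures quoted above:
# 100k JPEGs/sec at 2.8 MiB each.
jpegs_per_sec = 100_000
size_mib = 2.8

aggregate_mib_s = jpegs_per_sec * size_mib   # 280,000 MiB/s
aggregate_gib_s = aggregate_mib_s / 1024     # ~273 GiB/s

print(f"{aggregate_gib_s:.1f} GiB/s")
```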

I do not envy the bots.

Ariel (🐿 arc)
@arichtman@eigenmagic.net replied · 2 months ago
@algernon that's mad fast
algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

The hardest part of updating my templates will be preparing a jpeg training set. I don't want my images to be that large, but I need some size variance, so I'm planning to convert a bunch of random pictures I took into low-quality jpegs between 128x128 and 512x512, and train on those.
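One way to script that conversion (a hypothetical sketch, not the author's actual pipeline: it builds ImageMagick `magick` invocations with a randomized square target size and deliberately low JPEG quality):

```python
import random

def conversion_command(src: str, dst: str, rng: random.Random) -> list[str]:
    """Build an ImageMagick command that resizes `src` to a random square
    size between 128x128 and 512x512 and re-encodes it as a low-quality JPEG."""
    size = rng.randrange(128, 513)  # inclusive of 512
    return [
        "magick", src,
        "-resize", f"{size}x{size}",
        "-quality", "20",           # low quality keeps the files small
        dst,
    ]

cmd = conversion_command("photo.png", "train-0001.jpg", random.Random(42))
```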

algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

btw, iocaine will require pre-training for jpegs: it will be able to load templates serialized to CBOR, but it will not be able to train on jpegs at init time.

This is different from the wordlist & markov corpus, where we do train at init time. Training on jpegs is much more expensive, and pre-training is a whole lot easier. So this is the compromise I made.
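The split looks roughly like this (a conceptual sketch, not iocaine's actual code: `train_jpeg_template` is a hypothetical placeholder, and stdlib JSON stands in for CBOR so the example stays self-contained):

```python
import json

def train_jpeg_template(paths):
    # Placeholder for the expensive offline training step.
    return {"version": 1, "corpus": list(paths)}

def pretrain(paths, out_path):
    """Offline, one-time: train on the JPEG corpus and serialize the result.
    (In iocaine the serialized format is CBOR; JSON is a stand-in here.)"""
    with open(out_path, "w") as f:
        json.dump(train_jpeg_template(paths), f)

def load_template(path):
    """At init time: only deserialize -- cheap, no training."""
    with open(path) as f:
        return json.load(f)
```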

algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

aaand merged to main!

I'll update my WIP nam-shub branch later.

algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

Updating NSoE postponed till tomorrow. Not feeling it atm. There are also a couple of small things I may need to implement in iocaine to make what I'm aiming for possible.

algernon the zellij stan
@algernon@come-from.mad-scientist.club replied · 2 months ago

And it looks like I'll have to implement the tiny templating language too, to get the kind of flexibility I'm after.

This will be a nice use case for winnow! One of the reasons I used it for fakejpeg-rs was to figure out whether it'd be a good fit for parsing my templating language, and it will be.
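The actual language will be parsed with winnow in Rust; as a toy illustration of the kind of tiny templating syntax involved, here's a minimal `{{name}}` substitution in Python (the syntax is an assumption, not iocaine's real language):

```python
import re

# Matches {{name}} with optional surrounding whitespace inside the braces.
TOKEN = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render(template: str, values: dict[str, str]) -> str:
    """Substitute {{name}} tokens; unknown names are left untouched."""
    return TOKEN.sub(lambda m: values.get(m.group(1), m.group(0)), template)

print(render("size: {{w}}x{{h}}", {"w": "512", "h": "512"}))  # → size: 512x512
```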

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate