With #iocaine 3.0, where the request handler is mandatory, I kept thinking about how to make nixocaine and nam-shub-of-enki play well together. I came up with funky schemes and many nix crimes.

Last night, just as I was going to bed, I realized I don't need any of that. Since nixocaine is already a separate thing, and does not build on anything but the package provided by iocaine, I can simply make it use nam-shub-of-enki as an input too. Rather than having a separate NSoE module that integrates with nixocaine, it would all just live in nixocaine.

The hardest part of updating my templates will be preparing a JPEG training set. I don't want my images to be that large, but I need some size variance, so I'm planning to convert a bunch of random pictures I took into low-quality JPEGs between 128x128 and 512x512, and train on those.
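Something along these lines is what I have in mind for the conversion step, sketched with the `image` crate; the directory names, the fixed set of target sizes, and the quality value are placeholders I picked for the sketch, not anything iocaine prescribes:

```rust
use image::codecs::jpeg::JpegEncoder;
use image::imageops::FilterType;
use std::fs::{self, File};
use std::path::Path;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A handful of target sizes between 128x128 and 512x512, cycled through
    // to get some size variance without pulling in an RNG.
    let sizes = [128u32, 256, 384, 512];
    fs::create_dir_all("training")?;

    for (i, entry) in fs::read_dir("originals")?.enumerate() {
        let path = entry?.path();
        let img = match image::open(&path) {
            Ok(img) => img,
            Err(_) => continue, // skip anything that isn't an image
        };

        // Fit the picture into an NxN box (keeping the aspect ratio),
        // then re-encode it as a fairly low quality JPEG.
        let n = sizes[i % sizes.len()];
        let small = img.resize(n, n, FilterType::Lanczos3).to_rgb8();

        let out = Path::new("training").join(format!("{i:04}.jpg"));
        let mut encoder = JpegEncoder::new_with_quality(File::create(out)?, 40);
        encoder.encode_image(&small)?;
    }
    Ok(())
}
```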

Btw, iocaine will require pre-training for JPEGs: it will be able to load templates serialized into CBOR, but it will not be able to train on JPEGs at init time.

This is different from the wordlist & Markov corpus, where we do train at init time. Training on JPEGs is much more expensive, and pre-training is a whole lot easier. So this is the compromise I made.
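The shape of it is roughly this: a pre-training tool does the expensive part once and writes the result out as CBOR, and at init time only the deserialization happens. The `JpegTemplate` struct and its fields below are made up purely for the sketch (serde plus ciborium for the CBOR part); they are not iocaine's actual types:

```rust
use serde::{Deserialize, Serialize};
use std::fs::File;

// A stand-in for whatever the training step actually extracts from the
// sample JPEGs; the real structure lives in iocaine, not here.
#[derive(Serialize, Deserialize)]
struct JpegTemplate {
    width: u32,
    height: u32,
    payload: Vec<u8>,
}

fn main() {
    // Expensive step, done ahead of time: build templates from the training set.
    let templates = vec![JpegTemplate {
        width: 128,
        height: 128,
        payload: vec![0xff, 0xd8], // JPEG SOI marker, as dummy data
    }];
    ciborium::ser::into_writer(&templates, File::create("templates.cbor").expect("create file"))
        .expect("serialize templates");

    // Cheap step, done at init time: just load the serialized templates.
    let loaded: Vec<JpegTemplate> =
        ciborium::de::from_reader(File::open("templates.cbor").expect("open file"))
            .expect("deserialize templates");
    println!("loaded {} template(s)", loaded.len());
}
```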

And it looks like I'll have to implement the tiny templating language too, to get the kind of flexibility I want.

This will be a nice use case for winnow! One of the reasons I used it for fakejpeg-rs was to figure out whether it'd be a good fit for parsing my templating language, and it will be.
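For a taste of what that could look like: a tiny winnow parser that splits a template into literal text and `{{ name }}` style placeholders. The syntax is just something I made up for the sketch, not what the templating language will actually use:

```rust
use winnow::combinator::{alt, delimited, repeat};
use winnow::error::{ContextError, ErrMode};
use winnow::token::take_while;
use winnow::Parser;

#[derive(Debug, PartialEq)]
enum Piece<'a> {
    Text(&'a str),
    Var(&'a str),
}

// `{{ident}}` becomes a variable reference.
fn var<'a>(input: &mut &'a str) -> Result<Piece<'a>, ErrMode<ContextError>> {
    delimited(
        "{{",
        take_while(1.., |c: char| c.is_alphanumeric() || c == '_'),
        "}}",
    )
    .map(Piece::Var)
    .parse_next(input)
}

// Everything up to the next `{` is literal text.
fn text<'a>(input: &mut &'a str) -> Result<Piece<'a>, ErrMode<ContextError>> {
    take_while(1.., |c: char| c != '{')
        .map(Piece::Text)
        .parse_next(input)
}

fn main() {
    // A template is just a sequence of variables and literal text.
    let pieces: Vec<Piece> = repeat(0.., alt((var, text)))
        .parse("Hello, {{name}}! Welcome to {{site}}.")
        .unwrap();
    println!("{pieces:?}");
}
```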