“very organic, shaped by psychopathologies”


Bab 0.5.0-alpha.1 Released
Finally, a new release announcement! There’s a new version of our Bab crate, exposing verifiable streaming and data storage. You can create a storage backend for a byte string of some known hash, and then feed data into that backend piece by piece. As long as that data is a prefix of the expected string, it gets added to the backend. If it isn’t a prefix, you get an error, and can later resume from that point.
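To give a feel for the shape of this, here is a minimal sketch of prefix-verified incremental storage. This is not the real Bab API — all names here are made up, and where Bab verifies chunks against a hash, this toy keeps the full expected string around for comparison:

```rust
// Hypothetical sketch of prefix-verified incremental storage. NOT the real
// Bab API: Bab verifies against a hash, this toy compares against the
// expected bytes directly, purely for illustration.
struct PrefixStore {
    expected: Vec<u8>, // what the incoming data must be a prefix of
    stored: Vec<u8>,   // verified data received so far
}

#[derive(Debug, PartialEq)]
struct NotAPrefix; // returned when an incoming chunk diverges

impl PrefixStore {
    fn new(expected: Vec<u8>) -> Self {
        Self { expected, stored: Vec::new() }
    }

    /// Append `chunk` if `stored ++ chunk` is still a prefix of `expected`.
    /// On mismatch, nothing is added; the caller can resume from `len()`.
    fn push(&mut self, chunk: &[u8]) -> Result<(), NotAPrefix> {
        let end = self.stored.len() + chunk.len();
        if end > self.expected.len()
            || &self.expected[self.stored.len()..end] != chunk
        {
            return Err(NotAPrefix);
        }
        self.stored.extend_from_slice(chunk);
        Ok(())
    }

    fn len(&self) -> usize {
        self.stored.len()
    }
}

fn main() {
    let mut store = PrefixStore::new(b"hello world".to_vec());
    assert!(store.push(b"hello").is_ok());
    assert!(store.push(b"xxx").is_err()); // diverges: rejected
    assert_eq!(store.len(), 5);           // resume point is unchanged
    assert!(store.push(b" world").is_ok());
    assert_eq!(store.len(), 11);
}
```

The point of the error-then-resume behaviour is that a failed chunk leaves the verified prefix intact, so a stream can pick up exactly where verification last succeeded.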
This release is an alpha prerelease instead of a proper new version. The reason is that not all functionality is tested yet: everything to do with the k-grouping optimisation of Bab is untested and very, very likely to be broken. Also, the persistent storage backend is still untested. But in-memory incremental verification works!
By next week, we will test the persistent backend, and then we can move on to testing our new Willow store implementation. Fingers crossed that we can release persistent Willow storage next Friday!
Shout-outs!

Ben Frye has added some helpful new adaptors to ufotofu. And caused a bit of an oh-no-do-we-need-to-break-the-core-trait-designs-again moment in the process, but later found a non-breaking solution to the problem. Phew! And then Ben also stumbled upon an issue with some of the built-in consumers of ufotofu that will need fixing. So quite a fruitful contribution. Thank you!

Also on the ufotofu side of things, Danmw has implemented another pair of adaptors. This was the most recent in a series of contributions, all of which demonstrated a deep care for the codebase (not to mention our symmetry-pondering discussions on Discord). So I went ahead and gave Dan full access to the ufotofu repo and publishing rights on crates.io. Version 0.10.1 of ufotofu was published by Dan! bus_factor += 1 =)

This Monday I took the train to Rotterdam to go speak at XPUB.
Before we started, Doriane (who had invited me after we’d met at FOSDEM) gave me a quick tour of the XPUB facilities, way up on the top floor of a building overlooking the Nieuwe Maas. I was really enamoured with the atmosphere of their space, which had zine-filled shelves, boards crowded with posters, and students working away in cosy, personal spaces. It didn’t at all have the feeling of a curated display case, but rather a living, breathing space with work flowing through it. Nice.
And as for my part: we neatly avoided a sammy slideshow situation (thank you doriane) and instead had a kind of guided discussion for two hours, in which we hardly talked about Willow. It was great. I had a big chalkboard behind me on which we assembled a timeline of nearly three decades of peer-to-peer trials and tribulations, from Napster’s legal dismantling to the state-sanctioned deactivation of Apple’s Airdrop. Interjections from the rest of the room were frequent and always thoughtful, and when we finished at 19:30 it felt like we could have kept on going.
Like I said last week, I couldn’t leave it on a downcast note. After all, what are we going to all this trouble for? I argued that we had an opportunity to remember that data is just a byproduct, and that the point is for it to change us somehow. Peer-to-peer protocols have some unique features which encourage us to learn to become conscious stewards — rather than hoarders — of data.
Anyway. Great time, great people, hope to be invited back again some day.
But that’s not all I got up to this week. I also implemented payload storage in the new Willow persistent store, using the Bab implementation Aljoscha has been working on. Streaming, verified storage! It’s a beautiful thing. Or it will be a beautiful thing once we’ve got all the bugs ironed out.
~sammy

Do I write a thoughtful editorial, or do I simply finalise this rather late update? While I have no idea about the identity of the BOTTLENECK in Blossoquest, I sure know that for this week’s update I was the bottleneck. But maybe there is still time to share a few fun ideas.
- A live-coding environment for shaping the sound of digital instruments. Basically, you edit a function that takes as inputs a frequency, a volume level, the time since the note was triggered, and the time since it was released, and you return a value between -1 and 1. This function is called 44000 times per second to produce sound. Oh, and the function has access to the samples it produced for the past x seconds, so that you can employ feedback and self-reference in shaping the sound. (shoutout to the best repo on github, but this would have a live-refreshable js interface instead of compiling rust programs into command line utilities)
- A theremin-like, mouse-based (or touch-pen!) UI for playing the live-coded instruments. Should have a toggle for switching between continuous frequencies and a mode that discretises into the 12 notes of well-tempered tuning. And while you move the right hand on the frequency-volume domain, the left hand can do key-presses: one key functions as a sustain pedal, another represents whether to produce sounds at all (pressing and holding that key is like pressing and holding a note on a piano).
- A language for writing down instructions for triggering the digital instruments. Basically, what happens if you take the good parts of Lilypond notation and free them from the complexities of score engraving. You end up with a language for structuring events in a flow of time. The language would “compile” into a stream of MIDI or OSC signals. And, the theremin-like UI has a recording function that saves live input in the form of that written language.
- Also, for something completely different: cellular automata (think game of life), but with a notion of continuous force fields. Cells can emit a force field, and the values of the field instantaneously propagate between cell updates, with a configurable decay based on distance to the emitter(s). Imagine a tree-drawing automaton where each cell of tree trunk acts as an emitter on the force field of treeness, and when picking a direction for growth, cells of low treeness are more likely to be picked.
- And how do I turn the oscillations of a cellular automaton into music?
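The live-coding idea above boils down to one user-edited function evaluated once per sample tick. Here is a hypothetical sketch of what such a voice could look like — all names are made up, and the body (a sine with a dash of feedback from older samples) is just one possible thing to write in that editable function:

```rust
// Hypothetical sketch of the live-codable voice function described above.
// The function maps (frequency, volume, time since note-on, time since
// note-off) to a sample in [-1.0, 1.0], and may read back previously
// produced samples for feedback and self-reference.
const SAMPLE_RATE: f32 = 44_000.0; // calls per second, as described above

struct Voice {
    history: Vec<f32>, // samples produced so far, for self-reference
}

impl Voice {
    fn new() -> Self {
        Self { history: Vec::new() }
    }

    /// One evaluation of the user-edited function: a sine, plus a touch of
    /// feedback from the sample produced a quarter second ago (if any).
    fn tick(&mut self, freq: f32, volume: f32, since_on: f32, since_off: f32) -> f32 {
        let _ = since_off; // a release envelope would go here
        let phase = 2.0 * std::f32::consts::PI * freq * since_on;
        let delayed = self
            .history
            .len()
            .checked_sub((SAMPLE_RATE / 4.0) as usize)
            .map(|i| self.history[i])
            .unwrap_or(0.0);
        let sample = (volume * phase.sin() + 0.3 * delayed).clamp(-1.0, 1.0);
        self.history.push(sample);
        sample
    }
}

fn main() {
    let mut v = Voice::new();
    for n in 0..4_400 {
        let s = v.tick(440.0, 0.8, n as f32 / SAMPLE_RATE, 0.0);
        assert!((-1.0..=1.0).contains(&s)); // always a valid sample
    }
}
```

A live-coding environment would hot-swap the body of `tick` while the history buffer keeps running, which is what makes feedback across edits possible.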
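The force-field idea can also be sketched in a few lines. This is a made-up minimal version: emitters on a grid, and a field value at every point that decays with distance to each emitter (the decay function 1 / (1 + d) is an arbitrary choice):

```rust
// Hypothetical sketch of the continuous force-field idea for cellular
// automata. Each emitter contributes to the field everywhere, with a
// configurable decay by distance; here we use 1 / (1 + distance).
fn field(emitters: &[(f32, f32)], x: f32, y: f32) -> f32 {
    emitters
        .iter()
        .map(|&(ex, ey)| {
            let d = ((x - ex).powi(2) + (y - ey).powi(2)).sqrt();
            1.0 / (1.0 + d) // decays smoothly with distance to the emitter
        })
        .sum()
}

fn main() {
    // One trunk cell emitting "treeness" at the origin: the field is
    // strongest there and weakens as we move away.
    let trunk = [(0.0, 0.0)];
    let near = field(&trunk, 1.0, 0.0);
    let far = field(&trunk, 5.0, 0.0);
    assert!(near > far);
    // A growth rule like the one above would then weight candidate cells
    // so that LOW-treeness cells are more likely to be picked, steering
    // new growth away from the existing trunk.
}
```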
ALSO: I have taken some work-in-progress drafts of other ideas of mine and moved them from obscure branches into a work-in-progress directory here at the deployment of worm-blossom.org. There is an incomplete but already informative spec draft of the media file formats I wrote about some weeks ago. There is a sketch of a bit-level variable length integer encoding for usage in the media formats. And there’s a draft of the website for Macromania. Without styling, and without most of the content, but the getting-started guide and the introduction-to-macros writeup are functional. You could start using Macromania today!
Oops, I suppose I did end up writing a full editorial.
~Aljoscha





So how about ‘persona’? Data can be verified to be written by a persona, you can have as few or as many as you like, and you can attach as much ‘identity’ information (e.g. display name, avatar) as you want… or not. Miaourt suggested using little masks as the iconography for this, which I really love.

This week I finished a super secret side-project that I will reveal... later :>




The real trouble started, however, once we needed willow25 wrappers for generic types made up of other generic types. For example, a generic 

With that out of the way I can get back to the important work of being a weird little frog. This weekend I’ll be heading to the Internet Archive’s new European Headquarters in Amsterdam for a little 

With no more slides to prepare, I returned to programming (shudder). It has been a while, and I feel clumsy and slow, especially in Rust. But I’ve started implementing the prerequisite encodings for 




Luckily I was met by friends of worm-blossom (FoWB) 


