“more than 80 bytes per week”

what's with the name?!

why is this website called ‘worm-blossom’? In this series, we explain why, concisely.
Part 2: Swatch Internet Time
It is 1998. Two centuries after the French revolutionaries tried (and failed) to metricise time, Nicolas Hayek, the CEO of Swatch, and Nicholas Negroponte, director of the MIT Media Lab, share the stage at the MIT Junior Summit ‘98 to jointly announce “Swatch Internet Time”, which divides the day into 1000 beats.
Hayek is a businessman who has successfully consolidated ailing Swiss watchmakers into the Swatch Group. It is safe to assume that for him, Internet Time is just business, a new marketing campaign. But Negroponte is a true believer: “Internet Time is absolute time for everybody. Internet Time is not geopolitical. It is global. In the future, for many people, real time will be Internet Time,” says Negroponte, an unabashed techno-optimist who regularly asks readers of his column in Wired to “move bits, not atoms”.
He is not alone in his belief that the internet will bring about a liberal utopia. And perhaps that is why, for a minute (or a beat), it looked like Internet Time might actually happen. Watches flashing the time in beats flew off the shelves in Japan. CNN displayed the time in beats in the corner of their news feed, 24/7. Ericsson phones displayed Internet Time. ICQ, a then-popular instant messenger, used it too.
But after the initial fervour, Swatch Internet Time fell into disuse, just as decimal time had two centuries prior. Unlike France's decimal time, though, where the fifth hour still corresponded to noon, 500 beats could mean anything depending on where you were in the world: noon if you were in Biel, where Swatch’s headquarters are situated, but morning or midnight elsewhere. Swatch Internet Time asked us to abandon the real world as our frame of reference and replace it with the internet instead.
But why is worm-blossom called worm-blossom? If we turn the camera away from the stage at Junior Summit ‘98 and zoom into the audience, we can see Isao Okawa, then chairman of Sega Enterprises. We’ll meet him again in our next instalment.

worm-blossom: now on Fridays. We have moved the publishing day because having a Monday deadline put a lot of pressure on the weekend. On Thursdays I'd think I had so much time to write about the French revolution, conveniently forgetting that I never get to use a computer on the weekend. Then Monday would turn into a very stressful day. Now Friday can be a very stressful day instead, which is much better.
We sadly had our proposal for Rummager rejected this week. It's one of those ideas that is very hard to convey, and I'm starting to think we'll just need to build out a prototype of it in our spare time to be able to do that.
It's a bit similar to something else I noticed this week: no matter how much documentation we write — loaded with illustrations and references and sidenotes — you just can't build up the same kind of understanding as when you actually get your hands on the thing. That's why I'm thinking of building some interactive tools for the Willow Data Model page.
In any case, we have no shortage of (funded) plans. Keep muddling through out there.
~sammy
We have opinions
...on 'fancy' CRDTs
Willow uses a last-write-wins strategy for resolving conflicts. With so many sophisticated alternatives out there, why would we do that?
Systems which automagically merge data tend to fall into two camps:
- Good at merging self-contained units of data (e.g. merging multiple edits of text into a single coherent edit)
- Good at merging collections of data (e.g. merging separate timelines of posts into a single timeline with everyone’s posts in it)
Intelligently merging self-contained data like text is undoubtedly an impressive technical (and scholarly) feat! But most of the connected applications we use day to day don't need it: microblogging, chatrooms, forums, messaging, media sharing, issue trackers. These are all applications where people author data alone before contributing it to a common pool of data.
Fine-grained CRDTs have a hard limit on how much they can be optimised. And they often require lugging around a complete history of changes, forever.
Which is why Willow focuses on merging potentially huge collections of interleaved data. You get a system which can serve a broad spectrum of connected applications, and one which we know we can make efficient.
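To make the last-write-wins part concrete, here is a minimal sketch of the technique (an illustration of the general idea, not Willow's actual data structures): every entry carries a timestamp, merging two replicas keeps the newer entry per key, and ties are broken deterministically.

use std::collections::HashMap;

// A minimal last-write-wins map; an illustration of the general
// technique, not Willow's actual data structures.
#[derive(Clone, Debug)]
struct Entry {
    timestamp: u64,
    value: String,
}

// Merge two replica states, keeping the newer write per key and
// breaking timestamp ties deterministically (here: by value), so
// that all replicas converge to the same state.
fn merge(
    a: &HashMap<String, Entry>,
    b: &HashMap<String, Entry>,
) -> HashMap<String, Entry> {
    let mut out = a.clone();
    for (key, entry) in b {
        let keep_existing = out
            .get(key)
            .map(|e| (e.timestamp, &e.value) >= (entry.timestamp, &entry.value))
            .unwrap_or(false);
        if !keep_existing {
            out.insert(key.clone(), entry.clone());
        }
    }
    out
}

Because this merge is commutative, associative, and idempotent, replicas can exchange state in any order, any number of times, and still end up agreeing.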

Ufotofu-Based Encoding and Decoding
We have released a new minor version of ufotofu, adding traits and helpers for encoding and decoding values.
use ufotofu::codec_prelude::*;
use ufotofu::codec::endian::U32BE;

let mut buf = [99; 5];
let mut con = (&mut buf).into_consumer();

// Encode a u32 into big-endian bytes...
U32BE(258).encode(&mut con).await?;
assert_eq!(buf, [0, 0, 1, 2, 99]);

// ...and decode them back into a u32.
let mut pro = buf.into_producer();
assert_eq!(U32BE::decode_canonic(&mut pro).await?.0, 258);

In addition to traits for conventional encoding and decoding, we also have a module for encoding and decoding relative to some context shared between encoder and decoder. These traits allow for techniques such as delta encodings. Many of the encodings of Willow rely on this to encode information significantly more compactly than if everything was encoded in isolation.
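To give a flavour of what relative encoding buys you, here is a hand-rolled sketch of the delta-encoding idea (plain Rust, deliberately not using ufotofu's actual relative codec traits): instead of encoding a value in full, encode its difference to a reference value that encoder and decoder both already know.

// Encode `value` as its difference to a shared reference value,
// using a tiny varint: 7 bits of payload per byte, high bit set
// while more bytes follow. Assumes value >= reference.
fn encode_relative(value: u64, reference: u64) -> Vec<u8> {
    let mut rest = value - reference;
    let mut out = Vec::new();
    loop {
        let byte = (rest & 0x7f) as u8;
        rest >>= 7;
        if rest == 0 {
            out.push(byte);
            return out;
        }
        out.push(byte | 0x80);
    }
}

// Recover the value again; assumes `bytes` is a valid encoding
// produced by encode_relative against the same reference.
fn decode_relative(bytes: &[u8], reference: u64) -> u64 {
    let mut delta = 0u64;
    for (i, byte) in bytes.iter().enumerate() {
        delta |= u64::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            break;
        }
    }
    reference + delta
}

An eight-byte timestamp a few seconds after the shared reference encodes into a single byte this way; the same mindset, generalised over arbitrary contexts, is what the relative codec traits capture.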
In general, the codec traits of ufotofu map directly to the notions of (relative) encoding relations we use throughout the Willow specs. This does not mean that ufotofu takes on an opinionated formalism applicable only to Willow. On the contrary, it simply means that the concepts on which we build encodings in Willow are useful general-purpose abstractions.

The codec traits we have released today have been part of older ufotofu versions for quite a while now, and they have been exceptionally useful throughout our Willow-in-Rust journey. In fact, for us they alone already justified ufotofu as a whole. What makes them so special?
Not just any value that can be turned into a bytestring gets to implement our Encodable trait. No, there are precise formal requirements that all possible encodings must fulfil: every possible value must have at least one encoding, no two nonequal values may have the same encoding, no valid encoding may be a prefix of any other encoding. That sort of thing (see the docs for the precise list). But still, how is that supposed to be incredibly productivity-boosting?
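Before we get to the answer, here is what two of those requirements look like in code, as a self-contained toy sketch (emphatically not ufotofu's actual API) that brute-force checks injectivity and prefix-freeness for a trivially well-behaved encoding:

// A toy encoding: fixed-width big-endian bytes, which are
// trivially injective and prefix-free.
fn toy_encode(v: u16) -> Vec<u8> {
    v.to_be_bytes().to_vec()
}

// Panics iff `values` contains a counterexample to injectivity
// or prefix-freeness of toy_encode.
fn assert_injective_and_prefix_free(values: &[u16]) {
    for a in values {
        for b in values {
            if a != b {
                let (ea, eb) = (toy_encode(*a), toy_encode(*b));
                // No two nonequal values may share an encoding...
                assert_ne!(ea, eb);
                // ...and no encoding may be a prefix of another one.
                assert!(!eb.starts_with(&ea));
            }
        }
    }
}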
The answer lies in the humble codec::proptest module. It provides functions such as assert_codec. You call that function with a bunch of values, and the function panics if and only if the arguments are a counterexample to any of the formal criteria an encoding must fulfil.
We use this to let a coverage-guided fuzzer try to find counterexamples. If it cannot, then we can be pretty darn confident that our implementation is correct (which only a few of our initial implementation attempts are). And when it is not, we just press a button, and the computer gives us a set of failing inputs. No matter how many encodings we implement (and Willow alone has quite a few), we always get an almost ideal test setup and debugging aid for free.
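Wired up to a fuzzer, that kind of check becomes a push-button affair. A cargo-fuzz target could look roughly like this sketch, which reuses the toy check from above in place of ufotofu's real assert_codec:

// Sketch of a cargo-fuzz target; assumes the toy
// assert_injective_and_prefix_free from the sketch above is in
// scope. ufotofu's real assert_codec plays the analogous role.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|values: Vec<u16>| {
    // Panics iff the fuzzer-generated `values` are a counterexample;
    // the fuzzer then minimises them into a small failing input.
    assert_injective_and_prefix_free(&values);
});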
And even better, this process has uncovered plenty of faulty definitions of encodings in the Willow specs as well. Some of our encodings are quite fiddly, and it is easy to lose track of some detail and specify an encoding which cannot be uniquely decoded, for example. But if the fuzzer cannot find a counterexample after 20 minutes, we can be highly confident that the encoding we specified is free of bugs.
Now that we have properly released ufotofu and its tooling around encodings, we will return to the Rust implementation of Willow proper. If everything goes according to plan, we will release a cleaned-up, well-documented, and ergonomic-to-use implementation of Meadowcap next week. And internally, it will be powered by ufos and tofu!
~Aljoscha
Shout-outs!
Many thanks again to Miaourt, who implemented a complete suite of indexing traits for the Component struct in willow_rs. They also included fuzz tests — and upgraded our fuzz testing tooling while they were at it.
And thank you to kzdnk, who has made it possible to set an Entry's payload digest using one of ufotofu's BulkProducers. This change also came with lovely docs and fuzz tests.
links of the week
- Who needs Graphviz when you can build it yourself? - “Perhaps programmers ought to put less trust into magic optimizing systems, especially when a human-friendly result is the goal. Simple (and stupid) algorithms can be very effective when applied with discretion and taste.”
- Mechanisms and Aesthetics of Online Radicalisation with Danielle Brathwaite-Shirley and Cade Diehm (video, 1hr 30m) - What a great pairing of presenters. I hadn't seen Danielle Brathwaite-Shirley's work before, but her installations (where she often found technical interfaces got in the way of people reaching consensus) are beautiful and fascinating. Friend of the site Cade ends his sobering talk with radical optimism (if the fascists can reappropriate technology to their ends, so can we). Stick around for the Q&A.









For breakfast I often turn to the humble porridge. I like to put big chunks of apple in my porridge, which cook just long enough to get soft but still keep their structure, kind of like in an apple pie. I like to cook my porridge low and slow, which really annoys everyone else.
You can't go far wrong with a curry. I break up a cauliflower into little florets and roast them in the oven until they start to caramelise at the edges. I could eat them just like that, but in a curry sauce they're even better. I make a curry sauce from scratch and it is completely inauthentic.
Salmon is great when you're low on time because you just smack it in the oven for a bit. In the colder months I like to do a roast vegetable mix alongside it: potato, beetroot, sweet potato, red onion, topped with some capers. The purple of the beetroot and onions looks so good next to everything else.
And when it all became too much for me, I just ordered pizza. hashtag winning

