Don’t underestimate the utility of naming meeting rooms. As someone who was almost late to a meeting in the “conference room next door, but not that one you’re thinking of,” I can attest that names are super useful.

People at work were always talking about a bánh mì place that was “across the road.” Today, I realised that they weren’t referring to the major road a few doors down from the office, but literally across the street the office is located on (you can see it from the front door). Pretty decent bánh mì.

Watching an integration video to learn more about how to work with an API service provider. First slide:

We use HTTP REST, which has 4 verbs: GET, POST, PUT, DELETE. You just used a GET to get this video…

Ah, I guess we’re starting this story at the Big Bang. 😩

I know for myself that if an OS vendor started designing their products thinking that I’d want an emotional connection with my computer, I’d start looking for another OS vendor. This is coming from someone who’s turned off Siri on all their Macs. I’m here to use my computer, not make new friends.

All the recent changes to UCL are in service of unifying the scripting within Dynamo Browse. Right now there are two scripting languages: one for the commands entered after pressing :, and one for extensions. I want to replace both of them with UCL, which will power interactive commands and extensions alike.

Most of the commands used within the in-app REPL have been implemented in UCL. I’m now in the process of building out the UCL extension support, starting with functions for working with result sets and pseudo-variables for modifying elements of the UI.

Here’s a demo of what I’ve got so far. It shows the user’s ability to control the current result set and the selected item programmatically. Even after these early changes, I’m already seeing much better support for doing such things than what was there before.

Devlog: UCL — Assignment

Some thoughts on changing how assignments work in UCL to support subscripts and pseudo-variables.

Unexpected heron sighting. The noisy miners were not expecting it either, and they were not happy.

Auto-generated description: A white-faced heron is standing on a grassy area near some trees and a paved surface.

Liked the coining of the phrase “Canadian Devil Syndrome” by emailer Joseph on the latest Sharp Tech.

Serious Maintainers

I just learnt that Hugo has changed their layout directory structure (via) and has done so without bumping the major version. I was a little peeved by this: it’s a breaking change1, and they’re not signalling it the “semantic versioning” way by going from 1.x.x to 2.0.0. Surely they know that people are using Hugo, and that an ecosystem of sorts has sprung up around it.

But then a thought occurred: what if they don’t know? What if they’re plugging away at their little project, thinking that it’s them and a few others using it? They probably think it’s safe for them to slip this change in, since it’ll only inconvenience a handful of users.

I doubt this is actually the case: it’s pretty hard to avoid the various things that are using Hugo nowadays. But this thought experiment led to some reflection on the stuff I make. I’m planning a major change to one of my projects that will break backwards compatibility too. Should I bump the major version number? Could I slip it into a point release? How many people will this touch?

I could take this route, with the belief that it’s just me using this project, but do I actually know that? And even if no-one’s using it now, what would others coming across this project think? What would get them to start using it, knowing that I just “pulled a Hugo”? If I’m so carefree about such changes now, could they trust me not to break the things they depend on later?

Now, thanks to website analytics, I know for a fact that only a handful of people are using the thing I built, so I’m hardly in the same camp as the Hugo maintainers. But I came away from this wondering whether it’s worth pretending that making this breaking change will annoy a bunch of users. That others may write a post like this one if I’m not serious about it. I guess you could call this an example of “fake it till you make it,” or, to borrow a quote from Logan Roy in Succession: being a “serious” maintainer. If I take this project seriously, then others can too.

It might be worth a try. It’s highly unlikely that this alone will lead to success or adoption, but I can’t see how it would hurt.


  1. Technically it’s not a breaking change, and they will maintain backwards compatibility, at least for a while. But just humour me here. ↩︎

Watched the first semifinal of the Eurovision Song Contest this evening on SBS (the good and proper time for an Aussie). Good line-up of acts tonight: not too disappointed with who got through.

My favourites this evening: 🇮🇸🇪🇪🇪🇸🇸🇪🇸🇲🇳🇱🇨🇾, plus 🇳🇴🇧🇪🇦🇿 which were decent.

My first automation to assist me with this “issue driven development” approach: a Keyboard Maestro macro which will activate Obsidian, go to the end of the document, and add a new line beginning with the current time.

Auto-generated description: A configuration window for creating a new timestamped line in Obsidian, detailing trigger options and actions.

My goal is to have one Obsidian note per Jira task, which I will have open when I’m actively working on it. When I want to record something, like a decision or passing thought, I’ll press Cmd+Option+Ctrl+L to fire this macro, and start typing. Couldn’t resist adding some form of automation for this, but hey: at least it’s not some hacked-up, makeshift app this time.

Enjoyed watching Simon Willison’s talk about issue driven development and maintaining temporal documents for tasks. Watch the video, but that section can be boiled down to “now write it down.” Will give this a try for the tasks I do at work.

Devlog: Blogging Tools — Finished Podcast Clips

Well, it’s done. I’ve finally finished adding the podcast clip feature to Blogging Tools. And I won’t lie to you: it took longer than expected, even after enabling some of the AI features my IDE came with. Along with the complexity of implementing the feature itself, which touched most of the key subsystems of Blogging Tools, the biggest challenge came from designing how the clip creation flow should work. Blogging Tools is at a disadvantage compared to the clipping features in podcast players, in that it:

  1. Doesn’t know what feeds you’ve subscribed to,
  2. Doesn’t know what episode you’re listening to, and
  3. Doesn’t know where in the episode you are.

Blogging Tools needs to know this stuff for creating a clip, so there was no alternative to having the user input this when they’re creating the clip. I tried to streamline this in a few ways:

  • Feeds had to be predefined: While it’s possible to create a clip from an arbitrary feed, it’s a bit involved, and the path of least resistance is to set up the feeds you want to clip ahead of time. This works for me, as I only have a handful of feeds I tend to make clips from.
  • Prioritise recent episodes: The clips I tend to make come from podcasts that touch on current events, so any episode listing should prioritise the more recent ones. The episode list is in the same order as the feed, which isn’t strictly the same thing, but fortunately the shows I subscribe to list episodes in reverse chronological order.
  • Easy coarse and fine positioning of clips: This means going straight to a particular point in the episode by entering the timestamp. This is mainly to keep the implementation simple, but I’ve always found trying to position the clip range on a visual representation of a waveform frustrating: it was always such a pain trying to make fine adjustments to where the clip should end. So I kept it simple and allow the start time and duration to be advanced in one-second increments by tapping a button.

Rather than describe the whole flow at length, or prepare a set of screenshots, I’ve decided to record a video of how this all fits together.

The rest was pretty straightforward: the videos are made using ffmpeg, and publishing them to Micro.blog involved the Micropub API. There were some small frills added to the UI using both HTMX and Stimulus.JS, so that job status updates could be pushed over WebSockets. They weren’t necessary, as it’s just me using this, but this project is becoming a bit of a testbed for stretching my skills a little, so I think small frills like this helped a bit.
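For the curious, here’s a rough Go sketch of the sort of ffmpeg call involved in turning a clip range and a cover image into a video. The flags, file names, and function here are my assumptions for illustration, not the actual Blogging Tools code:

    // Hypothetical sketch only: the flags, paths, and function name are my
    // guesses at how a clip video could be rendered, not the real implementation.
    package main

    import (
        "log"
        "os/exec"
        "strconv"
    )

    // renderClip trims the episode audio to the clip range and pairs it with a
    // static cover image, producing an MP4 that can be posted to Micro.blog.
    func renderClip(audioPath, coverPath, outPath string, startSec, durationSec int) error {
        cmd := exec.Command("ffmpeg",
            "-ss", strconv.Itoa(startSec), // seek to the clip start within the episode
            "-t", strconv.Itoa(durationSec), // keep only the clip duration
            "-i", audioPath, // source episode audio
            "-loop", "1", "-i", coverPath, // loop the cover image as the video track
            "-shortest", // stop when the trimmed audio ends
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac",
            "-pix_fmt", "yuv420p", // widest player compatibility
            outPath,
        )
        return cmd.Run()
    }

    func main() {
        // Example invocation with placeholder paths: a 45-second clip starting at 12:30.
        if err := renderClip("episode.mp3", "cover.jpg", "clip.mp4", 750, 45); err != nil {
            log.Fatal(err)
        }
    }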

I haven’t made a clip for this yet, or tested how this will feel on a phone, but I’m guessing both will come in time. I also learnt some interesting tidbits, such as the fact that the source audio of an <audio> tag requires an HTTP response that supports range requests. Seeking won’t work otherwise: trying to change the time position will just send the audio back to the start.
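As a point of reference, Go’s standard library handles this for you: http.ServeContent responds to Range headers, which is enough to make seeking work. A minimal sketch, with placeholder routes and file names:

    // Minimal sketch of serving audio with Range support so that seeking in an
    // <audio> element works. The route and file name are placeholders.
    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func serveAudio(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("episode.mp3") // placeholder source file
        if err != nil {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        defer f.Close()

        fi, err := f.Stat()
        if err != nil {
            http.Error(w, "stat failed", http.StatusInternalServerError)
            return
        }

        // ServeContent honours Range requests (and advertises Accept-Ranges),
        // which is what the browser needs to seek within the audio.
        http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
    }

    func main() {
        http.HandleFunc("/audio", serveAudio)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }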

Anyway, it’s good to see this in prod so I can move on to something else. I’m getting excited thinking about the next thing I want to work on. No spoilers now, but it features both Dynamo Browse and UCL.

Finally, I just want to make the point that this would not be possible without the open RSS podcasting ecosystem. If I was listening to podcasts on YouTube, forget it: I wouldn’t have been able to build something like this. I know for myself that I’ll continue to listen to RSS podcasts for as long as podcasters continue to publish them. Long may it be so.

I sometimes wish there was a way where I could resurface an old post as if it was new, without simply posting it again. I guess I could adjust the post date, but that feels like tampering with history. Ah well.

In other news, my keyboard’s causing me to make spelling errors again. 😜

My online encounters with Steve Yegge’s writing are like one of those myths of someone going on a long journey. They’re travelling alone, but along the way, a mystical spirit guide appears to give the traveller some advice. These apparitions are unexpected, and the traveller can go long spells without seeing them. But occasionally, when they arrive at a new and unfamiliar place, the guide is there, ready to impart some wisdom before disappearing again.1

Anyway, I found a link to his writing via another post today. I guess he’s writing at Sourcegraph now: I assume he’s working there.

Far be it from me to recommend a site for someone else to build, but if anyone’s interested in registering wheretheheckissteveyeggewritingnow.com and posting links to his current and former blogs, I’d subscribe to that.


  1. Or, if you’re a fan of Half Life, Yegge’s a bit like the G-Man. ↩︎

Gotta be honest: the current kettle situation I find myself in, not my cup of tea. 😏

A kettle with a removable lid, its handle missing, beside the toaster. The handle and some small debris sit beside it. To the left is a cutting board with a coat-hanger.

Amusing that I find myself in a position where I have to log into one password manager to get the password to log into another password manager to get a password.

The yo dawg meme with the caption: Yo Dawg heard you like passwords for your passwords, so we added a password for your passwords for your passwords.

Does Google ever regret naming Go “Go”? Such a common word to use as a proper noun. I know the language devs prefer not to use Golang, but there’s no denying that it’s easier to search for.

The category keyword test is a go.

Unless you’re working on 32-bit hardware, or dealing with legacy systems, there’s really no need to be using 32-bit integers in database schemas or binary formats. There’s ample memory/storage/bandwidth for 64-bit integers nowadays. So save yourself the “overflow conversion” warnings.

This is where I think Java made the mistake of defaulting to 32-bit integers regardless of the architecture. I mean, I can see why: for a language and VM made in the mid-90s targeting set-top boxes, settling on 32-bit integers made a lot of sense. But even back then, the talk of moving to 64-bit was in the air. Nintendo even made it part of their console marketing.
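To make that first point concrete, here’s a contrived Go example of the kind of narrowing that triggers those “overflow conversion” warnings; the struct and field names are made up:

    // Contrived example: squeezing a 64-bit ID into a 32-bit column type is what
    // linters flag as an overflow conversion. The names here are made up.
    package main

    import "fmt"

    type LegacyRecord struct {
        ID int32 // 32-bit column: anything above ~2.1 billion won't fit
    }

    type Record struct {
        ID int64 // 64-bit column: no narrowing needed
    }

    func main() {
        var id int64 = 3_000_000_000 // already past the int32 range

        legacy := LegacyRecord{ID: int32(id)} // the conversion linters warn about; the value silently wraps
        modern := Record{ID: id}              // nothing to warn about

        fmt.Println(legacy.ID, modern.ID) // -1294967296 3000000000
    }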