Like the coining of the phrase “Canadian Devil Syndrome” by emailer Joseph on the latest Sharp Tech.
Serious Maintainers
I just learnt that Hugo has changed their layout directory structure (via) and has done so without bumping the major version. I was a little peeved by this: it’s a breaking change1 and they’re not signalling it the “semantic versioning” way by going from 1.x.x to 2.0.0. Surely they know that people are using Hugo, and that an ecosystem of sorts has sprung up around it.
But then a thought occurred: what if they don’t know? What if they’re plugging away at their little project, thinking that it’s them and a few others using it? They probably think it’s safe for them to slip this change in, since it’ll only inconvenience a handful of users.
I doubt this is actually the case: it’s pretty hard to avoid the various things that are using Hugo nowadays. But this thought experiment led to some reflection on the stuff I make. I’m planning a major change to one of my projects that will break backwards compatibility too. Should I bump the major version number? Could I slip it in a point release? How many people will this touch?
I could take this route, believing it’s just me using this project, but do I actually know that? And even if no-one’s using it now, what would others coming across this project think? What’s to convince them to start using it, knowing that I just “pulled a Hugo”? If I’m so carefree about such changes now, could they trust me not to break the features they depend on later?
Now, thanks to website analytics, I know for a fact that only a handful of people are using the thing I built, so I’m hardly in the same camp as the Hugo maintainers. But I came away from this wondering if it’s worth pretending that making this breaking change will annoy a bunch of users. That others may write their own peeved post if I’m not serious about it. I guess you could call this an example of “fake it till you make it,” or, to borrow a phrase from Logan Roy in Succession: being a “serious” maintainer. If I take this project seriously, then others can too.
It might be worth a try. Highly unlikely that it itself will lead to success or adoption, but I can’t see how it will hurt.
-
Technically it’s not a breaking change, and they will maintain backwards compatibility, at least for a while. But just humour me here. ↩︎
Watched the first semifinals of the Eurovision Song Contest this evening on SBS (the good and proper time for an Aussie). Good line-up of acts tonight: not too disappointed with who got through.
My favourites this evening: 🇮🇸🇪🇪🇪🇸🇸🇪🇸🇲🇳🇱🇨🇾, plus 🇳🇴🇧🇪🇦🇿 which were decent.
My first automation to assist me with this “issue driven development” approach: a Keyboard Maestro macro which will activate Obsidian, go to the end of the document, and add a new line beginning with the current time.

My goal is to have one Obsidian note per Jira task, which I will have open when I’m actively working on it. When I want to record something, like a decision or passing thought, I’ll press Cmd+Option+Ctrl+L to fire this macro, and start typing. Couldn’t resist adding some form of automation for this, but hey: at least it’s not some hacked-up, makeshift app this time.
Enjoyed watching Simon Willison’s talk about issue driven development and maintaining a temporal document for each task. Watch the video, but that section can be boiled down to “now write it down.” Will give this a try for the tasks I do at work.
Devlog: Blogging Tools — Finished Podcast Clips
Well, it’s done. I’ve finally finished adding the podcast clip feature to Blogging Tools. And I won’t lie to you: it took longer than expected, even after enabling some of the AI features my IDE came with. Beyond the implementation complexity (the feature touches most of the key subsystems of Blogging Tools), the biggest challenge came from designing how the clip creation flow should work. Blogging Tools is at a disadvantage over the clipping features in podcast players in that it:
- Doesn’t know what feeds you’ve subscribed to,
- Doesn’t know what episode you’re listening to, and
- Doesn’t know where in the episode you are.
Blogging Tools needs to know this stuff to create a clip, so there was no alternative to having the user enter it. I tried to streamline this in a few ways:
- Feeds had to be predefined: While it’s possible to create a clip from an arbitrary feed, it’s a bit involved, and the path of least resistance is to set up the feeds you want to clip ahead of time. This works for me, as I only have a handful of feeds I tend to make clips from.
- Prioritise recent episodes: The clips I tend to make come from podcasts that touch on current events, so any episode listing should prioritise the more recent ones. The episode list is in the same order as the feed, which is not strictly reverse chronological, but fortunately the shows I subscribe to list their episodes newest first.
- Easy coarse and fine positioning of clips: Coarse positioning means going straight to a particular point in the episode by entering a timestamp. This is partly to keep the implementation simple, but also because I’ve always found positioning a clip range on a visual representation of a waveform frustrating: it was always such a pain trying to make fine adjustments to where the clip should end. So I kept fine positioning simple too: you advance the start time and duration in single-second increments by tapping a button.
Rather than describe the whole flow at length, or prepare a set of screenshots, I’ve decided to record a video of how this all fits together.
The rest was pretty straightforward: the clip videos are made using ffmpeg, and publishing them on Micro.blog involved the Micropub API. There were some small frills added to the UI using both HTMX and Stimulus.JS so that job status updates could be pushed over WebSockets. They weren’t necessary, as it’s just me using this, but this project is becoming a bit of a testbed for stretching my skills, so I think small frills like this helped.
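For the curious, extracting the clip audio with ffmpeg boils down to a seek, a duration, and a re-encode. Here’s a minimal Go sketch of the kind of invocation involved; the function name and the encoder flags are my assumptions for illustration, not necessarily what Blogging Tools actually runs:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// clipArgs builds the argument list for an ffmpeg invocation that cuts
// `dur` seconds of audio out of src, starting at `start` seconds, and
// re-encodes the result to MP3. The encoder settings here are
// assumptions, not necessarily the flags Blogging Tools uses.
func clipArgs(src string, start, dur int, out string) []string {
	return []string{
		"-ss", strconv.Itoa(start), // seek to the clip's start time
		"-i", src, // source episode audio
		"-t", strconv.Itoa(dur), // clip duration in seconds
		"-acodec", "libmp3lame", // re-encode to MP3
		"-y", out, // overwrite the output file if it exists
	}
}

func main() {
	// Cut a 45-second clip starting at 12:34 (754 seconds) in.
	args := clipArgs("episode.mp3", 754, 45, "clip.mp3")
	fmt.Println("ffmpeg " + strings.Join(args, " "))
}
```

Run through os/exec, that produces the clip audio; overlaying artwork to make the actual video would be another ffmpeg pass.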
I haven’t made a clip for a post yet, or tested how this will feel on a phone, but I’m guessing both will come in time. I also learnt some interesting tidbits, such as the fact that the source audio of an <audio> tag requires an HTTP response that supports range requests. Seeking won’t work otherwise: trying to change the playback position will just send the audio back to the start.
Anyway, it’s good to see this in prod, and time to move on to something else. I’m getting excited thinking about the next thing I want to work on. No spoilers now, but it features both Dynamo Browse and UCL.
Finally, I just want to make the point that this would not be possible without the open RSS podcasting ecosystem. If I were listening to podcasts on YouTube, forget it: I wouldn’t have been able to build something like this. I know that I, for one, will continue to listen to RSS podcasts for as long as podcasters continue to publish them. Long may it be so.
I sometimes wish there was a way to resurface an old post as if it were new, without simply posting it again. I guess I could adjust the post date, but that feels like tampering with history. Ah well.
In other news, my keyboard’s causing me to make spelling errors again. 😜
My online encounters with Steve Yegge’s writing are like one of those myths of someone going on a long journey. They’re travelling alone, but along the way, a mystical spirit guide appears to give the traveller some advice. These apparitions are unexpected, and the traveller can go long spells without seeing them. But occasionally, when they arrive at a new and unfamiliar place, the guide is there, ready to impart some wisdom before disappearing again.1
Anyway, I found a link to his writing via another post today. I guess he’s writing at Sourcegraph now: I assume he’s working there.
Far be it from me to recommend a site for someone else to build, but if anyone’s interested in registering wheretheheckissteveyeggewritingnow.com and posting links to his current and former blogs, I’d subscribe to that.
-
Or, if you’re a fan of Half Life, Yegge’s a bit like the G-Man. ↩︎
Gotta be honest: the current kettle situation I find myself in, not my cup of tea. 😏

Amusing that I find myself in a position where I have to log into one password manager to get the password to log into another password manager to get a password.

Does Google ever regret naming Go “Go”? Such a common word to use as a proper noun. I know the language devs prefer not to use Golang, but there’s no denying that it’s easier to search for.
The category keyword test is a go.
Unless you’re working on 32-bit hardware, or dealing with legacy systems, there’s really no need to be using 32-bit integers in database schemas or binary formats. There’s ample memory, storage, and bandwidth for 64-bit integers nowadays. So save yourself the “overflow conversion” warnings.
This is where I think Java made the mistake of defaulting to 32-bit integers regardless of the architecture. I mean, I can see why: for a language and VM made in the mid-90s targeting set-top boxes, settling on 32-bit integers made a lot of sense. But even back then, talk of moving to 64 bit was in the air. Nintendo even made it part of their console marketing.
There’s also this series of videos by the same creator that goes in depth on how the Super Mario Bros. levels are encoded in ROM. This is even more fascinating, as they had very little memory to work with, and had to make some significant trade-offs, like not allowing Mario to go left. 📺
If anyone’s interested in how levels in Super Mario Bros. 2 are encoded in the ROM, I can recommend this video by Retro Game Mechanics. It goes for about 100 minutes so it’s quite in depth. 📺
Just blindly accepting permission dialogs whenever macOS throws them at me, like some bad arcade game. Was this your intention, Apple?
Overheard this exchange just now at the cafe:
Customer: How ya’ feeling?
Barista: Feeling cold.
Customer: Well at least that’s something. If ya’ don’t feel the cold it means you’re dead.
That had a lot more weight to it than I think the customer originally intended.
Mother’s Day in full bloom over here. 💐

Rubberducking: More On Mocking
Mocking in unit tests can be problematic due to the growing complexity of service methods with multiple dependencies, leading to increased maintenance challenges. But the root cause may not be the mocks themselves.
🔗 NY Mag: Rampant AI Cheating Is Ruining Education Alarmingly Fast
Two thoughts on this. The first is that I think these kids are doing a disservice to themselves. I’m not someone who’s going to say “don’t use AI ever,” but the only way I can really understand something is by working through it, either by writing it myself or spending lots of time on it. I find this even in my job: it’s hard for me to know of the existence of some feature in a library I haven’t touched myself, much less how to use it correctly. Offloading your thinking to AI may work when you’re ploughing through boring coding tasks, but when it comes to designing something new, or working through a Sev-1, it helps to know the system you’re working on like the back of your hand.
Second thought: TikTok is like some sort of wraith, sucking the lifeblood of all who touch it, and it needs to die in a fire.
Via: Sharp Tech