My first automation to assist me with this “issue driven development” approach: a Keyboard Maestro macro which will activate Obsidian, go to the end of the document, and add a new line beginning with the current time.

A configuration window for creating a new timestamped line in Obsidian, detailing trigger options and actions.

My goal is to have one Obsidian note per Jira task, which I will have open when I’m actively working on it. When I want to record something, like a decision or passing thought, I’ll press Cmd+Option+Ctrl+L to fire this macro, and start typing. Couldn’t resist adding some form of automation for this, but hey: at least it’s not some hacked-up, makeshift app this time.

Enjoyed watching Simon Willison’s talk about issue driven development and maintaining a temporal document for each task. Watch the video, but that section can be boiled down to “now write it down.” Will give this a try for the tasks I do at work.

Devlog: Blogging Tools — Finished Podcast Clips

Well, it’s done. I’ve finally finished adding the podcast clip feature to Blogging Tools. And I won’t lie to you, it took longer than expected, even after enabling some of the AI features my IDE came with. Beyond the implementation complexity, which touched on most of the key subsystems of Blogging Tools, the biggest challenge came from designing how the clip creation flow should work. Blogging Tools is at a disadvantage compared to the clipping features in podcast players in that it:

  1. Doesn’t know what feeds you’ve subscribed to,
  2. Doesn’t know what episode you’re listening to, and
  3. Doesn’t know where in the episode you are.

Blogging Tools needs to know this stuff for creating a clip, so there was no alternative to having the user input this when they’re creating the clip. I tried to streamline this in a few ways:

  • Feeds had to be predefined: While it’s possible to create a clip from an arbitrary feed, it’s a bit involved, and the path of least resistance is to set up the feeds you want to clip ahead of time. This works for me as I only have a handful of feeds I tend to make clips from.
  • Prioritise recent episodes: The clips I tend to make come from podcasts that touch on current events, so any episode listing should prioritise the more recent ones. The episode list is presented in feed order, which isn’t strictly the same thing, but fortunately the shows I subscribe to list their episodes in reverse chronological order.
  • Easy coarse and fine positioning of clips: This means going straight to a particular point in the episode by entering the timestamp. This is mainly to keep the implementation simple, but I’ve always found trying to position the clip range on a visual representation of a waveform frustrating. It was always such a pain trying to make fine adjustments to where the clip should end. So I kept this simple: you advance the start time and duration in single-second increments by tapping a button.

Rather than describe the whole flow at length, or prepare a set of screenshots, I’ve decided to record a video of how this all fits together.

The rest was pretty straightforward: the videos are made using ffmpeg, and publishing them on Micro.blog involved the Micropub API. There were some small frills added to the UI using both HTMX and Stimulus.JS so that job status updates could be pushed via web-sockets. They weren’t necessary, as it’s just me using this, but this project is becoming a bit of a testbed for stretching my skills a little, so I think small frills like these helped a bit.
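For the curious, a Micropub “create” request is at heart just a form-encoded POST of `h=entry` plus the content, authorised with a bearer token. This isn’t the actual Blogging Tools code; the endpoint, token, and fake server below are stand-ins so the sketch runs offline.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
)

// publishNote sends a minimal Micropub create request: h=entry plus
// the post content, form-encoded, with a bearer token for auth.
func publishNote(endpoint, token, content string) (int, error) {
	form := url.Values{}
	form.Set("h", "entry")
	form.Set("content", content)

	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return 0, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

// fakeMicropub stands in for a real Micropub endpoint; a real server
// would also verify the bearer token before accepting the post.
func fakeMicropub() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.ParseForm()
		if r.PostForm.Get("h") == "entry" && r.PostForm.Get("content") != "" {
			w.WriteHeader(http.StatusAccepted) // "accepted for processing", which Micropub allows
			return
		}
		w.WriteHeader(http.StatusBadRequest)
	}))
}

func main() {
	srv := fakeMicropub()
	defer srv.Close()

	status, _ := publishNote(srv.URL, "fake-token", "New clip posted!")
	fmt.Println(status) // 202
}
```

A real Micro.blog token and media uploads involve a bit more ceremony, but the shape of the request is the same.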

I haven’t made a clip for this yet, or tested out how this will feel on a phone, but I’m guessing both will come in time. I also learnt some interesting tidbits, such as the fact that the source audio of an <audio> tag requires an HTTP response that supports range requests. Seeking won’t work otherwise: trying to change the time position will just seek the audio back to the start.
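As a sketch of that tidbit: in Go, `http.ServeContent` honours Range headers out of the box, so serving the audio through it (rather than writing the bytes to the response directly) is enough to make seeking work. The file name and handler here are made up for illustration.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

// rangeStatus serves some fake audio via http.ServeContent (which
// honours Range headers) and reports the status code returned for a
// request, the kind of partial request <audio> seeking relies on.
func rangeStatus(rangeHeader string) int {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		audio := strings.NewReader("fake mp3 bytes, purely for illustration")
		http.ServeContent(w, r, "episode.mp3", time.Time{}, audio)
	}))
	defer srv.Close()

	req, err := http.NewRequest("GET", srv.URL, nil)
	if err != nil {
		panic(err)
	}
	if rangeHeader != "" {
		req.Header.Set("Range", rangeHeader)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(rangeStatus(""))         // 200: the whole file
	fmt.Println(rangeStatus("bytes=9-")) // 206: partial content, so seeking works
}
```

A handler that ignores the Range header and always replies 200 with the full body is exactly the case where the player falls back to restarting from the top.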

Anyway, it’s good to see this in prod, and to be moving on to something else. I’m getting excited thinking about the next thing I want to work on. No spoilers now, but it features both Dynamo Browse and UCL.

Finally, I just want to make the point that this would not be possible without the open RSS podcasting ecosystem. If I was listening to podcasts on YouTube, forget it: I wouldn’t have been able to build something like this. I know for myself that I’ll continue to listen to RSS podcasts for as long as podcasters continue to publish them. Long may it be so.

I sometimes wish there was a way where I could resurface an old post as if it was new, without simply posting it again. I guess I could adjust the post date, but that feels like tampering with history. Ah well.

In other news, my keyboard’s causing me to make spelling errors again. 😜

My online encounters with Steve Yegge’s writing are like one of those myths of someone going on a long journey. They’re travelling alone, but along the way, a mystical spirit guide appears to give the traveller some advice. These apparitions are unexpected, and the traveller can go long spells without seeing them. But occasionally, when they arrive at a new and unfamiliar place, the guide is there, ready to impart some wisdom before disappearing again.1

Anyway, I found a link to his writing via another post today. I guess he’s writing at Sourcegraph now: I assume he’s working there.

Far be it from me to recommend a site for someone else to build, but if anyone’s interested in registering wheretheheckissteveyeggewritingnow.com and posting links to his current and former blogs, I’d subscribe to that.


  1. Or, if you’re a fan of Half-Life, Yegge’s a bit like the G-Man. ↩︎

Gotta be honest: the current kettle situation I find myself in, not my cup of tea. 😏

A kettle with a removable lid with a missing handle beside the toaster. The handle and some small debris sit beside it. To the left is a cutting board with a coat-hanger.

Amusing that I find myself in a position where I have to log into one password manager to get the password to log into another password manager to get a password.

The yo dawg meme with the caption: Yo Dawg heard you like passwords for your passwords, so we added a password for your passwords for your passwords.

Does Google ever regret naming Go “Go”? Such a common word to use as a proper noun. I know the language devs prefer not to use Golang, but there’s no denying that it’s easier to search for.

The category keyword test is a go.

Unless you’re working on 32 bit hardware, or dealing with legacy systems, there’s really no need to be using 32 bit integers in database schemas or binary formats. There’s ample memory/storage/bandwidth for 64 bit integers nowadays. So save yourself the “overflow conversion” warnings.

This is where I think Java made a mistake in defaulting to 32 bit integers regardless of the architecture. I mean, I can see why: for a language and VM made in the mid-90s targeting set-top boxes, settling on 32 bit integers made a lot of sense. But even back then, the talk of moving to 64 bit was in the air. Nintendo even made it part of their console marketing.
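For what it’s worth, here’s a quick Go sketch of what that “overflow conversion” warning is guarding against: converting an out-of-range 64 bit value down to 32 bits doesn’t clamp or error, it silently wraps.

```go
package main

import "fmt"

// toInt32 demonstrates Go's narrowing conversion: values outside the
// int32 range wrap modulo 2^32 rather than clamping or failing.
func toInt32(n int64) int32 {
	return int32(n)
}

func main() {
	const big int64 = 3_000_000_000 // larger than math.MaxInt32 (2,147,483,647)
	fmt.Println(toInt32(big))       // -1294967296: wrapped, not clamped
}
```

A perfectly ordinary row count or epoch-millis value can hit this, which is why reaching for 64 bits from the start saves the trouble.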

There’s also this series of videos by the same creator that goes in depth on how the Super Mario Bros. levels are encoded in ROM. This is even more fascinating, as they had very little memory to work with, and had to make some significant trade-offs, like not allowing Mario to go left. 📺

If anyone’s interested in how levels in Super Mario Bros. 2 are encoded in the ROM, I can recommend this video by Retro Game Mechanics. It goes for about 100 minutes so it’s quite in depth. 📺

Just blindly accepting permission dialogs whenever macOS throws them at me, like some bad arcade game. Was this your intention, Apple?

Overheard this exchange just now at the cafe:

Customer: How ya’ feeling?

Barista: Feeling cold.

Customer: Well at least that’s something. If ya’ don’t feel the cold it means you’re dead.

Had a lot more weight to me than I think the customer originally intended.

Mother’s Day in full bloom over here. 💐

A shopfront is adorned with a vibrant display of flowers and plants in pots, buckets, and bouquets.

Rubberducking: More On Mocking

Mocking in unit tests can be problematic due to the growing complexity of service methods with multiple dependencies, leading to increased maintenance challenges. But the root cause may not be the mocks themselves.

🔗 NY Mag: Rampant AI Cheating Is Ruining Education Alarmingly Fast

Two thoughts on this. The first is that I think these kids are doing a disservice to themselves. I’m not someone who’s going to say “don’t use AI ever,” but the only way I can really understand something is by working through it, either by writing it myself or spending lots of time on it. I find this even in my job: it’s hard for me to know of the existence of some feature in a library I haven’t touched myself, much less how to use it correctly. Offloading your thinking to AI may work when you’re plowing through boring coding tasks, but when it comes to designing something new, or working through a Sev-1, it helps to know the system you’re working on like the back of your hand.

Second thought: TikTok is like some sort of wraith, sucking the lifeblood of all who touch it, and it needs to die in a fire.

Via: Sharp Tech

How and when did “double click” become a phrase meaning to focus on or get into the details of a topic? Just heard it being used on a podcast for the second time in as many weeks.

If Apple think the recent App Store ruling is taking away their right to monetise their IP, then they need to explain what the $US 99.00 developer fee is for. They’re probably shooting their videos for WWDC right now. Maybe have one going through each line item of a theoretical “developer fee invoice,” explaining what each fee is, and what IP rights it covers.

When hiring a senior software engineer, it’s probably less useful to know whether they could code up a sorting algorithm versus knowing whether they can work out what time it is in UTC in their head. 😀