How and when did “double click” become a phrase meaning to focus on or get into the details of a topic? Just heard it being used on a podcast for the second time in as many weeks.

If Apple thinks the recent App Store ruling is taking away their right to monetise their IP, then they need to explain what the US$99.00 developer fee is for. They’re probably shooting their videos for WWDC right now. Maybe have one going through each line item of a theoretical “developer fee invoice”, explaining what the fee is and which IP rights it covers.

When hiring a senior software engineer, it’s probably less useful to know whether they can code up a sorting algorithm than whether they can work out what time it is in UTC in their head. 😀

Ooh, I lost my temper when I was under the pump and macOS decided to stop everything to ask whether I’d permit Terminal to access files from other apps.

Of course I do, Apple! I’m not using the terminal to amuse myself. I’m trying to get shit done, and your insistent whining is getting in my way! 😡

Seeing these river tour boats moored like this reminds me of the UK, and all the narrowboats in the canals.

Boats are docked along a river beside a row of buildings with trees and outdoor seating.

Devlog: Blogging Tools — Ideas For Stills For A Podcast Clips Feature

I recently discovered that Pocketcasts for Android has changed its clip feature. It still exists, but instead of producing a video that you could share on the socials, it produces a link to play the clip from the Pocketcasts web player. Understandable to some degree: it always took a little bit of time to make these videos. But it’s hardly a suitable solution for sharing clips of private podcasts: anyone with the link could just listen to the entire episode from the site. Not to mention that it relies on a third-party service staying up for as long as those links (or the original podcast) are around.

So… um, yeah, I’m wondering if I could build something for myself that could replicate this.

I’m thinking of another module for Blogging Tools. I was already using this tool to crop the clip videos that came from Pocketcasts, so it was already part of my workflow. It also has ffmpeg bundled in the deployable artefact, meaning that I could use it to produce the video. Nothing fancy: I’m thinking of a still showing the show title, episode title, and artwork, with the audio track playing over it. I’m pretty confident that ffmpeg can handle such a task.
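I haven’t tried it yet, but the kind of ffmpeg invocation I have in mind looks something like this (file names are placeholders): loop a single still for the duration of the audio, and stop when the audio ends.

```shell
# Sketch only: turn a still image plus an audio clip into a shareable video.
ffmpeg -loop 1 -i still.png -i clip.mp3 \
  -c:v libx264 -tune stillimage -pix_fmt yuv420p \
  -c:a aac -b:a 192k \
  -shortest clip.mp4
```

The `-shortest` flag ends the output when the audio track runs out, which is what makes the looped still work.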

I decided to start with the fun part: making the stills. I began by using Draw2D to render a very simple frame where I could place the artwork and the text. I just used primary colours at first so I could get the layout looking good:

Auto-generated description: A date, episode title, and show name are displayed alongside an image of ocean waves against rocks in a colorful border.

I’m using Roboto Semi-bold for the title font, and Oswald Regular for the date. I do like the look of Oswald: the narrower style contrasts nicely with the more neutral Roboto. Draw2D provides methods for measuring text sizes, which I’m using to power the text-wrapping layout algorithm (it’s pretty dumb: it just adds words to a line until the next one can’t fit the available space).
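The greedy wrapping approach can be sketched like this, with a stub measure function standing in for Draw2D’s text measurement:

```go
package main

import (
	"fmt"
	"strings"
)

// wrapText keeps adding words to the current line until the next word would
// exceed the available width, then starts a new line. measure stands in for
// Draw2D's text-measurement call.
func wrapText(text string, maxWidth float64, measure func(string) float64) []string {
	var lines []string
	var current string
	for _, word := range strings.Fields(text) {
		candidate := word
		if current != "" {
			candidate = current + " " + word
		}
		// Always accept the first word of a line, even if it overflows.
		if measure(candidate) <= maxWidth || current == "" {
			current = candidate
		} else {
			lines = append(lines, current)
			current = word
		}
	}
	if current != "" {
		lines = append(lines, current)
	}
	return lines
}

func main() {
	// Stub measurer: 10 units per character.
	measure := func(s string) float64 { return float64(len(s)) * 10 }
	fmt.Printf("%q\n", wrapText("a pretty long episode title", 120, measure))
	// → ["a pretty" "long episode" "title"]
}
```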

I got the layout nailed down yesterday evening. This evening I focused on colour.

I want the frame to be interesting, with colours close to the prominent ones in the artwork. I found this library, which returns the dominant colours of an image using K-means clustering. I’ll be honest: I haven’t looked at how this actually works. But I tried the library out with some random artwork from Lorem Picsum, and I was quite happy with the colours it was returning. After adding this library1 to calculate the contrast for the text colour, plus a slight shadow, the stills started looking pretty good:

Auto-generated description: Six rectangular cards each feature a different background image with the date 14 April 2020, text A pretty long episode title, and My test show.
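I haven’t checked what the contrast library actually does, but the usual approach is the WCAG relative-luminance formula. A sketch of picking black or white text for a given background (the 0.179 threshold is a common heuristic, not something from the library):

```go
package main

import (
	"fmt"
	"math"
)

// relativeLuminance implements the WCAG 2.x formula for sRGB components in
// the 0–255 range.
func relativeLuminance(r, g, b uint8) float64 {
	lin := func(c uint8) float64 {
		v := float64(c) / 255
		if v <= 0.03928 {
			return v / 12.92
		}
		return math.Pow((v+0.055)/1.055, 2.4)
	}
	return 0.2126*lin(r) + 0.7152*lin(g) + 0.0722*lin(b)
}

// textColourFor picks black text on light backgrounds and white on dark ones.
func textColourFor(r, g, b uint8) string {
	if relativeLuminance(r, g, b) > 0.179 {
		return "black"
	}
	return "white"
}

func main() {
	fmt.Println(textColourFor(255, 255, 0)) // bright yellow → black
	fmt.Println(textColourFor(20, 20, 80))  // dark blue → white
}
```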

I then tried some real podcast artwork, starting with ATP. And that’s where things started going off the rails a little:

Auto-generated description: Four color variations of a promotional card design featuring a logo with rainbow stripes, a date of 14 April 2020, and text stating A pretty long episode title and My test show.

The library returns the colours in order of frequency, and I was using the first colour for the border and the second for the card background. But I’m guessing that since the ATP logo has so few actual colours, the K-means algorithm was finding clusters of roughly equal prominence and returning them in a random order. Since the first and second colours were of equal prominence, the results were a little garish and completely random.

To reduce the effects of this, I finished the evening by trying a variation where the card background was simply a shade of the border. That still produced random results, but at least the colour choices were a little more harmonious:

Auto-generated description: A series of four visually distinct cards display a logo, date, episode title, and show subtitle, each set against different colored backgrounds.

I’m not sure what I want to do here. I’ll need to explore the library a little, just to see whether it’s possible to reduce the amount of randomness. Might be that I go with the shaded approach and just keep it random: having some variety could make things interesting.

Of course, I’m still doing the easy and fun part. How the UI for making the clip will look is going to be a challenge. More on that in the future if I decide to keep working on this. And if not, at least I’ve got these nice looking stills.


  1. The annoying thing about this library is that it doesn’t use Go’s standard Color type, nor does it document the limits of each component. So for anyone using this library: R, G, and B go from 0 to 255, and A goes from 0 to 1. ↩︎

Ah, CSV files. They’re a little painful to work with, but they truly are the unsung heroes of ad-hoc scripts written in haste.

Listening to all the recent platform talk on Stratechery has been fascinating, if a little melancholy. We may never see another mainstream platform as truly open as Windows, macOS, or the Web are right now.

Whatever happened to Power Nap on macOS? Is that still a thing? I’ve been asked when I’d like to install updates for about a week now, yet despite choosing “later tonight” every time, nothing’s been happening. I could install them now, but I thought deferring them to a time when I’m not using my computer was the point of this feature.

Argh! Someone has discovered my secret of where the best place to stand on an A-class tram is (it’s at the back, beside the back door).

It’s easy to get an irrational sense of how scalable a particular database technology is. Take my experience with PostgreSQL. I use it all the time, but I have it in my head that it shouldn’t be used for “large” amounts of data. I get a little nervous when a table goes beyond 100,000 rows, for example.

But just today, I discovered a table with 47 million rows of time-series data, and PostgreSQL seems to handle it just fine. There are a few caveats: it’s a database with significant resources backing it, it only sees a few commits per second, and queries that aren’t optimised for that table’s indices can take a few seconds. But PostgreSQL seems far from being under load: CPU usage is low (around 3%), and the disk queue depth is zero.

I guess my point is that PostgreSQL continues to be awesome, and I really shouldn’t underestimate how well it can handle whatever we throw at it.

Ah! I think I know why I keep asking for a bagel with lettuce, tomato, and cheese instead of ham, tomato, and cheese. It’s because I always said “lettuce, tomato, and cheese” when ordering sandwiches back when I was working in the CBD. Wow, talk about old habits dying hard.

It’s 2025. Why am I still not writing down thoughts I had in the shower that I knew I wanted to remember? 🤦

The Alluring Trap Of Tying Your Fortunes To AI

It’s when the tools stop working the way you expect that you realise the full cost of what you bought into.

Devlog: Dialogues

A post describing a playful dialogue styling feature, inspired by rubber-duck debugging, and discussing the process and potential uses for it.

Bluesky needs a bookmarking feature. It took me a while, but I’ve got into the habit of bookmarking posts in Micro.blog and Mastodon that I’d like to revisit in the future. Extra points for having a public API/RSS feed for those bookmarks.

On AI, Process, and Output

Manuel Moreale’s latest post about AI was thought-provoking:

One thing I’m finding interesting is that I see people falling into two main camps for the most part. On one side are those who value output and outcome, and how to get there doesn’t seem to matter a lot to them. And on the other are the people who value the process over the result, those who care more about how you get to something and what you learn along the way.

I recently turned on the next level of AI assistance in my IDE. Previously I was using line auto-completion, which was quite good. This next level gives me something closer to Cursor: prompting the AI to generate full method implementations, or having a chat interaction.

And I think I’m going to keep it on. One nice thing about this is that it’s on-demand: it stays out of the way, letting me implement something by hand if I want to. This is probably going to be the majority of the time, as I do enjoy the process of software creation.

But other times, I just want a capability added, such as marshalling and unmarshalling things to a database. In the past, this would largely be code copied and pasted from another file. With the AI assistance, I can get this code generated for me. Of course I review it — I’m not vibe coding here — but it saves me from introducing a few subtle bugs, and from some pretty boring editing.

I guess my point is that I think these two camps are more porous than people think. There are times when the process is half the fun of making the thing, and others when it’s a slog and you just want the thing to exist. This is true for me in programming, and I can only guess that it’s similar in other forms of art. The trap is choosing to join one camp, feeling it’s the only camp people should be in, and refusing to recognise that others may feel differently.

Very happy with how this evening has panned out. 🇦🇺

Free business idea for anyone: I see lots of people around the polling booth with dogs. I don’t believe dogs are allowed inside, so they must be walking them. But they’ll need to vote eventually, and if the queue is small, maybe they’ll think it’s worth voting now.

So, here’s the pitch:

Stand outside the front along with those handing out how-to-vote cards, and offer to look after people’s dogs while they go in to vote. I’m not sure you could charge much for the service — voting usually takes around 5-10 minutes if the queue is small — and it might be more community-minded to just do it for nothing. But maybe you could earn $10 for standing around for half a day? That buys a democracy sausage and a coffee.

At the cafe. Polling station is directly across the road and will open in a few minutes. Already a queue of people waiting to vote. Party banners on the fence, people with how-to-vote cards at the ready. Hardest decision before me is whether I should join them once I’ve finished breakfast. But the barbie’s not been wheeled out yet, and I’ve not organised anything for lunch.