Long Form Posts
Devlog: Blogging Tools — Finished Podcast Clips
Well, it’s done. I’ve finally finished adding the podcast clip feature to Blogging Tools. And I won’t lie to you: it took longer than expected, even after enabling some of the AI features my IDE came with. Setting aside the complexity of implementing the feature itself, which touched on most of the key subsystems of Blogging Tools, the biggest complexity came from designing how the clip creation flow should work. Blogging Tools is at a disadvantage compared to the clipping features in podcast players in that it:
- Doesn’t know what feeds you’ve subscribed to,
- Doesn’t know what episode you’re listening to, and
- Doesn’t know where in the episode you are.
Blogging Tools needs to know this stuff to create a clip, so there was no alternative to having the user input it when they’re creating one. I tried to streamline this in a few ways:
- Feeds had to be predefined: While it’s possible to create a clip from an arbitrary feed, it’s a bit involved, and the path of least resistance is to set up the feeds you want to clip ahead of time. This works for me, as I only have a handful of feeds I tend to make clips from.
- Prioritise recent episodes: The clips I tend to make come from podcasts that touch on current events, so any episode listing should prioritise the more recent ones. The episode list is in the same order as the feed, which is not strictly reverse chronological, but fortunately the shows I subscribe to list their episodes that way.
- Easy coarse and fine positioning of clips: This means going straight to a particular point in the episode by entering the timestamp. This is mainly to keep the implementation simple, but I’ve always found trying to position the clip range on a visual representation of a waveform frustrating. It was always such a pain trying to make fine adjustments to where the clip should end. So I just made this simple and allow you to advance the start time and duration in single-second increments by tapping a button.
Rather than describe the whole flow at length, or prepare a set of screenshots, I’ve decided to record a video of how this all fits together.
The rest was pretty straightforward: the videos are made using ffmpeg, and publishing them on Micro.blog involved the Micropub API. There were some small frills added to the UI using both HTMX and Stimulus.JS so that job status updates could be pushed via WebSockets. They weren’t necessary, as it’s just me using this, but this project is becoming a bit of a testbed for stretching my skills a little, so I think small frills like this helped a bit.
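For those wondering what the ffmpeg side looks like: the general technique is to loop a single still image over the audio track. Here’s a rough sketch of the sort of invocation involved, written as a small Go wrapper. To be clear, this is not the actual implementation: the file names and flag choices here are just illustrative.

package main

import (
    "log"
    "os/exec"
)

// makeClipVideo renders a video by looping a single still image over an
// audio clip. A sketch only: file names and flags are illustrative.
func makeClipVideo(still, audio, out string) error {
    cmd := exec.Command("ffmpeg",
        "-loop", "1", // repeat the still for the whole duration
        "-i", still, // video input: the rendered still
        "-i", audio, // audio input: the clipped segment
        "-c:v", "libx264", // encode video as H.264
        "-tune", "stillimage", // optimise encoding for a static image
        "-c:a", "aac", // encode audio as AAC
        "-pix_fmt", "yuv420p", // for broad player compatibility
        "-shortest", // stop when the shortest input (the audio) ends
        out)
    return cmd.Run()
}

func main() {
    if err := makeClipVideo("still.png", "clip.mp3", "clip.mp4"); err != nil {
        log.Fatal(err)
    }
}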
I haven’t made a clip for this yet, or tested out how this will feel on a phone, but I’m guessing both will come in time. I also learnt some interesting tidbits, such as the fact that the source audio of an <audio> tag requires an HTTP response that supports range requests. Seeking won’t work otherwise: trying to change the time position will just seek the audio back to the start.
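If you’re serving the audio from your own Go code, the standard library handles this for you: http.ServeFile (which uses http.ServeContent under the hood) answers Range requests with 206 Partial Content, whereas writing the bytes out manually won’t. A minimal sketch, with the route and file name made up for illustration:

package main

import (
    "log"
    "net/http"
)

func main() {
    // http.ServeFile supports Range requests out of the box, which is
    // what <audio> seeking depends on. A manual io.Copy into the
    // response would not, and seeking would snap back to the start.
    http.HandleFunc("/audio/episode.mp3", func(w http.ResponseWriter, r *http.Request) {
        http.ServeFile(w, r, "episode.mp3")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}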
Anyway, it’s good to see this in prod, and to be moving on to something else. I’m getting excited thinking about the next thing I want to work on. No spoilers now, but it features both Dynamo Browse and UCL.
Finally, I just want to make the point that this would not be possible without the open RSS podcasting ecosystem. If I were listening to podcasts on YouTube, forget it: I wouldn’t have been able to build something like this. I know for myself that I’ll continue to listen to RSS podcasts for as long as podcasters continue to publish them. Long may it be so.
Rubberducking: More On Mocking
Mocking in unit tests can become problematic as service methods grow and accumulate dependencies, increasing the maintenance burden. But the root cause may not be the mocks themselves.
Devlog: Blogging Tools — Ideas For Stills For A Podcast Clips Feature
I recently discovered that Pocketcasts for Android has changed its clip feature. It still exists, but instead of producing a video which you could share on the socials, it produces a link to play the clip from the Pocketcasts web player. Understandable to some degree: it always took a little bit of time to make these videos. But it’s hardly a suitable solution for sharing clips of private podcasts: one could just listen to the entire episode from the site. Not to mention relying on a dependent service for as long as those links (or the original podcast) are around.
So… um, yeah, I’m wondering if I could build something for myself that could replicate this.
I’m thinking of another module for Blogging Tools. I was already using this tool to crop the clip videos that came from Pocketcasts, so it was already in my workflow. It also has ffmpeg bundled in the deployable artefact, meaning I could use it to produce video. Nothing fancy: I’m thinking of a still showing the show title, episode title, and artwork, with the audio track playing. I’m pretty confident that ffmpeg can handle such tasks.
I decided to start with the fun part: making the stills. I began by using Draw2D to provide a very simple frame where I could place the artwork and render the text. I just started with primary colours so I could get the layout looking good:

I’m using Roboto Semi-bold for the title font, and Oswald Regular for the date. I do like the look of Oswald: the narrower style contrasts nicely with the neutral Roboto. Draw2D provides methods for measuring text sizes, which I’m using to power the text-wrapping layout algorithm (it’s pretty dumb: it basically adds words to a line until they no longer fit the available space).
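To give you an idea of how dumb it is, the whole thing boils down to a greedy loop like the one below. This is a sketch of the idea rather than the actual code, with the measure callback standing in for Draw2D’s text-measurement method:

package stills

import "strings"

// wrapText greedily packs words into lines no wider than maxWidth.
// measure stands in for Draw2D's text-measurement method.
func wrapText(text string, maxWidth float64, measure func(string) float64) []string {
    var lines []string
    line := ""
    for _, word := range strings.Fields(text) {
        candidate := word
        if line != "" {
            candidate = line + " " + word
        }
        // Keep adding words until the candidate no longer fits. A lone
        // word that's too wide for the line is kept anyway.
        if measure(candidate) <= maxWidth || line == "" {
            line = candidate
        } else {
            lines = append(lines, line)
            line = word
        }
    }
    if line != "" {
        lines = append(lines, line)
    }
    return lines
}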
I got the layout nailed down yesterday evening. This evening I focused on colour.
I want the frame to be interesting, with colours close to the prominent ones in the artwork. I found this library, which returns the dominant colours of an image using k-means clustering. I’ll be honest: I haven’t looked at how this actually works. But I tried the library out with some random artwork from Lorem Picsum, and I was quite happy with the colours it was returning. After adding this library1 to calculate the contrast for the text colour, plus a slight shadow, the stills started looking pretty good:

I then tried some real podcast artwork, starting with ATP. And that’s where things started going off the rails a little:

The library returns the colours in order of frequency, and I was using the first colour as the border and the second as the card background. But I’m guessing that since the ATP logo has so few actual colours, the k-means algorithm was finding several of equal prominence and returning them in a random order. With the first and second colours effectively tied, the results were a little garish and completely random.
To reduce the effects of this, I finished the evening by trying a variation where the card background was simply a shade of the border. That still produced random results, but at least the colour choices were a little more harmonious:

I’m not sure what I want to do here. I’ll need to explore the library a little, just to see whether it’s possible to reduce the amount of randomness. It might be that I go with the shaded approach and just keep it random: having some variety could make things interesting.
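The shading approach itself is just channel arithmetic, and picking a readable text colour can be done with a standard relative-luminance test. Here’s a sketch of both, assuming 8-bit RGB components; the luminance test is my stand-in for whatever the contrast library actually does:

package stills

import "image/color"

// shade scales each channel of c towards black: a factor of 1.0 leaves
// it unchanged, 0.0 gives black. Used to derive the card background
// from the border colour.
func shade(c color.RGBA, factor float64) color.RGBA {
    return color.RGBA{
        R: uint8(float64(c.R) * factor),
        G: uint8(float64(c.G) * factor),
        B: uint8(float64(c.B) * factor),
        A: c.A,
    }
}

// textColor picks black or white text based on the background's
// relative luminance (ITU-R BT.709 coefficients).
func textColor(bg color.RGBA) color.RGBA {
    lum := 0.2126*float64(bg.R) + 0.7152*float64(bg.G) + 0.0722*float64(bg.B)
    if lum > 128 {
        return color.RGBA{0, 0, 0, 255} // dark text on a light background
    }
    return color.RGBA{255, 255, 255, 255} // light text on a dark background
}

Something like shade(border, 0.8) would give a background slightly darker than the border, which is roughly the variation shown above.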
Of course, I’m still doing the easy and fun part. How the UI for making the clip will look is going to be a challenge. More on that in the future if I decide to keep working on this. And if not, at least I’ve got these nice looking stills.
1. The annoying thing about this library is that it doesn’t use Go’s standard Color type, nor does it describe the limits of each component. So for anyone using this library: the ranges for R, G, and B go from 0 to 255, and A goes from 0 to 1. ↩︎
The Alluring Trap Of Tying Your Fortunes To AI
It’s when the tools stop working the way you expect that you realise the full cost of what you bought into.
Devlog: Dialogues
A post describing a playful dialogue styling feature, inspired by rubber-duck debugging, and discussing the process and potential uses for it.
On AI, Process, and Output
Manuel Moreale’s latest post about AI was thought-provoking:
One thing I’m finding interesting is that I see people falling into two main camps for the most part. On one side are those who value output and outcome, and how to get there doesn’t seem to matter a lot to them. And on the other are the people who value the process over the result, those who care more about how you get to something and what you learn along the way.
I recently turned on the next level of AI assistance in my IDE. Previously I was using line auto-complete, which was quite good. This next level gives me something closer to Cursor: prompting the AI to generate full method implementations, or having a chat interaction.
And I think I’m going to keep it on. One nice thing about this is that it’s on-demand: it stays out of the way, letting me implement something by hand if I want to. This is probably going to be the majority of the time, as I do enjoy the process of software creation.
But other times, I just want a capability added, such as marshalling and unmarshalling things to a database. In the past, this would largely be code copied and pasted from another file. With the AI assistance, I can get this code generated for me. Of course I review it — I’m not vibe coding here — but it saves me from making a few subtle bugs and some pretty boring editing.
I guess my point is that these two camps are more porous than people think. There are times where the process is half the fun of making the thing, and others where it’s a slog, and you just want the thing to exist. This is true for me in programming, and I can only guess that it’ll be similar in other forms of art. I guess the trap is choosing to join one camp, feeling that’s the only camp people should be in, and refusing to recognise that others may feel differently.
Merge Schema Changes Only When The Implementation Is Ready
Integrating schema changes and implementation together before merging prevents project conflicts and errors for team members.
You Probably Do Want To Know What You Had for Lunch That Other Day
There’s no getting around the fact that some posts you make are banal. You obviously thought the lunch you were posting about was worthy of sharing at the time: after all, you took the effort to share it. Then a week goes by and you wonder why you posted that. “Nobody cares about this,” you say to yourself. “This isn’t giving value to anyone.”
But I’d argue, as Doc did in Back to the Future, that you’re just not thinking fourth-dimensionally enough. Sure, it may seem pretty banal to you now, but what about 5 years in the future? How about 10? You could be perusing your old posts when you come across the one about your lunch, and be reminded of the food, the atmosphere, the weather, the joys of youth. It could be quite the bittersweet feeling.
Or you could feel nothing. And that’s fine too. The point is that you don’t know how banal a particular post is the moment you make it.
Worse still, you don’t know how banal anything will be 5 years from now (as in right now, the moment you’re reading this sentence). The banality of anything is dynamic: it changes as a function of time. It could be completely irrelevant next week, then the best thing that’s happened to you a week later.
This is why I don’t understand the whole “post essays on my blog and the smaller things on Twitter/Bluesky/Mastodon/whatever” dichotomy some writers have out there. Is what you write on those other sites less worthy than what you write on the site you own? Best be absolutely sure about that when you post it then, as you may come to regret making a point about posting banal tweets 17 years ago, only for that pronouncement about banal tweets to be lost when you decided to move away from that micro-blogging site.
But whatever, you do you. I know for myself that I’d rather keep those supposedly banal thoughts on this site. And yeah, that’ll mean a lot of pretty pointless, uninteresting things get published here: welcome to life.
But with the pressure of time, they could turn into nice, shiny diamonds of days past. Or boring, dirty lumps of coal. Who knows? Only time will answer that.
Gallery: Morning In Sherbrooke
A visit to Sherbrooke in the Dandenong Ranges on Easter Monday included a walk along the falls track, a sighting of a Superb Lyrebird, and a brief exploration of Alfred Nicholas Memorial Garden.
Airing Of Draft Posts
A collection of draft ideas and reflections, amassed over the last year, highlighting a mix of topics ranging from technology insights to personal musings.
On Go And Using Comments For Annotations
Some thoughts on whether Go should have a dedicated syntax for the annotations that comments are currently being used for.
Don't Be Afraid Of Types
Types in coding projects are good. Don’t be afraid to create them when you need to.
Adventures In Godot: Respawning A Falling Platform
My taste of going through a Godot tutorial last week has got me wanting more, so I’ve set about building a game with it. Thanks to my limited art skills, I’m using the same asset pack that was used in the video, although I am planning to add a bit of my own here and there.
But it’s the new mechanics I enjoy working on, such as adding falling platforms. If you’ve played any platformer, you know what these look like: platforms that are suspended in air, until the player lands on them, at which point gravity takes a hold and they start falling, usually into a pit killing the player in the process:
The platforms are built as a CharacterBody2D with an Area2D that will detect when a player enters the collision shape. When they do, a script will run which will have the platform “slipping” for a second, before the gravity is turned on and the platform falls under its own weight. The whole thing is bundled as a reusable scene which I could drag into the level I’m working on.

I got the basics of this working reasonably quickly, yet today I had a devil of a time going beyond that. The issue was that I wanted the platform to respawn after it fell off the map, so that the player wouldn’t get soft-locked in an area where the platform was needed to escape. After a false start trying to reposition the platform at its starting point after it fell, I figured it was just easier to respawn the platform when the old one was removed. To do this I had to solve two problems:
- How do I get the platform starting point?
- How can I actually respawn the platform?
Getting The Starting Point
A Node2D object has the property global_position, which returns the position of the object in world coordinates. However, it seems this position is not correct when the _init function of the attached script is called. I suspect this is because the function is called before the platform is added to the scene tree, which is when the final world coordinates are known.
Fortunately, there exists the _ready notification, which is invoked when the node is added to the scene tree. After some experimentation, I managed to confirm that the global_position property was correct there. So tracking the starting point is as simple as storing that value in a variable:
var init_global_position = null

func _ready():
    init_global_position = global_position
Another option is to use the _enter_tree() notification. From the documentation, it looks like either would probably work here, with the only difference being the order in which the notification is invoked on parents and children (_enter_tree is called on the parent first, whereas _ready is called on the children first).
Respawning The Platform
The next trick was finding out how to respawn the platform. The usual technique for doing so, based on the results of my web searching, is to load the platform scene, instantiate a new instance of it, and add it to the scene tree:
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    add_child(dup)
Many of the examples I’ve seen online added the new scene node as a child of the current node. This wouldn’t work for me, as I wanted to free the current node at the same time, and doing so would also free the newly instantiated child. The fix for this was easy enough: I just added the new node as a child of the current scene instead.
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    get_tree().current_scene.add_child(dup)
    queue_free()
I still had to reposition the new node to the spawn point. Fortunately the global_position property is also settable, so it was simply a matter of setting that property before adding the node to the tree (this is so that it’s correct when the newly instantiated node receives the _ready notification):
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    dup.global_position = init_global_position
    get_tree().current_scene.add_child(dup)
    queue_free()
This spawned the platform at the desired position, but there was a huge problem: when the player jumped on the newly spawned platform, it wouldn’t fall. The Area2D connection was not invoking the script to turn on the gravity:
It took me a while to figure out what was going on, but I came to the conclusion that the packed scene was loading properly, but without the script attached. It turns out a Script is a resource separate from the scene, and it can be loaded and attached to an object via the set_script method:
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
@onready var FallingPlatformScript = preload("res://scripts/falling_platform.gd")

func respawn():
    var dup = FallingPlatform.instantiate()
    # The script is a separate resource and isn't attached when the
    # packed scene is instantiated, so attach it manually.
    dup.set_script(FallingPlatformScript)
    # Set the spawn point before adding the node to the tree, so that
    # it's correct when the node receives _ready.
    dup.global_position = init_global_position
    get_tree().current_scene.add_child(dup)
    # Free the old platform; the new one is parented to the scene.
    queue_free()
Finally, after figuring all this out, I was able to spawn a new falling platform, have it positioned at the starting position of the old platform, and have it react to the player standing on it.
The time it took to work this out was actually a little surprising. I was expecting others to have run into the same problem I was facing, where they were trying to instantiate a scene only to have the scripts not do anything. Yet I spent 45 minutes searching through Stack Overflow and forum posts that didn’t solve my problem. It was only after a bit of experimentation and print-debugging on my own that I realised I actually had to attach the script after instantiating the node.
To be fair, I will attribute some of this to not understanding the problem at first: I initially thought the Area2D wasn’t being instantiated at all. Yet not one of the Stack Overflow answers or forum posts floated the possibility that the script wasn’t being loaded alongside the scene. This does suggest to me that my approach may not be optimal. There does exist a “Local to Scene” switch in the script inspector that could help, although turning it on doesn’t seem to do much. But surely there must be some way to instantiate the script alongside the scene.
Anyway, that’s for later. For now, I’m happy that I’ve got something that works.
Running PeerTube In Coolify
A guide for setting up a basic PeerTube instance on Coolify using a docker-compose file.
Attending the DDD Melbourne 2025 Conference
Yesterday, I attended the DDD Melbourne 2025 conference. This was in service of my yearly goal to get out more, to be around people more often than I have been. So the whole reason I attended was to meet new people. That didn’t happen: I said hi to a few people I once worked with, and spoke to a few sponsors, but that was it. So although I marked it off my goal list, it wasn’t a huge success.

But a dev conference is still a dev conference, and I thought I’d write a few notes on the sessions I attended, just to record what I did get out of it.
Keynote
Emerging trends in robots, by Sue Keay.
The keynote was an interesting session about the state of robotics in Australia. I didn’t get a lot of specifics, but I did get a name for the robot I once saw in a Tokyo department store that, let’s just say, left an impression on me.

First Session
Are you overcomplicating software development? I certainly have been…, by Ian Newmarch.
This speaker was absolutely preaching my gospel around complexity in software development. But he went deeper into why developers are prone to take an inherently complex practice and add additional complexity on top (so-called “accidental complexity”). This is mainly due to human factors: ego, fear, imposter syndrome, and, to some extent, a desire to keep the job interesting.

Very relatable. The only real way to mitigate this is going back to principles such as avoiding premature abstraction, YAGNI, and KISS. The thing about principles is that it’s always a little hard to know when you need them. So remember to always keep a focus on the problem - what you’re trying to solve - and remember that working with people can help here.
Second Session
How to Design Your Next Career Move, by Emily Conaghan.
This speaker went through a process for reflecting on what you want out of your career, and for working out what you need to do to bridge the gap to get there. The process is rather methodical, which is not a bad thing, and there’s a whole workbook component to it. This might be something that’s personally worth doing though: it does feel like I’m drifting aimlessly a little.
Third Session
The Lost Art of good README documentation, by Swapnil Ogale.

I found this one to be quite good. It touched on the properties of what makes a good README for a project, and why you’d want one (the reason being that a developer’s or user’s trust in a project directly relates to its supporting documentation). In short, a good README should have:
- A project overview: basically answering the questions of what this project is, why it exists, and why one should use it.
- How-to instructions: how does one install it, get started using it, etc.
- How one can engage with and contribute to the project: how to get help, contribute changes, etc.
- Credits, license and contact details
But even though these could be described as what a “good” README looks like, a takeaway is that there’s no such thing as a bad README, apart from not having any README at all.
One other thing that I didn’t know was that READMEs are traditionally capitalised so that they appear near the top of an alphanumeric listing of files. That was interesting to know.
Lunch
Yeah, this was the hardest part of the day. But it’s amazing how much time you can kill just by waiting in lines.
Fourth Session
Being consistently wrong, by Tom Ridge.
I was expecting this to be a bit more general, like techniques for keeping an open mind or learning from one’s mistakes. But it was largely focused on task estimation, which is a weakness of mine. Seeing that this was after lunch and I was getting a bit tired around this time, I only half listened. But the takeaways I did get were: the importance of measuring how long tasks take to travel across the board (how long they’re in progress, in review, etc.); using those measurements to determine capacity with formulas derived from queuing theory; keeping the amount of work in progress low; and keeping task duration variance low by slicing tasks.
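I’d hazard a guess that the queuing-theory formula in question was Little’s Law: L = λW, where L is the average number of tasks in progress, λ the average throughput, and W the average time a task spends in the system. That’s my assumption rather than something from the talk, but it ties those exact measurements together: keep work in progress low and, for a given throughput, the time tasks spend in the system drops too.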

These are all valid points, although I’m not sure how applicable they are to how we work at my job. But it may be a worthy talk to revisit if that changes.
Fifth Session
On The Shoulders Of Giants — A Look At Modern Web Development, by Julian Burr.
Despite being a backend developer by day, I’m still curious about the state of web development. This talk was a good one, where the speaker went through the various “milestones” of major technical developments in web technology — such as when JavaScript and jQuery were introduced, when AJAX became a thing, and when CSS was developed (I didn’t know CSS was devised at CERN).

Going back in time was fun (R.I.P. Java applets & Flash), but it seems the near-term future is all React, all the time. And not just React in the traditional sense, but React used for zero-hydration server-side components (Qwik) and out-of-order streaming (React Suspense). Not sure that appeals to me. One thing that does, though, is Vite becoming the build tool du jour for frontend work. That I may look at, since it looks simple enough to get started with.
Some other fun things: JavaScript style sheets was a thing, and Houdini still is a thing.
Sixth Session
Dungeons and… Developers? Building skills in tech teams with table top role playing games, by Kirsty McDonald.
This was the talk that got me in the door, in some respects. I’ve heard of role-playing games being used for scenario planning, so I was curious about the idea of using them for team development, and for practising responses to things like production incidents. The game consisted of the normal things you’d expect from a role-playing game: character cards, a game master, and scenario events with a random-number-generator component to them.

I’ve never played D&D before, so I was curious as to how these games actually ran. Fortunately, I was not disappointed, as the last part of the talk was a walkthrough of an example game with a couple of volunteers from the audience. Definitely a talk worth staying back for.
Locknote
Coding Like it’s 2005, by Aaron Powell
This was a fun look back at the state of the art of web development in 2005: before jQuery, AJAX, and decent editors, when annoying workarounds in JavaScript and CSS were necessary to get anything working in Internet Explorer. This was just before my time as a practicing dev, and apparently trying to replicate rich-client applications in the web browser was all the rage, which was something I missed. It was mainly focused on Microsoft technology, something I don’t have a lot of personal experience in, but I did get flashbacks of using Visual Studio 2003 and version 1 of Firefox.


Lots of fun going down memory lane (R.I.P. clearfix & YUI; rot in hell, IE6 😛).
Overall
I was contemplating not showing up to this, and even while I was there, I was considering leaving at lunchtime, but overall I’m glad that I stayed the whole day. It got me out of the house, and I learnt a few interesting things. And let me be clear: DDD Melbourne and the volunteers did an excellent job! It was a wonderfully run conference with a lot of interesting speakers. I hope to see some of the talks on YouTube later.
But I don’t think I’ll be going to a conference by myself again. I mean, it’s one thing to go if work asks you to: I can handle myself in that situation. But under my own volition? Hmm, it would be much easier going with someone else, just so that I have someone to talk to. It’s clear that I need to do something about my fear of approaching someone I don’t know and striking up a conversation. Ah well, it was worth a try.
An Incomplete List of DRM-Free Media Stores
A collection of links to online stores that sell DRM-Free media.
Apple AI in Mail and What Could Be
Apple AI features in Mail currently do not help me. But they could, if Apple invited us to be more involved in deciding what constitutes an important email.
First Impressions of the Cursor Editor
Trying out the Cursor editor to build a tool to move Micro.blog posts.