Long Form Posts

    Encapsulation In Software Development Is Underrated

    Encapsulation is something object-oriented programming got right.

    Rubberducking: Of Brass and Browsers

    🦆: Did you hear about The Browser Company?

    L: Oh yeah, I heard the CEO wrote a letter about Arc.

    🦆: Yeah, did you ever use Arc?

    L: Nah. Probably won’t now that it seems like they’ve stopped work on it. Heard it was pretty nice though.

    🦆: Yeah, I heard Scott Forstall had an early look at it.

    L: Oh yeah, and how he compared it to a saxophone and recommended making it more like a piano.

    🦆: Yeah.

    L: Yeah. Not sure I agree with him.

    🦆: Oh really?

    L: Yeah. I mean, there’s nothing wrong with pianos. Absolutely love them. But everyone seems to be making those, and no-one’s making saxophones, violins, etc. And we need those instruments too.

    🦆: Yeah, I suppose an orchestra with 30 pianos would sound pretty bland.

    L: Yeah, we need all the instruments: the ones that are approachable, and the ones for those with the technical skills to get the best sound.

    And no-one's a beginner forever. I'm sure there are piano players out there who would like to try something else eventually, like a saxophone.

    🦆: Would you say Vivaldi is like a saxophone?

    L: I'd probably say Vivaldi is like a synthesiser. The basics are approachable for the beginners, yet it's super customisable for those that want to go beyond the basics.

    And just like a synthesiser, it can be easy to get it sounding either really interesting, or really bizarre. You can get in a state where you can't back out and you'll have to start from scratch.

    🦆: Oh, I can't imagine that being for everyone.

    L: No, indeed. Probably for those piano players that would want to try something else.

    Devlog: Dynamo-Browse Now Scanning For UCL Extensions

    Significant milestone in integrating UCL with Dynamo-Browse, as UCL extensions are now being loaded on launch.

    Devlog: UCL — Assignment

    Some thoughts on changing how assignments work in UCL to support subscripts and pseudo-variables.

    Serious Maintainers

    I just learnt that Hugo has changed their layout directory structure (via) and has done so without bumping the major version. I was a little peeved by this: this is a breaking change1 and they’re not signalling it the “semantic versioning” way by going from 1.x.x to 2.0.0. Surely they know that people are using Hugo, and that an ecosystem of sorts has sprung up around it.

    But then a thought occurred: what if they don’t know? What if they’re plugging away at their little project, thinking that it’s them and a few others using it? They probably think it’s safe for them to slip this change in, since it’ll only inconvenience a handful of users.

    I doubt this is actually the case: it’s pretty hard to avoid the various things that are using Hugo nowadays. But this thought experiment led to some reflection on the stuff I make. I’m planning a major change to one of my projects that will break backwards compatibility too. Should I bump the major version number? Could I slip it into a point release? How many people will this touch?

    I could take this route, with the belief it’s just me using this project, but do I actually know that? And even if no-one’s using it now, what would others coming across this project think? What’s to get them to start using it, knowing that I just “pulled a Hugo”? If I’m so carefree about such changes now, could they trust me to not break the things they depend on later?

    Now, thanks to website analytics, I know for a fact that only a handful of people are using the thing I built, so I’m hardly in the same camp as the Hugo maintainers. But I came away from this wondering if it’s worth pretending that making this breaking change will annoy a bunch of users. That others may write their own posts, like this one, if I’m not serious about it. I guess you could call this an example of “fake it till you make it,” or, to borrow a quote from Logan Roy in Succession: being a “serious” maintainer. If I take this project seriously, then others can do so too.

    It might be worth a try. It’s highly unlikely that this by itself will lead to success or adoption, but I can’t see how it would hurt.


    1. Technically it’s not a breaking change, and they will maintain backwards compatibility, at least for a while. But just humour me here. ↩︎

    Devlog: Blogging Tools — Finished Podcast Clips

    Well, it’s done. I’ve finally finished adding the podcast clip feature to Blogging Tools. And I won’t lie to you, it took longer than expected, even after enabling some of the AI features my IDE came with. Beyond the complexity of the implementation itself, which touched most of the key subsystems of Blogging Tools, the biggest challenge came from designing how the clip creation flow should work. Blogging Tools is at a disadvantage compared to the clipping features in podcast players in that it:

    1. Doesn’t know what feeds you’ve subscribed to,
    2. Doesn’t know what episode you’re listening to, and
    3. Doesn’t know where in the episode you are.

    Blogging Tools needs to know this stuff for creating a clip, so there was no alternative to having the user input this when they’re creating the clip. I tried to streamline this in a few ways:

    • Feeds had to be predefined: While it’s possible to create a clip from an arbitrary feed, it’s a bit involved, and the path of least resistance is to set up the feeds you want to clip ahead of time. This works for me as I only have a handful of feeds I tend to make clips from.
    • Prioritise recent episodes: The clips I tend to make come from podcasts that touch on current events, so any episode listings should prioritise the more recent ones. The episode list is in the same order as the feed, which is not strictly the same as most-recent-first, but fortunately the shows I subscribe to list episodes in reverse chronological order.
    • Easy coarse and fine positioning of clips: This means going straight to a particular point in the episode by entering the timestamp. This is mainly to keep the implementation simple, but I’ve always found trying to position the clip range on a visual representation of a waveform frustrating. It was always such a pain trying to make fine adjustments to where the clip should end. So I just made this simple and allowed you to advance the start time and duration in single-second increments by tapping a button.

    Rather than describe the whole flow at length, or prepare a set of screenshots, I’ve decided to record a video of how this all fits together.

    The rest was pretty straightforward: the videos are made using ffmpeg, and publishing them on Micro.blog involved the Micropub API. There were some small frills added to the UI using both HTMX and Stimulus.JS so that job status updates could be pushed via web-sockets. They weren’t necessary, as it’s just me using this, but this project is becoming a bit of a testbed for stretching my skills a little, so I think small frills like this helped a bit.
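    For those curious about the Micropub side, it boils down to a form-encoded POST with a bearer token. This is just a sketch of what the spec describes — the endpoint, token, and function name below are placeholders, not necessarily how Blogging Tools does it:

    import (
        "net/http"
        "net/url"
        "strings"
    )

    // publishEntry creates a new post via a Micropub endpoint. Per the
    // Micropub spec, this is a form-encoded POST with an "h=entry"
    // property and an Authorization bearer token. The endpoint and
    // token are placeholders here.
    func publishEntry(endpoint, token, content string) error {
        form := url.Values{
            "h":       {"entry"},
            "content": {content},
        }

        req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
        req.Header.Set("Authorization", "Bearer "+token)

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }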

    I haven’t made a clip for this yet or tested out how this will feel on a phone, but I’m guessing both will come in time. I also learnt some interesting tidbits, such as the fact that the source audio of an <audio> tag requires an HTTP response that supports range requests. Seeking won’t work otherwise: trying to change the time position will just seek the audio back to the start.
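    If you’re doing this in Go, the standard library will handle the range requests for you: http.ServeContent (and http.ServeFile) answer Range headers out of the box. A rough sketch of what such an audio endpoint could look like — the file path here is just a placeholder:

    import (
        "net/http"
        "os"
    )

    // serveEpisode serves an audio file with support for HTTP range
    // requests, which http.ServeContent handles automatically. Without
    // range support, seeking in an <audio> element snaps back to the start.
    func serveEpisode(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("media/episode.mp3") // placeholder path
        if err != nil {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        defer f.Close()

        fi, err := f.Stat()
        if err != nil {
            http.Error(w, "stat failed", http.StatusInternalServerError)
            return
        }
        http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
    }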

    Anyway, good to see this in prod and to be moving on to something else. I’m getting excited thinking about the next thing I want to work on. No spoilers now, but it features both Dynamo Browse and UCL.

    Finally, I just want to make the point that this would not be possible without the open RSS podcasting ecosystem. If I was listening to podcasts on YouTube, forget it: I wouldn’t have been able to build something like this. I know for myself that I’ll continue to listen to RSS podcasts for as long as podcasters continue to publish them. Long may it be so.

    Rubberducking: More On Mocking

    Mocking in unit tests can be problematic due to the growing complexity of service methods with multiple dependencies, leading to increased maintenance challenges. But the root cause may not be the mocks themselves.

    Devlog: Blogging Tools — Ideas For Stills For A Podcast Clips Feature

    I recently discovered that Pocketcasts for Android has changed their clip feature. It still exists, but instead of producing a video which you could share on the socials, it produces a link to play the clip from the Pocketcasts web player. Understandable to some degree: it always took a little bit of time to make these videos. But hardly a suitable solution for sharing clips of private podcasts: one could just listen to the entire episode from the site. Not to mention relying on an external service for as long as those links (or the original podcast) are around.

    So… um, yeah, I’m wondering if I could build something for myself that could replicate this.

    I’m thinking of another module for Blogging Tools. I was already using this tool to crop the clip videos that came from Pocketcasts, so it was already in my workflow. It also has ffmpeg bundled in the deployable artefact, meaning that I could use it to produce video. Nothing fancy: I’m thinking of a still showing the show title, episode title, and artwork, with the audio track playing. I’m pretty confident that ffmpeg can handle such tasks.
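    I haven’t settled on the exact invocation yet, but something along these lines is what I have in mind: an os/exec call to the bundled ffmpeg that loops a single still over a slice of the episode audio. The flags are standard ffmpeg ones; the function and file names are just placeholders.

    import "os/exec"

    // makeClipVideo renders a video from a single still image and a slice
    // of the episode audio. -loop 1 repeats the image, -ss/-t select the
    // clip from the audio input, and -shortest ends the video when the
    // audio runs out.
    func makeClipVideo(still, audio, out, start, duration string) error {
        cmd := exec.Command("ffmpeg",
            "-loop", "1", "-i", still,
            "-ss", start, "-t", duration, "-i", audio,
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac", "-pix_fmt", "yuv420p",
            "-shortest", out,
        )
        return cmd.Run()
    }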

    I decided to start with the fun part: making the stills. I began by using Draw2D to provide a very simple frame where I could place the artwork and render the text. I just started with primary colours so I could get the layout looking good:

    Auto-generated description: A date, episode title, and show name are displayed alongside an image of ocean waves against rocks in a colorful border.

    I’m using Roboto Semi-bold for the title font, and Oswald Regular for the date. I do like the look of Oswald: the narrower style contrasts nicely with the neutral Roboto. Draw2D provides methods for measuring text sizes, which I’m using to power the text wrapping layout algorithm (it’s pretty dumb: it basically adds words to a line until they no longer fit the available space).
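    The wrapping is not much more than a greedy loop over measured widths. A sketch of the idea, assuming a Draw2D graphic context with the font and size already set (the function name is mine):

    import (
        "strings"

        "github.com/llgcode/draw2d/draw2dimg"
    )

    // wrapText splits text into lines that fit within maxWidth, measuring
    // each candidate line with the graphic context's current font. Words
    // are added to a line until the next one no longer fits.
    func wrapText(gc *draw2dimg.GraphicContext, text string, maxWidth float64) []string {
        var lines []string
        var current string
        for _, word := range strings.Fields(text) {
            candidate := word
            if current != "" {
                candidate = current + " " + word
            }
            left, _, right, _ := gc.GetStringBounds(candidate)
            if right-left > maxWidth && current != "" {
                lines = append(lines, current)
                current = word
            } else {
                current = candidate
            }
        }
        if current != "" {
            lines = append(lines, current)
        }
        return lines
    }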

    I got the layout nailed down yesterday evening. This evening I focused on colour.

    I want the frame to be interesting and close to the prominent colours that come from the artwork. I found this library which returns the dominant colours of an image using K-means clustering. I’ll be honest: I haven’t looked at how this actually works. But I tried the library out with some random artwork from Lorem Picsum, and I was quite happy with the colours it was returning. After adding this library1 to calculate the contrast for the text colour, plus a slight shadow, the stills started looking pretty good:

    Auto-generated description: Six rectangular cards each feature a different background image with the date 14 April 2020, text A pretty long episode title, and My test show.
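    The contrast calculation itself is handled by that second library, so what follows is just the general idea rather than its actual code: work out the background’s relative luminance (the WCAG formula) and pick dark or light text accordingly.

    import (
        "image/color"
        "math"
    )

    // textColourFor picks a legible text colour for the given background
    // using the WCAG relative-luminance formula: dark text on light
    // backgrounds, light text on dark ones.
    func textColourFor(bg color.RGBA) color.RGBA {
        lum := func(c uint8) float64 {
            v := float64(c) / 255.0
            if v <= 0.03928 {
                return v / 12.92
            }
            return math.Pow((v+0.055)/1.055, 2.4)
        }
        l := 0.2126*lum(bg.R) + 0.7152*lum(bg.G) + 0.0722*lum(bg.B)
        if l > 0.5 {
            return color.RGBA{A: 255} // black text
        }
        return color.RGBA{R: 255, G: 255, B: 255, A: 255} // white text
    }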

    I then tried some real podcast artwork, starting with ATP. And that’s where things started going off the rails a little:

    Auto-generated description: Four color variations of a promotional card design featuring a logo with rainbow stripes, a date of 14 April 2020, and text stating A pretty long episode title and My test show.

    The library returns the colours in order of frequency, and I was using the first colour as the border and the second as the card background. But I’m guessing since the ATP logo has so few actual colours, the K-means algorithm was finding those of equal prominence and returning them in a random order. Since the first and second are of equal prominence, the results were a little garish and completely random.

    To reduce the effects of this, I finished the evening by trying a variation where the card background was simply a shade of the border. That still produced random results, but at least the colour choices were a little more harmonious:

    Auto-generated description: A series of four visually distinct cards display a logo, date, episode title, and show subtitle, each set against different colored backgrounds.
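    The shade itself can be as simple as scaling each channel of the border colour by a constant factor. This is a sketch of that idea rather than the exact code, and the 0.6 factor is just a guess at what looks right:

    import "image/color"

    // shade returns a darker version of the given colour by scaling each
    // channel by factor (between 0 and 1). The card background can then
    // be derived from the border colour, e.g. shade(border, 0.6).
    func shade(c color.RGBA, factor float64) color.RGBA {
        return color.RGBA{
            R: uint8(float64(c.R) * factor),
            G: uint8(float64(c.G) * factor),
            B: uint8(float64(c.B) * factor),
            A: c.A,
        }
    }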

    I’m not sure what I want to do here. I’ll need to explore the library a little, just to see whether it’s possible to reduce the amount of randomness. Might be that I go with the shaded approach and just keep it random: having some variety could make things interesting.

    Of course, I’m still doing the easy and fun part. How the UI for making the clip will look is going to be a challenge. More on that in the future if I decide to keep working on this. And if not, at least I’ve got these nice looking stills.


    1. The annoying thing about this library is that it doesn’t use Go’s standard Color type, nor does it describe the limits of each component. So for anyone using this library: the ranges for R, G, and B go from 0 to 255, and A goes from 0 to 1. ↩︎

    The Alluring Trap Of Tying Your Fortunes To AI

    It’s when the tools stop working the way you expect that you realise the full cost of what you bought into.

    Devlog: Dialogues

    A post describing a playful dialogue styling feature, inspired by rubber-duck debugging, and discussing the process and potential uses for it.

    On AI, Process, and Output

    Manuel Moreale’s latest post about AI was thought-provoking:

    One thing I’m finding interesting is that I see people falling into two main camps for the most part. On one side are those who value output and outcome, and how to get there doesn’t seem to matter a lot to them. And on the other are the people who value the process over the result, those who care more about how you get to something and what you learn along the way.

    I recently turned on the next level of AI assistance in my IDE. Previously I was using line auto-complete, which was quite good. This next level gives me something closer to Cursor: prompting the AI to generate full method implementations or having a chat interaction.

    And I think I’m going to keep it on. One nice thing about this is that it’s on-demand: it stays out of the way, letting me implement something by hand if I want to. This is probably going to be the majority of the time, as I do enjoy the process of software creation.

    But other times, I just want a capability added, such as marshalling and unmarshalling things to a database. In the past, this would largely be code copied and pasted from another file. With the AI assistance, I can get this code generated for me. Of course I review it — I’m not vibe coding here — but it saves me from making a few subtle bugs and some pretty boring editing.

    I guess my point is that these two camps are more porous than people think. There are times when the process is half the fun of making the thing, and others where it’s a slog, and you just want the thing to exist. This is true for me in programming, and I can only guess that it’ll be similar in other forms of art. I guess the trap is choosing to join one camp, feeling that’s the only camp that people should be in, and refusing to recognise that others may feel differently.

    Merge Schema Changes Only When The Implementation Is Ready

    Integrating schema changes and implementation together before merging prevents project conflicts and errors for team members.

    You Probably Do Want To Know What You Had for Lunch That Other Day

    There’s no getting around the fact that some posts you make are banal. You obviously thought your lunch was worth posting about at the time: after all, you took the effort to share it. Then a week goes by and you wonder why you posted that. “Nobody cares about this,” you say to yourself. “This isn’t giving value to anyone.”

    But I’d argue, as Doc did in Back to the Future, that you’re just not thinking fourth-dimensionally enough. Sure, it may seem pretty banal to you now, but what about 5 years in the future? How about 10? You could be perusing your old posts when you come across the one about your lunch, and be reminded of the food, the atmosphere, the weather, the joys of youth. It could be quite the bittersweet feeling.

    Or you could feel nothing. And that’s fine too. The point is that you don’t know how banal a particular post is the moment you make it.

    Worse still, you don’t know how banal anything will be 5 years from now (as in right now, the moment you’re reading this sentence). The banality of anything is dynamic: it changes as a function of time. It could be completely irrelevant next week, then the best thing that’s happened to you a week later.

    This is why I don’t understand the whole “post essays on my blogs and the smaller things on Twitter/Bluesky/Mastodon/whatever” dichotomy some writers have out there. Is what you write on those other sites less worthy than what you write on the site you own? Best be absolutely sure about that when you post it then, as you may come to regret making a point about posting banal tweets 17 years ago, only for that musing about banal tweets to be lost when you decided to move away from that micro-blogging site.

    But whatever, you do you. I know for myself that I’d rather keep those supposedly banal thoughts on this site. And yeah, that’ll mean a lot of pretty pointless, uninteresting things get published here: welcome to life.

    But with the pressure of time, they could turn into nice, shiny diamonds of days past. Or boring, dirty lumps of coal. Who knows? Only time will answer that.

    Gallery: Morning In Sherbrooke

    A visit to Sherbrooke in the Dandenong Ranges on Easter Monday included a walk along the falls track, a sighting of a Superb Lyrebird, and a brief exploration of Alfred Nicholas Memorial Garden.

    New Desk Chair Day

    About the new desk chair I bought that arrived today.

    Airing Of Draft Posts

    A collection of draft ideas and reflections, amassed over the last year, highlighting a mix of topics ranging from technology insights to personal musings.

    On Go And Using Comments For Annotations

    Some thoughts on whether Go should have a dedicated syntax for the annotations that comments are currently being used for.

    Don't Be Afraid Of Types

    Types in coding projects are good. Don’t be afraid to create them when you need to.

    Replacing A Side Mirror Of A Toyota Echo

    Replacing a broken car mirror myself.

    Adventures In Godot: Respawning A Falling Platform

    My taste of going through a Godot tutorial last week has got me wanting more, so I’ve set about building a game with it. Thanks to my limited art skills, I’m using the same asset pack that was used in the video, although I am planning to add a bit of my own here and there.

    But it’s the new mechanics I enjoy working on, such as adding falling platforms. If you’ve played any platformer, you know what these look like: platforms that are suspended in air until the player lands on them, at which point gravity takes hold and they start falling, usually into a pit, killing the player in the process:

    The platforms are built as a CharacterBody2D with an Area2D that will detect when a player enters the collision shape. When they do, a script will run which will have the platform “slipping” for a second, before the gravity is turned on and the platform falls under its own weight. The whole thing is bundled as a reusable scene which I could drag into the level I’m working on.

    Auto-generated description: A game development interface with a sprite and code editor is shown, from a software environment like Godot.

    I got the basics of this working reasonably quickly, yet today, I had a devil of a time going beyond that. The issue was that I wanted the platform to respawn after it fell off the map, so that the player wouldn’t get soft-locked at an area where the platform was needed to escape. After a false start trying to reposition the platform at its starting point after it fell, I figured it was just easier to respawn the platform when the old one was removed. To do this I had to solve two problems:

    1. How do I get the platform starting point?
    2. How can I actually respawn the platform?

    Getting The Starting Point

    A Node2D object has the property global_position, which returns the position of the object based on the world coordinates. However, it seems like this position is not correct when the _init function of the attached script is called. I suspect this is because this function is called before the platform is added to the scene tree, which is when the final world coordinates become known.

    Fortunately, there exists the _ready notification, which is invoked when the node is added to the scene tree. After some experimentation, I managed to confirm that the global_position property was correct at that point. So tracking the starting point is as simple as storing that value in a variable:

    var init_global_position = null
    
    func _ready():
    	init_global_position = global_position
    

    Another option is to use the _enter_tree() notification. From the documentation, it looks like either would probably work here, with the only difference being the order in which this notification is invoked on parents and children (_enter_tree is called by the parent first, whereas _ready is called by the children first).

    Respawning The Platform

    The next trick was finding out how to respawn the platform. The usual technique for doing so, based on the results of my web searching, is to load the platform scene, instantiate a new instance of it, and add it to the scene tree.

    @onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
    
    func respawn():	
        var dup = FallingPlatform.instantiate()
        add_child(dup)
    

    Many of the examples I’ve seen online added the new scene node as a child of the current node. This wouldn’t work for me as I wanted to free the current node at the same time, and doing so would free the newly instantiated child. The fix for this was easy enough: I just added the new node as a child of the current scene.

    @onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
    
    
    func respawn():	
        var dup = FallingPlatform.instantiate()
        get_tree().current_scene.add_child(dup)
        queue_free()
    

    I still had to reposition the new node to the spawn point. Fortunately the global_position property is also settable, so it was simply a matter of setting that property before adding it to the tree (this is so that it’s correct when the newly instantiated node receives the _ready notification).

    @onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
    
    func respawn():	
        var dup = FallingPlatform.instantiate()
        dup.global_position = init_global_position
        get_tree().current_scene.add_child(dup)
        queue_free()
    

    This spawned the platform at the desired position, but there was a huge problem: when the player jumped on the newly spawned platform, it wouldn’t fall. The Area2D connection was not invoking the script to turn on the gravity:

    It took me a while to figure out what was going on, but I came to the conclusion that the packed scene was loading properly, but without the script attached. Turns out a Script is a resource separate from the scene, and can be loaded and attached to an object via the set_script method:

    @onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
    @onready var FallingPlatformScript = preload("res://scripts/falling_platform.gd")
    
    func respawn():	
        var dup = FallingPlatform.instantiate()
        dup.set_script(FallingPlatformScript)
    
        dup.global_position = init_global_position
        get_tree().current_scene.add_child(dup)
        queue_free()
    

    Finally, after figuring all this out, I was able to spawn a new falling platform, have it positioned at the starting position of the old platform, and react to the player standing on it.

    The time it took to work this out is actually a little surprising. I was expecting others to run into the same problem I was facing, where they were trying to instantiate a scene only to have the scripts not do anything. Yet it took me 45 minutes of web searching through Stack Overflow and forum posts that didn’t solve my problem. It was only after a bit of experimentation and print-debugging on my own that I realised that I actually had to attach the script after instantiating the node.

    To be fair, I will attribute some of this to not understanding the problem at first: I initially thought the Area2D wasn’t being instantiated at all. Yet not one of the Stack Overflow answers or forum posts floated the possibility that the script wasn’t being loaded alongside the scene. This does suggest to me that my approach may not be optimal. There does exist a “Local to Scene” switch in the script inspector that could help, although turning it on doesn’t seem to do much. But surely there must be some way to instantiate the script alongside the scene.

    Anyway, that’s for later. For now, I’m happy that I’ve got something that works.

Older Posts →