It’s always after you commit to a deadline that you find the tasks that you forgot to do.
I think if I ever created a Tetris game for the TI-83 graphing calculator, I would call it “Tetris Instruments.”
My Position On Blocking AI Web Crawlers
I’m seeing a lot of posts online about sites and hosting platforms blocking web crawlers used for AI training. I can completely understand their position, and fully support them: it’s their site and they can do what they want.
Allow me to lay my cards on the table. My current position is to allow these crawlers to access my content. I’m choosing to opt in, or rather, not to opt out. I’m probably in the minority here (well, the minority of those I follow), but I do have a few reasons for this, with the principal one being that I use services like ChatGPT and get value from them. So to prevent them from training their models on my posts feels personally hypocritical to me. It’s the same reason why I don’t opt out of GitHub Copilot crawling my open source projects (although that’s a little more theoretical, as I’m not a huge user of Copilot). To some, this position might sound weird, and when you consider the gulf between the value these AI companies get from scraping the web versus the value I get from them as a user, it may seem downright stupid. And if you approach it from a logical perspective, it probably is. But hey, we’re in the realm of feelings, and right now this is just how I feel. Of course, if I were to make a living out of this site, it would be a different story. But I don’t.
And this leads to the tension I see between site owners making decisions regarding their own content, and services making decisions on behalf of their users. This site lives on Micro.blog, so I’m governed by what Manton chooses to do or not do regarding these crawlers. I’m generally in favour of what Micro.blog has chosen so far: allowing people to block these scrapers via “robots.txt”, but not yet blocking requests based on their IP address. I’m aware that others may not agree, and I can’t, in principle, reject the notion of a hosting provider choosing to block these crawlers at the network layer. I am, and will continue to be, a customer of such services.
But I do think some care is warranted, especially when it comes to customers (and non-customers) asking these services to add these network blocks. You may have good reason to demand this, but just remember there are users of these services whose opinions may differ. I personally would prefer a mechanism where you opt into these crawlers, and that’s an option I’d probably take (or probably not; my position is not that strong). I know that’s not possible under all circumstances, so I’m not going to cry too much if a blanket ban is what’s offered instead.
I will make a point about some comments I’ve seen that, if taken uncharitably, imply that creators who have no problem with these crawlers don’t care about their content. I think such opinions should be worded carefully. I know how polarising the use of AI currently is, and making such remarks, particularly within posts that are already heated due to the author’s feelings regarding these crawlers, risks spreading that heat to those who read them. The tone gives the impression that creators okay with these crawlers don’t care about what they push online, or should care more than they do. That might be true for some — might even be true for me once in a while — but making such blanket assumptions can come off as a little insulting. And look, I know that’s not what they’re saying, but it can come across that way at times.
Anyway, that’s my position as of today. Like most things here, this may change over time, and if I become disillusioned with these companies, I’ll join the blockade. But for the moment, I’m okay with sitting this one out.
Finally did something today that I should’ve done a long time ago: buy a UPS. Hopefully power outages will no longer bring down my Mac Mini server while I’m away (power is usually quite reliable when I’m home, but as soon as I leave for any extended period of time… 🪫).

Sometimes I wonder how and why my work email address got onto various B2B marketing email lists. “Want to buy some network gear, or set up a meeting with our account manager?” What? No! Even if I wanted to, that’s not a decision I’m authorised to make.
In today’s demonstration of the gulf between taste and ability, may I present my attempt at fixing the fence extension:

Part of the challenge was getting to it. I had to hack out a path through the overgrown beds:

Trust me when I say that this is an improvement. 😅
Checked out of the Cockatiel Cafe and heading home to Melbourne. Always a little melancholy leaving Canberra, but I’m sure to be back soon enough. As for the “residents” I was looking after, I’ll be seeing them again real soon. More posts then, I’m sure.

One of these days, I’m going to make a change to a Dockerfile or a GitHub workflow, and it’s going to work the first time.
🔗 How the “Nutbush” became Australia’s unofficial national dance
It’s amusing: I grew up thinking everyone did this, right up until a few years ago when someone from overseas told me they never learnt this dance. Anyway, this is totally a thing. Last wedding I attended, we absolutely did the Nutbush. 😄
Been asked to do a routine task today. This is the fifth time I’ve started it, the fifth time I said to myself “hmm, I should probably automate this,” and the fifth time I just did it manually. Now wondering if that was time well spent.

Blogging Gallery Tool
Oof! It’s been a while, hasn’t it.
Not sure why I expected my side-project work to continue while I’m here in Canberra. Feels like a waste of a trip to go somewhere — well, not “unique”, I’ve been here before; but different — and expect to spend all your time indoors writing code. Maybe a choice I would’ve made when I was younger, but now? Hmm, better to spend my time outdoors, “touching grass”. So that’s what I’ve been doing.
But I can’t do that all the time, and although I still have UCL (I’ve made some small changes recently, but nothing worth writing about) and Photo Bucket, I spent this past fortnight working on new things.
The first was an aborted attempt at an RSS reader for Android that works with Feedbin. I did get something working, but I couldn’t get it onto my mobile, and frankly it was rather ugly. So I’ve set that idea aside for now. Might revisit it again.
But all my outdoor adventures did motivate me to actually finish something I’ve been wanting to do for a couple of years now. For you see, I take a lot of photos and I’d like to publish them on my Micro.blog in the form of a GLightbox gallery (see this post for an example). But making these galleries is a huge pain. Setting aside that I always forget the short-codes to use, it’s just a lot of work. I’m always switching back and forth between the Upload section in Micro.blog, looking at the images I want to include, and a text file where I’m working on the gallery markup and captions.
I’ve been wishing for some tool which would take on much of this work for me. I’d give it the photos, write the captions, and it would generate the markup. I’ve had a run at building something that would do this a few times already, including an idea for a feature in Photo Bucket. But I couldn’t get over the amount of effort it would take to upload, process, and store the photos. It’s not that it would be hard, but it always seemed like double handling, since their ultimate destination was Micro.blog. Plus, I was unsure as to how much effort I wanted to put into this, and the minimum amount of effort needed to deal with the images seemed like a bit of a hassle.

It turns out the answer was in front of me this whole time. The hard part was preparing the markup, so why couldn’t I build something that simply did that? The images would already be in Micro.blog; just use their URLs. A much simpler approach indeed.
So I started working on “Blogging Tools”, a web-app that’ll handle this part of making galleries. First, I upload the images to Micro.blog, then I copy the image tags into this tool:

The tool will parse these tags, preserving things like the “alt” attribute, and present the images in the order they’ll appear in the gallery, with text boxes beside each one allowing me to write the caption.

Once I’m done, I can then “render” the gallery, which will produce the Hugo short-codes that I can simply copy and paste into the post.
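Under the hood, that part boils down to walking the pasted HTML for img tags and then spitting out the short-code markup. Here’s a rough sketch of the idea, which isn’t the actual code: I’m using “gallery” and “galleryimg” as placeholder short-code names, and the x/net/html package here is just one way of doing the parsing:

package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

type galleryImage struct {
	Src, Alt, Caption string
}

// parseImageTags walks the pasted HTML and pulls out the "src" and "alt"
// attributes of every img tag, in the order they appear.
func parseImageTags(pasted string) ([]galleryImage, error) {
	doc, err := html.Parse(strings.NewReader(pasted))
	if err != nil {
		return nil, err
	}

	var images []galleryImage
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "img" {
			var img galleryImage
			for _, attr := range n.Attr {
				switch attr.Key {
				case "src":
					img.Src = attr.Val
				case "alt":
					img.Alt = attr.Val
				}
			}
			images = append(images, img)
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
	return images, nil
}

// renderGallery produces the short-code markup to paste into the post,
// once the captions have been filled in.
func renderGallery(images []galleryImage) string {
	var b strings.Builder
	b.WriteString("{{< gallery >}}\n")
	for _, img := range images {
		fmt.Fprintf(&b, "{{< galleryimg src=%q alt=%q caption=%q >}}\n", img.Src, img.Alt, img.Caption)
	}
	b.WriteString("{{< /gallery >}}\n")
	return b.String()
}

func main() {
	images, err := parseImageTags(`<img src="https://example.com/photo1.jpg" alt="A cockatiel">`)
	if err != nil {
		panic(err)
	}
	images[0].Caption = "One of the residents"
	fmt.Print(renderGallery(images))
}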


This took me a few evenings of work. It’s a simple Go app, using Fiber and Rainstorm, running in Docker. Seeing that the image files themselves are not managed by the tool, once I got the image parsing and rendering done, the rest was pretty straightforward. It’s amazing to think that removing the image handling side of things has turned this once “sizeable” tool into something that was quick to build and, most importantly, finally exists.

I do have more ideas for this “Blogging Tool”. The next idea is porting various command line tools that do simple image manipulation to WASM so I can run them in the browser (these tools were used to crop and produce the shadow of the screenshot in this post). I’m hoping that these would work on the iPad, so that I can do more of the image processing there rather than give up and go to a “real” computer. I should also talk a little about why I chose Rainstorm over SQLite, or whether that was a good idea. Maybe more on those topics later, but I’ll leave it here for now.
macOS has cat, but not tac. Fortunately, Vim came to the rescue with this command:
:global/^/move 0
Source: Superuser
Thinking About Plugins In Go
Thought I’d give Go’s plugin package a try for something. Seems to work fine for the absolutely simple things. But start importing any dependencies and it becomes a non-starter. You start seeing these sorts of error messages when you try to load the plugin:
plugin was built with a different version of package golang.org/x/sys/unix
Looks like the host and plugins need to have exactly the same dependencies. To be fair, the package documentation says as much, and also states that the best use of plugins is for dynamically loaded modules built from the same source. But that doesn’t help me with what I’m trying to do, which is encoding a bunch of private struct types as Protobuf messages.
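For the record, the basic mechanics of the plugin package look something like the sketch below. The names are made up for illustration; my real plugin pulls in the Protobuf packages, which is exactly where the dependency mismatch bites:

// encoder_plugin.go — built with: go build -buildmode=plugin -o encoder.so encoder_plugin.go
package main

// Encode is the symbol the host looks up. In my case this would marshal one
// of those private struct types into Protobuf bytes; here it just returns a
// placeholder.
func Encode(values map[string]any) ([]byte, error) {
	return []byte("encoded"), nil
}

// host.go — the host process that loads the plugin at runtime
package main

import (
	"fmt"
	"log"
	"plugin"
)

func main() {
	p, err := plugin.Open("encoder.so")
	if err != nil {
		log.Fatal(err)
	}

	sym, err := p.Lookup("Encode")
	if err != nil {
		log.Fatal(err)
	}

	// The symbol comes back as plugin.Symbol (an any), so it needs a type
	// assertion to the function signature the host expects.
	encode, ok := sym.(func(map[string]any) ([]byte, error))
	if !ok {
		log.Fatal("Encode has an unexpected type")
	}

	out, err := encode(map[string]any{"id": 1})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d bytes\n", len(out))
}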
So it might be that I’ll need to find another approach. I wonder how others would do this. An embedded scripting language would probably not be suitable for this, since I’m dealing with Protobuf and byte slices. Maybe building the plugin as a C shared object? That could work, but then I’d lose all the niceties that come from using Go’s type system.
Another option would be something like WASM. It’s interesting seeing WASM modules becoming a bit of a thing for plugin architectures. There’s even a Go runtime to host them. The only question is whether they would have the same facilities as a regular process would have, like network access; or whether they’re completely sandboxed, and you as the plugin host would need to add support for these facilities.
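From my brief look at wazero, hosting a module goes something like the following. This is a sketch based only on skimming the docs, so treat the details, along with the “plugin.wasm” file and “transform” export, as assumptions on my part:

package main

import (
	"context"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	// The runtime is the sandbox: the module only gets what the host gives it.
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// WASI gives the module basic OS-ish facilities (stdio, clocks), but
	// nothing like raw network access unless the host wires that up itself.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	wasmBytes, err := os.ReadFile("plugin.wasm")
	if err != nil {
		log.Fatal(err)
	}

	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		log.Fatal(err)
	}

	// Exported functions take and return raw uint64s, so anything structured
	// (like Protobuf bytes) has to go through the module's linear memory.
	fn := mod.ExportedFunction("transform")
	if fn == nil {
		log.Fatal("module does not export transform")
	}
	results, err := fn.Call(ctx, 42)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(results)
}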
I guess I’d find out if I were to spend any more time looking at this. But this has been a big enough distraction already. Building a process to shell out to would work just fine, so that’s probably what I’ll ultimately do.
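That approach would look something like this, with the bytes going over stdin and stdout (the “encode-helper” binary is hypothetical):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// encodeViaHelper runs a helper binary, feeds it the raw input on stdin, and
// reads the Protobuf-encoded result from its stdout.
func encodeViaHelper(helperPath string, input []byte) ([]byte, error) {
	cmd := exec.Command(helperPath)
	cmd.Stdin = bytes.NewReader(input)

	var out, errOut bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errOut

	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("helper failed: %w (stderr: %s)", err, errOut.String())
	}
	return out.Bytes(), nil
}

func main() {
	encoded, err := encodeViaHelper("./encode-helper", []byte(`{"id": 1}`))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("got %d bytes", len(encoded))
}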
It’s easy for me to say this now, but I would pay a non-zero number of dollars for a set of well designed and well curated sites that can replace Know Your Meme, Fandom, and all these song lyric sites. I’d be fine if they also host ads, so long as there’s one or two, and none of them are video.
I’d be curious to know if there’re any Go apps that are using the plugin package. I’m not aware of any myself; most seem to use things like shell-outs or embedded languages. It seems like the package itself is little more than an experiment so I’m not that surprised, but it’s a little disappointing.
I’m a little suspicious of using project starter kits for learning something new. Sure it can whip up that React or Svelte web-app in a few seconds, but then you’re left with maintaining project infrastructure that you didn’t build yourself. Might be best to start learning projects from scratch.
One of these days, I’m going to write a long form post, and do the narration in a single take.
Word Cloud
From Seth’s blog:
Consider building a word cloud of your writing.
Seems like a good idea so that’s what I did, taking the contents of the first page of this blog. Here it is:

Some observations:
- One of the most prominent words is “just”, with “it’s” not far behind. I thought it was because I started a lot of sentences with “it’s just”, but it turns out I’ve only used that phrase once, while the individual words show up around 10 times each. I guess I use “just” a lot (apparently, so does Seth). I am surprised to see the word “anyway” only showing up twice.
- Lots of first-person pronouns and contractions, like “I’m”, “I’ve”, and “mine”. That’s probably not going to change either. This is just the tonal choice I’ve made. I read many blogs that mainly speak in the second person, and I don’t think it’s a style that works for me. Although I consciously know that they’re not speaking to me directly, or even to the audience as a whole, I don’t want to give that impression myself, unless that’s my intention. So it’ll be first person for the foreseeable future, I’m sure.
- Because it’s only the first page, many of the more prominent words are from recent posts. So lots about testing, OS/2, and Bundanoon. I would like to cut down on how much I write about testing. A lot of it is little more than venting, which I guess is what one does on their blog, but I don’t want to make a habit of it.
- I see the word “good” is prominent. That’s good: not a lot of negative writing (although, this is a choice too).
- I see the word “video” is also prominent. That’s probably not as good. Might be a sign I’m talking a little too much about the videos I’ve been watching.
Anyway, I thought these findings were quite interesting. One day, I’ll have to make another word cloud across all the posts on this blog.
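If I do, the counting side of it is simple enough to sketch out in Go. This isn’t what produced the cloud above, just a rough illustration of the idea:

package main

import (
	"fmt"
	"os"
	"regexp"
	"sort"
	"strings"
)

func main() {
	// Read a plain-text dump of the posts, passed as the first argument.
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}

	// Split on anything that isn't a letter or apostrophe, so contractions
	// like "it's" and "I'm" survive as single words.
	splitter := regexp.MustCompile(`[^a-zA-Z']+`)
	words := splitter.Split(strings.ToLower(string(raw)), -1)

	counts := map[string]int{}
	for _, w := range words {
		if len(w) > 2 { // skip very short words like "a", "of", "it"
			counts[w]++
		}
	}

	// Sort by frequency: a word cloud scales each word by this count.
	type wordCount struct {
		word  string
		count int
	}
	var sorted []wordCount
	for w, c := range counts {
		sorted = append(sorted, wordCount{w, c})
	}
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].count > sorted[j].count })

	if len(sorted) > 20 {
		sorted = sorted[:20]
	}
	for _, e := range sorted {
		fmt.Println(e.word, e.count)
	}
}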
Cracked open Acorn to come up with a new wordmark for Apple:

Yes, I know I’ve got work to do. 😜
Caught up on the WWDC announcements (did Ars Technica have a live blog this year? Must’ve missed it). The scratch maths feature for the iPad looks pretty good. If my Apple Pencil wasn’t always flat, I’d definitely make use of it.