Five years ago, I wrote my first blog post. And despite the slow start, I’m extremely happy that I was able to keep it up since then. Blogging is not just what I do, it’s become part of who I am.
Here’s to the next five years, and hopefully many more. 🥂
I love the multi-caret support in Nova, GoLand, and the other text editors that have it. I use it all the time. I wish it were built into every text box I use, including the ones used by browsers and AppKit.
The view of the Stony Point line from Bittern station.
I finished my experiment with htmgo, building the world’s most inefficient world clock. It uses HTMX swapping to get the time from the server every second.

It’s an interesting framework. Not sure it’s fully ready yet (you can’t change the bind port, for example), but it might be useful in the future.
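For the curious, there really isn’t much to the idea. Here’s a rough sketch of the server side using plain net/http rather than htmgo’s own builders, so the routes and markup are illustrative only, not the actual code:

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Serve a page with a #clock element that HTMX re-fetches every second.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, `<!DOCTYPE html>
<html>
  <head><script src="https://unpkg.com/htmx.org@1.9.12"></script></head>
  <body>
    <div id="clock" hx-get="/time" hx-trigger="every 1s" hx-swap="innerHTML">Loading…</div>
  </body>
</html>`)
    })

    // Return the current time as an HTML fragment for HTMX to swap into #clock.
    http.HandleFunc("/time", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "<span>%s</span>", time.Now().Format("15:04:05 MST"))
    })

    http.ListenAndServe(":8080", nil)
}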
Had my first taste of Tailwind, since the framework I’m playing with includes it by default. Can’t say I’m a fan. One good thing I can say about it is that it comes with some pretty decent defaults, but I think much of that could be replicated with a good-quality reset stylesheet. The rest doesn’t appeal to me: styling elements with classes (which I’m guessing are processed in some way, since I don’t think CSS class selectors support punctuation like slashes and brackets without escaping), and configuring everything in a JavaScript config file. Might make sense for larger teams, where frontend developers are separate from UI designers, but it doesn’t make sense for my little crappy projects. I think I’ll stick with CSS.
🧑‍💻 New post on TIL Computer: psql Techniques
I get why authors build quickstart tools for their Go packages, thinking they’ll be helpful, but I implore them to resist. Focus that effort on making your package easily importable as a library. That’s many times more helpful than forcing your users to integrate new tools into their toolchain.
I hadn’t considered adding a /save page to this site. That is, not until someone at work asked me if I could provide them with a referral link to Hetzner. I wasn’t qualified to get one (I haven’t met the minimum spending limits yet), but I was able to share one from a fellow blogger’s /save page. So my co-worker got some sign-up credit, and I got to feel good about helping someone I respect online.
I’ve since added a /save page to this site, and it’ll feature a referral link for Hetzner as soon as I’m allowed to get one.
An Alternative Arrangement for Cursor Movement in Terminal Applications
I appreciate why many programs support Vim’s navigation controls, using HJKL to move the cursor. A classic case of a successful application, built in an earlier time with an earlier keyboard layout, still making an impact on the software ecosystem today.
But if I may make my pitch, I would suggest newer terminal applications consider using IJKL for cursor movement.

Not only does it emulate the inverted T of the regular cursor keys (a more comfortable arrangement to me), but I find myself naturally resting my hand there anyway due to my typing style, with my index finger on the J and my middle finger on the I. There’s no need to move my index finger over to the H and reposition my other three fingers in a row whenever I want to move around. I always found that manoeuvre awkward, and it’s probably why I eschew HJKL in favour of the normal arrow keys whenever I’m using Vim. I’m moving my fingers anyway, so I might as well move them to something more comfortable.
In fact, I like the IJKL arrangement so much that I set it as the default in all the terminal applications I build. It probably won’t work well in Vim (the I key is pretty important), but if you’re building a terminal application, or using one that allows you to change your keyboard bindings, I’d recommend giving it a try.
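If you’re building a TUI in Go and want to try it, the binding itself is trivial. Here’s a rough sketch using Bubble Tea (just one library choice among many, and the little grid is purely for illustration), assuming a reasonably recent version of the library:

package main

import (
    "fmt"
    "os"
    "strings"

    tea "github.com/charmbracelet/bubbletea"
)

const width, height = 10, 5

// model tracks the cursor position on a small grid.
type model struct {
    x, y int
}

func (m model) Init() tea.Cmd { return nil }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
    switch msg := msg.(type) {
    case tea.KeyMsg:
        switch msg.String() {
        case "i": // up
            if m.y > 0 {
                m.y--
            }
        case "k": // down
            if m.y < height-1 {
                m.y++
            }
        case "j": // left
            if m.x > 0 {
                m.x--
            }
        case "l": // right
            if m.x < width-1 {
                m.x++
            }
        case "q", "ctrl+c":
            return m, tea.Quit
        }
    }
    return m, nil
}

func (m model) View() string {
    var b strings.Builder
    b.WriteString("Move with IJKL, quit with q\n\n")
    for y := 0; y < height; y++ {
        for x := 0; x < width; x++ {
            if x == m.x && y == m.y {
                b.WriteString("█")
            } else {
                b.WriteString("·")
            }
        }
        b.WriteString("\n")
    }
    return b.String()
}

func main() {
    if _, err := tea.NewProgram(model{}).Run(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}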
I’m a little put off by Slack’s insistence on hiding things it doesn’t think are relevant. Maybe it’s necessary for large Slack groups, but I’m happy to scroll a bit more if it means finding the channels I interact with infrequently. I recall those by where they are, rather than by their name.
I’m still blown away by how good PostgreSQL is. I’ve been writing queries all afternoon and it’s been handling them without breaking a sweat. Granted, it’s not a huge database so I shouldn’t expect anything else. But if we were to use DynamoDB instead, running the same queries would’ve been slower and involved more effort on my part doing the joins in code. So maybe what I’m actually blown away with is SQL.
One of the other things the service I’m working on is responsible for is sending restart commands to its pool of workers. I named the command “restart”, and one of my co-workers wondered whether it was the same thing as resetting the workers. I was somewhat curious about this confusion. There’s no “reset” command, and I know for myself that if someone asked me to reset something, I’d take it to mean that it should be restored to some previous state: resetting it to factory settings, for example.
But it got me thinking: are the terms interchangeable? I guess it’s not too ridiculous to think so. After all, PCs in the ’90s came with “reset” buttons, not “restart” buttons. So it might be a little anachronistic, but do others still use “resetting” to mean restarting something?
So I did the most scientific thing that was within my power to do, and created a Mastodon poll:
When I want to describe turning a computer/device off and on again with a single interaction, I'd say that I want to…
Restart it: 100.0%
Reset it: 0.0%
Either restart or reset (I use both terms): 0.0%
Turn it off and on again: 0.0%
6 people voted
And the results are overwhelming: 6 out of 6 people who responded use the term “restart” over “reset”, with one additional replier saying that they use the term “reboot”.
So maybe this confusion was just a fluke, or maybe I’m reading too much into this. But it was an interesting train of thought, nonetheless.
Spent the morning looking into what was causing the service I’m working on to perform so badly. Turns out that one of the reasons was just us being stupid, but another was actually quite surprising.
Without giving too much away, the service I’m working on takes messages from NATS JetStream and converts them into jobs to be sent to a pool of workers. The workers need to access an API to do the job, and the service I’m responsible for needs to set up the permissions so the workers can do so. This involves calling out to a bunch of other micro-services and producing a signed JWT to grant access to the API. If the workers are fully utilised, the service will send any incoming jobs to a queue. When a worker has finished a job, it will get the next one from the queue.
One of the responsibilities of this service is to make sure the workers are doing work as much as possible. The pool of workers is fixed, and we’re paying a fixed price for them, so we’d like them to be in use, much like an airline would like their planes to be in the sky. So one of the things I added was having the JetStream handler try to send a new job to a worker first, before adding it to the queue to be worked on later. This means that a job could completely skip the queue if a worker is available to pick it up then and there.
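In rough, illustrative Go (none of these types resemble the real service’s code), the handler’s dispatch logic boiled down to something like this:

package dispatch

import "context"

// Everything here is illustrative; the real service's types look nothing like this.
type Job struct{ Token string }

type Worker interface {
    Run(ctx context.Context, job Job) error
}

type Pool interface {
    // TryAcquire returns an idle worker if one is available right now.
    TryAcquire() (Worker, bool)
}

type Queue interface {
    Enqueue(ctx context.Context, job Job) error
}

type Dispatcher struct {
    Pool    Pool
    Queue   Queue
    Prepare func(ctx context.Context, msg []byte) (Job, error) // permission set-up + JWT signing
}

// Handle sketches the queue-bypass idea: prep the job, hand it straight to an
// idle worker if there is one, otherwise park it on the queue. Note that the
// expensive prep work happens before we know whether a worker can take the
// job at all, which is what ended up biting us.
func (d *Dispatcher) Handle(ctx context.Context, msg []byte) error {
    job, err := d.Prepare(ctx, msg)
    if err != nil {
        return err
    }
    if w, ok := d.Pool.TryAcquire(); ok {
        return w.Run(ctx, job)
    }
    return d.Queue.Enqueue(ctx, job)
}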
Turns out this took a huge toll on performance. When the workers are all busy with jobs, any work sent via this queue bypass logic would be refused, making all the prep work for setting up access to the API completely unnecessary. Time spent calling the other micro-services I can completely understand, but I was surprised to find that a significant chunk of the prep work went into signing the JWT: a JWT that would never be used. This showed up in the CPU profile I took, but it also came through in the output of top and in the CPU stats on the EC2 dashboard. The CPU was maxed out at 99%, and virtually all of that was spent in the service itself; CPU idle time, system time, and I/O wait time were pretty much at 0%.
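For a sense of the work involved, here’s a standalone sketch of that kind of signing using golang-jwt and RS256. The library, key size, algorithm, and claims here are stand-ins, not necessarily what we actually use:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "fmt"
    "time"

    "github.com/golang-jwt/jwt/v5"
)

func main() {
    // Illustrative only: the real service loads its key from elsewhere, and the
    // claims are placeholders.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    start := time.Now()
    for i := 0; i < 100; i++ {
        claims := jwt.MapClaims{
            "sub": "worker-123",
            "exp": time.Now().Add(5 * time.Minute).Unix(),
        }
        // RSA signing is pure CPU work, and it isn't cheap. Doing it for every
        // incoming message, whether the token gets used or not, adds up fast.
        if _, err := jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(key); err != nil {
            panic(err)
        }
    }
    fmt.Printf("100 signatures took %s\n", time.Since(start))
}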
Taking out this queue bypass logic dramatically improved things (although CPU usage is still unusually high, which I need to look at). What I’ll eventually do is add a circuit breaker to the JetStream handler, so that if no worker is available to pick up a job then and there, subsequent jobs will go straight to the queue. There are a few other things we could do here too, like raising the number of database connections and JetStream listeners.
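The circuit breaker itself doesn’t need to be anything clever. Something along these lines would probably do (again, just a sketch):

package dispatch

import (
    "sync"
    "time"
)

// bypassBreaker is a sketch of a very small circuit breaker: once a bypass
// attempt is refused, skip the bypass (and all its prep work) for a cooldown
// period and send jobs straight to the queue instead.
type bypassBreaker struct {
    mu       sync.Mutex
    openTill time.Time
    cooldown time.Duration
}

// Allow reports whether it's worth attempting the queue bypass at all.
func (b *bypassBreaker) Allow() bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    return time.Now().After(b.openTill)
}

// Trip records a refused bypass attempt and opens the breaker.
func (b *bypassBreaker) Trip() {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.openTill = time.Now().Add(b.cooldown)
}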
But I’m shocked that signing a JWT was having such an impact. One of my colleagues suggested building the service with GOAMD64=v3, which I’m guessing would enable some newer x86-64 instruction extensions in the compiled binary. I’d be interested to see if this helps.
I generally don’t want to write about things going on at work, but sometimes it helps, and with the self-imposed rule of writing something a day, it’s usually the only thing worthy of comment. Classic case of the topic sometimes choosing you, rather than the other way around.
Seeing a problem at work where the performance of the service we’re working on is just not good enough. Tried bumping up the EC2 instance size and raising the DB connection pool to 20. But the pool won’t go higher than 7 connections.
So it’s either that the connection pool is misconfigured, or the NATS JetStream client has too few listeners. I suspect it’s the latter. I don’t know what the default is set to, but if it’s a multiple of the number of CPU cores, I’m not sure how well that’ll play with AWS EC2.
So that’s the next thing to look at.
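For reference, the pool ceiling I’m fiddling with is just the standard database/sql knobs. Something like the following, where the driver, connection string, and numbers are illustrative rather than our actual configuration:

package main

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/jackc/pgx/v5/stdlib" // driver choice is illustrative
)

func main() {
    // Connection string is a placeholder.
    db, err := sql.Open("pgx", "postgres://user:pass@localhost:5432/app")
    if err != nil {
        log.Fatal(err)
    }

    // Raising the ceiling only helps if something actually asks for more
    // connections: with only a handful of JetStream listeners, the pool will
    // never have more than that many queries in flight at once.
    db.SetMaxOpenConns(20)
    db.SetMaxIdleConns(20)
    db.SetConnMaxIdleTime(5 * time.Minute)
}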
I enjoyed reading this post by Rohan Ganapavarapu. It’s fascinating getting the perspective of someone born after the early internet yet wishing they were there to experience it. The ending’s quite illuminating:
There is neocites, and a small community of people who share this philosophy about the web (and that are relatively young), but I have not met anyone my age, in the real world, that would choose to do something like this.
The majority of people (my age [of 18]) today would think sites like those (and, by extension, their creators) are weird.
I guess, here’s to the weird ones. 🥂
Finished work and now I’m having a quick dinner before heading off to a meetup. And I can’t lie: I’ve been nervous all day. 😬
Kind of glad I don’t run a big, multinational corporation.
Scare with care.

Try-Catch In UCL - Some Notes
Started working on a try command for UCL, which can be used to trap errors that occur within a block. This is very much inspired by try blocks in Java and Python, where the main block will run and, if any error occurs, it will fall through to the catch block:
try {
echo "Something bad can happen here"
} catch {
echo "It's all right. I'll run next"
}
This is all I’ve got working at the moment, but I want to quickly write some notes on how I’d like this to work, lest I forget it later.
First, much like everything in UCL, these blocks should return a value. So it should be possible to do something like this:
set myResults (try {
result-of-something-that-can-fail
} catch {
"My default"
})
--> (result of the thing)
This is kind of like using or in Lua to fall back to a default, just that if the result fails with an error, the default value can be returned from the catch block. It might even be possible to simplify this further, and have catch just return a value in cases where an actual block of code is unnecessary:
set myResults (try { result-of-something-that-can-fail } catch "My default")
One other thing to consider is how to represent the error. Errors are just treated as out-of-band at the moment, and are represented as regular Go error types. It might be necessary to add a new error type to UCL, so that it can be passed through to the catch block for logging or switching on:
try {
do-something
} catch { |e|
echo (cat "The error is " $e)
}
This could also be used as the return value if there is no catch block:
set myResult (try { error "my error" })
--> error: my error
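On the implementation side, I imagine the error value could be as simple as a small wrapper around the underlying Go error, so the catch block has something concrete to log or switch on. Nothing is decided yet, but the shape in my head is roughly:

package ucl

// Error is one possible shape for a UCL error value: it wraps the underlying
// Go error so a catch block can log it, switch on it, or return it as a value.
// None of this exists yet; it's just the idea as it stands.
type Error struct {
    Msg   string
    Cause error
}

func (e *Error) Error() string { return e.Msg }

// Unwrap keeps errors.Is and errors.As working on the Go side of the embedding.
func (e *Error) Unwrap() error { return e.Cause }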
Another idea I have is successive catch blocks, which would cascade from one to the next if the one before it fails:
try {
do-something
} catch {
this-may-fail-also
} catch {
echo "Always passes"
}
Unlike JavaScript or Python, I don’t think the idea of having catch blocks switch based on the error type would be suitable here. UCL is dynamic in nature, and having that kind of static type checking feels a little wrong. The catch blocks will only act as isolated blocks of execution, where an error would be caught and handled.
Finally, there’s finally, which would run regardless of which try or catch block was executed. Unlike the other two blocks, I think the return value of a finally block should always be swallowed. This should work, as the finally block will mainly be used for clean-up, and it’s the result of the try or catch blocks that matters more.
set res (try {
"try"
} catch {
"catch"
} finally {
"finally"
})
--> "try"
Anyway, this is the idea I have right now.
Update: I just realised that having a try block without a catch return the error as a value, rather than letting it climb up the stack, defeats the purpose of exceptions. So something like the following:
try { error "this will fail" }
should just unwind the stack rather than return an error value. If there is a need to have the error returned as a value, though, then the following should work:
try { error "this will fail" } catch { |err| $err }
--> error: this will fail