Of course I deployed something that broke other services because of dodgy permissions. So…

My second favourite word to write in a Jira ticket, after augment, is “decommission”. I’m basically using it as a euphemism for “rip this unused code out”. To have made a few tickets with this word today feels glorious. 😊
As Someone Who Works In Software
As someone who works in software…
- I cringe every time I see society bend to the limitations of the software it uses. It shouldn’t be this way; the software should serve the user, not the other way around.
- I appreciate a well designed API. Much of my job is using APIs built by others, and the good ones always feel natural to use, like water flowing through a creek. Conversely, a badly designed API makes me want to throw my laptop to the ground.
- I think a well designed standard is just as important as a well designed API. Thus, if you’re extending the standard in a way that adds a bunch of exceptions to something that’s already there, you may want to reflect on your priorities and try an approach that doesn’t do that.
- I also try to appreciate, with varying levels of success, that there are multiple ways to do something, and once all the hard and fast requirements are settled, it usually just comes down to taste. I know what appeals to my taste, but I also (try to) recognise that others have their own taste as well, and what appeals to them may not gel with me. And I just have to deal with it. I may not like it, but sometimes we have to deal with things we don’t like.
- I believe a user’s home directory is their space, not yours. And you better have a bloody good reason for adding stuff there that the user can see and didn’t ask for.
My favourite gym t-shirt. All the Aussies would get this reference.

This I got from an op-shop but I have been to the Bonnie Doon Hotel a few times. It’s actually pretty nice.
📺 Taitset
Discovered another YouTube channel about Victorian railways this evening. This one’s more about history and operations and less pure cab-rides. A lot of fascinating information about locations that I’m very familiar with.
It’s already May and I’m way behind on my reading goals for the year.

The trouble is that the book I want to read next is one I’ve read before, which doesn’t really count towards my goal. Well, I guess it could, since I haven’t listed it here. Maybe I’ll give myself a pass on this one.
On the train. Overhead announcement comes through from the control centre mentioning that a way to get service updates is to follow Metro on Twitter. Not X, Twitter. Even 1.5 years out.
Such is the staying power of Twitter as a brand, compared to what it’s called now. I’d be curious to know whether those who don’t use X, or aren’t interested in tech, know about the rebrand at all. Everyone knew about Twitter, even if they never used it.
Tape Playback Site
Thought I’d take a little break from UCL today.
Mum found a collection of old cassette tapes of us when we were kids, making and recording songs and radio shows. I’ve been digitising them over the last few weeks, and today the first recorded cassette was ready to share with the family.
I suppose I could’ve just given them raw MP3 files, but I wanted to record each cassette as two large files — one per side — so as not to lose the various crackles and clatters made when the tape recorder was stopped and started. But I did want to catalogue the more interesting points in the recording, and it would’ve been a bit “meh” simply giving them to others as one long list of timestamps (simulating the rewind/fast-forward seeking action would’ve been a step too far).
Plus, simply emailing MP3 files wasn’t nearly as interesting as what I did do, which was to put together a private site where others could browse and play the recorded tapes:


The site is not much to talk about — it’s a Hugo site using the Mainroad theme and deployed to Netlify. There is some JavaScript that moves the playhead when a chapter link is clicked, but the rest is just HTML and CSS. But I did want to talk about how I got the audio files into Netlify. I wanted to use `git lfs` for this and have Netlify fetch them when building the site. Netlify doesn’t do this by default, and I get the sense that Netlify’s support for LFS is somewhat deprecated. Nevertheless, I gave it a try by adding an explicit `git lfs` step in the build to fetch the audio files. And it could’ve been that I was using the LFS command incorrectly, or maybe it was invoked at the wrong time. But whatever the reason, the command errored out and the audio files didn’t get pulled. I tried a few more times, and I probably could’ve got it working if I stuck with it, but all those deprecation warnings in Netlify’s documentation gave me pause.

So what I ended up doing was turning off builds in Netlify and using a GitHub Action which builds the Hugo site and publishes it to Netlify using the CLI tool. Here’s the GitHub Action in full:
```yaml
name: Publish to Netlify
on:
  push:
    branches: [main]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true
          fetch-depth: 0
          lfs: true
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.119.0'
      - name: Build Site
        run: |
          npm install
          hugo
      - name: Deploy
        env:
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
        run: |
          netlify deploy --dir=public --prod
```
This ended up working quite well: the audio files made it to Netlify and were playable on the site. The builds are also quite fast: around 55 seconds (an earlier version involved building Hugo from source, which took 5 minutes). So for anyone else interested in trying to serve LFS files via Netlify, maybe try turning off the builds and going straight to a GitHub Action and the CLI tool. That is… if you can swallow the price of LFS storage in GitHub. Oof! A little pricey. Might be that I’ll need to use something else for the audio files.
Interestingly, the best example of an app soliciting user feedback might be the Economist app. Just one alert modal with a simple question — “are you enjoying the Economist app?” — and a simple Yes/No answer. No star rating. No review prose. Just a simple thumbs up/thumbs down. Crude, but effective.
UCL: Brief Integration Update and Modules
A brief update of where I am with UCL and integrating it into Dynamo-browse. I did manage to get it integrated, and it’s now serving as the interpreter of commands entered during a session.
It works… okay. I decided to avoid all the complexities I mentioned in the last post — all that about continuations, etc. — and simply kept the commands returning `tea.Msg` values. The original idea was to have the commands return usable values if they were invoked in a non-interactive manner. For example, the `table` command invoked in an interactive session will bring up the table picker for the user to select the table. But when invoked as part of a call to another command, maybe it would return the current table name as a string, or something.
But I decided to ignore all that and simply kept the commands as they are. Maybe I’ll add support for this in a few commands down the line? We’ll see. I guess it depends on whether it’s necessary.
Which brings me to why this is only working “okay” at the moment. Some commands return a `tea.Msg` which asks for some input from the user. The `table` command is one; another is `set-attr`, which prompts the user to enter an attribute value. These are implemented as a message which commands the UI to go into an “input mode”, and which invokes a callback on the message when the input is entered.

This is not an issue for single commands, but it becomes one when you start entering multiple commands that prompt for input, such as two `set-attr` calls:

```
set-attr this -S ; set-attr that -S
```
What happens is that two messages to show the prompt are sent, but only one of them is shown to the user, while the other is simply swallowed.
Fixing this would require some re-engineering, either with how the controllers returning these messages work, or the command handlers themselves. I can probably live with this limitation for now — other than this, the UCL integration is working well — but I may need to revisit this down the line.
Modules
As for UCL itself, I’ve started working on the builtins. I’m planning to have a small set of core builtins for the most common stuff, and the rest implemented in the form of “modules”. The idea is that the core will most likely be available all the time, but the modules can be turned on and off by the language embedder based on what they need or are comfortable having.
Each module is namespaced with a prefix, such as `os` for operating system operations, or `fs` for file-system operations. I’ve chosen the colon as the namespace separator, mainly so I can reserve the dot for field dereferencing, but also because I think TCL uses the colon as a namespace separator as well (I think I saw it in some sample code). The first implementation of this was simply adding the colon to the list of characters that make up the IDENT token. This broke the parser, as the colon is also used as the map key/value separator, and the parser couldn’t resolve maps anymore. I had to extend the “ident” parse rule to support multiple IDENT tokens separated by colons. The module builtins are simply added to the environment with their fully-qualified name, complete with prefix and colon, and invoking them with one of these idents will just “flatten” all the colon-separated tokens into a single string. Not sophisticated, but it’ll work for now.
There aren’t many builtins for these modules at the moment: just a few for reading environment variables and getting files as lists of strings. Dynamo-browse is already using this in a feature branch, and it’s allowed me to finally add a long-standing feature I’ve been meaning to add for a while: automatically enabling read-only mode when accessing DynamoDB tables in production. With modules, this construct looks a little like the following:
```
if (eq (os:env "ENV") "prod") {
  set-opt ro
}
```
It would’ve been possible to do this with the scripting language already used by Dynamo-browse. But this is the motivation for integrating UCL: it makes these sorts of constructs much easier to build, much like writing a shell script instead of a C program.
The Perfect Album
The guys on Hemispheric Views have got me blogging once again. The latest episode brought up the topic of the perfect album: an album that you can “just start from beginning, let it run all the way through without skipping songs, without moving around, just front to back, and just sit there and do nothing else and just listen to that whole album”.
Well, having crashed Hemispheric Views once, I thought it was time once again to give my unsolicited opinion on the matter. But first, some comments on some of the suggestions made on the show.
I’ll start with Martin’s suggestion of the Cat Empire. I feel like I should like Cat Empire more than I currently do. I used to know someone who was fanatical about them. He shared a few of their songs when we were jamming — we were in a band together — and on the whole I thought they were pretty good. They’re certainly a talented bunch of individuals. But it’s not a style of music that gels with me. I’m just not a huge fan of ska, which is funny considering the band we were both in was a ska band.
I feel like I haven’t given Radiohead a fair shake. Many people have approached me and said something along the lines of “you really should try Radiohead; it’s a style of music you may enjoy,” and I never got around to following their advice. I probably should though; I think they may be right. Similarly for Daft Punk, of which I’ve heard a few tracks and thought them to be pretty good. I really should give Random Access Memories a listen.
I would certainly agree with Jason’s suggestion of the Dark Side of the Moon. I count myself a Pink Floyd fan, and although I wouldn’t call this my favourite album by them, it’s certainly a good album (if you were to ask, my favourite would probably be either The Wall or Wish You Were Here, plus side B of Meddle).
As to what my idea of a perfect album would be, my suggestion is pretty simple: it’s anything by Mike Oldfield.
LOL, just kidding!1 😄
No, I’d say a great example of a perfect album is Jeff Wayne’s musical adaptation of The War Of The Worlds.

I used to listen to this quite often during my commute, before the pandemic arrived and brought that listen count down to zero. But I picked it back up a few weeks ago and it’s been a constant earworm since. I think it ticks most of the boxes for a perfect album. It’s a narrative set to music, which makes it quite coherent and naturally discourages skipping tracks. The theming around the various elements of the story is really well done: hearing a theme introduced near the start of the album come back later is always quite a thrill, and you find yourself picking up more of these as you listen to the album multiple times. It’s very much not a recent album but, much like Pink Floyd, there’s a certain timelessness that makes it still a great piece of music even now.
Just don’t listen to the recent remakes.
1. Although not by much. ↩︎
God bless the person that invented the command line history. They just saved me 15 minutes of work.
And while we’re handing out praise, thank you to the person that added `RunAndReturn` to the Mockery mock generator. I might be able to climb out of this mocking hell with this before the day is up.
One other thing from Rec Diffs #233: it’s amusing to hear Siracusa being as frustrated with Britishism seeping into American English as I am with Americanisms seeping into Australian English.
John, I know how you feel. 😀
Favourite Comp. Sci. Textbooks
John Siracusa talked about his two favourite textbooks on Rec Diffs #233: Modern Operating Systems and Computer Networks, both by Andrew S. Tanenbaum. I had those textbooks at uni as well. I still do, actually. They’re fantastic. If I were to recommend something on either subject, it would be those two.

I will add that my favourite textbook from my degree was Compilers: Principles, Techniques, and Tools by Alfred V. Aho, et al., also known as the “dragon book.” If you’re interested in compiler design in any way, I can definitely recommend this book. It’s a little old, but really, the principles are more or less the same.

And that makes it the third time this week that I encountered a bug involving DynamoDB that was avoidable with a unit test that actually used a proper database.

(To be fair, this time it was my fault: I hadn’t got around to writing the unit test yet.)
GitHub/GitLab code search is fine, but have you ever tried `grep -l -r methodName projects/*`? Seems to be not that much slower, and like 100× more reliable.
I wish Ghost allowed readers to choose a different email address to send newsletters to, rather than just sending them to the email address associated with the account itself. I’ve got news for you: send reading material to my personal inbox and I’ll never see it. That’s just not where I read stuff: it’s all in Feedbin.
Even better would be a private RSS feed. I know Gruber had issues with doing this way back during the Google Reader days. But those days are gone, so it might be worth trying again. Seems to work for Stratechery.
For the last few years, I’ve been using 4/24 as the expiry date of test credit cards within Stripe. Well those days are literally in the past now.

UCL: Breaking And Continuation
I’ve started trying to integrate UCL into a second tool: Dynamo Browse. And so far it’s proving to be a little difficult. The problem is that this will be replacing a dumb string splitter, with command handlers that currently return a `tea.Msg` value that changes the UI in some way.
UCL builtin handlers return an `interface{}` result, or an `error` result, so there’s no reason why this wouldn’t work. But `tea.Msg` is also an `interface{}` type, so it will be difficult to tell a UI message apart from a result that’s usable as data.
This is a Dynamo Browse problem, but it’s still a problem I’ll need to solve. It might be that I’ll need to return `tea.Cmd` types — which are functions returning `tea.Msg` — and have the UCL caller detect these and dispatch them when they’re returned. That’s a lot of function closures, but it might be the only way around this (well, the alternative is returning an interface type with a method that returns a `tea.Msg`, but that’ll mean a lot more types than I currently have).
Anyway, more on this in the future I’m sure.
Break, Continue, Return
As for language features, I realised that I never had anything to exit early from a loop or proc. So I added `break`, `continue`, and `return` commands. They’re pretty much what you’d expect, except that `break` can optionally return a value, which will be used as the resulting value of the `foreach` loop that contains it:
```
echo (foreach [5 4 3 2 1] { |n|
  echo $n
  if (eq $n 3) {
    break "abort"
  }
})
--> 5
--> 4
--> 3
--> abort
```
These are implemented as error types under the hood. For example, `break` will return an `errBreak` type, which will flow up the chain until it is handled by the `foreach` command (`continue` is also an `errBreak` with a flag indicating that it’s a continue). Similarly, `return` will return an `errReturn` type that is handled by the proc object.
This fits quite naturally with how the scripts are run. All I’m doing is walking the tree, calling each AST node as a separate function call and expecting it to return a result or an error. If an error is returned, the function bails, effectively unrolling the stack until the error is handled or it’s returned as part of the call to `Eval()`. So leveraging this stack-unroll process already in place makes sense to me.
I’m not sure if this is considered idiomatic Go. I get the impression that using error types to handle flow control outside of adverse conditions is frowned upon. This reminds me of all the arguments against using exceptions for flow control in Java. Those arguments are good ones: following execution between `try` and `catch` makes little sense when the flow can be explained more clearly with an `if`.
But I’m going to defend my use of errors here. Like most Go projects, the code is already littered with all the `if err != nil { return err }` clauses needed to exit early when a non-nil error is returned. And since Go developers preach the idea of errors simply being values, why not use errors here to unroll the stack? It’s better than the alternatives, such as detecting a sentinel result type or adding a third return value which will just be yet another `if bla { return res }` clause.
Continuations
Now, an idea is brewing for a feature I’m calling “continuations” that might be quite difficult to implement. I’d like to provide a way for a user builtin to take a snapshot of the call stack, and resume execution from that point at a later time.
The reason for this is that I’d like all the asynchronous operations to be transparent to the UCL user. Consider a UCL script with a `sleep` command:

```
echo "Wait here"
sleep 5
echo "Ok, ready"
```
`sleep` could simply be a call to `time.Sleep()`, but say you’re running this as part of an event loop, and you’d prefer to set up a timer instead of blocking the thread. You may want to hide this from the UCL script author, so they don’t need to worry about callbacks.
Ideally, this could be implemented by the builtin using a construct similar to the following (`ucl.CallArgs`, `Continuation()`, and `ucl.ErrHalt` are a speculative API, not something that exists yet):

```go
func sleep(ctx context.Context, args ucl.CallArgs) (any, error) {
	var secs int
	if err := args.Bind(&secs); err != nil {
		return nil, err
	}

	// Save the execution stack
	continuation := args.Continuation()

	// Schedule the sleep callback
	go func() {
		<-time.After(time.Duration(secs) * time.Second)

		// Resume execution later, yielding `secs` as the return value
		// of the `sleep` call. This will run the "Ok, ready" echo call.
		continuation(ctx, secs)
	}()

	// Halt execution now
	return nil, ucl.ErrHalt
}
```
The only trouble is, I’ve got no idea how I’m going to do this. As mentioned above, UCL executes the script by walking the parse tree with normal Go function calls. I don’t want to be in a position of creating a snapshot of the Go call stack. That’s a little too low-level for what I want to achieve here.
I suppose I could store the visited nodes in a list when the `ErrHalt` is raised; or maybe replace the Go call stack with an in-memory stack, with AST node handlers being pushed and popped as the script runs. But I’m not sure this will work either. It would require a significant amount of re-engineering, which I’m sure will be technically interesting, but will take a fair bit of time. And how is this to work if a continuation is made in a builtin that’s being called from another builtin? What should happen if I were to run `sleep` within a `map`, for example?
So it might be that I’ll have to use something else here. I could potentially do something using Goroutines: the script is executed on a Goroutine, and `args.Continuation()` does something like pause it on a channel. How that would work when the builtin handler requesting the continuation shouldn’t itself be paused, I’m not so sure. Maybe the handlers could be dispatched on a separate Goroutine as well?
A simpler approach might be to just offload this to the UCL user, and have them run `Eval` on a separate Goroutine, simply sleeping the thread. Callbacks that need input from outside could be sent using channels passed via the `context.Context`. At least that’ll lean into Go’s first-party support for synchronisation, which is arguably a good thing.
I’d be curious to know why Microsoft renamed Azure Active Directory to “Entra.” That name is… not good.