Long Form Posts
-
Not that there’s much to see. It’s just the podcast artwork. Not even a rendered scrubber. ↩︎
-
In my experience, the tracks that take some time to grow to like turn out to be the best ones to listen to. ↩︎
-
Actually, I have a fifth blog which is for projects I’m working on that I’d rather keep private. Oh, and a sixth, which is a travel blog that I really should maintain better. Might be that I have a few too many blogs. ↩︎
- HTTP Routing: For this, I use Fiber. I suppose using Go’s builtin HTTP router is probably the best approach, but I do like the utility Fiber gives for doing a lot of the things that go beyond what the standard library provides, such as session management and template rendering. Speaking of…
- Server Side Templating: Nothing fancy here. I just use Go’s template engine via Fiber’s Render integration. It has pretty much all I need, so I don’t really look at anything else.
- Database: If I need one, then I’ll first take a look at SQLite. I use the modernc.org SQLite driver, as it doesn’t require CGo, making deployments easier (more on that later). If I need something a bit larger, I tend to go with PostgreSQL using the pgx driver. I would also like to use StormDB if I could, but it doesn’t play well with how I like to deploy things, so I tend to avoid that nowadays.
- Database ORM: I don’t really use an ORM (too much PTSD from using the various Java ORMs), but I do use sqlc to generate the Go code that interacts with the database. It’s not perfect, and it does require some glue code which is tedious to write. But what it does it does really well, and it’s better than writing all that SQL marshalling code from scratch.
- Database Migration: I’ve tried using golang-migrate before, and we do use it at work for PostgreSQL databases, but it doesn’t work well with the modernc.org SQLite driver. So I ended up writing my own. But if it makes sense to use golang-migrate, I will.
- JavaScript: I try to keep my JavaScript usage to a minimum, favouring vanilla JavaScript if I only need a few things. For anything else, I usually turn to Stimulus.js, which adds just enough “magic” for the slightly more involved pieces of front-end logic. I’m also looking at HTMX, and have tried it for a few things, but I’ve yet to use it for a complete project. I use esbuild if I need to bundle my JavaScript, but I’m trying to go “builderless” for most things nowadays, relying on import maps and just serving the JavaScript as is.
- CSS: Much like JavaScript, I still prefer to use vanilla CSS served directly for most things. I tend to start new projects by importing SimpleCSS by Kev Quirk. It makes the HTML look good right out of the gate. It does make each project look a little “samey”, but that’s up to me to address.
- Live Reloading: I’ve only recently been a convert to live reloading. I did use it when I was bundling JavaScript, but since moving away from that, plus doing most things server-side anyway, I needed something that would build the entire app. I’ve started using Air for this, and it’s… fine. There are certain things that I don’t like about it — particularly that it tends to favour configuration over convention — but it does the job.
- Deployment: Finally, when I’m ready to deploy something, I do so using Dokku running on a Linux server. I bundle the app in a Docker container, mainly using a Go builder image, and a scratch image for the run-time container (this scratch container has nothing else in it, not even libc, which is why I use the modernc.org SQLite driver). All I need to do is run `git push`, and Dokku does the rest. Dokku also makes it easy to provision PostgreSQL databases with automated backups, and HTTPS certificates using Let’s Encrypt. Deploying something new does involve logging into the remote server to run some commands, but having been burned by PaaS providers that are either too pricy, or not pricy enough to stay in business, I’ve found this setup to be the most stable way to host apps.
-
My brain muscles couldn’t come up with a better term here. 😄 ↩︎
-
I can’t remember if this is case insensitive, although I think it is. ↩︎
- Stark — the M2 Mac Mini that I use as my desktop
- Tully — the Intel Mac Mini that I use as my home server
- Crow — a very old laptop that I occasionally use when I travel (this one is a reference to the Night’s Watch)
- A self-hosted SCM (source code management) system that can be bound to my own domain name.
- A place to store Git repositories and LFS objects that can be scaled up as my needs for storage grow.
- A CI/CD runner of some sort that can be used for automated builds, ideally something that supports Linux and MacOS.
- 2x vCPU (CX22) instances, each with 4 GB RAM and 40 GB storage
- A virtual network which houses these two instances
- One public IP address
- One 50 GB volume which can be resized
- I turned off the ability for others to register themselves as users, an important first step.
- I changed the bind address of Forgejo. It listens on `0.0.0.0:3000` by default, but I wanted to put this behind a reverse proxy, so I changed it to `127.0.0.1:3000`.
- I also reduced the minimum size of SSH RSA keys. The default was 3,072, but I still have keys of length 2,048 that I wanted to use. There was also an option to turn off this verification.
- I was unable to register the launch agent definition within the user domain. I had to use the `gui` domain instead, which requires the user to be logged in. I’ve got the Mac Mini set up to log in on startup, so this isn’t a huge issue, but it’s not quite what I was hoping for.
- Half the commands in `launchctl` are deprecated and not fully working. Apple’s documentation on the command is sparse, and many of the Stack Exchange answers are old. So a lot of effort was spent fumbling through unfinished and outdated documentation trying to install and enable the launch service.
- The actual runner is launched using a shell script, but when I tried the backup job, Bash couldn’t access the external drive. I had to explicitly add Bash to Privacy & Security → Full Disk Access within the Settings app.
- Once I finally got the runner up and running as a launch agent, jobs were failing because `.bash_profile` wasn’t being loaded. I had to adjust the launch script to include this explicitly, so that the PATH entries for Node and Go were set properly.
- This was further exacerbated by two runners running at the same time. The foreground runner I was using to test with was configured correctly, while the one running as a launch agent wasn’t fully working yet. This manifested as the back-up job randomly failing with the same `cannot find: node in PATH` error half the time.
Cropping A "Horizontal" PocketCast Clip To An Actual Horizontal Video
Finally fixed the issue I was having with my ffmpeg incantation to crop a PocketCast clip. When I was uploading the clip to Micro.blog, the video wasn’t showing up. The audio was fine, but all I got for the visuals was a blank void1.
For those that are unaware, clips from PocketCast are always generated as vertical videos. You can change how the artwork is presented between vertical, horizontal, or square; but that doesn’t change the dimensions of the video itself. It just centers it in a vertical video geared towards TikTok, or whatever the equivalent clones are.
This, I did not care for. So I wanted to find a way to crop the videos to dimensions I find more reasonable (read: horizontal).
Here’s the ffmpeg command I’m using to do so. This takes a video of the “horizontal” PocketCast clip type and basically does a crop at the centre to produce a video with the 16:9 aspect ratio. This post shows how the cropped video turns out.
ffmpeg -i <in-file> \
-vf "crop=iw:iw*9/16:(iw-ow)/2:(ih-oh)/2, scale=640:360" \
-vcodec libx264 -c:a copy <out-file>
Anyway, back to the issue I was having. I suspect the cause was that the crop was producing a video with an uneven width. When I uploaded the cropped video to Micro.blog, I saw in the logs that Micro.blog was downscaling the video to a height of 360. This was using a version of the command that didn’t have the `scale` filter, and the original clip was 1920 x 1080. If you downscale that while maintaining the original 16:9 aspect ratio, the new dimensions should be 640 x 360. But for some reason, the actual width of the cropped video was 639 instead.
I’m not sure if this was the actual problem. I had no trouble playing the odd-width video in QuickTime. The only hint I had that this might be a problem was when I tried downscaling in ffmpeg myself, and ffmpeg threw up an error complaining that the width was not divisible by two. After forcing the video size to 640 x 360, and uploading it to Micro.blog, the video started coming through again. So there might be something there.
Anyway, it’s working now. And with everything involving ffmpeg, once you get something working, you never touch it again. 😄
WeblogPoMo AMA #3: Best Music Experience
I’m on a roll with these, but I must warn you, this streak may end at any time. Anyway, today’s question is from Hiro, who asked it of Gabz, and which I discovered via Robb:
@gabz What’s the best music-related experience of your life so far?
Despite attending only a handful of concerts in my life — live music is not really my jam — I’ve had some pretty wonderful music-related experiences, both through listening to it and by performing it. Probably my most memorable experience was playing in the pit orchestra for our Year 10 production of Pippin. This was during the last few weeks before the show opened, and we attended a music camp for a weekend to do full-day rehearsals with the music director. The director had a reputation of being a bit of a hard man, prone to getting a bit angry, and not afraid to raise his voice. It was intimidating to me at the time, but in hindsight I can appreciate that he was trying to get the best from us. And with us being a group of teenage boys who were prone to losing focus, I think we were deserving of his wrath.
One evening, we were rehearsing late, and the director was spending a lot of time going through some aspect of the music. I can’t remember what was being discussed but it was one of those times where everyone was tired, yet each knew what they were meant to be doing and was still happy to be working. You feel something special during those moments, when the group was doing their best, not out of coercion but because we were trying to “get the work done”.
Probably a very close second was discovering Mike Oldfield for the first time. This was probably when I was 11 or 12, and I wasn’t a big music listener back then (I mean, we did have a home stereo, but I wasn’t listening to a walkman or anything like that). Dad was working one night and I came up to him. He then started playing track 1 of Tubular Bells II, thinking that I would appreciate it. I was more intrigued at first, as it wasn’t the type of music I was used to at the time: long, instrumental pieces. Yet I found it to be decent, and something I could see myself liking in the future1. He then played track 7, and I was absolutely hooked after that.
WeblogPoMo AMA #2: One Thing I Wish I Could Change About Myself
Here’s my answer to another question asked by Annie for WeblogPoMo AMA. This was previously answered by Keenan, Estebanxto, Kerri Ann, and Lou Plummer:
If you could instantly change one internal pattern/thing about yourself, what would it be?
My answer is that I wish I found it easier meeting new people. Not only am I quite introverted, I’m also really shy, and I find it extremely hard to introduce myself to new people in social situations. That is, if I ever find myself going to these social situations. I rarely do, and if I do attend, I usually stay quietly to the side, keeping with company that I know. It was at one time bad enough that I’d find excuses to avoid going out to see those I do know.
I’m trying to get better at this. For starters, I’m no longer staying away from friends, and I am trying to make the effort to go to more social events as they come. It’s still not great though, and I do struggle when being around a group of strangers. I guess the secret is just practice, and maybe trying to make a game of it: setting goals like saying hello to at least one new person every hour or so. I don’t think I’ll ever get over my shyness, but I’m hoping I can find a way to at least manage it a little better than I have been.
Phaedra, The lmika Track Arrangement
I recently learnt that the version of Phaedra I’ve been listening to for the past 15 years had not only the wrong track order, but also the wrong track names. This is not entirely surprising, given how this version was… ah, acquired.
But after learning what the order and names should’ve been, I think I still prefer my version. And yes, that’s probably because I’m used to it, but if the official album were to have these names and this order, I think it would actually work really well. I may go so far as to say that if I got a copy of the official album, I’d probably change it to match the version I’ve been listening to.
In case you’re curious, here’s how the tracks are named in my version:
| Official Version | lmika Version |
| --- | --- |
| Phaedra | Mysterious Semblance At The Strand Of Nightmares |
| Mysterious Semblance At The Strand Of Nightmares | Phaedra |
| Movements Of A Visionary | Sequent ‘C’ |
| Sequent ‘C’ | Movements Of A Visionary |
I’m actually a little surprised that my version of Sequent ‘C’ is officially called Movements Of A Visionary and vice versa. The name Movements Of A Visionary gives it a more mysterious feeling, which fits well with the small, soft, reverb-filled piece of music that it is. As for the track that has that name officially… well, I just assumed the name Sequent ‘C’ made the most logical sense for a piece of music with a sequencer in the key of C. I don’t have an explanation for Phaedra or Semblance other than “long piece == long title,” but Phaedra just feels like a title that fits better for a piece of music that predominantly features a mellotron.
The tracks in the version I listen to are arranged in the following order:
| No. | Official Version Name | lmika Version Name |
| --- | --- | --- |
| 1. | Sequent 'C' | Movements Of A Visionary |
| 2. | Phaedra | Mysterious Semblance At The Strand Of Nightmares |
| 3. | Mysterious Semblance At The Strand Of Nightmares | Phaedra |
| 4. | Movements Of A Visionary | Sequent 'C' |
The fact that Phaedra is the first track in the official version makes sense, given that on vinyl it would’ve taken up an entire side, but I reckon starting the album with a small, soft piece — acting almost like a prelude — whets the appetite for the heavier stuff. This would be track two, which is 17 minutes long and quite dynamic in its contrast across the piece. You then climb down from that into what I thought was the title track, which — given that it appears as the third one in my version — gives the artists an opportunity to have something simpler act as the centrepiece of the album. Then you end with a relatively lively piece with a driving sequencer, which finishes with a decisive C(7) chord, making it clear that the album is now over.
So that’s how I’d name and arrange the tracks in this album. I don’t want to say that Tangerine Dream got it wrong but… they did get it pretty wrong. 😀
My Favourite Watch
Seeing all the nostalgia for digital watches of the ’90s and early 2000s, following the release of that retro-est desk clock shaped like a large Casio digital watch, got me thinking of the watches I owned growing up. I started off as a Casio person but eventually moved on to Timex watches. I was pretty happy with all the watches I owned, but my favourite was the Timex Datalink USB Sports Edition, which stood head and shoulders above the rest.
Not only was this watch featureful out of the box — having the usual stopwatch, timers, and alarms — it was also reprogrammable. There was some Windows software that allowed you to install new modes and arrange them in the mode menu. I remember a few of these, such as a mode allowing you to browse data arranged in a tree; a simple note taking mode; and a horizontal game of Tetris.
There was also an SDK, allowing you to build new modes in assembly. I remember building a score-keeping mode, where you could track points for a game between two or four competitors, with an optional auxiliary counter used to keep track of things like fouls. I also remember building a dice-rolling mode, allowing you to roll up to 6 dice, with each die having between 2 and 9 sides, and the total automatically displayed to you.
I never used these modes for anything — I’m neither sporty nor much of a gamer to have any real need for tracking scores or rolling multiple dice — but they were super fun to build, and I got a fair bit of experience learning assembly from it. And the SDK was pretty well built, with predefined entry points for the mode, reacting to events like button presses, and displaying things on the LCD. The fact that the SDK came with a random-number generator, which wasn’t even used with any of the built-in modes, just showed how well Timex thought about what was possible with this watch.
This was the last watch I regularly wore: I’ve moved to using phones to keep track of time. But it was a great watch while it lasted.
Why I Keep Multiple Blogs
Kev Quirk wrote a post yesterday wondering why people have multiple blogs for different topics:
A few people I follow have multiple blogs that they use for various topics, but I don’t really understand why. […] I personally prefer to have a single place where I can get all your stuff. If you’re posting about something I’m not interested in, I’ll just skip over it in my RSS feed. I don’t have to read everything in my feed reader.
I’ve written about this before, and after taking a quick look at that post, most of those reasons still stand. So if you’ve read that post, you can probably stop reading this one at reason number two (unless you’re listening to the audio narration of this, in which case, please listen on as that last post predated that feature 🙂).
I’m currently keeping four separate blogs: this one, one for checkins to places I’ve visited, one for remembering how to do something for work, and one for projects I’m working on1. This arrangement came about after a few years of spinning out and combining topics to and from a single blog, generally following the tension I felt after publishing something, wondering if that was the right place for it. As strange as it is to say it, this multi-blogging arrangement gives me the lowest amount of tension for writing online.
There are a few reasons for this. First is that for certain topics, I like an easy way to reference posts quickly. This is the primary reason why I keep that work-related reference blog, so that when I’m faced with a software-related problem I know I’ve seen in the past, I can quickly look up how I solved it. I’ve tried keeping those posts here, but it was always difficult finding them again amongst all the frequent, day-to-day stuff.
It mainly comes down to the online reading experience. Categories can only do so much, and that’s if I’m categorising posts rigorously, which is not always the case. Here, the posts are displayed in full, encouraging the reader to browse. But for my reference blog, a list of bare links works better for going directly to the post I need.
The second reason is the writing experience. For me, certain CMSes work better for certain types of posts. Micro.blog works well for micro-posts or mid-sized posts like this one, but for longer ones, I prefer the editors of either Scribbles or Pika. I don’t know why this is. Might be because all the code-blocks I tend to use on those blogs are easier to write using a WYSIWYG editor rather than Markdown.
And finally, it’s a good excuse to try out multiple CMSes. I have no rational explanation for this one: it’s an arrangement that costs me more money and requires learning new software. Might be that I just like the variety.
So that’s why I keep multiple blogs. I do recognise that it does make it harder for others to find my writing online, not to mention following along using RSS. But that’s a tradeoff I’m willing to make for a writing and reading arrangement that works better for me. Of course, like I said in my previous post, this might change in due course.
On Panic, iA, and Google Drive
I see that Panic is shutting down their Google Drive integration in their Android app, much like iA did a few weeks ago. This doesn’t affect me directly: even though I am a user of both Android and Google Drive, I regret to say that I don’t use apps from either company on my phone (I do use a few things from both on my Apple devices).
But I do wonder why Google is enacting policies that push developers away from using Drive as general-purpose user storage. That’s what Drive was meant to be used for, no? Does Google not think that adding these security conditions, and not getting back to developers trying to satisfy them, is maybe tipping the scale between security and usefulness a bit too far out of balance? Are they thinking through the implications of any of this at all?
If you were to ask me, my guess would probably be that no, they’re not thinking about it. In fact, I get the sense that they’re making these decisions unconsciously, at least at an organisation level. Probably someone said to the Drive division that they need to “improve security” and that their performance will be measured against them doing so. So they drafted up these conditions and said “job done” without thinking through how it may affect the actual usefulness of Drive.
And it just reveals to me how large Google is, possibly too large to know why they do anything at all. It’s not like they’re being malicious or anything: they’re a victim of their own success, with way too many product lines making zero dollars that distract them from their raison d’être, which is getting that sweet, sweet ad money. After all, what does Drive matter to Google in terms of increasing advertising revenue? It’s probably a division making a loss more than anything else.
I suppose, given that I do use both Drive and Android, that I should care more about it. And yeah, I care enough to write about it, but that’s barely above the level of mild curiosity I’m feeling as to why Google is letting this happen. Might be that I’ve just gotten numb to Google not caring about their own products themselves.
Passing
Three nights ago, and two months before her 94th birthday, my Nonna, my maternal grandmother, suffered a stroke. She’s now in palliative care and there’s no telling how much longer she has left. Over the last few years she was slowing down, yet was still quite aware and was able to do many things on her own, even travel to the shops by bus. She had a scare over the weekend but was otherwise in reasonably good health. So all of this is incredibly sudden.
I was unsure as to whether or not I wanted to actually write this post. I did have a draft planned yesterday, with the assumption that she wouldn’t make it through the night. Delaying it any further did not seem right. Neither is making this a eulogy or display of public grief — that’s not how I like to do things. But to not acknowledge that any of this is happening felt just as wrong, at least for now.
But what seemed right was a public declaration that I love her and I’ll miss her. I consider myself lucky to have said that to her in person, while she was lucid.
So, what now? Timelines at this stage are uncertain. Would it be hours? Days? Who can say. I guess following that would be the funeral and other matters pertaining to the estate, but that won’t happen for a week or so. What about today? Does one simply go about their day as one normally would? Does life go on? Seems wrong that it should be so, yet I’m not sure there’s anything else that I’m capable of doing. Just the daily routine smeared with sadness and loss.
I heard someone say that grief comes from love, that you can’t have one without the other. I can attest to that, but the edges of that double-edged sword are razor sharp. I know that eventually the pain will dull, and all that would remain are the memories. All it takes is time.
Tools And Libraries I Use For Building Web-Apps In Go
I think I’ve settled on a go-to set of tools and libraries for building web-apps in Go. It used to be that I would turn to Buffalo for these sorts of projects, which is sort of a “Ruby on Rails but for Go” type of web framework. But I get the sense that Buffalo is no longer being maintained. And although it was easy to get a project up and running, it was a little difficult to go beyond the CRUD-like layouts that it would generate (or it didn’t motivate me enough to do so). Plus, all that JavaScript bundling… ugh! Huge pain to upgrade any of that.

Since I’ve moved away from Buffalo, I’m now left to do more of the work up-front, but I think it helps me to be a little more deliberate in how I build something. And after getting burned by Buffalo shutting down, I think it was time to consider a mix of tools and libraries that would give me the greatest level of code stability while still being relatively quick to get something up and running.
So, here’s my go-to list of tools and libraries for building web-apps in Go.
So, that’s my setup. It’s a collection that’s geared towards keeping the code low maintenance, even if it may come at the cost of scalability. I can’t tell you anything about that myself: I’m not running anything that has more than a couple of users anyway, and most things I’m running are only being used by myself. But I think that’s a problem for later, should it ever arise.
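To give a rough idea of how the routing and templating pieces fit together, here’s a minimal sketch of a Fiber app using Go’s template engine via the Render integration. The views directory, route, and bindings are made up for illustration:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/template/html/v2"
)

func main() {
	// Go's html/template engine, wired in as Fiber's view renderer.
	engine := html.New("./views", ".html")

	app := fiber.New(fiber.Config{
		Views: engine,
	})

	app.Get("/", func(c *fiber.Ctx) error {
		// Renders views/index.html with the given bindings.
		return c.Render("index", fiber.Map{
			"Title": "Hello from Fiber",
		})
	})

	log.Fatal(app.Listen(":3000"))
}
```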
Micro-fiction: Get A Horse
Trying something new here. I came up with the concept of this short-story while riding home on the tram yesterday. The rest of it sort-of fell into place when I woke up at 5AM this morning, unable to get back to sleep. Hope you enjoy it.
Josh was riding the scooter on the city footpath, not trying super hard to avoid the other pedestrians. He was going at a speed that was both unsafe and illegal, but it was the only speed he knew that would prevent that horse from showing up. Besides, he had something that he needed to do, and it was only at such reckless speeds that he knew that that thing would work. Well, he didn’t know; but being at his wits’ end after trying everything else, he had to try this. He picked his target ahead and sped up towards it. Good thing he was wearing his helmet.
Josh had never used these sorts of scooters before the collision two weeks ago. He was walking to work that day when he saw someone on such a scooter coming towards him, helmet on head. The rider was going at a ridiculous speed, and Josh tried to get out of his way as he approached, but the scooter rider turned towards him, not slowing down at all. Josh tried again but was not fast enough. The scooter rider ran straight into him and bowled him over onto the footpath. Before Josh could gather himself, the scooter rider slapped his helmet onto Josh’s head and shouted, “Get a horse!” He got back onto the scooter and sped away.
Josh got up, fighting the various aching muscles from the fall. He dusted himself down, took the helmet from his head and looked at it. It was very uncharacteristic of those worn by scooter riders. Most of them were plastic things, either green or orange, yet this one was grey, made of solid leather that was slightly fuzzy to the touch. Josh looked inside the rim and found some printed writing: Wilkinsons Equestrian Helmet. One size fits all. The one was underlined with some black marker.
Josh put the helmet in his backpack and was about to resume his commute when he stopped in place. Several metres away, a white horse stood, staring at him. Or at least it looked like a horse. The vision was misty and slightly transparent, giving the sense that it was not real. Yet after blinking and clearing his eyes, it didn’t go away. Josh started to move towards it, and when he was just within arm’s reach, it disappeared. Josh shook his head and started walking. But when he turned the next corner, there it was again: a horse, standing in the middle of the footpath several metres away, staring at him intently.
Since that day, that horse has been haunting Josh. On his walk, at his workplace, in his home, even on the tram. Always staring, always outside of reach. Either standing in his path or following close behind him. The vision would go whenever Josh approached it, only to reappear when he turned to look in another direction. Naturally, no one else could see it. When that horse was in a public place, people seemed to instinctively walk around it. Yet when he asked them if they could see it, they had no idea what he was talking about. But Josh couldn’t do anything to stop seeing it. At every waking hour of the day, from when he got out of bed to when he got back in, there it was, always staring. Never looking away.
And he knew it had something to do with that helmet. He tried a few things to dispel the vision, such as leaving the helmet at home or trying to give it to random strangers (who always refused it). Yet nothing worked to clear the vision. That is, nothing other than what had worked on him. Now was the time to test that theory out.
His target was ahead, a man in a business suit walking at a leisurely pace. He had his back to Josh, so he couldn’t see Josh turn his scooter towards him and accelerate. The gap between them rapidly closed, and Josh made contact with the man, slowing a little to avoid significant injury, but still fast enough to knock him over. Josh got off the scooter and stood by the man, sprawled on the footpath. Once again the horse appeared, as he knew it would. He looked down to see the man starting to get up. Josh had to go for it now! He took his helmet from his head, slapped it on the man and shouted, “Get a horse!”
Josh got back on the scooter and sped away for a few seconds, then stopped to look behind him. He saw the man back on his feet, helmet in hand, looking at it much like Josh did a fortnight ago. He saw the horse as well, but this time it had its back to Josh, staring intently at the man, yet Josh could see that the man hadn’t noticed it yet. He could see the man put the helmet by the side of the road and walk away, turning a corner. The horse was fading from Josh’s eyes, yet it was still visible enough for Josh to see it follow the man around the corner, several metres behind.
Select Fun From PostgreSQL
Using PostgreSQL these last few months reminds me of just how much fun it is to work with a relational database. DynamoDB is very capable, but I wouldn’t call it fun. It’s kinda boring, actually. Not that that’s a bad thing: one could argue that “boring” is what you want from a database.
Working with PostgreSQL, on the other hand, has been fun. There’s no better word to describe it. It’s been quite enjoyable designing new tables and writing SQL statements.
Not sure why this is, but I’m guessing it’s got something to do with working with a schema. It exercises the same sort of brain muscles1 as designing data structures or architecting an application. Much more interesting than dealing with a schemaless database, where someone could simply say “ah, just shove this object in a DynamoDB table.”
It’s either that, or just that PostgreSQL has a more powerful query language than what DynamoDB offers. I mean, DynamoDB’s query capabilities need to be pretty restricted, thanks to how it stores its data. That’s the price you pay for scale.
Rubber-ducking: Of Config And Databases
It’s been a while since my last rubber-ducking session. Not that I’m in the habit of seeking them out: I mainly haven’t been in a situation when I needed to do one. Well that chance came by yesterday, when I was wondering whether to put queue configuration either in the database as data, or in the environment as configuration.
This one’s relatively short, as I was leaning towards one method over the other before I started. But doubts remained, so having the session was still useful.
So without further ado, let’s dive in. Begin scene.
L: Hello
🦆: Oh, you’re back. It’s been a while. How did that thing with the authorisation go?
L: Yeah, good. Turns out doing that was a good idea.
🦆: Ah, glad to hear it. Anyway, how can I help you today?
L: Ok, so I’m working on this queue system that works with a database. I’ve got a single queue working quite well, but I want to extend it to something that works across multiple queues.
🦆: Okay
L: So I’m wondering where I could store the configuration for these queues. I’m thinking either in the database as data, or in the configuration. I’m leaning towards the database, as: A) a reference to the queue needs to be stored alongside each item anyway, and B) if we wanted to add more queues, we can almost do so by simply adding rows.
🦆: “almost do so?”
L: Yeah, so this is where I’m a little unsure. See, I don’t want to spend a lot of effort building out the logic to deal with relaunching the queue dispatcher when the rows change. I’d rather the dispatcher just read how the queues are configured during startup and stick with that until the application is restarted.
🦆: Okay
L: And such an approach is closer to configuration. In fact, it could be argued that having the queues defined as configuration would be better, as adding additional queues could be an activity that is considered “intentional”, with a proper deployment and release process.
I wonder if a good middle-ground might be to have the queues defined in the database as rows, yet managed via the migration script. That way, we can have the best of both worlds.
🦆: Why not just go with configuration?
L: The main reason is that I don’t want to add something like string representations of the queue to each queue item. I’m okay if it was just a UUID, since I’d imagine PostgreSQL could handle such fields relatively efficiently. But adding queue names like “default” or “test” as a string on each queue item seems like a bit of a waste.
🦆: Do they need to be strings? Could they be like an enum?
L: I’d rather they be strings, as I want this arrangement to be relatively flexible. You know, “policy vs. mechanism” and all that.
🦆: So how would this look in the database?
L: Well, each row for a queue would have a string, say like a queue name. But each individual queue item would reference the queue via its ID.
🦆: Okay, so it sounds like adding it to the database yet managing it with the migration script is the way to go.
L: Yeah, that is probably the best approach.
🦆: Good. I’m glad you could come away with that.
L: Yeah, honestly that was the way I was leaning anyway. But I’m glad that I can completely dismiss the configuration approach now.
🦆: Okay, good. So I’m guessing my job is done here.
L: Yeah, thanks again.
🦆: No worries.
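End scene. For what it’s worth, here’s a very rough sketch of the sort of schema this points to: queues defined in the database as rows, but only ever added via a migration step. The table and column names are illustrative only:

```go
package migrations

// A hypothetical migration step: queues live in the database as data,
// yet adding one is a deliberate act done through the migration script.
const createQueues = `
CREATE TABLE queues (
    id   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL UNIQUE
);

CREATE TABLE queue_items (
    id       UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    queue_id UUID NOT NULL REFERENCES queues (id),
    payload  JSONB NOT NULL
);

INSERT INTO queues (name) VALUES ('default'), ('test');
`
```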
About Those STOP Messages
John Gruber, discussing political spam text messages on Daring Fireball:
About a month ago I switched tactics and started responding to all such messages with “STOP”. I usually send it in all caps, just like that, because I’m so annoyed. I resisted doing this until a month ago thinking that sending any reply at all to these messages, including the magic “STOP” keyword, would only serve to confirm to the sender that an actual person was looking at the messages sent to my phone number. But this has actually worked. Election season is heating up but I’m getting way way fewer political spam texts now. Your mileage may vary, but for me, the “STOP” response works.
As someone who used to work for a company that operated an SMS messaging gateway, allow me to provide some insight into how this works. When you send an opt-out keyword — usually “STOP1”, although there are a few others — this would be received by our messaging gateway, and your number would be added to an opt-out list for that sender. From that point on, any attempt by that sender to send a message to your number would fail.
Maintaining these opt-out lists is a legal requirement with some significant penalties, so the company I worked for took this quite seriously. Once, the service maintaining this list went down, and we couldn’t know whether someone opted-out or not. We actually stopped all messaging completely until we got that service back up again. I still remember that Friday afternoon (naturally, it happened on a Friday afternoon).
Now, if memory serves, there was a way for a sender to be notified when an opt-out occurred. This was mainly for customers that decided to take on the responsibility — and thus legal accountability — of maintaining the opt-out lists themselves. There were a handful of customers that had this enabled, and it was something that we had to enable for them on the backend, but most customers simply delegated this responsibility to us (I can’t remember if customers that had this feature off could still receive opt-out notifications).
Finally, there is a way, via a variant of the “STOP” message, in which someone could opt out of any message sent from our gateway, basically adding themselves to a global opt-out list which applies to everyone. The only way someone could remove themselves from this list was to call support, so I wouldn’t recommend doing this unless you know you would never need another 2FA code via SMS again.
Addendum: The customer never had access to these opt-out lists but I believe they could find out when a message they tried to send was blocked. This is because they would be charged per message sent, and if a message was blocked, they would receive a credit. There was also an API to return the status of a message, so if you knew the message ID, it was possible to call the API to know whether a message was blocked.
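To make the mechanics a little more concrete, here’s a minimal sketch of the kind of opt-out bookkeeping described above. All of the names and types here are made up for illustration; the real gateway was far more involved:

```go
package gateway

import "errors"

var ErrOptedOut = errors.New("recipient has opted out")

// OptOutStore tracks per-sender opt-outs plus the global opt-out list.
type OptOutStore struct {
	perSender map[string]map[string]bool // sender ID -> set of opted-out numbers
	global    map[string]bool            // numbers opted out of all senders
}

// HandleInbound records an opt-out when a recipient replies with a recognised keyword.
func (s *OptOutStore) HandleInbound(sender, number, keyword string) {
	switch keyword {
	case "STOP": // one of a handful of recognised opt-out keywords
		if s.perSender[sender] == nil {
			s.perSender[sender] = map[string]bool{}
		}
		s.perSender[sender][number] = true
	case "STOP ALL": // illustrative stand-in for the global opt-out variant
		s.global[number] = true
	}
}

// CheckSend runs before any message goes out; a blocked send fails (and is credited back).
func (s *OptOutStore) CheckSend(sender, number string) error {
	if s.global[number] || s.perSender[sender][number] {
		return ErrOptedOut
	}
	return nil
}
```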
My Home Computer Naming Scheme
I enjoyed Manton’s post about the naming scheme he uses for Micro.blog servers. I see these names pop up in the logs when I go to rebuild my blog, each with a Wikipedia link explaining the origins of the name (that’s a really nice touch).
Having a server or desktop naming scheme is one of those fun little things to do when working with computers. Growing up we named our home desktops after major characters of Lord of the Rings, such as Bilbo, or Frodo, but I never devised a scheme for myself when I started buying my own computers. I may have kept it up if we were doing likewise at work, but when AWS came onto the scene, the prevailing train of thought was to treat your servers like cattle rather than pets. Granted, it is probably the correct approach, especially when the lifecycle of a particular EC2 instance could be as short as a few minutes.
But a few years ago, after buying a new desktop, setting up the old one to be a home server, and finding that I needed a way to name them, I figured it was time for a naming scheme. Being a fan of A Game of Thrones, both the book and the TV series, I came up with one based on the major houses of Westeros.
So, to date, here are the names I’ve chosen:
I think at one point I had an Intel NUC that was called Ghost, a reference to Jon Snow’s direwolf, but I haven’t used that in a while so I may be misremembering things. I also don’t have a name for my work laptop: it’s simply called “work laptop.”
Go Feature Request: A 'Rest' Operator for Literals
Here’s a feature request for Go: shamelessly copying JavaScript and adding support for the “rest” operator in literals. Go does have a rest operator, but it only works in function calls. I was writing a unit test today and I was thinking to myself that it would be nice to use this operator in both slice and struct literals as well.
This could be useful for making copies of values without modifying the originals. Imagine the following bit of code:
type Vector struct { X, Y, Z int }
oldInts := []int{3, 4}
oldVec := Vector{X: 1}
newInts := append([]int{1, 2}, oldInts...)
newVec := oldVec
newVec.Y = 2
newVec.Z = 3
Now imagine how it would look if rest operators in literals were supported:
type Vector struct { X, Y, Z int }
oldInts := []int{3, 4}
oldVec := Vector{X: 1}
newInts := []int{1, 2, oldInts...}
newVec := Vector{Y: 2, Z: 3, oldVec...}
I hope you’ll agree that it looks a bit neater than the former. Certainly it looks more pleasing to my eyes. True, this is a contrived example, but the code I’m writing for real is not too far off from this.
On the other hand, Go does prefer clarity over brevity; and I have seen some JavaScript codebases which use these “rest” operators to an absurd level, making the code terribly hard to read. But I think the Go user-base is pretty good at moderating themselves, and just because it could result in unreadable code doesn’t make it a foregone conclusion. Just look at Go’s use of type parameters.
Anyway, if the Go team is looking for things to do, here’s one.
A Follow-Up To Mockless Unit Testing
I’m sure everyone’s dying to hear how the mockless unit tests are going. It’s been almost two months since we started this service, and we’re smack bang in the middle of brownfield iterative development: adding new features to existing ones, fixing bugs, etc. So it seems like now is a good time to reflect on whether this approach is working or not.
And so far, it’s been going quite well. The amount of code we have to modify when refactoring or changing existing behaviour is dramatically smaller than before. Previously, when a service introduced a new method call, every single test for that service needed to be changed to handle the new mock assertions. Now, in most circumstances, it’s only one or maybe two tests that need to change. This has made maintenance so much easier, and although I’m not sure it’s made us any faster, it just feels faster. Probably because there’s less faffing around with unrelated tests that broke due to the updated mocks.
I didn’t think of it at the time, but it also made code reviews easier. The old way meant longer, noisier PRs which — and I know this is a quality of mine that I need to work at — I usually ignore (I know, I know, I really shouldn’t). With the reviews being smaller, I’m much more likely to keep atop of them, and I attribute this to the way in which the tests are being written.
Code hygiene plays a role here. I got into the habit of adding test helpers to each package I work on. Much like the package is responsible for fulfilling the contract it has with its dependants, so too is it responsible for providing testing facilities for those dependants. I found this to be a great approach. It simplified the level of setup each dependant package needed to do in their tests, reducing the amount of copy and pasted code and, thus, containing the “blast radius” of logic changes. It’s not perfect — there are a few tests where setup and teardown were simply copied-and-pasted from other tests — but it is better.
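To illustrate the idea, here’s roughly what one of those package-level test helpers can look like. The Store type, constructor, and connection string are made up for this sketch:

```go
package store

import (
	"context"
	"testing"
)

// NewTestStore stands up a Store against a local test database and registers
// cleanup with the test, so dependant packages don't repeat this boilerplate.
func NewTestStore(t *testing.T) *Store {
	t.Helper()

	st, err := New(context.Background(), "postgres://localhost/app_test")
	if err != nil {
		t.Fatalf("setting up test store: %v", err)
	}
	t.Cleanup(func() {
		st.Close()
	})
	return st
}
```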
We didn’t quite get rid of all the mocking though. Tests that exercise the database and message bus do so by calling out to these servers running in Docker, but this service also had to make calls to another worker. Since we didn’t have a test service available to us, we just implemented this using old-school test mocks. The use of the package test helpers did help here: instead of having each test declare the expected calls on this mock, the helper just made “maybe” calls to each of the service methods, and provided a way to assert what calls were recorded.
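Here’s a sketch of how those “maybe” calls can work, using testify’s mock package. The worker client and its method are hypothetical:

```go
package workertest

import "github.com/stretchr/testify/mock"

// WorkerClientMock is an old-school mock for the downstream worker we can't run locally.
type WorkerClientMock struct {
	mock.Mock
}

func (m *WorkerClientMock) Enqueue(jobName string) error {
	args := m.Called(jobName)
	return args.Error(0)
}

// NewWorkerClientMock marks every expectation as optional, so individual tests
// don't need to declare the calls they expect up front.
func NewWorkerClientMock() *WorkerClientMock {
	m := &WorkerClientMock{}
	m.On("Enqueue", mock.Anything).Return(nil).Maybe()
	return m
}

// EnqueuedJobs reports the calls that were actually recorded, for assertions.
func (m *WorkerClientMock) EnqueuedJobs() []string {
	var jobs []string
	for _, c := range m.Calls {
		if c.Method == "Enqueue" {
			jobs = append(jobs, c.Arguments.String(0))
		}
	}
	return jobs
}
```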
Of course, much like everything, there are trade-offs. The tests run much slower now, thanks to the database setup and teardown, and we had to lock them down to run on a single thread. It’s not much of an issue now, but we could probably mitigate this by using random database names, rather than have all the tests run against the same one. Something that would be less easy to solve are the tests around the message bus, which do need to wait for messages to be received and handled. There might be a way to simplify this too, but since the tests are verifying the entire message exchange, it’d probably be a little more involved.
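The random-database-name idea could be as simple as something like this. It’s a sketch only; the admin pool and naming convention are assumptions:

```go
package store

import (
	"context"
	"fmt"
	"os"
	"testing"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// createTestDatabase makes a uniquely-named database so tests could run in
// parallel, and drops it again once the test finishes.
func createTestDatabase(t *testing.T, admin *pgxpool.Pool) string {
	t.Helper()

	// Database names can't be bind parameters, so the identifier is formatted in directly.
	name := fmt.Sprintf("test_%d_%d", os.Getpid(), time.Now().UnixNano())
	if _, err := admin.Exec(context.Background(), fmt.Sprintf("CREATE DATABASE %q", name)); err != nil {
		t.Fatalf("creating test database: %v", err)
	}
	t.Cleanup(func() {
		admin.Exec(context.Background(), fmt.Sprintf("DROP DATABASE %q", name))
	})
	return name
}
```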
Another trade-off is that it does feel like you’re repeating yourself. We have tests that check that items are written to the database correctly at the database provider, the service layer, and the handler layer. Since we’re writing tests that operate over multiple layers at a time, this was somewhat expected, but I didn’t expect it to be as annoying as I found it to be. Might be that a compromise is to write the handler tests to use mocks rather than call down to the service layer. Those tests really only validate whether the handler converts the request and response to the models correctly, so best to isolate that there, and leave the tests asserting whether the business rules are correct to the service layer.
So if I were to do this again, that’d be the only thing I’d change. But, on the whole, it’s been really refreshing writing unit tests like this. And this is not just my own opinion; I asked my colleague, who has told me how difficult it’s been maintaining tests with mocks, and he agrees that this new way has been an improvement. I’d like to see if it’s possible to do this for all other services going forward.
On the Easy Pit To Fall Into
From Matt Bircher’s latest post on Birchtree:
One of the hard parts about sharing one’s opinions online like I do is that it’s very easy to fall into the trap of mostly complaining about things.
This is something I also think about. While I haven’t done anything scientific to know what my ratio of posting about things I like vs. things I don’t, I feel like I’m getting the balance better. It might still be weighted too much on writing about the negatives, but I am trying to write more about things I think are good.
I do wonder, though, why it’s so easy to write about things you hate. Matt has a few theories regarding the dynamics of social media, but I wonder if it’s more about someone’s personal experience of the thing in question. You hear about something that you think would be worth a try. I doubt many people would actually try something they know they’re going to dislike; if that were the case, they wouldn’t try it at all1. So I’m guessing that there’s some expectation that you’ll like the thing.
So you start experiencing the thing, and maybe it all goes well at first. Then you encounter something you don’t like about it. You make a note of it and keep going, only to encounter another thing you don’t like, then another. You eventually get to the point where you’ve had enough, and you decide to write about it. And lo, you’ve got this list of paper-cuts that can easily be used as arguments as to why the thing is no good.
Compare this to something that you do end up liking. You can probably come up with a list of things that are good about it, but you’re less likely to take note of them while you’re experiencing the thing. You just experience them, and they flow through you like water. When the time comes to write about it, you can recall liking the plot, or this character, etc., but they’re more nebulous, and it takes effort to solidify them into a post. The drive to find the path of least resistance prevails, and you decide that it’s enough to just like it.
Anyway, this is just a hypothesis. I’m not a psychologist and I’ve done zero research to find out if any of this is accurate. In the end, this post might simply describe why my posting seems to be more weighted towards things I find annoying.
A Tour Of My New Self-Hosted Code Setup
While working on the draft for this post, a quote from Seinfeld came to mind which I thought was a quite apt description of this little project:
Breaking up is knocking over a Coke machine. You can’t do it in one push. You gotta rock it back and forth a few times and then it goes over.
I’ve been thinking about “breaking up” with GitHub on and off for a while now. I know I’m not the only one: I’ve seen a few people online talk about leaving GitHub too. They have their own reasons for doing so: some because of AI, others are just not fans of Microsoft. For me, it was getting bitten by the indie-web bug and wanting to host my code on my own domain name. I have more than a hundred repositories in GitHub, and that single `github.com/lmika` namespace was getting quite crowded. Being able to organise all these repositories into groups, without fear of collisions or setting up new accounts, was the dream.
But much like the Seinfeld quote, it took a few rocks of that fabled Coke machine to get going. I dipped my toe in the water a few times: launching Gitea instances in PikaPod, and also spinning up a Gitlab instance in Linode during a hackathon just to see how it would feel to manage code that way. I knew it wouldn’t be easy: not only would I be paying more for doing this, it would involve a lot of effort up front (and on an ongoing basis), and I would be taking on the responsibility of backups, keeping CI/CD workers running, and making sure everything is secured and up-to-date. Not difficult work, but still an ongoing commitment.
Well, if I was going to do this at all, it was time to do it for real. I decided to set up my own code hosting properly this time, complete with CI/CD runners, all hosted under my own domain name. And well, that Coke machine is finally on the floor. I’m striking out on my own.
Let me give you a tour of what I have so far.
Infrastructure
My goal was to have a setup with the following properties:
For the SCM, I settled on Forgejo, which is a fork of Gitea, as it seemed like the one that required the least amount of resources to run. When I briefly looked at doing this a while back, Forgejo didn’t have anything resembling GitHub Actions, which was a non-starter for me. But they’re now in Forgejo as an alpha, preview, don’t-use-it-for-anything-resembling-production level of support, and I was curious to know how well they worked, so it was worth trying it out.
I did briefly look at Gitea’s hosted solution, but it was relatively new and I wasn’t sure how long their operations would last. At least with self-hosting, I can choose to exit on my own terms.
It was difficult thinking about how much I was willing to budget for this, considering that it’ll be more than what I’m currently paying for GitHub now, which is about $9 USD /month ($13.34 AUD /month). I settled for a budget of around $20.00 AUD /month, which is a bit much, but I think would give me something that I’d be happy with without breaking the bank.
I first had a go at seeing what Linode had to offer for that kind of money. A single virtual CPU, with 2 GB RAM and 50 GB storage, costs around $12.00 USD /month ($17.79 AUD /month). This would be fine if it was just the SCM, but I also want something to run CI/CD jobs. So I then took a look at Hetzner. Not only do they charge in Euros, which works in my favour as far as currency conversions go, but their shared-CPU virtual servers were much cheaper. A server with the same specs could be had for only a few euro.
So after a bit of looking around, I settled for the following bill of materials:
This came to €10.21, which was around $16.38 AUD /month. Better infrastructure for a cheaper price is great in my books. The only downside is that they don’t have a data-centre presence in Australia. I settled for the default placement of Falkenstein, Germany, and just hoped that the latency wouldn’t be so bad as to be annoying.
Installing Forgejo
The next step was setting up Forgejo. This can be done using official channels by either downloading a binary, or by installing a Docker image. But there’s also a forgejo-contrib repository that distributes it via common package types, with Systemd configurations that launch Forgejo on startup. Since I was using Ubuntu, I downloaded and installed the Debian package.
Probably the easiest way to get started with Forgejo is to use the version that comes with SQLite, but since this is something that I’d rather keep for a while, I elected to use Postgres for my database. I installed the latest Ubuntu distribution of Postgres, and set up the database as per the instructions. I also made sure the mount point for the volume was ready, and created a new directory with the necessary owner and permissions so that Forgejo could write to it.
At this point I was able to launch Forgejo and go through the first launch experience. This is where I configured the database connection details, and set the location of the repository and LFS data (I didn’t take a screenshot at the time, sorry). Once that was done, I shut the server down again as I needed to make some changes within the config file itself:
After that, it was a matter of setting up the reverse proxy. I decided to use Caddy for this, as it comes with HTTPS out of the box. This I installed as a Debian package also. Configuring the reverse proxy by changing the Caddyfile deployed in `/etc` was a breeze, and after making the changes and starting Caddy, I was able to access Forgejo via the domain I set up.
One quick note about performance: although logging in via SSH was a little slow, I had no issues with the speed of accessing Forgejo via the browser.
The Runners
The next job was setting up the runners. I thought this was going to be easier than setting up Forgejo itself, but I did run into a few snags which slowed me down.
The first was finding out that a Hetzner VM running without a public IP address actually doesn’t have any route to the internet, only the local network. The way to fix this is to set up one of the hosts which did have a public IP address to act as a NAT gateway. Hetzner has instructions on how to do this, and after performing a hybrid approach of following both the Ubuntu 20.04 instructions and Ubuntu 22.04 instructions, I was able to get the runner host online via the Forgejo host. Kinda wish I had known about this before I started.
For the runners, I elected to go with the Docker-based setup. Forgejo has pretty straightforward instructions for setting them up using Docker Compose, and I changed it a bit so that I could have two runners running on the same host.
Setting up the runners took multiple attempts. The first attempts failed when Forgejo couldn’t locate any runners for an organisation to use. I’m not entirely sure why this was, as the runners were active and were properly registered with the Forgejo instance. It could be magical thinking, but my guess is that it was because I didn’t register the runners with an instance URL that ended with a slash. It seems like it’s possible to register runners that are only available to certain organisations or users. Might be that there’s some bit of code deep within Forgejo that’s expecting a slash to make the runners available to everyone? Not sure. In either case, after registering the runners with the trailing slash, the organisations started to recognise them.
The other error was seeing runs fail with the error message `cannot find: node in PATH`. This resolved itself after I changed the `runs-on` label within the action YAML file itself from `linux` to `docker`. I wasn’t expecting this to be an issue — I thought the `runs-on` field was used to select a runner based on their published labels, and that `docker` was just one such label. The Forgejo documentation was not super clear on this, but I got the sense that the `docker` label was special in some way. I don’t know. But whatever, I can use `docker` in my workflows.
Once these battles were won, the runners were ready, and I was able to build and test a Go package successfully. One last annoying thing is that Forgejo doesn’t enable runners by default for new repositories — I guess because they’re still considered an alpha release. I can live with that in the short term, or maybe there’s some configuration I can enable to always have them turned on. But in either case, I’ve now got two Linux runners working.
MacOS Runner And Repository Backup
The last piece of the puzzle was to set up a MacOS runner. This is for the occasional MacOS application I’d like to build, but it’s also to run the nightly repository backups. For this, I’m using a Mac Mini currently being used as a home server. This has an external hard drive connected, with online backups enabled, which makes it a perfect target for a local backup of Forgejo and all the repository data should the worst come to pass.
Forgejo doesn’t have an official release of a MacOS runner, but Gitea does, and I managed to download a MacOS build of act_runner and deploy it onto the Mac Mini. Registration and performing a quick test with the runner running in the foreground went smoothly. I then went through the process of setting it up as a MacOS launch agent. This was a pain, and it took me a couple of hours to get working. I won’t go through every issue I encountered, mainly because I can’t remember half of them, but here’s a small breakdown of the big ones:
It took me most of Saturday morning, but in the end I managed to get this MacOS runner working properly. I’ve not done anything MacOS-specific yet, so I suspect I may have some XCode related stuff to do, but the backup job is running now and I can see it write stuff to the external hard drive.
The backup routine itself is a simple Go application that’s kicked off daily by a scheduled Forgejo Action (it’s not in the documentation yet, but the version of Forgejo I deployed does support scheduled actions). It makes a backup of the Forgejo instance, the PostgreSQL database, and all the repository data using SSH and Rsync.
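The shape of that routine is roughly this. It’s a loose sketch only: the hosts and paths are placeholders, and it assumes Forgejo keeps Gitea’s dump command:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

// run shells out to a command, echoing its output, and returns any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	stamp := time.Now().Format("2006-01-02")

	// Dump Forgejo and the PostgreSQL database on the remote host.
	if err := run("ssh", "forgejo-host", "forgejo", "dump", "--file", fmt.Sprintf("/tmp/forgejo-%s.zip", stamp)); err != nil {
		log.Fatal(err)
	}
	if err := run("ssh", "forgejo-host", "pg_dump", "-Fc", "-f", fmt.Sprintf("/tmp/forgejo-%s.pgdump", stamp), "forgejo"); err != nil {
		log.Fatal(err)
	}

	// Rsync the dumps and the raw repository data down to the external drive.
	if err := run("rsync", "-az", "forgejo-host:/tmp/", "/Volumes/Backups/forgejo/dumps/"); err != nil {
		log.Fatal(err)
	}
	if err := run("rsync", "-az", "forgejo-host:/mnt/data/repositories/", "/Volumes/Backups/forgejo/repositories/"); err != nil {
		log.Fatal(err)
	}
}
```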
I won’t share these repositories as they contain references to paths and such that I consider sensitive; but if you’re curious about what I’m using for the launch agent settings, here’s the plist file I’ve made:
<!-- dev.lmika.repo-admin.macos-runner.plist -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>dev.lmika.repo-admin.macos-runner</string>
<key>ProgramArguments</key>
<array>
<string>/Users/lmika/opt/macos-runner/scripts/run-runner.sh</string>
</array>
<key>KeepAlive</key>
<true/>
<key>RunAtLoad</key>
<true/>
<key>StandardErrorPath</key>
<string>/tmp/runner-logs.err</string>
<key>StandardOutPath</key>
<string>/tmp/runner-logs.out</string>
</dict>
</plist>
This is deployed by copying it to `$HOME/Library/LaunchAgents/dev.lmika.repo-admin.macos-runner.plist`, and then installed and enabled by running these commands:
launchctl bootstrap gui/$UID "Library/LaunchAgents/dev.lmika.repo-admin.macos-runner.plist"
launchctl kickstart gui/$UID/dev.lmika.repo-admin.macos-runner
The Price Of A Name
One might see this endeavour, when viewed from a pure numbers-and-effort perspective, as a bit of a crazy thing to do. Saying “no” to all this cheap code hosting, complete with the backing of a large corporation, just for the sake of a name? I can’t deny that this may seem a little unusual, even a little crazy. After all, it’s more work and more money. And I’m not going to suggest that others follow me into this realm of a self-hosted SCM.
But I think my code deserves its own name now. After all, my code is my work; and much like we encourage writers to write under their own domain name, or artists and photographers to move away from the likes of Instagram and other such services, so too should my work be under a name I own and control. The code I write may not be much, but it is my own.
Of course, I’m not going to end this without my usual “we’ll see how we go” hedge against myself. I can only hope I’ve got enough safeguards in place to save me from my own decisions, or to easily move back to a hosted service, when things go wrong or when it all becomes just a bit much. More on that in the future, I’m sure.
A Bit of 'Illuminating' Computer Humour
Here’s some more computer-related humour to round out the week:
How many software developers does it take to change a lightbulb? Just one.
How many software developers does it take to change 2 lightbulbs? Just 10.
How many software developers does it take to change 7 lightbulbs? One, but everyone within earshot will know about it.
How many software developers does it take to change 32 lightbulbs? Just one, provided the space is there.
How many software developers does it take to change 35 lightbulbs? Just one. #lightbulbs
How many software developers does it take to change 65 lightbulbs? Just one, if they’re on their A grade.
How many software developers does it take to change 128 lightbulbs? Just one, but they’ll be rather negative about it.
How many software developers does it take to change 256 lightbulbs? What lightbulbs?
Enjoy your Friday.
Asciidoc, Markdown, And Having It All
Took a brief look at Asciidoc this morning.
This is for that Markdown document I’ve been writing in Obsidian. I’ve been sharing it with others using PDF exports, but its importance has grown to a point where I need to start properly maintaining a change log. And also… sharing via PDF exports? What is this? Microsoft Word in the 2000s?
So I’m hoping to move it to a Gitlab repo. Gitlab does support Markdown with integrated Mermaid diagrams, but not Obsidian’s extension for callouts. I’d like to be able to keep these callouts, as I use them in quite a few places.
While browsing through Gitlab’s help guide on Markdown extensions, I came across their support for Asciidoc. I haven’t tried Asciidoc before, and after taking a brief look at it, it seemed like a format better suited to the type of document I’m working on. It has things like an auto-generated table of contents, built-in support for callouts, and proper title and heading separations; just features that work better than Markdown for long, technical documents. The language syntax also supports a number of text-based diagram formats, including Mermaid.
However, as soon as I started porting the document over to Asciidoc, I found it to be no Markdown in terms of mind share. Tool support is quite limited; in fact, it’s pretty bad. There’s nothing like iA Writer for Asciidoc, with the split-screen source text and live preview that updates when you make changes. There are loads of these tools for Markdown, so many that I can’t keep track of them (the name of the iA Writer alternative always eludes me).
Code editors should work, but they’re not perfect either. GoLand supports Asciidoc, but not with embedded Mermaid diagrams. At least not out of the box: I had to get a separate JAR which took around 10 minutes to download. Even now I’m fighting with the IDE, trying to get it to find the Mermaid CLI tool so it can render the diagrams. I encountered none of these headaches when using Markdown: GoLand supports embedded Mermaid diagrams just fine. I guess I could try VS Code, but to download it just for this one document? Hmm.
In theory the de-facto CLI tool should work, but in order to get Mermaid diagrams working there I need to download a Ruby gem and bundle it with the CLI tool (this is in addition to the same Mermaid command-line tool GoLand needs). Why this isn’t bundled by default in the Homebrew distribution is beyond me.
So for now I’m abandoning my wish for callouts and just sticking with Markdown. This is probably the best option, even if you set tooling aside. After all, everyone knows Markdown, a characteristic of the format that I shouldn’t simply ignore. Especially for these technical documents, where others are expected to contribute changes as well.
It’s a bit of a shame though. I still think Asciidoc could be better for this form of writing. If only those that make writing tools would agree.
Addendum: after drafting this post, I found that Gitlab actually supports auto-generated table of contents in Markdown too. So while I may not have it all with Markdown — such as callouts — I can still have a lot.