Ran into my old barista this morning. He made morning coffees at the station in the late 2010s, before Covid wiped his business out. I thought he’d headed back to New Zealand after that, but no, he’s still around and doing quite well (just not making morning coffees). Really great to see him again. ☕️
Spent the last week polishing up the tool I use that takes journal entries from a Day One export and adds them as blog posts to a Hugo site. That tool is now open source for anyone else who may want to do this. You can find a link to it here: day-one-to-hugo.
Also recorded a simple demo video on how to use it:
It would be nice for browsers to remember every closed tab that had been open for more than, say, a day. This could sit alongside the browser’s current history and recently-closed-tabs list, which are more geared towards recent browsing. But unlike those, this would maintain a long-term history, recording every closed tab since the beginning of time. And it doesn’t even need to be the full back-stack: the last visited URL would be fine.
The day limit is important, as it provides a good hint that it’s a tab I want to revisit later. There’ve been many a time when I’ve had a tab open for weeks, telling myself that I’ll read or do something with it, only to close it later accidentally or when I want a tidier browser workspace. If and when the time comes that I want to revisit it, it’s fallen out of the history, and all I’m left with is regret for not making a bookmark.
I suppose I could get into the habit of bookmarking things when I close them, but that’ll just move the mess from the browser to Linkding. No, this is something that might work better in the browsers themselves.
So, walk up the hill, then it’s through the gate to get to the path. Piece of cake.
Oh… 🤔


Trying out a bit of an experiment. I’m going to start accenting a few posts here with a small image in the form of marginalia, similar to what Dave Winer does on Scripting News. We’ll start with a classic for this post: a “website under construction” sign (this won’t appear in the RSS feed, so click through to see it).
This is something I’ve been thinking about trying for a while, and I’ll be honest, I have no idea how it’ll look here. In fact, I’m a little nervous about this. Would it enhance the posts in any way, or be the blogging equivalent of clip-art on a PowerPoint slide? Would it make the site look dated? Would it even work with the type of posts I write here? I personally like the ones that appear on Scripting News, but I do wonder if that’s because they’re more likely to sit beside commentary about the status quo, rather than the “today I did this” or “struggling with that” posts I tend to write here (this imbalance of topics is an anxiety I have about this blog that’s best saved for another time).
I guess we’ll find out together. I’ll try them for a bit and see how I feel in, say, a month. If I don’t like them, or I find myself never adding them, then I’ll pull them down and consider the experiment complete. Hopefully by then I’ll have some answers.
Really enjoyed the conversation between Sam Harris and George Saunders. I’ve not read anything from Saunders, but they had such an insightful discussion about writing and culture (and a hint of politics) that it might be worth looking at some of his fiction. 🎙️
Almost didn’t make it to the gym this evening. Glad I had a change of heart. Also glad that it came while the gym was still open.
(I say “change of heart”, as if the decision was made in the abstract. But it was the bad, guilty vibes that actually drove me to go.)
Gave the sample Storytime episode for my train line a try, and it’s not for me. Aside from not being available wherever I get my other podcasts, the sample was really overproduced, with backing audio and cheesy sound effects. Not a fan of those sorts of podcasts.
The nature of AWS is that, even with things like ChatGPT, there are still traps lying about for those poor souls who don’t know what they don’t know. For example: did you know that you cannot immediately delete a secret? You can only “schedule” it to be deleted at a future date that’s no earlier than 7 days from now. The secret won’t show up in the console, but you can’t reuse the same secret ID until it’s actually gone.
So good luck recovering from any mistakes you’ve made creating a secret via the AWS console instead of using CloudFormation, like I did today. I guess some things’ll never change.
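For what it’s worth, the CLI at least makes the scheduling explicit, and has a flag to skip the recovery window entirely, which the console doesn’t offer (the secret ID here is made up):

```shell
# Schedule deletion; the recovery window must be between 7 and 30 days
aws secretsmanager delete-secret --secret-id my-app/db-password --recovery-window-in-days 7

# Or, if you're certain, delete immediately with no recovery window
aws secretsmanager delete-secret --secret-id my-app/db-password --force-delete-without-recovery
```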
Been working on a CloudFormation stack that defines IAM resources: roles, policies, profiles, etc. I can do a little bit already, like changing policy documents, but writing all this from scratch is beyond me. ChatGPT has been a great help here. I would’ve been bothering my coworkers all day otherwise.
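For anyone else feeling their way through this, the rough shape of an IAM role plus instance profile in CloudFormation looks something like the following. This is an illustrative sketch, not my actual stack; the names and the S3 permission are made up:

```yaml
Resources:
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: app-policy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::example-bucket/*
  AppInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref AppRole
```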
Code merged and artefacts prepared. Now to deploy it on brand spanking new infrastructure.
So, this is how my morning went.

Apologies to my reviewers for all the notification emails they’re receiving during this battle with the CI/CD build.
Might be that the only way I’ll learn another language is if I put the spoken training audio to music, preferably something that could pass as an entry to Eurovision.
Linux administration is quite fun. I don’t usually get an opportunity to do it as part of my day-to-day, so it’s always a joy having a task that involves SSH and interacting with a shell. 🐧

📺 Fallout: Season 1 (2024)

👨‍💻 New post on Linux over at Coding Bits: Packaging Services With Systemd
More Tools For Blogging Tool
Spent the last week working on Blogging Tool. I want to get as much done as I can before motivation begins to wane and it starts languishing like every other project I’ve worked on. Not sure I can stop that, but I think I can get the big ticket items in there so it’ll be useful to me while I start work on something else.
I do have plans for some new tools for Blogging Tool: making it easier to create a Lightbox Gallery was just the start. This last week I managed to get two of them done, along with some cross-functional features that should help with any other tools I build down the road.
Move To SQLite
First, a bit of infrastructure. I moved away from Rainstorm as the data store and replaced it with SQLite 3. I’m using a SQLite 3 driver that doesn’t use CGO, as the Docker container this app runs in doesn’t have libc. It doesn’t have as much support out there as the more popular SQLite 3 client, but I’ve found it works just as well.
One could argue that it would’ve been fine sticking with Rainstorm for this. But as good as Rainstorm’s API is, the fact that it takes out a lock on the database file is annoying. I’m running this app using Dokku, which takes a zero-downtime approach to deployments. This basically means the old and new app containers run at the same time. The old container doesn’t get shut down for about a minute, and because it’s still holding the lock, I can’t use the new version during that time, as the new container cannot access the Rainstorm database file. Fortunately, this is not an issue with SQLite 3.
It took me a couple of evenings to port the logic over, but fortunately I did this early, while there was no production data to migrate. I’m using Sqlc to generate Go bindings from SQL statements, and a home-grown library to deal with schema migrations. It’s not as easy to use as the Rainstorm API, but it’ll do. I’m finding working with raw SQL again quite refreshing, so it may end up being better in the long run.
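The home-grown migration library isn’t anything fancy. The core of it is just: keep an ordered list of migrations, record the schema version in the database, and apply whatever’s pending. A rough sketch of that selection logic in Go (the names here are illustrative, not the actual code):

```go
package main

import "fmt"

// Migration pairs a schema version with the SQL that brings the
// database up to that version.
type Migration struct {
	Version int
	SQL     string
}

// Pending returns the migrations that still need to be applied, given
// the schema version currently recorded in the database.
func Pending(all []Migration, current int) []Migration {
	var out []Migration
	for _, m := range all {
		if m.Version > current {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	migrations := []Migration{
		{1, "CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT)"},
		{2, "ALTER TABLE files ADD COLUMN size INTEGER"},
	}
	// A database already at version 1 only needs migration 2.
	for _, m := range Pending(migrations, 1) {
		fmt.Println(m.Version, m.SQL)
	}
}
```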

Image Processing
Once that was done, I focused on adding those tools I wanted. The first one to sit alongside the gallery tool is something for preparing images for publishing. This will be particularly useful for screenshots. If you look carefully, you’d notice that the screenshots on this site have a slightly different shadow than the macOS default. That’s because I actually take the screenshot without the shadow, then use a CLI tool to add one prior to upload. I do this because the image margins macOS includes with the shadow are pretty wide, which makes the actual screenshot part smaller than I’d like. Using the CLI tool is fine, but it’s not always available to me, so it seemed like a natural thing to add to this blogging tool.
So I added an image processing “app” (I’m calling these tools “apps” to distinguish them from features that work across all of them) which takes an image and allows you to apply a processor to it. You can then download the processed image and use it wherever you need.

This is all done within the browser, using the Go code from the CLI tool compiled to WASM. The reason for this is performance. These images can be quite large, and I’d rather avoid the network round-trip. I’m betting that it’ll be faster running it in the browser anyway, even if you consider the amount of time it takes to download the WASM binary (which is probably around a second or so).
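For those curious, getting Go code running in the browser is mostly a matter of setting the build target (the package path here is made up) and serving the wasm_exec.js support file that ships with the Go distribution:

```shell
GOOS=js GOARCH=wasm go build -o processor.wasm ./cmd/processor
```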
One addition I made was to allow processors to define parameters, which are shown to the user as input fields. There’s little need for this now — only a simple meme-text processor uses it — but it’s one of those features I’d like to at least get basic support for before my interest wanes. It wouldn’t be the first time I stopped short of finishing something, thinking to myself that I’d add what I need later, then never going back to do so. That said, I do have some ideas for processors which could use this feature for real, which I haven’t implemented yet. More on that in the future, maybe.

Audio Transcoding And Files
The other one I added deals with audio transcoding. I’ve gotten into the habit of narrating the long-form posts I write. I usually use QuickTime Player to record these, but it only exports M4A audio files, and I want to publish them as MP3s.
So after recording them, I need to do a transcode. There’s an ffmpeg command line invocation I use to do this:
ffmpeg -i in.m4a -c:v copy -c:a libmp3lame -q:a 4 out.mp3
But I have to bring up a terminal, retrieve it from the history (while it’s still in there), pick a filename, etc. It’s not hard to do, but it’s a fair bit of busy work. I guess now that I’ve written it here, it’ll be less work to remember. But it’s a bit late now, since I’ve added a feature to do this for me. I’ve included a statically linked version of ffmpeg in the Docker container (it needs to be statically linked for the same reason I can’t use CGO: there’s no libc or any other shared objects) and wrapped it in a small form where I upload my M4A.

The transcoding is done on the server (it seemed a bit much asking for this to be done in the browser), but I’m hoping that most M4A files will be small enough that it won’t slow things down too much. The whole process is synchronous right now, and I could’ve made the file available then and there, but this won’t be the only feature I’m thinking of that would produce files I’d like to do things with later. Plus, I’d like to eventually make it asynchronous so that I don’t have to wait for long transcodes, should there be any.
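Server-side, the feature is essentially that same invocation run via os/exec. A hedged sketch (the function and file names are mine, not the actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// transcodeArgs builds the same ffmpeg invocation I was typing by hand:
// copy any video stream (e.g. cover art) as-is, and encode the audio as
// MP3 at VBR quality 4.
func transcodeArgs(in, out string) []string {
	return []string{"-i", in, "-c:v", "copy", "-c:a", "libmp3lame", "-q:a", "4", out}
}

func main() {
	cmd := exec.Command("ffmpeg", transcodeArgs("narration.m4a", "narration.mp3")...)
	fmt.Println(cmd.String())
	// cmd.Run() would perform the actual transcode; omitted here.
}
```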
So along with this feature, I added a simple file manager in which these working files will go.

They’re backed by a directory in the container, with metadata managed by SQLite 3. It’s not a full file system — you can’t create directories, for example. Nor is it designed to be long-term storage for these files. It’s just a central place where any app can write files out as a result of its processing. The user can download the files, or potentially upload them to a site, then delete them. This would be useful for processors that take a little while to run, or that run on a regular schedule.
I don’t have many uses for this yet, apart from the audio transcoder, but having this cross-functional facility opens it up to features that need something like this. It means I don’t have to hand-roll it for each app.
Anyway, that’s the current state of affairs. I have one, maybe two, large features I’d like to work on next. I’ll write about them once they’re done.
Oof! These mornings have been really cold this last week. Had to bring out my wool and possum fur gloves for the walk to the cafe in 0.5°C weather.
🔗 Adding GitHub-Style Markdown Alerts to Eleventy
GitHub has Markdown support for alerts (aka callouts), where the syntax looks like Obsidian’s.
So apparently, if we were using GitHub instead of GitLab, I could’ve had it all. 😏

One other thing I found this morning during my exploration of Markdown and AsciiDoc is that many tools have a problem with JSON code blocks containing JavaScript-like comments. They’re reported as syntax errors, and sometimes they break the syntax highlighting. They’re still included in the rendered HTML, but it feels to me like the tools do so begrudgingly. GitLab even marks them up with a red background colour.
Why so strict? The code blocks are for human consumption, and it’s really useful to annotate them occasionally. I always find myself adding remarks like “this is the new line”, or removing a large, irrelevant chunk of JSON and replacing it with an ellipsis indicating that I’ve done so.
I know that some Markdown parsers support line annotations, but each one has a different syntax, and they don’t work for every annotation I want to make. But you know what does? Comments! I know how to write them, they’re easy to add, and they’re the same everywhere. Just let me use them in blocks of JSON code, please.
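Concretely, this is the kind of annotated block I mean; harmless to a human reader, yet flagged as an error by strict JSON highlighters (the content is made up):

```jsonc
{
  "name": "my-service",
  "timeout": 30,  // this is the new line
  "routes": [ /* ...large irrelevant chunk elided... */ ]
}
```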
Oh, and also let me add trailing commas too.