Devlog
More Tools For Blogging Tool
Spent the last week working on Blogging Tool. I want to get as much done as I can before motivation begins to wane and the project starts languishing like every other one I've worked on. Not sure I can stop that, but I think I can get the big ticket items in there so it'll be useful to me while I start work on something else.
I do have plans for some new tools for Blogging Tool: making it easier to build a Lightbox gallery was just the start. This last week I managed to get two of them done, along with some cross-functional features which should help with any other tools I make down the road.
Move To SQLite
First, a bit of infrastructure. I moved away from Rainstorm as the data store and replaced it with SQLite 3. I'm using a SQLite driver that doesn't use CGO, as the Docker container this app runs in doesn't have libc. It doesn't have as much support out there as the more popular SQLite client, but I've found it to work just as well.
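For what it's worth, a CGO-free setup would look something like the sketch below. The post doesn't name the driver, so I'm assuming the pure-Go modernc.org/sqlite here; the database filename is also made up:

```go
package main

import (
	"database/sql"
	"log"

	// modernc.org/sqlite is a translation of SQLite into pure Go:
	// no CGO, so no libc required in the container.
	_ "modernc.org/sqlite"
)

func main() {
	// Note the driver name is "sqlite", not "sqlite3" as with the
	// more popular CGO-based driver.
	db, err := sql.Open("sqlite", "blogging-tool.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```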
One could argue that it would've been fine sticking with Rainstorm for this. But as good as Rainstorm's API is, the fact that it takes out a lock on the database file is annoying. I'm running this app using Dokku, which takes a zero-downtime approach to deployments, meaning the old and new app containers run at the same time. The old container doesn't get shut down for about a minute, and because it's still holding the lock, I can't use the new version during that time, as the new container cannot access the Rainstorm database file. Fortunately, this is not an issue with SQLite 3.
It took me a couple of evenings to port the logic over, but fortunately I did this early, while there was no production data to migrate. I'm using sqlc for generating Go bindings from SQL statements, and a home-grown library for dealing with the schema migrations. It's not as easy to use as the Rainstorm API, but it'll do. I'm finding working with raw SQL again to be quite refreshing, so it may end up being better in the long run.
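For those unfamiliar with sqlc: you write annotated SQL, and it generates typed Go bindings. A sketch of the pattern with a hypothetical query (the app's real schema isn't shown in the post) might look like this:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

// sqlc works from annotated queries. Given something like this
// (hypothetical; the real schema lives in the app):
//
//	-- name: GetFile :one
//	SELECT id, name FROM files WHERE id = ? LIMIT 1;
//
// it generates a typed method roughly equivalent to:

type File struct {
	ID   int64
	Name string
}

type Queries struct{ db *sql.DB }

func (q *Queries) GetFile(ctx context.Context, id int64) (File, error) {
	var f File
	err := q.db.QueryRowContext(ctx,
		`SELECT id, name FROM files WHERE id = ? LIMIT 1`, id,
	).Scan(&f.ID, &f.Name)
	return f, err
}

func main() {
	db, err := sql.Open("sqlite", "blogging-tool.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	q := &Queries{db: db}
	if f, err := q.GetFile(context.Background(), 1); err == nil {
		log.Println(f.Name)
	}
}
```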

Image Processing
Once that was done, I focused on adding the tools I wanted. The first one to sit alongside the gallery tool is something for preparing images for publishing. This will be particularly useful for screenshots. If you look carefully, you'd notice that the screenshots on this site have a slightly different shadow than the macOS default. That's because I actually take the screenshot without the shadow, then use a CLI tool to add one prior to upload. I do this because the image margins macOS includes with the shadow are pretty wide, which makes the actual screenshot part smaller than I'd like. Using the CLI tool is fine, but it's not always available to me. So it seemed like a natural thing to add to this blogging tool.
So I added an image processing "app" (I'm calling these tools "apps" to distinguish them from features that work across all of them) which takes an image and allows you to apply a processor to it. You can then download the processed image and use it for whatever you need.

This is all done within the browser, using the Go code from the CLI tool compiled to WASM. The reason for this is performance. These images can be quite large, and I'd rather avoid the network round-trip. I'm betting that it'll be faster running it in the browser anyway, even accounting for the time it takes to download the WASM binary (which is probably around a second or so).
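The post doesn't show the bridging code, but a Go-to-WASM setup typically exposes a function to JavaScript via syscall/js, built with `GOOS=js GOARCH=wasm go build`. A minimal sketch, with hypothetical function names:

```go
//go:build js && wasm

package main

import "syscall/js"

// processImage is a hypothetical bridge between the page's JavaScript
// and the Go processors from the CLI tool: it copies the image bytes
// in, runs a processor, and copies the result back out.
func processImage(this js.Value, args []js.Value) any {
	src := make([]byte, args[0].Get("length").Int())
	js.CopyBytesToGo(src, args[0]) // args[0] is a Uint8Array from JS

	out := addShadow(src) // stand-in for a real processor

	dst := js.Global().Get("Uint8Array").New(len(out))
	js.CopyBytesToJS(dst, out)
	return dst
}

// addShadow is a placeholder; the real work lives in the CLI tool's packages.
func addShadow(b []byte) []byte { return b }

func main() {
	js.Global().Set("processImage", js.FuncOf(processImage))
	select {} // block forever so the exported function stays callable
}
```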
One addition I did make was to allow processors to define parameters, which are shown to the user as input fields. There's little need for this now (it's only being used by a simple meme-text processor), but it's one of those features I'd like to at least get basic support for before my interest wanes. It wouldn't be the first time I stopped short of finishing something, thinking to myself that I'd add what I'll need later, then never going back to do so. That said, I do have some ideas for processors which could use this feature for real, which I haven't implemented yet. More on that in the future, maybe.
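Here's one hypothetical shape this could take; none of these names are from the actual codebase:

```go
package processor

import "image"

// Param describes a single input field shown to the user before the
// processor runs (field names are my guesses).
type Param struct {
	Name    string // key used when submitting the form, e.g. "text"
	Label   string // label rendered next to the input field
	Default string // pre-filled value, if any
}

// Processor is something that transforms an image, optionally driven
// by user-supplied arguments (like the meme-text processor's caption).
type Processor interface {
	Params() []Param
	Process(src image.Image, args map[string]string) (image.Image, error)
}
```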

Audio Transcoding And Files
The other one I added deals with audio transcoding. I've gotten into the habit of narrating the long-form posts I write. I usually use QuickTime Player to record these, but it only exports M4A audio files, and I want to publish them as MP3s.
So after recording them, I need to do a transcode. There's an ffmpeg command line invocation I use to do this:

```
ffmpeg -i in.m4a -c:v copy -c:a libmp3lame -q:a 4 out.mp3
```

But I have to bring up a terminal, retrieve the command from the history (while it's still in there), pick a filename, etc. It's not hard to do, but it's a fair bit of busy work.
I guess now that I've written it here, it'll be less work to remember. But it's a bit late for that, since I've added the feature to do this for me. I've included a statically linked version of ffmpeg in the Docker container (it needs to be statically linked for the same reason I can't use CGO: there's no libc or any other shared objects) and wrapped it in a small form where I upload my M4A.

The transcoding is done on the server (it seemed a bit much to ask for this to be done in the browser), but I'm hoping that most M4A files will be small enough that it won't slow things down too much. The whole process is synchronous right now, and I could've made the file available then and there, but this wouldn't be the only feature I'm thinking of that would produce files I'd like to do things with later. Plus, I'd like to eventually make it asynchronous so that I don't have to wait for long transcodes, should there be any.
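A minimal sketch of that server-side step, assuming a Go wrapper around the bundled binary (the function name and wiring are mine; the flags come from the invocation above):

```go
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
)

// transcodeToMP3 shells out to the statically linked ffmpeg bundled
// in the container, mirroring the CLI invocation from earlier.
func transcodeToMP3(ctx context.Context, in, out string) error {
	cmd := exec.CommandContext(ctx, "ffmpeg",
		"-i", in,
		"-c:v", "copy", // pass any embedded artwork through untouched
		"-c:a", "libmp3lame", // encode the audio as MP3
		"-q:a", "4", // VBR quality 4
		out)
	cmd.Stderr = os.Stderr // ffmpeg writes its progress to stderr
	return cmd.Run()
}

func main() {
	if err := transcodeToMP3(context.Background(), "in.m4a", "out.mp3"); err != nil {
		log.Fatal(err)
	}
}
```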
So along with this feature, I added a simple file manager into which these working files will go.

They're backed by a directory in the container, with metadata managed by SQLite 3. It's not a full file system (you can't do things like create directories, for example), nor is it designed to be long-term storage for these files. It's just a central place where any app can write files out as a result of its processing. The user can download the files, or potentially upload them to a site, then delete them. This would be useful for processors which could take a little while to run, or which run on a regular schedule.
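The post doesn't show the schema, but the metadata row for each working file might look something like this sketch (all field names are guesses):

```go
package files

import "time"

// WorkingFile is one way the file metadata might be modelled. The
// bytes live in a directory inside the container; rows like this one
// describe them.
type WorkingFile struct {
	ID        int64
	App       string    // the "app" that produced the file, e.g. "transcode"
	Name      string    // name shown in the file manager
	Path      string    // location under the container's data directory
	CreatedAt time.Time // useful for tidying up files no longer needed
}
```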
I don't have many uses for this yet, apart from the audio transcoder, but having this cross-functional facility opens the door to features that need something like it. It means I don't have to hand-roll it for each app.
Anyway, that's the current state of affairs. I have one, maybe two, large features I'd like to work on next. I'll write about them once they're done.
Blogging Gallery Tool
Oof! It's been a while, hasn't it?
Not sure why I expected my side-project work to continue while I'm here in Canberra. It feels like a waste of a trip to go somewhere different (well, not "unique", since I've been here before) and expect to spend all your time indoors writing code. Maybe that's a choice I would've made when I was younger, but now? Hmm, better to spend my time outdoors, "touching grass". So that's what I've been doing.
But I can't do that all the time, and although I still have UCL (I've made some small changes recently, but nothing worth writing about) and Photo Bucket, I spent this past fortnight working on new things.
The first was an aborted attempt at an RSS reader for Android that works with Feedbin. I did get something working, but I couldn't get it onto my mobile, and frankly it was rather ugly. So I've set that idea aside for now. I might revisit it again.
But all my outdoor adventures did motivate me to actually finish something I've been wanting to do for a couple of years now. For you see, I take a lot of photos, and I'd like to publish them on my Micro.blog in the form of a GLightbox gallery (see this post for an example). But making these galleries is a huge pain. Setting aside the fact that I always forget the short-codes to use, it's just a lot of work. I'm always switching back and forth between the Upload section in Micro.blog, looking at the images I want to include, and a text file where I'm working on the gallery markup and captions.
I've been wishing for some tool which would take on much of this work for me. I'd give it the photos, write the captions, and it would generate the markup. I've had a run at building something like this a few times already, including an idea for a feature in Photo Bucket. But I couldn't get over the amount of effort it would take to upload, process, and store the photos. It's not that it would be hard, but it always seemed like double handling, since their ultimate destination was Micro.blog. Plus, I was unsure how much effort I wanted to put into this, and the minimum amount needed to deal with the images seemed like a bit of a hassle.

It turns out the answer was in front of me this whole time. The hard part was preparing the markup, so why couldn't I build something that simply did that? The images would already be in Micro.blog; just use their URLs. A much simpler approach indeed.
So I started working on "Blogging Tools", a web-app that'll handle this part of making galleries. First, I upload the images to Micro.blog, then I copy the image tags into this tool:

The tool will parse these tags, preserving things like the `alt` attribute, and present the images in the order they'll appear in the gallery, with a text box beside each one allowing me to write the caption.
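A sketch of how that parsing step might work, assuming golang.org/x/net/html is used (the post doesn't say how the tags are actually parsed):

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

type galleryImage struct {
	Src, Alt string
}

// parseImageTags walks an HTML fragment and collects the src and alt
// attributes of every img tag, in document order.
func parseImageTags(fragment string) ([]galleryImage, error) {
	doc, err := html.Parse(strings.NewReader(fragment))
	if err != nil {
		return nil, err
	}
	var imgs []galleryImage
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "img" {
			var img galleryImage
			for _, a := range n.Attr {
				switch a.Key {
				case "src":
					img.Src = a.Val
				case "alt":
					img.Alt = a.Val
				}
			}
			imgs = append(imgs, img)
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
	return imgs, nil
}

func main() {
	imgs, _ := parseImageTags(`<img src="https://example.com/a.jpg" alt="A photo">`)
	fmt.Println(imgs)
}
```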

Once I'm done, I can then "render" the gallery, which will produce the Hugo short-codes that I can simply copy and paste into the post.
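The rendering step is then mostly string building. Here's a sketch; the short-code names are placeholders of my own, since the real ones aren't shown in the post:

```go
package main

import (
	"fmt"
	"strings"
)

type galleryItem struct {
	Src, Alt, Caption string
}

// renderGallery emits Hugo-style short-codes for each image. The
// "glightbox" names are hypothetical stand-ins for the real ones.
func renderGallery(items []galleryItem) string {
	var b strings.Builder
	b.WriteString("{{< glightbox-gallery >}}\n")
	for _, it := range items {
		fmt.Fprintf(&b, "  {{< glightbox src=%q alt=%q caption=%q >}}\n",
			it.Src, it.Alt, it.Caption)
	}
	b.WriteString("{{< /glightbox-gallery >}}")
	return b.String()
}

func main() {
	fmt.Println(renderGallery([]galleryItem{
		{Src: "https://example.com/a.jpg", Alt: "A photo", Caption: "First photo"},
	}))
}
```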


This took me a few evenings of work. It's a simple Go app, using Fiber and Rainstorm, running in Docker. Seeing that the image files themselves are not managed by the tool, once I got the image parsing and rendering done, the rest was pretty straightforward. It's amazing to think that removing the image handling side of things has turned this once "sizeable" tool into something that was quick to build and, most importantly, finally exists.

I do have more ideas for this "Blogging Tool". The next is porting various command line tools that do simple image manipulation to WASM so I can run them in the browser (these tools were used to crop and produce the shadow of the screenshot in this post). I'm hoping that these will work on the iPad, so that I can do more of the image processing there rather than give up and go to a "real" computer. I should also talk a little about why I chose Rainstorm over SQLite, and whether that was a good idea. Maybe more on those topics later, but I'll leave it here for now.
Bulk Image Selection
Some light housekeeping first: this is the 15th post on this blog, so I thought it was time for a proper domain name. Not that buying a domain automatically means I'll keep at it, but it does feel like I've got some momentum writing here now, so I'll take the $24.00 USD risk. I'd also like to organise a proper site favicon too. I've got some ideas, but I've yet to crack open Affinity Designer.
Anyway, I've been spending some time on Photo Bucket on and off this past week. I've fully implemented the new page model mentioned in the last post, and hooked it up to the switcher in the "Design" admin section. I've also built the public gallery and gallery item pages.


They're a little on the simplistic side. That's partly due to my minimalistic design sensibilities, but it's also because I haven't spent a lot of time on the public pages yet. I probably shouldn't leave it too long, lest my impression of how it looks drops to the point where I lose interest in working on this again. It's a challenge, but I guess my counter is that I'll probably be spending more time in the admin section, so as long as the experience is good enough there, I can probably get by with a very basic public site for now (but not forever).

Now that galleries can be shown on the landing page, I'd like to organise another deployment so that I can start showing images in galleries. But before I do, I'll need an easy way to move all the existing images into a gallery. Clicking into 25 individual images just to select which gallery they should belong to doesn't sound desirable to me. So I spent some time adding batch operations to the image admin page. The way it works is that by pressing Shift and clicking the images, you can select them and perform batch operations, such as adding them to a gallery (this is the only one I have now).
I do like how the selection indicator came out. It's got some DaVinci Resolve vibes (I've been using DaVinci Resolve recently to edit some videos, so I may have been inspired by their design language here), but I think I might need to use another highlight colour: the black bleeds too easily into the images. Also, while I was recording the demo, I realised I'd broken the ability to rearrange gallery items. I may need to fix that before redeploying.

Clicking "Gallery" brings up a modal similar to the one used on the individual image page. It works slightly differently though: instead of choosing whether the image appears in a gallery or not, this one is used to choose which galleries to add the selected images to.
I'm not sure that this is the best modal for this. It was quick to add, but I get the feeling that using the same modal in slightly different ways could confuse people. So I might do something else here. An idea I have is a modal more like the following:

The idea is that all the galleries will be listed like before, but each will have a three-segmented button to the right. The centre segment will be selected by default, and will show how many of the selected images are currently within that particular gallery. The left segment will be the option to remove those images from the gallery, and the right segment will be the option to add all the remaining selected images to the gallery. These are identified by the number of selected images each gallery will have when the user clicks "Save": 0 for none, and the number of selected images for all. For good measure, there's an option to add all the selected images to a brand new gallery.

This will require some backend work, so I haven't started on it yet. I'm also not sure if this too will be a bit confusing; it may need some additional text explaining how it all works. I'm hoping that users will recognise it as operating similarly to the gallery modal for a single image.
The Site Page Model
I opened up Photo Bucket this morning and found a bunch of commits involving pages. I had no idea why I'd added them, until I launched the app and started poking around the admin section. I tried a toggle on the Design page which controlled whether the landing page showed a list of photos or galleries, and after finding that it wasn't connected to anything, it all came flooding back to me. So while what I'm going to describe here isn't fully implemented yet, I decided to write it down before I forget it again.
So here is where the story continues. Now that galleries have been added to the model, I want to make them available on the public site. For the first cut, I'm hoping to give the admin (i.e. the site owner) the ability to switch the landing page between a grid of photos and a grid of galleries. This is the single toggle on the "Design" page I was talking about earlier:

Oh, BTW: I finally got around to highlighting the active section in the admin screen. I probably should've done this earlier, as deferring these "unnecessary aesthetic tasks" does affect how it feels to use this app, and whether or not I'm likely to continue working on it.
Anyway, if this were before the new model, I would've implemented this as a flag on the Design model. But I'd like to start actually building out how pages on the site are to be modelled. What I'm thinking is an architecture that looks a little like the following:

The site content will be encoded using a new Page model. These Pages will be used to define the contents of the landing page (each site will have at least this page by default), along with any additional pages the user would like to add. Galleries and photos will automatically have their own pages, and will not need any specific Page models to be present on the site. How these will look, plus the properties and stylings of the site itself, will be dictated by the Design model.
Each Page instance will have the following properties:
- Slug: the path the page is available on. For the landing page, this will be `/`.
- Type: what the page will contain.
- Other properties, like title, description, etc., which have yet to be defined.
The "page type" is probably the most important property of a Page, as it will dictate what the contents of the page will be. The following page types will be supported at first (a sketch of how this might look as a model follows the list):
- Photos: a grid of all the photos managed on the site.
- Galleries: a grid of all the galleries managed on the site.
The user will probably not have much control over how these pages will look, apart from styling, footers, etc., which will live on the Design model. But I'm also thinking of adding a page type which would just produce a standard HTML page from a Markdown body. This could be useful for About pages or anything else the user may want to add to their site. I haven't thought about navigation, but I think choosing whether to include the page in the site's nav bar would be another useful property.
The result would be a sitemap that could look a little like the following, where all the user defined pages reference automatically created pages:

And this is what that single toggle should do: change the page type of the landing page between the photo list and the gallery list.
As you can probably guess, there's currently no way to add additional pages. But I'm doing this work now so that it'll be easier to do later.
Indexing In UCL
I've been thinking a little about how to support indexing in UCL, as in getting elements from a list or keyed values from a map. There already exists an `index` builtin that does this, but I'm wondering if this can be, or even should be, supported in the language itself.
I've reserved `.` for this, and it'll be relatively easy to make use of it to get map fields. But I do have some concerns with supporting list element dereferencing using square brackets. The big one is that if I were to use square brackets the same way many other languages do, I suspect (although I haven't confirmed) that it could lead to the parser treating them as two separate list literals. This is because the scanner ignores whitespace, and there are no other syntactic indicators to separate arguments to proc calls, like commas:

```
echo $x[4]       --> echo $x [4]
echo [1 2 3][2]  --> echo [1 2 3] [2]
```
So I'm not sure what to do here. I'd like to add support for `.` for map fields, but it feels strange doing just that and having nothing for list elements.
I can think of three ways to address this.
Do Nothing: the first option is easy. Don't add any new syntax to the language and just rely on the `index` builtin. Tcl does this with `lindex`, as does Lisp with `nth`, so I'd be in good company here.
Use Only The Dot: the second option is to add support for the dot and not the square brackets. This is what the Go templating language does for map keys or struct fields. It also has an `index` builtin, which works with slice elements. I'd probably do something similar, but I may extend it to support index elements. Getting the value of a field would be what you'd expect, but to get the element of a list, the construct `.(x)` could be used:
```
echo $x.hello    # returns the "hello" field
echo $x.(4)      # returns the fourth element of a list
```
One benefit of this could be that the `.(x)` construct would itself be a pipeline, meaning that string and calculated values could be used as well:

```
echo $x.("hello")
echo $x.($key)
echo $x.([1 2 3] | len)
echo $x.("hello" | toUpper)
```
I can probably get away with supporting this without changing the scanner or compromising the language design too much. It would be nice to add support for ditching the dot completely when using the parentheses, à la BASIC, but I'd probably run into the same issues as with the square brackets if I did, so I think that's out.
Use Parentheses To Be Explicit: the last option is to use square brackets, and modify the grammar slightly to only allow the use of suffix expansion within parentheses. That way, if you want to pass a list element as an argument, you have to use parentheses:
```
echo ($x[4])   # fourth element of $x
echo $x[4]     # $x, along with a list containing "4"
```
This is what you'd see in more functional languages like Elm and, I think, Haskell. I'll have to see whether this could work with changes to the scanner and parser if I were to go with this option. I think it may be achievable, although I'm not sure how.
An alternative might be to go the other way, and modify the grammar rules so that the square brackets bind closer to the list, which would mean that separate arguments involving square brackets would need to be in parentheses:
```
echo $x[4]     # fourth element of $x
echo $x ([4])  # $x, along with a list containing "4"
```
Or I could modify the scanner to recognise whitespace characters and use that as a guide to determine whether square brackets follow a value: no space would mean the square brackets represent an element suffix, and at least one space would mean two separate values.
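As a sketch of that last idea (in Go, and entirely hypothetical; this isn't UCL's actual scanner), the scanner could record whether each token touched the one before it, and the parser could treat a "glued" `[` as an index suffix:

```go
package main

import (
	"fmt"
	"unicode"
)

type token struct {
	text        string
	gluedToPrev bool // no whitespace between this token and the previous one
}

// scan is a toy scanner that only cares about the whitespace question:
// it splits the input into words and brackets, recording whether each
// token directly touched the one before it.
func scan(src string) []token {
	var toks []token
	glued := false
	cur := ""
	flush := func() {
		if cur != "" {
			toks = append(toks, token{text: cur, gluedToPrev: glued})
			cur, glued = "", true // next token touches this one unless we see a space
		}
	}
	for _, r := range src {
		switch {
		case unicode.IsSpace(r):
			flush()
			glued = false
		case r == '[' || r == ']':
			flush()
			toks = append(toks, token{text: string(r), gluedToPrev: glued})
			glued = true
		default:
			cur += string(r)
		}
	}
	flush()
	return toks
}

func main() {
	// In "echo $x[4]" the '[' is glued (index suffix); in "echo $x [4]"
	// it is not (separate list literal).
	for _, t := range scan("echo $x[4]") {
		fmt.Printf("%q glued=%v\n", t.text, t.gluedToPrev)
	}
}
```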
So that's where I am at the moment. I guess it all comes down to what works best for the language as a whole. I can live with option one, but it would be nice to have the syntax. I'd rather not go with option three, as I'd like to keep the parser simple (I'd rather not add to all the new-line complexities I have already).
Option two would probably be the least compromising to the design as a whole, even if the aesthetics are a bit strange. I can probably get used to them though, and I do like the idea of index elements being pipelines themselves. I may give option two a try, and see how it goes.
Anyway, more on this later.
Tape Playback Site
Thought I'd take a little break from UCL today.
Mum found a collection of old cassette tapes of us when we were kids, making and recording songs and radio shows. I've been digitising them over the last few weeks, and today the first recorded cassette was ready to share with the family.
I suppose I could've just given them raw MP3 files, but I wanted to record each cassette as two large files, one per side, so as not to lose too much of the various crackles and clatters made when the tape recorder was stopped and started. But I did want to catalogue the more interesting points in the recording, and it would've been a bit "meh" simply giving them to others as one long list of timestamps (simulating the rewind/fast-forward seeking action would've been a step too far).
Plus, simply emailing MP3 files wasn't nearly as interesting as what I did do, which was to put together a private site where others could browse and play the recorded tapes:


The site is not much to talk about: it's a Hugo site using the Mainroad theme, deployed to Netlify. There is some JavaScript that moves the playhead when a chapter link is clicked, but the rest is just HTML and CSS.

But I did want to talk about how I got the audio files into Netlify. I wanted to use `git lfs` for this and have Netlify fetch them when building the site. Netlify doesn't do this by default, and I get the sense that Netlify's support for LFS is somewhat deprecated. Nevertheless, I gave it a try by adding an explicit `git lfs` step in the build to fetch the audio files. It could've been that I was using the LFS command incorrectly, or maybe it was invoked at the wrong time, but whatever the reason, the command errored out and the audio files didn't get pulled. I tried a few more times, and I probably could've got it working if I'd stuck with it, but all those deprecation warnings in Netlify's documentation gave me pause.

So what I ended up doing was turning off builds in Netlify and using a GitHub Action which builds the Hugo site and publishes it to Netlify using the CLI tool. Here's the GitHub Action in full:
```yaml
name: Publish to Netlify
on:
  push:
    branches: [main]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true
          fetch-depth: 0
          lfs: true
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.119.0'
      - name: Build Site
        run: |
          npm install
          hugo
      - name: Deploy
        env:
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
        run: |
          netlify deploy --dir=public --prod
```
This ended up working quite well: the audio files made it to Netlify and were playable on the site. The builds are also quite fast: around 55 seconds (an earlier version involved building Hugo from source, which took 5 minutes). So for anyone else interested in trying to serve LFS files via Netlify, maybe try turning off the builds and going straight to using a GitHub Action and the CLI tool. That is… if you can swallow the price of LFS storage in GitHub. Oof! A little pricey. It might be that I'll need to use something else for the audio files.