Devlog
Imports And The New Model
Well, I dragged Photo Bucket out today to work on it a bit.
It’s fallen by the wayside a little, and I’ve been wondering if it’s worth continuing work on it. There are so many things that need to be looked at: the public site looks ugly, as does the admin section; working with more than a single image is a pain; backup and restore needs to be added; etc.
I guess every project goes through this “trough of discontent” where the initial excitement has worn off and all you see is a huge laundry list of things to do. Not to mention the wandering eye looking at the alternatives.
But I do have to stop myself from completely junking it, since it’s actually being used to host the Folio Red Gallery. I guess my push to deploy it has entrapped me (well, that was the idea of pushing it out there in the first place).
Anyway, it’s been a while (last update is here) and the move to the new model is progressing. And it’s occurred to me that I haven’t actually talked about the new model (well, maybe I have but I forgot about it).
Previously, the root model of this data structure was the Store. All images belonged to a Store, which was responsible for managing their physical storage and retrieval. These stores can have sub-stores, which are usually used to hold the images optimised for a specific use (serving on the web, showing as thumbnails, etc.). Separate to this was the public site Design, which handled properties of the public site: how it should look, what the title and description are, etc.

There were some serious issues with this approach: images were owned by stores, and two images could belong to two different stores, yet they all belonged to the same site. This made uploading confusing: which store should the image live on? I worked around this by adding the notion of a “primary store”, but this was just ignoring the problem and defeated the whole multiple-store approach.
This was made even worse when one considers which store to use for serving the images. Down the line I was hoping to support virtual domain hosting, where one could set up different image sites on different domains that all pointed to the same instance. So imagine how that would work: one wanted to view images from alpha.example.com and another wanted to view images from beta.example.com. Should the domains live on the store? What about the site designs? Where should they live?
The result was that this model could only really ever support one site per Photo Bucket instance, requiring multiple deployments for different sites if one wanted to use a single host for separate photo sites.
So I re-engineered the model to simplify this dramatically. Now, the root object is the Site:

Here, the Site owns everything. The images are associated with sites, not stores. Stores still exist, but their role is now more in-line with what the sub-stores did. When an image is uploaded, it is stored in every Store of the site, and each Store will be responsible for optimising it for a specific use-case. The logic used to determine which Store to use to fetch the image is still in place but now it can be assumed that any Store associated with a site will have the image.
Now the question of which Store an image should be added to is easy: all of them.
Non-image data, such as Galleries and Designs now live off the Site as well, and if virtual hosting is added, so would the domain that serves that Site.
At least one site needs to be present at all times, and it’s likely most instances will simply have a single Site for now. But this assumption solves the upload and hosting resolution issues listed above. And if multiple-site support is needed, a simple site picker can be added to the admin page (the public pages will rely on the request hostname).
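To make the ownership change concrete, here’s a rough sketch of what the upload path could look like under the new model. The type and method names are illustrative guesses, not the actual Photo Bucket code:

// Illustrative only: a Site owns its images and its stores, and an upload
// fans the image out to every store.
type Store struct{ Name string }

type Image struct{ ID int64 }

type Site struct {
    Stores []*Store
    Images []*Image
}

// Save would write a rendition of the image optimised for this store's
// use-case (original, web, thumbnail, etc.).
func (s *Store) Save(img *Image, data []byte) error {
    return nil
}

// UploadImage attaches the image to the site, then stores a rendition in
// every Store, so any Store can later be assumed to have it.
func (s *Site) UploadImage(data []byte) (*Image, error) {
    img := &Image{ID: int64(len(s.Images) + 1)}
    s.Images = append(s.Images, img)
    for _, st := range s.Stores {
        if err := st.Save(img, data); err != nil {
            return nil, err
        }
    }
    return img, nil
}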
This new model was added a while ago and, as of today, has been merged to main. But I didn’t want to deal with writing the data migration logic for this, so my plan is to simply junk the existing instance and replace it with a brand new one. But in order to do so, I needed to export the photos from the old instance and import them into the new one.

The export logic has been deployed and I made an export this morning. Today, the import logic was finished and merged. Nothing fancy: like the export, it’s only invokable from the command line. But it’ll do the job for now.
The next step is to actually deploy this, which I guess will be the ultimate test. Then I’m hoping to add support for galleries in the public page, so I can separate images on the Folio Red Gallery into projects. There’s still no way to add images in bulk to a gallery. Maybe this will give me an incentive to do that next.
UCL: Procs and Higher-Order Functions
More on UCL yesterday evening. The biggest change is the introduction of user functions, called “procs” (same name as used in TCL):
proc greet {
echo "Hello, world"
}
greet
--> Hello, world
Naturally, like most languages, these can accept arguments, which use the same block variable binding as the foreach loop:
proc greet { |what|
echo "Hello, " $what
}
greet "moon"
--> Hello, moon
The name is also optional; if omitted, the function is anonymous. This allows functions to be set as variable values, and also to be returned as results from other functions.
proc makeGreeter { |greeting|
proc { |what|
echo $greeting ", " $what
}
}
set helloGreater (makeGreeter "Hello")
call $helloGreater "world"
--> Hello, world
set goodbye (makeGreeter "Goodbye cruel")
call $goodbye "world"
--> Goodbye cruel, world
I’ve added procs as a separate object type. At first glance, this may seem a little unnecessary. After all, aren’t blocks already a specific object type?
Well, yes, that’s true, but there are some differences between a proc and a regular block. The big one is that a proc has a defined scope. Blocks adapt to the scope in which they’re invoked, whereas a proc closes over and includes the scope in which it was defined, a lot like closures in other languages.
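For comparison, this is the same behaviour you get from closures in Go, where the inner function keeps a reference to the environment it was defined in:

// makeGreeter returns a function that closes over greeting, much like the
// proc returned by makeGreeter above closes over $greeting.
func makeGreeter(greeting string) func(string) string {
    return func(what string) string {
        return greeting + ", " + what
    }
}

// hello := makeGreeter("Hello")
// hello("world")  // returns "Hello, world"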
It’s not a perfect implementation at this stage, since the set command only sets variables within the immediate scope. This means that modifying closed over variables is currently not supported:
# This currently won't work
proc makeSetter {
set bla "Hello, "
proc appendToBla { |x|
set bla (cat $bla $x)
echo $bla
}
}
set er (makeSetter)
call $er "world"
# should be "Hello, world"
Higher-Order Functions
The next bit of work is finding out how best to invoke these procs in higher-order functions. There are some challenges here that deal with the language grammar.
Invoking a proc by name is fine, but since the grammar required the first token to be a command name, there was no way to invoke a proc stored in a variable. I quickly added a new call command, which takes the proc as the first argument, to work around it, but after a while this got a little unwieldy to use (you can see it in the code sample above).
So I decided to modify the grammar to allow any arbitrary value to be the first token. If it’s a variable that is bound to something “invokable” (i.e. a proc), and there exists at least one other argument, it will be invoked. So the above can be written as follows:
set helloGreater (makeGreeter "Hello")
$helloGreater "world"
--> Hello, world
At least one argument is required, otherwise the value will simply be returned. This is so that the values of variables and literals can be returned as is, but it does mean lambdas will simply be dereferenced:
"just, this"
--> just, this
set foo "bar"
$foo
--> bar
set bam (proc { echo "BAM!" })
$bam
--> (proc)
To get around this, I’ve added the notion of the “empty sub”, which is just the construct (). It evaluates to nil, and since a function ignores any extra arguments not bound to variables, it allows for calling a lambda that takes no arguments:
set bam (proc { echo "BAM!" })
$bam ()
--> BAM!
It does allow for other niceties, such as using it as a falsey value:
if () { echo "True" } else { echo "False" }
--> False
With lambdas now in place, I’m hoping to work on some higher-order functions. I’ve started working on map, which accepts either a list or a stream. It’s a buggy mess at the moment, but some basic constructs currently work:
map ["a" "b" "c"] (proc { |x| toUpper $x })
--> stream ["A" "B" "C"]
(Oh, by the way, when setting a variable to a stream using set, it will now collect the items as a list. Or at least that’s the idea. It’s not working at the moment.)
A more refined approach would be to treat commands as lambdas. The grammar supports this, but the evaluator doesn’t. For example, you cannot write the following:
# won't work
map ["a" "b" "c"] toUpper
This is because toUpper will be treated as a string, and not a reference to an invokable command. It will work for variables, though. You can do this:
set makeUpper (proc { |x| toUpper $x })
map ["a" "b" "c"] $makeUpper
I’m not sure how I can improve this. I don’t really want to add automatic dereferencing of identifiers: they’re very useful as unquoted string arguments. I suppose I could add another construct that would support dereferencing, maybe by enclosing the identifier in parentheses:
# might work?
map ["a" "b" "c"] (toUpper)
Anyway, more on this in the future I’m sure.
UCL: First Embed, and Optional Arguments
Came up with a name: Universal Control Language, or UCL. See, you have TCL; but what if, instead of being used for tools, it could be more universal? Sounds so much more… universal, am I right?
Yeah, okay. It’s not a great name. But it’ll do for now.
Anyway, I’ve started integrating this language with the admin tool I’m using at work. This tool I use is the impetus for this whole endeavour. Up until now, this tool was just a standard CLI command usable from the shell. But it’s not uncommon for me to have to invoke the tool multiple times in quick succession, and each time I invoke it, it needs to connect to backend systems, which can take a few seconds. Hence the reason why I’m converting it into a REPL.
Anyway, I added UCL to the tool, along with a readline library, and wow, did it feel good to use. So much better than the simple quote-aware string splitter I would’ve used otherwise. And just after I added it, I got a flurry of requests from my boss to gather some information, and although the language couldn’t quite handle the task due to missing or unfinished features, I can definitely see the potential there.
I’m trying my best to only use what will eventually be the public API to add the tool-specific bindings. The biggest issue is that these “user bindings” (i.e. the non-builtins) desperately need support for producing and consuming streams. They’re currently producing Go slices, which are being passed around as opaque “proxy objects”, but these can’t be piped into other commands to, say, filter or map. Some other major limitations:
- No commands to actually filter or map. In fact, the whole standard library needs to be built out.
- No ability to get fields from hashes or lists, including proxy objects which can act as lists or hashes.
One last thing that would be nice is the ability to define optional arguments. I actually started work on that last night, seeing that it’s relatively easy to build. I’m opting for a style that looks like the switches you’d find on the command line, with option names starting with dashes:
join "a" "b" -separator "," -reverse
--> b, a
Each option can have zero or more arguments, and boolean options can be represented as just having the switch. This does mean that they’d have to come after the positional arguments, but I think I can live with that. Oh, and no syntactic sugar for single-character options: each option must be separated by whitespace (the grammar actually treats them as identifiers). In fact, I’d like to discourage the use of single-character option names entirely: I prefer the clarity that comes from having the name written out in full (that said, I wouldn’t rule out support for aliases). This also eliminates the need for double dashes to distinguish long option names from a cluster of single-character options, so only the single dash will be used.
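Just to illustrate the splitting semantics (this is a sketch, not the actual binding code), separating positionals from options is straightforward once the arity of each option is known up front:

package ucl // hypothetical package name

import (
    "fmt"
    "strings"
)

// splitArgs separates positional arguments from -name options. arity maps
// each known option to the number of values it takes (0 for boolean switches).
func splitArgs(args []string, arity map[string]int) (pos []string, opts map[string][]string, err error) {
    opts = map[string][]string{}
    i := 0
    for ; i < len(args) && !strings.HasPrefix(args[i], "-"); i++ {
        pos = append(pos, args[i])
    }
    for i < len(args) {
        name := strings.TrimPrefix(args[i], "-")
        n, ok := arity[name]
        if !ok {
            return nil, nil, fmt.Errorf("unknown option -%s", name)
        }
        if i+1+n > len(args) {
            return nil, nil, fmt.Errorf("option -%s expects %d value(s)", name, n)
        }
        opts[name] = args[i+1 : i+1+n]
        i += 1 + n
    }
    return pos, opts, nil
}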
I’ll talk more about how the Go bindings look later, after I’ve used them a little more and they’re a little more refined.
Tool Command Language: Lists, Hashes, and Loops
A bit more on TCL (yes, yes, I’ve gotta change the name) last night. Added both lists and hashes to the language. These can be created using a literal syntax, which looks pretty much how I described it a few days ago:
set list ["a" "b" "c"]
set hash ["a":"1" "b":"2" "c":"3"]
I had a bit of trouble working out the grammar for this. I first went with something that looked a little like the following, where the key of an element is optional but the value is mandatory:
list_or_hash --> "[" "]"            # empty list
              |  "[" ":" "]"        # empty hash
              |  "[" elems "]"      # elements
elems --> ((arg ":")? arg)*         # elements of a list or hash
arg --> <anything that can be a command argument>
But I think this confused the parser a little, where it was greedily consuming the key arg and expecting the : to be present to consume the value.
So I flipped it around, and now the “value” is the optional part:
elems --> (arg (":" arg)?)*
So far this seems to work. I renamed the two fields to “left” and “right”, instead of key and value. Now a list element will use the “left” part, and a hash element will use “left” for the key and “right” for the value.
You can probably guess that the list and hash share the same AST types. This technically means that hybrid lists are supported, at least in the grammar. But I’m making sure that the evaluator throws an error when a hybrid is detected. I prefer to be strict here, as I don’t want to think about how best to support them. Better to just say either a “pure” list or a “pure” hash.
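For what it’s worth, the flipped rule maps pretty naturally onto participle AST types, in the same style as the grammar excerpts I’ve shared before. The type and token names below are guesses, not the actual source:

// Each element has a mandatory "left" and an optional "right": a list
// element only uses Left, while a hash element uses Left for the key and
// Right for the value.
type astElem struct {
    Left  *astArg `parser:"@@"`
    Right *astArg `parser:"( Colon @@ )?"`
}

type astListOrHash struct {
    Elems []*astElem `parser:"LB @@* RB"`
}

// astArg stands in for anything that can appear as a command argument.
type astArg struct {
    Literal string `parser:"@String | @Ident"`
}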
Well, now that we have collections, we need some way to iterate over them. For that, I’ve added a foreach loop, which looks a bit like the following:
# Over lists
foreach ["a" "b" "c"] { |elem|
echo $elem
}
# Over hashes
foreach ["a":"1" "b":"2"] { |key val|
echo $key " = " $val
}
What I like about this is that, much like the if statement, it’s implemented as a macro. It takes a value to iterate over, and a block with bindable variables: one for list elements, or two for hash keys and values. This does mean that, unlike most other languages, the loop variable appears within the block rather than to the left of the element, but having gotten used to this form of block from my Ruby days, I can get used to it.
One fun thing about hashes is that they’re implemented using Go’s map type. This means that the iteration order is random, by design. This does make testing a little difficult (I’ve only got one test at the moment, which features a hash of length one), but I rarely depend on the order of hash keys, so I’m happy to keep it as is.
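For anyone who hasn’t seen it, Go’s map iteration order really is deliberately unspecified, so the same loop can print its keys in a different order on each run:

package main

import "fmt"

func main() {
    h := map[string]string{"a": "1", "b": "2", "c": "3"}
    for k, v := range h {
        fmt.Println(k, "=", v) // order can differ between runs
    }
}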
This loop is only the barest of bones at the moment. It doesn’t support flow control like break or continue, and it also needs to support streams (I’m considering a version with just the block that will accept the stream from a pipe). But I think it’s a reasonably good start.
I also spent some time today integrating this language into the tool I was building it for. I won’t talk about it here, but already it’s showing quite a bit of promise. I think, once the features are fully baked, that this would be a nice command language to keep in my tool-chest. But more on that in a later post.
Backlog Proc: A Better JQL
Backlog Proc is a simple item backlog tracker I built for work. I’d like to link backlog items to Jira tickets, so that I know whether a particular item actually has tasks written for it, and what the status of each of those tasks is. I guess these are meant to be tracked by epics, but Jira’s UI for handling such things is a mess, and I’d like to make notes that are only for my own eyes.
Anyway, I was using JQL to select the Jira tickets. And it worked, but the language is a bit verbose. Plus the tool I’m running the queries in, jira-cli, requires that I add the project ID along with things like the epic or fix version.
So I started working on a simpler language, one that’s just a tokenised list of ticket numbers. For example, instead of writing:
project = "ABC" and key in (ABC-123 ABC-234 ABC-345)
One could just write:
ABC-123 ABC-234 ABC-345
And instead of writing:
(project = "ABC" and epicLink = "ABC-818") OR (project = "DEF" and epicLink = "DEF-222")
One could just write:
epic:(ABC-818, DEF-222)
(note here the use of OR, in that the sets are unionised; I’m not sure how this would scale for the other constructs).
The key characteristic is that the parser would be able to get the project ID from the query, instead of having the query writer (i.e. me) explicitly add it.
I can also do introspection, such as getting the relevant projects, by “unparsing” the query. That’s an advantage of controlling the parser and language. Can’t do that with JQL.
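As a rough sketch of that idea (not the actual parser), pulling the distinct project keys out of a tokenised ticket list only needs the prefix before the dash:

package main

import (
    "fmt"
    "regexp"
    "strings"
)

var ticketRe = regexp.MustCompile(`^([A-Z][A-Z0-9]*)-\d+$`)

// projects returns the distinct project keys mentioned in a ticket list,
// e.g. "ABC-123 ABC-234 DEF-345" -> [ABC DEF].
func projects(query string) []string {
    seen := map[string]bool{}
    var out []string
    for _, tok := range strings.Fields(query) {
        if m := ticketRe.FindStringSubmatch(tok); m != nil && !seen[m[1]] {
            seen[m[1]] = true
            out = append(out, m[1])
        }
    }
    return out
}

func main() {
    fmt.Println(projects("ABC-123 ABC-234 DEF-345")) // [ABC DEF]
}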
But, of course, I can’t cover all possible bases with this language just yet, so I’ll need a way to include arbitrary JQL. So I’ve also added a general “escape” clause to do this:
jql:"project in (bla)"
A Few Other Things
A few other things that are needed for Backlog Proc:
- The landing screen needs to show all the active items regardless of what project they belong to. Bouncing around projects is a bit of a hassle (to be fair, I haven’t spent any time styling the screens beyond what Buffalo generated).
- Add back ticket drafts. I enjoyed the flow of drafting all the tickets in one hit, then clicking “Publish” and seeing them come out as a fully specified Jira epic.
- Add the notion of logs, with a “log type”. Such types can include things like “todo” (i.e. a thing for me to work on), “decision” (a product decision that was agreed on), and “log” (work that was done).
- A way to quickly view the tickets in Jira so I can do batch operations over them.
Tool Command Language: Macros And Blocks
More work on the tool command language (for which I still need to come up with a name: I can’t use the abbreviation TCL), this time spent getting multi-line statement blocks working. As in:
echo "Here"
echo "There"
I got a little wrapped up about how I could configure the parser to recognise new-lines as statement separators. I tried this in the past with a hand-rolled lexer and ended up peppering NL tokens all around the grammar. I was fearing that I’d need to do something like this here.

After a bit of experimentation, I think I’ve come up with a way to recognise new-lines as statement separators without making the grammar too messy. The unit tests verifying this seem to work so far.
// Excerpt of the grammar showing all the 'NL' token matches.
// These match a new-line, plus any whitespace afterwards.
type astStatements struct {
First *astPipeline `parser:"@@"`
Rest []*astPipeline `parser:"( NL+ @@ )*"`
}
type astBlock struct {
Statements []*astStatements `parser:"LC NL? @@ NL? RC"`
}
type astScript struct {
Statements *astStatements `parser:"NL* @@ NL*"`
}
I’m still using a stateful lexer as it may come in handy when it comes to string interpolation. Not sure if I’ll add this, but I’d like the option.
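For reference, a stateful lexer along those lines might be declared something like the following. The rule names and patterns are assumptions on my part, not the real lexer definition:

package script // hypothetical package name

import "github.com/alecthomas/participle/v2/lexer"

// NL matches a new-line plus any whitespace after it, which the grammar
// above then treats as a statement separator.
var scriptLexer = lexer.MustStateful(lexer.Rules{
    "Root": {
        {Name: "NL", Pattern: `[\r\n]\s*`},
        {Name: "Whitespace", Pattern: `[ \t]+`},
        {Name: "LC", Pattern: `\{`},
        {Name: "RC", Pattern: `\}`},
        {Name: "String", Pattern: `"[^"]*"`},
        {Name: "Ident", Pattern: `[a-zA-Z_][a-zA-Z0-9_-]*`},
    },
})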
Another big addition today was macros. These are much like commands, but instead of arguments being evaluated before being passed through to the command, they’re deferred and the command can explicitly request their evaluation whenever. I think Lisp has something similar: this is not that novel.
This was used to implement the if command, which is now working:
set x "true"
if $x {
echo "Is true"
} else {
echo "Is not true"
}
Of course, there are actually no operators yet, so it doesn’t really do much at the moment.
This spurred the need for blocks, which were the third large addition made today. They’re just a group of statements that are wrapped in an object type. They’re “invokable”, in that the statements can be executed and produce a result, but they’re also a value that can be passed around. It jells nicely with the macro approach.
Must say that I like the idea of using macros for things like if over baking it into the language. It can only add to the “embed-ability” of this, which is what I’m looking for.
Finally, I did see something interesting in the tests. I was trying the following test:
echo "Hello"
echo "World"
And I was expecting Hello and World to be returned over two lines. But only World was being returned. Of course! Since echo is actually producing a stream and not printing anything to stdout, it would only return World.
I decided to change this. If I want to use echo to display a message, then the above script should display both Hello and World in some manner. The downside is that I don’t think I’ll be able to support constructs like this, where echo provides a source for a pipeline:
# This can't work anymore
echo "Hello" | toUpper
I mean, I could probably detect whether echo is connected to a pipe (the parser can give that information). But what about other commands that output something? Would they need to be treated similarly?
I think it’s probably best to leave this out for now, and have a new construct for providing literals like this to a pipe. Heck, maybe just having the string itself would be enough:
"hello" | toUpper
Anyway, that’s all for today.
Tool Command Language
I have this idea for a tool command language. Something similar to TCL, in that it’s chiefly designed to be used as an embedded scripting language, particularly in an interactive context.
It’s been an idea I’ve had in my mind for a while, but I’ve now got the perfect use case for it. I’ve got a tool at work I use to do occasional admin tasks. At the moment it’s implemented as a CLI tool, and it works. But the biggest downside is that it needs to form connections to the cluster to call internal service methods, and it always takes a few seconds to do so. I’d like to be able to use it to automate certain actions, but this delay would make doing so a real hassle.
Some other properties that I’m thinking of:
- It should be able to support structured data, similar to how Lisp works.
- It should be able to support something similar to pipes, much like how the shell and Go’s template language work.
Some of the trade-offs that come with it:
- It doesn’t have to be fast. In fact, it can be slow, so long as the work of embedding and operating it is fast.
- It may not be completely featureful. I’ll go over the features I’m thinking of below, but I’ll say upfront that you’re not going to be building any cloud services with this. Administering cloud servers, maybe; but leave the real programs to a real language.
Some Notes On The Design
The basic concept is the statement. A statement consists of a command, and zero or more arguments. If you’ve used a shell before, then you can imagine how this’ll look:
firstarg "hello, world"
--> hello, world
Each statement produces a result. Here, the theoretical firstarg will return the first argument it receives, which will be the string "hello, world".
Statements are separated by new-lines or semicolons. In such a sequence, the return value of the last statement is returned:
firstarg "hello" ; firstarg "world"
--> world
I’m hoping to have a similar approach to how Go works, in that semicolons will be needed if multiple statements share a line, but will otherwise be unnecessary. I’m using the Participle parser library for this, and I’ll need to work out how I can configure the scanner to do this (or even if using the scanner is the right way to go).
The return value of statements can be used as the arguments of other statements by wrapping them in parenthesis:
echo (firstarg "hello") " world"
--> hello world
This is taken directly from TCL, except that TCL uses the square brackets. I’m reserving the square brackets for data structures, but the parenthesis are free. It also gives it a bit of a Lisp feel.
Pipelines
Another way for commands to consume the output of other commands is to build pipelines. This is done using the pipe | character:
echo "hello" | toUpper
--> HELLO
Pipeline sources, that is, the commands on the left-most side, can be either commands that produce a single result or commands that produce a “stream”. Both are objects, and there’s nothing inherently special about a stream, other than some special handling when one is used in a pipeline. Streams are also designed to be consumed once.
For example, one can consider a command which can read a file and produce a stream of the contents:
cat "taleOfTwoCities.txt"
--> It was the best of times,
--> it was the worst of times,
--> …
Not every command is “pipe savvy”. For example, piping the result of a pipeline to echo will discard it:
echo "hello" | toUpper | echo "no me"
--> no me
Of course, this may differ based on how the builtins are implemented.
Variables
Variables are treated much like TCL and shell, in that referencing them is done using the dollar sign:
set name "Josh"
--> "Josh"
echo "My name is " $name
--> "My name is Josh"
Not sure how streams will be handled with variables but I’m wondering if they should be condensed down to a list. I don’t like the idea of assigning a stream to a variable, as streams are only consumed once, and I feel like some confusion will come of it if I were to allow this.
Maybe I can take the Perl approach and use a different variable “context”, where you have a variable with a @ prefix which will reference a stream.
set file (cat "eg.text")
echo @file
# Echo will consume file as a stream
echo $file
# Echo will consume file as a list
The difference is subtle but may be useful. I’ll look out for instances where this would be used.
Attempting to reference an unset variable will result in an error. This may also change.
Other Ideas
That’s pretty much what I have at the moment. I do have some other ideas, which I’ll document below.
Structured Data Support: Think lists and hashes. This language is to be used with structured data, so I think it’s important that the language supports this natively. This is unlike TCL, which principally works with strings, and where the notion of lists feels a bit tacked on.
Both lists and hashes are created using square brackets:
# Lists. Not sure if they'll have commas or not
set l [1 2 3 $four (echo "5")]
# Maps
set m [a:1 "b":2 "see":(echo "3") (echo "dee"):$four]
Blocks: Yep, containers for groups of statements. These will be used for control flow, as well as for the definition of functions:
set x 4
if (eq $x 4) {
echo "X == 4"
} else {
echo "X != 4"
}
foreach [1 2 3] { |x|
echo $x
}
Here the blocks are just another object type, like strings and streams, and both if and foreach are regular commands which will accept a block as an argument. In fact, it would be theoretically possible to write an if statement this way (not sure if I’ll allow setting variables to blocks):
set thenPart {
echo "X == 4"
}
if (eq $x 4) $thenPart
The block execution will exist in a context, which will control whether a new stack frame will be used. Here the if statement will simply use the existing frame, but a block used in a new function can push a new frame, with a new set of variables:
proc myMethod { |x|
echo $x
}
myMethod "Hello"
--> "Hello
Also note the use of |x| at the start of the block. This is used to declare bindable variables, such as function arguments or for-loop variables. This will be defined as part of the grammar, and be a property of the block.
Anyway, that’s the current idea.
Photo Bucket Update: Exporting To Zip
Worked a little more on Photo Bucket this week. Added the ability to export the contents of an instance to a Zip file. This consists of both images and metadata.

I went with lines of JSON for the image metadata. I considered a CSV file briefly, but for optional fields like captions and custom properties, I didn’t like the idea of a lot of empty columns. Better to go with a format that’s a little more flexible, even if it does mean more text per line.
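Writing that format from Go is also pleasantly simple: json.Encoder already emits one object per line, and omitempty keeps the optional fields out entirely. The field names here are just for illustration, not the real export format:

package export // hypothetical package name

import (
    "encoding/json"
    "io"
)

type imageMeta struct {
    Filename string            `json:"filename"`
    Caption  string            `json:"caption,omitempty"`
    Props    map[string]string `json:"props,omitempty"`
}

// writeMeta writes one JSON object per line, so optional fields simply
// disappear from lines that don't need them.
func writeMeta(w io.Writer, images []imageMeta) error {
    enc := json.NewEncoder(w) // Encode appends a newline after each object
    for _, m := range images {
        if err := enc.Encode(m); err != nil {
            return err
        }
    }
    return nil
}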
As for the images, I’m hoping for the export to consist of the “best quality” version. What that means will depend on the instance. The idea is to have three tiers of image quality managed by the store: “original”, “web”, and “thumbnail”. The “original” version is the untouched version uploaded to the store. The “web” version is re-encoded from the “original” and will be slightly compressed, with image metadata tags stripped out. The “thumbnail” version will be a small, highly compressed version suitable for thumbnails. There is to be a decision algorithm in place to get an image given the desired quality level. For example, if something needed the “best quality” version of an image, and the “original” image is not available, the service will default to the “web” version (the idea is that some of these tiers will be optional depending on the needs of the instance).
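The decision algorithm itself should end up being a simple fallback walk from best to worst, something along these lines (a sketch with assumed names, not the real code):

// bestQuality returns the highest-quality rendition available, falling back
// from "original" to "web" to "thumbnail".
func bestQuality(available map[string][]byte) ([]byte, bool) {
    for _, tier := range []string{"original", "web", "thumbnail"} {
        if data, ok := available[tier]; ok {
            return data, true
        }
    }
    return nil, false
}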
This is all partially working at the moment, and I’m hoping to rework it when I replace how stores and images relate to each other (this is what I’m starting on now, and it’s why I built export first, since the change will be backwards incompatible). So for the moment the export simply consists of the “web” version.
I’ve got unit tests working for this as well. I’m trying a new approach for unit testing in this project. Instead of using mocks, the tests actually run against fully instantiated versions of the services. There’s a servicestest package which does all the setup (using temporary directories, etc.) and tear-down of these services. Each individual unit test gets the services from this package and runs tests against a particular one.
This does mean all the services are available and exercised within the tests, making them less like unit tests and more like integration tests. But I think I prefer this approach. The fact that the dependent services are covered gives me greater confidence that they’re working. It also means I can move things around without changing mocks or touching the tests.
That’s not to say that I’m not trying to keep each service its own component as much as I can. I’m still trying to follow best practices of component design: passing dependencies in explicitly when the services are created, for example. But setting them all up as a whole in the tests means I can exercise them while they serve the component being tested. And the dependencies are explicit anyway (i.e. no interfaces), so it makes sense keeping it that way for the tests as well. And it’s just easier anyway. 🤷
Anyway, starting rework on images and stores now. Will talk more about this once it’s done.
Photo Bucket Update: More On Galleries
Spent a bit more time working on Photo Bucket this last week[^1], particularly around galleries. They’re progressing quite well. I’ve made some strides in getting two big parts of the UI working now: adding and removing images in galleries, and re-ordering gallery items via drag and drop.
I’ll talk about re-ordering first. This was when I had to bite the bullet and start coding up some JavaScript. Usually I’d turn to Stimulus for this but I wanted to give HTML web components a try. And so far, they’ve been working quite well.
The gallery page is generated server-side into the following HTML:
<main>
  <pb-draggable-imageset href="/_admin/galleries/1/items" class="image-grid">
    <pb-draggable-image position="0" item-id="7">
      <a href="/_admin/photos/3">
        <img src="/_admin/img/web/3">
      </a>
    </pb-draggable-image>
    <pb-draggable-image position="1" item-id="4">
      <a href="/_admin/photos/4">
        <img src="/_admin/img/web/4">
      </a>
    </pb-draggable-image>
    <pb-draggable-image position="2" item-id="8">
      <a href="/_admin/photos/1">
        <img src="/_admin/img/web/1">
      </a>
    </pb-draggable-image>
  </pb-draggable-imageset>
</main>
Each <pb-draggable-image> node is a direct child of a <pb-draggable-imageset>. The idea is that the user can rearrange any of the <pb-draggable-image> elements within a single <pb-draggable-imageset> amongst themselves. Once the user has moved an image onto another one, the image will signal its new position by firing a custom event. The containing <pb-draggable-imageset> element is listening for this event and will respond by actually repositioning the child element and sending a JSON message to the backend to perform the move in the database.
A lot of this was based on the MDN documentation for drag and drop, and it follows the examples quite closely. I did find a few interesting things though. My first attempt was to put the draggable attribute onto the <pb-draggable-image> element, but I wasn’t able to get any drop events when I did. Moving the draggable attribute onto the <a> element seemed to work. I’m not quite sure why this is; I can’t think of any reason why it wouldn’t work. It may have been something else, such as how I was initialising the HTML components.
Speaking of HTML components, there was a time when the custom component’s connectedCallback method was being called before the child <a> elements were present in the DOM. This was because I had the <script> tag in the HTML head, configured to be evaluated during parsing. Moving it to the end of the body and loading it as a module fixed that issue. I also found that moving elements around using element.before and element.after would actually call connectedCallback and disconnectedCallback each time, meaning that any event listeners registered within connectedCallback would need to be de-registered, otherwise events would be handled multiple times. This book-keeping was slightly annoying, but it worked.
Finally, there was moving the items within the database. I’m not sure how best to handle this, but I have a method that seems to work. What I’m doing is tracking the position of each “gallery item” using a position field. This field would be 1 for the first item, 2 for the next, and so on for each item in the gallery. Fetching items just orders by this field, so as long as the values are distinct, they don’t need to be a sequence incrementing by 1, but I wanted to keep it that way as much as possible.
The actual move involves two update queries. The first one will update the positions of all the items that are to shift left or right by one to “fill the gap”. The way it does this is that when an item is moved from position X to position Y, the value of position between X and Y would be changed by +1 if X > Y, or by −1 if Y > X. This is effectively the same as setting position X to X + 1, and so on, but done using one UPDATE statement. The second query just sets the position of item X to Y.
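Here’s a sketch of those two statements as they might be issued from Go, assuming a hypothetical gallery_items table with gallery_id, item_id and position columns (the real schema is likely a bit different):

package gallery // hypothetical package name

import (
    "context"
    "database/sql"
)

// moveItem moves an item from position `from` to position `to` within a
// gallery: the first UPDATE shifts everything in between by one to fill the
// gap, the second drops the item into its new slot.
func moveItem(ctx context.Context, tx *sql.Tx, galleryID, itemID int64, from, to int) error {
    if from == to {
        return nil
    }
    shift, lo, hi := -1, from+1, to // moving later: items in between shift down
    if from > to {
        shift, lo, hi = 1, to, from-1 // moving earlier: items in between shift up
    }
    if _, err := tx.ExecContext(ctx,
        `UPDATE gallery_items SET position = position + $1
           WHERE gallery_id = $2 AND position BETWEEN $3 AND $4`,
        shift, galleryID, lo, hi); err != nil {
        return err
    }
    _, err := tx.ExecContext(ctx,
        `UPDATE gallery_items SET position = $1
           WHERE gallery_id = $2 AND item_id = $3`,
        to, galleryID, itemID)
    return err
}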
So that’s moving gallery items. I’m not sure how confident I am with this approach, but I’ve been testing this, both manually and by writing unit tests. It’s not quite perfect yet: I’m still finding bugs (I found some while coming up with these screencasts). Hopefully, I’ll be able to get to the bottom of them soon.
The second bit of work was to actually add and remove images in the galleries themselves. This, for the moment, is done using a “gallery picker” which is available in the image details. Clicking “Gallery” while viewing an image will show the list of galleries in the system, with toggles on the left. The galleries an image already belongs to are enabled, and the user can choose the galleries they want the image to be in by switching the toggles on and off. These translate to insert and remove statements behind the scenes.
The toggles are essentially just HTML and CSS, and a bulk of the code was taken from this example, with some tweaks. They look good, but I think I may need to make them slightly smaller for mouse and keyboard.
I do see some downsides with this interaction. First, it reverses the traditional idea of adding images to a gallery: instead of doing that, you’re selecting galleries for an image. I’m not sure if this would be confusing for others (it is modelled on how Google Photos works). Plus, there’s no real way to add images in bulk. It might be that I’ll need to add a way to select images from the “Photos” section and show a dialog like this to add or remove them all from a gallery. I think this would go far in solving both of these issues.
So that’s where things are. Not sure what I’ll work on next, but it may actually be import and export, and the only reason for this is that I screwed up the base model and will need to make some breaking changes to the DB schema. And I want to have a version of export that’s compatible with the original schema that I can deploy to the one and only production instance of Photo Bucket so that I can port the images and captions over to the new schema. More on this in the future, I’m sure.
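[^1]: Apparently I’m more than happy to discuss work in progress, yet when it comes to talking about something I’ve finished, I freeze up. 🤷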
Photo Bucket Galleries and Using the HTML Popover API
Spent a bit more time on Photo Bucket this evening. Tonight I started working on galleries, which’ll work more or less like albums.
At the moment you can create a gallery and add a photo to it. Most of the work so far has been backend so the UI is pretty rough. Eventually you’ll be able to do things like rearrange photos within galleries, but for the moment they’ll just be added to the end.
I did need to add some modals to the UI, such as the New Gallery modal that’s shown for the gallery name. This gave me the opportunity to try out the new popover API. And yeah, it does exactly what it says on the tin: add the popover attribute to an element and it becomes a working popover (at least in the latest version of Vivaldi). Must say it’s impressive that this is now possible with HTML alone.
The initial version of the modals used a <div> for the popover target. And while that worked, there were some small annoyances. First was that the form within the popover didn’t get focus when the popover was displayed. It would be nice to click “New” and start typing out the gallery name. But this is a small thing that’s easily solvable with JavaScript, so it’s no big deal.
The second, slightly larger one, was that dismissing the popover by clicking outside of it will not eat the input. If you were to click a sidebar link while the New Gallery modal is open, you’ll end up on that newly selected page. I’m not a fan of this. Dismissing the popover feels like its own user gesture, and I fear the user accidentally activating things when all they’re trying to do is dismiss the popover (it’s not in place now, but I am planning to dim the background when the Create Gallery modal is visible).
Fortunately, there’s a simple solution to this. It turns out that replacing the <div> element with a <dialog> element solves both problems. It works seamlessly with the new popover attributes, yet showing the dialog will give focus to the form, and it will eat the click when the user dismisses it.
Perfect. Looks like I’ve delayed the need for JavaScript a little longer (it will come eventually; it always will).
Spent some time this evening working on my image hosting tool. It’s slowly coming along, but wow do I suck at UI design (the “Edit Photo” screen needs some rebalancing).


Working on one of the admin sections of the project I was alluding to yesterday. Here’s a screencast of how it’s looking so far.
The styling and layout is not quite final. I’m focusing more on functionality, and getting layout and whitespace looking good always takes time. But compared to how it looked before I started working on it this morning, I think it’s a good start.
In the end it took significantly more time to write about it than to actually do it, but the dot product approach seems to work.
So I guess today’s beginning with a game of “guess the secret password requirements.”

Making some progress in learning Elm for building frontends. Started working on a Connections clone, which I’m calling “Clonections”. This is what I’ve got so far:
It’s been fun using Elm to build this. So far I’m liking the language. Of course, now I’ll have to come up with puzzles for this.
Spent a little more time working on my idea for Dynamo-Browse this week. Managed to get it somewhat feature complete this weekend:
I probably should say a few words on what it actually is. The idea is to make it quick and easy to run pre-canned queries based on the currently selected item and table.
Let’s say you’ve got a table with customer information, and another table with subscription information, and they’re linked with some form of customer ID. If you wanted to see the subscriptions of a customer, up until now, you’d have to copy the customer ID to the paste-board, change the currently viewed table, then run a query to select the subscription with that customer ID. It’s not difficult but it’s extremely tedious.
This change is meant to streamline this. Now, in a script function, you can define a “related item” provider which, if matched against the currently displayed table, will be given the currently selected item, and will return a list of queries that will display items related to the current item (for whatever definition of “related” applies). These will be presented to the user as a list. When the user chooses an entry, the query will run and the results will be displayed.
Here’s an example of the script used for the screencasts:
ext.related_items("business-addresses", func(item) {
return [
{"label": "Customer", "query": `city=$city`, "args": {"city": "Austin"}},
{"label": "Payment", "query": `address^="3"`},
{"label": "Thing", "table": "inventory",
"query": `pk=$pk`, "args": {"pk": "01fca33a-5817-4c27-8a8f-82380584e69c"}},
]
})
ext.related_items("inventory", func(item) {
sk := string(item.attr("sk"))
return [
{"label": "SK: " + sk, "table": "business-addresses",
"query": `pk^=$sk`, "args": {"sk": sk}},
]
})
Notice how the last business-addresses item specifies the “inventory” table, and that the “inventory” provider actually uses an attribute of the item. Here’s a screencast of that working:
This feature has been on the idea board for a while. I was trying to work out how best to handle the pre-canned queries, especially considering that they will likely be different for each item and table. Some ideas I had were adding additional UI elements that the user could use to configure these queries. These would go into the workspace file, a sort of embedded database which is created for each session. This was pretty crappy, especially when you consider that workspaces usually only last until the user exits. It was only a few weeks ago that I considered using the scripting facilities to implement this (which, honestly, shows how much they remain under-utilised).
Anyway, I’ve only just finished development of it. I’d still like to try it for the various related items I tend to use during my day-to-day. We’ll see how well it works out.
Idea For Mainboard Mayhem: A Remote Pickup
Sort of in-between projects at the moment so I’m doing a bit of light stuff on Mainboard Mayhem. I had an idea for a new element: a remote control which, when picked up, will allow the player to toggle walls and tanks using the keyboard, much like the green and blue buttons.
I used ChatGPT to come up with some artwork, and it produced something that was pretty decent.

Only issue was that the image was huge (1024 x 1024) and the tiles in Mainboard Mayhem were only 32 x 32.
I tried shrinking it down in Acorn, using various scaling algorithms. The closest that worked was bringing it down slowly to about 128 x 128 using Nearest Neighbour, then trying to go all the way down to 32 x 32 using Lanczos. That worked, but it required true 32-bit colour to be recognisable, and I wanted to preserve the 16-colour palette used by the original Chip’s Challenge.
So using the original image as a reference, I bit the bullet and drew my own in Acorn. You can see it here in this test level:

It turned out okay. At least it’s recognisable. Anyway, I coded it up and gave it a bit of a try:
Yeah, it works well. When the player has the appropriate colour remote, they can hit either Z or X to toggle the green walls or blue tanks respectively. I really should add some indicators in the status bar to show which button to press.
Not sure what I’ll do after this. The fun part was coming up with the element. But I guess I’ll have to come up with a few puzzles that use it.
More work on Mainboard Mayhem today. Had a bit more success getting the Windows build into a releasable state.
First thing was the app icon. That blog post I talked about yesterday worked: I was able to set the icon of the executable. I did make a slight adjustment though. The post suggested using ImageMagick to produce the ICO file, but I wasn’t happy with how they looked. There were a lot of artefacts on the smaller icon sizes.
So I looked around for an alternative, and found this package by Lea Anthony. He’s the maintainer of Wails, a cross-platform toolkit for making browser-based GUI apps in Go, sort of like Electron but without bundling Chrome. In fact, most of the build for Mainboard Mayhem was put together by reading the Wails source code, so I trust he knows what he’s doing. And sure enough, his package produced a nicely scaled ICO file from a source PNG image. Better yet, it was distributed as a Go package, so there was no need to install it and shell out to run it: I could just integrate it directly into the project’s build tool.
Using rsrc to generate the SYSO file with the icon worked as expected: Go did pick it up and embed it into the executable. I did have some trouble getting the Go compiler to pick up these files at first. In short, they need to be in the same directory as the main package. So if you’re running go build ./cmd/thing, make sure the SYSO files are in ./cmd/thing. Other than that, no real issues here.

One last thing I had to deal with was the console window. Running a Go app in Windows shows the console by default. Perfectly fine for command line tools, but less so for games:

So I had to find a way to hide the console on launch. Since Mainboard Mayhem is using SDL, I’m actually using MinGW to cross-compile the Windows release on an Ubuntu build runner. The documentation for MinGW suggests adding -mwindows as a linker option to hide the console:
# What I was doing before, which didn't work
CGO_ENABLED=1 \
CC="x86_64-w64-mingw32-gcc" \
GOOS="windows" \
CGO_LDFLAGS="-mwindows -Lβ¦" \
go build -o dist/cclm/mainboard.exe ./cmd/cclm'
This didn’t actually work when I tried it: launching the app kept bringing up the console. Turns out what I should’ve done was follow the advice of many Stack Overflow answers, and set -ldflags "-H=windowsgui" on the Go command:
# This works
CGO_ENABLED=1 \
CC="x86_64-w64-mingw32-gcc" \
GOOS="windows" \
CGO_LDFLAGS="-Lβ¦" \
go build -ldflags "-H=windowsgui" -o dist/cclm/mainboard.exe ./cmd/cclm'
This works even without the -mwindows switch. Not completely sure why though. I guess MinGW is not actually being used for linking? Or maybe -m only works with C header files? Don’t know. 🤷 But doesn’t matter: the console no longer shows up on launch.

Finally, there was testing it all, and for this I just bit the bullet and set up a Windows 10 virtual machine in Azure. The rate is something like $0.16 AUD an hour, an easy decision compared to spending time trying to get a VM with Windows 10 running on my machine.
One remaining thing that’s slightly annoying is Windows Defender refusing to launch it after download, doing effectively the same thing as Gatekeeper on MacOS does:

I’m sure there’s a way around it but it’s probably not worth learning about at this stage. It’s easy enough to dismiss: click “More Info” and then click “Run Anyway”:

But other than that, I think the Windows version of Mainboard Mayhem is ready. I’ve updated the website to include the Windows archive if anyone’s interested.
Spent some time today on Mainboard Mayhem, trying to finish the Windows build. I’ve actually had Windows versions of the game being built for a while now. I just haven’t published them, mainly because I haven’t got the app icon set up yet.
But this week, Golang Weekly had a link to a blog post by Mahmud Ridwan on how to do so. It looked pretty straightforward, so I thought I’d give it a try.
And yeah, the instructions themselves were easy enough, and I wish I could say if they worked or not. But in order to test it, I need a Windows machine. And I don’t have one, and I wasn’t about to get one just for this.
So I tried setting up Windows in a VM using UTM. I got this far:

Yeah, this infinite spinner has been staring at me pretty much all day. I got a Windows 10 installer ISO using CrystalFetch, and it seemed to work. But it just doesn’t want to boot up for the first time.
Not actually sure what the problem is. The error message seems to suggest that it’s having trouble connecting to the internet. Might be that? Or maybe the installation didn’t complete properly? Could be anything. 🤷
So no luck getting this tested yet. I’m wondering if it might be easier to forget virtualisation and just launch a Windows instance in the cloud somewhere instead.
Mainboard Mayhem
Project update on Mainboard Mayhem, my Chip’s Challenge fan game. I didn’t get it finished in time for the release deadline, which was last weekend. I blame work for that. We’re going through a bit of a crunch at the moment, and there was a need to work on the weekend.
The good news is that there wasn’t much left to do, and after a few more evenings, I’m pleased to say that it’s done. The game is finished, and ready for release.
So here it is: Mainboard Mayhem: A Chip’s Challenge fan game (and yes, that’s its full title).

At the moment it’s only available for MacOS. It should work on both Intel and Apple Silicon Macs, although I’ve only tested on my M2 Mac Mini running Ventura.
It’s good to finally see this project done. It’s been in development for about the last ten years, and I spent half of that time wondering whether it was worth getting it finished at all. Not committing to anything meant any work I did do on it was pretty aimless, and I always felt like I was wasting my time. Giving myself three weeks to either kill it or release it helped a lot. I’ll start making deadlines for all the other unfinished projects I’m working on.
As to what that next project will be, I’m not sure at this stage. Part of me wants to wait until this crunch time ends, but I suspect I’ll get antsy before then and start work on something else. I’ll keep you posted one way or the other.
But for now, if you happen to give it a try, thank you and I hope you enjoy it.
