Imports And The New Model
Well, I dragged Photo Bucket out today to work on it a bit.
It’s fallen by the wayside a little, and I’ve been wondering if it’s worth continuing work on it. So many things about it that need to be looked at: the public site looks ugly, as does the admin section; working with more than a single image is a pain; backup and restore needs to be added; etc.
I guess every project goes through this “trough of discontent” where the initial excitement has worn off and all you see is a huge laundry list of things to do. Not to mention the wandering eye looking at the alternatives.
But I do have to stop myself from completely junking it, since it’s actually being used to host the Folio Red Gallery. I guess my push to deploy it has entrapped me (well, that was the idea of pushing it out there in the first place).
Anyway, it’s been a while (last update is here) and the move to the new model is progressing. And it’s occurred to me that I haven’t actually talked about the new model (well, maybe I have but I forgot about it).
Previously, the root model of this data structure was the Store. All images belong to a Store, which is responsible for managing the physical storage and retrieval of them. These stores can have sub-stores, which are usually used to hold the images optimised for a specific use (serving on the web, showing as thumbnails, etc.).
Separate to this was the public site Design, which handled properties of the public site: how it should look, what the title and description are, etc.

There were some serious issues with this approach: images were owned by stores, and two images can belong to two different stores, but they all belonged to the same site. This made uploading confusing: which store should the image live on? I worked around this by adding the notion of a “primary store” but this was just ignoring the problem and defeated the whole multiple-store approach.
This is made even worse when one considers which store to use for serving the images. Down the line I was hoping to support virtual domain hosting, where one could set up different image sites on different domains that all pointed to the same instance. So imagine how that would work: one wanted to view images from alpha.example.com and another wanted to view images from beta.example.com. Should the domains live on the store? What about the site designs? Where should they live?
The result was that this model could only really ever support one site per Photo Bucket instance, requiring multiple deployments for different sites if one wanted to use a single host for separate photo sites.
So I re-engineered the model to simplify this dramatically. Now, the root object is the Site:

Here, the Site owns everything. The images are associated with sites, not stores. Stores still exist, but their role is now more in line with what the sub-stores did. When an image is uploaded, it is stored in every Store of the site, and each Store will be responsible for optimising it for a specific use-case. The logic used to determine which Store to use to fetch the image is still in place, but now it can be assumed that any Store associated with a site will have the image.
Now the question of which Store an image should be added to is easy: all of them.
Non-image data, such as Galleries and Designs, now lives off the Site as well, and if virtual hosting is added, so would the domain that serves that Site.
At least one site needs to be present at all times, and it’s likely most instances will simply have a single Site for now. But this assumption solves the upload and hosting resolution issues listed above. And if multiple-site support is needed, a simple site picker can be added to the admin page (the public pages will rely on the request hostname).
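To make this concrete, here’s a rough sketch of the new arrangement as Go types. This is purely illustrative: the names are my shorthand, not the actual Photo Bucket code:

type Site struct {
    ID        int64
    Design    Design    // look, title, description of the public site
    Galleries []Gallery // non-image data hangs off the Site too
    Stores    []Store   // every Store holds a copy of every image
    // Domain string    // would live here if virtual hosting is added
}

type Store struct {
    ID      int64
    SiteID  int64  // stores belong to a site, not the other way around
    Purpose string // the use-case it optimises for, e.g. "web", "thumbnail"
}

type Image struct {
    ID     int64
    SiteID int64 // images are owned by the site; every Store has a copy
}

type Design struct{ Title, Description string }
type Gallery struct{ Name string }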
This was added a while ago and, as of today, has been merged to main. But I didn’t want to deal with writing the data migration logic for this, so my plan is to simply junk the existing instance and replace it with the brand new one. But in order to do so, I needed to export the photos from the old instance, and import them into the new one.
The export logic has been deployed and I made an export this morning. Today, the import logic was finished and merged. Nothing fancy: like the export, it’s only invokable from the command line. But it’ll do the job for now.
The next step is to actually deploy this, which I guess will be the ultimate test. Then, I’m hoping to add support for galleries in the public page so I can separate images on the Folio Red Gallery into projects. There’s still no way to add images in bulk to a gallery. Maybe this will give me an incentive to do that next.
Day 20: ice #mbapr

A small visitor is back, enjoying the sun on my kitchen tiles. 🦎

Watched Takashi Wakasugi’s show Japanese Aussie at the comedy festival last night with a few friends. Was quite good. A style of comedy that I really like.
Beanie weather’s back, baby! 🌡️
Thought I’d celebrate by updating my avatar on Micro.blog.
UCL: Procs and Higher-Order Functions
More on UCL yesterday evening. Biggest change is the introduction of user functions, called “procs” (same name used in TCL):
proc greet {
echo "Hello, world"
}
greet
--> Hello, world
Naturally, like most languages, these can accept arguments, which use the same block variable binding as the foreach loop:
proc greet { |what|
echo "Hello, " $what
}
greet "moon"
--> Hello, moon
The name is also optional; omitting it makes the function anonymous. This allows functions to be set as variable values, and also to be returned as results from other functions.
proc makeGreeter { |greeting|
proc { |what|
echo $greeting ", " $what
}
}
set helloGreater (makeGreeter "Hello")
call $helloGreater "world"
--> Hello, world
set goodbye (makeGreeter "Goodbye cruel")
call $goodbye "world"
--> Goodbye cruel, world
I’ve added procs as a separate object type. At first glance, this may seem a little unnecessary. After all, aren’t blocks already a specific object type?
Well, yes, that’s true, but there are some differences between a proc and a regular block. The big one is that the proc will have a defined scope. Blocks adapt to the scope in which they’re invoked, whereas a proc will close over and include the scope in which it was defined, a lot like closures in other languages.
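In implementation terms, this means a proc value needs to carry a reference to the environment it was defined in. A sketch of the idea in Go (not the actual code):

// An environment is a chain of scopes; a proc closes over the
// environment that was current when it was defined.
type env struct {
    vars   map[string]any
    parent *env // enclosing scope, searched on lookup
}

type proc struct {
    params  []string
    body    func(*env) any // stand-in for evaluating the block's AST
    defined *env           // the closed-over defining scope
}

func (p *proc) call(args []any) any {
    // The new frame's parent is the defining env, not the caller's.
    frame := &env{vars: make(map[string]any), parent: p.defined}
    for i, name := range p.params {
        if i < len(args) {
            frame.vars[name] = args[i] // unbound extra args are ignored
        }
    }
    return p.body(frame)
}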
It’s not a perfect implementation at this stage, since the set command only sets variables within the immediate scope. This means that modifying closed-over variables is currently not supported:
# This currently won't work
proc makeSetter {
set bla "Hello, "
proc appendToBla { |x|
set bla (cat $bla $x)
echo $bla
}
}
set er (makeSetter)
call $er "world"
# should be "Hello, world"
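The likely fix is for set to walk the scope chain and update an existing binding if it finds one, only creating a new variable in the immediate scope as a fallback. Reusing the env type from the sketch above:

// Possible fix (sketch only): update a closed-over binding if one
// exists anywhere up the chain; otherwise define it locally.
func (e *env) set(name string, val any) {
    for scope := e; scope != nil; scope = scope.parent {
        if _, ok := scope.vars[name]; ok {
            scope.vars[name] = val
            return
        }
    }
    e.vars[name] = val // not found: new variable in the immediate scope
}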
Higher-Order Functions
The next bit of work is finding out how best to invoke these procs in higher-order functions. There are some challenges here that deal with the language grammar.
Invoking a proc by name is fine, but since the grammar required the first token to be a command name, there was no way to invoke a proc stored in a variable. I quickly added a new call command — which takes the proc as the first argument — to work around it, but after a while, this got a little unwieldy to use (you can see it in the code sample above).
So I decided to modify the grammar to allow any arbitrary value to be the first token. If it’s a variable that is bound to something “invokable” (i.e. a proc), and there exists at least one other argument, it will be invoked. So the above can be written as follows:
set helloGreater (makeGreeter "Hello")
$helloGreater "world"
--> Hello, world
At least one argument is required; otherwise the value will simply be returned. This is so that the values of variables and literals can be returned as is, but it does mean lambdas will simply be dereferenced:
"just, this"
--> just, this
set foo "bar"
$foo
--> bar
set bam (proc { echo "BAM!" })
$bam
--> (proc)
To get around this, I’ve added the notion of the “empty sub”, which is just the construct (). It evaluates to nil, and since a function ignores any extra arguments not bound to variables, it allows for calling a lambda that takes no arguments:
set bam (proc { echo "BAM!" })
$bam ()
--> BAM!
It does allow for other niceties, such as using it as a falsey value:
if () { echo "True" } else { echo "False" }
--> False
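Under the hood, the dispatch rule for the first token boils down to something like this (again a sketch, reusing the proc type from earlier):

// An invokable head with at least one argument is called; anything
// else is returned as-is. Since () evaluates to nil but still counts
// as an argument, it forces the call.
func evalStatement(head any, args []any) any {
    if p, ok := head.(*proc); ok && len(args) > 0 {
        return p.call(args)
    }
    return head // plain value, or a lambda reference with no args
}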
With lambdas now in place, I’m hoping to work on some higher-order functions. I’ve started working on map, which accepts either a list or a stream. It’s a buggy mess at the moment, but some basic constructs currently work:
map ["a" "b" "c"] (proc { |x| toUpper $x })
--> stream ["A" "B" "C"]
(Oh, by the way, when setting a variable to a stream using set, it will now collect the items as a list. Or at least that’s the idea: it’s not actually working at the moment.)
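For what it’s worth, the shape of map is roughly the following. This sketch treats streams as plain Go channels, which is an assumption on my part rather than the settled design:

// map over a list or a stream, producing a stream of mapped values.
type stream = chan any

func mapCmd(src any, fn *proc) stream {
    out := make(stream)
    go func() {
        defer close(out)
        switch v := src.(type) {
        case []any: // list input
            for _, item := range v {
                out <- fn.call([]any{item})
            }
        case stream: // stream input
            for item := range v {
                out <- fn.call([]any{item})
            }
        }
    }()
    return out
}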
A more refined approach would be to treat commands as lambdas. The grammar supports this, but the evaluator doesn’t. For example, you cannot write the following:
# won't work
map ["a" "b" "c"] toUpper
This is because toUpper will be treated as a string, and not a reference to an invokable command. It will work for variables, though. You can do this:
set makeUpper (proc { |x| toUpper $x })
map ["a" "b" "c"] $makeUpper
I’m not sure how I can improve this. I don’t really want to add automatic dereferencing of identifiers: they’re very useful as unquoted string arguments. I suppose I could add another construct that would support dereferencing, maybe by enclosing the identifier in parentheses:
# might work?
map ["a" "b" "c"] (toUpper)
Anyway, more on this in the future I’m sure.
Day 19: birthday
I got this almost 9 years ago. It’s never been used. #mbapr

Crashing Hemispheric Views #109: HAZCHEM
Okay, maybe not “crashing”, à la Hey Dingus. But some thoughts did come to me while listening to Hemispheric Views #109: HAZCHEM that I thought I’d share with others.
Haircuts
I’m sorry but I cannot disagree more. I don’t really want to talk while I’m getting a haircut. I mean I will if they’re striking up a conversation with me, but I’m generally not there to make new friends; just to get my hair cut quickly and go about my day. I feel this way about taxis too.
I’m Rooted
I haven’t really used “rooted” or “knackered” that much. My go-to phrase is “buggered,” as in “oh man, I’m buggered!” or simply just “tired”. I sometimes use “exhausted” when I’m really tired, but there are just too many syllables in that word for daily use.
Collecting
I’m the same regarding stickers. I’ve received (although not really sought after) stickers from various podcasts and I didn’t know what to do with them. I’ve started keeping them in this journal I never used, and apart from my awful handwriting documenting where they’re from and when I added them, so far it’s been great.

I probably do need to get some HV stickers, though.
A Trash Ad for Zachary
Should check to see if Johnny Decimal got any conversions from that ad in #106. 😀
Also, here’s a free tag line for your rubbish bags: we put the trash in the bags, not the pods.
🍅⏲️ 00:39:05
I’m going to make the case for Vivaldi. Did you know there’s actually a Pomodoro timer built into Vivaldi? Click the clock on the bottom-right of the status bar to bring it up.

Never used it myself, since I don’t use a Pomodoro timer, but can Firefox do that?!
Once again, a really great listen, as always.
Pro tip: if you’re working with mocks, make sure they’re actually asserting the calls made to them. Otherwise, your buggy code will pass your buggy unit tests with flying colours, and you end up confused when it all falls down during QA verification.
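To illustrate, here’s a quick Go sketch using testify’s mock package (MockStore and uploadTo are invented for the example):

import (
    "testing"

    "github.com/stretchr/testify/mock"
)

type MockStore struct{ mock.Mock }

func (m *MockStore) SaveImage(name string) error {
    return m.Called(name).Error(0)
}

// uploadTo is the (made-up) code under test.
func uploadTo(s interface{ SaveImage(string) error }, name string) error {
    return s.SaveImage(name)
}

func TestUpload(t *testing.T) {
    store := new(MockStore)
    store.On("SaveImage", "cat.jpg").Return(nil)

    _ = uploadTo(store, "cat.jpg")

    // This is the line that matters: without it, a broken uploadTo
    // that never calls SaveImage still passes with flying colours.
    store.AssertExpectations(t)
}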
Oh, um… this is all hypothetical, of course. 🤓
Day 18: mood
I guess one can describe this mood as flat. #mbapr

👨‍💻 On the Coding Bits blog: Changing gRPC Schemas
UCL: First Embed, and Optional Arguments
Came up with a name: Universal Control Language: UCL. See, you have TCL; but what if, instead of being used for tools, it could be more universal? Sounds so much more… universal, am I right? 😀
Yeah, okay. It’s not a great name. But it’ll do for now.
Anyway, I’ve started integrating this language with the admin tool I’m using at work. This tool I use is the impetus for this whole endeavour. Up until now, this tool was just a standard CLI command usable from the shell. But it’s not uncommon for me to have to invoke the tool multiple times in quick succession, and each time I invoke it, it needs to connect to backend systems, which can take a few seconds. Hence the reason why I’m converting it into a REPL.
Anyway, I added UCL to the tool, along with a readline library, and wow, did it feel good to use. So much better than the simple quote-aware string splitter I would’ve used otherwise. And just after I added it, I got a flurry of requests from my boss to gather some information, and although the language couldn’t quite handle the task due to missing or unfinished features, I can definitely see the potential there.
I’m trying my best to only use what will eventually be the public API to add the tool-specific bindings. The biggest issue is that these “user bindings” (i.e. the non-builtins) desperately need support for producing and consuming streams. They’re currently producing Go slices, which are being passed around as opaque “proxy objects”, but these can’t be piped into other commands to, say, filter or map. Some other major limitations:
- No commands to actually filter or map. In fact, the whole standard library needs to be built out.
- No ability to get fields from hashes or lists, including proxy objects which can act as lists or hashes.
One last thing that would be nice is the ability to define optional arguments. I actually started work on that last night, seeing that it’s relatively easy to build. I’m opting for a style that looks like the switches you’d find on the command line, with option names starting with dashes:
join "a" "b" -separator "," -reverse
--> b, a
Each option can have zero or more arguments, and boolean options can be represented as just having the switch. This does mean that they’d have to come after the positional arguments, but I think I can live with that. Oh, and no syntactic sugar for single-character options: each option must be separated by whitespace (the grammar actually treats them as identifiers). In fact, I’d like to discourage the use of single-character option names for these: I prefer the clarity that comes from having the name written out in full (that said, I wouldn’t rule out support for aliases). This also eliminates the need for double dashes to distinguish long option names from a cluster of single-character options, so only a single dash will be used.
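Here’s roughly how I picture the argument scan working, as a Go sketch (not the actual grammar rules): positionals run until the first dash, and each option collects the values that follow it:

import "strings"

// Split raw arguments into positionals and options. Options follow
// the positionals; a switch with no values acts as a boolean flag.
func scanArgs(args []string) (pos []string, opts map[string][]string) {
    opts = make(map[string][]string)
    i := 0
    for ; i < len(args) && !strings.HasPrefix(args[i], "-"); i++ {
        pos = append(pos, args[i])
    }
    for i < len(args) {
        name := strings.TrimPrefix(args[i], "-")
        i++
        var vals []string
        for ; i < len(args) && !strings.HasPrefix(args[i], "-"); i++ {
            vals = append(vals, args[i])
        }
        opts[name] = vals // empty means the switch was simply present
    }
    return pos, opts
}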
I’ll talk more about how the Go bindings look later, after I’ve used them a little more and they’re a little more refined.
Watched the MKBHD review of the Humane pin (the Streisand effect is real) and I must say, even if it worked perfectly, I’m not sure it’s for me. Something about having to wear it. Kind of conspicuous, no? Not to mention that it’s primarily a voice UI. I don’t even use the voice assistant on my phone.
I mean, credit to them for trying. But… nah, I’ll pass.
Tool Command Language: Lists, Hashes, and Loops
A bit more on TCL (yes, yes, I’ve gotta change the name) last night. Added both lists and hashes to the language. These can be created using a literal syntax, which looks pretty much how I described it a few days ago:
set list ["a" "b" "c"]
set hash ["a":"1" "b":"2" "c":"3"]
I had a bit of trouble working out the grammar for this. I first went with something that looked a little like the following, where the key of an element is optional but the value is mandatory:
list_or_hash --> "[" "]"        # empty list
             | "[" ":" "]"      # empty hash
             | "[" elems "]"    # elements
elems --> ((arg ":")? arg)*     # elements of a list or hash
arg --> <anything that can be a command argument>
But I think this confused the parser a little, where it was greedily consuming the key arg and expecting the : to be present to consume the value.
So I flipped it around, and now the “value” is the optional part:
elems --> (arg (":" arg)?)*
So far this seems to work. I renamed the two fields “left” and “right”, instead of key and value. Now a list element will use the “left” part, and a hash element will use “left” for the key and “right” for the value.
You can probably guess that the list and hash share the same AST types. This technically means that hybrid lists are supported, at least in the grammar. But I’m making sure that the evaluator throws an error when a hybrid is detected. I prefer to be strict here, as I don’t want to think about how best to support them. Better to just say either a “pure” list, or a “pure” hash.
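The shared element type and the hybrid check come out looking something like this sketch (the real AST has more to it, but this is the gist):

import "errors"

// Lists and hashes share one element type: a list element only uses
// left, while a hash element uses left (key) and right (value).
type element struct {
    left   any
    right  any
    isPair bool // true when the ":" was present in the source
}

// evalCollection builds a list or a hash, rejecting hybrids.
func evalCollection(elems []element) (any, error) {
    if len(elems) == 0 {
        return []any{}, nil // "[]"; the parser handles "[:]" separately
    }
    if elems[0].isPair {
        hash := make(map[any]any, len(elems))
        for _, el := range elems {
            if !el.isPair {
                return nil, errors.New("cannot mix list and hash elements")
            }
            hash[el.left] = el.right
        }
        return hash, nil
    }
    list := make([]any, 0, len(elems))
    for _, el := range elems {
        if el.isPair {
            return nil, errors.New("cannot mix list and hash elements")
        }
        list = append(list, el.left)
    }
    return list, nil
}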
Well, now that we have collections, we need some way to iterate over them. For that, I’ve added a foreach loop, which looks a bit like the following:
# Over lists
foreach ["a" "b" "c"] { |elem|
echo $elem
}
# Over hashes
foreach ["a":"1" "b":"2"] { |key val|
echo $key " = " $val
}
What I like about this is that, much like the if statement, it’s implemented as a macro. It takes a value to iterate over, and a block with bindable variables: one for list elements, or two for hash keys and values. This does mean that, unlike most other languages, the loop variable appears within the block, rather than to the left of the element, but after getting used to this form of block from my Ruby days, I can get used to it.
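Mechanically, the macro just switches on the type of the value and binds the block variables accordingly. A sketch:

// One bound variable per list element, two per hash entry.
func foreach(val any, block func(args ...any)) {
    switch v := val.(type) {
    case []any:
        for _, elem := range v {
            block(elem) // { |elem| ... }
        }
    case map[any]any:
        for key, value := range v {
            block(key, value) // { |key val| ... }
        }
    }
}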
One fun thing about hashes is that they’re implemented using Go’s map type. This means that the iteration order is random, by design. This does make testing a little difficult (I’ve only got one test at the moment, which features a hash of length one) but I rarely depend on the order of hash keys, so I’m happy to keep it as is.
This loop is only the barest of bones at the moment. It doesn’t support flow control like break or continue, and it also needs to support streams (I’m considering a version with just the block that will accept the stream from a pipe). But I think it’s a reasonably good start.
I also spent some time today integrating this language into the tool I was building it for. I won’t talk about it here, but already it’s showing quite a bit of promise. I think, once the features are fully baked, that this would be a nice command language to keep in my tool-chest. But more on that in a later post.
Day 17: transcendence #mbapr

Pocket Cast should start tracking the number of times I tap the 30 seconds back button. It’s usually a good indication that I’m really engrossed in a particular show. That button sure got a lot of action this morning with today’s Stratechery and Dithering.
On Micro.blog, Scribbles, And Multi-homing
I’ve been asked why I’m using Scribbles given that I’m here on Micro.blog. Honestly, I wish I could say I’ve got a great answer. I like both services very much, and I have no plans of abandoning Micro.blog for Scribbles, or vice versa. But I am planning to use both for writing stuff online, at least for now, and I suppose the best answer I can give is a combination of various emotions and hang-ups I have about what I want to write about, and where it should go.
I am planning to continue to use Micro.blog pretty much how others would use Mastodon: short-form posts, with the occasional photo, mainly about what I’m doing or seeing during my day. I’ll continue to write the occasional long-form posts, but it won’t be the majority of what I write here.
What I post on Scribbles is more likely to be long-form, which brings me to my first reason: I think I prefer Scribbles’ editor for long-form posts. Micro.blog works well for micro-blogging, but I find any attempt to write something longer a little difficult. I can’t really explain it. It just feels like I’m spending more effort trying to get the words out on the screen, like they’re resisting in some way.
It’s easier for me to do this using Scribbles’ editor. I don’t know why. Might be a combination of how the compose screen is styled and laid out, plus the use of a WYSIWYG editor¹. But whatever it is, it all combines into an experience where the words flow a little easier for me. That’s probably the only way I can describe it. There’s nothing really empirical about it all, but maybe that’s the point. It involves the emotional side of writing: the “look and feel”.
Second, I like that I can keep separate topics separate. I thought I could be someone who can write about any topic in one place, but when I’m browsing this site myself, I get a bit put out by all the technical topics mixed in with my day-to-day entries. They feel like they don’t belong here. Same with project notes, especially given that they tend to be more long-form anyway.
This I just attribute to one of my many hang-ups. I never have this issue with other sites I visit. It may be an emotional response from seeing what I wrote about: reading about my day-to-day induces a different feeling (casual, reflective) than posts about code (thinking about work) or projects (being a little more critical, maybe even a little bored).
Being able to create multiple blogs in Scribbles, thanks to signing up for the lifetime plan, gives me the opportunity to create separate blogs for separate topics: one for current projects, one for past projects, and one for coding topics. Each of them, along with Micro.blog, can have their own purpose and writing style: more of a public journal for the project sites, more informational or critical on the coding topics, and more day-to-day Mastodon-like posts on Micro.blog (I also have a check-in blog, which is purely a this-is-where-I’ve-been record).
Finally, I think it’s a bit of that “ooh, shiny” aspect of trying something new. I definitely got that using Scribbles. I don’t think there’s much I can do about that (nor do I want to 😀).
And that’s probably the best explanation I can give. Arguably it’s easier just writing in one place, and to that I say, “yeah, it absolutely is.” Nothing about any of this is logical at all. I guess I’m trying to optimise for posting something without all the various hang-ups I have about posting it at all, and I think having these separate spaces to do so helps.
Plus, knowing me, it’s all likely to change pretty soon, and I’ll be back to posting everything here again.
¹ Younger me would be shocked to learn that I’d favour a WYSIWYG editor over a text editor with Markdown support.
Day 16: flâneur
One extra flâneur not in frame. #mbapr

Went to Phone Phix and got my phone phixed. 🙃
Got the USB-C socket cleaned. Cost a bit but the USB-C plugs are staying in place now, so I call that a win.
Love the new categories feature in Scribbles. Went back and added them to the posts on Coding Bits and Workpad. They look and feel great.
