Long Form Posts
- And the seq builtin. ↩︎
- If the days and nights are hot, I don’t bother with the timer and just leave it running all night long. ↩︎
- Yes, this is probably just a rationalisation for trying to minimise sunk-costs, but I’ve got nothing else to work on, so why not this? ↩︎
You Probably Do Want To Know What You Had for Lunch That Other Day
There’s no getting around the fact that some posts you make are banal. You obviously thought the lunch you were posting about at the time was worthy of sharing: after all, you took the effort to share it. Then a week goes by and you wonder why you posted that. “Nobody cares about this,” you say to yourself. “This isn’t giving value to anyone.”
But I’d argue, as Doc did in Back to the Future, that you’re just not thinking fourth-dimensionally enough. Sure, it may seem pretty banal to you now, but what about 5 years in the future? How about 10? You could be perusing your old posts when you come across the one about your lunch, and be reminded of the food, the atmosphere, the weather, the joys of youth. It could be quite the bittersweet feeling.
Or you could feel nothing. And that’s fine too. The point is that you don’t know how banal a particular post is the moment you make it.
Worse still, you don’t know how banal anything will be 5 years from now (with “now” being the moment you’re reading this sentence). The banality of anything is dynamic: it changes as a function of time. It could be completely irrelevant next week, then the best thing that’s happened to you a week later.
This is why I don’t understand the whole “post essays on my blog and the smaller things on Twitter/Bluesky/Mastodon/whatever” dichotomy some writers have out there. Is what you write on those other sites less worthy than what you write on the site you own? Best be absolutely sure about that when you post it then, as you may come to regret making a point about posting banal tweets 17 years ago, only for that pronouncement about banal tweets to be lost when you decided to move away from that micro-blogging site.
But whatever, you do you. I know for myself that I’d rather keep those supposedly banal thoughts on this site. And yeah, that’ll mean a lot of pretty pointless, uninteresting things get published here: welcome to life.
But with the pressure of time, they could turn into nice, shiny diamonds of days past. Or boring, dirty lumps of coal. Who knows? Only time will answer that.
Gallery: Morning In Sherbrooke
A visit to Sherbrooke in the Dandenong Ranges on Easter Monday included a walk along the falls track, a sighting of a Superb Lyrebird, and a brief exploration of Alfred Nicholas Memorial Garden.
Airing Of Draft Posts
A collection of draft ideas and reflections, amassed over the last year, highlighting a mix of topics ranging from technology insights to personal musings.
On Go And Using Comments For Annotations
Some thoughts on whether Go should have a dedicated syntax for the annotations that comments are currently being used for.
Don't Be Afraid Of Types
Types in coding projects are good. Don’t be afraid to create them when you need to.
Adventures In Godot: Respawning A Falling Platform
My taste of going through a Godot tutorial last week has got me wanting more, so I’ve set about building a game with it. Thanks to my limited art skills, I’m using the same asset pack that was used in the video, although I am planning to add a bit of my own here and there.
But it’s the new mechanics I enjoy working on, such as adding falling platforms. If you’ve played any platformer, you know what these look like: platforms suspended in air until the player lands on them, at which point gravity takes hold and they start falling, usually into a pit, killing the player in the process:
The platforms are built as a CharacterBody2D with an Area2D that detects when a player enters the collision shape. When they do, a script runs which has the platform “slipping” for a second, before the gravity is turned on and the platform falls under its own weight. The whole thing is bundled as a reusable scene which I can drag into the level I’m working on.

I got the basics of this working reasonably quickly, yet today I had a devil of a time going beyond that. The issue was that I wanted the platform to respawn after it fell off the map, so that the player wouldn’t get soft-locked at an area where the platform was needed to escape. After a false start trying to reposition the platform at its starting point after it fell, I figured it was just easier to respawn the platform when the old one was removed. To do this I had to solve two problems:
- How do I get the platform’s starting point?
- How do I actually respawn the platform?
Getting The Starting Point
A Node2D object has the property global_position, which returns the position of the object in world coordinates. However, it seems this position is not correct when the _init function of the attached script is called. I suspect this is because _init is called before the platform is added to the scene tree, which is when the final world coordinates become known.
Fortunately, there exists the _ready notification, which is invoked when the node is added to the scene tree. After some experimentation, I managed to confirm that global_position was correct by that point. So tracking the starting point is as simple as storing that value in a variable:
var init_global_position = null

func _ready():
    init_global_position = global_position
Another option is to use the _enter_tree() notification. From the documentation, it looks like either would work here, with the only difference being the order in which the notification is invoked on parents and children (_enter_tree is called on the parent first, whereas _ready is called on the children first).
Respawning The Platform
The next trick was finding out how to respawn the platform. The usual technique for doing so, based on the results of my web searching, is to load the platform scene, instantiate a new instance of it, and add it to the scene tree.
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    add_child(dup)
Many of the examples I’ve seen online added the new scene node as a child of the current node. This wouldn’t work for me as I wanted to free the current node at the same time, and doing so would free the newly instantiated child. The fix for this was easy enough: I just added the new node as a child of the current scene.
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    get_tree().current_scene.add_child(dup)
    queue_free()
I still had to reposition the new node to the spawn point. Fortunately, the global_position property is also settable, so it was simply a matter of setting that property before adding the node to the tree (this is so that it’s correct when the newly instantiated node receives the _ready notification).
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")

func respawn():
    var dup = FallingPlatform.instantiate()
    dup.global_position = init_global_position
    get_tree().current_scene.add_child(dup)
    queue_free()
This spawned the platform at the desired position, but there was a huge problem: when the player jumped on the newly spawned platform, it wouldn’t fall. The Area2D connection was not invoking the script to turn on the gravity:
It took me a while to figure out what was going on, but I came to the conclusion that the packed scene was loading properly, but without the script attached. It turns out a Script is a resource separate from the scene, and can be loaded and attached to an object via the set_script method:
@onready var FallingPlatform = preload("res://scenes/falling_platform.tscn")
@onready var FallingPlatformScript = preload("res://scripts/falling_platform.gd")

func respawn():
    var dup = FallingPlatform.instantiate()
    dup.set_script(FallingPlatformScript)
    dup.global_position = init_global_position
    get_tree().current_scene.add_child(dup)
    queue_free()
Finally, after figuring all this out, I was able to spawn a new falling platform, have it positioned at the starting position of the old platform, and react to the player standing on it.
The time it took to work this out is actually a little surprising. I was expecting others to have run into the same problem I was facing, where they were trying to instantiate a scene only to have the scripts not do anything. Yet it took me 45 minutes of web searching through Stack Overflow and forum posts that didn’t solve my problem. It was only after a bit of experimentation and print-debugging on my own that I realised I actually had to attach the script after instantiating the node.
To be fair, I will attribute some of this to not understanding the problem at first: I actually thought the Area2D wasn’t being instantiated at all. Yet not one of the Stack Overflow answers or forum posts floated the possibility that the script wasn’t being loaded alongside the scene. This does suggest to me that my approach may not be optimal. There does exist a “Local to Scene” switch in the script inspector that could help, although turning it on doesn’t seem to do much. But surely there must be some way to instantiate the script alongside the scene.
Anyway, that’s for later. For now, I’m happy that I’ve got something that works.
Running PeerTube In Coolify
A guide for setting up a basic PeerTube instance on Coolify using a docker-compose file.
Attending the DDD Melbourne 2025 Conference
Yesterday, I attended the DDD Melbourne 2025 conference. This was in service of my yearly goal to get out more, to be around people more often than I have been. So the whole reason I attended was to meet new people. That didn’t happen: I said hi to a few people I once worked with, and spoke to a few sponsors, but that was it. So although I marked it off my goal list, it wasn’t a huge success.

But a dev conference is still a dev conference and I’d thought I’d write a few notes of the sessions I attended, just to record what I did get out of it.
Keynote
Emerging trends in robots, by Sue Keay.
The keynote was an interesting session about the state of robotics in Australia. I didn’t get a lot of specifics, but I did get a name for the robot I saw once in a Tokyo department store that, let’s just say, left an impression on me.

First Session
Are you overcomplicating software development? I certainly have been…, by Ian Newmarch.
This speaker was absolutely preaching my gospel around complexity in software development. But it was good to have someone go deeper into why developers are prone to take an inherently complex practice and add additional complexity (so-called “accidental complexity”). This is mainly due to human factors: ego, fear, imposter syndrome, and to some extent, keeping the job interesting.

Very relatable. The only real way to mitigate this is going back to principles such as avoiding premature abstraction, YAGNI, and KISS. The thing about principles is that it’s always a little hard to know when you need them. So remember to always keep a focus on the problem - what you’re trying to solve - and working with people can help here.
Second Session
How to Design Your Next Career Move, by Emily Conaghan.
This speaker went through a process of how one could reflect on what they want out of their career, and how to come up with what they need to do to bridge the gap to get to it. The process is rather methodical, which is not a bad thing, and there’s a whole workbook component to this. This might be something that’s personally worth doing though: it does feel like I’m drifting aimlessly a little.
Third Session
The Lost Art of good README documentation, by Swapnil Ogale.

I found this one to be quite good. It touched on the properties of what makes a good README for a project, and why you’d want one (a developer’s or user’s trust in a project directly relates to its supporting documentation). In short, a good README should have:
- A project overview: what the project is, why it exists, and why one should use it
- How-to instructions: how to install it, how to get started using it, etc.
- Guidance on engaging with and contributing to the project: how to get help, how to contribute changes, etc.
- Credits, license, and contact details
But even though these could be described as what a “good” README looks like, a takeaway is there’s no such thing as a bad README, apart from not having any README at all.
One other thing that I didn’t know was that READMEs are traditionally capitalised so that they appear near the top in an alphanumerical listing of files. That was interesting to know.
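That sort-order detail is easy to verify: in a byte-wise (ASCII or C-locale) sort, which is what `ls` traditionally does, uppercase letters come before lowercase ones. A quick check in Python, with made-up filenames for illustration:

```python
# 'M' (77) and 'R' (82) sort before 'a' (97) and 'r' (114) in ASCII,
# so an all-caps README floats towards the top of a byte-wise listing.
files = ["app.py", "README.md", "Makefile", "readme-old.txt"]
print(sorted(files))
# prints: ['Makefile', 'README.md', 'app.py', 'readme-old.txt']
```

Locale-aware sorts (as some modern `ls` implementations use by default) may interleave cases, which is why the effect is most visible in the C locale.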
Lunch
Yeah, this was the hardest part of the day. But it’s amazing how much time you can kill just by waiting in lines.
Fourth Session
Being consistently wrong, by Tom Ridge.
I was expecting this to be a bit more general, like techniques for keeping an open mind or learning from one’s mistakes. But it was largely focused on task estimation, which is a weakness of mine; seeing that this was after lunch and I was getting a bit tired around this time, though, I only half listened. The takeaways I did get are: the importance of measuring how long tasks take to travel across the board, how long they’re in progress, in review, etc.; using those measurements to determine capacity with formulas derived from queuing theory; keeping the amount of work in progress low; and keeping task duration variance low by slicing.

These are all valid points, although I’m not sure how applicable they are to how we work at my job. But it may be a worthy talk to revisit if that changes.
Fifth Session
On The Shoulders Of Giants — A Look At Modern Web Development, by Julian Burr.
Despite being a backend developer by day, I am still curious about the state of web development. This talk was a good one, where the speaker went through the various “milestones” of major technical developments in web technology — such as when JavaScript and jQuery were introduced, when AJAX became a thing, and when CSS was developed (I didn’t know CSS was devised at CERN).

Going back in time was fun (R.I.P. Java applets & Flash) but it seems the near-term future is all React, all the time. And not just React in the traditional sense, but React used for zero-hydration server-side components (Qwik) and out-of-order streaming (React Suspense). Not sure that appeals to me. One thing that does is that Vite is becoming the build tool du jour for frontend stuff. This I may look at, since it seems simple enough to get started with.
Some other fun things: JavaScript style sheets was a thing, and Houdini still is a thing.
Sixth Session
Dungeons and… Developers? Building skills in tech teams with table top role playing games, by Kirsty McDonald.
This was the talk that got me in the door, in some respects. I’ve heard of role-playing games being used for scenario planning, so the idea of using them for team development, and for practising responses to things like production incidents, appealed to me. The game consisted of the normal things you’d expect from a role-playing game, like character cards, a game master, and scenario events with a random-number-generator component to them.

I’ve never played D&D before, so I was curious as to how these games actually ran. Fortunately, I was not disappointed, as the last part of the talk was walking through an example game with a couple of volunteers from the audience. Definitely a talk worth staying back for.
Locknote
Coding Like it’s 2005, by Aaron Powell
This was a fun look back at the state of the art of web development in 2005, before jQuery, AJAX, and decent editors, when annoying workarounds in JavaScript and CSS were necessary to get anything working in Internet Explorer. This was just before my time as a practicing dev, and apparently trying to replicate rich-client applications in the web browser was all the rage, which was something I missed. It was mainly focused on Microsoft technology, something I don’t have a lot of personal experience in, but I did get flashbacks of using Visual Studio 2003 and version 1 of Firefox.


Lots of fun going down memory lane (R.I.P clearfix & YUI; rot in hell, IE6 😛).
Overall
I was contemplating not showing up to this, and even while I was there, I was considering leaving at lunchtime, but overall I’m glad that I stayed the whole day. It got me out of the house, and I learnt a few interesting things. And let me be clear: DDD Melbourne and the volunteers did an excellent job! It was a wonderfully run conference with a lot of interesting speakers. I hope to see some of the talks on YouTube later.
But I don’t think I’ll be going to a conference by myself again. I mean, it’s one thing to go if work asks you to: I can handle myself in that situation. But under my own volition? Hmm, it would be much easier going with someone else, just so that I have someone to talk to. It’s clear that I need to do something about my fear of approaching people I don’t know and starting a conversation. Ah well, it was worth a try.
An Incomplete List of DRM-Free Media Stores
A collection of links to online stores that sell DRM-Free media.
Apple AI in Mail and What Could Be
Apple AI features in Mail currently do not help me. But they could, if Apple invited us to be more involved in what constitutes an important email.
First Impressions of the Cursor Editor
Trying out the Cursor editor to build a tool to move Micro.blog posts.
UCL: Some Updates
Made a few minor changes to UCL. Well, actually, I made one large change: I’ve renamed the foreach builtin to for.
I was originally planning to have a for loop that worked much like those in other languages: you have a variable, a start value, and an end value, and you iterate over the loop until you reach the end. I don’t know how this would’ve looked, but I imagined something like this:
for x 0 10 {
    echo $x
}
# numbers 0..9 would be printed
But this became redundant after adding the seq builtin:
foreach (seq 10) { |x|
    echo $x
}
This was in addition to all the other useful things you could do with the foreach loop1, such as looping over lists and hashes, and consuming values from iterators. It’s already a pretty versatile loop. So I elected to go the Python way and just made the for loop the loop to use for iterating over collections.
This left an opening for a loop that dealt with guards, so I also added the while loop. Again, much like most languages, this loop would iterate over a block until the guard becomes false:
set x 0
while (lt $x 5) {
    echo $x
    set x (add $x 1)
}
echo "done"
Unlike the for loop, this is unusable in a pipeline (well, unless it’s the first component). I was considering having the loop return the result of the guard when it terminates, but I realised that would be either false, nil, or anything else that was “falsy.” So I just have the loop return nil. That said, you can break from this loop, and if the break call had a value, that would be used as the result of the loop:
set x 0
while (lt $x 5) {
    set x (add $x 1)
    if (ge $x 3) {
        break "Ahh"
    }
} | echo " was the break"
The guard is optional, and if left out, the while loop will iterate forever.
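To make the break-with-value semantics concrete, here's a rough sketch of them in Python (the Break exception and while_loop helper are illustrative names of my own, not anything from UCL's actual implementation):

```python
class Break(Exception):
    """Signals a break, optionally carrying the loop's result value."""
    def __init__(self, value=None):
        self.value = value

def while_loop(guard, body):
    # Iterate while the guard is truthy. The loop itself evaluates to
    # nil (None) unless the body breaks with a value.
    while guard():
        try:
            body()
        except Break as b:
            return b.value
    return None

# Mirrors the UCL example: break with "Ahh" once x reaches 3.
state = {"x": 0}
def body():
    state["x"] += 1
    if state["x"] >= 3:
        raise Break("Ahh")

print(while_loop(lambda: state["x"] < 5, body), "was the break")
# prints: Ahh was the break
```

The key design choice modelled here is that a guard that simply becomes false yields nil, so only an explicit break can hand a meaningful value to the next pipeline stage.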
The Set! Builtin
Many of these changes come from using UCL for my job, and one thing I found myself doing recently is writing a bunch of migration scripts. These needed to get data from a database, and that data may or may not be present. If it’s not, I want the script to fail immediately so I can check my assumptions. This usually results in constructs like the following:
set planID (ls-plans | first { |p| eq $p "Plan Name" } | index ID)
if (not $planID) {
    error "cannot find plan"
}
And yeah, adding the if block is fine — I do it all the time when writing Go — but it would be nice to assert this when you’re trying to set the variable, for no reason other than the fact that you’re thinking about nullability while writing the expression to fetch the data.
So one other change I made was to add the set! builtin. This will basically set the variable only if the expression is not nil. Otherwise, it will raise an error.
set! planID (ls-plans | first { |p| eq $p "Missing Plan" } | index ID)
# refusing to set! `planID` to nil value
This does mean that ! and ? are now valid characters in identifiers, just like in Ruby. I haven’t decided whether I want to start following the Ruby convention of question marks indicating a predicate or bangs indicating a mutation. Not sure that’s going to work now, given that the bang is being used here to assert non-nullability. In either case, it could be useful in the future.
About My New Cooler's Programming Feature
There’s lots to like about my new cooler, but the programming feature is not one of them. My old unit had a very simple timer with two modes: turn cooler on after N hours, or turn cooler off after N hours. Anything else requires manual intervention.

I really liked this simple setup. Many times in summer, when the days are warm but not hot, and the nights are cool, it was nice to turn the cooler’s fan on and set the timer to turn it off after 2 to 3 hours, maybe longer if the days were a bit warmer1. The cooler will simply pull cool air from outside and circulate it around the room. This was enough for me to get to sleep, at which point the cooler would shut itself off in the middle of the night.
With the new cooler comes a control panel that is effectively a cheap Android phone, so it’s capable of much more. You can now set a program that has four separate modes set to four separate times of the day. Each day of the week now has its own program too. A particular mode can either be a desired temperature setting, or “off”.

To recreate what I previously had, I would now have to choose a specific shutoff time for every day of the week. No longer am I able to set the running time based on how I feel: it has to be an actual timestamp, with several taps involved if you want to change it. This timestamp can only be set up to 11:59 PM, so if you want the unit to shut off after midnight, you’ll have to remember to choose the program for the next day.
Oh, and mercy on you if you wanted a timestamp that didn’t land on the hour. The minutes can only be changed by 1, so you’ll be tapping 30 times if you want the unit to shut-off at the half hour.

You also have no control over the fan speed. That was another nice thing about the old unit: you set the speed to what you want, and then you set the timer. The unit will stay in that mode until it shuts off. I don’t want the fan to be blowing a gale when I’m trying to get to sleep, so the fan was usually set to the lowest or second-lowest setting.
These new programming modes only have a temperature setting, so if the house is warm, the cooler will crank up the fan until it reaches half-speed or just above: speeds I usually use in the middle of a very hot day. This means noise that changes in intensity as the target temperature is reached. I’m not a great sleeper, so any additional noise like that is disruptive.

So I’m a little sad that I lost this simple timer-based approach to operating the cooler. I’m not even sure who this programming feature is built for. It sort of exists in that nether region where it’s too complicated for the simple things, yet useless for anything other than a set weekly routine. I set my cooler based on the weather conditions which, you may be surprised to know, does not fall into a nice weekly routine. Granted, it may make it possible to use this to recreate the simple timer approach I had before: I just preset everything and only activate the program when I want it. And yeah, it’ll probably be fine, but I do feel like I’ve lost something.
Update: Apparently the cooler does have a shutoff-after-N-hours feature. It’s just buried in the settings menu. The post still stands, as it would’ve been nice for this to be a feature of the Program mode, but at least there’s something I can use.
Idea for UCL: Methods
I’m toying with the idea of adding methods to UCL. This will be similar to the methods that exist in Lua, in that they’re essentially functions that pass in the receiver as the first argument, although methods would only be definable by the native layer for the first version.
Much like Lua though, methods would be invokable using the : “pair” operator.
strs:to-upper "Hello"
--> HELLO
The idea is to make some of these methods on the types themselves, allowing their use on literals and the result of pipelines, as well as variables:
set hello "Hello"
$hello:to-upper
--> HELLO
"Hello":to-upper
--> HELLO
(cat "Hello " "world"):to-upper
--> HELLO WORLD
The use of methods would be purely a convenience. One could conceive of a type, like a CSV table, where there’s a need to perform a series of operations over it.
The potential downside of using : is that it may make defining dictionaries more ambiguous. When the parser sees something that could either be a list or a dictionary, it does a scan to search for any pairs that may exist. If so, it treats the literal as a dictionary. But would this work with using : as the method separator?
["Hello":to-upper]
--> Is this a dictionary {"Hello":"to-upper"}, or the list ["HELLO"]?
So that would require some form of disambiguation. It might be that I need to require method invocations to occur within parentheses inside list literals.
Ambiguities like this aside, I’m planning to keep it simple for now. Methods will not be user-definable within UCL out of the gate, but would effectively be another interface available to native types. For the builtin ones that exist now, it’ll most likely be little more than syntactic sugar over the standard library functions:
$hello:to-upper
# equivalent to
strs:to-upper $hello
Speaking of the standard library, the use of : as a module-function separator may need some changes. At the moment it’s a bit of a hack: when a module is loaded, any procs defined within that module are stored in the environment under a name containing the operator: strs:to-upper, for example. The parser is sophisticated enough to recognise : as a token, but when it parses two or more identifiers separated with :, it just joins them up into a single identifier.
What’s needed is something else, maybe something based on methods. The current idea is to define modules as being some special form of object, where the “methods” are simply the names of the exported symbols.
I was curious to know whether the language was capable of doing something similar now. It’s conceivable that a similar concept could be drafted with procs returning dictionaries of procs, effectively acting as a namespace for a collection of functions. So a bit of time in the playground resulted in this experiment:
proc fns {
    return [
        upper: (proc { |x| strs:to-upper $x })
        lower: (proc { |x| strs:to-lower $x })
    ]
}
(fns).upper "Hello"
--> HELLO
(fns).lower "Hello"
--> hello
A good first start. This is theoretically possible in the language as it exists at the moment. It’s not perfect though. For one thing, the call to fns needs to be enclosed in parentheses in order to invoke it. Leaving them out results in an error:
fns.upper "Hello"
--> error: command is not invokable
The same is true when using variables instead of procs. I tried the experiment again using variables and ran into the same limitation:
set fns [
    upper: (proc { |x| strs:to-upper $x })
    lower: (proc { |x| strs:to-lower $x })
]
($fns).upper "Hello"
--> HELLO
$fns.upper "Hello"
--> error: command is not invokable
Obviously the parser needs to be changed to add additional support for the : operator, but I also need to fix how strongly . binds to values. But I think this may have legs.
On Slash Pages Verses Blog Posts
Interesting discussion on ShopTalk about slash pages and whether blog posts may make more sense for some of them. Chris and Dave make the point that blog posts have the advantage of syndicating updates, something that static pages lack on most CMSs. It’s a good point, and a tension I feel occasionally. Not so much on this site, but there’ve been several attempts where I tried to make a site for technical knowledge, only to wonder whether a blog or a wiki made more sense. I’d like the pages to be evergreen, yet I also like to syndicate updates when I learn new stuff.
I’ve currently settled on the blog format for now, and it’s fine (tags tend to help here), but I wonder if something smarter could be useful. One idea I have is a page with “sections”, where each one could be seen as a mini blog post. You add and modify sections over time, and when you do, each section would be syndicated individually. Yet the page would be rendered as a whole when viewed in the browser. It’s almost like the RSS feed contains diffs for the page, albeit something self-contained and readable by humans. There might be a CMS that does this already; I haven’t looked. But I get the sense that most RSS feeds of wiki pages actually contain a diff, or a simple message saying “this page has been updated.” There’s nothing to suggest that what’s out there has this section semantics.
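The per-section syndication idea could be sketched roughly as follows: one feed item per page section, with a GUID derived from the section body so that editing a section surfaces a fresh item in the reader. All names here (Section, feed_items) are invented for illustration; this isn't how any existing CMS works, just a minimal model of the idea.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    body: str
    updated: str  # ISO-8601 timestamp of the section's last edit

def feed_items(page_url: str, sections: list[Section]) -> list[dict]:
    items = []
    for s in sections:
        # Hash the body into the GUID, so a reader sees a "new" item
        # whenever a section's content changes.
        digest = hashlib.sha256(s.body.encode()).hexdigest()[:12]
        anchor = s.title.lower().replace(" ", "-")
        items.append({
            "title": s.title,
            "link": f"{page_url}#{anchor}",
            "guid": f"{page_url}#{anchor}-{digest}",
            "updated": s.updated,
            "description": s.body,
        })
    return items
```

The page itself would still render all sections together; only the feed treats them as separate, self-contained entries.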
In lieu of that, I like the idea proposed by Chris and Dave where you basically publish new versions of these slash pages as blog posts and redirect the slash URL to the latest one, kind of like a bookmark. I may start doing this for some of them, starting with /defaults which is, conveniently, already a blog post.
Project Update: DSL Formats For Interactive Fiction
Still bouncing around things to work on at the moment. Most of the little features have been addressed, and I have little need to add anything pressing for the things I’ve been working on recently. As for the large features, well apathy’s taking care of those. But there is one project that is tugging at my attention. And it’s a bit of a strange one, as part of me just wants to kill it. Yet it seems to be resisting.
About 6 months ago, I started working on some interactive fiction using Evergreen. I got most of the story elements squared away, but many of the interactive elements were still left to be done. And as good as Evergreen is for crafting the story, I found it a little difficult tracking down all the scripts I needed to write, debug, and test.
So in early November, I had a look at porting this story over to a tool of my own, called Pine Needle (yes, the name is a bit of a rip-off). Much like Evergreen, the artefact of this is a choose-your-own-adventure story implemented as a static webpage. Yet the means of building the story couldn’t be more different. Seeing that I’m more comfortable working with code and text files, I eschewed building any UI in favour of a tool that simply ran from the command line.
But this meant that I needed some way to represent the story in text. Early versions simply had the story hard-coded in Go, but it wasn’t long before I started looking at using a DSL. My first attempt was a hand-built one based on Markdown with some additional meta-elements. The goal was to keep boilerplate to a minimum, with the meta-elements getting out of the way of the prose. Here’s a sample of what I had working so far:
// Three dashes separate pages, with the page ID following on.
// Also these are comments, as the hash is reserved for titles.
--- tulips-laundry-cupboard
You open the cupboard door and look at the shelf
above the brooms. There are a couple of aerosol cans up there,
including a red one that says "Begone Insecticide".
You bring it out and scan the active ingredients. There are a
bunch of letters and numbers back there, and none of them have
the word "organic."
\choice 'Take Insecticide' to=tulips-take-insecticide
\choice 'Leave Insecticide' to=tulips-leave-insecticide
--- tulips-take-insecticide
You return to the tulips with the insecticide, and start
spraying them. The pungent odour of the spray fills the air,
but you get the sense that it's helping a little.
\choice 'Continue' to=tulips-end
--- tulips-leave-insecticide
You decide against using the can of insecticide. You put the
can back on the shelf and close the cupboard door.
\choice 'Look Under The Trough' to=tulips-laundry-trough
\choice 'Exit Laundry' to=tulips-exit-laundry
The goal was to have the meta-elements look like LaTeX macros (for example, \choice{Label}{target-screen}), but I didn’t get far in finishing the parser for this. And I wasn’t convinced it had the flexibility I wanted. LaTeX macros rely almost entirely on positional arguments, but I knew I wanted key-value pairs, both to make it easier to rely on defaults and to make the format easier to extend later.
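For a sense of how little machinery a line-based format like this needs, here’s a minimal sketch of a parser for it. Everything here (`parsePages`, `Page`, `Choice`) is hypothetical, not the actual Pine Needle code, and it only handles the three directives shown in the sample above:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Choice is a hypothetical representation of a \choice directive.
type Choice struct{ Label, To string }

// Page is a hypothetical representation of one "---" delimited page.
type Page struct {
	ID      string
	Prose   []string
	Choices []Choice
}

// parsePages reads the Markdown-ish DSL line by line: "//" lines are
// comments, "---" starts a new page (with the page ID following on),
// and "\choice 'Label' to=target" declares an option. Everything else
// is treated as prose belonging to the current page.
func parsePages(src string) []Page {
	var pages []Page
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "//"):
			// comment: skip entirely
		case strings.HasPrefix(line, "---"):
			pages = append(pages, Page{ID: strings.TrimSpace(line[3:])})
		case strings.HasPrefix(line, `\choice`) && len(pages) > 0:
			rest := strings.TrimSpace(strings.TrimPrefix(line, `\choice`))
			// crude split: quoted label first, then the to= key-value pair
			if i := strings.LastIndex(rest, "to="); i >= 0 {
				label := strings.Trim(strings.TrimSpace(rest[:i]), "'")
				p := &pages[len(pages)-1]
				p.Choices = append(p.Choices, Choice{Label: label, To: rest[i+3:]})
			}
		case len(pages) > 0:
			pages[len(pages)-1].Prose = append(pages[len(pages)-1].Prose, line)
		}
	}
	return pages
}

func main() {
	pages := parsePages(`--- tulips-take-insecticide
You return to the tulips with the insecticide.
\choice 'Continue' to=tulips-end`)
	fmt.Println(pages[0].ID, "->", pages[0].Choices[0].To)
}
```

Even a sketch like this shows the catch: every new directive means more ad-hoc string splitting, which is part of why a format with an existing parser started to look attractive.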
I did imagine a fully LaTeX-inspired DSL for this, but I quickly dismissed it for how “macro-heavy” it would be. For reference, here’s how I imagined it:
\screen{tulips-laundry-cupboard}{
You open the cupboard door and look at the shelf
above the brooms. There are a couple of aerosol cans up there,
including a red one that says "Begone Insecticide".
You bring it out and scan the active ingredients. There are a
bunch of letters and numbers back there, and none of them have
the word "organic."
\choice{Take Insecticide}{to=tulips-take-insecticide}
\choice{Leave Insecticide}{to=tulips-leave-insecticide}
}
\screen{tulips-take-insecticide}{
You return to the tulips with the insecticide, and start
spraying them. The pungent odour of the spray fills the air,
but you get the sense that it's helping a little.
\choice{Continue}{to=tulips-end}
}
\screen{tulips-leave-insecticide}{
You decide against using the can of insecticide. You put the
can back on the shelf and close the cupboard door.
\choice{Look Under The Trough}{to=tulips-laundry-trough}
\choice{Exit Laundry}{to=tulips-exit-laundry}
}
I wasn’t happy with the direction of the DSL, so I looked for something else. I briefly had a thought about using JSON. I didn’t go so far as to try it, but it could have worked something like this:
{"screens": [
  {
    "id": "tulips-laundry-cupboard",
    "body": "
      You open the cupboard door and look at the shelf
      above the brooms. There are a couple of aerosol cans up
      there, including a red one that says \"Begone Insecticide\".
      You bring it out and scan the active ingredients. There
      are a bunch of letters and numbers back there, and none of
      them have the word \"organic.\"
    ",
    "options": [
      {
        "screen": "tulips-take-insecticide",
        "label": "Take Insecticide"
      },
      {
        "screen": "tulips-leave-insecticide",
        "label": "Leave Insecticide"
      }
    ]
  },
  {
    "id": "tulips-take-insecticide",
    "body": "
      You return to the tulips with the insecticide, and start
      spraying them. The pungent odour of the spray fills the air,
      but you get the sense that it's helping a little.
    ",
    "options": [
      {
        "screen": "tulips-end",
        "label": "Continue"
      }
    ]
  }
]}
I generally like JSON as a transport format, but it didn’t strike me as suited to the type of data I wanted to encode. Most of what this format would contain is prose, which I’d prefer to keep as Markdown, and that clashes with JSON’s need for explicit structure. Setting aside the additional boilerplate this structure would require, all the prose would have to be encoded as one big string, which didn’t appeal to me. JSON also has no comments at all, which is a major deal breaker.
So, the current idea is to use something based on XML. This has some pretty significant benefits: editors have good support for XML, and Go has an unmarshaller that can read XML directly into Go structures. JSON has this too, but I think XML is also a pretty decent format for editing documents by hand, so long as you keep the elements to a minimum.
I think one aspect that turned people off XML back in the day was format designers’ embrace of XML’s ability to represent hierarchical data, without leaning into its use as a language for documents. The clunky XML formats I had to deal with were used purely to encode structure, usually in a way that mapped directly to a domain’s class model. You had formats where you needed 10 nested elements to encode a single bit of information, and they were a pain to read or edit by hand. These complaints were usually dismissed by the designers with promises like, “Oh, you won’t be editing this by hand most of the time. You’ll have GUI design tools to help you.” But those tools were awful to use too, and that’s if they were available at all, which they usually weren’t (did you know building GUIs is hard?).
If you have an XML format that skews closer to HTML than to something directly representable in JSON, I think it can be made to work. So yesterday I had a go at seeing whether this could work for Pine Needle. Here’s what I’ve got so far:
<?xml version="1.0"?>
<story>
<screen id="tulips-laundry-cupboard">
You open the cupboard door and look at the shelf
above the brooms. There are a couple of aerosol cans up
there, including a red one that says "Begone Insecticide".
You bring it out and scan the active ingredients. There
are a bunch of letters and numbers back there, and none of
them have the word "organic."
<option screen="tulips-take-insecticide">Take Insecticide</option>
<option screen="tulips-leave-insecticide">Leave Insecticide</option>
</screen>
<screen id="tulips-take-insecticide">
You return to the tulips with the insecticide, and start
spraying them. The pungent odour of the spray fills the air,
but you get the sense that it's helping a little.
<option screen="tulips-end">Continue</option>
</screen>
<screen id="tulips-leave-insecticide">
You decide against using the can of insecticide. You put the
can back on the shelf and close the cupboard door.
<option screen="tulips-laundry-trough">Look Under The Trough</option>
<option screen="tulips-exit-laundry">Exit Laundry</option>
</screen>
</story>
The idea is that the prose will still be Markdown, so things like blank lines will still be respected (the parser strips all the leading whitespace, allowing one to easily indent the prose). Attributes satisfy the key/value requirement for the elements, and I get the features that make this easy to modify by hand, such as comments and good editor support.
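That whitespace stripping can be as simple as finding the common indentation shared by every non-blank line and removing it. Here’s a minimal sketch of the idea; `dedent` is a hypothetical helper, not the actual Pine Needle code:

```go
package main

import (
	"fmt"
	"strings"
)

// dedent strips the leading whitespace common to all non-blank lines,
// so prose can be indented inside its <screen> element without the
// indentation leaking into the Markdown output. Blank lines are left
// alone, which preserves Markdown paragraph breaks.
func dedent(s string) string {
	lines := strings.Split(s, "\n")

	// Find the smallest indent across non-blank lines.
	margin := -1
	for _, l := range lines {
		if strings.TrimSpace(l) == "" {
			continue
		}
		indent := len(l) - len(strings.TrimLeft(l, " \t"))
		if margin < 0 || indent < margin {
			margin = indent
		}
	}
	if margin <= 0 {
		return s
	}

	// Strip that margin from every line long enough to carry it.
	for i, l := range lines {
		if len(l) >= margin {
			lines[i] = l[margin:]
		}
	}
	return strings.Join(lines, "\n")
}

func main() {
	fmt.Println(dedent("    You open the cupboard door\n\n    and look at the shelf"))
}
```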
I think it’s going to work. It would require some custom code, as Go’s unmarshaller doesn’t quite like the mix of prose and declared <option>
elements, but I think it’s got the bones of a decent format for this interactive fiction. Already I’m coming up with ideas for how to add script elements and decompose stories into sub-file fragments to make them easier to test.
I’ll talk more about this project in the future if I’m still working on it. I don’t know if the story that started all this will see the light of day. I’ve gone through it a few times, and it’s not great. But shipping stuff you’re proud of comes from shipping stuff you’re not proud of, and given how far along it is, it probably deserves to be released in one form or another. That’s probably why it’s been saved from the chopping block so far1.
Gallery of Fake Logos For Test Organisations
In lieu of anything else, I thought I’d put together a gallery of the logos I use for test organisations. I occasionally need to create fake organisations for testing at work, and to add a bit of amusement to the mix, I sometimes make fake logos for them. These logos are typically made using ChatGPT, although if I’m particularly bored, I sometimes touch them up by hand (nothing too drastic, usually things like cropping or adding titles). Most of the fake organisations are film and media production studios, as that’s typically the client base of our product.
I do this mainly for fun, but there is some utility to it. A user can be a member of zero or more organisations, and can change the one they’re acting in at any time. Having a unique avatar for each one helps in distinguishing which one I have active. I do get cute with the names, and for that, I make no apologies. 🙂