Workpad

    Project Updates

    Well, it’s been three weeks since my last post here, and as hard as it was to write this update, not writing it would’ve been harder. So let’s just skip the preamble and go straight to the update.

    Cyber Burger (That Pico-8 Game)

    I’m terrible at being coy, so I’ll just spill the beans. That game I’ve been working on is called Cyber Burger. It’s based on a DOS game I saw on YouTube, and it seemed like a fun project to work on, with some tweaks to the gameplay that I think would make it more forgiving.

    In the weeks since my last update, I finished with the prototypes and started working on the game itself. The initial set of art is done and a very simple “Game A” mode has been implemented. In this mode, you are shown a burger you’ll need to make in your basket. You do so by shooting down the items flying across the screen and catching them in order. When you do, you get a “tip”, which basically amounts to points. If you make a mistake, you’re given a demerit. There are five rounds in total, and once a round is complete, you move on to the next one, with maybe slightly different items, different item speeds, etc.

    An example play session, with the new graphics.

    I managed to make an HTML version of this which plays through round 1. I gave it to someone at work to play-test and the results were… well, they weren’t bad, but it didn’t set the world on fire.

    I think I’m okay with that, but I do need to keep working on it. I think one thing that would help is adding sound. And I think it might help me deliver this earlier if I abandoned Mode A and started working on Mode B, which is closer to an arcade style of game that would allow for continuous play. I’ll aim to work on these two things next week.

    Oh, and I’ll need to fix the item spawner. Waiting ages for an item you need is no good.

    If you’re interested in giving it a try, you can do so by following this link (it runs in the browser). Feel free to send any feedback you may have.

    UCL

    The other thing I’ve been spending some time on over the last week or so was UCL. I’ve been using the work tool that embeds this language quite often recently, and I’ve been running into a number of bugs and areas where quality-of-life changes could be made. Just small things, such as allowing the foreach command to be called with a proc name instead of requiring a block, much like the map function. Those have been addressed.

    But the biggest thing I’ve been working on was building out the core library. I added the following functions:

    • The `seq` function, used for generating a sequence of integers. I like how I built this: it’s effectively a virtual list — one that can be indexed, iterated over, or have its length calculated — but it does not take up linear space (see the sketch after this list).
    • Comparator functions, like `eq`, `ne`, `gt`, etc., plus settling on a type system much like Python’s, where values are strongly typed (you can’t compare ints to strings) but are also dynamic.
    • Arithmetic functions, like `add`, `sub`, etc., which operate on integers, and the `cat` function, used to concatenate strings (these functions do try to coerce values to the correct type).
    • Logical functions, like `and`, `or`, and `not`.
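
    To give a rough idea of what I mean by a virtual list, here’s a sketch of how `seq` might look on the Go side of the embedding API. The listProxy interface and the type names here are made up for illustration; the real interfaces in UCL may well look different.

    // listProxy is a made-up interface for values that behave like lists
    // without storing their elements.
    type listProxy interface {
      Len() int
      Index(i int) any
    }

    // intSeq is roughly what seq could return: it knows its bounds and
    // computes elements on demand, so indexing, iteration, and length all
    // work without allocating a slice.
    type intSeq struct{ from, to int }

    func (s intSeq) Len() int        { return s.to - s.from + 1 }
    func (s intSeq) Index(i int) any { return s.from + i }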

    Along with this, I’ve started working through the strings package, which will add the various string functions you’d expect, like trimming whitespace, splitting, joining, etc. I’ve got trimming and conversion to upper and lower case done, and my goal for next week is to add splitting to, and joining from, string lists. Once that’s done I’ll probably put this on the back-burner again so I can finish off Cyber Burger or work on something else.

    Just a reminder that there’s also a playground for this too, although I apologise for the lack of documentation. I’ll need to get onto that as well.

    I enjoyed reading Kev Quirk’s post about building a simple journal. I’m still using Day One, but I’m thinking of moving off it. So I was inspired to build a prototype similar to Kev’s, just to see if something similar works for me. It’s built using Go instead of PHP, but it also uses Simple CSS.

    Screenshot of a journal web-page with a text box with the contents saying 'Thanks, Kev, for the idea'.

    Project Seed - A Pico-8 Prototype

    Oof, another long stretch between updates. This has not been a productive winter.

    Many of the projects I’ve been writing about here are, shall we say, “on ice”. UCL is still being used for the project it was built for, but there’s been no further work done on it recently. I think we can safely say Photo Bucket is dead, at least for now. Blogging Tool and that interactive fiction project are still ongoing, but both are running on a slow burn. I was hoping to get the fiction thing done by the end of winter, but it’s likely that timeline will slip. Maybe some time in spring.

    What have I been doing in the meantime? Watching YouTube videos on old DOS games, actually. Hardly an activity worth writing about here. But it did get me wanting to try working on a game again. And I did get an idea for one while watching videos of someone going through a collection of shovelware titles.

    This project is designated the codename “Project Seed”. I’m going to be a little cagey about the details, at least for now. But I will say that I’m planning to use Pico-8 for this. I bought a license for Pico-8 about 2 years ago (oof, did not expect it to have been that long ago) and I watched a few videos on how to use it, but I didn’t have a good idea for a project then. It is a fascinating bit of software, and I know it’s quite popular amongst hobbyists. One thing I like about it is that the games made with it are not expected to have great art. As someone who can’t draw to save himself, this works in my favour. So don’t expect anything resembling Celeste from me! 😄  Pico-8 also targets HTML5, which works for me.

    Anyway, I have this idea, and I thought about starting a prototype to see how it feels. I downloaded Pico-8, spun up a new project, and started drawing some of the graphics. I’ve got the bare minimum so far: a user-controlled paddle, called the “basket”; a laser bullet, and a thing that needs to be shot.

    The first few sprites. Must say I really like the Pico-8 palette.

    Next was writing the Lua code. Using the Pico-8 built-in editor was fine for a bit, but I eventually switched to Nova, just for the screen size. I am still trying to adhere to the whole retro-style approach of Pico-8: the code I write is still bound to the 8192 token limit, and I’m trying to avoid using too much virtual memory, capping elements to only a handful. But, yeah, using Nova to write the logic is so much better.

    Anyway, the first milestone was allowing the player to move the basket around and shoot laser bullets. Then it was to get one of the shootable items moving across the field. The idea is that the player will need to fire the laser to hit the shootable item. When it’s hit, it begins to fall, and the player needs to catch it in the basket.

    Shooting the laser and catching the item in the basket.

    This took about an hour or so, and already glimpses of the core game mechanics are starting to show through. They’re just ridiculously primitive at this stage. I mean, the item really shouldn’t fall through the basket like that. But given that it’s a prototype, I’m okay with this so far.

    Next experiment was spawning multiple items onto the field. This got off to an interesting start:

    An "item" train.

    But adding a bit of randomness to the Y position and the spawn delay managed to make things a little more natural again:

    A more natural item spawner.

    One thing I’m considering is whether to add some randomness to the item X velocity, and even have items move from right to left. But this will do for now.

    At this stage, items were just being added to an array, which grew without bounds, and they weren’t being released when they left the screen. Obviously not a good use of memory (even though this is running on an M2 Mac Mini and not retro hardware from the 1980s, that’s hardly the point of this exercise). Furthermore, the player is only able to shoot one bullet at a time, and those bullets weren’t being released either. So I set about resolving this, trying to do so in the spirit of a Pico-8 game. I’ve set up a fixed Lua array, which will grow up to a maximum size, and added a simple allocator which will search for an empty slot to put the next item in.

    -- return the next free slot in tbl, which can hold at most n items,
    -- or nil if every slot is taken
    function next_slot(tbl,n)
      if #tbl < n then
        return #tbl+1
      else
        for i = 1,n do
          if tbl[i] == nil then
            return i
          end
        end
      end
      return nil
    end
    

    This makes releasing items really easy: just set that slot to nil. It does mean that I can’t use ipairs to iterate over items, though. Instead I have to use the following construct:

    for i,item in pairs(items) do
      if item then
        -- Do thing with item
      end
    end
    

    It’s here that I wished Lua had a continue statement.

    I used this for both the items and the bullets: there can now be up to 8 items and 4 bullets on the screen at a time. After making those changes, the prototype started to play a little better:

    So, a good start. But there are definitely things that need to be fixed. The basket needs to be wider, for one. It’s fine for the prototype, and I’m okay with the collision being pretty lenient, but it’s too narrow to be fun.

    But the biggest issue is that the collision logic sucks. Bullets are flying through the items, and items are falling through the basket. I’m using a point-in-rectangle approach to collision detection, with a few probing points for each item, but they obviously need to be adjusted.

    So collision is what I’m hoping to work on next. More on this when I get around to it.

    Current Project Update

    Hmm, another long gap between posts. A little unexpected, but there’s an explanation for this: between setting up Forgejo and making the occasional update to Blogging Tools, I haven’t been doing any project work. Well, at least nothing involving code. What I have been doing is trying my hand at interactive fiction, using Evergreen by Big River Games.

    Well, okay, it’s not completely without code: there is a bit of JavaScript involved for powering the “interactive” logic. But the bulk of the effort is in writing the narrative, albeit a narrative that’s probably closer to a video game than to a work of pure fiction.

    Why? The main reason is to try something new. I had the occasional fancy to try my hand at fiction, like a short story or something. I have an idea for a novel — I mean, who doesn’t? — but the idea of writing it as a novel seems daunting at the moment (I’ve written it as a short story for NaNoWriMo. It’s sitting online as an unedited draft somewhere. I should probably make a backup of it, actually). But the idea of writing something as interactive fiction seemed intriguing. I was never into text-parser adventures, but I did enjoy the choose-your-own-adventure books growing up.

    So what’s the story about, I hear you saying? Well, believe it or not, it’s about gardening. Yes, something I have zero experience in. And perhaps that’s what made it an interesting subject to explore.

    I’ve been working on this for about a month now. I’m well past the everything-is-new-and-exciting phase, and I think I’ve just made it through the oh-no-why-the-heck-am-I-even-doing-this pit of despair. I can see the finish line in terms of the narrative and the logic, and all that remains there should just be a matter of cleaning up, editing, and play testing. The biggest thing left to do is the illustrations. I have zero artistic skills myself, so I’m not quite sure what I’ll do here.

    If you’re curious about it, here’s a sample. It’s about the first third of the story. It’s a little rough, and still needs editing, proof-reading, and illustrations. But let me know what you think.

    More Tools For Blogging Tool

    Spent the last week working on Blogging Tool. I want to get as much done as I can before motivation begins to wane and it starts languishing like every other project I’ve worked on. Not sure I can stop that, but I think I can get the big-ticket items in there so it’ll be useful to me while I start work on something else.

    I do have plans for some new tools for Blogging Tool: making it easier to build a Lightbox gallery was just the start. This last week I managed to get two of them done, along with some cross-functional features which should help with any other tools I make down the road.

    Move To Sqlite

    First, a bit of infrastructure. I moved away from Rainstorm as the data store and replaced it with Sqlite 3. I’m using a version of Sqlite 3 that doesn’t use CGO, as the Docker container this app runs in doesn’t have libc. It doesn’t have as much support out there as the more popular Sqlite 3 client, but I’ve found it to work just as well.
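
    For anyone curious, opening the database looks something like the sketch below. I’m assuming the modernc.org/sqlite driver here, which is the usual CGO-free option; the file name is just an example.

    package main

    import (
      "database/sql"
      "log"

      _ "modernc.org/sqlite" // CGO-free driver; registers itself as "sqlite"
    )

    func main() {
      db, err := sql.Open("sqlite", "blogging-tool.db")
      if err != nil {
        log.Fatal(err)
      }
      defer db.Close()

      // Schema migrations and the Sqlc-generated queries would run against db
      // from here.
      if err := db.Ping(); err != nil {
        log.Fatal(err)
      }
    }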

    One could argue that it would’ve been fine sticking with Rainstorm for this. But as good as Rainstorm’s API is, the fact that it takes out a lock on the database file is annoying. I’m running this app using Dokku, which takes a zero-downtime approach to deployments. This basically means that the old and new app containers are running at the same time. The old container doesn’t get shut down for about a minute, and because it’s still holding the lock, I can’t use the new version during that time, as the new container cannot access the Rainstorm database file. Fortunately, this is not an issue with Sqlite 3.

    It took me a couple of evenings to port the logic over, but fortunately I did this early, while there was no production data to migrate. I’m using Sqlc for generating Go bindings from SQL statements, and a home-grown library for dealing with the schema migrations. It’s not as easy to use as the Rainstorm API, but it’ll do. I’m finding working with raw SQL again to be quite refreshing, so it may end up being better in the long run.

    Current landing page.

    Image Processing

    Once that was done, I focused on adding the tools I wanted. The first one, to sit alongside the gallery tool, is something for preparing images for publishing. This will be particularly useful for screenshots. If you look carefully, you’d notice that the screenshots on this site have a slightly different shadow than the macOS default. That’s because I actually take the screenshot without the shadow, then use a CLI tool to add one prior to upload. I do this because the image margins macOS includes with the shadow are pretty wide, which makes the actual screenshot part smaller than I’d like. Using the CLI tool is fine, but it’s not always available to me. So it seemed like a natural thing to add to this blogging tool.

    So I added an image processing “app” (I’m calling these tools “apps” to distinguish them from features that work across all of them) which takes an image and allows you to apply a processor to it. You can then download the processed image and use it wherever you need.

    The image processing tool, being used here to get the crop right for this particular screenshot.

    This is all done within the browser, using the Go code from the CLI tool compiled to WASM. The reason for this is performance. These images can be quite large, and I’d rather avoid the network round-trip. I’m betting that it’ll be faster running it in the browser anyway, even when you consider the amount of time it takes to download the WASM binary (which is probably around a second or so).
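
    Wiring this up looks roughly like the sketch below, assuming the standard syscall/js approach; the processImage name and the addShadow stand-in are purely illustrative.

    //go:build js && wasm

    package main

    import "syscall/js"

    func main() {
      // Expose a function that JavaScript can call with a Uint8Array of image
      // bytes; it returns the processed bytes as a new Uint8Array.
      js.Global().Set("processImage", js.FuncOf(func(this js.Value, args []js.Value) any {
        src := make([]byte, args[0].Length())
        js.CopyBytesToGo(src, args[0])

        out := addShadow(src) // stand-in for the real processor

        dst := js.Global().Get("Uint8Array").New(len(out))
        js.CopyBytesToJS(dst, out)
        return dst
      }))

      // Block forever so the exported function stays callable from JS.
      select {}
    }

    func addShadow(b []byte) []byte { return b } // placeholder for the actual processing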

    One addition I did make was to allow processors to define parameters, which are shown to the user as input fields. There’s little need for this now — it’s just being used in a simple meme-text processor at the moment — but it’s one of those features I’d like to at least get basic support for before my interest wanes. It wouldn’t be the first time I stopped short of finishing something, thinking to myself that I’d add what I need later, then never going back to do so. That said, I do have some ideas for processors which could use this feature for real, which I haven’t implemented yet. More on that in the future, maybe.

    The "Cheeseburger" processor, and it's use of image parameters.

    Audio Transcoding And Files

    The other one I added deals with audio transcoding. I’ve gotten into the habit of narrating the long-form posts I write. I usually use QuickTime Player to record these, but it only exports M4A audio files, and I want to publish them as MP3s.

    So after recording them, I need to do a transcode. There’s an ffmpeg command line invocation I use to do this:

    ffmpeg -i in.m4a -c:v copy -c:a libmp3lame -q:a 4 out.mp3
    

    But I have to bring up a terminal, retrieve the command from the history (while it’s still in there), pick a filename, etc. It’s not hard to do, but it’s a fair bit of busy work.

    I guess now that I’ve written it here, it’ll be less work to remember. But it’s a bit late now, since I’ve added a feature to do this for me. I’ve included a statically linked version of ffmpeg in the Docker container (it needs to be statically linked for the same reason I can’t use CGO: there’s no libc or any other shared objects) and wrapped a small form around it where I upload my M4A.
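
    Under the hood, the handler is essentially just shelling out to that same ffmpeg invocation. Something along these lines, with the paths and function name being illustrative:

    import (
      "context"
      "os"
      "os/exec"
    )

    // transcodeToMP3 runs the bundled ffmpeg binary over the uploaded M4A,
    // mirroring the CLI invocation above.
    func transcodeToMP3(ctx context.Context, inPath, outPath string) error {
      cmd := exec.CommandContext(ctx, "ffmpeg",
        "-i", inPath,
        "-c:v", "copy",
        "-c:a", "libmp3lame",
        "-q:a", "4",
        outPath,
      )
      cmd.Stderr = os.Stderr // ffmpeg writes progress and errors to stderr
      return cmd.Run()
    }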

    The simple form used to transcode an M4A file.

    The transcoding is done on the server (it seemed a bit much asking for this to be done in the browser), but I’m hoping that most M4A files will be small enough that it won’t slow things down too much. The whole process is synchronous right now, and I could’ve made the file available then and there, but this won’t be the only feature I’m thinking of that produces files I’d like to do things with later. Plus, I’d like to eventually make it asynchronous so that I don’t have to wait for long transcodes, should there be any.

    So along with this feature, I added a simple file manager in which these working files will go.

    The files list. Click the link to download the file (although this may change to a preview in the future).

    They’re backed by a directory in the container, with metadata managed by Sqlite 3. It’s not a full file system — you can’t do things like create directories, for example. Nor is it designed to be long-term storage for these files. It’s just a central place where any app can write files out as a result of its processing. The user can download the files, or potentially upload them to a site, then delete them. This would be useful for processors which could take a little while to run, or which run on a regular schedule.

    I don’t have many uses for this yet, apart from the audio transcoder, but having this cross-functional facility opens it up to features that need something like this. It means I don’t have to hand-roll it for each app.

    Anyway, that’s the current state of affairs. I have one, maybe two, large features I’d like to work on next. I’ll write about them once they’re done.

    Blogging Gallery Tool

    Oof! It’s been a while, hasn’t it.

    Not sure why I expected my side-project work to continue while I’m here in Canberra. Feels like a waste of a trip to go somewhere — well, not “unique”, I’ve been here before; but different — and expect to spend all your time indoors writing code. Maybe a choice I would’ve made when I was younger, but now? Hmm, better to spend my time outdoors, “touching grass”. So that’s what I’ve been doing.

    But I can’t do that all the time, and although I still have UCL (I’ve made some small changes recently, but nothing worth writing about) and Photo Bucket, I spent this past fortnight working on new things.

    The first was an aborted attempt at an RSS reader for Android that works with Feedbin. I did get something working, but I couldn’t get it onto my mobile, and frankly it was rather ugly. So I’ve set that idea aside for now. Might revisit it again.

    But all my outdoor adventures did motivate me to actually finish something I’ve been wanting to do for a couple of years now. For you see, I take a lot of photos and I’d like to publish them on my Micro.blog in the form of a GLightbox gallery (see this post for an example). But making these galleries is a huge pain. Setting aside that I always forget the short-codes to use, it’s just a lot of work. I’m always switching back and forth between the Upload section in Micro.blog, looking at the images I want to include, and a text file where I’m working on the gallery markup and captions.

    I’ve been wishing for a tool which would take on much of this work for me. I’d give it the photos, write the captions, and it would generate the markup. I’ve had a few runs at building something that would do this, including an idea for a feature in Photo Bucket. But I couldn’t get over the amount of effort it would take to upload, process, and store the photos. It’s not that it would be hard, but it always seemed like double handling, since their ultimate destination was Micro.blog. Plus, I was unsure as to how much effort I wanted to put into this, and the minimum amount of effort needed to deal with the images seemed like a bit of a hassle.

    One of the earlier attempts at these. Images were hosted, and were meant to be rearranged by dragging. You can see why this didn't go anywhere.

    It turns out the answer was in front of me this whole time. The hard part was preparing the markup, so why couldn’t I build something that simply did that? The images would already be in Micro.blog; just use their URLs. A much simpler approach indeed.

    So I started working on “Blogging Tools”, a web app that’ll handle this part of making galleries. First, I upload the images to Micro.blog, then I copy the image tags into this tool:

    Create a new gallery by pasting the image tags from Micro.blog.

    The tool will parse these tags, preserving things like the “alt” attribute, and present the images in the order they’ll appear in the gallery, with text boxes beside each one allowing me to write the caption.

    The images get displayed alongside the captions, which you can edit on this screen.
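
    The parsing itself doesn’t need to be fancy. Here’s a sketch of the idea, using the golang.org/x/net/html tokenizer to pull out the src and alt attributes; the ImageRef type and function name are illustrative, not the actual code.

    import (
      "strings"

      "golang.org/x/net/html"
    )

    type ImageRef struct {
      Src, Alt string
    }

    // parseImageTags scans the pasted markup for <img> tags, keeping the
    // attributes we care about in the order they appear.
    func parseImageTags(fragment string) []ImageRef {
      var imgs []ImageRef
      tok := html.NewTokenizer(strings.NewReader(fragment))
      for {
        tt := tok.Next()
        if tt == html.ErrorToken {
          return imgs // io.EOF once the input is exhausted
        }
        if tt != html.StartTagToken && tt != html.SelfClosingTagToken {
          continue
        }
        t := tok.Token()
        if t.Data != "img" {
          continue
        }
        var img ImageRef
        for _, attr := range t.Attr {
          switch attr.Key {
          case "src":
            img.Src = attr.Val
          case "alt":
            img.Alt = attr.Val
          }
        }
        imgs = append(imgs, img)
      }
    }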

    Once I’m done, I can then “render” the gallery, which will produce the Hugo short-codes that I can simply copy and paste into the post.

    Listing the galleries. Here you select "Render" from the Action column.
    Copy and paste the snippet into Micro.blog.

    This took a few evenings of work. It’s a simple Go app, using Fiber and Rainstorm, running in Docker. Seeing that the image files themselves are not managed by the tool, once I got the image parsing and rendering done, the rest was pretty straightforward. It’s amazing to think that removing the image handling side of things has turned this once “sizeable” tool into something that was quick to build and, most importantly, finally exists.

    I do have more ideas for this “Blogging Tool”. The next is porting various command line tools that do simple image manipulation to WASM so I can run them in the browser (these tools were used to crop and produce the shadow of the screenshot in this post). I’m hoping that these would work on the iPad, so that I can do more of the image processing there rather than give up and go to a “real” computer. I should also talk a little about why I chose Rainstorm over Sqlite, and whether that was a good idea. Maybe more on those topics later, but I’ll leave it here for now.

Bulk Image Selection

Some light housekeeping first: this is the 15th post on this blog, so I thought it was time for a proper domain name. Not that buying a domain automatically means I’ll keep at it, but it does feel like I’ve got some momentum writing here now, so I’ll take the $24.00 USD risk. I’d also like to organise a proper site favicon too. I’ve got some ideas, but I haven’t cracked open Affinity Designer just yet.

Anyway, I’ve been spending some time on Photo Bucket on and off this past week. I’ve fully implemented the new page model mentioned in the last post, and hooked it up to the switcher in the “Design” admin section. I’ve also built the public gallery and gallery item page.

Example of the landing page showing galleries (yes, a lot of Jeff Wayne's War of the Worlds album covers in my test images).
Click one of the galleries to view the items within that gallery.

They’re a little on the simplistic side. That’s partly due to my minimalistic design sensibilities, but it’s also because I haven’t spent a lot of time on the public pages yet. I probably shouldn’t leave it too late, lest my impression of how it looks drops to the point where I lose interest in working on this again. It’s a challenge, but I guess my counter is that I’ll probably be spending more time in the admin section, so as long as the experience is good enough there, I can probably get by with a very basic public site for now (but not forever).

Now that galleries can be shown on the landing page, I’d like to organise another deployment so that I can start showing images in galleries. But before I do, I’ll need an easy way to move all the existing images into a gallery. Clicking into 25 individual images just to select which gallery they should belong to doesn’t sound desirable to me. So I spent some time adding batch operations to the image admin page. The way it works is that by pressing Shift and clicking the images, you can now select them and perform batch operations, such as adding them to a gallery (this is the only one I have right now).

I do like how the selection indicator came out. It’s got some DaVinci Resolve vibes (I’ve been using DaVinci Resolve recently to edit some videos, so I may have been inspired by their design language here), but I think I might need to use another highlight colour: I think the black bleeds too easily into the images. Also, while I was recording the demo, I realised I broke the ability to rearrange gallery items. I may need to fix that before redeploying.

Clicking “Gallery” brings up a modal similar to the one used on the individual image page. It works slightly differently though: instead of choosing whether the image appears in a gallery or not, this one is used to choose which galleries to add the selected images to.
I’m not sure that this is the best modal for this. It was quick to add, but I get the feeling that using the same modal in slightly different ways could confuse people. So I might do something else here. An idea I have is a modal more like the following:

A better modal for managing galleries for multiple images.

The idea is that all the galleries will be listed like before, but each will have a three-segmented button to the right. The centre segment will be selected by default, and will show how many of the selected images are currently within that particular gallery. The left segment will be the option to remove those images from the gallery, and the right segment will be the option to add all the remaining selected images to the gallery. These are labelled with the number of selected images each gallery will have when the user clicks “Save”: 0 for none, and the number of selected images for all of them. For good measure, there’s an option to add all the selected images to a brand new gallery. This will require some backend work, so I haven’t started on it yet. I’m not sure if this too will be a bit confusing: it may need some additional text explaining how it all works. I’m hoping that users will recognise it as operating similarly to the Gallery modal for a single image.

The Site Page Model

I opened up Photo Bucket this morning and found a bunch of commits involving pages. I had no idea why I added them, until I launched it and started poking around the admin section. I tried a toggle on the Design page which controlled whether the landing page showed a list of photos or galleries, and after finding that it wasn’t connected to anything, it all came flooding back to me. So while what I’m going to describe here isn’t fully implemented yet, I decided to write it down before I forget it again.

So here is where the story continues. Now that galleries have been added to the model, I want to make them available on the public site. For the first cut, I’m hoping to give the admin (i.e. the site owner) the ability to switch the landing page between a grid of photos and a grid of galleries. This is that single toggle on the “Design” page I was talking about earlier:

The Design section showing the new "Show Galleries on Homepage" toggle.

Oh, BTW: I finally got around to highlighting the active section in the admin screen. I probably should’ve done this earlier, as deferring these “unnecessary aesthetic tasks” does affect how it feels to use this — and whether or not I’m likely to continue working on it.

Anyway, if this was before the new model I would’ve implemented this as a flag on the Design model. But I’d like to start actually building out how pages on the site are to be modelled. What I’m thinking is an architecture that looks a little like the following:

New architecture, with the new "Pages" model.

The site content will be encoded using a new Page model. These Pages will be used to define the contents of the landing page (each site will have at least this page by default), along with any additional pages the user would like to add. Galleries and photos will automatically have their own pages, and will not need specific Page models to be present on the site. How these will look, plus the properties and styling of the site itself, will be dictated by the Design model.

Each Page instance will have the following properties:

  • Slug — the path the page is available on. For the landing page, this will be `/`.
  • Type — what the page will contain.
  • other properties like title, description, etc. which have yet to be defined.

The “page type” is probably the most important property of a Page, as it will dictate what the contents of the page will be comprised of.  The following page types will be supported at first:

  • Photos — grid of all the photos managed on the site.
  • Galleries — grid of all the galleries managed on the site.

The user will probably not have much control over how these pages look, apart from styling, footers, etc., which will live on the Design model. But I’m also thinking of adding a page type which would just produce a standard HTML page from a Markdown body. This could be useful for About pages or anything else the user may want to add to their site. I haven’t thought much about navigation, but I think choosing whether to include the page in the site’s nav bar would be another useful property.
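
Sketched as a Go struct, the Page model could look a little like the following. The field and type names are just my current thinking for illustration, not anything final:

type PageType int

const (
  PageTypePhotos    PageType = iota // grid of all the photos managed on the site
  PageTypeGalleries                 // grid of all the galleries managed on the site
  PageTypeMarkdown                  // possible later addition: HTML rendered from a Markdown body
)

type Page struct {
  Slug        string   // path the page is served on; "/" for the landing page
  Type        PageType // dictates what the page will contain
  Title       string
  Description string // along with other properties yet to be defined
  ShowInNav   bool   // whether the page appears in the site's nav bar
}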

The result would be a sitemap that could look a little like the following, where all the user defined pages reference automatically created pages:

Example sitemap, with the Page instances with solid borders referencing automatically defined pages with dotted borders.

And this is what that single toggle should do. It should change the page type of the landing page between photo list and gallery list.

As you can probably guess, there’s currently no way to add additional pages. But I’m doing this work now so that it should be easier to do later.

Indexing In UCL

I’ve been thinking a little about how to support indexing in UCL, as in getting elements from a list or keyed values from a map. There already exists an index builtin that does this, but I’m wondering if this can be, or even should be, supported in the language itself.

I’ve reserved . for this, and it’ll be relatively easy to make use of it to get map fields. But I do have some concerns with supporting list element dereferencing using square brackets. The big one is that if I were to use square brackets the same way many other languages do, I suspect (although I haven’t confirmed) that it could lead to the parser treating them as two separate list literals. This is because the scanner ignores whitespace, and there are no other syntactic indicators to separate arguments to proc calls, like commas:

echo $x[4]      --> echo $x [4]
echo [1 2 3][2] --> echo [1 2 3] [2]

So I’m not sure what to do here. I’d like to add support for . for map fields, but it feels strange doing just that and having nothing for list elements.

I can think of three ways to address this.

Do Nothing — the first option is easy: don’t add any new syntax to the language and just rely on the index builtin. TCL does this with lindex, as does Lisp with nth, so I’d be in good company here.

Use Only The Dot — the second option is to add support for the dot and not the square brackets. This is what the Go templating language does for keys of maps or struct fields. They also have an index builtin too, which will work with slice elements.

I’d probably do something similar, but I may extend it to support list elements as well. Getting the value of a field would work as you’d expect, but to get the element of a list, the construct .(x) can be used:

echo $x.hello     # returns the "hello" field
echo $x.(4)       # returns the fourth element of a list

One benefit of this could be that the .(x) construct would itself be a pipeline, meaning that strings and calculated values could be used as well:

echo $x.("hello")
echo $x.($key)
echo $x.([1 2 3] | len)
echo $x.("hello" | toUpper)

I can probably get away with supporting this without changing the scanner or compromising the language design too much. It would be nice to add support for ditching the dot completely when using the parentheses, à la BASIC, but I’d probably run into the same issues as with the square brackets if I did, so I think that’s out.

Use Parentheses To Be Explicit — the last option is to use square brackets, but modify the grammar slightly to only allow suffix expansion within parentheses. That way, if you want to pass a list element as an argument, you have to use parentheses:

echo ($x[4])       # fourth element of $x
echo $x[4]         # $x, along with a list containing "4"

This is what you’d see in more functional languages like Elm and, I think, Haskell. I’ll have to see whether this could work with changes to the scanner and parser if I were to go with this option. I think it may be achievable, although I’m not sure how.

An alternative might be to go the other way, and modify the grammar rules so that the square brackets bind closer to the list, which would mean that separate arguments involving square brackets would need to be in parentheses:

echo $x[4]         # fourth element of $x
echo $x ([4])      # $x, along with a list containing "4"

Or I could modify the scanner to recognise whitespace and use that as a guide to determine whether square brackets follow a value: no space would mean the brackets represent an element suffix, and at least one space would mean two separate values.

So that’s where I am at the moment. I guess it all comes down to what works best for the language as a whole. I can live with option one, but it would be nice to have the syntax. I’d rather not go with option three, as I’d like to keep the parser simple (I’d rather not add to all the new-line complexities I have already).

Option two would probably be the least compromising to the design as a whole, even if the aesthetics are a bit strange. I can probably get used to them though, and I do like the idea of index elements being pipelines themselves. I may give option two a try and see how it goes.

Anyway, more on this later.

Tape Playback Site

Thought I’d take a little break from UCL today.

Mum found a collection of old cassette tapes of us when we were kids, making and recording songs and radio shows. I’ve been digitising them over the last few weeks, and today the first recorded cassette was ready to share with the family.

I suppose I could’ve just given them raw MP3 files, but I wanted to record each cassette as two large files — one per side — so as to not lose much of the various crackles and clatters made when the tape recorder was stopped and started. But I did want to catalogue the more interesting points in the recording, and it would’ve been a bit “meh” simply giving them to others as one long list of timestamps (simulating the rewind/fast-forward seeking action would’ve been a step too far).

Plus, simply emailing MP3 files wasn’t nearly as interesting as what I did do, which was  to put together a private site where others could browse and play the recorded tapes:

The landing page, listing the available tapes (of which there's only one right now).
Playback of a tape side, with chapter links for seeking.

The site is not much to talk about — it’s a Hugo site using the Mainroad theme and deployed to Netlify. There is some JavaScript that moves the playhead when a chapter link is clicked, but the rest is just HTML and CSS. But I did want to talk about how I got the audio files into Netlify. I wanted to use `git lfs` for this and have Netlify fetch them when building the site. Netlify doesn’t do this by default, and I get the sense that Netlify’s support for LFS is somewhat deprecated. Nevertheless, I gave it a try by adding an explicit `git lfs` step in the build to fetch the audio files. And it could’ve been that I was using the LFS command incorrectly, or maybe it was invoked at the wrong time. But whatever the reason, the command errored out and the audio files didn’t get pulled. I tried a few more times, and I probably could’ve got it working if I stuck with it, but all those deprecation warnings in Netlify’s documentation gave me pause.

So what I ended up doing was turning off builds in Netlify and using a GitHub Action which builds the Hugo site and publishes it to Netlify using the CLI tool. Here’s the GitHub Action in full:

name: Publish to Netlify
on:
  push:
    branches: [main]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true
          fetch-depth: 0    
          lfs: true
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
            hugo-version: '0.119.0'
      - name: Build Site
        run: |
          npm install
          hugo          
      - name: Deploy
        env:
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
        run: |          
          netlify deploy --dir=public --prod

This ended up working quite well: the audio files made it to Netlify and were playable on the site. The builds are also quite fast: around 55 seconds (an earlier version involved building Hugo from source, which took 5 minutes). So for anyone else interested in trying to serve LFS files via Netlify, maybe try turning off the builds and going straight to using a GitHub Action and the CLI tool. That is… if you can swallow the price of LFS storage in GitHub. Oof! A little pricey. It might be that I’ll need to use something else for the audio files.

UCL: Brief Integration Update and Modules

A brief update on where I am with UCL and integrating it into Dynamo-browse. I did manage to get it integrated, and it’s now serving as the interpreter for commands entered during a session.

It works… okay. I decided to avoid all the complexities I mentioned in the last post — all that about continuations, etc. — and simply kept the commands returning tea.Msg values. The original idea was to have the commands return usable values if they were invoked in a non-interactive manner. For example, the table command invoked in an interactive session will bring up the table picker for the user to select the table. But when invoked as part of a call to another command, maybe it would return the current table name as a string, or something.

But I decided to ignore all that and simply kept the commands as they are. Maybe I’ll add support for this in a few commands down the line? We’ll see. I guess it depends on whether it’s necessary.

Which brings me to why this is only working “okay” at the moment. Some commands return a tea.Msg which asks for some input from the user. The table command is one; another is set-attr, which prompts the user to enter an attribute value. These are implemented as a message which commands the UI to go into an “input mode”, and will invoke a callback on the message when the input is entered.

This is not an issue for single commands, but it becomes one when you start entering multiple commands that prompt for input, such as two set-attr calls:

set-attr this -S ; set-attr that -S

What happens is that two messages to show the prompt are sent, but only one of them is shown to the user, while the other is simply swallowed.

Fixing this would require some re-engineering, either with how the controllers returning these messages work, or the command handlers themselves. I can probably live with this limitation for now — other than this, the UCL integration is working well — but I may need to revisit this down the line.

Modules

As for UCL itself, I’ve started working on the builtins. I’m planning to have a small set of core builtins for the most common stuff, and the rest implemented in the form of “modules”. The idea is that the core will most likely be available all the time, but the modules can be turned on and off by the language embedder based on what they need or are comfortable having.

Each module is namespaced with a prefix, such as os for operating system operations, or fs for file-system operations. I’ve chosen the colon as the namespace separator, mainly so I can reserve the dot for field dereferencing, but also because I think TCL uses the colon as a namespace separator as well (I think I saw it in some sample code). The first implementation of this was simply adding the colon to the list of characters that make up the IDENT token. This broke the parser, as the colon is also used as the map key/value separator, and the parser couldn’t resolve maps anymore. I had to extend the “ident” parse rule to support multiple IDENT tokens separated by colons. The module builtins are simply added to the environment with their fully-qualified name, complete with prefix and colon, and invoking them with one of these idents will just “flatten” all the colon-separated tokens into a single string. Not sophisticated, but it’ll work for now.

There aren’t many builtins for these modules at the moment: just a few for reading environment variables and getting files as lists of strings. Dynamo-browse is already using this in a feature branch, and it’s allowed me to finally add a long-standing feature I’ve been meaning to add for a while: automatically enabling read-only mode when accessing DynamoDB tables in production. With modules, this construct looks a little like the following:

if (eq (os:env "ENV") "prod") {
    set-opt ro
}

It would’ve been possible to do this with the scripting language already used by Dynamo-browse. But this is the motivation for integrating UCL: it makes these sorts of constructs much easier to do, much as one would write a shell script rather than something in C.

UCL: Breaking And Continuation

I’ve started trying to integrate UCL into a second tool: Dynamo Browse. And so far it’s proving to be a little difficult. The problem is that this will be replacing a dumb string splitter with command handlers that currently return a tea.Msg type that changes the UI in some way.

UCL builtin handlers return an interface{} result, or an error result, so there’s no reason why this wouldn’t work. But tea.Msg is also an interface{} type, so it will be difficult to tell a UI message apart from a result that’s usable as data.

This is a Dynamo Browse problem, but it’s still a problem I’ll need to solve. It might be that I’ll need to return tea.Cmd types — which are functions returning tea.Msg — and have the UCL caller detect these and dispatch them when they’re returned. That’s a lot of function closures, but it might be the only way around this (well, the alternative is returning an interface type with a method that returns a tea.Msg, but that’ll mean a lot more types than I currently have).
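
If I go down that path, the caller side might look a little like this sketch. The message types and the way the evaluation result arrives are stand-ins, not the actual API:

import tea "github.com/charmbracelet/bubbletea"

// Hypothetical messages used to carry results back into the UI.
type errMsg struct{ err error }
type resultMsg struct{ val any }

// dispatchResult turns whatever a UCL command returned into a tea.Cmd that
// Bubble Tea can run.
func dispatchResult(res any, err error) tea.Cmd {
  if err != nil {
    return func() tea.Msg { return errMsg{err} }
  }
  if cmd, ok := res.(tea.Cmd); ok {
    return cmd // a UI action: run it and deliver the resulting tea.Msg
  }
  return func() tea.Msg { return resultMsg{res} } // a plain data result
}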

Anyway, more on this in the future I’m sure.

Break, Continue, Return

As for language features, I realised that I never had anything to exit early from a loop or proc. So I added break, continue, and return commands. They’re pretty much what you’d expect, except that break can optionally return a value, which will be used as the resulting value of the foreach loop that contains it:

echo (foreach [5 4 3 2 1] { |n|
  echo $n
  if (eq $n 3) {
    break "abort"
  }
})
--> 5
--> 4
--> 3
--> abort

These are implemented as error types under the hood. For example, break will return an errBreak type, which will flow up the chain until it is handled by the foreach command (continue is also an errBreak with a flag indicating that it’s a continue). Similarly, return will return an errReturn type that is handled by the proc object.

This fits quite naturally with how the scripts are run. All I’m doing is walking the tree, calling each AST node as a separate function call and expecting it to return a result or an error. If an error is returned, the function bails, effectively unrolling the stack until the error is handled or it’s returned as part of the call to Eval(). So leveraging this stack-unroll process already in place makes sense to me.
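
As a rough sketch of what that looks like — with the names guessed at rather than copied from the actual code — the error type and the handling inside foreach are along these lines:

// errBreak is returned by the break and continue commands, and travels up the
// normal error path until a loop handles it.
type errBreak struct {
  isContinue bool // set when this is a continue rather than a break
  value      any  // optional value passed to break
}

func (e errBreak) Error() string { return "break called outside of a loop" }

// Inside the foreach builtin, after evaluating the block for one element:
//
//   res, err := block.Invoke(ctx, elem)
//   var brk errBreak
//   if errors.As(err, &brk) {
//     if brk.isContinue {
//       continue              // move on to the next element
//     }
//     return brk.value, nil   // stop the loop, using break's value as the result
//   }
//   if err != nil {
//     return nil, err         // any other error keeps unrolling the stack
//   }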

I’m not sure if this is considered idiomatic Go. I get the impression that using error types to handle flow control outside of adverse conditions is frowned upon. This reminds me of all the arguments against using exceptions for flow control in Java. Those arguments are good ones: following the flow of execution between try and catch makes little sense when it can be expressed more clearly with an if.

But I’m going to defend my use of errors here. Like most Go projects, the code is already littered with if err != nil { return err } clauses to exit early when a non-nil error is returned. And since Go developers preach the idea of errors simply being values, why not use errors here to unroll the stack? It’s better than the alternatives, such as detecting a sentinel result type or adding a third return value, which would just mean yet another if bla { return res } clause.

Continuations

Now, an idea is brewing for a feature I’m calling “continuations” that might be quite difficult to implement. I’d like to provide a way for a user builtin to take a snapshot of the call stack, and resume execution from that point at a later time.

The reason for this is that I’d like all the asynchronous operations to be transparent to the UCL user. Consider a UCL script with a sleep command:

echo "Wait here"
sleep 5
echo "Ok, ready"

sleep could simply be a call to time.Sleep(), but say you’re running this as part of an event loop, and you’d prefer to set up a timer instead of blocking the thread. You may want to hide this from the UCL script author, so they don’t need to worry about callbacks.

Ideally, this can be implemented by the builtin using a construct similar to the following:

func sleep(ctx context.Context, args ucl.CallArgs) (any, error) {
  var secs int
  if err := args.Bind(&secs); err != nil {
    return nil, err
  }

  // Save the execution stack
  continuation := args.Continuation()

  // Schedule the sleep callback
  go func() {
    <-time.After(time.Duration(secs) * time.Second)

    // Resume execution later, yielding `secs` as the return value
    // of the `sleep` call. This will run the "Ok, ready" echo call
    continuation(ctx, secs)
  }()

  // Halt execution now
  return nil, ucl.ErrHalt
}

The only trouble is, I’ve got no idea how I’m going to do this. As mentioned above, UCL executes the script by walking the parse tree with normal Go function calls. I don’t want to be in the position of creating a snapshot of the Go call stack. That’s a little too low-level for what I want to achieve here.

I suppose I could store the visited nodes in a list when the ErrHalt is raised; or maybe replace the Go call stack with an in memory stack, with AST node handlers being pushed and popped as the script runs. But I’m not sure this will work either. It would require a significant amount of reengineering, which I’m sure will be technically interesting, but will take a fair bit of time. And how is this to work if a continuation is made in a builtin that’s being called from another builtin? What should happen if I were to run sleep within a map, for example?

So it might be that I’ll have to use something else here. I could potentially do something using goroutines: the script is executed on a goroutine, and args.Continuation() does something like pause it on a channel. How that would work without the builtin handler requesting the continuation being paused itself, I’m not so sure. Maybe the handlers could be dispatched on a separate goroutine as well?

A simpler approach might be to just offload this to the UCL user, and have them run Eval on a separate goroutine and simply sleep the thread. Callbacks that need input from outside could simply be sent using channels passed via the context.Context. At least that’ll lean into Go’s first-party support for synchronisation, which is arguably a good thing.
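
In other words, something like the sketch below, where the embedder owns the goroutine and the builtins are free to block. The ucl.Interp type and the Eval signature here are placeholders for whatever the API ends up being:

type evalResult struct {
  val any
  err error
}

// evalAsync runs a UCL script on its own goroutine, so builtins like sleep are
// free to block, and delivers the final result over a channel.
func evalAsync(ctx context.Context, interp *ucl.Interp, script string) <-chan evalResult {
  ch := make(chan evalResult, 1)
  go func() {
    res, err := interp.Eval(ctx, script)
    ch <- evalResult{res, err}
  }()
  return ch
}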

UCL: The Simplifications Paid Off

The UCL simplifications have been implemented, and they seem to be largely successful.

Ripped out all the streaming types, and changed pipes to simply pass the result of the left command as first argument of the right.

"Hello" | echo ", world"
--> "Hello, world"

This has dramatically improved the usefulness of pipes. Previously, pipes could only be used to connect streams. But now, with pretty much anything able to flow through a pipe, the list of commands that work with them has extended to pretty much every builtin and user-defined proc. Furthermore, a command no longer needs to know that it’s being used in a pipeline: whatever flows through the pipe is passed transparently via the first argument to the call. This has made pipes more useful, and usable in more situations.

Macros can still know whether there exists a pipe argument, which can make for some interesting constructs. Consider this variant of the foreach macro, which can “hang off” the end of a pipe:

["1" "2" "3"] | foreach { |x| echo $x }
--> 1
--> 2
--> 3

Not sure if this variant is useful, but I think it could be. It seems like a natural way to iterate items passed through the pipe. I’m wondering if this could extend to the if macro as well, but that variant might not be as natural to read.

Another simplification was changing the map builtin to accept anonymous blocks, as well as “invokable” commands by name. Naturally, this also works with pipes:

[a b c] | map { |x| toUpper $x }
--> [A B C]

[a b c] | map toUpper
--> [A B C]

As for other language features, I finally got around to adding support for integer literals. They look pretty much how you expect:

set n 123
echo $n
--> 123

One side effect of this is that an identifier can no longer start with a dash followed by a digit, as that would be parsed as the start of a negative integer. This probably isn’t a huge deal, but it could affect command switches, which are essentially just identifiers that start with a dash.

Most of the other work was done behind the scenes, trying to make UCL easier to embed. I added the notion of “listable” and “hashable” proxy objects, which allow the UCL user to treat a Go slice or a Go struct as a list or hash respectively, without the embedder doing anything other than returning them from a function (I’ve yet to add this support to Go maps).

A lot of the native API is still a huge mess, and I really need to tidy it up before I’d be comfortable opening up the source. Given that the language is now featureful enough to be useful, I’ll probably start working on this next. Plus adding builtins. I really need to start adding useful builtins.

Anyway, more to come on this topic I’m sure.

Oh, one last thing: I’ve put together an online playground where you can try the language out in the browser. It’s basically a WASM build of the language running in a JavaScript terminal emulator. It was a little bit of a rush job and there’s no reason for building this other than it being a fun little thing to do.

You can try it out here, if you’re curious.

Simplifying UCL

I’ve been using UCL for several days now in that work tool I mentioned, and I’m wondering if the technical challenge of making a featureful language is crowding out what I set out to do: making a useful command language that is easy to embed.

So I’m thinking of making some simplifications.

The first is to expand the possible use of pipes. To date, the only thing that can travel through pipes are streams. But many of the commands I’ve been adding simply return slices. This is probably because there’s currently no “stream” type available to the embedder, but even if there were, I’m wondering if it makes sense to allow the embedder to pass slices, and other types, through pipes as well.

So, I think I’m going to take a page out of Go’s template book and simply have pipes act as syntactic sugar over sequential calls. The goal is to make the construct a | b essentially be the same as b (a), where the first argument of b will be the result of a.

As for streams, I’m thinking of removing them as a dedicated object type. Embedders could certainly make analogous types if they need to, and the language should support that, but the language will no longer offer first class support for them out of the box. 

The second is to remove any sense of “purity” of the builtins. You may recall the indecision I had regarding using anonymous procs with the map command:

I’m not sure how I can improve this. I don’t really want to add automatic dereferencing of identifiers: they’re very useful as unquoted string arguments. I suppose I could add another construct that would support dereferencing, maybe by enclosing the identifier in parenthesis.

I think this is the wrong way to think about it. Again, I’m not here to design a pure implementation of the language. The language is meant to be easy to use, first and foremost, in an interactive shell, and if that means sacrificing purity for a map command that supports blocks, anonymous procs, and automatic dereferencing of commands just to make it easier for the user, then I think that’s a trade worth making.

Anyway, that’s the current thinking as of now.

Imports And The New Model

Well, I dragged Photo Bucket out today to work on it a bit.

It’s fallen by the wayside a little, and I’ve been wondering if it’s worth continuing work on it. So many things about it that need to be looked at: the public site looks ugly, as does the admin section; working with more than a single image is a pain; backup and restore needs to be added; etc.

I guess every project goes through this “trough of discontent”, where the initial excitement has worn off and all you see is a huge laundry list of things to do. Not to mention the wandering eye looking at the alternatives.

But I do have to stop myself from completely junking it, since it’s actually being used to host the Folio Red Gallery.  I guess my push to deploy it has entrapped me (well, that was the idea of pushing it out there in the first place).

Anyway, it’s been a while (last update is here) and the move to the new model is progressing. And it’s occurred to me that I haven’t actually talked about the new model (well, maybe I have but I forgot about it).

Previously, the root model of the data structure was the Store. All images belonged to a Store, which was responsible for managing their physical storage and retrieval. These stores could have sub-stores, which were usually used to hold the images optimised for a specific use (serving on the web, showing as thumbnails, etc.). Separate to this was the public site Design, which handled the properties of the public site: how it should look, what the title and description are, etc.

The "Store" model

There were some serious issues with this approach: images were owned by stores, and two images could belong to two different stores, yet they’d all belong to the same site. This made uploading confusing: which store should the image live on? I worked around this by adding the notion of a “primary store”, but this was just ignoring the problem and defeated the whole multiple-store approach.

This was made even worse when one considers which store to use for serving the images. Down the line I was hoping to support virtual domain hosting, where one could set up different image sites on different domains that all pointed to the same instance. So imagine how that would work: one visitor wants to view images from alpha.example.com and another wants to view images from beta.example.com. Should the domains live on the store? What about the site designs? Where should they live?

The result was that this model could only really ever support one site per Photo Bucket instance, requiring multiple deployments for different sites if one wanted to use a single host for separate photo sites.

So I re-engineered the model to simplify this dramatically. Now, the root object is the Site:

The "Site" model

Here, the Site owns everything. Images are associated with sites, not stores. Stores still exist, but their role is now more in line with what the sub-stores did. When an image is uploaded, it is stored in every Store of the site, and each Store is responsible for optimising it for a specific use-case. The logic used to determine which Store to fetch an image from is still in place, but now it can be assumed that any Store associated with a site will have the image.

Now the question of which Store an image should be added to is easy: all of them.

Non-image data, such as Galleries and Designs now live off the Site as well, and if virtual hosting is added, so would the domain that serves that Site.

At least one site needs to be present at all times, and it’s likely most instances will simply have a single Site for now. But this assumption solves the upload and hosting resolution issues listed above. And if multiple-site support is needed, a simple site picker can be added to the admin page (the public pages will rely on the request hostname).
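To make the ownership chain concrete, here’s a minimal sketch of how the new model hangs together, in Go. The type and field names are my own shorthand for illustration, not the actual Photo Bucket code.

package model

// Illustrative placeholders for the non-image data that now hangs off the Site.
type Design struct{ Title, Description string }
type Gallery struct{ Name string }

// Store optimises and holds images for one specific use-case.
type Store struct {
    Use string // e.g. "original", "web", "thumbnail"
}

// Save would write the optimised image data somewhere; stubbed out here.
func (s Store) Save(img Image, data []byte) error { return nil }

// Image belongs to a Site, not to any particular Store.
type Image struct {
    SiteID int64
}

// Site is the root object: it owns the images, stores, galleries and design.
type Site struct {
    ID        int64
    Design    Design
    Galleries []Gallery
    Stores    []Store
}

// Upload stores the image in every Store of the Site, so any Store associated
// with the Site can later be assumed to have it.
func (s *Site) Upload(img Image, data []byte) error {
    for _, store := range s.Stores {
        if err := store.Save(img, data); err != nil {
            return err
        }
    }
    return nil
}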

This new model was added a while ago and, as of today, has been merged to main. But I didn’t want to deal with writing the data migration logic for this, so my plan is to simply junk the existing instance and replace it with the brand-new one. But in order to do so, I needed to export the photos from the old instance and import them into the new one.

The export logic has been deployed and I made an export this morning. Today, the import logic was finished and merged. Nothing fancy: like the export, it’s only invokable from the command line. But it’ll do the job for now.

The next step is to actually deploy this, which I guess will be the ultimate test. Then, I’m hoping to add support for galleries in the public page so I can separate images on the Folio Red Gallery into projects. There’s still no way to add images in bulk to a gallery. Maybe this will give me the incentive to do that next.

UCL: Procs and Higher-Order Functions

More on UCL yesterday evening. The biggest change is the introduction of user functions, called “procs” (the same name used in TCL):

proc greet {
    echo "Hello, world"
}

greet
--> Hello, world

Naturally, like most languages, these can accept arguments, which use the same block variable binding as the foreach loop:

proc greet { |what|
    echo "Hello, " $what
}

greet "moon"
--> Hello, moon

The name is also optional and, if omitted, makes the function anonymous. This allows functions to be set as variable values, and also to be returned as results from other functions.

proc makeGreeter { |greeting|
    proc { |what|
        echo $greeting ", " $what
    }
}

set helloGreater (makeGreeter "Hello")
call $helloGreater "world"
--> Hello, world

set goodbye (makeGreeter "Goodbye cruel")
call $goodbye "world"
--> Goodbye cruel, world

I’ve added procs as a separate object type. At first glance, this may seem a little unnecessary. After all, aren’t blocks already a specific object type?

Well, yes, that’s true, but there are some differences between a proc and a regular block. The big one is that a proc has a defined scope. Blocks adapt to the scope in which they’re invoked, whereas a proc closes over and carries the scope in which it was defined, a lot like closures in other languages.

It’s not a perfect implementation at this stage, since the set command only sets variables within the immediate scope. This means that modifying closed-over variables is currently not supported:

# This currently won't work
proc makeSetter {
    set bla "Hello, "
    proc appendToBla { |x|
        set bla (cat $bla $x)
        echo $bla
    }
}

set er (makeSetter)
call $er "world"
# should be "Hello, world"
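For what it’s worth, here’s a rough Go sketch of why that fails under the current scoping rules: lookups walk the parent chain, but set only ever writes to the innermost scope. This is just my guess at the shape of the evaluator, not the real UCL internals.

package scope

// Env is a guess at the shape of an evaluator scope: a map of variables plus
// a pointer to the enclosing scope (which a proc closes over).
type Env struct {
    vars   map[string]any
    parent *Env
}

func NewEnv(parent *Env) *Env {
    return &Env{vars: map[string]any{}, parent: parent}
}

// Get walks the parent chain, so closed-over variables can be read.
func (e *Env) Get(name string) (any, bool) {
    for env := e; env != nil; env = env.parent {
        if v, ok := env.vars[name]; ok {
            return v, true
        }
    }
    return nil, false
}

// Set only writes to the immediate scope, so `set bla ...` inside the inner
// proc creates a local bla rather than updating the closed-over one.
func (e *Env) Set(name string, value any) {
    e.vars[name] = value
}

// One possible fix: walk up and update an existing variable if there is one,
// only creating a new variable in the immediate scope otherwise.
func (e *Env) SetExisting(name string, value any) {
    for env := e; env != nil; env = env.parent {
        if _, ok := env.vars[name]; ok {
            env.vars[name] = value
            return
        }
    }
    e.vars[name] = value
}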

Higher-Order Functions

The next bit of work is finding out how best to invoke these procs in higher-order functions. There are some challenges here that deal with the language grammar.

Invoking a proc by name is fine, but since the grammar required the first token to be a command name, there was no way to invoke a proc stored in a variable. I quickly added a new call command — which takes the proc as the first argument — to work around it, but after a while, this got a little unwieldy to use (you can see it in the code sample above).

So I decided to modify the grammar to allow any arbitrary value to be the first token. If it’s a variable that is bound to something “invokable” (i.e. a proc), and there exists at least one other argument, it will be invoked. So the above can be written as follows:

set helloGreater (makeGreeter "Hello")
$helloGreater "world"
--> Hello, world

At least one argument is required, otherwise the value will simply be returned. This is so that the values of variables and literals can be returned as is, but it does mean that lambdas will simply be dereferenced:

"just, this"
--> just, this

set foo "bar"
$foo
--> bar

set bam (proc { echo "BAM!" })
$bam
--> (proc)

To get around this, I’ve added the notion of the “empty sub”, which is just the construct (). It evaluates to nil, and since a function ignores any extra arguments not bound to variables, it allows for calling a lambda that takes no arguments:

set bam (proc { echo "BAM!" })
$bam ()
--> BAM!

It does allow for other niceties, such as using a falsey value:

if () { echo "True" } else { echo "False" }
--> False
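Putting the pieces together, here’s a guess at how that dispatch rule could look in Go. The names are made up and the real evaluator almost certainly differs, but it captures the “invokable plus at least one argument” behaviour, including how the empty sub satisfies it.

package eval

// Proc stands in for UCL's invokable object type.
type Proc struct {
    Call func(args []any) any
}

// evalStatement invokes an "invokable" first value only when at least one more
// argument is present; anything else (a string, a number, or a lone proc
// reference) is simply returned as the statement's value. The empty sub ()
// evaluates to nil, which is how `$bam ()` supplies that one extra argument
// for a lambda that takes no parameters.
func evalStatement(first any, args []any) any {
    if p, ok := first.(*Proc); ok && len(args) > 0 {
        return p.Call(args)
    }
    return first
}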

With lambdas now in place, I’m hoping to work on some higher-order functions. I’ve started working on map, which accepts either a list or a stream. It’s a buggy mess at the moment, but some basic constructs currently work:

map ["a" "b" "c"] (proc { |x| toUpper $x }) 
--> stream ["A" "B" "C"]

(Oh, by the way, when setting a variable to a stream using set, it will now collect the items into a list. Or at least that’s the idea; it’s not working at the moment.)

A more refined approach would be to treat commands as lambdas. The grammar supports this, but the evaluator doesn’t. For example, you cannot write the following:

# won't work
map ["a" "b" "c"] toUpper

This is because toUpper will be treated as a string, and not as a reference to an invokable command. It will work for variables, though. You can do this:

set makeUpper (proc { |x| toUpper $x })
map ["a" "b" "c"] $makeUpper

I’m not sure how I can improve this. I don’t really want to add automatic dereferencing of identifiers: they’re very useful as unquoted string arguments. I suppose I could add another construct that would support dereferencing, maybe by enclosing the identifier in parentheses:

# might work?
map ["a" "b" "c"] (toUpper)

Anyway, more on this in the future I’m sure.

UCL: First Embed, and Optional Arguments

Came up with a name: Universal Control Language: UCL. See, you have TCL; but what if, instead of being used for tools, it could be more universal? Sounds so much more… universal, am I right? 😀

Yeah, okay. It’s not a great name. But it’ll do for now.

Anyway, I’ve started integrating this language with the admin tool I’m using at work. This tool is the impetus for this whole endeavour. Up until now, it was just a standard CLI command usable from the shell. But it’s not uncommon for me to have to invoke it multiple times in quick succession, and each time I invoke it, it needs to connect to backend systems, which can take a few seconds. Hence why I’m converting it into a REPL.

Anyway, I added UCL to the tool, along with a readline library, and wow, did it feel good to use. So much better than the simple quote-aware string splitter I would’ve used otherwise. And just after I added it, I got a flurry of requests from my boss to gather some information, and although the language couldn’t quite handle the task due to missing or unfinished features, I can definitely see the potential there.

I’m trying my best to only use what will eventually be the public API to add the tool-specific bindings. The biggest issue is that these “user bindings” (i.e. the non-builtins) desperately need support for producing and consuming streams. They’re currently producing Go slices, which are being passed around as opaque “proxy objects”, but these can’t be piped into other commands to, say, filter or map. Some other major limitations:

One last thing that would be nice is the ability to define optional arguments. I actually started work on that last night, seeing that it’s relatively easy to build. I’m opting for a style that looks like the switches you’d find on the command line, with option names starting with dashes:

join "a" "b" -separator "," -reverse
--> b, a

Each option can have zero or more arguments, and boolean options can be represented as just having the switch. This does mean that they’d have to come after the positional arguments, but I think I can live with that. Oh, and there’s no syntactic sugar for single-character options: each option must be separated by whitespace (the grammar actually treats them as identifiers). In fact, I’d like to discourage the use of single-character option names: I prefer the clarity that comes from having the name written out in full (that said, I wouldn’t rule out support for aliases). This eliminates the need for double dashes to distinguish long option names from a cluster of single-character options, so only a single dash will be used.
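I haven’t settled on the binding details, but the rough idea looks something like the following sketch (illustrative names only, not the real implementation): collect positional arguments until the first dash-prefixed token, then treat each dash-prefixed token as an option that gathers the values after it.

package options

import "strings"

// parseArgs sketches the option style described above: positional arguments
// come first, then options introduced by a single dash. Each option collects
// the values that follow it; an option with no values acts as a boolean switch.
func parseArgs(args []string) (positional []string, opts map[string][]string) {
    opts = map[string][]string{}
    current := ""
    for _, a := range args {
        switch {
        case strings.HasPrefix(a, "-"):
            current = strings.TrimPrefix(a, "-")
            opts[current] = []string{} // present even with no values (boolean switch)
        case current == "":
            positional = append(positional, a)
        default:
            opts[current] = append(opts[current], a)
        }
    }
    return positional, opts
}

// parseArgs([]string{"a", "b", "-separator", ",", "-reverse"}) gives
// positional ["a" "b"] and opts {"separator": [","], "reverse": []}.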

I’ll talk more about how the Go bindings look later, after I’ve used them a little more and they’re a little more refined.

Tool Command Language: Lists, Hashes, and Loops

A bit more on TCL (yes, yes, I’ve gotta change the name) last night. I added both lists and hashes to the language. These can be created using a literal syntax, which looks pretty much how I described it a few days ago:

set list ["a" "b" "c"]
set hash ["a":"1" "b":"2" "c":"3"]

I had a bit of trouble working out the grammar for this. I first went with something that looked a little like the following, where the key of an element is optional but the value is mandatory:

list_or_hash  --> "[" "]"        # empty list
                | "[" ":" "]"    # empty hash
                | "[" elems "]"  # elements

elems --> ((arg ":")? arg)*      # elements of a list or hash

arg --> <anything that can be a command argument>

But I think this confused the parser a little, where it was greedily consuming the key arg and expecting the : to be present to consume the value.

So I flipped it around, and now the “value” is the optional part:

elems --> (arg (":" arg)?)*

So far this seems to work. I renamed the two fields to “left” and “right”, instead of key and value. Now a list element will use just the “left” part, and a hash element will use “left” for the key and “right” for the value.

You can probably guess that the list and hash share the same AST types. This technically means that hybrid lists are supported, at least in the grammar. But I’m making sure that the evaluator throws an error when a hybrid is detected. I prefer to be strict here, as I don’t want to think about how best to support it. Better to just allow either a “pure” list or a “pure” hash.
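To sketch what I mean, here’s roughly how the shared element type and the strictness check could look in Go; the real AST carries full command arguments rather than plain strings, so this is illustrative only.

package ast

import "errors"

// elem is the shared list/hash element: a list element uses only Left,
// a hash element uses Left for the key and Right for the value.
type elem struct {
    Left  string
    Right *string // nil for list elements, set for hash elements
}

// evalListOrHash builds either a "pure" list or a "pure" hash, and rejects
// hybrids where only some elements carry a Right part.
func evalListOrHash(elems []elem) (any, error) {
    if len(elems) == 0 {
        return []string{}, nil
    }
    if elems[0].Right != nil {
        hash := map[string]string{}
        for _, e := range elems {
            if e.Right == nil {
                return nil, errors.New("hybrid list/hash literals are not supported")
            }
            hash[e.Left] = *e.Right
        }
        return hash, nil
    }
    list := make([]string, 0, len(elems))
    for _, e := range elems {
        if e.Right != nil {
            return nil, errors.New("hybrid list/hash literals are not supported")
        }
        list = append(list, e.Left)
    }
    return list, nil
}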

Well, now that we have collections, we need some way to iterate over them. For that, I’ve added a foreach loop, which looks a bit like the following:

# Over lists
foreach ["a" "b" "c"] { |elem|
  echo $elem
}

# Over hashes
foreach ["a":"1" "b":"2"] { |key val|
  echo $key " = " $val
}

What I like about this is that, much like the if statement, it’s implemented as a macro. It takes a value to iterate over, and a block with bindable variables: one for list elements, or two for hash keys and values. This does mean that, unlike most other languages, the loop variables appear within the block, rather than to the left of the collection, but having gotten used to this form of block during my Ruby days, I’m fine with it.
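Here’s a small Go sketch of that binding behaviour, with a stand-in Block type; it’s only meant to illustrate the one-variable-for-lists, two-for-hashes rule, not the actual macro machinery.

package foreach

// Block is a stand-in for the real block type: the parameter names it
// declares, plus a way to run its statements with those names bound.
type Block struct {
    Params []string
    Run    func(bindings map[string]any)
}

// forEach invokes the block once per element: lists bind a single variable,
// hashes bind two (key and value). The real macro also needs to handle
// streams and flow control, which this sketch ignores.
func forEach(collection any, block *Block) {
    switch c := collection.(type) {
    case []any:
        for _, v := range c {
            block.Run(map[string]any{block.Params[0]: v})
        }
    case map[string]any:
        for k, v := range c {
            block.Run(map[string]any{block.Params[0]: k, block.Params[1]: v})
        }
    }
}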

One fun thing about hashes is that they’re implemented using Go’s map type. This means that the iteration order is random, by design. This does make testing a little difficult (I’ve only got one test at the moment, which features a hash of length one), but I rarely depend on the order of hash keys, so I’m happy to keep it as is.

This loop is only the barest of bones at the moment. It doesn’t support flow control like break or continue, and it also needs to support streams (I’m considering a version with just the block that will accept the stream from a pipe). But I think it’s a reasonably good start.

I also spent some time today integrating this language into the tool I was building it for. I won’t talk about it here, but already it’s showing quite a bit of promise. I think, once the features are fully baked, that this would be a nice command language to keep in my tool-chest. But more on that in a later post.

Backlog Proc: A Better JQL

Backlog Proc is a simple item backlog tracker I built for work. I’d like to link backlog items to Jira tickets, so that I know whether a particular item actually has tasks written for it, and what the status of each of those tasks is. I guess these are meant to be tracked using epics, but Jira’s UI for handling such things is a mess, and I’d like to make notes that are only for my own eyes.

Anyway, I was using JQL to select the Jira tickets. And it worked, but the language is a bit verbose. Plus, the tool I’m running the queries in, jira-cli, requires that I add the project ID along with things like the epic or fix version.

So I started working on a simpler language, one that’s just a tokenised list of ticket numbers. For example, instead of writing:

project = "ABC" and key in (ABC-123 ABC-234 ABC-345)

One could just write:

ABC-123 ABC-234 ABC-345

And instead of writing:

(project = "ABC" and epicLink = "ABC-818") OR (project = "DEF" and epicLink = "DEF-222")

One could just write:

epic:(ABC-818, DEF-222)

(Note here the use of OR, in that the sets are unioned; I’m not sure how this would scale for the other constructs.)

A key characteristic is that the parser would be able to get the project ID from the query, instead of having the query writer (i.e. me) explicitly add it.

I can also do introspection, such as getting the relevant projects, by “unparsing” the query. That’s an advantage of controlling the parser and the language. Can’t do that with JQL.
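As a rough illustration of that introspection, here’s how pulling the project keys out of a bare ticket-list query might look in Go; the names are hypothetical, not the actual Backlog Proc code.

package query

import (
    "regexp"
    "sort"
)

// ticketRe matches Jira-style ticket keys like ABC-123.
var ticketRe = regexp.MustCompile(`\b([A-Z][A-Z0-9]+)-\d+\b`)

// projectsOf pulls the distinct project keys out of a tokenised ticket-list
// query such as "ABC-123 ABC-234 DEF-345", so the generated JQL can include
// the project clause without the query writer spelling it out.
func projectsOf(q string) []string {
    seen := map[string]bool{}
    var projects []string
    for _, m := range ticketRe.FindAllStringSubmatch(q, -1) {
        if !seen[m[1]] {
            seen[m[1]] = true
            projects = append(projects, m[1])
        }
    }
    sort.Strings(projects)
    return projects
}

// projectsOf("ABC-123 ABC-234 DEF-345") returns ["ABC" "DEF"].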

But, of course, I can’t cover all possible bases with this language just yet, so I’ll need a way to include arbitrary JQL. So I’ve also added a general “escape” clause to do this:

jql:"project in (bla)"

A Few Other Things

A few other things that are needed for Backlog Proc:

Tool Command Language: Macros And Blocks

More work on the tool command language (for which I need to come up with a name: I can’t use the abbreviation TCL), this time on getting multi-line statement blocks working. As in:

echo "Here"
echo "There"

I got a little wrapped up in how to configure the parser to recognise new-lines as statement separators. I tried this in the past with a hand-rolled lexer and ended up peppering NL tokens all around the grammar, and I was fearing I’d need to do something like that here. After a bit of experimentation, I think I’ve come up with a way to recognise new-lines as statement separators without making the grammar too messy. The unit tests verifying this seem to work so far.

// Excerpt of the grammar showing all the 'NL' token matches.
// These match a new-line, plus any whitespace afterwards.

type astStatements struct {
    First *astPipeline   `parser:"@@"`
    Rest  []*astPipeline `parser:"( NL+ @@ )*"`
}

type astBlock struct {
    Statements []*astStatements `parser:"LC NL? @@ NL? RC"`
}

type astScript struct {
    Statements *astStatements `parser:"NL* @@ NL*"`
}

I’m still using a stateful lexer as it may come in handy when it comes to string interpolation. Not sure if I’ll add this, but I’d like the option.

Another big addition today was macros. These are much like commands, but instead of the arguments being evaluated before being passed through, their evaluation is deferred and the macro can explicitly request it whenever it needs to. I think Lisp has something similar: this is not that novel.

This was used to implement the if command, which is now working:

set x "true"
if $x {
  echo "Is true"
} else {
  echo "Is not true"
}

Of course, there are actually no operators yet, so it doesn’t really do much at the moment.

This spurred the need for blocks, which are the third large addition made today. They’re just a group of statements wrapped up in an object type. They’re “invokable”, in that the statements can be executed to produce a result, but they’re also a value that can be passed around. It gels nicely with the macro approach.

Must say that I like the idea of using macros for things like if over baking it into the language. It can only add to the “embed-ability” of this, which is what I’m looking for.
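To make the command/macro split concrete, here’s a guess at how the two interfaces could differ in Go; purely illustrative, since these aren’t the real evaluator types.

package macro

// astNode and Evaluator stand in for the parser and evaluator types.
type astNode interface{}

type Evaluator struct{}

// Eval evaluates a single argument node; stubbed out here.
func (e *Evaluator) Eval(n astNode) any { return nil }

// A Command receives its arguments already evaluated.
type Command interface {
    Invoke(args []any) any
}

// A Macro receives the raw argument nodes plus the evaluator, so it can decide
// which arguments to evaluate and when. `if` can evaluate just the condition,
// and then only the block for the branch that was taken.
type Macro interface {
    Expand(e *Evaluator, rawArgs []astNode) any
}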

Finally, I did see something interesting in the tests. I was trying the following test:

echo "Hello"
echo "World"

And I was expecting Hello and World to be returned over two lines. But only World was being returned. Of course! Since echo is actually producing a stream and not printing anything to stdout, only World would be returned.

I decided to change this. If I want to use echo to display a message, then the above script should display both Hello and World in some manner. The downside is that I don’t think I’ll be able to support constructs like this, where echo provides a source for a pipeline:

# This can't work anymore
echo "Hello" | toUpper 

I mean, I could probably detect whether echo is connected to a pipe (the parser can give that information). But what about other commands that output something? Would they need to be treated similarly?

I think it’s probably best to leave this out for now, and have a new construct for providing literals like this to a pipe. Heck, maybe just having the string itself would be enough:

"hello" | toUpper

Anyway, that’s all for today.
