Videos
Build Indicators
AKA: Das Blinkenlights
Date: 2017 — now
Status: Steady Green
I sometimes envy those who work in hardware. To be able to build something that one can hold and touch — that’s something you really cannot do with software. And yeah, I dabbled a little with Arduino, setting up sketches that would run on prebuilt shields, but I never went beyond the point of building something that, however trivial or crappy, I could call my own.
Except for this one time.
And I admit that this thing is pretty trivial and crappy: little more than some controllable LEDs. But the ultimate irony is that it turned out to be quite useful for a bunch of software projects.
The Hardware

I built this Arduino shield a while ago, probably sometime around 2013. It’s really not that complicated: just a bunch of LEDs, each wired in series with a resistor, atop an Arduino prototyping shield. The LEDs are divided into two groups of three, with each group having a red, amber, and green LED, arranged much like two sets of traffic lights. I’m driving them from the Arduino’s analogue (PWM) output pins, making it possible to dim the LEDs (well, “dim”: the analogue output is little more than a square wave with an adjustable duty cycle).


I can’t remember why I built this shield originally: it might have had something to do with train signals, or maybe the LEDs were always intended as indicators. But after briefly using it for its original purpose, it sat on my desk for a while before I started using it as a set of indicator lights. Its first use was for a tool that monitored the download and transcoding of videos. A job would take between 45–60 minutes, and it was good to be able to start it, leave the room, and see the current progress without having to wake the screen as I passed by the door. The red LED would slowly pulse while the download was in progress, then the yellow LED would start flashing when transcoding began. Once everything was done, the green LED would be lit (or the red LED, indicating an error).

The Arduino sketch had a bunch of predefined patterns, encoded as strings. Each character indicated an intensity, with “a” being the brightest and “z” the dimmest (I believe a space or dot meant “off”). Each LED could be set to a different pattern via commands sent over the RS-232 connection. I think the code driving this connection was baked into the download-and-transcode tool itself. The Arduino resets whenever an RS-232 connection is established, and simply letting it do so when the tool started up meant I didn’t need to worry about connection state (it didn’t make the code portable, though).
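I don’t have the original sketch for that pattern scheme anymore, but the mapping it describes is easy to sketch. Here’s a hypothetical Go reconstruction of how a pattern character might translate to a PWM duty value (the exact levels the real sketch used are my guess; only the “a” brightest, “z” dimmest, space/dot off behaviour comes from memory):

```go
package main

import "fmt"

// patternLevel maps one character of a pattern string to a PWM duty
// value in the 0-255 range used by Arduino's analogWrite. This is a
// hypothetical reconstruction: 'a' is brightest, 'z' is dimmest, and
// anything else (space or dot) means off. The step size of 10 is an
// assumption, chosen so 'z' is dim but still visible.
func patternLevel(c byte) int {
	if c < 'a' || c > 'z' {
		return 0 // off
	}
	return 255 - int(c-'a')*10
}

func main() {
	for _, c := range []byte("az ") {
		fmt.Printf("%q -> %d\n", c, patternLevel(c))
	}
}
```

Playing a pattern is then just a matter of stepping through the string on a timer and writing each level to the LED’s pin.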
Watching Webpack
Eventually this tool fell out of use, and for a long time the board sat in my drawer. Projects came and went, until one came along with a problem that was perfect for this device. I was working on an HTML web app, switching between the code and a web browser while Webpack watched for changes. Because I only had a single screen, the terminal was always out of sight, behind either the code editor or the web browser, and the version of Webpack I was using would stop watching when it encountered an error (a Go application was serving the files, and Webpack was simply deploying the bundled assets to a public folder, so even though Webpack had stopped, the actual web server would keep running). Not seeing these errors, I’d fall into the trap of thinking that I was changing things, and get confused as to why I wasn’t seeing those changes in the browser. I could go on like this for a minute or two before discovering that Webpack had died on an earlier error and my changes weren’t being deployed at all. So I dug this device out, built a very simple Go CLI tool and daemon to talk to it, and hooked it into the Webpack config. When a Webpack build started, it would light up the amber LED. If the build was successful, the green LED would light up; if not, the red LED would.



This proved to be super useful, and took the guesswork out of knowing when a change was deployed. As long as the green LED was lit, I was good to go; as soon as amber turned to red, I knew I had to check for errors and get it back to green.
The sketch and daemon software is a lot simpler than what this device used to do. Instead of individual intensity patterns, the daemon (which is itself controlled by a CLI tool) communicates with the device using a very simple protocol that just turns LEDs on or off. Some of the protocol details, taken from the Arduino sketch, are included below:
/*
 * ledstatus - simple led indicators
 *
 * SERIAL FORMAT
 *
 * Commands take the form: <cmd> <pars>... NL. Any more than
 * 8 bytes (1 command, 7 parameters) will be ignored.
 *
 * Responses from the device will take the form: <status> <par>... NL
 */

// Commands
#define CMD_NOP      0x0
#define CMD_PING     'p'  // a ping, which should simply respond with RES_OK
#define CMD_TURN_ON  'o'  // 'o' <addr> :: turn on the leds at these addresses
#define CMD_TURN_OFF 'f'  // 'f' <addr> :: turn off the leds at these addresses

// Responses
#define RES_OK '1'

// LED addresses
#define PIN_ADDR_G1 (1 << 0)
#define PIN_ADDR_Y1 (1 << 1)
#define PIN_ADDR_R1 (1 << 2)
#define PIN_ADDR_G2 (1 << 3)
#define PIN_ADDR_Y2 (1 << 4)
#define PIN_ADDR_R2 (1 << 5)
But in a way the simplicity actually helps here. Because it’s now a command and daemon, I can use it with anything else where I’d like to see progress without having to look at the screen. Just now, for example, I’m working on a Go project that uses Air to rebuild and restart whenever I change a template. The cycle is slightly longer than a simple Webpack run, and I tend to reload the browser window too soon. Waiting for this device to go from amber to green cuts down on these early reloads.

So that’s the Build Indicators. The project is steady, and I have no desire to do anything significant, like modify the hardware. But if I were to work on it again, I think I’d like to add variable intensity back, and extend the command language to let the user upload custom patterns. But for the moment, it’s doing its job just fine.
Working on one of the admin sections of the project I alluded to yesterday. Here’s a screencast of how it’s looking so far.
The styling and layout are not quite final. I’m focusing more on functionality, and getting layout and whitespace looking good always takes time. But compared to how it looked before I started working on it this morning, I think it’s a good start.
In the end it took significantly more time to write about it than to actually do it, but the dot product approach seems to work.
🎥 Elm Connections #7: Deploying
In which I round out the series by deploying Clonections to Netlify.
🎥 Elm Connections #6: Fetching And JSON Decoding Puzzles
In which I use Elm’s HTTP client and JSON decoder to fetch puzzles from an external resource.
🎥 Elm Connections #5: Option Shuffling
In which I use Elm’s random number generator to shuffle the options.
In which I put away Elm for a bit to make the playfield look good (or at least, better than it was).
🎥 Elm Connections #3: Group Matching
In which I work on “categories”, the model and logic that deals with the groups the player is to “connect”, plus find my way with how sets work in Elm.
🎥 Elm Connections #2: Starting The Playfield
In which I continue work on a Connections clone in Elm by starting work on the playfield.
🎥 Elm Connections #1: First Steps
In which I record video of me building a Connections clone in Elm (while at the same time having a go at editing video).
Making some progress in learning Elm for building frontends. Started working on a Connections clone, which I’m calling “Clonections”. This is what I’ve got so far:
It’s been fun using Elm to build this. So far I’m liking the language. Of course, now I’ll have to come up with puzzles for this. 😐
I realise I’ve been posting a lot about Ivy, and not a whole lot about Archie. So to even the scales a little, here’s a video of Archie receiving a head scratch this morning.
🥛🦜
Should state that both vessels hold ordinary tap water.
Spent a little more time working on my idea for Dynamo-Browse this week. Managed to get it somewhat feature complete this weekend:
I probably should say a few words on what it actually is. The idea is to make it quick and easy to run pre-canned queries based on the currently selected item and table.
Let’s say you’ve got a table with customer information, and another table with subscription information, and they’re linked with some form of customer ID. If you wanted to see the subscriptions of a customer, up until now you’d have to copy the customer ID to the pasteboard, change the currently viewed table, then run a query to select the subscriptions with that customer ID. It’s not difficult, but it is extremely tedious.
This change is meant to streamline that. Now, in a script function, you can define a “related item” provider which, if matched against the currently displayed table, is given the currently selected item and returns a list of queries that will display items related to it (for whatever definition of “related” applies). These are presented to the user as a list; when the user chooses one, the query runs and the results are displayed.
Here’s an example of the script used for the screencasts:
ext.related_items("business-addresses", func(item) {
return [
{"label": "Customer", "query": `city=$city`, "args": {"city": "Austin"}},
{"label": "Payment", "query": `address^="3"`},
{"label": "Thing", "table": "inventory",
"query": `pk=$pk`, "args": {"pk": "01fca33a-5817-4c27-8a8f-82380584e69c"}},
]
})
ext.related_items("inventory", func(item) {
sk := string(item.attr("sk"))
return [
{"label": "SK: " + sk, "table": "business-addresses",
"query": `pk^=$sk`, "args": {"sk": sk}},
]
})
Notice how the last business-addresses item specifies the “inventory” table, and that the “inventory” provider actually uses an attribute of the item. Here’s a screencast of that working:
This feature has been on the idea board for a while. I was trying to work out how best to handle the pre-canned queries, especially considering that they will likely be different for each item and table. Some ideas I had involved adding extra UI elements the user could use to configure these queries. These would go into the workspace file, a sort of embedded database created for each session. That approach was pretty crappy, especially when you consider that workspaces usually only last until the user exits. It was only a few weeks ago that I considered using the scripting facilities to implement this (which, honestly, shows how under-utilised they remain).
Anyway, I’ve only just finished development of it. I’d still like to try it for the various related items I tend to use during my day-to-day. We’ll see how well it works out.
Idea For Mainboard Mayhem: A Remote Pickup
Sort of in-between projects at the moment so I’m doing a bit of light stuff on Mainboard Mayhem. I had an idea for a new element: a remote control which, when picked up, will allow the player to toggle walls and tanks using the keyboard, much like the green and blue buttons.
I used ChatGPT to come up with some artwork, and it produced something that was pretty decent.

Only issue was that the image was huge — 1024 x 1024 — and the tiles in Mainboard Mayhem were only 32 x 32.
I tried shrinking it down in Acorn, using various scaling algorithms. The closest that worked was bringing it down slowly to about 128 × 128 using Nearest Neighbour, then trying to go all the way down to 32 × 32 using Lanczos. That worked, but the result required true 32-bit colour to be recognisable, and I wanted to preserve the 16-colour palette used by the original Chip’s Challenge.
So using the original image as a reference, I bit the bullet and drew my own in Acorn. You can see it here in this test level:

It turned out okay. At least it’s recognisable. Anyway, I coded it up and gave it a bit of a try:
Yeah, it works well. When the player has the appropriate colour remote, they can hit either Z or X to toggle the green walls or blue tanks respectively. I really should add some indicators in the status bar to show which button to press.
Not sure what I’ll do after this. The fun part was coming up with the element. But I guess I’ll have to come up with a few puzzles that use it.
Also got a bit of trainspotting in as well. I parked at Kyneton station in the hope of seeing a train go by. Wish I could say my timing was strategic, but in truth I was just lucky.
This conversation with @Gaby got my creative juices flowing, so I thought I’d put together a small video tutorial for using the GLightbox plugin by @jsonbecker. There have been a few updates to the plugin since, so how I use it might be slightly out of date. Either way, please enjoy.
Success! Managed to get a Go app built, signed, and notarised all from within a GitHub Action. It even cross-compiles to ARM, which is something considering that it’s using SDL. Here’s the test app being downloaded and launched in a VM (ignore the black window, the interesting part is the title).
I won’t lie to you. I got some pretty strong vibes of The Birds at this point in my walk.
🔗 github.com/charmbracelet/vhs
This little tool is awesome. It allows you to easily make GIFs of a command line session from a text-based DSL. I tried it on the full screen TUI app I’m working on and it worked flawlessly.
Now wondering if I could use it for automated testing. 🤔