Thought I’d have another go at looking at BoxedWine for making an online archive of my old Delphi projects. There’ve been some significant improvements since the last time I looked at it. They don’t run fast, but that’s fine. As long as they run.
That Which Didn't Make The Cut
I did a bit of a clean-up of my projects folder yesterday, clearing out all the ideas that never made it off the ground. I figured it’d be good to write a few words about each one before erasing them from my hard drive for good.
I suppose the healthiest thing to do would be to just let them go. But what can I say? Should a time come when I wish to revisit them, it’d be better to have something written down than not. It wouldn’t be the first time I wished this was so.
Anyway, here are the ones that were removed today. I don’t have dates of when these were made or abandoned, but it’s likely somewhere between 2022 and 2024.
Interlaced
This was an idea for a YouTube client1 that would’ve used YouTube’s RSS feeds to track subscriptions. The idea came about during a time when I got frustrated with YouTube’s ads. I think it was an election year and I was seeing some distasteful political ads that really turned me off. This would’ve been a mobile app, most likely built using Flutter, and possibly with a server component to get this working with Chromecast, although I had no idea how that would work.
This never got beyond the UI mock-up stage, mainly because the prospect of working on something this large seemed daunting. Probably just as well, as YouTube solved the ads problem for me with the release of YouTube Premium.
Red Crest
I thought I could build my own blogging engine and this is probably the closest I got (well, in recent years). This project began as an alternative frontend for Dave Winer’s Drummer, rendering posts that would be saved in OPML. But it eventually grew into something of its own with the introduction of authoring features.
I got pretty far on that front, allowing draft posts and possibly even scheduled posts (or at least the mechanics for scheduled posts). One feature I did like was the ability to make private posts. These would be interleaved with the public ones once I logged in, giving me something of a hybrid between a blogging CMS and a private journal. It was also possible to get these posts via a private RSS feed. I haven’t really seen a CMS do something quite like this. I know of some that allow posts to be visible to certain cohorts of readers, but nothing for just the blog author.
In the end, it all got a bit much. While preparing the screens for uploading and managing media, I decided it wasn’t worth the effort. After all, there were so many other blogging CMSs already out there that did 90% of what I wanted.
Reno
As in “Renovation”. Not much to say about this one, other than it being an attempt to make a Pipe Dreams clone. I think I was exploring a Go-based game library and I wanted to build something relatively simple. This didn’t really go any further than what you see here.
SLog
Short for “Structured Log”. This was a tool for reading JSON log messages, like the ones produced by zerolog. It’s always difficult to read these in a regular text editor, and to be able to list them in a table made sense to me. This one was built for the terminal, but I did make a few other attempts at building something for this: one using a web-based GUI tool, and another as a native MacOS app. None of these went very far (turns out there’s a lot of tedious code involved), but this version was probably the furthest along before I stopped work.
Despite appearing on this list, I think I’ll keep this one around. The coding might be tedious, but I still need something like this, and spending the time to build it properly might be worth it one day.
Miscellany
Here are all the others that didn’t even get to the point that warranted a screenshot or a paragraph of text:
- s3-browse: a TUI tool for browsing S3 buckets. This didn’t go beyond simply listing the files of a directory.
- scorepeer: an attempt to make a collection of online score-cards, much like the Finska one I built.
- withenv: preconfigure the environment for a command with the values of a .env file (there must be something out there that does this already).
- About 3 aborted attempts to make a wiki-style site using Hugo (one called “Techknow Space”, which I thought was pretty clever).
I’m sure there’ll be more projects down the line that would receive the same treatment as these, so expect similar posts in the future.
1. Or possibly a Peertube client.
Side Scroller 95
I haven’t been doing much work on new projects recently. Mainly, I’ve been perusing my archives looking for interesting things to play around with. Some of them needed some light work to get working again, but really I just wanted to experience them.
I did come across one old project which I’ll talk about here: a game I called Side Scroller 95. And yes, the “95” refers to Windows 95.
This was a remake of Side Scroller, a QBasic game I made back in the day. Having played around with Delphi programming for a while, and finding a bunch of DirectX 7 components, I set about recreating this game. I’m not sure why I decided to remake this game vs. making something new. Was it because it was simple enough to try, or I found the levels interesting? Maybe? I can’t really say.
Anyway, there wasn’t much to this game from the beginning. All the movement was cell-based, with the usual assortment of solid tiles, hazards, and keys and locks (no pickups though). I eventually added a simple enemy as well: a robot which just went from left to right.
This project really showcases my crappy art skills at the time (I wish I could say they improved, but that would be a lie). The tiles and sprites were created using MSPaint. This was back in the day when MSPaint was actually quite usable for pixel art, where drawing a rectangle actually rendered a rectangle then and there, rather than produce an object which could be moved around (it’s just not the same now). The backgrounds were made by making gradients in Microsoft Word, taking a screenshot, and cropping them. And the sound effects were taken from the PC version of Metal Gear Solid (the WAV files were just sitting there on the file system).
The game itself is pretty unremarkable, although one thing that I really enjoyed adding was level scripts. These were Pascal scripts (interpreted by a Delphi control I found) that intercepted events, such as triggers or timeouts, and modified the level in some way. This was done late in the project, so it wasn’t used much, but it did make for some unique level-specific elements in the later levels. An example of one of the level scripts is provided below. This one added platforms which ascended one cell at a time when activated. I forget most of what the built-ins did, but I believe the OnActivateX hooks fired when a switch was triggered and the OnTimedEventX hooks fired when a timer elapsed.
unit LevelScr;

uses SSUTIL;

const
  { Stack slots used to track the moving platform's start, end, and
    current positions }
  INT_STX  = 0;
  INT_STY  = 1;
  INT_EDX  = 2;
  INT_EDY  = 3;
  INT_CURX = 4;
  INT_CURY = 5;

{ Record where a platform should start and finish moving }
procedure LevSetupMove(sx, sy, ex, ey: integer);
begin
  SetStackInteger(INT_STX, sx);
  SetStackInteger(INT_STY, sy);
  SetStackInteger(INT_EDX, ex);
  SetStackInteger(INT_EDY, ey);
  SetStackInteger(INT_CURX, sx);
  SetStackInteger(INT_CURY, sy);
end;

{ Fired when a switch is triggered: pick the platform to move based on
  the switch's tag, then start the timer that drives the movement }
procedure OnActivate1(tag: integer; ison: boolean);
begin
  case tag of
    1: LevSetupMove(6, 9, 6, 5);
    2: LevSetupMove(18, 9, 18, 4);
    3: begin
      LevSetupMove(20, 19, 20, 16);
      Level.SetTile(21, 18, 52);
    end;
    4: LevSetupMove(11, 23, 11, 19);
    5: LevSetupMove(10, 23, 10, 17);
    6: LevSetupMove(9, 17, 9, 15);
    7: LevSetupMove(9, 15, 9, 13);
    8: LevSetupMove(9, 13, 9, 11);
  end;
  SetupTimer(1, 2, 1, true);
end;

procedure OnActivate2(tag: integer; ison: boolean);
begin
  case tag of
    1: LevSetupMove(20, 20, 20, 16);
  end;
  SetupTimer(2, 2, 1, true);
end;

{ Fired on each timer tick: move the platform up one cell until it
  reaches its end position }
procedure OnTimedEvent1(event: integer);
var cx, cy: integer;
begin
  cx := StackInteger(INT_CURX);
  cy := StackInteger(INT_CURY);
  if ((cx = StackInteger(INT_EDX)) and
      (cy = StackInteger(INT_EDY))) then
  begin
    ResetTimer(1);
  end
  else
  begin
    Level.SetTile(cx, cy, 12);
    cy := cy - 1;
    Level.SetTile(cx, cy, 13);
    SetStackInteger(INT_CURX, cx);
    SetStackInteger(INT_CURY, cy);
    PlaySound(11, false);
  end;
end;

procedure OnTimedEvent2(event: integer);
var cx, cy: integer;
begin
  cx := StackInteger(INT_CURX);
  cy := StackInteger(INT_CURY);
  if ((cx = StackInteger(INT_EDX)) and
      (cy = StackInteger(INT_EDY))) then
  begin
    ResetTimer(2);
    Explode(cy, cx);
  end
  else
  begin
    cy := cy - 1;
    Level.SetTile(cx, cy, 0);
    SetStackInteger(INT_CURX, cx);
    SetStackInteger(INT_CURY, cy);
    PlaySound(11, false);
  end;
end;

end.
One other hallmark of this project was completely gutting all the hard-coded logic and moving it into a game definition file. I built a pretty simple “game designer” tool which managed all the artwork, tile and sprite definitions, and also had custom logic implemented using that same Pascal interpreter I was using for the level scripts. I never used it for anything other than the “Side Scroller” game, apart from a recreation of the File Platform game I also built way back when.
Again, nothing about this was remarkable, and for a long time I had no way to get this working. But thanks to Whisky I managed to launch this for the first time in ages and have a play. I don’t know when I’ll be able to do so again, nor whether it’s a good use of my time to try, so I recorded some screencasts of the gameplay which you can see here:
- The Test Level Set, used to test the level script feature.
- The Additional Level Set: these were the levels that really made use of level scripts, including the one listed above.
The desire to move away from cell-based movement and physics kept driving me to make platform games in Delphi. I eventually managed to build such a game, which I will talk about once I can get it working again.
Small Calculator Commands
This page documents the extra commands from Small Calculator. These were taken from the source code, pretty much as is, but styled to suit the web, and any spelling mistakes fixed. These were retrievable from the application itself by typing “help” followed by the command.
Available Commands
The list of available commands is as follows:
BLOCK <statements> Executes a block of statements
HELP [topic] Display help on topic
DEFFNC <function> Defines a new function
ECHO <text> Displays text on the line
ECHOEXPR <cmd> Executes a command and displays the result
EXEC <file> Executes a file of commands
FUNCTIONS Displays all predefined functions
IF <pred> Does a command on condition
RETURN <val> Sets the return value
RETURNEXPR <cmd> Sets the return value to the result of <cmd>
Type "HELP <command>" to see infomation on a command
BLOCK
BLOCK {<cmd1>} {<cmd2>} ...
Executes a block of commands. The commands can be any statement, including other block statements.
DEFFNC
DEFFNC <fnname>(<parameters>) = <command>
Defines a new function. The function name can only consist of letters and numbers. Only a maximum of 4 parameters can be used in the parameter list. Parameters are required to be referred to using $<name>.
Example:
deffnc test(x) = $x + 2
-- Adds two to any number
deffnc sign(k) = if {$k < 0} {-1} {if {$k > 0} {1} {0}}
-- Returns -1 if k is negative, 1 if k is positive, and 0 if k is 0.
Functions can be recursive if using the “if” command.
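For illustration, here’s the sort of definition this allows. This is my own reconstruction, not an example from the original help text, so treat the exact syntax with some suspicion:

deffnc fact(n) = if {$n < 1} {1} {$n * fact($n - 1)}
-- Returns the factorial of n, with "if" terminating the recursion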
ECHO
ECHO <string>
Displays a string on the console.
ECHOEXPR
ECHOEXPR <command>
Executes a command and displays the result on the console.
EXEC
EXEC <filename>
Executes a file of commands. Lines starting with “;” are considered comments. Lines ending with “\” are considered incomplete and the next line is appended (after trimming) to the end of that line.
FUNCTIONS
functions
Displays all predefined functions. No user functions are included.
IF
IF {<cond>} {<truepart>} {<falsepart>}
If the result of <cond> is true, executes <truepart>; otherwise executes <falsepart>.
HELP
HELP [topic]
Displays a help topic on the console window. Use “HELP <command>” to see information on a specific command.
RETURN
RETURN <val>
Sets the return value to <val>.
RETURNEXPR
RETURNEXPR <cmd>
Sets the return value to the return value of <cmd>.
Small Calculator
Date: Unknown, but probably around 2005
Status: Retired
Give me Delphi 7, a terminal control, and an expression parser, and of course I’m going to build a silly little REPL program.
I can’t really remember why I thought this was worth spending time on, but I was always interested in little languages (still am), and I guess I thought a desk calculator that used one was worth having. I was using a parser library I found on Torry’s Delphi Pages (the best site at the time to get free controls for Delphi) for something else, and after getting a control which simulated a terminal, I wrote a very simple REPL loop which used the two.
And credit to the expression parser developer: it was pretty decent. It supported assignments and quite a number of functions. Very capable for powering a desk calculator.
For a while the app was simply that. But, as with most things like this, I got the itch to extend it a little. I started by adding a few extra commands. Simple things, like one that would echo something to the screen. All quite innocent, if a little unnecessary. But it soon grew to things like if statements, blocks using curly brackets, and function definitions.
It even extended to small batch scripts, like the one below. The full set of commands is listed here.
x := 2
y := 3
if {x = y} {echo 5} \
{echo 232}
return
These never went anywhere beyond a few tests. The extra commands were not really enough to be useful, and they were all pretty awful. I was already using a parser library, so I didn’t want to spend any time extending it. As a result, many of these extensions were little more than things that scanned and spliced strings together. It was more of a macro language than anything else.
Even with the expression parser, the program didn’t see a great deal of use. I was working on its replacement at the time, which would eventually be much more capable, and as soon as that was ready, this program fell out of use.
Even so, it was still quite a quirky little program, and it made a bit of an impression.
Alto
Date: 2020 – present
Status: Rockin'
The year was 2020. The pandemic was just beginning and I was stuck at home, not being able to do much of anything. Worse, rumours came around that Google was shutting down Google Play Music, my music player of choice. They were going to force everyone onto their streaming service instead. Oh, they may have a place for all the music you’ve downloaded (or written) yourself, but not in the first version. Maybe they’ll get to it later.
“Well, fuck that”, I said to myself. “I’ve been battered around by Google shutting down things and forcing migrations onto other things one too many times. I’m going to build my own music app.”
And so I built my own music app: Alto.
Why “Alto”? Well, the name goes back to when I was in secondary school, when I was learning the viola, which uses the alto clef for its written music. I said to myself at the time that if I were ever to build a music player, it’d have something referencing the viola. The alto clef seemed like the best thing to use. Plus, it stands out amongst the other music apps that tend to use other notation symbols like notes or the treble clef.
The idea for Alto was pretty straightforward: a music player and catalogue that’ll manage and stream music from an S3 bucket. No tracking, no “promotions” or “recommendations”, no bullshit UI that’s impossible to navigate. Only the music I’m interested in, played the way I want, and a dead simple UI that puts the album front and centre. The music player that was meant for me.
The Web Catalogue
The version that would ultimately come to be consists of two parts: an Android mobile app, and a web-app. I’ll talk about the mobile app in the next post.
The web-app can be used as a player, but is ultimately responsible for managing the collection.
The web-app was built using Buffalo, a rapid web development framework for Go, much like Rails. It’s basically a simple server-side rendered web-app. The frontend consists mainly of Bootstrap, plus some Stimulus and vanilla JavaScript to handle the interactive elements.
It’s also using Turbo to prevent unloading the current page when moving to a new one. This means clicking around the site will not stop playback of the current track, a very nice feature (and doubly so when you consider that this isn’t a single-page app).
Much of the UI is dedicated to managing the catalogue, but there is also an integrated player, which can be invoked by clicking the play button. The player itself can be brought up at any time by pressing “P”. There’s no scrubber, but there are seek buttons that jump by 30 seconds, which do the job. Each of the controls in this player has an associated shortcut key that is always available, even with the player hidden.
The collection is managed within a PostgreSQL database, with the referenced files stored in an S3 bucket. S3 was chosen to ensure that if I were ever to stop work on this project, or the database were to be corrupted, I wouldn’t lose my music. It does mean that I’ve got an ongoing cost for running this service, but based on the amount of music I’m keeping and my listening patterns, the monthly bill is around 50–60¢ AUD for S3, plus $12.00 US for the web-app server.
The catalogue model was made as simple as possible. The main construct is the Album, which consists of zero or more Tracks. Albums have things like title, artist, cover images, etc., but these are nothing more than properties of an album.
Tracks could be added one at a time via the frontend, or uploaded from a Zip file pulled from a URL (useful for songs bought on Bandcamp). The catalogue tries its best to avoid uploading media via the web-server, either opting to pull it in from the backend or upload it to S3 directly. This was a deliberate choice to reduce the amount of bandwidth the web-server uses, but it does mean jumping through some strange hoops. For example, when uploading a single track via the frontend, it would upload it directly to S3, then download it from S3 on the backend so that it can set catalogue metadata from the file itself (ID3 tags, MP3 length, etc.). This is pretty convoluted and doesn’t even work half the time, and if I were making this again, I’d probably just bite the bullet and allow large uploads via the frontend.
Media
The media model is a little more complicated. Media (audio files, cover images, etc.) all belong to a Repository, which is essentially a reference to an S3 bucket, although it could also reference things like a HTTP domain. A repository does have some other configuration, such as how to name the uploaded media files. I could’ve used something like a UUID, but I wanted to keep the names as human-readable as I could, so that if this project were to shut down, I would still be able to access the files from S3 myself.
An Album or Track is linked to a Media record through what’s called a Media Reference. Each of these references has a “rel” property (short for “relevance”) which describes what the media is for: an audio file for a track, a cover image for an album, etc. There also exists a classifier, which was to allow an Album or Track to use multiple Media records of a particular relevance. For example, some albums, released in different regions, had different album covers, with one being slightly darker than the other. The classifier could be used to switch between the two, based on whether Dark Mode was enabled. At least that was the theory: it never really got used for that.
Governing the link between albums, tracks and media was a simple resolution algorithm that supported things like inheritance. For example, tracks could have their own album cover, but if one didn’t exist, they would inherit it from the album. This went beyond just album covers: it would be possible, theoretically at least, to have the audio associated with the album and have the tracks reference a different cover image (I never tried this).
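As a sketch of how that resolution works (the types and names here are my own invention, not Alto’s actual code), it’s essentially “check the track’s own references first, then fall back to the album’s”:

package main

import "fmt"

// MediaRef links an object to a media record via a "rel".
type MediaRef struct {
	Rel string // what the media is for: "cover", "audio", etc.
	URL string
}

// resolveMedia returns the first reference with the given rel from the
// track's own references, falling back to the album's if the track has
// none. A nil return means no media of that rel exists for either.
func resolveMedia(trackRefs, albumRefs []MediaRef, rel string) *MediaRef {
	for _, refs := range [][]MediaRef{trackRefs, albumRefs} {
		for i := range refs {
			if refs[i].Rel == rel {
				return &refs[i]
			}
		}
	}
	return nil
}

func main() {
	album := []MediaRef{{Rel: "cover", URL: "s3://music/album-cover.jpg"}}
	track := []MediaRef{} // the track has no cover of its own
	if ref := resolveMedia(track, album, "cover"); ref != nil {
		fmt.Println(ref.URL) // inherits the album's cover
	}
}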
Finally, there are Playlists. These are pretty standard: just a collection of references to other tracks, plus some ordering information. Playlists and playlist items are essentially links and cannot have Media References themselves. There are some downsides to this: the biggest one being that Playlists do not have album cover art, which is something I’ll need to fix. Playlists can also have metadata items.
Speaking of metadata items.
Metadata Items
A number of objects can also have metadata, which could be used to attach extra attributes. The goal was to make this generic enough for end users to use it for whatever they like, with Alto having a few predefined names it uses for its own purposes.
There were actually two kinds of metadata record. The first was an arbitrary JSON structure that can be set for each Track. Only one such name was reserved, called “heading”, which was used to define track groups within albums. I do have plans for adding more attributes, such as things dedicated to Eurovision tracks (year and country, for example).
The second, and slightly older, type of metadata was for larger and more generic bits of information. I initially envisioned this being used to store things like additional artist names, but since each metadata item was a separate row, that felt like overkill, which is why I added the JSON object (not sure why I thought this; PostgreSQL is quite a capable database). The main use of this is to store chapter markers. Chapters are a pretty simple format: just a bunch of lines, each with a timestamp and name separated by an equals sign:
0=Intro
139.5=Outlaw
229=Crises
467.5=The Watcher and the Tower
599=Interlude
774.5=Sequencer
1111=Coda
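Parsing that format is trivial. Just to illustrate, here’s a sketch of a parser in Go (this is not Alto’s actual code, and the Chapter type is my own invention):

package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// Chapter is a named position within a track, in seconds.
type Chapter struct {
	Start float64
	Name  string
}

// parseChapters reads "timestamp=name" lines, skipping anything malformed.
func parseChapters(s string) []Chapter {
	var chapters []Chapter
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		ts, name, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		start, err := strconv.ParseFloat(ts, 64)
		if err != nil {
			continue
		}
		chapters = append(chapters, Chapter{Start: start, Name: name})
	}
	return chapters
}

func main() {
	for _, c := range parseChapters("0=Intro\n139.5=Outlaw") {
		fmt.Printf("%8.1f  %s\n", c.Start, c.Name)
	}
}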
I suppose I should find a way to extract chapters from the MP3 file itself, but for the moment I’m just setting them manually. I’m also using it to store things like lyrics.
I should say a few words as to how Media Refs and Metadata Items actually reference other things in the model. They use a notion of “object_type” and “object_id”, where the object type describes what the object is (album, track, playlist item, etc.) and the object ID references the actual object itself. Because this is quite generic, I can’t rely on ON DELETE CASCADE to clean these up, so I opted for database triggers to remove media refs and metadata items when the base object is removed.
-- Remove the generic references pointing at a deleted row. Note that
-- these deletes match on object_id alone, so they assume IDs are
-- unique across object types.
CREATE FUNCTION delete_dependencies_of_object() RETURNS trigger AS $$
BEGIN
    DELETE FROM media_references WHERE object_id = OLD.id;
    DELETE FROM metadata_items WHERE object_id = OLD.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER delete_dependencies_of_tracks
    AFTER DELETE ON tracks
    FOR EACH ROW EXECUTE PROCEDURE delete_dependencies_of_object();

CREATE TRIGGER delete_dependencies_of_albums
    AFTER DELETE ON albums
    FOR EACH ROW EXECUTE PROCEDURE delete_dependencies_of_object();
So far I haven’t had any issues with this, although I haven’t done a lot of snooping around the database to confirm these records are properly being cleaned up.
Grand Plans
I did have grand plans for this catalogue at one point: releasing it as open source, maybe making this a service that others could use. So there were a few things added which are unfinished and half-baked. Some examples:
- Having multiple catalogues with different access roles.
- Feature flags, which disabled certain features for users.
- Generating a QR code for an API token, for easy sign-in on the mobile app.
None of these really went anywhere, and if I were to rebuild this, I’d probably pull them out. As things stand now, it does need a bit of a refresh: upgrading Go packages, dealing with Node packages (ugh, it’s always a pain trying to update JS packages and Webpack). But since around late 2020, it’s been serving quite well as my primary music player.
Message Simulator Client
Years: 2017 – 2020
Status: Gone
I once worked at a company that was responsible for sending SMS messages via an API. Think one-time passwords when you log into websites, before time-based OTP apps were a thing. And yeah, this did involve some “marketing” messages, although we were pretty strict about outright spam or phishing messages.
Anyway, since sending messages cost us money, we had a simulator set up in our non-prod environments which we used for testing. The features were pretty minimal: basically get the list of messages sent through the API, send a status message back, and simulate a reply from the receiver. The messages were kept in memory, and there was no dedicated UI: everything was done via a web frontend generated from a Swagger spec.
Being somewhat bored one day, and getting frustrated with the clunky web frontend, I thought I’d have a go at making a MacOS client for this thing. After finding my way around Xcode and AppKit, I managed to get something that was usable.
It was not a sophisticated app in the least. It only consisted of a toolbar, a client area, and a right sidebar.
The toolbar allowed switching the environment to connect to, such as Dev or Test. I believe the environments were hard-coded, and if I wanted to add a new one, I’d have had to change the Swift code. There was a button to clear the messages from the simulator, and one to refresh the list of messages. There was also a very simple search, which simply did an in-memory substring match, but was good enough for finding unique IDs.
The client area consisted of the message ID and message body, and that’s it. Not included were source and target numbers, plus a few other things. These were occasionally useful to me, but not enough to justify the effort it would take to add them to the UI (I was more likely to use the message body anyway).
The right sidebar consisted of the message “details”, which was just the message ID and message content. There were also sections for sending a particular status message, or sending a reply to the selected message.
I always had grand plans for more features, but I couldn’t justify the time. And eventually I decided to leave, and the project was wiped once I returned my laptop.
Despite how bare-bones it was, it was still useful, and something I used most days if I had to work with the simulator. And given that it was my first attempt at a native Mac app, I was reasonably proud of it. So I think it deserves a place in the archives.
Build Indicators
AKA: Das Blinkenlights
Date: 2017 – now
Status: Steady Green
I sometimes envy those that work in hardware. To be able to build something that one can hold and touch. It’s something you really cannot do with software. And yeah, I dabbled a little with Arduino, setting up sketches that would run on prebuilt shields, but I never went beyond the point of building something that, however trivial or crappy, I could call my own.
Except for this one time.
And I admit that this thing is pretty trivial and crappy: little more than some controllable LEDs. But the ultimate irony is that it turned out to be quite useful for a bunch of software projects.
The Hardware
I built this Arduino shield a while ago, probably something like 2013. It’s really not that complicated: just a bunch of LEDs wired in series with a bunch of resistors atop an Arduino prototyping shield. The LEDs can be divided into two groups of three, with each group having a red, amber, and green LED, arranged much like two sets of traffic lights. I’m using the analogue pins of the Arduino, making it possible to dim the LEDs (well, “dimmed”: the analogue pins are little more than a square pulse with an adjustable duty cycle).
I can’t remember why I built this shield originally: it might have had something to do with train signals, or maybe they were intended as indicators right out of the box. But after briefly using them for their original purpose, it sat on my desk for a while before I started using them as indicator lights.

Their first use was for a tool that would monitor the download and transcode of videos. This would take between 45–60 minutes, and it was good to be able to start the job, leave the room, and get the current progress without having to wake the screen while I passed by the door. The red LED would slowly pulse while the download was in progress, then the yellow LED would start flashing when transcoding began. Once everything was done, the green LED would be lit (or the red LED, which would indicate an error).

The Arduino sketch had a bunch of predefined patterns, encoded as strings. Each character would indicate an intensity, with “a” being the brightest and “z” being the dimmest (I believe the space or dot meant “off”). Each LED could be set to a different pattern, which was done via commands sent over the RS-232 connection. I think the code driving this connection was baked into the download-and-transcode tool itself. The Arduino would reset whenever the RS-232 connection was formed, and just letting it do this when the tool started up meant that I didn’t need to worry about connection state (it didn’t make the code portable though).
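Just to illustrate the encoding (this is a reconstruction based on my description above, not the original sketch code), mapping a pattern character to a PWM level would be something like:

package main

import "fmt"

// patternLevel converts a pattern character to a PWM duty value:
// 'a' is the brightest (255), 'z' the dimmest non-zero level, and
// anything else (space, dot, etc.) means "off".
func patternLevel(c byte) int {
	if c < 'a' || c > 'z' {
		return 0
	}
	// Scale linearly so 'a' -> 255 and 'z' -> the lowest non-zero step.
	return 255 * int('z'-c+1) / 26
}

func main() {
	for _, c := range []byte("amz.") { // bright, middling, dim, off
		fmt.Printf("%c -> %d\n", c, patternLevel(c))
	}
}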
Watching Webpack
Eventually this tool fell out of use, and for a long time this board sat in my drawer. Projects came and went, until one came along with a problem that was perfect for this device. I was working on a HTML web-app and I was switching between the code and a web-browser, while Webpack was watching for changes. Because I only had a single screen, the terminal was always out of sight, behind either the code editor or the web-browser, and the version of Webpack I was using would stop watching when it encountered an error (a Go application was serving the files, and Webpack was simply deploying the bundled assets to a public folder, so even though Webpack would stop working, the actual web-server would continue running). Not seeing these errors, I’d fall into the trap of thinking that I was changing things, and get confused as to why I wasn’t seeing them in the browser. I could go for a minute or two like this before I found out that Webpack had died because of an earlier error and my changes were not getting deployed at all.

So I dug this device out, built a very simple Go CLI tool and daemon that would talk to it, and hacked it into the Webpack config. When a Webpack build started, it would light up the amber LED. If the build was successful, the green LED would light up; if not, the red LED would.
This proved to be super useful, and took out the guesswork of knowing when a change was deployed. As long as the green LED was lit, it’s good to go, but as soon as amber becomes red, I know I’ll have to check for errors and get it green once more.
The sketch and daemon software are a lot simpler than what this device used to run. Now, instead of individual patterns of intensity, the daemon (which is itself controlled by a CLI tool) communicates with the device using a very simple protocol that either turns LEDs on or off. Some of the protocol details, taken from the Arduino sketch, are included below:
/*
 * ledstatus - simple led indicators
 *
 * SERIAL FORMAT
 *
 * Commands take the form: <cmd> <pars>... NL. Any more than
 * 8 bytes (1 command, 7 parameters) will be ignored.
 *
 * Responses from the device will take the form: <status> <par>... NL
 *
 */

// Commands
#define CMD_NOP      0x0
#define CMD_PING     'p'  // a ping, which should simply respond with RES_OK
#define CMD_TURN_ON  'o'  // 'o' <addr> :: turn on the leds at these addresses
#define CMD_TURN_OFF 'f'  // 'f' <addr> :: turn off the leds at these addresses

// Response
#define RES_OK '1'

#define PIN_ADDR_G1 (1 << 0)
#define PIN_ADDR_Y1 (1 << 1)
#define PIN_ADDR_R1 (1 << 2)
#define PIN_ADDR_G2 (1 << 3)
#define PIN_ADDR_Y2 (1 << 4)
#define PIN_ADDR_R2 (1 << 5)
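To give a sense of how little the daemon needs to do, here’s a rough sketch of the sending side in Go. This isn’t the actual daemon code: the constants mirror the sketch above, and it assumes the serial port has already been configured (a real daemon would use a serial library to set the baud rate):

package main

import (
	"fmt"
	"os"
)

// LED address bits, mirroring the PIN_ADDR_* constants in the sketch.
const (
	AddrG1 = 1 << 0
	AddrY1 = 1 << 1
	AddrR1 = 1 << 2
)

// sendCommand writes a command and its parameter bytes, terminated by
// a newline, to the serial device.
func sendCommand(port *os.File, cmd byte, params ...byte) error {
	buf := append([]byte{cmd}, params...)
	buf = append(buf, '\n')
	_, err := port.Write(buf)
	return err
}

func main() {
	port, err := os.OpenFile("/dev/ttyUSB0", os.O_RDWR, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer port.Close()

	sendCommand(port, 'o', AddrY1)        // build started: amber on
	sendCommand(port, 'f', AddrG1|AddrR1) // green and red off
}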
But in a way the simplicity actually helps here. Because it’s now a command and daemon, I could use it in anything else where I’d like to show progress without having to see the screen. Just now, for example, I’m working on a Go project that uses Air to rebuild and restart whenever I change a template. The cycle is slightly longer than a simple Webpack run, and I tend to reload the browser window too soon. So waiting for this device to go from amber to green cuts down on these early reloads.
So that’s the Build Indicators. The project is steady, and I have no desire to do anything significant, like modify the hardware. But if I were to work on it again, I think I’d like to add variable intensity back, and extend the command language to let the user upload custom patterns. For the moment though, it’s doing its job just fine.
Broadtail
Date: 2021 – 2022
Status: Paused
The first project I’ll talk about is Broadtail. I think I’ve talked about this one before, or at least posted a screenshot of it. I started work on this in 2021. The pandemic was still raging, and much of my downtime was spent watching YouTube videos. We were coming up to a federal election, and I was getting frustrated with seeing YouTube ads from political parties that offended me. This was before I had YouTube Premium, so there was no real way to avoid these ads. Or was there?
A Frontend For youtube-dl
I had some experience with youtube-dl in the past, downloading and saving videos that I hoped to watch later. I had recently discovered that YouTube also published RSS feeds for channels and playlists. So I was wondering if it was possible to build something that could use both of these. My goal was to have something that would allow me to subscribe to YouTube channels via RSS, download videos using youtube-dl, and watch them via Plex on my TV. This was to be deployed on an Intel NUC that I was using as a home server, and be accessible via the web-browser.
I decided to get the YouTube downloading feature built first. I started a new Go project and got something up and running reasonably quickly. It was a good excuse to get back to vanilla Go web development, using http.Handle and Go templates, instead of relying on frameworks like Buffalo (don’t get me wrong, I still like Buffalo, but it is quite heavy handed).
It was also an excuse to try out StormDB, which is an embedded NoSQL data store. The technology behind it is quite good (it uses B-tree memory-mapped files under the covers), and I tend to use it for other things as well. It proved to be quite usable, apart from not allowing multiple read/writers at the same time, which made deployments difficult.
But the backend code was the easy part. What I lacked was any sense of web design. That’s one good thing about a framework like Buffalo: it comes with a usable style framework out of the box (Bootstrap). If I were to go my own way, I’d have to start from scratch.
The other side of that coin, though, is that it would give me the freedom to go for something that’s slightly off-beat. So I went for an aesthetic that reminded me of early-2000s web design: sans-serif fonts, grey lines everywhere, dull pastel colours, small controls and widgets (I stopped short at gradients and table-based layouts).
This version also included a hand-rolled job manager that I used for a bunch of other things. It’s… fine. I wouldn’t use it for anything “real”, but it had a way of managing job lifecycles, updating progress, and the ability to cancel a running job. So for that, it was good enough.
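The general shape of such a job manager is quite compact in Go. This is just a sketch under my own naming, not the actual implementation, but it shows the lifecycle, progress, and cancellation parts:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Job tracks the lifecycle of a single background task.
type Job struct {
	mu       sync.Mutex
	progress int // 0-100
	cancel   context.CancelFunc
}

func (j *Job) SetProgress(p int) {
	j.mu.Lock()
	defer j.mu.Unlock()
	j.progress = p
}

func (j *Job) Progress() int {
	j.mu.Lock()
	defer j.mu.Unlock()
	return j.progress
}

// Cancel stops the job's goroutine via its context.
func (j *Job) Cancel() { j.cancel() }

// Start runs fn in a goroutine, handing it the job so it can report progress.
func Start(fn func(ctx context.Context, j *Job)) *Job {
	ctx, cancel := context.WithCancel(context.Background())
	j := &Job{cancel: cancel}
	go fn(ctx, j)
	return j
}

func main() {
	j := Start(func(ctx context.Context, j *Job) {
		for p := 0; p <= 100; p += 10 {
			select {
			case <-ctx.Done():
				return
			case <-time.After(100 * time.Millisecond):
				j.SetProgress(p)
			}
		}
	})
	time.Sleep(350 * time.Millisecond)
	fmt.Println("progress:", j.Progress())
	j.Cancel()
}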
Finally, it needed a name. At the time, I was giving all projects bird-like codenames, since I couldn’t come up with names that I liked. I eventually settled on Broadtail, a reference to broadtail parrots, like the rosella.
RSS Subscriptions
It didn’t take long after I got this up and running before I realised I needed the RSS subscription feature. So that was the next thing I added.
The way it worked was pretty straightforward. One would set up a subscription to a YouTube channel or playlist. Broadtail would then poll that RSS feed every 15 minutes or so, and show new videos on the homepage. Clicking a video item would bring up details and an option to download it.
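To give an idea of what’s involved, here’s a rough sketch of polling a channel feed in Go. The feed URL format is YouTube’s real one, but the structs and loop are my simplification, not Broadtail’s actual code:

package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
	"time"
)

// feed matches just the fields we care about from YouTube's RSS feed.
type feed struct {
	Entries []struct {
		VideoID   string `xml:"videoId"`
		Title     string `xml:"title"`
		Published string `xml:"published"`
	} `xml:"entry"`
}

// pollChannel fetches the RSS feed for a channel and returns its entries.
func pollChannel(channelID string) (*feed, error) {
	url := "https://www.youtube.com/feeds/videos.xml?channel_id=" + channelID
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var f feed
	if err := xml.NewDecoder(resp.Body).Decode(&f); err != nil {
		return nil, err
	}
	return &f, nil
}

func main() {
	for {
		f, err := pollChannel("UC...") // placeholder channel ID
		if err != nil {
			fmt.Println("poll failed:", err)
		} else {
			for _, e := range f.Entries {
				fmt.Println(e.Published, e.VideoID, e.Title)
			}
		}
		time.Sleep(15 * time.Minute)
	}
}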
Each RSS subscription had an associated target directory. Downloading an ad-hoc video would just dump it in a configured directory, but I wanted to make it possible to organise downloads from feeds in a more structured way. This wasn’t perfect though: I can’t remember the reason, but I had some trouble with this, and most videos just ended up in the download directory by default (it may have had something to do with creating directories or file permissions).
Only the feed polling was automatic at this stage. I was not interested in having all shows downloaded, as that would eat up bandwidth and disk storage. So users still had to choose which videos they wanted to download. The list of recent feed items was available from the home-screen, so they were able to just do so from there.
I also wanted to keep abreast of what jobs were currently running, so the home-screen also had the list of running jobs.
The progress-bar was powered by a web-socket backed by a goroutine on the server side, which meant realtime updates. Clicking the job would also show you the live output of the youtube-dl command, making it easy to troubleshoot any jobs that failed. Jobs could be cancelled at any time, but one annoying thing that was missing was the ability to retry a failed job. If a download failed, you had to spin up a new job from scratch. This meant clearing out the old job from the file-system and finding the video ID again from wherever you found it.
If you were interested in a video but not quite ready to download it right away, you could “favourite” it by clicking the star. This was available in every list that showed a video, and was a nightmare to code up, since I was keeping references to where the video came from, such as a feed or a quick look. Keeping atop of all the possible references became difficult with the non-relational StormDB, and the code that handled this became quite dodgy (the biggest issue was dealing with favourites from feeds that were deleted).
Rules & WWDC Videos
The basics were working out quite well, but it was all so manual. Plus, going from video publication to having something to watch was not timely. The RSS feed from YouTube was always several hours out of date, and downloading whole videos took quite a while (it may not have been realtime, but it was pretty close).
So one of the later things I added was a feature I called “Rules”. These were automations that would run when the RSS feed was polled, and would automatically download videos that met certain criteria (you could also hide them or mark them as downloaded). I quite enjoy building these sorts of complex features, where the user is able to configure sophisticated automatic tasks, so this was a fun thing to code up. And it worked: video downloads would start when they became available and would usually be in Plex when I wanted to watch them (it was also possible to ping Plex to update the library once the download was finished). It wasn’t perfect though: not retrying failed downloads did plague it a little. But it was good enough.
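As a sketch of the general idea (the types and field names here are invented; Broadtail’s actual rules were more elaborate), a rule was essentially a criterion plus an action, evaluated against each new feed item:

package main

import (
	"fmt"
	"strings"
)

// Rule is run against each new feed item when a feed is polled.
type Rule struct {
	FeedID        string // which subscription this rule applies to
	TitleContains string // match criterion over the video title
	Action        string // "download", "hide", or "mark-downloaded"
}

// matches reports whether the rule applies to a new item from a feed.
func (r Rule) matches(feedID, title string) bool {
	return r.FeedID == feedID &&
		strings.Contains(strings.ToLower(title), strings.ToLower(r.TitleContains))
}

func main() {
	r := Rule{FeedID: "feed-1", TitleContains: "wwdc", Action: "download"}
	fmt.Println(r.matches("feed-1", "WWDC 2022 Keynote")) // true
}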
This was near the end of my use of Broadtail. Soon after adding Rules, I got onto the YouTube Premium bandwagon, which hid the ads and removed the need for Broadtail as a whole. It was a good thing too, as the Plex Android app had this annoying habit of causing the Chromecast to hang, and the only way to recover from this was to reboot the device.
So I eventually returned to just using YouTube, and Broadtail was eventually abandoned.
Although, not completely. One last thing I did was extend Broadtail’s video download capabilities to include Apple WWDC Videos. This was treated as a special kind of “feed” which, when polled, would scrape the WWDC video website. I was a little uncomfortable doing this, but I knew that once videos were published, they wouldn’t change. So this “feed” was never polled automatically, and the user had to refresh it manually.
Without the means to stream them using AirPlay, downloading them and making them available in Plex was the only way I knew of watching them on my TV, which is how I prefer to watch them.
So that’s what Broadtail is primarily used for now. It’s no longer running as a daemon: I just boot it up when I want to download new videos. And although it’s only a few years old, it’s starting to show signs of decay, with the biggest issue being youtube-dl slowly being abandoned.
So it’s unlikely that I’ll put any serious effort into this now. But if I did, there are a few things I’d like to see:
- Authentication with username/password.
- Retry failed video downloads.
- The ability to download YouTube videos in audio only (all these “podcasts” that are only available as YouTube videos…).
- The ability to handle the lifecycle of videos a little better than it does now. It’s already doing this for errors: when a download fails, the video is deleted. But it would be nice if it did things like automatically delete videos 30 days after downloading them. This would require more control over the “video store” though.
So, that’s Broadtail.