Long Form Posts

    My Favourite Watch

    Seeing all the nostalgia for digital watches of the ’90s and early 2000s, following the release of a retro desk clock shaped like a large Casio digital watch, got me thinking of the watches I owned growing up. I started off as a Casio person but I eventually moved on to Timex watches. I was pretty happy with all the watches I owned, but my favourite was the Timex Datalink USB Sports Edition, which stood head and shoulders above the rest.

    Image: A Timex Ironman digital watch with a black strap, displaying the time as 3:41 and marked as water-resistant up to 100 metres (source: Hamonoaneraea; site no longer online).

    Not only was this watch featureful out of the box — having the usual stopwatch, timers, and alarms — it was also reprogrammable. There was some Windows software that allowed you to install new modes and arrange them in the mode menu. I remember a few of these, such as a mode allowing you to browse data arranged in a tree; a simple note taking mode; and a horizontal game of Tetris.

    There was also an SDK, allowing you to build new modes in assembly. I remember building a score-keeping mode, where you could track points for a game between two or four competitors, with an optional auxiliary counter used to keep track of things like fouls. I also remember building a dice-rolling mode, allowing you to roll up to 6 dice, with each die having between 2 and 9 sides, and the total automatically displayed to you.

    I never used these modes for anything — I’m neither sporty nor much of a gamer to have any real need for tracking scores or rolling multiple dice — but they were super fun to build, and I got a fair bit of experience learning assembly from it. And the SDK was pretty well built, with predefined entry points for the mode, reacting to events like button presses, and displaying things on the LCD. The fact that the SDK came with a random-number generator, which wasn’t even used by any of the built-in modes, just showed how much thought Timex put into what was possible with this watch.

    This was the last watch I regularly wore: I’ve moved to using phones to keep track of time. But it was a great watch while it lasted.

    Why I Keep Multiple Blogs

    Kev Quirk wrote a post yesterday wondering why people have multiple blogs for different topics:

    A few people I follow have multiple blogs that they use for various topics, but I don’t really understand why. […] I personally prefer to have a single place where I can get all your stuff. If you’re posting about something I’m not interested in, I’ll just skip over it in my RSS feed. I don’t have to read everything in my feed reader.

    I’ve written about this before, and after taking a quick look at that post, most of those reasons still stand. So if you’ve read that post, you can probably stop reading this one at reason number two (unless you’re listening to the audio narration of this, in which case, please listen on as that last post predated that feature 🙂).

    I’m currently keeping four separate blogs: this one, one for checkins to places I’ve visited, one for remembering how to do something for work, and one for projects I’m working on1. This arrangement came about after a few years of spinning out and combining topics to and from a single blog, generally following the tension I felt after publishing something, wondering if that was the right place for it. As strange as it is to say it, this multi-blogging arrangement gives me the lowest amount of tension for writing online.

    There are a few reasons for this. First is that for certain topics, I like an easy way to reference posts quickly. This is the primary reason why I keep that work-related reference blog, so that when I’m faced with a software-related problem I know I’ve seen in the past, I can quickly look up how I solved it. I’ve tried keeping those posts here, but it was always difficult finding them again amongst all the frequent, day-to-day stuff.

    It mainly comes down to the online reading experience. Categories can only do so much, and that’s if I’m categorising posts rigorously, which is not always the case. Here, the posts are displayed in full, encouraging the reader to browse. But for my reference blog, a list of bare links works better for going directly to the post I need.

    The second reason is the writing experience. For me, certain CMSes work better for certain types of posts. Micro.blog works well for micro-posts or mid-sized posts like this one, but for longer ones, I prefer the editors of either Scribbles or Pika. I don’t know why this is. Might be because all the code-blocks I tend to use on those blogs are easier to write using a WYSIWYG editor rather than Markdown.

    And finally, it’s a good excuse to try out multiple CMSes. I have no rational explanation for this one: it’s an arrangement that costs me more money and requires learning new software. Might be that I just like the variety.

    So that’s why I keep multiple blogs. I do recognise that it does make it harder for others to find my writing online, not to mention following along using RSS. But that’s a tradeoff I’m willing to make for a writing and reading arrangement that works better for me. Of course, like I said in my previous post, this might change in due course.


    1. Actually, I have a fifth blog which is for projects I’m working on that I’d rather keep private. Oh, and a sixth, which is a travel blog that I really should maintain better. Might be that I have a few too many blogs. ↩︎

    On Panic, iA, and Google Drive

    I see that Panic is shutting down their Google Drive integration in their Android app, much like iA did a few weeks ago. This doesn’t affect me directly: even though I am a user of both Android and Google Drive, I regret to say that I don’t use apps from either company on my phone (I do use a few things from both on my Apple devices).

    But I do wonder why Google is enacting policies that push developers away from using Drive as general-purpose user storage. That’s what Drive was meant to be used for, no? Does Google not think that by adding these security conditions, and not getting back to developers trying to satisfy them, they’re pushing the scale between security and usefulness a bit too far out of balance? Are they thinking through the implications of any of this at all?

    If you were to ask me, my guess would probably be that no, they’re not thinking about it. In fact, I get the sense that they’re making these decisions unconsciously, at least at an organisational level. Probably someone said to the Drive division that they need to “improve security” and that their performance will be measured against them doing so. So they drafted up these conditions and said “job done” without thinking through how it may affect the actual usefulness of Drive.

    And it just reveals to me how large Google is, possibly too large to know why they do anything at all. It’s not like they’re being malicious or anything: they’re a victim of their own success, with way too many product lines making zero dollars that distract them from their raison d’être, which is getting that sweet, sweet ad money. After all, what does Drive matter to Google in terms of increasing advertising revenue? It’s probably a division making a loss more than anything else.

    I suppose, given that I do use both Drive and Android, that I should care more about it. And yeah, I care enough to write about it, but that’s barely above the level of mild curiosity I’m feeling as to why Google is letting this happen. Might be that I’ve just gotten numb to Google not caring about their own products.

    Passing

    Three nights ago, and two months before her 94th birthday, my Nonna, my maternal grandmother, suffered a stroke. She’s now in palliative care and there’s no telling how much longer she has left. Over the last few years she was slowing down, yet was still quite aware and was able to do many things on her own, even travel to the shops by bus. She had a scare over the weekend but was otherwise in reasonably good health. So all of this is incredibly sudden.

    I was unsure as to whether or not I wanted to actually write this post. I did have a draft planned yesterday, with the assumption that she wouldn’t make it through the night. Delaying it any further did not seem right. Neither is making this a eulogy or display of public grief — that’s not how I like to do things. But to not acknowledge that any of this is happening felt just as wrong, at least for now.

    But what seemed right was a public declaration that I love her and I’ll miss her. I consider myself lucky to have said that to her in person, while she was lucid.

    So, what now? Timelines at this stage are uncertain. Would it be hours? Days? Who can say. I guess following that would be the funeral and other matters pertaining to the estate, but that won’t happen for a week or so. What about today? Does one simply go about their day as one normally would? Does life go on? Seems wrong that it should be so, yet I’m not sure there’s anything else that I’m capable of doing. Just the daily routine smeared with sadness and loss.

    I heard someone say that grief comes from love, that you can’t have one without the other. I can attest to that, but the edges of that double-edged sword are razor sharp. I know that eventually the pain will dull, and all that will remain are the memories. All it takes is time.

    Tools And Libraries I Use For Building Web-Apps In Go

    I think I’ve settled on a go-to set of tools and libraries for building web-apps in Go. It used to be that I would turn to Buffalo for these sorts of projects, which is sort of a “Ruby on Rails but for Go” type of web framework. But I get the sense that Buffalo is no longer being maintained. And although it was easy to get a project up and running, it was a little difficult to go beyond the CRUD-like layouts that it would generate (or it didn’t motivate me enough to do so). Plus, all that JavaScript bundling… ugh! Huge pain to upgrade any of that.

    Since I’ve moved away from Buffalo, I’m now left to do more of the work up-front, but I think it helps me to be a little more deliberate in how I build something. And after getting burned by Buffalo shutting down, I think it was time to consider a mix of tools and libraries that would give me the greatest level of code stability while still being relatively quick to get something up and running.

    So, here’s my go-to list of tools and libraries for building web-apps in Go.

    • HTTP Routing: For this, I use Fiber. I suppose using Go’s built-in HTTP router is probably the best approach, but I do like the utility Fiber gives for doing a lot of the things that go beyond what the standard library provides, such as session management and template rendering (there’s a small sketch of how these pieces fit together after this list). Speaking of…
    • Server Side Templating: Nothing fancy here. I just use Go’s template engine via Fiber’s Render integration. It has pretty much all I need, so I don’t really look at anything else.
    • Database: If I need one, then I’ll first take a look at Sqlite. I use the modernc.org Sqlite driver, as it doesn’t require CGo, making deployments easier (more on that later). If I need something a bit larger, I tend to go with PostgreSQL using the pgx driver. I would also like to use StormDB if I could, but it doesn’t play well with how I like to deploy things, so I tend to avoid that nowadays.
    • Database ORM: I don’t really use an ORM (too much PTSD from using the various Java ORMs), but I do use sqlc to generate the Go code that interacts with the database. It’s not perfect, and it does require some glue code which is tedious to write. But what it does it does really well, and it’s better than writing all that SQL marshalling code from scratch.
    • Database Migration: I’ve tried using golang-migrate before, and we do use it at work for PostgreSQL databases, but it doesn’t work well with the modernc.org Sqlite driver. So I ended up writing my own. But if it makes sense to use golang-migrate, I will.
    • JavaScript: I try to keep my JavaScript usage to a minimum, favouring vanilla JavaScript if I only need a few things. For anything else, I usually turn to Stimulus.js, which adds just enough “magic” for the slightly more involved pieces of front-end logic. I’m also looking at HTMX, and have tried it for a few things, but I’ve yet to use it for a complete project. I use esbuild if I need to bundle my JavaScript, but I’m trying to go “builderless” for most things nowadays, relying on import maps and just serving the JavaScript as is.
    • CSS: Much like JavaScript, I still prefer to use vanilla CSS served directly for most things. I tend to start new projects by importing SimpleCSS by Kev Quirk. It makes the HTML look good right out of the gate, though it does make each project look a little “samey”. But that’s up to me to address.
    • Live Reloading: I’ve only recently been a convert to live reloading. I did use it when I was bundling JavaScript, but since moving away from that, plus doing most things server-side anyway, I needed something that would build the entire app. I’ve started using Air for this, and it’s… fine. There are certain things that I don’t like about it — particularly that it tends to favour configuration over convention — but it does the job.
    • Deployment: Finally, when I’m ready to deploy something, I do so using Dokku running on a Linux server. I bundle the app in a Docker container, mainly using a Go builder image, and a scratch image for the run-time container (this scratch container has nothing else in it, not even libc, which is why I use the modernc.org Sqlite driver). All I need to do is run git push, and Dokku does the rest. Dokku also makes it easy to provision PostgreSQL databases with automated backups, and HTTPS certificates using Let’s Encrypt. Deploying something new does involve logging into the remote server to run some commands, but having been burned by PaaS providers that are either too pricy, or not pricy enough to stay in business, I’ve found this setup to be the most stable way to host apps.
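
    To give a sense of how these pieces fit together, here’s a minimal sketch of the Fiber, Go templates, and Sqlite skeleton I tend to start from. It’s illustrative only: the module paths and versions, the views directory, and the table name are assumptions, and a real project would split this across packages.

    package main

    import (
        "database/sql"
        "log"

        "github.com/gofiber/fiber/v2"
        "github.com/gofiber/template/html/v2"
        _ "modernc.org/sqlite" // pure-Go Sqlite driver, no CGo required
    )

    func main() {
        // Open the Sqlite database using the modernc.org driver.
        db, err := sql.Open("sqlite", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Use Go's template engine via Fiber's Render integration.
        engine := html.New("./views", ".html")
        app := fiber.New(fiber.Config{Views: engine})

        app.Get("/", func(c *fiber.Ctx) error {
            var count int
            if err := db.QueryRow("SELECT COUNT(*) FROM items").Scan(&count); err != nil {
                return err
            }
            return c.Render("index", fiber.Map{"Count": count})
        })

        log.Fatal(app.Listen(":3000"))
    }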

    So, that’s my setup. It’s a collection that’s geared towards keeping the code low maintenance, even if it may come at the cost of scalability. I can’t tell you anything about that myself: I’m not running anything that has more than a couple of users anyway, and most things I’m running are only being used by myself. But I think that’s a problem for later, should it ever arise.

    Micro-fiction: Get A Horse

    Trying something new here. I came up with the concept of this short-story while riding home on the tram yesterday. The rest of it sort-of fell into place when I woke up at 5AM this morning, unable to get back to sleep. Hope you enjoy it.

    Josh was riding the scooter on the city footpath, not trying super hard to avoid the other pedestrians. He was going at a speed that was both unsafe and illegal, but it was the only speed he knew that would prevent that horse from showing up. Besides, he had something that he needed to do, and it was only at such reckless speeds that he knew that that thing would work. Well, he didn’t know; but being at his wits' end after trying everything else, he had to try this. He picked his target ahead and sped up towards it. Good thing he was wearing his helmet.

    Josh never used these sorts of scooters before the collision two weeks ago. He was walking to work that day, when he saw someone on such a scooter coming towards him, helmet on head. The rider was going at a ridiculous speed, and Josh tried to get out of his way as he approached, but the scooter driver turned towards him, not slowing down at all. Josh tried again but was not fast enough. The scooter rider ran straight into him and bowled him over onto the footpath. Before Josh could gather himself, the scooter rider slapped his helmet onto Josh’s head and shouted, “Get a horse!” He got back onto the scooter and sped away.

    Josh got up, fighting the various aching muscles from the fall. He dusted himself down, took the helmet from his head and looked at it. It was very uncharacteristic of those worn by scooter riders. Most of them were plastic things, either green or orange, yet this one was grey, made of solid leather that was slightly fuzzy to the touch. Josh looked inside the rim and found some printed writing: Wilkinsons Equestrian Helmet. One size fits all. The one was underlined with some black marker.

    Josh put the helmet in his backpack and was about to resume his commute, when he stopped in place. Several metres away, a white horse stood, staring at him. Or at least it looked like a horse. The vision was misty and slightly transparent, giving the sense that it was not real. Yet after blinking and clearing his eyes, it didn’t go away. Josh started to move towards it, and when he was just within arm’s reach, it disappeared. Josh shook his head, and started walking. But when he turned the next corner, there it was again: a horse, standing in the middle of the footpath several metres away, staring at him intently.

    Since that day the horse had been haunting Josh. On his walk, at his workplace, in his home, even on the tram. Always staring, always out of reach. Either standing in his path or following close behind him. The vision would go whenever Josh approached it, only to reappear when he turned to look in another direction. Naturally, no one else could see it. When the horse was in a public place, people seemed to instinctively walk around it. Yet when he asked them if they could see it, they had no idea what he was talking about. But Josh couldn’t do anything to stop seeing it. At every waking hour of the day, from when he got out of bed to when he got back in, there it was, always staring. Never looking away.

    And he knew it had something to do with that helmet. He tried a few things to dispel the vision, such as leaving the helmet at home or trying to give it to random strangers (who always refused it). Yet nothing worked to clear the vision. That is, nothing other than what had worked on him. Now was the time to test that theory out.

    His target was ahead, a man in a business suit walking at a leisurely pace. He had his back to Josh, so he couldn’t see Josh turn his scooter towards him and accelerate. The gap between them rapidly closed, and Josh made contact with the man, slowing a little to avoid significant injury, but still fast enough to knock him over. Josh got off the scooter and stood by the man, sprawled on the footpath. Once again the horse appeared, as he knew it would. He looked down to see the man starting to get up. Josh had to go for it now! He took his helmet from his head, slapped it on the man and shouted, “Get a horse!”

    Josh got back on the scooter and sped away for a few seconds, then stopped to look behind him. He saw the man back on his feet, helmet in hand, looking at it much like Josh did a fortnight ago. He saw the horse as well, but this time it had its back to Josh, staring intently at the man, yet Josh could see that the man hadn’t noticed it yet. He could see the man put the helmet by the side of the road and walk away, turning a corner. The horse was fading from Josh’s eyes, yet it was still visible enough for Josh to see it follow the man around the corner, several metres behind.

    Select Fun From PostgreSQL

    Using PostgreSQL these last few months reminds me of just how much fun it is to work with a relational database. DynamoDB is very capable, but I wouldn’t call it fun. It’s kinda boring, actually. Not that that’s a bad thing: one could argue that “boring” is what you want from a database.

    Working with PostgreSQL, on the other hand, has been fun. There’s no better word to describe it. It’s been quite enjoyable designing new tables and writing SQL statements.

    Not sure why this is, but I’m guessing it’s got something to do with working with a schema. It exercises the same sort of brain muscles1 as designing data structures or architecting an application. Much more interesting than dealing with a schemaless database, where someone could simply say “ah, just shove this object in a DynamoDB table.”

    It’s either that, or just that PostgreSQL has a more powerful query language than what DynamoDB offers. I mean, DynamoDB’s query capabilities need to be pretty restricted, thanks to how it stores its data. That’s the price you pay for scale.


    1. My brain muscles couldn’t come up with a better term here. 😄 ↩︎

    Rubber-ducking: Of Config And Databases

    It’s been a while since my last rubber-ducking session. Not that I’m in the habit of seeking them out: I mainly haven’t been in a situation where I needed to do one. Well, that chance came by yesterday, when I was wondering whether to put queue configuration in the database as data, or in the environment as configuration.

    This one’s relatively short, as I was leaning towards one method over the other before I started. But doubts remained, so having the session was still useful.

    So without further ado, let’s dive in. Begin scene.

    L: Hello

    🦆: Oh, you’re back. It’s been a while. How did that thing with the authorisation go?

    L: Yeah, good. Turns out doing that was a good idea.

    🦆: Ah, glad to hear it. Anyway, how can I help you today?

    L: Ok, so I’m working on this queue system that works with a database. I’ve got a single queue working quite well, but I want to extend it to something that works across multiple queues.

    🦆: Okay

    L: So I’m wondering where I could store the configuration for these queues. I’m thinking either in the database as data, or in the configuration. I’m leaning towards the database as: A) a reference to the queue needs to be stored alongside each item anyway, and B) if we wanted to add more queues, we can almost do so by simply adding rows.

    🦆: “almost do so?”

    L: Yeah, so this is where I’m a little unsure. See, I don’t want to spend a lot of effort building out the logic to deal with relaunching the queue dispatcher when the rows change. I’d rather the dispatcher just read how the queues are configured during startup and stick with that until the application is restarted.

    🦆: Okay

    L: And such an approach is closer to configuration. In fact, it could be argued that having the queues defined as configuration would be better, as adding additional queues could be an activity that is considered “intentional”, with a proper deployment and release process.

    I wonder if a good middle-ground might be to have the queues defined in the database as rows, yet managed via the migration script. That way, we can have the best of both worlds.

    🦆: Why not just go with configuration?

    L: The main reason is that I don’t want to add something like string representations of the queue to each queue item. I’m okay if it was just a UUID, since I’d imagine PostgreSQL could handle such fields relatively efficiently. But adding queue names like “default” or “test” as a string on each queue item seems like a bit of a waste.

    🦆: Do they need to be strings? Could they be like an enum?

    L: I’d rather they were strings, as I want this arrangement to be relatively flexible. You know, “policy vs. mechanism” and all that.

    🦆: So how would this look in the database?

    L: Well, each row for a queue would have a string, say like a queue name. But each individual queue item would reference the queue via its ID.

    🦆: Okay, so it sounds like adding it to the database yet managing it with the migration script is the way to go.

    L: Yeah, that is probably the best approach.

    🦆: Good. I’m glad you could come away thinking this.

    L: Yeah, honestly that was the way I was leaning anyway. But I’m glad that I can completely dismiss the configuration approach now.

    🦆: Okay, good. So I’m guessing my job is done here.

    L: Yeah, thanks again.

    🦆: No worries.
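
    For what it’s worth, here’s a rough sketch of the sort of schema this points to, with the queues seeded by the migration script and each queue item referencing its queue by ID. The table and column names are illustrative only, not the real ones:

    // An illustrative migration, managed alongside the others in the
    // migration script rather than edited by the running application.
    const createQueuesMigration = `
        CREATE TABLE queues (
            id   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
            name TEXT NOT NULL UNIQUE
        );

        CREATE TABLE queue_items (
            id       UUID PRIMARY KEY DEFAULT gen_random_uuid(),
            queue_id UUID NOT NULL REFERENCES queues (id),
            payload  JSONB NOT NULL,
            added_at TIMESTAMPTZ NOT NULL DEFAULT now()
        );

        -- Adding a queue is "almost" just adding a row, but doing it here
        -- keeps it an intentional, deployed change.
        INSERT INTO queues (name) VALUES ('default');
    `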

    About Those STOP Messages

    John Gruber, discussing political spam text messages on Daring Fireball:

    About a month ago I switched tactics and started responding to all such messages with “STOP”. I usually send it in all caps, just like that, because I’m so annoyed. I resisted doing this until a month ago thinking that sending any reply at all to these messages, including the magic “STOP” keyword, would only serve to confirm to the sender that an actual person was looking at the messages sent to my phone number. But this has actually worked. Election season is heating up but I’m getting way way fewer political spam texts now. Your mileage may vary, but for me, the “STOP” response works.

    As someone who used to work for a company that operated an SMS messaging gateway, allow me to provide some insight into how this works. When you send an opt-out keyword — usually “STOP”1, although there are a few others — this would be received by our messaging gateway, and your number would be added to an opt-out list for that sender. From that point on, any attempt by that sender to send a message to your number would fail.
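
    In rough Go-flavoured terms, and purely as an illustration of the idea rather than anything resembling the actual system, the gateway-side logic amounts to something like this:

    package gateway

    import (
        "errors"
        "strings"
    )

    var ErrRecipientOptedOut = errors.New("recipient has opted out for this sender")

    // OptOutList tracks which recipients have opted out of which senders,
    // plus a global list that applies to every sender.
    type OptOutList struct {
        bySender map[string]map[string]bool
        global   map[string]bool
    }

    func NewOptOutList() *OptOutList {
        return &OptOutList{bySender: map[string]map[string]bool{}, global: map[string]bool{}}
    }

    // HandleInbound records an opt-out when a recipient replies with "STOP".
    func (l *OptOutList) HandleInbound(sender, from, body string) {
        if strings.EqualFold(strings.TrimSpace(body), "STOP") {
            if l.bySender[sender] == nil {
                l.bySender[sender] = map[string]bool{}
            }
            l.bySender[sender][from] = true
        }
    }

    // CheckOutbound is consulted before dispatching a sender's message.
    func (l *OptOutList) CheckOutbound(sender, to string) error {
        if l.bySender[sender][to] || l.global[to] {
            return ErrRecipientOptedOut
        }
        return nil
    }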

    Maintaining these opt-out lists is a legal requirement with some significant penalties, so the company I worked for took this quite seriously. Once, the service maintaining this list went down, and we couldn’t know whether someone had opted out or not. We actually stopped all messaging completely until we got that service back up again. I still remember that Friday afternoon (naturally, it happened on a Friday afternoon).

    Now, if memory serves, there was a way for a sender to be notified when an opt-out occurred. This was mainly for customers that decided to take on the responsibility — and thus legal accountability — of maintaining the opt-out lists themselves. There were a handful of customers that had this enabled, and it was something that we had to enable for them on the backend, but most customers simply delegated this responsibility to us (I can’t remember if customers that had this feature off could still receive opt-out notifications).

    Finally, there was a way, a variant of the “STOP” message, in which someone could opt out of any message sent from our gateway, basically adding themselves to a global opt-out list which applies to everyone. The only way someone could remove themselves from this list was to call support, so I wouldn’t recommend doing this unless you know you would never need another 2FA code via SMS again.

    Addendum: The customer never had access to these opt-out lists but I believe they could find out when a message they tried to send was blocked. This is because they would be charged per message sent, and if a message was blocked, they would receive a credit. There was also an API to return the status of a message, so if you knew the message ID, it was possible to call the API to know whether a message was blocked.


    1. I can’t remember if this is case insensitive, although I think it is. ↩︎

    My Home Computer Naming Scheme

    I enjoyed Manton’s post about the naming scheme he uses for Micro.blog servers. I see these names pop up in the logs when I go to rebuild my blog, each with a Wikipedia link explaining the origins of the name (that’s a really nice touch).

    Having a server or desktop naming scheme is one of those fun little things to do when working with computers. Growing up, we named our home desktops after major characters from Lord of the Rings, such as Bilbo or Frodo, but I never devised a scheme for myself when I started buying my own computers. I may have kept it up if we were doing likewise at work, but when AWS came onto the scene, the prevailing train of thought was to treat your servers like cattle rather than pets. Granted, it is probably the correct approach, especially when the lifecycle of a particular EC2 instance could be as short as a few minutes.

    But a few years ago, after buying a new desktop and setting up the old one to be a home server, and finding that I needed a way to name them, I figured now was the time for a naming scheme. Being a fan of A Game of Thrones, both the book and the TV series, I came up with one based on the major houses of Westeros.

    So, to date, here are the names I’ve chosen:

    • Stark — the M2 Mac Mini that I use as my desktop
    • Tully — the Intel Mac Mini that I use as my home server
    • Crow — a very old laptop that I occasionally use when I travel (this one is a reference to the Night’s Watch)

    I think at one point I had an Intel Nuc that was called Ghost, a reference to Jon Snow’s direwolf, but I haven’t used that in a while so I may be misremembering things. I also don’t have a name for my work laptop: it’s simply called “work laptop.”

    Go Feature Request: A 'Rest' Operator for Literals

    Here’s a feature request for Go: shamelessly copying JavaScript and adding support for the “rest” operator in literals. Go does have a rest operator, but it only works in function calls. I was writing a unit test today and I was thinking to myself that it would be nice to use this operator in both slice and struct literals as well.

    This could be useful for making copies of values without modifying the originals. Imagine the following bit of code:

    type Vector struct { X, Y, Z int }
    oldInts := []int{3, 4}
    oldVec := Vector{X: 1}
    
    newInts := append([]int{1, 2}, oldInts...)
    newVec := oldVec
    newVec.Y = 2
    newVec.Z = 3
    

    Now imagine how it would look if rest operators in literals were supported:

    type Vector struct { X, Y, Z int }
    oldInts := []int{3, 4}
    oldVec := Vector{X: 1}
    
    newInts := []int{1, 2, oldInts...}
    newVec := Vector{Y: 2, Z: 3, oldVec...}
    

    I hope you’ll agree that it looks a bit neater than the former. Certainly it looks more pleasing to my eyes. True, this is a contrived example, but the code I’m writing for real is not too far off from this.

    On the other hand, Go does prefer clarity over brevity; and I have seen some JavaScript codebases which use these “rest” operators to an absurd level, making the code terribly hard to read. But I think the Go user-base is pretty good at moderating themselves, and just because it could result in unreadable code doesn’t make it a foregone conclusion. Just look at Go’s use of type parameters.

    Anyway, if the Go team is looking for things to do, here’s one.

    A Follow-Up To Mockless Unit Testing

    I’m sure everyone’s dying to hear how the mockless unit tests are going. It’s been almost two months since we started this service, and we’re smack bang in the middle of brownfield iterative development: adding new features to existing ones, fixing bugs, etc. So it seems like now is a good time to reflect on whether this approach is working or not.

    And so far, it’s been going quite well. The amount of code we have to modify when refactoring or changing existing behaviour is dramatically smaller than before. Previously, when a service introduced a new method call, every single test for that service needed to be changed to handle the new mock assertions. Now, in most circumstances, it’s only one or maybe two tests that need to change. This has made maintenance so much easier, and although I’m not sure it made us any faster, it just feels faster. Probably because there’s less faffing around with unrelated tests that broke due to the updated mocks.

    I didn’t think of it at the time, but it also made code reviews easier too. The old way meant longer, noisier PRs which — and I know this is a quality of mine that I need to work at — I usually ignore (I know, I know, I really shouldn’t). With the reviews being smaller, I’m much more likely to keep atop of them, and I attribute this to the way in which the tests are being written.

    Code hygiene plays a role here. I got into the habit of adding test helpers to each package I work on. Much like the package is responsible for fulfilling the contract it has with its dependants, so too is it responsible for providing testing facilities for those dependants. I found this to be a great approach. It simplified the level of setup each dependant package needed to do in their tests, reducing the amount of copied-and-pasted code and, thus, containing the “blast radius” of logic changes. It’s not perfect — there are a few tests where setup and teardown were simply copied-and-pasted from other tests — but it is better.
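
    To illustrate what I mean, a dependant package’s tests can lean on a helper along these lines instead of rolling their own setup and teardown. This is a made-up example, with an illustrative package name and import path rather than the actual code:

    // Package queuetest provides test helpers for packages that depend on
    // the (hypothetical) queue package.
    package queuetest

    import (
        "testing"

        "example.com/service/queue" // illustrative import path
    )

    // NewStore returns a queue store backed by a throwaway test database,
    // registering cleanup with the test so callers don't need their own
    // setup and teardown boilerplate.
    func NewStore(t *testing.T) *queue.Store {
        t.Helper()

        store, err := queue.OpenStore("postgres://localhost/queue_test?sslmode=disable")
        if err != nil {
            t.Fatalf("opening test store: %v", err)
        }
        t.Cleanup(func() { store.Close() })
        return store
    }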

    We didn’t quite get rid of all the mocking though. Tests that exercise the database and message bus do so by calling out to these servers running in Docker, but this service also had to make calls to another worker. Since we didn’t have a test service available to us, we just implemented this using old-school test mocks. The use of the package test helpers did help here: instead of having each test declare the expected calls on this mock, the helper just made “maybe” calls to each of the service methods, and provided a way to assert what calls were recorded.
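
    As a rough illustration, here’s what such a helper can look like with testify-style mocks, assuming mockery-generated stubs and made-up method names:

    // NewMockedWorker wires up a mocked worker client for a test. Every
    // method gets a .Maybe() expectation, so a test doesn't fail just
    // because a particular method wasn't called; assertions are then made
    // on the recorded calls afterwards.
    // (mock is github.com/stretchr/testify/mock; mocks is a mockery-generated package.)
    func NewMockedWorker(t *testing.T) *mocks.WorkerClient {
        m := mocks.NewWorkerClient(t)
        m.On("SubmitJob", mock.Anything, mock.Anything).Return(nil).Maybe()
        m.On("JobStatus", mock.Anything, mock.Anything).Return("done", nil).Maybe()
        return m
    }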

    Of course, much like everything, there are trade-offs. The tests run much slower now, thanks to the database setup and teardown, and we had to lock them down to run on a single thread. It’s not much of an issue now, but we could probably mitigate this by using random database names, rather than have all the tests run against the same one. Something that would be less easy to solve is the tests around the message bus, which do need to wait for messages to be received and handled. There might be a way to simplify this too, but since the tests are verifying the entire message exchange, it’d probably be a little more involved.

    Another trade-off is that it does feel like you’re repeating yourself. We have tests that check that items are written to the database correctly for the database provider, the service layer, and the handler layer. Since we’re writing tests that operate over multiple layers at a time, this was somewhat expected, but I didn’t expect it to be as annoying as I found it to be. Might be that a compromise is to write the handler tests to use mocks rather than call down to the service layer. Those tests really only validate whether the handler converts the request and response to the models correctly, so best to isolate it there, and leave the tests asserting whether the business rules are correct to the service layer.

    So if I were to do this again, that’d be the only thing I’d change. But, on the whole, it’s been really refreshing writing unit tests like this. And this is not just my own opinion; I asked my colleague, who has told me how difficult it’s been maintaining tests with mocks, and he agrees that this new way has been an improvement. I’d like to see if it’s possible doing this for all other services going forward.

    On the Easy Pit To Fall Into

    From Matt Birchler’s latest post on Birchtree:

    One of the hard parts about sharing one’s opinions online like I do is that it’s very easy to fall into the trap of mostly complaining about things.

    This is something I also think about. While I haven’t done anything scientific to know what my ratio of posting about things I like vs. things I don’t is, I feel like I’m getting the balance better. It might still be weighted too much on writing about the negatives, but I am trying to write more about things I think are good.

    I do wonder, though, why it’s so easy to write about things you hate. Matt has a few theories regarding the dynamics of social media, but I wonder if it’s more about someone’s personal experience of the thing in question. You hear about something that you think would be worth a try. I doubt many people would actually try something they know they’re going to dislike; if that were the case, they wouldn’t try it at all. So I’m guessing that there’s some expectation that you’ll like the thing.

    So you start experiencing the thing, and maybe it all goes well at first. Then you encounter something you don’t like about it. You make a note of it and keep going, only to encounter another thing you don’t like, then another. You eventually get to the point where you’ve had enough, and you decide to write about it. And lo, you’ve got this list of paper-cuts that can easily be used as arguments as to why the thing is no good.

    Compare this to something that you do end up liking. You can probably come up with a list of things that are good about it, but you’re less likely to take note of them while you’re experiencing the thing. You just experience them, and it flows through you like water. When the time comes to write about it, you can recall liking the plot, or this character, etc., but they’re more nebulous and it takes effort to solidify them into a post. The drive to find the path of least resistance prevails, and you decide that it’s enough to just like it.

    Anyway, this is just a hypothesis. I’m not a psychologist and I’ve done zero research to find out if any of this is accurate. In the end, this post might simply describe why my posting seems to be more weighted towards things I find annoying.

    A Tour Of My New Self-Hosted Code Setup

    While working on the draft for this post, a quote from Seinfeld came to mind which I thought was a quite apt description of this little project:

    Breaking up is knocking over a Coke machine. You can’t do it in one push. You gotta rock it back and forth a few times and then it goes over.

    I’ve been thinking about “breaking up” with Github on and off for a while now. I know I’m not the only one: I’ve seen a few people online talk about leaving Github too. They have their own reasons for doing so: some because of AI, others are just not fans of Microsoft. For me, it was getting bitten by the indie-web bug and wanting to host my code on my own domain name. I have more than a hundred repositories in Github, and that single github.com/lmika namespace was getting quite crowded. Being able to organise all these repositories into groups, without fear of collisions or setting up new accounts, was the dream.

    But much like the Seinfeld quote, it took a few rocks of that fabled Coke machine to get going. I dipped my toe in the water a few times: launching Gitea instances in PikaPods, and also spinning up a Gitlab instance in Linode during a hackathon just to see how it would feel to manage code that way. I knew it wouldn’t be easy: not only would I be paying more for doing this, it would involve a lot of effort up front (and on an ongoing basis), and I would be taking on the responsibility of backups, keeping CI/CD workers running, and making sure everything is secured and up-to-date. Not difficult work, but still an ongoing commitment.

    Well, if I was going to do this at all, it was time to do it for real. I decided to set up my own code hosting properly this time, complete with CI/CD runners, all hosted under my own domain name. And well, that Coke machine is finally on the floor. I’m striking out on my own.

    Let me give you a tour of what I have so far.

    Infrastructure

    My goal was to have a setup with the following properties:

    • A self-hosted SCM (source code management) system that can be bound to my own domain name.
    • A place to store Git repositories and LFS objects that can be scaled up as my needs for storage grow.
    • A CI/CD runner of some sort that can be used for automated builds, ideally something that supports Linux and MacOS.

    For the SCM, I settled on Forgejo, which is a fork of Gitea, as it seemed like the one that required the least amount of resources to run. When I briefly looked at doing this a while back, Forgejo didn’t have anything resembling GitHub Actions, which was a non-starter for me. But they’re now in Forgejo as an alpha, preview, don’t-use-it-for-anything-resembling-production level of support, and I was curious to know how well they worked, so it was worth trying it out.

    I did briefly look at Gitea’s hosted solution, but it was relatively new and I wasn’t sure how long their operations would last. At least with self-hosting, I can choose to exit on my own terms.

    It was difficult thinking about how much I was willing to budget for this, considering that it’ll be more than what I’m currently paying for GitHub now, which is about $9 USD /month ($13.34 AUD /month). I settled for a budget of around $20.00 AUD /month, which is a bit much, but I think would give me something that I’d be happy with without breaking the bank.

    I first had a go at seeing what Linode had to offer for that kind of money. A single virtual CPU, with 2 GB RAM and 50 GB storage, costs around $12.00 USD /month ($17.79 AUD /month). This would be fine if it was just the SCM, but I also want something to run CI/CD jobs. So I then took a look at Hetzner. Not only do they charge in Euros, which works in my favour as far as currency conversions go, but their shared-CPU virtual servers were much cheaper. A server with the same specs could be had for only a few euro.

    So after a bit of looking around, I settled for the following bill of materials:

    • 2x CX22 instances, each with 2 vCPUs, 4 GB RAM and 40 GB storage
    • A virtual network which houses these two instances
    • One public IP address
    • One 50 GB volume which can be resized

    This came to €10.21, which was around $16.38 AUD /month. Better infrastructure for a cheaper price is great in my books. The only downside is that they don’t have a data-centre presence in Australia. I settled for the default placement of Falkenstein, Germany and just hoped that the latency wouldn’t be so bad as to be annoying.

    Architecture drawing of my coding setup, showing two CX22 virtual hosts, within a virtual network, with one connected to the internet, and one 50 GB volume

    Installing Forgejo

    The next step was setting up Forgejo. This can be done using official channels by either downloading a binary, or by installing a Docker image. But there’s also a forgejo-contrib repository that distributes it via common package types, with Systemd configurations that launch Forgejo on startup. Since I was using Ubuntu, I downloaded and installed the Debian package.

    Probably the easiest way to get started with Forgejo is to use the version that comes with Sqlite, but since this is something that I’d rather keep for a while, I elected to use Postgres for my database. I installed the latest Ubuntu distribution of Postgres, and set up the database as per the instructions. I also made sure the mount point for the volume was ready, and created a new directory with the necessary owner and permissions so that Forgejo could write to it.

    At this point I was able to launch Forgejo and go through the first launch experience. This is where I configured the database connection details, and set the location of the repository and LFS data (I didn’t take a screenshot at the time, sorry). Once that was done, I shut the server down again as I needed to make some changes within the config file itself:

    • I turned off the ability for others to register themselves as users, an important first step.
    • I changed the bind address of Forgejo. It listens to 0.0.0.0:3000 by default, but I wanted to put this behind a reverse proxy, so I changed it to 127.0.0.1:3000.
    • I also reduced the minimum size of SSH RSA keys. The default was 3,072, but I still have keys of length 2,048 that I wanted to use. There was also an option to turn off this verification.

    After that, it was a matter of setting up the reverse proxy. I decided to use Caddy for this, as it comes with HTTPS out of the box. This I installed as a Debian package also. Configuring the reverse proxy by changing the Caddyfile deployed in /etc was a breeze, and after making the changes and starting Caddy, I was able to access Forgejo via the domain I set up.

    One quick note about performance: although logging in via SSH was a little slow, I had no issues with the speed of accessing Forgejo via the browser.

    The Runners

    The next job was setting up the runners. I thought this was going to be easier than setting up Forgejo itself, but I did run into a few snags which slowed me down.

    The first was finding out that a Hetzner VM running without a public IP address actually doesn’t have any route to the internet, only the local network. The way to fix this is to set up one of the hosts which did have a public IP address to act as a NAT gateway. Hetzner has instructions on how to do this, and after performing a hybrid approach of following both the Ubuntu 20.04 instructions and Ubuntu 22.04 instructions, I was able to get the runner host online via the Forgejo host. Kinda wish I knew about this before I started.

    For the runners, I elected to go with the Docker-based setup. Forgejo has pretty straightforward instructions for setting them up using Docker Compose, and I changed it a bit so that I could have two runners running on the same host.

    Setting up the runners took multiple attempts. The first attempts failed when Forgejo couldn’t locate any runners for an organisation to use. I’m not entirely sure why this was, as the runners were active and were properly registered with the Forgejo instance. It could be magical thinking, but my guess is that it was because I didn’t register the runners with an instance URL that ended with a slash. It seems like it’s possible to register runners that are only available to certain organisations or users. Might be that there’s some bit of code deep within Forgejo that’s expecting a slash to make the runners available to everyone? Not sure. In either case, after registering the runners with the trailing slash, the organisations started to recognise them.

    The other error was seeing runs fail with the error message cannot find: node in PATH. This resolved itself after I changed the runs-on label within the action YAML file itself from linux to docker. I wasn’t expecting this to be an issue — I thought the runs-on field was used to select a runner based on their published labels, and that docker was just one such label. The Forgejo documentation was not super clear on this, but I got the sense that the docker label was special in some way. I don’t know. But whatever, I can use docker in my workflows.

    Once these battles were won, the runners were ready, and I was able to build and test a Go package successfully. One last annoying thing is that Forgejo doesn’t enable runners by default for new repositories — I guess because they’re still considered an alpha release. I can live with that in the short term, or maybe there’s some configuration I can enable to always have them turned on. But in either case, I’ve now got two Linux runners working.

    Screenshot of a completed CI/CD run within Forgejo
    The first successful CI/CD run using these Linux runners.

    MacOS Runner And Repository Backup

    The last piece of the puzzle was to set up a MacOS runner. This is for the occasional MacOS application I’d like to build, but it’s also to run the nightly repository backups. For this, I’m using a Mac Mini currently being used as a home server. This has an external hard drive connected, with online backups enabled, which makes it a perfect target for a local backup of Forgejo and all the repository data should the worst come to pass.

    Forgejo doesn’t have an official release of a MacOS runner, but Gitea does, and I managed to download a MacOS build of act_runner and deploy it onto the Mac Mini. Registration and performing a quick test with the runner running in the foreground went smoothly. I then went through the process of setting it up as a MacOS launch agent. This was a pain, and it took me a couple of hours to get this working. I won’t go through every issue I encountered, mainly because I can’t remember half of them, but here’s a small breakdown of the big ones:

    • I was unable to register the launch agent definition within the user domain. I had to use the gui domain instead, which requires the user to be logged in. I’ve got the Mac Mini set up to log in on startup, so this isn’t a huge issue, but it’s not quite what I was hoping for.
    • Half the commands in launchctl are deprecated and not fully working. Apple’s documentation on the command is sparse, and many of the Stack Exchange answers are old. So a lot of effort was spent fumbling through unfinished and outdated documentation trying to install and enable the launch service.
    • The actual runner is launched using a shell script, but when I tried the backup job, Bash couldn’t access the external drive. I had to explicitly add Bash to Privacy & Security → Full Disk Access within the Settings app.
    • Once I finally got the runner up and running as a launch agent, jobs were failing because .bash_profile wasn’t being loaded. I had to adjust the launch script to include this explicitly, so that the PATH to Node and Go was set properly.
    • This was further exacerbated by two runners running at the same time. The foreground runner I was using to test with was configured correctly, while the one running as a launch agent wasn’t fully working yet. This manifested as the back-up job randomly failing with the same cannot find: node in PATH error half the time.

    It took me most of Saturday morning, but in the end I managed to get this MacOS runner working properly. I’ve not done anything MacOS-specific yet, so I suspect I may have some Xcode-related stuff to do, but the backup job is running now and I can see it write stuff to the external hard drive.

    The backup routine itself is a simple Go application that’s kicked off daily by a scheduled Forgejo Action (it’s not in the documentation yet, but the version of Forgejo I deployed does support scheduled actions). It makes a backup of the Forgejo instance, the PostgreSQL database, and all the repository data using SSH and Rsync.
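
    Conceptually, that sort of backup application is little more than shelling out to ssh, pg_dump, and rsync. Here’s a stripped-down sketch of the idea; the host name and paths are placeholders, not the actual ones:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Dump the PostgreSQL database over SSH and write it to the backup drive.
        dump, err := exec.Command("ssh", "forge", "pg_dump", "forgejo").Output()
        if err != nil {
            log.Fatalf("dumping database: %v", err)
        }
        if err := os.WriteFile("/Volumes/Backup/forgejo/forgejo.sql", dump, 0o600); err != nil {
            log.Fatal(err)
        }

        // Mirror the Forgejo data directory (repositories, LFS objects) using rsync over SSH.
        rsync := exec.Command("rsync", "-az", "--delete",
            "forge:/mnt/data/forgejo/", "/Volumes/Backup/forgejo/data/")
        rsync.Stdout, rsync.Stderr = os.Stdout, os.Stderr
        if err := rsync.Run(); err != nil {
            log.Fatalf("syncing repository data: %v", err)
        }
    }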

    I won’t share these repositories as they contain references to paths and such that I consider sensitive; but if you’re curious about what I’m using for the launch agent settings, here’s the plist file I’ve made:

    <!-- dev.lmika.repo-admin.macos-runner.plist -->
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>dev.lmika.repo-admin.macos-runner</string>
        <key>ProgramArguments</key>
        <array>
            <string>/Users/lmika/opt/macos-runner/scripts/run-runner.sh</string>
        </array>
        <key>KeepAlive</key>
        <true/>
        <key>RunAtLoad</key>
        <true/>
        <key>StandardErrorPath</key>
        <string>/tmp/runner-logs.err</string>
        <key>StandardOutPath</key>
        <string>/tmp/runner-logs.out</string> 
    </dict>
    </plist>
    

    This is deployed by copying it to $HOME/Library/LaunchAgents/dev.lmika.repo-admin.macos-runner.plist, and then installed and enabled by running these commands:

    launchctl bootstrap gui/$UID "Library/LaunchAgents/dev.lmika.repo-admin.macos-runner.plist"
    launchctl kickstart gui/$UID/dev.lmika.repo-admin.macos-runner
    

    The Price Of A Name

    One might see this endeavour, when viewed from a pure numbers and effort perspective, as a bit of a crazy thing to do. Saying “no” to all this cheap code hosting, complete with the backing of a large corporation, just for the sake of a name? I can’t deny that this may seem a little unusual, even a little crazy. After all, it’s more work and more money. And I’m not going to suggest that others follow me into this realm of a self-hosted SCM.

    But I think my code deserves its own name now. After all, my code is my work; and much like we encourage writers to write under their own domain name, or for artists and photographers to move away from the likes of Instagram and other such services, so too should my work be under a name I own and control. The code I write may not be much, but it is my own.

    Of course, I’m not going to end this without my usual “we’ll see how we go” hedge against myself. I can only hope I’ve got enough safeguards in place to save me from my own decisions, or to easily move back to a hosted service, when things go wrong or when it all becomes just a bit much. More on that in the future, I’m sure.

    A Bit of 'Illuminating' Computer Humour

    Here’s some more computer-related humour to round out the week:

    How many software developers does it take to change a lightbulb? Just one.

    How many software developers does it take to change 2 lightbulbs? Just 10.

    How many software developers does it take to change 7 lightbulbs? One, but everyone within earshot will know about it.

    How many software developers does it take to change 32 lightbulbs? Just one, provided the space is there.

    How many software developers does it take to change 35 lightbulbs? Just one. #lightbulbs

    How many software developers does it take to change 65 lightbulbs? Just one, if they’re on their A grade.

    How many software developers does it take to change 128 lightbulbs? Just one, but they’ll be rather negative about it.

    How many software developers does it take to change 256 lightbulbs? What lightbulbs?

    Enjoy your Friday.

    A meme with a grey background and a lightbulb in the centre that's not illuminated. The text reads: Q: How many QAs does it take to change the lightbulbs changed by the software developers? A: How many have you got?

    Asciidoc, Markdown, And Having It All

    Took a brief look at Asciidoc this morning.

    This is for that Markdown document I’ve been writing in Obsidian. I’ve been sharing it with others using PDF exports, but its importance has grown to a point where I need to start properly maintaining a change log. And also… sharing via PDF exports? What is this? Microsoft Word in the 2000s?

    So I’m hoping to move it to a Gitlab repo. Gitlab does support Markdown with integrated Mermaid diagrams, but not Obsidian’s extension for callouts. I’d like to be able to keep these callouts as I used them in quite a few places.

    While browsing through Gitlab’s help guide on Markdown extensions, I came across their support for Asciidoc. I haven’t tried Asciidoc before, and after taking a brief look at it, it seemed like a format better suited for the type of document I’m working on. It has things like auto-generated table of contents, built-in support for callouts, proper title and heading separations; just features that work better than Markdown for long, technical documents. The language syntax also supports a number of text-based diagram formats, including Mermaid.

    However, as soon as I started porting the document over to Asciidoc, I found it to be no Markdown in terms of mind share. Tool support is quite limited, in fact it’s pretty bad. There’s nothing like iA Writer for Asciidoc, with the split-screen source text and live preview that updates when you make changes. There’s loads of these tools for Markdown, so many that I can’t keep track of them (the name of the iA Writer alternative always eludes me).

    Code editors should work, but they’re not perfect either. GoLand supports Asciidoc, but not with embedded Mermaid diagrams. At least not out of the box: I had to get a separate JAR which took around 10 minutes to download. Even now I’m fighting with the IDE, trying to get it to find the Mermaid CLI tool so it can render the diagrams. I encountered none of these headaches when using Markdown: GoLand supports embedded Mermaid diagrams just fine. I guess I could try VS Code, but to download it just for this one document? Hmm.

    In theory the de-facto CLI tool should work, but in order to get Mermaid diagrams working there I need to download a Ruby gem and bundle it with the CLI tool (this is in addition to the same Mermaid command-line tool GoLand needs). Why this isn’t bundled by default in the Homebrew distribution is beyond me.

    So for now I’m abandoning my wish for callouts and just sticking with Markdown. This is probably the best option, even if you set tooling aside. After all, everyone knows Markdown, a characteristic of the format that I shouldn’t simply ignore. Especially for these technical documents, where others are expected to contribute changes as well.

    It’s a bit of a shame though. I still think Asciidoc could be better for this form of writing. If only those that make writing tools would agree.

    Addendum: after drafting this post, I found that Gitlab actually supports auto-generated table of contents in Markdown too. So while I may not have it all with Markdown — such as callouts — I can still have a lot.

    My Position On Blocking AI Web Crawlers

    I’m seeing a lot of posts online about sites and hosting platforms blocking web crawlers used for AI training. I can completely understand their position, and fully support them: it’s their site and they can do what they want.

    Allow me to lay my cards on the table. My current position is to allow these crawlers to access my content. I’m choosing to opt in, or rather, not to opt out. I’m probably in the minority here (well, the minority of those I follow), but I do have a few reasons for this, the principal one being that I use services like ChatGPT and get value from them. So to prevent them from training their models on my posts feels personally hypocritical to me. It’s the same reason why I don’t opt out of Github Copilot crawling my open source projects (although that’s a little more theoretical, as I’m not a huge user of Copilot). To some, this position might sound weird, and when you consider the gulf between the value these AI companies get from scraping the web versus the value I get from them as a user, it may seem downright stupid. And if you approach it from a logical perspective, it probably is. But hey, we’re in the realm of feelings, and right now this is just how I feel. Of course, if I were to make a living out of this site, it would be a different story. But I don’t.

    And this leads to the tension I see between site owners making decisions regarding their own content, and services making decisions on behalf of their users. This site lives on Micro.blog, so I’m governed by what Manton chooses to do or not do regarding these crawlers. I’m generally in favour of what Micro.blog has chosen so far: allowing people to block these scrapers via “robots.txt”, but not yet blocking requests based on their IP address. I’m aware that others may not agree, and I can’t, in principle, reject the notion of a hosting provider choosing to block these crawlers at the network layer. I am, and will continue to be, a customer of such services.
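    For those who do want to opt out on their own sites, the robots.txt approach comes down to a few standard directives. A sketch, using a couple of crawler user agents I believe these companies have published (treat the exact names as assumptions worth verifying):

    ```
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /
    ```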

    But I do think some care should be taken here, especially when it comes to customers (and non-customers) asking these services to add these network blocks. You may have good reason to demand this, but just remember there are users of these services whose opinions may differ. I would personally prefer a mechanism where you opt into these crawlers, and that’s an option I’d probably take (or probably not; my position is not that strong). I know that’s not possible under all circumstances, so I’m not going to cry too much if I end up with a blanket ban instead of being offered that choice.

    I will make a point about some comments I’ve seen that, if taken uncharitably, imply that creators who have no problem with these crawlers do not care about their content. I think such opinions should be worded carefully. I know how polarising the use of AI currently is, and making such remarks, particularly within posts that are already heated due to the author’s feelings about these crawlers, risks spreading this heat to those who read them. The tone gives the impression that creators okay with these crawlers don’t care about what they push online, or should care more than they do. That might be true for some — it might even be true for me once in a while — but making such blanket assumptions can come off as a little insulting. And look, I know that’s not what they’re saying, but it can come across that way at times.

    Anyway, that’s my position as of today. Like most things here, this may change over time, and if I become disenchanted with these companies, I’ll join the blockade. But for the moment, I’m okay with sitting this one out.

    Thinking About Plugins In Go

    Thought I’d give Go’s plugin package a try for something. It seems to work fine for the absolutely simple things. But start importing any dependencies and it becomes a non-starter. You start seeing these sorts of error messages when you try to load the plugin:

    plugin was built with a different version of package golang.org/x/sys/unix
    

    Looks like the host and plugins need to have exactly the same dependencies. To be fair, the package documentation says as much, and also states that the best use of plugins is for dynamically loaded modules built from the same source. But that doesn’t help with what I’m trying to do, which is encoding a bunch of private struct types as Protobuf messages.
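    For the record, what I was trying is nothing fancy; it’s essentially the standard plugin.Open and Lookup dance. A minimal sketch (the symbol name and signature are made up for illustration):

    ```go
    package main

    import (
        "fmt"
        "log"
        "plugin"
    )

    func main() {
        // Open the compiled plugin (built with: go build -buildmode=plugin).
        // This is where the "different version of package" error shows up.
        p, err := plugin.Open("encoder.so")
        if err != nil {
            log.Fatal(err)
        }

        // Look up an exported symbol; here, a hypothetical encoding function.
        sym, err := p.Lookup("EncodeToProto")
        if err != nil {
            log.Fatal(err)
        }

        encode, ok := sym.(func(any) ([]byte, error))
        if !ok {
            log.Fatal("EncodeToProto has an unexpected signature")
        }

        out, err := encode(struct{ Name string }{Name: "example"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("encoded %d bytes\n", len(out))
    }
    ```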

    So it might be that I’ll need to find another approach. I wonder how others would do this. An embedded scripting language would probably not be suitable, since I’m dealing with Protobuf and byte slices. Maybe building the plugin as a C shared object? That could work, but then I’d lose all the niceties that come from using Go’s type system.

    Another option would be something like WASM. It’s interesting seeing WASM modules becoming a bit of a thing for plugin architectures. There’s even a Go runtime to host them. The only question is whether they would have the same facilities a regular process would have, like network access, or whether they’re completely sandboxed, and you as the plugin host would need to add support for these facilities.
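    A rough sketch of what hosting such a module might look like, assuming the runtime in question is wazero (I haven’t actually tried this, so treat the details as a guess at the shape rather than something known to work):

    ```go
    package main

    import (
        "context"
        "log"
        "os"

        "github.com/tetratelabs/wazero"
    )

    func main() {
        ctx := context.Background()

        // Create the runtime; by default the module is fully sandboxed,
        // with no file or network access unless the host wires it up.
        r := wazero.NewRuntime(ctx)
        defer r.Close(ctx)

        // Load and instantiate the plugin module (path is hypothetical).
        wasmBytes, err := os.ReadFile("encoder.wasm")
        if err != nil {
            log.Fatal(err)
        }
        mod, err := r.Instantiate(ctx, wasmBytes)
        if err != nil {
            log.Fatal(err)
        }

        // Call an exported function. WASM exports only deal in numbers,
        // so passing byte slices means copying them into the module's
        // memory and handing over a pointer/length pair.
        fn := mod.ExportedFunction("encode")
        if fn == nil {
            log.Fatal("module does not export encode")
        }
        results, err := fn.Call(ctx, 0, 0) // placeholder pointer/length
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("result: %v", results)
    }
    ```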

    I guess I’d find out if I were to spend any more time looking at this. But this has been a big enough distraction already. Building a separate process to shell out to would work just fine, so that’s probably what I’ll ultimately do.

    Word Cloud

    From Seth’s blog:

    Consider building a word cloud of your writing.

    Seems like a good idea so that’s what I did, taking the contents of the first page of this blog. Here it is:

    A word cloud containing the words of the first page of this blog

    Some observations:

    • One of the most prominent words is “just”, with “it’s” not far behind. I thought it was because I started a lot of sentences with “it’s just”, but it turns out I’ve only used that phrase once, while the individual words show up around 10 times each. I guess I use “just” a lot (apparently, so does Seth). I am surprised to see the word “anyway” only showing up twice.
    • Lots of first-person pronouns and contractions, like “I’m”, “I’ve”, and “mine”. That’s probably not going to change either. This is just1 the tonal choice I’ve made. I read many blogs that mainly speak in the second person, and I don’t think it’s a style that works for me. Although I consciously know that they’re not speaking to me directly, or even to the audience as a whole, I don’t want to give that impression myself, unless that’s my intention. So it’ll be first person for the foreseeable future, I’m sure.
    • Because it’s only the first page, many of the more prominent words are from recent posts. So lots about testing, OS/2, and Bundanoon. I would like to cut down on how much I write about testing. A lot of it is little more than venting, which I guess is what one does on their blog, but I don’t want to make a habit of it.
    • I see the word “good” is prominent. That’s good: not a lot of negative writing (although this is a choice too).
    • I see the word “video” is also prominent. That’s probably not as good. Might be a sign I’m talking a little too much about the videos I’ve been watching.

    Anyway, I thought these findings were quite interesting. One day, I’ll have to make another word cloud across all the posts on this blog.
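    If I ever do, the counting behind such a cloud is simple enough to sketch in a few lines of Go (purely illustrative, and nothing to do with how the image above was actually generated):

    ```go
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func main() {
        text := "it's just an example it's just words" // stand-in for the blog's front page

        // Count word frequencies, ignoring case and surrounding punctuation.
        counts := make(map[string]int)
        for _, w := range strings.Fields(strings.ToLower(text)) {
            w = strings.Trim(w, ".,!?:;\"()")
            if w != "" {
                counts[w]++
            }
        }

        // Sort by frequency so the most prominent words come first.
        words := make([]string, 0, len(counts))
        for w := range counts {
            words = append(words, w)
        }
        sort.Slice(words, func(i, j int) bool { return counts[words[i]] > counts[words[j]] })

        for _, w := range words {
            fmt.Printf("%3d %s\n", counts[w], w)
        }
    }
    ```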

    Day Trip to Bundanoon

    Decided to go on a day trip to Bundanoon today. It’s been five years since I last visited, and I remember liking the town enough to think it’d be worth visiting again. It’s not close, around 1 hour and 40 minutes from Canberra, but it’s not far either, and I thought it would be a nice way to spend the day. Naturally, others agreed, which I guess explains why it was busier than I expected, what with the long weekend and all. Fortunately, it wasn’t too crowded, and I still had a wonderful time.

    The goal was to go on a bush-walk first. I chose to do the Erith Coal Mine track, for no particular reason other than it sounded interesting. This circuit track was meant to take you to a waterfall by an old coal mine. However, the track leading to the actual mine was closed, thanks to the recent rain. In fact, if I could describe the bush-walks in one word, it would be “wet”. The ground and the paths were soaked, and although conditions were otherwise lovely, everything was still very slippery.

    I assume the mine was across these rocks, but there was no way I was going to cross them.

    After completing that circuit in probably 45 minutes, my appetite for bush-walking was still unsatisfied, so I tried the Fairy Bower Falls walk next. It was not as steep as the first one, but it turned out to be a much harder track due to how wet and slippery everything was.

    I stopped short of the end of this one too, as it seemed the path had been washed away. But I did manage to get a glimpse of the waterfall, so I’m counting that as a win.

    After that, I returned to the town for lunch and some train spotting. The train line to Goulburn runs through Bundanoon, and the last time I was there, a freight train passed through every hour or so. So I was hoping to see a good amount of freight traffic, and maybe shoot a video of a train passing through the station that I could share here.

    Auto-generated description: A quaint train station platform with a sign that reads Bundanoon is shown, surrounded by trees and blue skies.
    Bundanoon train station.

    I had lunch outside and walked around the town a little, always within sight of the railway line, hoping for at least one train to pass through. But luck wasn’t on my side, and it wasn’t until I was on my way home that I saw what I think was a grain train passing through Wingello. I pulled over to take a video, and while I missed the locomotive, I got a reasonable enough recording of the wagons.

    Stopping by the side of the road to film these grain wagons passing by.

    Being a little more hopeful, I stopped at Tallong, the next town along the road. I bought a coffee and went to the station to drink it and hopefully see a train pass through. Sadly, it was not to be. So I decided to head back home.

    Auto-generated description: A quiet train station platform is shown with tracks stretching into the distance and surrounded by trees.
    Tallong train station.

    So the train spotting was a bust, and the bush-walks were difficult, but all in all it was quite a nice day. I look forward to my next visit to Bundanoon. Let’s hope the trains are running a little more frequently then.
