On Go Interfaces And Component-Oriented Design
Golang Weekly had a link to a thoughtful post about interfaces. It got me thinking about my use of interfaces in Go, and how I could improve here. I’ve been struggling with this a little recently. I think there’s still a bit I’ve got to unlearn.
In the Java world, where I got my start, the principle behind maintainable systems was a component-based approach: small, internally coherent units of functionality that you stick together. These units were self-contained, dependencies were well defined, and everything was tested in isolation. This meant lots of interfaces, usually defined up front before you even start writing the code that uses them.
I can see why this is popular. The appeal of component design is reuse: build one component for your system and you can use it in another system. I think the principles come from electrical engineering, where you build isolated components, like a transistor or an IC, that when put together produce the working electrical system. So it’s no surprise that this was adopted by the Java and object-oriented community, which took such ideals of reuse to extreme levels, as if you could build a system out of the components of another system (this seemed like the whole motivation behind J2EE). Nevertheless, it was a pattern of design that appealed to me, especially my penchant for coming up with grand designs of abstraction layers.
But recently, I’ve been experiencing a bit of a loss in religion. As the post points out, the ideas of component design have merit in principle, but they start to break down in reality. Code reuse isn’t free, and if taken too far, you end up spending so much effort on the costs of abstraction (rewriting models, maintaining interfaces, dealing with unit test mocks) for very little benefit. Are you really going to reuse that “catalogue manager” Go service in something else?
So I agree with the post, but I come away from it wondering what an alternative to component design actually looks like. I’m still trying to figure this out, and it might be that I’ll need to read up on this some. But maybe it’s to take the idea of self-contained units, and throw away the imagined idea of reuse. In concrete terms, ditch the interfaces and replace them with direct method calls.
As for testing, maybe focus less on testing individual units and more on the system as a whole. I’m already on board with the idea of not mocking out the database in unit tests, but I’m starting to come around to the idea of a “unit” being more than just a single type tested in isolation. I’m guessing this is inevitable as you throw away your interfaces and start depending on other types explicitly. That is, until you begin seeing areas where reuse is possible. But maybe this gets back to the original idea of Go interfaces: they’re more discovered than prescribed.
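To make that concrete, here’s a rough Go sketch of the direct-call style. Every name here (Catalogue, Service, Lookup, and so on) is made up for illustration: the point is that Service depends on a concrete *Catalogue, and no CatalogueProvider interface is declared before a second implementation exists.

```go
package main

import "fmt"

// Catalogue is a concrete dependency: no interface defined up front.
type Catalogue struct {
	items map[string]string
}

func NewCatalogue() *Catalogue {
	return &Catalogue{items: map[string]string{"sku-1": "Widget"}}
}

// Lookup returns the item name for a SKU, and whether it exists.
func (c *Catalogue) Lookup(sku string) (string, bool) {
	name, ok := c.items[sku]
	return name, ok
}

// Service depends on *Catalogue directly, rather than on an interface
// declared before any second implementation exists. If reuse ever
// shows up, an interface can be "discovered" at the point of use.
type Service struct {
	cat *Catalogue
}

func (s *Service) Describe(sku string) string {
	if name, ok := s.cat.Lookup(sku); ok {
		return name
	}
	return "unknown"
}

func main() {
	s := &Service{cat: NewCatalogue()}
	fmt.Println(s.Describe("sku-1")) // Widget
}
```

If a second catalogue implementation ever appears, a small interface with just the Lookup method can be declared next to Service, and *Catalogue will satisfy it without any changes.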
Anyway, it might be worth trying this approach for the next personal project I have in mind. Of course, that’s the easy part. Finding a way to adopt this pattern at work would be interesting.
So apparently tonight’s earworm is lesser known songs from Men At Work’s “Business As Usual” album, like People Just Love To Play With Words, I Can See It In Your Eyes, and Be Good Johnny. 🎵
In the end it took significantly more time to write about it than to actually do it, but the dot product approach seems to work.
🎥 Elm Connections #7: Deploying
In which I round out the series by deploying Clonections to Netlify.
Detecting A Point In a Convex Polygon
Note: there are some interactive elements and MathML in this post. So for those reading this in RSS, if it looks like some formulas or images are missing, please click through to the post.
For reasons that may or may not be made clear later, I’ve been working on something involving bestagons. I tended to shy away from things like this before, mainly because of the maths involved in tasks like determining whether a point is within a hexagon. But instead of running away once again from things more complex than a grid, I figured it was time to learn this once and for all. So off I went.
First stop was Stack Overflow, and this answer on how to test if a point is inside a convex polygon:
You can check that easily with the dot product (as it is proportional to the cosine of the angle formed between the segment and the point, if we calculate it with the normal of the edge, those with positive sign would lay on the right side and those with negative sign on the left side).
I suppose I could’ve taken this answer as it is, but I know if I did, I’d have something that’ll be little more than magic. It’ll do the job, but I’d have no idea why. Now, like many, if I can get away with having something that works without knowing how, I’m likely to take it. But when it comes to code, doing this usually comes back to bite me in the bum. So I’m trying to look for opportunities to dig a little deeper than I otherwise would, and learn how and why something works.
It took me a while, and a few false starts, but I think I got there in the end. And I figured it would be helpful for others to know how I came to understand this at all. And yeah, I’m sure this is provable with various theorems and relationships, but that’s just a little too abstract for me. No, what got me to the solution in the end was visualising it, which I’ll attempt to explain below.
First, let’s ignore polygons completely and consider a single line. Here’s one, represented as a vector:

Oh, I should point out that I’m assuming that you’re aware of things like vectors and trigonometric functions, and have heard of things like dot-product before. Hopefully it won’t be too involved.
Anyway, we have this line. Let’s say we want to know if a specific point is to the “right” of the line. Now, if the line was vertical, this would be trivial to do. But here we’ve got a line that’s on an angle. And although a phrase like “to the right of” is still applicable, it’ll only be a matter of time before we have a line where “right” and “left” have no meaning to us.
So let’s generalise it and say we’re interested in seeing whether a point is on the same side as the line’s normal.
Now, there are actually two normals available to us, one going out on either side of the line. But let’s pick one and say we want the normal that points to the right if the line segment is pointing directly up. We can add that to our diagram as a grey vector:

Now let’s consider this point. We can represent it as a vector that shares the same origin as the line segment1. With this we can do all sorts of things, such as work out the angle between the two (if you’re viewing this in a browser, you can tap on the canvas to reposition the green ray):

This might give us a useful solution to our problem here; namely, if the angle between the two vectors falls between 0° and 180°, we can assume the point is to the “right” of the line. But we may be getting ahead of ourselves. We haven’t even discussed how we can go about “figuring out the angle” between these vectors.
This is where the dot product comes in. The dot product is an equation that takes two vectors and produces a scalar value, based on the formula below:

a · b = a₁b₁ + a₂b₂
One useful relationship of the dot product is that it’s proportional to the cosine of the angle between the two vectors:

a · b = |a| |b| cos θ
Rewriting this gives us a formula that will return the angle between two vectors:

θ = cos⁻¹((a · b) / (|a| |b|))
So a solution here would be to calculate the angle between the line and the point vector, and as long as it falls between 0° and 180°, we can determine that the point is on the “right” side of the line.
Now, I actually tried this approach in a quick and dirty mockup using JavaScript, but I ran into a bit of an issue. For you see, the available inverse cosine function never returns a value beyond 180°. When you think about it, this kinda makes sense, as the cosine function starts moving from -1 back to 1 once the angle grows beyond 180° (or shrinks below 0°), so two different angles can share the same cosine.
But we have another vector at our disposal, the normal. What if we were to calculate the angle between those two?

Ah, now we have a relationship that’s usable. Consider when the point moves to the “left” of the line. You’d notice that the angle is either greater than 90° or less than –90°. These just happen to be the angles for which the cosine function yields a negative result. So a possible solution is to work out the angle between the point vector and the normal, take the cosine, and if it’s positive, the point will be on the “right” side of the line (and it’ll be on the “left” side if the cosine is negative).
But we can do better than that. Looking back at the relationship between the dot product and the angle, we can see that the only way this equation could be negative is if the cosine is negative, since the vector magnitudes are always positive. So we don’t even need to work out angles at all. We can just rely on the sign of the dot product between the point and the normal.
And it’s here that the solution clicked. A point is to the “right” of a line if the dot product of the point vector and the “right”-sided normal is positive. Look back at the original Stack Overflow answer above, and you’ll see that’s pretty much what was said there as well.
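Here’s a minimal Go sketch of that side-of-line test, with vectors passed as plain [2]float64 pairs. The names and conventions are mine, and it assumes a Y-up coordinate system:

```go
package main

import "fmt"

// rightOf reports whether point p lies to the "right" of the line
// through origin o in direction d, following the post's convention:
// the right-hand normal of a vector pointing straight up.
func rightOf(o, d, p [2]float64) bool {
	// Right-hand normal of d: rotate d 90° clockwise (Y-up).
	n := [2]float64{d[1], -d[0]}
	// Vector from the line's origin to the point.
	v := [2]float64{p[0] - o[0], p[1] - o[1]}
	// A positive dot product means p is on the same side as the normal.
	return n[0]*v[0]+n[1]*v[1] > 0
}

func main() {
	o := [2]float64{0, 0}
	d := [2]float64{0, 1} // line pointing straight up
	fmt.Println(rightOf(o, d, [2]float64{1, 0}))  // true: to the right
	fmt.Println(rightOf(o, d, [2]float64{-1, 0})) // false: to the left
}
```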
Now that we’ve got this working for a single line, it’s trivial to extend this to convex2 polygons. After including all the line segments, with the normals pointing inwards, calculate the dot product between each of the normals with the point, and check the sign. If they’re all positive, the point is within the polygon. If not, it’s outside.
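Putting it all together, here’s what the point-in-convex-polygon test might look like in Go. This sketch assumes the vertices are listed counter-clockwise in a Y-up coordinate system, so rotating each edge 90° counter-clockwise yields the inward-pointing normal:

```go
package main

import "fmt"

// Point is a 2D point or vector.
type Point struct {
	X, Y float64
}

// Sub returns the vector from q to p.
func (p Point) Sub(q Point) Point {
	return Point{p.X - q.X, p.Y - q.Y}
}

// Dot returns the dot product of two vectors.
func (p Point) Dot(q Point) float64 {
	return p.X*q.X + p.Y*q.Y
}

// inwardNormal returns the normal of edge a→b that points into the
// polygon, assuming counter-clockwise winding in a Y-up system.
func inwardNormal(a, b Point) Point {
	e := b.Sub(a)
	return Point{-e.Y, e.X} // rotate the edge 90° counter-clockwise
}

// contains reports whether pt lies inside (or on the boundary of) the
// convex polygon whose vertices are listed counter-clockwise.
func contains(poly []Point, pt Point) bool {
	n := len(poly)
	for i := 0; i < n; i++ {
		a, b := poly[i], poly[(i+1)%n]
		// Dot the inward normal with the vector from the edge's
		// origin to the point: negative means pt is outside this edge.
		if inwardNormal(a, b).Dot(pt.Sub(a)) < 0 {
			return false
		}
	}
	return true
}

func main() {
	square := []Point{{0, 0}, {2, 0}, {2, 2}, {0, 2}} // counter-clockwise
	fmt.Println(contains(square, Point{1, 1})) // true
	fmt.Println(contains(square, Point{3, 1})) // false
}
```

If your vertices are wound clockwise instead, flip the rotation in inwardNormal (or flip the sign test).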

So here’s an approach that’ll work for me, and is relatively easy and cheap to work out. And yeah, it’s not a groundbreaking approach, and basically involved relearning a bunch of linear algebra I’ve forgotten since high school. But hey, better late than never.
So I guess today’s beginning with a game of “guess the secret password requirements.” 😒

🎥 Elm Connections #6: Fetching And JSON Decoding Puzzles
In which I use Elm’s HTTP client and JSON decoder to fetch puzzles from an external resource.
For some reason, Android’s default setting for message notifications in Do Not Disturb is to notify on all messages. This, to me, seems like it defeats the purpose of DND.
You can turn it off or change it by going to Settings > Notifications > Do Not Disturb > People > Messages.

Re-reading Cory Doctorow’s post about the enshittification of TikTok. A bit coincidental, as a YouTuber I follow recently stated that he had to cut down on videos and look for work because YouTube’s push for Shorts has had an impact on revenue. Could this be how YouTube starts enshittifying?
All you have to do is dream up a good URL at your domain and redirect it to the feed’s URL provided by whatever service you use to host your stuff. And then that’s where you tell folks to subscribe.
It’s easy to forget (like I do) that there’s nothing magical about an RSS feed. It’s just one more thing served over HTTP at a URL, and is thus usable with the real magic here: HTTP redirects.
This is a brilliant idea. The only thing I’ll add is just to make those RSS feeds discoverable.
Discovered a few days ago that I was completely out of coffee beans. So after getting some emergency beans from the supermarket, I ordered a kilo of my default: Primo Fair Trade Organic.
And yes, part of the reason for this post is that I forgot the URL of this site.
Impressed with the new table editor added in Obsidian v1.5. Tables were Obsidian’s Achilles’ heel so it’s great to see them improve this. There’s a small bug when opening a new row, where focus is lost after I press Enter and I can’t just start typing in the new row. But otherwise, good job.
Edit: Ah, the bug might be because I had the “Advanced Table” plugin enabled. Turning that off seems to have fixed it, and I don’t lose focus anymore.
Summer break over, back to work today. Though I’m glad I took an extra day of leave this time. Pushed the return to work feeling from “it’s too soon” to “okay, I’m ready to go back now.”
🎥 Elm Connections #5: Option Shuffling
In which I use Elm’s random number generator to shuffle the options.
The sun was peeking through the clouds this morning, so much so that for a minute I wondered whether it was worth taking my umbrella to the cafe. I’m glad I did, because storms developed a few minutes ago and now it’s raining. For once, I’m ahead of the weather. ☔️
Of Lemons And Modern Software
I found myself nodding my head throughout Alex Russell’s post The Market For Lemons:
The complexity merchants knew their environments weren’t typical, but they sold highly specialised tools as though they were generally appropriate. They understood that most websites lack tight latency budgeting, dedicated performance teams, hawkish management reviews, ship gates to prevent regressions, and end-to-end measurements of critical user journeys. They understood the only way to scale JS-driven frontends are massive investments in controlling complexity, but warned none of their customers.
He’s talking about JavaScript frameworks and their failure to produce performant web frontends. But it’s amazing how much of this also applies to the backend world as well. Replace SPA with micro-services, and React with Kubernetes, and you’ve pretty much nailed the problem in that space too.
I’m willing to believe that the trap we find ourselves in is not entirely the fault of these vendors. I can picture a scenario where some dev got curious about React, used it for a small project, thought it was the bee’s knees, and started sharing it with others. The tech was complex enough to be usable yet interesting, and had enough of a pedigree that it was easy to persuade others to adopt it (“Facebook and Google use this, and they’re bajillionaires serving half the world’s population. Why doubt tech from them?”)
Eventually it receives mainstream adoption, and despite the issues, it’s the default choice for anything new because of two characteristics. The first is the sunk cost associated with learning these frameworks, and justifying the battle scars that come from fighting with them.
The second is the fear of rework. I suspect most organisations forget that many of these Big Tech firms started with what were effectively LAMP stacks. They had to replace them with what they have now because of user demand, and the decision was easy because the millions of requests they got per second were stressing their software.
And I suspect organisations today feel the need to skip the simple tech stacks and go straight to something that can serve those billions of users just to save themselves the perceived future need to rewrite the app. Sure, that server-side rendered web page handled by a single web server and PostgreSQL database is enough to serve those 100 DAU now, but someday that may grow to three billion users, and we need to be ready for that possibility now (even if it doesn’t materialise).
So I totally understand the mess we’re in now. But that doesn’t explain why these vendors went to such lengths promoting this tech in the first place. Why would they not be upfront about the massive investment they made to be effective with this tech? Why not disclose the reason why they specifically had to go to such lengths, rather than tout properties like “the DOM is slow” or “Amazon uses micro-services”?
I’d be curious to find out, but maybe later. I’ve got some Kubernetes pods I’ve got to deal with first.
In which I put away Elm for a bit to make the playfield look good (or at least, better than it was).
2023 Year In Review
Well, once more around the sun and it’s time again to look back on the year that was.
Career
Reflecting on the work we did this past year, there were a few highlights. We managed to get a few major things released, like the new billing and resource usage tracking system (not super exciting, but it was still fun to work on). And although the crunch period we had was a little hard — not to mention the 3 AM launch time — it was good to see it delivered on time. We’re halfway through another large change that I hope to get out before the end of summer, so it’ll probably be full steam ahead when I go back to work this week.
Highlights aside, there’s not much more to say here. Still feel like my career is in a bit of a rut. And although I generally still like my job, it’s difficult seeing a way forward that doesn’t involve moving away from being an “individual contributor” to a more managerial role. Not sure I like that prospect — leading a squad of 6 devs is probably the maximum number of people I can manage.
And honestly, I probably need to make thinking about this more of a priority for the new year. I’ve been riding in the back seat on this aspect of my life long enough. Might be time to spend a bit more effort on driving aspects of my career, rather than letting things just happen to me.
Ok, that’s out of the way. Now for the more exciting topics.
Projects
Dynamo-browse is ticking along, which is good. I’ve added a few features here and there, but there’s nothing huge that needs doing to it right now. It’s received some traction from others, especially people from work. But I gotta be honest, I was hoping that it would be better received than it was. Oh yes, the project website gets visitors, but I just get the sense it hasn’t seen as many takers as I had hoped (I don’t collect app metrics, so I don’t know for certain). I’d like to say that it doesn’t matter: so long as I find it useful (and I do), that’s all that counts. And yeah, that’s true. But I’d be lying if I said I didn’t wish others would find it useful as well. Ah well.
One new “major” project released this past year was F5 To Run. Now this, I’m not expecting anyone else to be interested in other than myself. This project to preserve the silly little games I made when I was a kid was quite a success. And now that they’re ensconced in the software equivalent of amber (i.e. web technologies), I hope they can live on for as long as I’m paying for the domain. Much credit goes to those that ported DosBox to a JavaScript library. The games were reasonably easy to port over (it was just making the images), and it’s a testament to their work that stuff built on primitive ’90s IBM PC technologies is actually the easiest to run this way. I just need to find a way to do this to my Windows projects next.
Another “major” project that I’m happy to have released was Mainboard Mayhem, a Chips Challenge clone. This was one of those projects that I’ve been building for myself over the last ten years, and debating with myself whether it was worth releasing or not. I’ve always sided on not releasing it for a number of reasons. But this past year, I figured it was time to either finish and release it, or stop work on it all together. I’m happy with the choice I made. And funnily enough, now that it’s finished, I haven’t had a need to tinker with it since (well, apart from that one time).
There’ve been a few other things I worked on this past year, many of which didn’t really go anywhere. The largest abandoned project was probably the one involving those reclaimed devices built for schools, which I was planning to flash to become scoring devices. The biggest hurdle is connecting to the device. The loader PCB that was shipped to me didn’t quite work, as the pins weren’t making good contact (plus, I broke one of them, which didn’t improve things). The custom board I built to do the same thing didn’t work either: the pins were too short and uneven. So I never got to do anything with them. They’re currently sitting in my cupboard in their box, gathering dust. I guess I could unscrew the back and hook wires up to the appropriate solder points, but that’s a time investment I’m not interested in making at the moment.
This project may rise again with hardware that’s a little easier for me to work with. I have my eye on the BeepBerry, which looks to be easier to work with, at least with my skills. I added my name to the wait-list, but hearing from others, it might be some time before I can get my hands on one (maybe if the person running the project spent less time fighting with Apple, he could start going through the wait-list).
So yeah, got a few things finished this year. On the whole, I would like to get better at getting projects out there. Seeing people like Robb Knight who seem to just be continuously releasing things has been inspiring. And it probably doesn’t need to be all code either. Maybe other things, like music, video, and written prose. Throw a bit of colour into the mix.
Speaking of written prose…
Writing And Online Presence
The domain reduction goal continues. I’m honestly not sure if it’s better or worse than last year. I didn’t record the number of registered domains I had at the start of 2023, but as of 27 December 2023, the domain count is at 25, of which 16 have auto-renew turned on.
| Domains | Count |
|---|---|
| Registered | 25 |
| With auto-renew turned on | 16 |
| Currently used for something | 13 |
| Not currently used for something but worth keeping | 3 |
| Want to remove but stuck with it because it’s been shared by others | 1 |
Ultimately I’d like to continue cutting down the number of new domains I register. It’s getting to be an expensive hobby. I’ve started to switch to sub-domains for new things, so I shouldn’t be short of possible URLs for projects.
I’m still trying to write here once a day, mainly to keep me from falling out of the habit. But I think I’m at the point where I can start thinking less about the need for a daily post, and focus more on “better” posts as a whole. What does “better” mean? 🤷 Ultimately it’s in the eye of the beholder, but publishing fewer posts that I find “cringeworthy” is a start. And maybe having fewer posts that are just me complaining about something that happened that day. Maybe more about what I’m working on, or interesting things I encounter. Of course, this is all just a matter of balance: I do enjoy reading (and writing) the occasional rant, and writing about things that frustrate me is cathartic. Maybe just less of that in the new year.
I did shut down the other blog I was using for tech and project posts. It’s now a digital garden and knowledge base, and so far it’s working quite well. In retrospect, I’m so glad I did this. I was paying unnecessary cognitive overhead deciding which blog a post should go to. They all just go here now.
Travel
Oof, it was a big year of travel this past year. The amount of time I’ve spent away from home comes to 10 weeks in total, a full 20% of the year. This might actually be a record.
The highlight of the past year was my five-week trip to Europe. Despite it being my third visit to Europe (fourth if you include the UK), I consider this to be what could only be described as my “Europe trip”. I had a lot of great memories there, and stacks of photos and journal entries that I’ve yet to go through. I’m pleased that it seems to have brought my friends and me closer. These sorts of trips can make or break friendships, and I think we left Europe with tighter bonds than we arrived with.
One other notable trip was a week in Singapore. This was for work, and much like my previous work trips, it mainly consisted of being in offices. But we did get a chance to do some sightseeing, and it was a pleasure to be able to work with those in Singapore.
And of course, there was another trip to Canberra to look after my sister’s cockatiels, which was, as always, a pleasure.
Not sure what this new year will bring in terms of travel. I’m predicting a relatively quiet one, but who knows.
Books And Media
This is the first year I set up a reading goal. I wanted to get out of the habit of starting books and not finishing them (nothing wrong with not finishing books; I was just getting distracted). This past year’s goal was quite modest — only 5 books — but it’s pleasing to see that I managed to surpass that and actually finish 7 books1.
As for visual media, well, there’s nothing really worth commenting on here. I did have a go at watching Bojack Horseman earlier in the year, and while the first few series were good, I bounced after starting the fourth. I also gave Mad Men a try, after hearing how well it was received by others, but I couldn’t get through the first series. I found the whole look-at-how-people-lived-in-the-’60s trope a bit much after the first few episodes.
In general, I’ve found my viewing habits drifting away from scripted shows over this past year. I’m more than happy to just watch things on YouTube; or more accurately, rewatch things, as I tend to stick with videos I’ve seen before. And although I’ve got no plans to write a whole post about my subscriptions just yet (the sand just feels too nice around my face), I did get around to cancelling my Netflix subscription, seeing how little I used it this past year.
As for podcasts, not much change here. With a few exceptions, the shows I was listening to at the end of the year are pretty close to what I was listening to at the start. But I did find myself enjoying these new shows:
- Ruminate Podcast, with Robb Knight and John Voorhees
- Shop Talk Show, with Dave Rupert and Chris Coyier
- The Rest is History Podcast
- The Flop House, with Dan McCoy, Stuart Wellington, Elliott Kalan
These are now in my regular rotation.
The 2023 Word
My 2023 word for the year was generous, trying to be better at sharing things. And I like to think I’ve made some improvements here. It may not have come across in a summary post like this, but I’ve tried to keep it front of mind in most things I work on. I probably can do a little better here in my personal life. But hey, like most themes, it’s always a constant cycle of improvement.
I must say, this last year has been pretty good. Not all aspects of it — there will always be peaks and valleys — but thinking back on it now, I feel that it’s been one of the better ones recently. And as for this review, I’ll just close by saying: here’s to a good 2024.
Happy New Year. 🥂
-
It’s a good thing I was tracking them, as I thought I’d only get to 6 this year. ↩︎
Got an earful of these buggers this morning (they stung me on the earlobe).

I did not take it gracefully. 😂