Long Form Posts

    Photo Bucket Update: Exporting To Zip

    Worked a little more on Photo Bucket this week. Added the ability to export the contents of an instance to a Zip file. This consists of both images and metadata.

    Screenshot of a Finder window showing the contents of the exported Zip file

    I’ve gone with a JSON Lines file for the image metadata. I considered a CSV file briefly, but for optional fields like captions and custom properties, I didn’t like the idea of a lot of empty columns. Better to go with a format that’s a little more flexible, even if it does mean more text per line.
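
    To give a sense of what each line might hold, here’s a minimal sketch of a metadata record as a Go struct (the field names are my guesses for illustration, not necessarily what Photo Bucket actually exports):

    package export

    import (
        "encoding/json"
        "io"
    )

    // imageMetadata is one record of the exported metadata. With
    // omitempty, optional fields simply drop out of the encoded line,
    // which is the flexibility that empty CSV columns can't offer.
    type imageMetadata struct {
        ID         int               `json:"id"`
        Filename   string            `json:"filename"`
        Caption    string            `json:"caption,omitempty"`
        Properties map[string]string `json:"properties,omitempty"`
    }

    // writeMetadata writes each image as a single line of JSON.
    func writeMetadata(w io.Writer, images []imageMetadata) error {
        enc := json.NewEncoder(w) // Encode appends a newline after each record
        for _, img := range images {
            if err := enc.Encode(img); err != nil {
                return err
            }
        }
        return nil
    }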

    As for the images, I’m hoping for the export to consist of the “best quality” version of each. What that means will depend on the instance. The idea is to have three tiers of image quality managed by the store: “original”, “web”, and “thumbnail”. The “original” version is the untouched version uploaded to the store. The “web” version is re-encoded from the “original” and will be slightly compressed, with image metadata tags stripped out. The “thumbnail” version will be a small, highly compressed version suitable for thumbnails. A decision algorithm will be in place to get an image given the desired quality level. For example, if something needed the “best quality” version of an image and the “original” image is not available, the service will fall back to the “web” version (the idea is that some of these tiers will be optional, depending on the needs of the instance).
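
    As a rough sketch of how that fallback might work (all the names here are made up; the real store will differ):

    package store

    import "errors"

    // ErrVersionNotFound indicates no version of the image is available.
    var ErrVersionNotFound = errors.New("version not found")

    // ImageQuality is a tier of encoded image. Order matters: earlier
    // tiers are considered better quality.
    type ImageQuality int

    const (
        QualityOriginal ImageQuality = iota
        QualityWeb
        QualityThumbnail
    )

    // Store keeps the encoded versions of each image, keyed by image ID
    // and tier. Optional tiers are simply absent from the inner map.
    type Store struct {
        versions map[int]map[ImageQuality][]byte
    }

    // BestAvailable walks down the tiers, starting from the desired
    // quality, and returns the first version this instance actually keeps.
    func (s *Store) BestAvailable(imageID int, want ImageQuality) ([]byte, error) {
        for q := want; q <= QualityThumbnail; q++ {
            if data, ok := s.versions[imageID][q]; ok {
                return data, nil
            }
        }
        return nil, ErrVersionNotFound
    }

    So asking for the “original” on an instance that only keeps “web” and “thumbnail” versions would quietly degrade to the “web” version, which matches the export behaviour described above.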

    This is all partially working at the moment, and I’m hoping to rework it when I replace how stores and images relate to each other (this is what I’m starting on now, and why I built export first, since the change will be backwards incompatible). So for the moment, the export simply consists of the “web” version.

    I’ve got unit tests working for this as well. I’m trying a new approach to unit testing in this project. Instead of using mocks, the tests run against fully instantiated versions of the services. There’s a servicestest package which does all the setup (using temporary directories, etc.) and tear-down of these services. Each individual unit test gets the services from this package and runs its tests against a particular one.
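
    I imagine the shape of it looks something like this (a sketch only; the types and constructors are stand-ins for Photo Bucket’s actual ones):

    package servicestest

    import "testing"

    // Stand-ins for the actual service types.
    type (
        Store          struct{ dir string }
        ImageService   struct{ store *Store }
        GalleryService struct {
            store  *Store
            images *ImageService
        }
    )

    func (s *Store) Close() {}

    // Services holds fully instantiated services, wired together with
    // their real dependencies rather than mocks.
    type Services struct {
        Images    *ImageService
        Galleries *GalleryService
    }

    // New sets up every service against throwaway resources and
    // registers a cleanup to tear them down when the test finishes.
    func New(t *testing.T) *Services {
        t.Helper()

        dir := t.TempDir() // removed automatically at the end of the test

        store := &Store{dir: dir}
        t.Cleanup(store.Close)

        images := &ImageService{store: store}
        galleries := &GalleryService{store: store, images: images}

        return &Services{Images: images, Galleries: galleries}
    }

    A test for, say, galleries would then start with svc := servicestest.New(t) and exercise svc.Galleries directly, with the real image service and store doing the work underneath.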

    This does mean all the services are available and exercised within the tests, making them less like unit tests and more like integration tests. But I think I prefer this approach. The fact that the dependent services are covered gives me greater confidence that they’re working. It also means I can move things around without changing mocks or touching the tests.

    That’s not to say I’m not trying to keep each service its own component as much as I can. I’m still trying to follow best practices of component design: passing dependencies in explicitly when the services are created, for example. But setting them all up as a whole in the tests means I can exercise them while they serve the component being tested. And the dependencies are explicit anyway (i.e. no interfaces), so it makes sense to keep it that way for the tests as well. And it’s just easier anyway. 🤷

    Anyway, starting rework on images and stores now. Will talk more about this once it’s done.

    Photo Bucket Update: More On Galleries

    Spent a bit more time working on Photo Bucket this last week1, particularly around galleries. They’re progressing quite well. I’ve made some strides in getting two big parts of the UI working: adding and removing images in galleries, and re-ordering gallery items via drag and drop.

    I’ll talk about re-ordering first. This was when I had to bite the bullet and start coding up some JavaScript. Usually I’d turn to Stimulus for this but I wanted to give HTML web components a try. And so far, they’ve been working quite well.

    The gallery page is generated server-side into the following HTML:

    <main>
      <pb-draggable-imageset href="/_admin/galleries/1/items" class="image-grid">
        <pb-draggable-image position="0" item-id="7">
          <a href="/_admin/photos/3">
            <img src="/_admin/img/web/3">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="1" item-id="4">
          <a href="/_admin/photos/4">
            <img src="/_admin/img/web/4">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="2" item-id="8">
          <a href="/_admin/photos/1">
            <img src="/_admin/img/web/1">
          </a>
        </pb-draggable-image>        
      </pb-draggable-imageset>
    </main>
    

    Each <pb-draggable-image> node is a direct child of a <pb-draggable-imageset>. The idea is that the user can rearrange any of the <pb-draggable-image> elements within a single <pb-draggable-imageset> amongst themselves. Once the user has moved an image onto another one, the image will signal its new position by firing a custom event. The containing <pb-draggable-imageset> element listens for this event and will respond by actually repositioning the child element and sending a JSON message to the backend to perform the move in the database.

    A lot of this was based on the MDN documentation for drag and drop, and it follows the examples quite closely. I did find a few interesting things though. My first attempt was to put the draggable attribute onto the <pb-draggable-image> element, but I wasn’t able to get any drop events when I did. Moving the attribute onto the <a> element seemed to work. I’m not quite sure why this is; I can’t think of any reason why it wouldn’t work. It may have been something else, such as how I was initialising the HTML components.

    Speaking of HTML components, there was a time when the custom component’s connectedCallback method was being called before the child <a> elements were present in the DOM. This was because I had the <script> tag in the HTML head, configured to be evaluated during parsing. Moving it to the end of the body and loading it as a module fixed that issue. I also found that moving elements around using element.before and element.after would actually call connectedCallback and disconnectedCallback each time, meaning that any event listeners registered within connectedCallback would need to be de-registered, otherwise events would be handled multiple times. This book-keeping was slightly annoying, but it worked.

    Finally, there was moving the items within the database. I’m not sure how best to handle this, but I have a method that seems to work. What I’m doing is tracking the position of each “gallery item” using a position field. This field is 1 for the first item, 2 for the next, and so on for each item in the gallery. Fetching items just orders by this field, so as long as the values are distinct, they don’t need to form a sequence incrementing by 1, though I wanted to keep it that way as much as possible.

    The actual move involves two update queries, as sketched below. The first updates the positions of all the items that are to shift left or right by one to “fill the gap”: when an item is moved from position X to position Y, the positions between X and Y are changed by +1 if X > Y, or by –1 if Y > X. This is effectively the same as setting position X to X + 1, and so on, but done in one UPDATE statement. The second query just sets the position of the moved item to Y.
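
    In Go, via database/sql and with made-up table and column names, the pair of queries might look something like this (the moved item itself is excluded from the shift and handled by the second statement):

    package gallery

    import "database/sql"

    // moveItem moves a gallery item from position oldPos to newPos.
    // Run it within a transaction so a failed second update won't
    // leave the positions half-shifted.
    func moveItem(tx *sql.Tx, galleryID, itemID, oldPos, newPos int) error {
        // First query: shift the items between the two positions by one
        // to fill the gap. +1 when the item moves towards the front,
        // -1 when it moves towards the back.
        var shift, lo, hi int
        if oldPos > newPos {
            shift, lo, hi = +1, newPos, oldPos-1
        } else {
            shift, lo, hi = -1, oldPos+1, newPos
        }
        _, err := tx.Exec(`
            UPDATE gallery_items
            SET position = position + ?
            WHERE gallery_id = ? AND position BETWEEN ? AND ?`,
            shift, galleryID, lo, hi)
        if err != nil {
            return err
        }

        // Second query: drop the moved item into its new position.
        _, err = tx.Exec(`
            UPDATE gallery_items
            SET position = ?
            WHERE gallery_id = ? AND item_id = ?`,
            newPos, galleryID, itemID)
        return err
    }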

    So that’s moving gallery items. I’m not sure how confident I am with this approach, but I’ve been testing this, both manually and by writing unit tests. It’s not quite perfect yet: I’m still finding bugs (I found some while coming up with these screencasts). Hopefully, I’ll be able to get to the bottom of them soon.

    The second bit of work was to actually add and remove images in the gallery themselves. This, for the moment, is done using a “gallery picker” which is available in the image details. Clicking “Gallery” while viewing an image will show the list of galleries in the system, with toggles on the left. The galleries an image already belongs to are enabled, and the user can choose the galleries they want the image to be in by switching the toggles on and off. These translate to insert and delete statements behind the scenes.

    The toggles are essentially just HTML and CSS, and the bulk of the code was taken from this example, with some tweaks. They look good, but I think I may need to make them slightly smaller for mouse and keyboard use.

    I do see some downsides with this interaction. First, it reverses the traditional idea of adding images to a gallery: instead of doing that, you’re selecting galleries for an image. I’m not sure if this would be confusing for others (it is modelled on how Google Photos works). Plus, there’s no real way to add images in bulk. It might be that I’ll need to add a way to select images from the “Photos” section and have a dialog like this to add or remove them all from a gallery. I think this would go far in solving both of these issues.

    So that’s where things are. Not sure what I’ll work on next, but it may actually be import and export, and the only reason for this is that I screwed up the base model and will need to make some breaking changes to the DB schema. And I want to have a version of export that’s compatible with the original schema that I can deploy to the one and only production instance of Photo Bucket so that I can port the images and captions over to the new schema. More on this in the future, I’m sure.


    1. Apparently I’m more than happy to discuss work in progress, yet when it comes to talking about something I’ve finished, I freeze up. 🤷 ↩︎

    Complexity Stays At the Office

    It’s interesting to hear what others like to look at during their spare time, like setting up Temporal clusters or looking at frontend frameworks built atop five other frameworks built on React. I guess the thinking is that since we use it for our jobs, it’s helpful to keep abreast of these technologies.

    Not me. Not any more. Back in the day I may have thought similarly. I may even have had a passing fancy for stuff like this, revelling in its complexity with the misguided assumption that it’ll equal power (well, to be fair, it would equal leverage). But I’ve been burned by this complexity one too many times. Why, just now, I’ve spent the last 30 minutes running into problem after problem trying to find a single root cause of something. It’s a single user interaction, but because it involves 10 different systems, it means looking at 10 different places, each one having its own issues blocking me from forward progress.

    So I am glad to say that those days are behind me. Sure, I’ll learn new tech like Temporal if I need to, but I don’t go out looking for these anymore. If I want to build something, it would be radically simple: Go, SQLite or PostgreSQL, server-side rendered HTML with a hint of JavaScript. I may not achieve the leverage these technologies may offer, but by gosh I’m not going to put up with the complexity baggage that comes with them.

    Implicit Imports To Load Go Database Drivers Considered Annoying (By Me)

    I wish Go’s approach to loading database drivers didn’t involve implicitly importing them as packages. If it didn’t, package authors would be more likely to get the driver from the caller, rather than loading one themselves.

    I’ve been bitten by this recently, twice. I’m using a GitHub Linux runner to build an ARM version of something that needs to use SQLite. As far as I can tell, it’s not possible to build an ARM binary with CGO enabled on these runners (at least, not without installing a bunch of dependencies — I’m not that desperate yet).

    I’m currently using an SQLite driver that doesn’t require CGO, so all my code builds fine. There also exists a substantially more popular SQLite driver that does require CGO, and twice I’ve tried importing packages which used this driver, thereby breaking the build. These packages don’t allow me to pass in a database connection explicitly, and even if they did, I’m not sure it would help: they’re still importing this SQLite driver that needs CGO.

    So what am I to do? As long as I need to build ARM versions, I can’t use these packages (not that I need an ARM version, but it makes testing in a Linux VM running on an M1 Mac easier). I suppose I could roll my own, but it would be nice not to do so. It’d be much better for me to load the driver myself, and pass it to these packages explicitly.
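
    To illustrate what I mean, here’s roughly the shape I’d prefer packages to take: accept the connection from the caller instead of blank-importing a driver. The package foopkg and the file name are made up; modernc.org/sqlite is the real pure-Go SQLite driver, which registers itself under the driver name "sqlite":

    // A hypothetical package that needs a database. It imports no
    // driver at all; the caller supplies the open connection.
    package foopkg

    import "database/sql"

    type Store struct {
        db *sql.DB
    }

    // New accepts an already-opened connection rather than opening
    // one against a hard-coded driver.
    func New(db *sql.DB) *Store {
        return &Store{db: db}
    }

    // Meanwhile, in the application: the blank import is now the only
    // place that knows which driver is in use.
    package main

    import (
        "database/sql"
        "log"

        _ "modernc.org/sqlite" // a pure-Go SQLite driver; no CGO required

        "example.com/foopkg" // the hypothetical package above
    )

    func main() {
        db, err := sql.Open("sqlite", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        store := foopkg.New(db)
        _ = store
    }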

    So yeah, I wish this was better.

    P.S. When you see the error message “unable to open database file: out of memory (14)” when you try to open an SQLite database, it may just mean the directory it’s in doesn’t exist.

    Rubber-ducking: On Context

    I’m torn between extracting auth credentials in the handler from a Go Context and passing them as arguments to service methods, or just passing the context and having the service methods get them from the Context themselves.

    Previously, when the auth credentials were just a user ID, we did the former. But we’re now using more information about what the user has access to, and if we were to continue doing this, we’d need to pass more parameters through to the service layer. Not only does this make things a little less neat, it means the next time we do this, we’ll have to do the whole thing again.

    But, it means that the service methods would need to get the user IDs themselves, along with this new stuff. Not that that’s an issue: there are already providers that use the context to get this info. So this is a viable option. And yet, I feel uneasy about using the context for this.

    🦆: So what are your options?

    L: I guess I could replace the use of the user ID with a structure that holds both the user ID and this extra stuff.

    🦆: Would that work?

    L: I mean, I guess it would? It would make it clearer as to whose request this is. It would also mean that we’re being explicit about what the method needs.

    🦆: Do you see any downsides with this approach?

    L: The only thing I can see is that it would be inconsistent with other parts of the system that are getting it from the context.

    🦆: You’re hesitating. You don’t seem sure about this.

    L: Well, I just don’t like the fact that we’re passing both the context, which holds this auth info, and the auth info alongside it. And I know that it’s unclear, and would mean that the tests would need to be changed (I mean, they’ll need to be changed anyway if we went with this “principal” approach).

    🦆: So what’s really going on? Why are you unsure about this?

    L: Well, it’s just showing this in a review and having people say “oh, that’s not the right way to write Go.”

    🦆: They say that?

    L: Well, not exactly. But they do have opinions about how best to do this (like pretty much everyone, I guess).

    🦆: Do they have an opinion about this decision?

    L: Well, not really. In fact, I think they’re pretty okay with either approach.

    🦆: So if they’re okay with either approach, it probably doesn’t matter that much. But if it were me, I’d probably prefer something a little more readable.

    L: Well, yeah. But how can I trust you? You’re me.

    🦆: Am I? I’m a duck. Are you a duck?

    L: No, I’m not. But you’re not a duck either. You don’t exist. You’re just a figment of my imagination.

    🦆: Really? Then how are you speaking to me?

    L: Because I conjured you up so I can work through this problem I’m having.

    🦆: So you’re having a go at me because I don’t exist, yet you still need me because you’re stuck on this decision and you need a resolution.

    L: Well, I didn’t say I don’t need you. It’s probably still helpful to me to have this conversation. 

    🦆: Ok, I think we’ve gone off track a little. What are you going to do about this context decision?

    🦆: Well?

    L: Ok, I’m not certain that implicitly including the user ID will work, as the user ID may be different to what is actually in the context. I also don’t like how it’s implicit in the context, and I think I do prefer something a little more readable. It pains me to think that I’ll be effectively duplicating values that are already available to the method. But we’re doing that anyway with the user ID.

    So here’s what I’ll do. I’ll replace it with a dedicated type, retrievable from the context and holding all the information that is needed to authorise the user. I’ll also retroactively make those changes to other areas of the code that are doing it.
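
    In Go terms, I picture that dedicated type looking something like this (a sketch; all the names here are hypothetical):

    package auth

    import "context"

    // Principal holds everything needed to authorise the caller:
    // the user ID plus the newer access information.
    type Principal struct {
        UserID string
        Scopes []string
    }

    // principalKey is unexported so other packages can't collide
    // with this context value.
    type principalKey struct{}

    // WithPrincipal returns a copy of ctx carrying the principal.
    // The auth middleware would call this once per request.
    func WithPrincipal(ctx context.Context, p Principal) context.Context {
        return context.WithValue(ctx, principalKey{}, p)
    }

    // FromContext retrieves the principal stored by the middleware.
    func FromContext(ctx context.Context) (Principal, bool) {
        p, ok := ctx.Value(principalKey{}).(Principal)
        return p, ok
    }

    The handler would then call auth.FromContext(ctx) once and pass the Principal to the service method as an explicit argument, keeping the method’s requirements readable while the context remains just the transport.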

    🦆: Okay. And what of your peers?

    L: If they ask about it, I’ll just tell them that I prefer something a little more explicit. I know it’s a departure from how I did things previously. But the benefits outweigh the costs I think in this case.

    🦆: Okay. Sounds like you’ve got a way forward now.

    L: Great. This has been helpful. Thanks for that, D.

    🦆: No worries. 

    GoLand Debugger Not Working? Try Upgrading All The Things

    I’ve been having occasional trouble with the debugger in GoLand. Every attempt to debug a test would fail with the following error:

    /usr/local/go/bin/go tool test2json -t /Applications/GoLand.app/…
    API server listening at: 127.0.0.1:60732
    could not launch process: EOF
    
    Debugger finished with the exit code 1
    

    My previous attempts at fixing this — upgrading Go and GoLand — did get it working for a while, but recently it’s been happening to me again. And being on the most recent versions of Go and GoLand, that avenue was not available to me.

    So I set about looking for other ways to fix this. Poking around the web netted this support post, which suggested upgrading the Xcode Command Line tools:

    $ sudo rm -rf /Library/Developer/CommandLineTools
    $ xcode-select --install
    

    I ran the commands and the tools did upgrade successfully, but I was still encountering the problem. I then wondered if GoLand used Delve for debugging, and if that actually needed upgrading. I’ve got Delve via Homebrew, so I went about upgrading that:

    $ brew upgrade dlv
    

    And indeed, Homebrew did upgrade Delve from 1.21.0 to 1.22.0. Once that finished, and after restarting GoLand1, I was able to use the debugger again.

    So, if you’re encountering this error yourself, try upgrading one or more of these tools:

    • GoLand
    • Go
    • Xcode Command Line tools (if on a Mac)
    • Delve

    This was the order I tried them in, but you might get lucky trying Delve first. YMMV.


    1. Not sure that a restart is required, but I just did it anyway, “just in case”. ↩︎

    People Are More Interested In What You're Working On Than You Think

    If anyone else is wary of posting about the projects they’re working on, fearing that others will think they’re showing off or something, here are two bits of evidence that I hope will allay those fears:

    Exhibit 1: I’m a bit of a fan of the GMTK YouTube channel. Lots of good videos there about game development that, despite not being a game developer myself, I find fascinating. But the playlist I enjoy the most is the one where Mark Brown, the series creator, actually goes through the process of building a game himself. Now, you’re not going to learn how to use Unity from that series (although he does have a video about that), but it’s fun seeing him make design decisions, show off prototypes, overcome challenges — both external and self-imposed — and seeing it all come together. I’m always excited when he drops one of these videos, and when I learnt today that he’s been posting dev logs on his Discord, so interested am I in this topic that I immediately signed up as a Patreon supporter.

    Exhibit 2: I’ve been writing about my own projects on a new Scribbles blog. This was completely for myself, as a bit of an archive of previous work that would be difficult or impossible to revisit later. I had no expectations of anyone else finding these posts interesting. Yet, earlier this week, while at lunch, the conversation I was having with work colleagues turned to personal projects. One of them asked if I was working on anything, and when I told him about this blog, he expressed interest. I gave him the link, and that afternoon I saw him taking a look (I’m not expecting him to be a regular visitor, but the fact that he was interested at all was something).

    It turns out that my colleague gets a kick out of seeing others do projects like this on the side. I guess, in retrospect, this shouldn’t be a surprise to me, seeing that I get the same thrill. Heck, that’s why I subscribe to tech podcasts like Under the Radar: I haven’t written an iOS app in my life, yet it’s just fun listening to dev stories like this.

    Yet, when it comes to something I’m working on, for a long time I’ve always held back, thinking that talking about it is a form of showing off. I like to think I’m getting better here, but much like the Resistance, that feeling is still there. Whispering doubt in my ear. Asking who would be interested in these raw, unfinished things that will never go beyond the four walls of the machine from whence they came? I don’t think that feeling will ever go away, but in case I lose my nerve again, I hope to return to the events of this week, just to remind myself that, yeah, people are interested in these stories. I can put money on that assurance. After all, I just did.

    So don’t be afraid to publish those blog posts, podcasts, or videos on what you’re working on. I can’t wait to see them.

    See also, this post by Aaron Francis that touches on the same topic (via The ReadME Project).

    GitHub Actions, Default Token Permissions, And Publishing Binaries

    Looks like GitHub’s locked down the access rights of the GITHUB_TOKEN recently. This is the token that’s available to all GitHub Actions by default.

    After taking a GoReleaser config file from an old project and using it in a new one, I encountered this error when GoReleaser tried to publish the binaries as part of a GitHub release:

    failed to publish artifacts:
    could not release:
    PATCH https://api.github.com/repos/lmika/<project>/releases/139475588:
    403 Resource not accessible by integration []
    

    After a quick search, I found this GitHub issue, which seemed to cover the same problem. It looks like the way to resolve it is to explicitly add the contents: write permission to the GitHub Actions YAML file:

    name: Create Release
    
    on:
      push:
        tags:
          - 'v*'
    
    # Add this section
    permissions:
      contents: write
      
    jobs:
      build:
        runs-on: ubuntu-latest
    

    And sure enough, after adding the permissions section, GoReleaser was able to publish the binaries once again.

    There’s a bunch of other permissions that might be helpful for other things, should you need them.

    Thoughts on The Failure of Microsoft Bob

    Watching a YouTube video about Microsoft Bob left me wondering if one of the reasons Bob failed was that it assumed that users, who may have been intimidated by a GUI when they first encountered one, would be intimidated forever. That their level of skill would always remain one at which the GUI was scary and unusable, and their only success in using a computer would come through applications like Bob.

    That might be true for some, but I believe such cases represent a small fraction of the user base as a whole. If someone’s serious about getting the most out of their computer, even back then when the GUI was brand new, I can’t see how they wouldn’t naturally skill up, or at least want to.

    I think that’s why I’m bothered by GUIs that sacrifice functionality in the name of “simplicity.” It might be helpful at the start, but pretty soon people grow comfortable using your UI, and they’ll hit the application’s artificial limits sooner than you expect.

    Not that I’m saying that all UIs should be as complex as Logic Pro for no reason: if the domain is simple, then keep it simple. But when deciding on the balance between simplicity and capability, perhaps have trust in your users’ abilities. If they’re motivated (and your UI design is decent) I’m sure they’ll be able to master something a little more complex.

    At least, that’s what this non-UI designer believes.

    Why I Use a Mac

    Why do I use a Mac?

    Because I can’t get anything I need to get done on an iPad.

    Because I can’t type to save myself on a phone screen.

    Because music software doesn’t exist on Linux.

    Because the Bash shell doesn’t exist on Windows (well, it didn’t when I stopped using it).

    That’s why I use a Mac.

    The AWS Generative AI Workshop

    Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level yet I still got something out of it.

    What was striking was just how much of integrating these foundational models (something like an LLM that was pre-trained on the web) involves natural language. Like, if you’re building a chat bot to have a certain personality, you’d start each context with something like:

    You are a friendly life-coach which is trying to be helpful. If you don’t know the answer to a question, you are to say I don’t know. (Question)

    This extends to domain knowledge as well. You could fine-tune a foundational model with your own data set, but an easier, albeit slightly less efficient, way would be to hand-craft a bunch of question and answer pairs and feed them straight into the prompt.

    This may extend to agents as well (code that the model interacts with). We didn’t cover agents to a significant degree, but after looking at some of the marketing materials, it seems to me that much of the integration is instructing the model to put parameters within XML tags (so that the much “dumber” agent can parse them out), and telling it how to interpret the structured response.

    A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.

    Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something that involves vector databases available to the model and doesn’t require fine-tuning. I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).

    Replacing Ear Cups On JBL E45BT Headphones

    As far as wearables go, my daily drivers are a pair of JBL E45BT Bluetooth headphones. They’re several years old now and are showing their age: many of the buttons no longer work and it usually takes two attempts for the Bluetooth to connect. But the biggest issue is that the ear cups were no longer staying on. They’re fine when I wear them, but as soon as I take them off, the left cup would fall to the ground.

    But they’re a decent pair of headphones, and I wasn’t keen on throwing them out or shopping for another pair. So I set about looking for a set of new ear cups.

    This is actually the second pair of replacement cups I’ve bought for these headphones. The first had a strip of adhesive that stuck the cup straight on to the speaker (it was this adhesive that was starting to fail). I didn’t make a note of where I bought them and a quick search didn’t turn up anything that looked like them. So in December, I settled for this pair from this eBay seller. Yesterday, they arrived.

    New set of ear cups for JBL E-series Bluetooth headphones
    The new set of ear cups.
    A black Bluetooth headphone on a table, with the left cup fallen off exposing the speaker, and the right cup slightly removed from its original position
    They couldn’t have come sooner.

    First impressions were that they were maybe too big. I also didn’t see an adhesive strip to stick them on. Looking at the listing again, I realised that they’re actually for a different line of JBL headphones. But I was a little desperate, so I set about trying to get them on.

    The headphones in question on an old piece of paper, with the left cup replaced with the new ear cups, the right speaker exposed, and bits of old adhesive lying on the paper
    Removing the old adhesive with my fingers (yeah, I probably should buy some tools).

    It turns out that they’re actually still a good fit for my pair. The aperture is a little smaller than the headphone speaker, but there’s a little rim around each one, and I found that by slotting one side of the padding over the rim, then lightly stretching and rolling the aperture around the speaker, it was possible to get them on. It’s a tight fit, but that just means they’re more likely to stay on. And without any adhesive, which is good.

    The headphones with the right cup in profile demonstrating the roll of the padding onto the rim
    It's a bit hard to see, but if you look at the top of the right cup, you can see how the padding was rolled onto the speaker from the bottom.

    After a quick road test (a walk around the block and washing the dishes), I found the replacement to be a success. So here’s to a few more years of this daily driver.

    The headphones in profile with the new replacement cups
    Headphones with the new cups. They look and feel pretty good.
    The old replacement cups on a table, with the left cup losing its vinyl skin and revealing the actual foam.
    The old cups, ready for retirement.

    Detecting A Point In a Convex Polygon

    Note: there are some interactive elements and MathML in this post. So for those reading this in RSS, if it looks like some formulas or images are missing, please click through to the post.

    For reasons that may or may not be made clear later, I’ve been working on something involving bestagons. I’ve tended to shy away from things like this before, mainly because of the maths involved in tasks like determining whether a point is within a hexagon. But instead of running away once again from anything more complex than a grid, I figured it was time to learn this once and for all. So off I went.

    First stop was Stack Overflow, and this answer on how to test if a point is inside a convex polygon:

    You can check that easily with the dot product (as it is proportional to the cosine of the angle formed between the segment and the point, if we calculate it with the normal of the edge, those with positive sign would lay on the right side and those with negative sign on the left side).

    I suppose I could’ve taken this answer as it is, but I knew if I did, I’d have something that would be little more than magic. It would do the job, but I’d have no idea why. Like many, if I can get away with having something that works without knowing how, I’m likely to take it. But when it comes to code, doing this usually comes back to bite me in the bum. So I’m trying to look for opportunities to dig a little deeper than I otherwise would, and learn how and why something works.

    It took me a while, and a few false starts, but I think I got there in the end. And I figured it would be helpful for others to know how I came to understand how this works at all. And yeah, I’m sure this is provable with various theorems and relationships, but that’s just a little too abstract for me. No, what got me to the solution in the end was visualising it, along with attempting to explain it below.

    First, let’s ignore polygons completely and consider a single line. Here’s one, represented as a vector:

    A vector drawn on graph paper pointing to the top-right

    Oh, I should point out that I’m assuming that you’re aware of things like vectors and trigonometric functions, and have heard of things like dot-product before. Hopefully it won’t be too involved.

    Anyway, we have this line. Let’s say we want to know if a specific point is to the “right” of the line. Now, if the line were vertical, this would be trivial to do. But here we’ve got a line that’s on an angle. And although a phrase like “to the right of” is still applicable, it’ll only be a matter of time before we have a line where “right” and “left” have no meaning to us.

    So let’s generalise it and say we’re interested in seeing whether a point is on the same side as the line’s normal.

    Now, there are actually two normals available to us, one going out on either side of the line. But let’s pick one and say we want the normal that points to the right if the line segment is pointing directly up. We can add that to our diagram as a grey vector:

    That same vector pointing to the top-right, with a normal originating from the same origin pointing to the bottom-right

    Now let’s consider the point. We can represent it as a vector that shares the same origin as the line segment1. With this we can do all sorts of things, such as work out the angle between the two (if you’re viewing this in a browser, you can tap on the canvas to reposition the green ray):

    That same vector and normal, now with an additional line coming from the origin drawn rotated 48° clockwise from the original vector

    This might give us a useful solution to our problem here; namely, if the angle between the two vectors falls between 0° and 180°, we can assume the point is to the “right” of the line. But we may be getting ahead of ourselves. We haven’t even discussed how we can go about “figuring out the angle” between these vectors.

    This is where the dot product comes in. The dot product is an operation that takes two vectors and produces a scalar value, based on the formula below:

    $\mathbf{a} \cdot \mathbf{b} = a_x b_x + a_y b_y$

    One useful relationship of the dot product is that it’s proportional to the cosine of the angle between the two vectors:

    $\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos \theta$

    Rewriting this gives us a formula that returns the angle between two vectors:

    $\theta = \cos^{-1}\!\left( \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|} \right)$

    So a solution here would be to calculate the angle between the line and the point vector, and as long as it falls between 0° and 180°, we can determine that the point is on the “right” side of the line.

    Now, I actually tried this approach in a quick and dirty mock-up using JavaScript, but I ran into a bit of an issue. For you see, the available inverse cosine function does not provide a value beyond 180°. When you think about it, this kinda makes sense, as the cosine function starts moving from -1 back to 1 as the angle grows beyond 180° (or shrinks below 0°).

    But we have another vector at our disposal, the normal. What if we were to calculate the angle between those two?

    That same vector and normal, and the additional line coming from the origin drawn 136° anti-clockwise from the normal, with text indicating that the dot product is -47500

    Ah, now we have a relationship that’s usable. Consider when the point moves to the “left” of the line. You’d notice that the angle is either greater than 90° or less than –90°. These just happen to be angles at which the cosine function yields a negative result. So a possible solution here is to work out the angle between the point vector and the normal, take the cosine, and if it’s positive, the point is on the “right” side of the line (and it’ll be on the “left” side if the cosine is negative).

    But we can do better than that. Looking back at the relationship between the dot product and the angle, we can see that the only way this equation could be negative is if the cosine is negative, since the vector magnitudes will always be positive. So we don’t even need to work out angles at all. We can just rely on the dot product between the point and the normal.

    And it’s here that the solution clicked. A point is to the “right” of a line if the dot product of the point vector and the “right”-sided normal is positive. Look back at the original Stack Overflow answer above, and you’ll see that’s pretty much what was said there as well.

    Now that we’ve got this working for a single line, it’s trivial to extend it to convex2 polygons, as the sketch below shows. Take all the line segments, with their normals pointing inwards, calculate the dot product between each normal and the point, and check the signs. If they’re all positive, the point is within the polygon. If not, it’s outside.

    A hexagon drawn in the centre of graph paper with normals and lines originating at each vertex and converting at a single point located in the centre of the hexagon. The lines indicating a positive dot product for each one and that the point is within the hexagon
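
    And here’s the whole approach as a small Go function (a direct translation of the above, assuming the vertices are listed counter-clockwise in a Y-up coordinate system, so that the inward normals all come out on the same side):

    package geom

    // Vec is a 2D vector.
    type Vec struct {
        X, Y float64
    }

    // Sub returns the vector a - b.
    func (a Vec) Sub(b Vec) Vec { return Vec{a.X - b.X, a.Y - b.Y} }

    // Dot returns the dot product of a and b.
    func (a Vec) Dot(b Vec) float64 { return a.X*b.X + a.Y*b.Y }

    // PointInConvexPolygon reports whether p lies within the convex
    // polygon whose vertices are listed in counter-clockwise order.
    func PointInConvexPolygon(p Vec, verts []Vec) bool {
        for i, a := range verts {
            b := verts[(i+1)%len(verts)] // next vertex, wrapping around
            edge := b.Sub(a)

            // Rotate the edge 90° anti-clockwise to get the normal
            // that points into a counter-clockwise polygon.
            normal := Vec{-edge.Y, edge.X}

            // A negative dot product means p is on the outside of
            // this edge, so it can't be inside the polygon.
            if normal.Dot(p.Sub(a)) < 0 {
                return false
            }
        }
        return true
    }

    Points exactly on an edge produce a dot product of zero, which this sketch counts as inside; change the comparison if you’d rather treat the boundary as outside.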

    So here’s an approach that’ll work for me, and is relatively easy and cheap to work out. And yeah, it’s not a groundbreaking approach, and basically involved relearning a bunch of linear algebra I’ve forgotten since high school. But hey, better late than never.


    1. Well, technically these vectors should be offset from the global origin, but they’re translated in these plots for demonstration purposes. ↩︎

    2. I’m not sure if this is applicable to concave polygons, where the angle between segments can go beyond 180°. ↩︎

    Can a Single Line Or Even a Single Word Be Considered a Legitimate Blog Post?

    Yes.

    2023 Year In Review

    Well, once more around the sun and it’s time again to look back on the year that was.

    Career

    Reflecting on the work we did this past year, there were a few highlights. We managed to get a few major things released, like the new billing and resource usage tracking system (not super exciting, but it was still fun to work on). And although the crunch period we had was a little hard — not to mention the 3 AM launch time — it was good to see it delivered on time. We’re halfway through another large change that I hope to get out before the end of summer, so it’ll probably be full steam ahead when I go back to work this week.

    Highlights aside, there’s not much more to say here. I still feel like my career is in a bit of a rut. And although I generally still like my job, it’s difficult seeing a way forward that doesn’t involve moving away from being an “individual contributor” to a more managerial role. Not sure I like that prospect — a squad of 6 devs is probably the maximum number of people I can manage.

    And honestly, I probably need to make thinking about this more of a priority for the new year. I’ve been riding in the backseat on this aspect of my life long enough. Might be time to spend a bit more effort driving my career, rather than letting things just happen to me.

    Ok, that’s out of the way. Now for the more exciting topics.

    Projects

    Dynamo-browse is ticking along, which is good. I’ve added a few features here and there, but there’s nothing huge that needs doing to it right now. It’s received some traction from others, especially people from work. But I’ve gotta be honest: I was hoping it would be better received than it was. Oh yes, the project website gets visitors, but I just get the sense it hasn’t seen as many takers as I had hoped (I don’t collect app metrics, so I don’t know for certain). I’d like to say that it doesn’t matter: so long as I find it useful (and I do), that’s all that counts. And yeah, that’s true. But I’d be lying if I said I didn’t wish others would find it useful as well. Ah well.

    One new “major” project released this past year was F5 To Run. Now this, I’m not expecting anyone other than myself to be interested in. This project to preserve the silly little games I made when I was a kid was quite a success. And now that they’re ensconced in the software equivalent of amber (i.e. web technologies), I hope they can live on for as long as I’m paying for the domain. Much credit goes to those who ported DosBox to a JavaScript library. The games were reasonably easy to port over (it was just making the images), and it’s a testament to their work that stuff built on primitive ’90s IBM PC technology is actually the easiest to run this way. I just need to find a way to do the same for my Windows projects next.

    Another “major” project that I’m happy to have released was Mainboard Mayhem, a Chip’s Challenge clone. This was one of those projects that I’d been building for myself over the last ten years, debating with myself whether it was worth releasing. I’d always sided with not releasing it, for a number of reasons. But this past year, I figured it was time to either finish and release it, or stop work on it altogether. I’m happy with the choice I made. And funnily enough, now that it’s finished, I haven’t had a need to tinker with it since (well, apart from that one time).

    There were a few other things I worked on this past year, many of which didn’t really go anywhere. The largest abandoned project was probably the reclaimed devices built for schools that I was planning to flash to become scoring devices. The biggest hurdle was connecting to the device. The loader PCB that was shipped to me didn’t quite work, as the pins weren’t making good contact (plus, I broke one of them, which didn’t improve things). The custom board I built to do the same thing didn’t work either: the pins were too short and uneven. So I never got to do anything with them. They’re currently sitting in my cupboard in their box, gathering dust. I guess I could unscrew the back and hook wires up to the appropriate solder points, but that’s a time investment I’m not interested in making at the moment.

    This project may rise again with hardware that’s a little easier for me to work with. I have my eye on the BeepBerry, which looks to be easier to handle, at least with my skills. I added my name to the wait-list, but from what I hear from others, it might be some time before I can get my hands on one (maybe if the person running the project spent less time fighting with Apple, he could start going through the wait-list).

    So yeah, got a few things finished this year. On the whole, I would like to get better at getting projects out there. Seeing people like Robb Knight who seem to just be continuously releasing things has been inspiring. And it probably doesn’t need to be all code either. Maybe other things, like music, video, and written prose. Throw a bit of colour into the mix.

    Speaking of written prose…

    Writing And Online Presence

    The domain reduction goal continues. I’m honestly not sure if it’s better or worse than last year. I didn’t record the number of registered domains I had at the start of 2023, but as of 27 December 2023, the domain count is at 25, of which 16 have auto-renew turned on.

    • Registered: 25
    • With auto-renew turned on: 16
    • Currently used for something: 13
    • Not currently used for something but worth keeping: 3
    • Want to remove but stuck with it because it’s been shared by others: 1

    Ultimately I’d like to continue cutting down the number of new domains I register. It’s getting to be an expensive hobby. I’ve started to switch to sub-domains for new things, so I shouldn’t be short of possible URLs for projects.

    I’m still trying to write here once a day, mainly to keep myself from falling out of the habit. But I think I’m at the point where I can start thinking less about the need for a daily post, and focus more on “better” posts as a whole. What does “better” mean? 🤷 Ultimately it’s in the eye of the beholder, but publishing fewer posts that I find “cringeworthy” is a start. And maybe having fewer posts that are just me complaining about something that happened that day. Maybe more about what I’m working on, or interesting things I encounter. Of course, this is all just a matter of balance: I do enjoy reading (and writing) the occasional rant, and writing about things that frustrate me is cathartic. Maybe just less of that in the new year.

    I did shut down the other blog I was using for tech and project posts. It’s now a digital garden and knowledge base, and so far it’s working quite well. In retrospect, I’m so glad I did this. I was paying unnecessary cognitive overhead deciding which blog a post should go to. They all just go here now.

    Travel

    Oof, it was a big year of travel this past year. The amount of time I’ve spent away from home comes to 10 weeks in total, a full 20% of the year. This might actually be a record.

    The highlight of the past year was my five-week trip to Europe. Despite it being my third visit to Europe (fourth if you include the UK), I consider this to be what could only be described as my “Europe trip”. I have a lot of great memories from it, and stacks of photos and journal entries that I’ve yet to go through. I’m pleased that it seems to have brought my friends and me closer. These sorts of trips can make or break friendships, and I think we left Europe with tighter bonds than we arrived with.

    One other notable trip was a week in Singapore. This was for work, and much like my previous work trips, it mainly consisted of being in offices. But we did get a chance to do some sightseeing, and it was a pleasure to be able to work with the folks in Singapore.

    And of course, there was another trip to Canberra to look after my sister’s cockatiels, which was, as always, a pleasure.

    Not sure what this new year will bring in terms of travel. I’m predicting a relatively quiet one, but who knows.

    Books And Media

    This is the first year I set up a reading goal. I wanted to get out of the habit of starting books and not finishing them (nothing wrong with not finishing books; I was just getting distracted). This past year’s goal was quite modest — only 5 books — but it’s pleasing to see that I managed to surpass it and actually finish 7 books1.

    • Keep Going: 10 Ways to Stay Creative in Good Times and Bad
    • What to Do When It’s Your Turn
    • Anything You Want
    • Do The Work!
    • Turning Pro
    • The Song of Significance
    • Evil Plans: Having Fun on the Road to World Domination

    As for visual media, well, there’s nothing really worth commenting on here. I did have a go at watching BoJack Horseman earlier in the year, and while the first few series were good, I bounced after starting the fourth. I also gave Mad Men a try, after hearing how well it was received by others, but I couldn’t get through the first series. I found the whole look-at-how-people-lived-in-the-’60s trope a bit too much after the first few episodes.

    In general, my viewing habits have drifted away from scripted shows this past year. I’m more than happy to just watch things on YouTube; or more accurately, rewatch things, as I tend to stick with videos I’ve seen before. And although I’ve got no plans to write a whole post about my subscriptions just yet (the sand just feels too nice around my face), I did get around to cancelling my Netflix subscription, seeing how little I used it this past year.

    As for podcasts, not much change here. With a few exceptions, the shows I was listening to at the end of the year are pretty close to what I was listening to at the start. But I did find myself enjoying these new shows:

    These are now in my regular rotation.

    The 2023 Word

    My 2023 word for the year was generous, trying to be better at sharing things. And I like to think I’ve made some improvements here. It may not have come across in a summary post like this, but I’ve tried to keep it front of mind in most things I work on. I probably can do a little better here in my personal life. But hey, like most themes, it’s always a constant cycle of improvement.

    I must say, this last year has been pretty good. Not all aspects of it — there will always be peaks and valleys — but thinking back on it now, I feel it’s been one of the better ones recently. And as for this review, I’ll just close by saying: here’s to a good 2024.

    Happy New Year. 🥂


    1. It’s a good thing I was tracking them, as I thought I’d only get to 6 this year. ↩︎

    Day One Waffling

    Thinking about my journalling in Day One recently and I’m wondering if it’s time to move it off to something else, maybe Markdown files in a Git repository. Still mulling it over but every time I weigh the two options in my mind, the simpler Markdown approach always wins out.

    Plain old Markdown files are just way more versatile and portable than what Day One offers. I can put them in a private Hugo (or Eleventy) site and browse them in a web browser, with the backing of a full HTML renderer that offers, amongst other things, figures with captions (yes, I want them that badly). Making them into a book will be more involved than what Day One offers, but I’ve been a little unhappy with how books from Day One are laid out anyway. Doing it from Markdown files will be pricier and more involved, but at least I’ll have a bit more control over how it looks.1

    I’ll miss the writing experience from Day One though, especially things like recording the current location and weather for each entry. I’m still wondering what the best substitute for it will be.

    I’m toying around with a web-app I whipped up yesterday. A web-app will be fine for those times I’m online, but how would it work if I’m on an aeroplane? I’m also a little worried about trying it for a while, then abandoning it and leaving it to rot. I guess one thing going for it is that at least it won’t lock me out of any entries, since they’ll just be Markdown files in Git.

    I’m also considering iA Writer. I haven’t tried it yet but from my understanding, it’ll just write Markdown files to a directory, which is the goal. But I’m not sure how I can get the posts and media from there to a Git repo.

    Anyway, that’s the current thinking. Will keep you posted on what happens.


    1. Of course, the challenge there will be to overcome the friction involved in doing this work to actually get the book made. ↩︎

    First Impressions of Eleventy

    I tend to use Hugo whenever I need a static site. But my magpie tendencies have driven me to take a look at Eleventy, and I can definitely see the appeal.

    Going through the Eleventy quick-start guide, I’m quite impressed with how easy it was to set up a bespoke layout for a single site. I’ve done similar things in a few Hugo sites, and while I wouldn’t describe it as “hard”, it’s certainly more involved. Hugo’s decent, but it feels quite… engineered. That’s not necessarily a bad thing: putting together something using one of the pre-built themes is quite straightforward. But going beyond a few theme customisations involves a fair bit of work compared to Eleventy.

    There’s still much more for me to learn, like how Eleventy handles resource bundling (I like how Hugo handles this directly in the template) and configuration (how Eleventy does this is very Node-esque, which is not my preferred approach). But it’s definitely something I’ll keep in my toolkit.

    2023 Song of The Year

    Well, believe it or not, my standing Christmas Eve Mass organ gig has come around once more1, so it’s time to decide on this year’s Song of The Year. This is the second post in this series, so please see last year’s post on what this nonsense is all about.

    This year’s nominees are (not too many this year):

    • Wooden Ship, from Antarctica — Suite for guitar and orchestra by Nigel Westlake.
    • Penguin Ballet, from Antarctica — Suite for guitar and orchestra by Nigel Westlake. Not really a new track for me, but I’m including it here anyway, as it’s been many years since I last heard it2.

    And the winner is: Wooden Ship by Nigel Westlake 👏

    Album cover of the Australian Composers Series, 'Out of the Blue', by Nigel Westlake, performed by the Tasmanian Symphony Orchestra. Copyright the Australian Broadcasting Corporation

    Specifically, the version played by the Tasmanian Symphony Orchestra and released by the ABC. This has been quite a special song for me this year and was pretty certain to be the winner for most of the year. Well, since first hearing it in May, there hasn’t been another one to top it. So bravo!

    But that’s not to say there weren’t other tracks discovered this year. The honourable mentions:

    • The Last Place on Earth, from Antarctica — Suite for guitar and orchestra by Nigel Westlake. A good song, but a little too complex for me.
    • “Extremes”, from Music From the Private Life of Plants by Richard Grassby Lewis. Really wish I had a recent link to this (the only one I know of that works is one to a defunct music store, picked up by the Wayback Machine, that previously sold this album).
    • The Knight, from the Tunic OST by Lifeformed & Janice Kwan. Not a completely new album to me, but until now, I tended to skip this track.
    • Epic Grandpa, by Izioq.

    1. It’s the only gig on my calendar actually. ↩︎

    2. And to have at least one other nominee alongside what was ultimately going to be the winning song all year. ↩︎

    Test Creek: A Test Story With Evergreen.ink

    Had a play with Evergreen.ink this afternoon. It was pretty fun. Made myself a test story called Test Creek which you can try out (the story was written by me but all the images were done using DALL-E).

    The experience was quite intuitive. I’ve yet to try out the advanced features, like the Sapling scripting engine, but the basics are really approachable for anyone not interested in any of that.

    A screenshot of the Evergreen.ink editor, showing the contents of a card with two options and a preview on the right

    I would recommend not writing too much on a single card. Keep it to maybe two or three paragraphs. Otherwise the text will start to flow over the image, like it does on one of the cards in this story. Evergreen.ink does keep the text legible with a translucent background. But still, it’s just too much text.

    I should also say that the preview, to the right of the editor, is interactive, meaning you can use it to jump to the cards behind the options. While I was playing around, I was wondering why there wasn’t a quick way to do this. It wasn’t until I started writing this post that I actually tried clicking an option in the preview, and it worked.

    As for the app itself, if I could make one improvement, it would be something like an image picker that would allow me to reuse images already attached to other cards. I’m not sure how best to use images in these types of stories, but what I was going for was to accent the story rather than simply illustrate what’s going on in the prose. So I wanted to reuse images over a series of related cards, and in order to do that, I had to upload duplicates.

    But really, this is quite a minor quibble. On the whole, I was quite impressed by how good the experience was. It’s not easy expressing something as complex as an interactive story, and I think Evergreen.ink did a pretty decent job.

    So yeah, give it a try. It was quite fun putting this interactive story together. I haven’t got any other ideas lined up, but it would be good to make another one down the line.

    Edit: One other thing to be aware of is that the link given to you when you try to share a story requires a login. So if you want to avoid that, you’ll need to choose the Zip option, which basically bundles the story as a static website. You can deploy it to Netlify quite easily (just check the permissions of the files first, I had to chmod 666 them). Thank-you Robb for letting me know.

    Also, thank-you to omg.lol for the Switchboard feature. It saved my tail dealing with the new redirect.

    Best, First, Favourite

    On Reconcilable Differences #221, Merlin and John introduced the concept of “Best, First, Favourite”. For a particular category, which would you consider the best (i.e. closest to a perfect representation of that category, however you define it), which would you recommend someone interested in the category experience first, and which one is your favourite?

    I thought it was a fun idea, so I’ve put together a few of my own.

    It was hard coming up with categories for this one, particularly when considering “best” and “favourite”. You need to have had enough experience to know what makes a good “thing” in order to judge it against all the others and come up with a “best” one. It also helps to have enough experience to avoid simply picking your favourite as the best. I tried picking categories in which my favourite is different from what I consider the “best”. And it might be that I lack variety in my life, but the list of categories I managed to come up with was relatively short.

    Nonetheless, here they are:

    Category: Mike Oldfield music

    • Best: Tubular Bells 3. Oldfield was in his element here. A balanced helping of both acoustic and electronic, slow and moving, and very consistent in its theming.
    • First: Tubular Bells 2. It may seem that Tubular Bells should be the album to go to for a taste of Mike Oldfield, and it certainly has the Oldfield signature sound. But I’d suggest going with this album first, as it’s a bit more refined while having the same basic structure. It’s also the album that grabbed me.
    • Favourite: Crises; The Songs of Distant Earth. I’d probably put TB3 here as well, but in lieu of choosing something that I also consider the best, these two are probably my next favourites. Or it could just be the positive associations I have with them: Songs of Distant Earth reminding me of faraway places, Crises reminding me of home.

    Category: Episodes of Seinfeld

    • Best: The Parking Garage. A refinement of The Chinese Restaurant, which was groundbreaking in its own right. Honourable mention: The Parking Space.
    • First: The Conversion. I think anything in Season 5 or Season 6 would work here. I’ve chosen this one as I wanted an episode which showcases all the characters’ traits without having too many supporting characters (it also features George’s parents). Honourable mention: The Bris.
    • Favourite: The Busboy. This is a Season 2 episode, from when they were still finding their feet. But it’s one of the first where the writers managed to have multiple plot threads all wrapped up together in a cohesive whole by the end, an attribute of the writing that I absolutely love. Honourable mention: The Dinner Party.

    Category: Programming text editors (for MacOS)

    • Best: VS Code. I’m not a user of this myself, but I can’t deny the amount of effort (and that sweet, sweet Microsoft cash) that’s going into this project. It’s certainly the most capable editor out there for pretty much any language you need to work in.
    • First: Nova, depending on which language you’re working in. Obviously, if you’re doing anything Apple related, it’s probably best to go with Xcode or something. But I think for anything else, Nova is a pretty decent text editor, and definitely one worth trying for anyone starting out.
    • Favourite: Anything from JetBrains. When you feel like moving to something a little more integrated, especially for languages considered “complicated”, I can definitely recommend the IDEs from JetBrains. I use GoLand in my day-to-day, with the occasional WebStorm for any frontend work considered large. Others include IntelliJ and Android Studio.

    Category: Apple-related Tech Podcasts for anyone that has never heard a podcast before

    • Best: Upgrade. I’m not much of a listener of this one anymore, but it’s still a very well-produced show, and Jason really knows his stuff.
    • First: The Talk Show. I think having something a little more off-the-cuff is the way to get into the medium. You have to warm yourself into it, like you’re having a conversation with friends; starting with something a little “produced” can leave you feeling as if you’re just another listener (which, I guess, you are, but you shouldn’t want that feeling). I think The Talk Show fits the mould here. It did for me.
    • Favourite: Accidental Tech Podcast. Hands down. Informative and enjoyable to listen to. This is one that I do my best to catch every episode they release.

    Category: Walks in and around greater Melbourne

    • Best: Sherbrooke Falls, Mt. Dandenong. This is not the longest, nor the most challenging, but it’s by far the prettiest. Walking amongst the great Mountain Ash is quite a moving experience. Be sure to have the soundtrack of the Attenborough documentary series The Private Life of Plants playing while you do.
    • First: The Domino Trail, Trentham. This is about 1.5 hours out of Melbourne, but it’s a nice, easy rail trail going through the lovely forest around Domino Creek.
    • Favourite: Bushrangers Bay to Cape Schanck Lighthouse. A two-hour return walk that is moderately challenging, with lovely scenes of Bass Strait. Don’t be surprised to run into a kangaroo or two (plus the occasional snake; look out for those).

    Category: Pubs in and around greater Melbourne that make a decent parma

    • Best: The Panton Hill Pub, Panton Hill. This is a good 30 km out of Melbourne, in the green wedge, in a little town called Panton Hill. There’s not much there: a few houses, maybe a shop or two, and this pub. But they do a pretty solid parma. Good fillet, decent balance of cheese and ham, and a good portion of chips and salad. It’s been a while since I’ve been there, so things may have changed.
    • First: The Turf Bar, Queen St. If you’re visiting Melbourne for the first time and you’d like to try a pretty decent parma, then I’d suggest trying out the Turf. It’s probably not the best pub in town: it’s more of a sports bar and can be pretty loud when city workers go there for Friday lunch or drinks. But I’ve been pretty impressed by the quality of their parmas. Be prepared to wait a little while for them.
    • Favourite: The Old England Hotel, Heidelberg. This is not the best parma out there, but they’re pretty consistent. One thing going for this place is that it’s easy to get to.