Long Form Posts

    Crashing Hemispheric Views #109: HAZCHEM

    Okay, maybe not “crashing”, à la Hey Dingus. But some thoughts did come to me while listening to Hemispheric Views #109: HAZCHEM that I thought I’d share with others.

    Haircuts

    I’m sorry but I cannot disagree more. I don’t really want to talk while I’m getting a haircut. I mean I will if they’re striking up a conversation with me, but I’m generally not there to make new friends; just to get my hair cut quickly and go about my day. I feel this way about taxis too.

    I’m Rooted

    I haven’t really used “rooted” or “knackered” that much. My go-to phrase is “buggered,” as in “oh man, I’m buggered!”, or simply just “tired”. I sometimes use “exhausted” when I’m really tired, but there are just too many syllables in that word for daily use.

    Collecting

    I’m the same regarding stickers. I’ve received stickers from various podcasts (although I never really sought them out) and I didn’t know what to do with them. I’ve started keeping them in a journal I never used, documenting where they’re from and when I added them. Apart from my awful handwriting, it’s been great so far.

    Journal opened up to a double page showing stickers from omg.lol and Robb Knight

    I probably do need to get some HV stickers, though.

    A Trash Ad for Zachary

    Should check to see if Johnny Decimal got any conversions from that ad in #106. 😀

    Also, here’s a free tag line for your rubbish bags: we put the trash in the bags, not the pods.

    🍅⏲️ 00:39:05

    I’m going to make the case for Vivaldi. Did you know there’s actually a Pomodoro timer built into Vivaldi? Click the clock on the bottom-right of the status bar to bring it up.

    Screenshot of the Clock panel in Vivaldi, which shows a Countdown and Alarm with a Pomodoro option

    Never used it myself, since I don’t use a Pomodoro timer, but can Firefox do that?!

    Once again, a really great listen, as always.

    On Micro.blog, Scribbles, And Multi-homing

    I’ve been asked why I’m using Scribbles given that I’m here on Micro.blog. Honestly, I wish I could say I have a great answer. I like both services very much, and I have no plans of abandoning Micro.blog for Scribbles, or vice versa. But I am planning to use both for writing stuff online, at least for now, and I suppose the best answer I can give is a combination of various emotions and hang-ups I have about what I want to write about, and where it should go.

    I am planning to continue to use Micro.blog pretty much how others would use Mastodon: short-form posts, with the occasional photo, mainly about what I’m doing or seeing during my day. I’ll continue to write the occasional long-form post, but that won’t be the majority of what I write here.

    My intention is for what I post on Scribbles to be more long-form, which brings me to my first reason: I think I prefer Scribbles’ editor for long-form posts. Micro.blog works well for micro-blogging, but I find any attempt to write something longer a little difficult. I can’t really explain it. It just feels like I’m spending more effort trying to get the words out on the screen, like they’re resisting in some way.

    It’s easier for me to do this using Scribbles’ editor. I don’t know why. It might be a combination of how the compose screen is styled and laid out, plus the use of a WYSIWYG editor1. But whatever it is, it all combines into an experience where the words flow a little easier for me. That’s probably the only way I can describe it. There’s nothing really empirical about it at all, but maybe that’s the point. It involves the emotional side of writing: the “look and feel”.

    Second, I like that I can keep separate topics separate. I thought I could be someone who can write about any topic in one place, but when I’m browsing this site myself, I get a bit put out by all the technical topics mixed in with my day-to-day entries. They feel like they don’t belong here. Same with project notes, especially given that they tend to be more long-form anyway.

    This I just attribute to one of my many hang-ups. I never have this issue with other sites I visit. It may be an emotional response to what I wrote about: reading about my day-to-day induces a different feeling (casual, reflective) than posts about code (thinking about work) or projects (being a little more critical, maybe even a little bored).

    Being able to create multiple blogs in Scribbles, thanks to signing up for the lifetime plan, gives me the opportunity to create separate blogs for separate topics: one for current projects, one for past projects, and one for coding topics. Each of them, along with Micro.blog, can have its own purpose and writing style: more of a public journal for the project sites, more informational or critical on the coding topics, and more day-to-day Mastodon-like posts on Micro.blog (I also have a check-in blog which is purely a this-is-where-I’ve-been record).

    Finally, I think it’s a bit of that “ooh, shiny” aspect of trying something new. I definitely got that using Scribbles. I don’t think there’s much I can do about that (nor do I want to 😀).

    And that’s probably the best explanation I can give. Arguably it’s easier just writing in one place, and to that I say, “yeah, it absolutely is.” Nothing about any of this is logical at all. I guess I’m trying to optimise for posting something without all the various hang-ups I have about posting it at all, and I think having these separate spaces to do so helps.

    Plus, knowing me, it’s all likely to change pretty soon, and I’ll be back to posting everything here again.


    1. Younger me would be shocked to learn that I’d favour a WYSIWYG editor over a text editor with Markdown support ↩︎

    Self-Driving Bicycle for The Mind

    While listening to the Stratechery interview with Hugo Barra, a thought occurred to me. Barra mentioned that Xiaomi was building an EV. Not a self-driving one, mind you: this one has a steering wheel and pedals. He made the comment that were Apple to actually go through with releasing a car, it would look a lot like what Xiaomi has built. I haven’t seen either car project myself, so I’ll take his word for it.

    This led to the thought that it was well within Apple’s existing capability to release a car. They would’ve had to skill up in automotive engineering, but they could hire people to do that. What they couldn’t do was all the self-driving stuff. No-one can do that yet, and it seems to me that being unable to deliver on this non-negotiable requirement was one of the things that doomed the project. Sure, there were others — it seems like they were lacking focus in a number of other areas — but this seems like a big one.

    This led to the next thought: why did Apple ever think it was a good idea to have the car be self-driving? What’s wrong with having one driven by the user? This seems like a very un-Apple-like product decision. Has Apple ever been good at releasing tech that would replace, rather than augment, the user’s interaction with the device? Do they have phones that browse the web for you? Have they replaced Zsh with ChatGPT in macOS (heaven forbid)? Probably the only product that comes close is Siri, and we all know what a roaring success that is.

    Apple’s strength is in releasing products that keep human interaction a central pillar of their design. They should just stick with that, and avoid any of the self-driving traps that come up. It’s a “bicycle for the mind” after all: the human is still the one doing the pedalling.

    On Post Headers

    My answer to @mandaris’ question:

    How many of you are using headers in your blogging? Are you using anything that denotes different sections?

    I generally don’t use headers, unless the post is so long it needs them to break it up a little. When I do, I tend to start with H2, then step down to H3, H4, etc.

    I’d love to start with H1, but most themes I encounter, including those from software like Confluence, style H1 to be almost the same size as the page title. This kills me as the page title should be separate from any H1s in the body, and styled differently enough that there’s no mistaking what level the header’s on.

    But, c’est la vie.

    Sorting And Go Slices

    A word of caution for anyone passing Go slices to a function which will sort them: doing so as-is will modify the original slice. If you were to write this, for example:

    package main
    
    import (
    	"fmt"
    	"sort"
    )
    
    func printSorted(ys []int) {
    	sort.Slice(ys, func(i, j int) bool { return ys[i] < ys[j] })
    	fmt.Println(ys)
    }
    
    func main() {
    	xs := []int{3, 1, 2}
    	printSorted(xs)
    	fmt.Println(xs)
    }
    

    You will find, when you run it, that both xs and ys will be sorted:

    [1 2 3]
    [1 2 3]
    

    If this is not desired, the remedy would be to make a copy of the slice prior to sorting it:

    func printSorted(ys []int) {
    	ysDup := make([]int, len(ys))
    	copy(ysDup, ys)
    	sort.Slice(ysDup, func(i, j int) bool { return ysDup[i] < ysDup[j] })
    	fmt.Println(ysDup)
    }
    

    This makes sense when you consider how slices work: a slice is a small struct containing a pointer to a backing array, along with the length and capacity. Passing a slice to a function copies that struct, but both copies still point to the same backing array, and it is this array that sort.Slice will modify.

    On the face of it, this is a pretty trivial thing to find out. But it’s worth noting here just so that I don’t have to remember it again.

    Adding A Sidebar To A Tiny Theme Micro.blog

    I thought I’d write a little about how I added a sidebar with recommendations to my Tiny Theme’ed Micro.blog, for anyone else interested in doing likewise. For an example of how this looks, please see this post, or just go to the home page of this site.

    I should say that I wrote this in the form of a Micro.blog plugin, just so that I could use a proper text editor. It’s not published at the time of this post, but you can find all the code on GitHub, and although the steps here are slightly different, they should still work using Micro.blog’s template designer.

    I started by defining a new Hugo partial for the sidebar. This means that I can choose which page I want it to appear on without any copy-and-paste. You can do so by adding a new template with the name layouts/partials/sidebar.html, and pasting the following template:

    <div class="sidebar">
        <div class="sidebar-cell">
            <header>
                <h1>Recommendations</h1>
            </header>
            <ul class="blogroll">
                {{ range .Site.Data.blogrolls.recommendations }}
                    <li><a href="{{ .url }}">{{ .name }}: <span>{{ (urls.Parse .url).Hostname }}</span></a></li>
                {{ else }}
                    <p>No recommendations yet.</p>
                {{ end }}
            </ul>
        </div>
    </div>
    

    This creates a sidebar with a single cell containing your Micro.blog recommendations. Down the line I’m hoping to add additional cells with things like shoutouts, etc. The styling is not defined for this yet though.

    The sidebar is added to the page using Tiny Theme’s microhooks customisation feature. I set the microhook-after-post-list.html hook to the following HTML to include the sidebar on the post list:

    {{ partial "sidebar.html" . }}
    

    In theory, it should be possible to add it to the other pages just by adding the same HTML snippet to the other microhooks (go for the “after” ones). I haven’t tried it myself, though, so I’m not sure how this will look.

    Finally, there’s the styling. I added the following CSS which will make the page slightly wider and place the sidebar to the right side of the page:

    @media (min-width: 776px) {
        body:has(div.sidebar) {
            max-width: 50em;
        }
    
        div.wrapper:has(div.sidebar) {
            display: grid;
            grid-template-columns: minmax(20em,35em) 15em;
            column-gap: 60px;
        }
    }
    
    div.sidebar {
        font-size: 0.9em;
        line-height: 1.8;
    }
    
    @media (max-width: 775px) {
        div.sidebar {
            display: none;
        }
    }
    
    div.sidebar header {
        margin-bottom: 0;
    }
    
    div.sidebar header h1 {
        font-size: 1.0em;
        color: var(--accent1);
    }
    
    ul.blogroll {
      padding-inline: 0;
    }
    
    ul.blogroll li {  
      list-style-type: none !important;
    }
    
    ul.blogroll li a {
      text-decoration: none;
      color: var(--text);
    }
    
    ul.blogroll li a span {
      color: var(--accent2);
    }
    

    This CSS uses the style variables defined by Tiny Theme, so it should match the colour scheme of your blog. A page with a sidebar is also wider than one without; the width of pages without the sidebar is unchanged (if this isn’t your cup of tea, you can remove the :has(div.sidebar) selector from the body tag). The sidebar will not appear on small screens, like a phone in portrait orientation. I’m not entirely sure if I like this, and I may eventually make changes, but it’s fine for now.

    So that’s how the sidebar was added. More to come as I tinker with this down the line.

    Update: This is now a standalone Micro.blog Plugin called Sidebar For Tiny Theme.

    Photo Bucket Update: Exporting To Zip

    Worked a little more on Photo Bucket this week. I added the ability to export the contents of an instance to a Zip file, which consists of both images and metadata.

    Screenshot of a finder window showing the contents of the exported Zip file

    I went with a JSON Lines file for the image metadata. I considered a CSV file briefly, but for optional fields like captions and custom properties, I didn’t like the idea of a lot of empty columns. Better to go with a format that’s a little more flexible, even if it does mean more text per line.

    As for the images, I’m hoping for the export to consist of the “best quality” version. What that means will depend on the instance. The idea is to have three tiers of image quality managed by the store: “original”, “web”, and “thumbnail”. The “original” version is the untouched version uploaded to the store. The “web” version is re-encoded from the “original” and will be slightly compressed, with image metadata tags stripped out. The “thumbnail” version will be a small, highly compressed version suitable for thumbnails. There is to be a decision algorithm in place to get an image given the desired quality level. For example, if something needed the “best quality” version of an image, and the “original” image is not available, the service will fall back to the “web” version (the idea is that some of these tiers will be optional, depending on the needs of the instance).
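    The fallback logic is simple enough to sketch in Go. The tier names come from above, but the function and its signature are my own invention, not Photo Bucket’s actual code:

```go
package main

import "fmt"

// Quality tiers managed by the store, ordered best to worst.
var tiers = []string{"original", "web", "thumbnail"}

// bestAvailable returns the first tier at or below the desired
// quality that the store actually holds, falling back down the
// list when a tier is missing.
func bestAvailable(desired string, available map[string]bool) (string, bool) {
	pastDesired := false
	for _, t := range tiers {
		if t == desired {
			pastDesired = true
		}
		if pastDesired && available[t] {
			return t, true
		}
	}
	return "", false
}

func main() {
	// An instance that chose not to keep originals:
	have := map[string]bool{"web": true, "thumbnail": true}
	tier, _ := bestAvailable("original", have)
	fmt.Println(tier) // prints "web"
}
```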

    This is all partially working at the moment, and I’m hoping to rework it when I replace how stores and images relate to each other (this is what I’m starting on now, and why I built export first, since the rework will be backwards incompatible). So for the moment, the export simply consists of the “web” version.

    I’ve got unit tests working for this as well. I’m trying a new approach for unit testing in this project. Instead of using mocks, the tests actually run against fully instantiated versions of the services. There exists a servicestest package which does all the setup (using temporary directories, etc.) and tear-down of these services. Each individual unit test gets the services from this package and runs tests against a particular one.

    This does mean all the services are available and exercised within the tests, making them less like unit tests and more like integration tests. But I think I prefer this approach. The fact that the dependent services are covered gives me greater confidence that they’re working. It also means I can move things around without changing mocks or touching the tests.

    That’s not to say I’m not trying to keep each service its own component as much as I can. I’m still trying to follow best practices of component design: passing dependencies in explicitly when the services are created, for example. But setting them all up as a whole in the tests means I can exercise them while they serve the component being tested. And the dependencies are explicit anyway (i.e. no interfaces), so it makes sense keeping it that way for the tests as well. And it’s just easier anyway. 🤷
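    As a rough sketch of what this setup looks like, here’s the shape of the idea. The service types and the setup function below are stand-ins I’ve made up to illustrate the pattern, not the actual Photo Bucket code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Hypothetical stand-ins for real services, wired together
// explicitly (no mocks, no interfaces), as described above.
type Store struct{ dir string }
type ImageService struct{ store *Store }

// setupServices mimics what a servicestest-style package might do:
// create temporary directories, instantiate every service with its
// real dependencies, and return a teardown function.
func setupServices() (*ImageService, func(), error) {
	dir, err := os.MkdirTemp("", "servicestest")
	if err != nil {
		return nil, nil, err
	}
	store := &Store{dir: dir}
	teardown := func() { os.RemoveAll(dir) }
	return &ImageService{store: store}, teardown, nil
}

func main() {
	svc, teardown, err := setupServices()
	if err != nil {
		panic(err)
	}
	defer teardown()

	// A test would now exercise svc, with the real Store behind it.
	fmt.Println(filepath.IsAbs(svc.store.dir))
}
```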

    Anyway, starting rework on images and stores now. Will talk more about this once it’s done.

    Photo Bucket Update: More On Galleries

    Spent a bit more time working on Photo Bucket this last week1, particularly around galleries. They’re progressing quite well. I’ve made some strides in getting two big parts of the UI working: adding images to and removing them from galleries, and re-ordering gallery items via drag and drop.

    I’ll talk about re-ordering first. This was when I had to bite the bullet and start writing some JavaScript. Usually I’d turn to Stimulus for this, but I wanted to give HTML web components a try. And so far, they’ve been working quite well.

    The gallery page is generated server-side into the following HTML:

    <main>
      <pb-draggable-imageset href="/_admin/galleries/1/items" class="image-grid">
        <pb-draggable-image position="0" item-id="7">
          <a href="/_admin/photos/3">
            <img src="/_admin/img/web/3">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="1" item-id="4">
          <a href="/_admin/photos/4">
            <img src="/_admin/img/web/4">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="2" item-id="8">
          <a href="/_admin/photos/1">
            <img src="/_admin/img/web/1">
          </a>
        </pb-draggable-image>        
      </pb-draggable-imageset>
    </main>
    

    Each <pb-draggable-image> node is a direct child of a <pb-draggable-imageset>. The idea is that the user can rearrange any of the <pb-draggable-image> elements within a single <pb-draggable-imageset> amongst themselves. Once the user has dropped an image onto another one, the image will signal its new position by firing a custom event. The containing <pb-draggable-imageset> element listens for this event and responds by actually repositioning the child element and sending a JSON message to the backend to perform the move in the database.

    A lot of this was based on the MDN documentation for drag and drop, and it follows the examples quite closely. I did find a few interesting things, though. My first attempt was to put the draggable attribute onto the <pb-draggable-image> element, but I wasn’t able to get any drop events when I did. Moving the draggable attribute onto the <a> element seemed to work. I’m not quite sure why this is; I can’t think of any reason why it wouldn’t work. It may have been something else, such as how I was initialising the web components.

    Speaking of web components, there was a time when the custom component’s connectedCallback method was being called before the child <a> elements were present in the DOM. This was because I had the <script> tag in the HTML head, configured to be evaluated during parsing. Moving it to the end of the body and loading it as a module fixed that issue. I also found that moving elements around using element.before and element.after would actually call connectedCallback and disconnectedCallback each time, meaning that any event listeners registered within connectedCallback would need to be de-registered, otherwise events would be handled multiple times. This book-keeping was slightly annoying, but it worked.

    Finally, there was moving the items within the database. I’m not sure how best to handle this, but I have a method that seems to work. What I’m doing is tracking the position of each “gallery item” using a position field. This field is 1 for the first item, 2 for the next, and so on for each item in the gallery. Fetching items just orders by this field, so as long as the values are distinct, they don’t need to form a sequence incrementing by 1, though I wanted to keep it that way as much as possible.

    The actual move involves two update queries. The first one will update the positions of all the items that are to shift left or right by one to “fill the gap”. The way it does this is that when an item is moved from position X to position Y, the value of position between X and Y would be changed by +1 if X > Y, or by –1 if Y > X. This is effectively the same as setting position X to X + 1, and so on, but done using one UPDATE statement. The second query just sets the position of item X to Y.
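    To illustrate, here’s the same shuffle done on an in-memory map of item ID to position, rather than with SQL. The function name and types are mine; it just mirrors what the two UPDATE statements do:

```go
package main

import "fmt"

// moveItem moves the item movedID from its current position x to
// position y. The loop plays the role of the first UPDATE statement,
// shifting the in-between items by one to fill the gap; the final
// assignment plays the role of the second UPDATE.
func moveItem(pos map[int]int, movedID, y int) {
	x := pos[movedID]
	for id, p := range pos {
		if id == movedID {
			continue
		}
		if x > y && p >= y && p < x {
			pos[id] = p + 1 // moved left: shift these right
		} else if y > x && p > x && p <= y {
			pos[id] = p - 1 // moved right: shift these left
		}
	}
	pos[movedID] = y
}

func main() {
	// Item IDs taken from the gallery HTML above.
	pos := map[int]int{7: 1, 4: 2, 8: 3}
	moveItem(pos, 8, 1) // move item 8 from position 3 to position 1
	fmt.Println(pos[8], pos[7], pos[4]) // prints 1 2 3
}
```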

    So that’s moving gallery items. I’m not sure how confident I am with this approach, but I’ve been testing it, both manually and with unit tests. It’s not quite perfect yet: I’m still finding bugs (I found some while coming up with these screencasts). Hopefully, I’ll be able to get to the bottom of them soon.

    The second bit of work was actually adding and removing images in the galleries themselves. This, for the moment, is done using a “gallery picker” which is available in the image details. Clicking “Gallery” while viewing an image will show the list of galleries in the system, with toggles on the left. The galleries an image already belongs to are enabled, and the user can choose the galleries they want the image to be in by switching the toggles on and off. These translate to insert and remove statements behind the scenes.

    The toggles are essentially just HTML and CSS, and the bulk of the code was taken from this example, with some tweaks. They look good, but I think I may need to make them slightly smaller for mouse and keyboard.

    I do see some downsides with this interaction. First, it reverses the traditional idea of adding images to a gallery: instead of doing that, you’re selecting galleries for an image. I’m not sure if this would be confusing for others (it is modelled on how Google Photos works). Plus, there’s no real way to add images in bulk. It might be that I’ll need a way to select images from the “Photos” section, with a dialog like this to add or remove them all from a gallery. I think that would go far in solving both of these issues.

    So that’s where things are. Not sure what I’ll work on next, but it may actually be import and export, and the only reason for this is that I screwed up the base model and will need to make some breaking changes to the DB schema. And I want to have a version of export that’s compatible with the original schema that I can deploy to the one and only production instance of Photo Bucket so that I can port the images and captions over to the new schema. More on this in the future, I’m sure.


    1. Apparently I’m more than happy to discuss work in progress, yet when it comes to talking about something I’ve finished, I freeze up. 🤷 ↩︎

    Complexity Stays At the Office

    It’s interesting to hear what others like to look at during their spare time, like setting up Temporal clusters or looking at frontend frameworks built atop five other frameworks built on React. I guess the thinking is that since we use it for our jobs, it’s helpful to keep abreast of these technologies.

    Not me. Not any more. Back in the day I may have thought similarly. I may even have had a passing fancy for stuff like this, revelling in its complexity with the misguided assumption that complexity equals power (well, to be fair, it would equal leverage). But I’ve been burned by this complexity one too many times. Why, just now, I’ve spent the last 30 minutes running into problem after problem trying to find the single root cause of something. It’s a single user interaction, but because it involves 10 different systems, it means looking in 10 different places, each one having its own issues blocking me from forward progress.

    So I am glad to say that those days are behind me. Sure, I’ll learn new tech like Temporal if I need to, but I don’t go out looking for it anymore. If I want to build something, it will be radically simple: Go, SQLite or PostgreSQL, server-side rendered HTML with a hint of JavaScript. I may not achieve the leverage these technologies offer, but by gosh I’m not going to put up with the complexity baggage that comes with them.

    Implicit Imports To Load Go Database Drivers Considered Annoying (By Me)

    I wish Go’s approach to loading database drivers didn’t involve implicitly importing them as packages. At least that way, package authors would be more likely to get the driver from the caller, rather than load a driver themselves.

    I’ve been bitten by this recently, twice. I’m using GitHub’s Linux runners to build an ARM version of something that needs to use SQLite. As far as I can tell, it’s not possible to build an ARM binary with CGO enabled on these runners (at least, not without installing a bunch of dependencies — I’m not that desperate yet).

    I’m currently using an SQLite driver that doesn’t require CGO, so all my code builds fine. There also exists a substantially more popular SQLite driver that does require CGO, and twice I’ve tried importing packages which use this driver, thereby breaking the build. These packages don’t allow me to pass in a database connection explicitly, and even if they did, I’m not sure it would help: they’d still be importing the SQLite driver that needs CGO.

    So what am I to do? As long as I need to build ARM versions, I can’t use these packages (not that I need an ARM version as such, but it makes testing in a Linux VM running on an M1 Mac easier). I suppose I could roll my own, but it would be nice not to have to. It’d be much better for me to load the driver myself and pass it to these packages explicitly.
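    To illustrate the shape of the API I’d prefer, here’s a sketch with made-up names. Note that nothing here registers a driver; the application, not the library, would choose and blank-import one:

```go
package main

import (
	"database/sql"
	"fmt"
)

// Repo is a hypothetical library type that takes the *sql.DB from
// the caller, rather than blank-importing a driver itself. The
// library never dictates which driver (CGO or not) gets compiled in.
type Repo struct{ db *sql.DB }

func NewRepo(db *sql.DB) *Repo { return &Repo{db: db} }

func main() {
	// The application would pick the driver to suit its build
	// constraints, e.g. (hypothetical import path):
	//
	//   import _ "example.invalid/pure-go-sqlite"
	//   db, err := sql.Open("sqlite", "app.db")
	//
	// This sketch registers no drivers at all, so the registry
	// is empty, yet the library still compiles and constructs fine.
	fmt.Println(len(sql.Drivers()))
	fmt.Println(NewRepo(nil) != nil)
}
```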

    So yeah, I wish this was better.

    P.S. When you see the error message “unable to open database file: out of memory (14)” when you try to open an SQLite database, it may just mean the directory it’s in doesn’t exist.

    Goland Debugger Not Working? Try Upgrading All The Things

    I’ve been having occasional trouble with the debugger in Goland. Every attempt to debug a test would just fail with the following error:

    /usr/local/go/bin/go tool test2json -t /Applications/GoLand.app/…
    API server listening at: 127.0.0.1:60732
    could not launch process: EOF
    
    Debugger finished with the exit code 1
    

    My previous attempts at fixing this — upgrading Go and Goland — did get it working for a while, but recently it’s been happening again. And being on the most recent versions of Go and Goland, that avenue was not available to me.

    So I set about looking for other ways to fix this. Poking around the web netted this support post, which suggested upgrading the Xcode Command Line tools:

    $ sudo rm -rf /Library/Developer/CommandLineTools
    $ xcode-select --install
    

    I ran the commands and the tools did upgrade successfully, but I was still encountering the problem. I then wondered if Goland used Delve for debugging, and whether that actually needed upgrading. I’ve got Delve via Homebrew, so I went about upgrading that:

    $ brew upgrade dlv
    

    And indeed, Homebrew did upgrade Delve from 1.21.0 to 1.22.0. And once that finished, and after restarting Goland1, I was able to use the debugger again.

    So, if you’re encountering this error yourself, try upgrading one or more of these tools:

    • Goland
    • Go
    • Xcode Command Line tools (if on a Mac)
    • Delve

    This was the order I tried them in, but you might get lucky by trying Delve first. YMMV.


    1. Not sure that a restart is required, but I just did it anyway, “just in case”. ↩︎

    People Are More Interested In What You're Working On Than You Think

    If anyone else is wary of posting about the projects they’re working on, fearing that others would think they’re showing off or something, here are two bits of evidence that I hope will allay those fears:

    Exhibit 1: I’m a bit of a fan of the GMTK YouTube channel. Lots of good videos there about game development that, despite not being a game developer myself, I find fascinating. But the playlist I enjoy the most is the one where Mark Brown, the series creator, actually goes through the process of building a game himself. Now, you’re not going to learn how to use Unity from that series (although he does have a video about that), but it’s fun seeing him making design decisions, showing off prototypes, overcoming challenges — both external and self-imposed — and seeing it all come together. I’m always excited when he drops one of these videos, and when I learnt today that he’s been posting dev logs on his Discord, I was so interested that I immediately signed up as a Patreon supporter.

    Exhibit 2: I’ve been writing about my own projects on a new Scribbles blog. This was completely for myself, as a bit of an archive of previous work that would be difficult or impossible to revisit later. I had no expectations of anyone else finding it interesting. Yet, earlier this week, while at lunch, the conversation I was having with work colleagues turned to personal projects. One of them asked if I was working on anything, and when I told him about the blog, he expressed interest. I gave him the link, and that afternoon I saw him taking a look (I’m not expecting him to be a regular visitor, but the fact that he was interested at all was something).

    It turns out that my colleague gets a kick out of seeing others do projects like this on the side. I guess, in retrospect, this shouldn’t be a surprise to me, seeing that I get the same thrill. Heck, that’s why I’ve subscribed to tech podcasts like Under the Radar: I haven’t written an iOS app in my life, yet it’s just fun listening to dev stories like this.

    Yet, when it comes to something that I’m working on, for a long time I’ve always held back, thinking that talking about it is a form of showing off. I like to think I’m getting better here, but much like the Resistance, that feeling is still there, whispering doubt in my ear, asking who would be interested in these raw, unfinished things that will never go beyond the four walls of the machine from whence they came. I don’t think that feeling will ever go away, but in case I lose my nerve again, I hope to return to the events of this week, just to remind myself that, yeah, people are interested in these stories. I can put money on that assurance. After all, I just did.

    So don’t be afraid to publish those blog posts, podcasts, or videos on what you’re working on. I can’t wait to see them.

    See also, this post by Aaron Francis that touches on the same topic (via The ReadME Project).

    Github Actions, Default Token Permissions, And Publishing Binaries

    Looks like GitHub’s locked down the access rights of the GITHUB_TOKEN recently. This is the token that’s made available to all GitHub Actions by default.

    After taking a GoReleaser config file from an old project and using it in a new one, I encountered this error when GoReleaser tried to publish the binaries as part of a Github Release:

    failed to publish artifacts:
    could not release:
    PATCH https://api.github.com/repos/lmika/<project>/releases/139475588:
    403 Resource not accessible by integration []
    

    After a quick search, I found this GitHub issue which seemed to cover the same problem. It looks like the way to resolve it is to explicitly add the contents: write permission to the GitHub Actions YAML file:

    name: Create Release
    
    on:
      push:
        tags:
          - 'v*'
    
    # Add this section
    permissions:
      contents: write
      
    jobs:
      build:
        runs-on: ubuntu-latest
    

    And sure enough, after adding the permissions section, GoReleaser was able to publish the binaries once again.

    There’s a bunch of other permissions that might be helpful for other things, should you need them.

    Thoughts on The Failure of Microsoft Bob

    Watching a YouTube video about Microsoft Bob left me wondering if one of the reasons Bob failed was that it assumed that users, who may have been intimidated by a GUI when they first encountered one, would be intimidated forever. That their level of skill would always remain one in which the GUI was scary and unusable, and their only hope of success in using a computer was through applications like Bob.

    That might be true for some, but I believe such cases make up a small fraction of the userbase as a whole. If someone’s serious about getting the most out of their computer, even back then when the GUI was brand new, I can’t see how they wouldn’t naturally skill up, or at least want to.

    I think that’s why I’m bothered by GUIs that sacrifice functionality in the name of “simplicity.” It might be helpful at the start, but pretty soon people will grow comfortable using your UI, and will hit the artificial limits of the application sooner than you expect.

    Not that I’m saying that all UIs should be as complex as Logic Pro for no reason: if the domain is simple, then keep it simple. But when deciding on the balance between simplicity and capability, perhaps have trust in your users’ abilities. If they’re motivated (and your UI design is decent) I’m sure they’ll be able to master something a little more complex.

    At least, that’s what this non-UI designer believes.

    Why I Use a Mac

    Why do I use a Mac?

    Because I can’t get anything I need to get done on an iPad.

    Because I can’t type to save myself on a phone screen.

    Because music software doesn’t exist on Linux.

    Because the Bash shell doesn’t exist on Windows (well, it didn’t when I stopped using it).

    That’s why I use a Mac.

    The AWS Generative AI Workshop

    Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level yet I still got something out of it.

    What was striking was just how much of integrating these foundational models (something like an LLM that was pre-trained on the web) involved natural language. Like if you were building a chat bot to have a certain personality, you’d start each context with something like:

    You are a friendly life-coach which is trying to be helpful. If you don’t know the answer to a question, you are to say I don’t know. (Question)

    This would extend to domain knowledge too. Now, you could fine-tune a foundational model with your own data set, but an easier, albeit slightly less efficient, way would be to hand-craft a bunch of question and answer pairs, and feed them straight into the prompt.
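    To make that concrete, here’s a rough sketch of what feeding hand-crafted question and answer pairs into a prompt might look like (the personality line is the one from the workshop; the Q&A pairs and function name are my own invention):

```python
# A rough sketch of feeding domain knowledge in via the prompt itself,
# rather than fine-tuning the model. The Q&A pairs here are made up.
QA_PAIRS = [
    ("What plans do you offer?", "We offer Basic, Pro, and Enterprise plans."),
    ("How do I cancel?", "You can cancel at any time from the billing page."),
]

def build_prompt(question: str) -> str:
    preamble = (
        "You are a friendly life-coach which is trying to be helpful. "
        "If you don't know the answer to a question, you are to say I don't know.\n\n"
    )
    # Each hand-crafted pair becomes a worked example in the context.
    examples = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in QA_PAIRS)
    return preamble + examples + f"Q: {question}\nA:"

print(build_prompt("How do I cancel?"))
```

    The model then (hopefully) continues on from the final “A:”, answering in the same style as the examples.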

    This extends to agents as well (code that the model interacts with). We didn’t cover agents to a significant degree, but after looking at some of the marketing materials, it seems to me that much of the integration is instructing the model to put parameters within XML tags (so that the much “dumber” agent can parse them out), and telling it how to interpret the structured response.

    A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.
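    As an illustration of what I mean (and this is purely my guess, not actual AWS code), the agent-side parsing could be as simple as:

```python
import re

# The model is instructed to wrap each parameter in XML tags, e.g.
#   "<city>Melbourne</city><date>2024-01-05</date>"
# so that the much "dumber" agent code can pull them out without
# needing to understand natural language at all.
def extract_params(model_output: str) -> dict[str, str]:
    return {tag: value for tag, value in re.findall(r"<(\w+)>(.*?)</\1>", model_output)}

params = extract_params("Calling the agent: <city>Melbourne</city><date>2024-01-05</date>")
print(params)  # {'city': 'Melbourne', 'date': '2024-01-05'}
```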

    Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something that involves vector databases that are available to the model and doesn’t require fine-tuning. I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).

    Replacing Ear Cups On JBL E45BT Headphones

    As far as wearables go, my daily drivers are a pair of JBL E45BT Bluetooth headphones. They’re several years old now and are showing their age: many of the buttons no longer work and it usually takes two attempts for the Bluetooth to connect. But the biggest issue is that the ear cups are no longer staying on. They’re fine when I wear them, but as soon as I take them off, the left cup falls to the ground.

    But they’re a decent pair of headphones, and I wasn’t keen on throwing them out or shopping for another pair. So I set about looking for a set of new ear cups.

    This is actually the second pair of replacement cups I’ve bought for these headphones. The first had a strip of adhesive that stuck the cup straight on to the speaker (it was this adhesive that was starting to fail). I didn’t make a note of where I bought them and a quick search didn’t turn up anything that looked like them. So in December, I settled for this pair from this eBay seller. Yesterday, they arrived.

    New set of ear-cups for a JBL E-series bluetooth headphones
    The new set of ear cups.
    A black bluetooth headphone on a table, with the left cup fallen off exposing the speaker, and the right cup slightly removed from its original position
    They couldn't have come sooner.

    First impressions were that they were maybe too big. I also didn’t see an adhesive strip to stick them on. Looking at the listing again, I realised that they’re actually for a different line of JBL headphones. But I was a little desperate, so I set about trying to get them on.

    The headphones in question on an old piece of paper, with the left cup replaced with one of the new ear cups, the right speaker exposed, and bits of old adhesive lying on the paper
    Removing the old adhesive, with my fingers (yeah, I probably should buy some tools).

    It turns out that they’re actually still a good fit for my pair. The aperture is a little smaller than the headphone speaker, but there’s a little rim around each one and I found that by slotting one side of the padding over the rim, and then lightly stretching and rolling the aperture around the speaker, it was possible to get them on. It’s a tight fit, but that just means they’re likely to stay on. And without any adhesive, which is good.

    The headphones with the right cup in profile demonstrating the roll of the padding onto the rim
    It's a bit hard to see, but if you look at the top of the right cup, you can see how the padding was rolled onto the speaker from the bottom.

    After a quick road test (a walk around the block and washing the dishes), I found the replacement to be a success. So here’s to a few more years of this daily driver.

    The headphones in profile with the new replacement cups
    Headphones with the new cups. They look and feel pretty good.
    The old replacement cups on a table, with the left cup losing its vinyl skin revealing the actual foam.
    The old cups, ready for retirement.

    Detecting A Point In a Convex Polygon

    Note: there are some interactive elements and MathML in this post. So for those reading this in RSS, if it looks like some formulas or images are missing, please click through to the post.

    For reasons that may or may not be made clear later, I’ve been working on something involving bestagons. I tended to shy away from things like this before, mainly because of the maths involved in tasks like determining whether a point is within a hexagon. But instead of running away once again from things more complex than a grid, I figured it was time to learn this once and for all. So off I went.

    First stop was Stack Overflow, and this answer on how to test if a point is inside a convex polygon:

    You can check that easily with the dot product (as it is proportional to the cosine of the angle formed between the segment and the point, if we calculate it with the normal of the edge, those with positive sign would lay on the right side and those with negative sign on the left side).

    I suppose I could’ve taken this answer as it is, but I know if I did, I’d have something that’d be little more than magic. It’d do the job, but I’d have no idea why. Now, like many, if I can get away with having something that works without knowing how, I’m more likely to take it. But when it comes to code, doing this usually comes back to bite me in the bum. So I’m trying to look for opportunities to dig a little deeper than I normally would, and learn how and why something works.

    It took me a while, and a few false starts, but I think I got there in the end. And I figured it would be helpful for others to know how I came to understand why this works at all. And yeah, I’m sure this is provable with various theorems and relationships, but that’s just a little too abstract for me. No, what got me to the solution in the end was visualising it, along with attempting to explain it below.

    First, let’s ignore polygons completely and consider a single line. Here’s one, represented as a vector:

    A vector drawn on graph paper pointing to the top-right

    Oh, I should point out that I’m assuming that you’re aware of things like vectors and trigonometric functions, and have heard of things like dot-product before. Hopefully it won’t be too involved.

    Anyway, we have this line. Let’s say we want to know if a specific point is to the “right” of the line. Now, if the line was vertical, this would be trivial to do. But here we’ve got a line that’s on an angle. And although a phrase like “to the right of” is still applicable, it’ll only be a matter of time before we have a line where “right” and “left” have no meaning to us.

    So let’s generalise it and say we’re interested in seeing whether a point is on the same side as the line’s normal.

    Now, there are actually two normals available to us, one going out on either side of the line. But let’s pick one and say we want the normal that points to the right if the line segment is pointing directly up. We can add that to our diagram as a grey vector:

    That same vector pointing to the top-right, with a normal originating from the same origin pointing to the bottom-right

    Now let’s consider this point. We can represent it as a vector that shares the same origin as the line segment1. With this we can do all sorts of things, such as work out the angle between the two (if you’re viewing this in a browser, you can tap on the canvas to reposition the green ray):

    That same vector and normal, now with an additional line coming from the origin drawn rotated 48° clockwise from the original vector

    This might give us a useful solution to our problem here; namely, if the angle between the two vectors falls between 0° and 180°, we can assume the point is to the “right” of the line. But we may be getting ahead of ourselves. We haven’t even discussed how we can go about “figuring out the angle” between these vectors.

    This is where the dot product comes in. The dot product is an operation that takes two vectors and produces a scalar value, based on the formula below:

    a · b = a_x b_x + a_y b_y

    One useful property of the dot product is that it’s proportional to the cosine of the angle between the two vectors:

    a · b = |a| |b| cos θ

    Rewriting this gives us a formula for the angle between two vectors:

    θ = cos⁻¹((a · b) / (|a| |b|))

    So a solution here would be to calculate the angle between the line and the point vector, and as long as it falls between 0 and 180°, we can determine that the point is on the “right” side of the line.

    Now, I actually tried this approach in a quick and dirty mockup using JavaScript, but I ran into a bit of an issue. For you see, the available inverse cosine function did not provide a value beyond 180°. When you think about it, this kinda makes sense, as the cosine function starts moving from -1 back to 1 as the angle goes beyond 180° (or below 0°).

    But we have another vector at our disposal, the normal. What if we were to calculate the angle between those two?

    That same vector and normal, and the additional line coming from the origin drawn 136° anti-clockwise from the normal, with text indicating that the dot product is -47500

    Ah, now we have a relationship that’s usable. Consider when the point moves to the “left” of the line. You’d notice that the angle is either greater than 90° or less than –90°. These just happen to be the angles at which the cosine function yields a negative result. So a possible solution here is to work out the angle between the point vector and the normal, take the cosine, and if it’s positive, the point is on the “right” side of the line (and it’s on the “left” side if the cosine is negative).

    But we can do better than that. Looking back at the relationship between the dot product and the angle, we can see that the only way this equation can be negative is if the cosine is negative, since the vector magnitudes are always positive. So we don’t even need to work out angles at all. We can just rely on the sign of the dot product between the point vector and the normal.

    And it’s here that the solution clicked. A point is to the “right” of a line if the dot product of the point vector and the “right”-sided normal is positive. Look back at the original Stack Overflow answer above, and you’ll see that’s pretty much what was said there as well.
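    In code, this whole test boils down to a dot product and a sign check. Here’s a quick sketch in Python (my mockup was in JavaScript, but the idea carries over; the function names are mine):

```python
def right_normal(vx: float, vy: float) -> tuple[float, float]:
    # The normal that points "right" when the line points straight up:
    # rotating the line vector 90 degrees clockwise (y-axis pointing up).
    return (vy, -vx)

def is_right_of(line: tuple[float, float], point: tuple[float, float]) -> bool:
    # The point is on the normal's side iff the dot product between the
    # point vector and the normal is positive.
    nx, ny = right_normal(*line)
    px, py = point
    return nx * px + ny * py > 0

print(is_right_of((0.0, 1.0), (1.0, 0.0)))   # line pointing up, point on the right: True
print(is_right_of((0.0, 1.0), (-1.0, 0.0)))  # same line, point on the left: False
```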

    Now that we’ve got this working for a single line, it’s trivial to extend it to convex2 polygons. Treat each edge as a line segment with its normal pointing inwards, calculate the dot product between each normal and the point, and check the signs. If they’re all positive, the point is within the polygon. If not, it’s outside.

    A hexagon drawn in the centre of graph paper with normals and lines originating at each vertex and converting at a single point located in the centre of the hexagon. The lines indicating a positive dot product for each one and that the point is within the hexagon
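    Putting it all together, here’s a minimal sketch of the full test (assuming the polygon’s vertices are listed counter-clockwise with the y-axis pointing up, so that each edge’s left-hand normal points inwards):

```python
Point = tuple[float, float]

def point_in_convex_polygon(poly: list[Point], p: Point) -> bool:
    # Walk each edge, compute its inward-pointing normal, and check the
    # sign of the dot product with the point (relative to the edge start).
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        ex, ey = bx - ax, by - ay      # edge vector
        nx, ny = -ey, ex               # inward (left-hand) normal
        px, py = p[0] - ax, p[1] - ay  # point relative to the edge start
        if nx * px + ny * py < 0:      # negative dot product: outside
            return False
    return True

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]  # counter-clockwise
print(point_in_convex_polygon(square, (1.0, 1.0)))  # True
print(point_in_convex_polygon(square, (3.0, 1.0)))  # False
```

    Points exactly on an edge count as inside here; swap the comparison to <= if you’d rather they didn’t.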

    So here’s an approach that’ll work for me, and is relatively easy and cheap to work out. And yeah, it’s not a groundbreaking approach, and basically involved relearning a bunch of linear algebra I’ve forgotten since high school. But hey, better late than never.


    1. Well, technically these vectors should be offset from the global origin, but they’re translated in these plots for demonstration purposes. ↩︎

    2. I’m not sure if this is applicable to concave polygons, where the angle between segments can go beyond 180°. ↩︎

    Can a Single Line Or Even a Single Word Be Considered a Legitimate Blog Post?

    Yes.

    2023 Year In Review

    Well, once more around the sun and it’s time again to look back on the year that was.

    Career

    Reflecting on the work we did this past year, there were a few highlights. We managed to get a few major things released, like the new billing and resource usage tracking system (not super exciting, but it was still fun to work on). And although the crunch period we had was a little hard — not to mention the 3 AM launch time — it was good to see it delivered on time. We’re halfway through another large change that I hope to get out before the end of summer, so it’ll probably be full steam ahead when I go back to work this week.

    Highlights aside, there’s not much more to say here. I still feel like my career is in a bit of a rut. And although I generally still like my job, it’s difficult seeing a way forward that doesn’t involve moving away from being an “individual contributor” to a more managerial role. Not sure I like that prospect — leading a squad of 6 devs is probably the most I could manage.

    And honestly, I probably need to make this more of a priority for the new year. I’ve been riding in the backseat on this aspect of my life long enough. Might be time to spend a bit more effort driving my career, rather than letting things just happen to me.

    Ok, that’s out of the way. Now for the more exciting topics.

    Projects

    Dynamo-browse is ticking along, which is good. I’ve added a few features here and there, but there’s nothing huge that needs doing to it right now. It’s received some traction from others, especially people from work. But I gotta be honest, I was hoping that it would be better received than it has been. Oh yes, the project website gets visitors, but I just get the sense it hasn’t seen as many takers as I had hoped (I don’t collect app metrics, so I don’t know for certain). I’d like to say that it doesn’t matter: so long as I find it useful (and I do), that’s all that counts. And yeah, that’s true. But I’d be lying if I said I didn’t wish others would find it useful as well. Ah well.

    One new “major” project released this past year was F5 To Run. Now this, I’m not expecting anyone else to be interested in other than myself. This project to preserve the silly little games I made when I was a kid was quite a success. And now that they’re ensconced in the software equivalent of amber (i.e. web technologies), I hope they can live on for as long as I’m paying for the domain. Much credit goes to those that ported DOSBox to a JavaScript library. The games were reasonably easy to port over (it was just a matter of making the images), and it’s a testament to their work that stuff built on primitive 90’s IBM PC technology is actually the easiest to run this way. I just need to find a way to do the same for my Windows projects next.

    Another “major” project that I’m happy to have released was Mainboard Mayhem, a Chip’s Challenge clone. This was one of those projects I’d been building for myself over the last ten years, debating with myself whether it was worth releasing or not. I’d always sided on not releasing it, for a number of reasons. But this past year, I figured it was time to either finish and release it, or stop work on it altogether. I’m happy with the choice I made. And funnily enough, now that it’s finished, I haven’t had a need to tinker with it since (well, apart from that one time).

    There’ve been a few other things I worked on this past year, many of which didn’t really go anywhere. The largest abandoned project was probably the plan to flash those reclaimed devices built for schools into scoring devices. The biggest hurdle was connecting to the device. The loader PCB that was shipped to me didn’t quite work, as the pins weren’t making good contact (plus, I broke one of them, which didn’t improve things). The custom board I built to do the same thing didn’t work either: the pins were too short and uneven. So I never got to do anything with them. They’re currently sitting in my cupboard in their box, gathering dust. I guess I could unscrew the back and hook wires up to the appropriate solder points, but that’s a time investment I’m not interested in making at the moment.

    This project may rise again with hardware that’s a little easier for me to work with. I have my eye on the BeepBerry, which looks to be easier to work with, at least with my skills. I added my name to the wait-list, but hearing from others, it might be some time before I can get my hands on one (maybe if the person running the project spent less time fighting with Apple, he can start going through the wait-list).

    So yeah, got a few things finished this year. On the whole, I would like to get better at getting projects out there. Seeing people like Robb Knight who seem to just be continuously releasing things has been inspiring. And it probably doesn’t need to be all code either. Maybe other things, like music, video, and written prose. Throw a bit of colour into the mix.

    Speaking of written prose…

    Writing And Online Presence

    The domain reduction goal continues. I’m honestly not sure if it’s better or worse than last year. I didn’t record the number of registered domains I had at the start of 2023, but as of 27 December 2023, the domain count is at 25, of which 16 have auto-renew turned on.

    Domains Count
    Registered 25
    With auto-renew turned on 16
    Currently used for something 13
    Not currently for something but worth keeping 3
    Want to remove but stuck with it because it’s been shared by others 1

    Ultimately I’d like to continue cutting down the number of new domains I register. It’s getting to be an expensive hobby. I’ve started to switch to sub-domains for new things, so I shouldn’t be short of possible URLs for projects.

    I’m still trying to write here once a day, mainly to keep myself from falling out of the habit. But I think I’m at the point where I can start thinking less about the need for a daily post, and focus more on “better” posts as a whole. What does “better” mean? 🤷 Ultimately it’s in the eye of the beholder, but publishing fewer posts that I find “cringeworthy” is a start. And maybe having fewer posts that are just me complaining about something that happened that day. Maybe more about what I’m working on, or interesting things I encounter. Of course, this is all a matter of balance: I do enjoy reading (and writing) the occasional rant, and writing about things that frustrate me is cathartic. Maybe just less of that in the new year.

    I did shut down the other blog I was using for tech and project posts. It’s now a digital garden and knowledge base, and so far it’s working quite well. In retrospect, I’m so glad I did this. I was paying unnecessary cognitive overhead deciding which blog a post should go to. They all just go here now.

    Travel

    Oof, it was a big year of travel this past year. The amount of time I’ve spent away from home comes to 10 weeks in total, a full 20% of the year. This might actually be a record.

    The highlight of the past year was my five-week trip to Europe. Despite it being my third visit to Europe (fourth if you include the UK), I consider this to be what could only be described as my “Europe trip”. I had a lot of great memories there, and stacks of photos and journal entries that I’ve yet to go through. I’m pleased that it seems to have brought my friends and me closer. These sorts of trips can make or break friendships, and I think we left Europe with tighter bonds than when we arrived.

    One other notable trip was a week in Singapore. This was for work, and much like my previous work trips, it mainly consisted of being in offices. But we did get a chance to do some sightseeing, and it was a pleasure to be able to work with those in Singapore.

    And of course, there was another trip to Canberra to look after my sister’s cockatiels, which was, as always, a pleasure.

    Not sure what this new year will bring in terms of travel. I’m predicting a relatively quiet one, but who knows.

    Books And Media

    This is the first year I set up a reading goal. I wanted to get out of the habit of starting books and not finishing them (nothing wrong with not finishing books; I was just getting distracted). This past year’s goal was quite modest — only 5 books — but it’s pleasing to see that I managed to surpass that and actually finish 7 books1.

    Keep Going: 10 Ways to Stay Creative in Good Times and Bad What to Do When It's Your Turn Anything You Want Do The Work! Turning Pro The Song of Significance Evil Plans: Having Fun on the Road to World Domination

    As for visual media, well, there’s nothing really worth commenting on here. I did have a go at watching Bojack Horseman earlier in the year, and while the first few series were good, I bounced off after starting the fourth. I also gave Mad Men a try, after hearing how well it was received by others, but I couldn’t get through the first series. I found the whole look-at-how-people-lived-in-the-’60s trope a bit too much after the first few episodes.

    In general, I’ve found my viewing habits drift away from scripted shows this past year. I’m more than happy to just watch things on YouTube; or more accurately, rewatch things, as I tend to stick with videos I’ve seen before. And although I’ve got no plans to write a whole post about my subscriptions just yet (the sand just feels too nice around my face), I did get around to cancelling my Netflix subscription, seeing how little I used it this past year.

    As for podcasts, not much change here. With a few exceptions, the shows I was listening to at the end of the year are pretty close to what I was listening to at the start. But I did find myself enjoying these new shows:

    These are now in my regular rotation.

    The 2023 Word

    My 2023 word for the year was generous, trying to be better at sharing things. And I like to think I’ve made some improvements here. It may not have come across in a summary post like this, but I’ve tried to keep it front of mind in most things I work on. I probably can do a little better here in my personal life. But hey, like most themes, it’s always a constant cycle of improvement.

    I must say, this last year has been pretty good. Not all aspects of it — there will always be peaks and valleys — but thinking back on it now, I feel that it’s been one of the better ones recently. And as for this review, I’ll just close by saying: here’s to a good 2024.

    Happy New Year. 🥂


    1. It’s a good thing I was tracking them, as I thought I’d only get to 6 this year. ↩︎
