Long Form Posts

    As Someone Who Works In Software

    As someone who works in software…

    1. I cringe every time I see society bend to the limitations of the software they use. It shouldn’t be this way; the software should serve the user, not the other way around.
    2. I appreciate a well designed API. Much of my job is using APIs built by others, and the good ones always feel natural to use, like water flowing through a creek. Conversely, a badly designed API makes me want to throw my laptop to the ground.
    3. I think a well designed standard is just as important as a well designed API. Thus, if you’re extending the standard in a way that adds a bunch of exceptions to something that’s already there, you may want to reflect on your priorities and try an approach that doesn’t do that.
    4. I also try to appreciate, to varying levels of success, that there are multiple ways to do something and once all the hard and fast requirements are settled, it usually just comes down to taste. I know what appeals to my taste, but I also (try to) recognise that others have their own taste as well, and what appeals to them may not gel with me. And I just have to deal with it. I may not like it, but sometimes we have to deal with things we don’t like.
    5. I believe a user’s home directory is their space, not yours. And you better have a bloody good reason for adding stuff there that the user can see and didn’t ask for.

    The Perfect Album

    The guys on Hemispheric Views have got me blogging once again. The latest episode brought up the topic of the perfect album: an album that you can “just start from beginning, let it run all the way through without skipping songs, without moving around, just front to back, and just sit there and do nothing else and just listen to that whole album”.

    Well, having crashed Hemispheric Views once, I thought it was time once again to give my unsolicited opinion on the matter. But first, some comments on some of the suggestions made on the show.

    I’ll start with Martin’s suggestion of the Cat Empire. I feel like I should like the Cat Empire more than I currently do. I used to know someone who was fanatical about them. He shared a few of their songs when we were jamming — we were in a band together — and on the whole I thought they were pretty good. They’re certainly a talented bunch of individuals. But it’s not a style of music that gels with me. I’m just not a huge fan of ska, which is funny considering the band we were both in was a ska band.

    I feel like I haven’t given Radiohead a fair shake. Many people have approached me and said something along the lines of “you really should try Radiohead; it’s a style of music you may enjoy,” and I never got around to following their advice. I probably should though; I think they may be right. Similarly for Daft Punk, of whom I’ve heard a few tracks and thought them pretty good. I really should give Random Access Memories a listen.

    I would certainly agree with Jason’s suggestion of The Dark Side of the Moon. I count myself a Pink Floyd fan, and although I wouldn’t call this my favourite album by them, it’s certainly a good album (if you were to ask, my favourite would probably be either The Wall or Wish You Were Here, plus side B of Meddle).

    As to what my idea of a perfect album would be, my suggestion is pretty simple: it’s anything by Mike Oldfield.

    LOL, just kidding!1 😄

    No, I’d say a great example of a perfect album is Jeff Wayne’s musical adaptation of The War Of The Worlds.

    The album cover of Jeff Wayne’s Musical Version of The War Of The Worlds

    I used to listen to this quite often during my commute, before the pandemic arrived and brought that listen count down to zero. But I picked it back up a few weeks ago and it’s been a constant ear-worm since. I think it ticks most of the boxes for a perfect album. It’s a narrative set to music, which makes it quite coherent and naturally discourages skipping tracks. The theming around the various elements of the story is really well done: hearing a theme introduced near the start of the album come back later is always quite a thrill, and you find yourself picking up more of these with each listen. It’s very much not a recent album but, much like Pink Floyd’s work, there’s a certain timelessness that makes it still a great piece of music even now.

    Just don’t listen to the recent remakes.


    1. Although not by much. ↩︎

    Favourite Comp. Sci. Textbooks

    John Siracusa talked about his two favourite textbooks on Rec Diffs #233: Modern Operating Systems and Computer Networks, both by Andrew S. Tanenbaum. I had those textbooks at uni as well. I still do, actually. They’re fantastic. If I were to recommend something on either subject, it would be those two.

    Auto-generated description: Two textbooks titled Modern Operating Systems and Computer Networks by Andrew S. Tanenbaum are placed side by side on a surface.
    The two Tanenbaums.

    I will add that my favourite textbook from my degree was Compilers: Principles, Techniques, and Tools by Alfred V. Aho, et al., also known as the “dragon book.” If you’re interested in compiler design in any way, I can definitely recommend this book. It’s a little old, but really, the principles are more or less the same.

    Auto-generated description: A book titled Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman, often referred to as the Dragon Book, is lying on a textured surface.
    And dragon makes three.

    Thou Doth Promote Too Much

    Manuel Moreale wrote an interesting post about self-promotion, where he reflects on whether closing out all his People and Blogs posts with a line pointing to his Ko-Fi page is too much:

    And so I added that single line. But adding that single line was a struggle. Because in my head, it’s obvious that if you do enjoy something and are willing to support it, you’d probably go look for a way to do it. That’s how my brain works. But unfortunately, that’s not how the internet works. Apparently, the correct approach seems to be the opposite one. You have to constantly remind people to like and subscribe, to support, to contribute, and to share.

    I completely understand his feelings about this. I’m pretty sure I’d have just as much trouble adding such a promotion at the bottom of my posts. Heck, it’s hard enough to write about what I’m working on here without any expectation from the reader other than to maybe, possibly, read it. Those posts have been relegated to a separate blog, so as not to bother anyone.

    But as a reader of P&B, I think the line he added is perfectly fine. I think it’s only fair to ask people to consider supporting something where it’s obvious someone put a lot of effort into it, as he obviously has been doing with P&B.

    As for where to draw the line, I think I agree with Moreale:

    How much self-promotion is too much? Substack interrupting your reading experience to remind you to subscribe feels too much to me. An overlay interrupting your browsing to ask you to subscribe to a newsletter is also too much. Am I wrong? Am I crazy in thinking it’s too much?

    I get the need to “convert readers” but interrupting me to sign up to a newsletter is just annoying. And I’m not sure “annoying” is the feeling you want to imbue in your readers if you want them to do something.

    But a single line at the end of a quality blog post? Absolutely, go for it!

    Crashing Hemispheric Views #109: HAZCHEM

    Okay, maybe not “crashing”, à la Hey Dingus. But some thoughts did come to me while listening to Hemispheric Views #109: HAZCHEM that I thought I’d share with others.

    Haircuts

    I’m sorry but I cannot disagree more. I don’t really want to talk while I’m getting a haircut. I mean I will if they’re striking up a conversation with me, but I’m generally not there to make new friends; just to get my hair cut quickly and go about my day. I feel this way about taxis too.

    I’m Rooted

    I haven’t really used “rooted” or “knackered” that much. My go-to phrase is “buggered,” as in “oh man, I’m buggered!” or simply just “tired”. I sometimes use “exhausted” when I’m really tired, but there are just too many syllables in that word for daily use.

    Collecting

    I’m the same regarding stickers. I’ve received (though never really sought out) stickers from various podcasts and didn’t know what to do with them. I’ve started keeping them in a journal I never used, and apart from my awful handwriting documenting where they’re from and when I added them, it’s been great so far.

    Journal opened up to a double page showing stickers from omg.lol and Robb Knight

    I probably do need to get some HV stickers, though.

    A Trash Ad for Zachary

    Should check to see if Johnny Decimal got any conversions from that ad in #106. 😀

    Also, here’s a free tag line for your rubbish bags: we put the trash in the bags, not the pods.

    🍅⏲️ 00:39:05

    I’m going to make the case for Vivaldi. Did you know there’s actually a Pomodoro timer built into Vivaldi? Click the clock on the bottom-right of the status bar to bring it up.

    Screenshot of the Clock panel in Vivaldi, which shows a Countdown and Alarm with a Pomodoro option

    Never used it myself, since I don’t use a Pomodoro timer, but can Firefox do that?!

    Once again, a really great listen, as always.

    On Micro.blog, Scribbles, And Multi-homing

    I’ve been asked why I’m using Scribbles given that I’m here on Micro.blog. Honestly, I wish I could say I’ve got a great answer. I like both services very much, and I have no plans of abandoning Micro.blog for Scribbles, or vice versa. But I am planning to use both for writing stuff online, at least for now, and I suppose the best answer I can give is a combination of various emotions and hang-ups I have about what I want to write about, and where it should go.

    I am planning to continue to use Micro.blog pretty much how others would use Mastodon: short-form posts, with the occasional photo, mainly about what I’m doing or seeing during my day. I’ll continue to write the occasional long-form posts, but it won’t be the majority of what I write here.

    My intention is for what I post on Scribbles to be more long-form, which brings me to my first reason: I think I prefer Scribbles’ editor for long-form posts. Micro.blog works well for micro-blogging, but I find any attempt to write something longer a little difficult. I can’t really explain it. It just feels like I’m spending more effort trying to get the words out on the screen, like they’re resisting in some way.

    It’s easier for me to do this using Scribbles’ editor. I don’t know why. Might be a combination of how the compose screen is styled and laid out, plus the use of a WYSIWYG editor1. But whatever it is, it all combines into an experience where the words flow a little easier for me. That’s probably the only way I can describe it. There’s nothing really empirical about it all, but maybe that’s the point. It involves the emotional side of writing: the “look and feel”.

    Second, I like that I can keep separate topics separate. I thought I could be someone who can write about any topic in one place, but when I’m browsing this site myself, I get a bit put out by all the technical topics mixed in with my day-to-day entries. They feel like they don’t belong here. Same with project notes, especially given that they tend to be more long-form anyway.

    This I just attribute to one of my many hang-ups; I never have this issue with other sites I visit. It may be an emotional response to seeing what I wrote about: reading about my day-to-day induces a different feeling (casual, reflective) than posts about code (thinking about work) or projects (being a little more critical, maybe even a little bored).

    Being able to create multiple blogs in Scribbles, thanks to signing up for the lifetime plan, gives me the opportunity to create separate blogs for separate topics: one for current projects, one for past projects, and one for coding topics. Each of them, along with Micro.blog, can have its own purpose and writing style: more of a public journal for the project sites, more informational or critical on the coding topics, and more day-to-day Mastodon-like posts on Micro.blog (I also have a check-in blog which is purely a this-is-where-I’ve-been record).

    Finally, I think it’s a bit of that “ooh, shiny” aspect of trying something new. I definitely got that using Scribbles. I don’t think there’s much I can do about that (nor, do I want to 😀).

    And that’s probably the best explanation I can give. Arguably it’s easier just writing in one place, and to that I say, “yeah, it absolutely is.” Nothing about any of this is logical at all. I guess I’m trying to optimise for posting something without all the various hang-ups I have about posting it at all, and I think having these separate spaces to do so helps.

    Plus, knowing me, it’s all likely to change pretty soon, and I’ll be back to posting everything here again.


    1. Younger me would be shocked to learn that I’d favour a WYSIWYG editor over a text editor with Markdown support ↩︎

    Self-Driving Bicycle for The Mind

    While listening to the Stratechery interview with Hugo Barra, a thought occurred to me. Barra mentioned that Xiaomi was building an EV. Not a self-driving one, mind you: this one has a steering wheel and pedals. He made the comment that were Apple to actually go through with releasing a car, it would look a lot like what Xiaomi has built. I haven’t seen either car project myself so I’ll take his word for it.

    This led to the thought that it was well within Apple’s existing capability to release a car. They would’ve had to skill up in automotive engineering, but they could hire people for that. What they couldn’t do was all the self-driving stuff. No-one can do that yet, and it seems to me that being unable to deliver on this non-negotiable requirement was one of the things that doomed the project. Sure, there were others — it seems like they were lacking focus in a number of areas — but this seems like a big one.

    This led to the next thought, which is why Apple ever thought it was a good idea to make the car self-driving. What’s wrong with having one driven by the user? This seems like a very un-Apple-like product decision. Has Apple ever been good at releasing tech that would replace, rather than augment, the user’s interaction with the device? Do they have phones that browse the web for you? Have they replaced Zsh with ChatGPT in macOS (heaven forbid)? Probably the only product that comes close is Siri, and we all know what a roaring success that is.

    Apple’s strength is in releasing products that keep human interaction a central pillar of their design. They should just stick with that, and avoid any of the self-driving traps that come up. It’s a “bicycle for the mind” after all: the human is still the one doing the pedalling.

    On Post Headers

    My answer to @mandaris’s question:

    How many of you are using headers in your blogging? Are you using anything that denotes different sections?

    I generally don’t use headers, unless the post is so long it needs them to break it up a little. When I do, I tend to start with H2, then step down to H3, H4, etc.

    I’d love to start with H1, but most themes I encounter, including those from software like Confluence, style H1 to be almost the same size as the page title. This kills me as the page title should be separate from any H1s in the body, and styled differently enough that there’s no mistaking what level the header’s on.

    But, c’est la vie.

    Sorting And Go Slices

    A word of caution for anyone passing Go slices to a function that will sort them: doing so as-is will modify the original slice. If you were to write this, for example:

    package main
    
    import (
    	"fmt"
    	"sort"
    )
    
    func printSorted(ys []int) {
    	sort.Slice(ys, func(i, j int) bool { return ys[i] < ys[j] })
    	fmt.Println(ys)
    }
    
    func main() {
    	xs := []int{3, 1, 2}
    	printSorted(xs)
    	fmt.Println(xs)
    }
    

    You will find, when you run it, that both xs and ys will be sorted:

    [1 2 3]
    [1 2 3]
    

    If this is not desired, the remedy would be to make a copy of the slice prior to sorting it:

    func printSorted(ys []int) {
    	ysDup := make([]int, len(ys))
    	copy(ysDup, ys)
    	sort.Slice(ysDup, func(i, j int) bool { return ysDup[i] < ysDup[j] })
    	fmt.Println(ysDup)
    }
    

    This makes sense when you consider how slices are implemented: a slice value is a small header holding a pointer to a backing array, along with a length and a capacity. Passing a slice to a function copies the header, but both headers still point at the same backing array, and it’s this array that sort.Slice modifies.

    On the face of it, this is a pretty trivial thing to find out. But it’s worth noting here just so that I don’t have to remember it again.

    Adding A Sidebar To A Tiny Theme Micro.blog

    I thought I’d write a little about how I added a sidebar with recommendations to my Tiny Theme’d Micro.blog, for anyone else interested in doing likewise. For an example of how this looks, please see this post, or just go to the home page of this site.

    I should say that I wrote this in the form of a Micro.blog plugin, just so that I could use a proper text editor. It’s not published at the time of this post, but you can find all the code on GitHub, and although the steps here are slightly different, they should still work using Micro.blog’s template designer.

    I started by defining a new Hugo partial for the sidebar. This means that I can choose which page I want it to appear on without any copy-and-paste. You can do so by adding a new template with the name layouts/partials/sidebar.html, and pasting the following template:

    <div class="sidebar">
        <div class="sidebar-cell">
            <header>
                <h1>Recommendations</h1>
            </header>
            <ul class="blogroll">
                {{ range .Site.Data.blogrolls.recommendations }}
                    <li><a href="{{ .url }}">{{ .name }}: <span>{{ (urls.Parse .url).Hostname }}</span></a></li>
                {{ else }}
                    <p>No recommendations yet.</p>
                {{ end }}
            </ul>
        </div>
    </div>
    

    This creates a sidebar with a single cell containing your Micro.blog recommendations. Down the line I’m hoping to add additional cells with things like shoutouts, etc. The styling is not defined for this yet though.
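    The template above reads a Hugo data source called blogrolls with a recommendations list, where each entry has a name and a url. Micro.blog supplies this data from your recommendations list; I haven’t inspected the generated file itself, but based on the fields the template reads, it would be shaped something like this (the names and URLs here are made up):

```yaml
recommendations:
  - name: Example Blogger
    url: https://example.blog/
  - name: Another Blogger
    url: https://another.example/
```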

    The sidebar is added to the page using Tiny Theme’s microhooks customisation feature. I set the microhook-after-post-list.html hook to the following HTML to include the sidebar on the post list:

    {{ partial "sidebar.html" . }}
    

    In theory it should be possible to add it to the other pages just by adding the same HTML snippet to the other microhooks (go for the “after” ones). I haven’t tried it myself though so I’m not sure how this will look.

    Finally, there’s the styling. I added the following CSS which will make the page slightly wider and place the sidebar to the right side of the page:

    @media (min-width: 776px) {
        body:has(div.sidebar) {
            max-width: 50em;
        }
    
        div.wrapper:has(div.sidebar) {
            display: grid;
            grid-template-columns: minmax(20em,35em) 15em;
            column-gap: 60px;
        }
    }
    
    div.sidebar {
        font-size: 0.9em;
        line-height: 1.8;
    }
    
    @media (max-width: 775px) {
        div.sidebar {
            display: none;
        }
    }
    
    div.sidebar header {
        margin-bottom: 0;
    }
    
    div.sidebar header h1 {
        font-size: 1.0em;
        color: var(--accent1);
    }
    
    ul.blogroll {
      padding-inline: 0;
    }
    
    ul.blogroll li {  
      list-style-type: none !important;
    }
    
    ul.blogroll li a {
      text-decoration: none;
      color: var(--text);
    }
    
    ul.blogroll li a span {
      color: var(--accent2);
    }
    

    This CSS uses the style variables defined by Tiny Theme, so it should match the colour scheme of your blog. A page with a sidebar is wider than one without; the width of pages that don’t have the sidebar is unchanged (if this isn’t your cup of tea, you can remove the :has(div.sidebar) selector from the body rule). The sidebar will also not appear on small screens, like a phone in portrait orientation. I’m not entirely sure I like this, and I may eventually make changes. But it’s fine for now.

    So that’s how the sidebar was added. More to come as I tinker with this down the line.

    Update: This is now a standalone Micro.blog Plugin called Sidebar For Tiny Theme.

    Photo Bucket Update: Exporting To Zip

    Worked a little more on Photo Bucket this week. Added the ability to export the contents of an instance to a Zip file. This consists of both images and metadata.

    Screenshot of a finder window showing the contents of the exported Zip file

    I went with a JSON Lines file for the image metadata. I considered a CSV file briefly, but for optional fields like captions and custom properties, I didn’t like the idea of a lot of empty columns. Better to go with a format that’s a little more flexible, even if it does mean more text per line.

    As for the images, I’m hoping for the export to consist of the “best quality” version. What that means will depend on the instance. The idea is to have three tiers of image quality managed by the store: “original”, “web”, and “thumbnail”. The “original” version is the untouched version uploaded to the store. The “web” version is re-encoded from the “original” and will be slightly compressed with image metadata tags stripped out. The “thumbnail” version will be a small, highly compressed version suitable for the thumbnail. There is to be a decision algorithm in place to get an image given the desired quality level. For example, if something needed the “best quality” version of an image, and the “original” image is not available, the service will fall back to the “web” version (the idea is that some of these tiers will be optional depending on the needs of the instance).
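    The fallback decision described above could be sketched as a small function. The tier names come from the post, but the function and its signature are my own invention, not Photo Bucket’s actual code:

```go
package main

import "fmt"

// tiers lists the quality levels in descending order of quality.
var tiers = []string{"original", "web", "thumbnail"}

// bestAvailable returns the first tier at or below the desired
// quality level that the store actually has for an image. It never
// "upgrades" past the desired tier.
func bestAvailable(desired string, available map[string]bool) (string, bool) {
	found := false
	for _, t := range tiers {
		if t == desired {
			found = true
		}
		if found && available[t] {
			return t, true
		}
	}
	return "", false
}

func main() {
	// The "original" was asked for, but only lower tiers exist,
	// so the service falls back to "web".
	have := map[string]bool{"web": true, "thumbnail": true}
	tier, ok := bestAvailable("original", have)
	fmt.Println(tier, ok) // prints "web true"
}
```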

    This is all partially working at the moment, and I’m hoping to rework it when I replace how stores and images relate to each other (this is what I’m starting on now, and why I built export first, since the change will be backwards incompatible). So for the moment the export simply consists of the “web” version.

    I’ve got unit tests working for this as well. I’m trying a new approach for unit testing in this project. Instead of using mocks, the tests actually run against fully instantiated versions of the services. There exists a servicestest package which does all the setup (using temporary directories, etc.) and tear-down of these services. Each individual unit test gets the services from this package and runs tests against a particular one.

    This does mean all the services are available and exercised within the tests, making them less like unit tests and more like integration tests. But I think I prefer this approach. The fact that the dependent services are covered gives me greater confidence that they’re working. It also means I can move things around without changing mocks or touching the tests.

    That’s not to say I’m not trying to keep each service its own component as much as I can. I’m still trying to follow best practice in component design: passing dependencies in explicitly when the services are created, for example. But setting them all up as a whole in the tests means I can exercise them while they serve the component being tested. And the dependencies are explicit anyway (i.e. no interfaces), so it makes sense to keep it that way for the tests as well. And it’s just easier anyway. 🤷

    Anyway, starting rework on images and stores now. Will talk more about this once it’s done.

    Photo Bucket Update: More On Galleries

    Spent a bit more time working on Photo Bucket this last week1, particularly around galleries. They’re progressing quite well. I’ve made some strides in getting two big parts of the UI working: adding images to and removing them from galleries, and re-ordering gallery items via drag and drop.

    I’ll talk about re-ordering first. This was when I had to bite the bullet and start coding up some JavaScript. Usually I’d turn to Stimulus for this but I wanted to give HTML web components a try. And so far, they’ve been working quite well.

    The gallery page is generated server-side into the following HTML:

    <main>
      <pb-draggable-imageset href="/_admin/galleries/1/items" class="image-grid">
        <pb-draggable-image position="0" item-id="7">
          <a href="/_admin/photos/3">
            <img src="/_admin/img/web/3">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="1" item-id="4">
          <a href="/_admin/photos/4">
            <img src="/_admin/img/web/4">
          </a>
        </pb-draggable-image>
            
        <pb-draggable-image position="2" item-id="8">
          <a href="/_admin/photos/1">
            <img src="/_admin/img/web/1">
          </a>
        </pb-draggable-image>        
      </pb-draggable-imageset>
    </main>
    

    Each <pb-draggable-image> node is a direct child of a <pb-draggable-imageset>. The idea is that the user can rearrange any of the <pb-draggable-image> elements within a single <pb-draggable-imageset> amongst themselves. Once the user has moved an image onto another one, the image will signal its new position by firing a custom event. The containing <pb-draggable-imageset> element listens for this event and responds by actually repositioning the child element and sending a JSON message to the backend to perform the move in the database.

    A lot of this was based on the MDN documentation for drag and drop and it follows the examples quite closely. I did find a few interesting things though. My first attempt was to put the draggable attribute onto the <pb-draggable-image> element, but I wasn’t able to get any drop events when I did. Moving the attribute onto the <a> element seemed to work. I’m not quite sure why; I can’t think of any reason it wouldn’t work on the custom element. It may have been something else, such as how I was initialising the HTML components.

    Speaking of HTML components, there was a time where the custom component’s connectedCallback method was being called before the child <a> elements were present in the DOM. This was because I had the <script> tag in the HTML head, configured to be evaluated during parsing. Moving it to the end of the body and loading it as a module fixed that issue. I also found that moving elements around using element.before and element.after would actually call connectedCallback and disconnectedCallback each time, meaning that any event listeners registered within connectedCallback would need to be de-registered, otherwise events would be handled multiple times. This book-keeping was slightly annoying, but it worked.

    Finally, there was moving the items within the database. I’m not sure how best to handle this, but I have a method that seems to work. What I’m doing is tracking the position of each “gallery item” using a position field. This field is 1 for the first item, 2 for the next, and so on for each item in the gallery. Fetching items simply orders by this field, so as long as the values are distinct, they don’t strictly need to form a sequence incrementing by 1, but I wanted to keep it that way as much as possible.

    The actual move involves two update queries. The first one will update the positions of all the items that are to shift left or right by one to “fill the gap”. The way it does this is that when an item is moved from position X to position Y, the value of position between X and Y would be changed by +1 if X > Y, or by –1 if Y > X. This is effectively the same as setting position X to X + 1, and so on, but done using one UPDATE statement. The second query just sets the position of item X to Y.
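    To make the index arithmetic concrete, here’s the same shuffle sketched in plain Go rather than SQL. The map of item IDs to positions is a stand-in; the real thing is the two UPDATE statements described above:

```go
package main

import "fmt"

// moveItem mirrors the two UPDATE statements in memory. pos maps an
// item ID to its position. Moving the item at position x to position y
// first shifts everything in between by one to fill the gap, then
// gives the moved item position y.
func moveItem(pos map[int]int, x, y int) {
	var moved int
	for id, p := range pos {
		if p == x {
			moved = id
		}
	}
	for id, p := range pos {
		switch {
		case x > y && p >= y && p < x:
			pos[id] = p + 1 // first UPDATE: shift right to fill the gap
		case y > x && p > x && p <= y:
			pos[id] = p - 1 // first UPDATE: shift left to fill the gap
		}
	}
	pos[moved] = y // second UPDATE: place the moved item
}

func main() {
	pos := map[int]int{7: 1, 4: 2, 8: 3} // item ID → position
	moveItem(pos, 3, 1)                  // move the last item to the front
	fmt.Println(pos[8], pos[7], pos[4])  // prints "1 2 3"
}
```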

    So that’s moving gallery items. I’m not sure how confident I am with this approach, but I’ve been testing this, both manually and by writing unit tests. It’s not quite perfect yet: I’m still finding bugs (I found some while coming up with these screencasts). Hopefully, I’ll be able to get to the bottom of them soon.

    The second bit of work was to actually add and remove images in the gallery themselves. This, for the moment, is done using a “gallery picker” which is available in the image details. Clicking “Gallery” while viewing an image will show the list of galleries in the system, with toggles on the left. The galleries an image already belongs to are enabled, and the user can choose the galleries they want the image to be in by switching the toggles on and off. These translate to insert and delete statements behind the scenes.

    The toggles are essentially just HTML and CSS, and a bulk of the code was taken from this example, with some tweaks. They look good, but I think I may need to make them slightly smaller for mouse and keyboard.

    I do see some downsides with this interaction. First, it reverses the traditional idea of adding images to a gallery: instead of doing that, you’re selecting galleries for an image. I’m not sure if this would be confusing for others (it is modelled on how Google Photos works). Plus, there’s no real way to add images in bulk. It might be that I’ll need to add a way to select images from the “Photos” section, with a dialog like this to add or remove them all from a gallery. I think this would go far in solving both of these issues.

    So that’s where things are. Not sure what I’ll work on next, but it may actually be import and export, and the only reason for this is that I screwed up the base model and will need to make some breaking changes to the DB schema. And I want to have a version of export that’s compatible with the original schema that I can deploy to the one and only production instance of Photo Bucket so that I can port the images and captions over to the new schema. More on this in the future, I’m sure.


    1. Apparently I’m more than happy to discuss work in progress, yet when it comes to talking about something I’ve finished, I freeze up. 🤷 ↩︎

    Complexity Stays At the Office

    It’s interesting to hear what others like to explore in their spare time, like setting up Temporal clusters or trying frontend frameworks built atop five other frameworks built on React. I guess the thinking is that since we use these for our jobs, it’s helpful to keep abreast of them.

    Not me. Not any more. Back in the day I may have thought similarly. I may even have had a passing fancy for stuff like this, revelling in its complexity with the misguided assumption that it’ll equal power (well, to be fair, it would equal leverage). But I’ve been burned by this complexity one too many times. Why, just now, I’ve spent the last 30 minutes running into problem after problem trying to find a single root cause of something. It’s a single user interaction, but because it involves 10 different systems, it means looking at 10 different places, each one having its own issues blocking me from forward progress.

    So I am glad to say that those days are behind me. Sure, I’ll learn new tech like Temporal if I need to, but I don’t go out looking for it anymore. If I want to build something, it will be radically simple: Go, SQLite or PostgreSQL, server-side rendered HTML with a hint of JavaScript. I may not achieve the leverage these technologies offer, but by gosh I’m not going to put up with the complexity baggage that comes with them.

    Implicit Imports To Load Go Database Drivers Considered Annoying (By Me)

    I wish Go’s approach to loading database drivers didn’t involve implicitly importing them as packages. If it didn’t, package authors would be more likely to accept a driver from the caller, rather than loading one themselves.

    I’ve been bitten by this recently, twice. I’m using a GitHub Actions Linux runner to build an ARM version of something that needs to use SQLite. As far as I can tell, it’s not possible to build an ARM binary with CGO enabled on these runners (at least, not without installing a bunch of dependencies — I’m not that desperate yet).

    I’m currently using an SQLite driver that doesn’t require CGO, so all my code builds fine. There also exists a substantially more popular SQLite driver that does require CGO, and twice I’ve tried importing packages which used this driver, thereby breaking the build. These packages don’t allow me to pass in a database connection explicitly, and even if they did, I’m not sure it would help: they’re still importing this SQLite driver that needs CGO.

    So what am I to do? As long as I need to build ARM versions, I can’t use these packages (not that I need an ARM version, but it makes testing in a Linux VM running on an M1 Mac easier). I suppose I could roll my own, but it would be nice not to do so. It’d be much better for me to load the driver myself, and pass it to these packages explicitly.

    So yeah, I wish this was better.

    P.S. When you see the error message “unable to open database file: out of memory (14)” when you try to open an SQLite database, it may just mean the directory it’s in doesn’t exist.

    GoLand Debugger Not Working? Try Upgrading All The Things

    I’ve been having occasional trouble with the debugger in GoLand. Every attempt to debug a test would fail with the following error:

    /usr/local/go/bin/go tool test2json -t /Applications/GoLand.app/…
    API server listening at: 127.0.0.1:60732
    could not launch process: EOF
    
    Debugger finished with the exit code 1
    

    My previous attempts at fixing this — upgrading Go and GoLand — did get it working for a while, but recently it’s been happening again. And being on the most recent versions of Go and GoLand, that avenue was not available to me.

    So I set about looking for other ways to fix this. Poking around the web netted this support post, which suggested upgrading the Xcode Command Line tools:

    $ sudo rm -rf /Library/Developer/CommandLineTools
    $ xcode-select --install
    

    I ran the command and the tools did upgrade successfully, but I was still encountering the problem. I then wondered if GoLand used Delve for debugging, and whether that needed upgrading. I got Delve via Homebrew, so I went about upgrading that:

    $ brew upgrade dlv
    

    And indeed, Homebrew did upgrade Delve from 1.21.0 to 1.22.0. Once that finished, and after restarting GoLand1, I was able to use the debugger again.

    So, if you’re encountering this error yourself, try upgrading one or more of these tools:

    • GoLand
    • Go
    • Xcode Command Line tools (if on a Mac)
    • Delve

    This was the order I tried them in, but you might get lucky trying Delve first. YMMV.


    1. Not sure that a restart is required, but I just did it anyway, “just in case”. ↩︎

    People Are More Interested In What You're Working On Than You Think

    If anyone else is wary about posting about the projects they’re working on, fearing that others will think they’re showing off or something, here are two bits of evidence that I hope will allay those fears:

    Exhibit 1: I’m a bit of a fan of the GMTK YouTube channel. Lots of good videos there about game development that, despite not being a game developer myself, I find fascinating. But the playlist I enjoy the most is the one where Mark Brown, the series creator, actually goes through the process of building a game himself. Now, you’re not going to learn how to use Unity from that series (although he does have a video about that), but it’s fun seeing him making design decisions, showing off prototypes, overcoming challenges — both external and self-imposed — and seeing it all come together. I’m always excited when he drops one of these videos, and when I learnt today that he’s been posting dev logs on his Discord, I was so interested that I immediately signed up as a Patreon supporter.

    Exhibit 2: I’ve been writing about my own projects on a new Scribbles blog. This was completely for myself, as a bit of an archive of previous work that would be difficult or impossible to revisit later. I had no expectations of anyone else finding it interesting. Yet, earlier this week, while at lunch, the conversation I was having with work colleagues turned to personal projects. One of them asked if I was working on anything, and when I told him about this blog, he expressed interest. I gave him the link, and that afternoon I saw him taking a look. (I’m not expecting him to become a regular visitor, but the fact that he was interested at all was something.)

    It turns out that my colleague gets a kick out of seeing others do projects like this on the side. I guess, in retrospect, that this shouldn’t be a surprise to me, seeing that I get the same thrill. Heck, that’s why I’ve subscribed to tech podcasts like Under the Radar: I haven’t written an iOS app in my life, yet it’s just fun listening to dev stories like this.

    Yet, when it comes to something that I’m working on, for a long time I’ve always held back, thinking that talking about it is a form of showing off. I like to think I’m getting better here, but much like the Resistance, that feeling is still there. Whispering doubt in my ear. Asking who would be interested in these raw, unfinished things that will never go beyond the four walls of the machine from whence they came? I don’t think that feeling will ever go away, but in case I lose my nerve again, I hope to return to the events of this week, just to remind myself that, yeah, people are interested in these stories. I can put money on that assurance. After all, I just did.

    So don’t be afraid to publish those blog posts, podcasts, or videos on what you’re working on. I can’t wait to see them.

    See also, this post by Aaron Francis that touches on the same topic (via The ReadME Project).

    GitHub Actions, Default Token Permissions, And Publishing Binaries

    Looks like GitHub has locked down the access rights of the GITHUB_TOKEN recently. This is the token that’s made available to all GitHub Actions workflows by default.

    After taking a GoReleaser config file from an old project and using it in a new one, I encountered this error when GoReleaser tried to publish the binaries as part of a GitHub release:

    failed to publish artifacts:
    could not release:
    PATCH https://api.github.com/repos/lmika/<project>/releases/139475588:
    403 Resource not accessible by integration []
    

    After a quick search, I found this GitHub issue which seemed to cover the same problem. It looks like the way to resolve this is to explicitly add the contents: write permission to the GitHub Actions workflow file:

    name: Create Release
    
    on:
      push:
        tags:
          - 'v*'
    
    # Add this section
    permissions:
      contents: write
      
    jobs:
      build:
        runs-on: ubuntu-latest
    

    And sure enough, after adding the permissions section, GoReleaser was able to publish the binaries once again.

    There are a bunch of other permissions that might be helpful for other tasks, should you need them.

    Thoughts on The Failure of Microsoft Bob

    Watching a YouTube video about Microsoft Bob left me wondering whether one of the reasons Bob failed was that it assumed that users, who may have been intimidated by a GUI when they first encountered one, would be intimidated forever. That their level of skill would always remain one in which the GUI was scary and unusable, and their only hope of using a computer was through applications like Bob.

    That might be true for some, but I believe such cases are a small minority of the userbase as a whole. If someone’s serious about getting the most out of their computer, even back then when the GUI was brand new, I can’t see how they wouldn’t naturally skill up, or at least want to.

    I think that’s why I’m bothered by GUIs that sacrifice functionality in the name of “simplicity.” It might be helpful at the start, but pretty soon people grow comfortable with your UI, and they’ll hit the artificial limits of the application sooner than you expect.

    Not that I’m saying that all UIs should be as complex as Logic Pro for no reason: if the domain is simple, then keep it simple. But when deciding on the balance between simplicity and capability, perhaps have trust in your users’ abilities. If they’re motivated (and your UI design is decent) I’m sure they’ll be able to master something a little more complex.

    At least, that’s what this non-UI designer believes.

    Why I Use a Mac

    Why do I use a Mac?

    Because I can’t get anything I need to get done on an iPad.

    Because I can’t type to save myself on a phone screen.

    Because music software doesn’t exist on Linux.

    Because the Bash shell doesn’t exist on Windows (well, it didn’t when I stopped using it).

    That’s why I use a Mac.

    The AWS Generative AI Workshop

    Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level yet I still got something out of it.

    What was striking was just how much of integrating these foundation models (something like an LLM that was pre-trained on the web) involves natural language. For example, if you’re building a chat bot to have a certain personality, you’d start each context with something like:

    You are a friendly life-coach which is trying to be helpful. If you don’t know the answer to a question, you are to say I don’t know. (Question)

    This would extend to domain knowledge. You could fine-tune a foundation model with your own data set, but an easier, albeit slightly less efficient, way would be to hand-craft a bunch of question-and-answer pairs and feed them straight into the prompt.

    This may also extend to agents as well (code that the model interacts with). We didn’t cover agents in much depth, but after looking at some of the marketing materials, it seems to me that much of the integration involves instructing the model to put parameters within XML tags (so that the much “dumber” agent can parse them out), and telling it how to interpret the structured response.
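    The “dumber” agent side of that really can be this simple. A sketch of the parsing half (the city tag and the weather scenario are made up; the point is that the agent just scrapes tagged parameters out of free text):

```go
package main

import (
	"fmt"
	"regexp"
)

// extractTag pulls the contents of a single XML-style tag out of a
// model's response. No real XML parser needed: the model was told to
// wrap the parameter in this exact tag, so a regexp does the job.
func extractTag(response, tag string) (string, bool) {
	re := regexp.MustCompile(`(?s)<` + tag + `>(.*?)</` + tag + `>`)
	m := re.FindStringSubmatch(response)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	// A response from a model that was instructed to wrap the city
	// name in <city> tags so the agent can look up the weather.
	response := `To answer that I need the weather. <city>Melbourne</city>`
	if city, ok := extractTag(response, "city"); ok {
		fmt.Println(city) // → Melbourne
	}
}
```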

    A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.

    Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something that involves vector databases available to the model and doesn’t require fine-tuning. I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).
