Long Form Posts
Moan-routine: Stripe Prices
I love coding and anything computers. I’ve spent, and continue to spend, a significant amount of my life writing code. And on the whole, it’s been a magical experience.
But not always.
Sometimes I encounter something that makes me wonder: why? Why was that designed that way? Why doesn’t it work? Why couldn’t this be easier? You encounter something that blocks you or puzzles you, maybe even makes you question how anything in computers works at all. You’ve got things to do, and you try your best to work around the problem. Sometimes you succeed. Other times you stay blocked and need to find some other way forward. And so the frustration builds, with no easy way to dissipate it.
Well, this is my attempt to do just that. I need a place to write these somethings down, not least to make me feel better. To air thy grievance is to start the healing process. They say that “a moan begun is half done” after all. Well, okay, no-one has ever said that. But maybe we should. We may not have the power or energy to change things, or even to find out why things are the way they are, but by God, we can make sure others hear about it.
So enjoy these “moan-routines”. Or don’t. Honestly, it’s totally up to you. 🙂 P.S. The name moan-routine is a play on goroutine, a concept in Go.
So, onto today’s moan. We’ll start with something that saved my bacon today: Stripe prices. Actually, what saved me was something we didn’t do with Stripe prices, which was archive them. For you see, archiving a price in Stripe makes it effectively unusable.
This is arguably a good thing: you change a price, you don’t want anyone using the old one. Well, I say “change”; what I actually mean is replace. You can’t “change” a price directly in Stripe, say from $10 to $12. Instead, you create a new $12 price which will replace the old $10 one. Any new subscriptions you create will use the $12 one from now on, and ideally you’d never use that $10 price ever again.
So you want to archive the $10 price. But there’s a problem: you’ve got all these customers with subscriptions still paying the old $10 price. Stripe doesn’t provide an easy way to move customers over to the new $12 one: this is something you have to do yourself. And you may not want to do that anyway. You may want to send your customers an email informing them about the price change; letting them know they have 30 days or whatever to be ready for it.
And until you actually go in and change their subscriptions, they’ll be paying this old $10 price. And the minute you archive that $10 price, it becomes radioactive. Any attempt to do anything with it, or any subscription using it, will result in an error.
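To make this concrete, here’s roughly what the whole replace-and-migrate dance looks like with the stripe-go client. This is a sketch only: the v76 import path, the placeholder IDs, and the ordering (migrate everything, then archive) are my assumptions, not something from the post:

    package main

    import (
        "log"

        "github.com/stripe/stripe-go/v76"
        "github.com/stripe/stripe-go/v76/price"
        "github.com/stripe/stripe-go/v76/subscription"
    )

    func main() {
        stripe.Key = "sk_test_..." // your secret key

        // 1. Create the replacement $12 price. The old $10 price itself
        // cannot be changed.
        newPrice, err := price.New(&stripe.PriceParams{
            Product:    stripe.String("prod_example"),
            UnitAmount: stripe.Int64(1200),
            Currency:   stripe.String(string(stripe.CurrencyUSD)),
            Recurring:  &stripe.PriceRecurringParams{Interval: stripe.String("month")},
        })
        if err != nil {
            log.Fatal(err)
        }

        // 2. Move each existing subscription over yourself; Stripe won't
        // do this for you.
        _, err = subscription.Update("sub_example", &stripe.SubscriptionParams{
            Items: []*stripe.SubscriptionItemsParams{{
                ID:    stripe.String("si_example"), // the item still on the old price
                Price: stripe.String(newPrice.ID),
            }},
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. Only once nothing references the old price is it safe(ish)
        // to archive it.
        _, err = price.Update("price_example_old", &stripe.PriceParams{
            Active: stripe.Bool(false),
        })
        if err != nil {
            log.Fatal(err)
        }
    }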
Which gets to the moan: why do this for archived prices? I would’ve thought that Stripe was aware that subscriptions with old prices are a thing. Am I to keep this old price active for as long as those accounts are still using it? Even if no-one else will ever get that price again? I wouldn’t be able to archive anything at all until then. And it may be a while before those accounts do get changed. Think beta users grandfathered into a lower rate.
So I’d like Stripe to change this. Either get rid of the pitfalls around archived prices, or make it easy to port subscriptions over to the new price in some way (subscription schedules are a whole other moan). Instead, I’m stuck keeping old prices active because I’m afraid that archiving them will break things. This clutters up the dashboard and introduces traps, like creating a subscription with an old price.
Look, prices change. It’s just the way of the world. And if Stripe is going to make prices effectively immutable, it would be helpful to make it easy to mark a price as “don’t use this for new stuff, but keep the old stuff working unchanged.” I expected this to be what archiving a price would do. Turns out I was wrong.
Small Calculator Commands
This page documents the extra commands from Small Calculator. These were taken from the source code pretty much as is, but styled to suit the web, and any spelling mistakes fixed. They were retrievable from the application itself by typing “help” followed by the command name.
Available Commands
The list of available commands is as follows:
BLOCK <statements> Executes a block of statements
HELP [topic] Display help on topic
DEFFNC <function> Defines a new function
ECHO <text> Displays text on the line
ECHOEXPR <cmd> Executes a command and displays the result
EXEC <file> Executes a file of commands
FUNCTIONS Displays all predefined functions
IF <pred> Does a command on condition
RETURN <val> Sets the return value
RETURNEXPR <cmd> Sets the return value to the result of <cmd>
Type "HELP <command>" to see information on a command
BLOCK
BLOCK {<cmd1>} {<cmd2>} ...
Executes a block of commands. The commands can be any statement including other block statements.
DEFFNC
DEFFNC <fnname>(<parameters>) = <command>
Defines a new function. The function name can only consist of letters and numbers. Only a maximum of 4 parameters can be used in the parameter list. Parameters are referred to using $ followed by the parameter name (e.g. $x).
Example:
deffnc test(x) = $x + 2
-- Adds two to any number
deffnc sign(k) = if {$k < 0} {-1} {if {$k > 0} {1} {0}}
-- Returns -1 if k is negative, 1 if k is positive, and 0 if k is 0.
Functions can be recursive if using the “if” command.
ECHO
ECHO <string>
Displays a string on the console.
ECHOEXPR
ECHOEXPR <command>
Executes a command and displays the result on the console.
EXEC
EXEC <filename>
Executes a file of commands. Lines starting with “;” are considered comments. Lines ending with “\” are considered incomplete, and the next line is appended (after trimming) to the end of that line.
FUNCTIONS
functions
Displays all predefined functions. No user functions are included.
IF
IF {<cond>} {<truepart>} {<falsepart>}
If the result of <cond> is true, executes <truepart>; otherwise, executes <falsepart>.
HELP
HELP [topic]
Displays a help topic on the console window. Use “HELP <topic>” to display a specific topic.
RETURN
RETURN <val>
Sets the return value to <val>.
RETURNEXPR
RETURNEXPR <cmd>
Sets the return value to the return value of <cmd>.
Small Calculator
Date: Unknown, but probably around 2005
Status: Retired
Give me Delphi 7, a terminal control, and an expression parser, and of course I’m going to build a silly little REPL program.
I can’t really remember why I thought this was worth spending time on, but I was always interested in little languages (still am), and I guess I thought having a desk calculator that used one was worth having. I was using a parser library I found on Torry’s Delphi Pages (the best site at the time to get free controls for Delphi) for something else, and after getting a control which simulated a terminal, I wrote a very simple REPL loop which used the two.

And credit to the expression parser developer: it was pretty decent. It supported assignments and quite a number of functions. Very capable for powering a desk calculator.

For a while the app was simply that. But, as with most things like this, I got the itch to extend it a little. I started by adding a few extra commands. Simple things, like one that would echo something to the screen. All quite innocent, if a little unnecessary. But it soon grew to things like if statements, blocks using curly brackets, and function definitions.
It even extended to small batch scripts, like the one below. The full set of commands is listed here.
x := 2
y := 3
if {x = y} {echo 5} \
{echo 232}
return
These never went anywhere beyond a few tests. The extra commands were not really enough to be useful, and they were all pretty awful. I was already using a parser library, so I didn’t want to spend any time extending it. As a result, many of these extensions were little more than things that scanned and spliced strings together. It was more of a macro language than anything else.

Even with the expression parser the program didn’t see a great deal of use. I was working on the replacement at the time which would eventually be much more capable, and as soon as that was ready, this program fell out of use.
Even so, it was still quite a quirky little program, enough to make a bit of an impression.
Self-Driving Bicycle for The Mind
While listening to the Stratechery interview with Hugo Barra, a thought occurred to me. Barra mentioned that Xiaomi was building an EV. Not a self-driving one, mind you: this one has a steering wheel and pedals. He made the comment that were Apple to actually go through with releasing a car, it would look a lot like what Xiaomi has built. I haven’t seen either car project myself, so I’ll take his word for it.
This led to the thought that it was well within Apple’s existing capability to release a car. They would’ve had to skill up in automotive engineering, but they can hire people to do that. What they couldn’t do was all the self-driving stuff. No-one can do that yet, and it seems to me that being unable to deliver on this non-negotiable requirement was one of the things that doomed the project. Sure there were others — seems like they were lacking focus in a number of other areas — but this seems like a big one.
This led to the next thought, which is why Apple thought it was ever a good idea to make the car self-driving. What’s wrong with having one driven by the user? Seems like this was a very un-Apple-like product decision. Has Apple ever been good at releasing tech that would replace, rather than augment, the user’s interaction with the device? Do they have phones that browse the web for you? Have they replaced ZSH with ChatGPT in MacOS (heaven forbid)? Probably the only product that comes close is Siri, and we all know what a roaring success that is.
Apple’s strength is in releasing products that keep human interaction a central pillar of their design. They should just stick with that, and avoid any of the self-driving traps that come up. It’s a “bicycle for the mind” after all: the human is still the one doing the pedalling.
On Post Headers
My answer to @mandaris question:
How many of you are using headers in your blogging? Are you using anything that denotes different sections?
I generally don’t use headers, unless the post is so long it needs them to break it up a little. When I do, I tend to start with H2, then step down to H3, H4, etc.
I’d love to start with H1, but most themes I encounter, including those from software like Confluence, style H1 to be almost the same size as the page title. This kills me as the page title should be separate from any H1s in the body, and styled differently enough that there’s no mistaking what level the header’s on.
But, c’est la vie.
Sorting And Go Slices
A word of caution for anyone passing Go slices to a function that will sort them: doing so as-is will modify the original slice. If you were to write this, for example:
package main

import (
    "fmt"
    "sort"
)

func printSorted(ys []int) {
    sort.Slice(ys, func(i, j int) bool { return ys[i] < ys[j] })
    fmt.Println(ys)
}

func main() {
    xs := []int{3, 1, 2}
    printSorted(xs)
    fmt.Println(xs)
}
You will find, when you run it, that both xs and ys will be sorted:

[1 2 3]
[1 2 3]
If this is not desired, the remedy would be to make a copy of the slice prior to sorting it:
func printSorted(ys []int) {
    ysDup := make([]int, len(ys))
    copy(ysDup, ys)
    sort.Slice(ysDup, func(i, j int) bool { return ys[i] < ys[j] })
    fmt.Println(ysDup)
}
This makes sense when you consider that the elements of a slice are stored in a backing array. The slice itself is a small struct holding things like the start and length, which are copied by value, but the backing array is referenced by a pointer, and it is this shared array that sort.Slice will modify.
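As an aside, if you’re on Go 1.21 or later, the standard slices package tidies the remedy up a little. A minimal sketch of the same function:

    import (
        "fmt"
        "slices"
    )

    func printSorted(ys []int) {
        ysDup := slices.Clone(ys) // copy first, so the caller's slice is untouched
        slices.Sort(ysDup)
        fmt.Println(ysDup)
    }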
On the face of it, this is a pretty trivial thing to find out. But it’s worth noting here just so that I don’t have to remember it again.
Adding A Sidebar To A Tiny Theme Micro.blog
This is now a standalone Micro.blog plugin called Sidebar For Tiny Theme, which adds support for this out of the box. The method documented below no longer works, but I'm keeping it here for posterity reasons.
I thought I’d write a little about how I added a sidebar with recommendations to my Tiny Theme’ed Micro.blog, for anyone else interested in doing likewise. For an example of how this looks, please see this post, or just go to the home page of this site.
I should say that I wrote this in the form of a Micro.blog plugin, just so that I can use a proper text editor. It’s not published at the time of this post, but you can find all the code on Github, and although the steps here are slightly different, they should still work using Micro.blog’s template designer.
I started by defining a new Hugo partial for the sidebar. This means that I can choose which pages I want it to appear on without any copy-and-paste. You can do so by adding a new template with the name layouts/partials/sidebar.html, and pasting in the following template:
<div class="sidebar">
  <div class="sidebar-cell">
    <header>
      <h1>Recommendations</h1>
    </header>
    <ul class="blogroll">
      {{ range .Site.Data.blogrolls.recommendations }}
        <li><a href="{{ .url }}">{{ .name }}: <span>{{ (urls.Parse .url).Hostname }}</span></a></li>
      {{ else }}
        <p>No recommendations yet.</p>
      {{ end }}
    </ul>
  </div>
</div>
This creates a sidebar with a single cell containing your Micro.blog recommendations. Down the line I’m hoping to add additional cells with things like shoutouts, etc. The styling is not defined for this yet though.
The sidebar is added to the page using Tiny Theme’s microhooks customisation feature. I set the microhook-after-post-list.html hook to the following HTML to include the sidebar on the post list:
{{ partial "sidebar.html" . }}
In theory it should be possible to add it to the other pages just by adding the same HTML snippet to the other microhooks (go for the “after” ones). I haven’t tried it myself though so I’m not sure how this will look.
Finally, there’s the styling. I added the following CSS which will make the page slightly wider and place the sidebar to the right side of the page:
@media (min-width: 776px) {
  body:has(div.sidebar) {
    max-width: 50em;
  }

  div.wrapper:has(div.sidebar) {
    display: grid;
    grid-template-columns: minmax(20em, 35em) 15em;
    column-gap: 60px;
  }
}

div.sidebar {
  font-size: 0.9em;
  line-height: 1.8;
}

@media (max-width: 775px) {
  div.sidebar {
    display: none;
  }
}

div.sidebar header {
  margin-bottom: 0;
}

div.sidebar header h1 {
  font-size: 1.0em;
  color: var(--accent1);
}

ul.blogroll {
  padding-inline: 0;
}

ul.blogroll li {
  list-style-type: none !important;
}

ul.blogroll li a {
  text-decoration: none;
  color: var(--text);
}

ul.blogroll li a span {
  color: var(--accent2);
}
This CSS uses the style variables defined by Tiny Theme, so it should match the colour scheme of your blog. A page with a sidebar is also wider than one without; the width of pages that don’t have the sidebar doesn’t change (if this isn’t your cup of tea, you can remove the :has(div.sidebar) selector from the body rule). The sidebar will not appear on small screens, like a phone in portrait orientation. I’m not entirely sure if I like this, and I may eventually make changes. But it’s fine for now.
So that’s how the sidebar was added. More to come as I tinker with this down the line.
Photo Bucket Update: Exporting To Zip
Worked a little more on Photo Bucket this week. Added the ability to export the contents of an instance to a Zip file. This consists of both images and metadata.

I’ve gone with a file of JSON lines for the image metadata. I considered a CSV file briefly, but for optional fields like captions and custom properties, I didn’t like the idea of a lot of empty columns. Better to go with a format that’s a little more flexible, even if it does mean more text per line.
As for the images, I’m hoping for the export to consist of the “best quality” version. What that means will depend on the instance. The idea is to have three tiers of image quality managed by the store: “original”, “web”, and “thumbnail”. The “original” version is the untouched version uploaded to the store. The “web” version is re-encoded from the “original” and will be slightly compressed, with image metadata tags stripped out. The “thumbnail” version will be a small, highly compressed version suitable for the thumbnail. There is to be a decision algorithm in place to get an image given the desired quality level. For example, if something needs the “best quality” version of an image, and the “original” image is not available, the service will default to the “web” version (the idea is that some of these tiers will be optional, depending on the needs of the instance).
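As a rough illustration of that decision algorithm, here’s a sketch of the fallback idea. The type and names are hypothetical, not the actual Photo Bucket code:

    type Quality int

    const (
        QualityThumbnail Quality = iota
        QualityWeb
        QualityOriginal
    )

    // bestAvailable steps down from the desired quality tier until it finds
    // a version the store actually holds for this image.
    func bestAvailable(versions map[Quality][]byte, desired Quality) ([]byte, bool) {
        for q := desired; q >= QualityThumbnail; q-- {
            if data, ok := versions[q]; ok {
                return data, true
            }
        }
        return nil, false
    }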
This is all partially working at the moment, and I’m hoping to rework it when I replace how stores and images relate to each other (this is what I’m starting on now, and why I built export first, since the rework will be backwards incompatible). So for the moment, the export simply consists of the “web” version.
I’ve got unit tests working for this as well. I’m trying a new approach for unit testing in this project. Instead of using mocks, the tests actually run against fully instantiated versions of the services. There exists a servicestest package which does all the setup (using temporary directories, etc.) and tear-down of these services. Each individual unit test gets the services from this package and will run tests against a particular one.
This does mean all the services are available and exercised within the tests, making them less like unit tests and more like integration tests. But I think I prefer this approach. The fact that the dependent services are covered gives me greater confidence that they’re working. It also means I can move things around without changing mocks or touching the tests.
That’s not to say that I’m not trying to keep each service its own component as much as I can. I’m still trying to follow best practice of component design: passing dependencies in explicitly when the services are created, for example. But setting them all up as a whole in the tests means I can exercise them while they serve the component being tested. And the dependencies are explicit anyway (i.e. no interfaces), so it makes sense keeping it that way for the tests as well. And it’s just easier anyway. 🤷
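To picture the pattern, a test ends up looking something like the sketch below. The servicestest API shown here is a hypothetical stand-in, not the project’s actual names:

    import (
        "bytes"
        "context"
        "testing"
    )

    func TestExportToZip(t *testing.T) {
        // Builds real services over temporary directories and registers
        // tear-down via t.Cleanup.
        svcs := servicestest.Setup(t)

        var buf bytes.Buffer
        if err := svcs.Export.WriteZip(context.Background(), &buf); err != nil {
            t.Fatal(err)
        }
        // assertions against the Zip contents go here
    }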
Anyway, starting rework on images and stores now. Will talk more about this once it’s done.
Photo Bucket Update: More On Galleries
Spent a bit more time working on Photo Bucket this last week1, particularly around galleries. They’re progressing quite well. I’ve made some strides in getting two big parts of the UI working: adding images to and removing them from galleries, and re-ordering gallery items via drag and drop.
I’ll talk about re-ordering first. This was when I had to bite the bullet and start coding up some JavaScript. Usually I’d turn to Stimulus for this but I wanted to give HTML web components a try. And so far, they’ve been working quite well.
The gallery page is generated server-side into the following HTML:
<main>
  <pb-draggable-imageset href="/_admin/galleries/1/items" class="image-grid">
    <pb-draggable-image position="0" item-id="7">
      <a href="/_admin/photos/3">
        <img src="/_admin/img/web/3">
      </a>
    </pb-draggable-image>
    <pb-draggable-image position="1" item-id="4">
      <a href="/_admin/photos/4">
        <img src="/_admin/img/web/4">
      </a>
    </pb-draggable-image>
    <pb-draggable-image position="2" item-id="8">
      <a href="/_admin/photos/1">
        <img src="/_admin/img/web/1">
      </a>
    </pb-draggable-image>
  </pb-draggable-imageset>
</main>
Each <pb-draggable-image> node is a direct child of a <pb-draggable-imageset>. The idea is that the user can rearrange any of the <pb-draggable-image> elements within a single <pb-draggable-imageset> amongst themselves. Once the user has moved an image onto another one, the image will signal its new position by firing a custom event. The containing <pb-draggable-imageset> element listens for this event and responds by actually repositioning the child element and sending a JSON message to the backend to perform the move in the database.
A lot of this was based on the MDN documentation for drag and drop, and it follows the examples quite closely. I did find a few interesting things though. My first attempt was to put the draggable attribute onto the <pb-draggable-image> element, but I wasn’t able to get any drop events when I did. Moving the draggable attribute onto the <a> element seemed to work. I’m not quite sure why this is; I can’t think of any reason why it wouldn’t work. It may have been something else, such as how I was initialising the HTML components.
Speaking of HTML components, there was a time where the custom component’s connectedCallback method was being called before the child <a> elements were present in the DOM. This was because I had the <script> tag in the HTML head, configured to be evaluated during parsing. Moving it to the end of the body and loading it as a module fixed that issue. I also found that moving elements around using element.before and element.after would actually call connectedCallback and disconnectedCallback each time, meaning that any event listeners registered within connectedCallback would need to be de-registered, otherwise events would be handled multiple times. This book-keeping was slightly annoying, but it worked.
Finally, there was moving the items within the database. I’m not sure how best to handle this, but I have a method that seems to work. What I’m doing is tracking the position of each “gallery item” using a position field. This field would be 1 for the first item, 2 for the next, and so on for each item in the gallery. Fetching items just orders on this field, so as long as the positions are distinct, they don’t need to be a sequence incrementing by 1, but I wanted to keep them that way as much as possible.
The actual move involves two update queries. The first one updates the positions of all the items that need to shift left or right by one to “fill the gap”. The way it does this is that when an item is moved from position X to position Y, the value of position for items between X and Y is changed by +1 if X > Y, or by -1 if Y > X. This is effectively the same as setting position X to X + 1, and so on, but done using one UPDATE statement. The second query just sets the position of item X to Y.
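A sketch of what those two statements might look like from the Go side. The table and column names are hypothetical; the project’s actual schema isn’t shown in the post:

    import (
        "context"
        "database/sql"
    )

    // moveItem moves the gallery item at position x to position y using
    // the two UPDATE statements described above.
    func moveItem(ctx context.Context, db *sql.DB, galleryID, itemID int64, x, y int) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback()

        // First query: shift the items between X and Y by one to fill the gap.
        if x > y {
            _, err = tx.ExecContext(ctx,
                `UPDATE gallery_items SET position = position + 1
                 WHERE gallery_id = ? AND position >= ? AND position < ?`, galleryID, y, x)
        } else {
            _, err = tx.ExecContext(ctx,
                `UPDATE gallery_items SET position = position - 1
                 WHERE gallery_id = ? AND position > ? AND position <= ?`, galleryID, x, y)
        }
        if err != nil {
            return err
        }

        // Second query: put the moved item into its new position.
        if _, err := tx.ExecContext(ctx,
            `UPDATE gallery_items SET position = ? WHERE gallery_id = ? AND id = ?`,
            y, galleryID, itemID); err != nil {
            return err
        }

        return tx.Commit()
    }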
So that’s moving gallery items. I’m not sure how confident I am with this approach, but I’ve been testing this, both manually and by writing unit tests. It’s not quite perfect yet: I’m still finding bugs (I found some while coming up with these screencasts). Hopefully, I’ll be able to get to the bottom of them soon.
The second bit of work was to actually add and remove images in the galleries themselves. This, for the moment, is done using a “gallery picker” which is available in the image details. Clicking “Gallery” while viewing an image will show the list of galleries in the system, with toggles on the left. The galleries an image already belongs to are enabled, and the user can choose the galleries they want the image to be in by switching the toggles on and off. These translate to insert and remove statements behind the scenes.
The toggles are essentially just HTML and CSS, and the bulk of the code was taken from this example, with some tweaks. They look good, but I think I may need to make them slightly smaller for mouse and keyboard.
I do see some downsides with this interaction. First, it reverses the traditional idea of adding images to a gallery: instead of doing that, you’re selecting galleries for an image. I’m not sure if this would be confusing for others (it is modelled on how Google Photos works). Plus, there’s no real way to add images in bulk. It might be that I’ll need to add a way to select images from the “Photos” section and have a dialog like this to add or remove them all from a gallery. I think this would go far in solving both of these issues.
So that’s where things are. Not sure what I’ll work on next, but it may actually be import and export, and the only reason for this is that I screwed up the base model and will need to make some breaking changes to the DB schema. And I want to have a version of export that’s compatible with the original schema that I can deploy to the one and only production instance of Photo Bucket so that I can port the images and captions over to the new schema. More on this in the future, I’m sure.
Apparently I’m more than happy to discuss work in progress, yet when it comes to talking about something I’ve finished, I freeze up. 🤷 ↩︎
Complexity Stays At the Office
It’s interesting to hear what others like to look at during their spare time, like setting up Temporal clusters or looking at frontend frameworks built atop five other frameworks built on React. I guess the thinking is that since we use it for our jobs, it’s helpful to keep abreast of these technologies.
Not me. Not any more. Back in the day I may have thought similarly. I may even have had a passing fancy for stuff like this, revelling in its complexity with the misguided assumption that it’ll equal power (well, to be fair, it would equal leverage). But I’ve been burned by this complexity one too many times. Why, just now, I’ve spent the last 30 minutes running into problem after problem trying to find a single root cause of something. It’s a single user interaction, but because it involves 10 different systems, it means looking at 10 different places, each one having its own issues blocking me from forward progress.
So I am glad to say that those days are behind me. Sure, I’ll learn new tech like Temporal if I need to, but I don’t go out looking for these anymore. If I want to build something, it would be radically simple: Go, SQLite or PostgreSQL, server-side rendered HTML with a hint of JavaScript. I may not achieve the leverage these technologies may offer, but by gosh I’m not going to put up with the complexity baggage that comes with them.
Message Simulator Client
Years: 2017 — 2020
Status: Gone
I once worked at a company that was responsible for sending SMS messages via an API. Think one time passwords when you log into websites, before time-based OTP apps were a thing. And yeah, this did involve some “marketing” messages, although we were pretty strict about outright spam or phishing messages.
Anyway, since sending messages cost us money, we had a simulator set up in our non-prod environments which we used for testing. The features were pretty minimal: basically get the list of messages sent through the API, send a status message back, and simulate a reply from the receiver. The messages were kept in memory and there was no dedicated UI: everything was done via a web frontend generated from a Swagger spec.
Being somewhat bored one day, and getting frustrated with the clunky web frontend, I thought I’d have a go at making a MacOS client for this thing. After finding my way around Xcode and AppKit, I managed to get something that was usable.

It was not a sophisticated app in the least. It only consisted of a toolbar, a client area, and a right sidebar.
The toolbar allowed switching the environment to connect to, such as Dev or Test. I believe the environments were hard-coded, and if I wanted to add a new one, I’d have had to change the Swift code. There was a button to clear the messages from the simulator, and one to refresh the list of messages. There was also a very simple search, which simply did an in-memory substring match, but it was good enough for finding unique IDs.
The client area consisted of the message ID and message body, and that’s it. Not included were source and target numbers, plus a few other things. These were occasionally useful to me, but not enough to justify the effort it would take to add them to the UI (I was more likely to use the message body anyway).
The right sidebar consisted of the message “details”, which was just the message ID and message content. There were also sections for sending a particular status message, or sending a reply to the selected message.
I always had grand plans for more features, but I couldn’t justify the time. And eventually I decided to leave, and the project was wiped once I returned my laptop.
Despite how bare-bones it was, it was still useful, and something I used most days if I had to work with the simulator. And given that it was my first attempt at a native Mac app, I was reasonably proud of it. So I think it deserves a place in the archives.
Implicit Imports To Load Go Database Drivers Considered Annoying (By Me)
I wish Go’s approach to loading database drivers didn’t involve implicitly importing them as packages. At least that way, package authors would be more likely to get the driver from the caller, rather than load a driver themselves.
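For anyone unfamiliar with the pattern being moaned about, a driver is registered as a side effect of a blank import, as in this sketch (mattn/go-sqlite3 is the CGO-based driver, and registers itself under the name "sqlite3"):

    import (
        "database/sql"

        // The blank import runs the package's init(), which registers the
        // driver with database/sql. Nothing below names the package directly.
        _ "github.com/mattn/go-sqlite3"
    )

    func open() (*sql.DB, error) {
        return sql.Open("sqlite3", "file:app.db")
    }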
I’ve been bitten by this recently, twice. I’m using a GitHub Linux runner to build an ARM version of something that needs to use SQLite. As far as I can tell, it’s not possible to build an ARM binary with CGO enabled on these runners (at least, not without installing a bunch of dependencies — I’m not that desperate yet).
I’m currently using an SQLite driver that doesn’t require CGO, so all my code builds fine. There also exists a substantially more popular SQLite driver that does require CGO, and twice I’ve tried importing packages which used this driver, thereby breaking the build. These packages don’t allow me to pass in a database connection explicitly, and even if they did, I’m not sure it would help: they’d still be importing this SQLite driver that needs CGO.
So what am I to do? As long as I need to build ARM versions, I can’t use these packages (not that I need an ARM version, but it makes testing in a Linux VM running on an M1 Mac easier). I suppose I could roll my own, but it would be nice not to do so. It’d be much better for me to load the driver myself, and pass it to these packages explicitly.
So yeah, I wish this was better.
P.S. When you see the error message “unable to open database file: out of memory (14)” when you try to open an SQLite database, it may just mean the directory it’s in doesn’t exist.
Rubber-ducking: On Context
I’m torn between extracting auth credentials in the handler from a Go Context and passing them as arguments to service methods, or just passing the context and having the service methods get it from the Context themselves.
Previously, when the auth credentials just had a user ID, we were doing the former. But we’re now using more information about what the user has access to, and if we were to continue doing this, we’ll need to pass more parameters through to the service layer. Not only does this make things a little less neat, it’ll mean the next time we do this, we’ll have to do the whole thing again.
But, it means that the service methods would need to get the user IDs themselves, along with this new stuff. Not that that’s an issue: there will be providers that are also using the context to get this info. So this is a viable option. And yet, I feel uneasy about using the context for this.
🦆: So what are your options?
L: I guess I could replace the use of the user ID with a structure that holds both the user ID and this extra stuff.
🦆: Would that work?
L: I mean, I guess it would? It would make it clearer as to whose request this is. It would also mean that we’re being explicit about what the method needs.
🦆: Do you see any downsides with this approach?
L: The only thing I can see is that it would be inconsistent with other parts of the system that are getting it from the context.
🦆: You’re hesitating. You don’t seem sure about this.
L: Well, I just don’t like the fact that we’re passing both the context which holds this auth info and the auth info alongside it. And I know that it’s unclear, and would mean that the tests would need to be changed (I mean they’ll need to be changed anyway if we went with this “principal” approach).
🦆: So what’s really going on? Why are you unsure about this?
L: Well, it’s just showing this in a review and having people say “oh, that’s not the right way to write Go.”
🦆: They say that?
L: Well, not exactly. But they do have opinions about how best to do this (like pretty much everyone, I guess).
🦆: Do they have an opinion about this decision?
L: Well, not really. In fact, I think they’re pretty okay with either approach.
🦆: So if they’re okay with either approach, it probably doesn’t matter that much. But if it were me, I’d probably prefer something a little more readable.
L: Well, yeah. But how can I trust you? You’re me.
🦆: Am I? I’m a duck. Are you a duck?
L: No, I’m not. But you’re not a duck either. You don’t exist. You’re just a figment of my imagination.
🦆: Really? Then how are you speaking to me?
L: Because I conjured you up so I can work through this problem I’m having.
🦆: So you’re having a go at me because I don’t exist, yet you still need me because you’re stuck on this decision and you need a resolution.
L: Well, I didn’t say I don’t need you. It’s probably still helpful to me to have this conversation.
🦆: Ok, I think we’ve gone off track a little. What are you going to do about this context decision?
…
🦆: Well?
L: Ok, I’m not certain that implicitly including the user ID will work, as the user ID may be different to what is actually in the context. I also don’t like how it’s implicit in the context, and I think I do prefer something a little more readable. It pains me to think that I’ll be effectively duplicating values that are already available to the method. But we’re doing that anyway with the user ID.
So here’s what I’ll do. I’ll replace it with a dedicated type, retrievable from the context and holding all the information that is needed to authorise the user. I’ll also retroactively make those changes to other areas of the code that are doing it.
🦆: Okay. And what of your peers?
L: If they ask about it, I’ll just tell them that I prefer something a little more explicit. I know it’s a departure from how I did things previously. But the benefits outweigh the costs I think in this case.
🦆: Okay. Sounds like you’ve got a way forward now.
L: Great. This has been helpful. Thanks for that, D.
🦆: No worries.
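P.S. For the record, here’s a minimal sketch of what that dedicated type might look like. The names are hypothetical:

    import "context"

    // Principal holds everything needed to authorise the caller.
    type Principal struct {
        UserID string
        Roles  []string
    }

    type principalKey struct{}

    // WithPrincipal returns a copy of ctx carrying the principal.
    func WithPrincipal(ctx context.Context, p Principal) context.Context {
        return context.WithValue(ctx, principalKey{}, p)
    }

    // PrincipalFrom retrieves the principal, reporting whether one was set.
    func PrincipalFrom(ctx context.Context) (Principal, bool) {
        p, ok := ctx.Value(principalKey{}).(Principal)
        return p, ok
    }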
Goland Debugger Not Working? Try Upgrading All The Things
I’ve been having occasional trouble with the debugger in Goland. Every attempt to debug a test would just fail with the following error:
/usr/local/go/bin/go tool test2json -t /Applications/GoLand.app/…
API server listening at: 127.0.0.1:60732
could not launch process: EOF
Debugger finished with the exit code 1
My previous attempts at fixing this — upgrading Go and Goland — did get it working for a while, but recently it’s been happening to me again. And being at the most recent version of Go and Goland, that avenue was not available to me.
So I set about looking for other ways to fix this. Poking around the web netted this support post, which suggested upgrading the Xcode Command Line tools:
$ sudo rm -rf /Library/Developer/CommandLineTools
$ xcode-select --install
I ran the command and the tools did upgrade successfully, but I was still encountering the problem. I then wondered if Goland used Delve for debugging, and if that actually needed upgrading. I’ve got Delve via Homebrew, so I went about upgrading that:
$ brew upgrade dlv
And indeed, Homebrew did upgrade Delve from 1.21.0 to 1.22.0. And once that finished, and after restarting Goland1, I was able to use the debugger again.
So, if you’re encountering this error yourself, try upgrading one or more of these tools:

- Goland
- Go
- Xcode Command Line tools (if on a Mac)
- Delve

This was the order I tried them in, but you might be lucky by trying Delve first. YMMV.

Not sure that a restart is required, but I just did it anyway, “just in case”. ↩︎
People Are More Interested In What You're Working On Than You Think
If anyone else is wary about posting about what projects they’re working on, fearing that others will think they’re showing off or something, here are two bits of evidence that I hope will allay those fears:
Exhibit 1: I’m a bit of a fan of the GMTK YouTube channel. Lots of good videos there about game development that, despite not being a game developer myself, I find fascinating. But the playlist I enjoy the most is the one where Mark Brown, the series creator, actually goes through the process of building a game himself. Now, you’re not going to learn how to use Unity from that series (although he does have a video about that), but it’s fun seeing him making design decisions, showing off prototypes, overcoming challenges — both external and self-imposed, and seeing it all come together. I’m always excited when he drops one of these videos, and when I learnt today that he’s been posting dev logs on his Discord, so interested am I in this topic that I immediately signed up as a Patreon supporter.
Exhibit 2: I’ve been writing about my own projects on a new Scribbles blog. This was completely for myself, as a bit of an archive of previous work that would be difficult or impossible to revisit later. I had no expectations of anyone else finding these interesting. Yet, earlier this week, while at lunch, the conversation I was having with work colleagues turned to personal projects. One of them asked if I was working on anything, and when I told him about this blog, he expressed interest. I gave him the link, and that afternoon I saw him taking a look (I’m not expecting him to be a regular visitor, but the fact that he was interested at all was something).
It turns out that my colleague gets a kick out of seeing others do projects like this on the side. I guess, in retrospect, that this shouldn’t be a surprise to me, seeing that I get the same thrill. Heck, that’s why I’ve subscribed to tech podcasts like Under the Radar: I haven’t written an iOS app in my life, yet it’s just fun listening to dev stories like this.
Yet, when it comes to something that I’m working on, for a long time I’ve always held back, thinking that talking about it is a form of showing off. I like to think I’m getting better here, but much like the Resistance, that feeling is still there. Whispering doubt in my ear. Asking who would be interested in these raw, unfinished things that will never go beyond the four walls of the machine from whence they came? I don’t think that feeling will ever go away, but in case I lose my nerve again, I hope to return to the events of this week, just to remind myself that, yeah, people are interested in these stories. I can put money on that assurance. After all, I just did.
So don’t be afraid to publish those blog posts, podcasts, or videos on what you’re working on. I can’t wait to see them.
See also, this post by Aaron Francis that touches on the same topic (via The ReadME Project).
Github Actions, Default Token Permissions, And Publishing Binaries
Looks like Github’s locked down the access rights of the GITHUB_TOKEN recently. This is the token that’s available to all Github actions by default.
After taking a GoReleaser config file from an old project and using it in a new one, I encountered this error when GoReleaser tried to publish the binaries as part of a Github Release:
failed to publish artifacts:
could not release:
PATCH https://api.github.com/repos/lmika/<project>/releases/139475588:
403 Resource not accessible by integration []
After a quick search, I found this Github issue which seemed to cover the same problem. It looks like the way to resolve this is to explicitly add the contents: write permission to the Github Actions YAML file:
name: Create Release

on:
  push:
    tags:
      - 'v*'

# Add this section
permissions:
  contents: write

jobs:
  build:
    runs-on: ubuntu-latest
And sure enough, after adding the permissions section, GoReleaser was able to publish the binaries once again.

There’s a bunch of other permissions that might be helpful for other things, should you need them.
Thoughts on The Failure of Microsoft Bob
Watching a YouTube video about Microsoft Bob left me wondering if one of the reasons Bob failed was that it assumed that users, who may have been intimidated by a GUI when they first encountered one, would remain intimidated forever: that their level of skill would always stay at the point where the GUI was scary and unusable, and their only hope of using a computer successfully was through applications like Bob.
That might be true for some, but I believe such users are a small minority of the userbase as a whole. If someone’s serious about getting the most out of their computer, even back then when the GUI was brand new, I can’t see how they wouldn’t naturally skill up, or at least want to.
I think that’s why I’m bothered by GUIs that sacrifice functionality in the name of “simplicity.” It might be helpful at the start, but pretty soon people will grow comfortable using your UI, and they’ll hit the artificial limits of the application sooner than you expect.
Not that I’m saying that all UIs should be as complex as Logic Pro for no reason: if the domain is simple, then keep it simple. But when deciding on the balance between simplicity and capability, perhaps have trust in your users’ abilities. If they’re motivated (and your UI design is decent) I’m sure they’ll be able to master something a little more complex.
At least, that’s what this non-UI designer believes.
Build Indicators
AKA: Das Blinkenlights
Date: 2017 — now
Status: Steady Green
I sometimes envy those that work in hardware. To be able to build something that one can hold and touch. It’s something you really cannot do with software. And yeah, I dabbled a little with Arduino, setting up sketches that would run on prebuilt shields, but I never went beyond the point of building something that, however trivial or crappy, I could call my own.
Except for this one time.
And I admit that this thing is pretty trivial and crappy: little more than some controllable LEDs. But the ultimate irony is that it turned out to be quite useful for a bunch of software projects.
The Hardware

I built this Arduino shield a while ago, probably sometime around 2013. It’s really not that complicated: just a bunch of LEDs, each wired in series with a resistor, atop an Arduino prototyping shield. The LEDs are divided into two groups of three, each group having a red, amber, and green LED, arranged much like two sets of traffic lights. I’m using the analogue pins of the Arduino, making it possible to dim the LEDs (well, “dim”: the analogue pins are little more than a square pulse with an adjustable duty cycle).


I can’t remember why I built this shield originally: it might have had something to do with train signals, or maybe they were intended as indicators right out of the box. But after briefly using it for its original purpose, it sat on my desk for a while before I started using it as indicator lights.

Its first use was for a tool that would monitor the download and transcode of videos. This would take between 45–60 minutes, and it was good to be able to start the job, leave the room, and get the current progress without having to wake the screen while I passed by the door. The red LED would slowly pulse while the download was in progress, then the yellow LED would start flashing when transcoding began. Once everything was done, the green LED would be lit (or the red LED, which would indicate an error).

The Arduino sketch had a bunch of predefined patterns, encoded as strings. Each character would indicate an intensity, with “a” being the brightest and “z” being the dimmest (I believe the space or dot meant “off”). Each LED could be set to a different pattern, which was done via commands sent over the RS-232 connection. I think the code driving this connection was baked into the download-and-transcode tool itself. The Arduino would reset whenever the RS-232 connection was formed, and just letting it do this when the tool started up meant that I didn’t need to worry about connection state (it didn’t make the code portable though).
Watching Webpack
Eventually this tool fell out of use, and for a long time this board sat in my drawer. Projects came and went, until one came along with a problem that was perfect for this device. I was working on an HTML web-app, switching between the code and a web browser while Webpack watched for changes. Because I only had a single screen, the terminal was always out of sight — behind either the code editor or the web browser — and the version of Webpack I was using would stop watching when it encountered an error (a Go application was serving the files, and Webpack was simply deploying the bundled assets to a public folder, so even though Webpack would stop working, the actual web server would continue running). Not seeing these errors, I’d fall into the trap of thinking that I was changing things, and get confused as to why I wasn’t seeing those changes in the browser. I could go for a minute or two like this before I found out that Webpack had died because of an earlier error and my changes were not getting deployed at all.

So I dug this device out, built a very simple Go CLI tool and daemon that would talk to it, and hacked it into the Webpack config. When a Webpack build started, it would light up the amber LED. If the build was successful, the green LED would light up; if not, the red one would.



This proved to be super useful, and took the guesswork out of knowing when a change was deployed. As long as the green LED is lit, it’s good to go; but as soon as amber becomes red, I know I’ll have to check for errors and get it green once more.
The sketch and daemon software is a lot simpler than what this device used to do. Now, instead of individual patterns of intensity, the daemon — which is itself controlled by a CLI tool — communicates with the device using a very simple protocol that either turns LEDs on or off. Some of the protocol details, taken from the Arduino sketch, are included below:
/*
* ledstatus - simple led indicators
*
* SERIAL FORMAT
*
* Commands take the form: <cmd> <pars>... NL. Any more than
* 8 bytes (1 command, 7 parameters) will be ignored.
*
* Responses from the device will take the form: <status> <par>... NL
*
*/
// Commands
#define CMD_NOP      0x0
#define CMD_PING     'p' // a ping, which should simply respond with RES_OK
#define CMD_TURN_ON  'o' // 'o' <addr> :: turn on the leds at these addresses
#define CMD_TURN_OFF 'f' // 'f' <addr> :: turn off the leds at these addresses

// Response
#define RES_OK '1'

#define PIN_ADDR_G1 (1 << 0)
#define PIN_ADDR_Y1 (1 << 1)
#define PIN_ADDR_R1 (1 << 2)
#define PIN_ADDR_G2 (1 << 3)
#define PIN_ADDR_Y2 (1 << 4)
#define PIN_ADDR_R2 (1 << 5)
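For a flavour of how the Go side might drive this, here’s a sketch of sending a CMD_TURN_ON. The device path is made up, and it assumes the serial line has already been configured (e.g. via stty), so treat it as an illustration of the protocol rather than the actual daemon code:

    import "os"

    // turnOn sends CMD_TURN_ON for the given LED address bitmask,
    // e.g. PIN_ADDR_G1 (1 << 0) for the first green LED.
    func turnOn(devicePath string, addr byte) error {
        dev, err := os.OpenFile(devicePath, os.O_RDWR, 0)
        if err != nil {
            return err
        }
        defer dev.Close()

        // Commands take the form <cmd> <pars>... NL.
        _, err = dev.Write([]byte{'o', addr, '\n'})
        return err
    }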
But in a way the simplicity actually helps here. Because it’s now a command and daemon, I could use it for anything else where I’d like to show progress without having to look at the screen. Just now, for example, I’m working on a Go project that uses Air to rebuild and restart whenever I change a template. The cycle is slightly longer than a simple Webpack run, and I tend to reload the browser window too soon. So waiting for this device to go from amber to green cuts down on these early reloads.

So that’s the Build Indicators. The project is steady, and I have no desire to do anything significant, like modify the hardware. But if I were to work on it again, I think I’d like to add variable intensity back, and extend the command language to let the user upload custom patterns. But for the moment, it’s doing its job just fine.
Why I Use a Mac
Why do I use a Mac?
Because I can’t get anything I need to get done on an iPad.
Because I can’t type to save myself on a phone screen.
Because music software doesn’t exist on Linux.
Because the Bash shell doesn’t exist on Windows (well, it didn’t when I stopped using it).
That’s why I use a Mac.