Doing some tool smithing at work to make my current task a little easier. Using Bubble Tea to build a TUI-based SQS message browser. Early days so far but already showing promise. Anything which will save me clicks around the AWS console is always a plus.

Even more work on Feed Journaler. Still trying to tune the title removal logic. Will probably require a lot of testing.

The “Explore Repositories” sidebar on GitHub’s home page (if you’re logged in) is a bit strange. Why am I getting recommendations for projects that I either don’t need, or haven’t sought out? Am I expected to just pull in any bit of code people are publishing? Really weird.

Hmm, I’m wondering if Broadtail needs a mode for downloading just the audio of a video, and uploading it to a podcasting service like Pocketcasts. I did this twice before, and now wish I could do it again. Third time is when a pattern is emerging.

Also, a bit of a deviation to finish the current work outstanding for Feed Journaler. Much of it is trying to get the Markdown translator working with Write.as posts, mainly to deal with posts without titles, and to strip any hashtags that appear in the first and last paragraphs. The code is a real mess. I should probably spend some time cleaning it all up.

Showing Only Changed Files in Git Log

This technique shows only the changed files in a git log call without having to show the entire patch:

git log --name-only ...
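For example, to list the files touched by each of the last three commits:

git log --name-only -3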

If there’s anyone else out there using Buffalo to build web-apps, I just discovered that it doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.

The tell-tale sign is this error message when you try to run the application:

too much data in section SDWARFSECT (over 2e+09 bytes)

If you see that, deleting public/assets should solve your problem.

Showing a File at a Specific Git Revision

To display the contents of a file at a given revision in Git, run the following command:

$ git show <revision>:<filename>

For example, to view the version of “README.md” on the dev branch:

$ git show dev:README.md

There is an alternative form of this command that will show the changes applied to that file as part of the commit:

$ git show <revision> -- <filename>

This can be used alongside the log command to work out what happened to a file that was deleted.

First, view the history of the file. You are interested in the ID of the commit before the one that deleted the file: attempting to run git show with the deletion commit ID will result in nothing being shown.

$ git log -- file/that/was/deleted
commit abc123
Author: The Deleter <deleter@example.com>
Date:   XXX

    Deleted this file.  Ha ha ha!

commit beforeCommit
Author: File Changer <changer@example.com>
Date:   XXX

    Added a new file at file/that/was/deleted

Then, use git show to view the version before it was deleted:

$ git show beforeCommit:file/that/was/deleted

🔗 A big bet to kill the password for good

A lot of talk about getting the user experience/OS compatibility right, which is good. But I see no real indication of how they’re going to get the millions of app/website developers to switch to this. Hope they’ve considered that.

I may need to shake up my podcast listening habits a little.

I subscribe to several tech-based podcasts which I regularly listen to. But because I am a creature of habit, I only listen to each one during certain times of the week. Plus I’m finding that I can only take so much tech talk these days, and I need some other topic to listen to.

I also subscribe to some long-form, analysis-based podcasts which cover a broad range of subjects, but they don’t come out on a regular schedule. So in order to “fill in the gaps”, I usually turn to some specific news-based podcasts. They do come out regularly, but the news of late has been a bit anxiety-inducing, and I may need to take a little break from them.

So I think my regular podcast listening rotation needs something new. Might be time to start throwing some microcasts from fellow micro-bloggers into the mix.

A snapshot of the indie-web style project mentioned yesterday:

This is just static HTML and JavaScript at the moment. The idea is that each of these “cards” can be rearranged by direct manipulation. So I spent most of this morning trying to get drag and drop working in JavaScript.

After hooking up dragenter and dragleave event listeners to the Stimulus controllers bound to each of the cards, I found that the controllers were receiving enter and leave events continuously as the dragged element was moved over them. These enter and leave events fired almost immediately after one another, making it difficult to indicate a drop-zone just by reacting to the events themselves. Setting a class on enter and then removing that class immediately on leave meant that the class was never shown.

Turns out this is because the events were being sent for all the sub-elements of the card. Moving the dragged element from the card padding to the text field resulted in an enter event for the text field, followed immediately by a leave event on the card element.

This means that in order to do any form of indication, some element tracking is required. I’m currently using a counter to do that: it increments by one when a dragenter event is received and decrements by one on dragleave. It sort of works — I’ve still got to handle the drop event properly — but I get super concerned about the counter getting out of sync and the indicator being stuck “on” when the dragged element is completely outside the card.
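Something like the following is what I mean by the counter approach. It’s only a rough sketch, assuming Stimulus 3; the controller wiring, the dragDepth field, and the dropzone-active class are all made up for illustration:

import { Controller } from "@hotwired/stimulus"

// Counts dragenter/dragleave pairs so that events fired by the card's
// sub-elements don't switch the drop indicator off prematurely.
export default class extends Controller {
  connect() {
    this.dragDepth = 0
  }

  // data-action="dragenter->card#dragEnter" on the card element
  dragEnter(event) {
    event.preventDefault()
    this.dragDepth++
    this.element.classList.add("dropzone-active")
  }

  // data-action="dragover->card#dragOver": needed so the browser allows a drop
  dragOver(event) {
    event.preventDefault()
  }

  // data-action="dragleave->card#dragLeave"
  dragLeave() {
    this.dragDepth = Math.max(0, this.dragDepth - 1)
    if (this.dragDepth === 0) {
      this.element.classList.remove("dropzone-active")
    }
  }

  // data-action="drop->card#drop": reset the counter here so the
  // indicator doesn't get stuck on after the drop.
  drop(event) {
    event.preventDefault()
    this.dragDepth = 0
    this.element.classList.remove("dropzone-active")
  }
}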

One other thing I could try is having some form of set that tracks the entered and exited elements, and using the set’s size to determine whether to show the indicator. I trust that the browser will guarantee that a dragleave will always follow a dragenter event sent to the same element. 🤔

We were hoping for a game of bocce in Argyle Square, which has a bocce pitch. But alas!

Beaten there by some Grand Prix event. 🙁

Code Review Software Sucks. Here's How I Would Improve It

This post is about code reviews, and the software that facilitates them.

I’ll be honest: I’m not a huge fan of code reviews, so a lot of what I say below can probably be dismissed as coming from someone who blames their tools. Be that as it may, I do think there is room for improvement in the tooling used to review code, and this post touches on a few additional features that would help.

First I should say that I have no experience with dedicated code review tools. I’m mainly talking about code review tools that are part of hosted source code repository systems, like GitHub, GitLab, and Bitbucket. And since these are quite large and comprehensive systems, it might be that the priorities are different compared to a dedicated code review tool with a narrower focus. Pull requests and code reviews are just one of the many tasks that these systems need to handle, along with browsing code repositories, managing CI/CD runs, hosting binary releases, etc.

So I think it’s fair to say that such tools may not have the depth that a dedicated code review tool would have. After all, GitHub Actions does not have the same level of sophistication as something like Jenkins or BuildKite, either.

But even so, I’d say that there’s still room for improvement in the code review facilities that these systems do offer: improvements that could be tailored more to the code review workflow. It’s a bit like using a text editor to manage your reminders. Yeah, you can use a text editor, but most of the features related specifically to reminders will not be available to you, and you’ll have to plug the feature gap yourself. Compare this to a dedicated reminder app, which would do a lot of the work for you, such as notifying you of upcoming reminders or giving you the means to mark an item as done.

So, what should be improved in the software that is used to review code? I can think of a few things:

Inbox: When you think about it, code reviews are a bit like emails and Jira tickets: they come to you and require you to action them in some way in order to get the code merged. But the level of attention you need to give them changes over time. If you’ve made comments on a review, there’s really no need to look at it again until the code has been updated or the author has replied.

But this dynamic aspect of code reviews is not well reflected in most of these systems. Usually what I see is simply a list of pull requests that have not yet been approved or merged, and I have to keep track myself of the reviews that need my attention now, versus those where I’m waiting on action from others.

I think what would be better than a simple list would be something more of an inbox: a subset of the open reviews that are important to me now. As I action them, they’ll drop off the list and won’t come back until I need to action them again.

The types of reviews that I’d like to appear in the inbox, in the order listed below, would be the following:

  1. Ones where I’ve made comments that have been responded to — either by a code change or a reply — and that I need to look at. The ones with more responses would appear higher in the list than the ones with fewer.
  2. Reviews that are brand new that I haven’t looked at yet, but others have.
  3. Brand new reviews that haven’t been looked at by anyone.

In fact, the list can be extended to filter out reviews that I don’t need to worry about, such as:

  1. Reviews that I’ve made comments on that have not been responded to yet. This usually means the author has not gotten around to them, in which case looking at the pull request again serves no purpose.
  2. Reviews that have enough approvals by others and do not necessarily need mine.
  3. Reviews that I’ve approved.

This doesn’t necessarily need to replace the list of open reviews: that might still be useful. But it will no longer be the primary list of reviews I need to work with day to day.

Approval pending resolution of comments: One thing I always find myself indecisive about is when I should hit that Approve button. Let’s say I’ve gone through the code, made some comments that I’d like the submitter to look at, but the rest of the code looks good. When should I approve the pull request? If I do it now, the author may not have seen the comments, or have any indication that I’d like them to make changes, and will go ahead and merge.

I guess then the best time to approve it is when the changes are made. But that means the onus is on me to remember to review the changes again. And if the requests are trivial — such as renaming things — I trust the person to make the changes, and going through to review them once again is a waste of time.

This is where “Approval pending resolution of comments” would come in handy. Selecting this approval mode would mean that my approval is granted once the author has resolved the outstanding review comments. This would not replace the regular approval mode: if there are changes which do require a re-review, I’d just approve it normally once I’ve gone through it again. But it’s one more way to let the workflow of code reviews work in my favour.

Speaking of review comments…

Review comment types: I think it’s a mistake to assume that all review comments are equal. Certainly in my experience I find myself unable to read the urgency of comments on the reviews I submit, and I find it equally difficult to telegraph that urgency in the comments I make on others’ code reviews. This usually results in longer comments with phrases such as “you don’t have to do this now, but…”, or “something to consider in the future…”

Some indication of the urgency of the comment alongside the comment itself would be nice. I can think of a system that has at least three levels:

  1. Request for change: this is the highest level. It’s an indication that you see something wrong with the code that must be changed. These comments would need to be resolved, either with a change to the code or a discussion of some sort, before the code is merged.
  2. Request for improvement: This is a level lower, and indicates that there is something in the code that may need to be changed, but leaving it as-is would not block the review. This can be used to suggest improvements to how things were done, or maybe to suggest an alternative approach to solving the problem. All those nitpicking comments can go here.
  3. Comments: This is the lowest level. It provides a way to make remarks about the code that require no further action from the author. Uses for this might be praise for doing something a certain way, or FYI-type comments that the submitter may need to be aware of for future changes.

Notes to self: Finally, one thing that way too few systems dealing with shared data have is the ability to annotate pull requests, or the commented files, with private notes. These won’t be seen by the author or any of the other reviewers, and are only there to facilitate making notes to self, such as “Looked at it, waiting for comments to be addressed”, or “review no longer pending”. This is probably the bare minimum I’d want, and it would matter less if the other features above were addressed.

So that’s how I’d improve code review software. It may be that I’m the only one with this problem, and that others are perfectly able to review code effectively without these features. But I know they would work for me, and if I start seeing them in services like GitHub or GitLab, I would probably start using them.

Revising the micro-bulk-uploader project. I’ve been reading a lot about the IndieAuth and Micropub formats and got really excited about them. I figured this would be a good project to try them out.

In short, this is a bulk image uploader. It’s primarily tailored for Micro.blog, but since these formats are standards in a way, there’s no real reason why it couldn’t work for other services.

Basically, the idea is this:

  • User authenticates with their website.
  • They are presented with a set of file uploads, which they can use to prep their images. They can reorder the images (ideally using drag-and-drop) and write some alt text.
  • Once they’re happy with the images, they click “Upload”.
  • The server will upload the images and then generate the HTML to produce a gallery (or just a list of images if they so prefer).

I have a few goals for this project:

  1. To ship a hosted service for others to use. The goal is to get into the practice of shipping software for other people. This is the principal goal: it must be usable by others. Otherwise, there’s no point doing this.
  2. To get a working version out the door in two weeks for others to start trying. Two weeks is April 1st, which should be achievable.
  3. To learn about the IndieWeb formats, specifically IndieAuth and Micropub.

I forgot that yesterday was St. Patrick’s Day. But I did wear a green polo, listened to some Celtic-inspired music, and watched a few episodes of Derry Girls last night. Does that mean I accidentally celebrated St. Patrick’s Day? ☘️

Sometimes I wish I had the patience (and the hardware) to play some of the video games indies have been releasing. After reading about them, or watching a speed-run, I see that some of them are actually quite incredible. There are some really talented game designers out there.

Reading the chapter on Microformats in Indie Microblogging brought me back to uni where we were taught about the Semantic Web. I tell you, I feel exhausted just thinking about it now.

All these standards: RDF, OWL, ontologies, and like seven other acronyms I’ve since forgotten. Each one building on top of the other like a gigantic wedding cake. And oh yeah, all of them using XML1 and requiring seven different namespaces and five different forms of URIs to express even the most basic relationship.

It was the only part of the subject that didn’t seem fun. HTML? CSS? Great, sign me up! Keeping seven different XML files up to date when a relationship changes? No, thank you.

If there was ever an instance of technologists overengineering a solution without considering how it would be used to solve the problem, the Semantic Web is a great example.


  1. RDF did have a non-XML representation, but I do remember being told that using it was discouraged in favour of the XML standard. ↩︎

An important tenet of software development that I don’t think is appreciated that much: make your software easy to test. This includes building tests, test setup, running tests, configuration for testing, etc. If you don’t do this, your software will not be tested.

🔗 Go 1.18 Release Notes

Happy “generics finally in Go” day.

(I wouldn’t call myself someone who was itching for the Go devs to add generics; but now that they are in the language, I’ll probably use them).

Broadtail 0.0.7

Released Broadtail 0.0.7 about a week ago. This release includes some restyling of the job list on the home page, which now sports a progress bar updated using websockets (no need for page refreshes anymore).

For the frontend, the WebSocket API provided by the browser is used. There’s not much to it — it’s managed by a Stimulus controller which sets up the websocket and listens for updates. The updates are then pushed as custom events to the main window, which the Stimulus controllers used to update the progress bar are listening out for. This allows a single Stimulus controller to manage the websocket connection while making use of the window as a message bus.
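As a rough sketch of the arrangement (assuming Stimulus 3; the endpoint path, event name, and message shape here are placeholders rather than the real ones):

import { Controller } from "@hotwired/stimulus"

// Owns the websocket connection and rebroadcasts every update it receives
// as a custom event on the window, which acts as a simple message bus.
export default class extends Controller {
  connect() {
    this.socket = new WebSocket(`ws://${window.location.host}/ws/jobs`)
    this.socket.onmessage = (message) => {
      const update = JSON.parse(message.data)
      window.dispatchEvent(new CustomEvent("job:progress", { detail: update }))
    }
  }

  disconnect() {
    this.socket.close()
  }
}

The controller behind each progress bar can then listen for those events with an action like data-action="job:progress@window->progress#update" and set its width and label from event.detail.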

Working out the layers of the progress bar took me a bit of time, as I wanted to make sure the text in the progress bar itself remained readable as the bar filled. I settled on an HTML tree that looks like the following:

<div class="progressbar">
  <!-- The filled in layer, at z-index: 10 -->
  <div class="complete">
    <span class="label">45% complete</span>
  </div>

  <!-- The unfilled layer -->
  <span class="label">45% complete</span>
</div>

As you can see, there’s a base layer and a filled-in layer that overlaps it. Both of these layers have a progress label containing the same status message. As the .complete layer fills in, it hides the unfilled layer and its label. The various CSS properties used to get this effect can be found here.

The backend was a little easier. There is a nice websocket library for Go which handles the connection upgrade and provides a simple API for posting JSON messages. Once the upgrade is complete, a goroutine servicing the connection just listens for status updates from the jobs manager and forwards them to the client as JSON text messages.
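Roughly, the handler looks like the following. This is a simplified sketch only, using gorilla/websocket for illustration, with the JobStatus type and the updates channel standing in for whatever the jobs manager actually provides:

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

// JobStatus is a stand-in for whatever the jobs manager actually publishes.
type JobStatus struct {
    JobID   string  `json:"job_id"`
    Percent float64 `json:"percent"`
}

var upgrader = websocket.Upgrader{}

// statusHandler upgrades the connection, then forwards each status update
// from the jobs manager to the client as a JSON text message.
func statusHandler(updates <-chan JobStatus) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println("upgrade failed:", err)
            return
        }
        defer conn.Close()

        // In the real thing, each connection would subscribe to the jobs
        // manager for its own channel of updates.
        for status := range updates {
            if err := conn.WriteJSON(status); err != nil {
                log.Println("write failed:", err)
                return
            }
        }
    }
}

func main() {
    updates := make(chan JobStatus)
    http.HandleFunc("/ws/jobs", statusHandler(updates))
    log.Fatal(http.ListenAndServe(":8080", nil))
}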

Although this works, it’s not perfect. One small issue is that the frontend will not reconnect if there is an error. I imagine it’s just a matter of listening for the relevant events and retrying, but I’ll need to learn more about how this actually works. Another thing is that the styling of the progress bar relies on fixed widths. If I get around to reskinning the entire application, that might be the time to address this.

The second thing this release has is a simple integration with Plex. If this integration is configured, Broadtail will send a request to Plex asking it to rescan the library for new files, meaning that there’s no real need to wait for the scheduled rescan to occur before the videos are available in the app. This simply uses Plex’s API, but it needs the Plex token, which can be found using this method.

Anyway, that’s it for this version. I’m working on re-engineering how favourites work for the next release. Since this is still in early development, I won’t be putting in any logic to migrate the existing favourites, so just be wary that you may lose that data. If that’s going to be a problem, feel free to let me know.

Learning Through Video

Mike Crittenden wrote a post this morning about how he hates learning through videos. I know for myself that I occasionally do prefer videos for learning new things, but not always.

Usually if I need to learn something, it would be some new technology that I have to know for my job. In those cases, I find that if I have absolutely no experience in the subject matter, a good video which provides a decent overview of the major concepts helps me a great deal. Trying to learn the same thing from reading a lengthy blog post, especially when jargon is used, is less effective for me. I find myself getting tired and losing my place. Now, this could just be because of the writing (dry blocks of text are the worst), but I tend to do better if the posts are shorter and formulated more like a tutorial.

If there is a video, I generally prefer them to be delivered in the style of a lecture or presentation. Slides that I can look at while the presenter is speaking are fine, but motion graphics or a live demo is better, especially if the subject is complex enough to warrant them. But in either case, I need something visual that I can actually watch. Having someone simply talk to the camera really doesn’t work for me, and makes watching the video more of a hassle (although it’s slightly better if I just listen to the audio).

Once I’ve become proficient in the basics, learning through video becomes less useful to me, and a decent blog post or documentation page works better. By that time, my learning needs become less about the basics and more about something specific, like how to do a particular thing or the details of a particular item. At that point, speed is more important, and I prefer something that I can skim and search in my own time, rather than watching videos that tend to take much longer.

So that’s how and when I prefer to learn something from video. I’ll close by saying that this is my preferred approach when I need to learn something for work. If it’s during my downtime, either a video or blog-post is fine, so long as my curiosity is satisfied.