More work on Broadtail this morning. Managed to get the new favourite stuff finished. Favourites are now a dedicated entity, instead of being tied to a feed item. This means that any video can now be favourited, including ones not from a feed. The favourite list is now a top-level menu item as well.

Also found a useful CLI tool for browsing BoltDB files.

An idea for that micro-bulk-image tool: instead of prioritising uploads, maybe prioritise processing of already uploaded images. Things like cropping, optimising, etc., after the upload, and then re-uploading them.

New AWS Tools Commands

For a while now, I’ve been wanting some terminal-based tools to help manage AWS resources. I know in most circumstances the AWS console would work, but for me there’s a lot of benefit in doing this sort of administration from the command line.

I use the terminal a lot when I’m developing or investigating something. Much of the time while I’m in the weeds I’ve got a bunch of tabs with tools running and producing output, and I’m switching between them as I try to get something working across a bunch of systems.

This is in addition to cases where I need to manage an AWS mock running on the local machine, where the AWS console won’t work at all.

At the start of the week, I was thinking of at least the following three tools that I would like to see exist:

  • A TUI browser/workspace for DynamoDB tables
  • A TUI workspace for monitoring SQS queues
  • Some tool for reading JSON-style log files

As of yesterday, I actually got around to building the first two.

The first is a tool for browsing DynamoDB tables, which I’m calling dynamo-browse (yes, the names are not great). This tool does a scan of a DynamoDB table, and shows the resulting items in a table. Each item can be inspected in full in the lower half of the window by moving the selection.

Dynamo-Browse

At the moment this tool only does a simple scan, and some very lightweight editing of items (duplicate and delete). But already it’s proven useful with the task I was working on, especially when I came to viewing the contents of test DynamoDB instances running on the local machine.

The second tool is used for monitoring SQS queues.

SQS-Browse

This will poll for SQS messages from a queue and display them in the table here. The message itself will be pretty printed in the lower half of the screen. Messages can also be pushed to another queue. That’s pretty much it so far.

There are a bunch of things I’d like to do in these tools. Here’s a list of them:

  • Queries in DynamoDB: There’s currently no way to run queries or filter the resulting items. I’m hoping to design a small query language to do this. I’m already using Participle to power a very simple expression language for duplicating items, so as long as I can design something that is expressive enough and knows how to use particular indices, I think this should work.
  • Putting brand new items in DynamoDB: At the moment you can, in a way, create new items based on existing ones, but there’s currently no way to create brand new items or adjust the attributes of existing ones. For this, I’d like to see an “edit item” mode, where you can select an attribute and edit its value, change its type, add or remove attributes, etc. This would require some UI rework, which is already a bit tentative at this stage (it was sort of rushed together, and although some architectural changes have been made to the M and the C in MVC, work on the V is still outstanding).
  • Preview item changes before putting them to DynamoDB: This sort of extends the point above, where you see the diff between the old item and new item before it’s put into DynamoDB.
  • Workspaces in SQS Browse: One idea I have for SQS browse is the notion of a “workspace”. This is a persistent storage area located on disk where all the messages from the queue would be saved. Because SQS browse is currently pulling messages from the SQS queue, I don’t want the user to get into a state where they’ve lost their messages for good. The idea is that a workspace is always created when SQS browse is launched. The user can choose the workspace file explicitly, but if they don’t, the workspace will be created in the temp directory. Also implicit in this is support for opening existing workspaces to continue work in them.
  • Multiple queues in SQS Browse: Something that I’d like to see in SQS Browse is the ability to deal with multiple queues. Say you’re pulling from one queue and you’d like to push messages to another. You could use a command to add a queue to the workspace. Then you can do things like poll the queue, push messages in the workspace to it, monitor its queue length, etc. Queues would be addressable by number or some other way, so you could simply run the command push 2 to push the current message to queue 2.

As for when I’ll actively work on these tools: probably when I need to use them. But in the short term, I’m glad I got the opportunity to start working on them. They’ve already proven quite useful to me.

Woke up expecting today to be long and annoying, but it turned out not to be as annoying as I anticipated (or as long). I guess the lesson here is not to presume how good or bad a day will be until you’ve lived through it first.

Got my copy of Bloch’s Effective Java out today for the first time in years, in order to prepare for an interview. As far as books about programming languages go, this is probably my favourite. Very informative and easy to read.

No day trip to Warburton this year: too much to do at work unfortunately. I’ve got a day in lieu owed to me so we’ll see if I can make it in April.

Wondering if there’s a way where I could quit my job and spend my waking hours building text-based UI applications. Not sure there’s a lot of money in such things unfortunately.

Doing some tool smithing at work to make my current task a little easier. Using Bubble Tea to build a TUI-based SQS message browser. Early days so far but already showing promise. Anything which will save me clicks around the AWS console is always a plus.

Even more work on Feed Journaler. Still trying to tune the title removal logic. Will probably require a lot of testing.

The “Explore Repositories” sidebar on GitHub’s home page (if you’re logged in) is a bit strange. Why am I getting recommendations for projects that I either don’t need, or haven’t sought out? Am I expected to just pull in any bit of code people are publishing? Really weird.

Hmm, I’m wondering if Broadtail needs a mode for downloading only the audio of a video, and uploading it to a podcasting service like Pocketcasts. I did this twice before, and now wish I could do it again. Third time is when a pattern is emerging.

Also, a bit of a deviation to finish the current work outstanding for Feed Journaler. Much of it is trying to get the Markdown translator working with Write.as posts, mainly to deal with posts without titles, and to strip any hashtags that appear in the first and last paragraphs. The code is a real mess. I should probably spend some time cleaning it all up.

Showing Only Changed Files in Git Log

This technique shows only the changed files in a git log call without having to show the entire patch:

git log --name-only ...

If there’s anyone else out there using Buffalo to build web-apps, I just discovered that it doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.

The tell-tale sign is this error message when you try to run the application:

too much data in section SDWARFSECT (over 2e+09 bytes)

If you see that, deleting public/assets should solve your problem.

Showing A File At a Specific Git Revision

To display the contents of a file at a given revision in Git, run the following command:

$ git show <revision>:<filename>

For example, to view the version of “README.md” on the dev branch:

$ git show dev:README.md

There is an alternative form of this command that will show the changes applied to that file as part of the commit:

$ git show <revision> -- <filename>

This can be used alongside the log command to work out what happened to a file that was deleted.

First, view the history of the file. You are interested in the ID before the commit that deleted the file: attempting to run git show with the deletion commit ID will result in nothing being shown.

$ git log -- file/that/was/deleted
commit abc123
Author: The Deleter <deleter@example.com>
Date:   XXX

    Deleted this file.  Ha ha ha!

commit beforeCommit
Author: File Changer <changer@example.com>
Date:   XXX

    Added a new file at file/that/was/deleted

Then, use git show to view the version before it was deleted:

$ git show beforeCommit:file/that/was/deleted

🔗 A big bet to kill the password for good

Lots of talk about getting the user experience/OS compatibility right, which is good. But I see no real indication of how they’re going to get the millions of app/website developers to switch to this. Hope they’ve considered that.

I may need to shake up my podcast listening habits a little.

I subscribe to several tech-based podcasts which I regularly listen to. But because I am a creature of habit, I only listen to each one during certain times of the week. Plus I’m finding that I can only take so much tech talk these days, and I need some other topic to listen to.

I also subscribe to some long-form, analysis-based podcasts which cover a broad range of subjects, but they don’t come out on a regular schedule. So in order to “fill in the gaps”, I usually turn to some specific news-based podcasts. They do come out regularly, but the news of late has been a bit anxiety-inducing, and I may need to take a little break from them.

So I think my regular podcast listening rotation needs something new. Might be time to start throwing some microcasts from fellow Micro.bloggers into the mix.

A snapshot of the indie-web style project mentioned yesterday:

This is just static HTML and JavaScript at the moment. The idea is that each of these “cards” can be rearranged by direct manipulation. So I spent most of this morning trying to get drag and drop working in JavaScript.

After hooking up dragenter and dragleave event listeners to the Stimulus controllers bound to each of the cards, I found that the controllers were receiving enter and leave events continuously as the dragged element moved over them. These enter and leave events arrived almost immediately after one another, making it difficult to indicate a drop-zone just by reacting to the events themselves. Setting a class on enter and then removing it immediately on leave meant that the class was never shown.

Turns out this is because the events were being sent for all the subelements of the card. Moving the dragged element from the card padding to the text field resulted in an enter event for the text field, followed immediately by a leave event on the card element.

This means that in order to do any form of indication, some element tracking is required. I’m currently using a counter to do this, which increments by one when a dragenter event is received and decrements by one on dragleave. It sort of works — I’ve still got to handle the drop event properly — but I get super concerned about the counter getting out of sync and the indicator being stuck “on” when the dragged element is completely outside the card.

One other thing I could try is having some form of set tracking each of the entered and exited elements and using the set length to determine whether to show the indicator. I trust that the browser will guarantee that a dragleave will always follow a dragenter event sent to the element. 🤔
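The counter bookkeeping described above can be sketched roughly like this. This is just an illustration, not the actual Stimulus controller code; the class and method names, along with the `drop-target` CSS class, are made up for the example. Keeping the counter in its own little object separates the enter/leave arithmetic from the DOM wiring:

```javascript
// Tracks nested dragenter/dragleave events so a drop indicator can be
// shown while the dragged element is anywhere over the card, including
// its subelements. Names here are illustrative.
class DropIndicator {
  constructor() {
    this.count = 0;
  }

  // Call on 'dragenter'. Returns whether the indicator should be shown.
  enter() {
    this.count += 1;
    return this.active();
  }

  // Call on 'dragleave'. Clamped at zero to guard against going negative
  // if an event is somehow missed.
  leave() {
    this.count = Math.max(0, this.count - 1);
    return this.active();
  }

  // Call on 'drop' so the indicator doesn't get stuck "on".
  reset() {
    this.count = 0;
  }

  active() {
    return this.count > 0;
  }
}

// Hypothetical wiring on a card element:
//   const ind = new DropIndicator();
//   card.addEventListener('dragenter', () =>
//     card.classList.toggle('drop-target', ind.enter()));
//   card.addEventListener('dragleave', () =>
//     card.classList.toggle('drop-target', ind.leave()));
//   card.addEventListener('drop', () => {
//     ind.reset();
//     card.classList.remove('drop-target');
//   });
```

The key behaviour is that moving from the card padding into the text field fires an enter (count goes to 2) before the leave (back to 1), so the indicator stays on for the whole traversal.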

We were hoping for a game of bocce in Argyle Square, which has a bocce pitch. But alas!

Beaten there by some Grand Prix event. 🙁

Code Review Software Sucks. Here's How I Would Improve It

This post is about code reviews, and the software that facilitates them.

I’ll be honest: I’m not a huge fan of code reviews, so a lot of what I say below can probably be dismissed as the complaints of someone who blames their tools. Be that as it may, I do think there is room for improvement in the tooling used to review code, and this post touches on a few additional features which would help.

First I should say that I have no experience with dedicated code review tools. I’m mainly talking about code review tools that are part of hosted source code repository systems, like GitHub, GitLab, and Bitbucket. And since these are quite large and comprehensive systems, it might be that the priorities are different compared to a dedicated code review tool with a narrower focus. Pull requests and code reviews are just one of the many tasks that these systems need to handle, along with browsing code repositories, managing CI/CD runs, hosting binary releases, etc.

So I think it’s fair to say that such tools may not have the depth that a dedicated code review tool would have. After all, GitHub Actions does not have the same level of sophistication as something like Jenkins or BuildKite, either.

But even so, I’d say that there’s still room for improvement in the code review facilities that these systems do offer. Improvements that could be tailored more to the code review workflow. It’s a bit like using a text editor to manage your reminders. Yeah, you can use a text editor, but most of the features related specifically to reminders will not be available to you, and you’ll have to plug the feature gap yourself. Compare this to a dedicated reminder app, which would do a lot of the work for you, such as notifying you of upcoming reminders or giving you the means to mark an item as done.

So, what should be improved in the software that is used to review code? I can think of a few things:

Inbox: When you think about it, code reviews are a bit like emails and Jira tickets: they come to you and require you to action them in some way in order to get the code merged. But the level of attention you need to give them changes over time. If you’ve made comments on a review, there’s really no need to look at it again until the code has been updated or the author has replied.

But this dynamic aspect of code reviews is not well reflected in most of these systems. Usually what I see is simply a list of pull requests that have not yet been approved or merged, and I have to keep track myself of the reviews that need my attention now, versus those where I can probably wait for action from others.

I think what would be better than a simple list would be something more of an inbox: a subset of the open reviews that are important to me now. As I action them, they’ll drop off the list and won’t come back until I need to action them again.

The types of reviews that I’d like to appear in the inbox, in the order listed below, would be the following:

  1. Ones in which I’ve made comments that have been responded to — either by a code change or a reply — that I need to look at. The ones with more responses would appear higher in the list than the ones with fewer.
  2. Reviews that are brand new that I haven’t looked at yet, but others have.
  3. Brand new reviews that haven’t been looked at by anyone.

In fact, the list can be extended to filter out reviews that I don’t need to worry about, such as:

  1. Reviews that I’ve made comments on that have not been responded to yet. This indicates that the author has not gotten around to them yet, in which case looking at the pull request again serves no purpose.
  2. Reviews that have enough approvals by others and do not necessarily need mine.
  3. Reviews that I’ve approved.

This doesn’t necessarily need to replace the list of open reviews: that might still be useful. But it would no longer be the primary list of reviews I need to work with day to day.

Approval pending resolution of comments: One thing I always find myself indecisive about is when I should hit that Approve button. Let’s say I’ve gone through the code, made some comments that I’d like the submitter to look at, but the rest of the code looks good. When should I approve the pull request? If I do it now, the author may not have seen the comments, or any indication that I’d like them to make changes, and will go ahead and merge.

I guess then the best time to approve it is when the changes are made. But that means the onus is on me to remember to review the changes again. If the requests are trivial — such as renaming things — I’d trust the person to make the changes, and going through to review them once again would be a waste of time.

This is where “Approval pending resolution of comments” would come in handy. Selecting this approval mode would mean that my approval would be granted once the author has resolved the outstanding review comments. This would not replace the regular approval mode: if there are changes which do require a re-review, I’d just approve it normally once I’ve gone through it again. But it’s one more way to let the workflow of code reviews work in my favour.

Speaking of review comments…

Review comment types: I think it’s a mistake to assume that all review comments are equal. Certainly in my experience I find myself unable to read the urgency of comments on the reviews I submit, and I find it difficult to telegraph urgency in the comments I make on the code reviews of others. This usually results in longer comments with phrases such as “you don’t have to do this now, but…”, or “something to consider in the future…”

Some indication of the urgency of the comment alongside the comment itself would be nice. I can think of a system that has at least three levels:

  1. Request for change: this is the highest level. It’s an indication that you see something wrong with the code that must be changed. In these cases, these comments would need to be resolved with either a change to the code, or a discussion of some sort, but they need to be resolved before the code is merged.
  2. Request for improvement: This is a level lower, and indicates that there is something in the code that may need to be changed, but not doing so would not block the code review. This can be used to suggest improvements to how things were done, or maybe to suggest an alternative approach to solving the problem. All those nitpicking comments can go here.
  3. Comments: This is the lowest level. It provides a way to make remarks about the code that require no further action from the author. Uses for this might be praise for doing something a certain way, or FYI-type comments that the submitter may need to be aware of for future changes.

Notes to self: Finally, one thing that way too few systems dealing with shared data offer is the ability to annotate pull requests or the commented files with private notes. These won’t be seen by the author or any of the other reviewers, and are only there to facilitate making notes to self, such as “Looked at it, waiting for comments to be addressed”, or “review no longer pending”. This is probably the minimum improvement, and it would matter less if the other features above were addressed.

So that’s how I’d improve code review software. It may be that I’m the only one with this problem, and that others are perfectly able to review code effectively without these features. But I know they would work for me, and if I start seeing these in services like GitHub or GitLab, I probably would start using them.

Revisiting the micro-bulk-uploader project. I’ve been reading a lot about the IndieAuth and Micropub formats and I got really excited about them. I figured this would be a good project to try them on.

In short, this is a bulk image uploader. It’s primarily tailored for Micro.blog, but since these formats are standards in a way, there’s no real reason why it couldn’t work for other services.

Basically, the idea is this:

  • User authenticates with their website.
  • They are presented with a set of file uploads, which they can use to prep their images. They can reorder the images (ideally using drag-and-drop) and write some alt text.
  • Once they’re happy with the images, they click “Upload”.
  • The server will upload the images and then generate the HTML to produce a gallery (or just a list of images if they so prefer).

I have a few goals for this project:

  1. To ship a hosted service for others to use. The goal is to get into the practice of shipping software for other people. This is the principal goal: it must be usable by others. Otherwise, there’s no point doing this.
  2. To get a working version out the door in two weeks for others to start trying. Two weeks from now is April 1st, which should be achievable.
  3. To learn about the IndieWeb formats, specifically IndieAuth and Micropub.