Hearing that Microsoft Recall uses an AI model trained to observe the GUI, it all feels so… byzantine to me. Might be that my experience is coloured by previous attempts at UI automation testing, but surely there’s a better way to do this at the API layer.

Well, I guess it includes a collective action problem to solve, too.

And it doesn’t guarantee all that sweet, sweet shareholder investment cash that seems to drive most decisions to turn to AI features recently (this “stock-price driven development” is one of the reasons why I try to avoid working at public companies).

Don't Leave User Experience For Later

DHH wrote a post yesterday that resonates with me. This is how he opens:

Programmers are often skeptical of aesthetics because they frequently associate it with veneering

I doubt DHH reads this blog, but he could’ve addressed this post directly at me. I’m skeptical about aesthetics. Well… maybe not skeptical, but if we’re talking about personal projects, I do consider it less important than the functional side of things. Or at least I did.

He continues:

Primary reason I appreciate aesthetics so much is its power to motivate. And motivation is the rare fuel that powers all the big leaps I’ve ever taken in my career and with my projects and products. It’s not time, it’s even attention. [sic1] It’s motivation. And I’ve found that nothing quite motivates me like using and creating beautiful things.

He was mainly talking about the code design, but I think this extends to the project’s UI and UX. Particularly if you’re building it for yourself. Maybe especially if you’re building it for yourself.

And this is where my story begins. From the start I’ve been putting off any task that would improve the user experience on one of my side projects. I considered such tasks unnecessary, or certainly less important than the “functional” side of things. Whenever faced with a decision on what part to work on next, the user experience work was left undone, usually with a thought along the lines of “eh, it’s UI stuff, I’ll do it later.”

But I think this was a mistake. Since I was actually using this tool, I was exposed to the clunky, unfinished UI whenever I needed to do something with it. And it turns out no matter how often you tell yourself that you’ll fix it later, a bad UI is still a bad UI, and it affects how it feels to use it. And let me tell you: it didn’t feel good at all. In fact, I detested it so much that I thought about junking it altogether.

It was only when I decided to add a bit of polish that things improved. And it didn’t take much: just fixing the highlights of the nav to reflect the current section, for example. But it was enough, and the improved UI made it feel better to use, which motivated me to get back to working on it again.

So I guess the takeaway is similar to the point DHH made in his post: if something feels good to use, you’re more likely to work on it. And sure, you can’t expect to have a great user experience out of the box: I’ll have to work through some early iterations of it as I build things out. But I shouldn’t ignore user experience completely, or defer it until “later”. If it’s something I’m using myself, and it doesn’t feel good to use, best to spend some time on it now.


  1. Should there be a “not” here? “It’s not even attention?” ↩︎

I used Micro.blog’s find and replace for the first time today, to change a bunch of links to point to a new domain. Worked like a charm. I really like these sorts of power features; the ones you don’t use every day, but when you need them and they deliver, they make your job a whole lot easier.

Kind of had it with Lime and the other dockless scooters. I’m sure most users are fine, but enough of them are inconsiderate pricks that I just want the whole enterprise shut down. If you use these services, then I implore you: stand your bike/scooter upright and do not block the path! 🚳

Sometimes in life, when you’re faced with a task you don’t know how to solve, the best way to make forward progress is to close your eyes and just start with something, anything, even if it’s not the best or even a good idea.

So watch out Dev cluster. I’m going to YOLO this! 🏃‍♂️

Two Months Doing Weeknotes

It’s been a bit over two months since I started writing weeknotes at work, and I’m sure you’re all on the edge of your seat dying to know how it’s going. Well, I’m pleased to say that, on net, it’s been quite helpful.

Now, I’ve gotta be honest here: doing weeknotes is not quite a decision that’s completely my own. We’re technically required to write these notes and submit them to our managers1. But I was so bad at doing this. Most weeks I submitted nothing at all, and when I did write something, it was little more than three or four dot points with a Jira ticket number and a summary.

So I guess you could say the decision I made here was writing “good” weeknotes. And I think putting some effort in the notes I wrote was the right decision to make. I managed to clear the first hurdle, which was developing a routine. First thing Monday morning, after booting my laptop and before I grab my (second) coffee, I sit down for 15-30 minutes and write out what I achieved last week and what I plan to focus on for the week ahead. I switched away from dot points to writing prose. It’s not very interesting prose — it’s little more than “last week I did this, this week I’ll do that” — but I think it makes for a better record of what I was working on. I hear that others keep their weeknotes open during the week so they can add to them as things crop up, sort of like a live journal. I tried this for a little bit, but the day-to-day tasks I work on are not particularly interesting. I think the weekly summary works.

I think there are still areas for fine-tuning this, particularly around the content itself. As useful as the weekly record of work could be, I think including some thoughts on how a task was designed, or opinions on how we do things, could also be beneficial, or at least interesting. I do know that my managers read these notes, as I’ve received questions from them about them. And although I’ve yet to actually need to reference previous notes I’ve written (it’s only been two months, after all), I’m guessing that’ll just come with time as well.

So this is definitely something I will continue (again, partly because I have to).

Oh, and I did end up using the blogging feature in Confluence for this. First time I used it for anything, actually.


  1. I do know the reason why we do this. I’m not entirely sure I can say, but I can tell you that it’s not (entirely?) for monitoring performance. ↩︎

Got gallery.folio.red back up this morning. Took longer than I hoped, since the cert expired and renewing it was delayed due to the Let’s Encrypt outage. Also fought with the bugs in Photo Bucket during the upgrade. But the import worked and the site’s back up again, so all’s well that ends well.

📝 New post on Photo Bucket over at the Workpad blog: The Site Page Model

🔗 Finish your projects

If there’s ever an article I should print out and staple to my forehead, it’s this one.

I’ve been really enjoying all the WeblogPoMo posts that the PoMo bot has been relaying. Discovered a bunch of new blogs this way that I’ve now added to NetNewsWire.

Had to miss the first part of Micro.camp this year, unfortunately. My meeting with the sandman went long. Hope to catch up on the keynote and state of the platform videos later.

🔗 Slack users horrified to discover messages used for AI training

I’d like to avoid jumping on the “I hate everything AI” bandwagon, but I agree that Slack’s use of private message data to train their LLM is a pretty significant breach of trust. A lot of sensitive data runs through their system, and although they may be hosting it, it’s not theirs to do with as they please. Maybe they think it’s within their rights, what with their EULAs and everything, but if I were a paying customer — of enterprise software, if you remember — I’d make bloody sure that data remains the customer’s and the customer’s alone.

It’ll be interesting to see how this will affect me personally. We use Slack at work, and I know management is very sensitive about IP (and given the domain, I can understand). Maybe I’ll finally get to try Teams out.

Friday Development Venting Session

Had a great venting session with someone at work about the practices of micro-services, the principles of component driven development, mocking in unit tests, and interfaces in Go. Maybe one day I’ll write all this up, but it was so cathartic to express how we can do better on all these fronts.

If anyone were to ask what I think, here it is in brief:

  1. Micro-services might be suitable for what you’re building if you’re Amazon or Google, where you have teams of 20 developers working on a single micro-service. But if you’ve got a team of 20 developers working on the entire thing, you may want to consider a monolith instead. Easier to deploy, easier to operate, and you get to rely on the type system telling you when there’s an integration problem rather than finding it out during runtime.
  2. The idea of component driven design — which is modelled on electrical engineering principles, whereby a usable system is composed of a bunch of ICs and other discrete components — is nice in theory, but I think it’s outlived its usefulness for most online services. It probably still makes sense if you’re stuck in the world of Java and J2EE, where your “system” is just a bunch of components operating within a container, or if you actually are vending components to be reused. But most of the time what you’re working on is an integrated system, so you should be able to leverage that fact, rather than pretend you’re building and testing ICs that you expect others to use. You’re not likely to swap out one component for another wholesale when you need to change a database (which you’re unlikely to do anyway); you’re more likely to modify the database component instead. So don’t bake that assumption into your design.
  3. This also extends to the idea of unit testing, with the assumption that you must test the component in isolation. Again, you’re not building ICs that you’re expecting to transplant into other projects (if you are, then keep testing in isolation). So it makes more sense to build tests that leverage the other components of the system. This means building tests that actually call the components directly: the service layer calling the actual database driver, for example. This produces a suite of tests that looks like a staircase, each one relying on the layers below it: the database driver working with a mock database, the service layer using the actual database driver, and the handlers using the actual services. Your unit test coverage should only be that of the thing you’re testing: don’t write database driver tests in your handler package. But I don’t see a reason why you shouldn’t be able to rely on existing areas of the system in your tests.
  4. The end result of doing this is that your tests are actually running a mini-version of the application itself. This naturally means that there’s less need to mock things out. I know I’ve said this before, but the idea of mocking out other services in unit tests instead of just using them really defeats the idea of writing tests in the first place: gaining confidence in the correct operation of the system. How can you know whether a refactor was successful if you need to change the mocks in the unit test just to get them green again? Really, you should only need to use mocks to stub out external dependencies that you cannot run in a local Docker container. Otherwise, run your tests against a local database running in Docker, and use the actual services you’ve built as your dependencies. And please: make it easy for devs to run the unit tests in their IDE or on the command line with a single “make” command. If I need to set environment variables to run a test, then I’ll never run them.
  5. Finally, actually using dependent services directly means there’s less need to define interfaces up front. This, after starting my career as a Java dev, is something I’m trying to unlearn myself. The whole idea of Go interfaces is that they should come about organically as the need arises, not be pre-defined from above before the implementation is made. That is the same level of thinking that comes from component design (you’re not building ICs here, remember?). Just call the service directly, and when you need the interface, you can add one. But not before. And definitely not because you find yourself needing to mock something out (because, again, you shouldn’t need to mock other components of the system).

Anyway, that’s my rant.

Flights to Canberra booked. Going to be bird watching again real soon.

If the macOS devs are looking for something to do: here’s a free idea. Detect when the user is typing on their keyboard, say using keystrokes in the last N seconds, and if it’s greater than some low number, prevent any window from stealing keyboard focus.

I must agree once again with Manuel Moreale on his recent post about search and the future of the web:

I think curation, actual human curation, is going to play an important role in the future. In a web filled with generated nonsense, content curated by knowledgeable human beings is going to be incredibly valuable.

Ben Thompson has been arguing this point too: in a world of AI generating undifferentiated “content”, that which has the human element, either in its creation or curation, will stand apart. He says he’s betting his career on this belief. I think it’s a bet worth taking.

How is it that it’s become so natural to write about stuff here, yet I’m freezing in my boots drafting up an email to a blogger in response to a call for some feedback?

Love that NetNewsWire has a setting to open links in Safari instead of the built-in WebView. Very useful for articles which require an active login session, which I’m more likely to have in Safari. To enable, go to Settings and turn off “Open Links in NetNewsWire”.

Screenshot of a portion of NetNewsWire’s iOS settings with “Open Links in NetNewsWire” turned off

Never thought I’d be desperate enough for food and money that I’d be forced to learn everything there is to know about authentication, OAuth, and SSO, but here we are. 🤓

P.S. I’m trying to be droll here. Please don’t test me on my knowledge of OAuth or SSO. 😅

Writing Good Data Migration Scripts

I’m waiting for a data migration to finish, so I’ve naturally got migration scripts on my mind.

There’s an art to writing a good migration script. It may seem that simply throwing together a small Python script would be enough; and for the simpler cases, it very well might be. But it’s been my experience that running the script in prod is likely to be very different from doing test runs in dev.

There are a few reasons for this. For one thing, prod is likely to have way more data, so there will always be more to migrate. And dealing with production data is always going to be a little more stressful than in non-prod, especially when you consider things like restricted access and the impact of things going wrong. So a script with a better “user experience” is always going to beat one slapped together.

So without further ado, here are the attributes that I think make for a good migration script:

  1. No migration script — If you can get away with not writing a migration script, then this is the preferred option. Of course, this will depend on how much data you’ll need to migrate, and how complicated keeping support for the previous version in your code-base would be. If the amount of data is massive (we’re talking millions or hundreds of millions of rows), then this is probably your only option. On the other hand, if there are a few hundred or a few thousand rows, then it’s probably just worth migrating the data.
  2. Always indicate progress — You’re likely going to have way more data in prod than in your dev environments, so consider showing the ongoing progress of the running script. If there are multiple stages in the migration process, make sure you log when each stage begins. If you’re running a scan or processing records, then give some indication of progress through the collection of rows. A progress bar is nice, but failing that, include a log message, say, every 1,000 records or so.
  3. Calculate the expected migration size if you can — If it’s relatively cheap to get a count of the number of records that need to be migrated, then it’s helpful to report this to the user. Even an estimate would be good, just to give a sense of magnitude. If it’d be too expensive to do so, then skip it: better to just get migrating rather than have the user wait for a count.
  4. Silence is golden — Keep logging to the screen to a minimum: mainly progress indicators, plus any serious warnings or errors. Avoid bombarding the user with spurious log messages. They want to know when things go wrong; otherwise, they just want to know that the script is running properly. That said:
  5. Log everything to a file — If the script migrates data but ignores records that have already been migrated, then log which records were skipped. What you’re looking for is assurance that all records have been dealt with, meaning that any discrepancy in the summary report (such as max records encountered vs. max records migrated) can be reconciled with the log file.

May your migrations be quiet and painless.