Long Form Posts

    First encounters with GitHub (and Substack)

    All these new Substack newsletters I’m seeing remind me of my first encounter with GitHub.

    Back in 2009, I was checking out the source code of an open-source library we were using. Clicking on the link to the source brought me to this strange, new code-hosting service that I’d never seen before. As someone who was used to the heaviness that was SourceForge, or the boring uniformity that was Google Code, the experience felt very minimal and slightly unintuitive. It took me a while, for instance, to realise that the version tags were selectable from a drop-down list. I also thought that it was quite restrictive to only offer checking out the source code with this weird SCM client called “git”. The whole experience left me thinking of this website as rather niche, and I never really expected to see it that often, given that SourceForge and Google Code reigned supreme at the time.

    I held this belief for a year or two. Then, as I continued to deal with different open-source projects, I started noticing more and more of them showing up on this weird new service. This was infrequent at first: maybe around one in ten projects or so. But the ratio was shifting faster and faster, soon becoming one in eight projects, then one in five. Around this time, GitHub was starting to gain momentum in the technological zeitgeist: projects announced that they were moving off SourceForge and migrating their codebases to GitHub. Then Google Code announced that it was shutting down and that it was migrating projects over to GitHub. Eventually, a tipping point was reached, and GitHub was the code-hosting service for pretty much every project I encountered.

    My experience with Substack was similar, except on a much shorter timescale. I remember subscribing to my first Substack newsletter back in 2019. I was already a Stratechery subscriber, so the whole email-newsletter thing was familiar to me. But it was another case of being a little unimpressed with the experience (difficult to read online, impossible to get as RSS, weird font choices, etc.), so I expected the platform to remain relatively niche. Contrast that to today, where every fourth or fifth link I see is to a Substack newsletter, and not a month goes by without a new Substack publication being announced.

    There’s no real lesson to this (or if there is, I’m too dense to see it). Just the amusing observation of first encountering something that, to you, seems rather niche and unusual until one day, faster than you can blink, it is the only thing anyone uses.

    A Quick Review of the Year

    Here are a few words about the year gone by, and what I’m hoping to focus on in the year ahead. It’s not a full “year in review” type post, although there’s a bit of that, and there’s no dramatic insight or anything of that nature. It’s more of a chance for reflection, plus a bit of a document for future me on what the year was like.

    Personally, as difficult as this past year was, I wouldn’t necessarily say 2020 was a bad year. I know I’m saying this from a position of good fortune: I didn’t lose my job, or my house, or my health. So I know there are a lot of others who have experienced a much worse year than I have. But for me, I’m coming out of this year feeling a little better than I have the previous couple of years.

    I think a big reason for this was that I was forced to change my routine. I’m someone who can operate on a routine that doesn’t change for a long period of time. There are benefits to this, but it does mean that every passing year feels more-or-less like the previous one, and I always find myself feeling a little bad on New Year’s Eve for not “doing something different”. Being forced to change my routine by the pandemic and the resulting lock-downs was a small positive that came out of an otherwise bad situation. It did mean that I could no longer rely on the various chronological anchors, like birthdays or even going to the office, that are useful for experiencing the passage of time, resulting in a year that felt like it was passing too slowly and too quickly at the same time. But it also added some variety to the activities that got me through the day, which made the year slightly more novel than the previous few.

    Working from home also provided an opportunity to try something new. The whole working-from-home experience was something I was curious about, and I am glad that I had an opportunity to try it out. It worked out better than I expected: although there were times when I really did miss working closely with other people, I found that I could work effectively at home as long as I had work to do. The lack of a commute also meant that I had more time on my hands. I’m glad that I didn’t just spend that time coding on personal projects or vegging out on the couch (although there was a bit of that as well). I joined Micro.blog, which was probably one of the best decisions I’ve made this year. I also learnt a lot about writing and creativity, and I got back into music composition, something that I had neglected for a while.

    Writing and publishing is something that I’m hoping to continue in 2021. I hope to set up new routines and systems to write more often, both on this Micro.blog and on another writing project that I’m in the process of starting. I’m trying a few things to keep myself true to this, like keeping a daily log (which is private at the moment, but I’m hoping to make it public eventually). Publishing things online is something I still need to get more comfortable with, and it’s an area I’m hoping to improve. Although I’m not seriously following the yearly theme system, if I had to choose, I plan to make this one a year of “sharing”1.

    Hope you all have a Happy New Year and here’s to a great 2021.


    1. Incidentally, the theme for 2020 was “chance”, which became a little morbid as the year went on. ↩︎

    Vivaldi - My Recommended Alternative to Chrome

    I’m seeing a few people on Micro.blog post about how Chrome Is Bad. Instead of replying to each one with my recommendations, I figured it would be better just to post my story here.

    I became unhappy with Chrome about two years ago. I can’t remember exactly why, but I know it was because Google was doing something that I found distasteful. I was also getting concerned about how much data I was making available to Google in general. I can prove nothing, but something about using a browser offered for free by an advertising company made me feel uneasy.

    So I started looking for alternatives. The type of browser I was looking for needed the following attributes:

    • It had to use the Blink rendering engine: I liked how Chrome rendered web pages.
    • It had to offer the same developer tools as Chrome: in my opinion, they are the best in the business.
    • It had to run Chrome plugins.
    • It had to be a bit more customisable than Chrome was: I was getting a little bored with the Chrome’s minimalist aesthetic, and there were a few niggling things that I wanted to change.
    • It had to have my interests at heart: so no sneaky business like observing my web browsing habits.

    I heard about the Vivaldi browser from an Ars Technica article and decided to give it a try. The ambition of the project intrigued me: building a browser based on Chromium but with a level of customisation that can make the browser yours. I use a browser every day, so the ability to tailor the experience to my needs was appealing. First impressions were mixed: the UI is not as polished as Chrome’s, and seeing various controls everywhere was quite a shock compared to the minimalism that Chrome offered. But I eventually set it as my default browser, and over time I grew to really like it.

    I’m now using Vivaldi on all my machines, and on my Android phone as well. It feels just like Chrome, but offers so much more. I’ve come to appreciate the nice little utilities that the developers added, like the note-taking panel, which comes in really handy for writing Jira comments that survive when Jira decides to crash. It’s not as stable as Chrome: it feels a tad slower, and it does occasionally crash (this is one of those browsers that needs to be closed at the end of the day). But other than that, I’m a huge fan.

    So for those looking for an alternative to Chrome that feels a lot like Chrome, I recommend giving Vivaldi a try.

    Revisiting the decision to build a CMS

    It’s been almost a month since I wrote about my decision to write a CMS for a blog that I was planning. I figured it might be time for an update.

    In short, and for the second time this year, I’ve come to the conclusion that maintaining a CMS is not a good use of my time. The largest issue was the amount of effort that would have been needed to work on the things that don’t relate to content, such as styling. I’m not a web designer, so building the style from scratch would have taken a fair amount of time, which would have eaten into the time I would have spent actually writing content. A close second was the need to add features that the CMS was missing, like the ability to add extra pages, and an RSS feed. Doing this properly, without taking any shortcuts, would also have resulted in less time spent on content.

    The final issue is that the things I was trying to optimise for turned out not to be as big a deal as I first thought, such as the “theoretical ability to blog from anywhere”. I’ve tried using the CMS on the iPad a few times, and although it worked, the writing experience was far from ideal, and making it so would have meant putting more effort into the CMS. I’ve also discovered that I prefer working on the blog from my desktop, as that’s where I’m more likely to be in the right state of mind to do so. Since my desktop already has tools like Git, I already had what I needed to theoretically blog from anywhere, and it was no longer necessary to recreate this capability within the CMS itself.

    So I’ve switched to a statically generated Hugo site served using GitHub Pages. I’m writing blog posts using Nova, which is actually a pretty decent tool for writing prose. To deploy, I simply generate the site using Hugo’s command line tool, and commit it all to Git. GitHub Pages does the rest.

    After working this way for a week and a half, it turns out that I actually prefer the simplicity of this approach. At this stage, both the CMS and the blog it would have powered have been in the works for about a month. The result is zero published posts, and the blog will probably not be launched at all1. Using the new approach, the new blog is public right now and I have already written 5 posts on it within the last 11 days, which I consider a good start.

    We’ll see how far I go with this new approach before I consider a custom CMS again, but it’s one that I’ve managed to get some traction with, particularly since it has actually resulted in something. It’s also an approach I will adopt for new blogs going forward.


    1. This is not wholly because of the CMS: the focus of the blog would have been a bit narrower than I would have liked; but the custom CMS did have some bearing on this decision. ↩︎

    A Brief Look at Stimulus

    Over the last several months, I’ve been doing a bit of development using Buffalo, which is a rapid web development framework in Go, similar to Ruby on Rails. Like Ruby on Rails, the front-end layer is very simple: server-side rendered HTML with a bit of jQuery augmenting the otherwise static web-pages.

    After a bit of time, I wanted to add a bit of dynamic flair to the frontend, like automatically fetching and updating elements on the page. These projects were more or less small personal things that I didn’t want to spend a lot of time maintaining, so doing something dramatic like rewriting the UI in React or Vue would have been overkill. jQuery was available to me, but using it always required a bit of boilerplate to set up the bindings between the HTML and the JavaScript. Also, since Buffalo uses Webpack to produce a single, minified JavaScript file that is included on every page, it would be nice to have a mechanism to selectively apply the JavaScript logic based on attributes in the HTML itself.

    I have since come across Stimulus, which looks to provide exactly what I was looking for.

    A Whirlwind Tour of Stimulus

    The best place to look if you’re interested in learning about Stimulus is the Stimulus handbook, or, for those who prefer a video, there is one available at Drifting Ruby. But to provide some context for the rest of this post, here’s an extremely brief introduction to Stimulus.

    The basic element of an application using Stimulus is the controller, which is the JavaScript aspect of your frontend. A very simple controller might look something like the following (this example was taken from the Stimulus home page):

    // hello_controller.js
    import { Controller } from "stimulus"
        
    export default class extends Controller {
      static targets = [ "name", "output" ]
        
      greet() {
        this.outputTarget.textContent =
          `Hello, ${this.nameTarget.value}!`
      }
    }
    

    A controller can have the following things (there are more than just these two, but these are the minimum to make the controller useful):

    • Targets, which are declared using the static targets class-level attribute, and are used to reference individual DOM elements within the controller.
    • Actions, which are methods that can be attached as handlers of DOM events.

    The link between the HTML and the JavaScript controller is made by adding a data-controller attribute within the HTML source, and setting it to the name of the controller:

    <div data-controller="hello">
      <input data-hello-target="name" type="text">  	
      
      <button data-action="click->hello#greet">Greet</button>
    
      <span data-hello-target="output"></span>
    </div>
    

    If everything is set up correctly, then the controllers should be automatically attached to the HTML elements with the associated data-controller annotation. Elements with data-*-target and data-action attributes will also be attached as targets and actions respectively when the controller is attached.

    There is some application setup that is not included in the example above. Again, please see the handbook.

    Why Stimulus?

    Far be it from me to suggest yet another frontend framework in an otherwise large and churning ecosystem of web frontend technologies. However, there is something about Stimulus that seems appealing for small projects that produce server-side rendered HTML. Here are the reasons it attracts me:

    1. It was written by, and is used in, Basecamp, which gives it some industry credibility and a high likelihood that it will be maintained (in fact, I believe version 2.0 was just released).

    2. It doesn’t promise the moon: it provides a mechanism for binding to HTML elements, reacting to events, and maintaining a bit of state in the DOM, and that’s it. No navigation, no pseudo-DOM with diffing logic, no requirement to maintain a global state with reducers, no templating: just a simple way of binding to HTML elements.

    3. It plays nicely with jQuery. This is because the two touch different aspects of web development: jQuery provides a nicer interface to the DOM, while Stimulus provides a way to easily bind to DOM elements declared via HTML attributes.

    4. That said, it doesn’t require jQuery. You are free to use whatever JavaScript framework you need, or no framework at all.

    5. It maintains the relationship between HTML and JavaScript, even if the DOM is changed dynamically. For example, modifying the innerHTML to include an element with an appropriate data-controller attribute will automatically set up a new controller and bind it to the new element, all without you having to do anything yourself.

      It doesn’t matter how the HTML gets to the browser, whether it’s AJAX or front-end templates. It will also work with manual DOM manipulation, like so:

      let elem = document.createElement("div");
      elem.setAttribute("data-controller", "hello");
      
      // Append to the body; appending directly to `document` would throw.
      document.body.append(elem);
      

      This allows a dynamic UI without having to worry about making sure each element added to the DOM is appropriately decorated with the logic that you need, something that was difficult to do with jQuery.

    Finally, and probably most importantly, it does not require the UI to be completely rewritten in JavaScript. In fact, it seems to be built with this use case in mind. The tag-line on the site, A modest JavaScript framework for the HTML you already have, is true to its word.

    So, if you have a web-app with server-side rendering, and you need something a bit more (but not too much more) than what jQuery or the native DOM provides, this JavaScript framework might be worth a look.

    Some uninformed thoughts about Salesforce acquiring Slack

    John Gruber raised an interesting point about the future of Slack after its purchase by Salesforce:

    First, my take presupposes that the point of Slack is to be a genuinely good service and experience. […] To succeed by appealing to people who care about quality. Slack, as a public company, has been under immense pressure to do whatever it takes to make its stock price go up in the face of competition from Microsoft’s Teams.

    […]

    Slack, it seems to me, has been pulled apart. What they ought to be entirely focused on is making Slack great in Slack-like ways. Perhaps Salesforce sees that Slack gives them an offering competitive to Teams, and if they just let Slack be Slack, their offering will be better — be designed for users, better integrated for developers.

    When I first heard the rumour that Salesforce was buying Slack, I really had no idea why they would. The only similarity between the markets the two operate in is that they are things businesses buy, and I saw no points of synergy between the two products that would make this acquisition worth it.

    I’m starting to come round to the thinking that the acquisition is not to integrate the two products, at least not to the degree I was fearing. I think Gruber’s line of thinking is the correct one: that Salesforce recognises that it’s in their interest to act as Slack’s benefactor to ensure that they can continue to build a good product. Given that Salesforce has bought Tableau and Heroku and more-or-less left them alone, there’s evidence that the company can do this.

    As to what Salesforce gets out of it, Jason Calacanis raises a few good reasons in his Emergency Pod on the topic around markets and competition. Rather than attempt to explain them, I recommend that you take a listen and hear them from him.

    Why I'm Considering Building A Blogging CMS

    I’m planning to start a new blog about Go development, and one of the things I’m currently torn on is how to host it. The choice looks to be either using a service like blot.im or micro.blog or some other hosting service, using a static site generation tool like Hugo, or building my own CMS for it. I know that one of the things people tell you about blogging is that building your own CMS is not worth your time: I myself even described it as the “second cardinal sin of programming” in my first post to micro.blog.

    Nevertheless, I think at this stage I will actually do just that. Despite the effort that comes with building a CMS, I see some advantages in doing so:

    1. The (theoretical) ability to blog from anywhere: This is one of the weaknesses of static site hosting that I’ve run into when trying this approach before. I tend to work on different machines throughout the week, which generally means that when I find inspiration, I don’t have the source files on hand to work with. The source files are kept in source control and hosted on GitHub, but I’ve found that I tend to be quite lax with making sure I have the correct version checked out and that any committed changes are pushed. This is one of the reasons why I like micro.blog: having a service with a web interface that I can just log into, with all the posts there, means that I can work on them as long as I have an internet connection.
    2. Full control over the appearance and workflow: Many of the other services provide the means for adjusting the appearance of the web-page, so this is only a minor reason for taking on this effort. But one thing I would find useful is having some control over the blogging workflow itself. There are some ideas that I might like to include, like displaying summaries on the main page, or sharing review links for posts prior to publishing them. Being able to easily do that in a codebase that I’m familiar with would help.
    3. Good practice for my skills: As someone who tends to work on backend systems in his day-to-day job, some of my development and operational experience is a little rusty. Building, hosting and operating a site would provide an opportunity to exercise these muscles, and may also come in handy if I choose to build something for others to use (something that I’ve been contemplating for a while).

    Note that price is not one of these reasons. In fact it might actually cost me a little more to put together a site like this. But I think the experience and control that I hope to get out of this endeavour might be worth it.

    I am also aware of some of the risks of this approach. Here is how I plan to mitigate them:

    1. Security and stability: This is something that comes for free with a blogging platform, but that I’ll need to take on myself. There’s always a risk in putting a new system onto the internet, and having a web site with remote administration is an invitation for others to abuse it. To me, this is another area of development I need to work on. Although I don’t intend to store any personal information other than my own, I do have to be mindful of the risks of putting anything online, and make sure that the appropriate mitigations are in place. I’ll also have to make sure that I’m maintaining proper backups of the content, and periodically exercising them to make sure they work. The fact that my work is at stake is a good incentive to keep on top of this.
    2. Distractions: Yeah, this is a classic problem of mine: I use something that I’ve built, I find a small problem or something that can be improved, and then instead of finishing the task at hand, I work on the code for the service instead. This may be something that only gets addressed with discipline. It may also help to use the CMS on a machine that doesn’t have the source code.

    I will also have to be aware of the amount of time I put into this. I actually started working on a CMS several months ago, so I’m not starting completely from scratch, but I’ve learnt from too many of my other personal projects that maintaining something like this is a long-term commitment. It might be fine to occasionally tinker with it, but I cannot spend too much effort working on the system at the expense of actually writing content.

    So this is what I might do. I’ll give myself the rest of the month to get it up to scratch, then I will start measuring how much time I spend working on it versus the amount of time I actually use it to write content. If the time I spend working on the code base is more than 50% of the time I spend writing content, then that will indicate to me that it’s a distraction, and I will abandon it for an alternative setup. To keep myself honest, I’ll post the outcomes of this on my micro blog (if I remember).

    A few other minor points:

    • Will this delay publishing of the blog? No. The CMS is functionally complete but there are some rough edges that I’d like to smooth out. I hope to actually start publishing this new blog very shortly.
    • Will I be moving the hosting of this blog onto the new CMS? No, far from it. The service here works great for how I want to maintain this blog, and the community aspects are fantastic. The CMS also lacks the IndieWeb features that micro.blog offers, and it may be some time before they get built.

    I’d be interested to hear if anyone has any thoughts on this, so feel free to reply to this post on micro.blog.

    Update: I’ve posted a follow-up on this decision about a month after writing this.

    An anecdote regarding the removal of iSH from the App Store

    Around April this year, my old Android Nexus 9 tablet was becoming unusable due to its age, and I was considering which tablet to move to next. I have been a user of Android tablets since the Nexus 7, and I have been quite happy with them (yes, we do exist). However, it was becoming clear that Google was no longer interested in maintaining first-party support for Android on tablets, and none of the other brands that were available were very inspiring.

    I took this as a sign to move to a new platform. It was a toss-up between a Chromebook and an iPad. I understood that it was possible to get a Linux shell on a Chromebook by enabling developer mode, so for a while I was looking at possible Chromebooks that would work as a tablet replacement. But after hearing about iSH on an ATP episode, I got the impression that it would be possible to recreate the same thing with nicer hardware and a nicer app ecosystem, despite how locked down the platform is. So the iPad won out.

    Had I known at the time that Apple would be removing iSH from the App Store (via Twitter), I’m not sure I would have bought the iPad, and I would have probably gone with the Chromebook.

    Tracking Down a Lost Album

    Here’s a short story about my endeavours to find an album that seems to have disappeared from the face of the internet. I’m a bit of a sucker for original soundtracks, particularly instrumental ones. One that I remember being very good is the music from The Private Life of Plants, a documentary series from David Attenborough made in the mid 1990s. It was one of those soundtracks that occasionally popped into my mind, particularly when looking at lovely autumn leaves or other scenes from the show. But it had been a while since I last watched it, and I never thought to check whether an album of the soundtrack actually existed.

    It was only earlier this year that I discovered this was a possibility. I was watching Curb Your Enthusiasm, and one episode featured a scene with music from the documentary series in the background. I recognised it immediately, and after some quick searches online, I discovered that there did exist, at one point, an album of the soundtrack.

    I started looking around to see if it was available to listen to. I started with Spotify, the music service that I’m subscribed to, but searches there did not return any results. I then went to the other streaming services, like Apple Music and Amazon Music, but there was no luck there either. I then started looking to see if I could get the physical CD. I looked on Amazon, eBay, JB Hi-Fi and even Sanity, a music shop that is still operating here in Australia. None of these sites turned up anything indicating that this album was available. I then tried my local library, the online ABC shop, and the BBC shops, but those turned up no results as well. It looked like this album was no longer available for sale anywhere.

    I then started making generic web searches on Google and DuckDuckGo. There were very few hits, most of them referencing the documentary series itself. It was here that I started venturing into the abandoned areas of the web, with old pages, riddled with ads, that are barely functioning at all. I found a last.fm page for the composer which looked to have the track list of the album, but attempting to play the tracks through the in-browser player only produced errors. Going further through the abandoned web, I found an old download site which looked to have links to some of the tracks on the album. After following the links, however, it turned out the site had since stopped operating: the links only produced 404 Not Founds, and attempts to go to the main site only produced a page indicating that the domain was for sale.

    It was then that I remembered the Wayback Machine, and I went there to see if it was possible to get to an archived version of the site. Sure enough, there existed a snapshot of the site from 2006. The site itself looked to be an old online music store that at one time offered the album for sale. The album page was there and had been indexed by the Wayback Machine. Better still, the site had posted 5 of the tracks online, I’m guessing as samples, which were also indexed and available for download. Success! I was able to download the 5 sample tracks from the Wayback Machine and play them on my computer.

    I don’t know if there’s a moral to this story. I guess if it’s anything, it’s that preserving these archives is important, especially for media under the control of gatekeepers that can pull it from distribution at any time. It’s certainly made me appreciate the important work that the Internet Archive does, and I have since made a small donation to them to allow that work to continue.

    I think it’s also fair to say that this story is not yet over. I don’t care how long it takes me, I’ll continue to track this album down until I’ve found it and am able to play it in its entirety.

    Official support for file embedding coming to Go

    I’m excited to see, via Golang Weekly, that official support for embedding static files in Go is being realised, with the final commit merged into the core a couple of days ago. This, along with the new file-system abstraction, will mean that it will be much easier to embed files in Go applications, and to make use of them within the application itself.

    One of the features I like about coding in Go is that the build artefacts are statically linked executables that can be distributed without any additional dependencies. This means that if I want to share something, all I need to do is give someone a single file that they can just run, without needing to worry about whether particular dependencies or runtimes are installed first.

    However, there are times when the application requires static files, and to maintain this simple form of distribution, I generally want to embed these files within the application itself. This comes up surprisingly often and is not something that was officially supported within the core language tools, meaning that this gap had to be filled by third parties. Given the number of tools available to do this, I can see that I’m not alone in needing this. And as great as it is to see the community step in to fill this gap, relying on an external tool complicates the build process a bit1: making sure the tool is installed, making sure that it is executed when the build runs, making sure that the tool is actively maintained so that changes to the language will be supported going forward, etc.

    One other thing about these tools is that the API used to access the embedded files is slightly different for each one, meaning that if you need to change tools, you also end up changing the code that accesses those files.

    Now that embedding files is officially coming to the language itself, there is less need to rely on all of this. There’s no need to worry about various tools being installed on the machines that are building the application. And the fact that this feature will work hand in hand with the new file system abstraction means that embedded files would be easier to work with within the code base itself.
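
    To give a sense of what this looks like, here’s a minimal sketch based on the accepted proposal and the merged commits. The details may still shift before the release, and the static directory and port are just placeholders: a //go:embed directive populates an embed.FS, which plugs straight into the new fs.FS abstraction and can, for example, be served over HTTP.

    package main
    
    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )
    
    // The directive below tells the Go tool to embed everything under
    // the static directory into the compiled binary.
    //go:embed static
    var staticFiles embed.FS
    
    func main() {
        // embed.FS implements fs.FS, so it works with the new file-system
        // abstraction. Strip the "static/" prefix so that a request for
        // /style.css serves static/style.css from inside the binary.
        sub, err := fs.Sub(staticFiles, "static")
        if err != nil {
            log.Fatal(err)
        }
    
        http.Handle("/", http.FileServer(http.FS(sub)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }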

    So kudos to the core Go development team. I’m really looking forward to using this.


    1. Although not as much as many other languages. ↩︎

    Advice to those working with annotations in Preview

    For those of you using Preview in macOS for viewing an annotated PDF, if you need to move or delete the annotations in order to select the text, be sure to undo your changes prior to closing Preview. Otherwise your changes will be saved without asking you first.1

    This just happened to me. I had a PDF annotated with edits made with the iPad pencil, and I wanted to copy the actual text. The annotations seemed to sit on top of the text in an image layer, which meant that in order to select the text, I had to move or delete this layer first. I didn’t want the annotations mixed up with the ones on the other page, so I decided to delete this layer instead of moving it. This was a mistake.

    I copied the text and wanted to get the annotations back. I probably should have just pressed ⌘Z to undo my changes, but I saw “– Edited” in the title bar, so I assumed that if I just closed Preview, it would discard my changes and I would be able to get my annotations back by reopening the PDF. But it turned out that, after closing it and opening it again, the changes had been saved without asking me first, and my annotations were lost.

    Developers of macOS: this is a terrible user experience that needs to be fixed. Preview saving my changes from under me has now resulted in data loss, the cardinal sin of any software. Either ask me before saving changes when the application is closed, support a notion of versions, or do something else. But do not just save my changes without asking me, and do not imply that Preview is aware of pending changes by having “– Edited” in the title bar if it isn’t going to discard the changes, or confirm that they should be saved when I close the app.

    Ugh, I need another coffee now.


    1. This is macOS Mojave. I hope this has been fixed some way in later versions. ↩︎

    Doughnut Day 2020

    Good day today. From a high of 725 Covid-19 cases on August 25, Victoria has just had 24 hours of zero new cases and zero deaths. This comes during a testing blitz in the north of the metropolitan area, an attempt to contain an outbreak there. Labs have been processing tests late into the night, with not a single one so far coming back positive.

    As good as this news is, I’d imagine the government wants to remain cautious here. The easing of restrictions that was scheduled for yesterday has been delayed, I guess to make sure that contact tracers are on top of things in the north. As disappointing as this is, I can see why they did it. It makes sense to take advantage of the current situation to get as much information about where the virus is as they can. I don’t believe anyone wants to go back into lock-down a third time, so they really have one shot at this. The government is still confident that we are on track for lifting restrictions before November 1st. I guess we’ll see what happens when they give their briefing this morning, but after going through this for 4 months, I can wait a few more days.

    For the moment, it’s good seeing this result. Truth is, things were always touch and go in Victoria, even during the period of relatively free movement that we experienced in June, when we last had a day of zero new cases. Seeing this now, with the restricted movement and the testing blitz, gives me hope that we can keep this virus suppressed until a vaccine arrives.

    Update at 15:45: Results from the testing blitz in the northern suburbs have been trickling in all day, and so far still no positive cases. It looks like the Victorian government is happy with this result, because they have announced that the state will be moving to the 3rd step of re-opening on Wednesday.

    Reflections On Writing On The Web

    I fell into a bit of a rabbit hole about writing and publishing online yesterday after reading this article from Preetam Nath and this article from James Clear. I’ve been thinking about creating and publishing on the web for a little while now, which is probably why these two articles resonated with me.

    These articles highlight the importance of creating and publishing regardless of the topic. There have been a few things I’ve been wanting to share, but I haven’t done so, probably because I worry about what other people will think. The interesting thing about that line of thinking is that I tend to enjoy reading posts from other people as they go about their lives. I guess that was the original intention of blogging anyway.

    There have also been times during the past week that I’ve been craving content, either from Twitter, Micro.blog or the various RSS feeds that I read. And there have been times when I’ve caught up with everything that I follow, and nothing happens for a while. I think to myself, “when will someone post something? I need to be distracted for a while.” I think I need to remember that someone needs to create that content in order for it to be consumed, and although it’s much easier consuming content than it is to produce it, I should not feel entitled to it and expect others to amuse me.

    The interesting thing about these thoughts is that they come amid a confluence of other changes to my daily work setup that have happened recently. I used to write in my Day One journal almost every day, but since moving to Linux for work, that has proved a little difficult to maintain. It might be that more of my journalling will go here instead, given that micro.blog provides a nice, cross-platform interface for writing entries of any size.

    Unit Tests and Verifying Mocks

    I’m working with a unit test that uses mocks, in which every method in the mock is verified after the method under test is called, even if it is not relevant to the test. Furthermore, the tear-down method verifies that every dependent service has no more interactions, which means that removing a verification that is not relevant to a specific test case will cause the test to fail.

    Please do not do this. It makes modifying the tests really difficult, and it results in really long unit tests that hide what the test is trying to assert. It also makes it harder to create new tests to verify a particular behaviour, as you find yourself copying all the verification code that is not relevant to the case you’re trying to test for.

    In my opinion, tests should clearly demonstrate the specific behaviour that you’re trying to verify, and should only include verification of the mocks that are directly related to that behaviour. Writing tests that are effectively photo-negatives of the method being tested, in which the dependent services are verified instead of called, is not a good practice for unit testing.

    Instead, have multiple, smaller unit tests that each assert a particular behaviour, and only verify the mocks that are explicitly required. You gain the coverage by having all these unit tests effectively overlap the various paths a particular method will take. But the important benefit is that it results in more maintainable tests that are easier to work with. That makes it easier to write tests, which means you find yourself doing so more often. Path of least resistance and all that.
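
    To illustrate what I mean, here’s a made-up example in Go (the names are hypothetical, and I’m using a hand-rolled fake rather than a mocking library): the test exercises one behaviour and verifies only the interaction that behaviour is about.

    package billing
    
    import "testing"
    
    // notifier is the dependency the code under test calls out to.
    type notifier interface {
        Notify(msg string)
    }
    
    // chargeCustomer stands in for the method under test.
    func chargeCustomer(n notifier, customerID string) error {
        n.Notify("charged " + customerID)
        return nil
    }
    
    // fakeNotifier is a hand-rolled test double that records only the
    // thing this test cares about: the notifications that were sent.
    type fakeNotifier struct {
        sent []string
    }
    
    func (f *fakeNotifier) Notify(msg string) { f.sent = append(f.sent, msg) }
    
    func TestChargeNotifiesCustomer(t *testing.T) {
        fake := &fakeNotifier{}
    
        if err := chargeCustomer(fake, "cust-1"); err != nil {
            t.Fatalf("chargeCustomer returned error: %v", err)
        }
    
        // Verify only the interaction this test is about. Unrelated
        // dependencies get their own focused tests, rather than a blanket
        // "no more interactions" check in a shared tear-down.
        if len(fake.sent) != 1 {
            t.Fatalf("expected 1 notification, got %d", len(fake.sent))
        }
    }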

    A Database Client Wishlist

    I’ve recently started a new job, so I’ve been spending a bit of time trying to become familiar with how the relational databases are structured. Usually when I’m doing any database work, I tend to use CLI clients like mysql or psql. I tend to prefer them, not only because they’re usually easy to use over SSH, but because the REPL is a nice interaction model when querying data: you type a query, and the results appear directly below it. The CLI tools do have a few drawbacks though. Dealing with large result sets or browsing the schema tends to be harder, which makes things difficult when dealing with an unfamiliar database.

    So I’ve been finding myself using the GUI database browsers more, like DataGrip or MySQL Workbench. It is much easier and nicer to navigate the schema using these, along with dealing with large result sets, but they do remove the connection between a query and the associated results. The queries are usually entered in an editor-like console, like those used to enter code, and the results appear in another window panel, or in a separate tab. This mode of interaction loses the recency and locality between the query and the results that you get from a CLI.

    While working with both of these tools and seeing their shortcomings, I’ve been casually wondering what a decent database client would have. I think it would need these attributes in some prominent way (this covers the complaints listed above, but also addresses some other things I think would help):

    Results appearing below queries: I think this tool will need an interaction model similar to the CLI tools. There is so much benefit in seeing the results directly below the query that produced them. Anything other than this is likely to result in a situation where I’ll be looking at seven different queries and wondering which of them produced the single result set that I see.

    An easy way to view, filter and export large result sets: Although the interaction should be closer to the CLI, there must be a way to deal with large queries and result sets, something that the GUI tools do really well. There should also be a way to export them to CSV files or something similar, without having to remember the appropriate copy command.

    Some sort of snippet support and persistent scroll-back: This one can best be summarised as “whatever you find yourself copying and pasting into notepad”. The ability to store snippets and saved queries will save time trying to find or rewrite big, complex queries. And a persistent scroll-back of previously executed queries, with their results, will help with maintaining my train of thought while investigating something. This comes in handy especially when the investigation spans multiple days.

    A quick way to annotate queries or results: Big long SQL queries eventually look the same to me after a while, so it would also be nice to add inline comments or notes to remind myself what the results actually represent.

    An easy way to browse the schema: This could be a tree-like structure similar to all the GUI tools, which would make browsing the schema really easy. At a minimum, there should be a consistent set of meta-commands, such as listing the tables in a database or describing a table’s columns, etc.

    An easy way to run automation tasks: Finally, some form of scripting language to be able to “orchestrate” multiple queries without having to formulate one large SQL query, or copy and paste result sets around. It’s true that writing an external tool to do this is also possible, but avoiding the context switch would be a huge benefit if this was available from within the app. It doesn’t have to be full-featured either; in fact, it’s probably better if it isn’t.
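
    For comparison, this is roughly what the “external tool” alternative looks like today: a throwaway Go program using database/sql to chain two queries together. The driver, connection string, and table names are all placeholders; the point is the context switch this involves compared with doing it from within the client.

    package main
    
    import (
        "database/sql"
        "fmt"
        "log"
    
        _ "github.com/lib/pq" // hypothetical choice of Postgres driver
    )
    
    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    
        // First query: collect the IDs of interest.
        rows, err := db.Query(`SELECT id FROM orders WHERE status = 'failed'`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
    
        var ids []int
        for rows.Next() {
            var id int
            if err := rows.Scan(&id); err != nil {
                log.Fatal(err)
            }
            ids = append(ids, id)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    
        // Second query: feed each ID into a follow-up lookup, instead of
        // copying and pasting result sets between console windows.
        for _, id := range ids {
            var email string
            if err := db.QueryRow(`SELECT customer_email FROM orders WHERE id = $1`, id).Scan(&email); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%d\t%s\n", id, email)
        }
    }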

    It would be interesting exploring this further. I think the last thing I need now is another project to work on, but maybe over the weekend I might start prototyping this to see if the workflow makes sense.

    Sharing links to private podcast episodes

    There have been times when I’ve wanted to share a link to an episode of a podcast that I pay for, but I’m hesitant to do so as the feed is private and unique to my account. The episode is also available in the public feed, but has been trimmed as an incentive for listeners to pay for the show. I can always find the episode in the public feed and share that, but I’m wondering if there’s a better way to handle this.

    How do other podcast listeners share links to episodes from private feeds that also have a public version? Is there something in the RSS standard1 that allows for podcast producers to link a private episode to the same one in the public feed? If so, do the major podcast players, specifically Pocketcasts, honour this link when sharing an episode?

    I’m asking as a podcast listener: I don’t have a podcast myself (yet).


    1. “Standard” is probably not the right word here but let’s go with it for the moment. ↩︎

    Let's hold the line, Melbourne. We've got this.

    Today is a good day. Melbourne’s 14-day daily Covid-19 case average is now 29.4, which is below the 30-to-50 band required to move to the next stage of reopening. Seeing the fruits of our collective sacrifice, bringing the daily case numbers from a peak of around 740 in August down to the 11 we saw on Monday, makes me proud to be a Melburnian.

    As much as I’d like things to reopen sooner than planned, I think we should hold the line for as long as we possibly can. The potential prizes for doing so – the crushing of the virus, the ability to travel interstate again, the chance to eat at restaurants without fear of infection, the chance for a normal-ish Christmas and summer – are within reach. I know that’s easy for me to say as someone who has the ability to work from home, and I completely recognise that there are those who are suffering right now, unable to work at all. But just as the darkest hour is before the dawn, so too will the taste of victory and accomplishment be sweetest when we finally crush this virus and meet the rest of the country where they are. To rush this, to reopen too early and see our efforts thrown away, would be upsetting.

    Let’s hold out that little bit longer, Melbourne. We’ve got this.

    Getting screen capture working in Vivaldi on Fedora 32

    Moving from a Mac Pro back to Linux for work, I’ve come to appreciate how well things just work out of the box in macOS. Things like WebRTC display capture, which is used for sharing the screen in browser-based video conferencing sites (and I think also in Slack, since it’s using Electron and, thus, the Blink rendering engine), work flawlessly in macOS, but proved to be a bit of trouble in Linux.

    From my limited reading, it looks like this might be related to the use of Wayland, the new user-space video stack that is currently being built, and its corresponding security model. Alongside it is a new mechanism for acquiring audio and video feeds called PipeWire, but support for it is not enabled by default in Vivaldi, the browser I’m using.

    Using the instructions found here, I think I’ve managed to fix this by:

    1. Going to chrome://flags
    2. Enabling “WebRTC PipeWire support”
    3. Restarting Vivaldi

    I then went to a test WebRTC site to verify that it works (there is also one on MDC). After going through some security prompts to allow sharing of the screen, I was now able to see my desktop being displayed back to me.

    I’m not sure how I can fix this in Electron apps like Slack. Prior to this fix, Vivaldi did allow sharing of individual windows, but this doesn’t seem possible in Slack at the moment. If I find a fix for this, I might update this post.

    First Foray Into Home Automation

    After recently changing jobs, I’ve received a brand new Lenovo work laptop. As good as the laptop is (and it’s OK for a work laptop), it has one annoying feature: whenever the laptop is plugged in and powered, a bright white LED is always illuminated. Because I’m still working from home (and it’s likely that after the pandemic I will be working from home at least a few days a week) and my desk is in my bedroom, this white LED is no good for my sleep.

    For the first few evenings, I’ve been unplugging the laptop prior to going to bed. I’d rather not use electrical tape to block out the LED: this is not my laptop and tape would be ugly, and the LED itself is close to other ports, which would make tape placement a bit awkward. Plus, the LED does serve the useful purpose of indicating that the laptop is powered; it’s just not useful to indicate this at night. Unplugging the laptop works, but I’m not too keen on this solution long term: it’s only going to be a matter of time before I unplug it one evening, forget to plug it back in the next day, and eventually run out of juice when I need it the most.

    Another solution for this problem is a dumb timer. I do own one (a timer with a circular clock that is configured by pressing in a black nub for each 15 minutes that you want the plug to be energised) and it could work in this scenario, but it has some awkward properties. The principal one is that I’d like to ensure the laptop is powered when I’m using it, and there could be times when I’m using it during the hours that I’m usually asleep, like when I’m responding to incidents or working late. The timer does have an override, but it’s along the side of the plug itself, so in those cases I’d have to get under my desk to turn it on.

    So I decided to take this opportunity to try out some home automation.

    The Smart Plug

    The way I plan to tackle this problem is by doing the following:

    • Connecting the laptop to a smart plug
    • Setting up a schedule so that the smart plug will automatically turn off at 10:00 in the evening, and on at 6:30 in the morning
    • Having a way to override the schedule if I need to turn the plug on outside those hours

    The smart plug chosen for this is the TP-Link HS-100 Smart Wi-Fi Plug. It was not my first choice, but it was in stock and was delivered within a few days, way before the expected delivery date (good job, Australia Post).

    The TP-Link HS 100 Smart

    The plugs themselves are nothing remarkable. It’s just a standard plug, with an LED indicating the current state of the plug and its Wi-Fi connectivity. They’re a little bulky, and they do encroach a bit on some of the adjacent plugs: I needed to move a few plugs around in the power board that I’m using. Fortunately, there is some clearance between the prongs and the actual body of the device, which made it possible to position it so that it overlaps some of the other plugs with slimmer profiles. The relay within the plug is much quieter than I expected, which was a nice surprise.

    Linking the smart plug up to the Wi-Fi was relatively painless, however I did need to download an app and create a new TP-Link Kasa Smart account. During the actual on-boarding, the app also asked for my location for some reason. It could have been to configure time-zones? I don’t know, but it would have been nice for the app to disclose why it needed my location. After that, it was more or less what you’d expect: following the instructions within the app to plug the device in and turn it on, the smart plug started a Wi-Fi hotspot that the phone connected to. Once the pairing was complete, it was possible to turn the device on and off from within the app.

    Google Home with the smart plugs registered

    Setting Up The Schedule

    I first tried setting up the schedule for the smart plug in Google Home. First, I’ve got to say that doing something mildly complicated like this in a mobile app was annoying, and I wish Google published a web or desktop version of their Home management app so I could use a mouse and keyboard. But I had no trouble registering the smart plug in Google Home. It basically involved linking the Kasa Smart account with my Google account, and once that was done, the smart plug could be added to a room and was ready to go.

    Setting up a schedule within Google Home involved creating a new “Scene”, which expected information like trigger words and a spoken response for when the scene ran. There were also some built-in scenes, but they didn’t seem suitable for my use case. The whole thing seems geared towards triggering the scene with a Google Home smart speaker (I just realised that the app and the smart speakers have the same name), and seems to assume that one is available. I don’t have a smart speaker, and the prospect of the Google Assistant speaking whenever the scene is triggered did not appeal to me. It might have been possible to set it up the way I desired, but it felt like my use case was not what this automation system is geared towards, so I abandoned this approach.

    Fortunately the smart plugs integrate with IFTTT, so I turned to that next. After recovering my old account, I set out to configure the schedule.

    Firstly, I have to say that the UX of IFTTT’s site is dramatically different from what I remember, and not in a good way. It seems like they noticed that most of their users were accessing the site from their mobiles, and they redesigned the UI to work for them at the expense of desktop users. They reduced the information density of each page so that it takes three clicks to do anything, and cranked up the font size so much that every label or line of copy is larger than a header. This, mixed with a garish new colour scheme, makes the pages physically hard to look at. I’d recommend that IFTTT’s UX designers reconsider their design decisions.

    Usability aside, setting up the schedule was reasonably straightforward here as well. I first had to link the IFTTT and Kasa Smart accounts, which made the devices selectable within IFTTT. I then went about setting up an applet to turn off the plug at the scheduled time. Initially I set it up to turn it off 15 minutes from the current time, just so that I could test it. It was not successful on the first go and I had to ensure that the plug was selected within the applet correctly; but on the second go, it worked without any problem: at the scheduled time, the plug turned itself off. Most importantly of all, the state of the plug was properly reflected within the Google Home app and I was able to turn it back on from there.

    One last thing about the schedules: IFTTT does not make this clear when you’re setting up the applet, but the dates and times used by an applet are in the time-zone of your account. To check or change it, go to your account profile settings and it should be listed there.

    I then had to create a second applet to turn the plug on at a scheduled time, which was just as easy to do. The entire schedule was set up in a few minutes, minus the test time, with two IFTTT applets. This leaves me with one remaining applet on the free IFTTT plan, which means that I’ll need to consider something else when I set up the other plug.

    IFTTT with the two applets setup

    After testing the entire set up end to end, and confirming that the override works, I reconfigured the schedule for the evening times and it was good to go.

    That evening, the schedule ran without a hitch. The smart plug cut power to the laptop at 10:00 and the LED was extinguished, giving me the much-needed darkness for a good night’s sleep. The next morning at 6:30, the smart plug turned on again and power was restored to the laptop. The only downside is that the smart plug itself has a green LED which, although not as distracting as the one on the laptop, is still visible during the night. Fortunately, this is something I can easily fix with electrical tape.

    Summary

    So far, I’d say this setup has been a success. It’s been two nights now, and in both cases power to the laptop was turned off on schedule and restored the next morning. The LED from the laptop no longer distracts me, and I don’t have to manually unplug the laptop every evening. This is now something that I can forget about, which is the ultimate indication of success.

    On Ordered Lists in Markdown

    One of the things I like about Markdown as a format for writing online is that ordered lists can simply begin with the prefix 1., and there is no need to update the leading number in the subsequent items. To produce the following list:

    1. First
    2. Second
    3. Third

    One only needs to write:

    1. First
    1. Second
    1. Third
    

    or:

    1. First
    2. Second
    3. Third
    

    or even:

    1. First
    3. Second
    2. Third
    

    The one downside to this approach, unfortunately, is that there is no nice way to specify what the first ordinal should be. If I were to use 3. as the prefix of the first item, the generated ordered list will always begin at 1.

    This means that there’s no nice way to continue lists that are separated by block elements. For example, let’s say I want to have a list of 4 items, then a paragraph of text or some other block element, then continue the list from 5. The only way to do so in “common-style” Markdown is to write the second list in HTML with an <ol start=5> tag:

    <ol start=5>
      <li>Fifth</li>
      <li>Sixth</li>
      <li>Seventh</li>  
    </ol>
    

    It would be nice if this was representable within Markdown itself. Maybe by taking into account the first ordinal and just incrementing by 1 from there. For example:

    5. Fifth
    5. Sixth
    5. Seventh
    

    becomes

    1. Fifth
    2. Sixth
    3. Seventh
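
    As a thought experiment, here’s a toy sketch in Go of what that rule might look like inside a renderer. It’s not based on any real Markdown implementation, and the parsing is deliberately naive: the first item’s number becomes the start attribute of the generated <ol>, and the numbers on the remaining items are ignored.

    package main
    
    import (
        "fmt"
        "regexp"
        "strconv"
        "strings"
    )
    
    // itemRe matches ordered-list item lines like "5. Fifth".
    var itemRe = regexp.MustCompile(`^(\d+)\.\s+(.*)$`)
    
    // renderList renders a block of "N. text" lines as an <ol>, using the
    // first item's number as the start attribute and ignoring the rest.
    func renderList(block string) string {
        start := 1
        var items []string
        for _, line := range strings.Split(strings.TrimSpace(block), "\n") {
            if m := itemRe.FindStringSubmatch(strings.TrimSpace(line)); m != nil {
                if len(items) == 0 {
                    start, _ = strconv.Atoi(m[1])
                }
                items = append(items, m[2])
            }
        }
    
        var b strings.Builder
        fmt.Fprintf(&b, "<ol start=\"%d\">\n", start)
        for _, it := range items {
            fmt.Fprintf(&b, "  <li>%s</li>\n", it)
        }
        b.WriteString("</ol>\n")
        return b.String()
    }
    
    func main() {
        // Prints an <ol start="5"> containing the three items.
        fmt.Print(renderList("5. Fifth\n5. Sixth\n5. Seventh"))
    }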

    “So what?”, you might say. “You just demonstrated that this could be done in HTML.”

    That’s true; however, I use wiki software with rich-text editors that don’t allow modifying the underlying HTML (they may have the ability to specify a “region” of HTML, but not a way to modify the underlying body text itself), and they use Markdown as a way of triggering formatting changes. For example, typing * twice will enable bold face, typing three ` will start a code block… and typing 1. will start an ordered list.

    Changing the first ordinal or continuing the previous list might be considered an advanced operation that the developers of these wikis have not considered. But I can’t help wondering whether, if Markdown had had this feature from the start, all these editors would have supported it in one form or another.
