Long Form Posts

    A Year Under The Pandemic

    This was originally a journal entry, but I thought I’d share it here as well. Today is the end of week 52, almost a year to the day since the pandemic became all too real for me. I’ve taken the day off to spend some time in Warburton. It was in Warburton last year, almost to the day (13th of March), that things began to get serious. The news coming out of China and Italy was grave: hundreds of deaths, thousands of new cases, hospitals filling up, a lack of ventilators and staff to operate them, PPE shortages, scenes of people locked down in their homes. The outbreak in New York was becoming serious as well, and the US government announced the closure of its borders to Europe.

    There were a number of new cases here as well; it may have been 100 or so around the country. That Friday a number of public events were cancelled, like the AFL and the Grand Prix, and the borders were closed off to the rest of the world — nobody was allowed in or out. There was a run on things at the shops as officials advised people to be stocked for two weeks should they need to isolate. Toilet paper was in short supply, along with some other staples like pasta and tuna. There was a general sense of unease around the place.

    It was also the time when I started working from home. I returned one last time to the office of my old job on the Tuesday of the following week. The city was quite quiet. A lot more people were wearing masks and half the cafes were closed for the afternoon. I haven’t been back to that office since. I think the weekend following, I stopped meeting my parents for dinner, and only went out for groceries.

    I guess it’s hard to describe how scary the situation was at the time. The testing and tracing infrastructure was not yet set up, so nobody really knew where the virus was. The government assured us that there was no local transmission, but it was difficult to believe them, especially as case numbers were rising rapidly. The reported death rate was also terrifying — up to 3% at the time, but higher in certain places. I was fearful of everyone I loved, as well as myself, catching the disease and ending up on a ventilator, or worse, dying. Taiwan was the only country at the time to have curbed the virus: most Western countries were struggling with outbreaks, so I had little faith that Australia would be able to manage it as well as Taiwan had.

    I was also afraid that the lockdowns would last until a vaccine was available. At the time, medical experts were tempering expectations of a speedy delivery of a vaccine. Turnaround times were usually 1 to 1.5 years. The fact that vaccines were ready within the same year was considered a bit of a breakthrough (I guess these things really do happen).

    A lot has happened in the past 52 weeks. The nation has managed to keep the virus more or less under control. There were setbacks, though: the second Melbourne lockdown was regrettable. But we have managed to set up a somewhat decent testing and contact tracing regime, along with hotel quarantine, and new local cases have been at or close to zero for most of the past 5 months. Vaccinations of border workers, front-line workers, and people at risk are currently in progress.

    A sense of normalcy has returned, in what is generally called “Covid normal”. The borders are still closed to everyone except New Zealanders, and no one is generally permitted to leave the country. Since November, things have remained more or less open. Events like the AFL are back on, with small, socially distanced crowds.

    But the threat remains. Every day I look on Twitter to see what the latest number of new cases is. There’s a constant trickle of positive cases coming in from overseas, where the virus is still raging. There have been new, more contagious and deadly variants popping up, and it’s a constant struggle to keep them out: we have had to go through a five-day snap lockdown to stop local transmission of one.

    So there’s little to do but wait. I appreciate that we’ve managed to gain some semblance of normalcy back, something that I’m aware others around the world have been denied so far. Eventually this will pass as well, but I’m hoping it doesn’t take another 52 weeks.

    Australia’s ABC News shot to the top of the App Store charts following Facebook’s news ban

    From the Verge:

    The Australian Broadcasting Corporation’s ABC News app shot to the top of Apple’s App Store charts in Australia over the course of the last few days, not long after Facebook banned Australian news sources on its platform.

    […]

    ABC News currently sits at No. 2 in the App Store’s overall app rankings in Australia, according to the analytics firm App Annie, and No. 1 in the news app charts. When Patel noticed the change, the app was also briefly No. 1 overall, ahead of Instagram, Facebook Messenger, and the Facebook app itself.

    I’ve been seeing these banners on the ABC News site myself, and I was sceptical that anything would come of it. Turns out I was pleasantly mistaken. It’s really good to see people choosing to go to a reputable news source directly.

    Some uninformed thoughts about the ACCC Media Bargaining Code

    Yesterday, when the news about Facebook and the news media was making the rounds in Australia, I was still working out my position on the whole thing. After listening to the Stratechery Daily Update1 from Ben Thompson about it, I think my position has solidified.

    I’m no fan of Facebook, but I can completely understand why they took the action they did, and I believe it was within their rights to do so. It could be argued that banning links and pages from government and non-profit organisations was wide-reaching, and for that, I think it’s important to consider the motivation. Was Facebook being cautious in its interpretation of the proposed law, which was written so poorly as to suggest that anything related to the goings-on in Australia fell under the code? Were they just being sloppy about which organisations were banned? Or were they being deliberately broad to make a point and strengthen their negotiating position? I don’t know: all three scenarios seem plausible.

    But I think Ben Thompson is spot-on in making the point that there was a lack of political groundwork prior to Facebook taking this action. Contrast that with Google, which had been warning since last month that it might pull out of the Australian market if there was a chance it would violate the code. I didn’t see anything like this from Facebook, and Ben Thompson made the point that this was a lot like the saga around WhatsApp and the privacy changes. Who knows? Maybe if they had laid that groundwork, they would have had more sympathy in the eyes of the public.

    It may sound like I’m taking Facebook’s side here. I won’t go so far as to say that, but living in a society with free enterprise, I don’t see why they couldn’t do what they did. No one has a preordained right to post on Facebook, let alone to expect that they can extract rent for doing so. It was a mistake on the part of the government and media organisations (hi, Mr. Murdoch) to think otherwise.

    The whole mess is regrettable, but I hope everyone comes out of this a little wiser. For me, it just reinforces the importance of maintaining your own, independent position on the web. I might have more to say on this down the line.


    1. This article is paywalled, but if you are in any way interested in technology, I highly recommend subscribing. ↩︎

    A $2000.00 Smartphone with Ads

    I just learnt today that the Samsung Galaxy S21 Ultra has ads. I’m generally not that interested in Samsung phones, but the idea of putting ads on a device that costs up to $US 2,000.00 offends me so much that I had to comment.

    If I shell out that amount of money for a device, I expect an experience that is worthy of that price. Having that experience degraded with crappy banner ads, and a built-in app1 which hijacks the lock screen, really brings down the intrinsic worth of the whole device to a point that doesn’t justify the price they’re asking. It shows contempt for the customer — you know, the person who, by right, owns the phone they paid for — and it’s just dishonourable.

    I know how difficult it can be for Android OEMs to compete given the current race to the bottom. But my understanding is that Samsung is actually second to Apple in terms of revenue per device, so I see absolutely no reason why they would consider a move like this.

    Yeah, I know it’s a first-world problem, but I’m seeing more and more phone vendors treating the device they sell, supposedly at a profit, as a vector through which they can push marketing messages without regard, and I really don’t want that to happen.


    1. The built-in app, called “Samsung Global Goals”, is designed to raise money for charitable causes through the use of promotions. I appreciate the motivation of funding these causes, but not the approach used to do so. ↩︎

    Seth Godin on Ranked Choice Voting

    Seth Godin on Ranked Choice Voting:

    The surprising thing? In a recent primary in New York, some people had trouble with the new method. It’s not that the method of voting is particularly difficult. The problem is that we’ve trained ourselves to be RIGHT. To have “our candidate” and not be open (or pushed) to even consider that there might be an alternative. And to feel stress when we need to do the hard work of ranking possible outcomes, because that involves, in advance, considering acceptable outcomes that while not our favorite, would be acceptable.

    Living in a country that has ranked-choice voting across the board, I could be biased here, but I think that’s one of the beautiful things about this voting system. It changes the thinking from “will my candidate win” to “which candidate can I live with”. A candidate representing several thousand people is not going to be everyone’s first preference, but people might be happy enough if they’re their second or third.

    A Feature Request for Twitter, Free of Charge

    It looks like Twitter’s product design team need some help. Their recent ideas, “inspired” by the features of other companies like Snap (Stories) and Clubhouse (Audio Clips), don’t seem to be setting the world on fire. Well, here’s an idea for them to pursue, free of charge.

    A lot of people I follow seem to use Twitter threads for long-form writing. This might be intentional, or it might be because they had a fleeting thought that they developed on the spot. But the end result is a single piece of writing, quantised over a series of tweets, and assembled as a thread.

    I’d argue that consuming long-form writing this way is suboptimal. It’s doable: I can read the thread as they’re currently presented in the Twitter app and website. But it would also be nice to read it as a single web page, complete with a link that can be used to reference the thread as a whole.

    Now, I don’t see these writers changing their behaviour any time soon. It’s obvious that people see value in producing content this way, otherwise they would do something else. But it’s because of this value that Twitter should lean in to this user behaviour, and make it easier to consume these threads as blog posts. The way they should do this is:

    • Offer the author the choice to publish the thread on a single web page, complete with a permalink URL that can be shared, once they have finished writing it. This could be on demand as they’re composing the thread, or it can be done automatically once the thread reaches a certain size.
    • Provide the link to this web page on the first tweet of the thread. The reader can follow the link to consume the thread on a single page, or can use the link to reference the thread as a whole.

    I know that this is possible now, with things like the Thread Reader app, but there are some benefits to Twitter adding first-party support for this. The first is that it keeps users on their “property”, especially if they add this feature to the app as well as the Twitter website. This neutralises the concern of sending the author or reader to another site to publish or consume their content, which feeds into the second benefit: it elevates Twitter as a platform for long-form writing, in addition to microblogging. If their content can be enjoyed in a nicer reading experience, more writers would use Twitter for this form of content, keeping users and writers on Twitter. The user benefits, the publisher benefits, and Twitter benefits. Win-win-win all round.

    So there you are, Twitter: the next feature for the product backlog. It could be that I’m the only one that wants this, but I personally see more value in it than the other pie-in-the-sky endeavours that Twitter is pursuing.

    One final thing: I’m a big proponent of the open web and owning your own content, so I don’t endorse this as a way to publish your work. I’m coming at this as a reader of those that choose to use Twitter this way. Just because they’re OK with Twitter owning their content this way doesn’t mean I should have a less-than-adequate reading experience.

    Adding Blog Posts to Day One using RSS

    Prior to joining Micro.blog, I had a journal in Day One, which was the sole destination for all my personal writing. I still have the journal, mainly for stuff that I keep to myself, but since starting the blog, I’ve always wondered how I could get my posts in there as well. It would be nice to collect everything I’ve written in a single place. In fact, there was a time when I was considering building something that used Day One’s email-to-entry feature, just so I could achieve this.

    I’ve since discovered, after reading this blog post, that Day One actually has an IFTTT integration. This means that it’s possible to set up an applet that takes new entries from an RSS feed and adds them as entries in a Day One journal.

    I decided to give this a go, and it was quite simple to set up. I’m using the blog’s RSS feed as the source, and a new “Blog Journal” as the destination in which new entries will be created. Setting up the integration with Day One was straightforward, although I had to make sure that the encryption mechanism of the new journal was “Standard” instead of “End-to-end”. This is slightly less secure, but everything in that journal is going to be public anyway, so I’m not too concerned about that.

    The Day One integration allows you to select the journal, any tags, and how the content is to look. This follows something resembling a template and allows the use of placeholders to select elements of the post, like the title and the body. The integration also allows you to specify the location and entry image, although I left that blank. In fact, I ran into trouble trying to set the entry image when a post didn’t have one: journal entries were being created with a generic IFTTT 404 image instead.
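
    To give a rough idea, the template for the entry body might look something like the following. The placeholder names here are from memory and may not be the exact ingredient names IFTTT offers for the RSS trigger, so treat them as illustrative rather than definitive:

    {{EntryTitle}}

    {{EntryContent}}

    (Originally posted at {{EntryUrl}})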

    So far it’s been working well. I’ve only been writing short posts without a title — this will be the first long post with a title — but they’ve been showing up in Day One without any issues. There are still some unknowns about this integration. For example, I don’t know how images will work. I would hope that, even though they’re links, Day One will handle them properly if I wanted to do something like make a physical book. It’s likely I’ll need to make a few tweaks before this is perfect.

    But all-in-all, I’m pleased with this setup. It’s nice seeing everything I write show up in a single place now. In fact, I’m wondering if there are other things this integration could be useful for, now that I know that all that needs doing is setting up an RSS feed.

    Some thoughts on app permissions in macOS

    It’s funny how the casual meandering of your mind can be a source of inspiration. This morning, my mind casually turned to thinking about all the work that Mac developers need to do to get access to privileged APIs — like location, contacts, or the accessibility APIs. My experience of going through the motions to enable these permissions for the apps I use, along with hearing of the lengths developers go to in order to make this as seamless as they can, reveals how clunky the whole process is. I could imagine this being a huge source of frustration for these developers, not to mention a huge source of support requests.

    It’s laudable of Apple to lock down access to these APIs: the days of assumed access to everything are over, and I believe we are better for it. But I believe it’s worth considering ways to streamline the process of granting these permissions, making them easier to work with without any sacrifice to the user’s security or privacy.

    I’m wondering if one such approach could be to do something similar to how permissions for web pages are managed. In most browsers, clicking on the padlock icon just to the left of the URL brings up a popup listing all the capabilities the website has access to, such as whether it can use the microphone, whether it can show notifications, etc. Within this popup, the user can grant or revoke these permissions, controlling which APIs the JavaScript on the website can use. The website itself cannot do anything with this pane: all it can do is provide instructions on how to enable these permissions, and progressively enable or disable features based on whether those permissions are granted.

    Maybe something similar would help for macOS apps. What I had pictured in my head is something similar to the following (no mock-ups, sorry; you’ll need to use your imagination):

    • The app would disclose, in its Info.plist, the privileged APIs that it needs access to (apps might already do something like this today with usage-description keys; see the sketch after this list).
    • A new “Permissions” menu item would be added by the OS to the application menu. This menu item would be beyond the control of the app: it could not be triggered programmatically or otherwise manipulated.
    • Clicking this menu item would bring up a window listing all the permissions that the app has disclosed. Beside each one would be a toggle, allowing the user to turn them on and off at will. Like the menu item, this window would be managed by the OS itself, not the application, and could require users to enter their admin password before enabling a permission if necessary.
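
    For reference, macOS apps can already include human-readable justifications for some privileged APIs via usage-description keys in their Info.plist, and something in that spirit could serve as the disclosure mechanism. A minimal sketch (the key names below are real; the strings are made up):

    <key>NSContactsUsageDescription</key>
    <string>Used to look up people in your address book.</string>
    <key>NSLocationUsageDescription</key>
    <string>Used to tag documents with your current location.</string>
    <key>NSAppleEventsUsageDescription</key>
    <string>Used to automate other apps on your behalf.</string>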

    I can see this approach having some benefits for both users and developers. It would reduce the level of friction involved in dealing with permissions, making it easier for the user to enable, and most importantly disable, these permissions when needed. The app developer would have an easier time asking the user to enable these permissions: no need to do things like open System Preferences and draw arrows on screen pointing to the accessibility pane. Since this would all be managed by the OS, the various settings panes could still exist, but they would become secondary avenues for controlling these permissions. Conceptually, the permissions would belong to the app, which maintains the holistic app paradigm that Apple is moving towards, not to mention eliminating the need for the user to context-switch into System Preferences.

    I’m not a Mac developer, so I’m not sure how possible this is. I can’t imagine this approach would break any of the existing APIs of AppKit: this would all be stuff that Apple needs to do. I’d be interested in hearing what others think of this approach, so let me know your thoughts.

    First encounters with GitHub (and Substack)

    All these new Substack newsletters that I’m seeing reminds me of my first encounter with GitHub.

    Back in 2009, I was checking out the source code of an open-source library we were using. Clicking on the link to the source brought me to this strange, new code-hosting service that I’d never seen before. As someone who was used to the heaviness that was SourceForge, or the boring uniformity that was Google Code, the experience felt very minimal and slightly unintuitive. It took me a while, for instance, to realise that the version tags were selectable from a drop-down list. I also thought that it was quite restrictive to only offer checking out the source code with this weird SCM client called “git”. The whole experience left me thinking of this website as rather niche, and I never really expected to see it that often, given that SourceForge and Google Code reigned supreme at the time.

    I held this belief for a year or two. Then, as I continued to deal with different open-source projects, I started noticing more and more of them showing up on this weird new service. It was infrequent at first: maybe around one in ten projects or so. But the ratio started to shift faster and faster, soon becoming one in eight projects, then one in five. Around this time, GitHub was starting to gain momentum in the technological zeitgeist: projects announced that they were moving off SourceForge and migrating their codebases to GitHub. Then Google Code announced that it was shutting down and migrating its projects over to GitHub. Eventually, a tipping point was reached, and GitHub was the code-hosting service for pretty much every project I encountered.

    My experience with Substack was similar, except on a much shorter timescale. I remember subscribing to my first Substack newsletter back in 2019. I was already a Stratechery subscriber, so the whole email-newsletter thing was familiar to me. But it was another case of being a little unimpressed with the experience — difficult to read online, impossible to get as RSS, weird font choices, etc. — and I expected the platform to remain relatively niche. Contrast that with today, where every fourth or fifth link I see is to a Substack newsletter, and not a month goes by without a new Substack publication being announced.

    There’s no real lesson to this (or if there is, I’m too dense to see it). Just the amusing observation of first encountering something that, to you, seems rather niche and unusual until one day, faster than you can blink, it is the only thing anyone uses.

    A Quick Review of the Year

    Here are a few words about the year gone by, and what I’m hoping to focus on in the year ahead. It’s not a full “year in review” type post, although there’s a bit of that, and there’s no dramatic insight or anything of that nature. It’s more of a chance for reflection, plus a bit of a document for future me on what the year was like.

    Personally, as difficult as this past year was, I wouldn’t necessarily say 2020 was a bad year. I know I’m saying this from a position of good fortune: I didn’t lose my job, or my house, or my health. So I know there are a lot of others who have experienced a much worse year than I have. But for me, I’m coming out of this year feeling a little better than I have the previous couple of years.

    I think a big reason for this was that I was forced to change my routine. I’m someone who can operate on a routine that doesn’t change for a long period of time. There are benefits to this, but it does mean that every passing year feels more or less like the previous one, and I always find myself feeling a little bad on New Year’s Eve for not “doing something different”. Being forced to change my routine by the pandemic and the resulting lockdowns was a small positive that came out of an otherwise bad situation. It did mean that I could no longer rely on the various chronological anchors, like birthdays or even going to the office, that are useful for experiencing the passage of time, resulting in a year that felt like it was passing too slowly and too quickly at the same time. But it also added some variety to the activities that got me through the day, which made the year slightly more novel than the previous few.

    Working from home also provided an opportunity to try something new. The whole working-from-home experience was something I was curious about, and I’m glad that I had an opportunity to try it out. It worked out better than I expected: although there were times when I really did miss working closely with other people, I found that I could work effectively at home as long as I had work to do. The lack of a commute also meant that I had more time on my hands. I’m glad I didn’t spend that time just coding on personal projects or vegging out on the couch (although there was a bit of that as well). I joined Micro.blog, which was probably one of the best decisions I’ve made this year. I also learnt a lot about writing and creativity, and I got back into music composition, something that I had neglected for a while.

    Writing and publishing is something that I’m hoping to continue in 2021. I hope to set up new routines and systems to write more often, both on this Micro.blog and on another writing project that I’m in the process of starting. I’m trying a few things to keep myself true to this, like keeping a daily log (which is private at the moment, but I’m hoping to make it public eventually). Publishing things online is something that I need to work on a bit more, and it’s an area I’m hoping to improve in. Although I’m not seriously following the yearly theme system, if I had to choose, I plan to make this one a year of “sharing”1.

    Hope you all have a Happy New Year and here’s to a great 2021.


    1. Incidentally, the theme for 2020 was “chance”, which became a little morbid as the year went on. ↩︎

    Vivaldi - My Recommended Alternative to Chrome

    I’m seeing a few people on Micro.blog post about how Chrome Is Bad. Instead of replying to each one with my recommendations, I figured it would be better just to post my story here.

    I became unhappy with Chrome about two years ago. I can’t remember exactly why, but I know it was because Google was doing something that I found distasteful. I was also getting concerned about how much data I was making available to Google in general. I can prove nothing, but something about using a browser offered for free by an advertising company made me feel uneasy.

    So I started looking for alternatives. The type of browser I was looking for needed the following attributes:

    • It had to use the Blink rendering engine: I liked how Chrome rendered web pages.
    • It had to offer the same developer tools as Chrome: in my opinion, they are the best in the business.
    • It had to run Chrome plugins.
    • It had to be a bit more customisable than Chrome: I was getting a little bored with Chrome’s minimalist aesthetic, and there were a few niggling things that I wanted to change.
    • It had to have my interests at heart: no sneaky business like observing my web browsing habits.

    I heard about the Vivaldi browser from an Ars Technica article and decided to give it a try. The ambition of the project intrigued me: building a browser based on Chromium, but with a level of customisation that makes the browser yours. I use a browser every day, so the ability to tailor the experience to my needs was appealing. First impressions were mixed: the UI is not as polished as Chrome’s, and seeing various controls everywhere was quite a shock compared to the minimalism that Chrome offered. But I eventually set it as my default browser, and over time I grew to really like it.

    I’m now using Vivaldi on all my machines, and on my Android phone as well. It feels just like Chrome, but offers so much more. I’ve come to appreciate the nice little utilities that the developers added, like the note-taking panel, which comes in really handy for writing Jira comments that survive when Jira decides to crash. It’s not as stable as Chrome: it feels a tad slower, and it does occasionally crash — this is one of those browsers that needs to be closed at the end of the day. But other than that, I’m a huge fan.

    So for those looking for an alternative to Chrome that feels a lot like Chrome, I recommend giving Vivaldi a try.

    Revisiting the decision to build a CMS

    It’s been almost a month since I wrote about my decision to write a CMS for a blog that I was planning. I figured it might be time for an update.

    In short, and for the second time this year, I’ve come to the conclusion that maintaining a CMS is not a good use of my time. The largest issue was the amount of effort needed to work on the things that don’t relate to content, such as styling. I’m not a web designer, so building the style from scratch would have taken a fair amount of time, which would have eaten into the time I would have spent actually writing content. A close second was the need to add features that the CMS was missing, like the ability to add extra pages, and an RSS feed. If I were to do this properly, without taking any shortcuts, this too would have resulted in less time spent on content.

    The final issue is that the things I was trying to optimise for turned out not to be as big a deal as I first thought, such as the “theoretical ability to blog from anywhere”. I’ve tried using the CMS on the iPad a few times, and although it worked, the writing experience was far from ideal, and making it so would have meant more effort put into the CMS. In addition, I’ve discovered that I prefer working on the blog at my desktop, as that’s where I’m more likely to be in the right state of mind. Since my desktop already has tools like Git, I already had what I needed to theoretically blog from anywhere, and it was no longer necessary to recreate this capability within the CMS itself.

    So I’ve switched to a statically generated Hugo site served using GitHub Pages. I’m writing blog posts using Nova, which is actually a pretty decent tool for writing prose. To deploy, I simply generate the site using Hugo’s command line tool, and commit it all to Git. GitHub Pages does the rest.

    After working this way for a week and a half, it turns out that I actually prefer the simplicity of this approach. At this stage, both the CMS and the blog it would have powered have been in the works for about a month. The result is zero published posts, and it will probably not be launched at all1. Using the new approach, the new blog is public right now, and I have already written 5 posts on it within the last 11 days, which I consider a good start.

    We’ll see how far I go with this new approach before I consider a custom CMS again, but it’s one that I’ve managed to get some traction with, particularly since it has actually resulted in something. I think it’s also the approach I will adopt for new blogs going forward.


    1. This is not wholly because of the CMS: the focus of the blog would have been a bit narrower than I would have liked; but the custom CMS did have some bearing on this decision. ↩︎

    A Brief Look at Stimulus

    Over the last several months, I’ve been doing a bit of development using Buffalo, which is a rapid web development framework in Go, similar to Ruby on Rails. Like Ruby on Rails, the front-end layer is very simple: server-side rendered HTML with a bit of jQuery augmenting the otherwise static web-pages.

    After a bit of time, I wanted to add a bit of dynamic flair to the frontend, like automatically fetching and updating elements on the page. These projects were more or less small personal things that I didn’t want to spend a lot of time maintaining, so doing something dramatic like rewriting the UI in React or Vue would have been overkill. jQuery was available to me, but using it always required a bit of boilerplate to set up the bindings between the HTML and the JavaScript. Also, since Buffalo uses Webpack to produce a single, minified JavaScript file that is included on every page, it would be nice to have a mechanism to selectively apply the JavaScript logic based on attributes in the HTML itself.

    I have since come across Stimulus, which looks to provide what I was after.

    A Whirlwind Tour of Stimulus

    The best place to look if you’re interested in learning about Stimulus is the Stimulus handbook, or for those who prefer a video, there is one available at Drifting Ruby. But to provide some context for the rest of this post, here’s an extremely brief introduction to Stimulus.

    The basic element of an application using Stimulus is the controller, which is the JavaScript aspect of your frontend. A very simple controller might look something like the following (this example was taken from the Stimulus home page):

    // hello_controller.js
    import { Controller } from "stimulus"
        
    export default class extends Controller {
      static targets = [ "name", "output" ]
        
      greet() {
        this.outputTarget.textContent =
          `Hello, ${this.nameTarget.value}!`
      }
    }
    

    A controller can have the following things (there are more than just these two, but these are the minimum to make the controller useful):

    • Targets, which are declared using the static targets class-level attribute, and are used to reference individual DOM elements within the controller.
    • Actions, which are methods that can be attached as handlers of DOM events.

    The link between the HTML and the JavaScript controller is made by adding a data-controller attribute within the HTML source, and setting it to the name of the controller:

    <div data-controller="hello">
      <input data-hello-target="name" type="text">  	
      
      <button data-action="click->hello#greet">Greet</button>
    
      <span data-hello-target="output"></span>
    </div>
    

    If everything is set up correctly, the controllers should be automatically attached to the HTML elements with the associated data-controller annotation. Elements with data-*-target and data-action attributes will also be attached as targets and actions respectively when the controller is attached.

    There is some application setup that is not included in the example above. Again, please see the handbook.

    Why Stimulus?

    Far be it from me to suggest yet another frontend framework in an otherwise large and churning ecosystem of web frontend technologies. However, there is something about Stimulus which seems appealing for small projects that produce server-side rendered HTML. Here are the reasons it appeals to me:

    1. It was written by, and is used at, Basecamp, which gives it some industry credibility and a high likelihood that it will be maintained (in fact, I believe version 2.0 was just released).

    2. It doesn’t promise the moon: it provides a mechanism for binding to HTML elements, reacting to events, and maintaining some state in the DOM, and that’s it. No navigation, no virtual DOM with diffing logic, no requirement to maintain a global state with reducers, no templating: just a simple mechanism for binding to HTML elements.

    3. It plays nicely with jQuery. This is because the two touch different aspects of web development: jQuery provides a nicer interface to the DOM, while Stimulus provides a way to easily bind to DOM elements declared via HTML attributes.

    4. That said, it doesn’t require jQuery. You are free to use whatever JavaScript framework that you need, or no framework at all.

    5. It maintains the relationship between HTML and JavaScript, even if the DOM is changed dynamically. For example, modifying the innerHTML to include an element with an appropriate data-controller attribute will automatically set up a new controller and bind it to the new elements, all without you having to do anything yourself.

      It doesn’t matter how the HTML gets to the browser, whether it’s AJAX or front-end templates. It will also work with manual DOM manipulation, like so:

      // Create a new element and attach the "hello" controller to it.
      let elem = document.createElement("div");
      elem.setAttribute("data-controller", "hello");

      // Once the element is in the document, Stimulus will automatically
      // instantiate and connect the controller.
      document.body.append(elem);
      

      This allows a dynamic UI without having to worry about making sure each element added to the DOM is appropriately decorated with the logic that you need, something that was difficult to do with jQuery.

    Finally, and probably most importantly, it does not require the UI to be completely rewritten in JavaScript. In fact, it seems to be built with this use case in mind. The tagline on the site, A modest JavaScript framework for the HTML you already have, is true to its word.

    So, if you have a web-app with server-side rendering, and you need something a bit more — but not too much more — than what jQuery or native DOM provides, this JavaScript framework might be worth a look.

    Some uninformed thoughts about Salesforce acquiring Slack

    John Gruber raised an interesting point about the future of Slack after being purchased by Salesforce:

    First, my take presupposes that the point of Slack is to be a genuinely good service and experience. […] To succeed by appealing to people who care about quality. Slack, as a public company, has been under immense pressure to do whatever it takes to make its stock price go up in the face of competition from Microsoft’s Teams.

    […]

    Slack, it seems to me, has been pulled apart. What they ought to be entirely focused on is making Slack great in Slack-like ways. Perhaps Salesforce sees that Slack gives them an offering competitive to Teams, and if they just let Slack be Slack, their offering will be better — be designed for users, better integrated for developers.

    When I first heard the rumour that Salesforce was buying Slack, I really had no idea why they would. The only similarity between the markets the two operate in is that they are both things businesses buy, and I saw no points of synergy between the two products that would make this acquisition worth it.

    I’m starting to come round to the thinking that the acquisition is not to integrate the two products, at least not to the degree I was fearing. I think Gruber’s line of thinking is the correct one: that Salesforce recognises that it’s in their interest to act as Slack’s benefactor to ensure that they can continue to build a good product. Given that Salesforce has bought Tableau and Heroku and more-or-less left them alone, there’s evidence that the company can do this.

    As to what Salesforce gets out of it, Jason Calacanis raises a few good reasons in his Emergency Pod on the topic around markets and competition. Rather than attempt to explain them, I recommend that you take a listen and hear them from him.

    Why I'm Considering Building A Blogging CMS

    I’m planning to start a new blog about Go development, and one of the things that I’m currently torn on is how to host it. The choices look to be: using a service like blot.im or micro.blog or some other hosting service, using a static site generation tool like Hugo, or building my own CMS for it. I know that one of the things people tell you about blogging is that building your own CMS is not worth your time: I myself even described it as the “second cardinal sin of programming” in my first post to micro.blog.

    Nevertheless, I think at this stage I will actually do just that. Despite the effort that comes with building a CMS, I see some advantages in doing so:

    1. The (theoretical) ability to blog from anywhere: This is one of the weaknesses of static site hosting that I’ve run into when trying this approach before. I tend to work on different machines throughout the week, which generally means that when I find inspiration, I don’t have the source files on hand to work on them. The source files are kept in source control and are hosted on GitHub, but I’ve found that I tend to be quite lax about making sure I have the correct version checked out and that any committed changes are pushed. This is one of the reasons why I like micro.blog: having a service with a web interface that I can just log into, and that will have all the posts there, means that I can work on them as long as I have an internet connection.
    2. Full control over the appearance and workflow: Many of the other services provide the means for adjusting the appearance of the web page, so this is only a minor reason for taking on this effort. But one thing that I would find useful to have some control over is the blogging workflow itself. There are some ideas that I might like to include, like displaying summaries on the main page, or sharing review links for posts prior to publishing them. Being able to easily do that in a codebase that I’m familiar with would help.
    3. Good practice for my skills: As someone who tends to work on backend systems in his day-to-day job, some of my development and operational skills are a little rusty. Building, hosting and operating a site would provide an opportunity to exercise these muscles, and may also come in handy if I were to choose to build something for others to use (something that I’ve been contemplating for a while).

    Note that price is not one of these reasons. In fact it might actually cost me a little more to put together a site like this. But I think the experience and control that I hope to get out of this endeavour might be worth it.

    I am also aware of some of the risks of this approach. Here is how I plan to mitigate them:

    1. Security and stability: This is something that comes for free with a blogging platform but that I’ll need to take on myself. There’s always a risk in putting a new system onto the internet, and having a website with remote administration is an invitation for others to abuse it. To me this is another area of development that I need to work on. Although I don’t intend to store any personal information but my own, I do have to be mindful of the risks of putting anything online, and make sure that the appropriate mitigations are in place. I’ll also have to make sure that I’m maintaining proper backups of the content, and periodically exercising them to make sure they work. The fact that my own work is at stake is a good incentive to keep on top of this.
    2. Distractions: Yeah, this is a classic problem of mine: I use something that I’ve built, I find a small problem or something that can be improved, and then, instead of finishing the task at hand, I work on the code for the service. This may be something that only gets addressed with discipline. It may help to use the CMS on a machine that doesn’t have the source code.

    I will also have to be aware of the amount of time I put into this. I actually started working on a CMS several months ago, so I’m not starting completely from scratch, but I’ve learnt from too many of my other personal projects that maintaining something like this is a long-term commitment. It might be fine to occasionally tinker with it, but I cannot spend too much effort working on the system at the expense of actually writing content.

    So this is what I might do. I’ll give myself the rest of the month to get it up to scratch, then I will start measuring how much time I spend working on it versus the amount of time I actually use it to write content. If the time I spend working on the codebase is more than 50% of the time I use it to write content, then that will indicate to me that it’s a distraction, and I will abandon it for an alternative setup. To keep myself honest, I’ll post the outcomes of this on my microblog (if I remember).

    A few other minor points:

    • Will this delay publishing of the blog? No. The CMS is functionally complete but there are some rough edges that I’d like to smooth out. I hope to actually start publishing this new blog very shortly.
    • Will I be moving the hosting of this blog onto the new CMS? No, far from it. The service here works great for how I want to maintain this blog, and the community aspects are fantastic. The CMS also lacks the IndieWeb features that micro.blog offers, and it may be some time before they get built.

    I’d be interested to hear if anyone has any thoughts on this, so feel free to reply to this post on micro.blog.

    Update: I’ve posted a follow-up on this decision about a month after writing this.

    An anecdote regarding the removal of iSH from the App Store

    Around April this year, my old Nexus 9 Android tablet was becoming unusable due to its age, and I was considering which tablet to move to next. I have been a user of Android tablets since the Nexus 7, and I have been quite happy with them (yes, we do exist). However, it was becoming clear that Google was no longer interested in maintaining first-party support for Android on tablets, and none of the other brands that were available were very inspiring.

    I took this as a sign to move to a new platform. It was a toss-up between a Chromebook and an iPad. I understood that it was possible to get a Linux shell on a Chromebook by enabling developer mode, so for a while I was looking at possible Chromebooks that would work as a tablet replacement. But after hearing about iSH on an ATP episode, I got the impression that it would be possible to recreate the same thing with nicer hardware and a nicer app ecosystem, despite how locked down that ecosystem is. So the iPad won out.

    If I had known at the time that Apple would be removing iSH from the App Store (via Twitter), I’m not sure I would have bought the iPad, and I would probably have gone with the Chromebook.

    Tracking Down a Lost Album

    Here’s a short story about my endeavours to find an album that seems to have disappeared from the face of the internet. I’m a bit of a sucker for original soundtracks, particularly instrumental ones. One that I remember being very good is the music from The Private Life of Plants, a documentary series from David Attenborough made in the mid-1990s. It was one of those soundtracks that occasionally popped into my mind, particularly when looking at lovely autumn leaves or other scenes from the show. But it had been a while since I last watched it, and I never thought to check whether an album of the soundtrack actually existed.

    It was only earlier this year that I discovered this was a possibility. I was watching Curb Your Enthusiasm, and one episode featured a scene with the music from the documentary series in the background. I recognised it immediately, and after some quick searches online, I discovered that an album of the soundtrack did exist at one point.

    I started looking around to see if it was available to listen to. I began with Spotify, the music service that I subscribe to, but searches there did not return any results. I then went to the other streaming services available, like Apple Music and Amazon Music, but there was no luck there either. Next, I looked to see if I could get the physical CD. I looked on Amazon, eBay, JB Hi-Fi and even Sanity, a music shop that is still operating here in Australia. None of these sites turned up anything indicating that the album was available. I then tried my local library, the online ABC shop, and the BBC shops, but those turned up no results as well. It looked like this album was no longer available for sale anywhere.

    I then started making generic web searches on Google and DuckDuckGo. There were very few hits, most of them referencing the documentary series itself. It was here that I started venturing into the abandoned areas of the web, with old pages, riddled with ads, that are barely functioning at all. I found a last.fm page for the composer which looked to have the track list of the album, but attempting to play the tracks through the in-browser player only produced errors. Going further through the abandoned web, I found an old download site which looked to have links to some of the tracks on the album. After following the links, however, it looked like the site had since stopped operating: the links only produced 404 Not Found errors, and attempts to go to the main site only produced a page indicating that the domain was for sale.

    It was then that I remembered the Wayback Machine, and I went there to see if it was possible to get to an archived version of the site. Sure enough, there existed a snapshot of the site from 2006. The site itself looked to be an old online music store that at one time offered the album for sale. The album page was there and had been indexed by the Wayback Machine. Better still, the site had posted 5 of the tracks online, I’m guessing as samples, which were also indexed by the Wayback Machine and were available for download. Success! I was able to download the 5 sample tracks and play them on my computer.

    I don’t know if there’s a moral to this story. I guess if it’s anything, it’s that preserving these archives is important, especially for media under the control of gatekeepers that can pull it from distribution at any time. It’s certainly made me appreciate the important work that the Internet Archive does, and I have since made a small donation to them to allow that work to continue.

    I think it’s also fair to say that this story is not yet over. I don’t care how long it takes me: I’ll continue to track this album down until I’ve found it and am able to play it in its entirety.

    Official support for file embedding coming to Go

    I’m excited to see, via Golang Weekly, that official support for embedding static files in Go is being realised, with the final commit merged into the core a couple of days ago. This, along with the new file-system abstraction, will make it much easier to embed files in Go applications and make use of them within the application itself.

    One of the features I like about coding in Go is that the build artefacts are statically linked executables that can be distributed without any additional dependencies. This means that if I want to share something, all I need to do is give someone a single file that they can just run, without needing to worry about whether particular dependencies or runtimes are installed first.

    However, there are times when the application requires static files, and to maintain this simple form of distribution, I generally want to embed these files within the application itself. This comes up surprisingly often and is not something that was officially supported by the core language tools, meaning that this gap had to be filled by third parties. Given the number of tools available to do this, I can see that I’m not alone in needing it. And as great as it is to see the community step in to fill this gap, relying on an external tool complicates the build process a bit1: making sure the tool is installed, making sure that it is executed when the build is run, making sure that the tool is actively being maintained so that changes to the language will be supported going forward, etc.

    One other thing about these tools is that the API used to access the files is always slightly different, meaning that if there’s a need to change tools, you end up needing to change the code that accesses the embedded files.

    Now that embedding files is officially coming to the language itself, there is less need to rely on all of this. There’s no need to worry about various tools being installed on the machines that are building the application. And the fact that this feature will work hand in hand with the new file-system abstraction means that embedded files will be easier to work with within the codebase itself.
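
    Based on the accepted proposal, using it should look something like the sketch below. This is only my reading of the design — the directory name, port, and package layout are placeholders I’ve made up — so the details may differ slightly from what finally ships:

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    // The go:embed directive asks the compiler to bundle everything under
    // static/ into the binary, exposed as a read-only file system.
    //go:embed static/*
    var staticFiles embed.FS

    func main() {
        // Strip the "static/" prefix so the files are served from the site root.
        content, err := fs.Sub(staticFiles, "static")
        if err != nil {
            log.Fatal(err)
        }

        // Serve the embedded files; nothing needs to exist on disk at runtime.
        http.Handle("/", http.FileServer(http.FS(content)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }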

    So kudos to the core Go development team. I’m really looking forward to using this.


    1. Although not as much as many other languages. ↩︎

    Advice to those working with annotations in Preview

    For those of you using Preview in macOS for viewing an annotated PDF, if you need to move or delete the annotations in order to select the text, be sure to undo your changes prior to closing Preview. Otherwise your changes will be saved without asking you first.1

    This just happened to me. I had a PDF annotated with edits made with the iPad pencil, and I wanted to copy the actual text. The annotations seemed to sit on top of the text in an image layer, which meant that in order to select the text, I had to move or delete this layer first. I didn’t want the annotations mixed up with the ones on the other page, so I decided to delete this layer instead of moving it. This was a mistake.

    I copied the text and wanted to get the annotations back. I probably should have just pressed ⌘Z to undo my changes, but I saw “– Edited” in the title bar, so I assumed that if I just closed Preview, it would discard my changes and I would be able to get my annotations back by reopening the PDF. But as it turns out, after closing it and opening it again, the changes had been saved without asking me first, and my annotations were lost.

    Developers of macOS: this is a terrible user experience that needs to be fixed. Preview saving my changes out from under me has now resulted in data loss, the cardinal sin of any software. Either ask me before saving changes when the application is closed, support a notion of versions, or do something else. But do not just save my changes without asking me, and do not imply that Preview is aware of pending changes by showing “– Edited” in the title bar if it isn’t going to discard those changes, or confirm that they should be saved, when I close the app.

    Ugh, I need another coffee now.


    1. This is macOS Mojave. I hope this has been fixed in some way in later versions. ↩︎

    Doughnut Day 2020

    Good day today. From a high of 725 daily Covid-19 cases back in August, Victoria has just had 24 hours of zero new cases and zero deaths. This comes during a period of extensive testing in the north of the metropolitan area, part of a blitz to contain an outbreak. Labs have been processing tests late into the night, with not a single one so far coming back positive.

    As good as this news is, I’d imagine the government wants to remain cautious here. The easing of restrictions that was scheduled for yesterday has been delayed, I guess to make sure that contact tracers are on top of things in the north. As disappointing as this is, I can see why they did it. It makes sense to take advantage of the current situation to get as much information about where the virus is as they can. I don’t believe anyone wants to go back into lockdown a third time, so they really have one shot at this. The government is still confident that we are on track for lifting restrictions before November 1st. I guess we’ll see what happens when they give their briefing this morning, but after going through this for 4 months, I can wait a few more days.

    For the moment, it’s good seeing this result. Truth is, things were always touch and go in Victoria, even during the period of relatively free movement that we experienced in June, when we last had a day of zero new cases. Seeing this now, with the restricted movement and the testing blitz, gives me hope that we can keep this virus suppressed until a vaccine arrives.

    Update at 15:45: Results from the testing blitz in the northern suburbs have been trickling in all day, and so far still no positive cases. It looks like the Victorian government is happy with this result, because they have announced that the state will be moving to the third step of reopening on Wednesday.
