Leon Mika

📚 “The War of Art”, by Steven Pressfield. An excellent book about the creative process. Very easy read as well, can get through it in a weekend.

Good news is hard to come by recently, and the Stage 4 restrictions in Melbourne are anything but easy. But seeing 21 new cases yesterday, and 14 new cases today, the lowest in 3 months, is encouraging. Let's see how this last week of Stage 4 restrictions goes.

I used iPad OS’s markup feature on a multi-page PDF for the first time this morning. It works reasonably well, except that there’s no way to hide the overview, and it’s positioned where I usually rest my wrist to write. The palm rejection is not perfect either, so when I try to write something, it sometimes interprets my wrist as a tap, and sends me to another page.

I wish there was a way to hide the overview, or move it to the left side of the screen. It could even be configurable based on which hand you write with.

Getting screen capture working in Vivaldi on Fedora 32

Moving from a Mac Pro back to Linux for work, I’ve come to appreciate how well things just work out of the box in macOS. Things like Web RTC display capture, which is used for sharing the screen in browser-based video conferencing sites (and I think also in Slack, since it’s using Electron and, thus, the Blink rendering engine), work flawlessly in macOS, but proved to be a bit of trouble within Linux.

From my limited reading, it looks like this might be related to the use of Wayland, the new user-space video stack that is currently being built, and the corresponding security model. This exists alongside a new mechanism for acquiring audio and video feeds called PipeWire, but this is not enabled by default in Vivaldi, the browser I’m using.

Using the instructions found here, I think I’ve managed to fix this by:

  1. Going to chrome://flags
  2. Enabling “WebRTC PipeWire support”
  3. Restarting Vivaldi

I then went to a test WebRTC site to verify that it works (there is also one on MDN). After going through some security prompts to allow sharing of the screen, I was now able to see my desktop being displayed back to me.

I’m not sure how I can fix this in Electron apps like Slack. Prior to this fix, Vivaldi did allow sharing of individual windows, but this doesn’t seem possible in Slack at the moment. If I find a fix for this, I might update this post.

Now that we’re allowed to venture outside a bit more, I’d like to start my lunchtime walks again. However, the sun is starting to intensify so I’ll have to start wearing a hat again. This means my headphone situation will need to change.

First Foray Into Home Automation

After recently changing jobs, I’ve received a brand new Lenovo work laptop. As good as the laptop is, and it’s OK for a work laptop, it has one annoying feature. Whenever the laptop is plugged in and powered, there is a bright white LED that is always illuminated. Because I’m still working from home — and it is likely that after the pandemic I will be working from home at least a few days a week — and my desk is in my bedroom, having this white LED is no good for my sleep.

For the first few evenings, I've been unplugging the laptop prior to going to bed. I'd rather not use electrical tape to block out the LED: this is not my laptop and such tape would be ugly, and the LED itself is close to other ports, which would make tape placement a bit awkward. Plus, the LED does serve the useful purpose of indicating that the laptop is powered. It's just not useful indicating this fact at night. Unplugging the laptop works, but I'm not too keen on this solution long term: it's only going to be a matter of time before I unplug it one day, forget to plug it in the next, and eventually run out of juice when I need it the most.

Another solution for this problem is a dumb timer. I do own a timer — one with a circular clock that is configured by pressing in a black nub for each 15 minutes that you want the plug to be energised — and it could work in this scenario, but it does have some awkward properties. The principal one is that I'd like to ensure that the laptop is powered when I'm using it, and there could be times when I'm using it during the hours that I'm usually asleep, like when I'm responding to incidents or working late. The timer does have an override, but it's along the side of the plug itself, so in these cases I'd have to get under my desk to turn it on.

So I decided to take this opportunity to try out some home automation.

The Smart Plug

The way I plan to tackle this problem is by doing the following:

  • Connecting the laptop to a smart plug
  • Setting up a schedule so that the smart plug will automatically turn off at 10:00 in the evening, and on at 6:30 in the morning
  • Having a way to override the schedule if I need to turn the plug on outside those hours

The smart plug chosen for this is the TP-Link HS-100 Smart Wi-Fi Plug. It was not my first choice, but it was in stock and was delivered within a few days, well before the expected delivery date (good job, Australia Post).

The TP-Link HS-100 Smart Wi-Fi Plug

The plugs themselves are nothing remarkable. It's just a standard plug, with an LED indicating the current state of the plug and Wi-Fi connectivity. They're a little bulky, and they do encroach a bit on some of the adjacent plugs: I needed to move a few plugs around in the power board that I'm using. Fortunately there is some clearance between the prongs and the actual body of the device, which made it possible to position it so that it overlaps some of the other plugs with slimmer profiles. The relay within the plug is much quieter than I expected, which was a nice surprise.

Linking the smart plug up to the Wi-Fi was relatively painless, although I did need to download an app and create a new TP-Link Kasa Smart account. During the actual on-boarding, the app also asked for my location for some reason. It could have been to configure time-zones? I don't know, but it would have been nice for the app to disclose why it needed my location. After that, it was more or less what you'd expect: following the instructions within the app to plug the device in and turn it on, the smart plug started a Wi-Fi hotspot that the phone connected to. Once the pairing was complete, it was possible to turn the device on and off within the app.

Google Home with the smart plugs registered

Setting Up The Schedule

I first tried setting up the schedule for the smart plug in Google Home. First, I've got to say that doing something mildly complicated like this in a mobile app was annoying, and I wish Google published a web or desktop version of their Home management app so I could use a mouse and keyboard. But I had no trouble registering the smart plug in Google Home. It basically involved linking the Kasa Smart account with my Google account, and once that was done, the smart plug could be added to a room and was ready to go.

Setting up a schedule within Google Home involved creating a new "Scene", which expected information like trigger words and a spoken response for when the scene ran. There were also some built-in scenes, but they didn't seem suitable for my use case. The whole thing seems geared towards triggering the scene with a Google Home smart speaker (I just realised that the app and the smart speakers share the same name), and seems to assume that one is available. I don't have a smart speaker, and the prospect of the Google Assistant speaking when the scene is triggered did not appeal to me. It might have been possible to set it up the way I desired, but it felt like my use case was not exactly what this automation system is geared towards, so I abandoned this approach.

Fortunately the smart plugs integrate with IFTTT, so I turned to that next. After recovering my old account, I set out to configure the schedule.

Firstly, I have to say that the UX of IFTTT's site is dramatically different from what I remember, and not in a good way. It seems like they noticed that most of their users were accessing the site from their mobiles, and they redesigned the UI to work for them at the expense of desktop users. They reduced the information density of each page so that it takes three clicks to do anything, and cranked up the font size so much that every label or line of copy is larger than a header. This, mixed with a garish new colour scheme, makes the page physically hard to look at. I'd recommend that IFTTT's UX designers reconsider their decisions.

Usability aside, setting up the schedule was reasonably straightforward here as well. I first had to link the IFTTT and Kasa Smart accounts, which made the devices selectable within IFTTT. I then went about setting up an applet to turn off the plug at the scheduled time. Initially I set it up to turn it off 15 minutes from the current time, just so that I could test it. It was not successful on the first go and I had to ensure that the plug was selected within the applet correctly; but on the second go, it worked without any problem: at the scheduled time, the plug turned itself off. Most importantly of all, the state of the plug was properly reflected within the Google Home app and I was able to turn it back on from there.

One last thing about the schedules: IFTTT does not make this clear when you're setting up the applet, but the dates and times used by an applet are in the time-zone of your account. To check or change it, go to your account profile settings and it should be listed there.

I then had to create a second applet to turn the plug on at a scheduled time, which was just as easy to do. The entire schedule was set up in a few minutes, minus the test time, with two IFTTT applets. This leaves me with one remaining applet on the free IFTTT plan, which means that I’ll need to consider something else when I set up the other plug.

IFTTT with the two applets setup

After testing the entire set up end to end, and confirming that the override works, I reconfigured the schedule for the evening times and it was good to go.

That evening, the schedule ran without a hitch. The smart plug cut power to the laptop at 10:00 and the LED was extinguished, giving me the much-needed darkness for a good night's sleep. The next morning at 6:30, the smart plug turned on again and power was restored to the laptop. The only downside is that the smart plug itself has a green LED which, although not as distracting as the one on the laptop, is still visible during the night. Fortunately this is something I could easily fix with electrical tape.


So far, I’d say this set up has been successful. It’s been two nights now, and in both cases power to the laptop was turned off on schedule, and restored the next morning. The LED from the laptop no longer distracts me and I don’t have to manually unplug the laptop every evening. This is now something that I can forget, which is the ultimate indication of success.

I wonder if Oracle’s first change to TikTok, should they ever buy it, would be to add 5 different screens asking the user to sign up to “TikTok Enterprise” whenever they want to watch a video, and the only way around each one is a tiny link at the bottom of the screen.

Why yes, I am installing Java and MySQL.

On Ordered Lists in Markdown

One of the things I like about Markdown as a form of writing online is that ordered lists can simply begin with the prefix 1., and there is no need to update the leading number in the subsequent items. To produce the following list:

  1. First
  2. Second
  3. Third

One only needs to write:

1. First
1. Second
1. Third

or:

1. First
2. Second
3. Third

or even:

1. First
3. Second
2. Third

The one downside to this approach, unfortunately, is that there is no nice way to specify what the first ordinal should be. If I were to use 3. as the prefix of the first item, the generated ordered list would still begin at 1.

This means that there’s no nice way to continue lists that are separated by block elements. For example, let’s say I want to have a list of 4 items, then a paragraph of text or some other block element, then continue the list from 5. The only way to do so in “common-style” Markdown is to write the second list in HTML with an <ol start=5> tag:

<ol start=5>

It would be nice if this was representable within Markdown itself. Maybe by taking into account the first ordinal and just incrementing by 1 from there. For example:

5. Fifth
5. Sixth
5. Seventh

which would then render as:

  5. Fifth
  6. Sixth
  7. Seventh

“So what?” you might say. “You just demonstrated that this could be done in HTML.”

That’s true. However, I use wiki software with rich-text editors that don’t allow modifying the underlying HTML (they may have the ability to specify a “region” of HTML, but not a way to modify the underlying body text itself), and they use Markdown as a way of triggering formatting changes. For example, typing * twice will enable bold face, typing three ` characters will start a code block… and typing 1. will start an ordered list.

Changing the first ordinal or continuing the previous list might be considered an advanced operation that the developers of these wikis have not considered. But I can’t help wondering whether, had Markdown had this feature from the start, all these editors would have supported it in one form or another.

If Google does this to the Pixel 4, just what do they expect for the Pixel 5?

What is Google doing, cancelling the Pixel 4 after 6 months? They spent $1.1 billion buying HTC’s mobile division and stated that they plan to start making their own mobile chips, giving the impression that they are serious about producing decent, flagship hardware for Android. And then they go ahead and discontinue their current flagship phone after 6 months?

Look, I know that from a purely economic perspective, the Pixel line makes little sense. Android is not iOS. It doesn’t hold the prestigious high-end of the market, with the margins that come from it. But that’s not Google’s business. They’re an advertising company first, and a search company second. So I can understand that Android to them is more of a cost centre; the price of keeping access to their services open to mobile users.

But I had the impression that they also recognised that there exists a market of Android users who appreciate good quality hardware and a decent, stock-standard software stack with no shovelware, and are willing to pay a premium for it. It might not be a big market, that’s true. But if they’re serious about keeping Android around and want to keep these customers (you know, the ones with disposable income that advertisers love), they should continue to be a player in it. I guess it’s possible that they simply offload this to another device manufacturer like Nokia, but then they’re giving up any leverage for ensuring the good quality hardware that will attract these buyers.

As a Pixel owner myself, this move really concerns me. It’s getting increasingly harder to recommend Pixel phones to anyone, and I’m starting to wonder whether it’s time to consider something else.

John Gruber’s comments, after quoting a piece from Ars Technica on the Pixel 5 being slower than the Pixel 4:

If all of this is true, what phone is someone supposed to buy if they want top-shelf hardware and the pure no-junk Android experience?

If someone’s got an answer to this, please let me know.

You can tell it’s August in Melbourne as the days start to get noticeably longer, the blossoms are in bloom, and you start hearing blackbirds in the morning and evening. Lovely time of year.

I’m beginning to wonder, after looking at all the services on the IndieWeb Site Deaths page, if people will start to be more cautious about signing up to new services, making it harder for those attempting to start sustainable businesses online.

Idea for macOS: an option to install system updates on shutdown. This could replace the “in 1 hour” option. That option is not useful to me, particularly since the notification appears at the start of my session and I’d rather not disrupt my workflow.

On Suppression vs. Elimination

It was around the beginning of June, when the number of new Covid-19 cases for Victoria was around 10-20 a day, that there was a general feeling that suppression was working and that it was time to begin opening up. I will admit I took advantage of the looser restrictions, but I always wondered whether it would be better to remain closed for a little while longer and go for elimination. That was not the official strategy though: with testing and tracing up and running, the thinking went, as long as we know where the virus is, we can continue to roll back restrictions and achieve some semblance of normalcy.

Fast-forward to today and the daily number of cases is higher than what it was back in March, Melbourne is back under Stage 3 restrictions and I’m shopping on-line for masks.

It seems obvious to me that suppression as a strategy may not be enough. We may eventually (hopefully) get the virus tamped down once more, but it’s still out there and our efforts to keep it at bay are only as strong as our weakest link.

I think it’s time we go for elimination. It won’t be easy, but there are three reasons why I reckon it’s worth a shot:

  • Most of the other states in the country have effectively achieved elimination. Some of them have gone weeks without any new cases, and are cautiously in the process of opening up once again. However, this can only hold as long as the state borders remain closed to Victorians (and possibly soon to the New South Welsh), and I don’t see these states willingly throwing away their hard-won achievement just because the official strategy is suppression. If Victoria (and NSW) go for elimination, we can meet the other states where they are, making it a no-brainer to open up interstate travel once again, not to mention the trans-Tasman bubble with New Zealand.
  • It seems more economically stable over the long term. Economic activity is tied to confidence: people will only go out and spend money if they believe it’s safe to do so. Even when restrictions are rolled back, I’m doubtful people will be quick to flock to cafes and gyms if there’s a risk of another wave. Compare this with elimination: evidence from New Zealand shows that consumer spending is pretty much back to pre-pandemic levels, despite going harder during the initial lock-down.
  • It may be a way to win back the public’s confidence in the government. The Victorian government has taken a hit in the polls due to the mistakes that caused the current round of lock-downs. I can see rallying the public around the goal of elimination as a way to win them back. They could even use the current situation as a unique opportunity to achieve this, maybe by saying, “given that we’re already going through another round of lock-downs, let’s go for broke and remain locked down until we’ve eliminated this virus once and for all.” Now you have something that people can work towards, and the feeling that their current sacrifice is not for nothing if (when?) another wave comes through.

I’m aware that this is a post written by someone in a position of relative privilege. I haven’t lost my job, and I remain relatively healthy and financially secure. I also know that it will be expensive and will cause a fair bit more suffering for those with small businesses that will need to shut their doors. So I recognise that I don’t have all the facts, and this may not be feasible at all. But I also question the feasibility of maintaining a long-term suppression strategy until treatments or a vaccine become available: this is a tricky virus to handle.

In the end, I guess I’m just a bit disappointed by the lack of ambition in attempting this as a goal. It seems advantageous, especially now, to seize the moment and make our second round of lock-downs our last.

Remarks on Go's Error Handling using Facebook's SDK Crashes As a Framing Device

There are new reports of Facebook’s SDK crashing apps again due to server changes. The post above links to a Bugsnag article which explores the underlying cause: that’s worth a read.

I’m going to throw a shout-out to Go’s approach to error handling here. I’m not saying that this shows the superiority of Go over Objective C: these sorts of things can happen in any language. The difference I want to highlight is that Go treats error handling as part of the standard flow of the language, rather than the exceptional flow. This forces you to think about error conditions when you’re making calls to code that can fail.

This does result in some annoying code of the form:

result, err := doOperation()
if err != nil {
    return nil, err
}

result2, err := doSecondOperation(result)
if err != nil {
    return nil, err
}

// and so on

and there’s nothing stopping you from completely ignoring the error.

But there’s no way to call these two functions without dealing with the error in some way. That is, there’s no way to simply write doSecondOperation(doOperation()): you’re given an error and you have to do something with it. So you might as well handle it gracefully.

P.S. I should probably state that I know very little about Objective C. I do know that a fair number of APIs in AppKit and UIKit make use of completion handlers which can provide an error value, although to me it seems a little easier to ignore them than the error values in Go. I also know that Swift makes improvements here, forcing you to prefix calls that can fail with the try keyword. Again, this is not to rag on Objective C; rather, it’s a comment on the idioms of error handling in Go and how they could have prevented the app from crashing.

I’ve just realised the irony of posting a 624 word article about posting smaller articles. It was meant to be smaller, but it felt good telling the story of how I got here.

Signed Up To micro.blog

I’ve signed up with micro.blog in an attempt to post to the blog more frequently than I have been. The last post on my existing blog was in March, and it felt to me like it was starting to become a bit neglected. I think the main reason for the delay is that I feel the need to publish long-form articles, which involve a lot of work to write, review, etc. I will try to continue to do that, but I also want to start posting shorter articles more often.

Interesting story: I’d had this idea for a while, since the start of June. Back then my blog was a simple Hugo site managed in Git, and hosted within Google Cloud’s object store. I had a few posts there — these have been migrated to this site — and I also had a few ideas for posts in the pipeline. I knew I wanted to write more often, but I was starting to get a sense of the “overhead” involved in creating new posts. Writing doesn’t come naturally to me, and I think one of the barriers to posting was the amount of non-writing involved in doing so: things like checking out the latest copy, writing the post, pushing the branch holding the draft, reviewing the PR (not that there was much to review), merging it, checking out master and running “make” to generate and deploy it. No step is hard in itself; I do them many times a day at work. But it’s all overhead that makes the actual act of posting just a little bit harder, and I was beginning to realise that if I wanted to write more often, I needed a way to do so effortlessly.

So I committed the second cardinal sin of programming and spent a few weeks making my own CMS (I also came close to committing the first cardinal sin of programming — making my own text editor — much earlier in my programming life, but luckily lost interest after starting). The aim was to set up a service and workflow that would make it easier to post smaller articles, more often, and from any machine that I was currently on. I also got swept away hearing others discuss the technologies behind their own blogging engines, plus their approach to “owning the entire stack”, as it were. Plus, I cannot resist starting a new project, especially now, when it’s difficult doing things outside or with other people around.

However, as I got closer to “launch”, I began to consider the amount of work involved in maintaining it and extending it to support the things I’d want further down the line, like extra pages. This is a classic problem of mine. I get a sense of enthusiasm as I see the core features come together… and then I think about the work needed to support them afterwards, and I completely lose interest. The project then begins to deteriorate as additional hacks are added to support these things, and it becomes less maintainable and fun to work on over time.

It also served as a great distraction: what better way to avoid writing than to work on an application that would reduce the barriers that inhibit me from writing?

So, I’m doing the smart thing: I’ve stopped working on it and have moved to micro.blog. Being a subscriber to Manton Reece’s feed, I see the amount of effort and care he puts into this platform, something that I don’t see myself doing for my own CMS. I can only hope this will result in me publishing posts more frequently; we’ll see. But now I have no more excuses not to write.

Features From Android In iOS 14, and The Enthusiasm Gap

John Gruber on Daring Fireball, commenting on an article about features in iOS 14 that Android had first:

Do you get the sense that Google, company-wide, is all that interested in Android? I don’t. Both as the steward of the software platform and as the maker of Pixel hardware, it seems like Google is losing interest in Android. Flagship Android hardware makers sure are interested in Android, but they can’t move the Android developer ecosystem — only Google can.

Apple, institutionally, is as attentive to the iPhone and iOS as it has ever been. I think Google, institutionally, is bored with Android.

As an Android user, and occasional dabbler in Android app development, this concerns me if it is true. I doubt Google will completely give up on Android, but given the shutdowns of Google services over the years, it’s clear that there are very few things Google is “married” to in the long term.

With Android’s success and its raison d’être, one could argue that Google has room to take a more relaxed attitude towards advancing Android as a platform, so long as cheap phones are still being bought and people are still using them. But I certainly hope that they do not completely abandon it.

YouTube Music and Uploaded Music Libraries

Ron Amadeo, from Ars Technica:

YouTube Music is really only for The Music Renter—someone who wants to pay $10 per month, every month, forever, for “Music Premium.” This fee is to buy a monthly streaming license for music you do not own, and I’d imagine a good portion of it goes to music companies. When you don’t pay this rental fee, YouTube Music feels like a demo app.

I prefer to own my music, and I own a lot of independent music that wouldn’t be covered under this major-record-label-streaming-license anyway, so I have no interest in this service. The problem is YouTube Music also locks regular music-playback features behind this monthly rental fee, even for music you’ve uploaded to the service. The biggest offense is that you can’t use Google Cast without paying the rental fee, but when it’s music that I own and a speaker that I own, that’s really not OK. Google Music did not do this.

These last couple of weeks, I’ve actually been working on a personal music app that will play back music uploaded to S3. It was mainly for listening to music that I composed myself, although being able to listen to music that I’ve purchased and ripped to MP3 was a key motivating factor here as well. I was aware that such services existed, so I occasionally wondered if my time could be better spent doing something else. Now, I feel like I’ve made the right choice here.

On Go’s Type Parameters Proposal

The developers of Go have released a new draft proposal for type parameters. The biggest change is the removal of the concept of contracts, which complicated the previous proposal somewhat, replacing it with interfaces to express the same thing. You can read the latest proposal here.

I think they’re starting to reach a great balance between what currently exists in the language, and the features required to make a useful type parameter system. The use of interfaces to constrain the type, that is declare the operators that a type must implement in order to be used as a type parameter for a function or struct, makes total sense. It also makes moving to type parameters in some areas of the standard library trivial. For example, the sort.Sort function prototype:

func Sort(data Interface)

can simply be written as:

func Sort(type T Interface)(data T)

I do have some minor concerns though. The biggest one is the use of interfaces to express constraints related to operators, which are expressed as type lists. I think listing the types that a particular type parameter can instantiate makes sense. It dramatically simplifies the process of expressing a constraint based on the operators a particular type supports. However, using the concept of interfaces for this purpose seems a little strange, especially so when these interfaces cannot be used in areas other than type constraints. To be fair, it seems like they recognise this, and I’m suspecting that in practice these interfaces will be defined in a package that can simply be used, thereby not requiring us to deal with them directly unless we need to.
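As it happens, this idea survived in spirit: when generics eventually shipped in Go 1.18, type lists became “union elements” within interfaces, and the type parameter list moved to square brackets. For a concrete, runnable illustration of an operator constraint, here is a minimal sketch in the released syntax (Ordered and Min are my own illustrative definitions, not taken from the proposal):

```go
package main

import "fmt"

// Ordered lists the types that support the < operator. In the 2020
// draft this was a "type list" interface; released Go writes it with
// union elements, with ~ admitting types derived from each listed one.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min works for any type satisfying the Ordered constraint; the
// compiler knows < is valid for T because of the constraint.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))
	fmt.Println(Min("b", "a"))
}
```

Note that, just as the draft anticipated, Ordered can only be used as a constraint: it cannot be the type of an ordinary variable.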

But all in all, this looks promising. It is starting to feel like the design is coming together, with the rough edges starting to be smoothed out. I appreciate the level of careful consideration the core Go developers are exhibiting in this process. This is after all a large change to the language, and they only have one real chance at this. Sure we have to wait for it, but especially in language design, mistakes are forever.

Don't Get it Now

It’s scary times at the moment. The coronavirus (SARS-CoV-2 and Covid-19) is raging through Europe at this moment, with hundreds of people dying in Italy, Spain and France, and most of those countries, along with the US, in lock-down. The hospital system is currently not equipped to handle the peak number of patients that will require intensive care: doctors from Italy, France and New York are telling stories about how they have to choose who lives and dies, and I’m fearful that we may start hearing stories like that here. There is currently no cure and no treatment. There have been models indicating that even if we take steps to suppress the virus now, there will be continuous surges in outbreaks until a vaccine is ready in 12 to 18 months, suggesting that we may need to be in a state of lock-down, or at the very least rigid social distancing, until August 2021 at the latest. The WHO reckons that a majority of the world’s population will get infected over the next year.

I’m not a doctor, nor an epidemiologist. I cannot begin to suggest what we should do as a society. But I’m going to give a few thoughts as to how I plan to weather this storm.

I think at this current stage, our enemy, along with the virus, is time. I hope I don’t have to tell you that the virus is moving through the world’s population now, even as we speak. But humanity is not standing still either:

  • We are researching the hell out of this thing. One such example: on Tuesday we learnt how the body reacts to the virus, which could help with understanding how best to treat it. Along with this, there are still some very important unanswered questions about the actual death rate and transmission rate, as well as whether herd immunity will work, that we’ll hopefully get answers to soon.
  • We’ve started clinical trials of potential treatments, and a vaccine. It’s still early days at the moment, and we probably won’t have anything ready soon, but the early indications sound promising.
  • And, if the above should fail, we are (or at least should be) ramping up our hospital capacity to handle the influx of patients, meaning that if someone should unfortunately die from this, it won’t be because they didn’t have a bed.

So my mantra for the next few months is “don’t get it now.” Wait to get infected for as long as you can. The ideal case is not to catch it at all, but if we’re destined to get infected, it’s best to get infected later, when some of the points above have been addressed, rather than sooner when they have not. This will obviously mean sacrificing things like going to the gym, going out for coffee, or seeing friends and family. But I believe that this is a price worth paying, especially if the alternative is losing someone you love, or potentially your own life.

So that’s my current strategy. I don’t know if it will work, and as things develop it may need refining. But after thinking about this for the past few weeks, it’s the best strategy I can come up with. And I think it will help me get through this.

P.S. A lot of my thoughts on this came from reading this article by Tomas Pueyo. He’s obviously more knowledgeable than I am about how we should act on this as a whole. It is worth your time reading it.

P.P.S. I spoke quite abstractly about the health system, but it’s important to remember that these systems are made up of people: doctors, nurses and paramedics on the front line, along with the researchers, manufacturers and logistics workers who support them. At this time, they are giving their all, and then some, to help us through this crisis. Once this is over, I think we owe every single one of these individuals a beer.

Reflections On Virus Scanners on Windows

I was listening to Episode 277 of The Talk Show in which John Gruber was discussing virus scanners on Apple Macs with John Moltz. The discussion turned briefly to the state of virus scanners on Windows, and how invasive these commercial scanners were compared to Windows Defender provided by Microsoft.

Hearing this discussion brought back memories of my experience with virus scanners in the days of Windows XP and earlier. There was no Windows Defender back then, so we had to have a license for one of the commercial scanners sold to home users at the time, such as Norton AntiVirus. Given how insecure Windows was back then, it was one of the first things we had to put on a fresh install. And these things certainly slowed Windows down. But we recognised that it was necessary, and after a couple of weeks we eventually got used to it.

However, after setting up a new install, there was a brief period when we got to experience Windows without a virus scanner, and the difference in the user experience was significant. The boot process was fast, the UI snappy, and applications quick to launch. In fact it was so good that it felt strange and slightly uneasy, knowing that there was no virus scanner protecting the system. Only after the virus scanner was installed, with the resulting hit in performance, did it feel safe to use Windows again. It was not until I listened to this episode that I realised how perverse this feeling was.

I cannot imagine how it must have felt for those Microsoft developers who worked hard on providing a responsive user experience, only to see it slowed down on almost every machine by a virus scanner. I’m sure they knew that, due to the prevalence of malware for Windows back then, it was necessary. Still, I can’t imagine they would have been thrilled about it.

New Home of Steve Yegge's Rant About Google Services

I’ve always enjoyed this rant from Steve Yegge about how Google differed from Amazon in how they develop their services. I’m not sure if it’s still applicable, but it was quite interesting to hear how the two companies differed in their approach to building and releasing products. After hearing that Google+ was being shut down, I wondered what would happen to the rant, and whether it would be lost to time. Fortunately, someone saved it.

For those of you who haven’t read any of Steve’s other blog posts, please check out his current blog, plus several of his other Drunken Blog Rants. They are well worth your time.

Five Common Data Stores and When to Use Them

Very interesting post on the Shopify Engineering Blog on the differences between five types of data stores available to developers, and the circumstances under which each should be used.

I find it tricky to decide on the best technology for storing data for a particular project. I guess the important thing is to figure out, as best you can, how the data is going to be used (i.e. queried). If you know that, the decision should be easy once you know what’s out there, and this blog post certainly helps in that regard. If you don’t, the next best thing is to find the option that gives you the most flexibility, hopefully without too much loss in performance.