Long Form Posts

    Reflections On Writing On The Web

    I fell into a bit of a rabbit hole about writing and publishing online yesterday after reading this article from Preetam Nath and this article from James Clear. I’ve been thinking about creating and publishing on the web for a little while now, which is probably why these two articles resonated with me.

    These articles highlight the importance of creating and publishing regardless of what the topic is. There have been a few things that I’ve been wanting to share but I haven’t done so, probably because I worry about what other people think. The interesting thing about that line of thinking is that I tend to enjoy reading posts from other people as they go about their lives. I guess that was the original intention of blogging, anyway.

    There have also been times during the past week that I’ve been craving content, either from Twitter, Micro.blog or the various RSS feeds that I read. And there have been times when I’ve caught up with everything that I follow, and nothing happens for a while. I think to myself, “when will someone post something? I need to be distracted for a while.” I think I need to remember that someone needs to create that content in order for it to be consumed, and although it’s much easier to consume content than to produce it, I should not feel entitled to it or expect others to amuse me.

    The interesting thing about these thoughts is that they join a confluence of other changes to my daily work setup that have happened recently. I used to write in my Day One journal almost every day, but since moving to Linux for work that has proved a little difficult to maintain. It might be that more of my journalling will go here instead, given that micro.blog provides a nice, cross-platform interface for writing entries of any size.

    Unit Tests and Verifying Mocks

    I’m working with a unit test that uses mocks in which every method in the mock is verified after the method under test is called, even if it is not relevant to the test. Furthermore, the tear-down method verifies that every dependent service has no more interactions, which means that removing a verification that is not relevant to the specific test case will cause the test to fail.

    Please do not do this. It makes modifying the tests really difficult and results in really long unit tests that hide what the test is trying to assert. It also makes it harder to create new tests to verify a particular behaviour, as you find yourself copying all the verification code that is not relevant to the case that you’re trying to test for.

    In my opinion, tests should clearly demonstrate the specific behaviour that you’re trying to verify, and should only include verification of mocks that are directly related to that case. Writing tests that are effectively photo-negatives of the method being tested, one in which the dependent services are verified instead of called, is not a good practice for unit testing.

    Instead, have multiple, smaller unit tests that each assert a particular behaviour, and only verify the mocks that are explicitly required. You gain the coverage by having all these unit tests effectively overlap the various paths a particular method will take. But the important benefit is that it results in more maintainable tests that are easier to work with. That makes it easier to write tests, which means you find yourself doing so more often. Path of least resistance and all that.
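    To make this concrete, here’s a minimal sketch in Go of the style I’m advocating (the Mailer interface, the fake and the Notify function are all hypothetical): a small hand-rolled fake, one behaviour per test, and only the interaction that matters gets verified.

    package notifier

    import "testing"

    // Mailer is the dependency we want to stand in for.
    type Mailer interface {
        Send(to, body string) error
    }

    // fakeMailer records calls so that each test can verify only the
    // interactions it actually cares about.
    type fakeMailer struct {
        sentTo []string
    }

    func (f *fakeMailer) Send(to, body string) error {
        f.sentTo = append(f.sentTo, to)
        return nil
    }

    // Notify is a stand-in for the method under test.
    func Notify(m Mailer, user string) error {
        return m.Send(user, "you have a new message")
    }

    // One behaviour, one test: only the Send interaction is verified.
    func TestNotifySendsToUser(t *testing.T) {
        mailer := &fakeMailer{}

        if err := Notify(mailer, "alice@example.com"); err != nil {
            t.Fatalf("Notify returned error: %v", err)
        }

        if len(mailer.sentTo) != 1 || mailer.sentTo[0] != "alice@example.com" {
            t.Errorf("expected one mail to alice@example.com, got %v", mailer.sentTo)
        }
    }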

    A Database Client Wishlist

    I’ve recently started a new job so I’ve been spending a bit of time trying to become familiar with how the relational databases are structured. Usually when I’m doing any database work, I tend to use the CLI clients like mysql or psql. I tend to prefer them, not only because they’re usually easy to use over SSH, but because the REPL is a nice interaction model when querying data: you type a query, and the results appear directly below it. The CLI tools do have a few drawbacks though. Dealing with large result sets or browsing the schema tends to be harder, which makes it difficult when dealing with an unfamiliar database.

    So I’ve been finding myself using the GUI database browsers more, like DataGrip or MySQL Workbench. It is much easier and nicer to navigate the schema using these, along with dealing with large result sets, but they do remove the connection between a query and the associated results. The queries are usually entered in an editor-like console, like those used to enter code, and the results are usually in another window panel, or in a separate tab. This mode of interaction has nothing like the recency or locality between the query and its results that you get from a CLI.

    While working with both of these tools and seeing their shortcomings, I’ve been casually wondering what a decent database client would need to have. I think it would need these attributes in some prominent way (this covers the complaints listed above, but also addresses some others that I think would help):

    Results appearing below queries: I think this tool will need an interaction model similar to the CLI tools. There is so much benefit in seeing the results directly below the query that produced them. Anything other than this is likely to result in a situation where I’ll be looking at seven different queries and wondering which of them produced the single result set that I see.

    An easy way to view, filter and export large result sets: Although the interaction should be closer to the CLI, there must be a way to deal with large queries and result sets, something that the GUI tools do really well. There should also be a way to export them to CSV files or something similar without having to remember the appropriate copy command.

    Some sort of snippet support and persistent scroll-back: This one is best summarised as “whatever you find yourself copying and pasting into notepad”. The ability to store snippets and saved queries would save time trying to find or rewrite the big, complex queries. And persistent scroll-back of previously executed queries, with their results, would help with maintaining my train of thought while investigating something. This comes in especially handy when the investigation spans multiple days.

    A quick way to annotate queries or results: Big long SQL queries eventually look the same to me after a while, so it would also be nice to add inline comments or notes to remind myself what the results actually represent.

    An easy way to browse the schema: This could be a tree-like structure similar to all the GUI tools, which would make browsing the schema really easy. At a minimum, it should offer a consistent set of meta-commands, such as listing the tables in a database or describing a table’s columns.

    An easy way to run automation tasks: Finally, some form of scripting language to be able to “orchestrate” multiple queries without having to formulate one large SQL query, or copy and paste result sets around. It’s true that writing an external tool to do this is also possible (something like the sketch below), but avoiding the context switch would be a huge benefit if this was available from within the app. It doesn’t have to be full featured either; in fact, it’s probably better if it isn’t.
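    For what it’s worth, the kind of external tool I have in mind would look something like this Go sketch using database/sql (the connection string, tables and queries are all made up). It works, but it involves exactly the context switch I’d like to avoid:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/go-sql-driver/mysql"
    )

    func main() {
        // The connection string, tables and columns here are placeholders.
        db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/shop")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // First query: find the customers of interest.
        rows, err := db.Query("SELECT id FROM customers WHERE created_at > ?", "2020-01-01")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        // Follow-up query per row: the sort of thing that would otherwise mean
        // copying result sets between console tabs.
        for rows.Next() {
            var id int64
            if err := rows.Scan(&id); err != nil {
                log.Fatal(err)
            }
            var total float64
            err := db.QueryRow("SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?", id).Scan(&total)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("customer %d: %.2f\n", id, total)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }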

    It would be interesting to explore this further. I think the last thing I need now is another project to work on, but maybe over the weekend I might start prototyping this to see if the workflow makes sense.

    Sharing links to private podcast episodes

    There have been times when I’ve wanted to share a link to an episode of a podcast that I pay for, but I’m hesitant to do so as the feed is private and unique to my account. The episode is also available in the public feed, but has been trimmed as an incentive for listeners to pay for the show. I can always find the episode in the public feed and share that, but I’m wondering if there’s a better way to handle this.

    How do other podcast listeners share links to episodes from private feeds that also have a public version? Is there something in the RSS standard [1] that allows podcast producers to link a private episode to the same one in the public feed? If so, do the major podcast players, specifically Pocketcasts, honour this link when sharing an episode?

    I’m asking as a podcast listener: I don’t have a podcast myself (yet).


    [1] “Standard” is probably not the right word here but let’s go with it for the moment.

    Let's hold the line, Melbourne. We've got this.

    Today is a good day. Melbourne’s 14-day daily Covid-19 case average is now 29.4, which is better than the 30 to 50 band required to move to the next stage of reopening. Seeing the fruits of our collective sacrifice, bringing the daily case numbers from a peak of around 740 in August down to the 11 we saw on Monday, makes me proud to be a Melburnian.

    As much as I’d like things to reopen sooner than planned, I think we should hold the line for as long as we possibly can. The potential prizes for doing so – the crushing of the virus, the ability to travel interstate again, the chance to eat at restaurants without fear of infection, the chance for a normalish Christmas and summer – are within reach. I know that’s easy for me to say as someone who has the ability to work from home, and I completely recognise that there are those who are suffering right now, unable to work at all. But just as the darkest hour comes before the dawn, so too will the taste of victory and accomplishment be all the sweeter when we finally crush this virus and meet the rest of the country where they are. To rush this, to reopen too early, and to see our effort thrown away would be upsetting.

    Let’s hold out that little bit longer, Melbourne. We’ve got this.

    Getting screen capture working in Vivaldi on Fedora 32

    Moving from a Mac Pro back to Linux for work, I’ve come to appreciate how well things just work out of the box in macOS. Things like WebRTC display capture, which is used for sharing the screen in browser-based video conferencing sites (and I think also in Slack, since it’s using Electron and, thus, the Blink rendering engine), work flawlessly in macOS, but proved to be a bit of trouble on Linux.

    From my limited reading, it looks like this might be related to the use of Wayland, the new display server protocol that is currently being built out, and its corresponding security model. Alongside it is a new mechanism for acquiring audio and video feeds called PipeWire, but support for this is not enabled by default in Vivaldi, the browser I’m using.

    Using the instructions found here, I think I’ve managed to fix this by:

    1. Going to chrome://flags
    2. Enabling “WebRTC PipeWire support”
    3. Restarting Vivaldi

    I then went to a test WebRTC site to verify that it works (there is also one on MDC). After going through some security prompts to allow sharing of the screen, I was now able to see my desktop being displayed back to me.

    I’m not sure how I can fix this in Electron apps like Slack. Prior to this fix, Vivaldi did allow sharing of individual windows, but this doesn’t seem possible in Slack at the moment. If I find a fix for this, I might update this post.

    First Foray Into Home Automation

    After recently changing jobs, I’ve received a brand new Lenovo work laptop. As good as the laptop is (and it’s OK, for a work laptop), it has one annoying feature. Whenever the laptop is plugged in and powered, there is a bright white LED that is always illuminated. Because I’m still working from home — and it is likely that after the pandemic I will be working from home at least a few days a week — and my desk is in my bedroom, having this white LED on is no good for my sleep.

    For the first few evenings, I’ve been unplugging the laptop prior to going to bed. I’d rather not use electrical tape to block out the LED: this is not my laptop, such tape would be ugly, and the LED itself is close to other ports, which would make tape placement a bit awkward. Plus, the LED does serve the useful purpose of indicating that the laptop is powered; it’s just not useful to be indicating this fact at night. Unplugging the laptop works, but I’m not too keen on this solution long term: it’s only going to be a matter of time before I unplug it one day, forget to plug it in the next, and eventually run out of juice when I need it the most.

    Another solution for this problem is a dumb timer. I do own a timer — one that has a circular clock that is configured by pressing in a black nub for each 15 minutes that you want the plug to be energised — and it could work in this scenario, but it does have some awkward properties. The principal one is that I’d like to ensure that the laptop is powered when I’m using it, and there could be times when I’m using it during the hours that I’m usually asleep, like when I’m responding to incidents or working late. The timer does have an override, but it’s along the side of the plug itself, so in these cases I’d have to get under my desk to turn it on.

    So I decided to take this opportunity to try out some home automation.

    The Smart Plug

    The way I plan to tackle this problem is by doing the following:

    • Connecting the laptop to a smart plug
    • Setting up a schedule so that the smart plug will automatically turn off at 10:00 in the evening, and on at 6:30 in the morning
    • Having a way to override the schedule if I need to turn the plug on outside those hours

    The smart plug chosen for this is the TP-Link HS-100 Smart Wi-Fi Plug. It was not my first choice, but it was in stock and was delivered within a few days, well before the expected delivery date (good job, Australia Post).

    The TP-Link HS-100 smart plug

    The plugs themselves are nothing remarkable. It’s just a standard plug, with an LED indicating the current state of the plug and Wi-Fi connectivity. They’re a little bulky, and they do encroach a bit on some of the adjacent plugs: I needed to move a few plugs around in the power board that I’m using. Fortunately there is some clearance between the prongs and the actual body of the device, which made it possible to position it so that it overlaps some of the other plugs with slimmer profiles. The relay within the plug is much quieter than I expected, which was a nice surprise.

    Linking the smart plug up to the Wi-Fi was relatively painless, although I did need to download an app and create a new TP-Link Kasa Smart account. During the actual on-boarding, the app also asked for my location for some reason. It could have been to configure time-zones? I don’t know, but it would have been nice for the app to disclose why it needed my location. After that, it was more or less what you’d expect: following the instructions within the app, I plugged the device in and turned it on, and the smart plug started a Wi-Fi hotspot that the phone connected to. Once the pairing was complete, it was possible to turn the device on and off from within the app.

    Google Home with the smart plugs registered

    Setting Up The Schedule

    I first tried setting up the schedule for the smart plug in Google Home. First, I’ve got to say that doing something mildly complicated like this in a mobile app was annoying, and I wish Google published a web or desktop version of their Home management app so I could use a mouse and keyboard. But I had no trouble registering the smart plug in Google Home. It basically involved linking the Kasa Smart account with my Google account, and once that was done, the smart plug could be added to a room and was ready to go.

    Setting up a schedule within Google Home involved creating a new “Scene”, which expected information like trigger words and a spoken response for when the scene ran. There were also some built-in scenes, but they didn’t seem suitable for my use case. The whole thing seems geared towards triggering the scene with a Google Home smart speaker (I just realised that the app and the smart speakers share the same name), and seems to assume that one is available. I don’t have a smart speaker, and the prospect of the Google Assistant speaking when the scene is triggered did not appeal to me. It might have been possible to set it up the way I desired, but it felt like my use case was not exactly what this automation system is geared towards, so I abandoned pursuing this approach any further.

    Fortunately the smart plugs integrate with IFTTT, so I turned to that next. After recovering my old account, I set out to configure the schedule.

    Firstly, I have to say that the UX of IFTTT’s site is dramatically different from what I remember, and not in a good way. It seems like they noticed that most of their users were accessing the site from their mobiles, and they redesigned the UI to work for them at the expense of desktop users. They reduced the information density of each page so much that it takes three clicks to do anything, and cranked up the font size so much that every label or line of copy is larger than a header. This, mixed with a garish new colour scheme, made the page physically hard to look at. I’d recommend that IFTTT’s UX designers reconsider their design decisions.

    Usability aside, setting up the schedule was reasonably straightforward here as well. I first had to link the IFTTT and Kasa Smart accounts, which made the devices selectable within IFTTT. I then went about setting up an applet to turn off the plug at the scheduled time. Initially I set it up to turn it off 15 minutes from the current time, just so that I could test it. It was not successful on the first go, and I had to ensure that the plug was correctly selected within the applet; but on the second go, it worked without any problem: at the scheduled time, the plug turned itself off. Most importantly of all, the state of the plug was properly reflected within the Google Home app, and I was able to turn it back on from there.

    One last thing about the schedules: IFTTT does not make this clear when you’re setting up the applet, but the dates and times used by an applet are in the time-zone of your account. To check or change it, go to your account profile settings and it should be listed there.

    I then had to create a second applet to turn the plug on at a scheduled time, which was just as easy to do. The entire schedule was set up in a few minutes, minus the test time, with two IFTTT applets. This leaves me with one remaining applet on the free IFTTT plan, which means that I’ll need to consider something else when I set up the other plug.

    IFTTT with the two applets set up

    After testing the entire set up end to end, and confirming that the override works, I reconfigured the schedule for the evening times and it was good to go.

    That evening, the schedule ran without a hitch. The smart plug cut power to the laptop at 10:00 and the LED was extinguished, giving me the much-needed darkness for a good night’s sleep. The next morning at 6:30, the smart plug turned on again and power was restored to the laptop. The only downside is that the smart plug itself has a green LED which, although not as distracting as the one on the laptop, is still visible during the night. Fortunately this is something I can easily fix with electrical tape.

    Summary

    So far, I’d say this set up has been successful. It’s been two nights now, and in both cases power to the laptop was turned off on schedule, and restored the next morning. The LED from the laptop no longer distracts me and I don’t have to manually unplug the laptop every evening. This is now something that I can forget, which is the ultimate indication of success.

    On Ordered Lists in Markdown

    One of the things I like about Markdown as a format for writing online is that ordered lists can simply begin with the prefix 1., and there is no need to update the leading number in the subsequent items. To produce the following list:

    1. First
    2. Second
    3. Third

    One only needs to write:

    1. First
    1. Second
    1. Third
    

    or:

    1. First
    2. Second
    3. Third
    

    or even:

    1. First
    3. Second
    2. Third
    

    The one downside to this approach, unfortunately, is that there is no nice way to specify what the first ordinal should be. If I were to use 3. as the prefix of the first item, the generated ordered list would still begin at 1.

    This means that there’s no nice way to continue lists that are separated by block elements. For example, let’s say I want to have a list of 4 items, then a paragraph of text or some other block element, then continue the list from 5. The only way to do so in “common-style” Markdown is to write the second list in HTML with an <ol start=5> tag:

    <ol start=5>
      <li>Fifth</li>
      <li>Sixth</li>
      <li>Seventh</li>  
    </ol>
    

    It would be nice if this was representable within Markdown itself. Maybe by taking into account the first ordinal and just incrementing by 1 from there. For example:

    5. Fifth
    5. Sixth
    5. Seventh
    

    becomes

    1. Fifth
    2. Sixth
    3. Seventh

    “So what?”, you might say. “You just demonstrated that this could be done in HTML.”

    That’s true; however, I use wiki software with rich-text editors that don’t allow modifying the underlying HTML (they may have the ability to specify a “region” of HTML, but not a way to modify the underlying body text itself), and they use Markdown as a way of triggering formatting changes. For example, typing * twice will enable bold face, typing three ` will start a code block… and typing 1. will start an ordered list.

    Changing the first ordinal or continuing the previous list might be considered an advanced operation that the developers of these wikis have not considered. But I can’t help wondering whether, if Markdown had had this feature from the start, all these editors would have supported it in one form or another.

    If Google does this to the Pixel 4, just what do they expect for the Pixel 5?

    What is Google doing cancelling the Pixel 4 after 6 months? They spent $1.1 billion buying part of HTC’s mobile division and stated that they plan to start making their own mobile chips, giving the impression that they are serious about producing decent, flagship hardware for Android. And then they go ahead and discontinue their current flagship phone after 6 months?

    Look, I know that from a purely economic perspective, the Pixel line makes little sense. Android is not iOS. It doesn’t hold the prestigious high end of the market, with the margins that come from it. But that’s not Google’s business. They’re an advertising company first, and a search company second. So I can understand that Android, to them, is more of a cost centre; the price of keeping access to their services open to mobile users.

    But I had the impression that they also recognised that there exists a market of Android users who appreciate good quality hardware and a decent, stock-standard software stack with no shovelware, and are willing to pay a premium for it. It might not be a big market, that’s true. But if they’re serious about keeping Android around and want to keep these customers (you know, the ones with disposable income that advertisers love), they should continue to be a player in it. I guess it’s possible that they could simply offload this to another device manufacturer like Nokia, but then they’d be giving up any leverage over ensuring the good quality hardware that attracts these buyers.

    As a Pixel owner myself, this move really concerns me. It’s getting increasingly harder to recommend Pixel phones to anyone, and I’m starting to wonder whether it’s time to consider something else.

    On Suppression vs. Elimination

    It was around the beginning of June, when the number of new Covid-19 cases for Victoria was around 10-20 a day, that there was a general feeling that suppression was working and that it was time to begin opening up. I will admit I took advantage of the looser restrictions, but I always wondered whether it would be better to remain closed for a little while longer and go for elimination. That was not the official strategy though: we had testing and tracing up and running, and as long as we knew where the virus was, we could continue to roll back restrictions and achieve some semblance of normalcy.

    Fast-forward to today and the daily number of cases is higher than what it was back in March, Melbourne is back under Stage 3 restrictions and I’m shopping on-line for masks.

    It seems obvious to me that suppression as a strategy may not be enough. We may eventually (hopefully) get the virus tamped down once more, but it’s still out there and our efforts to keep it at bay are only as strong as our weakest link.

    I think it’s time we go for elimination. It won’t be easy, but there are three reasons why I reckon it’s worth a shot:

    • Most of the other states in the country have effectively achieved elimination. Some of them have gone weeks without any new cases, and are cautiously in the process of opening up once again. However, this can only hold as long as the state borders remain closed to Victorians (and possibly soon to the New South Welsh), and I don’t see these states willingly throwing away their hard-won achievement just because the official strategy is suppression. If Victoria (and NSW) go for elimination, we can meet the other states where they are, making it a no-brainer to open up interstate travel once again, not to mention the trans-Tasman bubble with New Zealand.
    • It seems more economically stable over the long term. Economic activity is tied to confidence: people will only go out and spend money if they believe it’s safe to do so. Even when restrictions are rolled-back, I’m doubtful people will be quick to flock to cafes and gyms if there’s a risk of another wave. Compare this with elimination: evidence from New Zealand shows that consumer spending is pretty much back to pre-pandemic levels, despite going harder during the initial lock-down.
    • It may be a way to win back the public’s confidence in the government. The Victorian government has taken a hit in the polls due to the mistakes that caused the current round of lock-downs. I can see rallying the public around the goal of elimination as a way to win them back. You could even use the current situation as a unique opportunity to achieve this, maybe by saying, “given that we’re already going through another round of lock-downs, let’s go for broke and remain locked down until we’ve eliminated this virus once and for all.” Now you have something that people can work towards, and the feeling that their current sacrifice is not for nothing if (when?) another wave comes through.

    I’m aware that this is a post written by someone who is in a position of relative privilege. I haven’t lost my job, and I remain relatively healthy and financially secure. I also know that it will be expensive and will cause a fair bit more suffering for those with small businesses that will need to shut their doors. So I recognise that I don’t have all the facts, and this may not be feasible at all. But I also question the feasibility of maintaining a long-term suppression strategy until treatments or a vaccine become available: this is a tricky virus to handle.

    In the end, I guess I’m just a bit disappointed by the lack of ambition in attempting this as a goal. It seems advantageous, especially now, to seize the moment and make our second round of lock-downs our last.

    Remarks on Go's Error Handling using Facebook's SDK Crashes As a Framing Device

    There are new reports of Facebook’s SDK crashing apps again due to server changes. The post above links to a Bugsnag article that explores the underlying cause: that’s worth a read.

    I’m going to throw a shout-out to Go’s approach to error handling here. I’m not saying that this shows the superiority of Go over Objective-C: these sorts of things can happen in any language. The difference I want to highlight is that Go treats error handling as part of the standard flow of the language, rather than the exceptional flow. This forces you to think about error conditions when you’re making calls to code that can fail.

    This does result in some annoying code of the form:

    result, err := doOperation()
    if err != nil {
        return nil, err
    }
    
    result2, err := doSecondOperation(result)
    if err != nil {
        return nil, err
    }
    
    // and so on
    

    and there’s nothing stopping you from completely ignoring the error.

    But there’s no way to call these two functions without dealing with the error in some way. That is, there’s no way to simply write doSecondOperation(doOperation()): you’re given an error and you have to do something with it. So you might as well handle it gracefully.
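    And “handling it gracefully” doesn’t need to mean much more than adding a bit of context before passing the error up. Here’s a small sketch reusing the hypothetical doOperation from above (fmt.Errorf with the %w verb wraps the error so callers can still inspect the original):

    result, err := doOperation()
    if err != nil {
        // Wrap the error with some context so the caller knows which step failed.
        return nil, fmt.Errorf("doOperation: %w", err)
    }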

    P.S. I should probably state that I know very little about Objective-C. I do know that a fair number of APIs in AppKit and UIKit make use of completion handlers which can provide an error value, although to me it seems a little easier to ignore those than it is to ignore the error values in Go. I also know that Swift makes improvements here, forcing you to prefix calls that can fail with the try keyword. Again, this is not to rag on Objective-C; rather, it’s a comment on the idioms of error handling in Go and how they might have prevented the app from crashing in events like these.

    Signed Up To micro.blog

    I’ve signed up with micro.blog in an attempt to post to the blog more frequently than I have been. The last post I had on my existing blog was in March, and it felt to me like it was starting to become a bit neglected. I think the main reason for the delay is that I feel the need to publish long-form articles, which involve a lot of work to write, review, etc. I will try to continue to do that, but I also want to start posting shorter articles more often.

    Interesting story: I’ve had this idea for a while, since the start of June. Back then my blog was a simple Hugo site managed in Git, and hosted within Google Cloud’s object store. I had a few posts there — these have been migrated to this site — and I also had a few ideas for posts in the pipeline. I knew I wanted to write more often, but I was starting to get the sense of “overhead” involved in creating new posts. Writing doesn’t come naturally to me, and I think one of the barriers to posting was the amount of non-writing involved in doing so, things like checking out the latest copy, writing it, pushing the branch holding the draft, reviewing the PR (not that there was much to review), merging it, checking out master and running “make” to generate and deploy it. Each step is not hard in itself; I do this many times a day at work. But it’s just more overhead making the actual act of posting a little bit harder, and I was beginning to realise that if I wanted to write more often, I needed a way to do so effortlessly.

    So I committed the second cardinal sin of programming and spent a few weeks making my own CMS (I was also close to committing the first cardinal sin of programming — making my own text editor — much earlier in my programming life, but luckily lost interest after starting). The aim was to set up a service and workflow that would make it easier to post smaller articles, more often, and from any machine that I was currently on. I also got swept away hearing others discuss the technologies behind their own blogging engines, plus their approach to “owning the entire stack”, as it were. Plus, I cannot resist starting a new project, especially now, when it’s difficult to do things outside or with other people around.

    However, as I got closer to “launch”, I began to consider the amount of work involved in maintaining it and extending it to support things I’d want further down the line, like extra pages, etc. This is a classic problem of mine. I get a sense of enthusiasm as I see the core features come together… and then I think about the work I’d need to do to support it afterwards, and I completely lose interest. The project then begins to deteriorate as additional hacks are added to support these things, and it just becomes less maintainable and less fun to work on over time.

    It also serves as a great distraction: what better way to avoid writing than to work on an application meant to reduce the barriers that keep me from writing?

    So, I’m doing the smart thing: I’ve stopped working on it and have moved to micro.blog. Being a subscriber to Manton Reece’s feed, I see the amount of effort and care he puts into this platform, something that I don’t see myself doing for my own CMS. I can only hope this will result in me publishing posts more frequently; we’ll see. But now I have no more excuses not to actually write.

    Features From Android In iOS 14, and The Enthusiasm Gap

    John Gruber on Daring Fireball, commenting on an article about features in iOS 14 that Android had first:

    Do you get the sense that Google, company-wide, is all that interested in Android? I don’t. Both as the steward of the software platform and as the maker of Pixel hardware, it seems like Google is losing interest in Android. Flagship Android hardware makers sure are interested in Android, but they can’t move the Android developer ecosystem — only Google can.

    Apple, institutionally, is as attentive to the iPhone and iOS as it has ever been. I think Google, institutionally, is bored with Android.

    As an Android user, and occasional dabbler in Android app development, this concerns me if it is true. I doubt Google will completely give up on Android, but given the shutdowns of Google’s services over the years, it’s clear that there are very few things Google is “married” to in the long term.

    Given Android’s success and its raison d’être, one could argue that Google has room to take a more relaxed attitude towards advancing Android as a platform, so long as cheap phones are still being bought and people are still using them. But I certainly hope that they do not completely abandon it.

    YouTube Music and Uploaded Music Libraries

    Ron Amadeo, from Ars Technica:

    YouTube Music is really only for The Music Renter—someone who wants to pay $10 per month, every month, forever, for “Music Premium.” This fee is to buy a monthly streaming license for music you do not own, and I’d imagine a good portion of it goes to music companies. When you don’t pay this rental fee, YouTube Music feels like a demo app.

    I prefer to own my music, and I own a lot of independent music that wouldn’t be covered under this major-record-label-streaming-license anyway, so I have no interest in this service. The problem is YouTube Music also locks regular music-playback features behind this monthly rental fee, even for music you’ve uploaded to the service. The biggest offense is that you can’t use Google Cast without paying the rental fee, but when it’s music that I own and a speaker that I own, that’s really not OK. Google Music did not do this.

    These last couple of weeks I’ve actually been working on a personal music app that will play back music uploaded to S3. It was mainly for listening to music that I composed myself, although being able to listen to music that I’ve purchased and ripped to MP3 was a key motivating factor here as well. I was aware that such services existed, so I occasionally wondered if my time could be better spent doing something else. Now, I feel like I’ve made the right choice here.

    On Go’s Type Parameters Proposal

    The developers of Go have released a new draft proposal for type parameters. The biggest change is that the concept of contracts, which complicated the earlier proposal somewhat, has been dropped and replaced with interfaces to express the same thing. You can read the latest proposal here.

    I think they’re starting to reach a great balance between what currently exists in the language and the features required to make a useful type parameter system. The use of interfaces to constrain the type, that is, to declare the operations that a type must support in order to be used as a type parameter for a function or struct, makes total sense. It also makes moving to type parameters in some areas of the standard library trivial. For example, the sort.Sort function prototype:

    func Sort(data Interface)
    

    can simply be written as:

    func Sort(type T Interface)(data T)
    

    I do have some minor concerns though. The biggest one is the use of interfaces to express constraints related to operators, which are written as type lists. I think listing the types that can be used to instantiate a particular type parameter makes sense. It dramatically simplifies the process of expressing a constraint based on the operators a particular type supports. However, using the concept of interfaces for this purpose seems a little strange, especially when these interfaces cannot be used in areas other than type constraints. To be fair, it seems like they recognise this, and I suspect that in practice these interfaces will be defined in a package that can simply be imported, thereby not requiring us to deal with them directly unless we need to.
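    To give a rough sketch of what I mean, this is more or less how the draft expresses an operator constraint via a type list (the exact set of types listed here is just illustrative):

    // Ordered is satisfied by any type whose underlying type appears in the
    // list below, all of which support the < operator.
    type Ordered interface {
        type int, int8, int16, int32, int64, float32, float64, string
    }

    func Max(type T Ordered)(a, b T) T {
        if a > b {
            return a
        }
        return b
    }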

    But all in all, this looks promising. It is starting to feel like the design is coming together, with the rough edges starting to be smoothed out. I appreciate the level of careful consideration the core Go developers are exhibiting in this process. This is after all a large change to the language, and they only have one real chance at this. Sure we have to wait for it, but especially in language design, mistakes are forever.

    Don't Get it Now

    It’s scary times at the moment. The coronavirus (SARS-CoV-2 and Covid-19) is raging through Europe at this moment, with hundreds of people dying in Italy, Spain and France, and most of those countries, along with the US, in lock-down. The hospital system is currently not equipped to handle the peak number of patients that will require intensive care: doctors from Italy, France and New York are telling stories about how they have to choose who lives and dies, and I’m fearful that we may start hearing stories like that here. There is currently no cure and no treatment. There have been models indicating that even if we take steps to suppress the virus now, there will be continuous surges in outbreaks until a vaccine is ready in 12 to 18 months, suggesting that we may need to be in a state of lock-down, or at the very least rigid social distancing, until August 2021 at the latest. The WHO reckons that a majority of the world’s population will get infected over the next year.

    I’m not a doctor, nor an epidemiologist. I cannot begin to suggest what we should do as a society. But I’m going to give a few thoughts as to how I plan to weather this storm.

    I think at this current stage, our enemy, along with the virus, is time. I hope I don’t have to tell you that the virus is moving through the world’s population now, even as we speak. But humanity is not standing still either:

    So my mantra for the next few months is “don’t get it now.” Wait to get infected for as long as you can. The ideal case is not to catch it at all, but if we’re destined to get infected, it’s best to get infected later, when some of the points above have been addressed, instead of sooner, when they have not. This will obviously mean sacrificing things like going to the gym, going out for coffee, or seeing friends and family. But I believe that this is a price worth paying, especially if the alternative is losing someone you love, or potentially your own life.

    So that’s my current strategy at this time. I don’t know if it will work, and as things develop it may need refining. But after thinking about this for the previous few weeks, it’s the best strategy I can think of. And I think it will help me get through this.

    P.S. A lot of my thoughts on this came from reading this article by Tomas Pueyo. He’s obviously more knowledgeable about how we should act on this as a whole. It is worth your time reading this.

    P.P.S. I spoke quite abstractly about the health system, but it’s important to remember that these systems are made up of people: doctors, nurses and paramedics on the front line, along with the researchers, manufacturers and logistics workers who support them. At this time, they are giving their all, and then some, to help us through this crisis. Once this is over, I think we owe every single one of these individuals a beer.

    Update On 4th Dec 2022: Almost three years since writing this post, I tested positive for Covid-19 for the first time. My symptoms were those of a pretty rough cold which, given what the possibilities could have been when I wrote this post, meant that I weathered the disease pretty well. I finally caught it at a time when vaccines and treatments were widely available and I was up to date with my inoculations. So all in all, I’m glad the whole “don’t get it now” approach worked in my favour.

    Reflections On Virus Scanners on Windows

    I was listening to Episode 277 of The Talk Show in which John Gruber was discussing virus scanners on Apple Macs with John Moltz. The discussion turned briefly to the state of virus scanners on Windows, and how invasive these commercial scanners were compared to Windows Defender provided by Microsoft.

    Hearing this discussion brought back memories of my experience with virus scanners in the days of Windows XP and earlier. There was no Microsoft Defender back then, so we had to have a license for one of the commercial scanners that were sold to home users at the time, such as Norton AntiVirus. Given how insecure Windows was back then, it was one of the first things we had to put on a fresh install of Windows. And these things certainly slowed Windows down. But we recognised that it was necessary and, after a couple of weeks, we eventually got used to it.

    However, after setting up a new install, there was this brief period of time when we got to experience Windows without a virus scanner. And the difference in user experience was significant. The boot process was fast, the UI snappy, and applications quick to launch. In fact it was so good that it felt strange, and slightly uneasy, knowing that there was no virus scanner protecting the system. Only after the virus scanner was installed, with the resulting hit to performance, did it feel safe to use Windows again. It was not until I listened to this episode that I realised how perverse this feeling was.

    I cannot imagine how it must feel for those Microsoft developers who worked hard on providing a user experience that was responsive only to see it slowed down on almost every machine by a virus scanner. I’m sure they knew that, due to the prevalence of malware for Windows back then, it was necessary. Still, I could not imagine that they would have been thrilled about it.

    New Home of Steve Yegge's Rant About Google Services

    I’ve always enjoyed this rant from Steve Yegge about how Google differed from Amazon in how they develop their services. I’m not sure if it’s applicable now, but it was quite interesting to hear how the two companies differed in their approach to building and releasing products. After hearing that Google+ was being shut down, I wondered what would happen to the rant, and whether it would be lost to time. It was fortunate that someone saved it.

    For those of you who haven’t read any of Steve’s other blog posts, please check out his current blog, plus several of his other Drunken Blog Rants. They are well worth your time.

    Five Common Data Stores and When to Use Them

    Very interesting post on the Shopify Engineering Blog on the differences between five types of data stores available to developers, and the circumstances under which each should be used.

    I find it tricky to decide on the best technology for storing data for a particular project. I guess the important thing to keep in mind is to try and figure out as best you can how the data is going to be used (i.e. queried). If you know that, the decision should be easy once you know what’s out there, and this blog post certainly helps in this regard. If you don’t, I guess the next best thing is to try to find the option that will give you the most flexibility with hopefully not too much loss in performance.
