Long Form Posts

    Why I Like Go

    This question was posed to me in the Hemispheric Views Discord the other day. It’s a bit notable that I didn’t have an answer written down for this already, seeing that I do have pretty concrete reasons for why I really like Go. So I figured it was time to write them out.

    I should preface this by saying that liking Go doesn’t mean I don’t use or like any other languages. I don’t fully understand those who need to dislike other languages like they’re football teams. “Right tool for the job” and all that. But I do have a soft spot for Go, and it tends to be my go-to language for any new projects or scripting tasks.

    So here it is: the reasons why I like Go.

    First, its simplicity. Go is not a large language, and you can keep a large majority of it in your working memory. This makes Go easy to write and, more importantly, easy to read.

    It might seem that a small feature set makes the language quite limiting. Well, it does, to a degree, but I’d argue that’s not a bad thing. If there are only a couple of ways to do something, it makes it way easier to predict what code you’re expecting to see. It’s sort of like knowing what the next sentence will be in a novel: you know a piece of logic will require some dependent behaviour, and you start thinking to yourself “how would I do that?” If the answer space is small, you’re more likely to see what you expect in the actual code.

    But just because Go is a deliberately small language doesn’t mean that it’s a stagnant one. There have been some pretty significant features added to it recently, and there are more plans for smoothing out the remaining rough edges. It’s just that the dev team are very deliberate in how they approach these additions. They consider forward compatibility as much as backwards compatibility, being careful not to paint themselves into a corner with each new feature.

    Type parameters are a great example of this. There had been calls for type parameters since Go 1.0, but the dev team pushed back until they came up with a design that worked well for the language. It wasn’t the first design either. I remember one featuring some new constraint-based constructs that did about 80% of what interfaces were doing already. If that had shipped, it would’ve meant a lot of extra complexity just for type parameters. What was shipped isn’t perfect, and it doesn’t cover every use case type parameters could theoretically support. But it made sense: type parameters built on interfaces, a construct that already existed and was understood.
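    As a rough sketch of what that looks like in practice (the names here are my own, not from any proposal): the constraint below is just an ordinary interface, reused to bound a type parameter:

```go
package main

import "fmt"

// Number is an ordinary interface used as a type-parameter constraint:
// Go's generics reuse interfaces rather than introducing a new construct.
type Number interface {
	~int | ~float64
}

// Sum adds up any slice of values satisfying the constraint.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```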

    This, I think, is where C++ fails to a huge degree. The language is massive. It was massive 30 years ago, and they’ve been adding features to it every few years since, making it larger still. I see Apple doing something similar with Swift, and I’m not sure that’s good for the language. It’s already quite a bit larger than Go, and I think Apple really should curb their desire to add features unless there’s a good reason for doing so.

    The drive for simplicity also extends to deployments. Go compiles to a static binary, one that is extremely portable and can be deployed or packaged however you desire. No need to fluff about with dependencies. This is where I find Go has a leg-up over scripting languages like Python and Ruby. Not because Go is compiled (although that helps), but because you have less need to think about packaging dependencies during a deploy. I wrote a lot of Ruby before looking at Go, and dealing with gems and Bundler was a pain. I’m not enough of a Python expert to comment on the various ways that language deals with dependencies, but hearing about the various ways to set up virtual environments doesn’t fill me with confidence that it’s simple.

    And I will admit that Go’s approach to this isn’t perfect either. For a long while Go didn’t even have a way to manage versioned dependencies: they were all lumped into a single repository. The approach with modules is better, but not without its own annoyances. Any dependency that goes beyond version 1 requires you to change the import statement to include a vX suffix, an unnecessary measure if the version change is backwards compatible. That’s not even considering the packages that avoid this by staying forever on version 1 (or 0).
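    To illustrate with a hypothetical module path: when a dependency moves to v2, the suffix appears in its go.mod and every consumer has to rewrite their imports, even if nothing else changed:

```go
// In the dependency's go.mod, the module path itself gains the suffix:
//
//	module example.com/widgets/v2
//
// ...which means every consumer must update each import site:

import (
	widgets "example.com/widgets/v2" // previously "example.com/widgets"
)
```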

    But since moving to modules, my encounters with package dependency issues have been quite rare, and once you’ve got the build sorted out, that’s it. No need to deal with packaging after that.

    And I’d argue that Go rides that sweet spot between a scripting language like Python or Ruby, and a compiled language like C (maybe Rust too, but I know too little of that language to comment on it). It’s type safe, but type inference makes it easy to write concise code without excessive annotations everywhere. It’s compiled to an executable, yet memory is managed for you. I won’t go deep into how Go does concurrency, but after dealing with Java threads for several years, using goroutines is a joy.
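    To sketch what that conciseness looks like (a contrived example of my own): none of the locals below carry a type annotation, yet everything is still checked at compile time:

```go
package main

import (
	"fmt"
	"strings"
)

// joinUpper demonstrates type inference: both locals get their types
// from the right-hand side, with no annotations needed.
func joinUpper(names []string) string {
	upper := make([]string, 0, len(names)) // inferred: []string
	for _, n := range names {              // inferred: n is string
		upper = append(upper, strings.ToUpper(n))
	}
	return strings.Join(upper, ", ")
}

func main() {
	fmt.Println(joinUpper([]string{"go", "ruby", "python"})) // GO, RUBY, PYTHON
}
```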

    I should probably balance the scales a bit and talk about areas where I think Go could be made better. The big one is error handling. While I do like the principles (that errors are values and can be handled as such) it does mean a lot of boilerplate like this:

    foo, err := doThis()
    if err != nil {
      return err
    }
    
    bar, err := doThat(foo)
    if err != nil {
      return err
    }
    
    baz, err := doAnotherThing(bar)
    if err != nil {
      return err
    }
    

    To the Go team’s credit, they are looking at improving this. And I think there’s enough prior art out there for a solution that’ll look pretty nice without having to resort to exceptions. Maybe something like Swift’s guard statement that could be used in an expression.

    And yeah, other areas of Go, like its support for mobile or GUI-style programming, are lacking a bit. That could probably be plugged with third-party modules to a degree, although I think because Go is not an object-oriented language, the seams won’t be perfect (take a look at Go’s bindings for Qt to see how imperfectly Go maps to a toolkit that assumes objects). And some gaps need to be plugged by Google themselves, like mobile support (they do have something here, but I’m not sure to what degree it’s being maintained).

    But I’m sure most of these issues are surmountable. And no language is perfect. If Go doesn’t work for a situation, I’ll use Java or Swift or something else. “Right tool for the job” and all that.

    So these are the reasons why I like Go. And I think it all boils down to trying to keep the language simple while still being useful. And as far as my experience with Go is concerned, those maintaining it are doing a pretty stellar job.

    Work Email Spam

    Opened my work email this morning and was greeted by the following spam messages:

    • Webinar to “overcome the fear of public speaking” from some HR training mob
    • A training course on “accelerating innovation in data science and ML” (there are a few emails about AI here)
    • Webinars from Stripe, Slack, and Cloudflare about how other companies are using them
    • Weekly updates about what’s happening on our Confluence wiki (this could probably be useful… maybe? But our wiki is so large that most updates are about things other teams are working on)
    • A training course on some legal mandates about hiring (honestly, my email must’ve appeared on some mailing list for HR professionals)
    • Another webinar from the first training mob about dealing with “employees from hell”

    Marked all as read, closed email, and opened Slack.

    Pixel Phones Are Not Dog-food, and That's a Problem

    John Gruber on the Pixel 8 launch event:

    It’s also impossible not to comment on just how much less interest there is in Google’s Pixel ecosystem. […] On the one hand I’m tempted to say the difference is just commensurate with how much better at hardware Apple is than Google. But I think there’s more to it than that. There’s something ineffable about it. There are aspects of marketshare traction — in any market — that can’t be explained by side-by-side product comparisons alone.

    I can’t speak for the market, but as a Pixel 6 Pro owner I can give you my opinion. You don’t need to watch the keynote to get that sense of disinterest. You can get it just by using the phone.

    For the last few months¹ I’ve been experiencing a bug with the calendar widget. If you have nothing on your calendar for the next two weeks, it completely blanks out:

    An Android phone screen with the calendar widget on the right that is completely white except for a blue plus button

    I doubt that this is intentional, as the plus button doesn’t work either. Tapping it does nothing at all.

    For comparison, here’s how it’s meant to look:

    An Android phone screen with the same calendar widget functioning normally: it has the current date, a message saying 'Nothing scheduled', and two entries in blue for dates in the future

    Now, bugs in software happen (they certainly happen in mine) and there’s no reason why Google would be immune to them, so I can forgive this bug showing up in a shipped version of Android. My problem is that it’s been like this for months now. This is a widget built by Google, included in Google’s Calendar app, running on Google’s OS and Google’s hardware, and it’s been broken for this long. I would’ve expected it to be fixed in a few weeks, but for it to take this long?

    I can’t see how anyone with an Android phone using this widget would not notice this. And the only reason I can come up with is that no-one in Google has noticed it. They simply don’t use Android, the OS that they build, in their day-to-day. Maybe some of them do, but obviously not enough of them to drive change. If there were, they would’ve found this problem and fixed it by now. To quote Linus, “given enough eyeballs, all bugs are shallow,” and those eyeballs are obviously looking elsewhere.

    Now this theory may be far-fetched, but after reading Gruber’s piece, it seems like I’m not alone in thinking this. As he says later in the same article:

    I’d wager that more Google employees carry an iPhone than carry a Pixel.

    It shows.


    1. I can’t remember when I first saw this, but I think it was in July. ↩︎

    Your Dev Environment is Not Your Production Environment

    There will be certain things you’re going to need to do in your development environments that you should never do in production. That’s pretty much a given: playing around with users’ data, or potentially doing something that will cause an incident, is generally not a good idea.

    But there are also things you shouldn’t do in prod that you may need to do in dev. And make no mistake, there may be a legitimate need to do these things. Using Auth0 and only have a limited number of emails available for your test environment? You may need a way to quickly reset a user. Support billing in multiple countries and need to run a test in one of them? You’ll need a way to change a user’s country.

    And I think that’s fine. Not every environment needs to be a reflection of production, as long as you’ve got a staging or pre-prod environment where you can do things like rehearse deployments. Everything else should be skewed towards ease of development, which will mean making these drastic operations available and easy to use.
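    A minimal sketch of how that might look, assuming a hypothetical HTTP service and an environment setting (none of these names come from a real codebase): the drastic, dev-only operations sit behind an explicit environment check, so they can never be exposed in production:

```go
package main

import (
	"fmt"
	"net/http"
)

// allowed reports whether drastic, test-only operations may run in the
// named environment. Hypothetical names; adjust to your own setup.
func allowed(env string) bool {
	return env == "dev" || env == "staging"
}

// devOnly hides a handler entirely unless the environment permits it.
func devOnly(env string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !allowed(env) {
			http.NotFound(w, r)
			return
		}
		h(w, r)
	}
}

func main() {
	resetUser := func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "test user reset") // the "drastic" dev-only action
	}
	http.HandleFunc("/debug/reset-user", devOnly("dev", resetUser))
	// http.ListenAndServe(":8080", nil) // omitted so the sketch stays inert
}
```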

    Electrification of Melbourne Suburban Railways Plaque

    Found this plaque while passing through Southern Cross station this morning.

    Plaque about the Electrification of Melbourne Suburban Railways

    I didn’t have time to read it, but the subject matter looks really interesting to me (Trains? Power lines? What’s not to love? 😀). I also don’t know how long it’ll be up for, and I’ve been burned in the past by not capturing something when I had the chance.

    So Iā€™m posting photos of it here for posterity reasons. Enjoy.

    Alternative Day Four Photo

    I had an alternative idea for today’s photo challenge, which is “orange”. I was hoping to post a photo of something related to Melbourne’s buses.

    You see, PTV has designated a different colour for each mode of transport: blue for metro trains, purple for regional trains, green for trams, and orange for buses. And from my experience using the service, they’re pretty consistent in adhering to this design language:

    A bus in orange livery at a bus-stop with an orange sign and trim

    Anyway, they’ve been doing train works along my rail line over the past few weeks, and this morning I noticed this sign (forgive the lighting, it was before dawn):

    A large orange sign that reads 'Buses replace trains' and then below an exclamation icon reads 'Plan ahead at ptv.vic.gov.au'

    It’s not the first time I’d seen this sign, but I had orange on my mind, and the fact that it mentioned buses got me thinking, “how clever, they’re maintaining the design language through and through, using an orange sign to reference the bus service that would be replacing the trains.” Or so I thought, until I saw this sign:

    A large orange sign that reads 'Car space closures', along with details of when the car park will be closed and how many spaces would no longer be available

    Ah, that blew that theory out of the water. And also the opportunity to use it as today’s photo. I mean, I could’ve still used it (it’s still orange after all) but it doesn’t have the neat adherence to the design language that I was hoping it did.

    Mainboard Mayhem

    Project update on Mainboard Mayhem, my Chip’s Challenge fan game. I didn’t get it finished in time for the release deadline, which was last weekend. I blame work for that. We’re going through a bit of a crunch at the moment, and there was a need to work on the weekend.

    The good news is that there wasn’t much left to do, and after a few more evenings, I’m pleased to say that it’s done. The game is finished, and ready for release.

    So here it is: Mainboard Mayhem: A Chip’s Challenge fan game (and yes, that’s its full title).

    Screenshot of Mainboard Mayhem

    At the moment it’s only available for macOS. It should work on both Intel and Apple Silicon Macs, although I’ve only tested it on my M2 Mac Mini running Ventura.

    It’s good to finally see this project done. It’s been in development for about ten years, and I spent half of that time wondering whether it was worth finishing at all. Not committing to anything meant any work I did do on it was pretty aimless, and I always felt like I was wasting my time. Giving myself three weeks to either kill it or release it helped a lot. I’ll start making deadlines for all the other unfinished projects I’m working on.

    As to what that next project will be, I’m not sure at this stage. Part of me wants to wait until this crunch time ends, but I suspect I’ll get antsy before then and start work on something else. I’ll keep you posted one way or the other.

    But for now, if you happen to give it a try, thank you and I hope you enjoy it.

    The app icon of Mainboard Mayhem

    Early Version of This Blog

    I was looking for something in GitHub the other day when I found the repository for the first iteration of this blog. I was curious as to how it looked, so I thought I’d boot it up and post a few screenshots of it.¹

    It started life as a Hugo site. There are two reasons for that, the first being that I didn’t have the patience to style a website from scratch, and Hugo came with some pretty nice templates. I chose the Vienna template, which seems to have fallen out of date: many of the template variables no longer work with a modern version of Hugo. I’m also pleased to see that I did end up customising the header image (a photo taken in Macedon of the train line to Bendigo) although that’s pretty much all I customised.

    Believe it or not, I feel a little nostalgic for it. Such simple innocence in trying to summon up the courage to write stuff on the internet. Although don’t let the article count fool you: I think there were a total of 10 posts, with half of those being unfinished drafts. I was still trying to work out whether I’d like to write mainly about software technology, or simply talk about my day. But one thing’s for sure: I was under the impression that “real” blogs required posts with a title and at least 300 words of content. That’s probably why I only had 5 posts finished in 8 months.

    The second reason I went with Hugo was that I’d have no excuse to tinker with a CMS. I figured that, given that I wasn’t using one, I’d be forced to focus on the content. Well, that level of self-discipline didn’t last long. Around the middle of 2020, I started building a CMS for the blog using Buffalo. I was thinking of launching it under the name “72k” (72k.co), named after the milepost the header photo was taken at.

    I got reasonably far with building this CMS, but it still lacked a lot, like uploads and an RSS feed. It also involved a really annoying workflow: in order to publish something, you needed to choose a “post type” (whether it’s a long-form post, a link post, or a note), pick the “stream” the post would appear in, write a summary, and then “review” it. Once all that’s good, you’re free to publish it. This was in service of building it up into a popular, whizz-bang blog with well-engineered navigation and category-specific feeds (I think that’s what “streams” were). Yeah, these grand plans got the better of me and really crippled the usability of the CMS². I never launched it, opting instead to move to Micro.blog.

    So that’s what this blog looked like, back in the day. I probably won’t look at these projects again. It’s only been four years and already bit-rot is setting in: it took me all morning to hack them into a state where I could open them in a browser. But it’s good to look back at what it was.

    Still really happy I moved it over to Micro.blog.


    1. I don’t deny that part of this is procrastination from other things I should be finishing. ↩︎

    2. To be honest, I think part of this lengthy workflow was to satisfy the “resistance”: self-imposed roadblocks to stop me from publishing anything at all. ↩︎

    On Tools and Automation

    The thing about building tools to automate your work is that it’s hard to justify doing so when you’re in the thick of it. It’s easy to see all the time you’d save in the aggregate, but when you’re faced with the task in your day-to-day, you’re just as likely to say “I can build a tool which will let me do this task in a couple of seconds, but it’ll take me an hour to build it versus the 5 minutes it’ll take for me to just do the task.”

    So you just “do the task.” And the next time you get that task, you face the same dilemma.

    Of course, the alternative is spending the hour to automate it, and then never running that tool again (or investing more time than you save keeping it up to date).

    I’m not sure what the best answer is. Maybe tracking, somewhere, the times you wished you had that tool you didn’t build? Then, when you’ve logged it at least 3 times for the same thing, you have supporting evidence that it’s worth automating. Maybe include the time it took to do the task manually as well, so you can compare it to how long it might take to build the automation.
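    A sketch of that tally, with made-up names, could be as simple as a counter keyed by task:

```go
package main

import "fmt"

// taskLog tallies how often a manual task recurs and how long it costs,
// to support the "automate after three occurrences" rule of thumb above.
type taskLog struct {
	counts  map[string]int
	minutes map[string]int
}

func newTaskLog() *taskLog {
	return &taskLog{counts: map[string]int{}, minutes: map[string]int{}}
}

// record notes one manual run of the task and the minutes it took.
func (t *taskLog) record(task string, mins int) {
	t.counts[task]++
	t.minutes[task] += mins
}

// worthAutomating applies the rule of thumb: three or more occurrences.
func (t *taskLog) worthAutomating(task string) bool {
	return t.counts[task] >= 3
}

func main() {
	tl := newTaskLog()
	for i := 0; i < 3; i++ {
		tl.record("rotate API keys", 5)
	}
	// After three runs, 15 minutes spent: time to weigh up automating it.
	fmt.Println(tl.worthAutomating("rotate API keys"), tl.minutes["rotate API keys"])
}
```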

    Might be worth a try.

    🔗 XML is the future - Bite code!

    I wanted to write something about fads in the software development industry when the post about Amazon Prime Video moving away from micro-services back to monoliths was making the rounds. A lot of the motivation towards micro-services can be traced back to Amazon’s preaching about them being the best way to architect scalable software. Having a team from Amazon saying “micro-services didn’t work; we went back to a monolith and it was more scalable and cheaper to run” is, frankly, a bit like the Pope renouncing his Catholic faith.

    I didn’t say anything at the time, as doing so seemed like jumping on the fad wagon along with everyone else, but I have to agree with this article that this following of the crowd is quite pervasive in the circles I travel in. I witnessed the tail end of the XML fad when I first started working. My first job had all the good stuff: XML for data and configuration, XSLT to render HTML and to ingest HL7¹, XForms for customisable forms. We may have used XSD somewhere as well. Good thing we stopped short of SOAP.

    The whole feeling that XML was the answer to any problem was quite pervasive, and with only a few evangelists, it was enough to drive the team in a particular direction. And I wish I could say that I was above it all, but that would be a lie. I drank the Kool-Aid like many others about the virtues of XML.

    But herein lies the seductive thing about these technology fads: they’re not without their merits. There were cases where XML was the answer, just like there are cases where micro-services are. The trap is assuming that just because it worked before, it will work again (100% of the time, in fact) even if the problem is different. After all, Amazon or whoever is using it, and they’re successful. And you do want to see this project succeed, right? Especially when we’re pouring all this money into it and your job is on the line, hmm?

    Thus, teams are using micro-services, Kubernetes, 50 different middleware and sidecar containers, and pages and pages of configuration to build a service whose total amount of data could be loaded into an SQLite3 database². And so it goes.

    So we’ll see what comes of it all. I hope there is a move away from micro-services back to simpler forms of software design; ones where the architecture can fit entirely in one’s head. Of course, just as this article says, there’ll probably be an overcorrection, and a whole set of new problems will arise when micro-services are ditched in favour of monoliths. I only hope that, should teams decide to do this, they do so with both eyes open and avoid the pitfalls these fads can lay for them.


    1. HL7 is a non-XML format used in the medical industry. We mapped it to XML and passed it through an XSLT to extract patient information. Yes, we really did use XSLT for this! ↩︎

    2. OK, this is a bit of an exaggeration, but not by much. ↩︎

    Code First, Tests After

    Still doing code first, tests after at work, and I’m really starting to see the benefits of it. Test-driven development is fine, but most of our recent issues (excess logging, or errors that are false positives) have nothing to do with buggy business logic. It’s true that you can catch these in unit tests (although I find them the worst possible tests to write), but I think you gain a lot more just from launching the application and seeing it run.

    Now granted, it’s not always possible to do this with micro-services. There’s always some dependency you need, and setting all of these up is a bit of a pain. That’s probably why I used to defer all my manual testing to the end, once I’d pushed my changes to get them reviewed and deployed to the environment: do a quick cursory test from the frontend just to make sure nothing’s broken, then move on to the next task.

    I think this way of working was a mistake. This is something frontend developers get right: you need to run your software while you’re working on it. It’s so important to see not just how well it works, but how it feels to work¹: what goes to the log, how fast it performs, etc. You don’t get this feeling from just depending on unit tests.

    Plus, there’s always a nice buzz in seeing the thing you’re working on run for the first time. That magic seems to decay the further you are from where it’s running. It just becomes another cog in the system. And maybe that’s what it’s destined to be, but it doesn’t need to be this way while you’re working on it.


    1. I don’t know of a better way to say this other than “how it feels to work”. I suppose I could use boring words like “tight iteration loop”, but there are too many boring words on this blog already. ↩︎

    On The Reddit Strike

    Ben Thompson has been writing about the Reddit strike in his daily updates. I like this excerpt from the one he wrote yesterday:

    Reddit is miffed that Google and OpenAI are taking its data, but Huffman and team didn’t create that data: Reddit’s users did, under the watchful eyes of Reddit’s unpaid mod workforce. In other words, my strong suspicion is that what undergirds everything that is happening this week is widespread angst and irritation that everything that was supposed to be special about the web, particularly the bit where it gives everyone a voice, has turned out to be nothing more than grist to be fought over by millionaires and billionaires.

    That, though, takes me back to Bier’s tweet; the crazy thing about the Internet is that said grist is in fact worth fighting over.

    It’s easy for me to say this, as I’m not a user of Reddit, but I have full sympathy for the striking moderators.

    You spend much of your free time volunteering to keep a community on the site, producing value for its users and owner, with the expectation that the site would recognise your efforts and reciprocate by serving your needs with, say, an API. I can understand how enraging it would feel when they turn around and “alter the deal” while expecting the mods to continue as if nothing has changed.

    So good on the moderators showing that they too have leverage.

    And as to OpenAI using the API to train its model: well, yeah, I can understand the CEO of Reddit feeling shitty about that, but I would’ve hoped he’d have the ingenuity to solve that while maintaining the needs of those that actually provide value to the site. Either he doesn’t, which, given that he’s one of the founders, I find hard to believe; or he just doesn’t want to.

    Truthful Travel Talk

    It’s time to be honest: I think overseas travel is wasted on me.

    We were driving down from Antibes to Genova today. It was a nice trip, complete with picturesque towns passing us by as we drove along the motorway. My friend was oohing and ahhing at each one: remarking about how nice it would be to see them, to stay in them for a while. He was also remarking on what we would do when we arrived at our destination. There was just this air of enthusiasm about the whole thing.

    I didn’t feel that enthusiasm. We had heard some news that another friend of ours had had their luggage stolen, and I just couldn’t stop thinking about it. I spent a fair bit of last night going through possible ways I could avoid it happening to me, how I would handle it if it did, and just the whole hassle of dealing with that possibility.

    This, mixed with the inevitable task of finding my bearings in an unfamiliar place, stressing about how I would interact with the locals in a language not my own, the ongoing recovery from Covid-19, and a bit of homesickness, and you can probably guess that I’m just not feeling the vibe of adventure at the moment.

    And yeah, I might have done this to myself, particularly since I haven’t had much to do with planning this part of the trip, happy to “go with the flow” of what others are doing. And people might ask me “oh, wouldn’t it be good to see this?” or “wouldn’t it be fun to experience that?” Yeah, maybe? I might get some enjoyment out of it, but I’m not sure if it’ll offset the stress I feel with the logistics of it all.

    So that’s where I am at the moment. It’s got to the point where I’m contemplating coming home early. It would actually simplify my itinerary quite a bit, and I won’t be leaving my friends in the lurch: it would cut out a portion of the trip where I would be travelling by myself. Even with a week less, that’s still about 4 weeks in total, which I think is plenty, or at least plenty for me.

    Update 30 June: Apart from taking a slightly earlier flight home, I ended up staying the full 5 weeks. And in retrospect, I’m really glad I did. I figured that I would regret not visiting the places I would’ve cut out if I had gone home early; and after visiting them, I know now that I would’ve missed out on some of the most memorable parts of the trip.

    I think what sparked this post was a mixture of anxiety about travelling alone and a little bit of homesickness. But nothing beats anxiety like working through the problem (if you can call travelling solo a “problem”). And as for homesickness: well, I’m not sure there’s much I can do about that apart from remembering that home will always be there.

    Where Have I Been

    Inspired by Manton and Maique, I thought I’d document the places I’ve visited as well. I’ve had to refer to this list a few times in the past, so having a record like this is helpful.

    Transfers are not included here. In order for a place to be listed, I have to have landed there. Also, I’ve excluded Victoria, Australia, as this is where I live.

    • 🇦🇺 Australia (home)
      • New South Wales
      • South Australia
      • ACT
    • 🇧🇷 Brazil
    • 🇨🇰 Cook Islands
    • 🇫🇯 Fiji
    • 🇫🇷 France
    • 🇮🇩 Indonesia
    • 🇮🇹 Italy
    • 🇯🇵 Japan
    • 🇰🇮 Kiribati
    • 🇳🇿 New Zealand
    • 🇳🇺 Niue
    • 🇵🇬 Papua New Guinea
    • 🇼🇸 Samoa
    • 🇸🇬 Singapore
    • 🇸🇧 Solomon Islands
    • 🇪🇸 Spain
    • 🇨🇭 Switzerland
    • 🇹🇴 Tonga
    • 🇹🇻 Tuvalu
    • 🇦🇪 United Arab Emirates
    • 🇬🇧 United Kingdom
      • Devon, England
    • 🇺🇸 United States of America
      • District of Columbia
      • Maryland
      • Nevada
    • 🇻🇺 Vanuatu
    • 🇻🇳 Vietnam
    Last Updated 12 Nov 2023
    • 12 Nov 2023: Added Singapore and Indonesia
    • 28 May 2023: First version. Note: I’m writing this while on an overseas trip, so I’ll also be including the countries that I’ll be visiting over the next few weeks.

    Full Width Notes In Obsidian

    More custom styling of Obsidian today. This snippet turns off the fixed-width display of notes, so that they can span the entire window. Useful if you’re dealing with a bunch of wide tables, as I am right now.

    body {
        /* Let note content span the full width of the pane */
        --file-line-width: 100%;
    }
    
    div.cm-sizer {
        /* Remove the centring margins from the editor content */
        margin-left: 0 !important;
        margin-right: 0 !important;
    }
    

    I wish I could say credit goes to ChatGPT, but the answer it gave wasn’t completely correct (although it was close). The way I got this was by enabling the developer tools (which you can do from the View menu) and just going through the HTML DOM to find the relevant CSS classes. I guess this means that this’ll break the minute Obsidian decides to change their class names, but we’ll cross that bridge when we come to it.

    F5 To Run

    While going through my archive about a month ago, I found all the old BASIC programs I wrote when I was going through school. I had a lot of fun working on them back in the day, and I thought it would be nice to preserve them in some way. Maybe even make them runnable in the browser, much like what the Wayback Machine did with the more well-known DOS programs.

    So I set about doing just that, and today the site is live: F5 To Run.

    And yeah, itā€™s likely that Iā€™m the only one interested in this. No matter. Iā€™m glad theyā€™re off my dying portable drive and preserved on the web in some fashion.

    Twitter, Public Alerts, And Federated Protocols

    So apparently Twitterā€™s leadership team has discovered the value it has for public alerts:

    Of all the changes Elon Musk has made to Twitter, blocking emergency and public transit services from tweeting automated alerts might have been his least popular. User backlash roared, as National Weather Service accounts got suspended. Then, one of the countryā€™s largest public transit services, Metropolitan Transportation Authority (MTA), had so much trouble tweeting, it decided to quit posting updates to Twitter.

    It always seemed a little off that these organisations were using Twitter for this. Not everyone is on Twitter, and those that were had to agree to the terms of a private company which could, at any time, doā€¦ well, what itā€™s doing now. Should public alerts for weather and transportation really rely on such private entities?

    I can see why these companies were used back in the late 2000s, when they first came onto the scene. They had apps with push-based notifications and a good (enough) user experience. They were also investing in the backend, setting up services that could scale. So organisations palming off dissemination of these alerts to Twitter made sense.

    But I donā€™t think it makes sense anymore. With ActivityPub and (in theory) whatever BlueSky is cooking up, you now have open, federated protocols, and a bunch of apps people are building which use them. You also have public clouds which provide an easier way to scale a service. With these two now available, it seems clear to me that these organisations should deploy their own service for sending out these alerts using any or all of these open protocols.

    Then, the public can come to them on their terms. Those using Mastodon or BlueSky can get the alerts in their app of choice. Those that aren’t interested in either can still use any mobile apps these organisations have released, and these protocols can be used there as well. One can imagine a very simple ActivityPub “receiver” app, stripped of all the social features apart from receiving notifications, that could serve organisations that don’t or can’t release a mobile app. Plus, having a service they run themselves could also make it possible to set up more esoteric notification channels, like web push notifications through the browser.

    And yeah, it’ll cost money and will require some operational expertise. But I’d argue that serving the public in this way is within their purview, and the reason why tax dollars go in their direction.

    So nowā€™s a great time for these organisations to step away from relying on these private companies for disseminating alerts and embrace the new federated protocols coming onto the scene. Who knows, maybe theyā€™ll also embrace RSS. That would be nice.

    Content Warning: About A Spider

    This spider had been hanging around my garage door button for a few weeks. I didn’t think much of it until today, when I noticed that it was actually a redback. Not the largest redback I’ve seen, but one located pretty close to a button I push quite frequently.

    Photo of said redback (it's small, but the photo is a close-up)
    Photo of a redback spider beside a garage door opener, with another spider on the left.
    If you look closely you can see a bit of the classic red stripe on the spider's abdomen.

    I donā€™t know about other Australians, but Iā€™ve got a ā€œkill on sightā€1 policy with redbacks, so it had to go.


    1. Of course I say that, but I’ve seen redbacks on shed doors that I haven’t done anything about. Though I wouldn’t call them harmless, they were out of the way enough for me to disregard them. ↩︎

    About Those Checkmarks

    This post’s going to be about Twitter. Yes, I know; another one. It’s also going to be a bit speculative in nature, so feel free to skip it if you like.

    I’ve been reading the coverage of the “retirement” of the legacy verification system, both in the news and on the socials. And what I find interesting about this whole affair is all the new Twitter Blue subscribers complaining about people that had the checkmark choosing not to sign up.

    Their displeasure comes through in their tweets on why they think these people chose not to subscribe. Many tout money (these people are too stingy) or logistics (write it off as a business expense). But they don’t give a reason as to why they care. Surely money or logistics is their problem to sort through. Why should they be unhappy that someone chose not to join Twitter Blue? I haven’t seen any tweets answering this question.

    I’m not surprised by that. I wonder if the reason is that many of those that have acquired a checkmark saw those with a verified Twitter handle as being part of the in-group; members of an elite club that you cannot get a membership for1. Naturally they wanted to be part of this in-group, and when this new Twitter Blue subscription offer rolled out, they saw an easy opportunity to gain entry.

    But the thing about status symbols is that they’re only valuable if the in-group chooses to keep them. When all these formerly verified people refused to sign up to Twitter Blue and their checkmarks were removed from their handles, the checkmark lost its value as an indicator of worth. The checkmark no longer signals status.

    Even worse, this in-group has changed its position to one where not having the checkmark is the sign of status. Suddenly, those that have signed up to Twitter Blue found that their attempts to buy their way in were for naught. And that’s what I think they’re angry about. Their new checkmark doesn’t impart status anymore, since those that had it don’t want it. Now it’s just an indicator that you’ve paid $8 a month, with maybe a hint that you found the symbol important in the first place.

    That’s also probably why Musk saw fit to “pay” for Twitter Blue for accounts with more than a million followers, trying to prop up any remaining status this indicator once had. This raises more questions though. Surely he would have seen that allowing anyone to verify their account would dilute the intrinsic status that came with it. I guess he thought that those with the checkmark felt it important enough to keep it, and that it would retain its value as a status indicator.

    Anyway, this could be all pretty obvious to a first year psychology student, but I found it all very revealing. Itā€™s certainly interesting seeing this play out over the last couple of days.


    1. I know thatā€™s not the point of this verification status, but it does seem like many saw it as an ā€œIā€™m an important personā€ signal. ā†©ļøŽ

    Day One and Project Jurassic

    So, Day One is in danger of being sherlocked by rumours of Apple’s upcoming journaling app:

    Mayne echoes the sentiment of several app developers who have been frustrated when Apple launched in-house competitors to the apps they have introduced to the ecosystem, often copying features those apps innovated and adding functionality that only Apple can offer, per the iPhoneā€™s privacy and security policies and APIs.

    I’m a user of Day One and I have my doubts that Apple’s app would be a drop-in replacement for my journaling needs. And I think the reasons why Day One works for me — and could be made to work better — are also opportunities for Automattic to differentiate Day One from Project Jurassic.

    The first is access to users’ data. If Apple’s going to leverage the data it has access to on the phone, then Automattic should go the other way, making it dead easy for services outside Apple’s ecosystem to add stuff to people’s journals. Have a blog? Post photos to Flickr? Track movies in Letterboxd? Wouldn’t it be nice to get all this into your Day One journal, safe and secure? A public API that these services can use to add posts to a user’s journal would go a long way here. These services could offer an export option straight from the app, and Day One could be the private collection of all the things a user does on the web, sort of like a private blog.

    And yes, I know there’s that IFTTT integration, but I found it to be pretty crummy (all the post formatting was stripped and images were not uploaded). And it would be a pretty ordinary user experience to have these services say to their users “hey, if you want the stuff you track here in your journal, you have to create an account at this other service.” I guess all these services could publish this information as RSS feeds, and I would settle for that, if the IFTTT integration actually worked.

    But arguments about IFTTT aside, the point is that Day One should fully embrace other services getting users’ data into their journals, and the best way to do this is with a public API. I know it won’t work for all journals (ones encrypted E2E should remain so) but users should have that option, and services should be empowered to support it.

    And let’s not forget the largest trump card Automattic has over Apple: an Android app and a web app. I haven’t used the web app but I use the Android app all the time. I can’t imagine Apple releasing an Android version of their journaling app, particularly if they’re gearing it towards health and leveraging all the private data people have on their iPhones. Automattic should keep working on both the Android and web apps, so that users not completely in Apple’s ecosystem can keep journals.

    So I don’t think Automattic has much to fear from Project Jurassic. But they can’t rest on their laurels. They should embrace the platforms outside of Apple and iOS to really differentiate Day One, and keep it a favourite of mine for journaling.

ā† Newer Posts Older Posts ā†’
Lightbox Image