Long Form Posts

    Twitter, Public Alerts, And Federated Protocols

    So apparently Twitter’s leadership team has discovered the value it has for public alerts:

    Of all the changes Elon Musk has made to Twitter, blocking emergency and public transit services from tweeting automated alerts might have been his least popular. User backlash roared, as National Weather Service accounts got suspended. Then, one of the country’s largest public transit services, Metropolitan Transportation Authority (MTA), had so much trouble tweeting, it decided to quit posting updates to Twitter.

    It always seemed a little off that these organisations were using Twitter for this. Not everyone is on Twitter, and those that were had to agree to the terms of a private company which could, at any time, do… well, what it’s doing now. Should public alerts for weather and transportation really rely on such private entities?

    I can see why these companies were used back in the late 2000s, when they first came onto the scene. They had apps with push-based notifications and a good (enough) user experience. They were also investing in the backend, setting up services that could scale. So organisations palming off dissemination of these alerts to Twitter made sense.

    But I don’t think it makes sense anymore. With ActivityPub and (in theory) whatever BlueSky is cooking up, you now have open, federated protocols, and a bunch of apps people are building which use them. You also have public clouds which provide an easier way to scale a service. With these two now available, it seems clear to me that these organisations should deploy their own service for sending out these alerts using any or all of these open protocols.

    Then, the public can come to them on their terms. Those using Mastodon or BlueSky can get the alerts in their app of choice. Those that aren’t interested in either can still use any mobile apps these organisations have released, and these protocols can be used there as well. One can imagine a very simple ActivityPub “receiver” app, stripped of all the social features apart from receiving notifications, that could serve organisations that don’t or can’t release a mobile app. Plus, having a service that they run themselves could also make it possible to set up more esoteric notification channels, like web push notifications through the browser.

    And yeah, it’ll cost money and will require some operational expertise. But I’d argue that serving the public in this way is part of their remit, and the reason why tax dollars go in their direction.

    So now’s a great time for these organisations to step away from relying on these private companies for disseminating alerts and embrace the new federated protocols coming onto the scene. Who knows, maybe they’ll also embrace RSS. That would be nice.

    Content Warning: About A Spider

    This spider had been hanging around my garage door opener button for a few weeks now. I didn’t think much of it until today, when I noticed that it was actually a redback. Not the largest redback I’ve seen, but one located pretty close to a button I push quite frequently.

    Photo of said redback (it's small, but the photo is a close-up)
    Photo of a redback spider beside a garage door opener, with another spider on the left.
    If you look closely you can see a bit of the classic red stripe on the spider's abdomen.

    I don’t know about other Australians, but I’ve got a “kill on sight”1 policy with redbacks, so it had to go.


    1. Of course I say that, but I’ve seen redbacks on shed doors that I haven’t done anything about. Though I wouldn’t call them harmless, they were out of the way enough for me to disregard them. ↩︎

    About Those Checkmarks

    This post’s going to be about Twitter. Yes, I know; yet another one. It’s also going to be a bit speculative in nature, so feel free to skip it if you like.

    I’ve been reading the coverage of the “retirement” of the legacy verification system, both in the news and on the socials. And what I find interesting about this whole affair is all the new Twitter Blue subscribers complaining about people that had the checkmark choosing not to sign up.

    Their displeasure comes through in their tweets on why they think these people chose not to subscribe. Many tout money (these people are too stingy) or logistics (write it off as a business expense). But they don’t give a reason as to why they care. Surely money or logistics is their problem to sort through. Why should you be unhappy that they chose not to join Twitter Blue? I haven’t seen any tweets answering this question.

    I’m not surprised by that. I wonder if the reason is that many of those that have acquired a checkmark saw those with a verified Twitter handle as being part of the in-group; members of an elite club that you cannot get a membership for1. Naturally they wanted to be part of this in-group, and when this new Twitter Blue subscription offer rolled out, they saw an easy opportunity to gain entry.

    But the thing about status symbols is that they’re only valuable if the in-group chooses to keep them. When all these formerly verified people refused to sign up to Twitter Blue, and their checkmarks were removed from their handles, the checkmark lost its value as an indicator of worth. It no longer signals status.

    Even worse, this in-group has changed its position to one where not having the checkmark is the sign of status. Suddenly, those that have signed up to Twitter Blue found that their attempt to buy their way in was for naught. And that’s what I think they’re angry about. Their new checkmark doesn’t impart status anymore, since those that had it don’t want it. Now it’s just an indicator that you’ve paid $8 a month, with maybe a hint that you found the symbol important in the first place.

    That’s also probably why Musk saw fit to “pay” for Twitter Blue for accounts with more than a million followers, trying to prop up any remaining status this indicator once had. This raises more questions though. Surely he would have seen that allowing anyone to verify their account would dilute the intrinsic status that came with it. I guess he thought that those with the checkmark felt it important enough to keep, and that it would retain its value as a status indicator.

    Anyway, this could all be pretty obvious to a first-year psychology student, but I found it all very revealing. It’s certainly interesting seeing this play out over the last couple of days.


    1. I know that’s not the point of this verification status, but it does seem like many saw it as an “I’m an important person” signal. ↩︎

    Day One and Project Jurassic

    So, if the rumours are true, Day One is in danger of being sherlocked by Apple’s upcoming journaling app:

    Mayne echoes the sentiment of several app developers who have been frustrated when Apple launched in-house competitors to the apps they have introduced to the ecosystem, often copying features those apps innovated and adding functionality that only Apple can offer, per the iPhone’s privacy and security policies and APIs.

    I’m a user of Day One and I have my doubts that Apple’s app would be a drop-in replacement for my journaling needs. And I think the reasons why Day One works for me — and could be made to work better — are also opportunities for Automattic to differentiate Day One from Project Jurassic.

    The first is access to users’ data. If Apple’s going to leverage the data it has access to on the phone, then Automattic should go the other way, making it dead easy for services outside Apple’s ecosystem to add stuff to people’s journals. Have a blog? Post photos to Flickr? Track movies in Letterboxd? Wouldn’t it be nice to get this into your Day One journal, safe and secure? A public API that these services can use to add posts to a user’s journal would go a long way here. These services could offer an export option straight from the app, and Day One could be the private collection of all the things a user does on the web, sort of like a private blog.

    And yes, I know there’s that IFTTT integration, but I found it to be pretty crummy (all the post formatting was stripped and images were not uploaded). And it would be a pretty ordinary user experience to have these services say to their users “hey, if you want the stuff you track here in your journal, you have to create an account at this other service.” I guess all these services could publish this information as RSS feeds, and I would settle for that, if the IFTTT integration actually worked.

    But arguments about IFTTT aside, the point is that Day One should fully embrace other services getting users’ data into their journals, and the best way to do this is with a public API. I know it won’t work for all their journals (ones encrypted end-to-end should remain so) but users should have that option, and services should be empowered to allow this.

    And let’s not forget the largest trump card Automattic has over Apple: an Android app and web app. I haven’t used the web app but I use the Android app all the time. I can’t imagine Apple releasing an Android version of their journalling app, particularly if they’re gearing it towards health and leveraging all the private data people have on their iPhones. Automattic should keep working on both the Android and web app, so that users not completely in Apple’s ecosystem can keep journals.

    So I don’t think Automattic has much to fear from Project Jurassic. But they can’t rest on their laurels. They should embrace the platforms outside of Apple and iOS to really differentiate Day One, and keep it a favourite of mine for journaling.

    Nerd Counterflex

    You know that Washington Post article that has the list of websites Google used to train Bard? I’ve been seeing people post screenshots of their sites appearing in the training set on their blogs and Mastodon. This morning I read a post from Chris Coyier about it:

    My largest corpus of writing to date is on the web at css-tricks.com (along with many other writers), so naturally, I’m interested in seeing if it was used. (Plus, I’ve been seeing people post their rank as a weird nerd flex, so I’m following suit.)

    I’d suggest reading it. The post is more than just him flexing his ranking in the training set.

    Well, I got curious to see if any of my writing was there. Here’s the result:

    Screenshot of Washington Post article with Bard training set with ‘no results’ showing up for lmika.org

    I guess you can call this a form of nerd counterflex?

    None of my other sites were there either. There was “lmika.com”, but that’s not me. Maybe having that was good enough for Google.

    So yeah, you won’t be seeing Bard sharing any of my… “insightful” thoughts about code reviews anytime soon. 😄

    First Posts Of The Day

    It’s a bit strange how the first post of the day can always feel like the hardest to get out. Every one after it is so much easier to write.

    I wonder if it’s because when faced with an empty text-box, there are these grand plans about what I’m going to write, as if everyone reading this is hanging on my every word: it’ll be my masterpiece of wit, inspiration, and insightfulness that will spread far and wide and blow the minds of everrryyywoonnneee1. Then I write something, and naturally it falls far short of these expectations: mundane, unimportant, already said before2.

    Then I say to myself, “ah well, at least it’s written down.” And with that, the expected level of quality for anything else that day has been set.

    So, this is today’s first post. More might come, probably along the same level of importance as this one. At least until tomorrow, when the cycle starts again.


    1. Another possibility is that I feel I need to write something at the same level of quality of those that I read. That’s probably not a bad feeling to have; but, at least for me, it can get in the way of writing anything at all that day. ↩︎

    2. And let’s not forget the bad spelling and grammar I failed to catch. ↩︎

    Reheating Chicken Schnitzel in a Microwave

    Some tips for using a 1.1 kW microwave to reheat the chicken schnitzel you had for dinner so you can have it for lunch the next day. This is something I occasionally do, and today I found a process that works, which I’d like to document for the future.

    First, don’t use the high setting on the microwave. A minute at high will heat the schnitzel up, but it will also harden the crumbing, making it rubbery and unpleasant to eat. Even worse is using a plate instead of a container: that would ruin the meat even more and make a mess of your microwave.

    Instead, put the schnitzel in a microwave-safe container and heat it up twice, one minute each time, at medium. This will heat it up without making it rubbery. If still not warm enough, do it a third time for about 30 seconds (this I haven’t tried, but it seems like a good approach to getting the meat slightly warm while giving you time to make sure it’s still nice to eat).

    Photos Of Churchill Island

    Yesterday, my parents and I went to Churchill Island for afternoon tea and a walk around the homestead. Here are a few photos of that outing. Apologies that some of them are not great — they were taken in a bit of a hurry.

    Hiding Your Attachment Folder In Obsidian's Outline

    A useful little CSS snippet for anyone using Obsidian that wants to hide their attachment folder from their outline.

    .nav-folder.mod-root>.nav-folder-children .nav-folder>.nav-folder-title[data-path^="Attachments"],
    .nav-folder.mod-root>.nav-folder-children .nav-folder>.nav-folder-title[data-path^="Attachments"] + .nav-folder-children {
            display: none;
    }
    

    To use:

    • Go to the directory $VAULT/.obsidian/snippets where $VAULT is the directory of your vault. If the snippets directory doesn’t exist, create it.
    • Copy the CSS snippet into a new CSS file.
    • Open your vault settings and go to Appearance.
    • Scroll to the bottom to where you see CSS snippets.
    • Click the reload button. You should see the CSS file you’ve just created appear in the list. Turn it on to apply it.

    This’ll work if you’ve configured Obsidian to store attachments in a folder called “Attachments” located at the root of your vault, like I do. But I suspect the data-path attribute holds the folder’s path, so you could use whatever CSS attribute selector you need based on how you’ve configured attachments. For example, the [data-path*="/Files"] selector will probably work if you’ve configured attachments to be in folders called “Files” that sit alongside your notes (I haven’t tested this so YMMV).

    Source: Scribbles_some_words on this Reddit response

    To Wordpress Or Not To Wordpress

    I’m facing a bit of a dilemma.

    I’ve been asked to set up a new website for someone who wants to stand up a new business. In theory this is something that I can do quite easily. I know HTML and CSS. I’ve made a living building backends for web-apps. I do have an undeveloped eye for design, but I like to think I have an idea of the principles of good website usability; and as long as I’m not too ambitious, and aim for a minimal usable site, I can probably put together a simple static website.

    The only problem is that this may not work for the person that I’m building a site for. This is someone that has no experience with putting together websites, and if I were to go down the static HTML road, I’d probably be on the hook to make changes going forward.

    So the alternative is to use a CMS like Wordpress. That way, once I hand ownership of the site to the client, he could either contract someone else to maintain it going forward or even learn to do it himself.

    Only problem with that is that my experience with Wordpress is quite minimal. I can get around the dashboard no problem, but when it comes to designing or customising themes or (sigh) using the Block editor, I’m just as much a novice as he is. And I’m not sure to what degree I can leverage my HTML and CSS skills to style the site. I may be able to change a few things but I’d have to do so within the confines of the block templating system.

    So, what to do?

    Maybe the best way forward is to get a sense of how often this person would need the site changed. That’s by far the biggest variable here. I only know what he wants in a very superficial sense at the moment. I don’t believe it’ll need any sort of blog or product catalogue; just a simple landing page with contact details.

    In that case, I’m wondering if a static site with just plain HTML and CSS would be enough. That’ll be easy enough to put together. It can probably scale with some basic dynamic aspects as well, maybe powered with a simple backend that can regenerate the site. Maybe something like Carrd could work here as well.

    But the danger is that he’ll be locked into using a static site. Any changes would require someone who’s versed in HTML and CSS. Even worse would be a static site with a bit of backend “sprinkled in”. Then he’d be locked into using me. Not sure I like that for his sake or for mine. You read about those developers in The Daily WTF who’ve put together a custom backend for a “simple website” that has grown unwieldy and become a huge mess that someone who inherits it needs to clean up or take responsibility for. The prospect of being such a developer is not a great one.

    Which is why I’m looking at Wordpress, and wondering whether the pain of learning how to work with it is worth it. I guess it’s offsetting the potential future pain (and embarrassment) of transitioning a static site to a proper CMS later.

    So, Leon, which pain is worse?

    Quotes Around Names In Error Messages

    I saw this error a few minutes ago:

    failed to process input: RUNTIME ERROR: function has no parameter stack
    

    This threw me for a minute as I was trying to work out which parameter stack went missing, what I did to cause it to go missing, and what the heck a parameter stack actually is anyway.

    But it had nothing to do with any sort of stack. The error message was showing up because a function call was expecting a parameter named “stack”, which was missing from the function definition.

    This is why I always like putting quotes around names in logs or error messages. It removes any ambiguity about what the message is referring to. If the message was:

    failed to process input: RUNTIME ERROR: function has no parameter "stack"
    

    then you’re more likely to infer that the thing missing was a parameter with the name stack.
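
    In Go, which is where I spend most of my time, the %q verb in the fmt package makes this easy. Here’s a minimal sketch of what I mean (this isn’t the code that produced the error above; the function and parameter names are made up for illustration):

    package main

    import "fmt"

    // checkParam returns an error if the named parameter is missing from params.
    // %q wraps the name in quotes, so the reader knows "stack" is a name, not a concept.
    func checkParam(funcName, paramName string, params map[string]any) error {
        if _, ok := params[paramName]; !ok {
            return fmt.Errorf("function %q has no parameter %q", funcName, paramName)
        }
        return nil
    }

    func main() {
        err := checkParam("render", "stack", map[string]any{"width": 10})
        fmt.Println(err) // function "render" has no parameter "stack"
    }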

    Consider doing this in the error messages you write.

    Web Search Works With Blogs Too

    Here’s one more reason to write (or syndicate) to your blog instead of posting directly to social media: you can use web search engines to find what you need.

    I hear a lot of people complain about the crappy search in Twitter or the lack of search in Mastodon, but this won’t be a problem if you post to your site and let public search engines crawl it. They’re incentivised to make sure their search is good, so you’re more likely to get better results more quickly.

    Honestly, it works. I used it today to find a post from a fellow Micro.blogger that I wanted to reread. A site:<url> query with a few keywords. Found it in 5 seconds. I can’t imagine how long it would’ve taken if I had to track it down in Mastodon.

    Obviously this won’t work for posts from others, unless they too write to their blog. But it’s probably still worth doing for others that enjoy your work. And who knows? It might be useful to yourself one day. I know it has been for me.

    Less Consuming, More Creating

    Mike Crittenden posted a good quote from a random Hacker News commenter:

    Less consuming, more creating.
    Doesn’t matter what it is, doesn’t matter if it’s bad.

    This quote actually sums up this blog quite nicely. The first line explains why it came to exist. The second line describes how it continues to exist.

    Happy 1,000th post.

    Ballarat Beer Festival 2023

    My friends and I returned to Ballarat today for the Beer Festival. It was another stunning day for it: sunny, mild, not too hot. Much like last year, I took an earlier train to walk around Ballarat a little. Not much to report here: very little has changed. But I never get to see Ballarat, so it’s good to walk around a little. My friends were on the train behind mine and I caught up with them when I boarded at Ballarat. We then made our way to Wendouree park for the festival.

    We were given a plastic pot when we entered, and brewers typically offered either a tasting size, a half pot, or a full pot of a particular drink. Half pots are usually the best value for money: you get a decent amount to enjoy, but you can pace yourself and avoid reaching your limit before you’ve tasted everything you wanted to. I made that mistake last year: buying too many full-sized pot servings. I went with half pots this year.

    I also made the mistake of not making notes of the beers I tried last year. I made sure to record them this year. Given the occasion, I decided to go for drinks that I wouldn’t normally go for. In other words: lots of sours today. I’m not a sour drinker and I think I eventually reached my enjoyment limit of them today. But that’s fine: I guess it’s good finding my limits this way.

    Here’s a quick rundown of the drinks I tried:

    • Fox Friday “Feeling Peachy” Fruited Sour: This was less sour and more on the bitter sweet side. The peach flavour came through strong, which took the edge off and made it quite refreshing. Made for a nice starter.
    • Dollar Bill “Australian Wild Ale”: This was a regular ale and a little more bitter than I was expecting (although that could’ve been because of the peach sour). It felt like a heavy sort of ale. Maybe a little too heavy for me. Not sure it’ll be something I go for again.
    • Mountain Culture “MS DOS” West Coast IPA: Nothing too remarkable about this. Just an IPA. But a decent IPA. Will definitely have again.
    • Wild Life Citrus Sour: I can’t quite remember the makeup of this sour. I think it was lemon and blood-orange. Definitely something and blood-orange, as the blood-orange taste really came through. I didn’t think much of this but that’s probably because I’d reached my limit for sours at this point. This brewery was also advertising a “pineapple sour,” which would’ve been amazing, but sadly they weren’t pouring it when we arrived. That might have affected how I thought about the blood-orange drink: feeling that it was playing second fiddle to the pineapple one.
    • Prancing Pony “10 Year” Beer: This was one my friend got but he offered me a taste of it. It was a pilsner IPA but more on the pilsner side. It’s probably one I can see myself drinking if it wasn’t for its alcohol content. It was quite high: 7.5%, or 3 standard drinks in a 500 ml can. That’s just a little too high for me. The can was quite something though.
    • Molly Rose “Strawberry Sublime”: This was a low-alcohol strawberry and lime gose. A bit of a mix of sweet and sour. It was nice, but it really wasn’t doing it for me, and frankly I’m not really sure why I went for it at that point in the day.

    The event was back in Wendouree park, and was pretty much like last year. Which is good: they did a good job last year.

    But there were a few small changes this year. For one, more tables were placed under the trees, which was a good move. There were more tables in the sun last year, which never had anyone at them for long as people preferred the ones in the shade. I think there were fewer brewers this year too. It might have been because of the layout change but it felt a little smaller this year, and some brewers from last year didn’t make an appearance.

    Also, no finska this year.

    But all in all, it was a good day. Good excuse to get on a regional train out to the country for a change.

    Ignoring Bard to Speak to Paulie

    So this happened today.

    Our team was testing the integration between two systems. The first system — let’s call it Bard — can be configured to make API calls directly to Stripe, or be configured to use the second system — let’s call it Paulie — to call Stripe on its behalf. Bard has a REST API that is used by the HTML front-end to handle user requests. Paulie is designed to be completely isolated from the front-end and has a simple gRPC API that Bard calls. Whether or not Bard calls Paulie at all is determined by the value of an SSM parameter.

    The test was set up with Bard configured to bypass Paulie and make calls directly to Stripe. The way we were to verify this was to tail the logs of both Bard and Paulie, make a REST API call, and confirm that logs showed up in Bard but not Paulie.

    I got called by those running the test to help, as they were seeing something unusual: when the test was performed, logs were showing up in Paulie. The system was configured for Bard to ignore Paulie and go directly to Stripe, and yet Paulie was being spoken to.

    So we started going through the motions. We checked to make sure we had the correct version of Bard deployed, checked the SSM parameter, traced through the code, and restarted Bard a couple of times to make sure it was configured correctly. And after every check we tried the test again, with nothing changing: logs were still coming through from Paulie.

    We were at it for about 15 minutes. I was starting to go through the more esoteric explanations for why this was happening, like whether we were using SSM parameters incorrectly and may have been picking up an old configuration or something. Then, as I was going through the traces one last time before giving up, I noticed something: there were no traces from Bard. The REST API it had did all sorts of things, like contacting the database, before going to Paulie or Stripe, so I was expecting something like that to show up. Yet there was no evidence of any of that happening.

    I then asked how this was actually being tested. And you can probably guess what the response was. Turns out the person running the test wasn’t using Bard’s REST API at all, and was making gRPC calls directly to Paulie.

    Well, naturally, if you called Paulie directly without calling Bard, it doesn’t matter what Bard is configured to do. 🤪

    Now, I don’t write this because I’m angry or annoyed. In fact, I came away from this feeling very zen about the whole thing. Mistakes like this happen all the time, it’s fine.

    But it’s a perfect opportunity to remind myself that working in tech can sometimes give you tunnel vision, and that sometimes the explanation isn’t technical at all. Sometimes the answer is much simpler than you think.

    New Stuff Setup Weekend

    A bunch of new stuff I’ve bought has arrived recently and this is the weekend I finally get around to setting it up.

    New Furniture

    The largest one is a new couch. I’ve been sitting on a second-hand two seater that my parents gave me when I moved out. It did the job but it was getting quite old and saggy, and I’ve been finding myself wanting something larger that I can lie across. So about six months ago, I bought a new couch. It was meant to come last December, but got delayed thanks to Covid-19 supply chain issues. But it finally arrived this Saturday.

    But before it could be delivered, I had to prepare the way, as they said. I did that on Friday, moving the old couch around to make space for the new one and also taking the opportunity to clean up a little.

    An empty space where the new couch will go, with the old couch to the right, beside the bookcase
    Preparing the way for the new couch. The old couch is temporarily beside the book shelf.

    The placement of the old couch is a little awkward, but it will only be for about a week and a half before it leaves my house.

    Delivery time was between 10 and 12 on Saturday morning, and it was near the end of the delivery window when the movers eventually arrived. There were some concerns about getting the couch through the door but they managed to do so by standing it up on its side and sliding it through.

    The new couch placed in the living room. The couch is a maroon reclining three seater
    The new couch.

    The delivery went smoothly, but I wouldn’t call it a great “first launch” experience. Apparently it’s policy not to take away all the packing material, which means it’s something I have to deal with. It’s not a huge problem, and it’s not the first time I’ve had to get rid of waste over several weeks, but it did take the shine off enjoying a new piece of furniture.

    A pile of packing material waste in the living room
    All the waste I need to sort and throw out.
    A wooden board propped up against a wall
    Most of the waste is plastic and cardboard, but there's also this wooden board. I'll probably keep this. Might be useful in the future.

    But that aside, it’s great having a new couch. One interesting thing about the long gap between purchase and arrival is that I forgot how it felt trying it out in the store. It’s firmer than I remember and the height of the seats is a little short. It feels like a whole new experience from scratch. But I expect I’ll get used to it over time. And it’s not like I didn’t realise these properties when I actually tried it out during the shopping phase.

    All in all, I’m really happy with it.

    New Electronic Devices

    Today (Sunday) was all about electronic devices, starting with a new M2 Mac mini.

    A M2 Mac mini in its box
    The new M2 Mac Mini

    This will replace my 2018 Intel Mac mini, which will become a home server. This is actually the first Apple Silicon computer I own. I’ve been using an M1 MacBook Pro for work for a year and a half, and I’m reasonably confident that these chips will handle the type of things I’d like to run.

    Going through the setup was pretty seamless. I tend to start all new machines from scratch, meaning I don’t migrate anything over. Since the old Mac mini will still be around, I’ll move projects and documents over to the new Mac over time.

    The one thing that didn’t work out as well as I expected was my USB audio interface. I’ve been using a Roland Quad-Capture as my audio interface, and while I was getting ready to move to the M2 chip, I did a search to see whether Roland had drivers that worked with Apple Silicon. At the time I thought they did, but when I tried installing them they didn’t work at all. Another look today confirmed that there was no driver support for the M2 chip.

    This was a bit of a setback. I think next time I’ll make sure to actually do a search for “thing M2 support” instead of just browsing the driver download page and inferring support for something when it doesn’t explicitly say “does not support M1 Macs”. It would also be helpful to remember that MacOS 10.X does not equal MacOS X. 🤦

    Anyway, I’ve got another audio interface on the way. It’s another Roland product, since I figure something designed for music production will be able to handle low-latency audio. I also need something with MIDI since I do occasionally use it for music production. This new one uses the built-in MacOS audio drivers, so hopefully I won’t need to worry about driver support going forward.

    Apart from that, I’m still in the process of setting the Mac up. It always feels a little strange moving to a brand new machine. It’s like moving house or going to a holiday let: everything is new to you, you need to find out where things are, and none of your old things are there. But it’s also a good opportunity to form a few new habits. For example, I may try using Safari as my browser instead of Vivaldi, and start using Mac-Assed Mac Apps like NetNewsWire in lieu of web-apps. I might be a little more judicious about keeping my Downloads folder clean as well. I saw a cronjob that will remove things in that folder after a week. I’ll give this a try and see if a clean Downloads folder works for me.

    The last bit of kit is a new Smart Keyboard Folio case for my iPad.

    An iPad Smart Keyboard Folio in its box
    The new Smart Keyboard Folio

    I finally bit the bullet and replaced my old keyboard folio with a new one. The keyboard was completely non-functional in the end and the lining was starting to peel off, so it was probably time for a replacement anyway.

    An old Smart Keyboard Folio, with the top lining starting to peel off
    The "retired" Smart Keyboard Folio. Notice the lining above the keyboard starting to peel off.

    I’ve only tried the new one for a few minutes as I was looking up passwords and setup instructions for the new Mac. So far it’s working well. The only concern I have is that I’ll have to go through this again in three years time.

    So that’s all the new stuff I got set up this weekend. Most of the setup will continue over the next few weeks, especially the new Mac, but I’m happy I got most of it done.

    Spotify Video Follow-up

    Some follow-up from my post about Spotify videos. I looked into this a little and from what I understand they’re not full videos but “short looping video clips that play during certain songs,” at least according to this website.

    So I guess my initial belief was incorrect. Spotify might have music videos (there are a bunch of articles about them considering it in 2020-21) but this looks to be completely different.

    Furthermore, you can turn them off. They’re called “Canvas Video Clips”, and if you go into the preferences of the Android mobile app, sure enough there’s a switch for them.

    Spotify preference screen with the toggle for Canvas visualisations enabled

    Not sure why I missed that when I was looking for that option earlier. I guess it’s because I was looking for a preference with the word “video” in the label. But this switch seems to work, and after I turned it off, the visuals stopped.

    Still toying with the idea of cancelling my subscription for other reasons, but at least this is one less concern I have with the service.

    On Higher Order Functions In Go

    It’s a bit surprising that higher-order functions like map and filter have not caught on in Go.

    They seemed to have caught on quickly when they were added to Java. One of the long standing issues back then was the clunky and verbose approach to writing closures. Java 8 fixed this with the introduction of the lambda (the -> operator). Suddenly, what once took multiple lines of boilerplate could be done in a single expression. The underlying mechanism was still the same but the new syntax was enough to get people to use it (amongst other things, read on).

    I don’t see that in Go. With generics in Go 1.18 reducing the need for interface{} and type assertions, I would have expected the tide to turn a little: more map and filter functions, and way fewer for loops. But it doesn’t seem to have happened yet. I still see those same for loops that I’ve been seeing over the last eight years.
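
    Just to be clear about the kind of thing I mean, here’s a rough sketch of the sort of hand-rolled helpers that generics make possible (these aren’t from any standard or official package; the names and signatures are just my own):

    // Package hof is a hypothetical home for hand-rolled higher-order helpers.
    package hof

    // Map applies f to each element of in and returns a new slice of the results.
    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    // Filter returns the elements of in for which pred returns true.
    func Filter[T any](in []T, pred func(T) bool) []T {
        out := make([]T, 0, len(in))
        for _, v := range in {
            if pred(v) {
                out = append(out, v)
            }
        }
        return out
    }

    With something like this, the usual “collect the names of all the active users” loop becomes a one-liner along the lines of Map(Filter(users, isActive), userName) (made-up names again), instead of the make-range-append dance. Perfectly doable today; it just isn’t what you tend to see.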

    I’m not sure of the reason, but I’d guess it could be explained by two things.

    The first is Go’s culture. And yeah, you could describe Go as having a culture1. It’s one that’s quite conservative and methodical. Fancy ways of doing things that sacrifice readability in favour of terseness are usually frowned upon. It’s proper to make sure the code is clear, even if it takes more room on the screen.

    The culture comes through in the design of the Go language itself. A classic example is the use of the error type rather than exceptions. And I think it partly explains why higher-order functions have not caught on. It’s not because you can’t do it: at least you have proper closures in the language, which is something you couldn’t say about Java back in the pre-1.8 days.

    But I don’t think culture is enough. You couldn’t say that using higher-order functions in Java 1.6 was a big thing back then either2. What got them moving so quickly?

    This is where I think reason number two comes in, which is the lack of standard library support. When Java 8 came out, every collection type was retrofitted with a bunch of higher-order methods which made it trivial to map, filter, or reduce anything you need. There was even a new streams package, allowing you to build pipelines that are nothing but higher-order methods. All of this was useful and fun to work with, and people naturally wanted to use them.

    Nothing like this existed when Go 1.18 was released. Nothing like this is in the upcoming release of Go 1.20.

    Now, to be fair, this is very characteristic of how the Go maintainers add features. They take their time, making sure not to break backwards compatibility or lock themselves into a design that is difficult to evolve. And I understand the reasons why they want to go slow here. But that means an “official” package of higher-order functions will take time to be ready. And no such package exists now. Sure, there are open-source and experimental ones out there, but would you be using those for any production-level code? Maybe adding one more for loop isn’t exciting, but at least it doesn’t involve another dependency (and you’re already using for loops in several of your other functions anyway).

    So I guess I’ll need to wait a bit longer for higher-order functions to be more of a thing. I can’t say I’m not disappointed: one of the nice things about working in a language like Ruby, JavaScript, or even Java itself, is all the higher-order functions they have. I’m still hopeful that they will come eventually. After all, generics are only a year old. And Go as a language may move slowly, but at least it’s still moving.


    1. Maybe another way to put it is a “way that things are done.” ↩︎

    2. Unless you’re writing Android apps, in which case you’re forced into a culture of anonymous classes for all the callbacks you need to write. ↩︎

    Making A Long Form Posts Category In Micro.blog

    I use the Categories feature of Micro.blog to organise the types of posts I make on this site. One of the categories I have on this blog is called Long Form Posts, which I use to file all the posts I have that have titles. This is done automatically, such that I don’t have to think about adding a post to this category once I’ve written it1.

    It’s a little hard to find the relevant features in Micro.blog to do this, but they’re there. Here’s how you can use them to make such a category on your Micro.blog blog.

    Creating The Category

    The new category edit box with the name Long Form Post
    The New Category form.

    The first thing you need to do is create the category:

    1. Click “Categories” in the sidebar. You should be presented with a list of categories on your blog. You can add a new one by clicking “New Category”.
    2. Give your category a name. I chose the name “Long Form Posts” but it can be anything you want: Titled Posts, Essays, etc.
    3. Click “Create Category”.

    The new category should show up in the list of categories on Micro.blog. You should also see the category appear on your blog as well. If you were to go to the archive page, the list of categories should appear, along with all the posts on your blog. Clicking a category will show only the posts that have it.

    The new category should also have an RSS feed, which you can use in any standard feed reader. You can get to it by clicking the category on your blog, and adding feed.xml to the URL. For example: the URL https://lmika.org/categories/long-form-posts/feed.xml is the RSS feed of my Long Form Post category.

    Creating The Filter

    The new filter form configured for putting titled posts in the Long Form Posts category
    The new filter form configured for filing titled posts in the Long Form Posts category.

    The Long Form Post category should exist now, but you may notice that it’s empty. At this point you need to manually add the Long Form Post category to each post you want in this category by selecting the checkbox in the Edit Post window. If you want Micro.blog to do this automatically for each post that has a title, you will need to create a Filter:

    1. Within the “Categories” section, click “Edit Filters”, then click “New Filter”.
    2. For a filter that will select all blog posts with a title, choose “Only long posts with a title” in the “Post length” picker.
    3. Select the category you want these posts to have, then click “Add Filter”.

    Now any post with a title will automatically be given the Long Form Post category. You can try this out by writing a post, giving it a title, then saving it as a draft. When you go back to edit the post, the Long Form Post category checkbox should be checked.

    Finally, to apply the new filter for any existing post, click “Run Filter”.


    1. I haven’t managed to get automatic category selection working for blogging apps like MarsEdit. There might be a way to do this, but I haven’t really looked. ↩︎

    I, Developer

    There was a bit of a discussion on Mastodon and various blogs about what best to call someone who writes code for fun or profit. I’ll spare you the prologue about how this discussion has been going on since the start of the profession itself: I’m sure you’ve heard it all before. But hearing one of these terms today got me thinking about this, and I thought I’d say what my preferences are.

    As someone who writes software for my job and hobby, I personally prefer the term “developer”. I usually call myself a “developer” or “dev” when I’m around a group of my peers. When I’m with lay people, I usually say that I’m a “software developer”, as people can associate a “developer” with someone who’s involved with building houses (this has happened to me once). I don’t mind the terms “coder” or “programmer” either, but I don’t feel like they fully describe what I actually do, given that about half my job involves things other than code (as much as I dislike that fact).

    Officially my role is “engineer”, but I don’t really care for the term. The reasons are the same as for anyone else that’s got a problem with it, namely the fact that we’re not bound to the same level of accreditation that “real” engineers are (civil, electrical, etc.). But I think my dislike for it also has to do with the fact that the job of a “software engineer” usually involves more than just the “engineering” side of things. There’s design work, planning work, operations, etc. that feel beyond the scope of what could simply be called engineering. I guess one could say that an engineer is required to consider maintenance when they’re designing a structure or electrical circuit, but I feel like we software developers are more involved in the day-to-day operations of things than our “real” engineer counterparts. I could be completely wrong here though: I don’t know a thing about what “real” engineers really get up to, so I probably can’t say.

    One term I’ve recently started hearing more is “individual contributor”, and I must say I don’t care for it. It feels so abstract and wishy-washy; so divorced from the actual act of working with the code which, arguably, is a pretty important part of delivering value for a project. I don’t know how this term got so widespread. Maybe it’s a way of grouping all the activities involved in software development into one noun-phrase. I guess if I’m being charitable, I can see it that way. After all, the existing terms don’t really work as well for doing this (I guess that’s why the question was posted on Mastodon in the first place). And yet, I still get this feeling that this term exists to deliberately reduce the importance of the value these people deliver, as if we’re interchangeable cogs. It might just be where I see this term, so I could be completely unfair. But that’s how I feel, and it’s for that reason I don’t like using this term.

    So that’s pretty much it. All in all I’m generally okay with being called whatever you want to call me, and I won’t call you out if you called me something else (except “Java monkey”, especially since I haven’t worked in Java for a few years now). But if I had the choice: call me a “dev”, “developer” or “programmer”; try not to call me an “engineer”; and please don’t call me an “individual contributor”.

    And please don’t call me at home. 😛
