Long Form Posts

    My Impressions of GitHub Codespaces

    The GitHub Universe 2021 conference started a few days ago and one of the features touted in the day one keynote was GitHub Codespaces. This is a development environment that is accessible from within your web browser. It’s based on VSCode, which is a popular and well-designed IDE that is already written in JavaScript1, and also provides access to a Linux shell running in the cloud, allowing you to do various things like build and test your code.

    After playing around with Codespaces during the beta, and seeing the feature develop into something that is available for everyone, I thought I’d say a few words on how I use it and what I think of it.

    First, I should probably say that I’m not a heavy Codespaces user. The keynote seemed to be touting the features of Codespaces that make it useful as a dev environment for collaborative coding. This is something I have no personal use for, since it’s mainly just me working on the repos I use with Codespaces, so I haven’t really explored these features myself.

    The main use I have for Codespaces is making changes to repos on a machine that is not my own. There are times when I need to add a new feature or fix a bug on a tool I use, and for various reasons I cannot (or choose not to) setup a dev environment on the machine I’m working on to make the change. A case like this would have me look at one of the alternatives, or even make use of the GitHub web-editor. The web-editor works but doesn’t really offer much when it comes to building and testing your changes (I’ll talk more about the alternatives later).

    I should say that this doesn’t happen very often. Most of the time I’m on my own machine, and I have no need for a web-based dev environment as I can just use the environment I have. But when that’s not possible, being able to spin up a codespace and make the change, complete with a Linux environment which you have sudo access to, is quite useful.

    Codespaces is also pretty nice in terms of a coding environment. This is no real surprise since it’s based on VSCode, but compared to the alternatives, the little things like keystroke performance and doing things quickly in the editor make a huge difference. I’ve tried a bunch of alternatives in the past like Cloud9 and CodeAnywhere, and Codespaces is by far the most polished.

    Another advantage Codespaces has over the alternatives is that it seems suited to throw-away development environments. It might be possible to keep an environment around for an extended period of time, but I tend to spin up temporary workspaces when I need them. The alternatives tend to prefer a more long-lived environment, which involves a lot of setup for something that is kept around. This feels like splitting your dev environments in two, and I always feel the need to select one as the “definitive workspace” for that particular project going forward. I don’t feel that with Codespaces: I can quickly spin up a new environment, which has most of what I need already installed, and make the change I need with the full knowledge that once I no longer need it, it will be torn down (pro-tip: always push your changes to origin once you’re done making your changes in Codespaces). It helps that spinning up a new environment is quite fast.

    So, that’s my impression of GitHub Codespaces. I’m not sure who has access to it: you may need to be on a paid plan, for example. But if it’s enabled for your account, and you find yourself needing a temporary, cloud-based dev environment to do your work in, I’d suggest giving it a try.


    1. It’s actually TypeScript ↩︎

    Alto Catalogue Update

    I’ve really tied myself up in knots here. I’m spending some time working on Alto Catalogue, trying to streamline the process of uploading individual tracks into a new album. This is a workflow that is absolutely not user-friendly at the moment, and the only way I’ve gotten tracks into the catalogue is to run a hacked-together tool to upload the tracks from the command line. The reason why I’m addressing this now is that it’s slightly embarrassing to have this open-source project without having a nice way of doing something that, by all accounts, is quite fundamental (a good hint for when you’re facing this is when it comes time to write the end-user documentation: if you can’t explain how to do something in a way that doesn’t include the words “hack”, “complicated”, or “unsupported”, then something is missing).

    So I’m trying to close this feature gap, but it’s proving to be more complicated than I expected. The main issue relates to ID3 tags and how media is arranged in the repository. Previous versions of the catalogue actually did have a way of uploading track media to the repository, which is essentially an S3 bucket. The way this works is that the catalogue issues the browser a pre-signed PUT URL, and the browser uploads the track media directly to S3. But in order to get a pre-signed URL, you need to know the object key, which is a bit like a file path. The old upload flow had the user enter the object key manually in the upload form.
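
    As an aside, generating the pre-signed URL itself isn’t the hard part. Here’s a minimal sketch using the AWS SDK for Go (v1), with a placeholder bucket and key; the catalogue’s actual code may well structure this differently:

        package main

        import (
            "fmt"
            "log"
            "time"

            "github.com/aws/aws-sdk-go/aws"
            "github.com/aws/aws-sdk-go/aws/session"
            "github.com/aws/aws-sdk-go/service/s3"
        )

        func main() {
            // Credentials and region come from the usual environment/config chain.
            sess := session.Must(session.NewSession())
            svc := s3.New(sess)

            // Build a PUT request for the object key (hard-coded here; ideally
            // derived from the track metadata) and pre-sign it so the browser
            // can upload directly to S3 without going through the server.
            req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
                Bucket: aws.String("my-music-repository"),       // placeholder
                Key:    aws.String("artist/album/01-title.mp3"), // placeholder
            })
            uploadURL, err := req.Presign(15 * time.Minute)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(uploadURL)
        }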

    This worked, but I had some real issues with it. The first is that I’d like the objects within the S3 bucket to be organised in a nice way, for example “artist/album/tracknum-title.mp3”. I’m hoping that this S3 bucket will be my definitive music collection, and I don’t want just some random IDs that are completely indecipherable when I browse the objects in the S3 bucket. That way, if I were ever to shut the catalogue down or lose all the metadata, I’d still be able to navigate my collection via the object keys alone.

    The second was that this approach did not take the track metadata into account. Track metadata is managed in a PostgreSQL database and had to be entered manually; yes, this included the track duration. The only reason I used the hacked-together tool to upload tracks was that it was a tool I was already using to set ID3 tags on MP3 files, and it was trivial to add an HTTP client to do the upload from there. Obviously, asking users to run a separate tool to do their track uploads is not going to fly.

    So I’m hoping to improve this. The ideal flow would be for the user to simply select an MP3 from their file system. When they click upload, the following things will happen (a rough sketch of the key-derivation step follows the list):

    • The ID3 tags of the MP3 will be read.
    • That metadata will be used to determine the location of the object in S3.
    • A pre-signed URL will be generated and sent to the browser to upload the file.
    • The file is uploaded to S3.
    • A new track record is created with the same metadata.
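
    To make the second step a little more concrete, here’s a rough sketch of deriving the object key from the tags. I’m using github.com/dhowden/tag purely as an example ID3 library (not necessarily what the catalogue uses), and the “artist/album/tracknum-title” layout is hard-coded here, whereas the catalogue would read it from the repository configuration:

        package main

        import (
            "fmt"
            "log"
            "os"
            "path"
            "path/filepath"

            "github.com/dhowden/tag"
        )

        // objectKeyFor derives an S3 object key like "artist/album/01-title.mp3"
        // from a track's ID3 tags.
        func objectKeyFor(filename string) (string, error) {
            f, err := os.Open(filename)
            if err != nil {
                return "", err
            }
            defer f.Close()

            m, err := tag.ReadFrom(f)
            if err != nil {
                return "", err
            }

            trackNum, _ := m.Track()
            name := fmt.Sprintf("%02d-%s%s", trackNum, m.Title(), filepath.Ext(filename))
            return path.Join(m.Artist(), m.Album(), name), nil
        }

        func main() {
            key, err := objectKeyFor("track.mp3") // placeholder file
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(key) // e.g. "artist/album/01-title.mp3"
        }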

    The libraries I’m using to read the ID3 tags and track duration require the track media to be available as a file on the local file system (I assume this is for random access). Simply uploading the track media to the server’s local file system would be the easiest approach, since it would allow me to read the metadata, upload the media to the repository on the backend, and set up the track metadata all in a single transaction. But I have some reservations about allowing large uploads to the server, and most of the existing infrastructure already makes use of pre-signed URLs. So the first run at this feature involved uploading the file to S3 and then downloading it on the server backend to read the metadata.

    But you can see the problem here: in order to generate a pre-signed URL to upload the object to S3, I need to know the location of the media, which I want to derive from the track metadata. So if I don’t want uploads to go straight to the file system, I need the object to already be in S3 in order to work out where in S3 it should go.

    So I’m wondering what the best way to fix this would be. My current thinking is this series of events:

    • Create a pre-signed URL to a temporary location in the S3 bucket.
    • Allow the user to upload the media directly to that location in the S3 bucket.
    • On the server, download that media object to get the metadata and duration.
    • From that, derive the object’s location and move the object within S3, something I’m guessing should be relatively easy if the objects are in the same bucket (see the sketch after this list).
    • Create a new track record from the metadata.
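
    The “move” in the fourth step would really be a copy followed by a delete, since S3 has no rename operation. Another sketch with the AWS SDK for Go (v1), with bucket and key names as placeholders:

        package main

        import (
            "log"

            "github.com/aws/aws-sdk-go/aws"
            "github.com/aws/aws-sdk-go/aws/session"
            "github.com/aws/aws-sdk-go/service/s3"
        )

        // moveObject "moves" an object within a single bucket by copying it to
        // the new key and then deleting the temporary one.
        func moveObject(svc *s3.S3, bucket, srcKey, dstKey string) error {
            // CopySource is "bucket/key"; keys containing special characters
            // need to be URL-encoded.
            if _, err := svc.CopyObject(&s3.CopyObjectInput{
                Bucket:     aws.String(bucket),
                CopySource: aws.String(bucket + "/" + srcKey),
                Key:        aws.String(dstKey),
            }); err != nil {
                return err
            }
            _, err := svc.DeleteObject(&s3.DeleteObjectInput{
                Bucket: aws.String(bucket),
                Key:    aws.String(srcKey),
            })
            return err
        }

        func main() {
            svc := s3.New(session.Must(session.NewSession()))
            err := moveObject(svc, "my-music-repository",
                "uploads/tmp-1234.mp3", "artist/album/01-title.mp3") // placeholders
            if err != nil {
                log.Fatal(err)
            }
        }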

    The alternative is biting the bullet and allowing track uploads directly to the file system. That will simplify the crazy workflow above, but means that I’ll need to configure the server for large uploads. This is not entirely without precedent though: there is a feature for uploading tracks in a zip file downloaded from a URL, which uses the local file system. So there’s not a whole lot stopping me from going down this path.

    The third approach might be looking for a JavaScript library to read the ID3 tags in the browser. This is not great, as I’d need to get the location from the server anyway: the metadata-derived object location is configured on a per-repository basis. It also means I’ll be mixing up different ways of getting metadata.

    In any case, not a great set of options here.

    Feeds In Broadtail

    My quest to watch YouTube without using YouTube got a little closer recently with the addition of feeds in Broadtail. This uses the YouTube RSS feed endpoint to list videos recently added to a channel or playlist.
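
    For the curious, the endpoint is https://www.youtube.com/feeds/videos.xml?channel_id=… (playlists use playlist_id instead), and pulling the recent videos out of it doesn’t take much. A rough sketch of the idea in Go, not Broadtail’s actual code, decoding only the fields I care about and using a placeholder channel ID:

        package main

        import (
            "encoding/xml"
            "fmt"
            "log"
            "net/http"
        )

        // feed captures just the parts of YouTube's Atom feed that matter here:
        // each entry's title and its yt:videoId extension element.
        type feed struct {
            Entries []struct {
                Title   string `xml:"title"`
                VideoID string `xml:"videoId"`
            } `xml:"entry"`
        }

        func main() {
            resp, err := http.Get("https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID")
            if err != nil {
                log.Fatal(err)
            }
            defer resp.Body.Close()

            var f feed
            if err := xml.NewDecoder(resp.Body).Decode(&f); err != nil {
                log.Fatal(err)
            }
            for _, e := range f.Entries {
                fmt.Println(e.VideoID, e.Title)
            }
        }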

    Feed listing, in all its 90’s web style glory.

    There are a bunch of channels that I watch regularly, but I’m very hesitant to subscribe to them within YouTube itself (sorry YouTubers, but I choose not to smash that bell icon). I’m generally quite hesitant to give YouTube any signal about my watching habits, feeding their machine learning models even more information about myself. But I do want to know when new videos are available, so that I can get them into Plex once they’re released. This is where feeds come in handy.

    Recent videos of a feed.

    Also improved is the display of video metadata when selecting a feed item or entering a video ID in the quick look bar. Previously this would immediately start a download of the video, but I prefer knowing more about the video first. These downloads aren’t free, and they usually take many hours to complete. Better to know more about them before committing to one.

    Video details page.

    Incidentally, I think this mode of watching has a slight benefit. There are days when I spend the whole evening binging YouTube, not so much following the algorithm but looking at the various channels I’m interested in for videos that I haven’t seen yet. Waiting several hours for a video download feels a little more measured, and less likely to send me down the YouTube rabbit hole. I’m sure there will still be evenings when I do nothing else other than watch TV, but hopefully that’s more of a choice rather than an accident.

    I think this is enough on Broadtail for the time being. It’s more or less functional for what I want to do with it. Time to move onto something else.

    Some Screenshots Of Broadtail

    I spent some time this morning doing some styling work on Broadtail, my silly little YouTube video download manager I’m working on.

    Now, I think it’s fair to say that I’m not a designer. And these designs look a little dated, but, surprisingly, this is sort of the design I’m going for: centered pages, borders, etc. A bit of a retro, tasteless style that may be ugly, but still usable(-ish).

    It’s not quite finished — the colours need a bit of work — but it’s sort of the style I have in my head.

    Start of Yet Another Project Because I Can't Help Myself

    One of the reasons why I stopped work on Lorikeet was that I was inspired by those on Micro.blog to set up a Plex server for my YouTube watching needs. A few years ago, I actually bought an old Intel NUC for that reason, but I never got around to setting it up. I managed to do so last Wednesday and so far it’s working pretty well.

    The next thing I’d like to do is set up RSS subscriptions for certain YouTube channels and automatically download the videos when they are published. I plan to use “youtube-dl” for the actual video downloading part, but I’m hoping to build something that would poll the RSS feeds and trigger the download when new videos are published. I’m hoping that this service would have a web-based frontend so I don’t have to log in via SSH to monitor progress, etc.
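
    The downloading part would just be shelling out to youtube-dl. Something along these lines is what I have in mind, with the video ID and destination directory as placeholders and the output template kept simple:

        package main

        import (
            "log"
            "os"
            "os/exec"
        )

        // download invokes youtube-dl for a single video. The polling loop that
        // watches the RSS feeds would call this whenever a new video shows up.
        func download(videoID, destDir string) error {
            cmd := exec.Command("youtube-dl",
                "-o", destDir+"/%(title)s.%(ext)s",
                "https://www.youtube.com/watch?v="+videoID)
            cmd.Stdout = os.Stdout
            cmd.Stderr = os.Stderr
            return cmd.Run()
        }

        func main() {
            if err := download("VIDEO_ID", "/data/videos"); err != nil { // placeholders
                log.Fatal(err)
            }
        }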

    The downloads would need to be automatic, as the requests made by youtube-dl seem to be throttled by YouTube, and a longish video may take several hours to download. If this were a manual process, assuming I’d actually remember to start the download myself, the video wouldn’t be ready for my evening viewing. I’m hoping that my timezone will work to my advantage here. The evenings on the US East Coast are my mornings, so if a video download starts at the beginning of my day, hopefully it will be finished when my evening rolls around. I guess we’ll see.

    Anyway, that’s what my current coding project will be: something that would set up RSS subscriptions for YouTube channels, and download new videos when they are published.

    This is probably one of those things that already exists out there. That may be true, but there are certain things that I’m hoping to add down the line. One such thing might be adding the notion of an “interest level” to channels, which would govern how long a video is kept around. For example, a channel marked as “very interested” would have every video downloaded and stored into Plex straight away. Videos from “mildly interested” channels would be downloaded but kept in a holding place until I choose to watch them, in which case they would be moved to Plex. If that doesn’t happen in 7 days or so, the videos would be removed.

    I’d like to also add some video lifecycle management into the mix as well, just to avoid the disk being completely used up. I can see instances where I’d like to mark videos as “keep forever” while all the others churn away after 14 days or so. It might be worth checking out what Plex offers for this, just to avoid doubling up on effort.

    But that’s all for the future. For the moment, my immediate goal is to get the basics working.

    Abandoning Project Lorikeet

    I’ll admit it: the mini-project that I have been working on may not have been a good idea.

    The project, which I gave the codename Lorikeet, was to provide a way to stream YouTube videos to a Chromecast without using the YouTube app. Using the YouTube app is becoming a real pain. Ads aside, they’ve completely transformed the Chromecast experience from a very basic viewing destination into something akin to a Google TV, complete with recommendations of “Breaking News” from news services that I have no interest in seeing.

    So I spent some time trying to build something to avoid the YouTube app completely, using a mixture of youtube-dl, a Buffalo web-app, and a Flutter mobile app. I spent the last week on it (it’s not pretty so no screenshots), but at this stage I don’t see much point continuing to work on it.

    For one, the experience is far from perfect. Video loading is slow and there are cases when the video pauses due to buffering. I’m sure there are ways around this, but I really don’t want to spend the time learning how to do this.

    It was also expensive. I have a Linode server running in Sydney which acts as a bit of a hobby server (it’s also running Pagepark to serve this site); but in order to be closer to the YouTube CDN endpoints near me, I had to rent a server running in Melbourne. And there are not many VPS hosting providers that offer hosting here.

    So I went with Google Cloud.

    Now, I’m sure there’s a lot to like about Google Cloud, but I found its VPS hosting to be quite sub-par. For just over $10 USD a month, I had a Linux virtual server with 512 MB of RAM, 8 GB of storage, and a CPU which I’d imagine is throttled all the way back, as trying to do anything of significance slowed it to a crawl. I had immense issues installing OS updates, getting the Dokku-based web-app deployed, and trying to avoid hitting the storage limit.

    For the same amount of money, Linode offers me a virtual server with 2 GB of RAM, 50 GB of storage, and a real virtual CPU. This server is running 4 Dokku apps, 3 of them with dedicated PostgreSQL databases, and apart from occasionally needing to remove dangling Docker images, I’ve had zero issues with it. None! (The podcasters were right).

    Where was I? Oh, yeah. So, that’s the reason why I’m abandoning this project and will need to re-evaluate my online video watching experience. I might give Plex a try, although before doing something like setting up a dedicated media server, I’ll probably just use the Mac Mini I’ve been using for a desktop in the short term.

    So, yeah, that’s it. It’s hard to abandon a project you spent any amount of time on. I suppose the good thing is that I got to play around with Flutter and learnt how to connect to a Chromecast using Dart, so it’s not a complete waste.

    Two People

    There are two people, and each one has the same problem that they want to get solved.

    The first person chooses the option to pay $10 a month, and all they have to do is sign up to a service that will solve the problem for them. The service they sign up for takes care of the rest.

    The second person chooses the option to pay $15 a month, put in 20 hours of work to get something built, and take on an ongoing commitment to keep it maintained.

    Guess which person I am today.

    (Hyper)critical Acclaim

    There were a couple of events that led me to write this post. I’m sure part of it was seeing the posts on the 10 year anniversary of Steve Jobs’ death, although such an event would probably not have been sufficient in itself. What tipped it over the edge was seeing the Ars Technica review of iOS showing up in my RSS feed on the same day. Pretty soon I’m expecting the MacOS review to drop as well.

    The quality of the reviews is still quite good, and I try to read them when I have the time. But sadly they do not grab me the way the Siracusa reviews did.

    It’s usually around this time of year I would start expecting them, waiting for the featured article to show up on Ars' homepage. Once they came out, I would read them from cover to cover. I wouldn’t rush it either, taking my time with them over a period of about a week, reading them slowly and methodically as one would sip a nice glass of wine.

    Thinking back on them now, what grabbed me about these pieces was the level of detail. It was clear from the writing that a lot of effort was put into them: every pixel of the new OS was looked at methodically, with a fine eye for detail. This level of study of the OS release’s design, trying to find the underlying theme running through the decisions made, was something not found in any of the other OS reviews on Ars or any other tech site. I’m sure it was in no small part the reason why I eventually moved to Apple for my computing needs.

    Eventually, the Siracusa reviews stopped. But by then I was well down the rabbit hole of Apple tech-nerd podcasts like The Talk Show and ATP. Now a regular listener to these shows, I still enjoy getting my fix of critical reviews of software technology, albeit in audio form. I eventually discovered the Hypercritical podcast, well after it finished, and I still occasionally listen to old episodes when there’s nothing new.

    Incidentally, there is one episode that I haven’t listened to yet: episode 37, recorded on the 8th of October 2011, just after the death of Steve Jobs. Here, on the 10 year anniversary of his death, it might be a good time to have a listen.

    On Choosing the Hard Way Forward

    This is about work, so the usual disclaimers about opinions being my own, etc. apply here.

    I have an interesting problem in front of me at the moment: I need to come up with a way to be notified when a user connects to, or disconnects from, a PostgreSQL database. This is not something that’s supported by PostgreSQL out of the box1, so my options are limited to building something that sits outside the database. I can think of two ways to do this: have something that sits in front of the database and acts as a proxy, or have something that sits behind the database and generates the notifications by parsing the server log.

    A database proxy is probably the better option in the long run. Not only will it allow us to know exactly when a user connects or disconnects — since they will be connecting to the proxy itself — it could potentially allow us to do a few other things that have been discussed, such as IP address whitelisting. It might be a fair bit of work to do, and would require us to know the PostgreSQL wire protocol, but given how widespread PostgreSQL is, I’m suspecting that this could be done once and not need many changes going forward.

    Despite these advantages, I find myself considering the log parsing approach as the recommended solution. It’s probably a more fragile solution — unlike the wire protocol, there’s nothing stopping the PostgreSQL devs from changing a log message whenever they like — and it would not allow us to do all the other stuff that we’d like it to do. But it will be faster to build, and would involve less “hard programming” than the alternative. It could be knocked out quite quickly with a couple of regular expressions.
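
    To give a sense of what I mean by a couple of regular expressions, here’s a rough sketch in Go. It watches for the messages PostgreSQL emits when log_connections and log_disconnections are enabled; the exact line format depends on log_line_prefix and the server version, so treat the patterns as illustrative only:

        package main

        import (
            "bufio"
            "fmt"
            "os"
            "regexp"
        )

        // Patterns for the connection and disconnection messages PostgreSQL
        // logs when log_connections and log_disconnections are on.
        var (
            connRe    = regexp.MustCompile(`connection authorized: user=(\S+) database=(\S+)`)
            disconnRe = regexp.MustCompile(`disconnection: session time: \S+ user=(\S+) database=(\S+)`)
        )

        func main() {
            // e.g. tail -F postgresql.log | ./notifier
            scanner := bufio.NewScanner(os.Stdin)
            for scanner.Scan() {
                line := scanner.Text()
                if m := connRe.FindStringSubmatch(line); m != nil {
                    fmt.Printf("connect: user=%s db=%s\n", m[1], m[2])
                } else if m := disconnRe.FindStringSubmatch(line); m != nil {
                    fmt.Printf("disconnect: user=%s db=%s\n", m[1], m[2])
                }
            }
            if err := scanner.Err(); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }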

    Weighing the two options, I find myself wondering why I prefer the latter. Why go for the quick and easy solution when the alternative, despite requiring more work, would give us the greatest level of flexibility? It’s not like we couldn’t do it: I’m pretty confident anyone on the team would be able to put this proxy service together. It’s not even like my employer is requiring one particular solution over another (they haven’t yet received any of the suggestions I’m planning to propose, so they haven’t given a preference one way or the other). So what’s giving me pause about recommending it?

    No decision is made completely in a vacuum, and this is especially true in the mind of the decider. There are forces that sit outside the immediate problem that weigh on the decision itself: personal experience mixed with the prevailing zeitgeist of the industry, expressed as opinions on “best practice”. Getting something that works out quickly vs. taking the time to build something more correct; a sense that taking on something this large would also result in a fair amount of support and maintenance (at the very least, we would need to be aware of changes in the wire protocol); and just a sense that going for the proxy option would mean we’re building something that is “not part of our core business”.

    Ah, yes, the old “core business” argument. I get the sense that a lot of people treat this one as a binary decision: either it’s something that we as a business do, or it’s not. But I wonder if it’s more of a continuum. After all, if we need to block users based on their IP address, is it not in our interest to have something that does this? At what point does the lost opportunity of not having this outweigh the cost of taking on the development work now to build it? If we build the easy thing now, and later on we find ourselves needing the thing we don’t have, would we regret it?

    This is a bit of a rambling post, but I guess I’m a little conflicted about the prevailing approach within the tech industry of building less. It’s even said that part of the job is not only knowing what to build, but knowing when NOT to build. I imagine no-one wants to go back to the bad old days where everyone and their dog was building their own proprietary database from scratch. I definitely don’t want that either, but I sometimes wonder whether we’ve overcorrected in the other direction somewhat.


    1. I didn’t look at whether this is doable with hooks or extensions. ↩︎

    Some Notes About the Covid-19 Situation

    Now that the vaccines are here and are (slowly) being rolled out, and Covid zero is no longer achievable in any realistic sense, the pandemic seems to be taking on a bit of a different vibe at the moment. I am no longer religiously watching the daily press conferences as I did in the past. They’re still occurring as far as I know, and I do appreciate that the authorities are still showing up every day to brief the public.

    But I’m starting to get the sense that people are generally losing interest in it now. Well, maybe “losing interest” is the wrong way to say it. It’s not like it’s something that can be ignored: even if you’re not affected yourself, you’re still bound by the current restrictions in some way. Maybe it’s more like it’s starting to slip into the background somewhat.

    Slowly the we’re-all-in-this-together collectivism is morphing into one of personal responsibility. Except for the need to keep the medical systems from being overwhelmed, it’s now up to the individual to take care of themselves, whether it’s by masking up and social distancing, or by getting vaccinated. Everyone that I love has done just this: they’ve got their shots when they could and are generally being very careful. But there are people out there that are not. Even some of my friends’ parents are hesitant to get the vaccine, either waiting for Pfizer (they’ll be waiting a while) or just being suspicious of the vaccine altogether.

    I also think that the success of the lockdowns before the delta variant has lulled people into a sense of security that no longer holds. The latest lockdown has largely failed, and now it’s immunity from the vaccines that will have to protect us. I hope the people that are not taking this seriously realise that the protection that comes from collective action will no longer be around when the virus comes for them.

    On Confluence

    I’m sorry. I know the saying about someone complaining about their tools. But this has been brewing for a little while and I need to get it off my chest.

    It’s becoming a huge pain using Atlassian Confluence’s WYSIWYG editor to create wiki pages. Trying to use Confluence to write out something non-trivial, with tables and diagrams, so that it is clear to everyone in the team (including yourself), is now so annoying that I find myself wishing for alternatives. It seems like the editor is actively resisting your efforts to get something down on paper.

    For one thing, there’s no way to type something that begins with [ or {. Doing so will switch modes for adding links or macros. This actively breaks my train of thought. The rude surprise that comes from this shunts me out of my current thought into one that tries to remember the proper way to back out of this unwanted mode change, which is not easy to do. There’s no easy way to get out of the new mode and simply leave the brace as you typed it. It seems that the only way to disable this is to turn off all auto-formatting. I never need to create new macros by typing them out, but I do use h3. to create new headings and * to bold text all the time. In order to actually type out an opening brace, I have to turn these niceties off.

    The second issue is that it’s soooo sloooow. For a large page, characters take around a second to appear on the screen after being typed. This does not help when you’re trying to get your thoughts down on the page as quickly as they come to you. You find yourself pausing and waiting for the words to catch up, which just slows your thinking down. And I won’t mention the number of errors that show up because of this (to be fair, I’m not the best typer in the world, but I find myself typing out fewer errors in an editor with faster feedback than the one that Confluence uses).

    I appreciate the thinking behind moving from a plain text editor to a WYSIWYG one: it does make it more approachable to users not comfortable working with a markup language (although I also believe this is something that could be learnt, and that these users would eventually get comfortable with it and appreciate the speed at which they could type things out). It’s just a shame that there’s no alternative for those who need an interface that is fast and will just get out of the way.

    On Apple's Media Release Gymnastics

    I started listening to the latest Talk Show, where John Gruber and MG Siegler discuss Apple’s media release of the class action settlement. Releasing it to the major media outlets in such a way as to spin the guideline clarification as a concession to developers, even though nothing has actually changed, is genius if true. I imagine that’s why Apple’s PR department gets the big bucks.

    But I wonder if Apple has considered the potential blowback of this approach. I might be naive here, but I can’t help wondering whether the media outlets that published this as a concession will eventually realise that Apple hasn’t actually conceded anything, and that they have been had. Would that affect the relationship between the two in any way? Say Apple wants to publish some good news and expects these outlets to maintain the favourable air of their release. Would they do it?

    Then again, it’s most likely that nothing will really change. There’s little trust lost between the two anyway, and if this gymnastics actually happened, Apple knows it. Also, it sounds like Apple’s media release has had the desired effect of reaching those in the US government applying anti-trust pressure on the company. They probably think it’s worth the credibility they have burned with these outlets, if any1.

    One thing that seems clear though: this is doing no favours in addressing the trust lost between Apple and their developers, no matter how much clarifying this release does.


    1. I realise that I’m probably overestimating how much the general zeitgeist knows or cares about the relationship between Apple and their developers, so even expecting that these outlets know they have been had is a huge assumption. ↩︎

    Post Of Little Consequence

    I wouldn’t call myself a regular poster on this blog. I don’t have a goal of writing a post a day or anything like that. But I do want to keep it up with some frequency, and post at least one item a week. I realised today that it’s been a week since my last post.

    However, due to the current lockdown, very little of note has happened over the last week. Apart from work, TV, reading, and doing a few personal projects on the side, there really isn’t much going on. So it was hard to come up with something that was interesting to write about.

    So I’m writing my post-about-how-difficult-it-is-to-post post, which is what you are reading now. In short: there’s nothing spectacular going on at the moment.

    That said, I did start three drafts that I thought about publishing. It felt a little strange posting them individually, so here they are as bullet points:

    • I’m completely recovered from the side effects of the vaccine now. I did have a sore arm for about a week, and actually saw the doctor about it, but he said that it took him about a week to get over it as well, so nothing to worry about there. Incidentally, this was a day after seeing the same doctor about renewing a prescription, making it the first time in my life I’ve been to the doctor two days in a row, which I guess is something. Now I just need to wait 11 more weeks for my next dose.
    • I’ve got a research task at work today, meaning no podcast listening while I work.
    • Related: I find it relatively easy to listen to music and podcasts while I’m writing code, but as soon as I need to write prose (documentation, copy, or a blog post), I have to turn the music off. I guess my mind needs complete focus when writing English sentences, probably because it’s an area that has been somewhat underdeveloped.

    Anyway, that’s what’s happening at the moment. Hope I didn’t waste too much of your time 🙂

    Six Weeks Off Twitter

    It’s been roughly six weeks since I stopped using Twitter on a daily basis. I took a break to stay away from some anxiety-inducing news, and I was initially going to return to daily use once that passed. But after hearing others on Micro.blog post about their experience closing their Twitter accounts, I decided to see how long I could go staying off Twitter myself.

    I wouldn’t say that I am a big Twitter user. I don’t have a Twitter audience (I think my follower count is in the single digits), I hardly ever tweet or reply, and although I have a few friends and family on there, I have other means of communicating with them that I tend to use more often. The only thing I would miss is the occasional interesting or amusing tweet from those that I follow, something that is not guaranteed in any particular reading session.

    In those six weeks, I’ve noticed my reading patterns have changed. I’m reading a lot more books and blog posts now. I found that having something to read during the time you’d usually spend browsing Twitter helps a great deal, so there’s always some long form written piece that I can turn to when I’ve caught up with everything else. And although I wouldn’t say my anxiety has gone, I do think that it’s lower than it was. It’s calming to know that there are no shocking or depressing items that can jump out at me during a particular reading session. I think that mechanic has a lot to do with the addictiveness of Twitter and its ilk.

    I’m not quite at the point where I will completely close my Twitter account, and there are some users that I may move over to Feedbin (I haven’t done that yet, so I’m not sure how interested in them I really am). But, all in all, I think this break from daily use of Twitter has been good for me, and I find myself having no real urge to go back.

    Seeking Out Bad News

    Sometimes I wonder if I’m just going out of my way to seek bad news. Maybe it’s because I think that if I don’t, then a problem will go unaddressed as no-one else is aware of it.

    There’s probably some evolutionary trait to this. Being the one that hears a predator, and reacts to it before anyone else, is an advantage. But in this day and age, many of the problems that I have anxiety about are pretty much known by everyone, and addressing them in any meaningful way is beyond my direct control.

    So in the interest of my own mental health, I should cut down on seeking out these stories, do what I can to help with the problems, and just hope that someone who does have the ability to do something substantial knows about them and can address them in some way.

    While total ignorance is probably not ideal, being up to speed with the woes of the world is probably not healthy either.

    A Year On Micro.blog

    It’s been a year since I signed up to Micro.blog and wrote my first post, and the only regret I have is that I didn’t do it sooner.

    The reason for joining was to write more; to focus less on the blogging engine and more on the blog itself. At the time, I had only posted seven times over a period of nine months. In the last year, that post count has risen to 222. I guess I can call that objective achieved.

    But the biggest reason for staying, and one I wish I’d known sooner, is the fantastic community here. Having such a great bunch of people online is quite rare these days, and this has quickly become my favourite place on the internet. We truly have something special here.

    Thank you all for being such awesome people.

    Working On The Weekend

    I saw a tweet last night saying that the best thing a young person can do to help their career is to work on the weekend. The implication is that being the one that “puts in the extra hours” can make you seem, in the eyes of your employer, like the hardest worker there, committed to the project and the job. This could lead to bonuses, promotions, perks, a reputation, you name it.

    I’m always sceptical when I see advice like that. Coming in on the weekend on a voluntary basis might be good for your career, but is it actually good for you? Are you doing yourself any favours spending two additional days a week creating value for someone else?

    What about the things that will create value for you? That can help you be a more rounded person? Things like learning a new skill, starting a new side project, socialising, taking up a hobby. When will you have time for that? Not to mention just fricken resting, which is really not as valued as it should be.

    My feeling is that you already work for someone else 5 out of 7 days a week. By all means work on the weekend if you want to, but make sure you’re doing it for yourself.

    Stumbling Into the Concept of Narrating Your Work

    About a week ago, we had a retro. One of the things that was brought up was the sense that management felt that the team was not delivering as much as we could be. There are a number of reasons for this. For one, the higher ups work in Perth, a good 3,452 KM away from Melbourne. Another was that a lot of the work the team deals with is experimental in nature: more R&D vs. product development (a large portion of it involves dealing with a lot of barely supported, and completely undocumented APIs for MacOS).

    Nevertheless, it was a bit of a downer that management felt this way. A solution proposed by a team member was to maintain a work log. Doing so would give the product owner, who works in Perth, the ammunition needed to push back on the notion that the team was just sitting on their hands. Prior to the pandemic, I started keeping a bullet journal, which helped me keep on top of things that I needed to do. But this would be different: it would essentially be keeping a log of what I did, and how I did it.

    Last week I started to do this. I took a silly little web-app that I built for maintaining a “now” page (which I never used), and started to use it a bit more like a journal. I added it as a web panel in Vivaldi so that I could open it whenever I was in the browser. Every so often while working on a task, I would quickly jot down a paragraph of what I just did. If I got stuck, I would write down what the problem was and my thoughts on how to get out of it. I originally thought about setting up a reminder to do this every 30 minutes or so, but I found that simply journalling the thoughts as they come works quite well. At the end of the day I usually had at least 3 bullet points on what I was working on (the record was 12), and at the beginning of the next day, I cut-and-pasted these bullet points into the Jira work log.

    It’s too early to say whether this will help dispel the notion that the team is not delivering; we may get an update from the product owner when the next retro comes around. But I’ve found the process quite useful for myself. I’d forgotten how beneficial it is to write my thoughts down as they come, and I’ve never used a process like this before: everything up to now has been simply todo items or scrawls of the next task to look at.

    I learnt this morning that this is not a new concept. Dave Winer wrote a blog post about this in 2009, called Narrate Your Work, which talks about how he adopted this practice at UserLand. I guess the same thing could be helpful here. After all, you could probably say the place I work at is distributed in nature, given that there’s a whole continent between the two offices.

    It’s only been a week, so the habit has not fully settled in, but I hope to continue doing this.

    On the Souring Relationship Between Apple and its Developers

    Listening to episode #430 of ATP yesterday, it was kind of shocking to hear the loss of goodwill the hosts expressed towards Apple and their developer relations. I can’t say that I blame them though. Although John’s point about lawyers making the case for Apple is a good one, I get the same feeling that Marco does about Apple’s opinion of developers, which is not a positive one.

    It feels a lot like Apple believes that developers building on their platforms owe everything to them, and that without Apple none of these businesses would exist at all. It does feel a lot like they think they’re entitled to a cut of everything happening on their platforms. It does feel a lot like they think a developer releasing their app for free on the App Store is an ungrateful free-loader, taking advantage of all their hard work building the platforms and developer toolkits. This is not just from what’s coming up during the Epic-Apple lawsuit discovery. Remember what happened to Basecamp last year, when they tried to release a new version of their Hey iOS app.

    None of this is accurate in the remotest sense. Although it’s true that some of these businesses may not be around if iOS had never been invented, that’s not to say these developers wouldn’t be doing something else. Also, these developers DO pay for the privilege of building on Apple’s platforms. Let’s not forget the $99.00 USD ($149.00 AUD) that these developers pay yearly, not to mention all the hardware they buy to run XCode and these other tools. And it’s not like these tools would cease to exist if these devs were free to use another IAP provider. Apple, I assume, would like something like XCode to exist so that they can build their own apps.

    I hope people at Apple are listening to this. Anti-trust regulation aside, they are doing themselves a massive disservice by treating their developers like this. These people are their biggest evangelists. I’m not sure it will come to the point where they abandon the iOS platform, at least not at this stage. But I could foresee these developers being hesitant to adopt any new app platform that Apple releases, say a future AR platform featuring hardware devices.

    Podcast Roll 2021

    Yesterday, @Munish had the courage to share his podcast subscriptions1. Sensing an opportunity to talk about what I’m currently listening to, even though it may reveal more about myself than I’m usually comfortable with, I’m taking up his dare and sharing mine.

    So, here is my podcast roll as of early May 2021:

    Podcast subscriptions 2021

    The shows above can roughly be divided up into the following categories:

    Technology: This is a topic that I’m very interested in, so there are a fair few of these. A lot of them are Apple-centric, but this is more by accident than by design. The second podcast that I started regularly listening to was The Talk Show, since I was a casual reader of Daring Fireball at the time (and I still am). That opened me up to ATP, which led to a bunch of Relay.fm shows.

    Business: These could probably be lumped in with technology, but are focused more on the business side of things rather than product development. Ben Thompson shows up a lot here, with Exponent, Dithering, and the Stratechery daily update being my regular go-tos. The release of that last one helped set some new routines while I was working from home last year. There’s a decent collection of shows from indie developers here as well.

    Science, History, and Philosophy: These are where the real heavy podcast listening comes in, the shows that are 2 to 4 hours long and go deep into a particular topic or event. I have to be in the right kind of mood for these ones. Key drivers here are Making Sense, Mindscape and Hardcore History.

    Politics and Society: I am somewhat interested in US politics, which could explain the shows that appear in this category. Deep State Radio is one that I still listen to occasionally. Also of note is the NPR Planet Money podcast, which was the first podcast that I’ve ever subscribed to. A recent addition is the ABC Coronacast which provides a decent briefing of the coronavirus pandemic in Australia.

    Popular Culture: This is probably where all the Incomparable shows come in, for when I’m in the mood for something lighthearted and funny. My usual go-tos there are the Incomparable Game Show, Robot or Not and Pants in the Boot. One or two Relay.fm shows fall in here as well, including Reconcilable Differences, which is a favourite of mine.

    Micro.blog: The final category is more-or-less podcasts that I’ve subscribed to while spending time on Micro.blog. This includes shows like the Micro Monday podcast, but also shows from those on Micro.blog like Core Intuition and Hemispheric Views.

    So that’s it. There are a fair few subscriptions listed above, not all of which I regularly listen to. I guess I should probably unsubscribe from those that I haven’t listened to for a while. I probably keep them around for the same reason I keep RSS feeds around: just in case something worth listening to pops up in there.


    1. Or “follows”, which is I guess the new term for it. ↩︎
