Long Form Posts

    The Future of Computing

    I got into computers when I was quite young, and to satisfy my interest, I read a lot of books about computing during my primary school years. I remember one such book that included a discussion about how computing could evolve in the future.

    The book approached the topic through a narrative set in a “future” that would roughly correspond to today’s present. In that story, the protagonist was late for school because of a fault with the “home computer” regarding the setting of the thermostat or something similar. Upon arriving home from school, he interacted with the computer by speaking to it as if he were talking to another person, expressing his anger about that morning’s events in full, natural-language sentences. The computer responded in kind.

    This book was published at a time when most personal computing involved typing in BASIC programs, so you can imagine that a bit of creative license was taken in the discussion. But I remember reading this and being quite ambivalent about this prospective future. I could not imagine central computers being installed in houses and controlling all aspects of their environment. Furthermore, I balked at the idea of people choosing to interact with these computers using natural language. I’m not much of a people person, so the idea of speaking to a computer as if it were another person, and having to deal with the computer speaking back, was not attractive to me.

    Such is the feeling I have now with the idea of anyone wanting to put on AR and VR headsets. This seems to be the current focus of tech companies like Apple and Google as they try to find the successor to the smartphone. And although nothing from these companies has been announced yet, and these technologies have yet to escape the niche of gaming, I still cannot see a future in which people walk around in public wearing these headsets. Maybe with AR, if it can be done in a device that looks like a regular pair of glasses, but VR? No way.

    But as soon as I reflected on those feelings, that book I read all those years ago came back to me. As you can probably guess, the future predicted in that story has more or less become reality, with the rise of the cloud, home automation, and smart speakers like the Amazon Echo. And more than that, people are using these systems and liking them, or at least putting up with them.

    So the same thing might well happen with AR and VR headsets. I should probably stay out of the future-predicting business.

    PGBC Scoring Rules

    I get a bit of a thrill when there’s a need to design a mini-language. I have one facing me now for a little project I’m responsible for: maintaining a scoring site for a bocce comp I’m involved in with friends.

    How scoring works now is that the winner of a particular bocce match gets one point for the season, and the winner for the season is the person with the most points. However, we recently discussed the idea of adding “final matches,” which will give the match winner 7 points, the runner-up 2 points, and the person who came in third 1 point. At the same time, I want to add the notion of “friendly matches” which won’t count towards the season score.

    The simplest solution might have been to encode these rules directly in the app, with a flag indicating whether a match was normal, final or friendly. But this felt suboptimal, as there is another variant of the game we play which does not have the notion of finals, and if it ever did, it may eventually have different rules for them. So I opted for a design in which a “match type” is added as a new database entity, with the scoring rules encoded in a PostgreSQL JSON column. Using this as a mechanism for encoding free(ish) structured data when there’s no need to query it has worked for me in the past. There was no need to add the notion of season points, as it was already present as an easy way to keep track of wins for a season.

    For the scoring rules JSON structure, I’m considering the use of an array of conditions. When a player meets the conditions of a particular array element, they will be awarded the points associated with that condition. Each player will only be permitted to match one condition, and if they don’t match any, they won’t get any points. Each condition can be made up of the following attributes:

    • rank: (int) the player’s position in the match just played, in accordance with the scoring, with 1 being the player with the highest score, 2 being the player with the second highest score, and so on.
    • winner: (bool) whether the player is considered the winner of the match. The person with the highest score usually is, but this is treated as an independent field and so it should be possible to define rules accordingly.
    • draw: (bool) whether the player shares their rank with another player. When a draw occurs, both winning players will have a rank of 1, with the player with the next highest score having a rank of 2.

    Using this structure, a possible scoring rules definition for a normal match may look like the following:

    { "season_score": [
      { "condition": { "winner": true }, "points": 1 }
    ]}
    

    whereas a rules definition for the final match may look like the following:

    { "season_score": [
      { "condition": { "rank": 1 }, "points": 7 },
      { "condition": { "rank": 2 }, "points": 2 },
      { "condition": { "rank": 3 }, "points": 1 }
    ]}
    

    Finally, for friendlies, the rules can simply look like the following:

    { "season_score": [] }
    

    I think this provides a great deal of flexibility and extensibility without making the rules definition too complicated.
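
    To make the matching behaviour concrete, here’s a rough sketch in Go of how these rules could be evaluated. The type and field names are mine for illustration, not anything that exists in the actual site:

    // A sketch of evaluating the "season_score" rules against a player's result.
    // Pointer fields mean "absent = don't care" rather than "must be false/zero".
    package scoring

    import "encoding/json"

    type Condition struct {
      Rank   *int  `json:"rank,omitempty"`
      Winner *bool `json:"winner,omitempty"`
      Draw   *bool `json:"draw,omitempty"`
    }

    type Rule struct {
      Condition Condition `json:"condition"`
      Points    int       `json:"points"`
    }

    type Rules struct {
      SeasonScore []Rule `json:"season_score"`
    }

    // PlayerResult is a single player's outcome for a match.
    type PlayerResult struct {
      Rank   int
      Winner bool
      Draw   bool
    }

    // matches reports whether a player's result satisfies a condition.
    func (c Condition) matches(r PlayerResult) bool {
      if c.Rank != nil && *c.Rank != r.Rank {
        return false
      }
      if c.Winner != nil && *c.Winner != r.Winner {
        return false
      }
      if c.Draw != nil && *c.Draw != r.Draw {
        return false
      }
      return true
    }

    // SeasonPoints awards the points of the first matching condition,
    // or zero if no condition matches.
    func (rs Rules) SeasonPoints(r PlayerResult) int {
      for _, rule := range rs.SeasonScore {
        if rule.Condition.matches(r) {
          return rule.Points
        }
      }
      return 0
    }

    // ParseRules decodes the JSON stored against the match type.
    func ParseRules(data []byte) (Rules, error) {
      var rs Rules
      err := json.Unmarshal(data, &rs)
      return rs, err
    }

    Whether it ends up looking exactly like this is another matter, but the first-match-wins behaviour is the part I want to keep.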

    On the Moxie Marlinspike Post About web3

    Today, I took a look at the Moxie Marlinspike post about web3¹. I found this post interesting for a variety of reasons, not least because, unlike many other posts on the subject, it was level-headed and came from a position of wanting to learn more rather than to persuade (or hustle). Well worth the read, especially for those who are turned off by the whole web3 crap like I am.

    Anyway, there were a few things from the post that I found amusing. The first, and by far the most shocking, was that the “object” of an NFT is not derived from the actual item in question, like the artwork image, or the music audio, etc. It’s essentially just a URL. And not even a URL with an associated hash. Just a plain old URL, as in “example.com”, which points to a resource on the internet that can be changed or removed at any time. Not really conducive to the idea of digital ownership if the thing that you “own” is just something that points to something else that you don’t actually control.

    Also amusing was the revelation that for a majority of these so-called “distributed apps”, the “distribution” part is a bit of a misnomer. They might be using a blockchain to handle state, but many of the apps themselves are doing so by calling regular API services. They don’t build their own blockchain or even run a node on an existing blockchain, which is what I assumed they were doing. I can achieve the same thing without a blockchain if I make the database I use for my apps public and publish the API keys (yes, I’m being facetious).

    The final thing I found amusing was that many of these platforms are actually building features into the platform that are not even using the blockchain at all. Moxie made the excellent point that the speed at which a protocol evolves, especially one that is distributed by design, is usually very slow. Likely too slow if you’re trying to add features to a platform in an attempt to make it attractive to users. So services like OpenSea are sometimes bypassing the blockchain altogether, and just adding proprietary features which are backed by regular data stores like Firebase. Seems to me this is undermining the very idea of web3 itself.

    So given these three revelations, what can we conclude from all the rhetoric of web3 that’s currently out there? That, I’ll leave up to you. I have my own opinions, which I hope come through in the tone of this post.

    I’ll close by saying that I think the most insightful thing I learnt from the post had nothing to do with web3 at all. It was the point that the reason Web 2 came about was that people didn’t want to run their own servers, and never will. This is actually quite obvious now that I think about it.


    1. Ben Thompson wrote a terrific post about it as well. ↩︎

    Burnt Out on Design

    I’ve been doing a heap of design work in my job at the moment: writing documents, drawing up architecture diagrams, etc. I thought I would like this sort of work, but I realise now that I can only tolerate it in small doses. Doing it for as long as I have been is burning me out slightly. I’d just like to go back to coding.

    I’m wondering why this is. The biggest feeling I have is that I’m not delivering value. I understand the need to get some sort of design up so that tasks can be written up and allocated. I think a big problem is the feeling that everything needs to be in the design upfront, waterfall style, whereas the method I’d prefer is to have a basic design upfront — something that we can start work on — which can be iterated on and augmented over time.

    I guess my preference for having something built vs. something perfect on paper differs from those I work with. Given that my current employer specialises more in hardware design, I can understand that line of thinking.

    I’m also guessing that software architecture is not for me.

    Still Off Twitter

    A little while ago, I stopped using Twitter on a daily basis as the continuous barrage of news was getting me down. Six weeks after doing so, I wrote a post about it. Those six weeks have now become six months, and I can say I’m still off Twitter and have no immediate intention of going back.

    My anxiety levels have dropped since getting off¹, and although they’ve not completely gone, the baseline has remained low, with occasional spikes that soon subside. But the best thing is that the time I would have spent reading Twitter I now spend reading stuff that would have taken longer than 30 seconds to write. Things like books, blog posts and long-form articles (and Micro.blog posts, I always have time for those). It feels like the balance of my information diet has centred somewhat. I still occasionally read the news (although I stay away from the commercial news sources) but I try not to spend too much time on it. Most things I don’t need to be informed about in real time: if I learn about it the following day, it’s no big deal.

    I’m also seeing more and more people making the same choice I’ve made. The continuous stream of news on Twitter is just becoming too much for them, and they want off. I think Timo Koola’s post sums it up pretty well:

    I wonder how much studies there are about harmfulness of following the news too closely? I don’t think our minds were made for constant bombardment of distressing things we can’t do anything about.

    It’s not healthy being constantly reminded of events going on, most of them undesirable, that you can’t change. Better for myself that I spend my attention on things that interest me and help me grow.


    1. It’s amusing that the language I found myself using for this post sounds like I’m recovering from some form of substance abuse. I’m guessing the addictive nature of Twitter and its ilk is not too different. ↩︎

    100 Day Writing Streak

    I promise I won’t post about every single milestone that comes along, but I’m quite happy that I’ve reached 100 consecutive days of at least one blog post or journal entry.

    100-day Day One streak

    On Treating Users As If They're Just There To Buy Stuff

    Ars Technica has published its third post in as many days about the annoying user experience of Microsoft Edge. Today’s was about a notice that appears when the user tries to use Edge to download Chrome. These notices are displayed by the browser itself whenever the user opens up the Chrome download page.

    Now, setting aside the fact that these notices shouldn’t be shown to the user at all, what really got my goat was the copy that appears in one of them:

    ‘I hate saving money,’ said no one ever. Microsoft Edge is the best browser for online shopping.

    What is with this copy? Do they assume that buying stuff is all users do with their computers? That their only motivation for using a browser at all is to participate in rampant consumerism?

    I’m not a Microsoft Edge user, so it’s probably not worth my time to comment on this. But what bothers me is that I’m seeing a trend suggesting that large software companies think their users are only using their devices to consume stuff. This might be true for the majority — I really don’t know — but the problem is that this line of thinking starts to bleed into their product decisions, and reveals what lengths they will go to to extract more money from these users. I’m going on about Edge here, but Apple does the same thing in their OSes: showing notifications for TV+ or Apple Music or whatever service they’re trying to flog to their customers this month. At least with web companies like Google, Twitter and Meta (née Facebook 😒), we get to use the service for free.

    I know software is expensive to build and maintain, etc., etc. But this mode of thinking is so sleazy it’s becoming insulting. It just makes the experience of using the product worse all around, like going to a “free” event when you know you’ll be pushed to buy something. Is this how these software companies want their users to feel?

    Weekend In Mansfield

    Over the weekend, I had the opportunity to spend some time with my parents who were staying in Mansfield, in regional Victoria. We were staying in a small cottage located on a hill, which meant some pretty stunning views, especially in the evening light.

    The cottage in the late evening light
    View from the balcony

    We didn’t do a heap during our trip, although we did manage to do The Paps trail on Saturday, which involved a 700 metre climb.

    Annotated image of The Paps

    (Apologies for the photo, I had another one that was zoomed in a bit more but the photo turned out quite muddy. Might need to consider another phone or camera.)

    It was a bit of a challenge — the trail was quite steep at times — and there were a few instances when we considered turning back. But we did eventually reach the summit, and got some spectacular views of Lake Eildon, which was quite full thanks to all the rainfall we’ve had over the last few months.

    On the path
    Approaching the summit
    View of the lake
    Another view of the lake
    View up the highway towards Bonnie Doon
    The summit

    This was followed by a pub lunch at the Bonnie Doon Hotel. The place was chockers, probably with people eager to get out of the city at the end of lockdown (likewise for the cottage we stayed at, which has been booked solid for the next couple of months). But the food (and beer) was good, and it was perfect weather to be dining outside, with the sun shining and the temperature in the low 20s Celsius.

    All in all it was good to get out of the city, and out of my weekend routine, for a spell.

    Cookie Disclosure Popups Should be Handled by the Browser

    I really dislike the cookie disclosure popups that appear on websites. Ideally I shouldn’t be seeing them at all — I know that the EU requires it, but I’m not a citizen of the EU so the regulation should not apply to me. But I’m pragmatic enough to know that not every web developer can or will selectively show this disclosure popup based on the geographic region of the visitor.

    That’s why I’m wondering if these disclosure popups would be better handled by the browser.

    The way I see this working is that when a website tries to set a cookie, either through a response header or within JavaScript, and the user is located in a jurisdiction that requires them to be aware of this, the browser would be responsible for telling them. It could show a permission request popup, much like the ones you see already when a site wants to use your microphone or get your location. The user can then choose to “accept”, in which case the cookie would be saved; or they can choose to “deny”, in which case the cookie would be silently dropped or an error returned.

    This has some major advantages over the system we have now:

    • It would save the website dev from building the disclosure popup themselves. I’ve seen some real creative ways in which websites show this disclosure, but honestly it would just be simpler not to do it. It would also cover those web developers that forget (or “forget”) to disclose the presence of cookies when they need to.
    • The website does not need to know where the user is browsing from. Privacy issues aside, it’s just a hassle to look up the jurisdiction of the originator based on their IP address. Which is probably why no-one does it, and why even non-EU citizens see these disclosure popups. This is not a problem for the browser, which I’d imagine would have the necessary OS privileges to get the user’s current location. This would be especially true for browsers bundled with the OS, like Safari and Edge.
    • When the user chooses an option, their choice can be remembered. The irony of this whole thing is that I rarely see websites use cookies to save my preferences for allowing cookies. These sites seem to just show the popup again the next time I visit. Of course, if a user chooses to deny the use of cookies, it wouldn’t be possible for the site to use cookies to record this fact. If the browser is managing this preference, it can be saved alongside all the other site permissions like microphone access, thereby sitting outside what the site can make use of.
    • Most important of all to me: those outside the jurisdiction don’t even need to see the disclosure popup. Websites that I visit could simply save cookies as they have been for 25 years now. This can be an option in the browser, so that users who prefer to see the disclosure prompt can still do so. This option could also come in handy for those EU citizens who prefer to just allow (or deny) cookies across the board, so they don’t have to see the disclosure popup either (I don’t know if this is possible under the regulation).

    Of course the actual details of this would need to be ironed out, like how a website would know whether the user has denied cookie storage. That’s something for a standards committee to work out. But it seems to me that this feature is a no-brainer.

    My Impressions of GitHub Codespaces

    The GitHub Universe 2021 conference started a few days ago, and one of the features touted in the day one keynote was GitHub Codespaces. This is a development environment that is accessible from within your web browser. It’s based on VSCode, a popular and well-designed IDE that is already written in JavaScript¹, and it also provides access to a Linux shell running in the cloud, allowing you to do various things like build and test your code.

    After playing around with Codespaces during the beta, and seeing the feature develop into something that is available for everyone, I thought I’d say a few words on how I use it and what I think of it.

    First, I should probably say that I’m not a heavy Codespaces user. The keynote seemed to be touting the features of Codespaces that make it useful as a dev environment for collaborative coding. This is something I have no personal use for, since it’s mainly just me working on the repos I use with Codespaces, so I haven’t really explored these features myself.

    The main use I have for Codespaces is making changes to repos on a machine that is not my own. There are times when I need to add a new feature or fix a bug in a tool I use, and for various reasons I cannot (or choose not to) set up a dev environment on the machine I’m working on to make the change. A case like this would have me look at one of the alternatives, or even make use of the GitHub web editor. The web editor works, but doesn’t really offer much when it comes to building and testing your changes (I’ll talk more about the alternatives later).

    I should say that this doesn’t happen very often. Most of the time I’m on my own machine, and I have no need for a web-based dev environment as I can just use the environment I have. But when that’s not possible, being able to spin up a Codespace and make the change, complete with a Linux environment which you have sudo access to, is quite useful.

    Codespaces is also pretty nice in terms of a coding environment. This is no real surprise since it’s based on VSCode, but compared to the alternatives, the little things like keystroke performance and doing things quickly in the editor make a huge difference. I’ve tried a bunch of alternatives in the past like Cloud9 and CodeAnywhere, and Codespaces is by far the most polished.

    Another advantage Codespaces has over the alternatives is that it seems suited to throwaway development environments. It might be possible to keep an environment around for an extended period of time, but I tend to spin up temporary workspaces when I need them. The alternatives tend to prefer a more long-lived environment, which involves a lot of setup for something that is kept around. This feels like splitting your dev environments in two, and I always feel the need to select one as the “definitive workspace” for that particular project going forward. I don’t feel that with Codespaces: I can quickly spin up a new environment, which has most of what I need already installed, and make the change I need with the full knowledge that once I no longer need it, it will be torn down (pro tip: always push your changes to origin once you’re done making your changes in Codespaces). It helps that spinning up a new environment is quite fast.

    So, that’s my impression of GitHub Codespaces. I’m not sure who has access to it: you may need to be on a paid plan, for example. But if it’s enabled for your account, and you find yourself needing a temporary, cloud-based dev environment to do your work in, I’d suggest giving it a try.


    1. It’s actually TypeScript ↩︎

    Alto Catalogue Update

    I’ve really tied myself up in knots here. I’m spending some time working on Alto Catalogue, trying to streamline the process of uploading individual tracks into a new album. This is a workflow that is absolutely not user-friendly at the moment, and the only way I’ve gotten tracks into the catalogue is to run a hacked-together tool that uploads the tracks from the command line. The reason I’m addressing this now is that it’s slightly embarrassing to have this open-source project without a nice way of doing something that, by all accounts, is quite fundamental (a good hint that you’re facing this is when it comes time to write the end-user documentation: if you can’t explain how to do something in a way that doesn’t include the words “hack”, “complicated”, or “unsupported”, then something is missing).

    So I’m trying to close this feature gap, but it’s proving to be more complicated than I expected. The main issue relates to ID3 tags and how media is arranged in the repository. Previous versions of the catalogue actually did have a way of uploading track media to the repository, which is essentially an S3 bucket. The way this worked is that the catalogue would issue the browser a pre-signed PUT URL, and the browser could upload the track media directly to S3. But in order to get a pre-signed URL, you need to know the object key, which is a bit like a file path. The old upload flow had the user enter the object key manually in the upload form.
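
    For reference, generating the pre-signed PUT URL itself is the easy part. A rough sketch using aws-sdk-go-v2 (which may not be what the catalogue actually uses; the bucket and expiry here are placeholders) looks something like this; the catch is that the object key has to be known up front:

    package repo

    import (
      "context"
      "time"

      "github.com/aws/aws-sdk-go-v2/aws"
      "github.com/aws/aws-sdk-go-v2/config"
      "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // PresignTrackUpload returns a URL that the browser can PUT the track
    // media to. The caller must already know the object key, which is the
    // crux of the problem described in this post.
    func PresignTrackUpload(ctx context.Context, bucket, key string) (string, error) {
      cfg, err := config.LoadDefaultConfig(ctx)
      if err != nil {
        return "", err
      }
      presigner := s3.NewPresignClient(s3.NewFromConfig(cfg))
      req, err := presigner.PresignPutObject(ctx, &s3.PutObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
      }, func(opts *s3.PresignOptions) {
        opts.Expires = 15 * time.Minute
      })
      if err != nil {
        return "", err
      }
      return req.URL, nil
    }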

    That flow worked, but I had some real issues with it. The first is that I’d like the objects within the S3 bucket to be organised in a nice way, for example “artist/album/tracknum-title.mp3”. I’m hoping that this S3 bucket will be my definitive music collection, and I don’t want just some random IDs that are completely indecipherable when I browse the objects in the bucket. That way, if I were ever to shut the catalogue down or lose all the metadata, I’d still be able to navigate my collection via the object keys alone.

    The second was that this approach did not take the track metadata into account. Track metadata is managed in a PostgreSQL database and had to be entered manually; yes, this included the track duration. The only reason I used the hacked-together tool to upload tracks was that it was a tool I was already using to set ID3 tags on MP3 files, and it was trivial to add an HTTP client to do the upload from there. Obviously, asking users to run a separate tool to do their track uploads is not going to fly.

    So I’m hoping to improve this. The ideal flow would be that the user simply selects an MP3 from their file system. When they click upload, the following things will happen:

    • The ID3 tags of the MP3 will be read.
    • That metadata will be used to determine the location of the object in S3.
    • A pre-signed URL will be generated and sent to the browser to upload the file.
    • The file is uploaded to S3.
    • A new track record is created with the same metadata.

    The libraries I’m using to read the ID3 tags and track duration require the track media to be available as a file on the local file system (I assume this is for random access). Simply uploading the track media to the server’s file system would be the easiest approach, since it would allow me to read the metadata, upload the media to the repository on the backend, and set up the track metadata all in a single transaction. But I have some reservations about allowing large uploads to the server, and most of the existing infrastructure already makes use of pre-signed URLs. So the first run at this feature involved uploading the file to S3 and then downloading it on the server backend to read the metadata.

    But you see the problem here: in order to generate a pre-signed URL to upload the object to S3, I need to know the location of the media, which I want to derive from the track metadata. So if I don’t want uploads to go straight to the file system, I need the object to already be in S3 in order to work out the best location to put the object in S3.

    So I’m wondering what the best way to fix this would be. My current thinking is this series of events:

    • Create a pre-signed URL to a temporary location in the S3 bucket.
    • Allow the user to upload the media directly to that location in the S3 bucket.
    • On the server, download that media object to get the metadata and duration.
    • From that, derive the object’s location and move the object within S3, something I’m guessing should be relatively easy if both keys are in the same bucket (see the sketch after this list).
    • Create a new track record from the metadata.
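
    For what it’s worth, the “move within S3” step does look like it should be straightforward: as far as I can tell, it’s just a copy followed by a delete. A rough sketch, again assuming aws-sdk-go-v2 and with illustrative names:

    package repo

    import (
      "context"
      "fmt"

      "github.com/aws/aws-sdk-go-v2/aws"
      "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // MoveObject copies the uploaded media from its temporary key to the
    // final, metadata-derived key, then removes the temporary object.
    // Keys with special characters would need URL-encoding in CopySource.
    func MoveObject(ctx context.Context, client *s3.Client, bucket, tempKey, finalKey string) error {
      _, err := client.CopyObject(ctx, &s3.CopyObjectInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(bucket + "/" + tempKey),
        Key:        aws.String(finalKey),
      })
      if err != nil {
        return fmt.Errorf("copying %s to %s: %w", tempKey, finalKey, err)
      }
      _, err = client.DeleteObject(ctx, &s3.DeleteObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(tempKey),
      })
      return err
    }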

    The alternative is biting the bullet and allowing track uploads directly to the file system. That would simplify the crazy workflow above, but it means that I’ll need to configure the server for large uploads. This is not entirely without precedent though: there is a feature for uploading tracks in a zip file downloaded from a URL, which uses the local file system. So there’s not a whole lot stopping me from going down this path altogether.

    The third approach might be looking for a JavaScript library to read the ID3 tags. This is not great, as I’d need to get the location from the server anyway, since the metadata-derived object location is configured on a per-repository basis. It also means I’d be mixing up different ways of getting the metadata.

    In any case, not a great set of options here.

    Feeds In Broadtail

    My quest to watch YouTube without using YouTube got a little closer recently with the addition of feeds in Broadtail. This uses the YouTube RSS feed endpoint to list videos recently added to a channel or playlist.

    Feed listing, in all its ’90s web style glory.
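
    Under the hood there isn’t much to it. The feed endpoint is just youtube.com/feeds/videos.xml, and a minimal sketch of reading it in Go might look like the following (not Broadtail’s actual code, just the shape of it, with only a few fields pulled out):

    package feeds

    import (
      "encoding/xml"
      "net/http"
    )

    // Entry is a single video in the channel's Atom feed.
    type Entry struct {
      VideoID   string `xml:"videoId"`
      Title     string `xml:"title"`
      Published string `xml:"published"`
    }

    // Feed is the top-level document returned by YouTube.
    type Feed struct {
      Title   string  `xml:"title"`
      Entries []Entry `xml:"entry"`
    }

    // FetchChannelFeed lists the videos recently added to a channel.
    // Playlists work the same way, using the playlist_id parameter instead.
    func FetchChannelFeed(channelID string) (*Feed, error) {
      resp, err := http.Get("https://www.youtube.com/feeds/videos.xml?channel_id=" + channelID)
      if err != nil {
        return nil, err
      }
      defer resp.Body.Close()

      var feed Feed
      if err := xml.NewDecoder(resp.Body).Decode(&feed); err != nil {
        return nil, err
      }
      return &feed, nil
    }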

    There are a bunch of channels that I watch regularly, but I’m very hesitant to subscribe to them within YouTube itself (sorry YouTubers, but I choose not to smash that bell icon). I’m generally quite reluctant to give any signal to YouTube about my watching habits, feeding their machine learning models even more information about myself. But I do want to know when new videos are available, so that I can get them into Plex once they’re released. This is where feeds come in handy.

    Recent videos of a feed.

    Also improved is the display of video metadata when selecting a feed item or entering a video ID in the quick look bar. Previously this would immediately start a download of the video, but I prefer knowing more about the video first. These downloads aren’t free, and they usually take many hours to finish. Better to know more about a video before committing to it.

    Video details page.

    Incidentally, I think this mode of watching has a slight benefit. There are days when I spend the whole evening binging YouTube, not so much following the algorithm but looking at the various channels I’m interested in for videos that I haven’t seen yet. Waiting several hours for a video download feels a little more measured, and less likely to send me down the YouTube rabbit hole. I’m sure there will still be evenings when I do nothing else other than watch TV, but hopefully that’s more of a choice rather than an accident.

    I think this is enough on Broadtail for the time being. It’s more or less functional for what I want to do with it. Time to move onto something else.

    Some Screenshots Of Broadtail

    I spent some time this morning doing some styling work on Broadtail, the silly little YouTube video download manager I’m working on.

    Now, I think it’s fair to say that I’m not a designer. And these designs look a little dated, but, surprisingly, this is sort of the design I’m going for: centered pages, borders, etc. A bit of a retro, tasteless style that may be ugly, but still usable(-ish).

    It’s not quite finished — the colours need a bit of work — but it’s sort of the style I have in my head.

    Start of Yet Another Project Because I Can't Help Myself

    One of the reasons why I stopped work on Lorikeet was that I was inspired by those on Micro.blog to set up a Plex server for my YouTube watching needs. A few years ago, I actually bought an old Intel NUC for that reason, but I never got around to setting it up. I managed to do so last Wednesday, and so far it’s working pretty well.

    The next thing I’d like to do is set up RSS subscriptions for certain YouTube channels and automatically download the videos when they are published. I plan to use “youtube-dl” for the actual video downloading part, but I’m hoping to build something that would poll the RSS feeds and trigger the download when new videos are published. I’m hoping that this service would have a web-based frontend so I don’t have to log in via SSH to monitor progress, etc.
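
    The shape of the thing I have in mind is roughly the following. This is only a sketch: the feed polling is stubbed out, the “seen” tracking would really live in a database, and the paths and intervals are made up.

    package main

    import (
      "log"
      "os"
      "os/exec"
      "time"
    )

    // fetchNewVideoIDs would poll the channel RSS feeds and return the IDs
    // of newly published videos. Elided from this sketch.
    func fetchNewVideoIDs() []string {
      return nil
    }

    // downloadVideo shells out to youtube-dl for a single video.
    func downloadVideo(videoID, destDir string) error {
      cmd := exec.Command("youtube-dl",
        "-o", destDir+"/%(title)s.%(ext)s",
        "https://www.youtube.com/watch?v="+videoID)
      cmd.Stdout = os.Stdout
      cmd.Stderr = os.Stderr
      return cmd.Run()
    }

    func main() {
      seen := map[string]bool{}
      for {
        for _, id := range fetchNewVideoIDs() {
          if seen[id] {
            continue
          }
          seen[id] = true
          if err := downloadVideo(id, "/var/media/youtube"); err != nil {
            log.Printf("download of %s failed: %v", id, err)
          }
        }
        time.Sleep(30 * time.Minute)
      }
    }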

    The downloads would need to be automatic, as the requests made by youtube-dl seem to be throttled by YouTube and a longish video may take several hours to download. If this were a manual process, assuming that I would actually remember to start the download myself, the video wouldn’t be ready for my evening viewing. I’m hoping that my timezone will work to my advantage here. The evenings on the US East Coast are my mornings, so if a video download starts at the beginning of my day, hopefully it will be finished by the time my evening rolls around. I guess we’ll see.

    Anyway, that’s what my current coding project will be: something that manages RSS subscriptions for YouTube channels, and downloads new videos when they are published.

    This is probably one of those things that already exists out there. That may be true, but there are certain things that I’m hoping to add down the line. One such thing might be the notion of an “interest level” for channels, which would govern how long a video is kept around. For example, a channel that I’m very interested in would have every video downloaded and stored in Plex straight away. Channels I’m mildly interested in would have videos downloaded but kept in a holding place until I choose to watch them, in which case they would be moved to Plex. If that doesn’t happen within 7 days or so, the videos would be removed.

    I’d also like to add some video lifecycle management into the mix, just to avoid the disk being completely used up. I can see instances where I’d like to mark videos as “keep forever”, while all the others would churn away after 14 days or so. It might be worth checking out what Plex offers for this, just to avoid doubling up on effort.

    But that’s all for the future. For the moment, my immediate goal is to get the basics working.

    Abandoning Project Lorikeet

    I’ll admit it: the mini-project that I have been working on may not have been a good idea.

    The project, which I gave the codename Lorikeet, was to provide a way to stream YouTube videos to a Chromecast without using the YouTube app. Using the YouTube app is becoming a real pain. Ads aside, they’ve completely transformed the Chromecast experience from a very basic viewing destination to something akin to a Google TV, complete with recommendations of “Breaking News” from news services that I have no interest in seeing.

    So I spent some time trying to build something to avoid the YouTube app completely, using a mixture of youtube-dl, a Buffalo web-app, and a Flutter mobile app. I spent the last week on it (it’s not pretty so no screenshots), but at this stage I don’t see much point continuing to work on it.

    For one, the experience is far from perfect. Video loading is slow and there are cases when the video pauses due to buffering. I’m sure there are ways around this, but I really don’t want to spend the time learning how to do this.

    It was also expensive. I have a Linode server running in Sydney which acts as a bit of a hobby server (it’s also running Pagepark to serve this site); but in order to be closer to the YouTube CDNs near me, I had to rent a server that would run in Melbourne. And there are not many VPS hosting providers that offer hosting here.

    So I went with Google Cloud.

    Now, I’m sure there’s a lot to like about Google Cloud, but I found its VPS hosting to be quite sub-par. For just over $10 USD a month, I had a Linux virtual server with 512 MB of RAM, 8 GB of storage, and a CPU which I’d imagine is throttled all the way back, as trying to do anything of significance slowed it to a crawl. I had immense issues installing OS updates, getting the Dokku-based web-app deployed, and trying to avoid hitting the storage limit.

    For the same amount of money, Linode offers me a virtual server with 2 GB of RAM, 50 GB of storage, and a real virtual CPU. This server is running 4 Dokku apps, 3 of them with dedicated PostgreSQL databases, and apart from occasionally needing to remove dangling Docker images, I’ve had zero issues with it. None! (The podcasters were right).

    Where was I? Oh, yeah. So, that’s the reason why I’m abandoning this project and will need to re-evaluate my online video watching experience. I might give Plex a try, although before doing something like setting up a dedicated media server, I’ll probably just use the Mac Mini I’ve been using for a desktop in the short term.

    So, yeah, that’s it. It’s hard to abandon a project you spent any amount of time on. I suppose the good thing is that I got to play around with Flutter and learnt how to connect to a Chromecast using Dart, so it’s not a complete waste.

    Two People

    There are two people, and each one has the same problem that they want to get solved.

    The first person chooses the option to pay $10 a month, and all they have to do is sign up to a service that will solve the problem for them. The service they sign up for takes care of the rest.

    The second person chooses the option to pay $15 a month, 20 hours of work to get something built, and an ongoing commitment to keep it maintained.

    Guess which person I am today.

    (Hyper)critical Acclaim

    There were a couple of events that led me to write this post. I’m sure part of it was seeing the posts on the 10 year anniversary of Steve Jobs’s death, although such an event would probably not have been sufficient in itself. What tipped it over the edge was seeing the Ars Technica review of iOS showing up in my RSS feed on the same day. Pretty soon I’m expecting the macOS review to drop as well.

    The quality of the reviews is still quite good, and I try to read them when I have the time. But sadly they do not grab me the way the Siracusa reviews did.

    It’s usually around this time of year I would start expecting them, waiting for the featured article to show up on Ars’ homepage. Once they came out, I would read them from cover to cover. I wouldn’t rush it either, taking my time with them over a period of about a week, reading them slowly and methodically as one would sip a nice glass of wine.

    Thinking back on them now, what grabbed me about these pieces was the level of detail. It was clear from the writing that a lot of effort was put into them: every pixel of the new OS was looked at methodically, with a fine eye for detail. This level of study of the design of the OS release, trying to find the underlying theme running through the decisions made, was something not found in any of the other OS reviews on Ars or any other tech site. I’m sure it was in no small part the reason why I eventually moved to Apple for my computing needs.

    Eventually, the Siracusa reviews stopped. But by then I was well down the rabbit hole of Apple tech-nerd podcasts like The Talk Show and ATP. Now a regular listener to these shows, I still enjoy getting my fix of critical reviews of software technology, albeit in audio form. I eventually discovered the Hypercritical podcast, well after it finished, and I still occasionally listen to old episodes when there’s nothing new.

    Incidentally, there is one episode that I haven’t listened to yet: episode 37, recorded on 8th October 2011, just after the death of Steve Jobs. Here, on the 10 year anniversary of his death, it might be a good time to have a listen.

    On Choosing the Hard Way Forward

    This is about work, so the usual disclaimers about opinions being my own, etc. apply here.

    I have an interesting problem in front of me at the moment: I need to come up with a way to be notified when a user connects to, or disconnects from, a PostgreSQL database. This is not something that’s supported by PostgreSQL out of the box¹, so my options are limited to building something that sits outside the database. I can think of two ways to do this: have something that sits in front of the database and acts as a proxy, or have something that sits behind the database and generates the notifications by parsing the server log.

    A database proxy is probably the better option in the long run. Not only will it allow us to know exactly when a user connects or disconnects — since they will be connecting to the proxy itself — it could potentially allow us to do a few other things that have been discussed, such as IP address whitelisting. It might be a fair bit of work to do, and would require us to know the PostgreSQL wire protocol, but given how widespread PostgreSQL is, I’m suspecting that this could be done once and not need many changes going forward.

    Despite these advantages, I find myself considering the log parsing approach as the recommended solution. It’s probably a more fragile solution — unlike the wire protocol, there’s nothing stopping the PostgreSQL devs from changing a log message whenever they like — and it would not allow us to do all the other stuff that we’d like it to do. But it will be faster to build, and would involve less “hard programming” than the alternative. It can be knocked out quite quickly with a couple of regular expressions.
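
    To give a sense of what I mean, here’s a rough sketch of the log-parsing approach. The message formats below are what I’d expect from a server with log_connections and log_disconnections enabled, but they’re from memory and would need checking against the actual version we run, which is exactly the fragility I mentioned above.

    package main

    import (
      "bufio"
      "fmt"
      "os"
      "regexp"
    )

    var (
      // e.g. "LOG:  connection authorized: user=alice database=app"
      connectRe = regexp.MustCompile(`connection authorized: user=(\S+) database=(\S+)`)
      // e.g. "LOG:  disconnection: session time: 0:05:23.456 user=alice database=app ..."
      disconnectRe = regexp.MustCompile(`disconnection: session time: \S+ user=(\S+) database=(\S+)`)
    )

    func main() {
      // e.g. piped from `tail -F postgresql.log`
      scanner := bufio.NewScanner(os.Stdin)
      for scanner.Scan() {
        line := scanner.Text()
        if m := connectRe.FindStringSubmatch(line); m != nil {
          fmt.Printf("connect: user=%s database=%s\n", m[1], m[2])
        } else if m := disconnectRe.FindStringSubmatch(line); m != nil {
          fmt.Printf("disconnect: user=%s database=%s\n", m[1], m[2])
        }
      }
    }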

    Weighing the two options, I find myself wondering why I’m preferring the latter. Why go for the quick and easy solution when the alternative, despite requiring more work, would give us the greatest level of flexibility? It’s not like we couldn’t do it: I’m pretty confident anyone on the team would be able to put this proxy service together. It’s not even like my employer is requiring one particular solution over another (they haven’t yet received any of the suggestions I’m planning to propose, so they haven’t given a preference one way or the other). So what’s giving me pause about recommending it?

    No decision is completely made in a vacuum, and this is especially true in the mind of the decider. There are forces that sit outside the immediate problem that weigh on the decision itself: personal experience mixed with the prevailing zeitgeist of the industry, expressed as opinions of “best practice”. Getting something that works out quickly vs. taking the time to build something more correct; a sense that taking on something this large would also result in a fair amount of support and maintenance (at the very least, we would need to be aware of changes in the wire protocol); and just a sense that going for the proxy option would mean we’re building something that is “not part of our core business”.

    Ah, yes, the old “core business” argument. I get the sense that a lot of people treat this one as a binary decision: either it’s something that we as a business do, or it’s not. But I wonder if it’s more of a continuum. After all, if we need to block users based on their IP address, is it not in our interest to have something that does this? At what point does the lost opportunity of not having this outweigh the cost of taking on the development work now to build it? If we build the easy thing now, and later on we find ourselves needing the thing we don’t have, would we regret it?

    This is a bit of a rambling post, but I guess I’m a little conflicted about the prevailing approach within the tech industry of building less. It’s even said that part of the job is not only knowing what to build, but knowing when NOT to build. I imagine no-one wants to go back to the bad old days when everyone and their dog was building their own proprietary database from scratch. I definitely don’t want that either, but I sometimes wonder whether we’ve overcorrected in the other direction somewhat.


    1. I didn’t look at whether this is doable with hooks or extensions. ↩︎

    Some Notes About the Covid-19 Situation

    Now that the vaccines are here and are (slowly) being rolled out, and that Covid zero is no longer achievable in any realistic sense, the pandemic seems to be taking on a bit of a different vibe at the moment. I am no longer religiously watching the daily press conferences as I did in the past. They’re still occurring as far as I know, and I do appreciate that the authorities are showing up every day once again to brief the public.

    But I’m starting to get the sense that people are generally losing interest in it now. Well, maybe “losing interest” is the wrong way to say it. It’s not like it’s something that can be ignored: even if you aren’t affected yourself, you’re still bound by the current restrictions in some way. Maybe it’s more like it’s starting to slip into the background somewhat.

    Slowly, the we’re-all-in-this-together collectivism is morphing into an ethos of personal responsibility. Except for the need to keep the medical systems from being overwhelmed, it’s now up to the individual to take care of themselves, whether it’s by masking up and social distancing, or by getting vaccinated. Everyone that I love has done just this: they’ve got their shots when they could and are generally being very careful. But there are people out there that are not. Even some of my friends’ parents are hesitant to get the vaccine, either waiting for Pfizer (they’ll be waiting a while) or just being suspicious of the vaccine altogether.

    I also think that the success of the lockdowns before the Delta variant has lulled people into a sense of security that is no longer warranted. The latest lockdown has largely failed, and now it’s immunity from the vaccines that will have to protect us. I hope the people that are not taking this seriously realise that the protection that comes from collective action will no longer be around when the virus comes for them.

    On Confluence

    I’m sorry. I know the saying about someone complaining about their tools. But this has been brewing for a little while and I need to get it off my chest.

    It’s becoming a huge pain using Atlassian Confluence’s WYSIWYG editor to create wiki pages. Trying to use Confluence to write out something that is non-trivial, with tables and diagrams, so that it is clear to everyone in the team (including yourself), is now so annoying that I find myself wishing for alternatives. It seems like the editor is actively resisting your efforts to get something down on paper.

    For one thing, there’s no way to type something that begins with [ or {. Doing so will switch modes for adding links or macros. This actively breaks my train of thought. The rude surprise that comes from this shunts me out of my current thought and into one of trying to remember the proper way to back out of this unwanted mode change, which is not easy to do. There’s no easy way to get out of the new mode and simply leave the brace as you typed it. It seems that the only way to disable this is to turn off all auto-formatting. I never need to create new macros by typing them out, but I do use h3. to create new headings and * to bold text all the time. In order to actually type out an opening brace, I have to turn these niceties off.

    The second issue is that it’s soooo sloooow. For a large page, characters take around a second to appear on the screen after being typed. This does not help when you’re trying to get your thoughts down on the page as quickly as they come to you. You find yourself pausing and waiting for the words to catch up, which just slows your thinking down. And I won’t mention the number of errors that show up because of this (to be fair, I’m not the best typist in the world, but I find myself making fewer errors in an editor with faster feedback than the one Confluence uses).

    I appreciate the thinking behind moving from a plain text editor to a WYSIWYG one: it does make it more approachable for users not comfortable working with a markup language (although I also believe this is something that could be learnt, and that these users would eventually get comfortable with it and appreciate the speed at which they could type things out). It’s just a shame that there’s no alternative for those who need an interface that is fast and will just get out of the way.
