Long Form Posts
Broadtail 0.0.7
Released Broadtail 0.0.7 about a week ago. This included some restyling of the job list on the home page, which now features a progress bar updated over websockets (no need for page refreshes anymore).
For the frontend, the browser’s WebSocket API is used. There’s not much to it — it’s managed by a Stimulus controller which sets up the websocket and listens for updates. The updates are then pushed as custom events to the main window, which the Stimulus controllers used to update the progress bar are listening out for. This allows a single Stimulus controller to manage the websocket connection and make use of the window as a message bus.
Working out the layers of the progress bar took me a bit of time, as I wanted to make sure the text in the progress bar itself remained readable as the bar filled. I settled on an HTML tree that looks like the following:
<div class="progressbar">
<!-- The filled in layer, at z-index: 10 -->
<div class="complete">
<span class="label">45% complete</span>
</div>
<!-- The unfilled layer -->
<span class="label">45% complete</span>
</div>
As you can see, there’s a base layer and a filled-in layer that overlaps it. Both of these layers have a progress label that contains the same status message. As the .complete layer fills in, it will hide the unfilled layer and its label. The various CSS properties used to get this effect can be found here.
The backend was a little easier. There’s a nice websocket library for Go which handles the connection upgrade and provides a simple API for posting JSON messages. Once the upgrade is complete, a goroutine servicing the connection will just start listening for status updates from the jobs manager and forward them as JSON text messages.
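To give a rough idea of the shape of this, here’s a minimal sketch of such a handler. I’m assuming gorilla/websocket as the library, and the jobUpdate struct and updates channel are hypothetical stand-ins for Broadtail’s actual jobs manager:

package server

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

// jobUpdate is a hypothetical stand-in for the status messages
// produced by the jobs manager.
type jobUpdate struct {
	JobID    string  `json:"job_id"`
	Progress float64 `json:"progress"`
	Message  string  `json:"message"`
}

var upgrader = websocket.Upgrader{}

// jobUpdatesHandler upgrades the HTTP connection to a websocket, then
// forwards each status update from the jobs manager as a JSON text message.
func jobUpdatesHandler(updates <-chan jobUpdate) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println("upgrade failed:", err)
			return
		}
		go func() {
			defer conn.Close()
			for update := range updates {
				if err := conn.WriteJSON(update); err != nil {
					return // client went away; stop forwarding
				}
			}
		}()
	}
}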
Although this works, it’s not perfect. One small issue is that the connection will not be re-established if there is an error. I imagine it’s just a matter of listening out for the relevant events and retrying, but I’ll need to learn more about how this actually works. Another thing is that the styling of the progress bar relies on fixed widths. If I get around to reskinning the entire application, that might be the time to address this.
The second thing in this release is a simple integration with Plex. If this integration is configured, Broadtail will now send a request to Plex asking it to rescan the library for new files, meaning that there’s no real need to wait for the scheduled rescan to occur before the videos are available in the app. This simply uses Plex’s API, but it needs the Plex token, which can be found using this method.
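The rescan trigger itself is just an HTTP request against Plex’s section-refresh endpoint. A sketch of roughly what that looks like (the host, section ID, and token are placeholders):

package plex

import (
	"fmt"
	"net/http"
)

// RescanLibrary asks the Plex server to rescan a single library section.
// Plex exposes this as a GET on /library/sections/{id}/refresh,
// authenticated with the X-Plex-Token query parameter.
func RescanLibrary(host string, sectionID int, token string) error {
	url := fmt.Sprintf("http://%s:32400/library/sections/%d/refresh?X-Plex-Token=%s",
		host, sectionID, token)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("plex returned %s", resp.Status)
	}
	return nil
}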
Anyway, that’s it for this version. I’m working on re-engineering how favourites work for the next release. Since this is still in early development, I won’t be putting in any logic to migrate the existing favourites, so just be wary that you may lose that data. If that’s going to be a problem, feel free to let me know.
Learning Through Video
Mike Crittenden wrote a post this morning about how he hates learning through videos. I know for myself that I occasionally do prefer videos for learning new things, but not always.
Usually if I need to learn something, it would be some new technology that I have to know for my job. In those cases, I find that if I have absolutely no experience in the subject matter, a good video which provides a decent overview of the major concepts helps me a great deal. Trying to learn the same thing from reading a lengthy blog post, especially one heavy with jargon, is less effective for me. I find myself getting tired and losing my place. Now, this could just be because of the writing — dry blocks of text are the worst, but I tend to do better if the posts are shorter and formulated more like a tutorial.
If there is a video, I generally prefer it to be delivered in the style of a lecture or presentation. Slides that I can look at while the presenter is speaking are fine, but motion graphics or a live demo is better, especially if the subject is complex enough to warrant them. In either case, I need something visual that I can actually watch. Having someone simply talk to the camera really doesn’t work for me, and makes watching the video more of a hassle (although it’s slightly better if I just listen to the audio).
Once I’ve become proficient in the basics, learning through video becomes less useful to me, and a decent blog post or documentation page works better. By that time, my learning needs become less about the basics and more about something specific, like how to do a particular thing or the details of a particular item. At that point, speed is more important to me, and I prefer to have something that I can skim and search in my own time, rather than watch videos that tend to take much longer.
So that’s how and when I prefer to learn something from video. I’ll close by saying that this is my preferred approach when I need to learn something for work. If it’s during my downtime, either a video or blog-post is fine, so long as my curiosity is satisfied.
Some More Updates of Broadtail
I’ve made some more changes to Broadtail over the last couple of weeks.
The home page now shows a list of recently published videos below the currently running jobs.

Clicking through to “Show All” displays all the published videos. A simple filter can be applied to narrow them down to videos whose titles contain the keywords (note: nothing fancy with the filter, just tokenisation and an OR query).
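For the curious, here’s a rough sketch of the idea in its simplest in-memory form (not the actual query code; the video type is a stand-in for Broadtail’s model):

package feeds

import "strings"

// video is a stand-in for Broadtail's actual video model.
type video struct {
	Title string
}

// filterVideos tokenises the query and keeps any video whose title
// contains at least one of the tokens (OR semantics).
func filterVideos(videos []video, query string) []video {
	tokens := strings.Fields(strings.ToLower(query))
	var matched []video
	for _, v := range videos {
		title := strings.ToLower(v.Title)
		for _, tok := range tokens {
			if strings.Contains(title, tok) {
				matched = append(matched, v)
				break // one matching token is enough
			}
		}
	}
	return matched
}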

Finally, items can now be favourited. This can be used to mark videos that you may want to download in the future. I personally use this to keep the “new videos” list on the Plex server these downloads go to as short as possible.

Time and Money
Spending a lot of time in Stripe recently. It’s a fantastic payment gateway and a pleasure to use, compared to something like PayPal which really does show its age.
But it’s so stressful and confusing dealing with money and subscriptions. The biggest uncertainty is dealing with anything that takes time. The problem I’m facing now: if the customer buys something like a database, which is billed a flat fee every month, and then buys another database during the billing period, can I track that with a single subscription and simply adjust the quantity? My current research suggests that I can, and that Stripe will handle the prorating of partial payments and credits. They even have a nice API to preview the next invoice, which can be used to show the customer how much they will be paying.
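For what it’s worth, the quantity adjustment itself looks fairly straightforward with the Go client. Here’s a sketch using stripe-go v72 (the key and IDs are placeholders, and I’d still want to verify the proration behaviour in a test environment before trusting it):

package main

import (
	"log"

	"github.com/stripe/stripe-go/v72"
	"github.com/stripe/stripe-go/v72/subscription"
)

// addSecondDatabase bumps the quantity on an existing subscription item
// from 1 to 2. Stripe should prorate the difference for the remainder
// of the billing period.
func addSecondDatabase(subscriptionID, itemID string) (*stripe.Subscription, error) {
	params := &stripe.SubscriptionParams{
		// Ask Stripe to create proration line items for the partial period.
		ProrationBehavior: stripe.String("create_prorations"),
		Items: []*stripe.SubscriptionItemsParams{
			{
				ID:       stripe.String(itemID),
				Quantity: stripe.Int64(2),
			},
		},
	}
	return subscription.Update(subscriptionID, params)
}

func main() {
	stripe.Key = "sk_test_..." // secret key placeholder
	if _, err := addSecondDatabase("sub_...", "si_..."); err != nil {
		log.Fatal(err)
	}
}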
But despite all the documentation, test environments, and simulations, I still can’t be sure that it will happen in real life, when real money is exchanged in real time. I guess some real life testing would be required. 💸
Cling Wrap
I bought this roll of cling wrap when I moved into my current place. Now, after 6.5 years and 150 metres, it’s finally all used up.

In the grand scheme of things, this is pretty unimportant. It happens every day: people buy something, they use it, and eventually it’s all used up. Why spend the time and energy writing and publishing this post to discuss it? Don’t you have better things to do?
And yet, there’s still a feeling of weight to this particular event that I felt was worth documenting. Perhaps it’s because it was the first roll of cling wrap I bought after I moved out. Or maybe it’s because it lasted for this long, so long in fact that the roll I bought to replace it was sitting in my cupboard for over a year. Or maybe it’s the realisation that with my current age and consumption patterns, I probably wouldn’t use up more than 7 rolls like this in my lifetime.
Who knows? All I know is that despite the banality of the whole affair, I just spent the better part of 20 minutes trying to work out how best to talk about it here.
I guess I’m in a bit of a reflective mood today.
Trip to Ballarat and the Beer Festival
I had the opportunity to go to Ballarat yesterday to attend the beer festival with a couple of mates. It’s been a while since I last travelled to Ballarat — I think the last time was when I was a kid. It was also the first time I took the train up there. I wanted to travel the Ballarat line for a while but I never had a real reason to do so.
The festival started at noon but I thought I’d travel up there earlier to look around the city for a while.
I didn’t stay long in the city centre as I needed to take the train to Wendouree, where the festival was located.
The beer festival itself was at Wendouree park. The layout of the place was good: vendors (breweries, food, etc.) were laid out along the perimeter, and general seating was available in the middle. They did really well with the seating. There were more than enough tables and chairs for everyone there.
The day was spectacular, if a bit sunny: the tables and chairs in the shade were prime real estate. The whole atmosphere was pleasant: everyone was just out to have a nice time. It got pretty crowded as the day wore on. Lots of people with dogs, and a few families as well.
I’m not a massive beer connoisseur so I won’t talk much about the beers. Honestly, the trip for me was more of a chance to get out of the city and catch up with mates. But I did try a pear cider for the first time, which was a little on the sweet side, though I guess that was to be expected. I also had a Peach Melba inspired pale ale that was actually kind of nice.
The trip home was a bit of an adventure. A train was waiting at Wendouree station when I got there. There was nobody around and it was about 5 minutes until departure, so I figured I’d board. Turns out it was not taking passengers. I was the only one who boarded, and by the time I realised it was not in service, the doors had closed and the train had departed. I had to make my presence known to the driver and one other V/Line worker. They were really nice about it, and fortunately for me, they were on their way to Ballarat anyway, so it wasn’t a major issue. Even so, it was quite embarrassing. The actual train home was easy enough.
OS Vendors and Online Accounts
Looks like the next version of Windows will require an online account, and while the reason for this could be something else, I’m guessing this would be used to enable file sync, mail account sync, calendar sync, etc.
I think it’s a mistake for OS vendors to assume that people would want to share their sole online identity across different devices. Say that I had a work computer and a home computer, and I’d use the same online account for both. Do I really want my personal files and work files being synced across, or my scheduled meetings to start showing up in my personal calendar?
I guess the response would be to create two online accounts: one for work and one for home. This might be possible: I don’t know how difficult it would be to create multiple Microsoft accounts for the same person. But if I do this [1], and there’s software that I’ve purchased with my home account that I’d like to use on my work device, I’d have to repurchase it. I guess if I’m employed full time it should be work purchasing the software, but come on, am I really going to go through the whole procurement bureaucracy to buy something like a $29 image editor?
This could all be theoretical: it might be that this wouldn’t be a problem for Windows users. But I know from my limited experience with MacOS that issues based on the assumption that everything associated with an online account should be shared on every device can crop up. That’s why I don’t open Mail.app on my home computer.

[1] This is all hypothetical. I’m not a Windows user.
My YouTube Watching Setup
I’m not a sophisticated YouTube watcher but I do watch a lot of YouTube. For a while I was happy enough to simply use the YouTube app with a Chromecast. Yes there were ads, but the experience was nice enough that I tolerated them.
Recently, however, this became untenable.
It started with Google deciding to replace their simple Chromecast target with a Google TV style app, complete with a list of video recommendations I had no interest in watching. This redesign also came with more ads, which by themselves would have been annoying enough. But with this year being an election year, I started seeing campaign ads from a political party I have absolutely zero interest in seeing ads from. Naturally, Google being Google, there was no way for me to block them [1]. I guess I could have just paid to remove the ads, but this wouldn’t solve the Chromecast problem. Besides, paying for something that is arguably not a great use of my time felt wrong. I felt that a bit of friction in my YouTube watching habits wouldn’t be a bad thing to introduce.
It was time to consider an alternative setup.
Plex
Taking inspiration from those on Micro.blog and certain podcasters, I decided to give Plex a go. I had an Intel Nuc, purchased a few years ago, that I wasn’t using, and it seemed like a good enough machine for a Plex server. The Nuc is decent enough, but it’s a little loud and I didn’t want it anywhere near where I usually spend my time. It’s currently in a wardrobe in my spare bedroom.
After upgrading it to Ubuntu 20.04 LTS, I installed the Plex Media Server. I had to create a Plex account, which was a little annoying, but after doing so, I was able to set up a new library for YouTube videos relatively easily. I configured the library to poll every hour, which would come in handy for the next part of this setup.
I also installed the Plex app on my Android phone to act as the media player. The app has support for Chromecast, which is my preferred setup. Getting the app to talk with the media server was a little fiddly. I can’t remember all the details as it was a couple of months ago, but I do remember it taking several attempts before the app was listing videos in the library. But once the link was established, it became quite easy to play downloaded videos on my TV. I’ll have more to say about the app near the end of the post.
Youtube-dl And Broadtail
Once Plex was set up, I needed a way to download the YouTube videos. I was hoping to use youtube-dl, but the idea of SSH’ing into the media server to do so was unappealing. I was also aware that it was possible to subscribe to YouTube channels via RSS, which is my preferred way to be notified of new content. I tend not to subscribe to channels within YouTube itself, as I’d rather Google didn’t know too much about my viewing preferences (sorry YouTubers).
I figured the ideal would be a small web-app running alongside Plex that would allow me to subscribe to YouTube RSS feeds and download the videos into the Plex library using youtube-dl. I’m sure such applications already exist, but I decided to build my own.
So I built a small Go web-app to do this. I called it Broadtail, mainly because I’m using bird-related terms for working project names and I couldn’t think of anything better. It’s pretty basic, and it is ugly as sin, but it does the job.

I can set up an RSS subscription to YouTube channels and playlists, which it will periodically poll and store in a small embedded database. I can get a list of videos for each feed I’ve subscribed to, and if one looks interesting, I can start a download from the UI. The app will run the appropriate youtube-dl incantation and provide a running status update, with some really basic job controls.
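There’s no magic to the incantation itself. A simplified sketch of how a download might be kicked off (the exact flags Broadtail passes differ, but these are standard youtube-dl options):

package feeds

import (
	"fmt"
	"os/exec"
)

// downloadVideo shells out to youtube-dl (assumed to be on the PATH),
// saving the video as an MP4 into the Plex library directory.
func downloadVideo(videoID, libraryDir string) error {
	url := "https://www.youtube.com/watch?v=" + videoID
	cmd := exec.Command("youtube-dl",
		"-f", "mp4", // prefer an MP4 container for Plex
		"-o", libraryDir+"/%(title)s.%(ext)s", // output template
		url,
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("youtube-dl failed: %w: %s", err, out)
	}
	return nil
}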

The downloaded videos are saved as MP4s in a directory configured as a Plex library. The hourly scan will pick them up, although I occasionally need to trigger a rescan manually if the video was downloaded recently. During the day, I look for any new videos that seem interesting and start downloads in Broadtail. The videos are (usually) ready and available in Plex by evening. The only exceptions are videos that are 3 to 4 hours long, which usually take around a day to download thanks to YouTube’s throttling.
How It’s Working Out
Putting this together took roughly a month or so, and I’ve been using it for my YouTube viewing for a couple of months now. In general, it’s working OK. The Plex media server is working quite well, as is the Plex mobile app. Broadtail is pretty bare bones but I’ve been slowly making changes to it over time as my needs evolve.
There are a few annoyances though. One large one is that the Plex app for Android is a little buggy. It gets into a state in which it is unable to start playback of a video, and the only way I know of fixing this is by rebooting the Chromecast device. This is really annoying, and it’s gotten to the point where I’m doing this almost daily. I contemplated setting the Chromecast up on a smart plug so that I can force a restart simply by killing power to it in the middle of the night. It hasn’t quite come to that, but if Plex doesn’t fix their app soon, I think I may go ahead with it.
Also annoying is that sometimes the Plex app will lose connection with the media server and will not list the contents of my library. Fortunately, a restart of the mobile app is enough to resolve this.
As for the Intel Nuc itself, there have been instances where it seems to lock up and I’ve had to hard power it down. I don’t know what’s causing this. It could be that either Plex or Broadtail is causing a kernel panic of sorts, or it could be something in the Nuc itself: it’s reasonably low-cost hardware that is tailored more for Windows. I may eventually replace the Nuc with the Mac Mini I’m currently using as a desktop, once it’s time to upgrade.
But all in all, I think this is working for me. Not seeing any ads or crappy recommendations is a major win, and it’s also nice to actually run out of things to watch, forcing me to do something productive. Sometimes I question whether the time it took to set this all up was worth it. Maybe, maybe not. But it feels better having something a little more in my control than simply paying YouTube to remove the ads.
Finally, if Broadtail sounds interesting to you, it’s available on GitHub. I’ve only recently open-sourced it, so a lot is missing, like decent documentation (it only got a README today). So please consider it in a bit of a “here be dragons” state at the moment. But if you have any questions, feel free to contact me.

[1] Hey Google, having a way to indicate zero interest in seeing ads from someone is a signal of intent. Consider making this option available to us and you get more info for your user profiles.
Reminder That Your Content Isn't Really Yours on Medium #3
Looks like Medium has had a redesign recently, with recommended posts now being featured more prominently. Instead of appearing at the end of the post, they’re now in a right-hand sidebar that doesn’t scroll, directly below the author of the post you’re reading.
And let me be clear: as far as I can tell, these are not recommendations from the same author. They can be from anyone, covering any topic that I can only assume Medium algorithmically thinks you’d be interested in. It reminds me a lot of the anxiety supplier that is Twitter Trending Topics.
Thank goodness. Here I was, reading someone’s post on UI design, without being made aware of (and constantly reminded of, whenever I move my eyes slightly to the right) another post by a different author informing me that NFTs have been superseded by “Super NFTs”. Thank you for that, Medium. My reading experience has been dramatically improved! (Sarcasm test complete.)
Honestly, I’m still wondering why people choose to use Medium for publishing long-form writing. And yes, I acknowledge that it could be worse: their “post” could just as easily have been a Twitter thread [1]. But from this latest redesign, it seems to me that Medium is doing its best to close the reading-experience gap between the two services.

[1] Please don’t publish your long form writing as a Twitter thread.
The "Too Much Data" Error in Buffalo Projects
If there’s anyone else out there using Buffalo to build web-apps, I just discovered that it doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.
The tell-tale sign is this error message when you try to run the application:
too much data in section SDWARFSECT (over 2e+09 bytes)
If you see that, deleting public/assets should solve your problem.
On Posting Daily
I recently listened to an interview with Seth Godin on the Tim Ferriss podcast. In that interview, Seth mentions that he writes up to five blog posts a day. He just doesn’t publish them all. I guess that means he has at least one or two drafts that can be touched up and published when he needs them.
Although I don’t think of this blog as being anywhere near the quality of Seth’s, I think I’d like to start trying to publish on this site at least once a day. I don’t post to any specific schedule here, and there have been stretches of days in which this blog has not seen an update at all. But over the last week, I’ve found myself falling into a streak, and I’d like to see how long I can maintain it.
The thing that has thwarted me in the past (apart from not even thinking about it) was either not being in the right frame of mind or not being available that day to post something. I’m not sure this blog warrants the discipline of setting a specific time each day to sit down and write something. I treat this blog more or less like a public journal: a place to document thoughts, opinions, or events of the day.
But I’m wondering if maintaining an inventory of unpublished drafts might help in maintaining this streak. So even though the goal is to write and publish a post on the same day, having something to fall back on when I can’t might be worthwhile.
The Future of Computing
I got into computers when I was quite young, and to satisfy my interest, I read a lot of books about computing during my primary school years. I remember one such book that included a discussion about how computing could evolve in the future.
The book approached the topic using a narrative of a “future” scenario that would roughly correspond to today’s present. In that story, the protagonist was late for school because of a fault with the “home computer” regarding the setting of the thermostat or something similar. Upon arriving home from school, he interacted with the computer by speaking to it as if he were talking to another person, expressing his anger about the events of that morning in full, natural-language sentences. The computer responded in kind.
This book was published at a time when most personal computing involved typing in BASIC programs, so you could imagine that a bit of creative license was taken in the discussion. But I remember reading this and being quite ambivalent about this prospective future. I could not imagine central computers being installed in houses and controlling all aspects of their environment. Furthermore, I balked at the idea of people choosing to interact with these computers using natural language. I’m not much of a people person, so the idea of speaking to a computer as if it were another person, and having to deal with the computer speaking back, was not attractive to me.
Such is the feeling I have now with the idea of anyone wanting to put on AR and VR headsets. This seems to be the current focus of tech companies like Apple and Google, trying to find the successor to the smartphone. And although nothing from these companies has been announced yet, and these technologies have yet to escape the niche of gaming, I still cannot see a future in which people walk around with these headsets outside in public. Maybe with AR, if it can be done in a device that looks like a pair of regular glasses, but VR? No way.
But as soon as I reflected on those feelings, that book I read all those years ago came back to me. As you can probably guess, the future predicted in that story has more-or-less become reality, with the rise of the cloud, home automation, and smart speakers like the Amazon Echo. And more than that, people are using these systems and liking it, or at least putting up with it.
So the same thing might happen with AR and VR headsets. I should probably stay out of the future-predicting business.
PGBC Scoring Rules
I get a bit of a thrill when there’s a need to design a mini-language. I have one facing me now for a little project I’m responsible for: maintaining a scoring site for a bocce comp I’m involved in with friends.
How scoring works now is that the winner of a particular bocce match gets one point for the season. The winner for the season is the person with the most points. However, we recently discussed the idea of adding “final matches”, which will give the match winner 7 points, the runner-up 2 points, and the person who came in third 1 point. At the same time, I want to add the notion of “friendly matches”, which won’t count towards the season score.
The simplest solution might have been to encode these rules directly in the app, and have a flag indicating whether a match was normal, final, or friendly. But this was suboptimal, as there is another variant of the game we play which does not have the notion of finals, and if it ever did, we may eventually want different rules for it. So I opted for a design in which a new “match type” is added as a new database entity, with the scoring rules encoded in a PostgreSQL JSON column. Using this as a mechanism for encoding free(ish) structured data when there’s no need to query it has worked for me in the past. There was no need to add the notion of season points, as it was already present as an easy way to keep track of wins for a season.
For the scoring rules JSON structure, I’m considering the use of an array of conditions. When a player meets the conditions of a particular array element, they will be awarded the points associated with that condition. Each player will only be permitted to match one condition, and if they don’t match any, they won’t get any points. The fields of the condition that a player can be matched to can be made up of the following attributes:
- rank: (int) the position the player has in the match just played in accordance with the scoring, with 1 being the player with the highest score, 2 being the player with the second highest score, and so on.
- winner: (bool) whether the player is considered the winner of the match. The person with the highest score usually is, but this is treated as an independent field and so it should be possible to define rules accordingly.
- draw: (bool) whether the player shares their rank with another player. When a draw occurs, both winning players will have a rank of 1, with the player of the second highest score having a rank of 2.
Using this structure, a possible scoring rules definition for a normal match may look like the following:
{ "season_score": [
{ "condition": { "winner": true }, "points": 1 }
]}
whereas a rules definition for the final match may look like the following:
{ "season_score": [
{ "condition": { "rank": 1 }, "points": 7 },
{ "condition": { "rank": 2 }, "points": 2 },
{ "condition": { "rank": 3 }, "points": 1 }
]}
Finally, for friendlies, the rules can simply look like the following:
{ "season_score": [] }
I think this provides a great deal of flexibility and extensibility without making the rules definition too complicated.
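To make the semantics concrete, here’s a sketch of how these rules might be evaluated, with hypothetical Go types mirroring the JSON structure above (the site’s actual code may differ):

package scoring

// Condition matches a player's match result. Nil fields are ignored,
// so an empty condition matches everyone.
type Condition struct {
	Rank   *int  `json:"rank,omitempty"`
	Winner *bool `json:"winner,omitempty"`
	Draw   *bool `json:"draw,omitempty"`
}

// Rule awards points to players matching its condition.
type Rule struct {
	Condition Condition `json:"condition"`
	Points    int       `json:"points"`
}

// Rules is the top-level scoring rules document.
type Rules struct {
	SeasonScore []Rule `json:"season_score"`
}

// PlayerResult is a player's outcome for a single match.
type PlayerResult struct {
	Rank   int
	Winner bool
	Draw   bool
}

// SeasonPoints returns the season points awarded to a player: the points
// of the first matching rule, or zero if none match (e.g. friendlies,
// where the rule list is empty).
func SeasonPoints(rules Rules, p PlayerResult) int {
	for _, rule := range rules.SeasonScore {
		c := rule.Condition
		if c.Rank != nil && *c.Rank != p.Rank {
			continue
		}
		if c.Winner != nil && *c.Winner != p.Winner {
			continue
		}
		if c.Draw != nil && *c.Draw != p.Draw {
			continue
		}
		return rule.Points
	}
	return 0
}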
On the Moxie Marlinspike Post About web3
Today, I took a look at the Moxie Marlinspike post about web3 [1]. I found this post interesting for a variety of reasons, not least because, unlike many other posts on the subject, it was level-headed and came from a position of wanting to learn rather than to persuade (or hustle). Well worth the read, especially for those that are turned off by the whole web3 crap like I am.
Anyway, there were a few things from the post that I found amusing. The first, and by far the most shocking, was that the “object” of an NFT is not derived from the actual item in question, like the artwork image, or the music audio, etc. It’s essentially just a URL. And not even a URL with an associated hash. Just a plain old URL, as in “example.com”, which points to a resource on the internet that can be changed or removed at any time. Not really conducive to the idea of digital ownership if the thing that you “own” is just something that points to something else that you don’t actually control.
Also amusing was the revelation that for a majority of these so-called “distributed apps”, the “distribution” part is a bit of a misnomer. They might be using a blockchain to handle state, but many of the apps themselves are doing so by calling regular API services. They don’t build their own blockchain or even run a node on an existing blockchain, which is what I assumed they were doing. I can achieve the same thing without a blockchain if I make the database I use for my apps public and publish the API keys (yes, I’m being facetious).
The final thing I found amusing was that many of these platforms are building features that don’t use the blockchain at all. Moxie made the excellent point that the speed at which a protocol evolves, especially one that is distributed by design, is usually very slow. Likely too slow if you’re trying to add features to a platform in an attempt to make it attractive to users. So services like OpenSea are sometimes bypassing the blockchain altogether, and just adding proprietary features which are backed by regular data stores like Firebase. Seems to me this is undermining the very idea of web3 itself.
So given these three revelations, what can we conclude from all the rhetoric of web3 that’s currently out there? That, I’ll leave up to you. I have my own opinions, which I hope come through in the tone of this post.
I’ll close by saying that I think the most insightful thing I learnt from the post had nothing to do with web3 at all. It was the point that the reason Web 2 came about was that people didn’t want to run their own servers, and never will. This is actually quite obvious now that I think about it.

[1] Ben Thompson wrote a terrific post about it as well.
Burnt Out on Design
I’ve been doing a heap of design work in my job at the moment: writing documents, drawing up architecture diagrams, etc. I thought I would like this sort of work, but I realise now that I can only tolerate it in small doses. Doing it for as long as I have been is burning me out slightly. I’d just like to go back to coding.
I’m wondering why this is. I think the biggest feeling I have is that it feels like I’m not delivering value. I understand the need to get some sort of design up so that tasks can be written up and allocated. But a big problem is the feeling that everything needs to be in the design upfront, waterfall style, whereas the method I’d prefer is to have a basic design upfront — something that we can start work on — which can be iterated on and augmented over time.
I guess my preference for having something built vs. something perfect on paper differs from those I work with. Given my current employer, which specialises more in hardware design, I can understand that line of thinking.
I’m also guessing that software architecture is not for me.
Still Off Twitter
A little while ago, I stopped using Twitter on a daily basis as the continuous barrage of news was getting me down. Six weeks after doing so, I wrote a post about it. Those six weeks have now become six months, and I can say I’m still off Twitter and have no immediate intention of going back.
My anxiety levels have dropped since getting off [1], and although they’ve not completely gone, the baseline has remained low, with occasional spikes that soon subside. But the best thing is that the time I would have spent reading Twitter I now spend reading stuff that would have taken longer than 30 seconds to write. Things like books, blog posts and long-form articles (and Micro.blog posts, I always have time for those). It feels like the balance of my information diet has centred somewhat. I still occasionally read the news (although I stay away from the commercial news sources) but I try not to spend too much time on it. Most things I don’t need to be informed about in real time: if I learn about something the following day, it’s no big deal.
I’m also seeing more and more people making the same choice I’ve made. The continuous stream of news on Twitter is just becoming too much for them, and they want off. I think Timo Koola’s post sums it up pretty well:
I wonder how much studies there are about harmfulness of following the news too closely? I don’t think our minds were made for constant bombardment of distressing things we can’t do anything about.
It’s not healthy being constantly reminded of events going on, most of them undesirable, that you can’t change. Better for myself that I spend my attention on things that interest me and help me grow.

[1] It’s amusing that the language I found myself using for this post sounds like I’m recovering from some form of substance abuse. I’m guessing the addictive nature of Twitter and its ilk is not too different.
100 Day Writing Streak
I promise I won’t post about every single milestone that comes along, but I’m quite happy to have reached 100 consecutive days with at least one blog post or journal entry.

On Treating Users As If They're Just There To Buy Stuff
Ars Technica has published a third post about the annoying user experience of Microsoft Edge in as many days. Today’s was about a notice that appears when the user tries to use Edge to download Chrome. These are notices that are displayed by the browser itself whenever the user opens up the Chrome download page.
Now, setting aside the fact that these notices shouldn’t be shown to the user at all, what got my goat was the copy that appears in one of them:
‘I hate saving money,’ said no one ever. Microsoft Edge is the best browser for online shopping.
What is with this copy? Do they assume that all users do with their computers is buy stuff? That their only motivation with using a browser at all is to participate in rampant consumerism?
I’m not a Microsoft Edge user, so it’s probably not worth my time to comment on this. But what bothers me is that I’m seeing a trend suggesting that large software companies think their users are just using their devices to consume stuff. This might be true of the majority — I really don’t know — but the problem is that this line of thinking starts to bleed into their product decisions, and reveals what lengths they will go to to extract more money from these users. I’m going on about Edge here, but Apple does the same thing in their OSes: showing notifications for TV+ or Apple Music or whatever service they’re trying to flog onto their customers this month. At least with web companies like Google, Twitter and Meta (née Facebook 😒), we get to use the service for free.
I know software is expensive to build and maintain, etc, etc. But this mode of thinking is so sleazy it’s becoming insulting. It just makes the experience of using the product worse all around, like going to a “free” event when you know you’ll be pushed to buy something. This is how these software companies want their users to feel?
Weekend In Mansfield
Over the weekend, I had the opportunity to spend some time with my parents who were staying in Mansfield, in regional Victoria. We were staying in a small cottage located on a hill, which meant some pretty stunning views, especially in the evening light.



We didn’t do a heap during our trip, although we did manage to do The Paps trail on Saturday, which involved a 700 metre climb.

(Apologies for the photo, I had another one that was zoomed in a bit more but the photo turned out quite muddy. Might need to consider another phone or camera.)
It was a bit of a challenge — the trail was quite steep at times — and there were a few instances when we considered turning back. But we did eventually reach the summit, and got some spectacular views of Lake Eildon, which was quite full thanks to all the rainfall we got over the last few months.






This was followed by a pub lunch at the Bonnie Doon Hotel. The place was chockers, probably with people eager to get out of the city at the end of lockdown (likewise for the cottage we stayed at, which has been booked solid for the next couple of months). But the food (and beer) was good, and it was perfect weather to be dining outside, with the sun shining and temperatures in the low 20s Celsius.
All in all it was good to get out of the city, and out of my weekend routine, for a spell.
Cookie Disclosure Popups Should be Handled by the Browser
I really dislike the cookie disclosure popups that appear on websites. Ideally I shouldn’t be seeing them at all — I know that the EU requires it, but I’m not a citizen of the EU so the regulation should not apply to me. But I’m pragmatic enough to know that not every web developer can or will selectively show this disclosure popup based on the geographic region of the visitor.
That’s why I’m wondering if these disclosure popups would be better handled by the browser.
The way I see this working is that when a website tries to set a cookie, either through a response header or within JavaScript, and the user is located in a jurisdiction that requires them to be made aware of this, the browser would be responsible for telling them. It could show a permission request popup, much like the ones you already see when a site wants to use your microphone or get your location. The user can then choose to “accept”, in which case the cookie would be saved; or they can choose to “deny”, in which case the cookie would be silently dropped or an error would be returned.
This has some major advantages over the system we have now:
- It would save website devs from building the disclosure popup themselves. I’ve seen some real creative ways in which websites show this disclosure, but honestly it would just be simpler not to have to. It would also cover those web developers that forget (or “forget”) to disclose the presence of cookies when they need to.
- The website does not need to know where the user is browsing from. Privacy issues aside, it’s just a hassle to look up the jurisdiction of the originator based on their IP address. Which is probably why no-one does it, and why even non-EU citizens see these disclosure popups. This is not a problem for the browser, which I’d imagine would have the necessary OS privileges to get the user’s current location. This would be especially true for browsers bundled with the OS, like Safari and Edge.
- When the user chooses an option, their choice can be remembered. The irony of this whole thing is that I rarely see websites use cookies to save my preferences for allowing cookies. These sites seem to just show the popup again the next time I visit. Of course, for a user who chooses to deny the use of cookies, it wouldn’t be possible for the site to use cookies to record this fact. If the browser is managing this preference, it can be saved alongside all the other site permissions, like microphone access, thereby sitting outside what the site can make use of.
- Most important of all to me: those outside the jurisdiction don’t even need to see the disclosure popup. Websites that I visit could simply save cookies as they have been for 25 years now. This could be an option in the browser, so that users who prefer to see the disclosure prompt can still do so. This option could also come in handy for those EU citizens who prefer to just allow (or deny) cookies across the board, so they don’t have to see the disclosure popup either (I don’t know if this is possible under the regulation).
Of course, the actual details of this would need to be ironed out, like how a website would know whether the user has denied cookie storage. That’s something for a standards committee to work out. But it seems to me that this feature is a no-brainer.