Long Form Posts
Cloud Formation "ValidationError at typeNameList" Errors
I was editing some Cloud Formation today and when I tried to deploy it, I was getting this lengthy, unhelpful error message:
An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value '[AWS:SSM::Parameter, AWS::SNS::Topic]' at 'typeNameList' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 204, Member must have length greater than or equal to 10, Member must satisfy regular expression pattern: [A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}(::MODULE){0,1}]
It was only showing up when I tried adding a new SSM parameter resource to the template, so I first thought it was some weird problem with the parameter name or value. But after changing both to something that should work, I was still seeing this.
Turns out the problem was that I was missing a colon in the resource type. Instead of using AWS::SSM::Parameter, I was using AWS:SSM::Parameter (note the single colon just after “AWS”). Looking at the error message again, I noticed that this was actually being hinted at, both in the regular expression and in the “Value” list.
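In template terms, the difference is a single character. Here’s a minimal illustration (the resource name and property values are made up, not the ones from my actual template):

Resources:
  ExampleParameter:
    # Type: AWS:SSM::Parameter    <- what I had: one colon after "AWS"
    Type: AWS::SSM::Parameter     # what it should be
    Properties:
      Type: String
      Value: example-value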
I know making good error messages takes effort, and for most developers it tends to be an afterthought. I’m just as guilty of this as anyone else. But if I could make just one suggestion on how this message could be improved, it would be to get rid of the list in “Value” and replace it with the one resource type that was actually failing validation. It would still be a relatively unhelpful error message, but at least it would indicate which part of the template was actually falling over.
In any case, if anyone else is seeing an error message like this when trying to roll out CloudFormation changes, check for missing colons in your resource types.
GitLab Search Subscriptions with NetNewsWire
I’m working (with others) on a project that’s using GitLab to host the code, and I’m looking for a better way to be notified of new merge requests that I need to review. I can’t rely on the emails from GitLab, as they tend to be sent for every little thing that happens on any of the merge requests I’m reviewing, so any notification sent by email will probably get missed. People do post new merge requests in a shared Slack channel, but the majority of them are for repos that don’t need my review. There have also been days where a lot of people are making a lot of changes at the same time, and any new messages for the repos I’m interested in would get pushed out of view.
Today I learnt that it’s possible to subscribe to searches in GitLab using RSS. So I’m trying something with NetNewsWire where I can subscribe to a search for open merge requests for the repos I’m interested in. I assume the way this works is that any new merge requests would result in a new RSS item on this feed, which will show up as an update in NetNewsWire. In theory, all I have to do is monitor NetNewsWire, and simply keep items unread until they’ve been merged or no longer need my attention.
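For anyone curious, the feed URL looks roughly like the following. Treat the exact shape as an assumption on my part, modelled on GitLab’s other Atom feeds; the reliable way to get it is to copy the feed link from GitLab itself:

https://gitlab.example.com/group/project/-/merge_requests.atom?state=opened&feed_token=<your-feed-token>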
We’ll see if this approach helps. The only downside is that there’s no way to get updates for a single merge request as an RSS feed, which would have been nice.
What Would Get Me Back to Using Twitter Again
Congratulations, Elon Musk, on your purchase of Twitter. I’m sure you’ve got a bunch of ideas of how you want to move the company forward. I was once a user of Twitter myself — albeit not a massive one — and I’m sure you would just love to know what it would take for me to be a user once more. Well, here’s some advice on how you can improve the platform in ways that would make me consider going back.
First, you gotta work out the business model. This is number one, as it touches on all the product decisions made to date. I think it’s clear that when it comes to Twitter, the advertising model is suboptimal. It just doesn’t have the scale, and the insatiable need for engagement is arguably one of the key reasons behind the product decisions that fuel the anxiety and outrage on the platform. I think the best thing you could do is drop ads completely and move to a different model. I don’t care what that model is. Subscription tiers; maybe a credit-based system where you have a prepaid account and it costs you money to send tweets based on their virality. Heck, you can fund it from your personal wealth for the rest of your life if you want. Just get rid of the ads.
Next, make it easy to know which actions result in a broadcast of intent. The big one I have in mind is unfollowing someone. I used to follow people that I worked with simply because I worked with them. But after a while I found that what they were tweeting was anxiety inducing. So I don’t want to follow them any more, but I don’t know what happens if I choose to unfollow them. Do they get a notification? They got one when I started following them — I know that because I got one when they started following me. So in the absence of any documentation (there might be documentation about this, I haven’t checked), I’d like to be able to stop following them without them being made aware of that fact. Note that this is not the same as muting or blocking them: they’re not being nasty or breaking any policies in what they post. I just want to stop seeing what they post.
Third, about open sourcing the algorithm. By all means, do so if you think that would help, but I think that’s only half the moderation story. The other half is removing all the attempts to drive up engagement, or at least having a way to turn them off. Examples include making it easier to turn off the algorithmic timeline, getting rid of or hiding “Trending Topics”, and no longer sticking news items in the notifications section (seriously, adding this crap to the notifications section has completely removed its utility to me). If I want the result to simply be a reverse-chronological timeline of tweets from people I’m following, and notifications only for events of people engaging with what I post, then please make it easy for me to have this. This might mean my usage becomes less about quantity and more about quality, but remember that you no longer need all that engagement. You changed the business model, remember?
Finally, let’s talk about all the features that drum up engagement. If it were up to me, I’d probably remove them completely, but I know that some people might find them useful, and it’s arguably a way for Twitter (now under your control) to, let’s say, “steer the direction of the conversation.” So if you must, keep these discovery features, but isolate them to a specific area of the app, maybe called “Discovery”. Put whatever you want in there — trending topics, promoted tweets, tweets made within a specific location — but keep them in that section, and only that section. My timeline must be completely devoid of this if I choose it to be.
I’m sure there are others I could think of, but I think all this is a good first step. I look forward to you taking this onboard, and I thank you for your consideration. Honestly, it might not be enough for me to go back. I wasn’t a big user before, and I’ve since moved to greener pastures. But who knows, maybe it will be. In any case, I believe that with these changes, Twitter as a platform would be more valuable, both with you at the helm and with me back there with my 10 or so followers and my posting rate of 25 tweets or so in the last eight years. 😉¹
¹ This wink is doing a lot of work.
Showing a File at a Specific Git Revision
To display the contents of a file at a given revision in Git, run the following command:
$ git show <revision>:<filename>
For example, to view the version of “README.md” on the dev branch:
$ git show dev:README.md
There is an alternative form of this command that will show the changes applied to that file as part of the commit:
$ git show <revision> -- <filename>
This can be used alongside the log command to work out what happened to a file that was deleted.
First, view the history of the file. You are interested in the ID of the commit before the one that deleted the file: attempting to run git show with the deletion commit ID will result in nothing being shown.
$ git log -- file/that/was/deleted
commit abc123
Author: The Deleter <deleter@example.com>
Date: XXX
Deleted this file. Ha ha ha!
commit beforeCommit
Author: File Changer <changer@example.com>
Date: XXX
Added a new file at file/that/was/deleted
Then, use git show to view the version before it was deleted:
$ git show beforeCommit:file/that/was/deleted
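As an aside, Git’s caret suffix means “the parent of this commit”, so you can also get there from the deletion commit ID without hunting for the earlier one:

$ git show abc123^:file/that/was/deleted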
Code Review Software Sucks. Here's How I Would Improve It
This post is about code reviews, and the software that facilitates them.
I’ll be honest: I’m not a huge fan of code reviews, so a lot of what I say below can probably be dismissed as coming from someone who blames their tools. Be that as it may, I do think there is room for improvement in the tooling used to review code, and this post touches on a few additional features which would help.
First I should say that I have no experience with dedicated code review tools. I’m mainly talking about code review tools that are part of hosted source code repository systems, like GitHub, GitLab, and Bitbucket. And since these are quite large and comprehensive systems, it might be that the priorities are different compared to a dedicated code review tool with a narrower focus. Pull requests and code reviews are just one of the many tasks that these systems need to handle, along with browsing code repositories, managing CI/CD runs, hosting binary releases, etc.
So I think it’s fair to say that such tools may not have the depth that a dedicated code review tool would have. After all, GitHub Actions does not have the same level of sophistication as something like Jenkins or Buildkite, either.
But even so, I’d say that there’s still room for improvement in the code review facilities that these systems do offer. Improvements that could be tailored more to the code review workflow. It’s a bit like using a text editor to manage your reminders. Yeah, you can use a text editor, but most of the features related specifically to reminders will not be available to you, and you’ll have to plug the feature gap yourself. Compare this to a dedicated reminder app, which would do a lot of the work for you, such as notifying you of upcoming reminders or having the means to mark an item as done.
So, what should be improved in the software that is used to review code? I can think of a few things:
Inbox: When you think about it, code reviews are a bit like emails and Jira tickets: they come to you and require you to action them in some way in order to get the code merged. But the level of attention you need to give them changes over time. If you’ve made comments on a review, there’s really no need to look at it again until the code has been updated or the author has replied.
But this dynamic aspect of code reviews is not well reflected in most of these systems. Usually what I see is simply a list of pull requests that have not yet been approved or merged, and I have to keep track myself of the reviews that need my attention now, versus those where I can probably wait for action from others.
I think what would be better than a simple list would be something more of an inbox: a subset of the open reviews that are important to me now. As I action them, they’ll drop off the list and won’t come back until I need to action them again.
The types of reviews that I’d like to appear in the inbox, in the order listed below, would be the following:
- Ones that have been opened in which I’ve made comments that have been responded to — either by a code change or a reply — that I need to look at. The ones with more responses would appear higher in the list than the ones with fewer.
- Reviews that are brand new that I haven’t looked at yet, but others have.
- Brand new reviews that haven’t been looked at by anyone.
In fact, the list can be extended to filter out reviews that I don’t need to worry about, such as:
- Reviews that I’ve made comments on that have not been responded to yet. This indicates that the author has not gotten around to them yet, in which case looking at the pull request again serves no purpose.
- Reviews that have enough approvals from others and do not necessarily need mine.
- Reviews that I’ve approved.
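To make this a little more concrete, here’s a hypothetical sketch of how such an inbox might rank things. None of these fields or weights come from a real GitHub or GitLab API; they’re all made up for illustration:

package main

import "fmt"

// Review is a made-up summary of a pull request's review state.
type Review struct {
	CommentedByMe     bool // I've left comments on this review
	RepliesToMe       int  // code changes or replies since my comments
	SeenByMe          bool
	SeenByAnyone      bool
	ApprovalsByOthers int
	RequiredApprovals int
	ApprovedByMe      bool
}

// inboxRank returns a sort rank for reviews that need my attention
// now, or -1 if the review can be left out of the inbox entirely.
func inboxRank(r Review) int {
	switch {
	case r.ApprovedByMe, r.ApprovalsByOthers >= r.RequiredApprovals:
		return -1 // already approved, or has enough approvals without mine
	case r.CommentedByMe && r.RepliesToMe == 0:
		return -1 // waiting on the author; nothing for me to do yet
	case r.CommentedByMe:
		return 100 + r.RepliesToMe // responded-to comments; more responses rank higher
	case !r.SeenByMe && r.SeenByAnyone:
		return 50 // brand new to me, though others have looked
	case !r.SeenByMe:
		return 25 // brand new to everyone
	default:
		return 0
	}
}

func main() {
	r := Review{CommentedByMe: true, RepliesToMe: 2, RequiredApprovals: 2}
	fmt.Println(inboxRank(r)) // prints 102
}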
This doesn’t necessarily need to replace the list of open reviews: that might still be useful. But it would no longer be the primary list of reviews I need to work with day to day.
Approval pending resolution of comments: One thing I always find myself indecisive about is when I should hit that Approve button. Let’s say I’ve gone through the code, made some comments that I’d like the submitter to look at, but the rest of the code looks good. When should I approve the pull request? If I do it now, then the author may not have seen the comments, or any indication that I’d like them to make changes, and will go ahead and merge.
I guess then the best time to approve it is when the changes are made. But that means the onus is on me to remember to review the changes again. If the requests are trivial — such as renaming things — I’d trust the person to make the changes, and going through to review them once again is a waste of time.
This is where “Approval pending resolution of comments” would come in handy. Selecting this approval mode would mean that my approval is granted once the author has resolved the outstanding review comments. This would not replace the regular approval mode: if there are changes which do require a re-review, I’d just approve normally once I’ve gone through it again. But it’s one more way to let the workflow of code reviews work in my favour.
Speaking of review comments…
Review comment types: I think it’s a mistake to assume that all review comments are equal. Certainly in my experience I find myself unable to read the urgency of comments on the reviews I submit. I also find it difficult to telegraph that urgency in the comments I make on the code reviews of others. This usually results in longer comments with phrases such as “you don’t have to do this now, but…”, or “something to consider in the future…”
Some indication of the urgency of the comment alongside the comment itself would be nice. I can think of a system that has at least three levels:
- Request for change: this is the highest level. It’s an indication that you see something wrong with the code that must be changed. These comments would need to be resolved with either a change to the code or a discussion of some sort, but they need to be resolved before the code is merged.
- Request for improvement: this is a level lower, and indicates that there is something in the code that may need to be changed, but not doing so would not block the review. This can be used to suggest improvements to how things were done, or maybe to suggest an alternative approach to solving the problem. All those nitpicking comments can go here.
- Comment: this is the lowest level. It provides a way to make remarks about the code that require no further action from the author. Uses for this might be praise for doing something a certain way, or FYI-type comments that the submitter may need to be aware of for future changes.
Notes to self: Finally, one thing that way too few systems dealing with shared data offer is the ability to annotate pull requests, or the files commented on, with private notes. These wouldn’t be seen by the author or any of the other reviewers, and are only there to facilitate making notes to self, like “Looked at it, waiting for comments to be addressed”, or “review no longer pending”. This is probably the minimum, and would matter less if the other suggestions above were addressed.
So that’s how I’d improve code review software. It may be that I’m the only one with this problem, and that others are perfectly able to review code effectively without these features. But I know they would work for me, and if I start seeing them in services like GitHub or GitLab, I probably would start using them.
Learning Through Video
Mike Crittenden wrote a post this morning about how he hates learning through videos. I know for myself that I occasionally do prefer videos for learning new things, but not always.
Usually if I need to learn something, it would be some new technology that I have to know for my job. In those cases, I find that if I have absolutely no experience in the subject matter, a good video which provides a decent overview of the major concepts helps me a great deal. Trying to learn the same thing from reading a lengthy blog post, especially one heavy on jargon, is less effective for me. I find myself getting tired and losing my place. Now, this could just be because of the writing — dry blocks of text are the worst — but I tend to do better if the posts are shorter and formulated more like a tutorial.
If there is a video, I generally prefer it to be delivered in the style of a lecture or presentation. Slides that I can look at while the presenter is speaking are fine, but motion graphics or a live demo are better, especially if the subject is complex enough to warrant them. But in either case, I need something visual that I can actually watch. Having someone simply talk to the camera really doesn’t work for me, and makes watching the video more of a hassle (although it’s slightly better if I just listen to the audio).
Once I’ve become proficient in the basics, learning through video becomes less useful to me, and a decent blog post or documentation page works better. By that time, my learning needs become less about the basics and more about something specific, like how to do a particular thing or the details of a particular item. At that point, speed is more important to me, and I prefer to have something that I can skim and search in my own time, rather than watch videos that tend to take much longer.
So that’s how and when I prefer to learn something from video. I’ll close by saying that this is my preferred approach when I need to learn something for work. If it’s during my downtime, either a video or a blog post is fine, so long as my curiosity is satisfied.
Time and Money
Spending a lot of time in Stripe recently. It’s a fantastic payment gateway and a pleasure to use, compared to something like PayPal, which really does show its age.
But it’s so stressful and confusing dealing with money and subscriptions. The biggest uncertainty is dealing with anything that takes time. The problem I’m facing now is this: if the customer buys something like a database, which is billed a flat fee every month, and then they buy another database during the billing period, can I track that with a single subscription and simply adjust the quantity? My current research suggests that I can, and that Stripe will handle the prorating of partial payments and credits. They even have a nice API to preview the next invoice, which can be used to show the customer how much they will be paying.
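As a sanity check, this is roughly how the preview can be requested (a sketch with placeholder IDs, assuming I’m reading the upcoming-invoice endpoint’s parameters correctly):

$ curl -G https://api.stripe.com/v1/invoices/upcoming \
    -u "sk_test_your_key:" \
    -d customer=cus_XXX \
    -d subscription=sub_XXX \
    -d "subscription_items[0][id]=si_XXX" \
    -d "subscription_items[0][quantity]=2"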
But despite all the documentation, test environments, and simulations, I still can’t be sure that it will work the same in real life, when real money is exchanged in real time. I guess some real-life testing will be required. 💸
Cling Wrap
I bought this roll of cling wrap when I moved into my current place. Now, after 6.5 years and 150 metres, it’s finally all used up.
In the grand scheme of things, this is pretty unimportant. It happens every day: people buy something, they use it, and eventually it’s all used up. Why spend the time and energy writing and publishing this post to discuss it? Don’t you have better things to do?
And yet, there’s still a feeling of weight to this particular event that I felt was worth documenting. Perhaps it’s because it was the first roll of cling wrap I bought after I moved out. Or maybe it’s because it lasted for this long, so long in fact that the roll I bought to replace it was sitting in my cupboard for over a year. Or maybe it’s the realisation that with my current age and consumption patterns, I probably wouldn’t use up more than 7 rolls like this in my lifetime.
Who knows? All I know is that despite the banality of the whole affair, I just spent the better part of 20 minutes trying to work out how best to talk about it here.
I guess I’m in a bit of a reflective mood today.
Trip to Ballarat and the Beer Festival
I had the opportunity to go to Ballarat yesterday to attend the beer festival with a couple of mates. It’s been a while since I last travelled to Ballarat — I think the last time was when I was a kid. It was also the first time I took the train up there. I’d wanted to travel the Ballarat line for a while, but I never had a real reason to do so.
The festival started at noon but I thought I’d travel up there earlier to look around the city for a while.
I didn’t stay long in the city centre as I needed to take the train to Wendouree, where the festival was located.
The beer festival itself was at Wendouree park. The layout of the place was good: vendors (breweries, food, etc.) were laid out along the perimeter, and general seating was available in the middle. They did really well with the seating: there were more than enough tables and chairs for everyone there.
The day was spectacular, if a bit sunny: the tables and chairs in the shade were prime real estate. The whole atmosphere was pleasant: everyone was just out to have a nice time. It got pretty crowded as the day wore on. Lots of people with dogs, and a few families as well.
I’m not a massive beer connoisseur so I won’t talk much about the beers. Honestly, the trip for me was more of a chance to get out of the city and catch up with mates. But I did try a pear cider for the first time, which was a little on the sweet side, though I guess that was to be expected. I also had a Peach Melba inspired pale ale that was actually kind of nice.
The trip home was a bit of an adventure. A train was waiting at Wendouree station when I got there. There was nobody around and it was about 5 minutes until departure, so I figured I’d board. Turns out it was not taking passengers. I was the only one that boarded, and by the time I realised that it was not in service, the doors had closed and the train had departed. I had to make my presence known to the driver and one other V/Line worker. They were really nice about it, and fortunately for me, they were on their way to Ballarat anyway, so it wasn’t a major issue. Even so, it was quite embarrassing. Fortunately, the train home after that was easy enough.
OS Vendors and Online Accounts
Looks like the next version of Windows will require an online account, and while the reason for this could be something else, I’m guessing this would be used to enable file sync, mail account sync, calendar sync, etc.
I think it’s a mistake for OS vendors to assume that people would want to share their sole online identity across different devices. Say that I had a work computer and a home computer, and I’d use the same online account for both. Do I really want my personal files and work files being synced across, or my scheduled meetings to start showing up in my personal calendar?
I guess the response would be to create two online accounts: one for work and one for home. This might be possible: I don’t know how difficult it would be to create multiple Microsoft accounts for the same person. But if I do this¹, and there’s software that I’ve purchased with my home account that I’d like to use on my work device, I’d have to repurchase it. I guess if I’m employed full time it should be work purchasing the software, but come on, am I really going to go through the whole procurement bureaucracy to buy something like a $29 image editor?
This could all be theoretical: it might be that this wouldn’t be a problem for Windows users. But I know from my limited experience with macOS that issues can crop up from the assumption that everything associated with an online account should be shared on every device. That’s why I don’t open Mail.app on my home computer.
¹ This is all hypothetical. I’m not a Windows user.
My YouTube Watching Setup
I’m not a sophisticated YouTube watcher, but I do watch a lot of YouTube. For a while I was happy enough to simply use the YouTube app with a Chromecast. Yes, there were ads, but the experience was nice enough that I tolerated them.
Recently, however, this became untenable.
It started with Google deciding to replace their simple Chromecast target with a Google TV style app, complete with a list of video recommendations I had no interest in watching. This redesign also came with more ads, which by themselves would have been annoying enough. But with this year being an election year, I started seeing campaign ads from a political party I have absolutely zero interest in seeing ads from. Naturally, Google being Google, there was no way for me to block them¹. I guess I could have just paid to remove the ads, but that wouldn’t solve the Chromecast problem. Besides, paying for something that is arguably not a great use of my time felt wrong. I felt that a bit of friction in my YouTube watching habits wouldn’t be a bad thing to introduce.
It was time to consider an alternative setup.
Plex
Taking inspiration from those on Micro.blog and certain podcasters, I decided to give Plex a go. I had an Intel NUC that I purchased a few years ago that I wasn’t using, and it seemed like a good enough machine for a Plex server. The NUC is decent enough, but it’s a little loud, and I didn’t want it anywhere near where I usually spend my time. It’s currently in a wardrobe in my spare bedroom.
After upgrading it to Ubuntu 20.04 LTS, I installed the Plex Media Server. I had to create a Plex account, which was a little annoying, but after doing so, I was able to set up a new library for YouTube videos relatively easily. I configured the library to rescan every hour, which would come in handy for the next part of this setup.
I also installed the Plex app on my Android phone to act as the media player. The app has support for Chromecast, which is my preferred setup. Getting the app to talk to the media server was a little fiddly. I can’t remember all the details as it was a couple of months ago, but I do remember it taking several attempts before the app was listing videos in the library. But once the link was established, it became quite easy to play downloaded videos on my TV. I’ll have more to say about the app near the end of the post.
youtube-dl and Broadtail
Once Plex was set up, I needed a way to download the YouTube videos. I was hoping to use youtube-dl, but the idea of SSH’ing into the media server to do so was unappealing. I was also aware that it’s possible to subscribe to YouTube channels via RSS, which is my preferred way to be notified of new content. I tend not to subscribe to channels within YouTube itself, as I’d rather Google didn’t know too much about my viewing preferences (sorry, YouTubers).
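These feeds live at well-known URLs, keyed by channel or playlist ID:

https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>
https://www.youtube.com/feeds/videos.xml?playlist_id=<PLAYLIST_ID>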
I figured that a small web-app running alongside Plex, one that would let me subscribe to YouTube RSS feeds and download the videos to the Plex library using youtube-dl, would be ideal. I’m sure such applications already exist, but I decided to build my own.
So I built a small Go web-app to do this. I called it Broadtail, mainly because I’m using bird-related terms for working project names and I couldn’t think of anything better. It’s pretty basic, and it is ugly as sin, but it does the job.
I can set up an RSS subscription to YouTube channels and playlists, which it will periodically poll and store in a small embedded database. I can get a list of videos for each feed I’ve subscribed to, and if one looks interesting, I can start a download from the UI. The app will run the appropriate youtube-dl incantation and provide a running status update, with some really basic job controls.
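The incantation is something along these lines (the format selection and output path here are illustrative):

$ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4' \
    -o '/path/to/plex/library/%(title)s.%(ext)s' \
    'https://www.youtube.com/watch?v=VIDEO_ID'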
The downloaded videos are saved as MP4s in a directory configured as a Plex library. The hourly scan will pick them up, although I occasionally need to trigger a rescan manually if the video was downloaded relatively recently. During the day, I look for any new videos which look interesting and start downloads in Broadtail. The videos are (usually) ready and available in Plex by evening. The only exceptions are videos that are 3 to 4 hours long, which usually take around a day to download thanks to YouTube’s throttling.
How It’s Working Out
Putting this together took roughly a month or so, and I’ve been using it for my YouTube viewing for a couple of months now. In general, it’s working OK. The Plex media server is working quite well, as is the Plex mobile app. Broadtail is pretty bare bones but I’ve been slowly making changes to it over time as my needs evolve.
There are a few annoyances though. One large one is that the Plex app for Android is a little buggy. It gets into a state in which it is unable to start playback of a video, and the only way I know of fixing this is by rebooting the Chromecast device. This is really annoying, and it’s gotten to the point where I’m doing this almost daily. I contemplated setting the Chromecast up on a smart plug so that I could force a restart simply by killing power to it in the middle of the night. It hasn’t quite come to that, but if Plex doesn’t fix their app soon, I think I may go ahead with it.
Also annoying is that the Plex app will sometimes lose its connection with the media server and will not list the contents of my library. Fortunately, a restart of the mobile app is enough to resolve this.
As for the Intel NUC itself, there have been instances where it seems to lock up and I’ve had to hard power it down. I don’t know what’s causing this. It could be that either Plex or Broadtail is causing a kernel panic of sorts, or it could be something in the NUC itself: it’s reasonably low-cost hardware that is tailored more for Windows. I may eventually replace the NUC with the Mac Mini I’m currently using as a desktop, once it’s time to upgrade.
But all in all, I think this is working for me. Not seeing any ads or crappy recommendations is a major win, and it’s also nice to actually run out of things to watch, forcing me to do something productive. Sometimes I question whether the time it took to set this all up was worth it. Maybe, maybe not. But it feels a little better having something more in my control than simply paying YouTube to remove the ads.
Finally, if Broadtail sounds interesting to you, it’s available on GitHub. I’ve only recently open-sourced it, so a lot of things are missing, like decent documentation (it only got a README today). So please consider it in a bit of a “here be dragons” state at the moment. But if you have any questions, feel free to contact me.
¹ Hey Google: having a way to indicate zero interest in seeing ads from someone is a signal of intent. Consider making this option available to us, and you get more info for your user profiles.
Reminder That Your Content Isn't Really Yours on Medium #3
Looks like Medium has had a redesign recently, with recommended posts now being featured more prominently. Instead of appearing at the end of the post, they’re now in a right-hand sidebar that doesn’t scroll, sitting directly below the author of the post you’re reading.
And let me be clear: as far as I can tell, these are not recommendations from the same author. They can be from anyone, covering any topic that I can only assume Medium algorithmically thinks you’d be interested in. It reminds me a lot of the anxiety supplier that is Twitter Trending Topics.
Thank goodness. Here I was, reading someone’s post on UI design, without being made aware of another post by a different author informing me that NFTs have been superseded by “Super NFTs”, or being constantly reminded of it whenever I move my eyes slightly to the right. Thank you for that, Medium. My reading experience has been dramatically improved! (Sarcasm test complete.)
Honestly, I’m still wondering why people choose to use Medium for publishing long-form writing. And yes, I acknowledge that it could be worse: their “post” could just as easily have been a Twitter thread¹. But from this latest redesign, it seems to me that Medium is doing its best to close the reading experience gap between the two services.
¹ Please don’t publish your long form writing as a Twitter thread.
The "Too Much Data" Error in Buffalo Projects
If there’s anyone else out there using Buffalo to build web-apps, I just discovered that it doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.
The tell-tale sign is this error message when you try to run the application:
too much data in section SDWARFSECT (over 2e+09 bytes)
If you see that, deleting public/assets should solve your problem.
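That is, from the project root (assuming, as in a standard Buffalo setup, that nothing else of value lives in that directory; the bundle gets regenerated on the next build):

$ rm -rf public/assets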
On Posting Daily
I recently listened to an interview with Seth Godin on the Tim Ferriss podcast. In that interview, Seth mentions that he writes up to five blog posts a day. He just doesn’t publish them all. I guess that means he has at least one or two drafts that can be touched up and published when he needs them.
Although I don’t think of this blog as being anywhere near the quality of Seth’s, I think I’d like to start trying to publish on this site at least once a day. I don’t post to any specific schedule here, and there have been stretches of days in which this blog has not seen an update at all. But over the last week, I’ve found myself falling into a streak, and I’d like to see how long I can maintain it.
The thing that has thwarted me in the past (apart from not even thinking about it) was either not being in the right frame of mind or not being available that day to post something. I’m not sure this blog warrants the discipline of setting a specific time each day to sit down and write something. I treat this blog more or less like a public journal: a place to document thoughts, opinions, or events of the day.
But I’m wondering if maintaining an inventory of unpublished drafts might help in maintaining this streak. So even though the goal is to write and publish a post on the same day, having something to fall back on when I can’t might be worthwhile.
The Future of Computing
I got into computers when I was quite young, and to satisfy my interest, I read a lot of books about computing during my primary school years. I remember one such book that included a discussion about how computing could evolve in the future.
The book approached the topic using a narrative set in a “future” scenario that would roughly correspond to the present day. In that story, the protagonist was late for school because of a fault with the “home computer” regarding the setting of the thermostat or something similar. Upon arriving home from school, he interacted with the computer by speaking to it as if he were talking to another person, expressing his anger about the events of that morning in full, natural-language sentences. The computer responded in kind.
This book was published at a time when most personal computing involved typing in BASIC programs, so you can imagine that a bit of creative license was taken in the discussion. But I remember reading this and being quite ambivalent about this prospective future. I could not imagine the idea of central computers being installed in houses and controlling all aspects of their environment. Furthermore, I balked at the idea of people choosing to interact with these computers using natural language. I’m not much of a people person, so the idea of speaking to a computer as if it were another person, and having to deal with the computer speaking back, was not attractive to me.
Such is the feeling I have now with the idea of anyone wanting to put on AR and VR headsets. This seems to be the current focus of tech companies like Apple and Google, trying to find the successor to the smartphone. And although nothing from these companies has been announced yet, and these technologies have yet to escape the niche of gaming, I still cannot see a future in which people walk around with these headsets outside in public. Maybe with AR, if it can be done with a device that looks like a pair of regular glasses, but VR? No way.
But as soon as I reflected on those feelings, that book I read all those years ago came back to me. As you can probably guess, the future predicted in that story has more or less become reality, with the rise of the cloud, home automation, and smart speakers like the Amazon Echo. And more than that, people are using these systems and liking them, or at least putting up with them.
So the same might happen with AR and VR headsets. I should probably stay out of the future-predicting business.
On the Moxie Marlinspike Post About web3
Today, I took a look at the Moxie Marlinspike post about web3¹. I found it interesting for a variety of reasons, not least because, unlike many other posts on the subject, it was level-headed and came from a position of wanting to learn more rather than to persuade (or hustle). Well worth the read, especially for those that are turned off by the whole web3 crap like I am.
Anyway, there were a few things from the post that I found amusing. The first, and by far the most shocking, was that the “object” of an NFT is not derived from the actual item in question, like the artwork image or the music audio. It’s essentially just a URL. And not even a URL with an associated hash. Just a plain old URL, as in “example.com”, which points to a resource on the internet that can be changed or removed at any time. Not really conducive to the idea of digital ownership if the thing that you “own” is just a pointer to something else that you don’t actually control.
Also amusing was the revelation that for a majority of these so-called “distributed apps”, the “distributed” part is a bit of a misnomer. They might be using a blockchain to handle state, but many of the apps themselves do so by calling regular API services. They don’t run their own blockchain, or even a node on an existing blockchain, which is what I assumed they were doing. I could achieve the same thing without a blockchain if I made the database I use for my apps public and published the API keys (yes, I’m being facetious).
The final thing I found amusing was that many of these platforms are building features that don’t use the blockchain at all. Moxie made the excellent point that the speed at which a protocol evolves, especially one that is distributed by design, is usually very slow. Likely too slow if you’re trying to add features to a platform in an attempt to make it attractive to users. So services like OpenSea are sometimes bypassing the blockchain altogether, and just adding proprietary features backed by regular data stores like Firebase. Seems to me this undermines the very idea of web3 itself.
So given these three revelations, what can we conclude from all the web3 rhetoric that’s currently out there? That, I’ll leave up to you. I have my own opinions, which I hope come through in the tone of this post.
I’ll close by saying that I think the most insightful thing I learnt from the post had nothing to do with web3 at all. It was the point that the reason Web 2 came about was that people didn’t want to run their own servers, and never will. This is actually quite obvious now that I think about it.
¹ Ben Thompson wrote a terrific post about it as well.
Burnt Out on Design
I’ve been doing a heap of design work at my job at the moment: writing documents, drawing up architecture diagrams, etc. I’d thought I would like this sort of work, but I realise now that I can only tolerate it in small doses. Doing it for as long as I have been is burning me out slightly. I’d just like to go back to coding.
I’m wondering why this is. I think the biggest feeling I have is that it feels like I’m not delivering value. I understand the need to get some sort of design up so that tasks can be written up and allocated. But a big problem is the feeling that everything needs to be in the design upfront, waterfall style, whereas the method I’d prefer is to have a basic design upfront — something that we can start work on — which can be iterated on and augmented over time.
I guess my preference for having something built over having something perfect on paper differs from those I work with. Given that my current employer specialises more in hardware design, I can understand that line of thinking.
I’m also guessing that software architecture is not for me.
Still Off Twitter
A little while ago, I stopped using Twitter on a daily basis as the continuous barrage of news was getting me down. Six weeks after doing so, I wrote a post about it. Those six weeks have now become six months, and I can say I’m still off Twitter and have no immediate intention of going back.
My anxiety levels have dropped since getting off¹, and although they’ve not completely gone, the baseline has remained low, with occasional spikes that soon subside. But the best thing is that the time I would have spent reading Twitter I now spend reading stuff that would have taken longer than 30 seconds to write. Things like books, blog posts, and long-form articles (and Micro.blog posts, I always have time for those). It feels like the balance of my information diet has centred somewhat. I still occasionally read the news (although I stay away from the commercial news sources), but I try not to spend too much time on it. Most things I don’t need to know about in real time: if I learn about something the following day, it’s no big deal.
I’m also seeing more and more people making the same choice I’ve made. The continuous stream of news on Twitter is just becoming too much for them, and they want off. I think Timo Koola’s post sums it up pretty well:
I wonder how much studies there are about harmfulness of following the news too closely? I don’t think our minds were made for constant bombardment of distressing things we can’t do anything about.
It’s not healthy being constantly reminded of events going on, most of them undesirable, that you can’t change. Better for myself that I spend my attention on things that interest me and help me grow.
¹ It’s amusing that the language I found myself using for this post sounds like I’m recovering from some form of substance abuse. I’m guessing the addictive nature of Twitter and its ilk is not too different.
100 Day Writing Streak
I promise I won’t post about every single milestone that comes along, but I’m quite happy that I reached 100 consecutive days of at least one blog post or journal entry.
On Treating Users As If They're Just There To Buy Stuff
Ars Technica has published a third post in as many days about the annoying user experience of Microsoft Edge. Today’s was about a notice that appears when the user tries to use Edge to download Chrome: a notice displayed by the browser itself whenever the user opens the Chrome download page.
Now, setting aside the fact that these notices shouldn’t be shown to the user at all, what got my goat was the copy that appears in one of them:
‘I hate saving money,’ said no one ever. Microsoft Edge is the best browser for online shopping.
What is with this copy? Do they assume that all users do with their computers is buy stuff? That their only motivation for using a browser at all is to participate in rampant consumerism?
I’m not a Microsoft Edge user, so it’s probably not worth my time to comment on this. But what bothers me is that I’m seeing a trend suggesting that large software companies think their users are only using their devices to consume stuff. This might be true of the majority — I really don’t know — but the problem is that this line of thinking starts to bleed into their product decisions, and reveals the lengths they will go to to extract more money from these users. I’m going on about Edge here, but Apple does the same thing in their OSes: showing notifications for TV+ or Apple Music or whatever service they’re trying to flog onto their customers this month. At least with web companies like Google, Twitter, and Meta (née Facebook 😒), we get to use the service for free.
I know software is expensive to build and maintain, etc., etc. But this mode of thinking is so sleazy it’s becoming insulting. It just makes the experience of using the product worse all around, like going to a “free” event knowing you’ll be pushed to buy something. Is this how these software companies want their users to feel?