Long Form Posts

    Afternoon Walk Around Lake Ginninderra

    Went for a walk around Lake Ginninderra this afternoon. Well, not “around” the lake: that walk would have taken a while. But I did walk along the path that would take me around the lake for about 30 minutes, then walked back again. Below are a few photos I took.

    My Evening

    So here’s how I spent my evening:

    Watching the WWDC State of the Union until the DNS resolver conked out in the WiFi router, causing the Chromecast to get into a state in which it could no longer connect to the network, resulting in about 10 minutes of troubleshooting before deciding to clean up, not go to the gym, spend another 10 minutes trying to troubleshoot the issue, then stare at my laptop for about half an hour wondering whether to go back to troubleshooting the Chromecast, or do something else in the hope that it would eventually work itself out.

    Eventually, after another 5 minutes of fruitless troubleshooting, I finally got the Chromecast fixed by doing a factory reset and connecting it to the 2.4 GHz band.

    Anyway, I hope your evening was more productive than mine.

    (And I was worried I would have nothing to write about today.)

    The Powerline Track Walk

    Went on a walk of the Powerline Track, which I was personally calling the “powerline walk” (yes, I’m impressed at how close I was). I saw this trail when I was in Canberra earlier this year, and knowing that I would be back, I made a note to actually walk it, which I did today. This track follows the powerlines just south of Aranda Bushland Nature Reserve, then goes under Gungahlin Drive and into the Black Mountain Nature Reserve. The weather was cold but pleasant, at least at the start of the track. It eventually got quite dark and a little wet near the end, but that did result in some nice winter lighting over the landscape.

    Here’s a gallery of some of the photos I took. Note that there are a fair few of powerlines, which are something I’ve been drawn to ever since I was a little kid.

    Humour In Conference Videos — Less Is More

    It might be just me, but I get a little put off by over-the-top attempts at humour in developer conference videos.

    I’m four minutes into a conference video which has already included some slapstick humour (with cheesy CGI), and someone trying to pitch me on why what they’re talking about is worth listening to. This was done in such a way that it actually distracted me from the content, a.k.a. the reason why I’m watching it.

    This sort of thing is really a turn-off, almost to the point where I feel like turning it off. I don’t think it helps you that much either. If you open your talk by pretending to get zapped by a piece of lab equipment, I’m probably not going to assume the same level of sincerity in your presentation as I would for someone who is just trying to get their message across.

    I like a joke as much as the next person, and one or two small, well-contained jokes, like substituted words in the slide pack, are fine. But humour really needs to be dished out in small doses, and it really shouldn’t distract from the content. Less (and much less than you think) is more, in my opinion.

    CloudFormation "ValidationError at typeNameList" Errors

    I was editing some CloudFormation today, and when I tried to deploy it, I was getting this lengthy, unhelpful error message:

    An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value '[AWS:SSM::Parameter, AWS::SNS::Topic]' at 'typeNameList' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 204, Member must have length greater than or equal to 10, Member must satisfy regular expression pattern: [A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}(::MODULE){0,1}]

    It was only showing up when I tried adding a new SSM parameter resource to the template, so I first thought it was some weird problem with the parameter name or value. But after changing both to something that would work, I was still seeing this.

    Turns out the problem was that I was missing a colon in the resource type. Instead of using AWS::SSM::Parameter, I was using AWS:SSM::Parameter (note the single colon just after “AWS”). Looking at the error message again, I noticed that this was actually being hinted at, both in the regular expression and the “Value” list.
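
    For illustration, here’s the shape of the fix in a template. This is a minimal sketch with hypothetical resource names and values, not the template I was working on:

    # Broken: note the single colon after "AWS"
    #     Type: AWS:SSM::Parameter
    #
    # Fixed: two colons between every segment
    Resources:
      MyParameter:
        Type: AWS::SSM::Parameter
        Properties:
          Name: /example/parameter
          Type: String
          Value: example-value
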

    I know making good error messages takes effort, and for most developers this tends to be an afterthought. I’m just as guilty of this as anyone else. But if I could make just one suggestion on how this message could be improved, it would be to get rid of the list in “Value” and replace it with the resource type that was actually failing validation. It would still be a relatively unhelpful error message, but at least it would indicate which part of the template was actually falling over.

    In any case, if anyone else is seeing an error message like this when trying to roll out CloudFormation changes, check for missing colons in your resource types.

    GitLab Search Subscriptions with NetNewsWire

    I’m working (with others) on a project that’s using GitLab to host the code, and I’m looking for a better way to be notified of new merge requests that I need to review. I cannot rely on the emails from GitLab, as they tend to be sent for every little thing that happens on any of the merge requests I am reviewing, so any notifications sent by email will probably be missed. People do post new merge requests in a shared Slack channel, but the majority of them are for repos that don’t need my review. There have also been days where a lot of people are making a lot of changes at the same time, and any new messages for the repos I’m interested in would get pushed out of view.

    Today I learnt that it’s possible to subscribe to searches in GitLab using RSS. So I’m trying something with NetNewsWire where I can subscribe to a search for open merge requests for the repos I’m interested in. I assume the way this works is that any new merge requests would result in a new RSS item on this feed, which will show up as an update in NetNewsWire. In theory, all I have to do is monitor NetNewsWire, and simply keep items unread until they’ve been merged or no longer need my attention.

    We’ll see if this approach helps. The only downside is that there’s no way to get updates for a single merge request as an RSS feed, which would have been nice.

    What Would Get Me Back to Using Twitter Again

    Congratulations, Elon Musk, on your purchase of Twitter. I’m sure you’ve got a bunch of ideas of how you want to move the company forward. I was once a user of Twitter myself — albeit not a massive one — and I’m sure you would just love to know what it would take for me to be a user once more. Well, here’s some advice on how you can improve the platform in ways that would make me consider going back.

    First, you gotta work out the business model. This is number one as it touches on all the product decisions made to date. I think it’s clear that when it comes to Twitter, the advertising model is suboptimal. It just doesn’t have the scale, and the insatiable need for engagement is arguably one of the key reasons behind the product decisions that fuel the anxiety and outrage on the platform. I think the best thing you could do is drop ads completely and move to a different model. I don’t care what that model is. Subscription tiers; maybe a credit-based system where you have a prepaid account and it costs you money to send Tweets based on their virality. Heck, you can fund it from your personal wealth for the rest of your life if you want. Just get rid of the ads.

    Next, make it easy to know which actions result in a broadcast of intent. The big one I have in mind is unfollowing someone. I used to follow people that I worked with simply because I worked with them. But after a while I found that what they were tweeting was anxiety inducing. So I don’t want to follow them any more, but I don’t know what happens if I choose to unfollow them. Do they get a notification? They got one when I started following them — I know that because I got one when they started following me. So in lieu of any documentation (there might be documentation about this, I haven’t checked), I’d like to be able to stop following them without them being made aware of that fact. Note that this is not the same as muting or blocking them: they’re not being nasty or breaking any policies with what they post. I just want to stop seeing what they post.

    Third, about open sourcing that algorithm. By all means, do so if you think that would help, but I think that’s only half the moderation story. The other half is removing all the attempts to drive up engagement, or at least having a way to turn them off. Examples include making it easier to turn off the algorithmic timeline, getting rid of or hiding the “Trending Topics”, and no longer sticking news items in the notification section (seriously, adding this crap to the notification section has completely removed its utility to me). If I want the result to simply be a reverse chronological timeline of tweets from people I’m following, and notifications to only be events of people engaging with what I post, then please make it easy for me to have this. This might mean my usage becomes less about quantity and more about quality, but remember that you no longer need all that engagement. You changed the business model, remember?

    Finally, let’s talk about all the features that drum up engagement. If it were up to me, I’d probably remove them completely, but I know that some people might find them useful, and it’s arguably a way for Twitter (now under your control) to, let’s say, “steer the direction of the conversation.” So if you must, keep these discovery features, but isolate them to a specific area of the app, maybe called “Discovery”. Put whatever you want in there — trending topics, promoted tweets, tweets made within a specific location — but keep them in that section, and only that section. My timeline must be completely devoid of this if I choose it to be.

    I’m sure there are other things I could think of, but I think all this is a good first step. I look forward to you taking this onboard, and I thank you for your consideration. Honestly, it might not be enough for me to go back. I wasn’t a big user before, and I’ve since moved to greener pastures. But who knows, maybe it will be. In any case, I believe that with these changes, Twitter as a platform would be more valuable, both with you at the helm, and with me back there with my 10 or so followers and my posting rate of 25 tweets or so in the last eight years. 😉 1


    1. This wink is doing a lot of work. ↩︎

    Showing a File at a Specific Git Revision

    To display the contents of a file at a given revision in Git, run the following command:

    $ git show <revision>:<filename>
    

    For example, to view the version of “README.md” on the dev branch:

    $ git show dev:README.md
    

    There is an alternative form of this command that will show the changes applied to that file as part of the commit:

    $ git show <revision> -- <filename>
    

    This can be used alongside the log command to work out what happened to a file that was deleted.

    First, view the history of the file. You are interested in the ID of the commit just before the one that deleted the file: attempting to run git show using the deletion commit ID will result in nothing being shown.

    $ git log -- file/that/was/deleted
    commit abc123
    Author: The Deleter <deleter@example.com>
    Date:   XXX
    
        Deleted this file.  Ha ha ha!
    
    commit beforeCommit
    Author: File Changer <changer@example.com>
    Date:   XXX
    
        Added a new file at file/that/was/deleted
    

    Then, use git show to view the version before it was deleted:

    $ git show beforeCommit:file/that/was/deleted
    
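    Alternatively, Git’s revision syntax lets you refer to the parent of the deleting commit directly with a caret suffix, which saves copying the earlier commit ID:

    $ git show abc123^:file/that/was/deleted
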

    Code Review Software Sucks. Here's How I Would Improve It

    This post is about code reviews, and the software that facilitates them.

    I’ll be honest: I’m not a huge fan of code reviews, so a lot of what I say below can probably be dismissed as coming from someone who blames their tools. Be that as it may, I do think there is room for improvement in the tooling used to review code, and this post touches on a few additional features which would help.

    First I should say that I have no experience with dedicated code review tools. I’m mainly talking about code review tools that are part of hosted source code repository systems, like GitHub, GitLab, and Bitbucket. And since these are quite large and comprehensive systems, it might be that the priorities are different compared to a dedicated code review tool with a narrower focus. Pull requests and code reviews are just one of the many tasks that these systems need to handle, along with browsing code repositories, managing CI/CD runs, hosting binary releases, etc.

    So I think it’s fair to say that such tools may not have the depth that a dedicated code review tool would have. After all, GitHub Actions does not have the same level of sophistication as something like Jenkins or BuildKite, either.

    But even so, I’d say that there’s still room for improvement in the code review facilities that these systems do offer. Improvements that could be tailored more to the code review workflow. It’s a bit like using a text editor to manage your reminders. Yeah, you can use a text editor, but most of the features related specifically to reminders will not be available to you, and you’ll have to plug the feature gap yourself. Compare this to a dedicated reminder app, which would do a lot of the work for you, such as notifying you of upcoming reminders or giving you the means to mark an item as done.

    So, what should be improved in the software that is used to review code? I can think of a few things:

    Inbox: When you think about it, code reviews are a bit like emails and Jira tickets: they come to you and require you to action them in some way in order to get the code merged. But the level of attention you need to give them changes over time. If you’ve made comments on a review, there’s really no need to look at it again until the code has been updated or the author has replied.

    But this dynamic aspect of code reviews is not well reflected in most of these systems. Usually what I see is simply a list of pull requests that have not yet been approved or merged, and I have to keep track myself of the reviews that need my attention now, versus those where I can probably wait for action from others.

    I think something more like an inbox would be better than a simple list: a subset of the open reviews that are important to me now. As I action them, they’ll drop off the list and won’t come back until I need to action them again.

    The types of reviews that I’d like to appear in the inbox, in the order listed below, would be the following:

    1. Ones that have been opened in which I’ve made comments that have been responded to — either by a code change or a reply — that I need to look at. The ones with more responses would appear higher in the list than the ones with fewer.
    2. Reviews that are brand new that I haven’t looked at yet, but others have.
    3. Brand new reviews that haven’t been looked at by anyone.

    In fact, the list can be extended to filter out reviews that I don’t need to worry about, such as:

    1. Reviews that I’ve made comments on that have not been responded to yet. This indicates that the author has not gotten around to them yet, in which case looking at the pull request again serves no purpose.
    2. Reviews that have enough approvals by others and do not necessarily need mine.
    3. Reviews that I’ve approved.

    This doesn’t necessarily need to replace the list of open reviews: that might still be useful. But it would no longer be the primary list of reviews I need to work with during the day-to-day.

    Approval pending resolution of comments: One thing I always find myself indecisive about is when I should hit that Approve button. Let’s say I’ve gone through the code, made some comments that I’d like the submitter to look at, but the rest of the code looks good. When should I approve the pull request? If I do it now, the author may not have seen the comments, or any indication that I’d like them to make changes, and will go ahead and merge.

    I guess then the best time to approve it is when the changes are made. But that means the onus is on me to remember to review the changes again. If the requests are trivial — such as renaming things — I’d trust the person to make the changes, and going through to review them once again is a waste of time.

    This is where “Approval pending resolution of comments” would come in handy. Selecting this approval mode would mean that my approval is granted once the author has resolved the outstanding review comments. This would not replace the regular approval mode: if there are changes which do require a re-review, I’d just approve normally once I’ve gone through them again. But it’s one more way to let the workflow of code reviews work in my favour.

    Speaking of review comments…

    Review comment types: I think it’s a mistake to assume that all review comments are equal. Certainly in my experience I find myself unable to read the urgency of comments on the reviews I submit, and I find it difficult to telegraph that urgency in the comments I make on the code reviews of others. This usually results in longer comments with phrases such as “you don’t have to do this now, but…”, or “something to consider in the future…”

    Some indication of the urgency of the comment alongside the comment itself would be nice. I can think of a system that has at least three levels:

    1. Request for change: This is the highest level. It’s an indication that you see something wrong with the code that must be changed. These comments would need to be resolved, either with a change to the code or a discussion of some sort, before the code is merged.
    2. Request for improvement: This is a level lower, and indicates that there is something in the code that may need to be changed, but not doing so would not block the code review. This can be used to suggest improvements to how things were done, or to suggest an alternative approach to solving the problem. All those nitpicking comments can go here.
    3. Comments: This is the lowest level. It provides a way to make remarks about the code that require no further action from the author. Uses for this might be praise for doing something a certain way, or FYI-type comments that the submitter may need to be aware of for future changes.
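
    If I were to sketch these three levels as a data model (purely hypothetical, not the API of any existing system), it might look something like this in Go:

    package review

    // CommentType indicates the urgency of a review comment.
    type CommentType int

    const (
        // RequestForChange must be resolved before the code is merged.
        RequestForChange CommentType = iota
        // RequestForImprovement suggests a change but does not block merging.
        RequestForImprovement
        // Remark requires no action from the author: praise, FYIs, and the like.
        Remark
    )

    // ReviewComment pairs a comment with its urgency and resolution state.
    type ReviewComment struct {
        Type     CommentType
        Body     string
        Resolved bool
    }
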

    Notes to self: Finally, one thing that way too few systems dealing with shared data have is the ability to annotate pull requests or the commented files with private notes. These won’t be seen by the author or any of the other reviewers, and are only there to facilitate making notes to self, such as “Looked at it, waiting for comments to be addressed”, or “review no longer pending”. This is probably the minimum of these features, and would be less important if the others above were addressed.

    So that’s how I’d improve code review software. It may be that I’m the only one with this problem, and that others are perfectly able to review code effectively without these features. But I know they would work for me, and if I start seeing them in services like GitHub or GitLab, I probably would start using them.

    Broadtail 0.0.7

    Released Broadtail 0.0.7 about a week ago. This included some restyling of the job list on the home page, which now includes a progress bar updated using WebSockets (no need for page refreshes anymore).

    For the frontend, the WebSocket API that comes with the browser is used. There’s not much to it — it’s managed by a Stimulus controller which sets up the WebSocket and listens for updates. The updates are then pushed as custom events to the main window, which the Stimulus controllers used to update the progress bars are listening out for. This allows a single Stimulus controller to manage the WebSocket connection and make use of the window as a message bus.

    Working out the layers of the progress bar took me a bit of time, as I wanted to make sure the text in the progress bar itself remained readable as the bar filled. I settled on an HTML tree that looks like the following:

    <div class="progressbar">
      <!-- The filled in layer, at z-index: 10 -->
      <div class="complete">
        <span class="label">45% complete</span>
      </div>
    
      <!-- The unfilled layer -->
      <span class="label">45% complete</span>
    </div>
    

    As you can see, there’s a base layer and a filled-in layer that overlaps it. Both of these layers have a progress label containing the same status message. As the .complete layer fills in, it will hide the unfilled layer and its label. The various CSS properties used to get this effect can be found here.
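
    The gist of the layering, as a rough sketch of the approach rather than the exact stylesheet, is absolute positioning plus overflow clipping:

    .progressbar {
      position: relative;
      width: 300px;          /* the fixed width mentioned below */
    }

    .progressbar .complete {
      position: absolute;
      top: 0;
      left: 0;
      height: 100%;
      width: 45%;            /* set from the job's progress */
      overflow: hidden;      /* clips the label as the bar fills */
      z-index: 10;
    }

    .progressbar .complete .label {
      display: inline-block;
      width: 300px;          /* match the full bar so both labels line up */
      white-space: nowrap;
    }
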

    The backend was a little easier. There is a nice WebSocket library for Go which handles the connection upgrades and provides a simple API for posting JSON messages. Once the upgrade is complete, a goroutine servicing the connection will just start listening for status updates from the jobs manager and forward them as JSON text messages.
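
    Assuming the library in question is gorilla/websocket, a minimal sketch of that handler might look like the following. The JobUpdate type and the updates channel are stand-ins, not Broadtail’s actual code:

    package web

    import (
        "net/http"

        "github.com/gorilla/websocket"
    )

    // JobUpdate is a stand-in for the status messages produced by the jobs manager.
    type JobUpdate struct {
        JobID    string  `json:"jobId"`
        Progress float64 `json:"progress"`
    }

    var upgrader = websocket.Upgrader{}

    // UpdatesHandler upgrades the HTTP connection to a websocket, then forwards
    // each status update to the client as a JSON text message.
    func UpdatesHandler(updates <-chan JobUpdate) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            conn, err := upgrader.Upgrade(w, r, nil)
            if err != nil {
                return
            }
            defer conn.Close()
            for u := range updates {
                if err := conn.WriteJSON(u); err != nil {
                    return // client went away
                }
            }
        }
    }
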

    Although this works, it’s not perfect. One small issue is that the frontend will not reconnect if there is an error. I imagine it’s just a matter of listening out for the relevant events and retrying, but I’ll need to learn more about how this actually works. Another thing is that the styling of the progress bar relies on fixed widths. If I get around to reskinning the entire application, that might be the time to address this.

    The second thing this release has is a simple integration with Plex. If this integration is configured, Broadtail will now send a request to Plex to rescan the library for new files, meaning that there’s no real need to wait for the scheduled rescan to occur before the videos are available in the app. This simply uses Plex’s API, but it needs the Plex token, which can be found using this method.
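
    Under the hood, the rescan is a single HTTP request to the Plex server, something like the following (the hostname, section ID, and token are placeholders for your own values):

    $ curl "http://plex-server:32400/library/sections/1/refresh?X-Plex-Token=YOUR_TOKEN"
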

    Anyway, that’s it for this version. I’m working on re-engineering how favourites work for the next release. Since this is still in early development, I won’t be putting in any logic to migrate the existing favourites, so just be wary that you may lose that data. If that’s going to be a problem, feel free to let me know.

    Learning Through Video

    Mike Crittenden wrote a post this morning about how he hates learning through videos. I know for myself that I occasionally do prefer videos for learning new things, but not always.

    Usually if I need to learn something, it would be some new technology that I have to know for my job. In those cases, I find that if I have absolutely no experience in the subject matter, a good video which provides a decent overview of the major concepts helps me a great deal. Trying to learn the same thing from reading a lengthy blog post, especially when jargon is used, is less effective for me. I find myself getting tired and losing my place. Now, this could just be because of the writing (dry blocks of text are the worst), but I tend to do better if the posts are shorter and formulated more like a tutorial.

    If there is a video, I generally prefer them to be delivered in the style of a lecture or presentation. Slides that I can look at while the presenter is speaking are fine, but motion graphics or a live demo is better, especially if the subject is complex enough to warrant them. But in either case, I need something visual that I can actually watch. Having someone simply talk to the camera really doesn’t work for me, and makes watching the video more of a hassle (although it’s slightly better if I just listen to the audio).

    Once I’ve become proficient in the basics, learning through video becomes less useful to me, and a decent blog post or documentation page works better. By that time, my learning needs become less about the basics and more about something specific, like how to do a particular thing or the details of a particular item. At that point, speed is more important to me, and I prefer to have something that I can skim and search in my own time, rather than watch videos that tend to take much longer.

    So that’s how and when I prefer to learn something from video. I’ll close by saying that this is my preferred approach when I need to learn something for work. If it’s during my downtime, either a video or a blog post is fine, so long as my curiosity is satisfied.

    Some More Updates of Broadtail

    I’ve made some more changes to Broadtail over the last couple of weeks.

    The home page now shows a list of recently published videos below the currently running jobs.

    Clicking through to “Show All” displays all the published videos. A simple filter can be applied to narrow them down to videos with titles containing the keywords (note: nothing fancy with the filter, just tokenisation and an OR query; see the sketch below).
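
    For the curious, the matching is roughly this shape. A sketch with made-up names, not Broadtail’s actual code:

    package feed

    import "strings"

    // matches reports whether any token in the query appears in the title:
    // OR semantics, nothing fancy.
    func matches(title, query string) bool {
        title = strings.ToLower(title)
        for _, token := range strings.Fields(strings.ToLower(query)) {
            if strings.Contains(title, token) {
                return true
            }
        }
        return false
    }
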

    Finally, items can now be favourited. This can be used to select videos that you may want to download in the future. I personally use this to keep to a minimum the list of “new videos” in the Plex server these videos go to.

    Time and Money

    Spending a lot of time in Stripe recently. It’s a fantastic payment gateway and a pleasure to use, compared to something like PayPal, which really does show its age.

    But it’s so stressful and confusing dealing with money and subscriptions. The biggest uncertainty is dealing with anything that takes time. The problem I’m facing now is this: if the customer chooses to buy something like a database, which is billed a flat fee every month, and then they choose to buy another database during the billing period, can I track that with a single subscription and simply adjust the quantity? My current research suggests that I can, and that Stripe will handle the prorating of partial payments and credits. They even have a nice API to preview the next invoice, which can be used to show the customer how much they will be paying.
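
    To make that concrete: from my testing so far, the quantity adjustment boils down to an update on the subscription item, and the preview is a GET on the upcoming invoice. The IDs and key below are placeholders:

    $ curl https://api.stripe.com/v1/subscription_items/si_XXX \
        -u sk_test_XXX: \
        -d quantity=2 \
        -d proration_behavior=create_prorations

    $ curl -G https://api.stripe.com/v1/invoices/upcoming \
        -u sk_test_XXX: \
        -d customer=cus_XXX
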

    But despite all the documentation, test environments, and simulations, I still can’t be sure that it will happen in real life, when real money is exchanged in real time. I guess some real life testing would be required. 💸

    Cling Wrap

    I bought this roll of cling wrap when I moved into my current place. Now, after 6.5 years and 150 metres, it’s finally all used up.

    Cling wrap, now empty

    In the grand scheme of things, this is pretty unimportant. It happens every day: people buy something, they use it, and eventually it’s all used up. Why spend the time and energy writing and publishing this post to discuss it? Don’t you have better things to do?

    And yet, there’s still a feeling of weight to this particular event that I felt was worth documenting. Perhaps it’s because it was the first roll of cling wrap I bought after I moved out. Or maybe it’s because it lasted for this long, so long in fact that the roll I bought to replace it was sitting in my cupboard for over a year. Or maybe it’s the realisation that with my current age and consumption patterns, I probably wouldn’t use up more than 7 rolls like this in my lifetime.

    Who knows? All I know is that despite the banality of the whole affair, I just spent the better part of 20 minutes trying to work out how best to talk about it here.

    I guess I’m in a bit of a reflective mood today.

    Trip to Ballarat and the Beer Festival

    I had the opportunity to go to Ballarat yesterday to attend the beer festival with a couple of mates. It’s been a while since I last travelled to Ballarat — I think the last time was when I was a kid. It was also the first time I took the train up there. I had wanted to travel the Ballarat line for a while, but never had a real reason to do so.

    The festival started at noon but I thought I’d travel up there earlier to look around the city for a while.

    I didn’t stay long in the city centre as I needed to take the train to Wendouree, where the festival was located.

    The beer festival itself was at Wendouree park. Layout of the place was good: vendors (breweries, food, etc.) were laid out along the perimeter, and general seating was available in the middle. They did really well with the seating. There were more than enough tables and chairs for everyone there.

    Day was spectacular, if a bit sunny: the tables and chairs in the shade were prime real-estate. Whole atmosphere was pleasant: everyone was just out to have a nice time. Got pretty crowded as the day wore on. Lots of people with dogs, and a few families as well.

    I’m not a massive beer connoisseur so I won’t talk much about the beers. Honestly, the trip for me was more of a chance to get out of the city and catch up with mates. But I did try a pear cider for the first time, which was a little on the sweet side, though I guess that was to be expected. I also had a Peach Melba inspired pale ale that was actually kind of nice.

    Trip home was a bit of an adventure. A train was waiting at Wendouree station when I got there. There was nobody around and it was about 5 minutes until departure, so I figured I’d board. Turns out it was actually not taking passengers. I was the only one who boarded, and by the time I realised that it was not in service, the doors had closed and the train had departed. I had to make my presence known to the driver and one other V/Line worker. They were really nice about it, and fortunately for me, they were on their way to Ballarat anyway, so it wasn’t a major issue. Even so, it was quite embarrassing. Fortunately the actual train home was easy enough.

    OS Vendors and Online Accounts

    Looks like the next version of Windows will require an online account, and while the reason for this could be something else, I’m guessing this would be used to enable file sync, mail account sync, calendar sync, etc.

    I think it’s a mistake for OS vendors to assume that people would want to share their sole online identity across different devices. Say that I had a work computer and a home computer, and I’d use the same online account for both. Do I really want my personal files and work files being synced across, or my scheduled meetings to start showing up in my personal calendar?

    I guess the response would be to create two online accounts: one for work and one for home. This might be possible: I don’t know how difficult it would be to create multiple Microsoft accounts for the same person. But if I do this1, and there’s software that I’ve purchased with my home account that I’d like to use on my work device, I’d have to repurchase it. I guess if I’m employed full time it should be work purchasing the software, but come on, am I really going to go through the whole procurement bureaucracy to buy something like a $29 image editor?

    This could all be theoretical: it might be that this wouldn’t be a problem for Windows users. But I know from my limited experience with macOS that issues based on the assumption that everything associated with an online account should be shared on every device can crop up. That’s why I don’t open Mail.app on my home computer.


    1. This is all hypothetical. I’m not a Windows user. ↩︎

    My YouTube Watching Setup

    I’m not a sophisticated YouTube watcher but I do watch a lot of YouTube. For a while I was happy enough to simply use the YouTube app with a Chromecast. Yes there were ads, but the experience was nice enough that I tolerated them.

    Recently, however, this became untenable.

    It started with Google deciding to replace their simple Chromecast target with a Google TV style app, complete with a list of video recommendations I had no interest in watching. This redesign also came with more ads, which themselves would be annoying enough. But with this year being an election year, I started seeing campaign ads from a political party I have absolutely zero interest in seeing ads from. Naturally Google being Google, there was no way for me to block them1. I guess I could have just paid to remove the ads, but this wouldn’t solve the Chromecast problem. Besides, the feeling of paying for something that is arguably not a great use of my time felt wrong. I felt that a bit of friction in my YouTube watching habits wouldn’t be a bad thing to introduce.

    It was time to consider an alternative setup.

    Plex

    Taking inspiration from those on Micro.blog and certain podcasters, I decided to give Plex a go. I had an Intel NUC that I purchased a few years ago that I wasn’t using, and it seemed like a good enough machine for a Plex server. The NUC is decent enough, but it’s a little loud, so I didn’t want it anywhere near where I usually spend my time. It’s currently in a wardrobe in my spare bedroom.

    After upgrading it to Ubuntu 20.04 LTS, I installed the Plex Media Server. I had to create a Plex account, which was a little annoying, but after doing so, I was able to set up a new library for YouTube videos relatively easily. I configured the library to poll every hour, which would come in handy for the next part of this setup.

    I also installed the Plex app on my Android phone to act as the media player. The app has support for Chromecast, which is my preferred setup. Getting the app to talk with the media server was a little fiddly. I can’t remember all the details as it was a couple of months ago, but I do remember it taking several attempts before the app was listing videos in the library. But once the link was established, it became quite easy to play downloaded videos on my TV. I’ll have more to say about the app near the end of the post.

    Youtube-dl And Broadtail

    Once Plex was set up, I needed a way to download the YouTube videos. I was hoping to use youtube-dl, but the idea of SSH’ing into the media server to do so was unappealing. I was also aware that it was possible to subscribe to YouTube channels via RSS, which is my preferred way to be notified of new content. I tend not to subscribe to channels within YouTube itself, as I’d rather Google didn’t know too much about my viewing preferences (sorry YouTubers).
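
    For those unaware, every channel and playlist has one of these feeds, at URLs of the following form (the IDs are placeholders):

    https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID
    https://www.youtube.com/feeds/videos.xml?playlist_id=PLAYLIST_ID
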

    I figured that a small web-app running alongside Plex, one that would allow me to subscribe to YouTube RSS feeds and download the videos to the Plex library using youtube-dl, would be ideal. I’m sure such applications already exist, but I decided to build my own.

    So I built a small Go web-app to do this. I called it Broadtail, mainly because I’m using bird-related terms for working project names and I couldn’t think of anything better. It’s pretty basic, and it is ugly as sin, but it does the job.

    List of videos from a YouTube RSS feed in Broadtail

    I can set up an RSS subscription to YouTube channels and playlists, which the app will periodically poll and store in a small embedded database. I can get a list of videos for each feed I’ve subscribed to, and if one looks interesting, I can start a download from the UI. The app will run the appropriate youtube-dl incantation and provide a running status update, with some really basic job controls.
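
    The incantation is roughly of this shape (the output path is a placeholder, and the exact flags Broadtail passes may differ):

    $ youtube-dl -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]" \
        -o "/path/to/plex/library/%(title)s.%(ext)s" \
        "https://www.youtube.com/watch?v=VIDEO_ID"
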

    Video details in Broadtail

    The downloaded videos are saved as MP4s in a directory configured as a Plex library. The hourly scan will pick them up, although I occasionally need to trigger a rescan manually if the video was downloaded relatively recently. During the day, I look for any new videos which look interesting and start downloads in Broadtail. The videos are (usually) ready and available in Plex by evening. The only exception is videos that are 3 to 4 hours long, which usually take around a day to download thanks to YouTube’s throttling.

    How It’s Working Out

    Putting this together took roughly a month, and I’ve been using it for my YouTube viewing for a couple of months now. In general, it’s working OK. The Plex media server is working quite well, as is the Plex mobile app. Broadtail is pretty bare-bones, but I’ve been slowly making changes to it over time as my needs evolve.

    There are a few annoyances though. One large one is that the Plex app for Android is a little buggy. It gets into a state in which it is unable to start playback of a video, and the only way I know of fixing this is by rebooting the Chromecast device. This is really annoying, and it’s gotten to the point where I’m doing this almost daily. I’ve contemplated setting the Chromecast up on a smart plug so that I can force a restart simply by killing power to it in the middle of the night. It hasn’t quite gotten to that point yet, but if Plex doesn’t fix their app soon, I may go ahead with it.

    Also annoying is that sometimes the Plex app will lose its connection with the media server and will not list the contents of my library. Fortunately, a restart of the mobile app is enough to resolve this.

    As for the Intel NUC itself, there have been instances where it seems to lock up and I’ve had to hard power it down. I don’t know what’s causing this. It could be that either Plex or Broadtail is causing a kernel panic of sorts, or it could be something in the NUC itself: it’s reasonably low-cost hardware that is tailored more for Windows. I may eventually replace the NUC with the Mac Mini I’m currently using as a desktop, once it’s time to upgrade.

    But all in all, I think this is working for me. Not seeing any ads or crappy recommendations is a major win, and it’s also nice to actually run out of things to watch, forcing me to do something productive. Sometimes I question whether the time it took to set this all up was worth it. Maybe, maybe not. But it feels better having something a little more in my control than simply paying YouTube to remove the ads.

    Finally, if Broadtail sounds interesting to you, it’s available on GitHub. I’ve only recently open-sourced it, so a lot of things are missing, like decent documentation (it only got a README today). So please consider it to be in a bit of a “here be dragons” state at the moment. But if you have any questions, feel free to contact me.


    1. Hey Google, having a way to indicate zero interest in seeing ads from someone is a signal of intent. Consider making this option available to us, and you’d get more info for your user profiles. ↩︎

    Reminder That Your Content Isn't Really Yours on Medium #3

    Looks like Medium has had a redesign recently, with recommended posts now being featured more prominently. Instead of appearing at the end of the post, they’re now in a right-hand sidebar that doesn’t scroll, sitting directly below the author of the post you’re reading.

    And let me be clear: as far as I can tell, these are not recommendations from the same author. They can be from anyone, covering any topic that I can only assume Medium algorithmically thinks you’d be interested in. It reminds me a lot of the anxiety supplier that is Twitter Trending Topics.

    Thank goodness. Here I was, reading someone’s post on UI design, without being made aware of, or constantly reminded whenever I moved my eyes slightly to the right, of another post by a different author informing me that NFTs have been superseded by “Super NFTs”. Thank you for that, Medium. My reading experience has been dramatically improved! (Sarcasm test complete)

    Honestly, I’m still wondering why people choose to use Medium for publishing long-form writing. And yes, I acknowledge that it could be worse: their “post” could just as easily have been a Twitter thread1. But from this latest redesign, it seems to me that Medium is doing its best to close the reading-experience gap between the two services.


    1. Please don’t publish your long form writing as a Twitter thread. ↩︎

    The "Too Much Data" Error in Buffalo Projects

    If there’s anyone else out there using Buffalo to build web-apps, I just discovered that it doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.

    The tell-tale sign is this error message when you try to run the application:

    too much data in section SDWARFSECT (over 2e+09 bytes)
    

    If you see that, deleting public/assets should solve your problem.
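
    From the root of the project, that’s simply:

    $ rm -rf public/assets

    The bundled assets should be regenerated the next time the asset pipeline runs.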

    On Posting Daily

    I recently listened to an interview with Seth Godin on the Tim Ferriss podcast. In that interview, Seth mentions that he writes up to five blog posts a day. He just doesn’t publish them all. I guess that means he has at least one or two drafts that can be touched up and published when he needs them.

    Although I don’t think of this blog as being anywhere near the quality of Seth’s, I’d like to start trying to publish on this site at least once a day. I don’t post to any specific schedule here, and there have been stretches of days in which this blog has not seen an update at all. But over the last week, I’ve found myself falling into a streak, and I’d like to see how long I can maintain it.

    The thing that has thwarted me in the past (apart from not even thinking about it) was either not being in the right frame of mind, or not being available that day to post something. I’m not sure this blog warrants the discipline of setting a specific time each day to sit down and write something. I treat this blog more or less like a public journal: a place to document thoughts, opinions, or events of the day.

    But I’m wondering if maintaining an inventory of unpublished drafts might help me keep the streak going. So even though the goal is to write and publish a post on the same day, having something to fall back on when I can’t might be worthwhile.
