Long Form Posts

    Newsletter Reminder Emails

    I subscribe to a newsletter that sends “reminder” emails if I skip an issue. If I don’t open one of the email newsletters I receive, then a few days later, a copy will be sent with a preface of the form “Looks like you skipped an issue. Here’s what you missed.”

    These reminder emails are bad, and here’s why:

    1. It gives the impression of hustling me. I appreciate the time you take to publish something that I see value in, but sending these reminders feels like you’re forcing your content onto me. Like I just have to read this content. Really, you must read it! And, oh! You forgot this one day? Well, I’ll make sure you don’t forget it (and me) again. Please, back off! I’ve received your content and I’ll get to it when I get to it, if I feel like it, after I’ve read all the other newsletters I received. Please don’t push me to read it on your schedule.

    2. It just confirms that they’re tracking what I open. I mean, I know this already, but it does bring it front of mind.

    If you’ve got an email newsletter, please don’t do this. It could just be me, but apart from a few exceptions, I don’t read every issue of every newsletter I receive. If I’ve subscribed to yours, then know that I get value from your content. Really, I do; I wouldn’t have subscribed otherwise. But sending out these reminder emails does not do your content any favours.

    The Feature Epic (Featuring the Epic Feature Branch)

    Here’s what’s been happening at work with me recently. I write it here as an exercise in how I can learn from this. They say that writing can help in this respect, so I’m going to put that logic to the test (in any case, just having this documented somewhere could prove useful).

    We’re working on a pretty large change to the billing service powering the SaaS product sold by the company I work at. Along with our team, there are two other teams working on the same service at the same time, making whatever changes they need to release the product features they’re working on. All of our teams had our own deadlines — all pretty pressing — to get things delivered either last month or sometime this month.

    Knowing that this was a change that could impact these other teams, I came up with the idea of using an epic feature branch to track our changes. This would leave the main branch relatively free for the other teams, whose changes would not be as invasive as ours, to proceed with their plans and release when they needed to without us blocking them. Great idea — everyone can work at their own pace.

    Of course, if it was such a great idea, this blog post would be a lot shorter. 😉

    This started out well. We were doing our thing, making our changes and committing them to this epic branch, occasionally pulling updates from main when we started to fall behind. The other teams were merging their changes on main, and the CI/CD pipeline was dutifully deploying what they merged into the dev environment.
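For the record, the resync dance looked something like this. Branch and file names here are made up for illustration, and the snippet builds a throwaway repo so the commands can stand on their own:

```shell
# Build a throwaway repo with a main branch and an epic branch,
# then pull main's updates into the epic branch -- the periodic
# "resync" described above. All names are illustrative.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo
echo base > app.txt && git add . && git commit -qm "base"

git checkout -qb epic/billing          # our long-lived feature branch
echo billing > billing.txt && git add . && git commit -qm "billing work"

git checkout -q main                   # meanwhile, on main...
echo other > other.txt && git add . && git commit -qm "other team's change"

git checkout -q epic/billing           # the resync itself
git merge -q main -m "resync from main"
ls                                     # app.txt billing.txt other.txt
```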

    Then, after a couple of weeks, things started to go a little wrong. The changes from the devs started to pile up in the “Ready for QA” column of our Jira board. The team was a little concerned that the changes we were making couldn’t be ready for testing until they were all finished and merged. Given the way the tickets were written, this seemed like a fair enough argument (I was the one that wrote the tickets, BTW), but this delayed testing of the changes to the point where we only had a few days to complete our testing and push it out to production before we hit our deadline.

    Once the QA team was ready to proceed, disaster. Many of our changes had bugs or issues that we didn’t foresee and had to be fixed, or new tickets had to be spun out. We had to delay the rollout by more than a week (as of this post), which made the business quite unhappy. Making things worse was the combination of the epic feature branch and the automated deployment of main to Dev. The other teams were still doing their thing on main, and whenever they merged a change, it blew away our changes. This resulted in the QA team coming to the dev team with issues which, after investigation, were largely because our changes were not even there.

    One other thing I didn’t foresee: when we got to the point of merging our epic feature branch into main, I had little confidence that it would be integrated correctly. The sole reason for this is that the test team hadn’t been testing our changes from main. They’d been testing the epic branch just fine, but between all those conflicts resolved during resyncs from main, plus the actual large-scale merge of our branch, what’s to say that we didn’t miss something? So we were right back to delaying our testing until near the due date.

    So this is where we are now. All the bugs (that we know of) have been addressed and I’m hoping to merge our changes into main today in preparation for what I hope to be one last test before we roll it out.

    So, what did I learn from this? I can think of a few takeaways:

    1. Continuous integration — not the automated build kind, but the consistently-merging-into-main kind — is not only important, it’s bloody vital. Not doing this leaves you with little confidence that what the testing team is actually testing is what would be pushed out to production. Using the epic feature branch was a mistake. Always merge to main, and test from main as often as you can. You may need to push out changes that are turned off, but as long as you design for this, it should be fine (feature flags FTW).
    2. Letting tickets pile up like they did was another mistake. Ticket flow is important: not only for the team, whose morale is tied to whether we pass the sprint or not, but also for finding any problems early enough that you have time to react to them.
    3. This one is probably on my head: don’t spend too long doing a design. I spent a week on one that probably could have been written up in a few days (to be fair, the scope of the work had not quite been locked down when I was doing the design work, which did delay things a little). A quick design also means getting feedback sooner.
    4. I think the largest takeaway is trying to keep the business’s expectations in check on what can be delivered and when. This I find the hardest thing to do. I’m a bit of an “aim to please” type of person, so it’s not easy for me to say something like “we can’t do that in that time.” Instead I tend to be quite optimistic about what we can deliver. I have to get better at this.

    The saga is not quite finished yet: you may see another blog post on this subject soon enough. But hopefully things will settle down after this.

    Arriving Late

    I’m going to have to tell my boss today that the stuff my squad has been working on is going to arrive late. Too much needs to be fixed or reworked, and there are one or two things that have been missed altogether.

    I think the biggest problem is that the thing we’ve been working on got into testing far too late — only a few days before the deadline — meaning that there was no time left for fixing things. Really, you can draft all the plans and designs you want but you really don’t know how well it will perform until the “working” code has been handed to someone else.

    Another problem might have been that I didn’t push for more time upfront. It’s been difficult to do this in the place I’m working now. It’s almost like they’re on to me (or at least have their own ideas of when things should be delivered). But I’m guessing it’s still heaps better to temper expectations by saying upfront that something will take longer, and then deliver it early, rather than be optimistic about it and run up against the deadline. “Underpromise and overdeliver,” they say.

    I wonder if there’s a way to have two deadlines: one the business knows about, and one for the squad. The first one is quoted to be a respectable amount of time, significantly larger than what you really think it would take to deliver the feature. That would keep expectations in check, and leave enough time for any rework. The second is a more optimistic deadline for the squad to work towards. I think having this second deadline is important, otherwise all the work will pile up near the end of the first one and you’ll deliver late again. It’s all so deceptive though, and the thing is that you almost need to deceive yourself: you know that the second deadline is not the “real” deadline after all.

    I don’t know. I find all this estimation and managing expectations quite difficult and it’s generally not something I like doing.

    In any case, we’re going to need more time. I guess I can take solace in the fact that we almost got there, maybe 70-80%. We just need to get that other 80% over the line.

    Wrong Number

    Got called three times this morning, by mistake, by an old woman in NSW trying to contact her son, whose phone number is very similar to mine.

    The first time, I ignored it as I didn’t recognise the number and thought it was spam.

    The second time, I answered and, after trying to understand what she was saying, simply said “I think you’ve got the wrong number, sorry” and hung up.

    The third time, I answered and, after recognising that it was the same person, figured it was right to work through the problem with her. So I spent some time with her, going through each digit slowly, trying to make sure that we understood where the mistake was made. I didn’t hang up until I got the sense that she understood what number she needed to dial. I was trying to be helpful, but I’d be lying if I said that self-interest was not involved. After all, I wasn’t too keen on getting any more wrong numbers.

    Four hours have passed and I have yet to hear from her again.

    Is there a lesson involved? I don’t know: this could have happened to anyone. I guess if there is one, it’s that sometimes you don’t get to choose the types of problems you need to work on, and you’ve just got to do what you can to, if not solve them, at least make them less of a problem than they were before.

    Honour, Democracy, and Galati: A Day in Canberra

    Since being in Canberra, I haven’t really done anything “touristy”. Given that today was a public holiday, I figured it was as good a time as any to do so. So I decided to spend the day visiting a couple of national landmarks, plus doing something I’ve been planning since returning to Canberra.

    The War Memorial

    The first time I was ever in Canberra was during the Christmas holidays in 2007 with my family. During that trip, Mum and Dad and my two sisters went to the War Memorial and Parliament House, while I stayed in our rented town-house. The reason I stayed back is a little embarrassing: I claimed I was tired, but this was during a weird period when I didn’t really want to be seen doing anything touristy (I’ve mostly gotten over this feeling). Not going when I had the chance is something I’ve regretted since that day. Well, today I made amends with at least one of these, with a visit to the War Memorial.

    Old Parliament House

    Following my visit to the War Memorial and a brief lunch at the Poppy Cafe, it was time to visit Old Parliament House, and the Museum of Australian Democracy.

    Following this was a brief walk around the gardens.

    The City, And Galati

    My final stop was the city. Why? Well, during my visit in April, we found a really nice galati place near where we stayed, and once I knew I would be back in June, I made it a point, in half jest, to return. When talking to Mum and Dad on the phone, they asked if I had fulfilled this promise. So since I was nearby, it felt like the perfect time to do so.

    Despite how sunny it was, it was still quite cold. But even so, that galati hit the spot.

    Afternoon Walk Around Lake Ginninderra

    Went for a walk around Lake Ginninderra this afternoon. Well, not quite “around” the lake: that walk would have taken a while. But I did walk along the path that would take me around the lake for about 30 minutes, then walked back again. Below are a few photos I took.

    My Evening

    So here’s how I spent my evening:

    Watching the WWDC State of the Union until the DNS resolver conked out in the WiFi router, causing the Chromecast to get into a state in which it could no longer connect to the network, resulting in about 10 minutes of troubleshooting before deciding to clean up, not go to the gym, spend another 10 minutes trying to troubleshoot the issue, then stare at my laptop for about half an hour wondering whether to go back to troubleshooting the Chromecast, or do something else in the hope that it would eventually work itself out.

    Eventually, after another 5 minutes of fruitless troubleshooting, I finally got the Chromecast fixed by doing a factory reset and connecting it to the 2.4 GHz band.

    Anyway, I hope your evening was more productive than mine.

    (And I was worried I would have nothing to write about today.)

    The Powerline Track Walk

    Went on a walk along the Powerline Track, which I had personally been calling the “powerline walk” (yes, I’m impressed at how close I was). I saw this trail when I was in Canberra earlier this year, and knowing that I would be back, I made a note to actually walk it, which I did today. The track follows the powerlines just south of the Aranda Bushland Nature Reserve, then goes under Gungahlin Drive and into the Black Mountain Nature Reserve. The weather was cold but pleasant, at least at the start of the track. It eventually got quite dark and a little wet near the end, but that did result in some nice winter lighting over the landscape.

    Here’s a gallery of some of the photos I took. Note that there are a fair few of powerlines, which are something I’ve been drawn to ever since I was a little kid.

    Humour In Conference Videos — Less Is More

    It might just be me, but I get a little put off by over-the-top attempts at humour in developer conference videos.

    I’m four minutes into a conference video that has already included some slapstick humour (with cheesy CGI), and someone trying to pitch me on why what they’re talking about is worth listening to. This was done in such a way that it actually distracted me from the content, a.k.a. the reason why I’m watching it.

    This sort of thing is a real turn-off, almost to the point where I feel like turning the video off. I don’t think it helps the presenter much either. If you open your talk by pretending to get zapped by a piece of lab equipment, I’m probably not going to assume the same level of sincerity in your presentation as I would for someone who is just trying to get their message across.

    I like a joke as much as the next person, and one or two small, well-contained jokes, like substituted words in the slide deck, are fine. But humour really needs to be dished out in small doses, and it really shouldn’t distract from the content. Less (and much less than you think) is more, in my opinion.

    CloudFormation "ValidationError at typeNameList" Errors

    I was editing some CloudFormation today, and when I tried to deploy it, I was getting this lengthy, unhelpful error message:

    An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value '[AWS:SSM::Parameter, AWS::SNS::Topic]' at 'typeNameList' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 204, Member must have length greater than or equal to 10, Member must satisfy regular expression pattern: [A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}(::MODULE){0,1}]

    It was only showing up when I tried adding a new SSM parameter resource to the template, so I first thought it was some weird problem with the parameter name or value. But after changing both to something that should have worked, I was still seeing the error.

    It turns out the problem was a missing colon in the resource type. Instead of using AWS::SSM::Parameter, I was using AWS:SSM::Parameter (note the single colon just after “AWS”). Looking at the error message again, I noticed that this was actually being hinted at, both in the regular expression and in the “Value” list.
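The constraint quoted in the error message can be checked directly. Here’s a quick sketch using the regular expression from the error itself (anchored, since the constraint applies to the whole type name):

```shell
# The type-name pattern quoted in the error message. A valid type
# looks like Vendor::Service::Resource, with double colons throughout.
pattern='^[A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}::[A-Za-z0-9]{2,64}(::MODULE)?$'

echo 'AWS::SSM::Parameter' | grep -Eq "$pattern" && echo "valid"
echo 'AWS:SSM::Parameter'  | grep -Eq "$pattern" || echo "invalid"  # single colon
```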

    I know that making good error messages takes effort, and for most developers this tends to be an afterthought. I’m just as guilty of this as anyone else. But if I could make just one suggestion on how this message could be improved, it would be to get rid of the list in “Value” and replace it with the single resource type that was actually failing validation. It would still be a relatively unhelpful error message, but at least it would indicate which part of the template was actually falling over.

    In any case, if anyone else is seeing an error message like this when trying to roll out CloudFormation changes, check for missing colons in your resource types.

    GitLab Search Subscriptions with NetNewsWire

    I’m working (with others) on a project that’s using GitLab to host the code, and I’m looking for a better way to be notified of new merge requests that I need to review. I can’t rely on the emails from GitLab, as they tend to be sent for every little thing that happens on any of the merge requests I’m reviewing, so I’ll probably miss any notification that actually matters. People do post new merge requests in a shared Slack channel, but the majority of them are for repos that don’t need my review. There have also been days where a lot of people are making a lot of changes at the same time, and any new messages for the repos I’m interested in get pushed out of view.

    Today I learnt that it’s possible to subscribe to searches in GitLab using RSS. So I’m trying something with NetNewsWire where I can subscribe to a search for open merge requests for the repos I’m interested in. I assume the way this works is that any new merge requests would result in a new RSS item on this feed, which will show up as an update in NetNewsWire. In theory, all I have to do is monitor NetNewsWire, and simply keep items unread until they’ve been merged or no longer need my attention.

    We’ll see if this approach helps. The only downside is that there’s no way to get updates for a single merge request as an RSS feed, which would have been nice.

    What Would Get Me Back to Using Twitter Again

    Congratulations, Elon Musk, on your purchase of Twitter. I’m sure you’ve got a bunch of ideas of how you want to move the company forward. I was once a user of Twitter myself — albeit not a massive one — and I’m sure you would just love to know what it would take for me to be a user once more. Well, here’s some advice on how you can improve the platform in ways that would make me consider going back.

    First, you’ve got to work out the business model. This is number one, as it touches on all the product decisions made to date. I think it’s clear that when it comes to Twitter, the advertising model is suboptimal. It just doesn’t have the scale, and the insatiable need for engagement is arguably one of the key reasons behind the product decisions that fuel the anxiety and outrage on the platform. I think the best thing you could do is drop ads completely and move to a different model. I don’t care what that model is. Subscription tiers; maybe a credit-based system where you have a prepaid account and it costs you money to send tweets based on their virality. Heck, you can fund it from your personal wealth for the rest of your life if you want. Just get rid of the ads.

    Next, make it easy to know which actions result in a broadcast of intent. The big one I have in mind is unfollowing someone. I used to follow people I worked with simply because I worked with them. But after a while I found that what they were tweeting was anxiety-inducing. So I don’t want to follow them any more, but I don’t know what happens if I choose to unfollow them. Do they get a notification? They got one when I started following them — I know that because I got one when they started following me. So in lieu of any documentation (there might be some about this; I haven’t checked), I’d like to be able to stop following them without them being made aware of that fact. Note that this is not the same as muting or blocking them: they’re not being nasty or breaking any policies in what they post. I just want to stop seeing what they post.

    Third, about open-sourcing that algorithm. By all means, do so if you think it would help, but I think that’s only half the moderation story. The other half is removing all the attempts to drive up engagement, or at least having a way to turn them off. Examples include making it easier to turn off the algorithmic timeline, getting rid of (or hiding) “Trending Topics”, and no longer sticking news items in the notification section (seriously, adding this crap to the notification section has completely destroyed its utility to me). If I want my timeline to simply be a reverse-chronological list of tweets from people I’m following, and my notifications to only be events of people engaging with what I post, then please make it easy for me to have this. It might mean my usage becomes less about quantity and more about quality, but remember that you no longer need all that engagement. You changed the business model, remember?

    Finally, let’s talk about all the features that drum up engagement. If it were up to me, I’d probably remove them completely, but I know that some people might find them useful, and they’re arguably a way for Twitter (now under your control) to, let’s say, “steer the direction of the conversation.” So if you must, keep these discovery features, but isolate them to a specific area of the app, maybe called “Discovery”. Put whatever you want in there — trending topics, promoted tweets, tweets made within a specific location, whatever you want — but keep them in that section, and only that section. My timeline must be completely devoid of this if I choose it to be.

    I’m sure there are others I could think of, but I think all this is a good first step. I look forward to you taking this onboard, and I thank you for your consideration. Honestly, it might not be enough for me to go back. I wasn’t a big user before, and I’ve since moved to greener pastures. But who knows, maybe it will. In any case, I believe that with these changes, Twitter as a platform would be more valuable, both with you at the helm, and with me back there with my 10 or so followers and my posting rate of 25 tweets or so in the last eight years. 😉 1


    1. This wink is doing a lot of work. ↩︎

    Showing A File At a Specific Git Revision

    To display the contents of a file at a given revision in Git, run the following command:

    $ git show <revision>:<filename>
    

    For example, to view the version of “README.md” on the dev branch:

    $ git show dev:README.md
    

    There is an alternative form of this command that will show the changes applied to that file as part of the commit:

    $ git show <revision> -- <filename>
    

    This can be used alongside the log command to work out what happened to a file that was deleted.

    First, view the history of the file. You are interested in the commit ID before the one that deleted the file: attempting to run git show using the deletion commit ID will result in nothing being shown.

    $ git log -- file/that/was/deleted
    commit abc123
    Author: The Deleter <deleter@example.com>
    Date:   XXX
    
        Deleted this file.  Ha ha ha!
    
    commit beforeCommit
    Author: File Changer <changer@example.com>
    Date:   XXX
    
        Added a new file at file/that/was/deleted
    

    Then, use git show to view the version before it was deleted:

    $ git show beforeCommit:file/that/was/deleted
    
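A related trick, if the deletion commit’s ID is the only one you have on hand: Git’s ^ suffix refers to a commit’s parent, so you can reference the deletion commit directly without hunting for the one before it. Here’s a self-contained sketch (the repo and file names are invented for the demo):

```shell
# Build a throwaway repo, commit a file, delete it, then recover its
# contents via the parent (^) of the deleting commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo

echo "important notes" > notes.txt
git add notes.txt && git commit -qm "Add notes.txt"
git rm -q notes.txt && git commit -qm "Delete notes.txt"

# HEAD is the deletion commit; HEAD^ is the commit just before it
git show HEAD^:notes.txt
```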

    Code Review Software Sucks. Here's How I Would Improve It

    This post is about code reviews, and the software that facilitates them.

    I’ll be honest: I’m not a huge fan of code reviews, so a lot of what I say below can probably be dismissed as the complaints of someone who blames their tools. Be that as it may, I do think there is room for improvement in the tooling used to review code, and this post touches on a few additional features that would help.

    First I should say that I have no experience with dedicated code review tools. I’m mainly talking about code review tools that are part of hosted source code repository systems, like GitHub, GitLab, and Bitbucket. And since these are quite large and comprehensive systems, it might be that the priorities are different compared to a dedicated code review tool with a narrower focus. Pull requests and code reviews are just one of the many tasks that these systems need to handle, along with browsing code repositories, managing CI/CD runs, hosting binary releases, etc.

    So I think it’s fair to say that such tools may not have the depth that a dedicated code review tool would have. After all, GitHub Actions does not have the same level of sophistication as something like Jenkins or BuildKite, either.

    But even so, I’d say that there’s still room for improvement in the code review facilities these systems do offer — improvements that could be tailored more to the code review workflow. It’s a bit like using a text editor to manage your reminders. Yes, you can use a text editor, but most of the features related specifically to reminders will not be available to you, and you’ll have to plug the feature gap yourself. Compare this to a dedicated reminder app, which would do a lot of the work for you, such as notifying you of upcoming reminders or providing the means to mark an item as done.

    So, what should be improved in the software that is used to review code? I can think of a few things:

    Inbox: When you think about it, code reviews are a bit like emails and Jira tickets: they come to you and require you to action them in some way in order to get the code merged. But the level of attention you need to give them changes over time. If you’ve made comments on a review, there’s really no need to look at it again until the code has been updated or the author has replied.

    But this dynamic aspect of code reviews is not well reflected in most of these systems. Usually what I see is simply a list of pull requests that have not yet been approved or merged, and I have to keep track myself of the reviews that need my attention now, versus those where I’m waiting on action from others.

    I think what would be better than a simple list would be something more of an inbox: a subset of the open reviews that are important to me now. As I action them, they’ll drop off the list and won’t come back until I need to action them again.

    The types of reviews that I’d like to appear in the inbox, in the order listed below, would be the following:

    1. Ones that have been opened in which I’ve made comments that have been responded to — either by a code change or a reply — that I need to look at. The ones with more responses would appear higher in the list than the ones with fewer.
    2. Reviews that are brand new that I haven’t looked at yet, but others have.
    3. Brand new reviews that haven’t been looked at by anyone.

    In fact, the list can be extended to filter out reviews that I don’t need to worry about, such as:

    1. Reviews that I’ve made comments on that have not been responded to yet. This indicates that the author has not gotten around to them yet, in which case looking at the pull request again serves no purpose.
    2. Reviews that have enough approvals by others and do not necessarily need mine.
    3. Reviews that I’ve approved.

    This doesn’t necessarily need to replace the list of open reviews: that might still be useful. But it will no longer be the primary list of reviews I need to work with during the day to day.

    Approval pending resolution of comments: One thing I always find myself indecisive about is when I should hit that Approve button. Let’s say I’ve gone through the code and made some comments that I’d like the submitter to look at, but the rest of the code looks good. When should I approve the pull request? If I do it now, the author may not have seen the comments, or any indication that I’d like them to make changes, and will go ahead and merge.

    I guess then the best time to approve it is when the changes are made. But that means the onus is on me to remember to review the changes again. If the requests are trivial — such as renaming things — then I trust the person to make the changes, and going through to review them once again is a waste of time.

    This is where “Approval pending resolution of comments” would come in handy. Selecting this approval mode would mean that my approval is granted once the author has resolved the outstanding review comments. This would not replace the regular approval mode: if there are changes which do require a re-review, I’d just approve the request normally once I’ve gone through it again. But it’s one more way to let the workflow of code reviews work in my favour.

    Speaking of review comments…

    Review comment types: I think it’s a mistake to assume that all review comments are equal. Certainly in my experience I find myself unable to read the urgency of comments on the reviews I submit. I also find it difficult to telegraph urgency in the comments I make on code reviews of others. This usually results in longer comments with phrases such as “you don’t have to do this now, but…”, or “something to consider in the future…”

    Some indication of the urgency of the comment alongside the comment itself would be nice. I can think of a system that has at least three levels:

    1. Request for change: this is the highest level. It’s an indication that you see something wrong with the code that must be changed. These comments need to be resolved — with either a change to the code or a discussion of some sort — before the code is merged.
    2. Request for improvement: This is a level lower, and indicates that there is something in the code that may need to be changed, but not doing so would not block the code review. This can be used to suggest improvements to how things were done, or maybe to suggest an alternative approach to solving the problem. All those nitpicking comments can go here.
    3. Comments: This is the lowest level. It provides a way to make remarks about the code that require no further action from the author. Uses for this might be praise for doing something a certain way, or FYI-type comments that the submitter may need to be aware of for future changes.

    Notes to self: Finally, one thing found in way too few systems that deal with shared data is the ability to annotate pull requests or the commented files with private notes. These wouldn’t be seen by the author or any of the other reviewers, and are only there to facilitate making notes to self, such as “Looked at it, waiting for comments to be addressed”, or “review no longer pending”. This is probably the minimum ask, and would be less important if the other suggestions above were addressed.

    So that’s how I’d improve code review software. It may be that I’m the only one with this problem, and that others are perfectly able to review code effectively without these features. But I know they would work for me, and if I start seeing them in services like GitHub or GitLab, I probably would start using them.

    Broadtail 0.0.7

    Released Broadtail 0.0.7 about a week ago. This included some restyling of the job list on the home page, which now features a progress bar updated over websockets (no need for page refreshes anymore).

    For the frontend, I’m using the WebSocket API that comes with the browser. There’s not much to it — it’s managed by a Stimulus controller which sets up the websocket and listens for updates. The updates are then pushed as custom events to the main window, which the Stimulus controllers used to update the progress bars are listening out for. This allows a single Stimulus controller to manage the websocket connection and make use of the window as a message bus.

    Working out the layers of the progress bar took me a bit of time, as I wanted to make sure the text in the progress bar itself remained readable as the bar filled. I settled on an HTML tree that looks like the following:

    <div class="progressbar">
      <!-- The filled in layer, at z-index: 10 -->
      <div class="complete">
        <span class="label">45% complete</span>
      </div>
    
      <!-- The unfilled layer -->
      <span class="label">45% complete</span>
    </div>
    

    As you can see, there’s a base layer and a filled-in layer that overlaps it. Both layers have a label containing the same status message. As the .complete layer fills in, it progressively covers the unfilled layer and its label. The various CSS properties used to get this effect can be found here.

    The backend was a little easier. There’s a good websocket library for Go that handles the connection upgrade and provides a nice API for posting JSON messages. Once the upgrade is complete, a goroutine servicing the connection starts listening for status updates from the jobs manager and forwards them as JSON text messages.
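The forwarding loop itself can be sketched like so. This isn’t Broadtail’s actual code: the type and function names are made up, and the websocket connection is hidden behind a tiny interface (gorilla/websocket’s *Conn, for instance, provides a WriteJSON method of this shape), which also makes the sketch runnable without any dependencies.

```go
package main

import "fmt"

// ProgressUpdate stands in for the status messages the jobs manager emits;
// the field names here are assumptions, not Broadtail's actual types.
type ProgressUpdate struct {
	JobID   string  `json:"jobId"`
	Percent float64 `json:"percent"`
}

// jsonWriter abstracts the websocket connection. gorilla/websocket's *Conn
// satisfies it, since it has a WriteJSON(v interface{}) error method.
type jsonWriter interface {
	WriteJSON(v interface{}) error
}

// forwardUpdates services one connection: it listens on the updates channel
// and forwards each message as JSON until the channel closes or a write fails
// (a failed write usually means the client disconnected).
func forwardUpdates(conn jsonWriter, updates <-chan ProgressUpdate) {
	for u := range updates {
		if err := conn.WriteJSON(u); err != nil {
			return
		}
	}
}

// memWriter is a tiny in-memory stand-in used here to demo the loop.
type memWriter struct{ sent []ProgressUpdate }

func (m *memWriter) WriteJSON(v interface{}) error {
	m.sent = append(m.sent, v.(ProgressUpdate))
	return nil
}

func main() {
	updates := make(chan ProgressUpdate, 2)
	updates <- ProgressUpdate{JobID: "job-1", Percent: 45}
	updates <- ProgressUpdate{JobID: "job-1", Percent: 100}
	close(updates)

	w := &memWriter{}
	forwardUpdates(w, updates)
	fmt.Println(len(w.sent)) // prints 2 (both updates forwarded)
}
```

In the real thing, the goroutine running this loop would be started right after the connection upgrade succeeds.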

    Although this works, it’s not perfect. One small issue is that the frontend will not reconnect if the websocket encounters an error. I imagine it’s just a matter of listening for the relevant events and retrying, but I’ll need to learn more about how this actually works. Another thing is that the styling of the progress bar relies on fixed widths. If I get around to reskinning the entire application, that might be the time to address this.

    The second thing this release has is a simple integration with Plex. If this integration is configured, Broadtail will now send a request to Plex to rescan the library for new files, meaning there’s no real need to wait for the scheduled rescan to occur before the videos are available in the app. This simply uses Plex’s API, but it needs the Plex token, which can be found using this method.
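As a rough sketch, triggering a rescan boils down to a single HTTP GET against the section’s refresh endpoint. The /library/sections/{id}/refresh path is the commonly documented Plex scan trigger; the function names, host, and section ID below are placeholders, not Broadtail’s actual code.

```go
package main

import (
	"fmt"
	"net/http"
)

// plexRefreshURL builds the URL for Plex's library section scan endpoint,
// authenticated via the X-Plex-Token query parameter.
func plexRefreshURL(baseURL, sectionID, token string) string {
	return fmt.Sprintf("%s/library/sections/%s/refresh?X-Plex-Token=%s",
		baseURL, sectionID, token)
}

// refreshPlexLibrary asks Plex to rescan one library section for new files.
func refreshPlexLibrary(baseURL, sectionID, token string) error {
	resp, err := http.Get(plexRefreshURL(baseURL, sectionID, token))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("plex returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder host, section, and token for illustration only.
	fmt.Println(plexRefreshURL("http://plex.local:32400", "1", "TOKEN"))
}
```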

    Anyway, that’s it for this version. I’m working on re-engineering how favourites work for the next release. Since this is still in early development, I won’t be putting in any logic to migrate the existing favourites, so just be wary that you may lose that data. If that’s going to be a problem, feel free to let me know.

    Learning Through Video

    Mike Crittenden wrote a post this morning about how he hates learning through videos. I know for myself that I occasionally do prefer videos for learning new things, but not always.

    Usually if I need to learn something, it would be some new technology that I have to know for my job. In those cases, I find that if I have absolutely no experience in the subject matter, a good video which provides a decent overview of the major concepts helps me a great deal. Trying to learn the same thing from reading a lengthy blog post, especially one heavy on jargon, is less effective for me: I find myself getting tired and losing my place. Now, this could just be because of the writing (dry blocks of text are the worst), but I tend to do better if the posts are shorter and formulated more like a tutorial.

    If there is a video, I generally prefer it to be delivered in the style of a lecture or presentation. Slides that I can look at while the presenter is speaking are fine, but motion graphics or a live demo is better, especially if the subject is complex enough to warrant it. In either case, I need something visual that I can actually watch. Having someone simply talk to the camera really doesn’t work for me, and makes watching the video more of a hassle (although it’s slightly better if I just listen to the audio).

    Once I’ve become proficient in the basics, learning through video becomes less useful to me, and a decent blog post or documentation page works better. By that time, my learning needs become less about the basics and more about something specific, like how to do a particular thing or the details of a particular item. At that point, speed is more important to me, and I prefer something I can skim and search in my own time, rather than watching videos that tend to take much longer.

    So that’s how and when I prefer to learn something from video. I’ll close by saying that this is my preferred approach when I need to learn something for work. If it’s during my downtime, either a video or a blog post is fine, so long as my curiosity is satisfied.

    Some More Updates of Broadtail

    I’ve made some more changes to Broadtail over the last couple of weeks.

    The home page now shows a list of recently published videos below the currently running jobs.

    Clicking through to “Show All” displays all the published videos. A simple filter can be applied to narrow them down to videos whose titles contain the given keywords (note: nothing fancy here, just tokenisation and an OR query).
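For the curious, that kind of OR matching is roughly equivalent to the following Go sketch. This is not the actual Broadtail code, and the function name is made up: the query is tokenised on whitespace, and a title matches if it contains any one of the keywords.

```go
package main

import (
	"fmt"
	"strings"
)

// matchesFilter reports whether any whitespace-separated keyword in query
// appears in title, case-insensitively. An empty query matches nothing.
func matchesFilter(title, query string) bool {
	t := strings.ToLower(title)
	for _, kw := range strings.Fields(strings.ToLower(query)) {
		if strings.Contains(t, kw) {
			return true // OR semantics: one keyword hit is enough
		}
	}
	return false
}

func main() {
	fmt.Println(matchesFilter("Restoring a Vintage Radio", "radio repair")) // true
	fmt.Println(matchesFilter("Cooking With Gas", "radio repair"))          // false
}
```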

    Finally, items can now be favourited. This can be used to mark videos that you may want to download in the future. I personally use this to keep the “new videos” list on the Plex server these videos end up on to a minimum.

    Time and Money

    Spending a lot of time in Stripe recently. It’s a fantastic payment gateway and a pleasure to use, compared to something like PayPal, which really does show its age.

    But it’s so stressful and confusing dealing with money and subscriptions. The biggest uncertainty is dealing with anything that takes time. The problem I’m facing now is this: if the customer buys something like a database, which is billed a flat fee every month, and then chooses to buy another database during the billing period, can I track that with a single subscription and simply adjust the quantity? My current research suggests that I can, and that Stripe will handle the prorating of partial payments and credits. They even have a nice API to preview the next invoice, which can be used to show the customer how much they will be paying.
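To make the proration concrete, here’s a back-of-the-envelope sketch in Go of how I understand it to work: a mid-period quantity increase is charged only for the fraction of the billing period remaining. Stripe’s real calculation works on exact timestamps, and I haven’t verified this against a live account, so treat it as an illustration rather than Stripe’s actual algorithm.

```go
package main

import "fmt"

// proratedCharge estimates the immediate charge (in cents) for adding
// addedQty units of a flat monthly fee partway through the billing period.
// The customer pays for the new units only for the time left in the period.
func proratedCharge(unitPriceCents, addedQty, daysLeft, daysInPeriod int64) int64 {
	return unitPriceCents * addedQty * daysLeft / daysInPeriod
}

func main() {
	// Buying one extra $20/month database halfway through a 30-day period
	// costs roughly $10 now; the next invoice then bills the full $40.
	fmt.Println(proratedCharge(2000, 1, 15, 30)) // prints 1000 (i.e. $10.00)
}
```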

    But despite all the documentation, test environments, and simulations, I still can’t be sure that it will happen in real life, when real money is exchanged in real time. I guess some real life testing would be required. 💸

    Cling Wrap

    I bought this roll of cling wrap when I moved into my current place. Now, after 6.5 years and 150 metres, it’s finally all used up.

    Cling wrap, now empty

    In the grand scheme of things, this is pretty unimportant. It happens every day: people buy something, they use it, and eventually it’s all used up. Why spend the time and energy writing and publishing this post to discuss it? Don’t you have better things to do?

    And yet, there’s still a feeling of weight to this particular event that I felt was worth documenting. Perhaps it’s because it was the first roll of cling wrap I bought after I moved out. Or maybe it’s because it lasted for this long, so long in fact that the roll I bought to replace it was sitting in my cupboard for over a year. Or maybe it’s the realisation that with my current age and consumption patterns, I probably wouldn’t use up more than 7 rolls like this in my lifetime.

    Who knows? All I know is that despite the banality of the whole affair, I just spent the better part of 20 minutes trying to work out how best to talk about it here.

    I guess I’m in a bit of a reflective mood today.

    Trip to Ballarat and the Beer Festival

    I had the opportunity to go to Ballarat yesterday to attend the beer festival with a couple of mates. It’s been a while since I last travelled to Ballarat — I think the last time was when I was a kid. It was also the first time I took the train up there. I’d wanted to travel the Ballarat line for a while, but I never had a real reason to do so.

    The festival started at noon but I thought I’d travel up there earlier to look around the city for a while.

    I didn’t stay long in the city centre as I needed to take the train to Wendouree, where the festival was located.

    The beer festival itself was at Wendouree park. The layout of the place was good: vendors (breweries, food, etc.) were laid out along the perimeter, and general seating was available in the middle. They did really well with the seating: there were more than enough tables and chairs for everyone there.

    The day was spectacular, if a bit sunny: the tables and chairs in the shade were prime real estate. The whole atmosphere was pleasant: everyone was just out to have a nice time. It got pretty crowded as the day wore on. Lots of people with dogs, and a few families as well.

    I’m not a massive beer connoisseur so I won’t talk much about the beers. Honestly, the trip for me was more of a chance to get out of the city and catch up with mates. But I did try a pear cider for the first time; it was a little on the sweet side, which I guess was to be expected. I also had a Peach Melba inspired pale ale that was actually kind of nice.

    The trip home was a bit of an adventure. A train was waiting at Wendouree station when I got there. There was nobody around and it was about 5 minutes until departure, so I figured I’d board. Turns out it was actually not taking passengers. I was the only one that boarded, and just as I realised it was not in service, the doors closed and the train departed. I had to make my presence known to the driver and one other V/Line worker. They were really nice about it, and fortunately for me, they were on their way to Ballarat anyway, so it wasn’t a major issue. Even so, it was quite embarrassing. Fortunately the train home was easy enough.
