Golang
- Authentication added with username/password
- Retry failed video downloads.
- The ability to download YouTube videos in audio only (all these “podcasts” that are only available as YouTube videos… 😒)
- The ability to handle the lifecycle of videos a little better than it does now. It’s already doing this for errors: when a download fails, the video is deleted. But it would be nice if it did things like automatically delete videos 30 days after downloading them. This would require more control over the “video store” though.
- The (theoretical) ability to blog from anywhere: This is one of the weaknesses of static site hosting that I’ve run into when trying this approach before. I tend to work on different machines throughout the week, which generally means that when I find inspiration, I don’t have the source files on hand to work on them. The source files are kept in source control and are hosted on GitHub, but I’ve found that I tend to be quite lax about making sure I have the correct version checked out and that any committed changes are pushed. This is one of the reasons why I like micro.blog: having a service with a web interface that I can just log into, and that has all the posts there, means that I can work on them as long as I have an internet connection.
- Full control over the appearance and workflow: Many of the other services provide the means for adjusting the appearance of the web page, so this is only a minor reason for taking on this effort. But one thing that I would find useful is having some control over the blogging workflow itself. There are some ideas that I might like to include, like displaying summaries on the main page, or sharing review links for posts prior to publishing them. Being able to easily do that in a codebase that I’m familiar with would help.
- Good practice of my skills: As someone who tends to work on backend systems for his day-to-day job, some of my development and operational skills are a little rusty. Building, hosting and operating a site would provide an opportunity to exercise these muscles, and may also come in handy if I were to choose to build something for others to use (something that I’ve been contemplating for a while).
- Security and stability: This is something that comes for free with a blogging platform but that I’ll need to take on myself. There’s always a risk with putting a new system onto the internet, and having a website with remote administration is an invitation for others to abuse it. To me this is another area of development I believe I need to work on. Although I don’t intend to store any personal information but my own, I do have to be mindful of the risks of putting anything online, and make sure that the appropriate mitigations are in place. I’ll also have to make sure that I’m maintaining proper backups of the content, and periodically exercising them to make sure they work. The fact that my work is at stake is a good incentive to keep on top of this.
- Distractions: Yeah, this is a classic problem with me: I use something that I build, I find a small problem or something that can be improved, then instead of finishing the task, I work on the code for the service instead. This may have to be something that only gets addressed with discipline. It may help to use the CMS on a machine that doesn’t have the source code.
- Will this delay publishing of the blog? No. The CMS is functionally complete but there are some rough edges that I’d like to smooth out. I hope to actually start publishing this new blog very shortly.
- Will I be moving the hosting of this blog onto the new CMS? No, far from it. The service here works great for how I want to maintain this blog, and the community aspects are fantastic. The CMS also lacks the IndieWeb features that micro.blog offers, and it may be some time before they get built.
Although not as much as many other languages. ↩︎
Dear Go developers,
You don’t need to return pointers,
Unless you do need to return pointers.
But if you think you need to return pointers,
You probably don’t need to return pointers.
Instead, consider just returning regular struct values. Keep the nil-pointer panics at bay.
The exhaustive Go linter complaining about missing cases for switch statements with a default clause is killing me.
missing cases in switch of type this, and this, and this, and this, and…

Request for a Go linter: something that would warn when a variable with the name err is not of type error:
func Bla() {
err := 123 // 'err' not of type 'error'
}
Would’ve saved me a few hours today trying to test if a Future was not-nil, without actually waiting for the result.
Moan-routine: LO's Predicate Signatures
A critique of the function signatures the lo package offers for functions like Map and Filter.
Does Google ever regret naming Go “Go”? Such a common word to use as a proper noun. I know the language devs prefer not to use Golang, but there’s no denying that it’s easier to search for.
Closed Project: Broadtail
Date: 2021 – 2022
Status: Paused
The first project I’ll talk about is Broadtail. I think I’ve talked about this one before, or at least posted a screenshot of it. I started work on this in 2021. The pandemic was still raging, and much of my downtime was spent watching YouTube videos. We were coming up to a federal election, and I was getting frustrated with seeing YouTube ads from political parties that offended me. This was before YouTube Premium, so there was no real way to avoid these ads. Or was there?
A Frontend For youtube-dl
I had some experience with youtube-dl in the past, downloading and saving videos that I hoped to watch later. I had recently discovered that YouTube also publishes RSS feeds for channels and playlists. So I was wondering if it was possible to build something that could use both of these. My goal was to have something that would allow me to subscribe to YouTube channels via RSS, download videos using youtube-dl, and watch them via Plex on my TV. This was to be deployed on an Intel NUC that I was using as a home server, and be accessible via the web browser.

I decided to get the YouTube downloading feature built first. I started a new Go project and got something up and running reasonably quickly. It was a good excuse to get back to vanilla Go web development, using http.Handle and Go templates, instead of relying on frameworks like Buffalo (don’t get me wrong, I still like Buffalo, but it is quite heavy-handed).
It was also an excuse to try out StormDB, which is an embedded NoSQL data store. The technology behind it is quite good — it uses B-trees and memory-mapped files under the covers — and I tend to use it for other things as well. It proved to be quite usable, apart from not allowing multiple readers/writers at the same time, which made deployments difficult.
But the backend code was the easy part. What I lacked was any sense of web design. That’s one good thing about a framework like Buffalo: it comes with a usable style framework out of the box (Bootstrap). If I were to go my own way, I’d have to start from scratch.
The other side of that coin, though, is that it would give me the freedom to go for something that’s slightly off-beat. So I went for an aesthetic that reminded me of early-2000s web design: sans-serif fonts, grey lines everywhere, dull pastel colours, small controls and widgets (I stopped short of gradients and table-based layouts).
This version also included a hand-rolled job manager that I used for a bunch of other things. It’s… fine. I wouldn’t use it for anything “real”, but it had a way of managing job lifecycles, updating progress, and the ability to cancel a running job. So for that, it was good enough.
Finally, it needed a name. At the time, I was giving all my projects bird-like codenames, since I can’t come up with names that I like. I eventually settled on Broadtail, a reference to broadtail parrots, like the rosella.
RSS Subscriptions
It didn’t take long after I got this up and running before I realised I needed the RSS subscription feature. So that was the next thing I added.

The way it worked was pretty straightforward. One would set up a subscription to a YouTube channel or playlist. Broadtail would then poll that RSS feed every 15 minutes or so, and show new videos on the homepage. Clicking a video item would bring up details and an option to download it.

Each RSS subscription had an associated target directory. Downloading an ad-hoc video would just dump it in a configured directory, but I wanted to make it possible to organise downloads from feeds in a more structured way. This wasn’t perfect though: I can’t remember the reason, but I had some trouble with this, and most videos just ended up in the download directory by default (it may have had to do with making directories or video permissions).

Only the feed polling was automatic at this stage. I was not interested in having all shows downloaded, as that would eat up bandwidth and disk storage. So users still had to choose which videos they wanted to download. The list of recent feed items was available from the home screen, so they were able to just do so from there.
I also wanted to keep abreast of what jobs were currently running, so the home screen also had the list of running jobs.

The progress bar was powered by a WebSocket backed by a goroutine on the server side, which meant realtime updates. Clicking a job would also show you the live output of the youtube-dl command, making it easy to troubleshoot any jobs that failed. Jobs could be cancelled at any time, but one annoying thing that was missing was the ability to retry a failed job. If a download failed, you had to spin up a new job from scratch. This meant clearing out the old job from the file system and finding the video ID again from wherever you found it.
If you were interested in a video but not quite ready to download it right away, you could “favourite” it by clicking the star. This was available in every list that showed a video, and was a nightmare to code up, since I was keeping references to where the video came from, such as a feed or a quick look. Keeping on top of all the possible references became difficult with the non-relational StormDB, and the code that handled this became quite dodgy (the biggest issue was dealing with favourites from feeds that were deleted).

Rules & WWDC Videos
The basics were working out quite well, but it was all so manual. Plus, going from video publication to having something to watch was not timely. The RSS feed from YouTube was always several hours out of date, and downloading whole videos took quite a while (it may not have been realtime, but it was pretty close).
So one of the later things I added was a feature I called “Rules”. These were automations that would run when the RSS feed was polled, and would automatically download videos that met certain criteria (you could also hide them or mark them as downloaded). I quite enjoy building these sorts of complex features, where the user is able to configure sophisticated automated tasks, so this was a fun thing to code up. And it worked: video downloads would start when the videos became available, and they would usually be in Plex by the time I wanted to watch them (it was also possible to ping Plex to update the library once the download finished). It wasn’t perfect though: not retrying failed downloads did plague it a little. But it was good enough.

This was near the end of my use of Broadtail. Soon after adding Rules, I got onto the YouTube Premium bandwagon, which hid the ads and removed the need for Broadtail altogether. It was a good thing too, as the Plex Android app had this annoying habit of causing the Chromecast to hang, and the only way to recover from this was to reboot the device.
So I eventually returned to just using YouTube, and Broadtail was abandoned.
Although, not completely. One last thing I did was extend Broadtail’s video download capabilities to include Apple WWDC videos. These were treated as a special kind of “feed” which, when polled, would scrape the WWDC video website. I was a little uncomfortable doing this, but I knew that once videos were published, they wouldn’t change. So this “feed” was never polled automatically, and the user had to refresh it manually.
Without the means to stream them using AirPlay, downloading them and making them available in Plex was the only way I knew of watching them on my TV, which is how I prefer to watch them.
So that’s what Broadtail is primarily used for now. It’s no longer running as a daemon: I just boot it up when I want to download new videos. And although it’s only a few years old, it’s starting to show signs of decay, with the biggest issue being youtube-dl slowly being abandoned.
So it’s unlikely that I’ll put any serious effort into this now. But if I did, there are a few things I’d like to see:
So, that’s Broadtail.
Why I like developing in Go vs. something like NodeJS: my development setup doesn’t randomly break for some weird reason. When it does break, it’s because of something I did explicitly.
Why I'm Considering Building A Blogging CMS
I’m planning to start a new blog about Go development, and one of the things that I’m currently torn on is how to host it. The choices look to be either using a service like blot.im or micro.blog or some other hosting service, using a static site generation tool like Hugo, or building my own CMS for it. I know that one of the things people tell you about blogging is that building your own CMS is not worth your time: I even described it as the “second cardinal sin of programming” in my first post to micro.blog.
Nevertheless, I think at this stage I will actually do just that. Despite the effort that comes from building a CMS I see some advantages in doing so:
Note that price is not one of these reasons. In fact it might actually cost me a little more to put together a site like this. But I think the experience and control that I hope to get out of this endeavour might be worth it.
I am also aware of some of the risks of this approach. Here is how I plan to mitigate them:
I will also have to be aware of the amount of time I put into this. I actually started working on a CMS several months ago so I’m not starting completely from scratch, but I’ve learnt from too many of my other personal projects that maintaining something like this is a long-term commitment. It might be fine to occasionally tinker with it, but I cannot spend too much effort working on the system at the expense of actually writing content.
So this is what I might do. I might give myself the rest of the month to do what I need to do to get it up to scratch, then I will start measuring how much time I spend working on it vs. the amount of time I actually use it to write content. If the time I spend working on the code base is more than 50% of the time I spend writing content, then that will indicate to me that it’s a distraction, and I will abandon it for an alternative setup. To keep myself honest, I’ll post the outcomes of this on my micro blog (if I remember).
A few other minor points:
I’ll be interested if anyone has any thoughts on this so feel free to reply to this post on micro.blog.
Update: I’ve posted a follow-up on this decision about a month after writing this.
Official support for file embedding coming to Go
I’m excited to see, via Golang Weekly, that the official support for embedding static files in Go is being realised, with the final commit merged to the core a couple of days ago. This, along with the new file-system abstraction, will mean that it will be much easier to embed files in Go applications, and make use of them within the application itself.
One of the features I like about coding in Go is that the build artefacts are statically linked executables that can be distributed without any additional dependencies. This means that if I wanted to share something, all I need to do is give them a single file that they can just run, without needing to worry about whether particular dependencies or runtimes are installed prior to doing so.
However, there are times when the application requires static files, and to maintain this simple form of distribution, I generally want to embed these files within the application itself. This comes up surprisingly often and is not something that was officially supported within the core language tools, meaning that this gap had to be filled by third parties. Given the number of tools available to do this, I can see that I’m not alone in needing this. And as great as it is to see the community step in to fill this gap, relying on an external tool complicates the build process a bit1: making sure the tool is installed, making sure that it is executed when the build runs, making sure that the tool is actively being maintained so that changes to the language will be supported going forward, etc.
One other thing about these tools is that the API used to access the embedded files is always slightly different as well, meaning that if there’s a need to change tools, you end up needing to change your code to actually access the statically embedded files.
Now that embedding files is officially coming to the language itself, there is less need to rely on all of this. There’s no need to worry about various tools being installed on the machines that are building the application. And the fact that this feature will work hand in hand with the new file system abstraction means that embedded files would be easier to work with within the code base itself.
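The feature shipped as the embed package and the //go:embed directive. A minimal sketch, assuming a static/index.html file sits next to the source; note how the embedded tree satisfies the new fs.FS abstraction directly:

```go
package main

import (
	"embed"
	"fmt"
	"io/fs"
)

//go:embed static
var staticFiles embed.FS

func main() {
	// The embedded tree implements fs.FS, so the usual
	// file-system helpers work on it without adapters.
	data, err := fs.ReadFile(staticFiles, "static/index.html")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```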
So kudos to the core Go development team. I’m really looking forward to using this.
Remarks on Go's Error Handling using Facebook's SDK Crashes As a Framing Device
There are new reports of Facebook’s SDK crashing apps again due to server changes. The post above links to a Bugsnag article which explores the underlying cause: that’s worth a read.
I’m going to throw a shout-out to Go’s approach to error handling here. I’m not saying that this shows the superiority of Go over Objective-C: these sorts of things can happen in any language. The difference I want to highlight is that Go treats error handling as part of the standard flow of the language, rather than the exceptional flow. This forces you to think about error conditions when you’re making calls to code that can fail.
This does result in some annoying code of the form:
result, err := doOperation()
if err != nil {
return nil, err
}
result2, err := doSecondOperation(result)
if err != nil {
return nil, err
}
// and so on
and there’s nothing stopping you from completely ignoring the error.
But there’s no way to call these two functions without dealing with the error in some way. That is, there’s no way to simply write doSecondOperation(doOperation()): you’re given an error and you have to do something with it. So you might as well handle it gracefully.
P.S. I should probably state that I know very little about Objective-C. I do know that a fair number of APIs in AppKit and UIKit make use of completion handlers which can provide an error value, although to me it seems a little easier to ignore it vs. dealing with the error values in Go. I also know that Swift makes improvements here, forcing you to prefix calls that can fail with the try keyword. Again, this is not to rag on Objective-C; rather it’s a comment on the idioms of error handling in Go, and how they could have prevented the app from crashing in events like these.
On Go’s Type Parameters Proposal
The developers of Go have released a new draft proposal for type parameters. The biggest change is the removal of contracts, a concept which complicated the proposal somewhat, in favour of interfaces to express the same thing. You can read the latest proposal here.
I think they’re starting to reach a great balance between what currently exists in the language and the features required to make a useful type parameter system. The use of interfaces to constrain the type, that is, to declare the operations that a type must implement in order to be used as a type parameter for a function or struct, makes total sense. It also makes moving to type parameters in some areas of the standard library trivial. For example, the sort.Sort function prototype:
func Sort(data Interface)
can simply be written as:
func Sort(type T Interface)(data T)
I do have some minor concerns though. The biggest one is the use of interfaces to express constraints related to operators, which are expressed as type lists. I think listing the types that a particular type parameter can be instantiated with makes sense. It dramatically simplifies the process of expressing a constraint based on the operators a particular type supports. However, using the concept of interfaces for this purpose seems a little strange, especially when these interfaces cannot be used anywhere other than in type constraints. To be fair, it seems like they recognise this, and I suspect that in practice these interfaces will be defined in a package that can simply be imported, thereby not requiring us to deal with them directly unless we need to.
But all in all, this looks promising. It is starting to feel like the design is coming together, with the rough edges starting to be smoothed out. I appreciate the level of careful consideration the core Go developers are exhibiting in this process. This is after all a large change to the language, and they only have one real chance at this. Sure we have to wait for it, but especially in language design, mistakes are forever.