I’ve been bouncing around projects recently, but last week I settled on one that I’ve been really excited about. This is reboot five of this idea, but I think this time it’ll work, because I’m not building it for myself, at least not entirely. Anyway, more to say when I have something to show.

Shocking to hear Gruber on Dithering tell that story about the developer’s experience with Apple’s DevRel team. To be told, after choosing to opt out of being on the Vision Pro on day one, that they’re “going to regret it”? Is it Apple’s policy to be offensive to developers now?

Broadtail

Date: 2021 – 2022

Status: Paused

The first project I’ll talk about is Broadtail. I think I’ve talked about this one before, or at least posted a screenshot of it. I started work on this in 2021. The pandemic was still raging, and much of my downtime was spent watching YouTube videos. We were coming up to a federal election, and I was getting frustrated with seeing YouTube ads from political parties that offended me. This was before I had YouTube Premium, so there was no real way to avoid these ads. Or was there?

A Frontend For youtube-dl

I had some experience with youtube-dl in the past, downloading and saving videos that I hoped to watch later. I had also recently discovered that YouTube publishes RSS feeds for channels and playlists. So I was wondering if it was possible to build something that could use both of these. My goal was to have something that would allow me to subscribe to YouTube channels via RSS, download videos using youtube-dl, and watch them via Plex on my TV. This was to be deployed on an Intel NUC that I was using as a home server, and be accessible via the web browser.
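
The two building blocks look roughly like this (a sketch, not Broadtail’s actual code; the channel and video IDs are placeholders):

package main

import (
	"log"
	"os/exec"
)

// YouTube publishes an RSS/Atom feed per channel (and per playlist);
// CHANNEL_ID is a placeholder.
const channelFeed = "https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID"

func main() {
	// Shell out to youtube-dl to download a video discovered via the feed.
	cmd := exec.Command("youtube-dl", "-o", "%(title)s.%(ext)s",
		"https://www.youtube.com/watch?v=VIDEO_ID")
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}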

An early version of Broadtail, with only the download frontend.

I decided to get the YouTube downloading feature built first. I started a new Go project and got something up and running reasonably quickly. It was a good excuse to get back to vanilla Go web development, using http.Handle and Go templates, instead of relying on frameworks like Buffalo (don’t get me wrong, I still like Buffalo, but it is quite heavy-handed).
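
For those unfamiliar, the vanilla setup looks something like this (a minimal sketch with a made-up handler and template, not Broadtail’s actual code):

package main

import (
	"html/template"
	"log"
	"net/http"
)

// A stand-in page template; the real templates live in their own files.
var index = template.Must(template.New("index").Parse(
	"<h1>Downloads</h1><p>{{.Running}} job(s) running</p>"))

func main() {
	// Register a handler on the default mux and render the template.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if err := index.Execute(w, struct{ Running int }{Running: 1}); err != nil {
			log.Println("render:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}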

It was also an excuse to try out StormDB, which is an embedded NoSQL data store. The technology behind it is quite good — it uses B-trees and memory-mapped files under the covers — and I tend to use it for other things as well. It proved to be quite usable, apart from not allowing multiple readers/writers at the same time, which made deployments difficult.
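
From memory, working with it looks roughly like this (the Video type here is made up for illustration):

package main

import "github.com/asdine/storm/v3"

// Video is a hypothetical record type, just for illustration.
type Video struct {
	ID    string `storm:"id"`
	Title string
}

func main() {
	// Open (or create) the single-file data store.
	db, err := storm.Open("broadtail.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Save a record, then fetch it back by ID.
	db.Save(&Video{ID: "abc123", Title: "Some video"})
	var v Video
	db.One("ID", "abc123", &v)
}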

But the backend code was the easy part. What I lacked was any sense of web design. That’s one good thing about a framework like Buffalo: it comes with a usable style framework out of the box (Bootstrap). If I were to go my own way, I’d have to start from scratch.

The other side of that coin, though, is that it would give me the freedom to go for something that’s slightly off-beat. So I went for an aesthetic that reminded me of early-2000s web design: sans-serif fonts, grey lines everywhere, dull pastel colours, small controls and widgets (I stopped short of gradients and table-based layouts).

This version also included a hand-rolled job manager that I used for a bunch of other things. It’s… fine. I wouldn’t use it for anything “real”, but it had a way of managing job lifecycles, updating progress, and cancelling a running job. So for that, it was good enough.
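
The overall shape of it was something like this (a reconstruction from memory, heavily simplified):

package jobs

import (
	"context"
	"sync"
)

// Job tracks the lifecycle of a single background task.
type Job struct {
	Name     string
	Progress float64 // 0.0 to 1.0
	cancel   context.CancelFunc
}

// Manager starts jobs and allows them to be cancelled.
type Manager struct {
	mu   sync.Mutex
	jobs map[int]*Job
	next int
}

func NewManager() *Manager {
	return &Manager{jobs: make(map[int]*Job)}
}

// Start launches fn in a goroutine. fn should watch ctx for
// cancellation and report progress through the callback.
func (m *Manager) Start(name string, fn func(ctx context.Context, progress func(float64)) error) int {
	ctx, cancel := context.WithCancel(context.Background())
	m.mu.Lock()
	m.next++
	id := m.next
	job := &Job{Name: name, cancel: cancel}
	m.jobs[id] = job
	m.mu.Unlock()
	go fn(ctx, func(p float64) {
		m.mu.Lock()
		job.Progress = p
		m.mu.Unlock()
	})
	return id
}

// Cancel signals a running job to stop.
func (m *Manager) Cancel(id int) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if j, ok := m.jobs[id]; ok {
		j.cancel()
	}
}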

Finally, it needed a name. At the time, I was giving all my projects bird-like codenames, since I couldn’t come up with names that I liked. I eventually settled on Broadtail, which was a reference to broadtail parrots, like the rosella.

RSS Subscriptions

It didn’t take long after I got this up and running before I realised I needed the RSS subscription feature. So that was the next thing I added.

Homepage of the most recent version of Broadtail, with an active download and a couple of favourited items.

The way it worked was pretty straightforward. One would set up a subscription to a YouTube channel or playlist. Broadtail would then poll that RSS feed every 15 minutes or so, and show new videos on the homepage. Clicking a video item would bring up its details and an option to download it.
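
The polling itself doesn’t need to be anything fancier than a ticker loop running in a goroutine; a minimal sketch (not the actual implementation):

package feeds

import (
	"context"
	"time"
)

// pollFeeds calls poll on a fixed interval until ctx is cancelled.
// Typical use would be: go pollFeeds(ctx, 15*time.Minute, checkSubscriptions),
// where checkSubscriptions is a hypothetical function name.
func pollFeeds(ctx context.Context, interval time.Duration, poll func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			poll() // fetch each subscription's feed and record new items
		}
	}
}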

Video details. This one is simulated as I couldn't get youtube-dl working.

Each RSS subscription had an associated target directory. Downloading an ad-hoc video would just dump it in a configured directory, but I wanted to make it possible to organise downloads from feeds in a more structured way. This wasn’t perfect though: I can’t remember the reason, but I had some trouble with this, and most videos just ended up in the default download directory (it may have had to do with creating directories or file permissions).

Click a feed to view the most recent items.

Only the feed polling was automatic at this stage. I was not interested in having all shows downloaded, as that would eat up bandwidth and disk storage. So users still had to choose which videos they wanted to download. The list of recent feed items was available from the home screen, so they were able to do so from there.

I also wanted to keep abreast of which jobs were currently running, so the home screen also had a list of running jobs.

Click the job to view its details. This one failed.

The progress bar was powered by a WebSocket backed by a goroutine on the server side, which meant real-time updates. Clicking the job would also show you the live output of the youtube-dl command, making it easy to troubleshoot any jobs that failed. Jobs could be cancelled at any time, but one annoying thing that was missing was the ability to retry a failed job. If a download failed, you had to spin up a new job from scratch. This meant clearing out the old job from the file system and finding the video ID again from wherever you first came across it.
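
I don’t remember the exact wiring, but the server side of that would have looked something like this (a sketch using gorilla/websocket; the updates channel is hypothetical):

package web

import (
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

// progressHandler upgrades the connection and streams job progress
// updates to the browser as JSON messages.
func progressHandler(updates <-chan float64) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		defer conn.Close()
		for p := range updates {
			if err := conn.WriteJSON(map[string]float64{"progress": p}); err != nil {
				return
			}
		}
	}
}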

If you were interested in a video but not quite ready to download it right away, you could “favourite” it by clicking the star. This was available in every list that showed a video, and was a nightmare to code up, since I was keeping references to where the video came from, such as a feed or a quick look. Keeping on top of all the possible references became difficult with the non-relational StormDB, and the code that handled this became quite dodgy (the biggest issue was dealing with favourites from feeds that had been deleted).

Clicking "Favourites" would show the items that you starred.

Rules & WWDC Videos

The basics were working out quite well, but it was all so manual. Plus, going from video publication to having something to watch was not timely. The RSS feed from YouTube was always several hours out of date, and downloading whole videos took quite a while (it may not have been real time, but it was pretty close).

So one of the later things I added was a feature I called “Rules”. These were automations that would run when the RSS feed was polled, and would automatically download videos that met certain criteria (you could also hide them or mark them as downloaded). I quite enjoy building these sorts of complex features, where the user is able to configure sophisticated automated tasks, so this was a fun thing to code up. And it worked: video downloads would start when videos became available, and they would usually be in Plex by the time I wanted to watch them (it was also possible to ping Plex to update the library once a download finished). It wasn’t perfect though: not retrying failed downloads did plague it a little. But it was good enough.
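
Conceptually, a rule was little more than a predicate plus an action; something like this sketch (the field names are hypothetical, not the real schema):

package rules

import "strings"

// Rule and FeedItem are illustrative stand-ins for the real types.
type Rule struct {
	TitleContains string
	Action        string // "download", "hide", or "mark-downloaded"
}

type FeedItem struct {
	Title   string
	VideoID string
}

// applyRules returns the action of the first rule matching the item,
// and whether any rule matched at all.
func applyRules(rules []Rule, item FeedItem) (string, bool) {
	for _, r := range rules {
		if strings.Contains(strings.ToLower(item.Title), strings.ToLower(r.TitleContains)) {
			return r.Action, true
		}
	}
	return "", false
}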

Editing a Rule.

This was near the end of my use of Broadtail. Soon after adding Rules, I got onto the YouTube Premium bandwagon, which hid the ads and removed the need for Broadtail altogether. It was a good thing too, as the Plex Android app had this annoying habit of causing the Chromecast to hang, and the only way to recover from this was to reboot the device.

So I returned to just using YouTube, and Broadtail was eventually abandoned.

Although, not completely. One last thing I did was extend Broadtail’s video download capabilities to include Apple WWDC videos. These were treated as a special kind of “feed” which, when polled, would scrape the WWDC video website. I was a little uncomfortable doing this, and I knew that once videos were published, they wouldn’t change. So this “feed” was never polled automatically; the user had to refresh it manually.

Without the means to stream them using AirPlay, downloading them and making them available in Plex was the only way I knew of watching them on my TV, which is how I prefer to watch them.

So that’s what Broadtail is primarily used for now. It’s no longer running as a daemon: I just boot it up when I want to download new videos. And although it’s only a few years old, it’s starting to show signs of decay, with the biggest issue being youtube-dl slowly being abandoned.

So it’s unlikely that I’ll put any serious efforts into this now. But if I did, there are a few things I’d like to see:

  • Username/password authentication.
  • Retry failed video downloads.
  • The ability to download YouTube videos as audio only (all these “podcasts” that are only available as YouTube videos… 😒)
  • The ability to handle the lifecycle of videos a little better than it does now. It’s already doing this for errors: when a download fails, the video is deleted. But it would be nice if it did things like automatically delete videos 30 days after downloading them. This would require more control over the “video store” though (see the sketch after this list).
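
That last item could be as simple as a periodic sweep over the download directory; a sketch under the assumption of a flat video store (Broadtail has no such feature today):

package store

import (
	"os"
	"path/filepath"
	"time"
)

// pruneOldVideos deletes files in dir that are older than maxAge.
func pruneOldVideos(dir string, maxAge time.Duration) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		info, err := e.Info()
		if err != nil || info.IsDir() {
			continue
		}
		// Use the file's modification time as a proxy for download time.
		if time.Since(info.ModTime()) > maxAge {
			os.Remove(filepath.Join(dir, e.Name()))
		}
	}
	return nil
}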

So, that’s Broadtail.

Don’t mind me. Just eyeing off a pigeon that’s looking at me funny.

A pigeon on a road with their head cocked towards the camera

I don’t understand why Mail.app for macOS doesn’t block images from unknown senders by default. Apple may proxy them to hide my IP address, but that doesn’t help if the image URLs themselves are “personalised”. Fetching the image still indicates that someone’s seen the mail, and for certain senders I do not want them to know that (usually spammers that want confirmation that my email address is legitimate).

Do browsers/web devs still use image maps? Thinking of something that’ll have an image with regions that’d run some JavaScript when tapped. If this was the 2000s, I’d probably use the map element for that. Would people still use this, or would it just be a bunch of JavaScript now? 🤔

Vincent’s kindly given me early access to Scribbles, and I’ve been trying it out this week. And I’ve been loving it. It’s that nice level of minimalism that’s right for me: everything you need, nothing you don’t. I look forward to some of the features planned.

As for my blog over there: it’s here.

Former site of a cafe I used to frequent. Kind of amazing to see how small the plot of land actually is, when you take out all the walls and furniture.

Vacant lot, beside a brick wall and road, where once a building stood.

TIL about the JavaScript debugger statement. You can put debugger in a JS source file, and if you have the console open, the browser will pause execution at that line, like a breakpoint:

console.log("code");
debugger;
console.log("pause here");

This is really going to be useful in the future.

Really enjoyed listening to Om Malik with Ben Thompson on Stratechery today. Very insightful and optimistic conversation.

Phograms

Originally posted on Folio Red, which is why this post references "a new blog".

Pho-gram (n): a false or fanciful image or prose, usually generated by AI*

There’s nothing like a new blog. So much possibility, so much expectation of quality posts shared with the world. It’s like that feeling of a new journal or notebook: you’re almost afraid to sully it with what you think is not worthy of it.

Well, let’s set expectations right now, with a posting of a few AI generated images. 😜

Yes, yes, I know. No one wants to see images from DALL-E and Stable Diffusion that you didn’t make yourself. And yeah, I should acknowledge that these tools have resulted in some real external costs. But there are a few that I would like to keep. And maybe if I added some micro-fiction around each one, it would reduce the feeling that I’m simply pushing AI-generated junk onto the Internet.

So here’s a bunch of them that I generated for something at work, with a short, non-AI written, back-story for each one.

Martian Production

Since the formation of the New Martian Republic in 2294, much of the media consumed on Mars was a direct import from Earth. And even after the establishment of permanent settlements, most Martians were more likely to consume something produced on Terra than something home-grown. So, in 2351, the United Colonial Martian Government (UCMG) formed a committee to spearhead the development of a local media scene. The committee had to come up with a proposal for how to bootstrap the local production of spoken audio, music, the written word, and videography.

In 2353, the first Martian videography production corporation was established. Originally known as Martian Film House, the corporation was first devised to produce feature-length films and documentaries, but this soon expanded to include short-form serials and news broadcasts. This culminated in 2354 in what was recorded as the first Martian-wide video broadcast, Climbing the Tholus Summit. Later, in 2360, as part of negotiations with Terra Broadcasting, the UCMG organised for the retransmission of all Martian programs back to Earth in exchange for renewing the rights to carry all imported visual media.

The current logo, commissioned soon after the name change to Martian Productions, pays homage to those early pioneers exploring Mars back at the turn of the millennium. And although the technology was not sophisticated enough to carry those people to Mars themselves, they were still able to explore it from afar, through the lens of the Martian rovers. One of the most successful, the rover known as Opportunity, was chosen as the company figurehead.

Content Mill Productions

Jeffery knows that to make it on the Internet, content is everything. He who is willing to put in the hard yards, making something of quality, will get the eyeballs, and thus that sweet, sweet ad revenue everyone’s fighting over. Such prospects are more endurable than the failing hand-milled flour trade he’s currently engaged in (he still doesn’t understand why he spent so much on that windmill).

But quality will only get you so far. The attention span of those online can now be measured in nanoseconds. People don’t even finish the current TikTok video they’re watching before they swipe on to the next one. No wonder people are going on about three-minute videos as being “long-form content”.

So Jeffery had to make a choice. He had to put his desire for quality aside and start considering ways to simply pump out material. Content has such a dirty ring to it nowadays, but Jeffery disagrees. Compared to what he’s doing now, selling bags of hand-milled flour to no-one, the online content game will be his life-raft, his saviour.

And after all, quantity begets quality, and content is nothing more than quantity. So wouldn’t that mean content and quality are essentially the same? (He thinks he’s got that right.) And maybe with so much content being made under one name, it would be easier for others to find him. Get some of those eyeballs going his way. Who knows, he may be able to sell an ad or two. Maybe shut down his current business and go at it full time. Can’t be too many people doing this sort of stuff.

He thinks it could work. It’s all grist for the mill in the end.

* This is a made up word, thereby itself being a phogram.

Just thinking of the failure of XSD, WSDL, and other XML formats back in the day. It’s amusing to think that many of the difficulties that came from working in these formats were waved away with sayings like “ah, tooling will help you there,” or “of course there’s going to be a GUI editor for it.”

Compare that to a format like Protobuf. Sure, there are tools to generate the code, but it assumes the source will be written by humans using nothing more than a text editor.

That might be why formats like RSS and XML-RPC survived. They’re super simple to understand as they are. For all the others, it might be that if you feel your text format depends on tools to author it, it’s too complicated.

I’m starting to suspect that online, multi-choice questionnaires — with only eight hypotheticals and no choice that maps nicely to my preference or behavior — don’t make for great indicators of personality.

The AWS Generative AI Workshop

Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level yet I still got something out of it.

What was striking was just how much of integrating these foundational models (something like an LLM that was pre-trained on the web) involved natural language. Like, if you were building a chat bot to have a certain personality, you’d start each context with something like:

You are a friendly life-coach which is trying to be helpful. If you don’t know the answer to a question, you are to say I don’t know. (Question)

This would extend to domain knowledge. You could fine-tune a foundational model with your own data set, but an easier, albeit slightly less efficient, way would be to hand-craft a bunch of question-and-answer pairs and feed them straight into the prompt.
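
Something like the following, where the Q&A pairs are examples I’ve made up:

Q: What is Broadtail?
A: Broadtail is a frontend for youtube-dl.
Q: Does it support RSS subscriptions?
A: Yes, it polls YouTube channel and playlist feeds.
(Question)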

This may extend to agents as well (code that the model interacts with). We didn’t cover agents to a significant degree, but after looking at some of the marketing materials, it seems to me that much of the integration is instructing the model to put parameters within XML tags (so that the much “dumber” agent can parse them out), and telling it how to interpret the structured response.
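
From what I can gather, that boilerplate reads something like this (my own paraphrase, not AWS’s actual wording):

When you need to look up the weather, respond with the city name inside <city></city> tags. The tool’s result will be returned to you inside <result></result> tags; use it to answer the question.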

A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.

Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something that involves vector databases available to the model and doesn’t require fine-tuning. I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).

I don’t know why I think I’ll remember where I saw an interesting link. A few days go by, and when I want to follow it, surprise, surprise, I’ve forgotten where I saw it. The world is swimming in bookmarking and read-it-later services. Why don’t I use them?! 🤦‍♂️

Replacing Ear Cups On JBL E45BT Headphones

As far as wearables go, my daily drivers are a pair of JBL E45BT Bluetooth headphones. They’re several years old now and are showing their age: many of the buttons no longer work, and it usually takes two attempts for the Bluetooth to connect. But the biggest issue was that the ear cups were no longer staying on. They were fine while I wore them, but as soon as I took them off, the left cup would fall to the ground.

But they’re a decent pair of headphones, and I wasn’t keen on throwing them out or shopping for another pair. So I set about looking for a set of new ear cups.

This is actually the second pair of replacement cups I’ve bought for these headphones. The first had a strip of adhesive that stuck the cup straight on to the speaker (it was this adhesive that was starting to fail). I didn’t make a note of where I bought them and a quick search didn’t turn up anything that looked like them. So in December, I settled for this pair from this eBay seller. Yesterday, they arrived.

New set of ear-cups for a JBL E-series bluetooth headphones
The new set of ear cups.
A black Bluetooth headphone on a table, with the left cup fallen off exposing the speaker, and the right cup slightly removed from its original position
They couldn't have come sooner.

First impressions were that they were maybe too big. I also didn’t see an adhesive strip to stick them on. Looking at the listing again, I realised that they’re actually for a different line of JBL headphones. But I was a little desperate, so I set about trying to get them on.

The headphones in question on an old piece of paper, with the left cup replaced with the new ear cups, the right speaker exposed, and bits of old adhesive lying on the paper
Removing the old adhesive, with my fingers (yeah, I probably should buy some tools).

It turns out that they’re actually still a good fit for my pair. The aperture is a little smaller than the headphone speaker, but there’s a little rim around each one and I found that by slotting one side of the padding over the rim, and then lightly stretching and rolling the aperture around the speaker, it was possible to get them on. It’s a tight fit, but that just means they’re likely to stay on. And without any adhesive, which is good.

The headphones with the right cup in profile demonstrating the roll of the padding onto the rim
It's a bit hard to see, but if you look at the top of the right cup, you can see how the padding was rolled onto the speaker from the bottom.

After a quick road test (a walk around the block and washing the dishes), I found the replacement to be a success. So here’s to a few more years of this daily driver.

The headphones in profile with the new replacement cups
Headphones with the new cups. They look and feel pretty good.
The old replacement cups on a table, with the left cup losing its vinyl skin, revealing the actual foam.
The old cups, ready for retirement.

🔗 Let’s make the indie web easier

Inspiring post. I will admit that while I was reading it I was thinking “what about this? What about that?” But I came away with the feeling (realisation?) that the appetite for these tools might be infinite and that one size doesn’t fit all. This might be a good thing.

Argh, the coffee kiosk at the station is closed. Will have to activate my backup plan: catching the earlier train and getting a coffee two stations down. Addiction will lead you to do strange things. ☕

Finished reading: Twenty Bits I Learned about Making Websites by Dan Cederholm 📚

Got this book yesterday and read through it in about an hour. A joy to read, and a pleasure simply to hold.

A blue book with the title Twenty Bits I Learned about Making Websites

Elm Connections Retro

If you follow my blog, you would’ve noticed several videos of me coding up a Connections clone in Elm. I did this as a bit of an experiment, to see if I would be interested in screen-casting my coding sessions, and if anyone else would be interested in watching them. I also wanted to see if hosting them on a platform that’s not YouTube would gain any traction.

So far, I’ve had no takers: most videos have received zero views, with the highest view count being three. I’m guessing part of the reason is that the audience for this sort of stuff is just not there, or maybe it is, but they’re spending all their watch-time on YouTube and Twitch. Building an audience on a platform like PeerTube might be feasible, but it’ll be quite a slog to fight for oxygen against these juggernauts.

But I also have to accept that it’s unreasonable of me to expect any decent view numbers after just seven videos, especially when they’re the first seven videos from someone starting from scratch. Much like growing an audience for anything else, it’s just one of those things I need to work at, if I want it. Part of me is not sure that I do want it. And yet, the other part of me is seeking out posts about coders streaming on Twitch. So maybe that desire is still there.

Nevertheless, I’m glad I took on this small summer project. I had a chance to experiment with Elm, which was a much-needed exercise of my programming skills. I also had a chance to try out video production and editing using DaVinci Resolve, and I had a play around with PeerTube, which… well, who can resist playing around with software? So although I didn’t get the banana1, at least I managed to compost the peel.

Anyway, on to the retro. Here are a few things I’ll need to keep in mind the next time I want to attempt this (it’s written in the second person as I’m writing this to myself):

Recording

  • Do a small recording test to make sure your setup is working. The last thing you want is a 30-minute recording with no audio because you forgot to turn on your mic.
  • Drink or sneeze while recording if you need to but make sure you stop moving things on the screen when you do, especially the mouse. That would make it easier for you to trim it out in the edit. This also applies when you’re thinking or reading.
  • When you do restart after drinking or sneezing, avoid repeating the last few words you just said. Saying “I’m going to be… (sneeze)… going to be doing this” makes it hard to cut it from the edit. Either restart the sentence from the beginning (“I’m going to be… (sneeze)… I’m going to be doing this”), or just continue on (“I’m going to be… (sneeze)… doing this”).
  • Also, just before you restart after drinking or sneezing, say a few random words to clear your voice.
  • Avoid saying things like “um” and “ah” when you’re explaining something. I know it’s natural to do so, so if you can’t avoid it, stop moving when you do, so that they can be edited out.
  • Don’t sigh. It makes it seem like you’re disinterested or annoyed.
  • Narrate more. Long stretches of keyboard noises do not make for interesting viewing.
  • Saying things like “let’s change this” can be improved upon by saying why you’re changing “this”. Viewers know that you’re changing something — they can see it. What they can’t see is your thinking as to why it’s being changed at all.
  • Try to keep the same distance from the mic, and speak at the same volume, especially when saying things in your “thinking” voice (it tends to be a little quiet).
  • If you think the editor font is the right size, make it two steps larger.

Editing

  • Showing that you’re thinking or reading is fine, but don’t be afraid to trim it down to several seconds or so. Long stretches of things not happening on screen look a little boring.
  • Proofread any titles you use. Make sure you’ve got the spelling right.
  • Try not to get too fancy with the effects used to show the passage of time. Doing so means you’ll need to recreate the same effects for subsequent videos. Less might be more here.
  • Learn the keyboard shortcuts for DaVinci Resolve. Here are some useful ones:
    • m: Add new marker.
    • Shift+Up, Shift+Down: Go to previous/next marker (only works in the edit section though? 🤨).
    • Cmd+\: Split the selected clip.
    • Option+Y: Select all clips to the right of the playhead (useful when trimming stuff out and you need to plug the gap).
  • Your screen is not big enough for 1080p recordings. Aim for 720p so that the video will still be crisp when exported (a 16:9 video intended for 720p will need a capture region of 1280x720).

Publishing

  • You don’t need to announce a new episode as soon as it’s uploaded. Consider spacing them out to one or two a week. That would make it feel less like you’re just releasing “content”.
  • Aim to publish it around the same time, or at least on the same day. That should give others an expectation of when new episodes will be released.
  • Put some thought into the video poster. Just defaulting to the first frame is a little lazy.
  • If you’re using PeerTube, upload the videos as private first and don’t bother with the metadata until the upload is successful. Then go back and edit the metadata before making the video public (changing a video from private to public will send out an ActivityPub message). That way, there’s less chance of you losing metadata changes if the upload were to fail.

  1. The banana here is anyone taking an interest in these videos; and I guess releasing Clonections itself? But not doing so is a conscious choice, at least for now. ↩︎