Workpad

    AWS Tools: Documentation & The Website

    Worked a little more on “awstools” (still haven’t thought of a good alternative name for it). I think the “dynamo-browse” tool is close to being in a releasable state. I’ve spent the last couple of days cleaning up most of the inconsistencies and making sure that it’s packaged correctly.

    Now it’s documentation writing time. I’m working my way through a very basic website and user guide. It’s been a little while since I’ve written any form of user-level documentation — most of the documents I write have been for other developers I work closely with — and I admit that it feels like a bit of a slog. It might be the tone of writing that I’ve adopted: a little dry and impersonal, trying to walk that fine line between being informative without swamping the reader with big blocks of words. I might need to work on that: no real reason why the documentation needs to be boring to the reader.

    The website itself will be a statically generated site using Hugo and will most likely be served using GitHub pages. I’ve settled on the terminal theme, since “awstools” is a suite of terminal-based apps. That reasoning might be a little corny, but to be honest, I have grown to actually like the theme itself. I haven’t settled on a domain for it yet.

    While working on the documentation, I ran into a useful website that contains a comprehensive list of HTML entities, complete with previews. Good reference for the arrows glyphs I need to use to represent key bindings in the document.

    AWS Tools Dev Diary

    A little more work on “awstools” today, mainly a bit of a cleanup spree to make the tools suitable for others to use. This generally means fixing up any inconsistencies in how the commands work. An example of this is the put command, which now writes all marked modified items to the table (or, if no items are marked, all modified items) instead of just the selected one. This brings it closer to how the delete command works.

    Also merged the set-s and set-n commands into a single set-attr command, which accepts an optional attribute type along with the attribute name. This still only works with the currently selected item, and I think I’ll keep it like that for the moment. I do want something to modify attributes of all marked items (or even all items), but it might be better as a separate command, as that may allow for some potentially useful actions, like adding a suffix to the value instead of simply changing it.

    Some of these command names are a bit unwieldy, like set-attr, but I’m hoping to replace many of them with simple keystrokes down the line. I’m trying not to reserve too many generic names like “set”, on the off chance of adding something like TCL or similar for simple scripts (this is in addition to something closer to JavaScript for more fully featured extensions). Nothing is settled here, but I’m trying to keep that option open.

    WWDC Videos In Broadtail

    Some more work on Broadtail. This time, I added the ability to use it to download Apple WWDC videos.

    The way it works is based on the existing RSS feed concept. In order to get the list of videos for a particular WWDC year, you “subscribe” to it by setting up a feed with the new “Apple Developer Videos” type. The external ID is taken from the URL slug of the page where Apple publishes the session videos. For example, for WWDC 2021, the external ID would be “wwdc2021”.

    Downloading the videos is more or less the same.

    There are a few differences between this feed type and the YouTube RSS feed. For instance, it only makes use of what is available from the website, which means details like publishing date or duration are not really available. This is why the “Publishing” date is displayed as “unknown”. That’s also why the videos are arranged in alphabetical order and the feed itself is not automatically refreshed (although refreshing it manually by clicking “Refresh” within the feed page will work). These are actually properties that can now be applied to all feeds if one wishes, although YouTube feeds are still arranged in reverse chronological order by default.

    From a coding perspective, this involved a lot of refactoring. I had been hoping to move to more generic feed and video types for a while, and this was the feature that eventually got me to do so. The upshot is that if I want to add more feed and video types in the future, it should be easier to do so.

    Feed Rules In Broadtail

    Generally, when there’s a video that I’m interested in watching, I take a look at Broadtail to see if it’s available. When it is, I go ahead and download it.

    However, some videos take a long time to download — we’re talking 10 hours or so — and they’re usually published when I’m not looking, like during the night when I’m asleep (thanks, time zones). So I thought it would be nice for Broadtail to kick off the download for me when the video shows up in the feed.

    So I’ve added Feed Rules to do this.

    Feed rules are very simple automations that run when new items are found during the RSS feed poll. When a video shows up in the feed and matches the rule conditions, Broadtail will perform the rule actions for that video.

    Feed Rules are added as a new sub-section in “Settings”, which itself is a new top-level section of the app (the “General” sub-section is empty at this stage).

    Feed Rules consist of a name, whether the rule is active, a set of conditions, and a set of actions. A feed item needs to match all the conditions of the rule in order for the actions to be performed.

    The conditions of a feed rule touch upon the following properties of a feed item:

    • The feed in which it appears. This can be set to “any” to apply the rule to all feed items.
    • Whether the title matches a given string. The match rules are similar to the searches in the feed item list views: each space-separated token must appear somewhere in the title (case-insensitively), with quoted strings treated as phrases (see the sketch after this list).
    • Whether the description matches a given string.
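
    To give a rough idea of what that matching amounts to, here's a minimal sketch in Go (my own illustration, not the actual Broadtail code; the tokeniser is deliberately simplified):

    package feedrules

    import "strings"

    // titleMatches reports whether every space-separated token in the match
    // string appears somewhere in the title (case-insensitively), with quoted
    // strings treated as whole phrases.
    func titleMatches(match, title string) bool {
        lowerTitle := strings.ToLower(title)
        for _, token := range tokenise(strings.ToLower(match)) {
            if !strings.Contains(lowerTitle, token) {
                return false
            }
        }
        return true
    }

    // tokenise splits the match string on whitespace, keeping quoted phrases intact.
    func tokenise(s string) []string {
        var tokens []string
        for i, part := range strings.Split(s, `"`) {
            if i%2 == 1 {
                // Inside quotes: keep the phrase as a single token.
                if phrase := strings.TrimSpace(part); phrase != "" {
                    tokens = append(tokens, phrase)
                }
            } else {
                tokens = append(tokens, strings.Fields(part)...)
            }
        }
        return tokens
    }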

    If a feed item matches all the conditions, Broadtail can perform the following actions for the feed item:

    • Start a download of the video
    • Mark the feed item as a favourite

    There might be more conditions or actions added in the future. So far this seems to be the bare minimum to make the feature usable.

    Doing a small weekend/week-long project at the moment to track favourite moments in a few podcasts I’m listening to. This is something that I’ve been thinking about for a while, and I’m not entirely sure what compelled me to actually start work on it. Probably because the system I’ve been using so far — a set of timestamped Pocketcast links managed in Pinboard — has been growing quite a bit recently, and its limitations, such as the list being unordered and the lack of a skip-back-30-seconds control during playback, are starting to annoy me. It’s also a chance for a bit of novelty, at least for a few days or so.

    It took roughly a day or so to get a small Buffalo web-app up and running which does most of what I want. It just needs some styling and a better way to play the episodes, which is what I’m working on now. I really don’t want to spend more than a week working on this — the last thing I need is more projects. But a good thing about this one is that the scope is naturally quite small, so there’s no real risk of it blowing out to become too large.

    Two new awstool commands: one for browsing SSM parameters and one for simply viewing JSON log files. The SSM parameter one was especially handy, as I was dealing with parameter subtrees a lot and doing that in the AWS web console is always a pain. As for the JSON log viewer: well, let’s just say there were one too many log files from Kubernetes pods I needed to look at this week.

    The pattern for working with state seems to be working. I may need to be a little careful that the state management doesn’t get too unwieldy as I add features and more things that need to be tracked. But at the moment, it seems manageable.

    I’ve been racking my brain trying to work out how best to organise the code for awstools. My goals are to make view models composable, keep state centralised but also localised, and keep controllers from having too much responsibility. I started another tool, which browses SSM parameters, to try to work this all out.

    I think I’ve settled on the following architecture (a rough sketch follows the list):

    • Providers and Services will remain stateless
    • State will be managed by controllers
    • Operations in controllers are only available through tea.Cmd implementations.
    • Updates from controllers will only be available through tea.Msg implementations.
    • View models (i.e. tea.Model) will only know enough state to be able to render themselves.
    • There will be one master model which will coordinate the communication between controllers and view models. This model will react to messages from the controllers and update the views. It will also react to messages from the views and launch operations on the controllers.
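
    To make that a little more concrete, here's a rough sketch of the shape I have in mind (the type and message names are illustrative only, not the actual awstools code):

    package ui

    import (
        "strings"

        tea "github.com/charmbracelet/bubbletea"
    )

    // resultSetUpdated is published by the controller when a scan finishes.
    type resultSetUpdated struct{ items []string }

    // tablesController owns the state; operations are exposed as tea.Cmd values
    // and their outcomes are published as tea.Msg values.
    type tablesController struct {
        items []string // centralised state lives here
    }

    func (c *tablesController) Scan() tea.Cmd {
        return func() tea.Msg {
            c.items = []string{"item-1", "item-2"} // stand-in for a real scan
            return resultSetUpdated{items: c.items}
        }
    }

    // tableView only knows enough state to render itself.
    type tableView struct{ rows []string }

    func (v tableView) View() string { return strings.Join(v.rows, "\n") }

    // mainModel is the master model coordinating controllers and view models.
    type mainModel struct {
        controller *tablesController
        table      tableView
    }

    func (m mainModel) Init() tea.Cmd { return m.controller.Scan() }

    func (m mainModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
        switch msg := msg.(type) {
        case resultSetUpdated:
            m.table.rows = msg.items // controller message -> update the view
        case tea.KeyMsg:
            if msg.String() == "r" {
                return m, m.controller.Scan() // view event -> controller operation
            }
        }
        return m, nil
    }

    func (m mainModel) View() string { return m.table.View() }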

    We’ll see how this goes and whether it will scale as additional features are added.

    More work on the AWS tools, mainly rebuilding the UI framework. I need to rip out all the Operation type stuff, as BubbleTea already does this using messages and commands (see tutorial 2). I’m also taking some time to build some UI models that I can reuse across the various commands, including a few that deal with layout changes.

    Also tracked down what was causing that delay when trying to create a new list. It turns out that during the call to list.New(), a bunch of adaptive styles are created, which involves a check to see whether the terminal is in light or dark mode. This check calls some terminal I/O methods that were blocking for a significant amount of time; we’re talking tens of seconds.

    The good thing is that this check is only made once, so what I did was move the check into the main function. Some preliminary tests indicate that this may work: the lists are consistently being created very quickly again. We’ll see if this lasts.
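
    In code, the fix amounts to warming that check up front, something along these lines (a sketch; it relies on the light/dark query result being cached after the first call, which is what I observed):

    package main

    import (
        "github.com/charmbracelet/bubbles/list"
        "github.com/charmbracelet/lipgloss"
    )

    func main() {
        // Perform the light/dark terminal query once, up front. Later calls
        // (including the ones made inside list.New) hit the cached result
        // instead of blocking on terminal I/O.
        _ = lipgloss.HasDarkBackground()

        // ...much later, creating lists is consistently fast again.
        l := list.New(nil, list.NewDefaultDelegate(), 80, 24)
        _ = l
    }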

    More work on Broadtail this morning. Managed to get the new favourite stuff finished. Favourites are now a dedicated entity, instead of being tied to a feed item. This means that any video can now be favourited, including ones not from a feed. The favourite list is now a top-level menu item as well.

    Also found a useful CLI tool for browsing BoltDB files.

    New AWS Tools Commands

    For a while now, I’ve been wanting some tools for managing AWS resources that also run in the terminal. I know in most circumstances the AWS console would work, but for me, there’s a lot of benefit in doing this sort of administration from the command line.

    I use the terminal a lot when I’m developing or investigating something. Much of the time while I’m in the weeds I’ve got a bunch of tabs with tools running and producing output, and I’m switching between them as I try to get something working across a bunch of systems.

    This is in addition to cases when I need to manage an AWS mock running on the local machine. The AWS console will not work then.

    At the start of the week, I was thinking of at least the following three tools that I would like to see exist:

    • A TUI browser/workspace for DynamoDB tables
    • A TUI workspace for monitoring SQS queues
    • Some tool for reading JSON style log files.

    As of yesterday, I actually got around to building the first two.

    The first is a tool for browsing DynamoDB tables, which I’m calling dynamo-browse (yes, the names are not great). This tool does a scan of a DynamoDB table, and shows the resulting items in a table. Each item can be inspected in full in the lower half of the window by moving the selection.

    Dynamo-Browse

    At the moment this tool only does a simple scan, along with some very lightweight editing of items (duplicate and delete). But it’s already proven useful for the task I was working on, especially when it came to viewing the contents of test DynamoDB instances running on the local machine.
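
    Under the hood, the scan itself is nothing fancy; roughly the following (a sketch using aws-sdk-go-v2, with a made-up table name, and pagination and local-endpoint configuration left out):

    package main

    import (
        "context"
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            panic(err)
        }
        client := dynamodb.NewFromConfig(cfg)

        // A plain scan: every item in the table, first page only.
        out, err := client.Scan(ctx, &dynamodb.ScanInput{
            TableName: aws.String("my-test-table"), // hypothetical table name
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d items\n", len(out.Items))
    }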

    The second tool is used for monitoring SQS queues.

    SQS-Browse

    This will poll an SQS queue for messages and display them in a table. The message itself is pretty-printed in the lower half of the screen. Messages can also be pushed to another queue. That’s pretty much it so far.

    There are a bunch of things I’d like to do in these tools. Here’s a list of them:

    • Queries in DynamoDB: There’s currently no way to run queries or filter the resulting items. I’m hoping to design a small query language to do this. I’m already using Participle to power a very simple expression language for duplicating items, so as long as I can design something that is expressive enough and knows how to use particular indices, I think this should work (see the sketch after this list).
    • Putting brand new items in DynamoDB: At the moment you can create new items based on existing items, in a way, but there’s currently no way to create brand new items or adjust the attributes of existing items. For this, I’d like to see an “edit item” mode, where you can select an attribute and edit its value, change its type, add or remove attributes, etc. This would require some rework of the UI, which is already a bit tentative at this stage (it was sort of rushed together, and although some architectural changes have been made to the M and the C in MVC, work on the V is still outstanding).
    • Preview item changes before putting them to DynamoDB: This sort of extends the point above, where you see the diff between the old item and new item before it’s put into DynamoDB.
    • Workspaces in SQS Browse: One idea I have for SQS Browse is the notion of a “workspace”. This is a persistent storage area located on disk where all the messages from the queue would be saved. Because SQS Browse is currently pulling messages off the SQS queue, I don’t want the user to get into a state where they’ve lost their messages for good. The idea is that a workspace is always created when SQS Browse is launched. The user can choose the workspace file explicitly, but if they don’t, the workspace will be created in the temp directory. Also implicit in this is support for opening existing workspaces to continue work in them.
    • Multiple queues in SQS Browse: Something else I’d like to see in SQS Browse is the ability to deal with multiple queues. Say you’re pulling from one queue and you’d like to push messages to another. You could use a command to add a queue to the workspace. Then you could do things like poll the queue, push messages in the workspace to the queue, monitor its queue length, etc. Queues would be addressable by number or some other way, so you could simply run the command push 2 to push the current message to queue 2.
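
    On the query language point above: a Participle grammar for even a tiny “attribute = value” filter doesn't take much. Something like the sketch below, which is purely illustrative and not the grammar dynamo-browse actually uses:

    package main

    import (
        "fmt"

        "github.com/alecthomas/participle/v2"
    )

    // Filter matches expressions of the form: attribute = "value".
    // With the default lexer the captured Value keeps its surrounding quotes.
    type Filter struct {
        Attribute string `@Ident`
        Value     string `"=" @String`
    }

    var parser = participle.MustBuild[Filter]()

    func main() {
        f, err := parser.ParseString("", `title = "hello world"`)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s equals %s\n", f.Attribute, f.Value)
    }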

    As for when I’ll actively work on these tools: probably when I need to use them. But in the short term, I’m glad I got the opportunity to start working on them. They’ve already proven quite useful to me.

    Even more work on Feed Journaler. Still trying to tune the title removal logic. Will probably require a lot of testing.

    Broadtail 0.0.7

    Released Broadtail 0.0.7 about a week ago. This included some restyling of the job list on the home page, which now includes a progress bar updated using web-sockets (no need for page refreshes anymore).

    For the frontend, the WebSocket API that comes with the browser is used. There’s not much to it — it’s managed by a Stimulus controller which sets up the websocket and listens for updates. The updates are then pushed as custom events to the main window, which the Stimulus controllers used to update the progress bars are listening out for. This allows a single Stimulus controller to manage the websocket connection while using the window as a message bus.

    Working out the layers of the progress bar took me a bit of time, as I wanted to make sure the text in the progress bar itself remained readable as the bar filled up. I settled on an HTML tree that looks like the following:

    <div class="progressbar">
      <!-- The filled in layer, at z-index: 10 -->
      <div class="complete">
        <span class="label">45% complete</span>
      </div>
    
      <!-- The unfilled layer -->
      <span class="label">45% complete</span>
    </div>
    

    As you can see, there’s a base layer and a filled-in layer that overlaps it. Both of these layers have a progress label that contains the same status message. As the .complete layer fills in, it hides the unfilled layer and its label. The various CSS properties used to get this effect can be found here.

    The backend was a little easier. There is a nice websocket library for Go which handles the connection upgrades and provides a nice API for posting JSON messages. Once the upgrade is complete, a goroutine servicing the connection will just start listening to status updates from the jobs manager and forward these messages as JSON text messages.
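
    In outline, the handler looks something like this (a sketch using gorilla/websocket, with the jobs manager reduced to a plain channel of status updates):

    package web

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    // JobStatus is a simplified stand-in for the real job status updates.
    type JobStatus struct {
        JobID   string  `json:"jobId"`
        Percent float64 `json:"percent"`
        Message string  `json:"message"`
    }

    var upgrader = websocket.Upgrader{}

    // ProgressHandler upgrades the connection, then forwards status updates
    // to the browser as JSON text messages until the channel closes.
    func ProgressHandler(updates <-chan JobStatus) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            conn, err := upgrader.Upgrade(w, r, nil)
            if err != nil {
                log.Println("upgrade:", err)
                return
            }
            go func() {
                defer conn.Close()
                for status := range updates {
                    if err := conn.WriteJSON(status); err != nil {
                        log.Println("write:", err)
                        return
                    }
                }
            }()
        }
    }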

    Although this works, it’s not perfect. One small issue is that the client will not reconnect if there is an error. I imagine it’s just a matter of listening out for the relevant events and retrying, but I’ll need to learn more about how this actually works. Another thing is that the styling of the progress bar relies on fixed widths. If I get around to reskinning the style of the entire application, that might be the time to address this.

    The second thing this release has is a simple integration with Plex. If this integration is configured, Broadtail will send a request to Plex to rescan the library for new files, meaning there’s no real need to wait for the scheduled rescan to occur before the videos are available in the app. This simply uses Plex’s API, but it needs the Plex token, which can be found using this method.
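
    The request itself is small once you have the token. Something like this sketch, which assumes the usual /library/sections/{id}/refresh endpoint; check it against your own server:

    package plex

    import (
        "fmt"
        "net/http"
    )

    // RefreshLibrary asks Plex to rescan a library section for new files.
    func RefreshLibrary(baseURL, token string, sectionID int) error {
        url := fmt.Sprintf("%s/library/sections/%d/refresh?X-Plex-Token=%s",
            baseURL, sectionID, token)
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("plex returned %s", resp.Status)
        }
        return nil
    }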

    Anyway, that’s it for this version. I’m working on re-engineering how favourites work for the next release. Since this is still in early development, I won’t be putting in any logic to migrate the existing favourites, so just be wary that you may lose that data. If that’s going to be a problem, feel free to let me know.

    Some More Updates of Broadtail

    I’ve made some more changes to Broadtail over the last couple of weeks.

    The home page now shows a list of recently published videos below the currently running jobs.

    Clicking through to “Show All” displays all the published videos. A simple filter can be applied to narrow them down to videos whose titles contain the keywords (note: nothing fancy with the filter, just tokenisation and an OR query).

    Finally, items can now be favourited. This can be used to mark videos that you may want to download in the future. I personally use this to keep the list of “new videos” on the Plex server that these videos go to as short as possible.

    PGBC Scoring Rules

    I get a bit of a thrill when there’s a need to design a mini-language. I have one facing me now for a little project I’m responsible for: maintaining the scoring site for a bocce comp I’m involved in with friends.

    How scoring works now is that the winner of a particular bocce match gets one point for the season. The winner for the season is the person with the most points. However, we recently discussed the idea of adding “final matches,” which will give the match winner 7 points, the runner-up 2 points, and the person who came in third 1 point. At the same time, I want to add the notion of “friendly matches”, which won’t count towards the season score.

    A simple solution might have been to encode these rules directly in the app, and have a flag indicating whether a match was normal, final, or friendly. But this felt suboptimal, as there is another variant of the game we play which does not have the notion of finals, and if it ever did, it may well have different rules for it. So I opted for a design in which a new “match type” is added as a database entity, with the scoring rules encoded in a PostgreSQL JSON column. Using this as a mechanism for encoding free(ish) structured data when there’s no need to query it has worked for me in the past. There was no need to add the notion of season points, as it was already present as an easy way to keep track of wins for a season.

    For the scoring rules JSON structure, I’m considering an array of conditions. When a player meets the conditions of a particular array element, they will be awarded the points associated with that condition. Each player will only be permitted to match one condition, and if they don’t match any, they won’t get any points. The condition that a player can be matched against can be made up of the following attributes:

    • rank: (int) the position the player has in the match just played in accordance with the scoring, with 1 being the player with the highest score, 2 being the player with the second highest score, and so on.
    • winner: (bool) whether the player is considered the winner of the match. The person with the highest score usually is, but this is treated as an independent field and so it should be possible to define rules accordingly.
    • draw: (bool) whether the player shares their rank with another player. When a draw occurs, both winning players will have a rank of 1, with the player with the next highest score having a rank of 2.

    Using this structure, a possible scoring rules definition for a normal match may look like the following:

    { "season_score": [
      { "condition": { "winner": true }, "points": 1 }
    ]}
    

    whereas a rules definition for the final match may look like the following:

    { "season_score": [
      { "condition": { "rank": 1 }, "points": 7 },
      { "condition": { "rank": 2 }, "points": 2 },
      { "condition": { "rank": 3 }, "points": 1 }
    ]}
    

    Finally, for friendlies, the rules can simply look like the following:

    { "season_score": [] }
    

    I think this provides a great deal of flexibility and extensibility without making the rules definition too complicated.
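
    For what it's worth, decoding and applying these rules on the backend doesn't take much either. A sketch in Go (the type names are my own; pointer fields are used so that an unspecified attribute matches anything):

    package scoring

    import "encoding/json"

    // Condition mirrors the "condition" object in the rules JSON.
    type Condition struct {
        Rank   *int  `json:"rank,omitempty"`
        Winner *bool `json:"winner,omitempty"`
        Draw   *bool `json:"draw,omitempty"`
    }

    type Rule struct {
        Condition Condition `json:"condition"`
        Points    int       `json:"points"`
    }

    type Rules struct {
        SeasonScore []Rule `json:"season_score"`
    }

    // PlayerResult describes how a player finished in a match.
    type PlayerResult struct {
        Rank   int
        Winner bool
        Draw   bool
    }

    // ParseRules decodes the JSON column into a Rules value.
    func ParseRules(raw []byte) (Rules, error) {
        var r Rules
        err := json.Unmarshal(raw, &r)
        return r, err
    }

    // SeasonPoints awards the points of the first condition the player matches;
    // a player that matches no condition gets nothing.
    func (r Rules) SeasonPoints(p PlayerResult) int {
        for _, rule := range r.SeasonScore {
            c := rule.Condition
            if c.Rank != nil && *c.Rank != p.Rank {
                continue
            }
            if c.Winner != nil && *c.Winner != p.Winner {
                continue
            }
            if c.Draw != nil && *c.Draw != p.Draw {
                continue
            }
            return rule.Points
        }
        return 0
    }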

    Alto Catalogue Update

    I’ve really tied myself up in knots here. I’m spending some time working on Alto Catalogue, trying to streamline the process of uploading individual tracks into a new album. This workflow is absolutely not user-friendly at the moment, and the only way I’ve gotten tracks into the catalogue is to run a hacked-together tool that uploads them from the command line. The reason I’m addressing this now is that it’s slightly embarrassing to have this open-source project without a nice way of doing something that, by all accounts, is quite fundamental (a good hint that you’re facing this is when it comes time to write the end-user documentation: if you can’t explain how to do something without using the words “hack”, “complicated”, or “unsupported”, then something is missing).

    So I’m trying to close this feature gap, but it’s proving to be more complicated than I expected. The main issue relates to ID3 tags and how media is arranged in the repository. Previous versions of the catalogue actually did have a way of uploading track media to the repository, which is essentially an S3 bucket. The way this worked is that the catalogue would issue the browser a pre-signed PUT URL, and the browser could upload the track media directly to S3. But in order to get a pre-signed URL, you need to know the object key, which is a bit like a file path. The old upload flow had the user enter the object key manually in the upload form.

    This worked, but I had some real issues with it. The first is that I’d like the objects within the S3 bucket to be organised in a nice way, for example “artist/album/tracknum-title.mp3”. I’m hoping that this S3 bucket will be my definitive music collection, and I don’t want just some random IDs that are completely indecipherable when I browse the objects in the S3 bucket. That way, if I were ever to shut the catalogue down or lose all the metadata, I’d still be able to navigate my collection via the object keys alone.

    The second was that this approach did not take into account the track metadata. Track metadata is managed in a PostgreSQL database and had to be entered manually; yes, this included the track duration. The only reason I used the hacked-together tool to upload tracks was that it was a tool I was already using to set ID3 tags on MP3 files, and it was trivial to add an HTTP client to do the upload from there. Obviously, asking users to run a separate tool to do their track uploads is not going to fly.

    So I’m hoping to improve this. The ideal flow would be that the user will simply select an MP3 from their file system. When they click upload, the following things will happen:

    • The ID3 tags of the MP3 will be read.
    • That metadata will be used to determine the location of the object in S3.
    • A pre-signed URL will be generated and sent to the browser to upload the file.
    • The file is uploaded to S3.
    • A new track record is created with the same metadata.

    The libraries I’m using to read the ID3 tags and track duration require the track media to be available as a file on the local file system (I assume this is for random access). Simply uploading the track media to the local file system would be the easiest approach, since it would allow me to read the metadata, upload the media to the repository on the backend, and set up the track metadata all in a single transaction. But I have some reservations about allowing large uploads to the server, and most of the existing infrastructure already makes use of pre-signed URLs. So the first run at this feature involved uploading the file to S3 and then downloading it on the server backend to read the metadata.

    But you can see the problem here: in order to generate a pre-signed URL to upload the object to S3, I need to know the location of the media, which I want to derive from the track metadata. So if I don’t want uploads to go straight to the file system, I need the object to already be in S3 in order to work out where best to put it in S3.

    So I’m wondering what the best way to fix this would be. My current thinking is this series of events:

    • Create a pre-signed URL to a temporary location in the S3 bucket.
    • Allow the user to upload the media directly to that location in the S3 bucket.
    • On the server, download that media object to get the metadata and duration.
    • From that, derive the object’s location and move the object within S3, something I’m guessing should be relatively easy if the objects are in the same bucket (see the sketch after this list).
    • Create a new track record from the metadata.
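
    Step four of that flow boils down to a copy followed by a delete, since S3 has no native “move”. A sketch using aws-sdk-go-v2 (names assumed; keys with special characters would need URL escaping in CopySource):

    package repo

    import (
        "context"
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // moveObject "moves" an object within the same bucket by copying it to the
    // new key and deleting the original.
    func moveObject(ctx context.Context, client *s3.Client, bucket, srcKey, dstKey string) error {
        _, err := client.CopyObject(ctx, &s3.CopyObjectInput{
            Bucket:     aws.String(bucket),
            CopySource: aws.String(fmt.Sprintf("%s/%s", bucket, srcKey)),
            Key:        aws.String(dstKey),
        })
        if err != nil {
            return fmt.Errorf("copy %s to %s: %w", srcKey, dstKey, err)
        }
        _, err = client.DeleteObject(ctx, &s3.DeleteObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(srcKey),
        })
        if err != nil {
            return fmt.Errorf("delete %s: %w", srcKey, err)
        }
        return nil
    }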

    The alternative is biting the bullet and allowing track uploads directly to the file system. That would simplify the crazy workflow above, but it means I’ll need to configure the server for large uploads. This is not entirely without precedent, though: there is a feature for uploading tracks in a zip file downloaded from a URL, which uses the local file system. So there’s not a whole lot stopping me from doing this altogether.

    The third approach might be looking for a JavaScript library to read the ID3 tags. This is not great, as I’d need to get the location from the server anyway, since the metadata-derived object location is configured on a per-repository basis. It also means I’d be mixing up different ways of getting the metadata.

    In any case, not a great set of options here.

    Feeds In Broadtail

    My quest to watch YouTube without using YouTube got a little closer recently with the addition of feeds in Broadtail. This uses the YouTube RSS feed endpoint to list videos recently added to a channel or playlist.
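
    The endpoint in question is the public Atom feed YouTube serves from youtube.com/feeds/videos.xml. Fetching the handful of fields needed for polling comes down to something like this sketch (not the actual Broadtail code; only the video ID and title are decoded):

    package feeds

    import (
        "context"
        "encoding/xml"
        "fmt"
        "net/http"
    )

    // Entry is the minimal subset of a feed entry needed for polling.
    type Entry struct {
        VideoID string `xml:"videoId"`
        Title   string `xml:"title"`
    }

    type feed struct {
        Entries []Entry `xml:"entry"`
    }

    // FetchChannel pulls the public feed for a channel and returns its entries.
    // Playlists work the same way with the playlist_id query parameter instead.
    func FetchChannel(ctx context.Context, channelID string) ([]Entry, error) {
        url := "https://www.youtube.com/feeds/videos.xml?channel_id=" + channelID
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("feed returned %s", resp.Status)
        }
        var f feed
        if err := xml.NewDecoder(resp.Body).Decode(&f); err != nil {
            return nil, err
        }
        return f.Entries, nil
    }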

    Feed listing, in all its 90’s web-style glory.

    There are a bunch of channels that I watch regularly, but I’m very hesitant to subscribe to them within YouTube itself (sorry YouTubers, but I choose not to smash that bell icon). I’m generally quite hesitant to give YouTube any signal about my watching habits, feeding their machine learning models even more information about me. But I do want to know when new videos are available, so that I can get them into Plex once they’re released. This is where feeds come in handy.

    Recent videos of a feed.

    Also improved is the display of video metadata when selecting a feed item or entering a video ID in the quick look bar. Previously this would immediately start a download of the video, but I prefer knowing more about the video first. These downloads aren’t free, and they usually take many hours to complete. Better to know more about them before committing.

    Video details page.

    Incidentally, I think this mode of watching has a slight benefit. There are days when I spend the whole evening binging YouTube, not so much following the algorithm but looking at the various channels I’m interested in for videos that I haven’t seen yet. Waiting several hours for a video download feels a little more measured, and less likely to send me down the YouTube rabbit hole. I’m sure there will still be evenings when I do nothing else other than watch TV, but hopefully that’s more of a choice rather than an accident.

    I think this is enough on Broadtail for the time being. It’s more or less functional for what I want to do with it. Time to move onto something else.

    Some Screenshots Of Broadtail

    I spent some time this morning doing some styling work on Broadtail, my silly little YouTube video download manager I’m working on.

    Now, I think it’s fair to say that I’m not a designer. And these designs look a little dated, but, surprisingly, this is sort of the design I’m going for: centered pages, borders, etc. A bit of a retro, tasteless style that may be ugly, but still usable(-ish).

    It’s not quite finished — the colours need a bit of work — but it’s sort of the style I have in my head.

    More work on the project I mentioned yesterday, codenamed Broadtail. Most of the work was around the management of download jobs. I’m using a job management library I built for another project and have integrated it here so that video downloads can be observed from the web frontend. The library works quite well, but at the moment the jobs are not kept in any sort of disk storage. They’re kept in memory until they’re manually cleared, but I’m hoping to keep only the active jobs in memory and store historical jobs on disk. So most of today’s session was spent making that possible, along with some screens to list and view job details.

    Start of Yet Another Project Because I Can't Help Myself

    One of the reasons why I stopped work on Lorikeet was that I was inspired by those on Micro.blog to setup a Plex server for my YouTube watching needs. A few years ago, I actually bought an old Intel Nuc for that reason, but I never got around to setting it up. I managed to do so last Wednesday and so far it’s working pretty well.

    The next thing I’d like to do is set up RSS subscriptions for certain YouTube channels and automatically download the videos when they are published. I plan to use “youtube-dl” for the actual video downloading part, but I’m hoping to build something that would poll the RSS feeds and trigger the download when new videos are published. I’m hoping that this service would have a web-based frontend so I don’t have to log in via SSH to monitor progress, etc.

    The downloads would need to be automatic, as the requests made by youtube-dl seem to be throttled by YouTube, and a longish video may take several hours to download. If this were a manual process, assuming that I would actually remember to start the download myself, the video wouldn’t be ready for my evening viewing. I’m hoping that my timezone will work to my advantage here. The evenings on the US East Coast are my mornings, so if a video download starts at the beginning of my day, hopefully it will be finished when my evening rolls around. I guess we’ll see.

    Anyway, that’s what my current coding project will be on: something that would setup RSS subscriptions for YouTube channels, and download new videos when they are published.

    This is probably one of those things that already exists out there. That may be true, but there are certain things that I’m hoping to add down the line. One such thing might be adding the notion of an “interest level” to channels, which would govern how long a video is kept around. For example, a channel marked as “very interested” would have every video downloaded and stored into Plex straight away. Channels marked “mildly interested” would have videos downloaded but kept in a holding place until I choose to watch them, in which case they would be moved to Plex. If that doesn’t happen within 7 days or so, the videos would be removed.

    I’d also like to add some video lifecycle management into the mix, just to avoid the disk being completely used up. I can see instances where I’d like to mark videos as “keep forever”, while all the others churn away after 14 days or so. It might be worth checking out what Plex offers for this, just to avoid doubling up on effort.

    But that’s all for the future. For the moment, my immediate goal is to get the basics working.

    Abandoning Project Lorikeet

    I’ll admit it: the mini-project that I have been working on may not have been a good idea.

    The project, which I gave the codename Lorikeet, was to provide a way to stream YouTube videos to a Chromecast without using the YouTube app. Using the YouTube app is becoming a real pain. Ads aside, they’ve completely changed the Chromecast experience from a very basic viewing destination to something akin to a Google TV, complete with recommendations of “Breaking News” from news services that I have no interest in seeing.

    So I spent some time trying to build something to avoid the YouTube app completely, using a mixture of youtube-dl, a Buffalo web-app, and a Flutter mobile app. I spent the last week on it (it’s not pretty so no screenshots), but at this stage I don’t see much point continuing to work on it.

    For one, the experience is far from perfect. Video loading is slow and there are cases when the video pauses due to buffering. I’m sure there are ways around this, but I really don’t want to spend the time learning how to do this.

    It was also expensive. I have a Linode server running in Sydney which acts as a bit of a hobby server (it’s also running Pagepark to serve this site), but in order to be closer to the YouTube CDNs that serve me, I had to rent a server running in Melbourne. And there are not many VPS hosting providers that offer hosting there.

    So I went with Google Cloud.

    Now, I’m sure there’s a lot to like about Google Cloud, but I found its VPS hosting to be quite sub-par. For just over $10 USD a month, I had a Linux virtual server with 512 MB of RAM, 8 GB of storage, and a CPU which I’d imagine is throttled all the way back, as trying to do anything of significance slowed it to a crawl. I had immense issues installing OS updates, getting the Dokku-based web-app deployed, and trying to avoid hitting the storage limit.

    For the same amount of money, Linode offers me a virtual server with 2 GB of RAM, 50 GB of storage, and a real virtual CPU. This server is running 4 Dokku apps, 3 of them with dedicated PostgreSQL databases, and apart from occasionally needing to remove dangling Docker images, I’ve had zero issues with it. None! (The podcasters were right).

    Where was I? Oh, yeah. So, that’s the reason why I’m abandoning this project and will need to re-evaluate my online video watching experience. I might give Plex a try, although before doing something like setting up a dedicated media server, I’ll probably just use the Mac Mini I’ve been using for a desktop in the short term.

    So, yeah, that’s it. It’s hard to abandon a project you spent any amount of time on. I suppose the good thing is that I got to play around with Flutter and learnt how to connect to a Chromecast using Dart, so it’s not a complete waste.
