Long Form Posts
- One of the most prominent words is “just”, with “it’s” not far behind. I thought it was because I started a lot of sentences with “it’s just”, but it turns out I’ve only used that phrase once, while the individual words show up around 10 times each. I guess I use “just” a lot (apparently, so does Seth). I am surprised to see the word “anyway” only showing up twice.
- Lots of first-person pronouns and articles, like “I’m”, “I’ve”, and “mine”. That’s probably not going to change either. This is just1 the tonal choice I’ve made. I read many blogs that mainly speak in the second person and I don’t think it’s a style that works for me. Although I consciously know that they’re not speaking to me directly, or even to the audience as a whole, I don’t want to give that impression myself, unless that’s my intention. So it’ll be first person for the foreseeable future I’m sure.
- Because it’s only the first page, many of the more prominent words are from recent posts. So lots about testing, OS/2, and Bundanoon. I would like to cut down on how much I write about testing. A lot of it is little more than venting, which I guess is what one does on their blog, but I don’t want to make a habit of it.
- I see the word “good” is prominent. That’s good: not a lot of negative writing (although, this is a choice too).
- I see the word “video” is also prominent. That’s probably not as good. Might be a sign I’m talking a little too much about the videos I’ve been watching.
- Should there be a “not” here? “It’s not even attention?” ↩︎
- No migration script — If you can get away with not writing a migration script, then this is the preferred option. Of course, this will depend on how much data you’ll need to migrate, and how complicated keeping support for the previous version in your code-base would be. If the amount of data is massive (we’re talking millions or hundreds of millions of rows), then this is probably your only option. On the other hand, if there’s only a few hundred or a few thousand rows, then it’s probably worth just migrating the data.
- Always indicate progress — You’re likely going to have way more data in prod than in your dev environments, so consider showing ongoing progress of the running script. If there are multiple stages in the migration process, make sure you log when each stage begins. If you’re running a scan or processing records, then give some indication of progress through the collection of rows. A progress bar is nice, but failing that, include a log message say every 1,000 records or so (a rough sketch of this follows below).
- Calculate expected migration size if you can — If it’s relatively cheap to get a count of the number of records that need to be migrated, then it’s helpful to report this to the user. Even an estimate would be good, just to give a sense of magnitude. If it’ll be too expensive to do so, then you can skip it: better to just get migrating rather than have the user wait for a count.
- Silence is golden — Keep logging to the screen to a minimum, mainly progress indicators plus any serious warnings or errors. Avoid bombarding the user with spurious log messages. They want to know when things go wrong; otherwise, they just want to know that the script is running properly. That said:
- Log everything to a file — If the script is migrating data but will ignore records that have already been migrated, then log that those records were skipped. What you’re looking for is assurance that all records have been dealt with, meaning that any discrepancy with the summary report (such as max records encountered vs. max records migrated) can be reconciled with the log file.
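To make the progress-indication point a little more concrete, here’s a rough sketch of the kind of logging I have in mind. It’s only an illustration: migrateRecord and the record count are placeholders, not from any real script.

package main

import (
    "log"
)

// migrateRecord stands in for whatever per-record work the real script does.
func migrateRecord(id int) error {
    return nil
}

func main() {
    const total = 250000 // if a count is cheap to get, report it up front
    log.Printf("starting migration of %d records", total)

    for id := 0; id < total; id++ {
        if err := migrateRecord(id); err != nil {
            log.Printf("record %d: %v", id, err)
            continue
        }
        // A progress line every 1,000 records keeps the screen quiet but informative.
        if (id+1)%1000 == 0 {
            log.Printf("migrated %d of %d records", id+1, total)
        }
    }
    log.Printf("migration complete")
}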
- I cringe every time I see society bend to the limitations of the software they use. It shouldn’t be this way; the software should serve the user, not the other way around.
- I appreciate a well designed API. Much of my job is using APIs built by others, and the good ones always feel natural to use, like water flowing through a creek. Conversely, a badly designed API makes me want to throw my laptop to the ground.
- I think a well designed standard is just as important as a well designed API. Thus, if you’re extending the standard in a way that adds a bunch of exceptions to something that’s already there, you may want to reflect on your priorities and try an approach that doesn’t do that.
- I also try to appreciate, to varying levels of success, that there are multiple ways to do something and once all the hard and fast requirements are settled, it usually just comes down to taste. I know what appeals to my taste, but I also (try to) recognise that others have their own taste as well, and what appeals to them may not gel with me. And I just have to deal with it. I may not like it, but sometimes we have to deal with things we don’t like.
- I believe a user’s home directory is their space, not yours. And you better have a bloody good reason for adding stuff there that the user can see and didn’t ask for.
- Although not by much. ↩︎
- Younger me would be shocked to learn that I’d favour a WYSIWYG editor over a text editor with Markdown support. ↩︎
Asciidoc, Markdown, And Having It All
Took a brief look at Asciidoc this morning.
This is for that Markdown document I’ve been writing in Obsidian. I’ve been sharing it with others using PDF exports, but its importance has grown to a point where I need to start properly maintaining a change log. And also… sharing via PDF exports? What is this? Microsoft Word in the 2000s?
So I’m hoping to move it to a Gitlab repo. Gitlab does support Markdown with integrated Mermaid diagrams, but not Obsidian’s extension for callouts. I’d like to be able to keep these callouts, as I’ve used them in quite a few places.
While browsing through Gitlab’s help guide on Markdown extensions, I came across their support for Asciidoc. I haven’t tried Asciidoc before, and after taking a brief look at it, it seemed like a format better suited to the type of document I’m working on. It has things like auto-generated tables of contents, built-in support for callouts, and proper title and heading separations; just features that work better than Markdown for long, technical documents. The syntax also supports a number of text-based diagram formats, including Mermaid.
However, as soon as I started porting the document over to Asciidoc, I found that it’s no Markdown in terms of mind share. Tool support is quite limited; in fact, it’s pretty bad. There’s nothing like iA Writer for Asciidoc, with the split-screen source text and live preview that updates when you make changes. There are loads of these tools for Markdown, so many that I can’t keep track of them (the name of the iA Writer alternative always eludes me).
Code editors should work, but they’re not perfect either. GoLand supports Asciidoc, but not with embedded Mermaid diagrams. At least not out of the box: I had to get a separate JAR which took around 10 minutes to download. Even now I’m fighting with the IDE, trying to get it to find the Mermaid CLI tool so it can render the diagrams. I encountered none of these headaches when using Markdown: GoLand supports embedded Mermaid diagrams just fine. I guess I could try VS Code, but to download it just for this one document? Hmm.
In theory the de-facto CLI tool should work, but in order to get Mermaid diagrams working there I need to download a Ruby gem and bundle it with the CLI tool (this is in addition to the same Mermaid command-line tool GoLand needs). Why this isn’t bundled by default in the Homebrew distribution is beyond me.
So for now I’m abandoning my wish for callouts and just sticking with Markdown. This is probably the best option, even if you set tooling aside. After all, everyone knows Markdown, a characteristic of the format that I shouldn’t simply ignore. Especially for these technical documents, where others are expected to contribute changes as well.
It’s a bit of a shame though. I still think Asciidoc could be better for this form of writing. If only those that make writing tools would agree.
Addendum: after drafting this post, I found that Gitlab actually supports auto-generated table of contents in Markdown too. So while I may not have it all with Markdown — such as callouts — I can still have a lot.
My Position On Blocking AI Web Crawlers
I’m seeing a lot of posts online about sites and hosting platforms blocking web crawlers used for AI training. I can completely understand their position, and fully support them: it’s their site and they can do what they want.
Allow me to lay my cards on the table. My current position is to allow these crawlers to access my content. I’m choosing to opt in, or rather, not to opt out. I’m probably in the minority here (well, the minority of those I follow), but I do have a few reasons for this, with the principal one being that I use services like ChatGPT and get value from them. So to prevent them from training their models on my posts feels personally hypocritical to me. It’s the same reason why I don’t opt out of GitHub Copilot crawling my open source projects (although that’s a little more theoretical, as I’m not a huge user of Copilot). To some, this position might sound weird, and when you consider the gulf between what value these AI companies get from scraping the web versus what value I get from them as a user, it may seem downright stupid. And if you approach it from a logical perspective, it probably is. But hey, we’re in the realm of feelings, and right now this is just how I feel. Of course, if I were to make a living out of this site, it would be a different story. But I don’t.
And this leads to the tension I see between site owners making decisions regarding their own content, and services making decisions on behalf of their users. This site lives on Micro.blog, so I’m governed by what Manton chooses to do or not do regarding these crawlers. I’m generally in favour of what Micro.blog has chosen so far: allowing people to block these scrapers via “robots.txt” but not yet blocking requests based on their IP address. I’m aware that others may not agree, and I can’t, in principle, reject the notion of a hosting provider choosing to block these crawlers at the network layer. I am, and will continue to be, a customer of such services.
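For anyone who hasn’t looked into it, the robots.txt side of this amounts to a couple of lines per crawler. For example, GPTBot is OpenAI’s documented crawler user agent; other AI companies publish their own:

User-agent: GPTBot
Disallow: /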
But I do think some care should be taken, especially when it comes to customers (and non-customers) asking these services to add these network blocks. You may have good reason to demand this, but just remember there are users of these services whose opinions may differ. I’d personally prefer a mechanism where you opt into these crawlers, and it’s an option I’d probably take (or probably not; my position is not that strong). I know that’s not possible under all circumstances, so I’m not going to cry too much if a blanket ban is what I get instead.
I will make a point about some comments I’ve seen which, if taken in an uncharitable way, imply that creators who have no problem with these crawlers don’t care about their content. I think such opinions should be worded carefully. I know how polarising the use of AI currently is, and making such remarks, particularly within posts that are already heated due to the author’s feelings regarding these crawlers, risks spreading this heat to those who read them. The tone gives the impression that creators okay with these crawlers don’t care about what they push online, or should care more than they do. That might be true for some — it might even be true for me once in a while — but making such blanket assumptions can come off as a little insulting. And look, I know that’s not what they’re saying, but it can come across that way at times.
Anyway, that’s my position as of today. Like most things here, this may change over time, and if I become disenchanted with these companies, I’ll join the blockade. But for the moment, I’m okay with sitting this one out.
Thinking About Plugins In Go
Thought I’d give Go’s plugin package a try for something. It seems to work fine for the absolutely simple things. But start importing any dependencies and it becomes a non-starter. You start seeing these sorts of error messages when you try to load the plugin:
plugin was built with a different version of package golang.org/x/sys/unix
Looks like the host and plugins need to have exactly the same dependencies. To be fair, the package documentation says as much, and also states that the best use of plugins is for dynamically loaded modules built from the same source. But that doesn’t help with what I’m trying to do, which is encoding a bunch of private struct types as Protobuf messages.
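For reference, the host-side loading pattern I was playing with looks roughly like this. It’s a minimal sketch: Greet is just a stand-in symbol, and the plugin itself would be built separately with go build -buildmode=plugin.

package main

import (
    "fmt"
    "log"
    "plugin"
)

func main() {
    // plugin.Open is where the "different version of package" error shows up
    // if the host and plugin were built against mismatched dependencies.
    p, err := plugin.Open("greeter.so")
    if err != nil {
        log.Fatal(err)
    }

    sym, err := p.Lookup("Greet")
    if err != nil {
        log.Fatal(err)
    }

    greet, ok := sym.(func(string) string)
    if !ok {
        log.Fatal("unexpected type for Greet symbol")
    }
    fmt.Println(greet("world"))
}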
So it might be that I’ll need to find another approach. I wonder how others would do this. An embedded scripting language would probably not be suitable here, since I’m dealing with Protobuf and byte slices. Maybe building the plugin as a C shared object? That could work, but then I’d lose all the niceties that come from using Go’s type system.
Another option would be something like WASM. It’s interesting seeing WASM modules becoming a bit of a thing for plugin architectures. There’s even a Go runtime to host them. The only question is whether they would have the same facilities as a regular process, like network access; or whether they’re completely sandboxed, and you as the plugin host would need to add support for these facilities.
I guess I’d find out if I were to spend any more time looking at this. But this has been a big enough distraction already. Building a process to shell-out to would work just fine, so that’s probably what I’ll ultimately do.
Word Cloud
From Seth’s blog:
Consider building a word cloud of your writing.
Seems like a good idea so that’s what I did, taking the contents of the first page of this blog. Here it is:
Some observations:
Anyway, I thought these findings were quite interesting. One day, I’ll have to make another word cloud across all the posts on this blog.
Day Trip to Bundanoon
Decided to go on a day trip to Bundanoon today. It’s been five years since I last visited, and I remember liking the town enough that I thought it’d be worth visiting again. It’s not close, around 1 hour and 40 minutes from Canberra, but it’s not far either, and I thought it would be a nice way to spend the day. Naturally, others agreed, which I guess explains why it was busier than I expected, what with the long weekend and all. Fortunately, it wasn’t too crowded, and I still had a wonderful time.
The goal was to go on a bush-walk first. I chose the Erith Coal Mine track, for no particular reason other than it sounded interesting. This circuit track was meant to take you to a waterfall by an old coal mine. However, the track leading to the actual mine was closed, thanks to the recent rain. In fact, if I could describe the bush-walks in one word, it would be “wet”. The ground was soaked, and although conditions were lovely, the paths were very slippery.
After completing that circuit in probably 45 minutes, my appetite for bush-walking was still unsatisfied, so I tried the Fairy Bower Falls walk next. This was not as steep as the first one, but it turned out to be a much harder track due to how wet and slippery everything was.
I stopped short of the end of this one too, as it seemed the path had been washed away. But I did manage to get a glimpse of the waterfall, so I’m considering that a win.
After that, I returned to the town for lunch and some train spotting. The train line to Goulburn runs through Bundanoon, and the last time I was there, there was probably a freight train every hour or so. So I was hoping to get a good view of a lot of freight traffic. Maybe shoot a video of a train passing through the station I could share here.
I had lunch outside and walked around the town a little, always within sight of the railway line, hoping for at least one train to pass through. But luck wasn’t on my side, and it wasn’t until I was on my way home that I saw what I think was a grain train passing through Wingello. I pulled over to take a video, and while I missed the locomotive, I got a reasonable enough recording of the wagons.
Being a little more hopeful, I stopped at Tallong, the next town along the road. I bought a coffee and went to the station to drink it and hopefully see a train pass through. Sadly, it was not to be. So I decided to head back home.
So the train spotting was a bust, and the bush-walks were difficult, but all in all it was quite a nice day. I look forward to my next visit to Bundanoon. Let’s hope the trains are running a little more frequently then.
An Unfair Critique Of OS/2 UI Design From 30 Years Ago
A favourite YouTube channel of mine is Michael MJD, who likes to explore retro PC products and software from the 90s and early 2000s. Examples include videos on Windows 95, Windows 98, and the various consumer tech products designed to get people online. Can I just say how interesting those times were, when phrases such as “surfing the net” were thrown about, and shopping centres were always used to explain visiting websites. I guess it was the best analogy one could use at the time.
A staple of Michael MJD’s channel is when he installs an old operating system onto old hardware1. Yesterday, I watched the one where he installed OS/2 Warp 4 onto a ’98 PC. We were an OS/2 household back when I was growing up, thanks to my dad using it for work, and I can remember using OS/2 2.1 and thinking it was actually pretty good. Better than Windows 95, in fact. I can’t remember if I ever used Warp 4, though.
Anyway, while watching this video, I was taken aback by how bad the UI design of OS/2 Warp 4 was. And really, I probably shouldn’t be throwing stones here: I’m not a great UI designer myself. But I guess my exposure to later versions of Windows and macOS matured my tastes somewhat, where I got exposed to the ideas of interaction systems and user experience design (and generally just grew up). Obviously, given how new the GUI was back then, many of these concepts were still in their infancy, although if you were to compare these UIs to the classic Mac or even Windows 3.1, I do think there was something missing in IBM’s design acumen. Was it ability? Interest? Care? Not sure. But given that it’s been 30 years, I’m not expecting the OS/2 devs to be able to defend themselves now. That’s what makes this critique wholly unfair.
Anyway, I thought I’d share some of the stills from this video that contain some of the more cringeworthy UI designs2, along with my remarks. Enjoy.
Some More Thoughts On Unit Testing
Kinda want to avoid this blog descending into a series of “this is wrong with unit testing” posts, but something did occur to me this morning. We’ve kicked off a new service at work recently. It’s just me and one other developer working on it at the moment, and it’s given us the opportunity to try out the “mockless” approach to testing which I ranted about a couple of weeks ago (in fact, the other developer is the person I had that discussion with). And it’s probably no surprise, but I’m finding writing tests this way to be a much nicer experience already.
And I think I’ve come to the realisation that the issue is not so much with mocking itself. Rather, it’s the style of testing that it encourages. When you’re testing against “real” services, you’re left with treating them as a black box. There’s no real way to verify your code is working correctly other than letting it interact with these services as it normally would, and then “probing” them in some way — running queries, waiting for messages to arrive at topics, etc. — to know whether the interaction worked. You can’t just verify this by intercepting the various calls made by the service (well, you can, but it would be difficult to do).
There’s nothing about mocking that inhibits this style of testing. You can use mocks to simulate a message broker by storing the messages in an in-memory list, for example. What mocking does do, however, is make it easier to write tests that simply intercept the calls made by the service and verify that they were made. It’s less upfront work than setting up a real client, or simulating a message broker, but now you’ve tied your tests to your implementation. You may feel like you’ve saved time and effort now, but really you’ve just deferred it for later, when you need to fix your tests because you’ve changed your implementation.
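To illustrate the difference, here’s a rough sketch of the style I mean, with a hypothetical Publisher interface and an in-memory fake standing in for the message broker. The test checks the observable outcome (a message arrived on the topic) rather than which calls happened to be made along the way.

// In something like service_test.go
package service

import "testing"

// Publisher is a hypothetical dependency of the code under test.
type Publisher interface {
    Publish(topic string, msg []byte) error
}

// inMemoryPublisher simulates a message broker by keeping messages in memory.
type inMemoryPublisher struct {
    messages map[string][][]byte
}

func (p *inMemoryPublisher) Publish(topic string, msg []byte) error {
    if p.messages == nil {
        p.messages = make(map[string][][]byte)
    }
    p.messages[topic] = append(p.messages[topic], msg)
    return nil
}

// notifyUser stands in for the service code being tested.
func notifyUser(p Publisher, userID string) error {
    return p.Publish("user-events", []byte("notified "+userID))
}

func TestNotifyUser(t *testing.T) {
    pub := &inMemoryPublisher{}
    if err := notifyUser(pub, "u123"); err != nil {
        t.Fatal(err)
    }
    // Assert on the outcome, not on intercepted calls.
    if got := len(pub.messages["user-events"]); got != 1 {
        t.Fatalf("expected 1 message on user-events, got %d", got)
    }
}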
I know this is stuff I’ve said before, so I’ll just stop here, and end by saying that I’m excited to seriously try out this approach to writing unit tests. Is it a better approach than using mocks? I guess time will tell. It’s been my experience that it’s when you need to refactor things in your service that you find out how good your tests are. So I guess we’ll check back in about six months or so.
Don't Leave User Experience For Later
DHH wrote a post yesterday that resonates with me. This is how he opens:
Programmers are often skeptical of aesthetics because they frequently associate it with veneering
I doubt DHH reads this blog, but he could’ve addressed this post directly at me. I’m skeptical about aesthetics. Well… maybe not skeptical, but if we’re talking about personal projects, I do consider it less important than the functional side of things. Or at least I did.
He continues:
Primary reason I appreciate aesthetics so much is its power to motivate. And motivation is the rare fuel that powers all the big leaps I’ve ever taken in my career and with my projects and products. It’s not time, it’s even attention. [sic1] It’s motivation. And I’ve found that nothing quite motivates me like using and creating beautiful things.
He was mainly talking about the code design, but I think this extends to the project’s UI and UX. Particularly if you’re building it for yourself. Maybe especially if you’re building it for yourself.
And this is where my story begins. From the start, I’ve been putting off any task that would improve the user experience of one of my side projects. I considered such tasks unnecessary, or certainly less important than the “functional” side of things. Whenever faced with a decision on what part to work on next, the user experience work was left undone, usually with a thought along the lines of “eh, it’s UI stuff, I’ll do it later.”
But I think this was a mistake. Since I was actually using this tool, I was exposed to the clunky, unfinished UI whenever I needed to do something with it. And it turns out that no matter how often you tell yourself you’ll fix it later, a bad UI is still a bad UI, and it affects how the tool feels to use. And let me tell you: it didn’t feel good at all. In fact, I detested it so much that I thought about junking it altogether.
It was only when I decided to add a bit of polish that things improved. And it didn’t take much: just fixing the highlights of the nav to reflect the current section, for example. But it was enough, and the improved UI made it feel better to use, which motivated me to get back to working on it again.
So I guess the takeaway is similar to the point DHH made in his post: if something feels good to use, you’re more likely to work on it. And sure, you can’t expect to have a great user experience out of the box: I’ll have to work through some early iterations of it as I build things out. But I shouldn’t ignore user experience completely, or defer it until “later”. If it’s something I’m using myself, and it doesn’t feel good to use, it’s best to spend some time on it now.
Writing Good Data Migration Scripts
I’m waiting for a data migration to finish, so I’ve naturally got migration scripts on my mind.
There’s an art to writing a good migration script. It may seem that simply throwing together a small Python script would be enough; and for the simpler cases, it very well might be. But it’s been my experience that running the script in prod is likely to be very different from doing test runs in dev.
There are a few reasons for this. For one thing, prod is likely to have way more data, so there will always be more to migrate. And dealing with production data is always going to be a little more stressful than in non-prod, especially when you consider things like restricted access and the impact of things going wrong. So a script with a better “user experience” is always going to be better than one slapped together.
So without further ado, here are the attributes of what I think makes for a good migration script:
May your migrations be quiet and painless.
On Sharing Too Much About Too Little
Manuel Moreale wrote an interesting post today about sharing stuff online:
Life can be joyful and wonderful and marvellous. But it can also be a fucking nightmare. And yes, it’s important to celebrate the victories and to immortalise the glorious moment. But it’s also important to document the failures, the shitty moments, the dark places our minds find themselves stuck in. It’s all part of what makes us unique after all.
I can certainly appreciate those who are willing to share both the ups and downs with others online. But I know that I’d rather not, for a couple of reasons. The first is that there’s already so much dark stuff online: why add to it? And the second is that, along with being a public journal of most of my day, this site is a bit of an escape as well: a refuge I can visit when the world gets a little too much for me.
And it may seem that what is posted here is exclusively what I feel, think, or do during the day, but that couldn’t be further from the truth. Shitty things happen to me; I have dark thoughts, fits of despair, and general periods of malaise. Those get documented too, but in a private journal that’s not for public consumption.
That’s not to say that others should do likewise. I’m not here to tell you what to post, or why: you do you. This is just an explanation of how and why I post what I post. Maybe one day this will change, but until then, that’s how I like to do things.
As Someone Who Works In Software
As someone who works in software…
The Perfect Album
The guys on Hemispheric Views have got me blogging once again. The latest episode brought up the topic of the perfect album: an album that you can “just start from beginning, let it run all the way through without skipping songs, without moving around, just front to back, and just sit there and do nothing else and just listen to that whole album”.
Well, having crashed Hemispheric Views once, I thought it was time once again to give my unsolicited opinion on the matter. But first, some comments on some of the suggestions made on the show.
I’ll start with Martin’s suggestion of the Cat Empire. I feel like I should like the Cat Empire more than I currently do. I used to know someone who was fanatical about them. He shared a few of their songs when we were jamming — we were in a band together — and on the whole I thought they were pretty good. They’re certainly a talented bunch of individuals. But it’s not a style of music that gels with me. I’m just not a huge fan of ska, which is funny considering the band we were both in was a ska band.
I feel like I haven’t given Radiohead a fair shake. Many people have approached me and said something along the lines of “you really should try Radiohead; it’s a style of music you may enjoy,” and I never got around to following their advice. I probably should, though; I think they may be right. Similarly for Daft Punk, of which I have heard a few tracks and thought them to be pretty good. I really should give Random Access Memories a listen.
I would certainly agree with Jason’s suggestion of The Dark Side of the Moon. I count myself a Pink Floyd fan, and although I wouldn’t call this my favourite album of theirs, it’s certainly a good album (if you were to ask, my favourite would probably be either The Wall or Wish You Were Here, plus side B of Meddle).
As to what my idea of a perfect album would be, my suggestion is pretty simple: it’s anything by Mike Oldfield.
LOL, just kidding!1 😄
No, I’d say a great example of a perfect album is Jeff Wayne’s musical adaptation of The War Of The Worlds.
I used to listen to this quite often during my commute, before the pandemic arrived and brought that listen count down to zero. But I picked it back up a few weeks ago and it’s been a constant earworm since. I think it ticks most of the boxes for a perfect album. It’s a narrative set to music, which makes it quite coherent and naturally discourages skipping tracks. The theming around the various elements of the story is really well done: hearing a theme introduced near the start of the album come back later is always quite a thrill, and you find yourself picking up more of them as you listen to the album multiple times. It’s very much not a recent album but, much like Pink Floyd, there’s a certain timelessness that makes it still a great piece of music even now.
Just don’t listen to the recent remakes.
Favourite Comp. Sci. Textbooks
John Siracusa talked about his two favourite textbooks on Rec Diffs #233: Modern Operating Systems and Computer Networks, both by Andrew S. Tanenbaum. I had those textbooks at uni as well. I still do, actually. They’re fantastic. If I were to recommend something on either subject, it would be those two.
I will add that my favourite textbook from my degree was Compilers: Principles, Techniques, and Tools by Alfred V. Aho, et al., also known as the “dragon book.” If you’re interested in compiler design in any way, I can definitely recommend this book. It’s a little old, but really, the principles are more or less the same.
Thou Doth Promote Too Much
Manuel Moreale wrote an interesting post about self promotion, where he reflects on whether closing out all his People and Blogs posts with a line pointing to his Ko-Fi page is too much:
And so I added that single line. But adding that single line was a struggle. Because in my head, it’s obvious that if you do enjoy something and are willing to support it, you’d probably go look for a way to do it. That’s how my brain works. But unfortunately, that’s not how the internet works. Apparently, the correct approach seems to be the opposite one. You have to constantly remind people to like and subscribe, to support, to contribute, and to share.
I completely understand his feelings about this. I’m pretty sure I’d have just as much trouble adding such a promotion at the bottom of my posts. Heck, it’s hard enough to write about what I’m working on here without any expectation from the reader other than to maybe, possibly, read it. Those posts have been relegated to a separate blog, so as to not bother anyone.
But as a reader of P&B, I think the line he added is perfectly fine. I think it’s only fair to ask people to consider supporting something where it’s obvious someone put a lot of effort into it, as he obviously has been doing with P&B.
As for where to draw the line, I think I agree with Moreale:
How much self-promotion is too much? Substack interrupting your reading experience to remind you to subscribe feels too much to me. An overlay interrupting your browsing to ask you to subscribe to a newsletter is also too much. Am I wrong? Am I crazy in thinking it’s too much?
I get the need to “convert readers” but interrupting me to sign up to a newsletter is just annoying. And I’m not sure “annoying” is the feeling you want to imbue in your readers if you want them to do something.
But a single line at the end of a quality blog post? Absolutely, go for it!
Crashing Hemispheric Views #109: HAZCHEM
Okay, maybe not “crashing”, à la Hey Dingus. But some thoughts did come to me while listening to Hemispheric Views #109: HAZCHEM that I thought I’d share with others.
Haircuts
I’m sorry but I cannot disagree more. I don’t really want to talk while I’m getting a haircut. I mean I will if they’re striking up a conversation with me, but I’m generally not there to make new friends; just to get my hair cut quickly and go about my day. I feel this way about taxis too.
I’m Rooted
I haven’t really used “rooted” or “knackered” that much. My go-to phrase is “buggered,” as in “oh man, I’m buggered!” or simply just “tired”. I sometimes use “exhausted” when I’m really tired, but there are just too many syllables in that word for daily use.
Collecting
I’m the same regarding stickers. I’ve received (although not really sought out) stickers from various podcasts and I didn’t know what to do with them. I’ve started keeping them in this journal I never used, and apart from my awful handwriting documenting where they’re from and when I added them, so far it’s been great.
I probably do need to get some HV stickers, though.
A Trash Ad for Zachary
Should check to see if Johnny Decimal got any conversions from that ad in #106. 😀
Also, here’s a free tag line for your rubbish bags: we put the trash in the bags, not the pods.
🍅⏲️ 00:39:05
I’m going to make the case for Vivaldi. Did you know there’s actually a Pomodoro timer built into Vivaldi? Click the clock on the bottom-right of the status bar to bring it up.
Never used it myself, since I don’t use a Pomodoro timer, but can Firefox do that?!
Once again, a really great listen, as always.
On Micro.blog, Scribbles, And Multi-homing
I’ve been asked why I’m using Scribbles given that I’m here on Micro.blog. Honestly, I wish I could say I’ve got a great answer. I like both services very much, and I have no plans of abandoning Micro.blog for Scribbles, or vice versa. But I am planning to use both for writing stuff online, at least for now, and I suppose the best answer I can give is a combination of various emotions and hang-ups I have about what I want to write about, and where it should go.
I am planning to continue to use Micro.blog pretty much how others would use Mastodon: short-form posts, with the occasional photo, mainly about what I’m doing or seeing during my day. I’ll continue to write the occasional long-form posts, but it won’t be the majority of what I write here.
My intention is for what I post on Scribbles to be more long-form, which brings me to my first reason: I think I prefer Scribbles’ editor for long-form posts. Micro.blog works well for micro-blogging, but I find any attempt to write something longer a little difficult. I can’t really explain it. It just feels like I’m spending more effort trying to get the words out on the screen, like they’re resisting in some way.
It’s easier for me to do this in Scribbles’ editor. I don’t know why. Might be a combination of how the compose screen is styled and laid out, plus the use of a WYSIWYG editor1. But whatever it is, it all combines into an experience where the words flow a little easier for me. That’s probably the only way I can describe it. There’s nothing really empirical about it all, but maybe that’s the point. It involves the emotional side of writing: the “look and feel”.
Second, I like that I can keep separate topics separate. I thought I could be someone who can write about any topic in one place, but when I’m browsing this site myself, I get a bit put out by all the technical topics mixed in with my day-to-day entries. They feel like they don’t belong here. Same with project notes, especially given that they tend to be more long-form anyway.
This I just attribute to one of my many hang-ups. I never have this issue with other sites I visit. It may be an emotional response from seeing what I wrote about; where reading about my day-to-day induces a different feeling (casual, reflective) than posts about code (thinking about work) or projects (being a little more critical, maybe even a little bored).
Being able to create multiple blogs in Scribbles, thanks to signing up for the lifetime plan, gives me the opportunity to create separate blogs for separate topics: one for current projects, one for past projects, and one for coding topics. Each of them, along with Micro.blog, can have its own purpose and writing style: more of a public journal for the project sites, more informational or critical for the coding topics, and more day-to-day Mastodon-like posts on Micro.blog (I also have a check-in blog which is purely a this-is-where-I’ve-been record).
Finally, I think it’s a bit of that “ooh, shiny” aspect of trying something new. I definitely got that using Scribbles. I don’t think there’s much I can do about that (nor, do I want to 😀).
And that’s probably the best explanation I can give. Arguably it’s easier just writing in one place, and to that I say, “yeah, it absolutely is.” Nothing about any of this is logical at all. I guess I’m trying to optimise for posting something without all the various hang-ups I have about posting it at all, and I think having these separate spaces to do so helps.
Plus, knowing me, it’s all likely to change pretty soon, and I’ll be back to posting everything here again.
Self-Driving Bicycle for The Mind
While listening to the Stratechery interview with Hugo Barra, a thought occurred to me. Barra mentioned that Xiaomi was building an EV. Not a self-driving one, mind you: this one has a steering wheel and pedals. He made the comment that were Apple to actually go through with releasing a car, it would look a lot like what Xiaomi has built. I haven’t seen either car project myself, so I’ll take his word for it.
This led to the thought that it was well within Apple’s existing capability to release a car. They would’ve had to skill up in automotive engineering, but they can hire people for that. What they couldn’t do was all the self-driving stuff. No-one can do that yet, and it seems to me that being unable to deliver on this non-negotiable requirement was one of the things that doomed the project. Sure, there were others — it seems like they were lacking focus in a number of other areas — but this seems like a big one.
This led to the next thought, which is why Apple ever thought it was a good idea to make the car self-driving. What’s wrong with having one driven by the user? This seems like a very un-Apple-like product decision. Has Apple ever been good at releasing tech that would replace, rather than augment, the user’s interaction with the device? Do they have phones that browse the web for you? Have they replaced ZSH with ChatGPT in macOS (heaven forbid)? Probably the only product that comes close is Siri, and we all know what a roaring success that is.
Apple’s strength is in releasing products that keep human interaction a central pillar of their design. They should just stick with that, and avoid any of the self-driving traps that come up. It’s a “bicycle for the mind” after all: the human is still the one doing the pedalling.
On Post Headers
My answer to @mandaris’s question:
How many of you are using headers in your blogging? Are you using anything that denotes different sections?
I generally don’t use headers, unless the post is so long it needs them to break it up a little. When I do, I tend to start with H2, then step down to H3, H4, etc.
I’d love to start with H1, but most themes I encounter, including those from software like Confluence, style H1 to be almost the same size as the page title. This kills me as the page title should be separate from any H1s in the body, and styled differently enough that there’s no mistaking what level the header’s on.
But, c’est la vie.
Sorting And Go Slices
Word of caution for anyone passing Go slices to a function which will sort them. Doing so as is will modify the original slice. If you were to write this, for example:
package main

import (
    "fmt"
    "sort"
)

func printSorted(ys []int) {
    sort.Slice(ys, func(i, j int) bool { return ys[i] < ys[j] })
    fmt.Println(ys)
}

func main() {
    xs := []int{3, 1, 2}
    printSorted(xs)
    fmt.Println(xs)
}
You will find, when you run it, that both xs and ys will be sorted:

[1 2 3]
[1 2 3]
If this is not desired, the remedy would be to make a copy of the slice prior to sorting it:
func printSorted(ys []int) {
    ysDup := make([]int, len(ys))
    copy(ysDup, ys)
    sort.Slice(ysDup, func(i, j int) bool { return ysDup[i] < ysDup[j] })
    fmt.Println(ysDup)
}
This makes sense when you consider that the elements of a slice are stored in a normal array under the hood. The slice itself is a small header holding a pointer to that array along with the length and capacity; the header is copied when passed to a function, but the underlying array is shared, and it is this array that sort.Slice will modify.
On the face of it, this is a pretty trivial thing to find out. But it’s worth noting here just so that I don’t have to remember it again.
Adding A Sidebar To A Tiny Theme Micro.blog
This is now a standalone Micro.blog plugin called Sidebar For Tiny Theme, which adds support for this out of the box. The method documented below no longer works, but I'm keeping it here for posterity.
I thought I’d write a little about how I added a sidebar with recommendations to my Tiny Theme’d Micro.blog, for anyone else interested in doing likewise. For an example of how this looks, please see this post, or just go to the home page of this site.
I should say that I wrote this in the form of a Micro.blog plugin, just so that I can use a proper text editor. It’s not published at the time of this post, but you can find all the code on Github, and although the steps here are slightly different, they should still work using Micro.blog’s template designer.
I started by defining a new Hugo partial for the sidebar. This means that I can choose which page I want it to appear on without any copy-and-paste. You can do so by adding a new template with the name layouts/partials/sidebar.html, and pasting the following template:
<div class="sidebar">
    <div class="sidebar-cell">
        <header>
            <h1>Recommendations</h1>
        </header>
        <ul class="blogroll">
            {{ range .Site.Data.blogrolls.recommendations }}
                <li><a href="{{ .url }}">{{ .name }}: <span>{{ (urls.Parse .url).Hostname }}</span></a></li>
            {{ else }}
                <p>No recommendations yet.</p>
            {{ end }}
        </ul>
    </div>
</div>
This creates a sidebar with a single cell containing your Micro.blog recommendations. Down the line I’m hoping to add additional cells with things like shoutouts, etc. The styling is not defined for this yet though.
The sidebar is added to the page using Tiny Theme’s microhooks customisation feature. I set the microhook-after-post-list.html hook to the following HTML to include the sidebar on the post list:
{{ partial "sidebar.html" . }}
In theory it should be possible to add it to the other pages just by adding the same HTML snippet to the other microhooks (go for the “after” ones). I haven’t tried it myself though so I’m not sure how this will look.
Finally, there’s the styling. I added the following CSS which will make the page slightly wider and place the sidebar to the right side of the page:
@media (min-width: 776px) {
    body:has(div.sidebar) {
        max-width: 50em;
    }

    div.wrapper:has(div.sidebar) {
        display: grid;
        grid-template-columns: minmax(20em,35em) 15em;
        column-gap: 60px;
    }
}

div.sidebar {
    font-size: 0.9em;
    line-height: 1.8;
}

@media (max-width: 775px) {
    div.sidebar {
        display: none;
    }
}

div.sidebar header {
    margin-bottom: 0;
}

div.sidebar header h1 {
    font-size: 1.0em;
    color: var(--accent1);
}

ul.blogroll {
    padding-inline: 0;
}

ul.blogroll li {
    list-style-type: none !important;
}

ul.blogroll li a {
    text-decoration: none;
    color: var(--text);
}

ul.blogroll li a span {
    color: var(--accent2);
}
This CSS uses the style variables defined by Tiny Theme, so it should match the colour scheme of your blog. A page with a sidebar is also wider than one without it; the width of pages that don’t have the sidebar doesn’t change (if this isn’t your cup of tea, you can remove the :has(div.sidebar) selector from the body tag). The sidebar will also not appear on small screens, like a phone in portrait orientation. I’m not entirely sure if I like this, and I may eventually make changes. But it’s fine for now.
So that’s how the sidebar was added. More to come as I tinker with this down the line.