Making A Long Form Posts Category In Micro.blog
I use the Categories feature of Micro.blog to organise the types of posts I make on this site. One of the categories on this blog is called Long Form Posts, which I use to file all the posts that have titles. This is done automatically, so I don’t have to think about adding a post to this category once I’ve written it1.
It’s a little hard to find the relevant features in Micro.blog to do this, but they’re there. Here’s how you can use them to make such a category on your Micro.blog blog.
Creating The Category

The first thing you need to do is create the category:
- Click “Categories” in the sidebar. You should be presented with a list of categories on your blog. You can add a new one by clicking “New Category”.
- Give your category a name. I chose the name “Long Form Posts” but it can be anything you want: Titled Posts, Essays, etc.
- Click “Create Category”.
The new category should show up in the list of categories on Micro.blog. You should also see the category appear on your blog. If you were to go to the archive page, the list of categories should appear, along with all the posts on your blog. Clicking a category will show only the posts that have been filed under it.
The new category should also have an RSS feed, which you can use in any standard feed reader. You can get to it by clicking the category on your blog and adding feed.xml to the URL. For example: the URL https://lmika.org/categories/long-form-posts/feed.xml is the RSS feed of my Long Form Posts category.
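If you wanted to build these feed URLs programmatically, here’s a small Go sketch. It assumes the slug is the lower-cased, hyphenated category name, which matches how it works on my blog but may not hold for every category name:

```go
package main

import (
	"fmt"
	"strings"
)

// feedURL builds the RSS feed URL for a Micro.blog category page.
// Assumes categories live under /categories/ and that the slug is the
// lower-cased, hyphenated category name.
func feedURL(base, category string) string {
	slug := strings.ReplaceAll(strings.ToLower(category), " ", "-")
	return fmt.Sprintf("%s/categories/%s/feed.xml", strings.TrimSuffix(base, "/"), slug)
}

func main() {
	fmt.Println(feedURL("https://lmika.org", "Long Form Posts"))
	// prints https://lmika.org/categories/long-form-posts/feed.xml
}
```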
Creating The Filter

The Long Form Posts category should exist now, but you may notice that it’s empty. At this point you would need to manually add the category to each post you want in it, by selecting the checkbox in the Edit Post window. If you want Micro.blog to do this automatically for each post that has a title, you will need to create a Filter:
- Within the “Categories” section, click “Edit Filters”, then click “New Filter”.
- For a filter that will select all blog posts with a title, choose “Only long posts with a title” in the “Post length” picker.
- Select the category you want these posts to have, then click “Add Filter”.
Now any post with a title will automatically be given the Long Form Post category. You can try this out by writing a post, giving a title, then saving it as a draft. When you go back to edit the post, the Long Form Post category checkbox should be checked.
Finally, to apply the new filter for any existing post, click “Run Filter”.
-
I haven’t managed to get automatic category selection working for blogging apps like MarsEdit. There might be a way to do this, but I haven’t really looked. ↩︎
I, Developer
There was a bit of a discussion on Mastodon and various blogs about what best to call someone who writes code for fun or profit. I’ll spare you the prologue about how this discussion has been going on since the start of the profession itself: I’m sure you’ve heard it all before. But hearing one of these terms today got me thinking about it, and I thought I’d say what my preferences are.
As someone who writes software for my job and hobby, I personally prefer the term “developer”. I usually call myself a “developer” or “dev” when I’m around a group of my peers. When I’m with lay people, I usually say that I’m a “software developer”, since people can associate a “developer” with someone who builds houses (this has happened to me once). I don’t mind the terms “coder” or “programmer” either, but I don’t feel they fully describe what I actually do, given that about half my job involves things other than code (as much as I dislike that fact).
Officially my role is “engineer” but I don’t really care for the term. The reasons are the same as those of anyone else who’s got a problem with it, namely that we’re not bound to the same level of accreditation that “real” engineers are (civil, electrical, etc.). But I think my dislike also has to do with the fact that the job of a “software engineer” usually involves more than just the “engineering” side of things. There’s design work, planning work, operations, etc. that feel beyond the scope of what could simply be called engineering. I guess one could say that an engineer is required to consider maintenance when they’re designing a structure or electrical circuit, but I feel like us software developers are more involved in the day-to-day operations of things than our “real” engineer counterparts. I could be completely wrong here though: I don’t know a thing about what “real” engineers actually get up to, so I probably can’t say.
One term I’ve recently started hearing more is “individual contributor”, and I must say I don’t care for it. It feels so abstract and wishy-washy; so divorced from the actual act of working with the code which, arguably, is a pretty important part of delivering value for a project. I don’t know how this term got so widespread. Maybe it’s a way of grouping all the activities involved in software development into one noun-phrase. I guess if I’m being charitable, I can see it that way. After all, the existing terms don’t really work as well for doing this (I’m guessing that’s why the question was posted on Mastodon in the first place). And yet, I still get this feeling that the term exists to deliberately downplay the value these people deliver, as if we’re interchangeable cogs. It might just be where I see this term used, so I could be being completely unfair. But that’s how I feel, and it’s for that reason I don’t like using it.
So that’s pretty much it. All in all I’m generally okay with being called whatever you want to call me, and I won’t call you out if you called me something else (except “Java monkey”, especially since I haven’t worked in Java for a few years now). But if I had the choice: call me a “dev”, “developer” or “programmer”; try not to call me an “engineer”; and please don’t call me an “individual contributor”.
And please don’t call me at home.
Pro tip: don’t have a sprint planning meeting with the hiccups. I tried it today and it just didn’t work out.
Out walking my usual Sunday arvo route. Perfect conditions for it as well.

Mahjong Score Card Device
I’ve recently started playing Mahjong with my family. We learnt how to play a couple of years ago and have grown to like it a great deal. I’m not sure if you’re familiar with how the game is scored, and while it’s not complicated, there’s a lot of looking things up, which can make scoring a little tedious. So my dad put together a Mahjong scoring tool in Excel. You enter what each player got at the end of a round (2 exposed pungs of twos, 1 hidden kong of eights, and a pair of dragons, for example) and it will determine the scores of the round and add them to the running totals. It also tracks the winds of the players and the prevailing winds, which are mechanics that can affect how much a player can get during a round. The spreadsheet works quite well, but it does mean we need to keep a bulky laptop around whenever we play.
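To give a flavour of the sort of arithmetic the spreadsheet does, here’s a minimal Go sketch. The set kinds and point values below are illustrative only — scoring tables differ between rule sets and also depend on the tiles involved — and all the names are my own:

```go
package main

import "fmt"

// SetKind identifies the kind of set a player declared at the end of a round.
type SetKind int

const (
	ExposedPung SetKind = iota
	HiddenPung
	ExposedKong
	HiddenKong
	DragonPair
)

// Illustrative base points per set. Real tables vary, and also depend on
// whether the tiles are simples, terminals, or honours.
var points = map[SetKind]int{
	ExposedPung: 2,
	HiddenPung:  4,
	ExposedKong: 8,
	HiddenKong:  16,
	DragonPair:  2,
}

// roundScore totals the base points for the sets a player holds.
// A full implementation would also apply doubles for winds, dragons, etc.
func roundScore(sets []SetKind) int {
	total := 0
	for _, s := range sets {
		total += points[s]
	}
	return total
}

func main() {
	// e.g. 2 exposed pungs, 1 hidden kong, and a pair of dragons
	fmt.Println(roundScore([]SetKind{ExposedPung, ExposedPung, HiddenKong, DragonPair}))
	// prints 22 (2 + 2 + 16 + 2) under the made-up values above
}
```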
I wondered if the way we calculated and tracked the scores could be improved. I could do something like build a web-style scorecard, like I did for Finska, but it gets a little boring doing the same stuff you know how to do pretty well. No, if I wanted to do this, I wanted to push myself a little.
I was contemplating building something with an Arduino, maybe with a keypad and an LCD display mounted in a case of some sort. I’ve played around with LCD displays using an Arduino before, so that didn’t seem too hard to do. But I was concerned about how well I could achieve the fit and finish this would need to be usable. Ideally this would be something I could give to others to use, not something that would just be for me (where’s the fun in that?). Plus, I didn’t have the skills or the equipment to mount it nicely in an enclosed case that is somewhat portable. I started drawing up some designs for it, but it felt like something I wouldn’t actually attempt.

One day I was perusing the web when I came across the SMART Response XE. From what I gathered, it’s a device that was built for classrooms around the early 2010s. Thanks to the smartphone, it didn’t become super successful. But hobbyists have managed to get their hands on them and reprogram them to do their own thing. It’s battery powered, has a full QWERTY keyboard and LCD display, is well built since it was designed to be used by children at school, and feels great in the hand. And since it has an Atmel microcontroller, it can be reprogrammed using the Arduino toolchain. Such a device would be perfect for this sort of project.
I bought a couple, plus a small development adapter, and set about trying to build it. I’ll write about how I go about doing it here. As the whole “work journal” implies, this won’t be a nice consistent arc from the idea to the finished project. I’m still very much a novice when it comes to electronics, and there will be setbacks, false starts, and probably long periods where I do nothing. So strap in for a bit of bumping around in the dark.


First Connection Attempt
The way to reprogram the board is to open up the back and slot some pins through the holes just above the battery compartment. From my understanding, these holes expose contact pads on the actual device board that are essentially just an ISP programming interface. In theory, if you had an ISP programmer and a 6 pin adapter, you should be able to reprogram the board.

The first attempt at connecting to the SMART Response XE was not as successful as I hoped. For one thing, the SRXE Development Adapter was unable to sit nicely within the board. This is not a huge issue in and of itself, but it did mean that in order to get any contact with the board, I would have to push down on the device with a fair bit of force. And those pogo pins are very fragile. I think I actually broke the tip of one of them while trying to use an elastic band and tape to keep the board pressed onto the adapter. I hope it does not render the board useless.
The other issue I had is that the arrangement of the 6 pin header on the developer board is incompatible with the pins of the ISP programmer itself. The pins on the ISP programmer are arranged to plug directly into an Arduino, but on the development board, the pins are the wrong way round. The leftmost pin on both the male header on the board and the female socket from the ISP programmer is Vcc, when in order to work, one of them would need to be a mirror image of the other. This means that there’s no way for the two to connect such that they line up. If the pins on the SMART Response XE were on the back side of the board, I would be able to plug them in directly.
I eventually got some jumper wires to plug the ISP programmer into the correct pins. Pushing down on the board I saw the LEDs on the adapter light up, indicating activity. But when I tried to verify the connection using avrdude, I got no response:
$ avrdude -c usbASP -p m128rfa1
avrdude: initialization failed, rc=-1
Double check the connection and try again, or use -F to override
this check.
avrdude done. Thank you.
So something was still wrong with the connection. It might have been that I’d damaged one of the pins on the dev board while I was trying to push it down. I’m actually a little unhappy with how difficult it is to use the adapter to connect to the device, and I wondered if I could build one of my own.
Device Adapter Mk. 1
I set about trying just that. I wanted to be able to sit the device on top of the adapter such that the contact points on the board itself would rest on the pins. I was hoping to make the pins slightly longer than the height of the device so that, when I rested it on the adapter, the device would be balanced slightly on top of the pins and the board would make contact through gravity alone.
This meant that the pins had to be quite long and reasonably sturdy. Jaycar did not have any pogo pins of the length I needed (or of any length) so I wondered if the legs from an LED would work1. I bought a pack of them, plus a prototyping board, and set about building an adapter for myself. Here’s the design I came up with:

And here is the finished result:


And it’s a dud! The position of the header gets in the way of where the device lines up to rest on the pins. But by far the biggest issue is the pins themselves. I placed them in the holes and rested the circuit board on top with a small spacer while I soldered them, the idea being that after removing the spacer, the pins would sit higher than the device. I was hoping to then cut them down to size a little, but I cut them unevenly, which meant that some of the pins wouldn’t make contact. When I rest the device on the board I get no rocking at all, which makes me suspect that none of the pins are making contact. I’m still not happy with the LED legs either. They don’t seem strong enough to take the weight of the device without bending.
The best thing about it was the soldering, and that’s not great either. I’m not sure I’ll even try this adapter to see if it works.
Next Steps
Before I create a new adapter, I want to try to get avrdude talking with the board first. I think what I’ll try next is resting some nails in the holes and attaching them to alligator clips hooked up to the ISP programmer. If this works, I’ll see about building another board using the nails. I won’t use the header again as I think it will just get in the way. It might be enough to simply solder some hookup wires directly onto the prototyping board.
Anyway, more to come on this front.
Update 29 Oct 2023: I haven’t revisited this project since this post.
-
I couldn’t find any decent bridging wire at Jaycar either so I used reclaimed wire from a CAT-5 cable. I stripped the insulation completely, twirled the wires, and soldered them onto the contacts. It worked really well. ↩︎
Tried doing some electronics this morning. Not much to show for it apart from some reading, designing, and driving to Jaycar to grab some components. This really is an activity where I’m still quite a novice.
OS/2 Dreaming
I’ve been thinking of OS/2 recently. Yes, the ill-fated OS that IBM built with Microsoft. Re-reading the Ars Technica write-up of it and listening to the Flashback episode again fills me with nostalgia.
Truth is that a lot of my early experiences with computing began with OS/2. Dad was working somewhere that used it, and I had a chance to play with it whenever he brought his laptop home. We had a plain old DOS home computer as well, but it wasn’t as powerful or exciting as dad’s laptop. It also helped that the laptop had a colour screen.
There were a lot of firsts associated with this OS. It was my first time using a graphical OS. I could do some pretty basic things in the DOS command line at the time, but being able to manipulate things on screen with a mouse was a much better experience. The OS had some pretty nice utilities that I grew to love, like a music app that let you compose monophonic songs by moving sliders that adjusted the pitch and duration of each note. It was my programming environment as well (QBasic, but it was still running on OS/2).
It was my first experience “shutting down” a computer. This was quite novel, and I remember wondering why it was necessary. The DOS computer you could just turn off, why couldn’t you do that here? I came to accept how important shutting down the computer first was, for no reason other than that not doing this would mean the next boot-up would run a file system check that would take several minutes. For someone who wanted to get to my DOS games as quickly as possible, learning this was important.
It was even the first time I saw someone surfing the web. One night I was watching dad using WebExplorer. I asked him what he was doing and he said that he was “using the web”. I had no idea what that meant, but I wondered if it had anything to do with a family friend who had the surname Webb. I still remember the loading animation of that browser: cubes flying by on the screen of the computer icon in the toolbar while the disk activity light flashed green.
When Windows 95 came around, we set it up as a dual boot system with OS/2. I remember being reasonably unimpressed with Windows 95 at the time, and was more than happy to continue using OS/2, at least while we still had v2 installed. But eventually we got version 3 (OS/2 Warp) and it was around this time my love for it started to wane. DOS games were always a little incompatible on OS/21 but they eventually stopped running altogether, and I found myself booting into Windows a lot more often to play them. Eventually the day came when Dad brought home a laptop that only had Windows 95, and my OS/2 experience came to an end.
Anyway, it would be nice to play around with OS/2 v2 again. Apparently it’s notoriously difficult to virtualise, so I don’t know if that’s even possible. I found an emulator of OS/2 v1 that runs in the browser, and the GUIdebook has some fantastic screenshots of v2. I guess that will have to do for the moment to bring back the memories.
-
A great example of this was Commander Keen. There was something wrong in the logic that would keep text boxes on screen long enough to read them. They would fly by when running the game in OS/2 and I had to get good at hitting the Pause key if I wanted to read it. ↩︎
Follow-up to my last post: you can turn off committing after conflict resolution in GoLand by clicking “Modify options” in the Merge dialog and selecting “Do not commit the merge result.” This should display the --no-commit flag.

Ok, I admit it’s pretty close to how it works on the command line; although not when you have conflicts, and the only time I merge in GoLand instead of the command line is when I need to resolve conflicts (I prefer the diff/merge tools there). But to be fair to GoLand, it doesn’t know about my merging preferences.
I don’t understand code editors that think they know when to commit changes “better” than I do.
GoLand is guilty of this. I just finished resolving merge conflicts, but instead of letting me make sure the resolved conflicts actually build and pass the tests, GoLand committed them as soon as I’d finished picking the hunks I wanted to use. This means any fixes I need to make cannot go into the merge commit. That commit is now unstable.
One could argue that this is the proper way to do things, that the merge commit should only contain conflict resolutions and nothing else. I don’t agree. I’m bound to make mistakes while resolving conflicts, and I want to make sure what I’m committing is actually working code. It probably doesn’t matter in the grand scheme of things, and it wouldn’t be the first time I’ve made a commit with dodgy code. But I’d like to reduce the number of times that happens if I possibly can.
Besides, conflicts in Go imports or module versions can usually be fixed by running tools. I tend to resolve these conflicts quickly, expecting to get errors and duplicates that I can fix with a couple of commands (they usually need to be formatted anyway). I can’t do this if the code editor decides when to actually commit the changes.
Stage the resolved conflicts all you like, but let me actually commit them when I say so.
This hollow is hot property at the moment. It was occupied by a couple of kookaburra chicks about a month ago. They’ve since flown the nest. Now a pair of rainbow lorikeets are interested in it.

It kills me that Pinboard doesn’t automatically set the title of a bookmark if you add one without it. It’s so easy to fetch the page and get the title from there. It could be a setting if privacy or costs are a concern. But really, having this feature would be a massive usability improvement.
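To show how little is involved, here’s a hedged Go sketch that pulls the title out of a fetched page. A proper implementation would use an HTML parser (e.g. golang.org/x/net/html) rather than a regex, and the function names are my own; this is just the shape of the idea:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// titlePattern matches the contents of the first <title> element.
// (?i) makes it case-insensitive; (?s) lets . span newlines.
var titlePattern = regexp.MustCompile(`(?is)<title[^>]*>(.*?)</title>`)

// pageTitle extracts the title from raw HTML, or returns "" if none is found.
// In a real bookmarking service, the HTML would come from fetching the
// bookmarked URL with http.Get.
func pageTitle(html string) string {
	m := titlePattern.FindStringSubmatch(html)
	if m == nil {
		return ""
	}
	return strings.TrimSpace(m[1])
}

func main() {
	doc := `<html><head><title>  The Magic of Small Databases </title></head></html>`
	fmt.Println(pageTitle(doc)) // prints "The Magic of Small Databases"
}
```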
Premature abstraction is the root of unnecessary rework.
Might just have to accept that the way I manage links to posts of interest is not perfect. There might be some room for improvement (something that polls Pinboard occasionally and posts any new links from there somewhere) but a wholesale replacement is probably not a good idea.
A reminder to myself that the only way to get a blog post out there that you’re happy with is to get a blog post out there that you’re not happy with. Classic case of learning by shipping. And I think you actually need to see it all the way through to publishing it. I’m not convinced that you’ll get the full benefit if you just leave the post half-finished in your drafts folder.
A Rambling Thought About The App-Only Social Networks
Re-reading this post got me wondering how much traction Hive and Post are getting from the Twitter exodus. I am aware that Hive had to deal with a vulnerability and had to shut down while they fixed it. I don’t know much about Post apart from it being another VC-backed social network. But unless you’re a gamer attracted to Hive, and… 🤷1 heading to Post, is there anyone else using them?
I’m wondering how much traction these app-only services will actually be able to get in this day and age. One huge advantage that Mastodon has is that it’s a web service first, and doesn’t require an app to use. This makes sharing things outside the network quite easy. Don’t have the app? Just open this link up in your web browser.
If Hive and Post cannot do this, I don’t see how you can get people unaware of or uninterested in the service to sign up. You might be able to share a link which will prompt people to download the app and sign in. But would they actually do this? I feel that we’re beyond the days of just trying out new services unless you know for sure you’ll get value from them, and you probably won’t know this unless you can see what’s being shared without having the app.
While we’re on the subject: my curiosity got the better of me a few minutes ago, so I took a quick look at Post.news to see what it’s like. It’s backed by Andreessen Horowitz, which means that I was expecting to see a few things that I’d find disagreeable. I was not disappointed.
There was a website, styled by someone with the same level of design skills I could muster (that’s not a compliment). And it wasn’t just a sign-up page either: there was a “discover” feed of sorts. Lots of US news, politics, and screenshots of posts from other social platforms (and not just the major ones). I don’t know if or how they curate the posts that appear there, but the ones I saw did not entice me to sign up (not that I have any interest in signing up anyway).
I hope a16z feels like they got their money’s worth for this. Not sure that I would if I were backing them.
-
Not sure who would sign up to Post other than those that know/like the VC backers themselves. ↩︎
It’s been a rare (but not unprecedented) three coffee and one caffeinated tea kind of morning today. 😴
The cafe I go to has started playing music where I sit. I find music at a cafe distracting, mainly because I find myself paying attention to the music instead of what I was doing. Fortunately it’s music I don’t find appealing. But even so, bad music is still not no music.
I never considered myself someone who believed that Go should have had generics from the start. I appreciated that the designers added them to the language, but I thought I could be just as effective writing Go code had they chosen not to.
I don’t believe that anymore.
I’ve found generics in Go to be a major improvement to the language. It means that I can now use higher order functions that operate on collections, like “map” or “filter”, in a more natural way. I got used to these functions while working in languages that have them (Python, Ruby, JavaScript, Java 1.8) and they’ve been so useful that I wished Go had them as well.
There was no technical reason for Go not to have these functions, but they would have had to use the interface{} type to be useful, meaning that there was no type safety built in and your code would be littered with type assertions. This was such a turn-off that most of the time I didn’t bother considering higher order functions, and just wrote a for loop by hand to convert types in one slice to types in another. Trivial to write, but just so boring.
Now with generics, these higher order functions can be made type safe, and there is no longer a need for type assertions to use them. This makes them viable once more, and I’ve found myself using them a fair bit recently. Not having to write yet another for loop has made coding fun again.
The Magic of Small Databases
I kinda want this but for internal databases. There’ve been several times at work where I’ve had to collect semi-structured information in a spreadsheet or a wiki page comprised solely of tables. There’s always some loosely defined convention around how to represent it (use this colour to indicate this particular state) or when it should be changed (change this label to “In Review” until these people have seen it and then change it to “Confirmed”).
One example is how we manage releases: which services we’re pushing out and what commits they include, which environments they’ve been deployed to or tested in, whether the other teams or the person on-call are aware of them and have signed off, etc. This is all managed in wiki pages that follow a standard layout, and it’s… okay. It’s a convention that grew up over time as we were working out our release procedure, and it made sense keeping it relatively informal while we were trying to find our groove. But that groove has been formed now, and it would be nice to formalise the process. Doing so, though, means there’s a lot of manual labour in keeping these release documents correct and up to date. And since it’s all in a centrally managed wiki, it’s difficult to automate away things that are managed by other systems, like our code repositories.
A tool that could be hosted on-prem, which would allow anyone to spin up a new document-based database (either for the team or themselves), define a very loose schema and some views, and put together some very simple workflows and code macros, would be great. The trick is walking the line that separates something that is basically a hosted version of Excel from something that requires so much setup work that no-one will bother with it. I’d imagine that’s a tricky balancing act to pull off.
Saw the following quote while reading this article:
You could fill a book with all I know, but with all I don’t know, you could fill a library.
β Unknown
Quite profound.