Web Search Works With Blogs Too
Here’s one more reason to write (or syndicate) to your blog instead of posting directly to social media: you can use web search engines to find what you need.
I hear a lot of people complain about the crappy search in Twitter or the lack of search in Mastodon, but this won’t be a problem if you post to your site and let public search engines crawl it. They’re incentivised to make sure their search is good, so you’re more likely to get better results more quickly.
Honestly, it works. I used it today to find a post from a fellow Micro.blogger that I wanted to reread. A `site:<url>` query with a few keywords. Found it in 5 seconds. I can’t imagine how long it would’ve taken if I had to track it down in Mastodon.
Obviously this won’t work for posts from others, unless they too write to their blog. But it’s probably still worth doing for others who enjoy your work. And who knows? It might be useful to you one day. I know it has been for me.
Some acorns fell to the ground while I was passing under an oak tree this afternoon. I looked up and saw a few female king parrots perched there, feeding on the acorns.


It was difficult making them out: their green plumage provided good camouflage.
TIL that long-clicking some toolbar icons in desktop Safari will bring up a menu. I would’ve thought a right-mouse click would suffice for that, but I guess touchpads and traditional Mac mice only have a single button. I just wish I could make the long-click delay a bit longer.
Setting up Yarn on a new machine to manage some JS packages. As usual, everything about how the tool works is completely different from the last time I installed it. So, once again, what was expected to take 5 minutes is now taking 30 as I learn the new way of doing things.
Close encounter with a kookaburra.

Taken a few days ago while on a walk.
I don’t understand people — adults who are by themselves — who go to a cafe just to watch something on their phone with the loudspeaker on. A bit discourteous in my opinion. Please use headphones if you’ve got them. Otherwise, watch it at home. That way you won’t be distracting others.
Reading something about workplace memos this morning got me thinking: are workplace memos still a thing? As in printed memos pinned onto bulletin boards? I’ve never worked anywhere that did this. I’d think most places would use email for this now, especially after 2020.
Looking at the core gRPC status codes this morning. There are 17 status codes in all, and most of them — like `NOT_FOUND` or `PERMISSION_DENIED` — are what you expect. I can see why those codes are listed there. They’re pretty common error conditions seen in most systems.
But the code `DATA_LOSS` appears there as well:
> `DATA_LOSS` (15): Unrecoverable data loss or corruption.
I’m a little curious as to why that error case was considered worthy enough to be given a code here. Hopefully not because it was frequent enough to warrant one.
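For what it’s worth, the numeric values come straight from the spec (`NOT_FOUND` is 5, `PERMISSION_DENIED` is 7, `DATA_LOSS` is 15, and the 17 codes run from 0 to 16). Here’s a sketch of the sort of thing a client might do with them; the `Retryable` helper and which codes it groups as retryable are my own judgment call, not anything from the gRPC libraries:

```go
package main

import "fmt"

// A few of the 17 numeric status codes (0–16) from the core gRPC spec.
const (
	CodeNotFound         = 5
	CodePermissionDenied = 7
	CodeDataLoss         = 15
)

// Retryable reports whether a call that failed with the given code is
// generally worth retrying. DATA_LOSS is explicitly unrecoverable, so
// it never qualifies; that's presumably why it earns its own code.
func Retryable(code int) bool {
	switch code {
	case 4, 8, 14: // DEADLINE_EXCEEDED, RESOURCE_EXHAUSTED, UNAVAILABLE
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(Retryable(CodeDataLoss)) // false: alert a human instead
}
```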
Looking through my drafts last night to see if I could use one of them for my daily post. Found one that I wish I’d actually published at the time. Not because it was insightful or anything. It was just a record of what I did that day. But that’s sort of what this blog’s about in the end (as I need to keep reminding myself).
P.S. I ended up publishing it with the original timestamp using Micro.blog’s scheduling feature. It’s super useful for publishing things with date-stamps in the past as well as the future. I use it all the time.
Spent some time closing off the Dynamo-Browse shortlist. I think I’ve got most of the big ticket items addressed. Here’s a brief update on each one:
Fix the activity indicator that sometimes doesn’t clear when a long-running task is finished.
How long-running tasks are dealt with has been completely overhauled. The previous implementation had many opportunities for race conditions, which was probably why the activity indicator kept showing when nothing was happening. I rewrote this using a dedicated goroutine for handling these tasks, and the event bus for sending events to the other areas of the app, including the UI layer. Updates and status changes are handled with mutexes and channels, and it just feels like better code as well.
It will need some further testing, especially in real world use against a real DynamoDB database. We’ll see if this bug rears its unpleasant head once more.
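To give a feel for the shape of the change, here’s a minimal sketch of the pattern — one goroutine owns the task state and publishes events for the UI — with names of my own invention (this isn’t Dynamo-Browse’s actual code):

```go
package main

import "fmt"

// Event is what gets published on the event bus; the UI layer watches
// these to show or clear the activity indicator.
type Event struct {
	Task string
	Done bool
}

// RunTasks runs each task in a single dedicated goroutine and reports
// progress over the events channel. Only this goroutine touches the
// running/finished state, so there's no racy shared flag to get stuck.
func RunTasks(tasks map[string]func(), events chan<- Event) {
	go func() {
		defer close(events) // guarantees the indicator is always cleared
		for name, task := range tasks {
			events <- Event{Task: name, Done: false}
			task()
			events <- Event{Task: name, Done: true}
		}
	}()
}

func main() {
	events := make(chan Event)
	RunTasks(map[string]func(){"scan-table": func() {}}, events)
	for ev := range events {
		fmt.Println(ev.Task, ev.Done) // the UI would update here
	}
}
```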
Fix a bug in which executing a query expression with just the sort key does nothing. I suspect this has something to do with the query planner somehow getting confused if the sort key is used but the partition key is not.
Turns out that this was actually a problem with the “has prefix” operator. It was incorrectly determining that an expression of the form `sort_key ^= "string"` with no partition key could be executed as a query instead of a scan. Adding a check to see if the partition key also existed in the expression fixed the problem.
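The fix, boiled down to a sketch with hypothetical names (DynamoDB queries always need the partition key, so a lone sort-key prefix condition has to fall back to a scan):

```go
package main

import "fmt"

// Expr is a hypothetical, simplified view of a query expression: which
// key conditions it binds. Real expressions carry operators too; here
// a "has prefix" on the sort key just counts as a sort-key bind.
type Expr struct {
	BindsPartitionKey bool
	BindsSortKey      bool
}

// CanRunAsQuery is the fixed check: a DynamoDB Query always needs an
// equality on the partition key. The bug was accepting a lone
// sort_key ^= "..." condition here and running it as a query.
func CanRunAsQuery(e Expr) bool {
	return e.BindsPartitionKey // sort key alone forces a scan
}

func main() {
	fmt.Println(CanRunAsQuery(Expr{BindsSortKey: true}))                          // false: scan
	fmt.Println(CanRunAsQuery(Expr{BindsPartitionKey: true, BindsSortKey: true})) // true: query
}
```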
Also made a number of other changes to the query expression. Added the ability to use indexed references, like `this[1]` or `that["thing"]`. This has been a long time coming so it’s good to see it implemented. Unfortunately, this only works reliably when a single level is used, so `this[1][2]` will result in an error. The cause of this is a bug in the Go SDK I’m using to produce the query expressions that are run against the database. If this becomes a problem I’ll look at it again.
I also realised that `true` and `false` were not treated as boolean literals, so I fixed that as well.
Finally, the query planner now considers GSIs when it’s working out how to run a query expression. If the expression can be a query over a GSI, it will be executed as one. Given the types of queries I need to run, I’ll be finding this feature useful.
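The planner idea sketched in Go, again with invented names rather than Dynamo-Browse’s actual code — pick the first index whose partition key the expression binds, preferring the main table, otherwise fall back to a scan:

```go
package main

import "fmt"

// Index is a key schema the planner can target: the table itself or a GSI.
type Index struct {
	Name         string
	PartitionKey string
}

// ChooseIndex returns the first index whose partition key the expression
// binds with an equality, checking the main table before any GSIs. An
// empty result means no query is possible and the planner must scan.
func ChooseIndex(table Index, gsis []Index, bound map[string]bool) string {
	for _, idx := range append([]Index{table}, gsis...) {
		if bound[idx.PartitionKey] {
			return idx.Name
		}
	}
	return "" // no usable index: fall back to a scan
}

func main() {
	tbl := Index{Name: "main", PartitionKey: "pk"}
	gsis := []Index{{Name: "by-email", PartitionKey: "email"}}
	fmt.Println(ChooseIndex(tbl, gsis, map[string]bool{"email": true})) // by-email
	fmt.Println(ChooseIndex(tbl, gsis, map[string]bool{"age": true}))   // "": scan
}
```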
Fix a bug where `set default-limits` returns a bad value.
This was a pretty simple string conversion bug.
Add a way to describe the table, i.e. show keys, indices, etc. This should also be made available to scripts. Add a way to “goto” a particular row, that is select rows just by entering the value of the partition and optionally the sort key.
These I did not do. The reason is that they’ll make good candidates for scripts, and it would be a good test to see if they can be written as one. I think the “goto” feature would be easy enough. I added the ability to get information about the current table in the script, and also for scripts to add new key bindings, so I don’t foresee any issues here.
The table description would be trickier. There’s currently no real way to display a large block of text (except the status bar, but even there it’s a little awkward). So a full featured description might be difficult. But the information is there, at least to a degree, so maybe something showing the basics would work.
Anyway, the plan now is to use this version for a while to test it out. Then cut a release and update the documentation. That’s a large enough task in and of itself, but I’d really like to get this finished so I can move onto something else.
With all the designs I’ve made for systems using the Stripe API that blew up in my face because of some undocumented edge-case, you would’ve thought I’d have internalised this by now and remembered that the only real strategy for knowing a design is viable is to actually test it.
But no.
And so here I am once again, trying to contort my design so that it’ll work with the Stripe API while trying to avoid making any dramatic changes. 🤦
So, for next time: build a prototype, and use it to validate your hypothesis. Honestly, just do it. Spending those few hours of work now will save you many hours down the line, when you hit some edge-case and have to go back to the drawing board.
Just frickin’ prototype!
It’s Tuesday, I don’t shut down my laptop during the week, and I only just now launched a terminal and IDE. Guess how much programming I did yesterday.
Whenever I read something about tech or music, a voice in my head starts saying things like I can do that, I can’t do that, I could’ve done that, I should’ve done that, I did do that, etc.
I wish that voice would just shut up. It’s debilitating hearing this when you’re just trying to learn something or appreciate someone else’s work.
Less Consuming, More Creating
Mike Crittenden posted a good quote from a random Hacker News commenter:
Less consuming, more creating.
Doesn’t matter what it is, doesn’t matter if it’s bad.
This quote actually sums up this blog quite nicely. The first line explains why it came to exist. The second line describes how it continues to exist.
Happy 1,000th post.
Ballarat Beer Festival 2023
My friends and I returned to Ballarat today for the Beer Festival. It was another stunning day for it: sunny, mild, not too hot. Much like last year I took an earlier train to walk around Ballarat a little. Not much to report here: very little has changed. But I never see Ballarat so it’s good to walk around a little. My friends were on the train behind mine and I caught up with them when I boarded at Ballarat. We then made our way to Wendouree park for the festival.
We were given a plastic pot when we entered, and brewers typically offered either a tasting size, a half pot, or a full pot of a particular drink. Half pots are usually the best value for money: you get a decent amount to enjoy while still pacing yourself, so you don’t reach your limit before you’ve tasted everything you wanted to. I made that mistake last year: buying too many full-sized pot servings. I went half pots this year.
I also made the mistake of not making notes of the beers I tried last year. I made sure to record them this year. Given the occasion, I decided to go for drinks that I wouldn’t normally go for. In other words: lots of sours today. I’m not a sour drinker and I think I eventually reached my enjoyment limit of them today. But that’s fine: I guess it’s good finding my limits this way.
Here’s a quick rundown of the drinks I tried:
- Fox Friday “Feeling Peachy” Fruited Sour: This was less sour and more on the bitter sweet side. The peach flavour came through strong, which took the edge off and made it quite refreshing. Made for a nice starter.
- Dollar Bill “Australian Wild Ale”: This was a regular ale and a little more bitter than I was expecting (although that could’ve been because of the peach sour). It felt like a heavy sort of ale. Maybe a little too heavy for me. Not sure it’ll be something I go for again.
- Mountain Culture “MS DOS” West Coast IPA: Nothing too remarkable about this. Just an IPA. But a decent IPA. Will definitely have again.
- Wild Life Citrus Sour: I can’t quite remember the makeup of this sour. I think it was lemon and blood-orange. Definitely something and blood-orange, as the blood-orange taste really came through. I didn’t think much of this, but that’s probably because I’d reached my limit for sours at this point. This brewery was also advertising a “pineapple sour,” which would’ve been amazing, but sadly they weren’t pouring it when we arrived. Might have affected how I thought about the blood-orange drink: feeling that it was playing second fiddle to the pineapple one.
- Prancing Pony “10 Year” Beer: This was one my friend got, but he offered me a taste of it. It was a pilsner IPA but more on the pilsner side. It’s probably one I could see myself drinking if it wasn’t for its alcohol content. It was quite high: 7.5%, or 3 standard drinks in a 500 ml can. That’s just a little too high for me. The can was quite something though.
- Molly Rose “Strawberry Sublime”: This was a low-alcohol strawberry and lime gose. A bit of a mix of sweet and sour. It was nice, but it really wasn’t doing it for me, and frankly I’m not really sure why I went for it at this point in the day.
The event was back in Wendouree park, and was pretty much like last year. Which is good: they did a good job last year.
But there were a few small changes this year. For one, more tables were placed under the trees, which was a good move. There were more tables in the sun last year, which never had anyone on them for long as people preferred the ones in the shade. I think there were fewer brewers this year too. It might have been because of the layout change, but it felt a little smaller this year, and some brewers from last year didn’t make an appearance.
Also, no finska this year.
But all in all, it was a good day. Good excuse to get on a regional train out to the country for a change.
Very hot today. Mercury hovering just under 40°C, with every chance that it may get hotter than that. But you know what? I’d rather it be hot today than tomorrow. Why? You’ll find out tomorrow. In the meantime… cheers!

🔗 From Bing To Sydney
Hmm, it’s hard not feeling a little unsettled after reading this Stratechery post. One thing’s for sure, I’m a bit more doubtful of the post I wrote two days ago.
Anyone making a software tool that converts times between time-zones, please let me enter the time-zone as an offset from UTC.
Something that only accepts time-zone abbreviations or city names is problematic. For one thing, zone abbreviations are location specific. Dealing with technology mainly developed in the US, I’ve got into the habit of writing `AEST` when I want Australian Eastern Standard Time. But I wouldn’t bat an eyelid if I saw one of my compatriots use `EST` to mean the same thing. Is your tool smart enough to recognise this, or will it just default to US Eastern Standard Time?
Now granted, that’s one that’s got a workaround. But let’s say I want to know the time on Lord Howe Island. I’d probably use `LHST` for that, but does that mean “Lord Howe Standard Time” (UTC+10:30) or “Lord Howe Summer Time” (UTC+11:00)? Both of those abbreviate to `LHST`, and your tool will need to be smart enough to know the particular date and whether or not summer time is in effect. If you think you can handle that edge case, take a look at the list of time-zone abbreviations on Wikipedia to see how you’d be able to handle `BST`.
What about city names? Yeah, city names might work, but you have to make sure your tool knows of every city out there. I’ve been trying to use Numi to convert from Melbourne time to Apia time. The docs seem to suggest that typing in `13:00 Melbourne time in Apia` should work. But as far as I can tell, it doesn’t seem to recognise this as a time-zone conversion expression. It seems to give up and just give me `1:00 pm` as the answer, possibly because it doesn’t know about Apia¹. I’m wondering if it even knows about Melbourne (I don’t know what each of the colours mean).

And let’s not forget the issue of multiple cities having the same name. Melbourne could refer to Melbourne, Australia or Melbourne, FL, United States. I’d like to see how your tool handles Portland (keep in mind there’s a Portland in Australia as well).
So, please. Make it easy on everyone, including yourself, and just recognise `UTC±xx:xx`. It would be so much easier and less ambiguous if I could just type `13:00 UTC+11:00 to UTC+13:00`. Or whatever, pick a syntax that works for you.
And if you do have support for this, please make sure you document it somewhere.
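To show how little machinery the offset form needs, here’s a sketch using Go’s standard library `time.FixedZone`; the helper function and the sample times are mine:

```go
package main

import (
	"fmt"
	"time"
)

// ConvertOffset converts a wall-clock time between two fixed UTC offsets
// (given in seconds east of UTC). No abbreviation tables, no city
// database, no daylight-saving guesswork required.
func ConvertOffset(hour, min, fromSecs, toSecs int) string {
	from := time.FixedZone("", fromSecs)
	to := time.FixedZone("", toSecs)
	t := time.Date(2023, 2, 20, hour, min, 0, 0, from)
	return t.In(to).Format("15:04")
}

func main() {
	// 13:00 at UTC+11:00 (Melbourne in summer) shown in UTC+13:00 (Apia):
	fmt.Println(ConvertOffset(13, 0, 11*3600, 13*3600)) // 15:00
}
```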
¹ Apia is the capital of Samoa.
Ignoring Bard to Speak to Paulie
So this happened today.
Our team was testing the integration between two systems. The first system — let’s call it Bard — can be configured to make API calls directly to Stripe, or be configured to use the second system — let’s call it Paulie — to call Stripe on its behalf. Bard has a REST API that is used by the HTML front-end to handle user requests. Paulie is designed to be completely isolated from the front-end and has a simple gRPC API that Bard calls. Whether or not Bard calls Paulie at all is determined by the value of an SSM parameter.
The test was set up with Bard configured to bypass Paulie and make calls directly to Stripe. The way we were to verify this was to tail the logs of both Bard and Paulie, make a REST API call, and confirm that logs showed up in Bard but not Paulie.
I got called by those running the test to help, as they were seeing something unusual: when the test was performed, logs were showing up in Paulie. The system was configured for Bard to ignore Paulie and go directly to Stripe, and yet Paulie was being spoken to.
So we started going through the motions. We checked to make sure we had the correct version of Bard deployed, checked the SSM parameter, traced through the code, and restarted Bard a couple of times to make sure it was configured correctly. And after every check we tried the test again, with nothing changing: logs were still coming through from Paulie.
We were at it for about 15 minutes. I was starting to go through the more esoteric explanations for why this was happening, like whether we were using SSM parameters incorrectly and might have been picking up an old configuration or something. Then, as I was going through the traces one last time before giving up, I noticed something: there were no traces from Bard. The REST API did all sorts of things, like contacting the database, before going to Paulie or Stripe, so I was expecting something like that to show up. Yet there was no evidence of any of that happening.
I then asked how this was actually being tested. And you can probably guess what the response was. Turns out the person running the test wasn’t using Bard’s REST API at all, and was making gRPC calls directly to Paulie.
Well, naturally, if you call Paulie directly without calling Bard, it doesn’t matter what Bard is configured to do.
Now, I don’t write this because I’m angry or annoyed. In fact, I came away from this feeling very zen about the whole thing. Mistakes like this happen all the time, it’s fine.
But it’s a perfect opportunity to remind myself that working in tech can sometimes give you tunnel vision, and that sometimes the explanation isn’t technical at all. Sometimes the answer is much simpler than you think.
🔗 ChatGPT clearly has a place
I tried ChatGPT for the first time this morning. I needed a shell script that would downscale a bunch of JPEG images in a directory. I’m perfectly capable of writing one myself, but that would mean poking through the ImageMagick docs trying to remember which of the several zillion arguments is used to reduce the image size. Having one written for me by ChatGPT saved about 15 minutes of this (it wasn’t exactly what I wanted; I did need to tweak it a little).

I don’t know what the future holds with AIs like this, and I acknowledge that it has had an effect on some people’s livelihoods (heck, it may have an effect on mine). But I really can’t deny the utility it provided this morning.