TIL about the JavaScript debugger statement. You can put debugger in a JS source file, and if you have the developer tools open, the browser will pause execution at that line, like a breakpoint:
```javascript
console.log("runs normally");
debugger; // execution pauses here, as if it were a breakpoint
console.log("runs after you resume");
```
This is really going to be useful in the future.
Really enjoyed listening to Om Malik with Ben Thompson on Stratechery today. Very insightful and optimistic conversation.
Phograms
Originally posted on Folio Red, which is why this post references "a new blog".
Pho-gram (n): a false or fanciful image or piece of prose, usually generated by AI*
There’s nothing like a new blog. So much possibility, so much expectation of quality posts shared with the world. It’s like that feeling of a new journal or notebook: you’re almost afraid to sully it with what you think is not worthy of it.
Well, let’s set expectations right now, with a posting of a few AI generated images. 😜
Yes, yes, I know. No one wants to see images from DALL-E and Stable Diffusion that you didn’t make yourself. And yeah, I should acknowledge that these tools have resulted in some real external costs. But there are a few that I would like to keep. And maybe if I added some micro-fiction around each one, it would reduce the feeling that I’m simply pushing AI-generated junk onto the Internet.
So here’s a bunch of them that I generated for something at work, with a short, non-AI written, back-story for each one.
Martian Production
Since the formation of the New Martian Republic in 2294, much of the media consumed on Mars was a direct import from Earth. And even after the establishment of permanent settlements, most Martians were more likely to consume something produced on Terra than something home-grown. So, in 2351, the United Colonial Martian Government (UCMG) formed a committee to spearhead the development of a local media scene. The committee had to come up with a proposal for how to bootstrap the local production of spoken audio, music, the written word, and videography.
In 2353, the first Martian videography production corporation was established. Originally known as Martian Film House, the corporation was devised to produce feature-length films and documentaries, but this soon expanded to include short-form serials and news broadcasts. This culminated in 2354 with what was recorded as the first Mars-wide video broadcast, Climbing the Tholus Summit. Later, in 2360, as part of negotiations with Terra Broadcasting, the UCMG organised the retransmission of all Martian programs back to Earth in exchange for renewing the rights to carry all imported visual media.
The current logo, commissioned soon after the name change to Martian Productions, pays homage to those early pioneers exploring Mars back at the turn of the millennium. And although the technology was not sophisticated enough to carry those people to Mars themselves, they were still able to explore it from afar, through the lens of the Martian rovers. One of the most successful, the rover known as Opportunity, was chosen as the company figurehead.
Content Mill Productions
Jeffery knows that to make it on the Internet, content is everything. He who is willing to put in the hard yards, making something of quality, will get the eyeballs, and thus that sweet, sweet ad revenue everyone’s fighting over. Such prospects are more appealing than the failing demand for the hand-milled flour he currently sells (he still doesn’t understand why he spent so much on that windmill).
But quality will only get you so far. The attention span of those online can now be measured in nanoseconds. People don’t even finish the current TikTok video they’re watching before they swipe on to the next one. No wonder people are going on about three-minute videos as being “long-form content”.
So Jeffery had to make a choice. He had to put his desire for quality aside and start considering ways to simply pump out material. Content has such a dirty ring to it nowadays, but Jeffery disagrees. Compared to what he’s doing now, selling bags of hand-milled flour to no one, the online content game will be his life-raft, his saviour.
And after all, quantity begets quality, and content is nothing more than quantity. So wouldn’t that mean content and quality are essentially the same? (He thinks he’s got that right.) And maybe with so much content being made under one name, it would be easier for others to find him. Get some of those eyeballs going his way. Who knows, he may be able to sell an ad or two. Maybe shut down his current business and go at it full time. Can’t be too many people doing this sort of stuff.
He thinks it could work. It’s all grist for the mill in the end.
* This is a made up word, thereby itself being a phogram.
Just thinking of the failure of XSD, WSDL, and other XML formats back in the day. It’s amusing to think that many of the difficulties that came from working in these formats were waved away with sayings like “ah, tooling will help you there,” or “of course there’s going to be a GUI editor for it.”
Compare that to a format like Protobuf. Sure, there are tools to generate the code, but it assumes the source will be written by humans using nothing more than a text editor.
That might be why formats like RSS and XML-RPC survived: they’re super simple to understand as they are. As for all the others, maybe the lesson is that if your text format depends on tools to author it, it’s too complicated.
I’m starting to suspect that online, multi-choice questionnaires — with only eight hypotheticals and no choice that maps nicely to my preference or behavior — don’t make for great indicators of personality.
The AWS Generative AI Workshop
Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level yet I still got something out of it.
What was striking was just how much of integrating these foundational models (something like an LLM that was pre-trained on the web) involved natural language. Like, if you’re building a chat bot to have a certain personality, you’d start each context with something like:
You are a friendly life-coach which is trying to be helpful. If you don’t know the answer to a question, you are to say I don’t know. (Question)
This would extend to domain knowledge. You could fine-tune a foundational model with your own data set, but an easier, albeit slightly less efficient, way would be to do something like hand-craft a bunch of question-and-answer pairs and feed them straight into the prompt.
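To make that concrete, here’s a minimal sketch of what “feeding Q&A pairs straight into the prompt” might look like. The function, the examples, and the persona are all hypothetical; this isn’t any AWS API, just plain string assembly:

```javascript
// Hypothetical example: hand-crafted Q&A pairs stuffed into the prompt
// as few-shot examples, ahead of the user's actual question.
const examples = [
  {
    question: "What warranty do the X200 headphones come with?",
    answer: "Two years, covering parts and labour.",
  },
  {
    question: "Can I return an opened item?",
    answer: "Yes, within 30 days, minus a restocking fee.",
  },
];

function buildPrompt(userQuestion) {
  const persona =
    'You are a friendly support assistant. If you don\'t know the answer to a question, you are to say "I don\'t know."';
  const shots = examples
    .map((ex) => `Question: ${ex.question}\nAnswer: ${ex.answer}`)
    .join("\n\n");
  return `${persona}\n\n${shots}\n\nQuestion: ${userQuestion}\nAnswer:`;
}

console.log(buildPrompt("Do you ship to Australia?"));
```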
This may extend to agents as well (code that the model interacts with). We didn’t cover agents to a significant degree, but after looking at some of the marketing materials, it seems to me that much of the integration involves instructing the model to put parameters within XML tags (so that the much “dumber” agent can parse them out), and telling it how to interpret the structured response.
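If that’s right, the boilerplate might look something like the sketch below: the prompt tells the model which XML tags to use, and the agent code just pattern-matches them out. The tag names and the parsing here are entirely my own invention, not anything from AWS’s materials:

```javascript
// Hypothetical sketch: the prompt instructs the model to wrap tool
// parameters in XML tags, and the much "dumber" agent code parses them out.
const agentInstructions =
  "When the user asks for a weather report, respond only with:\n" +
  "<get_weather><city>CITY NAME</city></get_weather>";

function parseToolCall(modelOutput) {
  const match = modelOutput.match(
    /<get_weather><city>([\s\S]*?)<\/city><\/get_weather>/
  );
  return match ? { tool: "get_weather", city: match[1].trim() } : null;
}

console.log(parseToolCall("<get_weather><city>Melbourne</city></get_weather>"));
// => { tool: "get_weather", city: "Melbourne" }
```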
A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.
Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something involving vector databases that are available to the model and don’t require fine-tuning. I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).
I don’t know why I think I’ll remember where I saw an interesting link. A few days go by, and when I want to follow it, surprise, surprise, I’ve forgotten where I saw it. The world is swimming in bookmarking and read-it-later services. Why don’t I use them?! 🤦♂️
Replacing Ear Cups On JBL E45BT Headphones
As far as wearables go, my daily drivers are a pair of JBL E45BT Bluetooth headphones. They’re several years old now and are showing their age: many of the buttons no longer work and it usually takes two attempts for the Bluetooth to connect. But the biggest issue was that the ear cups were no longer staying on. They were fine while I wore them, but as soon as I took them off, the left cup would fall to the ground.
But they’re a decent pair of headphones, and I wasn’t keen on throwing them out or shopping for another pair. So I set about looking for a set of new ear cups.
This is actually the second pair of replacement cups I’ve bought for these headphones. The first had a strip of adhesive that stuck the cup straight on to the speaker (it was this adhesive that was starting to fail). I didn’t make a note of where I bought them and a quick search didn’t turn up anything that looked like them. So in December, I settled for this pair from this eBay seller. Yesterday, they arrived.


First impressions were that they were maybe too big. I also didn’t see an adhesive strip to stick them on. Looking at the listing again, I realised that they’re actually for a different line of JBL headphones. But I was a little desperate, so I set about trying to get them on.

It turns out that they’re actually still a good fit for my pair. The aperture is a little smaller than the headphone speaker, but there’s a little rim around each one and I found that by slotting one side of the padding over the rim, and then lightly stretching and rolling the aperture around the speaker, it was possible to get them on. It’s a tight fit, but that just means they’re likely to stay on. And without any adhesive, which is good.

After a quick road test (a walk around the block and washing the dishes), I found the replacement to be a success. So here’s to a few more years of this daily driver.


🔗 Let’s make the indie web easier
Inspiring post. I will admit that while I was reading it I was thinking “what about this? What about that?” But I came away with the feeling (realisation?) that the appetite for these tools might be infinite and that one size doesn’t fit all. This might be a good thing.
Argh, the coffee kiosk at the station is closed. Will have to activate my backup plan: catching the earlier train and getting a coffee two stations down. Addiction will lead you to do strange things. ☕
Finished reading: Twenty Bits I Learned about Making Websites by Dan Cederholm 📚
Got this book yesterday and read through it in about an hour. A joy to read, and a pleasure simply to hold.

Elm Connections Retro
If you follow my blog, you would’ve noticed several videos of me coding up a Connections clone in Elm. I did this as a bit of an experiment, to see if I’d be interested in screen-casting my coding sessions, and if anyone else would be interested in watching them. I also wanted to see if hosting them on a platform that’s not YouTube would gain any traction.
So far, I’ve received no takers: most videos have had zero views, with the highest view count being three. I’m guessing part of the reason is that the audience for this sort of stuff just isn’t there, or maybe it is but they’re spending all their watch-time on YouTube and Twitch. Building an audience on a platform like PeerTube might be feasible, but it’ll be quite a slog to fight for oxygen against these juggernauts.
But I also have to accept that it’s unreasonable of me to expect any decent view numbers after just seven videos, especially the first seven videos from someone who’s starting from scratch. Much like growing an audience for anything else, it’s just one of those things I need to work at, if I want it. Part of me is not sure that I do want it. And yet, the other part of me is seeking out posts about coders streaming on Twitch. So maybe that desire is still there.
Nevertheless, I’m glad I took on this small summer project. I had a chance to experiment with Elm, which was a much-needed exercise of my programming skills. I also had a chance to try out video production and editing using DaVinci Resolve, and I had a play around with PeerTube which… well, who can resist playing around with software? So although I didn’t get the banana1, at least I managed to compost the peel.
Anyway, on to the retro. Here are a few things I’ll need to keep in mind the next time I want to attempt this (it’s written in the second person as I’m writing this to myself):
Recording
- Do a small recording test to make sure your setup is working. The last thing you want is a 30-minute recording with no audio because you forgot to turn on your mic.
- Drink or sneeze while recording if you need to but make sure you stop moving things on the screen when you do, especially the mouse. That would make it easier for you to trim it out in the edit. This also applies when you’re thinking or reading.
- When you do restart after drinking or sneezing, avoid repeating the last few words you just said. Saying “I’m going to be… (sneeze)… going to be doing this” makes it hard to cut it from the edit. Either restart the sentence from the beginning (“I’m going to be… (sneeze)… I’m going to be doing this”), or just continue on (“I’m going to be… (sneeze)… doing this”).
- Also, just before you restart after drinking or sneezing, say a few random words to clear your voice.
- Avoid saying things like “um” and “ah” when you’re explaining something. I know it’s natural, so if you can’t avoid it, stop moving when you say them so they can be edited out.
- Don’t sigh. It makes it seem like you’re disinterested or annoyed.
- Narrate more. Long stretches of keyboard noises do not make for interesting viewing.
- Saying things like “let’s change this” can be improved upon by saying why you’re changing “this”. Viewers know that you’re changing something — they can see it. What they can’t see is your thinking as to why it’s being changed at all.
- Try to keep the same distance from the mic, and speak at the same volume, especially when saying things in your “thinking” voice (it tends to be a little quiet).
- If you think the editor font is the right size, make it two steps larger.
Editing
- Showing that you’re thinking or reading is fine, but don’t be afraid to trim it down to several seconds or so. Long stretches of nothing happening on screen look a little boring.
- Proofread any titles you use. Make sure you’ve got the spelling right.
- Try not to get too fancy with the effects you use to show the passage of time. Doing so means you’ll need to recreate the same effects for subsequent videos. Less might be more here.
- Learn the keyboard shortcuts for DaVinci Resolve. Here are some useful ones:
- m: Add new marker.
- Shift+Up, Shift+Down: Go to previous/next marker (only works in the edit section though? 🤨).
- Cmd+\: Split the selected clip.
- Option+Y: Select all clips to the right of the playhead (useful when trimming stuff out and you need to plug the gap).
- Your screen is not big enough for 1080p recordings. Aim for 720p so that the video will still be crisp when exporting (a 16:9 video intended for 720p will need a capture region of 1280x720)
Publishing
- You don’t need to announce a new episode as soon as it’s uploaded. Consider spacing them out to one or two a week. That would make it less like you’re just releasing “content”.
- Aim to publish it around the same time, or at least on the same day. That should give others an expectation of when new episodes will be released.
- Put some thought into the video poster. Just defaulting to the first frame is a little lazy.
- If you’re using PeerTube, upload the videos as private first and don’t bother with the metadata until the upload is successful. Then go back and edit the metadata before making the video public (changing a video from private to public will send out an ActivityPub message). That way, there’s less chance of you losing metadata changes if the upload were to fail.
The banana here is anyone taking an interest in these videos; and I guess releasing Clonections itself? But not doing so is a conscious choice, at least for now. ↩︎
If someone asked me what sort of LLM I’d use for work, I wouldn’t go for a code assistant. Instead, I’d have something that’ll read my Slack messages, and if one looks like a description of work we need to do, it’ll return the Jira ticket for it, or offer to create one if I haven’t logged one yet.
On Go Interfaces And Component-Oriented Design
Golang Weekly had a link to a thoughtful post about interfaces. It got me thinking about my use of interfaces in Go, and how I could improve here. I’ve been struggling with this a little recently. I think there’s still a bit I’ve got to unlearn.
In the Java world, where I got my start, the principle behind maintainable systems was a component-based approach: small, internally coherent units of functionality that you stick together. These units were self-contained, dependencies were well defined, and everything was tested in isolation. This meant lots of interfaces, usually defined up front before you even started writing the code that uses them.
I can see why this is popular. The appeal of component design is reuse: build one component for your system and you can use it in another system. I think the principles come from electrical engineering, where you build isolated components, like a transistor or an IC, that when put together produce the working electrical system. So it’s no surprise that this was adopted by the Java and object-oriented community, which took such ideals of reuse to extreme levels, as if you could build a system out of the components of another system (this seemed like the whole motivation behind J2EE). Nevertheless, it was a pattern of design that appealed to me, especially my fondness for coming up with grand designs of abstraction layers.
But recently, I’ve been experiencing a bit of a loss of religion. As the post points out, the ideas of component design have merit in principle, but they start to break down in reality. Code reuse isn’t free, and if taken too far, you end up spending so much effort on the costs of abstraction (rewriting models, maintaining interfaces, dealing with unit-test mocks) for very little benefit. Are you really going to use that Go service in the “catalogue manager” in something else?
So I agree with the post, but I come away from it wondering what an alternative to component design actually looks like. I’m still trying to figure this out, and it might be that I’ll need to read up on this some more. But maybe it’s to take the idea of self-contained units and throw away the imagined idea of reuse. In concrete terms: ditch the interfaces and replace them with direct method calls.
As for testing, maybe focus less on testing individual units and more on the system as a whole. I’m already on board with the idea of not mocking out the database in unit tests, but I’m starting to come around to the idea of a “unit” being more than just a single type tested in isolation. I’m guessing this is inevitable as you throw away your interfaces and start depending on other types explicitly. That is, until you begin seeing areas where reuse is possible. But maybe this gets back to the original idea of Go interfaces: they’re more discovered than prescribed.
Anyway, it might be worth trying this approach for the next personal project I have in mind. Of course, that’s the easy part. Finding a way to adopt this pattern at work would be interesting.
So apparently tonight’s earworm is lesser known songs from Men At Work’s “Business As Usual” album, like People Just Love To Play With Words, I Can See It In Your Eyes, and Be Good Johnny. 🎵
In the end it took significantly more time to write about it than to actually do it, but the dot product approach seems to work.
🎥 Elm Connections #7: Deploying
In which I round out the series by deploying Clonections to Netlify.
Detecting A Point In a Convex Polygon
Note: there are some interactive elements and MathML in this post. So for those reading this in RSS, if it looks like some formulas or images are missing, please click through to the post.
For reasons that may or may not be made clear later, I’ve been working on something involving bestagons. I’ve tended to shy away from things like this before, mainly because of the maths involved in tasks like determining whether a point is within a hexagon. But instead of running away once again from things more complex than a grid, I figured it was time to learn this once and for all. So off I went.
First stop was Stack Overflow, and this answer on how to test if a point is inside a convex polygon:
You can check that easily with the dot product (as it is proportional to the cosine of the angle formed between the segment and the point, if we calculate it with the normal of the edge, those with positive sign would lay on the right side and those with negative sign on the left side).
I suppose I could’ve taken this answer as it is, but I knew that if I did, I’d have something that’s little more than magic. It’d do the job, but I’d have no idea why. Now, like many, if I can get away with having something that works without me knowing how, I’m likely to take it. But when it comes to code, doing this usually comes back to bite me in the bum. So I’m trying to look for opportunities to dig a little deeper than I normally would, and learn how and why something works.
It took me a while, and a few false starts, but I think I got there in the end. And I figured it would be helpful for others to know how I came to understand how this works at all. And yeah, I’m sure this is provable with various theorems and relationships, but that’s just a little too abstract for me. No, what got me to the solution in the end was visualising it, which is what I’ll attempt to do below.
First, let’s ignore polygons completely and consider a single line. Here’s one, represented as a vector:

Oh, I should point out that I’m assuming that you’re aware of things like vectors and trigonometric functions, and have heard of things like dot-product before. Hopefully it won’t be too involved.
Anyway, we have this line. Let’s say we want to know if a specific point is to the “right” of the line. Now, if the line were vertical, this would be trivial to do. But here we’ve got a line that’s on an angle. And although a phrase like “to the right of” is still applicable, it’ll only be a matter of time before we have a line where “right” and “left” have no meaning to us.
So let’s generalise it and say we’re interested in seeing whether a point is on the same side as the line’s normal.
Now, there are actually two normals available to us, one going out on either side of the line. But let’s pick one and say we want the normal that points to the right if the line segment is pointing directly up. We can add that to our diagram as a grey vector:

Now let’s consider this point. We can represent it as a vector that shares the same origin as the line segment1. With this we can do all sorts of things, such as work out the angle between the two (if you’re viewing this in a browser, you can tap on the canvas to reposition the green ray):

This might give us a useful solution to our problem here; namely, if the angle between the two vectors falls between 0° and 180°, we can assume the point is to the “right” of the line. But we may be getting ahead of ourselves. We haven’t even discussed how we can go about “figuring out the angle” between these vectors.
This is where the dot product comes in. The dot product is an equation that takes two vectors and produces a scalar value, based on the formula below:
One useful relationship of the dot product is that it’s proportional to the cosine of the angle between the two vectors:
Rewriting this will give us a formula that will return the angle between two vectors:
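The formulas themselves render as MathML in the original post, but they’re the standard ones: a · b = aₓbₓ + aᵧbᵧ, a · b = |a| |b| cos θ, and therefore θ = arccos(a · b / (|a| |b|)). A sketch of them in JavaScript, with vectors as plain {x, y} objects, looks like this:

```javascript
// 2D dot product: a · b = ax*bx + ay*by
function dot(a, b) {
  return a.x * b.x + a.y * b.y;
}

// Magnitude (length) of a vector: |v|
function magnitude(v) {
  return Math.hypot(v.x, v.y);
}

// Angle between two vectors, from a · b = |a| |b| cos(θ)
function angleBetween(a, b) {
  return Math.acos(dot(a, b) / (magnitude(a) * magnitude(b)));
}
```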
So a solution here would be to calculate the angle between the line and the point vector, and as long as it falls between 0 and 180°, we can determine that the point is on the “right” side of the line.
Now, I actually tried this approach in a quick and dirty mockup using JavaScript, but I ran into a bit of an issue. For you see, the available inverse cosine function did not provide a value beyond 180°. When you think about it, this kinda makes sense, as the cosine function starts moving from -1 back to 1 as the angle goes beyond 180° (or below 0°).
But we have another vector at our disposal, the normal. What if we were to calculate the angle between those two?

Ah, now we have a relationship that’s usable. Consider when the point moves to the “left” of the line. You’d notice that the angle is either greater than 90° or less than –90°. These just happen to be angles at which the cosine function yields a negative result. So a possible solution here is to work out the angle between the point vector and the normal, take the cosine, and if it’s positive, the point is on the “right” side of the line (and it’s on the “left” side if the cosine is negative).
But we can do better than that. Looking back at the relationship between the dot product and the angle, we can see that the only way this equation could be negative is if the cosine is negative, since the vector magnitudes will always be positive. So we don’t even need to work out angles at all. We can just rely on the sign of the dot product between the point vector and the normal.
And it’s here that the solution clicked. A point is to the “right” of a line if the dot product of the point vector and the “right”-sided normal is positive. Look back at the original Stack Overflow answer above, and you’ll see that’s pretty much what was said there as well.
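In code, that test is just a sign check on the dot product, something like this (a sketch, with vectors as plain {x, y} objects; start is the line segment’s origin and normal is the “right”-sided normal):

```javascript
// True if `point` is on the same side of the line as `normal`.
function isOnNormalSide(start, normal, point) {
  const toPoint = { x: point.x - start.x, y: point.y - start.y };
  return normal.x * toPoint.x + normal.y * toPoint.y > 0;
}
```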
Now that we’ve got this working for a single line, it’s trivial to extend this to convex2 polygons. After including all the line segments, with the normals pointing inwards, calculate the dot product between each of the normals with the point, and check the sign. If they’re all positive, the point is within the polygon. If not, it’s outside.
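Here’s a sketch of the whole test, written fresh from the description above (not the original JavaScript mockup). It assumes the polygon’s vertices are listed counter-clockwise in a y-up coordinate system, so the inward normal of an edge (ex, ey) is (−ey, ex); with clockwise vertices or screen coordinates you’d flip the sign:

```javascript
// Returns true if `point` lies inside (or on the boundary of) a convex
// polygon. `polygon` is an array of {x, y} vertices in counter-clockwise
// order, assuming a y-up coordinate system.
function pointInConvexPolygon(polygon, point) {
  for (let i = 0; i < polygon.length; i++) {
    const a = polygon[i];
    const b = polygon[(i + 1) % polygon.length];

    // Edge vector, and its inward-pointing normal
    const edge = { x: b.x - a.x, y: b.y - a.y };
    const normal = { x: -edge.y, y: edge.x };

    // Vector from the edge's origin to the point
    const toPoint = { x: point.x - a.x, y: point.y - a.y };

    // A negative dot product means the point is outside this edge
    if (normal.x * toPoint.x + normal.y * toPoint.y < 0) {
      return false;
    }
  }
  return true;
}

// Example: a unit square
const square = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 1, y: 1 }, { x: 0, y: 1 }];
console.log(pointInConvexPolygon(square, { x: 0.5, y: 0.5 })); // true
console.log(pointInConvexPolygon(square, { x: 1.5, y: 0.5 })); // false
```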

So here’s an approach that’ll work for me, and is relatively easy and cheap to work out. And yeah, it’s not a groundbreaking approach, and basically involved relearning a bunch of linear algebra I’ve forgotten since high school. But hey, better late than never.
So I guess today’s beginning with a game of “guess the secret password requirements.” 😒

🎥 Elm Connections #6: Fetching And JSON Decoding Puzzles
In which I use Elm’s HTTP client and JSON decoder to fetch puzzles from an external resource.