I read two things about AI this morning.

The first was this excellent post by Richard Griffiths on his blog about what creation will mean after AI has slopped everyone:

I believe we’re on the cusp of a seismic change in the culture every bit as significant as the shift around 1910 when it was suddenly impossible to be a Victorian any more. Just as no one can be Charles Dickens these days, very soon, no one will be able to market anything that looks like what AI could produce. Sure, we’ll make use of AI tools in the background, but readers, listeners and viewers won’t accept what AI offers unless it has first passed through the distinctively human creative imagination.

I’ve been trying to observe how my feelings about AI and its role in creation have evolved over the last few weeks, and I feel my position is very much aligned with this one. I can definitely see AI tools being useful in the creation process, but in a similar role to the one code completion plays in a code editor. The best use of AI is not as an end in itself, but as a means to an end, overseen and reviewed by a human.

These feelings were in contrast to those I got from reading this quote from Andrew Ng, via Simon Willison’s blog:

There’s a new breed of GenAI Application Engineers who can build more-powerful applications faster than was possible before, thanks to generative AI. Individuals who can play this role are highly sought-after by businesses, but the job description is still coming into focus. […]

Skilled GenAI Application Engineers meet two primary criteria: (i) They are able to use the new AI building blocks to quickly build powerful applications. (ii) They are able to use AI assistance to carry out rapid engineering, building software systems in dramatically less time than was possible before. In addition, good product/design instincts are a significant bonus.

There were a couple of things that I disliked about this position, like the idea that a “GenAI Application Engineer” is different from a regular software developer, as if the introduction of AI tools warrants a completely different job title. Replace “AI” with “IDE editor” in the quote above to see how ridiculous this sounds.

But it also revealed to me why this quote rubbed me the wrong way: the motivation of these AI proponents1. They’re not interested in quality or human connection: they’re interested in cheap and fast. Get to market first in the crappiest way possible so they can crush their competitors with unlimited venture funding by undercutting them in cost, only to jack up prices and start introducing taxes once the competition is dead.

I am not an AI hater: I’m using AI in my own personal work, and anyone that’s been on this blog has seen me add a cheeky AI image from time to time. But I cannot sign on to this: shunning quality for moving fast just feels like more of the same crappy playbook that’s going on in the tech world.

The alternative, the one proposed by Richard, will mean moving slower to ensure that quality and human connection are there. And maybe that’s the winning move in this new AI world. Despite what new technologies are out there, the idea of good things taking time has not stopped being true.


  1. I don’t want to pick on Andrew. I don’t know his real position on AI. His was just the unlucky post that showed up in my RSS feed this morning and stirred these feelings. ↩︎