Of Lemons And Modern Software
I found myself nodding my head throughout Alex Russell’s post The Market For Lemons:
The complexity merchants knew their environments weren’t typical, but they sold highly specialised tools as though they were generally appropriate. They understood that most websites lack tight latency budgeting, dedicated performance teams, hawkish management reviews, ship gates to prevent regressions, and end-to-end measurements of critical user journeys. They understood the only way to scale JS-driven frontends are massive investments in controlling complexity, but warned none of their customers.
He’s talking about JavaScript frameworks and their failure to produce performant web frontends. But it’s amazing how much of this applies to the backend world as well. Replace SPAs with micro-services, and React with Kubernetes, and you’ve pretty much nailed the problem in that space too.
I’m willing to believe that the trap we find ourselves in is not entirely the fault of these vendors. I can picture a scenario where some dev got curious about React, used it for a small project, thought it was the bee’s knees, and started sharing it with others. The tech was complex enough to be usable yet interesting, and had enough of a pedigree that it was easy to persuade others to adopt it (“Facebook and Google use this, and they’re bajillionaires serving half the world’s population. Why doubt tech from them?”).
Eventually it received mainstream adoption and, despite the issues, became the default choice for anything new because of two characteristics. The first is the sunk cost associated with learning these frameworks, and the need to justify the battle scars that come from fighting with them.
The second is the fear of rework. I suspect most organisations forget that many of these Big Tech firms started with what were effectively LAMP stacks. They had to replace those stacks with what they have now because of user demand, and the decision was easy because the millions of requests they were getting per second were stressing their software.
And I suspect organisations today feel the need to skip the simple tech stacks and go straight to something that can serve those billions of users, just to spare themselves a rewrite they perceive as inevitable. Sure, that server-side rendered web page handled by a single web server and PostgreSQL database is enough to serve those 100 DAU now, but someday that may grow to three billion users, and we need to be ready for that possibility now (even if it never materialises).
So I totally understand the mess we’re in now. But that doesn’t explain why these vendors went to such lengths promoting this tech in the first place. Why would they not be upfront about the massive investment they made to be effective with this tech? Why not disclose why they specifically had to go to such lengths, rather than touting properties like “the DOM is slow” or “Amazon uses micro-services”?
I’d be curious to find out, but maybe later. I’ve got some Kubernetes pods to deal with first.