I published six articles this week. One about PostgreSQL. Another about AI agents. Another on context management. A tutorial on automation. An analysis of debugging. And an adversarial advice framework for evaluating MVPs.
It wasn’t planned. Each article came from a paper, a talk, or a project that I found interesting individually. But looking at them together, there’s a common thread I hadn’t noticed while writing them.
All six are saying the same thing.
The Pattern I Didn’t Look For
It started with Bohan Zhang’s article about how OpenAI scales PostgreSQL. The conclusion was brutal: 800 million users, a single primary, no sharding. PgBouncer (from 2007). Read replicas (a concept from the 90s). The most boring tech on the planet, still humming along underneath one of the most widely used applications in history.
Then came Michael Bolin dissecting the guts of coding agents. Codex CLI, Claude Code — doesn’t matter. Inside, they’re just a while loop with an LLM. No knowledge graphs. No symbolic planners. A loop, some tools, and the model deciding when to stop. The magic was a while True.
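Stripped of branding, the loop Bolin describes can be sketched in a few lines. This is a hedged illustration, not Codex CLI’s or Claude Code’s actual source: `call_model` stands in for any LLM API call, and the message format is made up for the example.

```python
# Minimal sketch of the agent loop: ask the model, run whatever tool it
# picks, feed the result back, repeat until the model says it's done.

def run_agent(task, call_model, tools, max_steps=10):
    """call_model(history) -> {"type": "tool", "tool": name, "args": ...}
    or {"type": "done", "content": answer}. `tools` maps names to functions."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)      # the model decides the next step
        if action["type"] == "done":      # ...including when to stop
            return action["content"]
        result = tools[action["tool"]](action["args"])  # run the chosen tool
        history.append({"role": "tool", "content": result})
    return None  # safety valve: give up after max_steps
```

That’s the whole architecture. Everything else — sandboxing, diffing, retries — is plumbing around this loop.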
Then OpenAI’s Cookbook articles on context engineering. Managing what the model sees turns out to matter more than the model itself. And the techniques are old: inject context at the start (like a README), trim the history (a circular buffer), compress old data (a summary). Nothing a 2000s-era chat system wasn’t already doing.
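Those three techniques fit in one small class. A minimal sketch, not the Cookbook’s code — the `summarize` callback is a placeholder that in practice would be another model call:

```python
# The three "old" context tricks in one place: a fixed preamble (the
# README), a bounded history (circular buffer via deque), and a crude
# summary of whatever falls off the end (compression).
from collections import deque

class Context:
    def __init__(self, preamble, max_turns=4):
        self.preamble = preamble                # always injected at the start
        self.history = deque(maxlen=max_turns)  # old turns fall off automatically
        self.summary = ""                       # compressed view of evicted turns

    def add(self, turn, summarize=lambda s, t: (s + " " + t).strip()):
        if len(self.history) == self.history.maxlen:
            # about to evict the oldest turn: fold it into the summary first
            self.summary = summarize(self.summary, self.history[0])
        self.history.append(turn)

    def render(self):
        parts = [self.preamble]
        if self.summary:
            parts.append("Summary of earlier turns: " + self.summary)
        parts.extend(self.history)
        return "\n".join(parts)
```

Swap the lambda for a real summarization call and you have the core of most production context managers.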
The automation tutorial drove it home: OpenAI’s Codex Automations are literally cron + curl + an LLM. The oldest scheduler in Unix running the latest model on the planet. The infrastructure is 40 years old. The brain is 2.
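The whole stack fits in one crontab line. A hedged sketch — the model name, prompt, and log path are placeholders, not OpenAI’s actual automation wiring; only the endpoint and JSON shape are the standard chat-completions API:

```
# Every day at 02:00: send a prompt to the API, append the answer to a log.
# The key comes from the environment; cron needs it set in the crontab or
# a sourced file.
0 2 * * * curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Summarize the nightly logs"}]}' \
  >> /var/log/nightly-agent.log
```

No queue, no orchestrator, no dashboard. A scheduler from 1975 and an HTTP call.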
And finally, the two posts that aren’t about infrastructure. The Jane Street puzzle where a 2,500-layer neural net turned out to be MD5. It was solved with classic debugging: observe the data patterns, simplify the problem, and narrow down options until the answer is clear. The tools were modern (SAT solvers, ChatGPT). The method was classic (systematic hypothesis elimination).
And the adversarial advice for MVPs: simulate five experts with an LLM to evaluate ideas before you build them. Sounds futuristic until you realize it’s just a wargame. The military has been running them since the 50s. Product teams call them pre-mortems. The only new part is that the panel now costs $2 in tokens instead of $50,000 in consulting fees.
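The $2 wargame is mostly prompt construction. A sketch under my own assumptions — these personas and the closing question are illustrative, not the framework from the article; the resulting string would go to any chat API:

```python
# Build one prompt that asks a model to role-play an adversarial panel.

PERSONAS = [
    "a skeptical VC who has seen this pitch fail before",
    "a security engineer looking for the breach headline",
    "a churned user explaining why they left",
    "a competitor planning their counter-move",
    "a regulator reading the fine print",
]

def panel_prompt(idea):
    roles = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(PERSONAS))
    return (
        f"Evaluate this MVP idea: {idea}\n"
        f"Debate it from each of these perspectives:\n{roles}\n"
        "End with the single strongest objection and whether it is fatal."
    )
```

The pre-mortem structure does the work; the model just staffs the table.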
The Thesis (That’s Not Mine)
I’m not saying anything new. Dan McKinley has been saying this since his 2015 talk “Choose Boring Technology.” DHH says it every time someone suggests Kubernetes for a CRUD app. Fred Brooks said it in 1986 when he wrote “No Silver Bullet.”
But this week, six unrelated examples came together to prove the same point across different domains:
| Domain | The “boring” solution that works | The “new” promise of something better |
|---|---|---|
| Databases | PostgreSQL + PgBouncer | Distributed database with sharding |
| AI agents | while loop + tools | Multi-agent architecture with orchestrator |
| Context management | YAML + README + circular buffer | RAG with vector store + embedding pipeline |
| Automation | cron + bash + curl | Cloud-native orchestration platform |
| Debugging | Observe → reduce → hypothesize → verify | End-to-end interpretability tools |
| Idea evaluation | Structured debate with published frameworks | Market research platform with panel data |
The left column is what people solving problems at scale actually use. The right column is what’s sold at conferences.
Why This Happens
It’s not that new technology is bad. It’s that new technology solves problems most teams don’t even have.
OpenAI doesn’t need sharding because its access pattern is 95% reads. A single primary handles the writes that remain because they did the work of isolating heavy write paths and offloading them elsewhere. It wasn’t flashy. It was surgical.
Coding agents don’t need sophisticated architectures because the model is already smart enough to decide, in a simple loop, what tool to use and when to stop. The complexity of the orchestrator is inversely proportional to the intelligence of the model.
cron doesn’t need replacing because 99% of automations boil down to “run this every N hours.” If your automation needs more — event sourcing, failure compensation, retry logic with backoff — sure, you need something more. But most don’t. And what you don’t need gets in the way.
The Artisan’s Trap
There’s a cognitive bias that explains why we keep searching for the magic tool. It’s called complexity bias: the tendency to prefer complex solutions over simple ones because complexity feels more suitable for hard problems.
If your database is slow, the solution “move heavy writes somewhere else” feels like a hack. The solution “implement sharding with consistent hashing and automatic rebalancing” feels like real engineering. And as engineers, we lean toward solutions that feel technical.
But the hack that works is better than the engineering that doesn’t get done.
OpenAI could’ve spent six months implementing sharding. Instead, they moved a few workloads to Cosmos DB and spent those same six months building features for their 800 million users.
In plain language: the opportunity cost of the elegant solution is the solution you didn’t build in the meantime.
What This Means for You
It’s not that you shouldn’t learn new technologies. It’s that, before adopting them, you should ask yourself:
What problem am I solving that boring technology can’t handle?
If the answer is “none, but the new one sounds cooler,” you’re adding complexity without benefit. And complexity always has a cost. In bugs, onboarding time, 3 a.m. fire drills.
If the answer is “I need X, and PostgreSQL doesn’t have it” — great. Adopt the new thing. But be specific about what X is.
Can I solve this with what I already have and a bit of discipline?
OpenAI solved their connection bottleneck with PgBouncer. They didn’t switch to a new database. They didn’t rewrite the data access layer. They just put a connection pooler in front. The discipline was identifying the real problem, not the one they wanted to solve.
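“Put a pooler in front” is about a dozen lines of configuration. A minimal sketch with placeholder names and illustrative numbers — not OpenAI’s settings:

```ini
; PgBouncer: thousands of app connections fan in, a small pool of real
; Postgres connections goes out.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction      ; release the server connection after each transaction
max_client_conn = 5000       ; what the application may open
default_pool_size = 20       ; what Postgres actually sees
auth_file = /etc/pgbouncer/userlist.txt
```

Point the app at port 6432 instead of 5432 and the connection storm becomes a trickle.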
Am I solving a real problem or a future problem?
“It won’t scale” is the most expensive phrase in software engineering. Because it’s usually correct — technically, nothing scales infinitely. But in practice, 99% of projects die before scaling becomes an issue. And the 1% that survives will have the resources to fix it when the time comes.
Discipline Isn’t Glamorous
It’s not that using a distributed database written in Rust with its own consensus protocol isn’t cool. It’s awesome. It makes for great talks. It looks fantastic on a resume.
But if what you need is PostgreSQL with a connection pooler and properly written queries, the distributed database is a cost with no benefit. And the cost isn’t just technical — it’s cognitive. Every hour spent configuring the cluster is an hour not spent understanding your data, optimizing your queries, or talking to your users.
Boring discipline — simple queries, aggressive timeouts, workload isolation, cron jobs running scripts, up-to-date READMEs, methodical debugging — doesn’t make the keynotes. It doesn’t have a logo. It doesn’t get its own conference.
But it works when nothing else does.
And this week, six unrelated articles reminded me of that all at once. Sometimes the common thread is there all along. You just have to step back and look for it.
The six articles from this week:
- OpenAI scales PostgreSQL for 800M users with a single writer — The battle-hardened database that delivers what the new kids promise.
- Your AI coding agent is just a while loop with delusions of grandeur — Dismantling the magic of Codex CLI and Claude Code.
- Context engineering: the invisible skill — What the model sees matters more than the model itself.
- Codex Automations, no Codex needed — cron + bash + LLM = midnight agents.
- A 2,500-layer neural net turns out to be MD5 — Classic debugging meets ML.
- Five non-existent experts review your startup — Adversarial debates as a decision-making tool.