Agents

The Real Bottleneck in AI Isn't the Model — It's Our Communication
reading time: 6 minutes

We’re living through one of the fastest paradigm shifts in modern computing. Tools are getting better, models are getting smarter, and the ecosystem is moving from prompt engineering toward something far more powerful: context engineering.

But even as the tooling evolves quickly, it is clear that some people get extraordinary results out of AI tools… and others hit diminishing returns.

And when you peel back the layers, the reason is surprisingly human. It’s almost always about how effectively a person communicates with the machine, and how unnatural that still feels for most of us.

Generative AI vs. Agents: A Simple Litmus Test
reading time: 2 minutes

I’m writing this because I’ve made the mistake myself. I assumed everything should be an agent. If there was a problem to solve, my first instinct was, “let’s build an agent for that.” It took running headfirst into the tradeoffs of performance, debugging headaches, and unnecessary complexity for me to realize that not everything benefits from being agentic.

And honestly, this isn’t a new struggle. We’ve seen the same pattern with AI more broadly: reaching for it to solve problems that are often handled better by simpler, more traditional approaches. Now agents and agentic systems are following the same trajectory, turned to as the “solution” even when they’re not the right fit, or at least not the first tool to reach for.
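
To make the distinction concrete, here is a minimal sketch of the two shapes in Python. `call_model` is a hypothetical placeholder for whatever LLM client you actually use; the function names and loop structure are illustrative assumptions, not any specific framework’s API. One plausible reading of the litmus test: if the one-shot version solves the problem, stop there.

```python
# Hypothetical sketch: one-shot generative call vs. an agent loop.
# `call_model` stands in for any real LLM client; swap in your own.

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an API client)."""
    return f"<model output for: {prompt[:40]}...>"

# Generative AI: one input, one output, no control flow.
# If this solves your problem, an agent only adds latency,
# cost, and debugging surface.
def summarize(text: str) -> str:
    return call_model(f"Summarize:\n{text}")

# Agent: the model chooses actions, observes results, and loops.
# Reach for this only when the task genuinely requires iterating
# against tools or an environment.
def run_agent(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        action = call_model(f"{history}\nNext action (or DONE):")
        if "DONE" in action:
            break
        observation = f"<result of {action}>"  # placeholder tool call
        history += f"\nAction: {action}\nObservation: {observation}"
    return call_model(f"{history}\nFinal answer:")
```

The difference is not the model; it is the control flow. The agent version is strictly more complex to test, trace, and debug, which is exactly the cost worth asking about before reaching for it.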