We’ve gotten pretty comfortable trusting LLM output. The answers sound authoritative. The code compiles. The reasoning feels right. And then we ship it. But the most dangerous thing an AI can do isn’t giving you a wrong answer; it’s giving you a wrong answer you never think to question. The problem isn’t that AI is sometimes wrong. It’s that we keep asking one model to be the authority.
That’s what pushed me into a little experiment: instead of using an LLM like an answer machine, what if I used it like a decision stress test?
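To make the idea concrete, here is a minimal sketch of what a "decision stress test" can look like in code: instead of accepting a single answer, the model's own recommendation is fed back to it with instructions to attack it. The function names, prompts, and the stub client are all illustrative assumptions, not part of any particular API; in practice `ask` would wrap a real LLM call.

```python
# Sketch of a "decision stress test": instead of accepting one answer,
# feed the model's own recommendation back and ask it to argue against it.
# The client is abstracted behind a callable so any LLM API can slot in;
# the two-pass propose-then-attack structure is the illustrative part.

def stress_test_decision(ask, question: str) -> dict:
    """Run one decision through a propose-then-attack loop.

    `ask` is any callable that takes a prompt string and returns the
    model's reply as a string (e.g. a thin wrapper around an LLM API).
    """
    proposal = ask(f"Recommend a course of action, with reasoning: {question}")
    critique = ask(
        "Act as a skeptical reviewer. List the strongest reasons this "
        f"recommendation could be wrong:\n\n{proposal}"
    )
    return {"proposal": proposal, "critique": critique}

# Demo with a stub "model" so the sketch runs without an API key.
canned = {
    "Recommend": "Adopt a queue between the API and the workers.",
    "Act as": "A queue adds latency and a new failure mode to operate.",
}
stub = lambda prompt: next(v for k, v in canned.items() if prompt.startswith(k))

result = stress_test_decision(stub, "Should we decouple ingestion from processing?")
print(result["critique"])
```

The point of the structure, rather than the specific prompts, is that the second pass is adversarial by construction: the model is never asked whether it agrees with itself, only why it might be wrong.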
Introduction
Building a platform like Swoop means juggling a mobile app, dashboards, APIs, infrastructure, and — more recently — AI agents. Over time, I built a workflow using Claude Code that treats all of it as one intelligent workspace: a single place where code, data, operations, and testing are deeply connected.
The key insight: AI becomes genuinely useful when it shares your mental model of the system.
When architecture, tooling, and context are explicit, Claude stops acting like autocomplete and starts reasoning like a teammate who understands your entire environment. This isn’t about clever prompts — it’s about teaching your workspace to think in context.
AI is one of the most polarizing topics in engineering right now. For some, it represents a threat to jobs or a black-box risk; for others, it’s the next great productivity revolution. But beneath the noise, there’s a more subtle — and in many ways more exciting — shift underway: the evolution of AI tools like Cursor and Claude into customizable workflow partners.
Instead of being just autocomplete engines, these tools are learning how to meet us exactly where we are. Through hooks, sub-agents, and local context, they’re becoming bridges across the many layers and abstractions engineers have to navigate daily.
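As a hedged illustration of what a hook looks like in practice: Claude Code reads hook definitions from a settings file (e.g. `.claude/settings.json`), and a minimal configuration can run a project command whenever the agent edits a file. The matcher names and overall shape below follow Claude Code's documented hooks format at the time of writing, but treat this as a sketch of the mechanism rather than canonical syntax.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint"
          }
        ]
      }
    ]
  }
}
```

The interesting part isn't the linting; it's that the tool now enforces your team's conventions automatically, without being reminded in every prompt.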