<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai on Eric Irwin</title><link>http://ericirwin.io/tags/ai/</link><description>Recent content in Ai on Eric Irwin</description><generator>Hugo</generator><language>en-us</language><managingEditor>Eric.Irwin@gmail.com (Eric Irwin)</managingEditor><webMaster>Eric.Irwin@gmail.com (Eric Irwin)</webMaster><lastBuildDate>Sat, 07 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://ericirwin.io/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Borrowing from Team Topologies to Make Sense of Claude Agent Teams</title><link>http://ericirwin.io/posts/borrowing-from-team-topologies-for-claude-agent-teams/</link><pubDate>Sat, 07 Feb 2026 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/borrowing-from-team-topologies-for-claude-agent-teams/</guid><description>&lt;p&gt;There&amp;rsquo;s a habit I can&amp;rsquo;t turn off. Whenever I&amp;rsquo;m working on something — whether it&amp;rsquo;s organizing a project, structuring a team, or just figuring out how to approach a problem — my brain immediately goes to: &lt;em&gt;what&amp;rsquo;s the optimal shape for this?&lt;/em&gt; What information needs to flow, and between whom? Where are the boundaries?&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s the kind of thinking that Matthew Skelton and Manuel Pais gave a name and a framework to with &lt;em&gt;Team Topologies&lt;/em&gt;. If you haven&amp;rsquo;t read it, the short version is that it offers a set of mental models for how engineering teams should be structured — not around org charts, but around the flow of value. How teams communicate, where cognitive load sits, what interaction modes make sense for a given type of work. It fundamentally changed how I think about engineering organizations, and honestly, it&amp;rsquo;s one of those frameworks that keeps paying dividends years after you first encounter it.&lt;/p&gt;</description></item><item><title>The Case for Making Your AI Tools Argue With Each Other</title><link>http://ericirwin.io/posts/the-case-for-making-your-ai-tools-argue-with-each-other/</link><pubDate>Sat, 24 Jan 2026 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-case-for-making-your-ai-tools-argue-with-each-other/</guid><description>&lt;p&gt;We&amp;rsquo;ve gotten pretty comfortable trusting LLM output. The answers sound authoritative. The code compiles. The reasoning feels right. And then we ship it. But the most dangerous thing an AI can do isn&amp;rsquo;t give you a wrong answer. The problem isn&amp;rsquo;t &amp;ldquo;AI is sometimes wrong.&amp;rdquo; It&amp;rsquo;s that we keep asking one model to be the authority.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s what pushed me into a little experiment: instead of using an LLM like an answer machine, what if I used it like a decision stress test?&lt;/p&gt;</description></item><item><title>The Real Bottleneck in AI Isn't the Model — It's Our Communication</title><link>http://ericirwin.io/posts/the-real-bottleneck-in-ai-isnt-the-model/</link><pubDate>Thu, 27 Nov 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-real-bottleneck-in-ai-isnt-the-model/</guid><description>&lt;p&gt;We&amp;rsquo;re living through one of the fastest paradigm shifts in modern computing. Tools are getting better, models are getting smarter, and the ecosystem is moving from prompt engineering toward something far more powerful: context engineering.&lt;/p&gt;
&lt;p&gt;But even as the tooling evolves quickly, it is clear that some people get extraordinary results out of AI tools&amp;hellip; and others get diminishing returns.&lt;/p&gt;
&lt;p&gt;And when you peel the layers back, the reason is surprisingly human. It&amp;rsquo;s almost always about how effectively people communicate with the machine and how unnatural that still feels for most of us.&lt;/p&gt;</description></item><item><title>The AI-First Engineering Pattern: How Persistent Context Turns Claude into a True Teammate</title><link>http://ericirwin.io/posts/the-ai-first-engineering-pattern/</link><pubDate>Sat, 25 Oct 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-ai-first-engineering-pattern/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Building a platform like Swoop means juggling a mobile app, dashboards, APIs, infrastructure, and — more recently — AI agents. Over time, I&amp;rsquo;ve built a workflow using Claude Code that treats all of it as one intelligent workspace: a single place where code, data, operations, and testing are deeply connected.&lt;/p&gt;
&lt;p&gt;The key insight: AI becomes genuinely useful when it shares your mental model of the system.&lt;/p&gt;
&lt;p&gt;When architecture, tooling, and context are explicit, Claude stops acting like autocomplete and starts reasoning like a teammate who understands your entire environment. This isn&amp;rsquo;t about clever prompts — it&amp;rsquo;s about teaching your workspace to think in context.&lt;/p&gt;</description></item><item><title>The Quiet Evolution of AI Tools: From Autocomplete to Workflow Bridges</title><link>http://ericirwin.io/posts/the-quiet-evolution-of-ai-tools/</link><pubDate>Thu, 02 Oct 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-quiet-evolution-of-ai-tools/</guid><description>&lt;p&gt;AI is one of the most polarizing topics in engineering right now. For some, it represents a threat to jobs or a black-box risk; for others, it&amp;rsquo;s the next great productivity revolution. But beneath the noise, there&amp;rsquo;s a more subtle — and in many ways more exciting — shift underway: the evolution of AI tools like Cursor and Claude into customizable workflow partners.&lt;/p&gt;
&lt;p&gt;Instead of being just autocomplete engines, these tools are learning how to meet us exactly where we are. Through hooks, sub-agents, and local context, they&amp;rsquo;re becoming bridges across the many layers and abstractions engineers have to navigate daily.&lt;/p&gt;</description></item><item><title>Generative AI vs. Agents: A Simple Litmus Test</title><link>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</link><pubDate>Tue, 09 Sep 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</guid><description>&lt;p&gt;I&amp;rsquo;m writing this because I&amp;rsquo;ve made the mistake myself. I assumed everything should be an agent. If there was a problem to solve, my first instinct was, &lt;em&gt;let&amp;rsquo;s build an agent for that.&lt;/em&gt; It took running headfirst into the tradeoffs — performance, debugging headaches, and unnecessary complexity — for me to realize that not everything benefits from being &lt;em&gt;agentic.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;And honestly, this isn&amp;rsquo;t a new struggle. We&amp;rsquo;ve seen the same pattern with AI more broadly: reaching for it to solve problems that are often handled better with simpler, more traditional approaches. Now, agents and agentic systems are following the same trajectory — being turned to as the &amp;ldquo;solution&amp;rdquo; even when they&amp;rsquo;re not the right fit. Or at least not the immediate one.&lt;/p&gt;</description></item></channel></rss>