<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Engineering-Leadership on Eric Irwin</title><link>http://ericirwin.io/tags/engineering-leadership/</link><description>Recent content in Engineering-Leadership on Eric Irwin</description><generator>Hugo</generator><language>en-us</language><managingEditor>Eric.Irwin@gmail.com (Eric Irwin)</managingEditor><webMaster>Eric.Irwin@gmail.com (Eric Irwin)</webMaster><lastBuildDate>Sat, 07 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://ericirwin.io/tags/engineering-leadership/index.xml" rel="self" type="application/rss+xml"/><item><title>Borrowing from Team Topologies to Make Sense of Claude Agent Teams</title><link>http://ericirwin.io/posts/borrowing-from-team-topologies-for-claude-agent-teams/</link><pubDate>Sat, 07 Feb 2026 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/borrowing-from-team-topologies-for-claude-agent-teams/</guid><description>&lt;p&gt;There&amp;rsquo;s a habit I can&amp;rsquo;t turn off. Whenever I&amp;rsquo;m working on something — whether it&amp;rsquo;s organizing a project, structuring a team, or just figuring out how to approach a problem — my brain immediately goes to: &lt;em&gt;what&amp;rsquo;s the optimal shape for this?&lt;/em&gt; What information needs to flow, and between whom? Where are the boundaries?&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s the kind of thinking that Matthew Skelton and Manuel Pais gave a name and a framework to with &lt;em&gt;Team Topologies&lt;/em&gt;. If you haven&amp;rsquo;t read it, the short version is that it offers a set of mental models for how engineering teams should be structured — not around org charts, but around the flow of value. How teams communicate, where cognitive load sits, what interaction modes make sense for a given type of work. It fundamentally changed how I think about engineering organizations, and honestly, it&amp;rsquo;s one of those frameworks that keeps paying dividends years after you first encounter it.&lt;/p&gt;</description></item><item><title>Generative AI vs. Agents: A Simple Litmus Test</title><link>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</link><pubDate>Tue, 09 Sep 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</guid><description>&lt;p&gt;I&amp;rsquo;m writing this because I&amp;rsquo;ve made the mistake myself. I assumed everything should be an agent. If there was a problem to solve, my first instinct was, &lt;em&gt;let&amp;rsquo;s build an agent for that.&lt;/em&gt; It took running headfirst into the tradeoffs — performance, debugging headaches, and unnecessary complexity — for me to realize that not everything benefits from being &lt;em&gt;agentic.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;And honestly, this isn&amp;rsquo;t a new struggle. We&amp;rsquo;ve seen the same pattern with AI more broadly: reaching for it to solve problems that are often handled better with simpler, more traditional approaches. Now, agents and agentic systems are following the same trajectory — being turned to as the &amp;ldquo;solution&amp;rdquo; even when they&amp;rsquo;re not the right fit. Or at least not the immediate one.&lt;/p&gt;</description></item></channel></rss>