<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Engineering on Eric Irwin</title><link>http://ericirwin.io/tags/engineering/</link><description>Recent content in Engineering on Eric Irwin</description><generator>Hugo</generator><language>en-us</language><managingEditor>Eric.Irwin@gmail.com (Eric Irwin)</managingEditor><webMaster>Eric.Irwin@gmail.com (Eric Irwin)</webMaster><lastBuildDate>Sat, 24 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://ericirwin.io/tags/engineering/index.xml" rel="self" type="application/rss+xml"/><item><title>The Case for Making Your AI Tools Argue With Each Other</title><link>http://ericirwin.io/posts/the-case-for-making-your-ai-tools-argue-with-each-other/</link><pubDate>Sat, 24 Jan 2026 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-case-for-making-your-ai-tools-argue-with-each-other/</guid><description>&lt;p&gt;We&amp;rsquo;ve gotten pretty comfortable trusting LLM output. The answers sound authoritative. The code compiles. The reasoning feels right. And then we ship it. But the most dangerous thing an AI can do isn&amp;rsquo;t giving you a wrong answer; it&amp;rsquo;s giving you a wrong answer that sounds right. The problem isn&amp;rsquo;t &amp;ldquo;AI is sometimes wrong.&amp;rdquo; It&amp;rsquo;s that we keep asking one model to be the authority.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s what pushed me into a little experiment: what if, instead of using an LLM as an answer machine, I used it as a decision stress test?&lt;/p&gt;</description></item><item><title>The AI-First Engineering Pattern: How Persistent Context Turns Claude into a True Teammate</title><link>http://ericirwin.io/posts/the-ai-first-engineering-pattern/</link><pubDate>Sat, 25 Oct 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-ai-first-engineering-pattern/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Building a platform like Swoop means juggling a mobile app, dashboards, APIs, infrastructure, and — more recently — AI agents. Over time, I built a workflow using Claude Code that treats all of it as one intelligent workspace: a single place where code, data, operations, and testing are deeply connected.&lt;/p&gt;
&lt;p&gt;The key insight: AI becomes genuinely useful when it shares your mental model of the system.&lt;/p&gt;
&lt;p&gt;When architecture, tooling, and context are explicit, Claude stops acting like autocomplete and starts reasoning like a teammate who understands your entire environment. This isn&amp;rsquo;t about clever prompts — it&amp;rsquo;s about teaching your workspace to think in context.&lt;/p&gt;</description></item><item><title>The Quiet Evolution of AI Tools: From Autocomplete to Workflow Bridges</title><link>http://ericirwin.io/posts/the-quiet-evolution-of-ai-tools/</link><pubDate>Thu, 02 Oct 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/the-quiet-evolution-of-ai-tools/</guid><description>&lt;p&gt;AI is one of the most polarizing topics in engineering right now. For some, it represents a threat to jobs or a black-box risk; for others, it&amp;rsquo;s the next great productivity revolution. But beneath the noise, there&amp;rsquo;s a more subtle — and in many ways more exciting — shift underway: the evolution of AI tools like Cursor and Claude into customizable workflow partners.&lt;/p&gt;
&lt;p&gt;Instead of being just autocomplete engines, these tools are learning how to meet us exactly where we are. Through hooks, sub-agents, and local context, they&amp;rsquo;re becoming bridges across the many layers and abstractions engineers have to navigate daily.&lt;/p&gt;</description></item></channel></rss>