<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agentic-Ai on Eric Irwin</title><link>http://ericirwin.io/tags/agentic-ai/</link><description>Recent content in Agentic-Ai on Eric Irwin</description><generator>Hugo</generator><language>en-us</language><managingEditor>Eric.Irwin@gmail.com (Eric Irwin)</managingEditor><webMaster>Eric.Irwin@gmail.com (Eric Irwin)</webMaster><lastBuildDate>Tue, 09 Sep 2025 00:00:00 +0000</lastBuildDate><atom:link href="http://ericirwin.io/tags/agentic-ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Generative AI vs. Agents: A Simple Litmus Test</title><link>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</link><pubDate>Tue, 09 Sep 2025 00:00:00 +0000</pubDate><author>Eric.Irwin@gmail.com (Eric Irwin)</author><guid>http://ericirwin.io/posts/generative-ai-vs-agents-a-simple-litmus-test/</guid><description>&lt;p&gt;I&amp;rsquo;m writing this because I&amp;rsquo;ve made the mistake myself. I assumed everything should be an agent. If there was a problem to solve, my first instinct was, &lt;em&gt;let&amp;rsquo;s build an agent for that.&lt;/em&gt; It took running headfirst into the tradeoffs — performance, debugging headaches, and unnecessary complexity — for me to realize that not everything benefits from being &lt;em&gt;agentic.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;And honestly, this isn&amp;rsquo;t a new struggle. We&amp;rsquo;ve seen the same pattern with AI more broadly: reaching for it to solve problems that are often handled better with simpler, more traditional approaches. Now, agents and agentic systems are following the same trajectory — being turned to as the &amp;ldquo;solution&amp;rdquo; even when they&amp;rsquo;re not the right fit, or at least not the immediate one.&lt;/p&gt;</description></item></channel></rss>