<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Adaline Labs]]></title><description><![CDATA[The newsletter that swaps stale buzzwords for actionable insights. Our research-backed articles, expert commentary, and bold experiments with LLMs serve one purpose: to spark inventive thinking. By Adaline(.ai).]]></description><link>https://labs.adaline.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!Wt35!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5199b386-b9f1-4343-88fd-ed804d414ec9_1001x1001.png</url><title>Adaline Labs</title><link>https://labs.adaline.ai</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 11:30:37 GMT</lastBuildDate><atom:link href="https://labs.adaline.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Adaline]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[adaline@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[adaline@substack.com]]></itunes:email><itunes:name><![CDATA[Adaline]]></itunes:name></itunes:owner><itunes:author><![CDATA[Adaline]]></itunes:author><googleplay:owner><![CDATA[adaline@substack.com]]></googleplay:owner><googleplay:email><![CDATA[adaline@substack.com]]></googleplay:email><googleplay:author><![CDATA[Adaline]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Reliable Tool-Using AI Agents In Production: MCP, State, Retries, Timeouts, and Recovery]]></title><description><![CDATA[Learn how to build reliable tool-using AI agents in production with MCP, stateful tools, retries, timeouts, recovery patterns, approvals, and 
observability.]]></description><link>https://labs.adaline.ai/p/reliable-tool-using-ai-agents-production</link><guid isPermaLink="false">https://labs.adaline.ai/p/reliable-tool-using-ai-agents-production</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 25 Apr 2026 00:01:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/439fbe77-122b-4c11-afc4-23a74d4e8cdf_1456x816.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> Getting an agent to call a tool is the easy part. The hard part is what happens when that tool hangs, partially succeeds, or mutates external state in a way the model cannot recover from on its own. This article covers five runtime mechanisms that determine whether a tool-using agent survives production. You will learn how to classify tool risk by state type, how to retry safely using idempotency keys, how to set timeouts per tool rather than per system, and where to place approval gates before irreversible writes. You will also learn how to design recovery into the workflow before the first failure occurs. If you are building or evaluating an agentic system, the reliability gap is not in the model. 
It is in the runtime layer around it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!22yz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!22yz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!22yz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!22yz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!22yz!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06164843-a53b-42b1-876e-dda15018a090_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:337343,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/195376577?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!22yz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!22yz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!22yz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!22yz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06164843-a53b-42b1-876e-dda15018a090_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Tool Calling Is Not the Hard Part</h2><p>The hard part is not getting an agent to call a tool. Every agent that reaches a demo can do that. The hard part is what happens next, i.e., when a tool hangs, returns partial results, mutates state, or leaves the workflow in a condition the model cannot resolve on its own.</p><p><a href="https://labs.adaline.ai/p/building-better-product-with-tool-calling">Tool calling</a> is what moves agents from answering questions to taking actions. <a href="https://labs.adaline.ai/p/the-mcp-product-playbook">MCP</a> sets the standard for how those tools are exposed and invoked. 
But neither addresses what production demands: a runtime that survives tools that fail partway, time out, or create side effects that a retry makes worse.</p><p><a href="https://developers.openai.com/api/docs/guides/agents/sandboxes">OpenAI&#8217;s sandbox documentation</a> separates orchestration from execution because the two layers have different problems. <a href="https://www.anthropic.com/engineering/managed-agents">Anthropic&#8217;s managed-agents essay</a> frames the same split between the &#8220;brain&#8221; and the &#8220;hands.&#8221; Both point at the same fact: the model gets you to the first successful tool call; the runtime decides whether the workflow survives everything after it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Prl_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Prl_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 424w, https://substackcdn.com/image/fetch/$s_!Prl_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 848w, https://substackcdn.com/image/fetch/$s_!Prl_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!Prl_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Prl_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp" width="1080" height="1080" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1080,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Prl_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 424w, https://substackcdn.com/image/fetch/$s_!Prl_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 848w, https://substackcdn.com/image/fetch/$s_!Prl_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!Prl_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f6b089c-a0ca-40c5-b591-b75ee158691c_1080x1080.webp 1456w" sizes="100vw"></picture></div></a><figcaption class="image-caption"><em>Anthropic's Managed Agents architecture: the Harness (Claude) is decoupled from the Session, Sandbox, and Tools. Each component can fail or be replaced independently. 
| Source: <a href="https://www.anthropic.com/engineering/managed-agents">Anthropic Engineering</a></em></figcaption></figure></div><p>This article covers five things that determine reliability for <a href="https://labs.adaline.ai/p/what-are-agentic-llms-a-comprehensive">agentic LLMs</a> in production: state type, retries, timeouts, approvals, and recovery. None are model problems. All are runtime problems.</p><h2>What Changes When an Agent Uses Tools in Production</h2><p>A one-shot tool call is simple by design. The agent queries an API, gets a result, and generates a response. Failure resets to zero without damage.</p><p>Production workflows are built differently. Once an agent calls tools across a multi-step sequence, it touches mutable systems. 
For instance,</p><ul><li><p>A call at step three changes the state that step four reads.</p></li><li><p>A timeout at step five leaves the system in a condition that the model cannot sort out on its own.</p></li><li><p>A partial failure at step seven may have already sent the email, updated the record, or triggered an external job that cannot be canceled.</p></li></ul><p><a href="https://developers.openai.com/api/docs/guides/agents/sandboxes">OpenAI&#8217;s sandbox guide</a> treats execution as a stateful workspace with persistence and tool artifacts.<br><a href="https://www.anthropic.com/engineering/managed-agents">Anthropic&#8217;s managed-agents writeup</a> makes the same point: longer-lived work needs structured execution surfaces, not raw chat continuity.</p><p>What breaks in <a href="https://labs.adaline.ai/p/building-production-ready-agentic">production-ready agentic systems</a> are the boundaries around the tools:</p><ul><li><p>A write that fails halfway,</p></li><li><p>A <a href="https://labs.adaline.ai/p/why-ai-products-break-in-production-context-engineering">break in context</a> that corrupts a later step,</p></li><li><p><a href="https://labs.adaline.ai/p/designing-ai-features-for-nondeterminism">Nondeterministic failures</a> that pile up across a workflow built only for the happy path.</p></li></ul><p>Runtime design handles all of these. 
Model fluency does not.</p><h2>MCP Sets the Interface; the Runtime Owns the Rest</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CKM0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CKM0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 424w, https://substackcdn.com/image/fetch/$s_!CKM0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 848w, https://substackcdn.com/image/fetch/$s_!CKM0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 1272w, https://substackcdn.com/image/fetch/$s_!CKM0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CKM0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png" width="1456" height="569" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:569,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;MCP as a standardized protocol connecting AI applications &#8212; including chat interfaces, IDEs, and other AI apps &#8212; to data sources and tools including file systems, development tools, and productivity tools, via bidirectional data flow&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="MCP as a standardized protocol connecting AI applications &#8212; including chat interfaces, IDEs, and other AI apps &#8212; to data sources and tools including file systems, development tools, and productivity tools, via bidirectional data flow" title="MCP as a standardized protocol connecting AI applications &#8212; including chat interfaces, IDEs, and other AI apps &#8212; to data sources and tools including file systems, development tools, and productivity tools, via bidirectional data flow" srcset="https://substackcdn.com/image/fetch/$s_!CKM0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 424w, https://substackcdn.com/image/fetch/$s_!CKM0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 848w, 
https://substackcdn.com/image/fetch/$s_!CKM0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 1272w, https://substackcdn.com/image/fetch/$s_!CKM0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F063a2e19-08e2-46a3-9c05-e195947dbcfb_3840x1500.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>MCP standardizes how AI applications connect to tools and data sources. 
It governs the interface &#8212; not what happens inside the execution once a tool is called. | Source: <a href="https://modelcontextprotocol.io/introduction">modelcontextprotocol.io</a></em></figcaption></figure></div><p>The <a href="https://labs.adaline.ai/p/the-mcp-product-playbook">MCP Product Playbook</a> describes MCP as a standard interface between models and tool providers. That is exactly what the <a href="https://modelcontextprotocol.io/specification/2025-11-25">MCP specification</a> does:</p><ul><li><p>It defines how tools are exposed, described, and invoked.</p></li><li><p>It handles discovery, schema, and transport.</p></li><li><p>It does not handle what happens when a tool times out, when a write is retried in an unsafe way, or when the model must decide if a failed call means the action ran.</p></li></ul><p>Standard access is the first step, not a guarantee of safe execution. The runtime still owns permissions, retry logic, timeout rules, approval gates, artifact storage, and recovery paths.</p><p>The <a href="https://labs.adaline.ai/p/writing-effective-tool-calling-functions">tool-calling functions</a> layer defines how tools are described to the model. The <a href="https://labs.adaline.ai/p/multi-agent-systems-product-control-plane">product control plane</a> governs how they run and how state is tracked across steps. <a href="https://labs.adaline.ai/p/prompt-management-for-product-leaders">Prompt management</a> controls what the model sees; the runtime controls what it does.</p><p>Both <a href="https://developers.openai.com/api/docs/guides/agents/sandboxes">OpenAI</a> and <a href="https://www.anthropic.com/engineering/managed-agents">Anthropic</a> treat standard access and safe execution as separate layers. Conflating them is how production reliability becomes an afterthought.</p><h2>Stateful vs. Stateless Tools</h2><p>Not every tool carries the same risk. 
The line that matters most in production is not what a tool can do &#8212; it is what a tool changes.</p><p><strong>Stateless tools</strong> read or compute without touching anything outside the agent&#8217;s context. A web search, a CRM record lookup, a file read, or a database query all fit here. If they fail, retry them freely. The cost is latency, nothing more.</p><p><strong>Stateful tools</strong> write to the world outside the agent. Sending an email, updating a CRM record, merging a pull request, creating an invoice, and publishing content all change&nbsp;<a href="https://labs.adaline.ai/p/writing-effective-tool-calling-functions">the external state</a>&nbsp;in a way that reads never do. Once execution begins, a failure does not undo what has already run. The email may already be sent. The invoice may already exist.</p><p>This is the line the <a href="https://labs.adaline.ai/p/building-better-product-with-tool-calling">tool orchestration</a> layer must hold. Different tools require different handling, such as retry rules, idempotency requirements, and fallback paths. <a href="https://labs.adaline.ai/p/sub-agents-for-product-managers">Sub-agents</a> that each own a distinct tool set make this boundary clear, rather than running all actions through one loop with no risk distinction.</p><p>The problem is the gap between tools you can retry freely and tools you cannot.</p><h2>Retries and Timeouts Are Workflow Decisions, Not Infra Defaults</h2><p>Retries look like infrastructure. In practice, they are workflow decisions with consequences that users see.</p><p>For stateless tools, retry logic is simple: if the call fails, try again with backoff and jitter. <a href="https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/">AWS&#8217;s Builders&#8217; Library guidance</a> on timeouts and retries applies directly. 
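</p><p>As a minimal sketch of that stateless retry loop: the function below is illustrative rather than any library&#8217;s API, and the attempt and delay limits are placeholder values to tune per tool.</p>

```python
import random
import time

def retry_stateless(call, max_attempts=4, base_delay=0.5, cap=8.0):
    """Retry a read-only tool call with exponential backoff and full jitter.

    Safe only because the call is stateless: re-running it cannot
    duplicate a write. `call` is any zero-argument callable.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # retry budget spent; escalate to the runtime
            # Full jitter: sleep a random amount up to the capped backoff.
            backoff = min(cap, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, backoff))
```

<p>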
For stateful tools, the question is harder.</p><p>Was the action done before the failure, or not?</p><p>A network timeout after a write does not tell you whether the write went through. Retrying without a guard could run the same action twice.</p><p><a href="https://docs.stripe.com/api/idempotent_requests">Stripe&#8217;s idempotency model</a> handles this with idempotency keys: a unique ID on each request, so that retrying returns the same result instead of creating a duplicate.</p><p><a href="https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/">AWS&#8217;s guidance on making retries safe</a> applies the same idea to distributed APIs. The pattern transfers directly: attach a unique operation ID to each stateful call, and let the downstream system deduplicate on that key.</p><p>Idempotency handles the retry problem. But retries only trigger when the system knows a call failed. Timeouts introduce a harder case: the call ended, but you do not know whether it succeeded. One timeout setting across all tools is not a policy; it is a default that creates <a href="https://labs.adaline.ai/p/designing-ai-features-for-nondeterminism">failure modes</a> the agent was not built to handle. The right cutoff depends entirely on what normal looks like for that tool:</p><ul><li><p>A fast-read API should cut off after two seconds.</p></li><li><p>A code sandbox may need twenty.</p></li><li><p>A document pipeline may need two minutes.</p></li></ul><p>Each tool needs its own timeout, matched to its own normal runtime.</p><p>Four rules apply across retries and timeouts:</p><ol><li><p>Retry reads freely; use idempotency keys for all stateful writes. That is, attach a unique operation ID so the downstream system can deduplicate rather than run it twice.</p></li><li><p>Track four outcomes: success, explicit failure, timeout, and unknown. 
Treat unknown as requiring review, not the same as failure.</p></li><li><p>Decide before launch which failures auto-retry, which escalate, and which stop the run.</p></li><li><p>Surface retry counts in your traces, because a tool that always works on the third attempt is a sign that <a href="https://labs.adaline.ai/p/why-ai-products-break-in-production-context-engineering">AI products are breaking in production</a> before users notice.</p></li></ol><p><a href="https://www.adaline.ai/docs/deploy/overview">Adaline&#8217;s Deploy overview</a> and <a href="https://www.adaline.ai/docs/deploy/integrate-your-ci-cd">CI/CD integration</a> connect here: pipelines that test agent behavior across environments need to know which tools are retry-prone before those patterns hit real traffic.</p><h2>Recovery Requires Checkpoints, Artifacts, and a Clear Next Step</h2><p>Retry logic prevents some failures from worsening. It does not cover the case where the workflow must stop, save its state, and either resume or hand off.</p><p><a href="https://developers.openai.com/api/docs/guides/agents/sandboxes">OpenAI&#8217;s sandbox model</a> treats stateful workspaces as a core design element: the runtime holds files, outputs, and mid-step results so a failed run does not restart from scratch. <a href="https://www.anthropic.com/engineering/managed-agents">Anthropic&#8217;s managed-agents essay</a> makes the same point: execution surfaces must support checkpoint-and-resume rather than using raw chat context to rebuild what happened.</p><p><a href="https://labs.adaline.ai/p/multi-agent-systems-product-control-plane">Recovery</a> is not an error handler. It is a design decision made before the first run. The right checkpoint placement depends on which steps are costly to re-run and which are hard to undo. 
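</p><p>A file-backed sketch of checkpoint-and-resume; the class name, storage choice, and step IDs are illustrative assumptions, not any specific runtime&#8217;s API.</p>

```python
import json
from pathlib import Path

class CheckpointStore:
    """Persist step results so a restarted run skips completed work."""

    def __init__(self, path):
        self.path = Path(path)
        # Load prior progress if a previous run already checkpointed steps.
        self.state = json.loads(self.path.read_text()) if self.path.exists() else {}

    def run_step(self, step_id, fn):
        if step_id in self.state:       # completed in a prior run:
            return self.state[step_id]  # reuse the saved result, never re-run the write
        result = fn()                   # execute the step (possibly a stateful write)
        self.state[step_id] = result    # checkpoint before moving to the next step
        self.path.write_text(json.dumps(self.state))
        return result
```

<p>If the process dies after step three, a fresh <code>CheckpointStore</code> over the same file replays steps one through three from saved results and resumes at step four, so completed writes are not repeated.</p><p>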
<a href="https://labs.adaline.ai/p/openclaw-architecture-not-magic">Persistent state</a> across steps lets the system pick up at the right point without redoing completed writes.</p><p>The choice between re-plan and hand-off matters. <a href="https://labs.adaline.ai/p/claude-code-vs-openai-codex">Review loops in coding agents</a> show this clearly: some failures mean the plan needs to change; others mean the run should stop and surface its state to a human. Knowing which applies before the run starts is what keeps a failure recoverable. <a href="https://www.adaline.ai/docs/deploy/deploy-your-prompt">Deploying your prompt</a> ties this to runtime snapshots, diffs, and rollback history.</p><h2>Approvals Belong at High-Risk State Transitions</h2><p>Not every tool call needs a human in the loop. But some should never run without one.</p><p><a href="https://adk.dev/workflows/human-input/">Google ADK&#8217;s human-input documentation</a> treats human input as a workflow step for decision checks and permissions, not a safety net added after the fact. Approval gates are workflow boundaries, not general AI safety measures.</p><p>The tools that need approval share one trait: they create state changes that are hard to undo. Sending a customer email, merging a pull request, publishing content, creating an invoice, or deleting a record all belong here. <a href="https://labs.adaline.ai/p/multi-agent-systems-product-control-plane">Permissions and handoffs</a> between agents, or between an agent and a human, are first-class concerns.</p><p><a href="https://labs.adaline.ai/p/sub-agents-for-product-managers">Sub-agents</a> that handle delegated tasks need approval rules set before the task starts, not at runtime. 
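</p><p>One way to sketch such a gate; the tool names, policy table, and return shape below are illustrative assumptions rather than a specific framework&#8217;s API.</p>

```python
# Tools whose writes are hard to undo; the rule set is fixed before the run starts.
APPROVAL_REQUIRED = {"send_email", "merge_pr", "publish_content",
                     "create_invoice", "delete_record"}

def execute_with_gate(tool_name, args, run_tool, approved=False):
    """Pause hard-to-undo writes until a human approves this specific call.

    `run_tool(tool_name, args)` is the real executor; `approved` is flipped
    by the runtime once a reviewer signs off on the surfaced pending action.
    """
    if tool_name in APPROVAL_REQUIRED and not approved:
        # Do not execute; surface the pending action for human review.
        return {"status": "pending_approval", "tool": tool_name, "args": args}
    return {"status": "executed", "result": run_tool(tool_name, args)}
```

<p>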
<a href="https://labs.adaline.ai/p/ai-prd-missing-sections">Behavioral constraints in AI PRDs</a> make the same point: failure limits and approval rules must be in the spec before a feature ships, not left as undefined behavior.</p><h2>Observability Makes Reliability Measurable</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ngRe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ngRe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ngRe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png" width="1320" height="1542" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1542,&quot;width&quot;:1320,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:302868,&quot;alt&quot;:&quot;Adaline execution trace showing a multi-step AI agent run with nested spans including rag_phase, pinecone_query, create_embeddings, query_routing, agent_lifecycle, tool_execution_phase, tool_call_weather_checker, tool_call_nutrition_planner, and final_response &#8212; each span annotated with timing and cost for full runtime visibility&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/180593889?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Adaline execution trace showing a multi-step AI agent run with nested spans including rag_phase, pinecone_query, create_embeddings, query_routing, agent_lifecycle, tool_execution_phase, tool_call_weather_checker, tool_call_nutrition_planner, and final_response &#8212; each span annotated with timing and cost for full runtime visibility" title="Adaline execution trace showing a multi-step AI agent run with nested spans including rag_phase, pinecone_query, create_embeddings, query_routing, agent_lifecycle, tool_execution_phase, tool_call_weather_checker, tool_call_nutrition_planner, and final_response &#8212; each span annotated with timing and cost for full runtime visibility" 
srcset="https://substackcdn.com/image/fetch/$s_!ngRe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!ngRe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2af9df-fe4e-4693-859f-b7b00fb4985b_1320x1542.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://go.adaline.ai/dRpz6AY">Adaline&#8217;s</a> trace view showing a complete agent execution: every span from RAG retrieval to tool calls to final response, with per-step timing and a total cost of $0.0017. This is what runtime visibility looks like in practice.</figcaption></figure></div><p>Retries, timeouts, checkpoints, and approval gates are mechanisms. Without visibility into what actually ran, in what order, with what inputs and outputs, those mechanisms operate on guesswork.</p><p><a href="https://labs.adaline.ai/p/observability-vs-monitoring-for-agentic-ai">Observability vs monitoring</a> for agentic systems is not the same problem as watching a stateless API. A stateless API either responded or it did not. A tool-using agent has a multi-step trace in which any step can fail, retry, time out, partially succeed, or pause for approval. The final output tells you almost nothing about what happened in the middle.</p><p>What needs to be visible: every tool call, its inputs and outputs, retry counts, timeout events, approval triggers, state changes, and the recovery path taken. That trace is not debugging overhead. It is the layer that turns retry rules and timeout settings into something you can measure and improve.</p><p><a href="https://www.adaline.ai/blog/complete-guide-llm-observability-monitoring-2026">LLM observability</a> at the production level includes distributed tracing, per-request visibility, and anomaly detection. <a href="https://www.adaline.ai/blog/complete-guide-llm-ai-agent-evaluation-2026">AI agent evaluation</a> connects pre-launch testing to production monitoring.
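</p><p>A minimal illustration of that trace layer (the record fields and span names here are assumptions for the sketch, not a particular vendor&#8217;s schema): wrap every tool call in a span that records inputs, outputs, status, and timing, so retries and timeouts become measurable events rather than guesswork:</p>

```python
import json
import time
import uuid
from contextlib import contextmanager

# All spans for one run accumulate here; a real system would ship them
# to a trace backend instead of a module-level list.
TRACE: list = []

@contextmanager
def span(name, **attrs):
    record = {"span_id": uuid.uuid4().hex[:8], "name": name, **attrs}
    start = time.perf_counter()
    try:
        yield record          # caller attaches outputs to the record
        record["status"] = "ok"
    except Exception as exc:  # failures become part of the trace, not lost
        record["status"] = f"error: {exc}"
        raise
    finally:
        record["duration_ms"] = round((time.perf_counter() - start) * 1000, 2)
        TRACE.append(record)

# One tool call becomes one span with inputs, output, and retry count.
with span("tool_call_weather_checker", inputs={"city": "Pune"}, retry_count=0) as s:
    s["output"] = {"temp_c": 31}  # stand-in for the real tool response

print(json.dumps(TRACE, indent=2))
```

<p>Because the span is recorded in the <code>finally</code> block, timeouts and exceptions still leave a trace entry behind.</p><p>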
Essentially, behaviors you test before release need to be tracked after it, because real traffic finds edge cases no test suite fully covers.</p><h2>Reliable Tool-Using Agents Are Built at the Runtime Layer</h2><p>Every agent that reaches a demo can call the tools. What separates a solid system from a fragile one is what happens after that first call. Can the runtime classify tool risk, retry safely, hold per-tool timeouts, preserve state through failure, gate irreversible writes, and keep the full trace visible?</p><p><a href="https://www.adaline.ai/blog/complete-guide-prompt-engineering-operations-promptops-2026">PromptOps</a>, <a href="https://www.adaline.ai/iterate">Iterate</a>, <a href="https://www.adaline.ai/deploy">Deploy</a>, and the full <a href="https://www.adaline.ai/">Adaline</a> platform connect to exactly this: reliability is not a feature you add once the agent works. <strong>It is the layer you design first and build the agent on top of.</strong></p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[How To Evaluate Coding Agents In Production: Metrics, Failure Modes, And Review Loops]]></title><description><![CDATA[How to evaluate coding agents in production: four metrics that matter, five failure modes to design against, and a review loop that compounds.]]></description><link>https://labs.adaline.ai/p/evaluate-coding-agents-production</link><guid isPermaLink="false">https://labs.adaline.ai/p/evaluate-coding-agents-production</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 18 Apr 2026 00:01:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f1f76ae3-75bd-4b7d-8ac4-be1b2c4b3b27_1272x713.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> Benchmark scores don&#8217;t reflect production reliability. To evaluate coding agents in real engineering environments, teams need four specific metrics: <strong>task completion rate</strong>, <strong>regression introduction rate</strong>, <strong>review loop count</strong>, and <strong>blast radius on failure</strong>. They also need a failure mode taxonomy to design tests around, a structured three-stage review loop, and a lightweight eval dataset built from real production tasks. The teams that build this early move faster later.
They can swap models or change prompts with confidence.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5wqU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5wqU!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/194520501?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5wqU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!5wqU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe050fc66-b2b1-43e4-89a0-29ade70ee4c4_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Every coding agent demo looks impressive. The agent takes a feature request, navigates the codebase, writes a working diff, and the tests pass. If you're still choosing between agents, see our <a href="https://labs.adaline.ai/p/claude-code-vs-openai-codex">Claude Code vs OpenAI Codex comparison</a> before building your eval framework around a specific tool.</p><p>What you don&#8217;t see is what happens weeks later. The same agent takes a production task and quietly introduces a regression in a module it was never asked to touch.</p><p>Teams evaluating coding agents in production are discovering something important. 
Demo performance and production reliability measure different things entirely.</p><ul><li><p>Benchmark suites capture capability under controlled conditions.</p></li><li><p>Production work happens in messy, evolving codebases: half-documented APIs, test suites that don&#8217;t cover everything, and context no benchmark has ever encountered.</p></li></ul><p>This blog covers the following:</p><ol><li><p>The four metrics that actually matter.</p></li><li><p>The five failure modes worth designing tests around.</p></li><li><p>How to build a review loop that improves over time.</p></li><li><p>How to construct an eval dataset from real work.</p></li></ol><div class="callout-block" data-callout="true"><p>Learn more about LLM and agent evaluation <a href="https://labs.adaline.ai/blog/complete-guide-llm-ai-agent-evaluation-2026">here</a>.</p></div><h2>Why Benchmark Scores Don&#8217;t Transfer to Production</h2><p><a href="https://www.swebench.com/">SWE-bench</a> is the most commonly cited benchmark for <a href="https://labs.adaline.ai/p/what-are-agentic-llms-a-comprehensive">coding agents</a>. It measures whether an agent can resolve real GitHub issues on open-source repositories. That&#8217;s a genuinely useful signal for comparing models. But it&#8217;s not what production looks like.</p><p>A March 2026 study by <a href="https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs-would-not-be-merged-into-main/">METR</a> found that roughly half of test-passing SWE-bench PRs would not be merged by actual repo maintainers.
The automated grader scores are, on average, 24.2 percentage points higher than what maintainers actually accept.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2g93!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2g93!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 424w, https://substackcdn.com/image/fetch/$s_!2g93!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 848w, https://substackcdn.com/image/fetch/$s_!2g93!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 1272w, https://substackcdn.com/image/fetch/$s_!2g93!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2g93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png" width="1456" height="874" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:874,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Normalized pass rates chart&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Normalized pass rates chart" title="Normalized pass rates chart" srcset="https://substackcdn.com/image/fetch/$s_!2g93!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 424w, https://substackcdn.com/image/fetch/$s_!2g93!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 848w, https://substackcdn.com/image/fetch/$s_!2g93!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 1272w, https://substackcdn.com/image/fetch/$s_!2g93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7fbe985-6671-4305-af0c-8df50e4851d7_3000x1800.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" 
stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Both automated grader scores (orange) and maintainer merge rates (blue) improve as models improve &#8212; but the gap between them stays wide. The average difference across all models is 24.2 percentage points. 
| <strong>Source</strong>: <a href="https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs-would-not-be-merged-into-main/">METR</a>, March 2026.</em></figcaption></figure></div><blockquote><p>That gap is the benchmark-to-production problem made concrete.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3gr4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3gr4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 424w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 848w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 1272w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3gr4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp" width="1456" height="900" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3gr4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 424w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 848w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 1272w, https://substackcdn.com/image/fetch/$s_!3gr4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d5fe2fc-418c-4c07-be68-65e939b91df8_3840x2374.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Single-turn evals grade a response. Agent evals have to verify an outcome. The grading logic is fundamentally different. | <strong>Source</strong>: <a href="https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents">Demystifying evals for AI agents</a>, Anthropic Engineering, January 2026.</em></figcaption></figure></div><p>SWE-bench tasks come with a complete repository context, a clear problem statement, and a test suite that validates the fix. Production tasks arrive with ambiguous requirements, partially documented dependencies, and internal libraries with no public docs.</p><p>Scale AI&#8217;s <a href="https://scale.com/research/swe_bench_pro">SWE-bench Pro</a> shows how sharp this issue is. Top frontier models that score 80%+ on Verified fall below 25% on Pro tasks. Those tasks require multi-file reasoning across unfamiliar repositories. 
That&#8217;s closer to what production actually demands.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RNW7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RNW7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 424w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 848w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 1272w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RNW7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png" width="1456" height="653" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:653,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:455264,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/194520501?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RNW7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 424w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 848w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 1272w, https://substackcdn.com/image/fetch/$s_!RNW7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25bd9e2d-8cf1-4055-bab6-1b219ccc38fb_2104x944.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>SWE-bench Pro uses contamination-resilient curation from commercial repos. Resolve rates drop significantly on commercial codebases compared to public ones &#8212; GPT-5 falls from 23.3% to 14.9%, Opus 4.1 from 22.7% to 17.8%. | <strong>Source</strong>: <a href="https://scale.com/research/swe_bench_pro">Scale AI SWE-bench Pro</a></em></figcaption></figure></div><p>There&#8217;s a second structural problem. <strong>Benchmark evaluators measure outputs, not processes</strong>.</p><p>A coding agent that reaches the right answer by making up intermediate steps isn&#8217;t a reliable tool. It&#8217;s a fragile one. The benchmark score doesn&#8217;t capture how it got there. 
It doesn&#8217;t capture what it ignored, or whether the same reasoning chain holds on a problem that&#8217;s 10% different.</p><p>This effect is made worse by <a href="https://labs.adaline.ai/p/what-is-test-time-scaling">test-time scaling</a> in frontier models. Longer reasoning chains improve accuracy on isolated tasks. But they don&#8217;t fix what actually matters in production: the agent still has no memory of your codebase, no awareness of your team&#8217;s conventions, and no model of which parts of your system are load-bearing.</p><p>Benchmarks aren&#8217;t useless. They help you eliminate obviously weak models. But once you&#8217;ve made an initial selection, the evaluation that actually matters happens in your codebase, on your tasks, with your review process in the loop.</p><h2>The Four Metrics That Actually Matter</h2><p>Production eval for coding agents requires tracking four numbers. Two measure output quality, one measures process efficiency, and the fourth measures downside risk.</p><ol><li><p><strong>Task completion rate</strong> is the percentage of tasks the agent completes correctly. The definition matters: a completion means a diff that passes your test suite, builds cleanly, and requires no correction before merge. <strong>An agent that produces a partially working diff that a human has to edit is not a completion</strong>. Teams that use a loose definition tend to overestimate their agent&#8217;s reliability by 20&#8211;30 percentage points.</p></li><li><p><strong>Regression introduction rate</strong> is the percentage of completed tasks where the agent modifies code outside the specified scope and introduces a bug. This is the number most teams miss in their initial evals. An agent that completes 80% of tasks but introduces regressions in 15% of those completions is a net negative.
The debugging time erases the output gain.</p></li><li><p><strong>Review loop count</strong> is the average number of human correction cycles before a task output is merge-ready. A healthy baseline for a well-scoped task is one cycle. If your agent requires two or more, the issue is almost always <strong>prompt quality</strong> or <strong>context framing</strong>. That number tells you exactly where to iterate.</p><p><br><a href="https://www.faros.ai/blog/ai-software-engineering">Faros AI&#8217;s analysis</a> of 10,000 developers found that high AI adoption teams merged 98% more PRs but saw review time increase by 91%. There was no measurable gain in organizational delivery. The output gain was absorbed entirely by review overhead.<br></p><p>Collecting this metric requires <a href="https://labs.adaline.ai/p/ai-observability-and-evaluations">agent observability</a> tooling. Log each review cycle as a discrete event, not just the final accepted output.</p></li><li><p><strong>Blast radius on failure</strong> measures how much of the codebase is touched when an agent task goes wrong. For instance, a contained failure modifies two files. But a poorly scoped task can cascade across <strong>eight modules</strong>. That happens when the agent infers imports instead of confirming them. Tracking blast radius gives you data to design better scoping policies before you scale, not after the first multi-module incident.</p></li></ol><p>Collecting these metrics requires logging from day one. Every agent task should generate a structured log: task description, files touched, test results before and after, review cycle count, and final merge decision.</p><p>The early data sets your baseline. Don&#8217;t wait until you&#8217;re scaling to add it.</p><h2>The Five Failure Modes to Design Tests Around</h2><p>Building an eval dataset without a failure taxonomy is like writing tests without knowing what could break.
These five failure modes cover most of what goes wrong with coding agents in real engineering environments.</p><ol><li><p><strong>Context blindness</strong> occurs when the agent operates on a wrong or incomplete model of the codebase. It writes code referencing APIs or variable names that don&#8217;t exist in the current project version. This happens because the context window holds only the files you provided. The dependency it needs is two or three levels away.<br></p><p><a href="https://labs.adaline.ai/p/context-rot-why-llms-are-getting">Context rot</a> makes this significantly worse. As context grows, instruction quality degrades. Multi-step tasks are especially vulnerable.<br></p></li><li><p><strong>Instruction drift</strong> is the multi-step version of context blindness. The agent begins executing a clear task but gradually shifts its reading of the goal. By step seven of a twelve-step refactor, it&#8217;s optimizing for a slightly different target than the one stated at step one.<br></p><p>A January 2026 <a href="https://arxiv.org/pdf/2601.04170v1">paper</a> formalizes this as &#8220;semantic drift.&#8221; The paper documents that unchecked drift reduces task completion accuracy and increases human intervention rates in production systems.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hOGr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hOGr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 424w, 
https://substackcdn.com/image/fetch/$s_!hOGr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 848w, https://substackcdn.com/image/fetch/$s_!hOGr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 1272w, https://substackcdn.com/image/fetch/$s_!hOGr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hOGr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png" width="1456" height="785" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:785,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:220549,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/194520501?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!hOGr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 424w, https://substackcdn.com/image/fetch/$s_!hOGr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 848w, https://substackcdn.com/image/fetch/$s_!hOGr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 1272w, https://substackcdn.com/image/fetch/$s_!hOGr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91b72448-ec7b-4370-a0cf-f057a016131a_2110x1138.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Semantic drift reaches nearly 50% incidence at 600 tokens of context &#8212; far earlier than most teams expect. Coordination and behavioral drift follow the same curve. | <strong>Source</strong>: <a href="https://arxiv.org/abs/2601.04170v1">arXiv:2601.04170</a></em></figcaption></figure></div></li><li><p><strong>Silent regression</strong> is the costliest failure mode. It doesn&#8217;t surface at review time. The agent completes the requested task correctly but makes an incidental change to a shared utility or config file. That change introduces a bug. The bug won&#8217;t appear until another part of the system exercises the changed code in production.<br></p><p><a href="https://daplab.cs.columbia.edu/general/2026/01/08/9-critical-failure-patterns-of-coding-agents.html">Columbia&#8217;s DAPLab</a> studied five coding agents across 15+ applications and found a consistent pattern. Agents &#8220;prioritize runnable code over correctness,&#8221; suppressing errors to make output appear functional rather than flagging the failure.<br></p></li><li><p><strong>Scope creep</strong> occurs when the agent infers that the task requires more changes than were requested. It makes those changes without flagging them. Unlike silent regression, these extra changes are deliberate. The agent decided they were needed. The inference is often wrong. The review process focuses on the requested change but misses the additions that weren&#8217;t requested.<br></p></li><li><p><strong>The hallucinated API surface</strong> is the easiest failure mode to detect.
The agent calls methods, imports packages, or references config keys that don&#8217;t exist. This usually surfaces in CI right away. But it generates an outsized debugging cost. That cost grows when the hallucination is a near-miss: a method name off by one character from a real one.</p></li></ol><div id="youtube2-005JLRt3gXI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;005JLRt3gXI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/005JLRt3gXI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Designing tests around these failure modes means constructing tasks that stress each one specifically.</p><p>Test context blindness with tasks that require files not in the default context. Test instruction drift with multi-step refactors. Test silent regression by running your full test suite after every agent task, not just the tests adjacent to the change.</p><h2>How to Design Your Review Loop</h2><p>The review loop is where evaluation becomes operational. Every coding agent deployment needs a structured process with explicit stages and decision criteria. &#8220;Someone should look at this&#8221; is not a process.</p><p>A three-stage loop works for most engineering teams.</p><p><strong>Stage one is automated.</strong><br>CI runs immediately on every agent-produced diff. It covers the build, unit tests, and integration tests. No human reviews a diff that fails CI.</p><p>This isn&#8217;t novel. <a href="https://google.github.io/eng-practices/">Google&#8217;s engineering practices documentation</a> has established automated gates as a baseline for any serious code review process. But teams skip this stage when moving fast. <a href="https://www.faros.ai/research">Faros AI&#8217;s 2026 data</a> across 22,000 developers found that 31% of PRs are already merging with no review at all. That&#8217;s where silent regressions accumulate at scale.</p><p><strong>Stage two is scoped human review.</strong><br>A reviewer checks three things.</p><p>First: whether the agent&#8217;s changes are contained to the intended scope. Second: whether any out-of-scope files were touched at all. Third: whether the approach the agent took is the one the team would have taken.</p><p>The third question is the one most reviewers skip. They check for correctness rather than coherence.
But approach divergence is how teams build up technical debt. Agent-generated code that works today creates refactoring work six months from now.</p><p><strong>Stage three is feedback capture.</strong> Every correction should be logged and tagged by failure mode. That means reverts, edits, and notes added to the task description.</p><p>This turns the review loop into a compounding asset. The corrections become the signal for prompt improvement, context window design, and task scoping. Teams that do this find their review loop count drops within four to eight weeks.</p><p>For teams where <a href="https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code">production reliability</a> is a first-class concern, this loop plugs into your existing code review setup. You&#8217;re not building a parallel process. You&#8217;re adding structure to one that already exists.</p><h2>How to Build a Lightweight Eval Dataset from Production</h2><p>An eval dataset built from synthetic tasks measures what you designed it to measure. That&#8217;s often not what actually fails in your codebase. The more reliable path is to mine your real task history.</p><ol><li><p>Collect the last 30&#8211;50 coding agent tasks your team has run. Include the final accepted diff and every correction made during review. Include any CI failures that occurred before acceptance. If you don&#8217;t have this logged yet, start logging now and run this exercise in four weeks. Don&#8217;t wait for synthetic examples. Start with whatever real tasks you have, even if it&#8217;s only ten.</p></li><li><p>Tag each task by the failure mode it encountered. Some tasks will be clean completions. Many will have at least one failure. Tasks that hit multiple failure modes in a single run are your most valuable eval cases. They show how failure modes compound in ways that isolated testing won&#8217;t surface.</p></li><li><p>Split the tagged dataset into two sets. 
The first is a dev set for iterating on prompts and context design. The second is a held-out set you run only when making a significant change: a new model, a new system prompt, or a major context window restructure. Running your full eval on every small change produces overfitting. Your prompts start passing tests without improving on genuinely new tasks.</p></li></ol><p>This is the foundation of <a href="https://labs.adaline.ai/p/evaluating-ai-agents-in-2025">evaluating AI agents</a> in a way that transfers to production. A dataset built from real failures, tagged by failure mode, and split correctly gives you the signal to improve with real confidence.</p><h2>Final Thoughts</h2><p>Evaluation is often treated as a one-time setup. Something you do before you deploy and revisit only when something breaks. That framing is exactly backward.</p><p>The eval dataset you build from your first thirty tasks becomes more valuable over time. The fiftieth and hundredth tasks reveal patterns that the early data didn&#8217;t surface. The review loop generates feedback that compounds into better prompt design. The failure mode taxonomy sharpens as your team develops intuition about which failure modes your codebase makes most likely.</p><p>The teams that build this early don&#8217;t just run their current model better. They can swap models, change prompts, and scale with genuine confidence. They have the logging to know, with evidence, whether things got better or worse.</p><p>That confidence is the actual product of evaluation. The metrics and the tests are how you earn it.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[The Missing Product Layer for Multi-Agent Systems]]></title><description><![CDATA[Multi-agent systems fail without permissions, handoffs, visibility, and recovery. How AI PMs and engineers should design a product control plane.]]></description><link>https://labs.adaline.ai/p/multi-agent-systems-product-control-plane</link><guid isPermaLink="false">https://labs.adaline.ai/p/multi-agent-systems-product-control-plane</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 11 Apr 2026 00:01:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/deca22f4-b18b-4863-8ac0-635e86165690_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> Only 1 in 10 agentic AI use cases reached production last year, and the issue is not a model-capability problem; the fix is not a better model. It is the governance layer above the models: who can do what, when to delegate, what humans can see, and how to recover. This article introduces the <strong>Four Control-Plane Primitives</strong> (permissions, handoffs, visibility, and recovery) and walks through what each one means for AI PMs and engineers before a multi-agent workflow ships.
<strong>If your PRD does not define delegation boundaries and escalation conditions, it is not ready for a multi-agent workflow.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0Lb8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0Lb8!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:292511,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/193829387?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0Lb8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!0Lb8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7af3bce-3fea-43a8-8f88-672611bc05cf_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When one agent becomes five, the problem changes. You are no longer just designing outputs. You are designing permissions, handoffs, visibility, and trust. And most teams discover this only after they&#8217;ve shipped.</p><p><strong>Multi-agent systems</strong> are AI architectures in which multiple specialized agents collaborate toward a shared goal. Each agent handles a distinct subtask, calls its own tools, and operates within its own context window, while a coordinating layer routes work between them.</p><p><a href="https://cordum.io/blog/multi-agent-orchestration-control-plane">Gartner named multi-agent systems a top 10 strategic technology trend for 2026</a>.
They predicted that 40% of enterprise applications will include task-specific agents by year&#8217;s end, up from less than 5% in 2025. Yet only one in ten agentic AI use cases reached production in the past year. The gap between prototype and production is not a model-capability issue, but a governability issue.</p><p>The models are not the hard part. The hard part is building what sits above them:</p><ul><li><p>The layer that governs who can do what, and when an agent can delegate.</p></li><li><p>How work transfers between agents, and what humans can see.</p></li><li><p>How the system recovers when something goes wrong.</p></li></ul><p>This article calls that layer the <strong>product control plane</strong>. It proposes a practical framework built around four primitives every multi-agent product must get right, and walks through what that means for AI PMs writing requirements and engineers deciding what to instrument.</p><h2>Why Single-Agent Product Thinking Breaks In Multi-Agent Systems</h2><p>A single AI agent operates with a knowable mental model. It has one context window, one permission surface, one responsibility boundary, and one output for the user to evaluate.</p><p>When that agent behaves unexpectedly, the failure is usually traceable:</p><ul><li><p>You can examine the prompt,</p></li><li><p>Inspect the tool calls, and</p></li><li><p>Identify where the reasoning went wrong.</p></li></ul><p>The product surface area is bounded.</p><p>Multi-agent architecture is categorically different.
</p><p><a href="https://arxiv.org/html/2601.13671v1">A January 2026 survey on orchestration and enterprise adoption</a> described the orchestration layer as &#8220;<em>the control plane of a multi-agent system, transforming autonomous components into a coherent, goal-directed collective.</em>&#8221;</p><p>It warned that without it, &#8220;<em>even highly capable agents risk duplication of effort, logical inconsistency, or unbounded autonomy that diverges from the system&#8217;s objectives</em>&#8221;.</p><p>The unbounded autonomy problem is not theoretical. <a href="https://www.anthropic.com/news/measuring-agent-autonomy">Anthropic&#8217;s analysis</a> of agent behavior on their public API, published in early 2026, found that the 99.9th percentile session length grew from 10 to 40 minutes between October 2025 and January 2026. In the same period, the average number of human interventions per session dropped from 5.4 to 3.3. Both trends point in the same direction: agents are operating more autonomously for longer periods with less human contact. That is valuable. 
It is also the precise condition under which single-agent mental models break down entirely.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TiQs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TiQs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 424w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 848w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 1272w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TiQs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TiQs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 424w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 848w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 1272w, https://substackcdn.com/image/fetch/$s_!TiQs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0dd9459-987a-42c5-947c-7495cf400c7b_3840x2160.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Agents are running significantly longer sessions with each model generation &#8212; a sign of growing autonomy, and a direct argument for stronger governance design. Source: <a href="https://www.anthropic.com/news/measuring-agent-autonomy">Anthropic</a>.</em></figcaption></figure></div><p>When a product team thinks of their system as &#8220;an assistant that uses tools,&#8221; they are designing for a world where one entity has full context and one person is watching. When that same system starts delegating to subagents, the complexity multiplies.</p><p>Think of it this way: each subagent has partial context, different tool access, and its own failure modes.</p><p>Every assumption embedded in the original design becomes a liability. Users cannot see the delegation chain. The PMs have no requirement for what happens when a subagent fails.
The engineers have no instrumentation for handoff-level errors.</p><p>The product seems to work until it stops working for no apparent reason.</p><h2>Delegation Changes The Product Surface Area More Than Most Teams Expect</h2><p>Delegation sounds like a routing problem.</p><p>It is not.</p><p>Delegation is a transfer of authority, context, and responsibility across a trust boundary. And every one of those transfers expands the product surface area in ways that have to be explicitly designed for.</p><p><a href="https://arxiv.org/pdf/2602.11865">A February 2026 research paper on AI delegation mechanics</a> put this clearly: once a multi-agent AI system delegates work to a subagent, the system must account for &#8220;the delegator&#8217;s degree of belief in the delegatee&#8217;s&#8221; reliability. That trust cannot simply be assumed. In practice, it has to be constructed through three decisions that teams routinely skip:</p><ol><li><p><strong>Task packaging:</strong> When a lead agent hands work to a subagent, it must decide what context to transfer.
A subagent that receives too little context will misinterpret its scope. One that receives the wrong context will act on incorrect assumptions. Neither failure surfaces as an obvious error; both surface as outputs that are subtly but consequentially wrong.</p></li><li><p><strong>Authority boundaries:</strong> The subagent needs to know what it is allowed to do independently and when it must escalate. Without explicit boundaries, subagents either become overly cautious, interrupting frequently and defeating the purpose of delegation, or overreach, taking actions the user never authorized.</p></li><li><p><strong>Coordination overhead:</strong> <a href="https://www.anthropic.com/engineering/multi-agent-research-system">Anthropic&#8217;s engineering team</a>, in describing their multi-agent research system, noted that early versions made errors like &#8220;spawning 50 subagents for simple queries&#8221; and &#8220;scouring the web endlessly&#8221;. The orchestrator had no clear rules about when delegation was appropriate and when it was wasteful. The system behaved rationally within its local context and irrationally at the product level.</p></li></ol><p>These three problems are not solvable with better prompts. They are solvable with better product design. That means specifying them before the first subagent is built.</p><h2>The Four Control-Plane Primitives: Permissions, Handoffs, Visibility, Recovery</h2><p>A production-ready multi-agent product needs four things to work together. Each is both a product decision and an engineering problem.</p><h3>Permissions</h3><p><strong>Permissions</strong> define what each agent is allowed to do:</p><ol><li><p>Which tools can it call?</p></li><li><p>Which data can it read or write?</p></li><li><p>Which actions can it initiate without asking for approval?</p></li></ol><p>The failure mode when permissions are weak is not dramatic. It is quiet.
An agent with excessive permissions takes actions that fall within its technical authority but outside the user&#8217;s intent.</p><p>An agent with insufficient permissions interrupts constantly and erodes the value of autonomy. And when permissions are not designed per-agent, the risk compounds.</p><p>When all agents in a chain inherit the same flat permission set, a single compromised or misconfigured subagent can propagate unauthorized actions through the entire chain.</p><p>The research on this is direct. <a href="https://arxiv.org/pdf/2602.11865">A February 2026 paper on delegation mechanics</a> argued that permission design must extend beyond binary access to <strong>semantic constraints</strong>, meaning &#8220;access defined not just by the tool or dataset, but by the specific allowable operations. For example, read-only access to specific rows, or execute-only access to a specific function&#8221;.</p><p>The same paper noted that permissions must be dynamic rather than static: &#8220;access rights are not static endowments but dynamic states that persist only as long as the agent maintains the requisite trust metrics.&#8221;</p><p>For PMs: permissions are a product and compliance decision, not a backend default. The <strong>permission surface</strong> of a multi-agent system determines what the product can do to a user&#8217;s data, systems, and environment without the user&#8217;s consent. That is a business risk decision.</p><p>For engineers: implement least-privilege defaults at the subagent level.
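</p><p>To make the least-privilege idea concrete, here is a minimal sketch of a per-agent tool allowlist enforced at dispatch time. It is illustrative only: the agent names, tool names, and <code>PermissionDenied</code> error are assumptions, not the API of any specific framework.</p>

```python
class PermissionDenied(Exception):
    """Raised when an agent attempts a tool outside its grant."""

# Each subagent gets only the tools its task needs, never the
# orchestrator's full tool set. (All names here are illustrative.)
TOOL_GRANTS = {
    "orchestrator": {"delegate", "summarize"},
    "research_subagent": {"web_search", "read_document"},
    "billing_subagent": {"read_invoice"},  # read-only scope, no write tools
}

def dispatch_tool_call(agent_id, tool_name, call_tool, **kwargs):
    """Check the caller's grant before any tool call is dispatched."""
    allowed = TOOL_GRANTS.get(agent_id, set())  # unknown agents get nothing
    if tool_name not in allowed:
        # A denial is first-class telemetry in production, not silent noise.
        raise PermissionDenied(f"{agent_id} may not call {tool_name}")
    return call_tool(**kwargs)
```

<p>The useful property is the default: an agent absent from the table can call nothing, so forgetting to register a new subagent fails closed rather than open.</p><p>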
Each agent should receive only the tools and data access it needs for its specific task, not the full tool set of its orchestrator.</p><h3>Handoffs</h3><p>A <strong>handoff</strong> is the transfer of execution from one agent to another: from the orchestrator to a subagent, from one specialist to another, or from an agent back to a human.</p><p>Handoffs are the highest-risk moments in any multi-agent workflow because they combine three failure conditions at once:</p><ol><li><p>Context may be incomplete,</p></li><li><p>Authority may be ambiguous, and</p></li><li><p>Neither agent may recognize that the transfer has gone wrong.</p></li></ol><p><a href="https://arxiv.org/html/2603.18096v1">A March 2026 trace-based assurance framework for agentic AI orchestration</a> identified five failure classes in multi-agent systems. Three of them manifest specifically at handoff boundaries: coordination failures such as loops and deadlocks, role drift in long-horizon workflows, and error propagation across agents.</p><p>The paper described handoffs as moments where &#8220;<strong>planner</strong>, <strong>verifier</strong>, and action <strong>roles</strong> may drift, loop, or deadlock across turn boundaries.&#8221;</p><p>The quality of context transferred at a handoff is ultimately a <a href="https://www.adaline.ai/blog/what-is-context-engineering-for-ai-agents">context engineering</a> problem: what information the receiving agent needs, in what format, and at what level of compression. Get it wrong, and the subagent acts on incorrect premises with full confidence.</p><p><a href="https://www.anthropic.com/engineering/claude-code-auto-mode">Anthropic&#8217;s auto mode for Claude Code</a> addresses handoff risk directly, running safety classifiers at both ends of every subagent handoff: when work is delegated out and when results come back. The outbound check catches compromised or unauthorized delegation. 
The return check catches subagents that were benign at delegation but compromised mid-run by the content they retrieved. When the classifier flags repeatedly, the system escalates to human review.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gdMf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gdMf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 424w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 848w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 1272w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gdMf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gdMf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 424w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 848w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 1272w, https://substackcdn.com/image/fetch/$s_!gdMf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6087f5f3-7869-462d-b0bd-292373356895_1920x1920.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Higher task autonomy demands higher security investment. Auto mode achieves strong autonomy with low ongoing maintenance friction, but sandboxing remains the highest-safety option for sensitive environments. Source: <a href="https://www.anthropic.com/engineering/claude-code-auto-mode">Anthropic</a>.</em></figcaption></figure></div><p>For PMs: handoffs are product moments, not just engineering events. They involve responsibility transfer, potential user confusion, and invisible decisions. Specify what the system must communicate to the user when a handoff occurs, and under what conditions a handoff should require explicit approval.</p><p>For engineers: log every handoff with source agent, destination agent, task specification passed, and context transferred. 
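</p><p>A sketch of what that handoff record can look like, with incomplete context flagged as a failure at write time. The field names and required context keys are assumptions for illustration, not a standard schema.</p>

```python
import json
import time
from dataclasses import dataclass, field, asdict

# Context keys the receiving agent needs to act correctly (illustrative).
REQUIRED_CONTEXT_KEYS = {"user_goal", "constraints"}

@dataclass
class HandoffEvent:
    source_agent: str
    destination_agent: str
    task_spec: str
    context: dict
    timestamp: float = field(default_factory=time.time)

def log_handoff(event):
    """Emit a structured handoff record; missing context is a failure, not a warning."""
    missing = REQUIRED_CONTEXT_KEYS - set(event.context)
    record = {
        **asdict(event),
        "status": "failure" if missing else "ok",
        "missing_context": sorted(missing),
    }
    print(json.dumps(record))  # in production, ship this to the trace store
    return record
```

<p>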
Treat a handoff with incomplete context transfer as a failure event, not a warning.</p><h3>Visibility</h3><p><strong>Visibility</strong> is the ability for users, PMs, engineers, and operators to understand what the system is doing and why. In a single-agent product, visibility is a nice-to-have. In a multi-agent system, it is the mechanism by which humans maintain meaningful oversight.</p><p><a href="https://anthropic.com/news/our-framework-for-developing-safe-and-trustworthy-agents">Anthropic&#8217;s framework for trustworthy agents</a> identifies transparency as a structural requirement: &#8220;Humans need visibility into agents&#8217; problem-solving processes. Without transparency, a human asking an agent to &#8216;reduce customer churn&#8217; might be baffled when the agent starts contacting the facilities team&#8221;. That example is not abstract. Without step-level visibility, users cannot assess whether the agent is pursuing the right strategy, and they cannot intervene before an undesirable action completes.</p><p><a href="https://aws.amazon.com/blogs/machine-learning/evaluating-ai-agents-real-world-lessons-from-building-agentic-systems-at-amazon/">AWS describes the production consequence</a> in their analysis of agent evaluation at Amazon: &#8220;Quality issues in production often surface in ways that traditional monitoring misses&#8221;.
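</p><p>One way to close that gap is to record a trace event for every agent step, keyed by the delegating parent, so the execution graph can be rebuilt after the fact. A minimal sketch; the event fields are illustrative, not a standard:</p>

```python
from collections import defaultdict

TRACE = []  # in production this would be an append-only trace store

def record_step(agent_id, parent_id, action, step_input, step_output):
    """Capture what an agent received, what it did, and what came back."""
    TRACE.append({
        "agent_id": agent_id,
        "parent_id": parent_id,  # lets us rebuild the delegation tree later
        "action": action,
        "input": step_input,
        "output": step_output,
    })

def execution_graph():
    """Reconstruct the delegation graph from trace events alone."""
    children = defaultdict(list)
    for step in TRACE:
        if step["parent_id"] is not None:
            children[step["parent_id"]].append(step["agent_id"])
    return dict(children)
```

<p>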
Status codes, response times, and token counts can all show green while the product fails at the reasoning and coordination level.</p><p>Visibility requires traces that capture individual agent steps, tool calls, and handoff events, not just the final output. It also requires activity summaries that translate those traces into language that users can understand. State awareness tells users where they are in a multi-step workflow.</p><p>For PMs: define what the user sees at each stage of a multi-agent task. A task that runs for ten minutes across four subagents with no user-facing updates is not invisible infrastructure. It is a broken product experience.</p><p>For engineers: instrument at the agent step level, not just the request level. <a href="https://www.adaline.ai/blog/complete-guide-llm-observability-monitoring-2026">Agent observability</a> should capture what each agent received, what it called, and what it returned, with enough granularity to reconstruct the full execution trace after the fact.</p><h3>Recovery</h3><p><strong>Recovery</strong> is what the system does when something goes wrong:</p><ul><li><p>When a subagent fails, when a handoff delivers bad context,</p></li><li><p>When an action hits a permission boundary, or</p></li><li><p>When the workflow reaches a state it was not designed to handle.</p></li></ul><p>Most teams design recovery as a single fallback: &#8220;show an error message.&#8221; That is not recovery. It is abandonment.</p><p>A production-grade multi-agent system needs at least three explicit recovery paths: retry with modified parameters, fallback to a simpler workflow, and escalation to human review.</p><p>The escalation condition matters as much as the escalation mechanism. 
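</p><p>Those three recovery paths can be wired as one explicit ladder instead of scattered error handling. The sketch below assumes three caller-supplied functions and a simple parameter tweak on retry; all names are illustrative:</p>

```python
def run_with_recovery(task, run_agent, run_simple_workflow, escalate_to_human,
                      max_retries=2):
    """Recovery ladder: retry with modified parameters, then fall back
    to a simpler workflow, then escalate to human review."""
    params = {"temperature": 0.2}
    for _attempt in range(1 + max_retries):
        try:
            return run_agent(task, **params)
        except Exception:
            params = {"temperature": 0.0}  # retry with modified parameters
    try:
        return run_simple_workflow(task)  # fallback: simpler, more constrained path
    except Exception:
        return escalate_to_human(task)  # escalation is a designed path, not abandonment
```

<p>Each rung should also emit a telemetry event, since a rising fallback rate is often the first sign of a regression.</p><p>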
<a href="https://www.anthropic.com/news/measuring-agent-autonomy">Anthropic&#8217;s data on agent autonomy</a> found that experienced users shift over time &#8220;from approving individual actions to monitoring what the agent does and intervening when needed&#8221;. That is a healthy trust pattern. But it only works if the system surfaces enough signal for humans to know when intervention is warranted.</p><p>For PMs: define the escalation trigger conditions before launch. What agent state, output score, or action type should route to human review? What does the product communicate to the user when escalation happens?</p><p>For engineers: implement circuit breakers for runaway delegation chains. Log every permission denial and <strong>fallback logic</strong> event as first-class telemetry, not as debug noise. Recovery paths that are not monitored cannot be improved.</p><h2>What AI PMs Should Put In The PRD For A Multi-Agent Workflow</h2><p>Most PRD templates were built for single-feature, single-agent products. They do not account for the coordination, authority, and visibility questions that multi-agent systems introduce. 
Before a multi-agent workflow goes to engineering, the PRD should answer each of the following:</p><ul><li><p><strong>Agent role definitions:</strong> What is each agent responsible for, what tools does it have access to, and what is it explicitly prohibited from doing?</p></li><li><p><strong>Permission boundaries:</strong> Which actions require implicit approval, which require explicit user confirmation, and which are always blocked regardless of context?</p></li><li><p><strong>Delegation conditions:</strong> Under what circumstances does the orchestrator delegate to a subagent versus handling the task directly, and what criteria govern that decision?</p></li><li><p><strong>Handoff specifications:</strong> What context must be packaged when work transfers between agents, what does the receiving agent need to know to act correctly, and who is responsible for the outcome once a handoff occurs?</p></li><li><p><strong>User-visible states:</strong> What does the user see at each stage of the workflow, which intermediate states are communicated, and what happens to the UI during a multi-minute agent run?</p></li><li><p><strong>Fallback and escalation flows:</strong> At what point does the system route to human review, who owns the escalation, and what does the product communicate when a fallback triggers?</p></li><li><p><strong>Success definition:</strong> What does &#8220;done&#8221; mean in a multi-step, multi-agent task? What is the acceptance criterion, and at what point is the task complete enough to return control to the user?</p></li></ul><p>That is the product specification layer. The engineering layer that makes it observable and recoverable before launch is equally specific, and equally often skipped.</p><div><hr></div><h2>What AI Engineers Should Instrument, Evaluate, And Audit Before Launch</h2><p>Instrumentation decisions for multi-agent systems differ from single-agent products in scope and consequence. 
Before a multi-agent workflow goes to production, the following should be in place:</p><ul><li><p><strong>Agent-step tracing:</strong> Capture every subagent action as a trace event with parent agent ID, timestamp, and input/output payloads. Traces should reconstruct into a full execution graph.</p></li><li><p><strong>Handoff logging:</strong> Log every handoff with source agent, destination agent, task specification, and context payload. Flag incomplete context transfers as failure events, not warnings.</p></li><li><p><strong>Permission denial telemetry:</strong> Capture every blocked action with agent identity, attempted action, and the policy rule that blocked it. Permission denials are diagnostic signals about where the system design is breaking down, not noise.</p></li><li><p><strong>Trajectory-level evaluation:</strong> Output scoring at the final response level misses failures that happen inside the workflow. <a href="https://www.adaline.ai/blog/complete-guide-llm-ai-agent-evaluation-2026">Evaluation of AI agents</a> should run across the full sequence of agent decisions, not just at the endpoint. <a href="https://aws.amazon.com/blogs/machine-learning/build-reliable-ai-agents-with-amazon-bedrock-agentcore-evaluations/">Amazon&#8217;s agent evaluation framework</a> covers both individual agent performance and collective system dynamics.</p></li><li><p><strong>Fallback event monitoring:</strong> Log and trend every retry, workflow fallback, and escalation. A spike in fallback events is often the first signal of a model update, a prompt regression, or a new user behavior pattern that the system was not designed for.</p></li><li><p><strong>Auditability before GA:</strong> Any engineer should be able to reconstruct what happened in any session from traces alone, without asking the user. 
If that reconstruction is not possible, the instrumentation is not sufficient for production.</p></li><li><p><strong>Launch gate:</strong> Define minimum passing thresholds on trajectory evaluation scores, fallback rate, and permission denial rate. Treat them as a hard gate. A multi-agent system that passes output-level quality checks but fails at the trajectory or handoff level is not production-ready.</p></li></ul><h2>Final Thought</h2><p>The industry has spent the past two years optimizing models. The next constraint is not model capability. It is governance.</p><p><a href="https://aws.amazon.com/blogs/machine-learning/evaluating-ai-agents-real-world-lessons-from-building-agentic-systems-at-amazon/">Research from Amazon&#8217;s internal deployments</a> shows that organizations that invest in&nbsp;<strong>governance</strong>&nbsp;and&nbsp;<strong>evaluation</strong>&nbsp;are an order of magnitude more successful in reaching production than those that do not. The Linux Foundation&#8217;s <a href="https://www.linuxfoundation.org/press/a2a-protocol-surpasses-150-organizations-lands-in-major-cloud-platforms-and-sees-enterprise-production-use-in-first-year">Agent-to-Agent Protocol</a> has already crossed 150 supporting organizations in its first year, a signal that the industry has recognized coordination governance as an infrastructure problem, not a product differentiator.</p><p>The teams that ship reliable multi-agent products will not be the ones with the most capable agents. They will be the ones who designed for <strong>governable autonomy</strong>:</p><ol><li><p>Specifying permissions before deploying,</p></li><li><p>Instrumenting handoffs before trusting them,</p></li><li><p>Defining recovery before needing it, and</p></li><li><p>Giving users enough visibility to trust what the system is doing on their behalf.</p></li></ol><p>That is the product layer most teams skip.
It is also the one that determines whether a multi-agent system becomes a product or remains a prototype.</p>]]></content:encoded></item><item><title><![CDATA[Why AI Took Coding Before Everything Else]]></title><description><![CDATA[Why AI automated coding before law, design, or strategy, and what the verifiability thesis reveals about where automation goes next for product leaders.]]></description><link>https://labs.adaline.ai/p/why-ai-took-coding-before-everything</link><guid isPermaLink="false">https://labs.adaline.ai/p/why-ai-took-coding-before-everything</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 04 Apr 2026 00:01:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/40066d01-a907-43c3-be52-f5613feff8b7_1272x713.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: AI automated coding before law, design, or strategy because code has a built-in feedback loop: you can run tests and know immediately whether it worked. That property, which barely exists anywhere else in knowledge work, is why autonomous AI iteration was possible in software first.
Understanding that logic tells you what to automate next and which parts of the PM role hold out longest. What has changed is already reshaping how engineers work, what cognitive debt accumulates inside fast-moving teams, and what product leadership actually means when execution is no longer the constraint.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hhK7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hhK7!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/192966861?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hhK7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!hhK7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01f781dd-c36c-4b4a-a717-aa4376b881b0_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The most useful way to think about a large language model is this. It has read every textbook ever published. It executes tasks instantly. And it forgets everything that happened before the current conversation. It gives confident answers to questions it genuinely cannot answer. The confidence is the problem.</p><p>Product leaders have spent careers managing exactly this kind of person. In this case, it is the junior hire who executes fast but needs context, direction, and verification. The thing that just changed is that this person now writes all the code.</p><p>This article explains why that happened &#8212; why coding automated first, before law, before strategy, before many other domains. 
<strong>It traces what that sequence reveals about where product leaders&#8217; attention needs to go next.</strong></p><h2>Why AI Came for Coders First</h2><p>The explanation is not that code is simpler than other knowledge work. The explanation is that code has a built-in verification loop that almost no other professional domain has. That loop made AI possible in software before anywhere else.</p><p>When a model generates code, a test suite runs. The code either works or it doesn&#8217;t. That binary result tells the model exactly where it stands, without a human in the loop. The model generates, encounters a failure, reads the error message, revises, and runs again. This inner cycle closes on its own.</p><p>The same property does not exist in law.</p><p>As <a href="https://simonwillison.net/2026/Mar/12/coding-after-coders/">Simon Willison</a> put it: &#8220;<em>If you&#8217;re a lawyer, you&#8217;re screwed, right?</em>&#8221;</p><p>A brief written by a model may be fluent, well-structured, and completely wrong about precedent, and no automated test can catch it. There is no failing test suite for a hallucinated citation. The error surfaces in court, months later, where the damage is real.</p><p>The same applies to medical reasoning, strategic advice, and most of what knowledge workers produce. 
Judging whether the output is correct requires a human who already understands the domain.</p><p>This distinction between <strong><a href="https://www.jasonwei.net/blog/asymmetry-of-verification-and-verifiers-law">verifiable output</a></strong> and output that needs expert judgment to check is the most important frame for thinking about the automation timeline:</p><ul><li><p>The fastest-automated domains are those where correctness can be tested automatically.</p></li><li><p>Domains that hold out longest are those where correctness is ambiguous or can only be judged by someone who already knows the problem deeply.</p></li></ul><p>For product leaders, this maps directly onto your own work. Features with measurable success signals will automate faster:</p><ul><li><p>Conversion rates, error rates, and latency: trackable, testable, automatable.</p></li></ul><p>Work requiring judgment about ambiguous value holds out longest:</p><ul><li><p>Deciding which roadmap item matters.</p></li><li><p>Aligning stakeholders around competing priorities.</p></li><li><p>Judging which user signal is real versus noise.</p></li></ul><p>Verifiability is a strategic concept, and knowing which of your responsibilities falls into which bucket is now a planning skill.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/why-ai-took-coding-before-everything?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/why-ai-took-coding-before-everything?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/why-ai-took-coding-before-everything?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>The November 2025 Inflection</h2><p><em>What changed, and why does that inflection matter?</em></p><p>November 2025 was not a moment of gradual improvement. It was a threshold crossing.</p><p>Models that had only handled simple, contained tasks suddenly became capable of working through complex, multi-file, deeply connected problems. Single files and narrow scope were no longer the ceiling. The models had crossed an invisible capability line where a whole new class of problems became solvable.</p><p>The clearest evidence came from inside the teams building these tools.</p><p>Boris Cherny, who created Claude Code at Anthropic, has not written a line of code by hand since November 2025. Every line in every pull request is written by the model. He ships ten to thirty pull requests a day. 
His contribution is not producing code; it is directing the agent and verifying its output.</p><div id="youtube2-We7BZVKbCVw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;We7BZVKbCVw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/We7BZVKbCVw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>For product leaders, the significance is not the output volume; it is what that volume implies about how engineers now experience their own job.</p><p>The mental model changed from &#8220;<em>I write code, the model helps</em>&#8221; to &#8220;<em>I direct the agent, I verify the output.</em>&#8221;</p><p>Engineers now spend most of their time on:</p><ul><li><p>Reviewing model output for correctness and coherence.</p></li><li><p>Writing specifications precise enough for agents to act on.</p></li><li><p>Catching failures before they reach production.</p></li></ul><p>They need more from product leadership as a result. This includes more precise direction, faster feedback cycles, and clearer success criteria. That need arrived ahead of most product roadmaps.</p><p>Most organizations are still structured for a world where the bottleneck was how fast engineers could write code. That bottleneck no longer exists. The constraint that replaced it is less visible, and it is already accumulating inside the teams that have moved fastest.</p><h2>Cognitive Debt: The Hidden Cost Nobody&#8217;s Managing</h2><p>There is a cost accumulating in engineering organizations right now that is not showing up on any dashboard: <strong>cognitive debt</strong>. 
</p><p>It is distinct from technical debt, and the distinction matters specifically for product leaders.</p><p>Technical debt is a code quality problem &#8212; poor architecture, shortcuts taken under pressure, messy implementations that need cleaning up later. Teams have managed this for decades.</p><blockquote><p>Cognitive debt is different. Cognitive debt is a comprehension problem. It means the team has shipped something they cannot reason about.</p></blockquote><p>For instance, a developer vibe-codes a feature in an afternoon. The feature works, passes tests, and ships on schedule. By every visible metric, the sprint was successful. But nobody on the team can predict what breaks when the next feature touches the same codebase.</p><p>Nobody can explain why the implementation made the choices it made. The shared mental model of the system &#8212; how it works and why &#8212; has degraded faster than the code itself.</p><p><a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">Research into AI-assisted development teams</a> documented exactly this pattern: teams hit a wall mid-project, unable to make simple changes without breaking something unexpected. The real problem was not code quality; <strong>it was that no one could explain why key design decisions had been made</strong>. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.</p><p>Product managers feel cognitive debt first. It shows up as:</p><ul><li><p>Estimates that consistently miss.</p></li><li><p>Regressions with no clear cause.</p></li><li><p>Features that cannot be extended without a full rebuild.</p></li></ul><p>This is why observability stops being an engineering cost and becomes a product input. 
Trace data, eval systems, and production logs are how a product leader keeps enough understanding of a fast-moving, AI-written system to make planning honest.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cM9c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cM9c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 424w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 848w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1272w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png" width="1456" height="611" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:611,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cM9c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 424w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 848w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1272w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Screenshot of causal chain analysis in the <a href="https://go.adaline.ai/dRpz6AY">Adaline</a> dashboard.</em></figcaption></figure></div><p>The PM who reads what the product is actually doing in production is managing cognitive debt. The PM who only reviews finished features is not.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Adaline Labs&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Adaline Labs</span></a></p><h2>What Design&#8217;s Collapse Reveals About the Whole Stack</h2><p>The compression happening in engineering is not isolated. 
It is happening across every function simultaneously, and design is the clearest case study.</p><p>Jenny Wen, who leads design for Claude at Anthropic and was previously Director of Design at Figma, documented this compression directly. </p><p>A few years ago, 60-70 percent of her team&#8217;s time went into mocking and prototyping. That number is now 30-40 percent. That recovered time went into working directly alongside engineers, i.e., polishing implementations as they were built, doing the last-mile work the old handoff model assumed someone else would handle. </p><p>In other words, execution compressed, and the role compressed with it.</p><div id="youtube2-eh8bcBIAAFo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;eh8bcBIAAFo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/eh8bcBIAAFo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Her <a href="https://www.youtube.com/watch?v=4u94juYwLLM">Hatch Conference keynote</a> conveys a deeper point: in a world where anyone can build anything quickly, the scarce skill is no longer execution &#8212; it is curation.</p><p>And it is turning out to be true.</p><p>Choosing what to build matters more than being able to build it. And because building in the wrong direction now costs days instead of months, the PM&#8217;s old job of gating engineering with a complete spec matters less. 
The scarce judgment is upstream: which directions are worth exploring at all.</p><div id="youtube2-4u94juYwLLM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;4u94juYwLLM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/4u94juYwLLM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Two insights from this shift reach beyond design.</p><p>First, <strong>non-deterministic</strong> products break the specification model.</p><p>You cannot write a complete spec for an AI feature because the product&#8217;s behavior is not fixed; it is a range. What users experience depends on the model, the prompt, and the context, which you could not have anticipated in advance.</p><p>A PM writes acceptance criteria for a summarization feature: three sentences, neutral tone, key date included. </p><p>The model produces a four-sentence summary in active voice that users find more useful than the spec required. The PRD was right about the goal and wrong about every constraint. </p><p>That is what structural mismatch looks like in practice.</p><p>Specification used to come before execution. Now they run in parallel, and the PM&#8217;s job is direction, not permission.</p><p>Second, the <strong>vision horizon</strong> has collapsed.</p><p>The two-to-five-year product roadmap is obsolete for teams running at AI execution speed. What replaces it is a three- to six-month directional prototype. It has to be concrete enough to keep teams pointed at the same thing and short-term enough to be revised when model capabilities shift.</p><p>Product planning built on annual cycles is misaligned with teams that ship daily. 
The planning unit needs to compress to match the execution unit, or the roadmap becomes fiction nobody trusts. That directional prototype is now the PM&#8217;s primary planning artifact. It is not a detailed spec and not an annual roadmap. But it is a direction concrete enough to keep fast-moving teams aligned and short enough to stay honest.</p><h2>Where the PM&#8217;s Job Shifts First</h2><p>These are behavioral changes, grounded in what the evidence above actually shows.</p><p><strong>Build for the model&#8217;s timeline, not yours.</strong></p><p>The principle is simple: design for where the model will be in six months, not where it is today. The capability ceiling rises every quarter. Features that feel out of reach for AI execution right now will be routine within two planning cycles. Roadmaps that treat current AI capabilities as fixed points will be wrong by the time they ship.</p><p><strong>Shift your verification energy up the stack.</strong></p><p>Engineers now spend more time reviewing model output than writing code. Your attention should move too &#8212; from reviewing shipped features to understanding what your team actually comprehends about what was built. The cognitive debt frame makes this concrete.</p><p>Your job is not just to catch bad output; it is to maintain enough shared understanding of the system so that planning stays honest. The PM who can explain how the system works, not just what it does, is the PM whose estimates hold up.</p><p><strong>Treat latent demand as a real-time signal.</strong></p><p>With AI products, the signal of what users actually want appears in production before it appears in research. Users encounter non-deterministic behavior and improvise workarounds in real time, and those workarounds are data.</p><p>With language model products, you discover use cases by watching people use them, not by specifying them in advance. 
The PM who builds this habit &#8212; reading trace data, support patterns, and user workarounds regularly &#8212; will identify the next right feature before a formal research cycle has time to name it.</p><h2>Closing</h2><p>The weird, overconfident intern who has read every textbook can now write all the code. That changes execution permanently.</p><p>But what does not change is the judgment layer. That layer is now visible in a way it has never been before, precisely because execution has automated around it.</p><p>The intern cannot:</p><ul><li><p>Decide what is worth building.</p></li><li><p>Know when a system that has no memory of understanding is about to fail in production.</p></li><li><p>Read the signal in a user&#8217;s workaround that the product should have been built differently.</p></li><li><p>Hold a vision long enough to keep a fast-moving team pointed at the same thing across a quarter.</p></li></ul><p>Those are product skills. The execution layer has been automated. Judgment is the job.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How To Design AI Features For Nondeterminism]]></title><description><![CDATA[Why variance, drift, and reasoning failures are not engineering problems, and how to design around them before you ship.]]></description><link>https://labs.adaline.ai/p/designing-ai-features-for-nondeterminism</link><guid isPermaLink="false">https://labs.adaline.ai/p/designing-ai-features-for-nondeterminism</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 28 Mar 2026 00:01:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bc138e6e-779c-40bf-82e8-c3f94febc6bd_1456x816.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> Nondeterminism is not an edge case in LLM-powered products: it is the default. This blog defines the three types of production failures: <strong>output variance</strong>, <strong>behavioral drift</strong>, and <strong>reasoning-level failure</strong>. The blog also diagnoses the three design failures that cause damage and walks through how to write a spec for a probabilistic feature. Essentially, shifting from expected output to acceptance criteria, from test cases to test distributions, and from &#8220;works&#8221; to &#8220;fails by design.&#8221; <strong>If your AI PRD lacks an acceptance threshold section, it is not yet an AI PRD.</strong> Reliable AI features in 2026 are not built by teams with the best models. 
They are built by teams who designed for the day the model behaved unexpectedly.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dS0a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dS0a!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:243466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/192317198?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dS0a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!dS0a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0268b1f-56bf-4ac4-b893-44e5b5b5a632_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The feature shipped cleanly. It passed QA, cleared stakeholder review, and ran without incident in staging. But three days after launch, a user forwarded a screenshot with a support ticket.</p><p>The AI had returned something the team could not explain. The logs showed nothing wrong. It was just different from anything it had produced before. When the engineer pulled the logs, everything looked healthy: <strong>status 200</strong>, <strong>latency normal</strong>, <strong>token count within range</strong>, no exception anywhere in the stack.</p><p>The model had simply behaved differently. That is not a bug. It is a design problem, a consequence of the probabilistic nature of AI. 
And until you or the team accepts that framing, every audit will lead to the wrong conclusion.</p><h2>What Nondeterminism Actually Means for Product Teams</h2><p>Here are three failure modes that you, as a product leader, should be familiar with.</p><ol><li><p><strong>Output Variance</strong>: It is the most familiar. The same input, run twice against the same model, can produce two different outputs. In summarization tasks, copy generation, and classification, this is not an edge case. It is the default behavior of every probabilistic system. Many of us know it exists, but almost none of us design for it deliberately.</p></li><li><p><strong>Behavioral Drift</strong>: It is the one that blindsides teams after launch. A feature works correctly at release, and a few weeks later, something is off with no code changes anywhere. A model update, a shift in user input patterns, or a prompt encountering inputs it was never tested against can all trigger it. The team learns from user complaints, not from its own monitoring.</p></li><li><p><strong>Reasoning-Level Failure</strong> is the hardest to catch because it produces no visible error. Our blog on <a href="https://labs.adaline.ai/p/observability-vs-monitoring-for-agentic-ai">Observability vs. Monitoring for Agentic AI</a> describes this precisely: &#8220;<em>retrieval works, tool calls complete, the model responds, but the combination of those steps produces a result that is wrong for the actual task. Monitoring shows all green. [But] the product fails.</em>&#8221;</p></li></ol><p>Nondeterminism is not a bug to fix. 
It is a constraint to design around, just as great product teams design around latency, mobile screen size, or network reliability.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/subscribe?"><span>Subscribe now</span></a></p><h2>Why Agents and Modern Models Make This Harder</h2><p>A single nondeterministic call is manageable. An agent making sequential tool calls compounds the problem at every step. One failed retrieval can cascade into four downstream failures. From wrong tool selection to incomplete data to confabulated gap-filling to a correction loop.</p><p>You cannot write alerts for failure states you have never seen before. The blast radius of nondeterminism is proportional to agent autonomy.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iCn-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iCn-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 424w, https://substackcdn.com/image/fetch/$s_!iCn-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 848w, 
https://substackcdn.com/image/fetch/$s_!iCn-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 1272w, https://substackcdn.com/image/fetch/$s_!iCn-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iCn-!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png" width="1200" height="837.3626373626373" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1016,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Architecture comparison of open source LLMs.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="Architecture comparison of open source LLMs." title="Architecture comparison of open source LLMs." 
srcset="https://substackcdn.com/image/fetch/$s_!iCn-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 424w, https://substackcdn.com/image/fetch/$s_!iCn-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 848w, https://substackcdn.com/image/fetch/$s_!iCn-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 1272w, https://substackcdn.com/image/fetch/$s_!iCn-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ae4aa85-3e22-486c-9bd9-27edc4acbf8b_3000x2093.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Architecture comparison of open source LLMs. </em>| <strong>Source</strong>: <a href="https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison">The Big LLM Architecture Comparison</a></figcaption></figure></div><p>Modern model architecture adds a layer that most product leaders do not account for. <a href="https://huggingface.co/blog/moe">Mixture-of-Experts models</a> like <strong>Qwen3</strong>, <strong>GLM-4.5</strong>, and <strong>DeepSeek</strong> <strong>V3</strong> do not activate all of their parameters for every inference step. A routing mechanism selects a small subset of active experts per token. Sebastian Raschka&#8217;s <a href="https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison">Big LLM Architecture Comparison</a> shows that DeepSeek V3 activates roughly 37 billion of its 671 billion parameters per step, because just 9 of its 256 experts activate at a time.</p><p>That means, two nearly identical prompts can route to different expert combinations and produce meaningfully different outputs. This is architecture-level variance. It is not configurable.</p><p>Reasoning models add a third dimension.</p><p>These models generate an internal <strong><a href="https://www.adaline.ai/blog/chain-of-thought-prompting-in-2025">chain-of-thought</a></strong><a href="https://www.adaline.ai/blog/chain-of-thought-prompting-in-2025"> </a>before responding, and that chain is itself variable. The <a href="https://arxiv.org/pdf/2602.15763">GLM-5 technical report</a> makes this explicit. 
The model shipped a <strong>Preserved Thinking mode</strong> specifically to retain reasoning context across conversation turns and prevent <strong>cross-turn drift</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j3JW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j3JW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 424w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 848w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 1272w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j3JW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png" width="1456" height="848" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:848,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:482692,&quot;alt&quot;:&quot;GLM-5 Preserved Thinking architecture showing how reasoning context is retained across conversation turns when designing AI features for nondeterminism.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/192317198?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="GLM-5 Preserved Thinking architecture showing how reasoning context is retained across conversation turns when designing AI features for nondeterminism." title="GLM-5 Preserved Thinking architecture showing how reasoning context is retained across conversation turns when designing AI features for nondeterminism." 
srcset="https://substackcdn.com/image/fetch/$s_!j3JW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 424w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 848w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 1272w, https://substackcdn.com/image/fetch/$s_!j3JW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2e3ae6d-3de2-4cf1-84fc-76854ec24b74_1898x1106.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>How Preserved Thinking works in GLM-5: without it (center), the model drops all reasoning context between turns and must start from scratch. With it (right), reasoning chains persist across turns, which is what makes consistent multi-turn agent behavior achievable.</em> | <strong>Source</strong>: <a href="https://arxiv.org/pdf/2602.15763">GLM-5 Technical Report, arXiv 2602.15763</a></figcaption></figure></div><p>When model builders start engineering against a failure mode at the architecture level, that failure mode is real. </p><p>The question is not whether your AI feature will behave differently over time. The question is whether you designed for it.</p><h2>The Three Design Failures Teams Make</h2><h3>Failure 1: Hiding Variance Instead of Surfacing It</h3><p>Teams build UX that treats the AI as deterministic: no regenerate button, no confidence framing, no acknowledgment that the same question might produce a different answer tomorrow.</p><p>When variance surfaces, users experience it as a bug and report it as one. Support tickets pile up for behavior that is technically correct. <a href="https://labs.adaline.ai/p/observability-vs-monitoring-for-agentic-ai">Here</a>, we explained why the same input does not guarantee the same output, and temperature introduces randomness by design.</p><p>The product response is not to hide this. It is to design around it. &#8220;<em>Here is one way to think about this</em>&#8221; frames output differently than &#8220;<em>Here is your answer.</em>&#8221; A regenerate button signals that trying again is normal, not a sign that something broke. 
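</p>

<p>The variance is mechanical, not mysterious. Here is a minimal sketch (toy logits and the pure standard library, not any real model&#8217;s API) of why sampling through a temperature-scaled softmax makes the same prompt produce different outputs:</p>

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from a temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, cum = rng.random(), 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Toy next-token logits for one fixed prompt, "run" 100 times.
logits = [2.0, 1.5, 0.5, 0.1]
greedy = [sample_token(logits, 1e-6, random.Random(s)) for s in range(100)]
sampled = [sample_token(logits, 1.0, random.Random(s)) for s in range(100)]

print(len(set(greedy)), len(set(sampled)))  # near-zero temperature: 1 distinct choice
```

<p>Near-zero temperature collapses every run onto the most likely token; at temperature 1.0 the identical prompt legitimately lands on several different tokens. That spread, compounded over thousands of tokens, is the variance users see.</p><p>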
The goal is calibrated trust: not blind trust, not distrust, but calibrated.</p><h3>Failure 2: Writing Binary Acceptance Criteria</h3><p>Here is how it usually goes. The PRD says &#8220;<em>the AI returns a correct answer.</em>&#8221; QA runs three test cases, marks them green, and the feature ships. Nobody questions what &#8220;<em>correct</em>&#8221; actually means, because it felt obvious in the room.</p><p>Three weeks later, production surfaces a failure pattern nobody can reproduce, because the test cases were not a &#8220;distribution.&#8221; They were essentially a demo.</p><p>A demo compresses all the variability of production into a single scenario, hiding messy inputs and long-tail formats, and it hides drift, too. That means a prompt can look stable on five hand-picked examples, then break on a random day when a new user arrives with a different intent.</p><p>The fix is defining success as a rate, not a binary. Instead of &#8220;<em>the AI returns a correct answer,</em>&#8221; write: &#8220;<em>the AI passes this rubric on at least 90 percent of real production inputs.</em>&#8221;<br>Nine out of ten is a target you can measure. It is also a target that can degrade over time, which means you will know when it does.</p><p>LLM-as-a-judge, where a model scores outputs against defined criteria for accuracy, relevance, and instruction adherence, is the only evaluation mechanism that scales when there is no single correct output.</p><h3>Failure 3: Treating Fallback as an Afterthought</h3><p>The spec says &#8220;display an error message if the AI fails&#8221; on a single line, and then moves on.</p><p>But failure in a nondeterministic system is rarely binary.</p><p>The AI responds. But sometimes it just responds badly. 
Hidden or silent failures do not crash anything, but they erode trust, safety, and budget a little at a time, until users stop believing the feature works at all.</p><p>The fix is designing three explicit fallback tiers before the first sprint begins.</p><ol><li><p>Soft fallback delivers a simpler, narrower output at low confidence.</p></li><li><p>Human handoff routes high-stakes or ambiguous cases to a person. Think of it as human-in-the-loop review.</p></li><li><p>Silent skip does nothing rather than do the wrong thing.</p></li></ol><p>The choice between these three is a product decision. It belongs in the PRD.</p><h2>How to Write a Spec for a Probabilistic Feature</h2><p>There are three concrete shifts that separate a spec for a deterministic feature from a spec for a probabilistic one. 
Each shift changes what you ship.</p><p><strong>From expected output to acceptance criteria.</strong><br>The wrong spec line reads: &#8220;<em>The AI returns a correct summary.</em>&#8221; The right version reads: &#8220;<em>The AI produces a summary that passes the following rubric on 90 percent of a representative input set.</em>&#8221;</p><p>The difference forces the team to agree on what &#8220;good&#8221; means before building, not after shipping. Our blog on <a href="https://labs.adaline.ai/p/prompt-management-for-product-leaders">Prompt Management for Product Leaders</a> makes the point directly: evaluation is the key to iteration, and you cannot iterate toward a target you have not defined.</p><p>I also recommend another piece of ours, &#8220;<a href="https://labs.adaline.ai/p/ai-observability-and-evaluations">AI Observability and Evaluations</a>,&#8221; which covers how to build a system that makes those improvements trackable.</p><p><strong>From test cases to test distributions.</strong><br>A single test case is a demo.</p><p>A distribution is a product.</p><p>Effective evaluation starts with roughly 20 representative cases that reflect actual production input. These are not the clean happy path, but messy inputs, edge formats, and ambiguous queries that real users send.</p><p>This starting set expands over time using production traces, not gut instinct. The spec should state where the initial eval set comes from before development begins.</p><p><strong>From &#8220;works&#8221; to &#8220;fails by design.&#8221;<br></strong>Every AI feature spec should include a Failure Modes section that answers three questions:</p><ol><li><p>What does the feature do when the output confidence is low?</p></li><li><p>What happens when a tool times out?</p></li><li><p>What does the user see when the AI produces output outside the acceptable range?</p></li></ol><p>These are product decisions. 
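</p>

<p>The rubric threshold and the three failure-mode answers can be sketched as one executable contract. A minimal, illustrative harness: the <code>judge</code> callable stands in for an LLM-as-a-judge call, and the tier names and the 0.8 / 0.5 confidence cutoffs are assumptions for this sketch, not prescribed values.</p>

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    passed: bool       # did this output satisfy the rubric?
    confidence: float  # judge's confidence in the output, 0.0 to 1.0

def pass_rate(outputs: List[str], judge: Callable[[str], Verdict]) -> float:
    """Success as a rate over a distribution, not a binary over one demo."""
    verdicts = [judge(o) for o in outputs]
    return sum(v.passed for v in verdicts) / len(verdicts)

def fallback_tier(v: Verdict, high_stakes: bool) -> str:
    """Answer the failure-mode questions before the first sprint."""
    if high_stakes:
        return "human_handoff"          # ambiguous or risky: a person decides
    if v.passed and v.confidence >= 0.8:
        return "deliver"                # normal path
    if v.confidence >= 0.5:
        return "soft_fallback"          # simpler, narrower output
    return "silent_skip"                # do nothing rather than do wrong

# Toy judge: stands in for an LLM-as-a-judge scoring one output.
def toy_judge(output: str) -> Verdict:
    ok = len(output) > 10
    return Verdict(passed=ok, confidence=min(len(output) / 20, 1.0))

eval_set = ["too short", "a long enough summary", "another acceptable summary"]
rate = pass_rate(eval_set, toy_judge)
print(f"rubric pass rate: {rate:.0%}")  # ship gate in the spec: rate >= 90%
```

<p>The point of the sketch is where the numbers live: the 90 percent gate, the confidence cutoffs, and the high-stakes routing are all spec decisions, and each one becomes measurable and alertable once written down.</p><p>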
They belong in the spec, not in a Slack thread three weeks after launch.</p><p><em>If your AI PRD does not have an acceptance threshold section, it is not yet an AI PRD.</em> For a complete structural template, <a href="https://labs.adaline.ai/p/ai-prd-missing-sections">AI PRD guide</a> walks through exactly what that section should contain.</p><h2>Observability Is the Runtime Layer</h2><p>Good threshold design requires knowing what the production distribution actually looks like. Traditional monitoring cannot tell you.</p><p><a href="https://labs.adaline.ai/p/observability-vs-monitoring-for-agentic-ai">Observability vs. Monitoring for Agentic AI</a> documents the issue precisely: status codes, response times, and token counts can all show green while the product is failing. The agent may be retrieving irrelevant content, calling the wrong tool seventeen times, or filling its context window with garbage. None of that surfaces in an infrastructure dashboard. </p><p>The design decisions from the previous sections only hold up if the team can see what is happening at the level of reasoning.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cM9c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cM9c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 424w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 848w, 
https://substackcdn.com/image/fetch/$s_!cM9c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1272w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png" width="1456" height="611" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:611,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Screenshot of casual chain analysis in the Adaline dashboard.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Screenshot of casual chain analysis in the Adaline dashboard." title="Screenshot of casual chain analysis in the Adaline dashboard." 
srcset="https://substackcdn.com/image/fetch/$s_!cM9c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 424w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 848w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1272w, https://substackcdn.com/image/fetch/$s_!cM9c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a4a345-57dd-421e-9562-81504d8e50d4_2262x950.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Screenshot of causal chain analysis in the <a href="https://go.adaline.ai/dRpz6AY">Adaline</a> dashboard.</em></figcaption></figure></div><p>Fallback triggers cannot be calibrated without traces that show where and why failures happen. The real value of a proper observability layer is <strong>the ability to ask new questions about old data</strong>: <strong>tracing a bad decision back through every tool call</strong>, <strong>every retrieval step</strong>, and <strong>every token that shaped the final output</strong>. </p><p>The three fallback tiers described above need threshold data to stay correctly calibrated as the feature evolves in production.</p><p>That data comes from traces, not from the test suite.</p><p>The spec defines what acceptable behavior looks like. Observability tells you whether you are getting it. For the full operational picture on how to instrument this at the agent level, the <a href="https://labs.adaline.ai/p/observability-vs-monitoring-for-agentic-ai">Observability vs. 
Monitoring for Agentic AI</a> post is the companion operational read for everything covered in this blog.</p><h2>A Checklist for Product Leaders</h2><p><strong>Before you spec:</strong></p><ul><li><p>Have you defined what &#8220;acceptable output&#8221; looks like as measurable criteria, not as a description?</p></li><li><p>Have you named the three failure types for this specific feature: output variance, behavioral drift, and reasoning-level failure?</p></li><li><p>Have you designed all three fallback states: soft fallback, human handoff, and silent skip?</p></li><li><p>Have you decided which failure modes are acceptable and which are not before the first sprint begins?</p></li></ul><p><strong>Before you ship:</strong></p><ul><li><p>Does your eval set reflect real production inputs, not just the clean demo cases?</p></li><li><p>Have you run evaluations at the failure boundary, testing what happens when confidence drops or a tool times out?</p></li><li><p>Is observability instrumented to capture why a decision happened, not just that it happened?</p></li><li><p>Does QA know that &#8220;cannot reproduce&#8221; is not a reason to close an AI ticket?</p></li></ul><p><strong>After you ship:</strong></p><ul><li><p>Are behavioral threshold alerts set, not just infrastructure metric alerts?</p></li><li><p>Is there a post-incident process for AI failures that traces back to the original spec?</p></li><li><p>Is the eval set growing from production evidence on a defined cadence?</p></li></ul><h2>Closing</h2><p>The teams shipping reliable AI features in 2026 are not the ones with access to better models. Open-source models like Qwen3, GLM-4.5, DeepSeek V3, and Kimi K2.5 have made agents faster and more capable, and closed-source models like GPT 5.4, Claude 4.5, and Gemini 3 have done the same.</p><p>All of them are suited to longer-horizon tasks than anything available a year ago. 
Sebastian Raschka&#8217;s <a href="https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison">Big LLM Architecture Comparison</a> documents labs&#8217; claims of reasoning systems that can sustain autonomous task execution for thirty hours straight.</p><p>That is a genuine capability expansion. It does not solve the product design problem. Capability and reliability are different problems, and the industry conflates them constantly. What separates good AI product teams from great ones is not the model they chose. <strong>It is whether they wrote a spec for the day the model behaved unexpectedly</strong>.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Your AI PRD Is Missing Its Hardest Sections]]></title><description><![CDATA[How to write acceptance criteria, failure modes, and behavioral constraints for an AI feature PRD.]]></description><link>https://labs.adaline.ai/p/ai-prd-missing-sections</link><guid isPermaLink="false">https://labs.adaline.ai/p/ai-prd-missing-sections</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 21 Mar 2026 00:01:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5fbf6502-06b3-4565-bf67-757f5ab074a6_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> This post is for product managers, builders, and teams shipping AI features. The central argument is that a PRD for an AI feature is not a specification of behavior; it is a <strong>behavioral contract.</strong> It is what defines <strong>success thresholds</strong>, <strong>failure modes</strong>, <strong>fallback logic</strong>, and <strong>what the system is never allowed to do</strong>. This blog breaks down five classic PRD sections that need to be rewritten for AI. It introduces a <strong>sixth section</strong> that no standard template includes, and walks through a concrete before-and-after example using a meeting summary feature. 
By the end, you will have a framework you can apply to the next AI feature PRD you write.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Pm1P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Pm1P!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f171974d-74b9-4362-afd7-6a69757a446a_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/191577021?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Pm1P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!Pm1P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff171974d-74b9-4362-afd7-6a69757a446a_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Consider a PM who hands an engineer a PRD for an AI writing assistant. The acceptance criteria read: <strong>the summary should be accurate and concise</strong>. Three weeks later, the feature ships. Upon review, the PM says it is broken. But the engineer says it passes the spec. </p><p>Here is the problem: they are both right. </p><p>Let me explain. </p><p>Product circles have been debating whether the PRD is dead, and the AI PRD in particular has become a flashpoint. Aakash Gupta put it clearly.</p><div class="pullquote"><p>The spec did not die; it moved. The old flow was a permission document written before anyone had seen the system behave. And it took eight to twelve weeks. 
<strong>The new flow is a decision record written after the prototype has shown you what you are working with,</strong> which now takes one to two weeks. </p></div><div class="comment" data-attrs="{&quot;url&quot;:&quot;https://open.substack.com/&quot;,&quot;commentId&quot;:230210976,&quot;comment&quot;:{&quot;id&quot;:230210976,&quot;date&quot;:&quot;2026-03-19T16:44:55.151Z&quot;,&quot;edited_at&quot;:null,&quot;body&quot;:&quot;Everyone's debating whether PRDs should die. Wrong question.\n\nThe spec didn't die. It moved.\n\nOld flow: Idea &#8594; PRD &#8594; Design &#8594; Eng &#8594; QA &#8594; Ship. 8-12 weeks. The PRD was a permission document. \&quot;Please approve before we commit resources.\&quot;\n\nNew flow: Idea &#8594; 5 prototypes &#8594; Evaluate &#8594; Kill 4 &#8594; Spec the survivor &#8594; Ship. 1-2 weeks. The PRD is now a decision record. \&quot;We built 5 versions. Here's which one and why.\&quot;\n\nThe spec went from step 2 to step 6.\n\nBoris Cherny's team at Anthropic doesn't write PRDs at all. They prototype in parallel, ship 20-30 PRs a day, and let working software replace the planning document entirely. OpenAI still writes specs because 800 million MAU need behavior contracts with 15-25 labeled examples per feature. Enterprises with 5,000 people still need the document as an alignment mechanism across 3 time zones.\n\nCompany stage determines where the spec sits. The universal shift is that the spec comes after you've touched working software.\n\nA prototype shows what. The spec explains why, how you'll measure, and when you'll pull the plug. Those are the things that separate a PM from a vibe coder.\n\nThe PMs prototyping first are shipping 5x more validated features. 
The PMs writing specs first are producing better documents about worse ideas.\n\nAre you writing the spec before or after you know what works?&quot;,&quot;body_json&quot;:{&quot;type&quot;:&quot;doc&quot;,&quot;attrs&quot;:{&quot;schemaVersion&quot;:&quot;v1&quot;},&quot;content&quot;:[{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;Everyone's debating whether PRDs should die. Wrong question.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;The spec didn't die. It moved.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;Old flow: Idea &#8594; PRD &#8594; Design &#8594; Eng &#8594; QA &#8594; Ship. 8-12 weeks. The PRD was a permission document. \&quot;Please approve before we commit resources.\&quot;&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;New flow: Idea &#8594; 5 prototypes &#8594; Evaluate &#8594; Kill 4 &#8594; Spec the survivor &#8594; Ship. 1-2 weeks. The PRD is now a decision record. \&quot;We built 5 versions. Here's which one and why.\&quot;&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;The spec went from step 2 to step 6.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;Boris Cherny's team at Anthropic doesn't write PRDs at all. They prototype in parallel, ship 20-30 PRs a day, and let working software replace the planning document entirely. OpenAI still writes specs because 800 million MAU need behavior contracts with 15-25 labeled examples per feature. 
Enterprises with 5,000 people still need the document as an alignment mechanism across 3 time zones.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;Company stage determines where the spec sits. The universal shift is that the spec comes after you've touched working software.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;A prototype shows what. The spec explains why, how you'll measure, and when you'll pull the plug. Those are the things that separate a PM from a vibe coder.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;The PMs prototyping first are shipping 5x more validated features. The PMs writing specs first are producing better documents about worse ideas.&quot;}]},{&quot;type&quot;:&quot;paragraph&quot;,&quot;content&quot;:[{&quot;type&quot;:&quot;text&quot;,&quot;text&quot;:&quot;Are you writing the spec before or after you know what works?&quot;}]}]},&quot;restacks&quot;:2,&quot;reaction_count&quot;:17,&quot;attachments&quot;:[],&quot;name&quot;:&quot;Aakash Gupta&quot;,&quot;user_id&quot;:4429439,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/44d63f8b-bc3a-439a-9715-51eb54fd03bb_512x512.png&quot;,&quot;user_bestseller_tier&quot;:1000,&quot;userStatus&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:{&quot;ranking&quot;:&quot;trending&quot;,&quot;rank&quot;:4,&quot;publicationName&quot;:&quot;Product Growth&quot;,&quot;label&quot;:&quot;Technology&quot;,&quot;categoryId&quot;:&quot;4&quot;,&quot;publicationId&quot;:454003},&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}}" 
data-component-name="CommentPlaceholder"></div><p>At Anthropic, Boris Cherny&#8217;s team does not write specs at all; they run prototypes in parallel and ship dozens of pull requests every day. </p><div id="youtube2-We7BZVKbCVw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;We7BZVKbCVw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/We7BZVKbCVw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>OpenAI takes the opposite position. With 800 million monthly active users, a feature without a written behavior contract creates alignment problems that no amount of working code can solve. </p><p>Sean Grove made this point in his &#8220;The New Code&#8221; talk: when hundreds of engineers are building on the same system, a written spec does something working software cannot. It keeps shared intent visible and consistent across the entire team.</p><div id="youtube2-8rABwKRsec4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;8rABwKRsec4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/8rABwKRsec4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>That framing is correct. But it sidesteps the harder question. Once the spec moves to step six, what does a PRD for an AI feature actually contain? 
<strong>Especially when behavior is probabilistic, failure modes are invisible, and "accurate" is not a success criterion but an aspiration.</strong> Here is what most teams are still missing.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/subscribe?"><span>Subscribe now</span></a></p><h2>What Can a Prototype Not Tell You?</h2><p>The <strong>prototype-first</strong> movement is correct about sequencing. Building uncovers things that no planning document would find. But a working prototype answers the wrong questions for a PRD. It essentially shows you what the system does. It cannot tell you:</p><ol><li><p>Why is the change worth making?</p></li><li><p>How does the feature connect to the broader product strategy?</p></li><li><p>Who sees it first and under what release conditions?</p></li><li><p>What does &#8220;good enough to graduate&#8221; mean as an actual number? </p></li><li><p>Which tradeoffs and side effects have you decided to consciously accept?</p></li></ol><p>Aakash Gupta identified those five gaps as the core value of a well-written spec in his August 2025 deep-dive on AI PRDs in Product Growth. </p><blockquote><p>The prototype is a <strong>discovery tool</strong>. The PRD is an <strong>alignment artifact</strong>. 
</p></blockquote><p>And the PRD becomes richer and more honest once you have seen how the system behaves.</p><p>For AI features specifically, there are three additional gaps that standard PRD thinking has not yet addressed.</p><ol><li><p><strong>Eval thresholds:</strong> You need a specific, numeric definition of what good looks like before you ship, not a general sense that the outputs &#8220;seem okay.&#8221;</p></li><li><p><strong>Fallback behavior:</strong> When the model gets it wrong, and it will, what does the system do? Does it return an explicit failure response, surface its uncertainty to the user, or escalate to a human? This is product logic, and it belongs in the spec.</p></li><li><p><strong>Behavioral constraints:</strong> A definition of what the system must never do, regardless of what the user asks. This is the boundary layer that protects users when the model is technically responsive but wrong in ways that cause harm or erode users&#8217; trust.</p></li></ol><blockquote><p><strong>The prototype shows you the feature. The PRD defines the contract.</strong></p></blockquote><h2>The Sections You Need to Rewrite in an AI Feature PRD</h2><p>The classic PRD format has <strong>four sections</strong> that appear in almost every template: <strong>problem statement</strong>, <strong>acceptance criteria</strong>, <strong>success metrics</strong>, and <strong>definition of done</strong>. For an AI feature, each requires a different kind of thinking than most teams currently apply.</p><p><strong>Problem statement:</strong> Largely unchanged, with one addition: state the cost of a wrong answer explicitly. A standard problem statement frames the user&#8217;s need. <strong>An AI problem statement also frames the consequences of failure.</strong> </p><p>For a customer service bot, a hallucinated policy destroys trust in a way that a slow page load never does. In a clinical setting, a triage tool's wrong answer could cause direct harm. 
Naming that cost upfront shapes every decision that follows, from how strict the quality bar needs to be to whether the feature should exist at all.</p><p><strong>Acceptance criteria: </strong>This is where most AI PRDs collapse. Hamel Husain and Shreya Shankar have trained over 2,000 engineers and PMs on evaluation systems at companies including OpenAI and Anthropic. Their September 2025 guide on Lenny's Newsletter makes a point I keep coming back to: the first instinct is to reach for off-the-shelf metrics, hallucination rate, toxicity scores, numbers that look rigorous before you understand how your specific feature actually fails. </p><p>Those numbers are not wrong. They are meaningless until you have grounded them in your product&#8217;s real failure patterns. What matters is how your feature fails, not how AI systems fail in general.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:171921139,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/building-eval-systems-that-improve&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;Building eval systems that improve your AI product&quot;,&quot;truncated_body_text&quot;:&quot;&#128075; Each week, I tackle reader questions about building product, driving growth, and accelerating your career. 
Annual subscribers get a free year of 15+ premium products: Lovable, Replit, Bolt, n8n, Wispr Flow, Descript, Linear, Gamma, Superhuman, Granola, Warp, Perplexity, Raycast, Magic Patterns, Mobbin, and ChatPRD&quot;,&quot;date&quot;:&quot;2025-09-09T13:03:34.855Z&quot;,&quot;like_count&quot;:354,&quot;comment_count&quot;:10,&quot;bylines&quot;:[{&quot;id&quot;:2260358,&quot;name&quot;:&quot;Hamel Husain&quot;,&quot;handle&quot;:&quot;hamelhusain&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!7sqx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Feee58cd7-9a81-4ef6-b0f4-faeed62d5166_400x400.jpeg&quot;,&quot;bio&quot;:&quot;I am a machine learning engineer with over 20 years of experience. More about me @ https://hamel.dev&quot;,&quot;profile_set_up_at&quot;:&quot;2022-12-10T16:44:42.278Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-08-28T03:21:59.264Z&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[682532,10845],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:30258,&quot;primaryPublicationName&quot;:&quot;Hamel&#8217;s Substack&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://hamelhusain.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://hamelhusain.substack.com/subscribe?&quot;},{&quot;id&quot;:58144420,&quot;name&quot;:&quot;Shreya 
Shankar&quot;,&quot;handle&quot;:&quot;shreyashan&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bacf4319-d2ab-4665-b179-d0fc5b11c708_1176x1176.jpeg&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2025-09-05T20:40:35.559Z&quot;,&quot;reader_installed_at&quot;:&quot;2025-09-05T20:39:01.479Z&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:6328094,&quot;primaryPublicationName&quot;:&quot;Shreya Shankar&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://shreyashan.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://shreyashan.substack.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/building-eval-systems-that-improve?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Building eval systems that improve your AI product</div></div><div class="embedded-post-body">&#128075; Each week, I tackle reader questions about building product, driving growth, and accelerating your career. 
Annual subscribers get a free year of 15+ premium products: Lovable, Replit, Bolt, n8n, Wispr Flow, Descript, Linear, Gamma, Superhuman, Granola, Warp, Perplexity, Raycast, Magic Patterns, Mobbin, and ChatPRD&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">8 months ago &#183; 354 likes &#183; 10 comments &#183; Hamel Husain and Shreya Shankar</div></a></div><p>Writing &#8220;should not hallucinate&#8221; in an AI feature's acceptance criteria is the same mistake as writing &#8220;the app should be fast.&#8221; It sounds right, but it measures nothing actionable.</p><p>This is the problem that <a href="https://www.adaline.ai/blog/what-is-eval-driven-development-2026">eval-driven development</a> is designed to solve: you build the measurement system alongside the feature, not after it ships broken.</p><p>The fix is <strong>binary pass/fail</strong> criteria tied to specific failure modes. Hamel and Shreya are direct on the scoring format in their September 2025 guide: Likert scales are a trap. The distinction between a 3 and a 4 is subjective and inconsistent. 
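</p><p>A minimal sketch of that scoring format, where the hypothetical call_llm stands in for whatever model client you use: the judge returns a binary verdict plus a written critique, never a numeric score.</p>

```python
# Sketch of a binary LLM-as-judge rubric. `call_llm` is a placeholder:
# it takes a prompt string and returns the model's text reply.
JUDGE_PROMPT = """You are grading one summary from an AI writing assistant.
Reply with PASS or FAIL on the first line, then one sentence explaining why.
FAIL if the summary states anything not supported by the source text,
or omits the source's main point.

Source: {source}
Summary: {summary}"""

def judge(source: str, summary: str, call_llm) -> dict:
    """Return a binary verdict plus a written critique; no 1-to-5 scores."""
    reply = call_llm(JUDGE_PROMPT.format(source=source, summary=summary))
    verdict, _, critique = reply.partition("\n")
    return {
        "passed": verdict.strip().upper().startswith("PASS"),
        "critique": critique.strip(),
    }
```

<p>The critique field is what a human reviewer audits; the boolean is what the eval suite counts.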
</p><p><strong>Binary pass/fail forces clarity.</strong> </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dcy9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dcy9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 424w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 848w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 1272w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dcy9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png" width="1456" height="804" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:804,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2496852,&quot;alt&quot;:&quot;Adaline evaluation dashboard showing binary pass/fail verdicts with written reasons for each AI output, alongside the principle that evals are feedback loops, not tests.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/191577021?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Adaline evaluation dashboard showing binary pass/fail verdicts with written reasons for each AI output, alongside the principle that evals are feedback loops, not tests." title="Adaline evaluation dashboard showing binary pass/fail verdicts with written reasons for each AI output, alongside the principle that evals are feedback loops, not tests." 
srcset="https://substackcdn.com/image/fetch/$s_!dcy9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 424w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 848w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 1272w, https://substackcdn.com/image/fetch/$s_!dcy9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd08d5d59-3c6a-43f5-a1e4-b860715c0de4_2368x1308.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em><a href="https://go.adaline.ai/dRpz6AY">Adaline&#8217;s </a>eval interface in practice: every output gets a clear pass/fail verdict, plus a written reason. The reviewer never has to decide whether an output is a 3 or a 4.</em></figcaption></figure></div><p><strong>The nuance belongs in a written critique explaining why the judgment was made</strong>, detailed enough for a brand-new employee to understand it. An <a href="https://www.adaline.ai/blog/llm-as-judges">LLM-as-judge</a> can automate this scoring at scale, but the human benchmark must come first. </p><p>The criteria also need to specify what percentage of cases must pass and who holds the final judgment. A concrete version: a senior PM reviews 20 random outputs per sprint, and if more than two fail the quality bar, the feature goes back to <strong>prompt iteration</strong>. That sentence is a testable contract. 
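</p><p>That contract is concrete enough to check in code. A minimal sketch, assuming each reviewed output carries the PM's binary verdict under a hypothetical "passed" key:</p>

```python
import random

def review_gate(outputs, sample_size=20, max_failures=2):
    """Sprint gate: sample reviewed outputs and decide whether the
    feature graduates or goes back to prompt iteration."""
    sample = random.sample(outputs, min(sample_size, len(outputs)))
    failures = sum(1 for o in sample if not o["passed"])
    return {
        "sampled": len(sample),
        "failures": failures,
        "graduate": failures <= max_failures,
    }
```

<p>The numbers (20 outputs, two allowed failures) come straight from the PRD sentence; change the spec and the gate changes with it.</p><p>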
&#8220;Should be accurate and concise&#8221; is not.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!to2n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!to2n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 424w, https://substackcdn.com/image/fetch/$s_!to2n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 848w, https://substackcdn.com/image/fetch/$s_!to2n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 1272w, https://substackcdn.com/image/fetch/$s_!to2n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!to2n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png" width="1456" height="776" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:776,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2374184,&quot;alt&quot;:&quot;Diagram showing the AI development lifecycle as a continuous cycle: Iterate leads to Evaluate, Evaluate leads to Deploy, Deploy leads to Monitor, and Monitor feeds back into Iterate.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/191577021?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Diagram showing the AI development lifecycle as a continuous cycle: Iterate leads to Evaluate, Evaluate leads to Deploy, Deploy leads to Monitor, and Monitor feeds back into Iterate." title="Diagram showing the AI development lifecycle as a continuous cycle: Iterate leads to Evaluate, Evaluate leads to Deploy, Deploy leads to Monitor, and Monitor feeds back into Iterate." 
srcset="https://substackcdn.com/image/fetch/$s_!to2n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 424w, https://substackcdn.com/image/fetch/$s_!to2n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 848w, https://substackcdn.com/image/fetch/$s_!to2n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 1272w, https://substackcdn.com/image/fetch/$s_!to2n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffadc8d17-2bf1-4037-9da1-ff0219ed5afd_2350x1252.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The AI development lifecycle is a continuous cycle: iterate, evaluate, deploy, monitor, and back again. The behavioral contract you write in the PRD is what makes each stage accountable to the last.</em></figcaption></figure></div><p><strong>Success metrics:</strong> You need two explicit layers, not one.</p><p><strong>The first layer covers model quality metrics</strong>: output correctness, hallucination rate, LLM-as-judge pass rate, and completeness. These live upstream of the user experience and reveal whether the foundation is sound.</p><p><strong>The second layer covers product metrics</strong>: task completion rate, session depth, and user override rate, which is the percentage of AI outputs the user manually edits or ignores. User override rate is one of the most honest signals in an AI product. When it climbs, users have stopped trusting the feature, even if they are not explicitly saying so.</p><p>Almost every PRD I have seen contains only the second layer. Both are required.</p><p><strong>Failure modes:</strong> The best failure modes do not come from imagination. <strong>They come from reviewing real outputs.</strong> Hamel and Shreya recommend starting with a single human expert, often the PM, who sits with roughly 100 real prototype interactions and writes open notes on anything that looks or feels off. </p><p>The reason this works is captured by research on <strong>criteria drift</strong> cited in their guide. People are poor at articulating their full quality requirements in the abstract. <strong>Seeing the output is what surfaces the requirement</strong>. 
</p><p>Essentially, real criteria emerge from the act of <strong>reviewing</strong> and <strong>annotating</strong>, not from imagining edge cases before anything has shipped.</p><p>Consider an AI that summarizes incoming support tickets for customer success agents. In early prototype runs, it marked several tickets as resolved when the customer had simply stopped responding, not because the issue was actually closed. That specific constraint, &#8220;<em>must not infer resolution from user silence</em>,&#8221; would never have appeared in a PRD written before the prototype ran. </p><p><strong>The failure makes the rule visible</strong>. </p><p>Write your failure modes after reviewing 20 to 50 real prototype outputs and grouping what you observed into concrete categories. That is the section that earns its place in the document.</p><p><strong>Definition of done:</strong> In a standard PRD, done means QA sign-off. For an AI feature, done requires two additional conditions: </p><ol><li><p>The specified <strong>eval suite</strong> must pass at the defined threshold. </p></li><li><p>The quality arbiter, in most cases the PM, must have reviewed a representative batch of outputs and signed off explicitly. </p></li></ol><p>Engineering done and product done are not the same for a probabilistic system, and treating them as equivalent is how low-quality AI features get shipped without anyone being clearly responsible. </p><p>When a team ships an AI feature that only QA signed off on, and outputs start degrading in production two weeks later, the definition of done determines who owns the decision to pull it. 
</p><p>If that question is unanswered in the PRD, it will be unanswered at the worst possible moment.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-prd-missing-sections?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-prd-missing-sections?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/ai-prd-missing-sections?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>The Section That Does Not Exist in Standard PRDs</h2><p>There is one section that no PRD template includes and that every AI PRD requires: <strong>behavioral constraints</strong>.</p><p>Behavioral constraints define what the system must never do, independent of what the user asks. They are not failure modes; failure modes describe things that go wrong unintentionally. </p><blockquote><p>Behavioral constraints describe boundaries that the system must hold, even when the model is technically capable of crossing them. 
They are the equivalent of the system prompt in implementation: the boundary layer that the PM defines, and the engineer enforces.</p></blockquote><p>Examples: </p><ol><li><p>Must not fabricate citations or statistics.</p></li><li><p>Must not provide specific legal or medical advice.</p></li><li><p>Must not imply that a feature exists that is not currently offered.</p></li><li><p>Must decline politely with a specific message when the input is out of scope.</p></li></ol><p>Vague behavioral constraints are functionally useless. Colin Matthews, writing about AI prototyping for Lenny&#8217;s Newsletter in January 2025, observed that the same discipline that makes AI coding tools reliable, being hyperspecific about what should change, is what makes behavioral constraints work. A vague instruction to an engineer produces the same result as a vague prompt to a model: confident-sounding noise.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:153926764,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/a-guide-to-ai-prototyping-for-product&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;A guide to AI prototyping for product managers&quot;,&quot;truncated_body_text&quot;:&quot;&#128075; Welcome to a &#128274; subscriber-only edition &#128274; of my weekly newsletter. Each week I tackle reader questions about building product, driving growth, and accelerating your career. 
For more: Lennybot | Podcast | Hire your next product leader | My favorite Maven courses&quot;,&quot;date&quot;:&quot;2025-01-07T12:03:34.090Z&quot;,&quot;like_count&quot;:712,&quot;comment_count&quot;:13,&quot;bylines&quot;:[{&quot;id&quot;:176430401,&quot;name&quot;:&quot;Colin Matthews&quot;,&quot;handle&quot;:&quot;colinmatthews&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!h0Lm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c242111-3b2c-4b82-bde0-1a02a8ce401f_443x512.jpeg&quot;,&quot;bio&quot;:&quot;I'm excited to help you learn more about how software gets built! I had my first SaaS product acquired in 2021 and have worked in healthtech for 6+ years.\nPM @ Datavant, 5000+ students&quot;,&quot;profile_set_up_at&quot;:&quot;2024-01-12T21:56:48.224Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-03-26T14:19:17.026Z&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:100,&quot;status&quot;:{&quot;bestsellerTier&quot;:100,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:100},&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:2254245,&quot;primaryPublicationName&quot;:&quot;Tech For Product&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://blog.techforproduct.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://blog.techforproduct.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/a-guide-to-ai-prototyping-for-product?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img 
class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">A guide to AI prototyping for product managers</div></div><div class="embedded-post-body">&#128075; Welcome to a &#128274; subscriber-only edition &#128274; of my weekly newsletter. Each week I tackle reader questions about building product, driving growth, and accelerating your career. For more: Lennybot | Podcast | Hire your next product leader | My favorite Maven courses&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 712 likes &#183; 13 comments &#183; Colin Matthews</div></a></div><p>Here is what the difference looks like in practice. &#8220;Should not hallucinate&#8221; is not a constraint; the useful version is: <strong>must not cite a source that was not present in the retrieved context</strong>. &#8220;Should be helpful&#8221; measures nothing; the useful version is: <strong>must attempt a response for any in-scope query, and must decline with a specific message for any out-of-scope query</strong>. &#8220;Should be concise&#8221; has no edge; the useful version is: <strong>summary output must be under 150 words unless the input exceeds 2,000 words</strong>.</p><p>Each of those rewrites does the same thing: it gives an engineer, an automated judge, or a new hire <strong>enough precision to make a consistent call on whether the output passes or fails</strong>.</p><p>The PM owns this section. Engineers should not be inventing behavioral boundaries while writing code. 
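</p><p>A constraint written at that level of precision can be enforced mechanically. A minimal sketch of the conciseness rule above as a pass/fail check; the function name is ours, and the naive whitespace word count is a simplification:</p>

```python
# Hypothetical check for the conciseness constraint: summary output must be
# under 150 words unless the input exceeds 2,000 words.
def passes_length_constraint(summary: str, source: str) -> bool:
    if len(source.split()) > 2000:
        return True  # long inputs are exempt from the 150-word cap
    return len(summary.split()) < 150

print(passes_length_constraint("Team agreed to ship Friday.", "a short transcript"))  # True
```

<p>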
By the time the code is being written, the constraints should already be settled.</p><h2>A Worked Example: Meeting Summary for B2B SaaS</h2><p>Take a concrete feature: an AI-powered meeting summary for a B2B SaaS product. Users paste in a transcript, and the feature returns a structured summary with action items. Here are two versions of the PRD for this feature, shown sequentially.</p><p><strong>Version A: What most teams write.</strong></p><p>The PRD describes a feature that reads transcripts and generates concise summaries with action items. The acceptance criteria read: the summary should be accurate and capture key points. The success metric is a user's thumbs-up or thumbs-down. Failure modes are not listed. The definition of done is a QA sign-off. It sounds reasonable. It produces a broken feature with no clear owner and no shared definition of good.</p><p><strong>Version B: The behavioral contract.</strong></p><p>This version was written after the PM reviewed 30 prototype outputs before writing a single criterion. That is the sequence: see the system fail, then write the contract.</p><ul><li><p><strong>Acceptance criteria:</strong> An LLM-as-judge scores outputs at 4 out of 5 or higher on coherence and completeness for 90 percent of test cases. The PM reviews 15 random outputs per sprint, with fewer than 2 failures per cycle. Pass or fail is defined as: Does the summary correctly capture every action item assigned to a named person? That threshold came directly from watching prototype outputs miss action items. The PM saw the failure before writing the criterion.</p></li><li><p><strong>Success metrics, model layer:</strong> Hallucination rate, defined as any claim not supported by the transcript, must remain under 3 percent. Completeness score from LLM-as-judge must be above 85 percent. 
For a deeper breakdown of what to measure at this layer, the <a href="https://www.adaline.ai/blog/the-product-manager-s-guide-to-llm-output-evaluation">PM guide to evaluating LLM outputs</a> covers the methodology in full.</p></li><li><p><strong>Success metrics, product layer:</strong> Feature activation rate and user override rate, which is the percentage of summaries the user manually edits heavily, with a target of under 20 percent.</p></li><li><p><strong>Failure modes, drawn from reviewing 30 prototype outputs:</strong> The model fabricated deadlines not stated in the transcript. It dropped action items from speakers whose accents the transcription engine handled poorly. It occasionally produced summaries longer than the original transcript. None of these were written from imagination. They were found.</p></li><li><p><strong>Behavioral constraints:</strong> Must not infer deadlines that were not explicitly stated. Must label uncertainty when speaker intent is ambiguous. Must decline if the transcript is under 100 words.</p></li><li><p><strong>Definition of done:</strong> The eval suite passes at the specified thresholds. The PM has reviewed one full sprint&#8217;s worth of outputs and signed off.</p></li></ul><p>The difference between the two versions is not formatting. It is the work that happened before writing. The PM reviewed real outputs, found real failures, and turned those observations into a testable behavioral contract. That is what a PRD for an AI feature is supposed to do.</p><h2>Conclusion</h2><p>Pull out the last AI feature PRD your team wrote. Find the acceptance criteria section. Ask one question: <strong>could a new hire with no context on this feature use these criteria to decide whether a given output passes or fails?</strong> </p><p>If the answer is no, you do not yet have acceptance criteria. You have aspirations.</p><p>The PRD is not dead. It is harder. 
Writing a behavioral contract for an AI feature requires you to have <strong>seen the system fail</strong>, <strong>named the failure modes</strong>, <strong>made a judgment call about what good means</strong>, and <strong>documented that judgment in a form that survives a sprint review</strong>. </p><blockquote><p>That work is harder than writing a feature description. It is also the work that separates a PM from a vibe coder.</p></blockquote><p>There is a secondary thesis running through this post worth stating plainly: <strong>the PM owns the quality bar for an AI feature, not the engineer</strong>. Not because engineers cannot reason about quality, but because what &#8220;good&#8221; looks like is a product decision, not an engineering one. </p><p>That product decision depends on the cost of a wrong answer, the user&#8217;s tolerance for failure, and the competitive stakes of the feature. Those judgments belong in the PRD, where the PM makes them visible and accountable.</p><p>The PM&#8217;s job in AI products is to make good legible, to the team, to the evaluators who will test it, and to yourself. That work starts in the PRD, long before anything ships.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Embeddings for AI Agents: What Product Leaders Must Know]]></title><description><![CDATA[Embeddings determine what your agent retrieves, remembers, and routes. Here's what every PM and product leader needs to understand about the embedding layer.]]></description><link>https://labs.adaline.ai/p/embeddings-for-ai-agents</link><guid isPermaLink="false">https://labs.adaline.ai/p/embeddings-for-ai-agents</guid><dc:creator><![CDATA[Adaline]]></dc:creator><pubDate>Sat, 14 Mar 2026 00:01:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/69b0770a-7696-4e16-b805-4b46493e5501_1600x896.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: This blog makes one argument: <strong>embeddings are not just a retrieval mechanism, they are the full context system of every agentic product.</strong> You will learn the four jobs that embeddings do in every agent and why each one is a product decision, not an engineering detail. You will also see how multi-agent systems use shared embeddings for sub-agent coordination. This blog is written for <strong>product</strong> <strong>managers</strong>, <strong>engineers,</strong> and <strong>builders</strong> who are actively building agentic products. 
If embedding quality is something you have fully delegated to engineers, this blog is where to start.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-5dE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-5dE!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:243466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190837237?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-5dE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!-5dE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91fcdc3c-d0eb-41d4-9b69-3ac75c63c4e8_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Philipp Schmid of Google DeepMind put it directly in his June 2025 piece. In <a href="https://www.philschmid.de/context-engineering">&#8220;The New Skill in AI is Not Prompting, It&#8217;s Context Engineering&#8221;</a>, he wrote: &#8220;<em><strong>Most agent failures are not model failures anymore, they are context failures.</strong></em>&#8221; </p><p>The model is capable, but what it receives is where production systems break down. Embeddings for AI agents are the mechanism that determines what an agent receives at every step. They control what gets retrieved, what gets remembered, and what gets passed forward.</p><p>For product leaders, embeddings are not an infrastructure decision to delegate. 
They are product decisions that shape quality and user experience at every layer. This blog is not a vector math tutorial. It is a product strategy argument &#8212; why the embedding layer matters, and <strong>why getting it wrong explains more failures than a weak model ever could</strong>.</p><h2>What Are Embeddings for AI Agents?</h2><p>When a language model processes text, it works with numbers, not words. Embeddings are the translation layer that enables this. An embedding model converts <strong>text</strong>, <strong>images</strong>, or <strong>code</strong> into a vector of numbers. Those numbers capture meaning &#8212; the relationships between concepts and the intent behind a phrase.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;66cc9626-ebc7-4501-85c3-404b6e898581&quot;,&quot;duration&quot;:null}"></div><p><em>An animated workflow of how the <a href="https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-embedding-2/">Gemini-2 embedding</a> model works by Google DeepMind. </em></p><p>Tomas Mikolov and colleagues at Google formalized this in their 2013 <a href="https://arxiv.org/abs/1301.3781">Word2Vec paper</a>. The paper showed that vectors encode semantic relationships with surprising precision. The most-cited example is the vector for &#8220;<strong>king</strong>&#8221; minus &#8220;<strong>man</strong>&#8221; plus &#8220;<strong>woman</strong>&#8221; yields a vector close to &#8220;<strong>queen</strong>.&#8221;</p><p>Two sentences that mean the same thing land close together in vector space:</p><ul><li><p>&#8220;Cancel my subscription.&#8221;</p></li><li><p>&#8220;I want to stop paying for this.&#8221;</p></li></ul><p>Two sentences that share a word but mean different things land far apart:</p><ul><li><p>&#8220;Bank account.&#8221;</p></li><li><p>&#8220;River bank.&#8221;</p></li></ul><p><strong>Embeddings encode meaning, not form</strong>. 
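</p><p>&#8220;Close together in vector space&#8221; is measured with a similarity metric, most commonly cosine similarity. A toy sketch, using hand-made three-dimensional vectors in place of real embeddings (which have hundreds or thousands of dimensions produced by an embedding model):</p>

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Made-up vectors illustrating the geometry, not real embedding outputs.
cancel_sub  = [0.9, 0.1, 0.0]  # "Cancel my subscription."
stop_paying = [0.8, 0.2, 0.1]  # "I want to stop paying for this."
river_bank  = [0.0, 0.2, 0.9]  # "River bank."

# Same meaning -> high similarity; unrelated meaning -> low similarity.
print(cosine(cancel_sub, stop_paying) > cosine(cancel_sub, river_bank))  # True
```

<p>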
That is what makes them the right foundation for any system that needs to understand intent.</p><p>The vector produced lives in a <strong>vector database</strong> alongside millions of others. When the system needs relevant information, it converts the query into a vector and searches for the closest matches. This is called <strong>semantic search</strong> or <strong>vector similarity search</strong>. </p><p>What product teams build on top of that foundation determines whether agents hold up in production or quietly erode user trust.</p><h2>How AI Agents Use Embeddings: Retrieval, Memory, Routing, and Personalization</h2><p>A chat interface processes a message and returns a response. </p><p>An agent does much more. It decides <strong>what to do</strong>, <strong>executes steps</strong>, <strong>uses</strong> <strong>tools</strong>, and <strong>builds toward a goal across multiple turns</strong>. The difference is not just architectural. It is temporal. That temporal dimension is exactly why agents depend on embeddings in ways a chat interface never needed to.</p><p><strong>Retrieval and grounding.</strong> </p><p>When an agent needs to complete a task, it needs relevant context. The agent converts the current query into a vector and searches the database for the closest chunks. It then pulls those chunks into its context window. </p><p>Research at&nbsp;<a href="https://proceedings.iclr.cc/paper_files/paper/2025/file/5df5b1f121c915d8bdd00db6aac20827-Paper-Conference.pdf">ICLR 2025</a>&nbsp;found that irrelevant retrieved passages, i.e., &#8220;hard negatives,&#8221; degrade output quality even when recall is high. </p><p>A 2025 paper <a href="https://arxiv.org/abs/2510.13975">classifying errors across RAG systems</a> confirmed the same: retrieval failures and generation failures compound each other. 
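</p><p>The retrieval step described earlier, embed the query and pull the closest stored chunks, reduces to a top-k nearest-neighbor search. A brute-force sketch with toy vectors; production systems hold millions of vectors in a vector database behind an approximate-nearest-neighbor index instead:</p>

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy store of (chunk_text, vector); both are illustrative stand-ins.
store = [
    ("refund policy",   [0.9, 0.1, 0.0]),
    ("api rate limits", [0.1, 0.9, 0.1]),
    ("office address",  [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k=2):
    # Rank every chunk by similarity to the query and keep the top k.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.8, 0.2, 0.0], k=1))  # ['refund policy']
```

<p>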
When the context layer fails, the model cannot compensate.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dblK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dblK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 424w, https://substackcdn.com/image/fetch/$s_!dblK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 848w, https://substackcdn.com/image/fetch/$s_!dblK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!dblK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dblK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png" width="1456" height="621" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:621,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:396098,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190837237?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dblK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 424w, https://substackcdn.com/image/fetch/$s_!dblK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 848w, https://substackcdn.com/image/fetch/$s_!dblK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!dblK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e0b0575-2867-4221-9364-876e010351c3_2688x1146.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>More retrieved passages do not mean better context. RAG accuracy peaks at ~10 passages and declines as precision drops and misleading passages enter the context window.</em> | <strong>Source</strong>: <strong><a href="https://proceedings.iclr.cc/paper_files/paper/2025/file/5df5b1f121c915d8bdd00db6aac20827-Paper-Conference.pdf">Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG</a></strong></figcaption></figure></div><p></p><p><strong>Memory.</strong> </p><p>Agents need to remember things across sessions, not just within one. 
Consider these examples:</p><ul><li><p>A support agent should remember that a user prefers email over phone calls.</p></li><li><p>A research agent should remember open questions from the last session.</p></li><li><p>A sales agent should remember the deal context from six weeks ago.</p></li></ul><p>Embeddings make this possible by encoding past interactions as vectors. The system retrieves them semantically when they are needed. Google&#8217;s <a href="https://google.github.io/adk-docs/sessions/memory/">Agent Development Kit (ADK)</a>, released in 2025, treats this as a first-class architectural requirement. It separates short-term session memory from long-term persistent memory. It then uses vector similarity search to retrieve only what is relevant, not inject an entire history into the context window.</p><p><strong>Routing.</strong> </p><p>In multi-step workflows, agents decide what happens next. The choice might be:</p><ul><li><p>Which tool to call?</p></li><li><p>Which knowledge base to query?</p></li><li><p>Which sub-agent to hand the task off to?</p></li></ul><p>Semantic routing uses embeddings to match an intent to the right next step. Instead of brittle &#8220;if X then Y&#8221; rules, the routing layer uses embedding similarity to match queries to capabilities. This makes the system far more flexible as user language varies across thousands of real interactions.</p><p><strong>Personalization.</strong> </p><p>Embeddings encode user behavior, preferences, and history in a form that is queryable. A recommendation agent that understands a user&#8217;s history as a vector finds semantically similar content without an explicit search term. The personalization is grounded in the meaning of past behavior, not keywords. 
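</p><p>The semantic routing pattern described above can be sketched in a few lines: embed each capability once, then send a query to the most similar capability rather than through a brittle if/else chain. The capability names and vectors here are illustrative:</p>

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical capability embeddings; a real router would embed each
# capability's description with the same model used for queries.
capabilities = {
    "billing_agent": [0.9, 0.1, 0.0],
    "support_agent": [0.1, 0.9, 0.1],
}

def route(query_vec):
    # Pick the capability whose embedding is most similar to the query.
    return max(capabilities, key=lambda name: cosine(query_vec, capabilities[name]))

print(route([0.8, 0.2, 0.05]))  # billing_agent
```

<p>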
That is what makes it feel relevant rather than mechanical.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/embeddings-for-ai-agents?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/embeddings-for-ai-agents?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/embeddings-for-ai-agents?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>How Multi-Agent Systems Use Shared Embeddings for Coordination</h2><p>Multi-agent architectures are becoming the standard production pattern for complex agentic products. A customer success platform might coordinate across:</p><ul><li><p>A billing agent.</p></li><li><p>A technical support agent.</p></li><li><p>A knowledge retrieval agent.</p></li><li><p>An escalation agent.</p></li></ul><p>Each sub-agent is specialized. The coordination challenge sits between them. When the coordinator passes context to a sub-agent, it needs to be semantically accurate. The sub-agent needs the relevant pieces of conversation history, user state, and task context to do its job. A raw transcript dump does not cut it.</p><p>Research on the <a href="https://arxiv.org/abs/2602.06039">DyTopo routing system</a> (February 2026) found a clear result. Reconstructing agent communication paths using embedding-based semantic matching at each reasoning step produced a 6.2% average improvement over fixed routing rules. 
That is a meaningful margin in workflows where failures accumulate across steps.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0aXg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0aXg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 424w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 848w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 1272w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0aXg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png" width="1456" height="900" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:405071,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190837237?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0aXg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 424w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 848w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 1272w, https://substackcdn.com/image/fetch/$s_!0aXg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39d68108-a98f-4434-bdd4-1e5ef4742182_2384x1474.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em><strong>(A)</strong> Single-agent. <strong>(B)</strong> Fixed topology: same agent graph every round. <strong>(C)</strong> DyTopo: embeddings rebuild the graph each round based on task goal &#8212; the architecture behind the 6.2% improvement</em>. | <strong>Source</strong>: <a href="https://arxiv.org/pdf/2602.06039">DyTopo</a>, </figcaption></figure></div><p>A shared-memory architecture relies on all agents accessing the same vector database. When one agent learns something important, like a user preference, a resolved constraint, or a task dependency, it writes that to shared memory as an embedding. When another agent needs it later, it retrieves it semantically. </p><p>The <a href="https://openreview.net/forum?id=N7NDfV2YMp">Federation of Agents framework</a> demonstrated this at scale. 
Using Versioned Capability Vectors &#8212; agent profiles indexed and retrieved through semantic search &#8212; it achieved a 13&#215; improvement over single-model baselines on complex multi-step reasoning tasks.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X1lt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X1lt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 424w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 848w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 1272w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X1lt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png" width="1456" height="891" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:891,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:586585,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190837237?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!X1lt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 424w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 848w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 1272w, https://substackcdn.com/image/fetch/$s_!X1lt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90c5dadd-d32e-4fca-ab74-09a67eb56ab7_2346x1436.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The orchestrator embeds each sub-task and scores it against agent capability profiles using cosine similarity. The highest score determines routing &#8212; Sub-task 3 routes to Agent A (0.70), Sub-task 1 to Agent D (0.73).</em> | <strong>Source</strong>: <a href="https://openreview.net/pdf?id=N7NDfV2YMp">Federation of Agents</a></figcaption></figure></div><p></p><p>The pattern is consistent: sub-agent systems with a well-maintained shared vector store outperform systems built on static context injection or keyword routing &#8212; not because the models are stronger, but because the context system is better designed.</p><h2>Why Embedding Quality Is a Product Decision, Not an Engineering One</h2><p>Embedding quality is a product decision. 
The choices involved directly determine user experience:</p><ul><li><p>Which embedding model do you use?</p></li><li><p>How do you chunk documents before embedding them?</p></li><li><p>How often do you refresh the vector store?</p></li><li><p>Which retrieval strategy do you apply?</p></li></ul><p>A support agent that retrieves stale documentation frustrates users. </p><p>A research agent that misses the most relevant source because it was chunked poorly loses user trust. </p><p>A sales agent that forgets a deal detail because it was never stored loses the deal.</p><p>Product leaders who understand embeddings make better calls here. </p><ul><li><p>They push for retrieval quality metrics to be tracked in production, not just during demos. </p></li><li><p>They ask whether the embedding model was fine-tuned on domain-specific content. </p></li><li><p>They question whether the chunking strategy preserves meaning at document boundaries. </p></li><li><p>They insist that memory architecture is designed before launch, not patched after users complain.</p></li></ul><p>The most common mistake is treating embeddings as only &#8220;the RAG layer.&#8221; Retrieval-augmented generation is one use case. Embeddings also power:</p><ul><li><p>Memory across sessions.</p></li><li><p>Semantic routing between agents.</p></li><li><p>Personalization based on behavioral history.</p></li><li><p>Anomaly detection when an agent&#8217;s outputs diverge from expected patterns.</p></li></ul><p>A team that scopes embeddings as only a retrieval pipeline leaves memory, routing, and personalization undesigned. Teams that treat embeddings as the full memory and coordination layer build systems that scale with workflow complexity. The others spend months patching failures that could have been designed away from the start.</p><h2>The Strategic Edge in the Agentic Era</h2><p>Model quality is converging faster than most teams expected. 
As of early 2026, <a href="https://openlm.ai/chatbot-arena/">LMSYS Chatbot Arena</a> &#8212; which aggregates nearly five million human preference votes across 296 models &#8212; shows frontier models clustered within a few Elo points of each other. </p><p><a href="https://zylos.ai/research/2026-01-16-llm-evaluation-benchmarking">Zylos Research&#8217;s January 2026 benchmark analysis</a> found leading models scoring above 88% on MMLU, a level that would have marked a meaningful performance gap just twelve months earlier.</p><p>The differentiation will not come from which foundation model you pick. It will come from how well your system <strong>retrieves</strong>, <strong>remembers</strong>, and <strong>routes</strong> across the full lifecycle of a user interaction.</p><p>Embeddings are what make that possible. They connect memory to retrieval, retrieval to routing, routing to coordination, and coordination to user experience. They are not a backend detail. They are a design decision that compounds across every feature you ship.</p><blockquote><p>Product leaders who understand this layer will catch failures before users do. The ones who delegate it entirely will keep shipping agents that perform in demos and fall apart in production. The model is not the bottleneck. The context system is. Build accordingly.</p></blockquote><div><hr></div><h2>Frequently Asked Questions</h2><p><strong>What are embeddings in AI agents?</strong><br>Embeddings are numerical vector representations of text, code, or data that encode semantic meaning. In AI agents, they power four core functions: retrieval from knowledge bases, memory across sessions, semantic routing between tools and sub-agents, and personalization from user history. Every time an agent finds relevant context or remembers past information, it relies on embeddings.</p><p><strong>Are embeddings only used for RAG in AI agents?</strong><br>No. Retrieval-augmented generation is one use case among many. 
Embeddings also power memory across sessions, semantic routing between agents and tools, personalization based on user behavioral history, and anomaly detection. Every time an agentic system finds something relevant, recognizes a similar pattern, or organizes data by meaning, it is using the same embedding infrastructure.</p><p><strong>How do embeddings improve AI agent memory?</strong><br>Embeddings encode past interactions as vectors stored in a vector database. When the agent needs relevant context from a prior session, it converts the current query into a vector and retrieves the closest semantic matches. Google&#8217;s Agent Development Kit (ADK) treats this as a first-class architectural requirement, separating short-term session memory from long-term persistent memory retrieved via vector similarity search.</p><p><strong>What is semantic routing in multi-agent systems?</strong><br>Semantic routing uses embedding similarity to match an incoming query or task to the most appropriate agent, tool, or knowledge base. Unlike rule-based routing, it generalizes across varied user language. Research on the DyTopo system found embedding-based semantic routing produced a 6.2% improvement over fixed routing rules across code generation and reasoning tasks.</p><p><strong>Why should product leaders care about embeddings for AI agents?</strong><br>Embedding quality is a product decision, not just an engineering one. The choice of embedding model, chunking strategy, vector store refresh schedule, and retrieval approach all directly determine user experience. 
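</p><p>Of these choices, chunking is the easiest to get wrong. A toy contrast between fixed-size and boundary-aware chunking (hypothetical document and sizes):</p>

```python
# Toy contrast between naive fixed-size chunking and boundary-aware
# chunking. The document and chunk width are hypothetical and tiny so
# the failure mode is visible.
DOC = ("Refunds are issued within 5 days. "
       "Enterprise plans include SSO. "
       "Trial accounts expire after 14 days.")

def naive_chunks(text, width=40):
    """Fixed-size windows: chunk boundaries ignore meaning."""
    return [text[i:i + width] for i in range(0, len(text), width)]

def sentence_chunks(text):
    """Boundary-aware: one chunk per sentence, each fact kept whole."""
    return [s.strip().rstrip(".") + "." for s in text.split(". ") if s.strip()]

print(naive_chunks(DOC)[0])     # ends mid-word: '...within 5 days. Enterp'
print(sentence_chunks(DOC)[1])  # -> Enterprise plans include SSO.
```

<p>The fixed-size version splits sentences mid-thought, so a retrieved chunk can miss half the fact it was supposed to carry.</p><p>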
Product leaders who understand these choices identify context failures before users encounter them &#8212; and ship agents that hold up beyond the demo.</p>]]></content:encoded></item><item><title><![CDATA[From Zero To 100,000: The Questions We Set Out To Answer]]></title><description><![CDATA[One year of Adaline Labs. Over 100,000 subscribers. Here's what we believed, what turned out to be true, and what completely surprised us.]]></description><link>https://labs.adaline.ai/p/from-zero-to-100000</link><guid isPermaLink="false">https://labs.adaline.ai/p/from-zero-to-100000</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 11 Mar 2026 12:00:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/672dad0e-08df-4b2e-b482-bacc672432f5_4800x2508.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: <strong>How do LLMs actually work?</strong> <strong>How do you build reliably with them?</strong> <strong>How do you know if they&#8217;re working in production?</strong> These were the questions nobody was answering clearly in 2025. So we built Adaline Labs for the people asking them. 
Some of these were the <strong>AI PM</strong>, the <strong>early-stage founder</strong>, and the <strong>engineer</strong> who became their team&#8217;s de facto AI lead. One year. 100,000 readers. Here&#8217;s the story. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gi3Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gi3Y!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190376436?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gi3Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!gi3Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7786aee1-9483-454c-b461-9d1a1aab1472_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When we published the first post on Adaline Labs, we had a simple and maybe naive belief that the people building AI products were being underserved by the content around them.</p><p>There was plenty of research. Plenty of hype. Plenty of &#8220;AI will change everything&#8221; takes. What was harder to find was something practical, honest, and aimed at the person actually responsible for shipping an AI feature. Or building AI products. This included the <strong>product manager </strong>and <strong>leaders</strong>, <strong>the early-stage founder</strong>, and <strong>the engineer</strong> who just became their team&#8217;s de facto AI lead.</p><p>That was the gap we wanted to close. 
And one year later, with over 100,000 of you reading, we think we were onto something.</p><p>Here is what we set out to answer and what we learned along the way.</p><h2>The First Question: &#8220;What Even Is This Thing?&#8221;</h2><p>In early 2025, most product leaders we spoke to were in a strange position. They were being asked to build with LLMs without really understanding how they worked. Not at a research level (that was never the point), but at a product level. Enough to make good decisions.</p><p>So we started from the ground up.</p><p><strong>What are embeddings</strong>, and <strong>why do they matter for search?</strong> <strong>How does attention work</strong>, and <strong>what does that mean for context limits?</strong> <strong>What is test-time scaling</strong>, and <strong>why is reasoning so expensive?</strong> <strong>What even is an agentic LLM?</strong></p><p>These were not academic questions. They were the questions a PM would ask before a planning meeting and couldn&#8217;t find a clean answer to. We wrote them for that person.</p><blockquote><p><em>The audience was not looking for a shortcut. They wanted to actually understand; they just needed someone to explain it without the jargon.</em></p></blockquote><p>Posts like <em>"<a href="https://open.substack.com/pub/adalineai/p/what-pms-need-to-know-about-transformers?utm_campaign=post-expanded-share&amp;utm_medium=web">What PMs Need to Know About Transformers</a>"</em>&nbsp;and&nbsp;<em>"<a href="https://open.substack.com/pub/adalineai/p/understanding-attention-mechanisms?utm_campaign=post-expanded-share&amp;utm_medium=web">Understanding Attention Mechanisms in LLMs</a>"</em> became some of our most widely shared pieces. What surprised us was the enormous appetite for this content. 
</p><h2>The Second Question: &#8220;Okay, But How Do I Build With It?&#8221;</h2><p>Once we established the fundamentals, the natural next question arrived: <strong>how do you actually go from model to product?</strong></p><p>This is where things got interesting and where the content got more opinionated.</p><p>We wrote extensively: </p><ul><li><p>About <strong><a href="https://open.substack.com/pub/adalineai/p/prompt-engineering-as-product-strategy?utm_campaign=post-expanded-share&amp;utm_medium=web">prompt engineering</a></strong>, not as a parlour trick, but as a genuine product discipline. </p></li><li><p>About <strong><a href="https://open.substack.com/pub/adalineai/p/writing-effective-tool-calling-functions?utm_campaign=post-expanded-share&amp;utm_medium=web">tool calling</a></strong>, and how to write effective functions that your LLM can actually use. 
</p></li><li><p>About <strong><a href="https://open.substack.com/pub/adalineai/p/building-production-ready-agentic?utm_campaign=post-expanded-share&amp;utm_medium=web">RAG systems</a>,</strong> <strong><a href="https://open.substack.com/pub/adalineai/p/agentic-ai?utm_campaign=post-expanded-share&amp;utm_medium=web">agentic workflows</a></strong>, and the moment when your product stops being &#8220;an app with AI&#8221; and starts being &#8220;an AI-native product.&#8221;</p></li></ul><p>We also started writing about the mistakes, such as <strong><a href="https://open.substack.com/pub/adalineai/p/context-rot-why-llms-are-getting?utm_campaign=post-expanded-share&amp;utm_medium=web">context rot</a></strong>, <strong><a href="https://open.substack.com/pub/adalineai/p/token-burnout-why-ai-costs-are-climbing?utm_campaign=post-expanded-share&amp;utm_medium=web">token burnout</a></strong>, and how an <strong><a href="https://open.substack.com/pub/adalineai/p/ai-observability-and-evaluations?utm_campaign=post-expanded-share&amp;utm_medium=web">LLM product can quietly degrade in production</a></strong> without anyone noticing until users start churning.</p><blockquote><p><em>Product leaders were not intimidated by the technical depth. They were hungry for it. The more specific and precise we got, including <strong>actual code</strong>, <strong>actual prompt structures,</strong> and <strong>actual failure modes</strong>, the more the audience grew.</em></p></blockquote><h2>The Third Question: &#8220;How Do I Know If It's Working?&#8221;</h2><p>This one took us longer to articulate, but it became the thread that tied everything together.</p><p>You can build a beautiful agentic product. You can have great prompts, well-designed tool calls, and a thoughtful RAG setup. And then it goes to production, and you have no idea what&#8217;s actually happening.</p><ul><li><p>Is the LLM hallucinating? </p></li><li><p>Is a tool call failing silently? 
</p></li><li><p>Is your prompt behaving differently at 10 pm than it does at 10 am? </p></li><li><p>Is latency spiking for a specific type of user query?</p></li></ul><p>This is the <strong>evaluation</strong> and <strong>observability</strong> problem. And it turns out it&#8217;s the most important problem in AI product development that needs attention right now. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p8rV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p8rV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png" width="1320" height="1542" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1542,&quot;width&quot;:1320,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p8rV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>A complete observability trace in <a href="https://go.adaline.ai/dRpz6AY">Adaline</a>.</em></figcaption></figure></div><p>We published pieces on <strong><a href="https://open.substack.com/pub/adalineai/p/observability-vs-monitoring-for-agentic-ai?utm_campaign=post-expanded-share&amp;utm_medium=web">LLM observability</a>,</strong> <strong>eval frameworks</strong>, <strong><a href="https://open.substack.com/pub/adalineai/p/llm-as-a-judge?r=57ptmv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true">LLM-as-a-judge</a></strong>, and <strong>continuous evaluation</strong> in production. </p><p>And then, in 2026, it became the central thesis:&nbsp;<em>observability is the operating system for reliable LLMs</em>.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;41d5001f-20bb-45d7-a630-708973de910f&quot;,&quot;caption&quot;:&quot;TLDR: Most LLM products don&#8217;t crash. They quietly leak trust, safety, and budget. 
Silent failure is the default failure mode, and most teams never see it coming. This is a practical guide for engineers and PMs shipping LLM features in production. You will leave with a concrete framework for&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Observability And Evaluations: The Operating System For Reliable LLM Products&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:40003941,&quot;name&quot;:&quot;Arsh Shah Dilbagi&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78042b50-91fe-47cb-838e-2e45b1434fc1_1024x1024.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-04T13:02:50.737Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/45249d8c-38c8-486e-b392-6b83b50dfb23_2880x1620.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://labs.adaline.ai/p/ai-observability-and-evaluations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189392105,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:224,&quot;comment_count&quot;:1,&quot;publication_id&quot;:4015259,&quot;publication_name&quot;:&quot;Adaline Labs&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Wt35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5199b386-b9f1-4343-88fd-ed804d414ec9_1001x1001.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Interestingly, this resonated not just with engineers, but with product leaders who finally had a language for why their AI products 
felt unpredictable. They were not imagining things. The systems were genuinely hard to see inside, and that was fixable.</p><h2>Our Readers Shaped This Newsletter</h2><p>Everything we know about our audience comes from listening closely and constantly. These were the consistent signals our readers kept sending us:</p><ul><li><p>How do LLMs actually work?</p></li><li><p>How do I build reliably with them?</p></li><li><p>With new models dropping every month, how do I integrate them into existing workflows?</p></li><li><p>Which model suits which part of the workflow?</p></li><li><p>Which tool (Cursor, Claude Code, Codex, etc.) can product leaders and builders use to enhance their productivity?</p></li><li><p>How do I know if it is working in production?</p></li></ul><p>We did not pick our topics. Our readers did. We researched, studied, executed, and wrote about them. Over time, those signals pointed to a clear set of content pillars and a clear center.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!asme!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!asme!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 424w, https://substackcdn.com/image/fetch/$s_!asme!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 848w, 
https://substackcdn.com/image/fetch/$s_!asme!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 1272w, https://substackcdn.com/image/fetch/$s_!asme!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!asme!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png" width="1456" height="1366" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1366,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:145872,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190376436?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!asme!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 424w, 
https://substackcdn.com/image/fetch/$s_!asme!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 848w, https://substackcdn.com/image/fetch/$s_!asme!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 1272w, https://substackcdn.com/image/fetch/$s_!asme!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a446a53-6904-4fcf-bb4c-3b9876562cbc_1456x1366.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The five content pillars of Adaline Labs and where they intersect.</em></figcaption></figure></div><p>The diagram above captures something we did not plan but discovered over the year. <strong>Evals and Observability are not standalone topics</strong>. They live at the intersections. They are the connective tissue between understanding AI, building with it, and shipping it with confidence.</p><h2>What We Believe Now That We Didn&#8217;t When We Started</h2><p>A year in, here are the things we believe more firmly than when we started:</p><p><strong>The PM is the most important person in an AI product team.</strong> Not because they write code, but because: </p><ul><li><p>They hold the product vision. </p></li><li><p>They understand the user and serve as the bridge between what the model can do and what it should do. </p></li></ul><p>Equipping that person matters more than we initially realized.</p><p><strong>Fundamentals compound.</strong> The readers who understood embeddings and attention early are now the ones thinking clearly about <strong>context engineering</strong> and <strong>agentic architecture</strong>. There are no shortcuts in this field. But there are faster paths, and that&#8217;s what we tried to build.</p><p><strong>The hardest problems are not technical.</strong> They are judgment problems. For instance:</p><ul><li><p>When do you use a smaller, faster model vs. a frontier one? </p></li><li><p>When is a RAG system the right call vs. fine-tuning? </p></li><li><p>When do you add an eval layer vs. ship-and-learn? 
</p></li></ul><p>These are the decisions our readers face every week, and they need frameworks, not just tutorials.</p><blockquote><p><em><strong>Reaching 100,000+ readers is both humbling and clarifying.</strong> Humbling because this community chose to spend its attention here, every week, amid everything competing for it. Clarifying because the scale of the response tells us something: there is a massive, underserved audience of people building at the frontier of AI who want to think rigorously, not just move fast.</em></p></blockquote><h2>What Comes Next</h2><p>The questions are getting harder. And we believe this is what unfolds in 2026:</p><ul><li><p>AI agents become real production infrastructure.</p></li><li><p>Evals and observability move from nice-to-have to non-negotiable.</p></li><li><p>AI coding agents change how teams ship.</p></li><li><p>Product work gets redefined when everyone can build.</p></li></ul><p>We are going to keep following the questions. The ones our readers are wrestling with. The ones that do not yet have clean answers but deserve clear thinking.</p><p>Thank you for being here for year one.</p><blockquote><p><em>The questions get harder. Our answers get clearer.</em></p></blockquote><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Sub-Agents For Product Managers: Stop Directing A Tool. Start Running A Team.]]></title><description><![CDATA[The chatbot model has a structural ceiling. Sub-agents are what's above it.]]></description><link>https://labs.adaline.ai/p/sub-agents-for-product-managers</link><guid isPermaLink="false">https://labs.adaline.ai/p/sub-agents-for-product-managers</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 07 Mar 2026 01:00:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a17e309-6f14-4208-bd41-41f1ae95af00_1456x816.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: PMs are running workflows through chat windows. That&#8217;s the wrong architecture. This blog breaks down why the chatbot model has a structural ceiling, not a prompting problem. And what actually changes when you replace it with <strong>orchestrated</strong>, <strong>parallel</strong>, and <strong>workspace-native agents</strong>. It covers the three constraints killing your current setup, <strong>how sub-agents actually work</strong>, <strong>when to use them</strong> and <strong>when not to</strong>, and what the PM role becomes once the architecture shifts. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gfiZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gfiZ!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190000757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gfiZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!gfiZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08c7b7c3-e54a-46e8-b0fc-fa4b7e8dc226_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It is 2026, and I still find many product managers using AI the same way they use Google:&nbsp;<strong>type a question</strong>,&nbsp;<strong>get a response</strong>, and&nbsp;act on it.</p><p>The interface is a text box.<br>The output is text that you copy elsewhere.<br>The workflow is: <strong>prompt</strong>, <strong>read</strong>, <strong>paste</strong>, and <strong>repeat</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!US6j!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!US6j!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 424w, https://substackcdn.com/image/fetch/$s_!US6j!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 848w, https://substackcdn.com/image/fetch/$s_!US6j!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 1272w, https://substackcdn.com/image/fetch/$s_!US6j!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!US6j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png" width="1456" height="479" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:479,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:59079,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190000757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!US6j!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 424w, https://substackcdn.com/image/fetch/$s_!US6j!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 848w, https://substackcdn.com/image/fetch/$s_!US6j!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 1272w, https://substackcdn.com/image/fetch/$s_!US6j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2ada41c-57ec-4666-8b5b-5273ab2038d0_1908x628.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" 
fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This works. But it has a ceiling, and it is not a ceiling of model intelligence. </p><p>Claude 4.6, GPT-5.3, and Gemini 3.1 are all capable of more than what a single chat thread lets you access. The ceiling isn&#8217;t the model. It&#8217;s the architecture you&#8217;re running it through. A chatbot is one assistant, one context window, one sequential thread. Every interaction starts with what you type. Every output ends up in your clipboard.</p><p>Sub-agents for product managers aren&#8217;t a new feature inside that model. They&#8217;re a replacement for the model itself.</p><p>The change is from directing a single assistant to orchestrating a team.<br>And the product teams that have made this shift aren&#8217;t just working faster; they&#8217;re also working differently. 
</p><p><strong>Research</strong>, <strong>spec drafting</strong>, and <strong>backlog</strong> <strong>triage</strong> used to happen one at a time. Now they happen in parallel, each handled by a specialized agent, each returning a structured result to an orchestrator, the PM, who synthesizes and decides.</p><p>This article is about the mental model behind that shift.</p><p>Not a tutorial.</p><p>Not a setup guide.</p><p><strong>It is a framework for understanding what sub-agents are</strong>, why the interface you run them from matters, and what the PM role actually looks like once the architecture changes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The Single-Assistant Ceiling</h2><p>The chatbot model has <strong>three structural constraints</strong> that no amount of improved prompting can solve.</p><p>The first is <strong>statelessness</strong>. </p><p>Every session starts from zero. The model has no memory of your product, your codebase, or what you decided last Tuesday unless you paste it back in.</p><p>ChatGPT and Claude (Web) do offer memory functionality, but it is a single shared memory space that every chat can access. 
Information leaks into projects that don&#8217;t need it; personal, private, and professional contexts get mixed together.</p><p>In this case, PMs become context managers. They have to:</p><ol><li><p>Maintain long system prompts.</p></li><li><p>Copy documentation into chat windows.</p></li><li><p>Manually filter information to bridge the gap between what the AI needs to know and what it actually knows.</p></li></ol><p>The intelligence is there, but the continuity isn&#8217;t.</p><p>The second constraint is <strong>single-threading</strong>. </p><p>One task happens at a time. If you&#8217;re using an agentic AI product manager setup, you&#8217;ve probably felt this. You ask the model to research a competitive feature, then draft a spec, then break it into tickets. Each task waits for the previous one.</p><p>The model is capable of doing all three &#8212; just not at once, not in separate contexts, not in parallel.</p><p>Complex PM work rarely has that kind of serial structure. Real product work benefits from parallelization because it saves time.</p><p>The third constraint is <strong>isolation from the environment</strong>. </p><p>A chatbot suggestion lives in a chat window. The action it recommends lives elsewhere &#8212; in Jira, in Notion, in a Figma file, or in a codebase. It takes manual effort to bring together &#8220;AI output&#8221; and &#8220;real artifact.&#8221;</p><p>As a PM, you are the integration layer. You copy the draft. You paste the ticket description. You take the suggestion and do something with it. The AI never touches the actual environment where work happens.</p><p>These aren&#8217;t complaints about specific products. They are structural properties of the chatbot interface. 
And together, they explain why <a href="https://redreamality.com/blog/ai-agents-in-product-management-2026/">product teams</a> save roughly two hours a day through AI automation but watch those gains concentrate in routine, documentation-heavy tasks. Not the complex, interconnected work that makes the biggest difference. The interface caps the upside.</p><p>The question isn&#8217;t how to prompt better inside the single-assistant model. It&#8217;s what happens when you replace the model altogether.</p><h2>What Sub-Agents Actually Are</h2><p>Sub-agents are not &#8220;more prompts.&#8221; They are a different architectural pattern. And understanding the pattern is the prerequisite to using it well.</p><p>In a sub-agent system, a parent agent &#8212; <strong>the orchestrator</strong> &#8212; decomposes a complex task and delegates pieces of it to specialized child agents. Each child agent, <strong>the sub-agent</strong>, operates in its own isolated context window.</p><ol><li><p>It receives a prompt with exactly the context it needs.</p></li><li><p>Works autonomously using its assigned tools.</p></li><li><p>Returns a structured result to the parent.</p></li></ol><p>The parent synthesizes those results and decides what happens next.</p><p>Three things make this fundamentally different from a single-assistant setup.</p><p><strong>Context isolation.</strong><br>Each sub-agent starts with a clean context. A research sub-agent exploring competitive positioning doesn&#8217;t share its context window with a spec-drafting sub-agent working on a feature brief. Neither pollutes the other&#8217;s focus.</p><p>And the orchestrator never sees the intermediate noise. It sees final results. 
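</p><p>As a sketch only, the delegation loop above can be expressed in a few lines of Python. Everything here is hypothetical: <code>run_subagent</code> stands in for a real model invocation running in its own context window with its own tools, and the briefs are invented for illustration.</p>

```python
import asyncio
from dataclasses import dataclass

@dataclass
class SubAgentResult:
    role: str
    summary: str

async def run_subagent(role: str, brief: str) -> SubAgentResult:
    # Hypothetical stand-in for an LLM call: each sub-agent sees only its
    # own brief (isolated context), never the other agents' work.
    await asyncio.sleep(0)  # placeholder for model + tool calls
    return SubAgentResult(role=role, summary=f"[{role}] condensed findings for: {brief}")

async def orchestrate(task: str) -> str:
    # The orchestrator decomposes the task into independent briefs...
    briefs = {
        "research": f"competitive landscape for {task}",
        "spec": f"draft requirements for {task}",
        "triage": f"backlog impact of {task}",
    }
    # ...runs the sub-agents in parallel...
    results = await asyncio.gather(
        *(run_subagent(role, brief) for role, brief in briefs.items())
    )
    # ...and synthesizes only their structured results, not the raw noise.
    return "\n".join(r.summary for r in results)

print(asyncio.run(orchestrate("a CSV-export feature")))
```

<p>In a production system, each delegated call would also carry its own tool permissions and timeout; the shape of the loop stays the same.</p><p>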
This is how <a href="https://www.anthropic.com/engineering/multi-agent-research-system">Anthropic&#8217;s multi-agent research system works</a>:</p><blockquote><p>A lead agent spawns sub-agents to explore different aspects of a question simultaneously, each returning condensed findings rather than raw search logs.</p></blockquote><p>Anthropic&#8217;s engineering team &#8212; Jeremy Hadfield, Barry Zhang, and colleagues &#8212; documented a 90.2% improvement over single-agent performance on complex research tasks. Not because the model got smarter, but because the architecture distributes the cognitive load.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tt9Z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 424w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 848w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 1272w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 424w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 848w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 1272w, https://substackcdn.com/image/fetch/$s_!Tt9Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4dc156d-cd2e-4ced-9b9a-83c03beb2be7_3840x3840.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The orchestrator-worker pattern in practice. </em>| <strong>Source</strong>: <a href="https://www.anthropic.com/engineering/multi-agent-research-system">Anthropic Engineering, June 2025</a></figcaption></figure></div><p><strong>Parallel execution.</strong><br>Multiple sub-agents run simultaneously. This is what the Cursor community noticed when sub-agents shipped &#8212; that single-threaded prompting suddenly felt archaic. 
</p><div class="pullquote"><p>Agents with real roles, customized skill sets, clean handoffs, deliberate execution.</p></div><p>That was the reaction, because that&#8217;s what becomes visible when you move from sequential to parallel.</p><div id="youtube2-NXTnmfG4h7U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;NXTnmfG4h7U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/NXTnmfG4h7U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>From a PM standpoint, a research agent, a spec agent, and a triage agent can all be working simultaneously. Each in its own context and each returning to a shared orchestration layer when complete.</p><p><strong>Specialization and model selection.</strong><br>Each sub-agent is configured for its role. That includes its instructions, its tool access, and most importantly, its model. </p><ul><li><p>A sub-agent doing deep reasoning on a product brief might run on Claude Opus. </p></li><li><p>A sub-agent performing rapid parallel searches might run on Claude Sonnet 4.6, GPT-5.3 Instant, or even Gemini 3.1 Flash. Where speed matters more than depth.</p></li><li><p>A sub-agent working with long documents such as research papers, transcript archives, and support logs, might run on Gemini, which is optimized for long-context retrieval. </p></li></ul><blockquote><p>The model choice stops being a single global setting and becomes a deliberate configuration decision per task type.</p></blockquote><p>This is what multi-agent product management actually means in practice: the PM defines the goal and the team's shape. The team executes in parallel. 
The results come back structured.</p><p>The community reaction to seeing this run &#8212; &#8220;makes single-threaded prompting feel archaic&#8221; &#8212; is the right reaction. </p><p>It&#8217;s not hyperbole. </p><p>It&#8217;s a recognition that the previous model had a ceiling you didn&#8217;t know you were hitting until you saw above it.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/sub-agents-for-product-managers?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/sub-agents-for-product-managers?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/sub-agents-for-product-managers?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>Why the Interface Matters: Chatbot vs Workspace-Native</h2><p>Knowing what sub-agents are is half the model. The other half is understanding where they can run. Because the interface is not neutral. It shapes what&#8217;s possible.</p><p>A chatbot interface is isolated by design. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RPGz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RPGz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 424w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 848w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 1272w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RPGz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png" width="1456" height="684" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:684,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:82057,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190000757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RPGz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 424w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 848w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 1272w, https://substackcdn.com/image/fetch/$s_!RPGz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb4524ae-92c3-4657-980c-b06926a17a5f_1524x716.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It processes text and returns text. It has no access to your files unless you paste or attach them. It has no connection to your tools unless you&#8217;ve explicitly described them in the prompt. It has no memory of your product unless you rebuild that context every session.</p><p>This is fine for answering questions. It is a structural constraint for orchestrating a team of agents that need to read your codebase, push to Jira, pull from Notion, and execute changes in real files.</p><p><strong>Workspace-native tools solve this at the architecture level.</strong></p><p>The clearest articulation of the distinction is this: ChatGPT works from pasted context. Cursor works from your actual project. That difference sounds obvious. 
Its implications run deep.</p><p>Dennis Yang, a PM at Chime, <a href="https://www.builder.io/blog/cursor-for-product-managers">put it plainly after switching</a>: &#8220;Cursor is a much better product manager than I ever was.&#8221;</p><p>He&#8217;s not talking about the model. He&#8217;s talking about the environment.</p><p>When a PRD is drafted inside the workspace, it references real API endpoints. The spec reflects what the team has actually built. Tickets are grounded in the codebase, not a description of it. The artifacts are real because the tool is connected to the environment where real work happens.</p><p>This matters specifically for sub-agents because sub-agents need plumbing.</p><ul><li><p>A research sub-agent needs web search and internal documentation.</p></li><li><p>A spec-drafting sub-agent needs the product&#8217;s existing architecture.</p></li><li><p>A triage sub-agent needs to read from Jira or Linear and write back to it.</p></li></ul><p>None of this is possible inside a stateless chat window.</p><p>The Model Context Protocol (MCP) is what makes it possible in workspace-native tools: a standardized layer that connects agents to external tools and files as first-class capabilities, not workarounds.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;69b6f26c-9fc2-4e55-9d5a-49fef9ab997b&quot;,&quot;caption&quot;:&quot;TLDR: This blog shows how Model Context Protocol (MCP) transforms AI product development from an eight-week engineering marathon into a four-hour prototyping sprint. 
Through building a shopping assistant, you&#8217;ll learn a five-stage playbook that covers tool discovery, product definition, system prompt engineering, guardrails design, and quality evaluatio&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The MCP Product Playbook: From Idea to Prototype in One Conversation&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:315292999,&quot;name&quot;:&quot;Nilesh Barla&quot;,&quot;bio&quot;:&quot;I research and write stuff on Adaline.ai&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b494dad-d22a-40cf-a461-24749c055d0a_960x1280.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-20T02:00:42.560Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d5a4fb71-d50a-46fd-bc04-4e40b077c17b_1614x954.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://labs.adaline.ai/p/the-mcp-product-playbook&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181879651,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:27,&quot;comment_count&quot;:0,&quot;publication_id&quot;:4015259,&quot;publication_name&quot;:&quot;Adaline Labs&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Wt35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5199b386-b9f1-4343-88fd-ed804d414ec9_1001x1001.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p><a href="https://www.builder.io/blog/cursor-for-product-managers">YC&#8217;s Spring 2026 Request for Startups</a> named &#8220;Cursor for Product Management&#8221; as an official startup 
category.</p><p>Naval Ravikant told his 3M+ followers that vibe coding is the new product management. Both point to the same underlying shift: the environment where PMs work is moving from specification documents to executable workspaces.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kh9k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kh9k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 424w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 848w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 1272w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kh9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png" width="1456" height="552" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:552,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:166894,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/190000757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kh9k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 424w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 848w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 1272w, https://substackcdn.com/image/fetch/$s_!kh9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffff0c646-0fc4-4867-aaca-c7e3c88ada52_1972x748.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Source</strong>: <a href="https://x.com/naval/status/2018633583423049951?s=20">Naval on X.</a></figcaption></figure></div><p>The AI agent workflow that matters isn&#8217;t the one in the chat window. It&#8217;s the one running inside the environment where decisions become artifacts.</p><h2>The PM as Orchestrator: What the Role Actually Becomes</h2><p>When the interface changes, the role changes. Not in the direction most PMs expect.</p><p>The shift from chatbot to sub-agent orchestration is not primarily a technical shift. PMs who make this transition don&#8217;t need to become engineers.</p><p>What they need to become is more precise about goals, constraints, and boundaries. Because in an orchestrated system, the PM is not directing each step. The PM is defining the brief. 
The agents figure out the steps.</p><p>This is actually a familiar mental model.</p><p>A PM working with a research team, a designer, an engineer, and a data analyst doesn&#8217;t tell each person exactly what to type. They define the objective, constraints, output format, and handoff structure.</p><p>The team figures out the execution.</p><p>Sub-agent orchestration is the same mental model applied to AI agents. The PM provides the brief, not the method.</p><p><strong>What changes is the cost of imprecision.</strong> A vague goal given to a human engineer prompts a conversation, a clarifying question, and a back-and-forth. A vague goal given to a sub-agent produces an output &#8212; confident, well-formatted, and possibly wrong in ways that are hard to catch.</p><p>The orchestrator&#8217;s core competency becomes writing goals precise enough that agents don&#8217;t hallucinate arbitrary decisions to fill in the gaps. This is what product teams are starting to call &#8220;<strong>executable specs.</strong>&#8221; Essentially, they are requirements so specific that they function almost as instructions. Writing them is the PM skill that matters most in a sub-agent world.</p><p>What the PM stops doing is acting as the integration layer.</p><p>In the chatbot model, the PM is the one who carries information between tools &#8212; from AI to Jira, from research to spec, from spec to engineer. In a well-designed orchestration system, agents handle those handoffs. The PM&#8217;s time shifts toward judgment calls: which goals to prioritize, which agent outputs to synthesize, which results to challenge.</p><p>Jim Allen Wallace of Redis <a href="https://redis.io/blog/ai-agent-orchestration/">cited the prediction that 40% of agentic AI projects will be canceled by the end of 2027</a>. And the cause isn&#8217;t primarily an engineering failure. It&#8217;s a coordination failure. 
Teams underestimate the design work required to define:</p><ol><li><p>Clean handoffs between agents.</p></li><li><p>Precise enough goals to prevent hallucination drift.</p></li><li><p>Clear enough scope boundaries to keep agents from doing work that conflicts.</p></li></ol><p>Getting orchestration right is a product design problem. Which means it&#8217;s a PM problem.</p><h2>When Sub-Agents Are the Right Call</h2><p>Sub-agents are not the answer to every PM problem. The overhead is real and should be taken seriously.</p><p>Each sub-agent runs in its own context window, which means each one consumes tokens independently. <a href="https://www.anthropic.com/engineering/multi-agent-research-system">Anthropic&#8217;s</a> engineering team found that multi-agent architectures use roughly fifteen times more tokens than standard chat interactions. That&#8217;s an economic reality, not a footnote.</p><p>Sub-agents are worth it when the task&#8217;s value justifies the cost and when the task&#8217;s structure actually suits parallel execution.</p><p><strong>Use sub-agents when:</strong></p><ul><li><p>The task is genuinely too large for a single context window.</p></li><li><p>Distinct parallel workstreams exist that don&#8217;t depend on each other&#8217;s output.</p></li><li><p>Different parts of the task benefit from different model strengths &#8212; deep reasoning, fast retrieval, and long-context analysis.</p></li></ul><p><strong>Don&#8217;t use sub-agents when:</strong></p><ul><li><p>The task is simple, sequential, and fits comfortably in a single context.</p></li><li><p>All agents need to share the same context to make decisions (this breaks context isolation, eliminating the primary benefit).</p></li><li><p>The coordination overhead &#8212; designing handoffs, synthesizing outputs &#8212; exceeds the time the parallelism saves.</p></li></ul><p>Single-agent approaches often outperform multi-agent in production for tightly sequential 
tasks.</p><blockquote><p>Complexity is not a virtue.</p></blockquote><p>The orchestrator&#8217;s job is to match the architecture to the task. And sometimes the right call is one agent, one context, one clean result.</p><h2>Conclusion</h2><p>The chatbot is not going away. But it&#8217;s already not the ceiling; it&#8217;s the floor.</p><p>The PMs who are pulling ahead aren&#8217;t using better prompts inside the single-assistant model. They&#8217;re designing systems: specialized agents with defined roles, parallel execution, clean handoffs, and workspace-native environments. Where AI output lands as real artifacts, not clipboard text.</p><p>The mental model shift is from user to orchestrator. From &#8220;how do I ask this better?&#8221; to &#8220;how do I design a team that handles this without me acting as the integration layer?&#8221;</p><p>That transformation requires precision, in goal-setting, in constraint definition, in understanding which tasks justify the architecture and which don&#8217;t.</p><p>It requires tools that are connected to the actual environment where work happens, not isolated chat windows. And it requires a different relationship to AI: not a tool you direct, but a team you run.</p><p>The question to sit with: what is the most complex workflow you currently manage by copying responses from a chatbot into five other tools?</p><p>That&#8217;s the first candidate.</p><p>Not because sub-agents make it trivially easy; they actually don&#8217;t. 
But because that workflow has already exposed the ceiling of the model you&#8217;re in.</p><p>The architecture exists to go above it.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI Observability And Evaluations: The Operating System For Reliable LLM Products]]></title><description><![CDATA[A practical guide to measuring LLM behavior, catching silent failures, and improving with real production data.]]></description><link>https://labs.adaline.ai/p/ai-observability-and-evaluations</link><guid isPermaLink="false">https://labs.adaline.ai/p/ai-observability-and-evaluations</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 04 Mar 2026 13:02:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/45249d8c-38c8-486e-b392-6b83b50dfb23_2880x1620.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: Most LLM products don&#8217;t crash. They quietly leak trust, safety, and budget. Silent failure is the default failure mode, and most teams never see it coming. This is a practical guide for <strong>engineers</strong> and <strong>PMs</strong> shipping LLM features in production. 
You will leave with a concrete framework for <strong>instrumenting runs</strong>, <strong>versioning prompts</strong>, <strong>designing rubrics</strong>, <strong>catching silent failures</strong>, and <strong>switching models without fear</strong>. The moat is measured improvement, not prompt cleverness.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cPmF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 424w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 848w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 1272w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cPmF!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cPmF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 424w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 848w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 1272w, https://substackcdn.com/image/fetch/$s_!cPmF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d42c6dd-9d6a-4191-81c6-786ef374ee9b_1600x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Introduction</h1><div id="youtube2-Zj3Oer4pTDM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Zj3Oer4pTDM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Zj3Oer4pTDM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Why LLM Products Break Quietly Without Observability</h2><p>When I build LLM features, I do not worry about clever prompts first. What I worry about is that the team can&#8217;t see what the system is doing when it fails.</p><p>In this blog, I am making the case that <strong>reliability starts with visibility, not vibes</strong>.</p><p>The motivating question is simple. 
What is the equivalent of GitHub plus unit tests for an LLM application where the behavior is shaped by prompts and shifting context? Without that substrate, teams ship changes they <strong>cannot review</strong>, <strong>cannot regress</strong>, and <strong>cannot explain</strong>.</p><p>Silent failure becomes the default failure mode. The output looks coherent, the user seems satisfied, and the product metrics stay flat.</p><p>Underneath, the system may be wrong, unsafe, or quietly violating policy. That is why I treat <strong>observability</strong> and <strong>evaluations</strong> as the <strong>reliability layer</strong>. They turn unknown behavior into inspectable behavior, then measurable behavior.</p><p>Tool use raises the stakes. Once a model can act, a conversation becomes an execution surface. For instance, if the app can issue refunds, the &#8220;executable code&#8221; can be embedded in the chat thread itself.</p><p>The incident pattern is quite familiar.</p><p>A support bot approves a refund it should not, the customer is happy, and the mistake only shows up later as leaked margin and policy debt.</p><p>Key points I&#8217;m making:</p><ul><li><p>LLM apps need a review and regression discipline comparable to code.</p></li><li><p>Silent failure is more common than loud failure.</p></li><li><p>Tool calls convert text into real operational risk.</p></li><li><p>Observability plus evals create accountability for behavior.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Instrument every run with <strong>prompt version</strong>, <strong>context</strong>, <strong>tool calls</strong>, <strong>cost</strong>, and <strong>latency</strong>.</p></li><li><p>Sample real cases and curate a small starting dataset.</p></li><li><p>Run a small eval set on every change.</p></li><li><p>Monitor for drift and escalate failures into the dataset.</p></li></ul><p>Next, I will reframe prompts as business logic you have to govern.</p><h2>Prompts Are Executable Business 
Logic In Production</h2><p>When I say prompts matter, I do not mean prompt wording as a copywriting exercise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Wzr0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Wzr0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 424w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 848w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 1272w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Wzr0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png" width="1440" height="810" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:810,&quot;width&quot;:1440,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:44170,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/189392105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Wzr0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 424w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 848w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 1272w, https://substackcdn.com/image/fetch/$s_!Wzr0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfcc4966-7345-486d-a471-3f7432de7c15_1440x810.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The evolution of prompts from punch cards in the 1950s.</em> | <strong>Source</strong>: <a href="https://www.youtube.com/watch?v=Zj3Oer4pTDM">Stanford CS 224G: AI Observability &amp; Evaluations | Guest Lecture by Arsh Shah Dilbagi</a></figcaption></figure></div><p>I mean prompts as runtime logic that drives what the system does.</p><p>In production, a prompt is not configuration text. It becomes executable business logic as soon as the model is embedded inside a product that can read data and take action.</p><p>The program is not a single string. The program is the assembled runtime bundle that the model receives and acts on. If you do not model it as a bundle, you cannot reason about behavior. 
You end up debugging the wrong layer, then shipping fixes that only work on one happy-path input.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3yMS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3yMS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 424w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 848w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3yMS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2049201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/189392105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3yMS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 424w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 848w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!3yMS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2118d7b5-ee6c-49bd-b151-fd5f16a841fd_2880x1620.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Prompts are more than words; they define your business, product, logic, and much more.</em> </figcaption></figure></div><p>The runtime bundle includes:</p><ul><li><p>System and developer instructions.</p></li><li><p>Dynamic variables and session state.</p></li><li><p>Retrieved context.</p></li><li><p>User input, untrusted.</p></li><li><p>Tool permissions and safety constraints.</p></li><li><p>Runtime parameters, model version, and temperature.</p></li></ul><p>I plan for instruction conflicts because they occur in real systems. 
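</p><p>The runtime bundle above can be modeled as one versioned record, so any run is attributable to an exact assembly rather than to &#8220;the prompt.&#8221; A minimal sketch in Python; the field names and model identifier are illustrative assumptions, not a fixed schema:</p><pre><code class="language-python">
import hashlib
import json
from dataclasses import dataclass, field, asdict

# Illustrative sketch: one record per assembled run, so behavior can be
# attributed to an exact bundle. Field names are assumptions, not a standard.
@dataclass
class PromptBundle:
    template_version: str   # static instruction layer, versioned like code
    variables: dict         # dynamic session state
    retrieved_context: list # what the model actually saw
    user_input: str         # untrusted by default
    tool_permissions: list  # which actions this run may take
    model: str              # model identifier and version
    params: dict = field(default_factory=dict)  # temperature and friends

    def fingerprint(self):
        # Stable hash so two runs with identical bundles are comparable.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]
</code></pre><p>A fingerprint like this lets you compare runs bundle-for-bundle before blaming the model or the prompt text.</p><p>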
A user message can contain a directive that tries to override the instruction layer.</p><p>A retrieved document can contain hidden instructions that pull the model off task.</p><p>The model may still produce fluent output even when following the wrong instruction, which is why this failure is hard to notice without measurement. This maps directly to the <a href="https://arxiv.org/pdf/2306.05499">prompt-injection</a> risk category in standard LLM threat models.</p><p>Key points I&#8217;m making:</p><ul><li><p>The prompt bundle is the real program, not the UI chat box.</p></li><li><p>Untrusted inputs create instruction conflicts by default.</p></li><li><p>Tool permissions turn text into operational decisions.</p></li><li><p>Reliability requires governance, not prompt folklore.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Version prompts and treat edits like code changes.</p></li><li><p>Require diffs for every prompt revision.</p></li><li><p>Maintain rollback points for prompt and model versions.</p></li><li><p>Assign ownership per prompt surface area and workflow.</p></li></ul><p>If this is runtime logic, I need runtime traces.</p><h2>What Observability Means For LLM Systems</h2><p>I have a narrow definition of observability for LLM systems. I want to reconstruct a run the same way I would reconstruct a production incident in any other distributed system. <strong>If I only log the final output, I am guessing</strong>.</p><p>In practice, observability means end-to-end traceability across <strong>prompt assembly</strong>, <strong>retrieval</strong>, <strong>tool calls</strong>, and <strong>outputs</strong>.
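</p><p>In practice that means one structured record per run, with child spans for retrieval, tool calls, and model calls. A minimal sketch; the field names are illustrative assumptions, not a standard schema:</p><pre><code class="language-python">
import time
import uuid

# Illustrative trace sketch: one record per run, with child spans for
# retrieval, tool calls, and model calls. Field names are assumptions.
def new_trace(prompt_version, model, params):
    return {
        "trace_id": str(uuid.uuid4()),
        "prompt_version": prompt_version,  # ties behavior to an exact prompt
        "model": model,
        "params": params,                  # temperature and friends
        "spans": [],                       # retrieval, tool calls, model calls
        "started_at": time.time(),
    }

def add_span(trace, kind, payload, latency_ms, cost_usd=0.0):
    # kind: "retrieval", "tool_call", or "model_call"
    trace["spans"].append({
        "kind": kind,
        "payload": payload,        # arguments, retrieved docs, or output
        "latency_ms": latency_ms,  # localizes slow spans
        "cost_usd": cost_usd,      # catches budget regressions
    })
    return trace
</code></pre><p>With a record like this, a bad answer points you at the span that caused it instead of at the whole system.</p><p>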
That too, with enough context to explain why a specific response happened.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p8rV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p8rV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png" width="1320" height="1542" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1542,&quot;width&quot;:1320,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p8rV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 424w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 848w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1272w, https://substackcdn.com/image/fetch/$s_!p8rV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffba51954-2a3e-4b95-b7df-ed1167f95251_1320x1542.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>A complete observability trace in <a href="https://go.adaline.ai/dRpz6AY">Adaline</a>. </em></figcaption></figure></div><p>Readable traces matter because they reduce <strong>debugging time</strong>, <strong>make ownership clear</strong>, and <strong>let me iterate without shipping blind changes</strong>. When the trace is legible, a failure becomes a concrete artifact, not a debate.</p><p>Trace checklist:</p><ul><li><p><strong>Prompt template version,</strong> which is a static instruction. And <strong>assembled prompt</strong> which are variables, i.e., dynamic. 
The idea is to separate static instructions from variables.</p></li><li><p>User input, to capture the untrusted trigger.</p></li><li><p>Retrieved context payload plus retrieval metadata, to validate what the model actually saw.</p></li><li><p>Tool calls, arguments, responses, and side effects, to audit real actions.</p></li><li><p>Model identifier, version, and runtime parameters, to attribute behavior to runtime choices.</p></li><li><p>Token usage and estimated cost, to catch budget regressions.</p></li><li><p>Latency breakdown, to localize slow spans, including model server time.</p></li><li><p>Final output and structured output if present, to verify compliance and formatting.</p></li></ul><p>When I see a bad answer, the trace tells me where to look.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kzJr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kzJr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 424w, https://substackcdn.com/image/fetch/$s_!kzJr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 848w, https://substackcdn.com/image/fetch/$s_!kzJr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 1272w,
https://substackcdn.com/image/fetch/$s_!kzJr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kzJr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png" width="1272" height="1306" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1306,&quot;width&quot;:1272,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kzJr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 424w, https://substackcdn.com/image/fetch/$s_!kzJr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 848w, https://substackcdn.com/image/fetch/$s_!kzJr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 1272w, 
https://substackcdn.com/image/fetch/$s_!kzJr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87b18b30-652d-45e0-8bed-e16a73b2e8fa_1272x1306.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Here, the observability from <a href="https://go.adaline.ai/dRpz6AY">Adaline&#8217;s </a>dashboard data shows me that answer quality is 0.65, which isn&#8217;t good. The reason is poor retrieval quality. </em></figcaption></figure></div><p>If the retrieval returned irrelevant context, I fix the retrieval. 
If tool calls are wrong, I fix tool selection and constraints. If the same input flips behavior after a prompt edit, I fix the prompt structure, not the dataset.</p><p>Key points I&#8217;m making:</p><ul><li><p>Observability is traceability across the full run, not output logging.</p></li><li><p>Traces create accountability and speed up iteration.</p></li><li><p>Cost and latency are first-class failure signals.</p></li><li><p>Tool call visibility is non-negotiable once actions are in place.</p></li></ul><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;65fb56b4-619c-4846-b9ca-56d0b133813c&quot;,&quot;duration&quot;:null}"></div><p><em>Prompt versioning and deployment in <a href="https://go.adaline.ai/dRpz6AY">Adaline</a>.</em></p><p>How I&#8217;d implement this:</p><ul><li><p>Standardize a trace schema and enforce it for every run.</p></li><li><p>Store prompt versions and attach them to every trace.</p></li><li><p>Log retrieval inputs and outputs with stable identifiers.</p></li><li><p>Capture tool calls as structured events with side effects.</p></li><li><p>Add a weekly review of failed traces and recurring patterns.</p></li></ul><p>Once you can see runs, you can classify failures.</p><h2>The Silent Failure Taxonomy I Built Around</h2><p>Silent failures do not crash the system. They leak <strong>trust</strong>, <strong>safety</strong>, and <strong>budget</strong> a little at a time. In the lecture, I push on this because you can ship something that looks fine, then wake up to a week of damage that never showed up as an error page.</p><p>Generally, to tackle this, I built categories around these failures, because monitoring and evaluation need targets. A taxonomy keeps the team from treating every issue as a prompt problem.</p><p>It also keeps alerts honest. I believe you can only alert on what you can name and measure.</p><p><strong>Being hyperspecific about the details is the key here.</strong></p><p>Taxonomy I use in practice:</p><ul><li><p><strong>Policy failures that look like success</strong>: Signals to monitor include <strong>tool call policy violations</strong> and <strong>missing approvals</strong>.</p></li><li><p><strong>Security failures, prompt injection, </strong>and<strong> instruction conflicts</strong>: Signals to monitor include <strong>override patterns</strong> and <strong>tool intent </strong>that contradicts constraints.</p></li><li><p><strong>Cost </strong>and<strong> latency failures, token blowups, loops, OCR weirdness:</strong> Signals to monitor include <strong>token spikes</strong>, <strong>repetition</strong>, and <strong>timeouts</strong>.</p></li><li><p><strong>Correctness failures masked by fluency:</strong> Signals to monitor include <strong>missing citations</strong>, <strong>schema drift</strong>, and <strong>low
agreement</strong> with the provided sources.</p></li></ul><p>The incident I plan for is boring, which is the point.</p><p>We switched to an OCR workflow, everything looked normal, then costs spiked. The model started appending long runs of spaces, producing around 100,000 characters when 5,000 would have been enough.</p><p>Now, customers did not notice at first. But the trace made it obvious, so we tightened the prompt and added a cost guardrail.</p><p>Key points I&#8217;m making:</p><ul><li><p>Failures show up as drift, not downtime, so alerts must be concrete.</p></li><li><p>Security and cost issues can hide behind good-looking text.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Map each category to a small set of measurable signals.</p></li><li><p>Alert on deltas, not absolutes, for cost and latency.</p></li><li><p>Triage from traces, then promote repeats into eval datasets.</p></li><li><p>Add a post-incident rule that prevents the same class from returning.</p></li></ul><p>To evaluate any of this, I need representative cases.</p><h2>Evaluations Start With Sampling The Real Distribution</h2><p>When I watch teams build LLM features, the demo is rarely the hard part. The demo is one clean input, one clean output, one clean conclusion.</p><p>Production is a distribution, and the distribution is where behavior fractures.</p><p>A demo lies because it compresses variability into a single scenario. It hides <strong>messy inputs</strong>, <strong>conflicting instructions</strong>, and <strong>long-tail formats</strong>. It also hides <strong>drift</strong>.</p><p>A prompt can look stable on five hand-picked examples, then break on day three because a new user arrives with a new intent. This is a very common issue.</p><p>So, how to tackle it?</p><p>I start evaluations by sampling the real distribution.</p><p>My baseline is simple. 
I take about 20 representative cases that look like what I expect to see in production, I run them, and I ship.</p><p>Then I expand the set using the evidence provided by production.</p><p>Observability supplies the raw material.</p><p>Traces become cases, cases become datasets, datasets become evaluations.</p><p><a href="https://developers.openai.com/api/docs/guides/evaluation-best-practices/">OpenAI&#8217;s evaluation guidance</a> makes the same point. Mix production data with expert-curated cases, keep adding edge cases, and keep the set growing as you learn.</p><p>Key points I&#8217;m making:</p><ul><li><p>One clean example hides the distribution.</p></li><li><p>A small representative set beats intuition.</p></li><li><p>Traces are the source of evaluation data.</p></li><li><p>Datasets must evolve with customers and inputs.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Seed the first dataset from traces whenever possible.</p></li><li><p>Include messy and adversarial inputs in the first 20.</p></li><li><p>Add failures and near failures every week.</p></li><li><p>Refresh the dataset when the customer types or document formats change.</p></li><li><p>Tag cases by intent and input modality for coverage checks.</p></li></ul><p>I have seen a new customer type break assumptions overnight. The trace showed the same prompt behaving differently because the inputs shifted, not because the model changed. The dataset made that visible fast, then the fix became measurable.</p><p>Now I can talk about evals as a feedback loop.</p><h2>Evaluation Is A Feedback Loop, Not A Unit Test Suite</h2><p>I have a strong view on evals because I have watched good systems fail for boring reasons. A prompt change sounds better to a human. But production makes it worse.</p><p>So, I am making the explicit claim that evals are feedback loops, not deterministic unit tests.</p><p>Essentially, their job is to keep me shipping while protecting the downside. 
I run them <strong>to catch regressions when I edit prompts</strong>, <strong>to switch models without fear</strong>, and <strong>to detect drift once the system is live</strong>.</p><p>Perfect coverage is impossible because users will always do something you did not anticipate.</p><p>That is fine.</p><p>The goal is not perfection.</p><p>The goal is fast learning with controlled risk.</p><p>The starter eval set I begin with:</p><ul><li><p>Schema and format adherence, so outputs stay parseable.</p></li><li><p>Tool and policy compliance to keep actions permitted.</p></li><li><p>Citation or reference presence where required, so answers stay auditable.</p></li><li><p>Refusal correctness for disallowed requests, so boundaries hold.</p></li><li><p>Groundedness to the provided context, so answers do not drift from inputs.</p></li><li><p>Cost gate or latency gate, so the product stays within constraints.</p></li><li><p>Retrieval sanity check, so the model is not reasoning on garbage context.</p></li></ul><p>Here is a mini example from real work.</p><p>I have seen a small prompt change that helped one slice of cases and failed another, like drug A versus drug B.</p><p>The new prompt read cleaner, then broke the distribution. A basic eval suite made the regression visible before it became a support incident. 
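</p><p>A minimal sketch of how a few deterministic checks from that starter set catch this kind of regression. The output format, field names, and token budget below are illustrative assumptions, not an actual schema from this workflow:</p>

```python
import json

# Hypothetical starter checks; each is deterministic and returns pass/fail.
def check_schema(output: str) -> bool:
    """Output must be valid JSON with the required fields."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return {"answer", "citations"}.issubset(data.keys())

def check_citations(output: str) -> bool:
    """Answers must reference at least one provided source."""
    try:
        return len(json.loads(output).get("citations", [])) > 0
    except json.JSONDecodeError:
        return False

def check_cost(tokens_used: int, budget: int = 2000) -> bool:
    """Hard cost gate: the run must stay within the token budget."""
    return budget >= tokens_used

def pass_rate(cases: list) -> float:
    """Fraction of cases that pass every check."""
    passed = [
        check_schema(c["output"]) and check_citations(c["output"]) and check_cost(c["tokens"])
        for c in cases
    ]
    return sum(passed) / len(passed)

# Stubbed outputs from two prompt versions on the same dataset.
prompt_a = [{"output": '{"answer": "...", "citations": ["doc1"]}', "tokens": 900}]
prompt_b = [{"output": "Sure! Here is the answer...", "tokens": 900}]  # schema drift

assert pass_rate(prompt_a) == 1.0
assert pass_rate(prompt_b) == 0.0  # the regression is visible before rollout
```

<p>The same checks run unchanged against any candidate prompt or model, which is what turns the suite into a regression gate.</p><p>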
This matches the eval-driven workflow OpenAI recommends, especially the practice of collecting production-like data and evaluating continuously.</p><p>Key points I&#8217;m making:</p><ul><li><p>Evals exist to learn quickly, not to certify perfection.</p></li><li><p>They protect model switches, prompt edits, and production drift.</p></li><li><p>Coverage grows from failures, not imagination.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Run the eval suite on every prompt or model change.</p></li><li><p>Label failures as prompt regression, retrieval regression, rubric mismatch, or distribution shift.</p></li><li><p>Fix the correct layer, then add the failing case to the dataset.</p></li><li><p>Track cost and latency gates as hard constraints, not nice metrics.</p></li></ul><p>Evals only work if I define good as outcomes.</p><h2>How I Design Rubrics From Product Outcomes</h2><p>I design rubrics the same way I design product requirements. I start from what the user must be able to do next. If the rubric cannot predict the next action, it is taste, not engineering.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;b9eeb38c-332e-4a75-9115-6aac5dcd2869&quot;,&quot;duration&quot;:null}"></div><p><em>Evaluating prompts using the LLM-as-a-judge metric with custom rubrics in <a href="https://go.adaline.ai/dRpz6AY">Adaline</a>.</em></p><p>Outcome-first grading means I translate the user goal into observable checks. A good rubric is specific about required fields, hard constraints, grounding to provided inputs, and safe tool behavior.</p><p>In high-stakes workflows, I do not pretend engineers can invent correctness. In my experience, the people who own prompts and the people who write rubrics are often domain experts. People like clinicians and finance specialists, because they know what the output must contain and what it must never do.</p><p>Here is what this looks like in practice. 
A micro rubric for a support response:</p><ul><li><p>It acknowledges the user request in one sentence without adding new claims.</p></li><li><p>It applies the correct policy constraint for eligibility and required approvals.</p></li><li><p>It uses the provided account context and does not invent missing details.</p></li><li><p>It selects the correct tool action only when permitted and necessary.</p></li><li><p>It ends with the next step the user should take, if any.</p></li></ul><p>Rubrics drift because products drift. You add customers, new input formats arrive, and the distribution changes.</p><p>When a system works for months and rubrics suddenly fail, I treat that as a signal that the rubric may need to change, not just the prompt.</p><p>Clear, detailed rubrics also make automated grading more reliable. This is why I write them like executable criteria rather than guidelines.</p><p>Key points I&#8217;m making:</p><ul><li><p>I define good as a usable next step for the user.</p></li><li><p>Rubrics encode constraints, not stylistic preferences.</p></li><li><p>Domain experts define correctness in high-stakes domains.</p></li><li><p>Rubrics evolve with the input distribution.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Assign rubric authorship to the domain owner for the workflow.</p></li><li><p>Review rubrics weekly using fresh failure cases from traces.</p></li><li><p>Update the rubric first when the distribution changes, then update the prompt.</p></li><li><p>Keep a change log so rubric edits are auditable.</p></li></ul><p>Next, I will show how I scale these checks with model-based graders.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-observability-and-evaluations?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-observability-and-evaluations?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/ai-observability-and-evaluations?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>LLM As Judge, But Only Under Constraints</h2><p>I use model-based judges or <a href="https://labs.adaline.ai/p/llm-as-a-judge">LLM-as-a-judge</a>, because some checks do not reduce cleanly to code. Tone, completeness, and policy reasoning often need language understanding. A judge can also scale review across thousands of traces without turning the team into a labeling factory.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bghn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bghn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 424w, https://substackcdn.com/image/fetch/$s_!Bghn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 848w, 
https://substackcdn.com/image/fetch/$s_!Bghn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 1272w, https://substackcdn.com/image/fetch/$s_!Bghn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Bghn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png" width="1456" height="564" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:564,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Bghn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 424w, https://substackcdn.com/image/fetch/$s_!Bghn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 848w, 
https://substackcdn.com/image/fetch/$s_!Bghn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 1272w, https://substackcdn.com/image/fetch/$s_!Bghn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F278edbc1-6c06-4f7d-93ba-f09c375f0b44_1600x620.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>A working illustration of LLM-as-a-judge.</em> | <strong>Source</strong>: <a href="https://arxiv.org/pdf/2411.15594">A Survey on 
LLM-as-a-Judge</a></p><p>My rule is strict. I prefer pass/fail or a small set of named categories. I avoid numeric scoring. In the lecture I gave, I called this out as the easiest way to cripple the entire system because confidence intervals and arbitrary scales do not stay consistent across runs.</p><p>When I need nuance, I use semantic labels that carry meaning, not numbers that float.</p><p>I ask for reasoning when the verdict depends on a rubric with multiple clauses. I want a short justification tied to rubric items, then the verdict. <strong>For everything that should be deterministic, I do not use a judge at all</strong>.</p><p>I validate schemas with code.</p><p>I gate tool calls with policy checks.</p><p>I block banned actions and formatting violations before any judge runs.</p><p><a href="https://platform.openai.com/docs/guides/evaluation-best-practices?utm_source=chatgpt.com">OpenAI</a> also recommends structuring evaluations around criteria and using pass/fail or comparisons to improve reliability in judge workflows.</p><p>Key points I&#8217;m making:</p><ul><li><p>Judges help with nuance, not with mechanics.</p></li><li><p>Binary beats numeric for stability.</p></li><li><p>Reasoning improves alignment with the rubric.</p></li><li><p>Deterministic constraints should stay deterministic.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Provide a rubric with clear pass/fail examples.</p></li><li><p>Provide the full context, including retrieved snippets and the tool plan.</p></li><li><p>Require a short, grounded reason.</p></li><li><p>Output a verdict as pass or fail, or a named category.</p></li></ul><p>Once judging is stable, I run it continuously in production.</p><h2>Continuous Evaluation In Production Is Where Reliability Compounds</h2><p><strong>Continuous evaluation</strong> is where reliability compounds. 
Monitoring is the keystone because it captures the real distribution, including the unknown unknowns, and turns them into something the team can act on.</p><p>I define continuous evaluation as lightweight checks applied to production traces. I do not wait for support tickets to tell me something drifted. I want the system to tell me first. That is the difference between a small regression and a week of silent damage.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aeeB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aeeB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 424w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 848w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!aeeB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/decbc62d-d646-43f8-86b5-192193f19482_2880x1620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3645992,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/189392105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aeeB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 424w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 848w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!aeeB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdecbc62d-d646-43f8-86b5-192193f19482_2880x1620.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em><a href="https://go.adaline.ai/dRpz6AY">Adaline</a> allows you to continuously run evals in production. This acts like a feedback mechanism rather than a static unit test. </em></figcaption></figure></div><p>I describe running simple checks on every log and getting notified when a silent failure occurs before customers start getting upset. 
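</p><p>A minimal sketch of these per-log checks with a delta-based alert. The trace fields, thresholds, and window below are illustrative assumptions, not a real tracing schema:</p>

```python
# Lightweight checks applied to one production trace.
# Field names and thresholds are hypothetical examples.
def check_trace(trace: dict) -> list:
    """Return the names of the checks this trace failed."""
    failures = []
    if trace["tokens"] > 3 * trace["baseline_tokens"]:
        failures.append("token_spike")          # e.g. the OCR whitespace blowup
    if trace["tool_calls_denied"] > 0:
        failures.append("policy_violation")
    if not trace["retrieved_chunks"]:
        failures.append("empty_retrieval")
    return failures

def delta_alert(pass_rates: list, window: int = 7, drop: float = 0.1) -> bool:
    """Alert on deltas, not absolutes: fire when today falls well below the recent average."""
    recent = pass_rates[-(window + 1):-1]
    baseline = sum(recent) / len(recent)
    return baseline - drop > pass_rates[-1]

# A trace that looks like the OCR incident: output ballooned far past baseline.
trace = {"tokens": 100_000, "baseline_tokens": 5_000,
         "tool_calls_denied": 0, "retrieved_chunks": ["chunk-1"]}
assert check_trace(trace) == ["token_spike"]

# Daily rubric pass rates: stable for a week, then a sudden drop.
history = [0.95, 0.94, 0.96, 0.95, 0.94, 0.95, 0.96, 0.80]
assert delta_alert(history) is True
```

<p>Each failed check becomes a labeled trace in the review queue, which is how production failures turn into new eval cases.</p><p>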
<a href="https://platform.openai.com/docs/guides/evaluation-best-practices?utm_source=chatgpt.com">OpenAI</a> makes the same recommendation with continuous evaluation tied to logs and ongoing case collection.</p><p>Alerts I treat as first class:</p><ul><li><p>Pass-rate drops on a key rubric.</p></li><li><p>Token or cost spikes.</p></li><li><p>Tool call anomalies or policy violations.</p></li><li><p>Repeatedly empty or low-quality retrieval.</p></li><li><p>Latency regressions by model or route.</p></li></ul><p>Key points I&#8217;m making:</p><ul><li><p>Monitoring shows the true distribution, not the demo distribution.</p></li><li><p>Continuous eval catches drift before users notice it.</p></li><li><p>Reliability improves when failures are made reusable as test cases.</p></li><li><p>Cost and latency are behavior signals, not only infra metrics.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Monitor traces and sample failures daily.</p></li><li><p>Convert failures into dataset entries with labels and notes.</p></li><li><p>Update rubrics when the distribution changes.</p></li><li><p>Re-run evals on every prompt or model change.</p></li></ul><p>This is what finally makes model switching safe.</p><h2>The Payoff: Model Switching Confidence And A Minimal System To Start This Week</h2><p>I keep seeing the same pattern, and it frustrates me. 
Teams pay for access to better models, yet they stay on an old one.</p><p>They are not blocked by procurement. They are blocked by fear.</p><p>The fear is rational.</p><p>If I change the model, something might break, and I will not know until production tells me.</p><p>I call out teams still running older models because they have no way to predict breakage or to validate upgrades with confidence.</p><p>That is a reliability problem, not a model selection problem.</p><p>The fix is not a perfect test suite.</p><p>The fix is a minimal system that combines <strong>evaluations</strong> and <strong>monitoring</strong>.</p><p>Evaluations give me a regression signal on known cases.</p><p>Monitoring captures the true distribution and feeds new cases back into the eval set, so the system gets safer over time.</p><p><a href="https://developers.openai.com/api/docs/guides/evaluation-best-practices">OpenAI</a> frames the same workflow as eval-driven development with continuous evaluation and logging so you can grow your eval set from real traffic.</p><p>Key points I&#8217;m making:</p><ul><li><p>Model upgrades feel risky when behavior is not measurable.</p></li><li><p>Monitoring plus evals turns upgrades into controlled changes.</p></li><li><p>Silent failures show up as drift in cost, policy, and quality.</p></li><li><p>A small, disciplined loop beats a large, vague framework.</p></li></ul><p>How I&#8217;d implement this:</p><ul><li><p>Fixed regression dataset for the core workflows that must never regress.</p></li><li><p>Rolling dataset from recent traces that reflects current traffic.</p></li><li><p>Side-by-side comparisons for model and prompt changes before rollout.</p></li><li><p>Instrument traces.</p></li><li><p>Curate 20 cases.</p></li><li><p>Implement 4 to 7 evals.</p></li><li><p>Add 2 to 3 alerts.</p></li><li><p>Weekly review and dataset refresh.</p></li></ul><p>If I had to boil this down: the moat is measured improvement through observability and 
evaluation, not prompt cleverness.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[In the Age of Agentic Engineering, Context Is Your Real Product ]]></title><description><![CDATA[What every product leader needs to understand about shipping AI that actually works]]></description><link>https://labs.adaline.ai/p/why-ai-products-break-in-production-context-engineering</link><guid isPermaLink="false">https://labs.adaline.ai/p/why-ai-products-break-in-production-context-engineering</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 28 Feb 2026 01:00:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/db70e3a4-570f-4240-bb4b-82a28a674656_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR: </strong>AI products break in production not because the model fails, but because the context around it was never designed. This blog is for product leaders and engineers building AI features who keep shipping demos that fall apart under real users. 
What you&#8217;ll take away is practical: <strong>a shared vocabulary for context failures</strong>, <strong>three mental models for designing around them</strong>, and <strong>pre-launch stress test advice</strong>. The model is not your product. The context you give it is.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_t7w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_t7w!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:292511,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/189336701?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_t7w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!_t7w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dce3555-1aa1-4e12-9c79-a806da245770_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Demo Always Works</h2><p>A product team spends three weeks building an AI customer support agent. Internal testing goes well. The model handles edge cases, stays on topic, and generates responses that feel genuinely helpful. </p><p>Lastly, the team ships it.</p><p>Two weeks later, the support queue fills with complaints. The agent is confidently answering questions users never fully asked. It assigns ownership to problems nobody claimed. Users stop trusting the product entirely.</p><p>What happened?<br>Nobody changed the model. 
But what broke had never been examined in the first place.</p><p><a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily Nika</a>, a former AI Product Lead at Google and Meta, watched the same sequence repeat across teams: an AI feature that worked beautifully in controlled conditions broke in production. </p><p>Why?<br>Because no one went looking for the failure modes before launch. They were visible the whole time, to anyone who knew where to look. </p><p><a href="https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/">Simon Willison</a> describes the same gap from the engineering side: the bottleneck in AI development is no longer writing code. It is giving the agent the right environment to produce output that actually works.</p><p>That environment is called context. Everything that follows explains why it is your real product.</p><h2>What Agentic Engineering Actually Is</h2><p>Agentic engineering is the practice of building software using coding agents &#8212; tools like Claude Code, Cursor, and OpenAI Codex &#8212; where the agent generates code, executes it, runs tests, and iterates independently between turns. The human sets objectives and maintains oversight. The agent acts.</p><p><a href="https://simonwillison.net/2026/Feb/23/agentic-engineering-patterns/">Simon Willison</a> distinguishes this sharply from vibe coding, where you prompt, accept, and hope. </p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/karpathy/status/1886192184808149383?lang=en&quot;,&quot;full_text&quot;:&quot;There's a new kind of coding I call \&quot;vibe coding\&quot;, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. 
Also I just talk to Composer with SuperWhisper&quot;,&quot;username&quot;:&quot;karpathy&quot;,&quot;name&quot;:&quot;Andrej Karpathy&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1296667294148382721/9Pr6XrPB_normal.jpg&quot;,&quot;date&quot;:&quot;2025-02-02T23:17:15.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:1424,&quot;retweet_count&quot;:3606,&quot;like_count&quot;:33433,&quot;impression_count&quot;:6804912,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p><a href="https://addyosmani.com/blog/agentic-engineering/">Addy Osmani</a> puts the operational difference plainly: the single biggest differentiator is testing. A solid test suite lets an agent iterate until it passes. Without one, it declares broken code done.</p><p>That distinction reveals something structural. </p><p>The test is not just a quality check. It is a context mechanism &#8212; a precise description of what success looks like before the agent begins. <a href="https://simonwillison.net/guides/agentic-engineering-patterns/red-green-tdd/">Willison&#8217;s Red/Green TDD pattern</a> makes this explicit:</p><ul><li><p>Write the test first and confirm it fails.</p></li><li><p>Let the agent implement until the test passes.</p></li><li><p>The test defines the context. The agent operates within it.</p></li></ul><p>Practitioners who work this way consistently arrive at the same conclusion: the model is rarely the bottleneck. 
The bottleneck is what the model is given to work with: the context.</p><h2>The Context Problem: What Breaks AI Products</h2><p>A model does not experience ambiguity the way a human does. </p><p>A human encountering a half-formed request pauses or asks for clarification. </p><p>An LLM, on the other hand, fills the gap. </p><p>It takes whatever is in its context window, finds the most plausible completion, and returns output that looks finished. The problem is not that the model is wrong. <strong>The problem is that it does not know it is wrong.</strong></p><p><a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily Nika</a> calls this the failure signature: the pattern of breakdowns a feature reliably falls into when real users arrive. </p><p>Every AI feature has one. The teams that find it before launch deliberately push the model into its failure modes during development. 
The teams that do not find it discover it through support tickets.</p><p>Either way, the failure signature takes three distinct shapes:</p><ol><li><p><strong>Context overload</strong> occurs when the model receives more information than it can usefully process. Noise crowds out the signal, and the model treats everything with equal weight. A meeting notes tool fed an entire unstructured transcript will summarize the loudest voices, not the most important decisions.</p></li><li><p><strong>Context gaps</strong> occur when the model lacks the information it needs and fills the absence with inference drawn from its probability distribution. The customer support agent that confidently answers &#8220;Is this good?&#8221; without asking what &#8220;this&#8221; refers to is not malfunctioning. It is doing exactly what a model does when the context does not tell it what it does not know.</p></li><li><p><strong>Context misalignment</strong> occurs when the model has information, but the wrong framing for the task. Marily&#8217;s Slack thread demonstration is precise here: the model was not missing content; it was missing the framing that distinguished decisions from noise. It imposed its own structure and returned a fabricated roadmap that looked authoritative.</p></li></ol><p>These are not model failures. They are design failures. Tal Raviv and Aman Khan say support tickets show a pattern of AI &#8220;forgetting&#8221; facts during sessions. This issue is called <strong>context rot</strong>. </p><p>It refers to the steady loss of reliable behavior as the context window fills up. As this happens, the model struggles to remember earlier instructions. That is not a bug to file. 
It is a product experience to design around.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:186226252,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/how-to-build-ai-product-sense&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;How to build AI product sense&quot;,&quot;truncated_body_text&quot;:&quot;&#128075; Hey there, I&#8217;m Lenny. Each week, I answer reader questions about building product, driving growth, and accelerating your career. For more: Lenny&#8217;s Podcast | How I AI | Lennybot | Lenny&#8217;s Reads | Favorite AI and PM courses | Favorite public speaking course&quot;,&quot;date&quot;:&quot;2026-02-03T13:45:58.303Z&quot;,&quot;like_count&quot;:506,&quot;comment_count&quot;:37,&quot;bylines&quot;:[{&quot;id&quot;:3269279,&quot;name&quot;:&quot;Tal Raviv&quot;,&quot;handle&quot;:&quot;talsraviv&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Sp2z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ebe7e6-cd97-479f-95a8-c19fc3ae402c_664x664.jpeg&quot;,&quot;bio&quot;:&quot;Early @ Patreon, Riverside, Wix, AppsFlyer, DuckDuckGo 
&quot;,&quot;profile_set_up_at&quot;:&quot;2022-05-17T06:00:46.518Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-09-11T16:24:55.118Z&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[10845],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:3340514,&quot;primaryPublicationName&quot;:&quot;Build AI product sense by using AI agents for real work&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.talraviv.co&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.talraviv.co/subscribe?&quot;},{&quot;id&quot;:128655487,&quot;name&quot;:&quot;Aman Khan&quot;,&quot;handle&quot;:&quot;amankhan1&quot;,&quot;previous_name&quot;:&quot;Aman&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!XLkV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2babe551-c5b2-4c0f-8c1a-d493518832d5_1203x1203.jpeg&quot;,&quot;bio&quot;:&quot;AI Product Guy&quot;,&quot;profile_set_up_at&quot;:&quot;2024-04-24T15:58:07.389Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-11-20T00:15:53.956Z&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[335953],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:2561806,&quot;primaryPublicationName&quot;:&quot;AI Product 
Playbook&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://amankhan1.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://amankhan1.substack.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/how-to-build-ai-product-sense?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How to build AI product sense</div></div><div class="embedded-post-body">&#128075; Hey there, I&#8217;m Lenny. Each week, I answer reader questions about building product, driving growth, and accelerating your career. For more: Lenny&#8217;s Podcast | How I AI | Lennybot | Lenny&#8217;s Reads | Favorite AI and PM courses | Favorite public speaking course&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">3 months ago &#183; 506 likes &#183; 37 comments &#183; Tal Raviv and Aman Khan</div></a></div><h2>Context Engineering Is Product Design</h2><p>Context engineering is the practice of carefully shaping what an agent observes at every step: its information environment. Done well, the agent gets what it needs to <strong>think</strong>, <strong>act</strong>, and <strong>recover</strong>, rather than producing confident nonsense when things get hard. It is not prompt writing. 
Prompt writing is a sentence. Context engineering is an architecture.</p><p>That architecture works in three layers. Product leaders are making choices about these layers, even if they don&#8217;t view them as context decisions.</p><ul><li><p><strong>System instructions</strong> are the rules, constraints, and behavioral boundaries. These tell the model how to operate before any user input arrives. <a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily Nika</a> describes adding a single instruction to a Slack summarization tool: only assign an owner if someone explicitly volunteers. This immediately eliminated the product&#8217;s biggest trust issue. The fix was not a different model. It was a missing context decision.</p></li><li><p><strong>Retrieved knowledge</strong> covers what relevant information is pulled into the model&#8217;s context at query time, how much, and how it is structured before the model sees it. <a href="https://www.lennysnewsletter.com/p/how-to-build-ai-product-sense">Tal Raviv and Aman Khan</a> observe that output quality improves not because the model improves but because the context improves. The model is constant. What changes is what it sees.</p></li><li><p><strong>Memory and history</strong> determine what the agent retains across turns and between sessions. When an agent loses track of an earlier instruction mid-session, the user experiences it as the product breaking. It is a context design failure, not a model limitation.</p></li></ul><p>These three layers map directly onto decisions made during every AI feature build &#8212; data access scope, system prompt structure, and when to ask a clarifying question rather than let the model infer. </p><p><a href="https://addyosmani.com/blog/agentic-engineering/">Addy Osmani</a> captures the underlying principle: <strong>agentic engineering rewards people who know what good output looks like</strong>. 
Because they can design the environment that produces it.</p><p>Agentic engineers call this context engineering. Product leaders have always called pieces of it feature scoping, guardrail definition, and UX constraints. The vocabulary has been different. The decisions have been the same.</p><h2>Three Mental Models for Product Leaders</h2><p>Understanding context as the primary determinant of AI product quality changes the questions you ask at every stage of development. These three mental models make that change practical.</p><p><strong>Ask what the model sees before asking what it can do.</strong></p><p>The right first question is not which model handles this task best. It is what the model will actually see when a real user triggers this feature in production. That means:</p><ul><li><p>A real query.</p></li><li><p>Arriving with partial context.</p></li><li><p>Unstated assumptions.</p></li><li><p>The intent the model will have to infer. </p></li></ul><p><a href="https://www.lennysnewsletter.com/p/how-to-build-ai-product-sense">Tal Raviv and Aman Khan</a> describe this as the core of AI product sense: anticipating what will be impactful and feasible requires understanding what the model sees at the moment it acts, not what it can do in a controlled demo.</p><p><strong>Define Minimum Viable Quality (MVQ) before you define your feature.</strong></p><p><a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily Nika</a> establishes three thresholds every product leader should set before development begins:</p><ul><li><p><strong>Acceptable bar</strong>: where the feature performs well enough for real users under typical conditions.</p></li><li><p><strong>Delight bar</strong>: where correction rates drop and the feature earns trust through consistency.</p></li><li><p><strong>Do-not-ship bar</strong>: the failure rate at which the feature actively damages user trust.</p></li></ul><p>MVQ also requires an 
honest cost envelope. For instance, a feature at $0.30 per user per month that drives retention is a straightforward decision. The same feature at $5 per user per month with unclear impact is a business problem that no amount of engineering will solve.</p><p><strong>Build the adversarial ritual into your launch process.</strong></p><p>Before any AI feature ships, push it into the conditions that will break it. <a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily</a> runs <strong>three stress tests</strong> in under fifteen minutes: </p><ol><li><p>Feed it chaotic input. </p></li><li><p>Give it an ambiguous request.</p></li><li><p>Assign it something deceptively hard. </p></li></ol><p>What comes back is not a pass or fail. It is a product requirement &#8212; a missing constraint, an underspecified instruction, a clarifying question the UX should ask instead of letting the model infer.</p><h2>Closing</h2><p>Return to the team whose AI broke in production. They were not asking the wrong questions about their model. They were asking the wrong question entirely.</p><p>The question was never &#8220;what can our model do?&#8221; It was always &#8220;what does our model see?&#8221;</p><p>That shift, from capability to context, is what agentic engineering worked out through practice rather than theory. Practitioners hit the walls, inspected the tool calls, watched the context window fill, and arrived at the same conclusion repeatedly: the model was not the problem. </p><p>The environment the model was operating in was.</p><p><a href="https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/">Simon Willison</a>, <a href="https://www.lennysnewsletter.com/p/building-ai-product-sense-part-2">Marily Nika</a>, and <a href="https://www.lennysnewsletter.com/p/how-to-build-ai-product-sense">Tal Raviv and Aman Khan</a> each arrived here from different directions. The conclusion is the same.</p><p>The model is not your product. 
The context you give it is.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Skills No One Is Teaching Product Managers (But Should Be)]]></title><description><![CDATA[You have the tools -- Claude Code and GPT-5.3 -- but here's the skill layer that makes them actually work.]]></description><link>https://labs.adaline.ai/p/ai-skills-no-one-is-teaching</link><guid isPermaLink="false">https://labs.adaline.ai/p/ai-skills-no-one-is-teaching</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 21 Feb 2026 01:01:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/00c630f9-55d3-4988-a869-102001db10c8_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> Most PMs use AI daily but lack the judgment to use it well. This leads to decisions built on fabricated evidence. This article breaks down <strong>8 practical skills</strong> (such as&nbsp;<strong>context loading</strong>,&nbsp;<strong>verification</strong>, and&nbsp;<strong>sycophancy-aware prompting</strong>) that distinguish reliable AI analysis from confident-sounding noise. 
Essential reading for product managers who want their AI-assisted recommendations to actually hold up under scrutiny. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JAIO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JAIO!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:337343,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/188604743?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JAIO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!JAIO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef886048-09ac-4673-86ca-7a397c6c75ca_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Everyone Has the Tool. Almost Nobody Has the Skill</h2><p>98% of product managers use AI daily, but only 39% received job-specific training on how to use it well. Or maybe that 39% taught themselves, trying methods, reading papers, and watching podcasts to pick up best practices.</p><p>Plenty of podcasts and resources can help you hone AI for a specific workflow. </p><p>And that gap does not show up in adoption numbers. It shows up three months later, when a decision built on fabricated evidence collapses in a stakeholder review or audit.</p><p>Claude, ChatGPT, GPT-5.2, Gemini 3.1, Claude Code. The interfaces are everywhere. Every PM at a mid-size company has at least one open on their machine right now. 
Access was never the bottleneck. Judgment is.</p><blockquote><p>Caitlin Sullivan ran the same customer transcripts through two models and received two completely different narratives. </p></blockquote><p>Both were confident. Both cited participants. One cherry-picked three quotes and leapt to a recommendation. The other challenged the framing, segmented users by actual need, and flagged pricing risk with verifiable timestamps.</p><p>Same data. Same tools. Different operators.</p><p>Claude Code can run analytical scripts without manual input. GPT-5 drafts strategy memos faster than most human first drafts. Gemini 3.1 synthesizes research across dozens of sources in under a minute. These are real capabilities.</p><div id="youtube2-We7BZVKbCVw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;We7BZVKbCVw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/We7BZVKbCVw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>But the output quality is decided before the model runs. It is decided by <strong>how well the PM shaped the input</strong>, <strong>loaded the context</strong>, and <strong>built the habit of verifying what came back</strong>.</p><p>That is the skill layer. And almost no one is teaching it.</p><h2>Why AI Analysis Fails PMs in Silence</h2><p>The tricky thing about AI is that it fails by simply giving the wrong output. </p><p>AI does not fail loudly. </p><p>There is no error message. </p><p>No red flag. </p><p>The output arrives clean, structured, and confident, which is exactly what makes it dangerous.</p><p><a href="https://www.lennysnewsletter.com/p/how-to-do-ai-analysis-you-can-actually">Caitlin Sullivan</a> describes it precisely in Lenny&#8217;s Newsletter. </p><p>&#8220;These mistakes are invisible until a stakeholder asks a question you can&#8217;t answer, or a decision falls apart three months later, or you realize the &#8216;customer evidence&#8217; behind a major investment actually had enormous holes.&#8221; </p><p>That is not a model failure. It is a skill failure.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:187779404,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/how-to-do-ai-analysis-you-can-actually&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;How to do AI analysis you can actually trust&quot;,&quot;truncated_body_text&quot;:&quot;&#128075; Hey there, I&#8217;m Lenny. Each week, I answer reader questions about building product, driving growth, and accelerating your career. 
For more: Lenny&#8217;s Podcast | Lennybot | How I AI | My favorite AI/PM courses, public speaking course, and interview prep copilot&quot;,&quot;date&quot;:&quot;2026-02-17T13:45:26.090Z&quot;,&quot;like_count&quot;:231,&quot;comment_count&quot;:3,&quot;bylines&quot;:[],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/how-to-do-ai-analysis-you-can-actually?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How to do AI analysis you can actually trust</div></div><div class="embedded-post-body">&#128075; Hey there, I&#8217;m Lenny. Each week, I answer reader questions about building product, driving growth, and accelerating your career. For more: Lenny&#8217;s Podcast | Lennybot | How I AI | My favorite AI/PM courses, public speaking course, and interview prep copilot&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 months ago &#183; 231 likes &#183; 3 comments</div></a></div><p>Three things make AI analysis silently unreliable for product managers:</p><ul><li><p>The output always looks finished. Claude Sonnet 4.6, ChatGPT, and Gemini 3.1 do not signal uncertainty the way a junior analyst would. 
They return polished prose with participant citations, timestamps, and confident recommendations, regardless of whether the underlying evidence supports any of it. <strong>A well-formatted hallucination and a well-grounded insight look identical on the screen</strong>.</p></li><li><p>Pattern-matching gets mistaken for reasoning. Apple&#8217;s <a href="https://arxiv.org/pdf/2410.05229">GSM-Symbolic research</a> found that changing only variable names in a math problem caused LLM performance to drop by up to 10%. The model was not reasoning through the problem. It was recognizing surface patterns from training data. <br><br>Now, consider this: when a PM asks Claude to analyze churn themes, the model does not independently weigh the evidence. It finds what looks statistically probable given everything it has seen before. </p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E7em!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E7em!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 424w, https://substackcdn.com/image/fetch/$s_!E7em!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 848w, https://substackcdn.com/image/fetch/$s_!E7em!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 1272w, 
https://substackcdn.com/image/fetch/$s_!E7em!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E7em!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png" width="1456" height="948" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:948,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:607197,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/188604743?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E7em!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 424w, https://substackcdn.com/image/fetch/$s_!E7em!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 848w, 
https://substackcdn.com/image/fetch/$s_!E7em!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 1272w, https://substackcdn.com/image/fetch/$s_!E7em!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af457c1-ccee-46a6-8a9d-4ca1b3de64c1_2436x1586.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Source</strong>: <a href="https://arxiv.org/pdf/2410.05229">GSM-Symbolic: Understanding the Limitations of 
Mathematical Reasoning in Large Language Models</a></figcaption></figure></div><ul><li><p>Sycophancy shapes the output before the PM notices. <a href="https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots/">Nielsen Norman Group</a> found that 58% of all chatbot interactions display sycophantic behavior. If a PM mentions &#8220;pricing issues&#8221; anywhere in their prompt, the model weights toward pricing. If a PM pushes back on a theme, the model often reverses a previously correct answer. The output is already a reflection of the input&#8217;s assumptions, not an independent read of the data. </p></li></ul><p>The result, as Sullivan documents, is a choose-your-own-adventure experience. Two models. Same transcripts. Different narratives. Different evidence. Different product recommendations. Each was delivered with equal confidence. </p><p>Most PMs only ever see one output. They never see what the same data looks like through a different lens, with a different prompt, on a different model. That single output becomes the evidence base for the next decision.</p><p>That is where the skills in Section 3 begin to matter.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-skills-no-one-is-teaching?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/ai-skills-no-one-is-teaching?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/ai-skills-no-one-is-teaching?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>The 8 Skills That Actually Matter</h2><p>The difference between the two outputs Sullivan showed side by side was not the model. It was the decisions made before the model ran. Each skill below addresses one of those decisions.</p><h3>Prompt for Decisions, Not Just Answers</h3><p>Most PMs ask AI what the data says. The better question is what to do about a specific problem given specific constraints. <a href="https://www.productmanagement.ai/p/prompt-engineering">Product Faculty</a> puts it plainly. &#8220;Bad prompts try to produce good answers. 
Great prompts try to prevent bad reasoning.&#8221; </p><p>When the prompt changes from &#8220;what are the themes?&#8221; to &#8220;given that we are deciding whether to build this feature for this user segment, what does the evidence support?&#8221;, the model has a decision to serve, not just a pattern to find.</p><h3>Load Context That Actually Changes the Output</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zoSv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zoSv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 424w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 848w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 1272w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zoSv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png" width="1456" height="1207" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1207,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Context&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Context" title="Context" srcset="https://substackcdn.com/image/fetch/$s_!zoSv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 424w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 848w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 1272w, https://substackcdn.com/image/fetch/$s_!zoSv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4878b34-6bd2-4ef5-95f3-8f3645841ef9_2160x1790.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Venn diagram explaining context engineering.</em> | <strong>Source</strong>: <a href="https://www.philschmid.de/context-engineering">The New Skill in AI is Not Prompting, It&#8217;s Context Engineering</a></figcaption></figure></div><p>Dumping background into a prompt is not context loading. Phil Schmid of Google DeepMind <a href="https://www.philschmid.de/context-engineering">documented</a> this precisely. </p><div class="pullquote"><p>&#8220;Most agent failures are not model failures anymore. They are context failures.&#8221; </p></div><p>Effective context has four components. </p><ol><li><p>Project scope. </p></li><li><p>The specific business goal</p></li><li><p>Product constraints.</p></li><li><p>A participant overview. </p></li></ol><p>Without those four, Claude and ChatGPT default to generic analysis. 
With them, they answer your question instead of a version of it.</p><h3>Verify Before Anything Leaves the Room</h3><p>Sullivan ran a verification prompt on a set of ChatGPT quotes and found that the majority were paraphrases, not the customer&#8217;s actual words. </p><p>They had participant IDs. They had timestamps. They looked authoritative. But they were not real. </p><p>The fix is a two-step habit. </p><ol><li><p>Define quote rules before analysis begins. </p></li><li><p>Then run a verification pass before any output reaches a stakeholder. </p></li></ol><p>This takes five minutes and catches the errors that would otherwise sit inside a strategy deck for months.</p><h3>Spot Pattern-Matching Before It Becomes a Recommendation</h3><p>When AI returns a theme like &#8220;users want more reliable data,&#8221; that is almost certainly pattern-matching, not signal. </p><p>It could describe any product in any category. </p><p><a href="https://www.producttalk.org/ai-playbook/">Teresa Torres</a> tested Claude against 15 interviews she had previously analyzed manually and found that Claude identified eight opportunities she missed, but also missed seven she found.</p><p>The skill here is <strong>recognizing when AI is surfacing consensus rather than insight</strong>, and then pushing past it with a follow-up that asks for <strong>what is specific</strong>, <strong>contradictory</strong>, or <strong>unexpected in the data</strong>.</p><h3>Use AI Across Multiple Passes, Not One</h3><p>The teams that get real value from AI treat it as a thinking partner across several iterations, not a machine that produces a final answer on the first try.</p><p><a href="https://blog.logrocket.com/product-management/use-ai-to-improve-product-judgment/">LogRocket</a> research across 18 product teams found that the teams producing the most impact were not the ones generating the most output. They were the ones using AI to challenge their own thinking at each step. 
</p><p>Teresa Torres took a single overloaded prompt, <strong>split it into four focused passes</strong>, and <strong>saw quality improve immediately</strong>. </p><p>That is orchestration, which is a skill, not a setting.</p><h3>Match the Model to the Task</h3><p>Claude Sonnet or Opus 4.6, GPT-5.2, and Gemini 3.1 are not interchangeable. Sullivan documented this after running the same analysis across all three more than 100 times.</p><ul><li><p>Claude covers more ground with less pushing and is best suited for <strong>deep qualitative analysis</strong>.</p></li><li><p>Gemini delivers fewer themes but grounds them more heavily in evidence, making it reliable for research synthesis.</p></li><li><p>GPT-5.2 excels at stakeholder framing and communication, but is the most prone to combining quotes into plausible-sounding fabrications.</p></li></ul><p>Using the wrong model for the task is not a tool problem. It is a judgment problem. </p><h3>Write Prompts That Do Not Lead the Witness</h3><p>A 2025 <a href="https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots/">study</a> found that 58% of chatbot interactions display sycophantic behavior, and AI models agree with users 50% more than humans do. </p><p>Mentioning &#8220;retention problems&#8221; in the prompt primes the model to find them. </p><p>The skill is writing <strong>neutral</strong>, <strong>open-ended inputs</strong> that let signal emerge rather than confirm what you already believe. In other words, keep bias out of your prompts, stay curious, and leave room to explore. </p><p>One practical rule is to express the business goal without naming the expected answer.</p><h3>Translate Output into a Recommendation, Not a Report</h3><p>AI returns analysis. It does not return a decision. <strong><a href="https://www.lennysnewsletter.com/p/how-to-use-chatgpt-in-your-pm-work">Shreyas Doshi&#8217;s</a></strong> framing applies directly here. 
</p><blockquote><p><strong>The PM&#8217;s role is editor, not author</strong>. </p></blockquote><p>The last mile, from themes and evidence to a crisp recommendation with a clear rationale and the right level of confidence, is entirely human. That translation is where product judgment lives, and no interface automates it.</p><h2>Where to Start (Without Overwhelm)</h2><p>Eight skills are a lot to absorb at once. The good news is that they do not all carry equal weight at the beginning.</p><p>Start with context loading. It is the skill that immediately improves every other output without changing anything else about the workflow. </p><p>Before the next analysis session, <strong>define the project scope</strong>, <strong>the specific decision at stake</strong>, <strong>the product constraints</strong>, and <strong>who the participants are</strong>. Load those four things before the first prompt. The difference in output quality is immediate and visible. Try it. </p><p><strong>Add verification next.</strong> </p><p>Before any AI output reaches a stakeholder, run a verification pass on the quotes and claims it contains. </p><p>This single habit protects credibility and catches the errors that confident formatting makes invisible. Sullivan&#8217;s verification prompt takes five minutes. The cost of skipping it can take months to recover from. </p><p>Once those two habits are stable, shift the prompting approach toward decisions. Replace &#8220;what does this data show?&#8221; with the specific choice the team needs to make. </p><p>That reframe naturally pulls the remaining five skills into place, because decision-focused prompts demand <strong>better context</strong>, <strong>reward iterative passes</strong>, and <strong>make pattern-matching easier to spot</strong>.</p><p>These three skills compound. </p><ul><li><p>Better context produces fewer fabrications. </p></li><li><p>Fewer fabrications make verification faster. 
</p></li><li><p>Cleaner verified output makes the final recommendation sharper.</p></li></ul><h2>The Judgment Layer Is the Job</h2><p>The PM who produced the trustworthy output in Sullivan&#8217;s experiment was not using a better tool. Claude, ChatGPT, and Gemini were available to both. The difference was the <strong>layer of judgment applied before</strong>, <strong>during</strong>, and <strong>after the model ran</strong>.</p><p>That layer does not come from the interface. It does not improve automatically as models get more capable. GPT-5.2 and Claude Sonnet 4.6 are more sophisticated than anything available two years ago. And the failure modes Sullivan documented are still happening daily across product teams everywhere.</p><p>Lenny Rachitsky framed the direction clearly. &#8220;The PM&#8217;s role shifts to becoming very good at knowing what data to feed AI and asking the right questions.&#8221;  </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:143204698,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/how-ai-will-impact-product-management&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;How AI will impact product management&quot;,&quot;truncated_body_text&quot;:&quot;&#128075; Hey, I&#8217;m Lenny and welcome to a &#128274; subscriber-only edition &#128274; of my weekly newsletter. 
Each week I tackle reader questions about building product, driving growth, and accelerating your career.&quot;,&quot;date&quot;:&quot;2024-04-09T12:02:42.507Z&quot;,&quot;like_count&quot;:237,&quot;comment_count&quot;:22,&quot;bylines&quot;:[{&quot;id&quot;:1849774,&quot;name&quot;:&quot;Lenny Rachitsky&quot;,&quot;handle&quot;:&quot;lenny&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/afba5161-65bb-4d99-8d6b-cce660917fa1_1540x1540.png&quot;,&quot;bio&quot;:&quot;Writing &#8226; Angel investing &#8226; Advising&quot;,&quot;profile_set_up_at&quot;:&quot;2021-05-01T23:55:21.518Z&quot;,&quot;reader_installed_at&quot;:&quot;2021-12-15T18:09:25.096Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:247600,&quot;user_id&quot;:1849774,&quot;publication_id&quot;:10845,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:10845,&quot;name&quot;:&quot;Lenny's Newsletter&quot;,&quot;subdomain&quot;:&quot;lenny&quot;,&quot;custom_domain&quot;:&quot;www.lennysnewsletter.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Deeply researched product, growth, and career advice&#8212;newsletter, podcast, community, and living library&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;author_id&quot;:1849774,&quot;primary_user_id&quot;:1849774,&quot;theme_var_background_pop&quot;:&quot;#f47c55&quot;,&quot;created_at&quot;:&quot;2019-06-01T15:35:37.885Z&quot;,&quot;email_from_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;copyright&quot;:null,&quot;founding_plan_name&quot;:&quot;Insider 
Tier&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;lennysan&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:10000,&quot;status&quot;:{&quot;bestsellerTier&quot;:10000,&quot;subscriberTier&quot;:10,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:10000},&quot;paidPublicationIds&quot;:[3525780,1243269,16907,2217127,1548028,218501,260347,313411,46510,1163860,1435249,1256656,10025,35345],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/how-ai-will-impact-product-management?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How AI will impact product management</div></div><div class="embedded-post-body">&#128075; Hey, I&#8217;m Lenny and welcome to a &#128274; subscriber-only edition &#128274; of my weekly newsletter. 
Each week I tackle reader questions about building product, driving growth, and accelerating your career&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 years ago &#183; 237 likes &#183; 22 comments &#183; Lenny Rachitsky</div></a></div><p>That is not a peripheral skill. </p><p>That is the job.</p><p>As models get better at producing outputs that look right, the ability to judge whether they are right becomes more valuable, not less. </p><p>The eight skills in this article are not a workaround for weak models. They are the foundation for working with strong ones.</p><h2>Conclusion</h2><p>98% of PMs have the tool. The 39% who invest in the skill layer are the ones whose recommendations hold up in the room, whose evidence survives scrutiny, and whose decisions age well.</p><p>This gap is not closing on its own. Practice, experiment, read, and learn these techniques. Observe the differences. Find what suits your workflow, then iterate and teach others. </p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Investor And Venture Outlook On AI | Takeaways For Founders And Product Leaders]]></title><description><![CDATA[A grounded lens on where AI value will compound, which risks matter, and why execution discipline beats hype.]]></description><link>https://labs.adaline.ai/p/investor-and-venture-outlook-on-ai</link><guid isPermaLink="false">https://labs.adaline.ai/p/investor-and-venture-outlook-on-ai</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 18 Feb 2026 13:55:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/71d9c7b9-85d2-4b13-89f0-6963d366f4d1_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: This blog shares what investors really think about AI in 2025. The big idea: AI is still in its early days, even if it doesn&#8217;t feel that way. Just because everyone in tech is talking about AI doesn&#8217;t mean businesses are actually using it yet. Real adoption shows up in budgets, not just experiments. Many industries have barely started. 
The core message for founders and investors: <strong>the AI opportunity is just getting started, not winding down</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DibU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!DibU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!DibU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!DibU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DibU!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:292511,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/184653182?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DibU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!DibU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!DibU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!DibU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68b8e0c8-7868-4753-8ef6-8443943ffec9_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Introduction</h2><div id="youtube2-6rX9K90InuE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;6rX9K90InuE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/6rX9K90InuE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Founder Intro: Investor and Venture Outlook on AI in 2025</h2><p>There&#8217;s no shortage of opinions about AI&#8217;s future. What&#8217;s far rarer is clarity about what actually matters <em>right now</em>. 
For founders, it is about building companies; for investors, about deciding where conviction belongs.</p><p>Panel 5 was designed to cut through that noise. Rather than speculate about distant futures or abstract breakthroughs, we wanted to anchor the conversation in the realities shaping AI businesses in 2025: adoption curves, economics, org design, governance, and where durable value is actually accruing.</p><p>To do that, we brought together investors who are actively underwriting these questions across different stages, geographies, and market structures:</p><ul><li><p><strong><a href="https://www.linkedin.com/in/lukas-linemayr/">Lukas Linemayr</a></strong>, Partner at <strong>Streamlined Ventures</strong>.</p></li><li><p><strong><a href="https://www.linkedin.com/in/rakgarg/">Rak Garg</a></strong>, Partner at <strong>Bain Capital Ventures</strong>.</p></li><li><p><strong><a href="https://www.linkedin.com/in/tiger-gao-princeton2021/">Tiger Gao</a></strong>, Investor at <strong>Apax Digital</strong>.</p></li><li><p><strong><a href="https://www.linkedin.com/in/zaochen/">Zao Chen</a></strong>, Investor at <strong>Craft Ventures</strong>.</p></li></ul><p>What emerged was a surprisingly grounded picture of the AI landscape. Yes, the market is early, but it is not empty. Yes, capital investment is massive, but revenue realization takes time. Yes, platform risk is real, but applications still capture value. And perhaps most importantly: AI has expanded the outcome space for founders rather than narrowing it.</p><p>This panel wasn&#8217;t about predicting AGI timelines or chasing the next hype cycle. 
It was about understanding constraints, making realistic bets, and recognizing where opportunity still hides &#8212; often in overlooked markets, unglamorous workflows, and human-heavy industries that software never fully reached.</p><p>Across the discussion, one theme stood out:</p><blockquote><p>&#8220;AI changes what&#8217;s possible &#8212; not what&#8217;s required to build a real business.&#8221;</p></blockquote><p>Durable companies are still built on trust, usage, distribution, and judgment. The tools are new. The fundamentals are not. The sections that follow break down how investors are thinking about value capture, revenue quality, founder profiles, governance, and scale &#8212; not as theory, but as underwriting criteria today.</p><p>If you&#8217;re building in AI and trying to decide <em>what kind of company to build</em>, <em>whether venture is the right path</em>, or <em>where the next decade of opportunity actually lies</em>, this panel offers a clear place to start.</p><div><hr></div><h2>1. The Market Is Early &#8212; But Not Empty</h2><p>One of the most consistent refrains across the panel was a corrective to a common misconception:</p><p><strong>AI adoption feels saturated inside tech circles &#8212; but it isn&#8217;t saturated in the real economy.</strong></p><p>What looks crowded from within Silicon Valley looks very different when viewed across industries, geographies, and buyer maturity curves.</p><h3>Inside the Bubble vs Outside the Market</h3><p>Within technology ecosystems, AI can feel ubiquitous. Models are improving rapidly. New products launch weekly. 
Capital is flowing aggressively.</p><p>But as multiple panelists emphasized, this perspective is deeply skewed.</p><p><strong>Outside of tech-forward companies:</strong></p><ul><li><p>Most enterprises are still experimenting.</p></li><li><p>Deployments are limited to pilots or narrow workflows.</p></li><li><p>Leadership teams are cautious.</p></li><li><p>Organizational readiness lags technical capability.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, exposure should not be confused with adoption. Awareness is high. Actual usage at scale is not.</p><h3>Budgets Tell the Real Story</h3><p>Several panelists pointed to a simple reality check: <strong>budget allocation</strong>.</p><p>Despite the attention AI receives, AI spend remains a small fraction of overall enterprise budgets. In most organizations, it competes with:</p><ul><li><p>Legacy software commitments.</p></li><li><p>Infrastructure modernization.</p></li><li><p>Security and compliance spend.</p></li><li><p>Headcount and services.</p></li></ul><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, real adoption shows up in sustained budget line items &#8212; not experimentation funds. By that measure, most enterprises are still in early innings.</p><h3>Consumer Adoption Is Uneven, Not Universal</h3><p>The panel also pushed back on the idea that consumer AI adoption is &#8220;done.&#8221;</p><p>While some products have achieved massive usage, adoption remains:</p><ul><li><p>Uneven across geographies.</p></li><li><p>Concentrated among power users.</p></li><li><p>Fragmented by use case.</p></li><li><p>Highly sensitive to trust and clarity.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, consumer behavior varies dramatically outside of early-adopter markets.
What feels mainstream in one region can be niche in another.</p><p>This unevenness suggests opportunity &#8212; not saturation.</p><h3>Entire Industries Are Barely Started</h3><p>Perhaps the most important insight was how many sectors have barely begun meaningful AI deployment. Industries like healthcare, manufacturing, logistics, financial operations, and regulated services face constraints that slow adoption:</p><ul><li><p>Compliance requirements.</p></li><li><p>Legacy systems.</p></li><li><p>Data fragmentation.</p></li><li><p>Cultural resistance.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, these constraints don&#8217;t eliminate opportunity; they delay it. And delayed markets often end up being the largest ones.</p><h3>Capital &#8800; Product-Market Fit</h3><p>A key clarification from the panel was that <strong>capital investment should not be mistaken for market maturity</strong>.</p><p>Yes, enormous amounts of capital have flowed into AI.
No, that does not mean product-market fit is solved.</p><p><strong>At-scale PMF:</strong></p><ul><li><p>Is still forming.</p></li><li><p>Looks different by industry.</p></li><li><p>Requires integration, not just intelligence.</p></li><li><p>Unfolds over years, not quarters.</p></li></ul><p>Many AI products are still searching for repeatable, durable deployment patterns.</p><h3>Diffusion Has Just Begun</h3><p>This led to the panel&#8217;s core takeaway:</p><blockquote><p><strong>Today&#8217;s traction does not represent peak penetration.</strong><br><strong>It represents the beginning of diffusion.</strong></p></blockquote><p><strong>We are early in the curve where:</strong></p><ul><li><p>Workflows are being discovered.</p></li><li><p>Buyers are learning how to buy.</p></li><li><p>Organizations are learning how to deploy.</p></li><li><p>Trust is still being earned.</p></li></ul><p>For founders and investors alike, this reframes the opportunity.</p><p>The market isn&#8217;t empty. But it&#8217;s far from full.</p><h3>The Practical Takeaway</h3><p>AI may feel late-stage if you only look at demos, headlines, and funding rounds.</p><p><strong>But if you look at:</strong></p><ul><li><p>Real usage.</p></li><li><p>Real budgets.</p></li><li><p>Real deployment.</p></li><li><p>Real behavior.</p></li></ul><p>The conclusion is clear: <strong>we&#8217;re still at the beginning of adoption, not the end.</strong></p><p>For companies that can survive the experimentation phase and earn trust at scale, the next wave of growth is still ahead.</p><h2>2. AGI Debates Matter Less Than Near-Term Constraints</h2><p>AGI and superintelligence inevitably came up during the panel, but notably, they were treated as <strong>context</strong>, not catalysts.</p><p>The investors were aligned on a simple point:</p><p><strong>AGI debates are intellectually interesting, but near-term constraints determine outcomes.</strong></p><h3>AGI Is a Moving Target</h3><p>One of the first issues raised was definitional.</p><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, there is no stable, shared definition of AGI. What qualifies as &#8220;general&#8221; varies by speaker, by benchmark, and by moment in time.</p><p><strong>This makes AGI a poor anchor for:</strong></p><ul><li><p>Investment decisions.</p></li><li><p>Company strategy.</p></li><li><p>Product roadmaps.</p></li></ul><p>If the goalposts keep moving, progress becomes impossible to evaluate meaningfully.</p><h3>Reasoning Exists &#8212; But Only Inside Boxes</h3><p>The panel acknowledged real advances in multi-step reasoning.</p><p><strong>Models today can:</strong></p><ul><li><p>Chain logic.</p></li><li><p>Follow structured plans.</p></li><li><p>Solve complex problems <em>within constrained domains</em>.</p></li></ul><p>But that constraint is doing the real work.</p><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, reasoning degrades rapidly once systems leave controlled environments.
Outside of well-scoped tasks, models struggle with ambiguity, long-horizon execution, and accountability.</p><p>This gap matters far more than abstract intelligence scores.</p><h3>Autonomy Is Bottlenecked by the World, Not Models</h3><p>Another key insight was that autonomy isn&#8217;t limited by model capability alone.</p><p><strong>It&#8217;s bottlenecked by:</strong></p><ul><li><p>Messy real-world environments.</p></li><li><p>Poor or fragmented data.</p></li><li><p>Limited feedback loops.</p></li><li><p>Immature reinforcement learning systems.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, intelligence without grounding doesn&#8217;t scale. The world is not a clean API. Until systems can reliably sense, act, and learn in open environments, autonomy will remain constrained regardless of model improvements.</p><h3>Timelines Are Longer Than the Discourse Suggests</h3><p>The panel was notably conservative on timelines: not pessimistic, just realistic.</p><ul><li><p>Breakthroughs will happen.</p></li><li><p>Capabilities will improve.</p></li><li><p>New classes of applications will emerge.</p></li></ul><p>But as <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, the gap between lab demos and reliable deployment is often measured in <em>years</em>, not months.
Underestimating timelines is one of the fastest ways to make bad bets.</p><h3>Investors Underwrite Constraints, Not Possibility</h3><p>This led to a shared investment posture.</p><p>While AGI-level outcomes may shape long-term narratives, <strong>investors operating today underwrite constraints</strong>:</p><ul><li><p>Where models fail.</p></li><li><p>Where workflows break.</p></li><li><p>Where adoption stalls.</p></li><li><p>Where economics don&#8217;t pencil.</p></li></ul><p>Near-term success depends on navigating these limitations and not assuming they&#8217;ll disappear.</p><p>Founders who build as if constraints are permanent often outperform those betting on imminent breakthroughs.</p><h3>The Practical Takeaway</h3><p>AGI debates will continue &#8212; and they matter for long-term vision.</p><p>But in 2025:</p><ul><li><p>Constraints drive outcomes.</p></li><li><p>Environments matter more than intelligence.</p></li><li><p>Deployment beats demos.</p></li><li><p>Realism beats speculation.</p></li></ul><p>For builders and investors alike, the message was clear:</p><blockquote><p>The next wave of value won&#8217;t come from waiting for AGI. It will come from building durable businesses inside today&#8217;s limits while expanding those limits over time.</p></blockquote><h2>3.
Massive CapEx Does Not Automatically Equal Massive Revenue</h2><p>One of the most candid discussions on the panel centered around a growing tension in the AI ecosystem:</p><blockquote><p><strong>Infrastructure spending has exploded, but revenue realization is still catching up.</strong></p></blockquote><p>This disconnect is real, and it matters.</p><h3>Infrastructure Spend Is Front-Loaded by Design</h3><p>The panel acknowledged the obvious headline: AI has triggered one of the largest infrastructure buildouts in modern tech history.</p><ul><li><p>Compute.</p></li><li><p>Data centers.</p></li><li><p>Specialized hardware.</p></li><li><p>Energy commitments.</p></li></ul><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, noted, this level of CapEx is unprecedented outside of telecom or cloud hyperscalers. But unlike traditional software, AI infrastructure must be built <em>ahead</em> of demand.</p><p>This makes early financials look distorted &#8212; not broken.</p><h3>Revenue Exists &#8212; Just Not in Proportion Yet</h3><p>A key nuance the panel emphasized was that <strong>AI revenue is real and growing quickly</strong>.</p><p>Some AI applications are:</p><ul><li><p>Growing faster than any prior software category.</p></li><li><p>Achieving meaningful ARR at early stages.</p></li><li><p>Demonstrating strong willingness to pay.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, pointed out, aggregate AI ARR across the ecosystem is already substantial.</p><p>What it is <em>not yet</em> is proportional to the infrastructure being built to support future demand.</p><p>That gap is expected and temporary.</p><h3>Monetization Lags Capability</h3><p>Another consistent insight was that <strong>monetization always lags technical capability</strong>.</p><ul><li><p>Models improve first.</p></li><li><p>Use cases emerge next.</p></li><li><p>Business models stabilize last.</p></li></ul><p>As 
<strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, explained, AI creates value before it captures value. It takes time for:</p><ul><li><p>Buyers to understand ROI.</p></li><li><p>Pricing models to normalize.</p></li><li><p>Procurement processes to adapt.</p></li><li><p>Budgets to shift meaningfully.</p></li></ul><p>This lag is not unique to AI, but the scale makes it more visible.</p><h3>CapEx Absorption Takes Time</h3><p>The panel converged on a clear expectation:</p><blockquote><p><strong>CapEx absorption will take years, not quarters.</strong></p></blockquote><p>Infrastructure will be amortized over long time horizons.</p><p>Revenue will arrive unevenly.</p><p>Some segments will monetize faster than others.</p><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, emphasized, this doesn&#8217;t imply poor returns &#8212; it implies patience. Investors expecting immediate proportionality between spend and revenue are misreading the cycle.</p><h3>Uneven Returns Are a Feature, Not a Bug</h3><p>Another important point was that returns will not be distributed evenly.</p><p>Some layers will:</p><ul><li><p>Capture outsized value early.</p></li><li><p>Show strong unit economics.</p></li><li><p>Justify spending quickly.</p></li></ul><p>Others will:</p><ul><li><p>Struggle to monetize.</p></li><li><p>Remain infrastructure-heavy.</p></li><li><p>Consolidate over time.</p></li></ul><p>This unevenness is characteristic of platform shifts, not a sign of failure.</p><h3>The Practical Takeaway</h3><p>Massive CapEx is not proof of massive revenue, <em>yet</em>.</p><p>But it is a prerequisite for it.</p><p>The panel&#8217;s consensus was grounded but optimistic:</p><ul><li><p>Revenue is coming.</p></li><li><p>Monetization is forming.</p></li><li><p>Timelines are longer than hype suggests.</p></li></ul><p>For investors and founders alike, the message was clear:</p><blockquote><p><strong>Don&#8217;t confuse delayed returns with absent 
returns.</strong><br><strong>The AI buildout is early &#8212; and uneven by design.</strong></p></blockquote><h2>4. Value Accrues to Applications, Not Foundations</h2><p>One of the strongest points of alignment across the panel was a lesson the industry has learned repeatedly:</p><p><strong>Platforms enable value.</strong><br><strong>Applications capture it.</strong></p><p>AI does not break that pattern; it reinforces it.</p><h3>History Rhymes &#8212; Even When Technology Changes</h3><p>The panel situated AI within a familiar historical arc.</p><p>In prior platform shifts:</p><ul><li><p>Operating systems enabled software companies.</p></li><li><p>Cloud infrastructure enabled SaaS.</p></li><li><p>Mobile platforms enabled app ecosystems.</p></li></ul><p>In each case, the enabling layer was essential &#8212; but the enduring value accrued to the application layer.</p><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, AI follows the same economic logic. Infrastructure makes new behavior possible.
Applications turn that possibility into revenue.</p><h3>Foundations Are Necessary &#8212; and Brutal</h3><p>The panel was clear-eyed about the difficulty of foundation-layer businesses.</p><p>Chips, models, and infrastructure are:</p><ul><li><p>Capital-intensive.</p></li><li><p>Technically complex.</p></li><li><p>Strategically critical.</p></li></ul><p>But they are also:</p><ul><li><p>Highly competitive.</p></li><li><p>Subject to commoditization.</p></li><li><p>Constrained by margin pressure.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, the model layer increasingly resembles cloud infrastructure wars &#8212; massive scale advantages, few winners, and brutal economics for everyone else.</p><p>These businesses matter &#8212; but they are structurally hard to own as long-term value capture plays.</p><h3>Applications Control the Customer</h3><p>What applications uniquely possess is <strong>the user relationship</strong>.</p><p>Applications own:</p><ul><li><p>Workflow integration.</p></li><li><p>Daily usage.</p></li><li><p>Customer trust.</p></li><li><p>Switching costs.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, this control translates directly into pricing power. 
Users pay for outcomes, not for abstractions.</p><p>When models improve, applications benefit without having to rebuild trust from scratch.</p><h3>Differentiation Lives Above the Model</h3><p>Another key point was that <strong>models converge faster than experiences</strong>.</p><ul><li><p>Model performance gaps compress.</p></li><li><p>APIs standardize.</p></li><li><p>Capabilities diffuse.</p></li></ul><p>Applications differentiate by:</p><ul><li><p>Domain expertise.</p></li><li><p>Workflow design.</p></li><li><p>Data context.</p></li><li><p>User experience.</p></li><li><p>Operational integration.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, emphasized, durable defensibility emerges from how AI is applied &#8212; not from the intelligence itself.</p><h3>Margins Expand Up the Stack</h3><p>The panel also highlighted a familiar economic pattern:</p><ul><li><p>Margins expand as you move closer to the user.</p></li><li><p>Infrastructure margins are constrained by cost curves.</p></li><li><p>Model margins are pressured by competition.</p></li><li><p>Application margins grow through differentiation and pricing power.</p></li></ul><p>This doesn&#8217;t diminish the importance of foundational layers &#8212; but it clarifies where sustained value capture occurs.</p><h3>The Practical Takeaway</h3><p>AI infrastructure enables the future.</p><p>Applications monetize it.</p><p>For founders, this means:</p><ul><li><p>Obsessing over workflows, not models.</p></li><li><p>Owning user trust and integration.</p></li><li><p>Building differentiation above the foundation.</p></li></ul><p>For investors, it reinforces a familiar truth:</p><p><strong>The largest, most durable outcomes are still built at the application layer, even in an AI-first world.</strong></p><h2>5. Platform Risk Is Real &#8212; But Not Fatal</h2><p>The panel didn&#8217;t avoid one of the most sensitive topics in AI investing:<br><strong>platform risk is real.</strong></p><ul><li><p>Model providers are moving downstream.</p></li><li><p>APIs are evolving.</p></li><li><p>Feature parity is increasing.</p></li></ul><p>But the consensus view was notably pragmatic &#8212; not alarmist.</p><h3>Tension Is Inevitable in Platform Shifts</h3><p>As platforms mature, they naturally look for ways to monetize.</p><p><strong>That often means:</strong></p><ul><li><p>Expanding feature sets.</p></li><li><p>Offering more opinionated tools.</p></li><li><p>Encroaching on application territory.</p></li></ul><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, noted, this tension is not unique to AI.
It showed up in cloud, mobile, and SaaS before.</p><p>Platforms and applications coexist &#8212; sometimes uneasily &#8212; because they serve different economic roles.</p><h3>API Risk Is a Known Variable</h3><p><strong>Several panelists acknowledged legitimate concerns around:</strong></p><ul><li><p>Access changes.</p></li><li><p>Pricing shifts.</p></li><li><p>Deprecations.</p></li><li><p>Policy updates.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, pointed out, APIs are dependencies &#8212; not guarantees. Smart teams model this risk explicitly rather than pretending it doesn&#8217;t exist.</p><p>Platform risk becomes fatal only when it&#8217;s ignored.</p><h3>Differentiation Isn&#8217;t in the Model</h3><p>The panel repeatedly returned to where applications actually win.</p><p><strong>Apps differentiate through:</strong></p><ul><li><p>Workflow design.</p></li><li><p>Domain expertise.</p></li><li><p>Product taste.</p></li><li><p>Brand and trust.</p></li><li><p>Customer relationships.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, emphasized, platforms optimize for breadth. 
Applications win through depth.</p><p>That depth is hard to replicate &#8212; even for the platform itself.</p><h3>Competition Reshapes Opportunity</h3><p>One of the more grounded insights was that <strong>competition doesn&#8217;t eliminate opportunity; it reshapes it</strong>.</p><p><strong>When platforms move downstream:</strong></p><ul><li><p>They validate demand.</p></li><li><p>They educate the market.</p></li><li><p>They raise baseline expectations.</p></li></ul><p>This often creates new whitespace for more specialized, higher-quality applications.</p><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, many successful SaaS companies were built <em>after</em> platforms entered adjacent spaces &#8212; not before.</p><h3>Risk Is a Pricing Input, Not a Stop Signal</h3><p>The panel ultimately framed platform risk the same way investors do:<br>As a factor to price in, not a reason to walk away.</p><p>Founders who <strong>understand their dependency surface</strong>, <strong>design for portability</strong>, <strong>own the customer relationship</strong>, and <strong>build real differentiation</strong> can survive &#8212; and even benefit from &#8212; platform competition.</p><h3>The Practical Takeaway</h3><ol><li><p>Platform risk in AI is real.</p></li><li><p>But it&#8217;s not new.</p></li><li><p>It&#8217;s not fatal.</p></li><li><p>And it&#8217;s not a reason to avoid building.</p></li></ol><p><strong>The companies that win:</strong></p><ul><li><p>Acknowledge the risk.</p></li><li><p>Design around it.</p></li><li><p>Differentiate beyond the platform.</p></li><li><p>Move faster than incumbents.</p></li></ul><p>In AI, as in every platform shift before it, <strong>value accrues to teams that build where platforms can&#8217;t &#8212; not where they can.</strong></p><h2>6. 
&#8220;Quality of Revenue&#8221; Now Matters at Seed</h2><p>One of the clearest shifts highlighted by investors was the&nbsp;<strong>earlier evaluation of revenue</strong>.</p><p>In prior cycles, seed-stage revenue was rare, and when it existed it was often enough on its own.</p><p>In AI, revenue shows up earlier.</p><p>That changes the bar.</p><h3>Revenue Is Easier to Generate &#8212; and Easier to Misread</h3><p>AI has dramatically compressed time-to-revenue.</p><p><strong>Teams can:</strong></p><ul><li><p>Ship quickly.</p></li><li><p>Demo convincingly.</p></li><li><p>Monetize early interest.</p></li><li><p>Close initial contracts faster than ever.</p></li></ul><p>But as multiple panelists emphasized, <strong>early revenue is no longer synonymous with a real business</strong>.</p><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, the question is no longer <em>&#8220;Do you have revenue?&#8221;</em> &#8212; it&#8217;s <em>&#8220;What kind of revenue is this?&#8221;</em></p><h3>The New Questions Investors Ask</h3><p>Across the panel, investors described a sharper line of inquiry at seed and Series A.</p><p><strong>They want to understand:</strong></p><ul><li><p><strong>Durability</strong>: Does usage persist after novelty fades?</p></li><li><p><strong>Depth</strong>: Are customers relying on the product, or just experimenting?</p></li><li><p><strong>Repeatability</strong>: Does demand recur, or is it opportunistic?</p></li><li><p><strong>Expansion</strong>: Is there a credible path from $10M to $100M to public markets?</p></li></ul><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, investors are increasingly underwriting <em>trajectory</em>, not just traction.</p><h3>Novelty Masks Weak Signals</h3><p>Several panelists warned that AI novelty can distort early metrics.</p><p><strong>Short-term spikes may reflect:</strong></p><ul><li><p>Curiosity.</p></li><li><p>Experimentation 
budgets.</p></li><li><p>Executive mandates.</p></li><li><p>Fear of missing out.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, these signals look strong in dashboards &#8212; but decay quickly if the product doesn&#8217;t earn its place in a workflow.</p><p>Retention, not activation, tells the real story.</p><h3>Usage Reveals Business Reality</h3><p>A recurring theme was that <strong>usage behavior is more informative than revenue timing</strong>.</p><p><strong>Investors look closely at:</strong></p><ul><li><p>Frequency of use.</p></li><li><p>Depth of engagement.</p></li><li><p>Reliance during critical moments.</p></li><li><p>Behavior when the product fails.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, strong businesses show resilience. Customers return even when things break. Weak ones disappear quietly.</p><p>Revenue without usage conviction is fragile.</p><h3>Scale Tests Everything</h3><p>Another important point was that <strong>scaling reveals quality quickly</strong>.</p><p><strong>Many AI products can reach $1&#8211;5M in ARR through:</strong></p><ul><li><p>Founder-led sales.</p></li><li><p>Bespoke deployments.</p></li><li><p>Heavy services.</p></li><li><p>Early adopter enthusiasm.</p></li></ul><p><strong>The real question is whether the business can:</strong></p><ul><li><p>Standardize delivery.</p></li><li><p>Reduce marginal cost.</p></li><li><p>Survive broader scrutiny.</p></li><li><p>Scale distribution without collapsing economics.</p></li></ul><p>As the panel emphasized, the path from $10M to $100M remains the true test&#8212;and AI has not shortened it.</p><h3>Time-to-Business Maturity Hasn&#8217;t Changed</h3><p>This led to one of the panel&#8217;s most grounded conclusions:</p><blockquote><p><strong>AI has compressed time-to-revenue.</strong><br><strong>It has not compressed time-to-business maturity.</strong></p></blockquote><p>Trust still 
takes time.</p><p>Habits still take time.</p><p>Markets still take time.</p><p>No model shortcut changes that.</p><h3>The Practical Takeaway</h3><p>Revenue is necessary &#8212; but no longer sufficient.</p><p><strong>For founders:</strong></p><ul><li><p>Focus on usage durability, not just monetization.</p></li><li><p>Optimize for reliance, not novelty.</p></li><li><p>Build businesses that survive attention decay.</p></li></ul><p><strong>For investors:</strong></p><blockquote><p>Early revenue is a starting point for diligence, not the end.</p></blockquote><p>In an AI-first world,&nbsp;<strong>the quality of revenue matters earlier because it&#8217;s easier than ever to get the wrong kind</strong>.</p><h2>7. Taste, Brand, and Community Are Emerging Moats</h2><p>One of the more surprising &#8212; and strongly aligned &#8212; themes across the panel was how much <strong>intangible moats now matter in AI</strong>.</p><p>In fact, the investors suggested they may matter <em>more</em> than in traditional SaaS.</p><h3>Feature Parity Is the New Default</h3><p>As models converge and capabilities diffuse, feature parity arrives faster than teams expect.</p><p>What once felt differentiated &#8212; reasoning quality, speed, and output polish &#8212; now quickly becomes the baseline.</p><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, when technical advantages compress, competition shifts up the stack &#8212; toward how products <em>feel</em>, not just what they do.</p><h3>Taste Creates Coherence</h3><p>The panel framed <strong>taste</strong> not as aesthetics, but as coherence.</p><p><strong>Taste shows up in:</strong></p><ul><li><p>Which problems are chosen.</p></li><li><p>Which features are excluded.</p></li><li><p>How workflows are structured.</p></li><li><p>How the product behaves under stress.</p></li></ul><p>As <strong>Rak Garg</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, taste is what makes a 
product feel intentional rather than accidental. In AI products, where outputs are probabilistic, that sense of intention is deeply reassuring.</p><p>Coherence builds confidence.</p><p>Confidence builds habit.</p><h3>Brand Is a Trust Shortcut</h3><p>Brand also took on a more functional meaning in the discussion.</p><p>In AI, brand is not about awareness &#8212; it&#8217;s about <strong>trust compression</strong>.</p><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, when users don&#8217;t fully understand how a system works, they rely on signals. Brand becomes a shortcut for:</p><ul><li><p>Reliability.</p></li><li><p>Alignment.</p></li><li><p>Safety.</p></li><li><p>Intent.</p></li></ul><p>In uncertain environments, trusted brands reduce adoption friction and earn forgiveness when things fail.</p><h3>Community Multiplies Distribution and Retention</h3><p>Community was discussed not as engagement, but as leverage.</p><p><strong>Strong communities:</strong></p><ul><li><p>Normalize uncertainty.</p></li><li><p>Spread best practices.</p></li><li><p>Reinforce identity.</p></li><li><p>Accelerate onboarding.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, community transforms products from tools into shared experiences.
That shift increases retention and turns users into distributors.</p><p>Community doesn&#8217;t lock users in technically &#8212; it locks them in emotionally.</p><h3>Switching Costs Are Becoming Emotional</h3><p>Perhaps the most important reframe was around <strong>switching costs</strong>.</p><p>In AI, switching costs are often low technically:</p><ul><li><p>Data can be exported.</p></li><li><p>Integrations are portable.</p></li><li><p>Models are interchangeable.</p></li></ul><p>But switching costs are high emotionally.</p><p><strong>People stick with products they:</strong></p><ul><li><p>Trust.</p></li><li><p>Identify with.</p></li><li><p>Feel understood by.</p></li><li><p>Have invested in learning.</p></li></ul><p>As the panel emphasized, these costs aren&#8217;t enforced &#8212; they&#8217;re <em>felt</em>.</p><h3>Moats You Can&#8217;t Diagram</h3><p>The panel acknowledged that taste, brand, and community are harder to quantify than traditional moats.</p><p>But that doesn&#8217;t make them weaker.</p><p><strong>In fact, they&#8217;re often:</strong></p><ul><li><p>Slower to build.</p></li><li><p>Harder to copy.</p></li><li><p>More durable over time.</p></li></ul><p>As one investor summarized, competitors can clone features in months. 
They can&#8217;t clone trust, coherence, or belonging on the same timeline.</p><h3>The Practical Takeaway</h3><p>In an AI world defined by rapid convergence, the strongest moats are increasingly human.</p><p><strong>They live in:</strong></p><ul><li><p>Product judgment.</p></li><li><p>Emotional resonance.</p></li><li><p>Shared identity.</p></li><li><p>Trust built over time.</p></li></ul><p><strong>For founders, this means:</strong></p><ul><li><p>Investing in coherence early.</p></li><li><p>Treating brand as infrastructure.</p></li><li><p>Designing community intentionally.</p></li></ul><p>For investors, it reframes defensibility.</p><p><strong>The most durable moats may no longer be enforced by code; they&#8217;re earned through experience.</strong></p><h2>8. Founder Profiles Are Expanding, Not Narrowing</h2><p>One of the most encouraging conclusions from the panel was the extent to which&nbsp;<strong>the founder archetype is expanding</strong> in the AI era. Rather than narrowing the set of people who can build venture-scale companies, AI is expanding it.</p><h3>The Old Pattern Is Breaking</h3><p>Historically, venture-backed success clustered around a familiar profile:</p><ul><li><p>Elite technical pedigree.</p></li><li><p>Prior big-tech experience.</p></li><li><p>Access to capital and networks.</p></li><li><p>Long lead times to build.</p></li></ul><p>The panel agreed that this pattern is weakening.</p><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, AI dramatically lowers the cost of experimentation.
Founders no longer need massive teams or years of infrastructure work to reach meaningful traction.</p><p>This opens the door to a much broader set of builders.</p><h3>Younger Founders Are Succeeding Earlier</h3><p>Several investors pointed out that <strong>founders are reaching real scale earlier in their careers</strong>.</p><p><strong>AI allows:</strong></p><ul><li><p>Faster iteration.</p></li><li><p>Quicker feedback from the market.</p></li><li><p>Earlier revenue.</p></li><li><p>More compressed learning cycles.</p></li></ul><p>As <strong>Rak Gard</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, velocity now matters more than a resume. Teams that learn quickly often outperform those with deeper credentials but slower adaptation.</p><h3>Domain Expertise Is Rising in Importance</h3><p>Another major shift discussed was the increasing value of <strong>deep domain knowledge</strong>.</p><p><strong>In many AI categories:</strong></p><ul><li><p>The hard part isn&#8217;t building intelligence.</p></li><li><p>It&#8217;s understanding the workflow.</p></li><li><p>Navigating edge cases.</p></li><li><p>Earning trust in complex environments.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, founders with lived experience in a problem domain often have sharper product intuition than technically elite generalists.</p><p>Knowing what <em>shouldn&#8217;t</em> be automated is often more valuable than knowing how to automate everything.</p><h3>Adaptability Is the New Core Skill</h3><p>The panel was unified on one point: <strong>AI rewards founders who adapt continuously</strong>.</p><p><strong>Successful founders today must:</strong></p><ul><li><p>Navigate constant model changes.</p></li><li><p>Reassess architectural decisions regularly.</p></li><li><p>Update mental models frequently.</p></li><li><p>Make decisions with incomplete information.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at 
<strong>Craft Ventures</strong>, noted, the ability to revise beliefs quickly has become a defining trait. Rigid thinkers struggle in environments where assumptions expire every quarter.</p><h3>Opinionated Thinking Matters More Than Credentials</h3><p>Another subtle but important theme was the value of <strong>opinionated judgment</strong>.</p><p>With so many tools, models, and paths available, founders who <strong>have clear points of view</strong>, <strong>make decisive tradeoffs</strong>, <strong>resist chasing every trend</strong>, and <strong>articulate why they believe something</strong> tend to move faster and build more coherent companies.</p><p>Pedigree may open doors, but judgment keeps companies alive.</p><h3>The Founder Archetype Is Broadening</h3><p>Taken together, the panel painted a clear picture:</p><p>There is no single &#8220;ideal&#8221; AI founder.</p><p>Instead, the market rewards:</p><ul><li><p>Speed over seniority.</p></li><li><p>Learning over lineage.</p></li><li><p>Judgment over credentials.</p></li><li><p>Adaptability over perfection.</p></li></ul><p>This is a structural shift &#8212; not a temporary one.</p><h3>The Practical Takeaway</h3><p>AI is not concentrating opportunity. It&#8217;s distributing it.</p><p>For founders, this is a call to lean into:</p><ul><li><p>Lived experience.</p></li><li><p>Clear thinking.</p></li><li><p>Fast learning.</p></li><li><p>Strong opinions.</p></li></ul><p>For investors, it means expanding pattern recognition &#8212; not narrowing it.</p><p>In the AI era, <strong>the founders who win won&#8217;t all look the same, and that&#8217;s a feature, not a bug</strong>.</p><h2>9.
Venture-Backed Is a Choice &#8212; Not a Default</h2><p>One of the most refreshingly candid moments in the panel came when the conversation turned to <strong>founder paths</strong>.</p><p>The investors were aligned on a point that&#8217;s often left unsaid:</p><blockquote><p><strong>Not every great AI business should be venture-backed.</strong></p></blockquote><p>And that&#8217;s not a failure &#8212; it&#8217;s a feature of the moment we&#8217;re in.</p><h3>AI Has Changed the Economics of Building</h3><p>AI has dramatically lowered the cost of starting companies.</p><p>Founders can now:</p><ul><li><p>Build sophisticated products with small teams.</p></li><li><p>Reach customers directly.</p></li><li><p>Generate revenue early.</p></li><li><p>Operate profitably at smaller scales.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, this fundamentally expands the set of viable outcomes. Venture is no longer the only path to building something meaningful &#8212; or enduring.</p><h3>Niche, Profitable Businesses Are More Viable Than Ever</h3><p>Several panelists highlighted how AI enables <strong>high-quality, niche businesses</strong>.</p><p>These companies:</p><ul><li><p>Serve specific audiences deeply.</p></li><li><p>Operate with strong margins.</p></li><li><p>Grow sustainably.</p></li><li><p>Don&#8217;t require hypergrowth.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, many of these businesses would have struggled to exist a decade ago. 
Today, they can thrive &#8212; and founders can own more of the upside.</p><p>Scale isn&#8217;t the only measure of success.</p><h3>Community Enables Profitable Distribution</h3><p>Another enabling factor discussed was the rise of <strong>community-driven distribution</strong>.</p><p>Strong communities allow companies to:</p><ul><li><p>Reach users directly.</p></li><li><p>Reduce CAC dramatically.</p></li><li><p>Build trust faster.</p></li><li><p>Monetize without heavy spend.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, community doesn&#8217;t just support growth &#8212; it supports profitability. For many AI products, that changes the calculus entirely.</p><h3>Venture Comes With Constraints</h3><p>The panel was also clear about what venture capital demands.</p><p>Venture-backed paths require:</p><ul><li><p>Chasing very large markets.</p></li><li><p>Tolerating higher risk.</p></li><li><p>Optimizing for scale over stability.</p></li><li><p>Committing to outcomes that justify dilution.</p></li></ul><p>As <strong>Rak Gard</strong>, Partner at <strong>Bain Capital Ventures</strong>, emphasized, venture is best suited for companies willing to pursue problems that are structurally large &#8212; often adjacent to, but not dependent on, AGI-level breakthroughs.</p><p>It&#8217;s a powerful tool &#8212; but it narrows the problem space.</p><h3>Choosing Venture Means Choosing the Problem</h3><p>One of the most important reframes was that <strong>venture is not just a financing choice &#8212; it&#8217;s a product choice</strong>.</p><p>It implicitly commits founders to:</p><ul><li><p>A certain growth rate.</p></li><li><p>A certain market size.</p></li><li><p>A certain risk profile.</p></li></ul><p>Founders who don&#8217;t want those constraints shouldn&#8217;t feel compelled to accept them.</p><p>As the panel underscored, opting out of venture isn&#8217;t opting out of ambition &#8212; it&#8217;s opting into a different kind 
of ambition.</p><h3>AI Expands the Outcome Space</h3><p>The broader conclusion was optimistic.</p><p>AI doesn&#8217;t funnel founders into a single path. It multiplies the paths available.</p><p>Some companies should:</p><ul><li><p>Raise aggressively.</p></li><li><p>Chase massive markets.</p></li><li><p>Take on existential risk.</p></li></ul><p>Others should:</p><ul><li><p>Stay small and profitable.</p></li><li><p>Serve communities deeply.</p></li><li><p>Compound quietly over time.</p></li></ul><p>Both are valid. Both can be impactful.</p><h3>The Practical Takeaway</h3><p>AI lowers the cost of building &#8212; but it doesn&#8217;t dictate how you should build.</p><p>Venture-backed is no longer the default. It&#8217;s a choice.</p><p>The best founders don&#8217;t ask:</p><blockquote><p><em>&#8220;Can this raise venture?&#8221;</em></p></blockquote><p>They ask:</p><blockquote><p><em>&#8220;What kind of company do I want to build &#8212; and what path best supports that?&#8221;</em></p></blockquote><p>In an AI-first world, <strong>freedom of choice is one of the most powerful new advantages founders have</strong>.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/investor-and-venture-outlook-on-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/investor-and-venture-outlook-on-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/investor-and-venture-outlook-on-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>10. Huge Markets Remain Underserved</h2><p>Despite how crowded parts of the AI landscape appear, the panel was emphatic on one point: <strong>Many of the largest opportunities aren&#8217;t crowded at all.</strong> They&#8217;re simply overlooked.</p><h3>Silicon Valley Sees a Narrow Slice of the Economy</h3><p>The panel highlighted a structural blind spot in how markets are perceived.</p><p>Inside tech ecosystems, attention clusters around:</p><ul><li><p>Developer tools.</p></li><li><p>Knowledge work productivity.</p></li><li><p>Media and content.</p></li><li><p>Obvious white-collar workflows.</p></li></ul><p>But as <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, these categories represent a small fraction of global economic activity.</p><p>Outside that bubble sit enormous industries that are:</p><ul><li><p>Operationally complex.</p></li><li><p>Heavily manual.</p></li><li><p>Under-softwared.</p></li><li><p>Resistant to prior automation.</p></li></ul><p>These sectors don&#8217;t appear on demo days, but they dominate real GDP.</p><h3>Service Industries Are Still Software-Poor</h3><p>Several investors emphasized how many service-heavy industries remain untouched by modern software.</p><p>Examples discussed included:</p><ul><li><p>Field services.</p></li><li><p>Logistics coordination.</p></li><li><p>Healthcare operations.</p></li><li><p>Compliance-heavy workflows.</p></li><li><p>Back-office functions in regulated 
industries.</p></li></ul><p>As <strong>Rak Gard</strong>, Partner at <strong>Bain Capital Ventures</strong>, pointed out, many of these markets were poor fits for traditional SaaS. The workflows were too fragmented, too judgment-heavy, or too expensive to automate manually.</p><p>AI changes that calculus.</p><h3>AI Enables Automation Where Software Never Reached</h3><p>The panel stressed that AI&#8217;s most powerful impact may not be where software already exists &#8212; but where it <em>never could</em>.</p><p>AI can:</p><ul><li><p>Handle ambiguity.</p></li><li><p>Adapt to messy inputs.</p></li><li><p>Support human judgment.</p></li><li><p>Operate across inconsistent processes.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, explained, this opens entirely new categories. Work that was previously uneconomical to software-enable suddenly becomes tractable.</p><p>The opportunity isn&#8217;t a marginal improvement. It&#8217;s first-time automation.</p><h3>Visibility, Not Ideation, Is the Bottleneck</h3><p>Another important reframing was around innovation itself.</p><p>The panel rejected the idea that success requires discovering a &#8220;new&#8221; idea. Instead, it requires:</p><ul><li><p>Seeing existing problems clearly.</p></li><li><p>Understanding how work actually happens.</p></li><li><p>Recognizing where human labor is trapped by process.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, many of the biggest AI companies of the next decade won&#8217;t feel novel to insiders. They&#8217;ll feel <em>obvious</em> &#8212; once someone finally builds them.</p><h3>Underserved Markets Often Look Unattractive Early</h3><p>One reason these markets remain open is that they rarely look attractive at first glance. 
They:</p><ul><li><p>Lack clean APIs.</p></li><li><p>Involve legacy systems.</p></li><li><p>Require domain expertise.</p></li><li><p>Don&#8217;t fit standard growth narratives.</p></li></ul><p>But as the panel emphasized, these same traits often signal durability. Once solved, these problems create:</p><ul><li><p>High switching costs.</p></li><li><p>Deep customer reliance.</p></li><li><p>Long-term contracts.</p></li><li><p>Real economic impact.</p></li></ul><h3>The Practical Takeaway</h3><p>AI opportunity isn&#8217;t concentrated only where attention is loudest. It&#8217;s often hiding in:</p><ul><li><p>Invisible workflows.</p></li><li><p>Neglected industries.</p></li><li><p>Unglamorous services.</p></li><li><p>Problems people stopped trying to solve.</p></li></ul><p>The panel&#8217;s closing reframe was simple but powerful:</p><blockquote><p><strong>The opportunity is not finding a new idea; it&#8217;s seeing an old problem clearly for the first time.</strong></p></blockquote><p>For founders willing to look beyond the obvious, the AI market is still wide open.</p><h2>11. Hiring and Org Design Are Still Bottlenecks</h2><p>One of the most pragmatic points the panel made was also one of the least glamorous: <strong>AI does not eliminate organizational bottlenecks.</strong> <strong>It often exposes them.</strong></p><p>Despite dramatic gains in technical capability, the fundamentals of building and scaling companies remain stubbornly human.</p><h3>AI Doesn&#8217;t Replace Go-To-Market Reality</h3><p>The panel was explicit that AI does not remove the need for:</p><ul><li><p>Selling.</p></li><li><p>Onboarding.</p></li><li><p>Change management.</p></li><li><p>Domain translation.</p></li><li><p>Forward-deployed work.</p></li></ul><p>As <strong>Rak Gard</strong>, Partner at <strong>Bain Capital Ventures</strong>, noted, many AI companies underestimate how much of the work happens <em>outside</em> the model.
Especially in enterprise and regulated markets, trust must still be earned person by person.</p><p>Models don&#8217;t close deals. People do.</p><h3>Non-Technical Roles Matter More Than Expected</h3><p>A recurring surprise for many founders is how critical non-coding roles remain. They become essential when:</p><ul><li><p>Sales cycles are long.</p></li><li><p>Buyers are non-technical.</p></li><li><p>Workflows are entrenched.</p></li><li><p>Adoption requires behavior change.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, emphasized, AI products often increase the need for translation &#8212; not reduce it. Someone still has to explain what the system does, where it works, where it doesn&#8217;t, and how to integrate it safely.</p><p>That work doesn&#8217;t disappear. It shifts.</p><h3>Forward-Deployed Humans Are Often the Unlock</h3><p>Several panelists pointed out that forward-deployed teams are not a sign of weakness &#8212; they&#8217;re often a sign of realism.</p><p>In complex environments, humans:</p><ul><li><p>Adapt to messy workflows.</p></li><li><p>Handle exceptions.</p></li><li><p>Earn trust in high-stakes settings.</p></li><li><p>Surface product gaps quickly.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, many successful AI companies scale <em>through</em> forward-deployed work before they scale <em>away from it</em>. 
The mistake is treating these roles as temporary hacks instead of strategic leverage.</p><h3>Org Design Determines Where AI Actually Scales</h3><p>Another key insight was that <strong>organizational design determines where AI leverage shows up</strong>.</p><p>Teams that struggle often:</p><ul><li><p>Over-index on engineers.</p></li><li><p>Under-invest in GTM and enablement.</p></li><li><p>Assume automation replaces coordination.</p></li><li><p>Delay hiring for customer-facing roles.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, this creates a mismatch: powerful technology paired with insufficient human scaffolding. Adoption stalls not because the product is weak &#8212; but because the org can&#8217;t support it.</p><h3>Leverage Comes From Deploying Humans Intentionally</h3><p>The panel emphasized that winning teams don&#8217;t eliminate humans; they deploy them strategically. They:</p><ul><li><p>Put humans where judgment matters most.</p></li><li><p>Automate where repetition dominates.</p></li><li><p>Keep humans close to customers early.</p></li><li><p>Pull them back only once patterns stabilize.</p></li></ul><p>This isn&#8217;t inefficient. It&#8217;s how learning compounds.</p><h3>The Practical Takeaway</h3><p>AI changes what humans do, not whether they&#8217;re needed.</p><p>The companies that win:</p><ul><li><p>Design orgs around real-world adoption.</p></li><li><p>Hire for translation, trust, and judgment.</p></li><li><p>Accept that some work cannot be automated early.</p></li><li><p>Deploy humans where leverage is highest.</p></li></ul><p>In an AI-first world, <strong>technology scales fastest when organizations are designed to support it</strong>.</p><p>Ignoring hiring and org design doesn&#8217;t make them go away. It just turns them into silent bottlenecks.</p><h2>12.
Governance Will Emerge Bottom-Up, Not Top-Down</h2><p>When the conversation turned to regulation and governance, the panel aligned around a view that was notably pragmatic:</p><p><strong>Governance will not arrive first through policy.</strong><br><strong>It will emerge through products.</strong></p><p>This isn&#8217;t ideological &#8212; it&#8217;s observational.</p><h3>Regulation Will Always Lag Innovation</h3><p>The panel was clear that regulation inevitably trails technology.</p><p>AI is moving too quickly for:</p><ul><li><p>Comprehensive legislation.</p></li><li><p>Globally consistent standards.</p></li><li><p>Real-time regulatory oversight.</p></li></ul><p>As <strong>Lukas Linemayr</strong>, Partner at <strong>Streamlined Ventures</strong>, noted, this lag is not a failure of regulators &#8212; it&#8217;s a structural reality. By the time rules are written, the underlying technology has already shifted.</p><p>Waiting for regulation to define governance is therefore unrealistic.</p><h3>Governance Will Be Built, Not Declared</h3><p>Instead, governance is emerging <strong>bottom-up</strong>, through tooling and infrastructure.</p><p>The panel emphasized that real governance is operational, not philosophical.</p><p>It shows up as:</p><ul><li><p>Auditability.</p></li><li><p>Observability.</p></li><li><p>Access controls.</p></li><li><p>Permissions.</p></li><li><p>Rollback mechanisms.</p></li><li><p>Monitoring and logging.</p></li></ul><p>As <strong>Rak Gard</strong>, Partner at <strong>Bain Capital Ventures</strong>, explained, these capabilities allow organizations to manage risk <em>before</em> regulation requires it. 
They become de facto standards because they work &#8212; not because they&#8217;re mandated.</p><h3>Trust Is Earned Through Control, Not Promises</h3><p>Another recurring theme was that <strong>trust cannot be asserted</strong>.</p><p>In AI systems, trust is earned when:</p><ul><li><p>Behavior is observable.</p></li><li><p>Decisions can be inspected.</p></li><li><p>Failures are traceable.</p></li><li><p>Systems can be constrained.</p></li></ul><p>As <strong>Tiger Gao</strong>, Investor at <strong>Apax Digital</strong>, pointed out, customers don&#8217;t want assurances &#8212; they want mechanisms. Products that offer real control are adopted faster than those that simply claim safety.</p><h3>Compliance Will Be Solved Inside Products</h3><p>The panel also reframed compliance as a product problem.</p><p>Rather than external enforcement, compliance will increasingly be achieved through:</p><ul><li><p>Built-in controls.</p></li><li><p>Clear boundaries.</p></li><li><p>Configurable policies.</p></li><li><p>Embedded audit trails.</p></li></ul><p>As <strong>Zao Chen</strong>, Investor at <strong>Craft Ventures</strong>, noted, the most successful AI products treat compliance as an enabling feature &#8212; not an afterthought. 
When compliance is integrated, adoption accelerates instead of slowing.</p><h3>Tooling Creates De Facto Standards</h3><p>Over time, the panel expects governance norms to crystallize around what works in practice.</p><p>Tools that <strong>reduce risk</strong>, <strong>improve transparency</strong>, and <strong>support accountability</strong> will spread organically across companies, industries, and geographies.</p><p>These tools become standards not because they&#8217;re required, but because they&#8217;re indispensable.</p><h3>The Final Takeaway</h3><p>AI governance won&#8217;t arrive as a single policy moment.</p><p>It will emerge gradually, through:</p><ul><li><p>Observability layers.</p></li><li><p>Control systems.</p></li><li><p>Audit tooling.</p></li><li><p>Product-level constraints.</p></li></ul><p>Trust, safety, and compliance will be <strong>built into systems</strong>, not bolted on by regulators after the fact.</p><p>In the AI era, <strong>the companies that define governance will be the ones that operationalize it first</strong> &#8212; long before anyone tells them they have to.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Claude Opus 4.6 vs GPT-5.3 Codex: Which AI Coding Model Should You Use?]]></title><description><![CDATA[A practical comparison for real PRs; when to use Claude for building and Codex for review, refactors, and reliability.]]></description><link>https://labs.adaline.ai/p/claude-opus-46-vs-gpt-53-codex</link><guid isPermaLink="false">https://labs.adaline.ai/p/claude-opus-46-vs-gpt-53-codex</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 14 Feb 2026 01:00:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a3c087cd-d37c-4ea2-9781-468c65f67f62_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> This blog compares Claude Opus 4.6 and GPT 5.3 Codex in the only way that holds up in production. It treats them as different roles, not rivals. You will learn when to use Opus for architecture, deep context, and repo-wide refactors, and when to use Codex for terminal-driven iteration, bug fixes, and test writing. It explains the context tradeoff between large prompts and retrieval, the cost reality that changes defaults, and a hybrid workflow that plans with Opus, executes with Codex, then audits with Opus. You will leave with routing rules you can apply immediately. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sXIL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sXIL!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/187839197?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sXIL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!sXIL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c28fb94-0606-4e78-a994-19f6ddd66751_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p>Watching Peter Steinberger talk through Claude Opus 4.6 and GPT 5.3 Codex clarified why this comparison keeps producing disagreement. He describes Codex as the model that reads more by default and stays reliable even when it feels dry, while Opus can run ahead unless you push it into a planning posture. </p><p>He also ties modern coding to the command line and explains why terminal fluency matters once agents start running loops for you. 
That combination pushed me to research roles, not rankings, and to write a guide that routes work by scope and risk.</p><div id="youtube2-j190mwiVlwA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;j190mwiVlwA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/j190mwiVlwA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Claude Opus 4.6 vs GPT-5.3 Codex: Quick Summary </h2><p>On February 5, 2026, the AI coding landscape changed in a very specific way. Anthropic shipped <a href="https://www.anthropic.com/news/claude-opus-4-6?utm_source=chatgpt.com">Claude Opus 4.6</a>, and OpenAI shipped <a href="https://openai.com/index/introducing-gpt-5-3-codex/">GPT 5.3 Codex</a> on the same day. </p><p>The first reaction was confusion. Benchmarks pointed in one direction. Hands-on testing pointed to another. People were looking at the same two models and drawing different conclusions, which is a signal that the comparison is being framed incorrectly. </p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/gregisenberg/status/2019910072684458282&quot;,&quot;full_text&quot;:&quot;this was one of the biggest weeks in AI because claude opus 4.6 and gpt-5.3 codex dropped basically at the SAME time.\n\nthey solve the same problem in VERY different ways.\n\n- opus spins up agent teams and disappears for a while.\n- codex stays with you and ships ridiculously fast. 
&quot;,&quot;username&quot;:&quot;gregisenberg&quot;,&quot;name&quot;:&quot;GREG ISENBERG&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1577116785656139776/5mi0qgTz_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-06T23:04:24.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/l_twitter_play_button_rvaygk,w_88/khinzq91xl7wb3llzfpz&quot;,&quot;link_url&quot;:&quot;https://t.co/hWqJSY6rQh&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:94,&quot;retweet_count&quot;:87,&quot;like_count&quot;:760,&quot;impression_count&quot;:81487,&quot;expanded_url&quot;:null,&quot;video_url&quot;:&quot;https://video.twimg.com/amplify_video/2019907316938653697/vid/avc1/1280x720/xzlP-M0zMF-FksN3.mp4&quot;,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>This article uses a simple hiring lens so you can pick the right tool without arguing about winners. <strong>Claude Opus 4.6 behaves like a senior architect.</strong> It slows down, asks for more context, and spends tokens thinking before it commits to a plan. That deliberation often produces cleaner designs and fewer rewrites when the problem is structural. </p><p><strong>GPT 5.3 Codex behaves like a hyperproductive intern</strong>. It moves quickly, makes changes early, runs loops, and stays close to the terminal and the feedback cycle. It will break things, notice the break, and patch them in the next pass. </p><p>For a focused comparison of the coding agents specifically, see <a href="https://labs.adaline.ai/p/claude-code-vs-openai-codex">Claude Code vs OpenAI Codex</a>.</p><p><a href="https://x.com/gregisenberg/status/2019910072684458282?utm_source=chatgpt.com">Greg Isenberg</a> captured this as a split between reasoning and momentum. 
Once you see it that way, the question becomes which role you are hiring for on this task.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>What Claude Opus 4.6 Is Best For: Architecture &amp; Reasoning</h2><p>Claude Opus 4.6 is strongest when the task begins with uncertainty and ends with a coherent design. You see this when the codebase is large, the constraints are fuzzy, and <strong>the right answer depends on keeping many moving parts consistent across files</strong>. </p><p>Anthropic calls this adaptive thinking, a mode in which the model spends time reasoning before it writes. </p><p>That <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-6">deliberation</a> shows up as fewer wrong turns, fewer patch cycles, and fewer hidden contradictions later in the build. </p><p>The long context capability matters for the same reason. A large context window is not only about reading more text. It changes how the model constructs its mental representation of the repository. </p><p>Opus 4.6 supports 200K tokens, and a 1M token context window is available in beta on the <a href="https://www.anthropic.com/news/claude-opus-4-6">Claude Developer Platform</a>. 
With enough context, it can track relationships across modules, data flow assumptions, and naming conventions without constantly re-fetching or re-explaining them. </p><p>This is why Opus is a good fit for greenfield work that still has real complexity. </p><p>Think of an authentication system with roles, session rotation, and audit logging, or a 3D floor plan generator with a geometry pipeline and export formats. The model has to choose an architecture before it chooses syntax.</p><p><a href="https://medium.com/%40info.booststash/i-spent-48-hours-testing-claude-opus-4-6-gpt-5-3-codex-004adc046312">Alex Carter&#8217;s</a> 48-hour deep dive captured the same pattern in a concrete test. He reports that Opus produced a fully functional Kanban board with working drag-and-drop and clean state management on the first attempt, while Codex failed on authentication logic in the comparable build.</p><p>The tradeoff is <a href="https://www.anthropic.com/news/claude-opus-4-6">cost</a>. The deliberation phase consumes tokens, but it often buys you fewer bugs that only appear after you have shipped.</p><h2>What GPT-5.3 Codex Is Best For?</h2><p>If I were to answer that question in three words, it would be &#8220;The Speed Demon.&#8221;</p><p>GPT 5.3 Codex is strongest when the work has a tight feedback loop, and you want the loop to run without supervision. </p><p>It behaves more like an operator than a planner. You give it a concrete task, it tries something, it runs the command, it reads the error, then it tries again. That rhythm matters because a large share of day-to-day engineering is not design. 
</p><p>It is repeated <strong>compilation</strong>, <strong>failed tests</strong>, <strong>missing dependencies</strong>, and <strong>small fixes</strong> that only become obvious after you execute the code.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h-DN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h-DN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 424w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 848w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 1272w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h-DN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png" width="400" height="424.93506493506493" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:770,&quot;resizeWidth&quot;:400,&quot;bytes&quot;:46406,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/187839197?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!h-DN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 424w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 848w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 1272w, https://substackcdn.com/image/fetch/$s_!h-DN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca19b90-32c4-46f8-bbfb-26765f85a91e_770x818.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://openai.com/index/introducing-gpt-5-3-codex/">OpenAI</a></figcaption></figure></div><p><strong>Terminal Bench 2.0</strong> captures this bias toward command line competence. Codex scores 77.3 percent on that evaluation, while Claude Opus 4.6 scores around 65.4 percent in Anthropic&#8217;s reported results. Treat that as a sign about where Codex spends its effort. It is built to act inside terminal-shaped work, not only to write a plausible patch. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L8wU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L8wU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 424w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 848w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 1272w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L8wU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png" width="1456" height="417" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:417,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:199537,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/187839197?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!L8wU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 424w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 848w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 1272w, https://substackcdn.com/image/fetch/$s_!L8wU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b20e21-f5df-47ef-9014-6af30bcd9ef8_1894x542.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://www.anthropic.com/news/claude-opus-4-6">Anthropic</a></figcaption></figure></div><p>This creates a distinct momentum mode. </p><p>It feels like pair programming with someone who types much faster than you and keeps running the program while you are still reading the diff. </p><p>It will sometimes reach for a package or an import that is not in your stack, but the recovery is quick because it immediately hits the build, sees the failure, and corrects the attempt in the next pass.</p><p>That makes Codex a strong fit for brownfield work. Bug fixes, unit tests, small feature additions, and cleanup tasks reward speed over elegance. Claire Vo&#8217;s experiment is the clearest proof point. 
She reports shipping 44 pull requests in five days using these models, and her results show Codex behaving like the closer that turns loops into merged code. </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:187548554,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/claude-opus-46-vs-gpt-53-codex-how&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;Claude Opus 4.6 vs. GPT-5.3 Codex: How I shipped 93,000 lines of code in 5 days&quot;,&quot;truncated_body_text&quot;:null,&quot;date&quot;:&quot;2026-02-11T13:02:52.568Z&quot;,&quot;like_count&quot;:8,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:5636738,&quot;name&quot;:&quot;Claire Vo&quot;,&quot;handle&quot;:&quot;clairevo&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9F1P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fca382ecd-862b-433d-bf35-b5a7d9dceeeb_400x400.jpeg&quot;,&quot;bio&quot;:&quot;&#128105;&#8205;&#128102;&#8205;&#128102; mama &#128187; chief product &amp; eng officer @color &#8226; prev @optimizely &#129504; pm, leadership &amp; startup life &#128525; @elawless &#128241; 
http://tiktok.com/@chiefproductofficer&quot;,&quot;profile_set_up_at&quot;:&quot;2023-03-13T01:51:07.663Z&quot;,&quot;reader_installed_at&quot;:null,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[1459978,10845],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:4280169,&quot;primaryPublicationName&quot;:&quot;Claire&#8217;s Substack&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://clairevo.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://clairevo.substack.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;podcast&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/claude-opus-46-vs-gpt-53-codex-how?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title-icon"><svg width="19" height="19" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
  <path d="M3 18V12C3 9.61305 3.94821 7.32387 5.63604 5.63604C7.32387 3.94821 9.61305 3 12 3C14.3869 3 16.6761 3.94821 18.364 5.63604C20.0518 7.32387 21 9.61305 21 12V18" stroke-linecap="round" stroke-linejoin="round"></path>
  <path d="M21 19C21 19.5304 20.7893 20.0391 20.4142 20.4142C20.0391 20.7893 19.5304 21 19 21H18C17.4696 21 16.9609 20.7893 16.5858 20.4142C16.2107 20.0391 16 19.5304 16 19V16C16 15.4696 16.2107 14.9609 16.5858 14.5858C16.9609 14.2107 17.4696 14 18 14H21V19ZM3 19C3 19.5304 3.21071 20.0391 3.58579 20.4142C3.96086 20.7893 4.46957 21 5 21H6C6.53043 21 7.03914 20.7893 7.41421 20.4142C7.78929 20.0391 8 19.5304 8 19V16C8 15.4696 7.78929 14.9609 7.41421 14.5858C7.03914 14.2107 6.53043 14 6 14H3V19Z" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><div class="embedded-post-title">Claude Opus 4.6 vs. GPT-5.3 Codex: How I shipped 93,000 lines of code in 5 days</div></div><div class="embedded-post-cta-wrapper"><div class="embedded-post-cta-icon"><svg width="32" height="32" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
  <path classname="inner-triangle" d="M10 8L16 12L10 16V8Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><span class="embedded-post-cta">Listen now</span></div><div class="embedded-post-meta">2 months ago &#183; 8 likes &#183; Claire Vo</div></a></div><h2>The Context Battle: 1M Tokens vs. Repo-RAG</h2><p>Claude Opus 4.6 and GPT 5.3 Codex can look similar on the surface because both can edit a repository and both can produce working code. <strong>The difference is how each model forms knowledge about your codebase</strong>.</p><p><strong>Opus leans on sheer context capacity.</strong> </p><p>Opus 4.6 supports very large prompts, with 200K tokens as the standard limit and a 1M token context window available in beta on the Claude Developer Platform. </p><p>When you load large slices of the repo, the model can carry a more continuous mental model across modules, conventions, and edge cases. That is valuable during major refactors because the risk is not writing code. <strong>The risk is breaking an assumption that lives in a different folder</strong>. Migration work like moving an app from React to Svelte is full of those buried assumptions.</p><p><strong>Codex often reaches similar outcomes through retrieval</strong>. </p><p>Instead of holding the whole codebase in the prompt, it pulls the most relevant files and focuses effort there. This is faster and cheaper when the problem is local, but it can miss cross-file invariants because it only sees what it retrieved. The model edits the correct file, yet the change may conflict with a pattern set elsewhere. </p><blockquote><p>Use a simple rule. When a rename or refactor touches dozens of files, use Opus. 
When a fix lives in a single function within a single file, use Codex.</p></blockquote><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/claude-opus-46-vs-gpt-53-codex?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/claude-opus-46-vs-gpt-53-codex?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/claude-opus-46-vs-gpt-53-codex?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>Pricing &amp; Economics: The $28 vs $0.12 Reality</h2><p>Economics changes the decision faster than benchmarks. </p><p>You can admire Opus 4.6 for its deliberation and still choose not to run it on every small question. The model price is not a rounding error. <a href="https://www.anthropic.com/news/claude-opus-4-6">Anthropic</a> lists Opus 4.6 at 5 dollars per million input tokens and 25 dollars per million output tokens, so long outputs and multi-pass reasoning can add up quickly. </p><p>A recent thread on r/ClaudeAI made the gap concrete. A user named DutchesForKaioSama described a complex task that came out to 28.70 dollars on Opus, while a similar outcome cost 0.12 dollars on Codex. 
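</p><p>To make that gap concrete in your own budget, here is a minimal cost sketch using the Opus 4.6 list prices quoted above (5 dollars per million input tokens, 25 per million output). The token counts in the example are illustrative assumptions, not figures from the thread.</p>

```python
# Per-request cost at the quoted Opus 4.6 list prices:
# $5 per million input tokens, $25 per million output tokens.
OPUS_INPUT_PER_M = 5.00
OPUS_OUTPUT_PER_M = 25.00

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = OPUS_INPUT_PER_M,
                 out_rate: float = OPUS_OUTPUT_PER_M) -> float:
    """Dollar cost of one model call at the given per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Illustrative pass: 200K tokens of repo context in, a 30K-token plan out.
# That is $1.00 of input plus $0.75 of output, so $1.75 for a single pass.
single_pass = request_cost(200_000, 30_000)
```

<p>Multi-pass reasoning multiplies that figure, which is the intuition behind reserving Opus for work where one careful pass replaces many cheap ones.</p><p>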
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Xph6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Xph6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 424w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 848w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 1272w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Xph6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png" width="1456" height="1415" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1415,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:424389,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/187839197?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Xph6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 424w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 848w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 1272w, https://substackcdn.com/image/fetch/$s_!Xph6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec7ad1c-3a4f-4f13-bd37-284c723be4b0_1498x1456.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://www.reddit.com/r/ClaudeAI/comments/1r04x3x/observations_from_using_gpt53_codex_and_claude/">Reddit</a></figcaption></figure></div><p>Even if you treat those numbers as anecdotal, the ratio is the point. When you pay for deliberation, you pay for tokens and for time spent thinking. </p><p>This is why Opus is a poor default for casual chat. Use it like a contractor. </p><p>Bring it in when the task has <strong>architectural risk</strong>, <strong>repo-wide consequences</strong>, or <strong>requirements you cannot afford to get wrong</strong>. Keep it out of simple syntax questions, quick formatting, and routine unit test boilerplate.</p><p>Codex fits the always-on role because iteration is cheap. Let it run the loops. 
Save Opus for the moments where a careful plan prevents a week of cleanup.</p><h2>The "Hybrid" Workflow: Manager &amp; Intern</h2><p>A clean way to use both models is to treat them as two roles in the same engineering loop. </p><ul><li><p>One role produces a careful plan that reduces architectural risk. </p></li><li><p>The other role turns that plan into diffs and runs the feedback cycle until the work is shippable.</p></li></ul><p><strong>Start with Opus 4.6 for planning</strong>. </p><p>Give it the requirements, the constraints, and the acceptance criteria. Ask for a short spec, interface definitions, and an implementation plan that is broken into steps you can execute one at a time. </p><p>Opus is good at this because it enters a deliberate reasoning phase and maintains more global constraints throughout the design. You are paying for that deliberation, so use it where it changes the shape of the work. </p><p><strong>Move to Codex for execution</strong>. </p><p>Paste the plan into Codex and constrain it to one step. Tell it to implement <strong>step one</strong>, <strong>run tests</strong>, <strong>fix failures</strong>, then <strong>stop</strong> and <strong>report</strong>. </p><p>Codex is designed for tool-using loops and fast iteration, so it is a strong fit for writing the code, running commands, and grinding through the errors without constant supervision. </p><p>Bring Opus back for review. Paste the final diff and ask for a logic and security audit. Focus it on auth flows, input validation, permission checks, and failure states. This is where a slower model can catch mismatched assumptions and corner cases.</p><p><a href="https://www.lennysnewsletter.com/p/claude-opus-46-vs-gpt-53-codex-how">Claire Vo</a> describes using different models at different stages of the pull request lifecycle to maximize return on spend, and this workflow turns that idea into a repeatable routine you can adopt immediately. 
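</p><p>The plan, execute, review loop can be sketched as a small scheduler. This is hypothetical scaffolding rather than any official API: the model labels and the file-count threshold are assumptions for illustration, and only the routing rule itself comes from the guidance above.</p>

```python
def route(files_touched: int, architectural_risk: bool) -> str:
    """Rule of thumb: repo-wide or design-heavy work goes to the deliberate
    model; tight-loop local fixes go to the fast one. The 12-file threshold
    is an arbitrary illustration, not a published number."""
    if architectural_risk or files_touched > 12:
        return "opus-4.6"       # plan, refactor, review
    return "gpt-5.3-codex"      # implement, run tests, fix, repeat

def plan_execute_review(steps: list[str]) -> list[tuple[str, str]]:
    """Assign each stage of the hybrid loop to a role: planning and the
    final diff review go to the architect, each build step to the intern."""
    schedule = [("plan", "opus-4.6")]
    schedule += [(f"implement: {s}", "gpt-5.3-codex") for s in steps]
    schedule.append(("review diff", "opus-4.6"))
    return schedule
```

<p>Feeding each implement step to Codex one at a time, with a stop-and-report instruction, keeps the loop auditable while planning and review stay with the slower model.</p><p>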
</p><h2>Decision Matrix &amp; Conclusion</h2><p>Use this decision matrix when you want a fast answer without rethinking the tradeoffs.</p><ul><li><p>Complex Logic and New App: Use Opus 4.6</p></li><li><p>Bug Fixing and Terminal Ops: Use Codex 5.3</p></li><li><p>Refactoring Legacy Code: Use Opus 4.6</p></li><li><p>Writing Tests: Use Codex 5.3</p></li></ul><blockquote><p><strong>Note this</strong>: You are not choosing a winner. You are choosing a role. </p></blockquote><p>Opus is the call when the work needs a stable design, and one correct pass matters more than speed. </p><p>Codex is the call when the work is a loop and the fastest path is to run commands, fix failures, and repeat until green. </p><p>The one model strategy is not how teams will work in 2026. The winning setup is a router that assigns work to the right model based on risk, scope, and iteration cost. </p><p>Engineers who ship consistently do not take sides. They pick a roster.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Shipping Fast And Iterating At AI-Speed | Takeaways For Founders And Product Leaders]]></title><description><![CDATA[Ship fast in AI by learning faster: define &#8220;good,&#8221; dogfood, stay close to users, and prevent regressions with evals.]]></description><link>https://labs.adaline.ai/p/shipping-fast-and-iterating-at-ai</link><guid isPermaLink="false">https://labs.adaline.ai/p/shipping-fast-and-iterating-at-ai</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 11 Feb 2026 13:50:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/83a38e97-f3e5-4f04-8b0d-2806de8aa492_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: Shipping Fast and Iterating at AI Speed explores why traditional startup speed advice fails in AI development. The blog argues that real AI speed isn't about moving faster than competitors, but about learning velocity&#8212;understanding what "good" looks like and adapting quickly. It covers how short-term velocity destroys long-term progress through technical debt, why correctness is subjective in AI products, and how sustainable speed requires informed restraint, clear ownership, and reversible decisions. Readers will learn concrete principles from industry leaders on building feedback loops, maintaining team confidence through transparency, and designing systems flexible enough to survive the AI ecosystem's rapid changes. 
The key insight: the fastest teams avoid premature bets and focus on preserving optionality while maintaining strong signals about what matters.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FLzW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FLzW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png" width="1456" height="546" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:337343,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/184655353?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FLzW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!FLzW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e89380-7747-43f4-9203-1ebe11ee863c_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Introduction</h1><div id="youtube2-MgV6uP1qeSo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;MgV6uP1qeSo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/MgV6uP1qeSo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Founder Intro: Shipping Fast and Iterating at AI Speed</h2><p>&#8220;Move fast&#8221; has always been a startup mantra. In AI, that advice has become <strong>dangerously ambiguous</strong>.</p><p>Teams ship more often than ever. Demos come together in days. 
Iteration feels constant. And yet, many companies still feel stuck, slowed not by lack of activity but by a <strong>lack of clarity</strong>.</p><p>Panel 4 was designed to unpack that tension. Rather than asking, &#8220;How do we ship faster?&#8221;, we wanted to ask a more precise question: <strong>What does speed actually mean when you&#8217;re building AI products</strong>, and <strong>what quietly destroys it over time</strong>?</p><p>To explore that, we brought together operators who are shipping at the edge of what&#8217;s possible, across very different contexts:</p><ul><li><p><strong><a href="https://www.linkedin.com/in/dakshg/">Daksh Gupta</a></strong>, Co-founder and CEO at <em>Greptile</em>, is building AI systems where <strong>correctness</strong> and <strong>iteration speed</strong> must coexist.</p></li><li><p><strong><a href="https://www.linkedin.com/in/evanaowen/">Evan Owen</a></strong>, Co-founder and CEO at <em>Glue</em>, is navigating fast-moving AI workflows where <strong>trust</strong> and <strong>learning loops</strong> matter more than raw throughput.</p></li><li><p><strong><a href="https://www.linkedin.com/in/rayjbjang/">Ray Jang</a></strong>, Co-founder and CEO at <em>Atria</em>, operates at the intersection of <strong>automation</strong>, <strong>experimentation</strong>, and <strong>reliability</strong>.</p></li><li><p><strong><a href="https://www.linkedin.com/in/helloyenhere/">Yen Tan</a></strong>, Product Manager at <em>15Five</em>, is bringing a <strong>product and user-centered lens</strong> to shipping in <strong>high-trust environments</strong>.</p></li></ul><p>What emerged was a clear reframing of AI speed. This panel wasn&#8217;t about <strong>shipping more features</strong> or <strong>chasing every new model release</strong>. 
It was about <strong>learning velocity</strong>: how quickly teams understand <em>what good looks like</em>, detect when something <em>feels off</em>, and correct course <strong>without eroding trust</strong>.</p><p>Across the conversation, a consistent theme surfaced:</p><blockquote><p>&#8220;<strong>Speed without direction is just noise.</strong>&#8221;<br>&#8220;<strong>Sustainable speed</strong> comes from <em>tight feedback loops</em>, <strong>informed restraint</strong>, and <em>organizations designed to learn</em>.&#8221;</p></blockquote><p>The sections that follow break down what that looks like in practice, from why <strong>dogfooding beats dashboards early</strong>, to how <strong>feature flags enable safe aggression</strong>, to why <strong>trust behaves like a finite resource</strong>.</p><p>If you&#8217;re building with AI and feel like you&#8217;re moving fast but not forward, this panel offers a grounded perspective on what <strong>real velocity</strong> actually requires.</p><div><hr></div><h2>1. AI Speed Is About Learning What &#8220;Good&#8221; Looks Like</h2><p>The panel opened by dismantling a common misconception: <strong>AI speed does not simply mean shipping faster</strong>.</p><p>Shipping faster is easy. 
<strong>Learning faster is hard</strong>.</p><p>What separates teams that actually move quickly from those that just move <em>often</em> is how fast they develop a <strong>shared understanding of quality</strong>.</p><h3>Speed Comes From Signal, Not Velocity</h3><p>Across the discussion, speakers converged on a more precise definition of AI speed.</p><p><strong>AI speed is defined by:</strong></p><ul><li><p><strong>How quickly</strong> teams learn what &#8220;good&#8221; outputs look like.</p></li><li><p><strong>How fast</strong>&nbsp;they can tell when something feels&nbsp;<em>off</em>.</p></li><li><p><strong>How early</strong>&nbsp;they can course-correct without&nbsp;<strong>breaking trust</strong>.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder and CEO at <em>Greptile</em>, emphasized, most AI teams don&#8217;t slow down because they ship too little. They slow down because they don&#8217;t know <em>what to aim for</em>.</p><p>Without a clear target, <strong>iteration becomes noise</strong>.</p><h3>Correctness Is Often Ambiguous in AI Products</h3><p>In traditional software, correctness is binary. Something works, or it doesn&#8217;t.</p><p>In AI products, correctness is often <strong>subjective</strong>.</p><p>As <strong>Yen Tan</strong>, Product Manager at <em>15Five</em>, described, this ambiguity shows up most clearly in:</p><ul><li><p>Creative workflows,</p></li><li><p>Generative systems,</p></li><li><p>Judgment-based tasks,</p></li><li><p>Assistive experiences.</p></li></ul><p>Outputs can be <em>plausible</em> without being <em>good</em>. They can be technically correct, but emotionally wrong. 
They can pass automated checks, and still fail user expectations.</p><p>This makes iteration <strong>fundamentally harder</strong>.</p><h3>Without Quality Signals, Teams Thrash</h3><p>Several speakers described a familiar failure mode:</p><ul><li><p>Teams ship quickly,</p></li><li><p>Outputs look reasonable,</p></li><li><p>Feedback is vague,</p></li><li><p>Iteration continues blindly.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder and CEO at <em>Atria</em>, noted, without fast, reliable signals on quality, teams end up oscillating&#8212;changing prompts, models, or workflows without knowing whether they&#8217;re actually improving anything.</p><p>The result is <strong>activity without progress</strong>.</p><h3>&#8220;Feels Off&#8221; Is an Important Signal</h3><p>One of the more subtle insights from the panel was the importance of <em>intuition</em> early on.</p><p>As <strong>Evan Owen</strong>, Co-founder and CEO at <em>Glue</em>, explained, experienced teams learn to trust early discomfort. When outputs feel off&#8212;even if they technically pass&#8212;that&#8217;s often the first indicator that assumptions are wrong, or constraints are missing.</p><p>Teams that move fast don&#8217;t ignore that signal. 
<strong>They investigate it immediately</strong>.</p><p>Speed comes from shortening the gap between:</p><ul><li><p>Noticing something feels wrong,</p></li><li><p>Understanding why,</p></li><li><p>Fixing the underlying cause.</p></li></ul><h3>Directional Clarity Beats Raw Throughput</h3><p>The panel repeatedly returned to the idea that <strong>speed without direction is wasted motion</strong>.</p><p>AI makes it easy to:</p><ul><li><p>Generate more outputs,</p></li><li><p>Try more variations,</p></li><li><p>Explore more options.</p></li></ul><p>But without a shared definition of &#8220;good,&#8221; those options don&#8217;t converge.</p><p>As one speaker summarized:</p><blockquote><p>The fastest teams aren&#8217;t the ones shipping the most changes,<br>They&#8217;re the ones <strong>learning what to keep</strong>.</p></blockquote><h3>The Practical Takeaway</h3><p>AI speed isn&#8217;t about how fast you deploy. It&#8217;s about <strong>how fast you learn</strong>.</p><p>Teams that truly move quickly:</p><ul><li><p><strong>Define quality</strong> early,</p></li><li><p>Develop strong instincts for <strong>&#8220;wrong&#8221;</strong>,</p></li><li><p>Create <strong>tight feedback loops</strong>,</p></li><li><p>Correct course <strong>before problems compound</strong>.</p></li></ul><blockquote><p>In AI products, <strong>learning velocity beats shipping velocity</strong>.<br>Speed without clarity feels productive&#8212;until it isn&#8217;t.</p></blockquote><h2>2. 
Short-Term Velocity Can Destroy Long-Term Velocity</h2><p>One of the most consistent warnings across the panel was a counterintuitive one:</p><blockquote><p><strong>The fastest way to slow down permanently is to optimize too aggressively for short-term speed.</strong></p></blockquote><p>In an ecosystem that rewards quick demos and rapid iteration, this is an easy trap to fall into, and a hard one to escape.</p><h3>Early Momentum Often Comes From Fragile Choices</h3><p>Several speakers described how teams often gain early momentum by making expedient decisions:</p><ul><li><p>Choosing frameworks optimized for speed over control.</p></li><li><p>Hardcoding integrations instead of designing interfaces.</p></li><li><p>Building around temporary standards.</p></li><li><p>Overfitting workflows to current model capabilities.</p></li></ul><p>These choices feel rational in the moment. They produce visible progress. They reduce upfront friction.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <em>Greptile</em>, explained, many of these decisions aren&#8217;t mistakes. They&#8217;re <em>unexamined commitments</em> that accumulate quietly.</p><h3>The Hidden Cost of Expedience</h3><p>What looks like speed early often shows up later as a constraint.</p><p>As products mature, those early shortcuts create:</p><ul><li><p>Architectural lock-in.</p></li><li><p>Brittle abstractions.</p></li><li><p>Painful migrations.</p></li><li><p>Slow, risky changes.</p></li><li><p>Fear of touching core systems.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <em>Atria</em>, noted, teams often don&#8217;t realize they&#8217;ve slowed down until they&#8217;re already stuck. Every change requires workarounds. Every improvement risks regression. 
Momentum evaporates.</p><p>The system becomes fast to <em>run</em>, but slow to <em>change</em>.</p><h3>AI Ecosystems Shift Faster Than Architecture</h3><p>This problem is amplified in AI because the ecosystem itself is moving so quickly:</p><ul><li><p>Models evolve.</p></li><li><p>APIs change.</p></li><li><p>Best practices shift.</p></li><li><p>Capabilities that felt stable six months ago suddenly aren&#8217;t.</p></li></ul><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <em>Glue</em>, pointed out, decisions that assume today&#8217;s model behavior will persist are especially dangerous. Overfitting to current capabilities may unlock speed now, but it creates fragility later, precisely when adaptation matters most.</p><h3>Overfitting Is a Form of Technical Debt</h3><p>The panel reframed overfitting in a broader sense.</p><p>It&#8217;s not just about data or prompts. It&#8217;s about <strong>designing systems that only work under narrow conditions</strong>.</p><p>Overfit systems:</p><ul><li><p>Assume specific output formats.</p></li><li><p>Rely on implicit model behavior.</p></li><li><p>Break when context windows change.</p></li><li><p>Fail when reasoning patterns shift.</p></li></ul><p>Each assumption tightens the system&#8217;s tolerance for change.</p><h3>True Speed Requires Optionality</h3><p>The teams that sustained velocity over time shared one trait: <strong>informed restraint</strong>.</p><p>They:</p><ul><li><p>Moved fast where reversibility was high.</p></li><li><p>Slowed down where decisions were expensive to undo.</p></li><li><p>Avoided locking in assumptions prematurely.</p></li><li><p>Designed interfaces, not shortcuts.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <em>15Five</em>, emphasized, speed isn&#8217;t just about shipping. 
It&#8217;s about preserving the ability to change direction without breaking everything.</p><h3>Velocity Is a Function of Confidence</h3><p>Another subtle insight from the panel was that teams slow down not just because systems are brittle, but because <strong>people lose confidence</strong>.</p><p>When:</p><ul><li><p>Changes feel risky,</p></li><li><p>Behavior is hard to predict,</p></li><li><p>Regressions are costly,</p></li></ul><p>Teams hesitate. Reviews drag on. Releases slow. Innovation stalls.</p><p>Short-term speed that undermines confidence eventually kills momentum.</p><h3>The Practical Takeaway</h3><p>AI speed isn&#8217;t about maximizing short-term output.</p><p>It&#8217;s about:</p><ul><li><p>Making reversible decisions quickly.</p></li><li><p>Deferring irreversible ones thoughtfully.</p></li><li><p>Preserving optionality.</p></li><li><p>Designing for change, not permanence.</p></li></ul><p>True AI speed requires restraint: not caution, but judgment.</p><p>Move fast.</p><p>Just don&#8217;t move fast <em>into a corner</em>.</p><h2>3. 
Long-Term Speed Requires Informed Foresight</h2><p>Several speakers emphasized that sustaining velocity over time requires more than execution discipline.</p><p>It requires <strong>informed foresight</strong>.</p><p>Not clairvoyance.</p><p>Not a perfect prediction.</p><p>But the ability to make educated bets about where the ecosystem is heading, and where it isn&#8217;t.</p><h3>Speed Over Time Is About Betting on What Endures</h3><p>In fast-moving AI environments, it&#8217;s tempting to treat everything as temporary.</p><p>Frameworks change.</p><p>Models improve.</p><p>Tooling evolves monthly.</p><p>&#8220;Best practices&#8221; have short half-lives.</p><p>But as the panel made clear, <strong>some things do last longer than others</strong>, and knowing the difference is what separates teams that compound velocity from those that reset every six months.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, noted, long-term speed comes from investing in abstractions that survive churn, even when the layers above them change.</p><h3>The Cost of Dead-End Bets</h3><p>Several speakers shared examples of teams that moved quickly, but in the wrong direction.</p><p>These teams:</p><ul><li><p>Adopted tooling that couldn&#8217;t evolve.</p></li><li><p>Built on standards that never stabilized.</p></li><li><p>Committed deeply to APIs that were clearly transitional.</p></li><li><p>Optimized for the current model generation.</p></li></ul><p>Each decision felt reasonable at the time.</p><p>Together, they created dead ends.</p><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, explained, the problem isn&#8217;t making bets, it&#8217;s making bets without understanding their reversibility. Dead-end bets don&#8217;t just slow teams down. They force rewrites.</p><h3>AI Makes Reactivity Expensive</h3><p>Because the AI landscape changes so quickly, many teams default to being reactive.</p><p>New model? 
Switch immediately.</p><p>New framework? Rewrite.</p><p>New technique? Adopt everywhere.</p><p>The panel warned that this behavior creates motion, but not progress.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, put it, reactive teams feel fast until they realize they&#8217;re constantly rebuilding the same system with slightly different parts.</p><p>Speed becomes cyclical instead of compounding.</p><h3>Selective Proactivity Is the Real Advantage</h3><p>The fastest teams described on the panel weren&#8217;t chasing every change.</p><p>They were <strong>selectively proactive</strong>.</p><p>They:</p><ul><li><p>Tracked where standards were converging.</p></li><li><p>Waited for signal before committing.</p></li><li><p>Designed internal interfaces to absorb change.</p></li><li><p>Insulated core logic from external volatility.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, emphasized, foresight isn&#8217;t about predicting the future, it&#8217;s about <em>limiting the blast radius</em> when the future arrives.</p><h3>Understanding Direction Beats Knowing Timing</h3><p>Another important reframe from the panel was that timing matters less than direction.</p><p>You don&#8217;t need to know <em>when</em> a standard will win.</p><p>You need to know <em>whether</em> it&#8217;s likely to matter.</p><p>Teams that understood direction:</p><ul><li><p>Avoided one-off integrations.</p></li><li><p>Favored open interfaces.</p></li><li><p>Resisted premature optimization.</p></li><li><p>Chose boring, stable layers where possible.</p></li></ul><p>That restraint allowed them to move faster later, when clarity emerged.</p><h3>Foresight Is a Team Skill</h3><p>Importantly, foresight wasn&#8217;t described as a founder superpower.</p><p>It was treated as an organizational capability.</p><p>Teams built foresight by:</p><ul><li><p>Discussing ecosystem trends openly.</p></li><li><p>Revisiting architectural 
assumptions regularly.</p></li><li><p>Questioning &#8220;why this now?&#8221;.</p></li><li><p>Rewarding reversibility over cleverness.</p></li></ul><p>Over time, this created shared intuition and faster decision-making.</p><h3>The Practical Takeaway</h3><p>Long-term AI speed isn&#8217;t about reacting faster than everyone else.</p><p>It&#8217;s about:</p><ul><li><p>Understanding where the ecosystem is heading.</p></li><li><p>Avoiding bets that trap you.</p></li><li><p>Investing in abstractions that outlast hype.</p></li><li><p>Moving early <em>only when it matters</em>.</p></li></ul><p>The fastest teams don&#8217;t chase change.</p><p>They position themselves so change can&#8217;t knock them off balance.</p><h2>4. Dogfooding Is the Highest-Leverage Evaluation Mechanism</h2><p>One of the most practical insights from the panel was also one of the simplest:</p><p><strong>The best early evaluation system is lived experience.</strong></p><p>Before metrics.<br>Before dashboards.<br>Before formal eval frameworks.</p><p>Teams need to <em>feel</em> their product.</p><h3>Formal Evals Come Too Late for Early Learning</h3><p>Several speakers cautioned against jumping too quickly into formal evaluation systems.</p><p>Evals are powerful, but only once teams already understand:</p><ul><li><p>What good looks like.</p></li><li><p>Which failures matter.</p></li><li><p>Where nuance lives.</p></li></ul><p>Before that understanding exists, evals tend to encode guesses rather than truth.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, emphasized, premature evals often give teams false confidence. 
They pass checks while the product quietly degrades in ways the metrics don&#8217;t capture.</p><h3>&#8220;This Feels Wrong&#8221; Is a Real Signal</h3><p>A recurring phrase on the panel was some version of:</p><blockquote><p><em>&#8220;This feels wrong.&#8221;</em></p></blockquote><p>That instinct, especially from domain experts, surfaced again and again as an early warning signal.</p><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, explained, when people who understand the problem deeply start to hesitate, something important is usually off. The issue might not be obvious. It might not be measurable yet. But ignoring that signal almost always leads to larger failures later.</p><p>Early intuition isn&#8217;t noise.<br>It&#8217;s compressed experience.</p><h3>Dogfooding Exposes What Metrics Miss</h3><p>Dogfooding forces teams to confront the product as it actually behaves, not how they hope it behaves.</p><p>When teams use their own product daily:</p><ul><li><p>Subtle regressions surface.</p></li><li><p>Quality decay becomes obvious.</p></li><li><p>Friction accumulates visibly.</p></li><li><p>Edge cases repeat.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, noted, dashboards rarely capture the emotional texture of a product. Dogfooding does.</p><p>You notice when:</p><ul><li><p>Outputs start to feel generic.</p></li><li><p>Responses drift off tone.</p></li><li><p>Latency becomes irritating.</p></li><li><p>Trust erodes slightly but consistently.</p></li></ul><p>These are the signals that matter most early.</p><h3>Shared Intuition Accelerates Teams</h3><p>Another benefit the panel highlighted was <em>alignment</em>.</p><p>Dogfooding builds:</p><ul><li><p>Shared intuition across engineering, product, and GTM.</p></li><li><p>Common language for quality.</p></li><li><p>Faster decision-making.</p></li></ul><p>When everyone has felt the pain personally, debates get shorter. 
Teams don&#8217;t argue abstractly about metrics; they argue from experience.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, put it, teams that dogfood aggressively don&#8217;t need long spec documents to explain why something needs fixing. Everyone already knows.</p><h3>When Formal Evals Actually Help</h3><p>The panel wasn&#8217;t dismissive of formal evaluation, just precise about timing.</p><p>Formal evals work best when:</p><ul><li><p>Intuition is already strong.</p></li><li><p>Failure modes are known.</p></li><li><p>Quality criteria are shared.</p></li><li><p>The team agrees on tradeoffs.</p></li></ul><p>At that point, evals scale understanding.<br>Before that point, they obscure it.</p><h3>The Practical Takeaway</h3><p>Dogfooding isn&#8217;t a culture perk.<br>It&#8217;s an evaluation strategy.</p><p>The teams that move fastest:</p><ul><li><p>Live inside their product.</p></li><li><p>Trust early discomfort.</p></li><li><p>Use intuition to guide iteration.</p></li><li><p>Add formal evals once meaning exists.</p></li></ul><p>In AI products, <strong>you can&#8217;t measure what you don&#8217;t yet understand</strong>.</p><blockquote><p>Understanding comes first.<br>Automation follows.</p></blockquote><h2>5. 
Evals Prevent Regression &#8212; They Don&#8217;t Create Insight</h2><p>The panel was clear, and notably aligned, on one point:</p><p><strong>Evals are often introduced too early.</strong></p><p>Not because evals are bad, but because teams frequently expect them to do the wrong job.</p><h3>What Evals Are Actually Good At</h3><p>When used correctly, evals are extremely effective.</p><p>They:</p><ul><li><p>Prevent systems from getting worse.</p></li><li><p>Enforce known baselines.</p></li><li><p>Catch regressions early.</p></li><li><p>Scale judgment once patterns are understood.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, described, evals are invaluable once a team already knows what quality looks like. At that point, they act as guardrails, ensuring progress doesn&#8217;t slip backward as systems evolve.</p><p>But guardrails don&#8217;t decide where you&#8217;re going.</p><h3>The Risk of Introducing Evals Too Early</h3><p>Several speakers warned that early-stage AI teams often reach for evals before they&#8217;ve earned them.</p><p>When evals are introduced prematurely, they tend to:</p><ul><li><p>Cap quality too early.</p></li><li><p>Freeze incomplete assumptions.</p></li><li><p>Obscure creative exploration.</p></li><li><p>Incentivize optimization against the wrong signals.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, noted, early evals often reflect what teams <em>think</em> matters, not what actually does. 
Once encoded, those assumptions quietly shape every future decision.</p><p>What feels like rigor becomes constraint.</p><h3>Insight Comes From Humans, Not Metrics</h3><p>A recurring theme across the panel was that <strong>insight emerges from exposure</strong>, not automation.</p><p>Early-stage AI products benefit far more from:</p><ul><li><p>Human review of outputs.</p></li><li><p>Direct customer conversations.</p></li><li><p>Qualitative feedback.</p></li><li><p>Rapid iteration driven by intuition.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, explained, insight requires context. It requires understanding why something feels wrong, not just that it failed a check. That depth simply can&#8217;t be automated early on.</p><p>Metrics without understanding are misleading.</p><h3>Evals Encode Assumptions &#8212; Whether You Want Them To or Not</h3><p>One of the most important cautions from the panel was that evals always encode values.</p><p>They define:</p><ul><li><p>What &#8220;good&#8221; means.</p></li><li><p>Which failures matter.</p></li><li><p>Which tradeoffs are acceptable.</p></li></ul><p>When those definitions are immature, evals lock teams into a narrow view of quality.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, put it, once an eval exists, teams naturally optimize for it, even if it no longer reflects reality. Exploration slows. Creativity narrows. 
Learning stalls.</p><h3>Guardrails, Not Steering Wheels</h3><p>This led to one of the clearest metaphors of the panel:</p><blockquote><p><strong>Evals are guardrails, not steering wheels.</strong></p></blockquote><p>They prevent disaster.<br>They don&#8217;t choose a direction.</p><p>Teams that try to steer with evals early often end up driving confidently in the wrong direction.</p><h3>The Practical Takeaway</h3><p>The fastest AI teams sequence evaluation deliberately.</p><p>They:</p><ol><li><p><strong>Learn through humans first.</strong></p></li><li><p><strong>Build intuition around quality.</strong></p></li><li><p><strong>Identify stable patterns.</strong></p></li><li><p><strong>Then encode those patterns into evals.</strong></p></li></ol><p>Used this way, evals accelerate progress without freezing it.</p><p>In AI products, <strong>understanding precedes automation</strong>.</p><p>If you automate judgment before you&#8217;ve developed it, you don&#8217;t move faster, you just lock in ignorance.</p><h2>6. Teams That Ship Fast Collapse Distance Between Thinking &amp; Doing</h2><p>A recurring operational insight from the panel was deceptively simple:</p><p><strong>Communication is lossy &#8212; especially in fast-moving environments.</strong></p><p>Every handoff introduces delay. Every translation risks distortion. Every layer adds friction.</p><p>The teams that ship fastest aren&#8217;t necessarily working harder. 
They&#8217;re working <strong>with less distance between thinking and doing</strong>.</p><h3>Speed Comes From Collapsing the Loop</h3><p>Across examples, the panel highlighted the same pattern:</p><p>Teams maximize velocity when:</p><ul><li><p>The same person designs, builds, ships, and iterates.</p></li><li><p>Ownership spans the full lifecycle of a feature.</p></li><li><p>Feedback flows directly to the builder.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, emphasized, this collapse of roles doesn&#8217;t eliminate rigor &#8212; it eliminates delay. Decisions happen where context already lives.</p><h3>Handoffs Are Hidden Taxes</h3><p>In theory, specialization increases efficiency. In practice, handoffs impose invisible costs.</p><p>Each handoff requires:</p><ul><li><p>Re-explaining intent.</p></li><li><p>Re-establishing context.</p></li><li><p>Re-interpreting feedback.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, noted, even perfect documentation can&#8217;t fully transmit intuition. What gets lost isn&#8217;t just information &#8212; it&#8217;s judgment.</p><p>In AI products, where quality is often subjective and evolving, that loss is especially expensive.</p><h3>Feedback Is Only Useful If It&#8217;s Immediate</h3><p>Another theme that emerged was the importance of <em>feedback proximity</em>.</p><p>When feedback:</p><ul><li><p>Reaches the builder quickly.</p></li><li><p>Arrives unfiltered.</p></li><li><p>Includes real user context.</p></li></ul><p>Iteration accelerates.</p><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, explained, teams slow down when feedback is delayed, summarized, or abstracted. 
By the time it reaches the person who can act on it, urgency &#8212; and insight &#8212; have faded.</p><p>Fast teams shorten that path aggressively.</p><h3>Ownership Creates Judgment</h3><p>The panel also emphasized that ownership isn&#8217;t just about accountability &#8212; it&#8217;s about learning.</p><p>When the same person:</p><ul><li><p>Makes the decision.</p></li><li><p>Implements the solution.</p></li><li><p>Observes the outcome.</p></li><li><p>Feels the failure.</p></li></ul><p>They develop judgment rapidly.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, shared, teams that fragment ownership fragment understanding. No one fully knows why something works &#8212; or why it doesn&#8217;t.</p><p>Judgment accumulates fastest when responsibility is continuous.</p><h3>Thinking and Execution Belong Together</h3><p>One of the most resonant reframes of the section was this:</p><blockquote><p><strong>Speed increases not because people work harder &#8212; but because thinking and execution happen in the same head.</strong></p></blockquote><p>When design, implementation, and iteration are separated, speed decays. When they&#8217;re unified, momentum compounds.</p><p>This doesn&#8217;t mean eliminating collaboration. It means eliminating unnecessary translation.</p><h3>The Practical Takeaway</h3><p>Teams that move fast don&#8217;t optimize for efficiency on paper.</p><p>They optimize for:</p><ul><li><p>Tight ownership loops.</p></li><li><p>Minimal handoffs.</p></li><li><p>Direct feedback.</p></li><li><p>Continuous learning.</p></li></ul><p>In AI products, where quality signals are subtle and shifting, <strong>distance is the enemy of speed</strong>.</p><p>Collapse the distance &#8212; and speed follows.</p><h2>7. 
Customer Obsession Beats Process Optimization</h2><p>Despite the panel&#8217;s technical depth, the conversation kept circling back to a simple truth:</p><p><strong>Customers are the fastest feedback system available.</strong></p><p>No internal process, tool, or framework can compete with direct exposure to real usage.</p><h3>Process Doesn&#8217;t Create Insight &#8212; Exposure Does</h3><p>Many teams try to move faster by refining internal processes:</p><ul><li><p>Better roadmaps.</p></li><li><p>Tighter sprint rituals.</p></li><li><p>More detailed specs.</p></li><li><p>More sophisticated tooling.</p></li></ul><p>The panel was blunt about the limitations of this approach.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, noted, process can reduce chaos &#8212; but it doesn&#8217;t create understanding. Teams that rely too heavily on internal abstractions often end up optimizing for the wrong problems.</p><p>Speed comes from knowing <em>what</em> to build &#8212; not just <em>how</em> to build it efficiently.</p><h3>High-Velocity Teams Stay Close to Users</h3><p>The fastest teams described on the panel shared one defining habit: <strong>constant customer contact</strong>.</p><p>They:</p><ul><li><p>Talk to users weekly &#8212; sometimes daily.</p></li><li><p>Onboard customers themselves.</p></li><li><p>Watch real usage in real contexts.</p></li><li><p>Feel confusion and delight firsthand.</p></li></ul><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, explained, nothing accelerates learning like watching someone struggle with your product in real time. Feedback becomes concrete. Priorities become obvious.</p><h3>Abstract Requests Hide Real Needs</h3><p>Another recurring insight was that <strong>customer requests are often misleading</strong>.</p><p>Users ask for features. They describe symptoms. 
They propose solutions.</p><p>But as <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, pointed out, the real work is understanding <em>why</em> they&#8217;re asking. That understanding rarely comes from tickets or surveys. It comes from observing behavior.</p><p>Teams that prioritize based on lived feedback move faster than those reacting to abstract input.</p><h3>Proximity Collapses Feedback Loops</h3><p>Customer proximity shortens feedback loops in ways no internal system can replicate.</p><p>When teams are close to users:</p><ul><li><p>Misalignment is obvious immediately.</p></li><li><p>Incorrect assumptions are exposed early.</p></li><li><p>Course correction happens faster.</p></li><li><p>Iteration becomes confident.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, noted, teams often underestimate how much time they lose by guessing instead of asking &#8212; or by interpreting instead of observing.</p><h3>Obsession Is a Practical Choice</h3><p>The panel was careful to separate <em>customer obsession</em> from performative empathy.</p><p>This isn&#8217;t about:</p><ul><li><p>NPS slogans.</p></li><li><p>Empathy workshops.</p></li><li><p>Abstract personas.</p></li></ul><p>It&#8217;s about:</p><ul><li><p>Proximity.</p></li><li><p>Frequency.</p></li><li><p>Firsthand exposure.</p></li></ul><p>Customer obsession isn&#8217;t a cultural value. 
It&#8217;s an operational strategy.</p><h3>The Practical Takeaway</h3><p>If speed is the goal, customer proximity is the lever.</p><p>The teams that ship fastest:</p><ul><li><p>Stay close to real usage.</p></li><li><p>Trust lived feedback over speculation.</p></li><li><p>Let customers shape priorities directly.</p></li><li><p>Reduce internal debate by increasing external clarity.</p></li></ul><p>In AI products, where quality is contextual and evolving, <strong>customers are the fastest way to find the truth</strong>.</p><p>No process can substitute for that.</p><h2>8. Feature Flags Enable Safe Aggression</h2><p>One of the most practical themes to emerge from the panel was that <strong>shipping fast does not mean shipping recklessly</strong>.</p><p>High-velocity teams don&#8217;t move carefully &#8212; they move <strong>contained</strong>.</p><p>Feature flags surfaced repeatedly as one of the most important tools for making that possible.</p><h3>Speed Requires the Ability to Contain Risk</h3><p>AI products introduce uncertainty by default.</p><p>Outputs vary.</p><p>Behavior shifts.</p><p>Edge cases surface unpredictably.</p><p>In that environment, shipping changes broadly and permanently is dangerous.</p><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, emphasized, teams that move fast sustainably all share one trait: they can <em>limit blast radius</em>. 
Feature flags give teams that control.</p><p>They allow teams to:</p><ul><li><p>Isolate risk.</p></li><li><p>Control who sees what.</p></li><li><p>Roll out changes incrementally.</p></li><li><p>Pull back instantly if something breaks.</p></li></ul><p>Speed without containment isn&#8217;t velocity &#8212; it&#8217;s gambling.</p><h3>Flags Turn Experiments Into Reversible Decisions</h3><p>A recurring insight was that <strong>reversibility is the foundation of speed</strong>.</p><p>Feature flags turn what would otherwise be hard commitments into reversible bets.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, noted, teams are far more willing to experiment aggressively when they know they can turn something off without damage. That psychological safety unlocks real momentum.</p><p>Without flags, every experiment feels existential.</p><p>With flags, experimentation becomes routine.</p><h3>Early Adopters Are Not the Same as Everyone Else</h3><p>Another key point was segmentation.</p><p>Not all users want &#8212; or tolerate &#8212; the same level of experimentation.</p><p>Feature flags allow teams to:</p><ul><li><p>Expose new capabilities to power users.</p></li><li><p>Test with internal teams first.</p></li><li><p>Learn from early adopters.</p></li><li><p>Protect broader user trust.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, explained, trust is fragile in AI products. Once users lose confidence, it&#8217;s difficult to earn back. 
Flags allow teams to learn without burning that trust.</p><h3>Reliability and Experimentation Are Not Opposites</h3><p>The panel strongly rejected the idea that teams must choose between speed and reliability.</p><p>The fastest teams do both &#8212; by separating <em>learning</em> from <em>exposure</em>.</p><p>Feature flags make that separation explicit.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, shared, flags allow teams to test bold ideas while keeping the core experience stable. Users experience consistency, while teams gain insight.</p><p>That balance is what allows iteration at AI speed without chaos.</p><h3>Safe Aggression Is a Design Principle</h3><p>What emerged was a broader principle:</p><blockquote><p><strong>Move aggressively &#8212; but only where failure is contained.</strong></p></blockquote><p>Feature flags operationalize that principle.</p><p>They:</p><ul><li><p>Encourage experimentation.</p></li><li><p>Reduce fear of shipping.</p></li><li><p>Protect user trust.</p></li><li><p>Preserve optionality.</p></li></ul><p>Without them, teams naturally become conservative.</p><p>With them, teams can be bold &#8212; responsibly.</p><h3>The Practical Takeaway</h3><p>Speed in AI products isn&#8217;t about recklessness.</p><p>It&#8217;s about <strong>controlled risk</strong>.</p><p>Teams that ship fast:</p><ul><li><p>Isolate experiments.</p></li><li><p>Segment exposure.</p></li><li><p>Learn quickly.</p></li><li><p>Revert instantly.</p></li></ul><p>Feature flags don&#8217;t slow teams down.</p><p>They make it safe to move faster.</p><p>In an AI-first world, <strong>aggression without containment is chaos</strong> &#8212; but aggression with guardrails is progress.</p><h2>9. 
Trust Is a Battery &#8212; Spend It Carefully</h2><p>Across multiple parts of the discussion, trust kept coming up &#8212; not as a vague brand concept, but as a <strong>finite operational resource</strong>.</p><p>The panel consistently framed it this way:</p><blockquote><p><strong>Trust behaves like a battery.</strong><br><strong>It charges slowly.</strong><br><strong>It drains quickly.</strong></p></blockquote><p>And once it&#8217;s depleted, speed collapses.</p><h3>Early Products Must Earn Trust Before Spending It</h3><p>The panel was clear that early-stage AI products don&#8217;t have the luxury of experimentation at scale.</p><p>Before teams can move aggressively, they must:</p><ul><li><p>Nail table-stakes experiences.</p></li><li><p>Behave predictably.</p></li><li><p>Avoid surprising failures.</p></li><li><p>Demonstrate basic reliability.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, noted, users are far more sensitive early on. When trust hasn&#8217;t been established yet, even small inconsistencies feel disproportionate.</p><p>Early trust isn&#8217;t built by novelty.</p><p>It&#8217;s built on <em>dependability</em>.</p><h3>Trust Decays Faster Than It Accumulates</h3><p>Several speakers emphasized how asymmetrical trust really is.</p><p>It takes:</p><ul><li><p>Repeated successful interactions.</p></li><li><p>Consistent behavior.</p></li><li><p>Clear boundaries.</p></li></ul><p>to build trust.</p><p>But it takes:</p><ul><li><p>One confusing output.</p></li><li><p>One silent failure.</p></li><li><p>One unexplained change.</p></li></ul><p>to start draining it.</p><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, pointed out, AI systems feel especially brittle because they present confident outputs even when they&#8217;re wrong. 
That makes trust loss sharper &#8212; and recovery harder.</p><h3>Experimentation Is a Privilege, Not a Right</h3><p>A recurring theme was that experimentation must be <em>earned</em>.</p><p>Once trust is established, teams gain:</p><ul><li><p>Room to experiment.</p></li><li><p>Tolerance for occasional failure.</p></li><li><p>Forgiveness for iteration.</p></li><li><p>User patience during change.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, explained, trusted products can ship imperfect updates and recover quickly. Untrusted products can&#8217;t survive even minor missteps.</p><p>Trust buys optionality.</p><h3>Small Mistakes Compound When Trust Is Low</h3><p>Without trust, every issue feels bigger than it is.</p><p>Minor bugs turn into reasons to churn.</p><p>Ambiguous behavior becomes incompetence.</p><p>Iteration feels like instability.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, shared, teams often underestimate how much damage is caused not by catastrophic failures &#8212; but by <em>frequent, low-grade disappointment</em>.</p><p>Without trust, those moments stack up fast.</p><h3>Spend Trust Where Learning Is Highest</h3><p>The panel also emphasized that trust should be spent intentionally.</p><p>When teams <em>do</em> experiment, they should:</p><ul><li><p>Do it where learning is maximized.</p></li><li><p>Isolate exposure carefully.</p></li><li><p>Communicate changes clearly.</p></li><li><p>Roll back quickly when needed.</p></li></ul><p>As <strong>Daksh Gupta</strong> noted earlier, feature flags and segmentation aren&#8217;t just technical tools &#8212; they&#8217;re trust-management tools.</p><p>They allow teams to learn without draining the battery.</p><h3>The Practical Takeaway</h3><p>Trust isn&#8217;t an abstract virtue in AI products.</p><p>It&#8217;s fuel.</p><p>The fastest teams:</p><ul><li><p>Build trust deliberately.</p></li><li><p>Protect it 
aggressively.</p></li><li><p>Spend it where learning is highest.</p></li><li><p>Replenish it through reliability.</p></li></ul><p>In an AI-first world, <strong>trust determines how fast you&#8217;re allowed to move</strong>.</p><p>Spend it recklessly, and speed disappears.</p><p>Spend it wisely, and iteration compounds.</p><h2>10.
Customer Feedback Must Be Filtered, Not Obeyed</h2><p>One of the final &#8212; and most important &#8212; clarifications from the panel was this:</p><p><strong>Listening to customers is not the same as following them.</strong></p><p>High-velocity teams do both &#8212; but they do them very differently.</p><h3>Feedback Is Raw Data, Not Direction</h3><p>The panel emphasized that customer feedback is inherently noisy.</p><p>Users:</p><ul><li><p>Describe symptoms.</p></li><li><p>Articulate frustrations.</p></li><li><p>Suggest solutions.</p></li><li><p>React emotionally to outcomes.</p></li></ul><p>But they rarely diagnose root causes accurately.</p><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, noted, treating every piece of feedback as a directive leads teams to chase surface-level fixes &#8212; and lose coherence over time.</p><p>Feedback is signal.</p><p>Direction requires judgment.</p><h3>Caring Is Different From Complaining</h3><p>A key distinction surfaced around <strong>how much users actually care</strong>.</p><p>Many users complain.</p><p>Very few are willing to change behavior.</p><p>Effective teams learn to distinguish:</p><ul><li><p>Annoyance from urgency.</p></li><li><p>Requests from necessity.</p></li><li><p>Opinions from switching behavior.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, explained, the most valuable signals come from moments where users say, <em>&#8220;I can&#8217;t do my job without this working.&#8221;</em> Everything else requires scrutiny.</p><h3>&#8220;Hell Yes&#8221; Outcomes Are Rare &#8212; and Precious</h3><p>Several speakers emphasized the importance of identifying <strong>&#8220;hell yes&#8221; moments</strong>.</p><p>These are moments where:</p><ul><li><p>Users light up.</p></li><li><p>Value is immediately obvious.</p></li><li><p>Behavior changes without prompting.</p></li><li><p>Adoption accelerates naturally.</p></li></ul><p>As <strong>Ray 
Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, shared, teams that optimize for lukewarm satisfaction move slowly. Teams that optimize for undeniable value move decisively.</p><p>Mediocre feedback leads to mediocre products.</p><h3>Surveys Don&#8217;t Surface Tradeoffs &#8212; Conversations Do</h3><p>Another clear takeaway was the limitation of surveys.</p><p>Surveys:</p><ul><li><p>Flatten nuance.</p></li><li><p>Encourage safe answers.</p></li><li><p>Hide tradeoffs.</p></li></ul><p>Tradeoff conversations, by contrast:</p><ul><li><p>Force prioritization.</p></li><li><p>Surface real constraints.</p></li><li><p>Reveal what users would give up.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, noted, asking users to choose &#8212; not just react &#8212; exposes what truly matters.</p><p>Speed comes from clarity, not consensus.</p><h3>Builder&#8217;s Own Diagnosis</h3><p>The panel repeatedly returned to a simple but powerful responsibility:</p><blockquote><p><strong>Customers describe symptoms.</strong><br><strong>Builders diagnose causes.</strong></p></blockquote><p>When teams outsource diagnosis to users, they lose control of the product&#8217;s direction.</p><p>The fastest teams:</p><ul><li><p>Absorb feedback deeply.</p></li><li><p>Triangulate across users.</p></li><li><p>Test hypotheses quickly.</p></li><li><p>Make opinionated decisions.</p></li></ul><p>They don&#8217;t abdicate judgment &#8212; they sharpen it.</p><h3>The Practical Takeaway</h3><p>Customer feedback is indispensable &#8212; and dangerous.</p><p>Used well, it:</p><ul><li><p>Accelerates learning.</p></li><li><p>Validates direction.</p></li><li><p>Surfaces blind spots.</p></li></ul><p>Used poorly, it:</p><ul><li><p>Fragments focus.</p></li><li><p>Slows decision-making.</p></li><li><p>Erodes product coherence.</p></li></ul><p>In AI products, especially, where complexity is high and quality is subtle, <strong>judgment is the bottleneck &#8212; not 
information</strong>.</p><p>Listen closely.</p><p>Filter aggressively.</p><p>Decide decisively.</p><p>That&#8217;s how teams ship fast &#8212; without losing their way.</p><h2>11. &#8220;Minimum Lovable&#8221; Beats &#8220;Minimum Viable&#8221;</h2><p>One of the most subtle &#8212; and powerful &#8212; reframings from the panel was this:</p><p><strong>In AI products, &#8220;viable&#8221; is not enough.</strong></p><p>What passes as acceptable in traditional software often fails immediately in AI.</p><h3>AI Outputs Feel Personal &#8212; Whether You Intend Them To or Not</h3><p>AI products don&#8217;t just execute instructions.</p><p>They <em>respond</em>.</p><p>They:</p><ul><li><p>Speak in natural language.</p></li><li><p>Make suggestions.</p></li><li><p>Infer intent.</p></li><li><p>Appear confident.</p></li></ul><p>As a result, users interpret outputs as <em>judgment</em>, not just functionality.</p><p>When an AI system gets something wrong, it doesn&#8217;t feel like a bug.</p><p>It feels like a misunderstanding.</p><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, noted, this makes early impressions far more emotionally charged. 
Mistakes feel intelligent &#8212; and therefore scarier.</p><h3>&#8220;Viable&#8221; Is a Low Bar for Trust-Heavy Systems</h3><p>Minimum viable products are designed to answer one question:</p><blockquote><p><em>Does this work at all?</em></p></blockquote><p>In AI, that question is insufficient.</p><p>Because:</p><ul><li><p>Trust is fragile.</p></li><li><p>Users don&#8217;t know system boundaries.</p></li><li><p>Failures feel personal.</p></li><li><p>Confidence amplifies error.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, explained, shipping something that technically works but feels careless or incoherent often does more damage than not shipping at all.</p><p>Users don&#8217;t wait for it to get better.</p><p>They leave.</p><h3>Lovability Is About Respect, Not Polish</h3><p>The panel was careful to distinguish <em>lovable</em> from <em>polished</em>.</p><p>Lovability doesn&#8217;t mean:</p><ul><li><p>Perfect UX.</p></li><li><p>Flawless outputs.</p></li><li><p>Exhaustive feature sets.</p></li></ul><p>It means the product feels:</p><ul><li><p>Coherent.</p></li><li><p>Intentional.</p></li><li><p>Respectful of user intent.</p></li><li><p>Reliably useful in its core job.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, shared, users forgive missing features. 
They don&#8217;t forgive feeling misunderstood or dismissed.</p><h3>Lovability Creates Forgiveness</h3><p>A recurring insight was that <strong>forgiveness is the real early-stage moat</strong>.</p><p>When a product feels lovable:</p><ul><li><p>Users retry after failure.</p></li><li><p>They give feedback instead of churning.</p></li><li><p>They tolerate iteration.</p></li><li><p>They stay curious.</p></li></ul><p>When a product feels merely viable:</p><ul><li><p>Failures feel unacceptable.</p></li><li><p>Trust erodes quickly.</p></li><li><p>Churn accelerates.</p></li></ul><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, noted, early-stage AI products live or die by whether users believe the team <em>cares</em>.</p><p>Lovability communicates care.</p><h3>Minimum Lovable Sets the Right Floor</h3><p>The panel ultimately reframed early-stage quality bars.</p><p>Instead of asking:</p><blockquote><p><em>&#8220;Is this good enough to ship?&#8221;</em></p></blockquote><p>High-velocity teams ask:</p><blockquote><p><em>&#8220;Is this good enough to earn patience?&#8221;</em></p></blockquote><p>That question leads to different decisions:</p><ul><li><p>Tighter scope.</p></li><li><p>Clearer boundaries.</p></li><li><p>Fewer but better use cases.</p></li><li><p>More intentional defaults.</p></li></ul><h3>The Practical Takeaway</h3><p>AI products don&#8217;t get graded like traditional software.</p><p>They&#8217;re judged as collaborators.</p><p>That raises the bar.</p><p><strong>Minimum viable gets you tried.</strong></p><p><strong>Minimum lovable gets you trusted.</strong></p><p>And in an AI-first world, trust is the only thing that lets you move fast without breaking everything that matters.</p><h2>12. AI Speed Is Organizational, Not Just Technical</h2><p>As the panel closed, one final theme became unmistakably clear: <strong>AI speed is not primarily a tooling problem.</strong> It&#8217;s an organizational one.</p><p>Models matter. 
Frameworks matter. Infrastructure matters. But none of them determines speed on their own.</p><h3>Tools Don&#8217;t Learn &#8212; Teams Do</h3><p>Throughout the discussion, speakers repeatedly returned to the same observation: Two teams can use the same models, the same frameworks, and the same tools &#8212; and move at radically different speeds. The difference isn&#8217;t technical sophistication. It&#8217;s how the organization learns.</p><p>AI speed is driven by:</p><ul><li><p>Team structure.</p></li><li><p>Ownership models.</p></li><li><p>Cultural norms.</p></li><li><p>Decision-making velocity.</p></li><li><p>How feedback is interpreted and acted on.</p></li></ul><p>As <strong>Daksh Gupta</strong>, Co-founder &amp; CEO of <strong>Greptile</strong>, emphasized, teams don&#8217;t slow down because prompts are bad &#8212; they slow down because decisions get stuck.</p><h3>Ownership Determines Learning Velocity</h3><p>One of the strongest predictors of speed discussed on the panel was <strong>clear ownership</strong>.</p><p>Fast teams:</p><ul><li><p>Know who decides.</p></li><li><p>Know who owns quality.</p></li><li><p>Know who responds to failure.</p></li><li><p>Don&#8217;t diffuse responsibility.</p></li></ul><p>As <strong>Ray Jang</strong>, Co-founder &amp; CEO of <strong>Atria</strong>, noted, ambiguity in ownership creates hesitation. And hesitation compounds quickly in fast-moving AI environments. 
When no one owns learning, learning slows.</p><h3>Culture Shapes How Feedback Is Handled</h3><p>Another recurring insight was that <strong>feedback is only as useful as the culture that processes it</strong>.</p><p>In slower organizations:</p><ul><li><p>Feedback is debated endlessly.</p></li><li><p>Mistakes trigger defensiveness.</p></li><li><p>Learning is politicized.</p></li><li><p>Decisions wait for consensus.</p></li></ul><p>In faster ones:</p><ul><li><p>Feedback is welcome early.</p></li><li><p>Mistakes are treated as data.</p></li><li><p>Iteration is normalized.</p></li><li><p>Decisions move forward with imperfect information.</p></li></ul><p>As <strong>Yen Tan</strong>, Product Manager at <strong>15Five</strong>, explained, psychological safety isn&#8217;t just a people concept &#8212; it&#8217;s a speed multiplier. Teams that feel safe to surface problems do so earlier, when fixes are cheaper.</p><h3>Decision Velocity Beats Decision Accuracy</h3><p>The panel also reframed how teams should think about decision quality.</p><p>Perfect decisions are rare. Reversible decisions are common.</p><p>Fast AI teams:</p><ul><li><p>Make decisions quickly.</p></li><li><p>Revisit them often.</p></li><li><p>Correct course early.</p></li><li><p>Avoid over-indexing on certainty.</p></li></ul><p>As <strong>Evan Owen</strong>, Co-founder &amp; CEO of <strong>Glue</strong>, put it, teams that wait for confidence rarely get it. Teams that act and observe learn faster. 
Speed comes from motion with feedback &#8212; not deliberation without data.</p><h3>Learning Loops Are the Real Differentiator</h3><p>Across all examples, one pattern dominated:</p><blockquote><p><strong>The fastest AI companies have the tightest learning loops.</strong></p></blockquote><p>They:</p><ul><li><p>Ship small changes.</p></li><li><p>Observe real behavior.</p></li><li><p>Absorb feedback directly.</p></li><li><p>Adjust immediately.</p></li></ul><p>Tooling supports this &#8212; but it doesn&#8217;t create it.</p><p>Learning loops are designed through:</p><ul><li><p>Org structure.</p></li><li><p>Incentives.</p></li><li><p>Ownership.</p></li><li><p>Trust.</p></li></ul><h3>The Final Reframe</h3><p>By the end of the panel, &#8220;AI speed&#8221; had been fully redefined.</p><p>It isn&#8217;t about:</p><ul><li><p>Better prompts.</p></li><li><p>Faster GPUs.</p></li><li><p>Clever architectures.</p></li></ul><p>It&#8217;s about:</p><ul><li><p>Collapsing feedback loops.</p></li><li><p>Reducing organizational drag.</p></li><li><p>Empowering decision-makers.</p></li><li><p>Learning faster than competitors.</p></li></ul><h3>The Practical Takeaway</h3><p>If your AI team feels slow, the bottleneck is rarely technical. It&#8217;s usually:</p><ul><li><p>Unclear ownership.</p></li><li><p>Delayed decisions.</p></li><li><p>Filtered feedback.</p></li><li><p>Cultural friction.</p></li></ul><p>The fastest teams don&#8217;t just build better systems. They build <strong>organizations designed to learn at AI speed</strong>. 
And in an ecosystem where technology converges quickly, <strong>learning speed is the only durable advantage left</strong>.</p>]]></content:encoded></item><item><title><![CDATA[OpenClaw Is Not Magic; It's Just Good Architecture]]></title><description><![CDATA[Why event-driven design and persistent state create the illusion of an intelligent assistant.]]></description><link>https://labs.adaline.ai/p/openclaw-architecture-not-magic</link><guid isPermaLink="false">https://labs.adaline.ai/p/openclaw-architecture-not-magic</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 07 Feb 2026 00:45:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/64fc7916-3bd4-4db0-9813-d591c5b885f8_2260x1264.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR: </strong>OpenClaw feels alive, maybe near AGI, but it's not magic. It's event-driven architecture implemented correctly. This piece explains why <strong>triggers</strong>, <strong>queues</strong>, and <strong>persistent state</strong> create the illusion of intelligence, what makes agent assistants reliable in production, and where they fail.
This blog is for engineers and builders who want to understand the machinery behind the hype, not just believe it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1JO0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1JO0!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:243466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/186987983?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1JO0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!1JO0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3188ad79-19e5-4b71-87cd-a0f043ca1905_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Why The OpenClaw Hype Makes Sense</h2><p>OpenClaw is easiest to understand as an always-on local assistant that can execute tools. It runs on a machine you control, and it listens for messages. Not only that, it can take actions such as reading files, running commands, or pulling information from services.</p><p>For engineers, that description translates cleanly. It is an event-driven runtime with <strong>persistent state</strong>. </p><p>Essentially, it means that events arrive from a messaging surface or a schedule. The runtime turns those events into <strong>ordered work</strong>, <strong>calls models when needed</strong>, and <strong>persists</strong> what happened so the next event has context. 
That framing explains the excitement better than any claim about model intelligence.</p><p>Ben Goertzel&#8217;s &#8220;hands for a brain&#8221; metaphor makes sense because it points to the real differentiator.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:186691243,&quot;url&quot;:&quot;https://bengoertzel.substack.com/p/openclaw-amazing-hands-for-a-brain&quot;,&quot;publication_id&quot;:349947,&quot;publication_name&quot;:&quot;Eurykosmotron&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!lHzi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5e95575-39eb-4680-ba24-305396c557b3_400x400.png&quot;,&quot;title&quot;:&quot;OpenClaw &#8211; Amazing Hands for a Brain That Doesn&#8217;t Yet Exist&quot;,&quot;truncated_body_text&quot;:&quot;A lot of people are excited about OpenClaw just now &#8211; and they should be. It&#8217;s a genuinely important piece of software -- an open-source, self-hosted agent runtime that lets AI systems reach out and touch the world through your laptop, connecting to file systems, browsers, APIs, shell commands, and a growing ecosystem of integrations. It&#8217;s language-mode&#8230;&quot;,&quot;date&quot;:&quot;2026-02-03T02:54:58.350Z&quot;,&quot;like_count&quot;:35,&quot;comment_count&quot;:6,&quot;bylines&quot;:[{&quot;id&quot;:312261,&quot;name&quot;:&quot;Ben Goertzel&quot;,&quot;handle&quot;:&quot;bengoertzel&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/85762f14-9217-4410-96cf-3c6a84c88918_48x48.png&quot;,&quot;bio&quot;:&quot;Benevolent #AGI, #transhumanism &amp; eurycosmos.  
CEO @singularity_net, Chair @opencog  @HumanityPlus  @iCog_Labs&quot;,&quot;profile_set_up_at&quot;:&quot;2022-01-15T16:48:28.136Z&quot;,&quot;reader_installed_at&quot;:null,&quot;publicationUsers&quot;:[{&quot;id&quot;:271528,&quot;user_id&quot;:312261,&quot;publication_id&quot;:349947,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:349947,&quot;name&quot;:&quot;Eurykosmotron&quot;,&quot;subdomain&quot;:&quot;bengoertzel&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;AGI, frontier science, maniacal metaphysics, decentralizationist politics, life and consciousness extension and expansion, psi and psychedelics and etc. etc.&quot;,&quot;logo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/b5e95575-39eb-4680-ba24-305396c557b3_400x400.png&quot;,&quot;author_id&quot;:312261,&quot;primary_user_id&quot;:312261,&quot;theme_var_background_pop&quot;:&quot;#6B26FF&quot;,&quot;created_at&quot;:&quot;2021-04-28T22:22:02.866Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Ben 
Goertzel&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:null,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;bengoertzel&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[888615],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://bengoertzel.substack.com/p/openclaw-amazing-hands-for-a-brain?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!lHzi!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5e95575-39eb-4680-ba24-305396c557b3_400x400.png"><span class="embedded-post-publication-name">Eurykosmotron</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">OpenClaw &#8211; Amazing Hands for a Brain That Doesn&#8217;t Yet Exist</div></div><div class="embedded-post-body">A lot of people are excited about OpenClaw just now &#8211; and they should be. 
It&#8217;s a genuinely important piece of software -- an open-source, self-hosted agent runtime that lets AI systems reach out and touch the world through your laptop, connecting to file systems, browsers, APIs, shell commands, and a growing ecosystem of integrations. It&#8217;s language-mode&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">3 months ago &#183; 35 likes &#183; 6 comments &#183; Ben Goertzel</div></a></div><p>OpenClaw extends the system's capabilities globally. It gives a language model a set of <strong>practical hands, </strong>essentially, so the output is not only text. It is a changed file, a launched process, a completed check-in, or a scheduled action.</p><p>This is also why adoption is massive and still growing.</p><p>Many people do not need a system that writes better paragraphs. They need a system that handles life ops with low ceremony. A calendar change should not require three apps and ten taps. A reminder should not require re-explaining the same preferences each time. </p><div id="youtube2-AcwK1Uuwc0U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;AcwK1Uuwc0U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/AcwK1Uuwc0U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Demos of OpenClaw in daily use tend to center on ordinary tasks like managing calendar items, controlling devices, checking in to flights, or handling small admin actions through a chat surface because those are repeatable and measurable. </p><p>One useful comparison for orientation is Claude Code. 
</p><blockquote><p><strong>If Claude Code is a familiar coding agent surface, OpenClaw is a life ops agent surface.</strong></p></blockquote><p>The rest of this article will stay on that system&#8217;s lens. Execution, availability, and state are enough to produce the alive feeling, even when the underlying reasoning is ordinary.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/openclaw-architecture-not-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/openclaw-architecture-not-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/openclaw-architecture-not-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>Heartbeats And Triggers Create The Illusion Of Initiative</h2><p>OpenClaw feels alive because it behaves like a running system rather than a chat window. The right term is <strong>reactive compute</strong>, which means work happens because events arrive, not because the assistant decides to be proactive. </p><p>Claire Vo&#8217;s framing is useful here. The system can have a heartbeat without having a brain. 
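</p><p>A heartbeat like this is small machinery. Here is a minimal Python sketch of one tick of a scheduled check; the file name and function names are invented for illustration, not OpenClaw&#8217;s actual code:</p>

```python
import json
from pathlib import Path

# Persisted between wake-ups so the next tick has context (illustrative path).
STATE = Path("heartbeat_state.json")

def heartbeat_tick(probe, alert):
    """One wake-up: run the probe, alert only when its value flips, record it.

    `probe` stands in for any real check: a calendar diff, a build status,
    a price threshold. `alert` is whatever channel delivers the message.
    """
    last = json.loads(STATE.read_text()).get("last") if STATE.exists() else None
    current = probe()
    if last is not None and current != last:
        alert(f"condition changed: {last} -> {current}")
    STATE.write_text(json.dumps({"last": current}))

# A scheduler (cron, an hourly timer, an inbound webhook) calls
# heartbeat_tick on a cadence; nothing here decides to act on its own.
```

<p>Recording the last value is what keeps the check quiet when nothing has changed.</p><p>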
</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/clairevo/status/2017741569521271175&quot;,&quot;full_text&quot;:&quot;https://t.co/tJNojXC9jo&quot;,&quot;username&quot;:&quot;clairevo&quot;,&quot;name&quot;:&quot;claire vo &#128420;&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1565475442470965248/LBMzyamM_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-31T23:27:32.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:27,&quot;retweet_count&quot;:39,&quot;like_count&quot;:379,&quot;impression_count&quot;:150949,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>The heartbeat is the machinery that keeps checking, waking up, and responding to new inputs.</p><p>Initiative, in this setup, is mostly <strong>scheduling</strong> and <strong>routing</strong>. A message comes in. A timer fires. A file changes. Something external updates. The runtime wakes up, runs a short sequence, and leaves behind state so the next event has context.</p><p>You can hold the whole behavior in one pipeline: inputs, then scheduler, then queue, then tools, and then a state update.</p><p>The <strong>Gateway</strong> is the always-on intake layer that receives events from channels and integrations and routes them into the right session or workflow.</p><pre><code>WhatsApp / Telegram / Slack / Discord / Google Chat / Signal / iMessage / BlueBubbles / Microsoft Teams / Matrix / Zalo / Zalo Personal / WebChat
               &#9474;
               &#9660;
&#9484;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9488;
&#9474;            Gateway            &#9474;
&#9474;       (control plane)         &#9474;
&#9474;     ws://127.0.0.1:18789      &#9474;
&#9492;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9516;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9496;
               &#9474;
               &#9500;&#9472; Pi agent (RPC)
               &#9500;&#9472; CLI (openclaw &#8230;)
               &#9500;&#9472; WebChat UI
               &#9500;&#9472; macOS app
&#9492;&#9472; iOS / Android nodes</code></pre><p>That pipeline is enough to explain why it looks like the system is taking initiative. It is not guessing what to do next. It is being triggered.</p><p>A few common trigger types cover most of the &#8220;alive&#8221; feeling:</p><ul><li><p>Heartbeats that run on a timer, like every morning or every hour.</p></li><li><p>Inbound messages from a channel, like Telegram or Slack.</p></li><li><p>External events, like a calendar change or a webhook.</p></li><li><p>Local changes, like a file being updated in a watched folder.</p></li></ul><p>Let&#8217;s look at two examples to make this concrete. </p><p>First, a <strong>daily briefing</strong>. A morning timer fires at 8 a.m. The runtime pulls <strong>calendar</strong> and <strong>reminders</strong> information, <strong>formats a brief</strong>, and <strong>stores</strong> that. The next day, it can compare and focus on what changed since yesterday rather than starting from scratch.</p><p>Second, a scheduled check. An hourly timer fires. The runtime checks one condition, sends an alert only if the condition flips, and records the last known value so it does not spam you. That record is the difference between a noisy bot and a useful assistant.</p><p>This is also why &#8220;always on&#8221; matters. When events can wake the system, the system can appear to have momentum.</p><h2>Queue-Based Execution Keeps Agent Workflows Reliable</h2><p>Reliability is the difference between an agent demo and an agent you trust with real work. In a demo, the system runs one clean task in isolation. In real use, tasks overlap. </p><p>For instance, messages arrive continuously while a tool is running. A scheduled check fires while you are in the middle of a conversation. The runtime has to decide what runs now, what waits, and what is allowed to overlap. 
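</p><p>One way to make those decisions explicit is to serialize work per session with lane-based queues. Here is a minimal asyncio sketch of the idea; the <code>SessionLanes</code> name and its API are invented for the example, not OpenClaw&#8217;s implementation:</p>

```python
import asyncio

class SessionLanes:
    """Serialize work per session: one queue and one worker per session key.

    Tasks in the same lane run strictly in submission order; separate lanes
    may overlap, which is only safe when sessions are truly independent.
    """

    def __init__(self):
        self._lanes = {}  # session_id -> (queue, worker task)

    def _lane(self, session_id):
        if session_id not in self._lanes:
            queue = asyncio.Queue()
            worker = asyncio.create_task(self._drain(queue))
            self._lanes[session_id] = (queue, worker)
        return self._lanes[session_id][0]

    async def _drain(self, queue):
        # Pull units of work one at a time, so nothing in a lane interleaves.
        while True:
            make_coro, done = await queue.get()
            try:
                done.set_result(await make_coro())
            except Exception as exc:
                done.set_exception(exc)

    async def submit(self, session_id, make_coro):
        """Enqueue a unit of work on a session's lane and wait for its result."""
        done = asyncio.get_running_loop().create_future()
        await self._lane(session_id).put((make_coro, done))
        return await done
```

<p>Two tool calls submitted to the same session cannot interleave, no matter how the awaits inside them are ordered.</p><p>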
</p><p>The common failure mode is parallel tool calls <strong>without control</strong>. When two tasks run at once, they both touch the same state, and you get three kinds of damage.</p><ul><li><p><strong>Logs interleave</strong>, so you cannot tell which action produced which output.</p></li><li><p><strong>Race conditions</strong> appear when two actions read and write the same files or external resources.</p></li><li><p><strong>State drift</strong> creeps in when <strong>partial results land out of order,</strong> and the next step reads the wrong snapshot. </p></li></ul><p>Queue-based execution is the simplest high-leverage fix.</p><p>Treat every requested action as a unit of work that must be scheduled. Give each session a boundary so one thread of work stays coherent. Make serial execution the default so ordering is predictable, then <strong>allow parallelism only for tasks you can prove are independent</strong>. </p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/Hesamation/status/2017038553058857413?s=20&quot;,&quot;full_text&quot;:&quot;https://t.co/LsZLoCMqTN&quot;,&quot;username&quot;:&quot;Hesamation&quot;,&quot;name&quot;:&quot;&#8463;&#949;sam&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1978647680357134336/ioMmfkXF_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-30T00:54:00.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:85,&quot;retweet_count&quot;:574,&quot;like_count&quot;:4099,&quot;impression_count&quot;:1602077,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>The <a href="https://x.com/Hesamation/status/2017038553058857413?s=20">Hesamation</a> teardown describes this approach as lane-based command queues with per-session lanes, a concrete way to make serialization a first-class property rather than an afterthought. </p><p>A useful analogy is air traffic control. 
Planes can share airspace safely because takeoff and landing are <strong>sequenced.</strong> The system does not ban concurrency; it makes it explicit and governed. A queue does the same thing for tool calls.</p><p>A practical example is inbox work. One task is drafting a reply based on the latest thread. Another task is archiving old messages. If they run in parallel, the archiver can move the thread while the drafter is reading, or the drafter can quote content that is no longer in view. With a queue and session boundary, the system completes one coherent step, writes the result, and then moves to the next.</p><p>The architecture video frames the illusion of sentience as inputs, queues, and a loop that stays legible under load, which is exactly the reliability point. </p><div id="youtube2-CAbrRTu5xcw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;CAbrRTu5xcw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/CAbrRTu5xcw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Adaline Labs&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Adaline Labs</span></a></p><h2>Persistent Memory And Recall Create Continuity</h2><p>Continuity is mostly a persistent state plus retrieval, not human-like understanding. Personalization is often just statefulness. 
The system feels consistent because it can carry facts forward, not because it has a stable internal model of you.</p><p>Claire Vo&#8217;s point about a heartbeat without a brain fits here, too. A running assistant can appear attentive even when it is simply good at&nbsp;<strong>storing and reusing state over time</strong>. </p><p>Operationally, memory is not mystical. It is three boring components that work together.</p><ul><li><p>Durable <strong>notes</strong> and <strong>preferences</strong> that outlive a single session.</p></li><li><p>Session <strong>history</strong> that records what happened and what was decided.</p></li><li><p><strong>Recall</strong> that pulls the right fragments at the moment they matter.</p></li></ul><p>Engineers can think of this as a read-and-write loop around a store. The write path captures decisions and stable preferences. </p><p>The read path retrieves relevant items when a new event arrives. </p><p>Summarization and compaction emerge as patterns as history grows large. This is similar to Claude compaction for a long conversation. The system compresses what mattered, so the next retrieval step still has a signal.</p><p>Two examples make this concrete.</p><p>First, weekly updates. You tell the assistant that your status update should follow a specific format: three bullets for progress, two for blockers, and a short next week plan. </p><p>If that preference is stored durably, the assistant stops asking every time. It can draft the update in the same shape each week, and you only adjust the content.</p><p>Second, recurring constraints. You set a rule like do not send emails after 8 pm. If that constraint is written to durable storage, it becomes a&nbsp;<strong>guardrail</strong>&nbsp;that is applied whenever an email-related task is encountered. 
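</p><p>Stored this way, the rule is just data that the send step consults. A minimal Python sketch, with an invented file path and helper names rather than OpenClaw&#8217;s real storage:</p>

```python
import json
from datetime import datetime, time, timedelta
from pathlib import Path

PREFS = Path("prefs.json")  # durable store: outlives any single session

def save_pref(key, value):
    """Write path: capture a decision or stable preference."""
    prefs = json.loads(PREFS.read_text()) if PREFS.exists() else {}
    prefs[key] = value
    PREFS.write_text(json.dumps(prefs))

def load_pref(key, default=None):
    """Read path: retrieve the preference when a new event arrives."""
    return (json.loads(PREFS.read_text()) if PREFS.exists() else {}).get(key, default)

def schedule_send(now):
    """Apply the 'no emails after 8 pm' guardrail: return when to send."""
    cutoff = load_pref("email_cutoff_hour", 20)
    if now.hour >= cutoff:
        # Defer to the next morning instead of refusing the task.
        return datetime.combine(now.date() + timedelta(days=1), time(8, 0))
    return now  # within allowed hours: send immediately
```

<p>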
The assistant can draft at 9 pm, but schedule the send for the next morning and record that it followed the rule.</p><p>Goertzel&#8217;s &#8220;hands for a brain&#8221; framing matters here because the hands are only useful when they are guided by <strong>stable context</strong> and <strong>preferences</strong> rather than ad hoc guessing. </p><p>But there is a tradeoff. </p><p>Memory without hygiene can become stale or risky. </p><p>Old preferences can outlive their usefulness. Sensitive details can linger longer than intended. </p><p>This is why good systems need user control, recency, and a way to inspect and edit what the assistant thinks it knows. </p><h2>Event-Driven Agent Assistants Win On Clear Tasks And Guardrails</h2><p>Event-driven agent assistants work best when the job can be specified in a way that a tool can verify. They are less reliable when the job is really a judgment call disguised as a task. </p><blockquote><p><strong>The architecture gives you reach and persistence, but it does not give you governance for free.</strong></p></blockquote><p>A simple rule holds. 
If you can define the inputs, the action, and the success check, these systems tend to behave well; if you cannot, they tend to behave poorly.</p><p>Good at:</p><ul><li><p>Clear operational tasks with observable outputs, such as producing a daily brief, filing a note, or running a scheduled check.</p></li><li><p>Multi-step workflows where each step has a tool-backed result, like collecting context, drafting, and then saving to a known place or directory.</p></li><li><p>Repetitive life ops work where preferences stay stable, which is why creator demos focus on calendar, reminders, and admin tasks that recur daily or weekly.</p></li></ul><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:186207574,&quot;url&quot;:&quot;https://creatoreconomy.so/p/how-openclaws-creator-uses-ai-peter-steinberger&quot;,&quot;publication_id&quot;:25792,&quot;publication_name&quot;:&quot;Behind the Craft&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!DV7q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc944a891-b38f-40ba-a756-7ddd70824b7e_1024x1024.png&quot;,&quot;title&quot;:&quot;How OpenClaw's Creator Uses AI to Run His Life (Full Demo) | Peter Steinberger&quot;,&quot;truncated_body_text&quot;:&quot;Dear subscribers,&quot;,&quot;date&quot;:&quot;2026-02-01T14:05:16.265Z&quot;,&quot;like_count&quot;:35,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:6052627,&quot;name&quot;:&quot;Peter Yang&quot;,&quot;handle&quot;:&quot;petergyang&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2dbd75e-1c5a-48ab-94ef-b24caea63cdf_1024x1024.png&quot;,&quot;bio&quot;:&quot;Extremely practical AI tutorials and interviews for busy people | Join 135K+ readers at creatoreconomy.so | Product at 
Roblox&quot;,&quot;profile_set_up_at&quot;:&quot;2021-06-07T04:33:59.004Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-07-17T17:05:30.706Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:225125,&quot;user_id&quot;:6052627,&quot;publication_id&quot;:25792,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:25792,&quot;name&quot;:&quot;Behind the Craft&quot;,&quot;subdomain&quot;:&quot;peteryang&quot;,&quot;custom_domain&quot;:&quot;creatoreconomy.so&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Extremely practical AI tutorials and interviews for busy people. Get my free AI learning path when you sign up.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c944a891-b38f-40ba-a756-7ddd70824b7e_1024x1024.png&quot;,&quot;author_id&quot;:6052627,&quot;primary_user_id&quot;:6052627,&quot;theme_var_background_pop&quot;:&quot;#E22D32&quot;,&quot;created_at&quot;:&quot;2020-01-08T03:42:51.283Z&quot;,&quot;email_from_name&quot;:&quot;Peter Yang&quot;,&quot;copyright&quot;:&quot;Peter Yang&quot;,&quot;founding_plan_name&quot;:&quot;\&quot;I Can Expense 
This\&quot;&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;petergyang&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[10845],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://creatoreconomy.so/p/how-openclaws-creator-uses-ai-peter-steinberger?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!DV7q!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc944a891-b38f-40ba-a756-7ddd70824b7e_1024x1024.png" loading="lazy"><span class="embedded-post-publication-name">Behind the Craft</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How OpenClaw's Creator Uses AI to Run His Life (Full Demo) | Peter Steinberger</div></div><div class="embedded-post-body">Dear subscribers&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">3 months ago &#183; 35 likes &#183; Peter Yang</div></a></div><p>Bad at:</p><ul><li><p>Ambiguous goals where success is subjective, like deciding what you should prioritize this month.</p></li><li><p>High-stakes actions 
without a hard verification step, like sending money, deleting data, or making irreversible changes.</p></li><li><p>Situations where autonomy grows faster than the operator&#8217;s ability to inspect what happened and why. </p></li></ul><p>Goertzel&#8217;s &#8220;hands for a brain&#8221; metaphor is a good mental boundary. Strong hands can still do the wrong thing if the instruction is underspecified or if the system lacks a disciplined way to pause and ask for confirmation. </p><p>This is where guardrails matter. Increase autonomy in steps. Start with approvals for any command that changes external state. Keep allowlists for routine safe operations. Treat risky actions as review required until the logs are boring.</p><p>Try OpenClaw, but start with low-risk workflows, watch how it behaves, and only then give it more reach.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Growth And Retention In An AI-first World | Takeaways For Founders And Product Leaders]]></title><description><![CDATA[AI makes products feel magical at first, but only trust, habit, and problem frequency turn novelty into durable retention.]]></description><link>https://labs.adaline.ai/p/growth-and-retention-in-an-ai-first-world</link><guid isPermaLink="false">https://labs.adaline.ai/p/growth-and-retention-in-an-ai-first-world</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 04 Feb 2026 13:50:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2b1e12c4-2b89-4f8a-aa64-2007058e1bf2_2560x1440.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: This blog explains why smart AI features alone don't create lasting user habits or growth. It covers three key insights: excitement doesn't equal habit without intentional design. Retention depends on how often users naturally need your product, not how smart it is. And products grow when they solve shared problems, not just individual ones. Readers will learn why forcing engagement backfires, why aligning with users' natural workflows matters, and how collaboration drives real stickiness. 
The main takeaway is simple: AI products succeed by becoming useful to groups, not by being brilliant alone.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xRDr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xRDr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png" width="1456" height="546" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/184654030?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xRDr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!xRDr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e13b41d-bc71-43d0-a2b0-d3c88549f1ad_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Introduction</h2><div id="youtube2--iXxoxc-o6o" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;-iXxoxc-o6o&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/-iXxoxc-o6o?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Founder Intro: Growth &amp; Retention in an AI-First World</h2><p>One of the most persistent misconceptions in AI right now is that <strong>intelligence alone drives growth</strong>.</p><p>Build something impressive enough, the thinking goes, and users will keep coming 
back. Retention will take care of itself. Distribution will follow naturally.</p><p>In practice, the opposite is happening.</p><p>AI products are getting better faster than teams are learning how to retain users, build trust, and compound growth over time. Excitement is abundant. Habit is rare.</p><p>Panel 3 was designed to confront that gap directly.</p><p>Rather than focusing on models or capabilities, we wanted to examine a harder set of questions:</p><p>What actually makes AI products stick?</p><p>What drives durable growth once the novelty wears off?</p><p>And how do retention dynamics change in an AI-first world?</p><p>To explore those questions, we brought together operators who have spent years studying &#8212; and living inside &#8212; growth systems at scale:</p><ul><li><p><strong><a href="https://www.linkedin.com/in/aaroncort/">Aaron Cort</a></strong>, Growth &amp; Marketing Partner at <strong>Craft Ventures</strong>, advising and operating across some of the fastest-growing AI and SaaS companies</p></li><li><p><strong><a href="https://www.linkedin.com/in/bbalfour/">Brian Balfour</a></strong>, Founder &amp; CEO at <strong>Reforge</strong>, who has shaped how an entire generation of operators thinks about growth and retention</p></li><li><p><strong><a href="https://www.linkedin.com/in/-bryce/">Bryce Hunt</a></strong>, Founding GTM at <strong>Cognition</strong>, working at the frontier of agent-native products and new go-to-market motions</p></li><li><p><strong><a href="https://www.linkedin.com/in/gvohra/">Gaurav Vohra</a></strong>, Advisor and Head of Growth at <strong>Superhuman</strong>, where precision, trust, and habit are non-negotiable</p></li></ul><p>What emerged was a clear reframing of growth in the AI era.</p><p>This panel wasn&#8217;t about hacks, channels, or short-term tactics. 
It was about fundamentals &#8212; how problem frequency governs retention, why trust is the real retention loop, how onboarding becomes more critical (not less), and why community and personal brand are increasingly powerful growth multipliers.</p><p>Perhaps most importantly, it surfaced a shared conviction:</p><blockquote><p>AI doesn&#8217;t change the laws of growth.<br>It exposes when teams ignore them.</p></blockquote><p>The sections that follow break down the core lessons from this conversation &#8212; from why hype fades faster than habit, to why motion choice constrains everything, to why the most defensible layer in many AI companies today is human trust.</p><p>If you&#8217;re building an AI product and wondering why early excitement isn&#8217;t translating into durable usage &#8212; or how to design growth systems that actually compound &#8212; this panel offers a grounded, experience-driven place to start.</p><div><hr></div><h2>1. Hype Is Easy &#8212; Habit Is the Hard Part</h2><p>One of the most consistent themes across this panel was the widening gap between <strong>initial excitement</strong> and <strong>durable usage</strong> in AI products.</p><p>AI excels at creating &#8220;wow&#8221; moments.</p><p>New users are impressed by:</p><ul><li><p>Instant results.</p></li><li><p>Intelligent-sounding outputs.</p></li><li><p>Dramatic productivity claims.</p></li><li><p>Novelty-driven breakthroughs.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, pointed out, this has created a dangerous illusion in the market: teams mistake <em>interest</em> for <em>retention</em>.</p><h3>The Illusion of Early Traction</h3><p>AI products today are exceptionally good at:</p><ul><li><p>Generating excitement.</p></li><li><p>Creating impressive first impressions.</p></li><li><p>Driving short-term spikes in usage.</p></li></ul><p>These early signals feel like momentum. Dashboards light up. Activation looks strong. 
Engagement graphs climb.</p><p>But as multiple speakers emphasized, <strong>very few AI products convert that excitement into habit</strong>.</p><p>Usage drops sharply once novelty fades. Sessions become sporadic. Power users emerge &#8212; but the majority quietly churn.</p><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, noted during the panel, this is one of the most common failure patterns he sees across AI companies: <em>strong top-of-funnel interest paired with weak behavioral lock-in</em>.</p><h3>Habit Is Not a Side Effect of Intelligence</h3><p>A critical distinction surfaced early in the discussion:</p><blockquote><p><strong>Habit does not emerge automatically from impressive capability.</strong></p></blockquote><p>Habit forms when a product:</p><ul><li><p>Solves a recurring problem.</p></li><li><p>Delivers consistent value.</p></li><li><p>Reinforces usage at the right cadence.</p></li></ul><p>AI often accelerates the first interaction &#8212; but it does not guarantee the second, third, or tenth.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, framed it, delight gets users to try. 
Reliability gets them to stay.</p><h3>The Missing Link: Problem Frequency</h3><p>Several speakers emphasized that many AI products fail because they misunderstand <em>how often</em> users naturally experience the problem being solved.</p><p>If a product:</p><ul><li><p>Solves a weekly problem,</p></li><li><p>But is designed for daily engagement,</p></li><li><p>Or pushes frequent prompts to manufacture usage,</p></li></ul><p>it creates friction, not habit.</p><p>This mismatch leads to:</p><ul><li><p>Forced engagement.</p></li><li><p>Notification fatigue.</p></li><li><p>User resentment.</p></li><li><p>Eventual churn.</p></li></ul><p>AI doesn&#8217;t change the natural frequency of a problem; it only exposes when teams ignore it.</p><h3>Shallow Engagement Looks Like Growth (Until It Doesn&#8217;t)</h3><p>One of the more subtle warnings from the panel was about <strong>engagement theater</strong>.</p><p>Short sessions, repeated trials, and sporadic experimentation can look like healthy usage in aggregate. 
But without a clear, repeatable value loop, that engagement is fragile.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, described from the frontier of agent-native products, users will experiment enthusiastically &#8212; right up until they don&#8217;t trust the system to deliver reliably when it matters.</p><p>At that point, usage collapses.</p><h3>From Hype to Habit Requires Intentional Design</h3><p>The panel was clear that the transition from hype to habit is not accidental.</p><p>It requires:</p><ul><li><p>A deep understanding of the underlying user problem.</p></li><li><p>Clarity on when and why users should return.</p></li><li><p>Consistent value delivery, not sporadic brilliance.</p></li><li><p>Reinforcement at a cadence that matches real behavior.</p></li></ul><p>Without these elements, AI products experience:</p><ul><li><p>Rapid churn.</p></li><li><p>Novelty decay.</p></li><li><p>Shallow engagement disguised as growth.</p></li></ul><h3>The Core Insight</h3><p>AI makes it easier than ever to impress users once.</p><p>It does not make it easier to earn a place in their daily &#8212; or weekly &#8212; routine.</p><p>As this panel made clear, <strong>habit is not created by intelligence alone</strong>.</p><p>It&#8217;s created by relevance, consistency, and trust &#8212; delivered over time.</p><h2>2. 
Retention Is Governed by the Natural Frequency of the Problem</h2><p>Early in the panel, a foundational concept surfaced &#8212; and then kept resurfacing in different forms:</p><blockquote><p><strong>Retention is constrained by how often users naturally encounter the problem you solve.</strong></p></blockquote><p>No amount of AI sophistication can override that constraint.</p><p>You can improve <em>how</em> a problem is solved.</p><p>You can reduce friction.</p><p>You can increase quality.</p><p>But you cannot change:</p><ul><li><p>How frequently the user feels the pain.</p></li><li><p>How urgent it is when it appears.</p></li><li><p>Whether it belongs in their daily, weekly, or occasional workflow.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, emphasized, retention mechanics are downstream of reality &#8212; not product ambition.</p><h3>AI Doesn&#8217;t Change Problem Frequency &#8212; It Reveals It</h3><p>One of the traps AI companies fall into is assuming that intelligence increases usage frequency.</p><p>It doesn&#8217;t.</p><p>AI can:</p><ul><li><p>Make a task faster.</p></li><li><p>Make a task easier.</p></li><li><p>Make a task more impressive.</p></li></ul><p>But if the task only matters once a week, <strong>daily usage is artificial</strong>.</p><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, noted during the discussion, many AI products feel pressure to justify venture-scale expectations by forcing daily engagement &#8212; even when the underlying problem doesn&#8217;t support it.</p><p>That pressure often leads to bad decisions.</p><h3>Forced Engagement Backfires</h3><p>When companies try to:</p><ul><li><p>Force daily usage for a weekly problem.</p></li><li><p>Manufacture engagement through notifications.</p></li><li><p>Inflate frequency with alerts, nudges, or reminders.</p></li></ul><p>They don&#8217;t create a habit.</p><p>They create:</p><ul><li><p>Worse 
products.</p></li><li><p>User fatigue.</p></li><li><p>Eroded trust.</p></li><li><p>Eventual churn.</p></li></ul><p>Users don&#8217;t interpret forced engagement as helpful.</p><p>They interpret it as noise.</p><p>AI amplifies this effect because the expectations are higher. If a system claims intelligence but interrupts users unnecessarily, the disappointment is sharper.</p><h3>Criticality Matters as Much as Frequency</h3><p>The panel also highlighted that frequency alone isn&#8217;t enough &#8212; <strong>criticality matters</strong>.</p><p>Some problems occur infrequently but are extremely important when they do. Others occur often but are low-stakes.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained, retention emerges when a product aligns with moments that <em>matter</em>. If users don&#8217;t feel meaningful relief or leverage when the problem appears, they won&#8217;t return &#8212; no matter how impressive the solution is.</p><p>AI products that misunderstand this often chase engagement metrics instead of solving meaningful pain.</p><h3>Misaligned Cadence Creates Product Friction</h3><p>A recurring failure pattern described on the panel looked like this:</p><ul><li><p>The product solves a real problem.</p></li><li><p>The solution works well.</p></li><li><p>But the cadence of engagement doesn&#8217;t match the user&#8217;s life.</p></li></ul><p>Daily prompts for a weekly task.</p><p>Constant nudges for occasional workflows.</p><p>Persistent reminders for low-urgency problems.</p><p>The result isn&#8217;t retention &#8212; it&#8217;s resistance.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, pointed out from the edge of agent-driven products, users quickly disengage when a system feels like it&#8217;s working <em>for itself</em>, not for them.</p><h3>AI Makes Violations More Obvious</h3><p>One of the sharpest insights from the panel was this:</p><blockquote><p><strong>AI does not change the natural 
frequency law &#8212; it only amplifies violations of it.</strong></p></blockquote><p>Because AI systems are more visible, more interactive, and more assertive, misalignment shows up faster.</p><p>Users don&#8217;t quietly tolerate friction.</p><p>They disengage.</p><p>What might have taken months to surface in traditional software becomes obvious in weeks &#8212; sometimes days &#8212; in AI products.</p><h3>The Practical Takeaway</h3><p>Retention doesn&#8217;t come from intelligence alone.</p><p>It comes from alignment.</p><p>Teams that succeed:</p><ul><li><p>Identify the natural cadence of the problem.</p></li><li><p>Design engagement around that cadence.</p></li><li><p>Resist the urge to force frequency.</p></li><li><p>Measure success by consistency, not volume.</p></li></ul><p>In an AI-first world, <strong>respecting user reality is the fastest path to durable retention</strong>.</p><h2>3. Growth Comes From Solving Shared Problems, Not Isolated Ones</h2><p>Another strong theme that emerged from the panel was the importance of <strong>multi-user relevance</strong>.</p><p>Many AI products begin by delivering clear value to an individual user. That&#8217;s often the right starting point. It simplifies onboarding, shortens time-to-value, and helps teams validate core utility quickly.</p><p>But as the panel made clear, <strong>durable growth rarely stops at the individual</strong>.</p><h3>Individual Value Is Necessary &#8212; But Not Sufficient</h3><p>AI products are especially good at creating powerful single-player experiences.</p><p>They help users:</p><ul><li><p>Think faster.</p></li><li><p>Produce better outputs.</p></li><li><p>Automate personal workflows.</p></li><li><p>Feel individually empowered.</p></li></ul><p>This often leads to strong early adoption.</p><p>But as <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, emphasized, products that remain purely individual struggle to compound. They grow linearly, not exponentially. 
Each new user must be acquired independently, and retention alone has to carry the entire growth story.</p><p>That&#8217;s a hard ceiling.</p><h3>Shared Problems Unlock Compounding Growth</h3><p>The most durable products discussed on the panel followed a different arc.</p><p>They:</p><ol><li><p><strong>Start with individual value</strong>: Solving a clear, personal pain point.</p></li><li><p><strong>Expand into shared contexts</strong>: Teams, organizations, or communities.</p></li><li><p><strong>Embed themselves into collaboration</strong>: Where work is coordinated, reviewed, or acted upon together.</p></li></ol><p>This transition unlocks:</p><ul><li><p>Natural network effects.</p></li><li><p>Lock-in through shared workflows.</p></li><li><p>Organic distribution via collaboration.</p></li></ul><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained, once a product becomes part of how people work <em>together</em>, switching costs become emotional and operational &#8212; not just technical.</p><h3>AI Amplifies Collaboration &#8212; When Designed For It</h3><p>AI has the potential to accelerate this transition, but only if products are designed intentionally.</p><p>When AI outputs are:</p><ul><li><p>Easily shareable.</p></li><li><p>Reviewable by others.</p></li><li><p>Editable collaboratively.</p></li><li><p>Embedded in team workflows.</p></li></ul><p>They create natural reasons for expansion.</p><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, noted during the panel, many of the strongest AI companies see growth inflection not when the product gets smarter, but when it becomes <em>socially necessary</em> within a team.</p><h3>Single-Player Products Hit a Wall</h3><p>The panel was also clear about the risks of staying single-player for too long.</p><p>AI products that remain isolated experiences often:</p><ul><li><p>Depend heavily on paid acquisition.</p></li><li><p>Struggle to create organic 
loops.</p></li><li><p>Face high churn when usage is optional.</p></li><li><p>Fail to embed themselves into daily work.</p></li></ul><p>Even when the individual experience is strong, growth plateaus.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, shared from the perspective of agent-native products, the moment AI systems begin influencing shared outcomes &#8212; codebases, decisions, deliverables &#8212; adoption dynamics change dramatically. Teams care. Conversations start. Distribution accelerates.</p><h3>Collaboration Creates Accountability &#8212; and Stickiness</h3><p>Another subtle benefit of shared problems is accountability.</p><p>When work is:</p><ul><li><p>Visible to others.</p></li><li><p>Reviewed collaboratively.</p></li><li><p>Dependent on multiple stakeholders.</p></li></ul><p>Usage becomes harder to abandon quietly.</p><p>Products that live inside shared workflows benefit from:</p><ul><li><p>Social reinforcement.</p></li><li><p>Collective habit formation.</p></li><li><p>Stronger norms around usage.</p></li></ul><p>This doesn&#8217;t require viral mechanics.</p><p>It requires relevance to how people already work together.</p><h3>The Practical Takeaway</h3><p>AI products don&#8217;t compound by being smarter alone.</p><p>They compound by becoming <strong>collectively useful</strong>.</p><p>The most durable growth comes from:</p><ul><li><p>Starting with individual value.</p></li><li><p>Expanding into shared problems.</p></li><li><p>Embedding into collaboration.</p></li><li><p>Letting distribution emerge naturally.</p></li></ul><p>In an AI-first world, <strong>growth follows shared utility &#8212; not isolated brilliance</strong>.</p><h2>4. AI Raises the Bar for Onboarding &#8212; It Doesn&#8217;t Lower It</h2><p>One of the more counterintuitive conclusions from the panel was this:</p><blockquote><p>AI does not make products easier to adopt.</p><p>It often makes them harder.</p></blockquote><p>Despite early expectations that intelligence would reduce friction, the opposite pattern has emerged in practice &#8212; especially once products reach real users.</p><h3>AI Introduces New Kinds of Friction</h3><p>Traditional software is predictable.</p><p>AI systems are not.</p><p>AI products introduce:</p><ul><li><p>Nondeterministic behavior.</p></li><li><p>Unfamiliar mental models.</p></li><li><p>Probabilistic outcomes.</p></li><li><p>Workflows users haven&#8217;t seen before.</p></li></ul><p>Even when the product is powerful, users often don&#8217;t know:</p><ul><li><p>What to expect.</p></li><li><p>How to judge success.</p></li><li><p>When the system is confident.</p></li><li><p>When they should intervene.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, emphasized, this creates a gap between <em>capability</em> and <em>confidence</em>. 
And without confidence, users don&#8217;t stick.</p><h3>Self-Serve Onboarding Breaks Earlier Than Teams Expect</h3><p>A recurring theme across the panel was that <strong>self-serve onboarding fails much earlier in AI products</strong> than in traditional SaaS.</p><p>Many teams assume that:</p><ul><li><p>Users will experiment.</p></li><li><p>Value will reveal itself.</p></li><li><p>Intelligence will &#8220;sell&#8221; the product.</p></li></ul><p>In reality, users often stall immediately.</p><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, noted, AI products place a higher cognitive burden on users. When people don&#8217;t understand how to succeed quickly, they disengage &#8212; even if the product is technically impressive.</p><p>The failure isn&#8217;t loud.</p><p>It&#8217;s silent.</p><h3>Early Handholding Accelerates Learning</h3><p>The fastest-learning companies described on the panel didn&#8217;t avoid human involvement &#8212; they leaned into it.</p><p>They:</p><ul><li><p>Onboarded users personally.</p></li><li><p>Walked them through first successes.</p></li><li><p>Observed where confusion emerged.</p></li><li><p>Adjusted workflows based on real behavior.</p></li></ul><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained, early handholding isn&#8217;t a scaling failure &#8212; it&#8217;s a learning accelerator. 
It shortens the feedback loop between what teams <em>think</em> users understand and what users actually experience.</p><h3>Onboarding Is About Education, Not Explanation</h3><p>A subtle but important distinction emerged around onboarding intent.</p><p>Onboarding isn&#8217;t about:</p><ul><li><p>Explaining features.</p></li><li><p>Listing capabilities.</p></li><li><p>Documenting everything the system can do.</p></li></ul><p>It&#8217;s about <strong>teaching users how to think with the product</strong>.</p><p>That means:</p><ul><li><p>Showing what good usage looks like.</p></li><li><p>Defining boundaries clearly.</p></li><li><p>Guiding users through successful outcomes.</p></li><li><p>Correcting misuse early.</p></li></ul><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, pointed out from the frontier of agent-based products, onboarding is often the moment where trust is either established or permanently lost.</p><h3>Learning Speed Beats Go-To-Market Speed</h3><p>Perhaps the most important reframe of the section was this:</p><blockquote><p><strong>The fastest-growing AI companies prioritize learning speed over go-to-market speed.</strong></p></blockquote><p>They don&#8217;t rush to scale acquisition before:</p><ul><li><p>Understanding user confusion.</p></li><li><p>Clarifying workflows.</p></li><li><p>Stabilizing outcomes.</p></li></ul><p>They accept slower early growth in exchange for:</p><ul><li><p>Stronger retention.</p></li><li><p>Clearer value propositions.</p></li><li><p>More predictable expansion later.</p></li></ul><p>In AI, onboarding is not a cost center.</p><p>It&#8217;s where product truth is discovered.</p><h3>The Practical Takeaway</h3><p>AI raises expectations &#8212; and uncertainty &#8212; at the same time.</p><p>That makes onboarding more important, not less.</p><p>Teams that succeed:</p><ul><li><p>Invest heavily in early education.</p></li><li><p>Embrace guided experiences.</p></li><li><p>Treat onboarding as a product 
system.</p></li><li><p>Learn from confusion instead of ignoring it.</p></li></ul><p>In an AI-first world, <strong>great onboarding isn&#8217;t about removing friction &#8212; it&#8217;s about removing uncertainty</strong>.</p><h2>5. Product-Led &#8800; Hands-Off</h2><p>One of the clearest misconceptions surfaced on the panel was around what <em>product-led</em> actually means in an AI-first world.</p><p>Too often, product-led growth is interpreted as:</p><ul><li><p>Zero human involvement.</p></li><li><p>Fully self-serve from day one.</p></li><li><p>No guidance or intervention.</p></li><li><p>No opinionated direction.</p></li></ul><p>The panel was unequivocal: <strong>this interpretation breaks down quickly in AI products</strong>.</p><h3>Product-Led Is About Where Value Is Created &#8212; Not Who&#8217;s Involved</h3><p>At its core, product-led growth means that <strong>the product is the primary driver of value realization</strong>.</p><p>It does <em>not</em> mean:</p><ul><li><p>Users are left alone to figure things out.</p></li><li><p>Teams remove themselves from the learning loop.</p></li><li><p>Human touch is a failure mode.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, emphasized, product-led growth (PLG) is about <em>value delivery</em>, not <em>absence of people</em>. 
Confusing the two leads teams to optimize for scale before they&#8217;ve learned what actually works.</p><h3>AI Products Need Human Scaffolding Early</h3><p>AI introduces uncertainty in ways traditional software does not.</p><p>Users often:</p><ul><li><p>Don&#8217;t know what&#8217;s possible.</p></li><li><p>Don&#8217;t know how to judge outputs.</p></li><li><p>Don&#8217;t know when they&#8217;re using the product &#8220;correctly&#8221;.</p></li></ul><p>In this context, <strong>early human involvement is not optional</strong>.</p><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, explained, the most effective AI companies use concierge onboarding early &#8212; not to sell, but to observe. Watching how users struggle, succeed, and misunderstand the product surfaces insights that no dashboard ever will.</p><h3>Human Feedback Accelerates Product Discovery</h3><p>Several speakers described how early human touch dramatically shortened product discovery cycles.</p><p>By staying close to users, teams were able to:</p><ul><li><p>Identify confusing workflows quickly.</p></li><li><p>Understand which outputs actually mattered.</p></li><li><p>Separate novelty from real value.</p></li><li><p>Refine positioning before scaling acquisition.</p></li></ul><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, noted, this human feedback loop is often the difference between a product that <em>feels impressive</em> and one that <em>earns trust</em>.</p><h3>Learning Cycles Matter More Than Scale at the Start</h3><p>A recurring warning from the panel was about premature scaling.</p><p>AI products that rush to:</p><ul><li><p>Remove human touch, </p></li><li><p>Automate everything, and</p></li><li><p>Maximize self-serve acquisition, </p></li></ul><p>often do so before they&#8217;ve stabilized value delivery.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, shared from the frontier of agent-based products, early 
scale amplifies misunderstanding just as fast as it amplifies success. If users are confused at small scale, they&#8217;ll be lost at large scale.</p><h3>The Strategic Use of Human Touch</h3><p>The panel offered a more nuanced model for PLG in AI:</p><ol><li><p><strong>Use human involvement intentionally early</strong></p><ul><li><p>To teach.</p></li><li><p>To observe.</p></li><li><p>To learn.</p></li></ul></li><li><p><strong>Identify repeatable patterns of value</strong></p><ul><li><p>Where users succeed without help.</p></li><li><p>Where workflows stabilize.</p></li><li><p>Where trust is earned.</p></li></ul></li><li><p><strong>Replace human touch deliberately</strong></p><ul><li><p>With product affordances.</p></li><li><p>With opinionated flows.</p></li><li><p>With automation that reflects real usage.</p></li></ul></li></ol><p>The goal is not to avoid human touch &#8212; it&#8217;s to earn the right to remove it.</p><h3>The Practical Takeaway</h3><p>In AI products, product-led does not mean hands-off.</p><p>It means:</p><ul><li><p>The product leads value creation.</p></li><li><p>Humans accelerate learning.</p></li><li><p>Automation follows understanding.</p></li></ul><p>Teams that treat PLG as an excuse to disengage learn slowly.</p><p>Teams that treat PLG as a system &#8212; with humans embedded early &#8212; learn fast.</p><p>In an AI-first world, <strong>strategic human involvement is not a growth liability</strong>. It&#8217;s a competitive advantage.</p><h2>6. Motion Choice Is a Strategic Constraint, Not a Tactic</h2><p>One of the most direct &#8212; and least hedged &#8212; messages from the panel was about go-to-market motion.</p><p><strong>Being &#8220;in the middle&#8221; is the worst place to be.</strong></p><p>This wasn&#8217;t framed as a tactical mistake.</p><p>It was framed as a <em>structural</em> one.</p><h3>GTM Motion Shapes Everything That Follows</h3><p>The panel was clear that GTM motion is not something you &#8220;optimize later.&#8221;</p><p>It determines:</p><ul><li><p>How products are built.</p></li><li><p>How onboarding works.</p></li><li><p>How trust is earned.</p></li><li><p>How quickly deals close.</p></li><li><p>How economics scale.</p></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, emphasized, motion choice constrains what&#8217;s possible long before it shows up in metrics. 
Teams that delay this decision often find themselves stuck with a product that doesn&#8217;t cleanly support <em>any</em> motion well.</p><h3>Why the Middle Collapses</h3><p>Several speakers described the same failure pattern:</p><ul><li><p>The product isn&#8217;t self-serve enough to convert quickly.</p></li><li><p>It isn&#8217;t enterprise-ready enough to close confidently.</p></li><li><p>Sales cycles stretch.</p></li><li><p>Security reviews stall.</p></li><li><p>Legal friction increases.</p></li><li><p>Economics break.</p></li></ul><p>This &#8220;hybrid by default&#8221; approach sounds flexible. In practice, it creates friction everywhere.</p><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, noted, ambiguity in motion leads to ambiguity in execution. Teams don&#8217;t know whether to optimize for conversion speed or deal size &#8212; and end up doing neither effectively.</p><h3>Pure Sales-Led Is Fragile in AI</h3><p>The panel was equally candid about the limits of traditional sales-led motions in AI.</p><p>Pure sales-led AI companies often struggle because:</p><ul><li><p>Products evolve too quickly for long sales cycles.</p></li><li><p>Value is hard to fully demonstrate upfront.</p></li><li><p>Buyers want proof through usage, not promises.</p></li><li><p>Model behavior can&#8217;t be perfectly specified in contracts.</p></li></ul><p>This doesn&#8217;t make sales irrelevant &#8212; but it makes <strong>sales-first strategies fragile</strong>, especially early.</p><h3>Hybrid Motions Hit Real-World Friction</h3><p>Hybrid motions &#8212; product-led entry with early sales involvement &#8212; sound attractive in theory.</p><p>In practice, the panel noted that they often collapse under:</p><ul><li><p>Security reviews.</p></li><li><p>Legal scrutiny.</p></li><li><p>IT procurement processes.</p></li><li><p>Unclear ownership.</p></li></ul><p>Without a clear product-led wedge or a true enterprise motion, teams get stuck negotiating before value is 
experienced.</p><h3>The Two Motions That Actually Work</h3><p>Across the discussion, the panel converged on two viable extremes:</p><p><strong>1. Product-Led (Sales Layered Later)</strong></p><ul><li><p>Clear self-serve value.</p></li><li><p>Fast time-to-first-success.</p></li><li><p>Minimal friction to try.</p></li><li><p>Sales introduced after usage and trust are established.</p></li></ul><p><strong>2. Forward-Deployed Engineering</strong></p><ul><li><p>Deep customer involvement.</p></li><li><p>Hands-on implementation.</p></li><li><p>High-touch workflows.</p></li><li><p>Clear value before scale.</p></li></ul><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, explained from the frontier of agent-native products, forward-deployed work isn&#8217;t a fallback &#8212; it&#8217;s often the fastest way to learn when problems are complex and trust is critical.</p><h3>Ambiguity Is the Real Enemy</h3><p>What failed consistently were companies that tried to keep all options open.</p><p>Ambiguous motion leads to:</p><ul><li><p>Slow deals.</p></li><li><p>Broken economics.</p></li><li><p>Unclear product priorities.</p></li><li><p>Stalled growth.</p></li></ul><p>Teams hesitate. Buyers hesitate. Momentum dies quietly.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, put it earlier in the panel, clarity &#8212; even when it limits options &#8212; is what enables speed.</p><h3>The Practical Takeaway</h3><p>GTM motion is not a growth hack. It&#8217;s a strategic constraint.</p><p>The companies that win:</p><ul><li><p>Choose a clear motion early.</p></li><li><p>Design the product around it.</p></li><li><p>Accept the tradeoffs.</p></li><li><p>Execute decisively.</p></li></ul><p>In an AI-first world, <strong>clarity beats flexibility</strong>.</p><p>Choosing the right motion doesn&#8217;t guarantee success &#8212; but avoiding the decision almost guarantees failure.</p><h2>7.
Trust Is the New Retention Loop</h2><p>Across multiple threads of the conversation, one idea kept surfacing in different forms:</p><p><strong>Trust is the real retention mechanism in AI products.</strong></p><p>Not novelty.</p><p>Not intelligence.</p><p>Not even habit on its own.</p><p>Users return when they trust the system.</p><h3>Trust Is Built on Predictability, Not Perfection</h3><p>The panel was clear that users don&#8217;t expect AI systems to be perfect.</p><p>They expect them to be <strong>understandable</strong>.</p><p>Users return when:</p><ul><li><p>Outputs are predictable.</p></li><li><p>Behavior is consistent.</p></li><li><p>Failure modes make sense.</p></li><li><p>The system feels aligned with their intent.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, emphasized, predictability is what allows users to form mental models. Without a mental model, there is no habit &#8212; only hesitation.</p><h3>Randomness Destroys Confidence Faster Than Errors</h3><p>Several speakers noted that <strong>randomness is more damaging than being wrong</strong>.</p><p>AI systems lose users when:</p><ul><li><p>Results feel inconsistent.</p></li><li><p>Success feels accidental.</p></li><li><p>Similar inputs produce wildly different outcomes.</p></li><li><p>Behavior changes without explanation.</p></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, explained, users can forgive known limitations. 
What they can&#8217;t tolerate is uncertainty about whether the product will work <em>this time</em>.</p><p>In AI products, confusion doesn&#8217;t just slow adoption &#8212; it actively repels it.</p><h3>Opaque Systems Feel Unaligned</h3><p>A recurring theme was alignment.</p><p>Users trust systems that feel like they&#8217;re:</p><ul><li><p>Working <em>with</em> them.</p></li><li><p>Respecting their intent.</p></li><li><p>Operating within understood boundaries.</p></li></ul><p>When behavior is opaque, users assume misalignment &#8212; even if none exists.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, described from the frontier of agent-based systems, trust collapses quickly when users don&#8217;t understand why the system acted the way it did. At that point, even good outcomes feel suspect.</p><h3>Failure Modes Matter More Than Success Cases</h3><p>One subtle but important insight from the panel was that <strong>users judge AI products by how they fail, not how they succeed</strong>.</p><p>When failure modes are:</p><ul><li><p>Explainable, </p></li><li><p>Constrained, and </p></li><li><p>Recoverable,</p></li></ul><p>trust grows.</p><p>When failures are:</p><ul><li><p>Surprising,</p></li><li><p>Silent, and </p></li><li><p>Inconsistent,</p></li></ul><p>users disengage.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, pointed out, trust isn&#8217;t built by eliminating failure &#8212; it&#8217;s built by making failure legible.</p><h3>Trust Compounds Over Time</h3><p>The panel repeatedly emphasized that trust behaves like a compounding asset.</p><p>Each predictable interaction:</p><ul><li><p>Reinforces confidence.</p></li><li><p>Lowers cognitive load.</p></li><li><p>Increases willingness to rely on the system.</p></li></ul><p>Over time, trust becomes the reason users return &#8212; even when alternatives exist.</p><p>Conversely, confusion compounds just as quickly.</p><p>Each unclear 
outcome:</p><ul><li><p>Introduces doubt.</p></li><li><p>Raises friction.</p></li><li><p>Shortens patience.</p></li></ul><p>Churn doesn&#8217;t usually happen after one bad experience.</p><p>It happens after several confusing ones.</p><h3>The Core Retention Loop in AI</h3><p>The panel implicitly described a new retention loop for AI products:</p><p><strong>Predictability &#8594; Trust &#8594; Reuse &#8594; Deeper Reliance</strong></p><p>Break that loop anywhere, and retention collapses.</p><p>As one speaker summarized succinctly:</p><blockquote><p>Trust compounds.</p><p>Confusion churns.</p></blockquote><h3>The Practical Takeaway</h3><p>In AI products, retention is not driven by how impressive the system is.</p><p>It&#8217;s driven by how safe it feels to rely on.</p><p>Teams that win:</p><ul><li><p>Prioritize predictable behavior.</p></li><li><p>Surface boundaries clearly.</p></li><li><p>Design for understandable failure.</p></li><li><p>Align outputs with user intent.</p></li></ul><p>In an AI-first world, <strong>trust isn&#8217;t a brand attribute &#8212; it&#8217;s a product property</strong>.</p>
<h2>8. Onboarding Must Be Opinionated, Interruptive, and Interactive</h2><p>One of the sharpest insights from the panel was that <strong>great AI onboarding behaves more like a game than a tutorial</strong>.</p><p>It doesn&#8217;t politely explain everything and hope users figure it out.</p><p>It actively guides behavior.</p><h3>Neutral Onboarding Is a Silent Failure Mode</h3><p>Many AI products default to neutral onboarding:</p><ul><li><p>Feature tours.</p></li><li><p>Passive documentation.</p></li><li><p>Optional walkthroughs.</p></li><li><p>&#8220;Explore on your own&#8221; prompts.</p></li></ul><p>The panel was blunt about the outcome: <strong>users fail silently</strong>.</p><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, noted, neutral onboarding shifts responsibility onto users at the exact moment they are least equipped to succeed. When users don&#8217;t know what &#8220;good usage&#8221; looks like, they hesitate &#8212; and hesitation kills momentum.</p><h3>Opinionation Reduces Anxiety</h3><p>Effective AI onboarding tells users <em>exactly</em> what to do.</p><p>It:</p><ul><li><p>Prescribes the first action.</p></li><li><p>Narrows choices intentionally.</p></li><li><p>Removes ambiguity.</p></li><li><p>Defines success clearly.</p></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, emphasized, opinionation reduces cognitive load.
When users don&#8217;t have to decide <em>how</em> to start, they&#8217;re more likely to start at all.</p><p>In AI products, especially, clarity feels like competence.</p><h3>Interruption Is a Feature, Not a Bug</h3><p>The panel also reframed interruption as a positive design choice.</p><p>Great onboarding:</p><ul><li><p>Interrupts users at the right moments.</p></li><li><p>Stops them before misconfiguration.</p></li><li><p>Corrects behavior early.</p></li><li><p>Enforces setup steps.</p></li></ul><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained, early interruption prevents downstream confusion. Fixing misuse later is far more expensive &#8212; and often impossible once trust is lost.</p><p>Interrupting early is an act of respect.</p><h3>Interaction Beats Explanation</h3><p>Another recurring theme was that <strong>users don&#8217;t learn AI products by reading about them</strong>.</p><p>They learn by:</p><ul><li><p>Doing.</p></li><li><p>Seeing outcomes.</p></li><li><p>Correcting mistakes.</p></li><li><p>Receiving immediate feedback.</p></li></ul><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, shared from agent-native systems, onboarding that rewards interaction &#8212; not passive consumption &#8212; accelerates understanding dramatically.</p><p>A successful first experience is worth more than a complete explanation.</p><h3>Forcing Correct Setup Early Pays Dividends</h3><p>Several speakers emphasized the importance of enforcing correct setup early &#8212; even if it feels restrictive.</p><p>Opinionated onboarding:</p><ul><li><p>Blocks users from skipping critical steps.</p></li><li><p>Validates inputs.</p></li><li><p>Ensures prerequisites are met.</p></li><li><p>Prevents false negatives.</p></li></ul><p>This reduces:</p><ul><li><p>Misuse.</p></li><li><p>Early failure.</p></li><li><p>Frustration blamed on the product.</p></li></ul><p>As the panel made clear, letting users &#8220;explore freely&#8221; often leads 
to bad conclusions about the product&#8217;s value.</p><h3>Why Neutrality Fails in AI</h3><p>Neutral onboarding assumes users:</p><ul><li><p>Know what they want.</p></li><li><p>Understand system boundaries.</p></li><li><p>Can evaluate outputs accurately.</p></li></ul><p>In AI products, these assumptions are almost always wrong.</p><p>Neutrality pushes responsibility onto users &#8212; and users fail silently.</p><p>Opinionation keeps responsibility where it belongs: <strong>with the product</strong>.</p><h3>The Practical Takeaway</h3><p>In AI onboarding:</p><ul><li><p>Politeness is overrated.</p></li><li><p>Clarity is everything.</p></li><li><p>Guidance beats freedom early.</p></li></ul><p>The best onboarding:</p><ul><li><p>Tells users what to do.</p></li><li><p>Interrupts them when necessary.</p></li><li><p>Rewards correct interaction.</p></li><li><p>Teaches success through action.</p></li></ul><p>In an AI-first world, <strong>great onboarding doesn&#8217;t wait for users to understand &#8212; it actively teaches them how to win</strong>.</p><h2>9. 
Moats Are Shifting &#8212; Stacking Matters More Than Strength</h2><p>One of the clearest rejections from the panel was the idea that AI companies can rely on a single, permanent moat.</p><p>That framing no longer holds.</p><p>Instead, the panel converged on a more nuanced &#8212; and more practical &#8212; view:</p><blockquote><p>AI moats are time-bound.</p><p>They strengthen and weaken at different phases.</p><p>Durability comes from stacking and sequencing them.</p></blockquote><h3>The Myth of the Singular AI Moat</h3><p>Early AI discourse often revolves around finding <em>the</em> moat:</p><ul><li><p>Proprietary models.</p></li><li><p>Unique data.</p></li><li><p>Technical sophistication.</p></li><li><p>Speed of execution.</p></li></ul><p>The panel was direct: <strong>no single advantage remains dominant for long</strong>.</p><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, noted, AI compresses competitive cycles. What feels defensible today often becomes baseline tomorrow. 
Teams that bet everything on one advantage eventually find themselves exposed.</p><h3>Different Moats Peak at Different Times</h3><p>Rather than dismissing moats entirely, the panel reframed them as <strong>phase-dependent</strong>.</p><p>Examples discussed included:</p><ul><li><p><strong>Data moats</strong></p><ul><li><p>Extremely strong once established.</p></li><li><p>Slow to build.</p></li><li><p>Often unusable early.</p></li><li><p>Most powerful after scale and repetition.</p></li></ul></li><li><p><strong>Brand moats</strong></p><ul><li><p>Can accelerate trust and adoption.</p></li><li><p>Fragile if product quality lags.</p></li><li><p>Difficult to repair once broken.</p></li></ul></li><li><p><strong>Distribution windows</strong></p><ul><li><p>Temporary but decisive.</p></li><li><p>Often tied to timing, channels, or platforms.</p></li><li><p>Missed windows rarely reopen.</p></li></ul></li><li><p><strong>Speed</strong></p><ul><li><p>No longer a differentiator.</p></li><li><p>Table stakes in AI.</p></li><li><p>Necessary but insufficient.</p></li></ul></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, emphasized, many AI companies fail not because they lack moats &#8212; but because they rely on the <em>wrong</em> one at the wrong time.</p><h3>Stacking Creates Durability</h3><p>The companies that endure don&#8217;t search for a silver bullet.</p><p>They stack advantages:</p><ul><li><p>Speed early.</p></li><li><p>Distribution when available.</p></li><li><p>Brand as trust compounds.</p></li><li><p>Data as usage accumulates.</p></li></ul><p>Each moat reinforces the others.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained, durability comes from overlap. When one advantage weakens, others compensate. 
This redundancy is what allows companies to survive competitive shocks.</p><h3>Sequencing Matters as Much as Strength</h3><p>Another subtle but important insight was that <strong>moats must be sequenced intentionally</strong>.</p><p>Building a data moat before you have distribution is pointless.</p><p>Pushing brand before reliability backfires.</p><p>Optimizing for speed without retention burns credibility.</p><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, shared, many AI startups mistake early momentum for defensibility &#8212; only to realize later that nothing was reinforcing it.</p><p>Momentum without structure decays quickly.</p><h3>The Competitive Reality of AI</h3><p>AI lowers the cost of imitation.</p><p>Features are copied faster.</p><p>Capabilities converge.</p><p>Execution gaps narrow.</p><p>In that environment, durability doesn&#8217;t come from being the strongest in one dimension.</p><p>It comes from being <strong>good enough across many &#8212; at the right times</strong>.</p><h3>The Practical Takeaway</h3><p>There is no permanent AI moat.</p><p>There are:</p><ul><li><p>Temporary advantages.</p></li><li><p>Shifting strengths.</p></li><li><p>Strategic windows.</p></li><li><p>Compounding combinations.</p></li></ul><p>The companies that win don&#8217;t chase the perfect moat.</p><p>They build a system of advantages that evolve as the market evolves.</p><p>In an AI-first world, <strong>stacking beats strength &#8212; and sequencing beats brilliance</strong>.</p><h2>10. 
Brand Is Becoming Personal Again</h2><p>One of the most striking themes to emerge near the end of the panel was a shift that&#8217;s easy to underestimate:</p><p><strong>Brand is becoming personal again.</strong></p><p>Not nostalgic.</p><p>Not performative.</p><p>Personal in a way that materially affects growth, trust, and retention.</p><h3>Logos Don&#8217;t Carry Trust the Way They Used To</h3><p>The panel noted that the environment around buyers and users has fundamentally changed.</p><p>Today:</p><ul><li><p>Search is fragmented.</p></li><li><p>Feeds are noisy.</p></li><li><p>Information is overwhelming.</p></li><li><p>AI-generated content is everywhere.</p></li></ul><p>In that world, traditional brand signals &#8212; logos, taglines, even company-level messaging &#8212; carry less weight than they used to.</p><p>Users don&#8217;t trust abstractions.</p><p>They trust <em>people</em>.</p><h3>Trust Attaches to Opinionated Individuals</h3><p>Across multiple threads, speakers pointed to the same pattern:</p><p>Users increasingly trust:</p><ul><li><p>Individuals with clear points of view.</p></li><li><p>Builders who explain <em>how</em> they think.</p></li><li><p>Leaders who show up consistently over time.</p></li><li><p>People willing to be specific, not neutral.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, noted, trust now accrues to those who reduce ambiguity. 
In a world of infinite answers, conviction becomes a signal.</p><h3>Founder-Led Brand as a Growth Channel</h3><p>The panel reframed founder-led (or leader-led) brand not as marketing &#8212; but as <strong>infrastructure</strong>.</p><p>When done well, personal brand becomes:</p><ul><li><p>A distribution channel.</p></li><li><p>A trust shortcut.</p></li><li><p>A wedge into new audiences.</p></li><li><p>A retention lever for existing users.</p></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, explained, many of the strongest AI companies today see disproportionate leverage from founders and leaders who actively articulate the product&#8217;s philosophy in public.</p><p>People don&#8217;t just buy the product &#8212; they buy the worldview.</p><h3>Explanation Is the New Differentiator</h3><p>AI products often struggle because users don&#8217;t understand <em>why</em> they work.</p><p>Founder-led brand helps close that gap.</p><p>When leaders:</p><ul><li><p>Explain tradeoffs,</p></li><li><p>Share decisions,</p></li><li><p>Talk openly about constraints, and</p></li><li><p>Narrate progress and failure,</p></li></ul><p>they make the product legible.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, pointed out earlier in the panel, explanation builds trust faster than polish. Users forgive imperfection when they understand intent.</p><h3>Consistency Beats Virality</h3><p>The panel was careful to separate personal brand from social performance.</p><p>This isn&#8217;t about:</p><ul><li><p>Going viral.</p></li><li><p>Hot takes.</p></li><li><p>Constant posting.</p></li></ul><p>It&#8217;s about:</p><ul><li><p>Consistency.</p></li><li><p>Clarity.</p></li><li><p>Coherence over time.</p></li></ul><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, noted, users don&#8217;t need constant visibility &#8212; they need repeated signals of alignment. 
Over time, that consistency compounds into trust.</p><h3>The Most Defensible Layer Available</h3><p>Perhaps the most important reframe of the section was this:</p><p>In an AI world where:</p><ul><li><p>Features are copied.</p></li><li><p>Capabilities converge.</p></li><li><p>Moats shift quickly.</p></li></ul><p><strong>human trust compounds slowly &#8212; and decays slowly.</strong></p><p>For many AI companies, especially early, a founder- or leader-led brand may be the most defensible layer available.</p><p>Not because it&#8217;s impossible to copy &#8212; but because it&#8217;s impossible to fake sustainably.</p><h3>The Practical Takeaway</h3><p>Brand is no longer just how a company looks.</p><p>It&#8217;s:</p><ul><li><p>How clearly its leaders think.</p></li><li><p>How openly they explain.</p></li><li><p>How consistently they show up.</p></li></ul><p>In an AI-first world flooded with answers, <strong>people follow judgment</strong>.</p><p>And judgment, increasingly, wears a human face.</p><h2>11. 
Community Is a Growth Multiplier, Not a Feature</h2><p>As the panel wrapped, one final idea brought many of the earlier themes together:</p><p>Community is not a feature.</p><p>It&#8217;s a growth multiplier.</p><p>And like any multiplier, it only works when the underlying system is sound.</p><h3>Community Is Not a Container</h3><p>The panel was explicit about what community is <em>not</em>.</p><p>It is not:</p><ul><li><p>A Slack group.</p></li><li><p>A Discord server.</p></li><li><p>A forum.</p></li><li><p>A channel you &#8220;launch&#8221;.</p></li></ul><p>Those are containers.</p><p>Community is what happens <em>inside</em> them &#8212; if anything happens at all.</p><p>Too many AI companies mistake presence for participation and confuse access with value.</p><h3>Real Community Is Shared Learning</h3><p>What actually worked, according to the panel, was community built around <strong>learning</strong>.</p><p>The strongest communities shared:</p><ul><li><p>How people were using the product.</p></li><li><p>What worked and what didn&#8217;t.</p></li><li><p>Failure modes and recovery patterns.</p></li><li><p>Evolving best practices.</p></li></ul><p>As <strong>Brian Balfour</strong>, Founder &amp; CEO of Reforge, noted, learning compounds when users can see each other thinking. 
In AI products especially, this shared sensemaking reduces fear and accelerates confidence.</p><h3>Identity Drives Contribution</h3><p>Another recurring insight was that <strong>community only works when contribution is rewarded</strong>.</p><p>Healthy communities give members:</p><ul><li><p>Status through insight.</p></li><li><p>Recognition through contribution.</p></li><li><p>Identity through participation.</p></li></ul><p>As <strong>Aaron Cort</strong>, Growth &amp; Marketing Partner at Craft Ventures, emphasized, community isn&#8217;t about broadcasting updates &#8212; it&#8217;s about creating a place where users feel ownership over collective progress.</p><p>When contribution is visible, learning accelerates.</p><h3>The Product Must Reinforce Belonging</h3><p>The panel also stressed that community cannot live <em>outside</em> the product.</p><p>The most effective AI communities were reinforced by:</p><ul><li><p>Product language.</p></li><li><p>Shared workflows.</p></li><li><p>Common artifacts.</p></li><li><p>Visible usage patterns.</p></li></ul><p>As <strong>Bryce Hunt</strong>, Founding GTM at Cognition, shared, when users see themselves reflected in how a product is built &#8212; not just how it&#8217;s marketed &#8212; community becomes self-sustaining.</p><p>Belonging doesn&#8217;t come from access.</p><p>It comes from relevance.</p><h3>Why Community Matters More in AI</h3><p>AI products introduce uncertainty by default.</p><p>Users often ask:</p><ul><li><p>&#8220;Am I using this correctly?&#8221;</p></li><li><p>&#8220;Is this result trustworthy?&#8221;</p></li><li><p>&#8220;Is everyone else confused, too?&#8221;</p></li></ul><p>Community normalizes that uncertainty.</p><p>As <strong>Gaurav Vohra</strong>, Advisor and Head of Growth at Superhuman, explained earlier, seeing others wrestle with the same questions reduces anxiety and builds confidence faster than documentation ever could.</p><p>Community turns uncertainty into momentum.</p><h3>Organic Growth 
Emerges From Shared Progress</h3><p>When done well, community quietly powers growth.</p><p>It:</p><ul><li><p>Spreads best practices.</p></li><li><p>Accelerates onboarding.</p></li><li><p>Reinforces habit.</p></li><li><p>Drives organic distribution.</p></li></ul><p>Users don&#8217;t just adopt the product &#8212; they advocate for it, teach it, and evolve with it.</p><p>That&#8217;s not a feature. That&#8217;s leverage.</p><h3>The Final Takeaway</h3><p>Community doesn&#8217;t create growth by itself.</p><p>But when paired with:</p><ul><li><p>Trust, </p></li><li><p>Clarity,</p></li><li><p>Shared learning, and </p></li><li><p>Visible contribution,</p></li></ul><p>it multiplies everything else.</p><p>In AI products, where understanding is as important as capability, <strong>community becomes the fastest way to scale trust</strong>.</p><p>Not by telling users what to do &#8212; but by letting them learn together.</p><div><hr></div>
<p></p>]]></content:encoded></item><item><title><![CDATA[When Everyone Can Build: Redesigning Product Work for the AI Era in 2026]]></title><description><![CDATA[AI is turning &#8220;who does what&#8221; into a moving target, so the winning teams will redesign roles, workflows, and accountability around outcomes, not titles.]]></description><link>https://labs.adaline.ai/p/redesigning-product-work-for-the-ai-era-in-2026</link><guid isPermaLink="false">https://labs.adaline.ai/p/redesigning-product-work-for-the-ai-era-in-2026</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 31 Jan 2026 00:55:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/40c52c8f-0f10-4745-83af-b8a4296e69f6_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: AI tools are collapsing traditional role boundaries&#8212;PMs build dashboards, engineers write copy, designers produce specs. This creates a "Mexican standoff" where old lanes no longer match daily work. The real risk isn't job loss; it's chaos from faster output without coherence. This post shows how to redesign roles around decision rights instead of job titles. You&#8217;ll learn a practical framework (Doer/Decider/Reviewer), four collaboration artifacts that prevent drift, and what PMs specifically should become.
Read this if your team is shipping faster but feels misaligned, or if you're unsure how AI changes product management fundamentally.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6UZy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6UZy!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:337343,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/186287786?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6UZy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!6UZy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F281099a7-49bb-41fb-9014-c3db2e92c343_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Collapse Is Real, And It&#8217;s Not About Job Loss</h2><p>Picture a normal week. A PM uses Claude Code to produce a working internal dashboard that had been sitting in an engineering backlog. A designer ships a prototype that already includes the awkward states users hit in production. An engineer writes three copy variants while adjusting the UI component.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;3d724dda-4fa9-4850-b49d-9f091801de37&quot;,&quot;caption&quot;:&quot;TLDR: PMs don&#8217;t need &#8220;AI that codes&#8221;; they need a delivery protocol. 
This blog explains how PMs can ship reliably with Claude Code by using plan-first gates, guardrails, Claude Code subagents, and multi-model review to turn messy tickets into clean, reviewable PRs. You&#8217;ll learn how to lead the gates and quality system so Claude Code ships safely and con&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How to Ship Reliably With Claude Code When Your Engineers Are AI Agents&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:315292999,&quot;name&quot;:&quot;Nilesh Barla&quot;,&quot;bio&quot;:&quot;I research and write stuff on Adaline.ai&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b494dad-d22a-40cf-a461-24749c055d0a_960x1280.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-24T01:00:20.743Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d11efbb8-63de-4cef-a1a5-ef2b0deed64c_1456x816.webp&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:185523000,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:73,&quot;comment_count&quot;:2,&quot;publication_id&quot;:4015259,&quot;publication_name&quot;:&quot;Adaline Labs&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Wt35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5199b386-b9f1-4343-88fd-ed804d414ec9_1001x1001.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Marc Andreessen describes this friction as a &#8220;Mexican standoff&#8221; among PMs, 
designers, and engineers&#8212;the old lanes no longer align with the actual work. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sr3T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sr3T!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sr3T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg" width="480" height="441" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:441,&quot;width&quot;:480,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Comment image, no alternative text available&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Comment image, no alternative text available" title="Comment image, no alternative text available" srcset="https://substackcdn.com/image/fetch/$s_!Sr3T!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Sr3T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f2f743-544d-46bb-9fda-b3b5458c389c_480x441.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This is boundary collapse. Execution capability spreads across roles because AI tooling makes credible artifacts cheap to produce. Anyone can now build things that used to require specialized skills. The constraint shifts from <em>who can make something</em> to <em>who can decide what should exist</em>.</p><p>Which is why the term &#8220;job loss&#8221; misses what&#8217;s happening week to week. The near-term change is &#8220;task reshuffling.&#8221; Some tasks disappear, many get faster, and most get rebundled into new workflows and expectations. 
Work reorganizes around different constraints: decision rights, coherence, and accountability.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:185338497,&quot;url&quot;:&quot;https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom&quot;,&quot;publication_id&quot;:10845,&quot;publication_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8MSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;title&quot;:&quot;Marc Andreessen: The real AI boom hasn&#8217;t even started yet&quot;,&quot;truncated_body_text&quot;:&quot;Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). In this conversation, we dig into why we&#8217;re living through a unique and one of the most incredible times in history, and what comes next.&quot;,&quot;date&quot;:&quot;2026-01-29T13:32:06.471Z&quot;,&quot;like_count&quot;:78,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:1849774,&quot;name&quot;:&quot;Lenny Rachitsky&quot;,&quot;handle&quot;:&quot;lenny&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/afba5161-65bb-4d99-8d6b-cce660917fa1_1540x1540.png&quot;,&quot;bio&quot;:&quot;Writing &#8226; Angel investing &#8226; Advising&quot;,&quot;profile_set_up_at&quot;:&quot;2021-05-01T23:55:21.518Z&quot;,&quot;reader_installed_at&quot;:&quot;2021-12-15T18:09:25.096Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:247600,&quot;user_id&quot;:1849774,&quot;publication_id&quot;:10845,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:10845,&quot;name&quot;:&quot;Lenny's 
Newsletter&quot;,&quot;subdomain&quot;:&quot;lenny&quot;,&quot;custom_domain&quot;:&quot;www.lennysnewsletter.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Deeply researched no-nonsense product, growth, and career advice&#8212;newsletter, podcast, community, and living library&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png&quot;,&quot;author_id&quot;:1849774,&quot;primary_user_id&quot;:1849774,&quot;theme_var_background_pop&quot;:&quot;#f47c55&quot;,&quot;created_at&quot;:&quot;2019-06-01T15:35:37.885Z&quot;,&quot;email_from_name&quot;:&quot;Lenny's Newsletter&quot;,&quot;copyright&quot;:null,&quot;founding_plan_name&quot;:&quot;Insider Tier&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;lennysan&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:10000,&quot;status&quot;:{&quot;bestsellerTier&quot;:10000,&quot;subscriberTier&quot;:10,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:10000},&quot;paidPublicationIds&quot;:[3525780,35345,1243269,16907,2217127,1548028,218501,313411,46510,1163860,1435249,1256656,10025,260347],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;podcast&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" 
src="https://substackcdn.com/image/fetch/$s_!8MSN!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F441213db-4824-4e48-9d28-a3a18952cbfc_592x592.png" loading="lazy"><span class="embedded-post-publication-name">Lenny's Newsletter</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title-icon"><svg width="19" height="19" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
  <path d="M3 18V12C3 9.61305 3.94821 7.32387 5.63604 5.63604C7.32387 3.94821 9.61305 3 12 3C14.3869 3 16.6761 3.94821 18.364 5.63604C20.0518 7.32387 21 9.61305 21 12V18" stroke-linecap="round" stroke-linejoin="round"></path>
  <path d="M21 19C21 19.5304 20.7893 20.0391 20.4142 20.4142C20.0391 20.7893 19.5304 21 19 21H18C17.4696 21 16.9609 20.7893 16.5858 20.4142C16.2107 20.0391 16 19.5304 16 19V16C16 15.4696 16.2107 14.9609 16.5858 14.5858C16.9609 14.2107 17.4696 14 18 14H21V19ZM3 19C3 19.5304 3.21071 20.0391 3.58579 20.4142C3.96086 20.7893 4.46957 21 5 21H6C6.53043 21 7.03914 20.7893 7.41421 20.4142C7.78929 20.0391 8 19.5304 8 19V16C8 15.4696 7.78929 14.9609 7.41421 14.5858C7.03914 14.2107 6.53043 14 6 14H3V19Z" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><div class="embedded-post-title">Marc Andreessen: The real AI boom hasn&#8217;t even started yet</div></div><div class="embedded-post-body">Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). In this conversation, we dig into why we&#8217;re living through a unique and one of the most incredible times in history, and what comes next&#8230;</div><div class="embedded-post-cta-wrapper"><div class="embedded-post-cta-icon"><svg width="32" height="32" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
  <path classname="inner-triangle" d="M10 8L16 12L10 16V8Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><span class="embedded-post-cta">Listen now</span></div><div class="embedded-post-meta">3 months ago &#183; 78 likes &#183; Lenny Rachitsky</div></a></div><p>More output arrives first. Misalignment follows quietly.</p><p>This blog shows how to redesign roles around outcomes and decision rights.</p><h2>What Exactly Is Collapsing</h2><p>This section exists to make the blur legible. Without a simple map, teams argue about identity and titles. A clear map keeps the conversation on work, ownership, and outcomes. Here are the three areas where boundaries are collapsing. </p><h3>PM and Engineer</h3><ul><li><p>Drafting specs that include edge cases and acceptance criteria so engineers can execute with less back-and-forth.</p></li><li><p>Producing a clickable demo or internal proof that narrows the scope before a build starts.</p></li><li><p>Turning raw customer feedback into a structured backlog that encodes tradeoffs and sequencing.</p></li></ul><p>The PM now produces technical artifacts. The engineer now shapes product scope.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;bcbfc468-4c50-4e53-aa11-aa504e636929&quot;,&quot;caption&quot;:&quot;Yesterday, I watched a new podcast from Lenny Rachitsky. The podcast interviewed Asha Sharma (CVP of AI Platform at Microsoft). One thing that fascinated me was that products are transitioning from artifacts to organisms because of AI agents. 
This idea made me research about this article.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;From Artifacts to Organisms: Supercharging Development with Claude Code's Agentic Context Engineering&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:315292999,&quot;name&quot;:&quot;Nilesh Barla&quot;,&quot;bio&quot;:&quot;I research and write stuff on Adaline.ai&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b494dad-d22a-40cf-a461-24749c055d0a_960x1280.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-08-29T11:38:29.695Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!nIg2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83a5de0d-72f1-4653-9a64-ba461931958b_4630x2595.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://labs.adaline.ai/p/context-engineering-with-claude-code&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:172248922,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:156,&quot;comment_count&quot;:0,&quot;publication_id&quot;:4015259,&quot;publication_name&quot;:&quot;Adaline Labs&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Wt35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5199b386-b9f1-4343-88fd-ed804d414ec9_1001x1001.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3>Engineer and Designer</h3><ul><li><p>Writing multiple copy directions and microcopy variants while iterating on UI behavior.</p></li><li><p>Exploring interaction options and edge states fast enough that design 
intent and feasibility converge earlier.</p></li></ul><p>The engineer now makes content decisions. The designer now navigates technical constraints.</p><h3>Designer and PM</h3><ul><li><p>Synthesizing research notes into themes, risks, and decision-ready narratives.</p></li><li><p>Writing onboarding language and positioning that stays consistent with the product&#8217;s mental model.</p></li></ul><p>The designer now structures strategic inputs. The PM now produces user-facing language.</p><h2>The Hidden Failure Mode: More Throughput, Less Coherence</h2><p>Here is how it breaks. Your onboarding flow promises "get set up in 5 minutes." Your pricing page emphasizes "enterprise-grade control." Your settings screen adopts a new toggle pattern in one section while keeping old dropdowns in another. Each decision was reasonable. But the composite is confusing. </p><p>Output rises immediately. Coherence degrades quietly. 
Coherence is consistency across:</p><ul><li><p>UX patterns and interaction language,</p></li><li><p>Positioning and copy,</p></li><li><p>Metrics definitions and measurement,</p></li><li><p>Decision logic and constraints.</p></li></ul><p>When more people can produce product artifacts quickly, local changes accumulate. You get onboarding that promises one thing while pricing implies another. You get settings that adopt new patterns in one area and old patterns in the rest. None of these are &#8220;bugs.&#8221; They are coordination debt.</p><p>Teresa Torres has been explicit about a related trap: delivery pressure can crowd out discovery discipline, and faster building can pull teams toward what is easy to produce rather than what is valuable to learn.</p><div id="youtube2--xqIZEPS7Bc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;-xqIZEPS7Bc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/-xqIZEPS7Bc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>You can also predict the coordination tax with old software lessons: as contributors increase, communication paths multiply, and misalignment becomes a first-order cost.</p><p>So the scarce resource shifts. Execution gets cheaper. Decision clarity and product coherence become the limiting factors.</p><p>If coherence is not owned, the product becomes a collage.</p><h2>Redesign The Contract: Outcomes, Interfaces, Accountability</h2><p>AI tools let more people create product artifacts&#8212;specs, prototypes, demos, copy. This only works if your org has clear rules for outcomes, decision rights, and accountability. 
Without that clarity, your product becomes whatever the last person to touch it decided it should be.</p><p><strong>Let&#8217;s make this concrete first.</strong></p><p>Say your outcome is activation for new workspace admins, which moves from 32% to 42% in 8 weeks.</p><p>Your constraints: </p><ol><li><p>You cannot change core navigation. </p></li><li><p>You cannot add more than 2 new backend calls. </p></li><li><p>The brand requires keeping the existing voice.</p></li></ol><p>Decision rights: </p><ul><li><p>PM decides scope cuts if the timeline slips. </p></li><li><p>Designer decides which of the three onboarding approaches to emphasize. </p></li><li><p>Engineer decides whether to instrument activation via backend events or client-side tracking.</p></li></ul><p>Artifacts you&#8217;ll produce:</p><ol><li><p>Decision log entry explaining why you chose tutorial overlay over empty states. </p></li><li><p>System map showing the three critical activation moments where users drop off. </p></li><li><p>Tradeoff table showing you prioritized speed over visual polish. </p></li><li><p>Launch narrative positioning this as &#8220;faster time to value.&#8221;</p></li></ol><p>That&#8217;s the contract in action. Here&#8217;s how to structure it for any outcome.</p><div><hr></div><p><strong>Start with outcomes.</strong> Outcomes are measurable, scoped, and tied to a segment. Keep them <strong>dual-sided</strong> with a user signal and a business signal.</p><p><strong>Outcome examples:</strong></p><ul><li><p>Activation for new workspace admins moves from 32% to 42% in 8 weeks. <strong>User</strong>: Faster value realization | <strong>Business</strong>: Higher retention cohort.</p></li><li><p>Trial to paid conversion for the SMB persona improves by 3 points this quarter. <strong>User</strong>: Clearer value prop | <strong>Business</strong>: Revenue growth.</p></li><li><p>Support tickets tagged &#8220;confusing pricing&#8221; drop by 25% in 6 weeks. 
<strong>User</strong>: Better understanding | <strong>Business</strong>: Lower support costs.</p></li></ul><div><hr></div><p><strong>Define interfaces next.</strong> Interfaces describe artifacts, not activities. This prevents &#8220;I&#8217;m doing discovery work&#8221; from becoming a catch-all for undefined contributions.</p><p><strong>Interface by function:</strong></p><ul><li><p>PM produces a decision narrative, sequencing, and a tradeoff record.</p></li><li><p>Design produces an interaction spec, principles for consistency, and edge state intent.</p></li><li><p>Engineering produces constraints, system invariants, and feasibility boundaries.</p></li></ul><p>These are the artifacts each role is uniquely positioned to create. When contribution happens outside these lanes, decision rights determine who approves the work.</p><div><hr></div><p><strong>Then set accountability.</strong> Each outcome needs one DRI (Directly Responsible Individual). Contributors can be many. Reviewers should be explicit.</p><p>The DRI model works because it separates decision authority from contribution. Many people can produce artifacts. Many people can provide input. 
But only one person decides whether the outcome is achieved and makes the final call when trade-offs conflict.</p><div><hr></div><p><strong>Use a simple contract template:</strong></p><p><strong>Outcome</strong><br>[Metric, segment, timeframe]</p><p><strong>Constraints</strong><br>[UX invariants, technical invariants, brand and compliance constraints]</p><p><strong>Decision rights</strong><br>[Who decides scope, who decides UX tradeoffs, who decides system tradeoffs]</p><p><strong>Artifacts</strong><br>[Decision log entry, system map update, tradeoff table row, launch narrative]</p><div><hr></div><blockquote><p>Everyone can build; not everyone can decide.</p></blockquote><div><hr></div><p><strong>Copyable contract statement</strong></p><p>We will optimize for <strong>[OUTCOME: metric, segment, timeframe]</strong>, within <strong>[CONSTRAINTS: UX/tech/brand limits]</strong>. The DRI <strong>[NAME]</strong> owns the decision. Contributors <strong>[NAMES]</strong> produce <strong>[ARTIFACTS: list]</strong>. Reviewers <strong>[NAMES]</strong> validate coherence and constraints before the product surface moves.</p><div><hr></div><p>This contract only holds if it is enforced in writing. Verbal agreements decay the moment priorities shift or new people join. The artifacts from the next section make the contract real.</p><h2>What PMs Should Become In This New World</h2><p>The boundary collapse forces a career clarification. Execution is no longer the moat. The moat is judgment that other people can see and follow.</p><p>The PM job does not disappear. It gets narrower in definition and harder in standards. Here is what that means in practice.</p><h3>PM as Coherence Architect</h3><p>You own sequencing, narrative, and tradeoffs. This means you make parallel work feel like one release. When engineering ships three features in parallel, you write the launch narrative that ties them together. 
When design proposes a new interaction pattern, you check whether it conflicts with existing patterns. When someone updates copy, you verify it matches the product&#8217;s voice and mental model. You are the person who says &#8220;that does not belong here&#8221; and can explain why.</p><p><strong>Example</strong>: Engineering ships a dashboard feature, a new API endpoint, and a billing update in the same sprint. Separately, they look like infrastructure work. Your job is to write the release narrative that positions them as &#8220;Enterprise-ready workspace controls&#8221; so customers see one coherent capability, not three unrelated updates.</p><h3>PM as Systems Thinker</h3><p>You understand constraints well enough to make realistic decisions. You should be able to read a system map and spot where tight coupling will create drag. You should be able to ask &#8220;what breaks if we add this?&#8221; and understand the answer. You should know the difference between a database constraint and a business rule well enough to know which one can flex under pressure.</p><p><strong>Example</strong>: Design proposes a &#8220;duplicate workspace&#8221; feature. You check the system map and notice workspace creation is tied to billing events, which means duplication would trigger unexpected charges. You spot this before engineering starts building.</p><h3>PM as Leverage Designer</h3><p>You design workflows that scale decision-making across teams. You create decision templates that five teams can reuse without asking for clarification. You build artifact formats that capture why you decided X without requiring synchronous meetings. You set up review rituals that catch drift before users experience it.</p><p>The goal is not to make every decision yourself. 
The goal is to make good decisions repeatable and bad decisions impossible.</p><p><strong>Example</strong>: You create a feature proposal template that includes outcome metric, constraints, three alternatives considered, and decision criteria. Now, when anyone proposes a feature, the conversation starts from tradeoffs rather than lobbying.</p><h3>Capability Self-Check</h3><p>Here is how to assess whether you are operating at this level:</p><ul><li><p>I can write a one-paragraph tradeoff explanation that engineering, design, and execs accept as complete.</p></li><li><p>I can look at a system map and identify where coupling will create future drag.</p></li><li><p>I can design a decision artifact that five contributors can use without clarification.</p></li></ul><p>The PM job is not disappearing. When execution is cheap, judgment becomes valuable&#8212;but only when it is visible, structured, and repeatable.</p><p>That is the new PM superpower: making decisions legible, coherence enforceable, and good judgment scalable across teams that move faster than org charts can keep up with.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Building AI Products, Not Prototypes | Takeaways For Founders and Product Leaders]]></title><description><![CDATA[A production-first guide to opinionated workflows, environmental control, and evals that keep AI features reliable.]]></description><link>https://labs.adaline.ai/p/building-ai-products-not-prototypes</link><guid isPermaLink="false">https://labs.adaline.ai/p/building-ai-products-not-prototypes</guid><dc:creator><![CDATA[Arsh Shah Dilbagi]]></dc:creator><pubDate>Wed, 28 Jan 2026 14:02:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/17e9354a-9d64-4209-ab5e-52c1b808598e_5120x2880.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR</strong>: This blog explains how to turn AI demos into durable products by choosing opinionated workflows, controlling the environment, designing for user understanding, and planning for maintenance. 
It covers data reality, dual-system architecture, evals, framework tradeoffs, and task decomposition&#8212;helping teams ship more reliable, debuggable, scalable AI features.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UL9P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UL9P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png" width="1456" height="546" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:243466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/184858162?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UL9P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!UL9P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee37f9c9-e173-4634-b193-9688bd2038ad_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Introduction</h2><div id="youtube2-txRZmg_ITio" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;txRZmg_ITio&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/txRZmg_ITio?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Founder Intro: Building AI Products, Not Prototypes</h2><p>One of the motivations behind Adaline Applied was simple: <strong>there&#8217;s a growing gap between what AI can demo and what AI can actually sustain in the real world.</strong></p><p>Every week, we see 
impressive prototypes. Agents that look magical. Systems that feel powerful in isolation. And yet, when you talk to founders and operators trying to ship these systems into production, you hear a very different story &#8212; one defined by edge cases, trust issues, brittle workflows, and products that stall after their first moment of excitement.</p><p>Panel 2 was designed to sit directly in that tension.</p><p>Rather than asking <em>what&#8217;s possible</em>, we wanted to ask a harder question: <strong>What does it actually take to turn an AI prototype into a real product?</strong></p><p>To explore that, we brought together builders operating at very different layers of the stack:</p><ul><li><p><strong><a href="https://www.linkedin.com/in/aidenbai/">Aiden Bai</a></strong>, Co-founder &amp; CEO at <strong>Same</strong>, building AI-native products with speed and opinionation from day one</p></li><li><p><strong><a href="https://www.linkedin.com/in/joshpxyne/">Josh Payne</a></strong>, Founder &amp; CEO at <strong>Coframe</strong>, translating AI capability into measurable business outcomes</p></li><li><p><strong><a href="https://www.linkedin.com/in/thesephist/">Linus Lee</a></strong>, Engineer, AI at <strong>Thrive Capital</strong>, thinking deeply about interfaces, cognition, and long-term product truth</p></li><li><p><strong><a href="https://www.linkedin.com/in/mattrasto/">Matthew Rastovac</a></strong>, Director of AgentForce at <strong>Salesforce</strong>, shipping AI systems inside large, high-trust enterprise environments</p></li></ul><p>What emerged was not a checklist or a framework &#8212; but a shared set of hard-earned lessons.</p><p>Again and again, the conversation returned to the same idea:</p><blockquote><p><strong>Most AI failures aren&#8217;t caused by weak models.</strong><br><strong>They&#8217;re caused by weak product decisions.</strong></p></blockquote><p>The sections that follow unpack what that really means in practice &#8212; why generality 
creates fragility, why user understanding matters as much as accuracy, why maintenance dominates prototyping, and why the hardest problems are still hard.</p><p>This write-up isn&#8217;t meant to prescribe a single way to build AI products. It&#8217;s meant to surface the patterns that consistently separate demos from durable systems.</p><p>If you&#8217;re building with AI today &#8212; or planning to &#8212; my hope is that this panel helps you make better decisions about <em>what to build</em>, <em>how to build it</em>, and <em>when not to ship yet</em>.</p><div><hr></div><h2>1. Generality Is Expensive &#8212; Opinionated Workflows Win</h2><p>One of the strongest points of alignment across the panel was that <strong>generality is not a free abstraction</strong>. It has real, compounding cost&#8212;and that cost shows up fastest once a product leaves the demo environment.</p><p>Early on, many AI teams are drawn to building systems that are:</p><ul><li><p>Highly flexible.</p></li><li><p>Broadly applicable.</p></li><li><p>Capable of handling many use cases.</p></li><li><p>Impressive in demos.</p></li></ul><p>This instinct makes sense. General systems feel powerful. They look future-proof. They suggest unlimited upside.</p><p>But as multiple speakers emphasized, <strong>that flexibility quickly becomes a liability once real users are involved</strong>.</p><h3>When Systems Are Too General, the Model Becomes the Product Designer</h3><p>As Aiden Bai pointed out, overly general systems force the model to make decisions the product team hasn&#8217;t made. The model must infer:</p><ul><li><p>What does the user actually want?</p></li><li><p>Which constraints matter?</p></li><li><p>How to sequence actions?</p></li><li><p>What does &#8220;correct&#8221; look like?</p></li></ul><p>At the same time, the user is left guessing how to use the product successfully. 
The result isn&#8217;t intelligence&#8212;it&#8217;s ambiguity.</p><p>General systems push cognitive load onto both sides:</p><ul><li><p>The model gets too many degrees of freedom.</p></li><li><p>The user gets too little guidance.</p></li></ul><p>Neither wins consistently.</p><h3>In Production, Flexibility Turns Into Fragility</h3><p>This tradeoff becomes even more pronounced at scale.</p><p>Matthew Rastovac, speaking from the perspective of shipping agent systems inside Salesforce, described how generality breaks down quickly in enterprise environments. The more freedom an agent has, the harder it becomes to guarantee predictable behavior&#8212;and predictability is non-negotiable when trust is on the line.</p><p>Even when a system is technically capable, inconsistent behavior erodes confidence fast. In enterprise settings, users don&#8217;t tolerate surprises&#8212;especially from software that claims intelligence.</p><h3>Generality Also Hurts Monetization</h3><p>Josh Payne highlighted a parallel failure mode from the commercial side.</p><p>At Coframe, systems designed to be flexible across many customer use cases became:</p><ul><li><p>Harder to explain.</p></li><li><p>Harder to position.</p></li><li><p>Harder to tie to concrete metrics.</p></li></ul><p>When outputs vary too widely, customers struggle to understand why the product is valuable. And if value can&#8217;t be explained, it can&#8217;t be measured&#8212;which makes it nearly impossible to sell or scale.</p><p>Generality, in this sense, doesn&#8217;t just hurt reliability. It hurts revenue.</p><h3>Opinionation Is How Products Take Control Back</h3><p>Across these anecdotes, a consistent pattern emerged:</p><blockquote><p>The more general the system, the more responsibility is abdicated to the model&#8212;and the less control the product team retains.</p></blockquote><p>By contrast, the AI products that successfully crossed from prototype to production looked very different.
They were highly opinionated.</p><p>Aiden described how real progress came not from adding flexibility, but from removing it. Teams narrowed the scope. They encoded domain assumptions directly into workflows. They removed optionality. They chose depth over breadth.</p><p>Instead of asking the model to figure everything out, they asked a different question:</p><blockquote><p>&#8220;What decisions should the product make so the model doesn&#8217;t have to?&#8221;</p></blockquote><h3>Opinionated Systems Teach Users How to Succeed</h3><p>This idea surfaced again when Linus Lee spoke about interfaces and cognition. Every AI product teaches users how to think with it&#8212;whether intentionally or not:</p><ul><li><p>General systems teach uncertainty.</p></li><li><p>Opinionated systems teach clarity.</p></li></ul><p>When workflows are explicit:</p><ul><li><p>Users learn faster.</p></li><li><p>Trust builds more quickly.</p></li><li><p>Success becomes repeatable.</p></li></ul><p>The product becomes legible instead of mysterious.</p><h3>Why Specificity Wins in the Real World</h3><p>In practice, the difference is stark.</p><p>General systems tend to:</p><ul><li><p>Produce unpredictable outputs.</p></li><li><p>Fail in subtle, hard-to-debug ways.</p></li><li><p>Create a UX that&#8217;s difficult to explain.</p></li><li><p>Erode trust through inconsistency.</p></li></ul><p>Opinionated systems tend to:</p><ul><li><p>Surface fewer failure modes.</p></li><li><p>Make success repeatable.</p></li><li><p>Clarify what &#8220;good usage&#8221; looks like.</p></li><li><p>Feel reliable even when the model isn&#8217;t perfect.</p></li></ul><p>As one speaker noted during the session:</p><blockquote><p>&#8220;Models don&#8217;t fail gracefully&#8212;products have to make them fail gracefully.&#8221;</p></blockquote><p>That only happens when constraints are intentional.</p><h3>Prototypes Need Breadth. 
Products Need Structure.</h3><p>This led to one of the clearest takeaways of the panel:</p><blockquote><p>Generality makes prototypes impressive.<br>Specificity makes products usable.</p></blockquote><p>Prototypes exist to explore what&#8217;s possible. Products exist to work&#8212;repeatedly, for real users, in real conditions.</p><p>Opinionation isn&#8217;t premature optimization. It&#8217;s the mechanism by which AI systems become dependable.</p><p>Teams that delay opinionation often end up retrofitting guardrails onto systems that were never designed to support them. Teams that embrace it early build foundations that scale.</p><h3>The Quiet Contrarian Insight</h3><p>In a landscape obsessed with flexibility and &#8220;AI that can do everything,&#8221; this panel offered a quieter, more durable insight:</p><p><strong>The path from prototype to product isn&#8217;t paved with more general intelligence. It&#8217;s paved with tighter workflows, clearer assumptions, and intentional constraints.</strong></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/building-ai-products-not-prototypes?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/building-ai-products-not-prototypes?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/building-ai-products-not-prototypes?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>2. 
Control Over the Environment Determines Feasibility</h2><p>Another sharp dividing line between prototypes and real products emerged quickly in this panel: how much control the system has over its environment.</p><p>Across very different domains&#8212;product workflows, enterprise systems, and creative tooling&#8212;the same pattern repeated:</p><blockquote><p>&#8220;AI systems perform dramatically better when the environment is constrained.&#8221;</p></blockquote><p>When teams control:</p><ul><li><p>The inputs.</p></li><li><p>The structure of the task.</p></li><li><p>The available actions.</p></li><li><p>The shape of the output.</p></li></ul><p>AI systems feel capable, reliable, and even &#8220;smart.&#8221; When they don&#8217;t, reliability collapses fast.</p><h3>AI Thrives in Structured Worlds</h3><p>Several speakers described how early success almost always happened in environments where:</p><ul><li><p>Inputs were owned or normalized.</p></li><li><p>Patterns repeated frequently.</p></li><li><p>Constraints were known ahead of time.</p></li><li><p>Failure modes were visible and enumerable.</p></li></ul><p>In these settings, models didn&#8217;t need to reason from first principles every time. They could operate within guardrails.</p><p>As Aiden Bai noted, many early prototypes feel magical precisely because they live in these controlled worlds. The system works on clean data, predictable tasks, and narrow problem definitions. It&#8217;s not that the model is unusually capable&#8212;it&#8217;s that the environment is unusually forgiving.</p><p>This creates a dangerous illusion.</p><h3>Leaving the Sandbox Is Where Things Break</h3><p>The moment these systems leave controlled environments, cracks appear.</p><p>Matthew Rastovac spoke directly to this from an enterprise perspective. Once AI systems interact with real customer data, real workflows, and real organizational complexity, unpredictability spikes. Inputs aren&#8217;t clean. Processes aren&#8217;t linear.
Edge cases aren&#8217;t rare&#8212;they&#8217;re constant.</p><p>Enterprise systems introduce:</p><ul><li><p>Inconsistent schemas.</p></li><li><p>Legacy workflows.</p></li><li><p>Partial permissions.</p></li><li><p>Conflicting sources of truth.</p></li><li><p>Human-driven exceptions.</p></li></ul><p>In those conditions, even strong models struggle&#8212;not because they lack intelligence, but because they&#8217;re being asked to operate without a stable frame of reference.</p><h3>Arbitrary Inputs Are the Enemy of Reliability</h3><p>This challenge becomes even more pronounced in domains like code, content, and knowledge work.</p><p>Josh Payne described how systems that worked well on curated examples failed once exposed to the diversity of real customer data. What looked robust in testing collapsed under the weight of:</p><ul><li><p>Messy inputs.</p></li><li><p>Inconsistent structure.</p></li><li><p>Unclear user intent.</p></li></ul><p>These failures weren&#8217;t dramatic crashes. They were subtle. 
Outputs were &#8220;almost&#8221; right&#8212;just wrong enough to erode trust.</p><p>And because the failures were inconsistent, they were hard to debug and even harder to explain to users.</p><h3>Toy Examples Hide Real Constraints</h3><p>One of the most consistent failure modes discussed on the panel was over-reliance on toy examples.</p><p>Many impressive prototypes fail because:</p><ul><li><p>They&#8217;re built on idealized data.</p></li><li><p>They assume cooperative users.</p></li><li><p>They ignore edge cases.</p></li><li><p>They avoid ambiguous scenarios.</p></li></ul><p>These prototypes answer the question: &#8220;Can the model do this?&#8221;</p><p>Production systems must answer a harder one: &#8220;Can the system do this reliably, every day, for imperfect users, under imperfect conditions?&#8221;</p><p>That gap is where most AI products fail.</p><h3>Environmental Control Beats Model Power</h3><p>A key insight from the panel was that model capability is often the wrong lever to pull.</p><p>Teams instinctively respond to failures by:</p><ul><li><p>Switching models.</p></li><li><p>Increasing context windows.</p></li><li><p>Tuning prompts.</p></li><li><p>Layering complexity.</p></li></ul><p>But as multiple speakers emphasized, these changes rarely fix the root problem. The issue is not intelligence&#8212;it&#8217;s exposure.</p><p>Systems fail because:</p><ul><li><p>They&#8217;re asked to handle arbitrary inputs.</p></li><li><p>They lack clear task boundaries.</p></li><li><p>They don&#8217;t know which failures matter.</p></li><li><p>They don&#8217;t control how work enters the system.</p></li></ul><p>As Linus Lee framed it, feasibility is not just a modeling question&#8212;it&#8217;s a product and interface question. 
The more a system can shape the environment it operates in, the less it has to rely on raw reasoning.</p><h3>Task Framing Is the Hidden Superpower</h3><p>The most successful teams on the panel didn&#8217;t try to eliminate environmental complexity. They absorbed it into the product design.</p><p>They:</p><ul><li><p>Pre-processed inputs.</p></li><li><p>Guided users into structured flows.</p></li><li><p>Constrained actions intentionally.</p></li><li><p>Limited surface area for failure.</p></li></ul><p>By doing so, they reduced the cognitive burden on the model and increased consistency for users.</p><p>The takeaway was clear:</p><blockquote><p>&#8220;Production success depends less on model power and more on environmental control and task framing.&#8221;</p></blockquote><p>When teams own the environment, AI looks capable. When they don&#8217;t, even the best models look unreliable.</p><h3>The Practical Implication</h3><p>If Section 1 argued that opinionated workflows win, Section 2 explains why.</p><p>Opinionation isn&#8217;t just about UX. It&#8217;s about feasibility.</p><p>The fastest path from prototype to product is not giving AI more freedom&#8212;it&#8217;s deciding, deliberately, where freedom is dangerous and structure is necessary.</p><h2>3. User Understanding Is as Important as Model Accuracy</h2><p>Several of the failures described on this panel had nothing to do with model quality.</p><p>The systems technically worked. The outputs were often reasonable. The models were capable. 
And yet&#8212;users still failed.</p><p>This surfaced a critical distinction the panel kept returning to:</p><blockquote><p>&#8220;Many AI product failures are UX failures, not technical ones.&#8221;</p></blockquote><h3>When Users Don&#8217;t Know How to Succeed, Accuracy Doesn&#8217;t Matter</h3><p>Across multiple anecdotes, speakers described situations where:</p><ul><li><p>Users didn&#8217;t know what to ask.</p></li><li><p>Users didn&#8217;t know how to phrase inputs.</p></li><li><p>Users didn&#8217;t understand what the system could or couldn&#8217;t do.</p></li><li><p>Users couldn&#8217;t tell whether an output was &#8220;good.&#8221;</p></li></ul><p>Even when the system produced correct or useful responses, users lacked confidence in how to use it.</p><p>As Aiden Bai explained, this creates a subtle but fatal problem: users blame themselves. When they don&#8217;t know whether they&#8217;re using a product correctly, they stop experimenting. They hesitate. Eventually, they churn&#8212;not because the system failed, but because success felt accidental.</p><h3>Hidden Mental Models Kill Adoption</h3><p>A recurring theme was the danger of implicit mental models.</p><p>Many AI products assume users will intuitively understand:</p><ul><li><p>What kinds of inputs work best.</p></li><li><p>How much context to provide.</p></li><li><p>When the model is confident versus guessing.</p></li><li><p>Where the system&#8217;s boundaries are.</p></li></ul><p>But as Linus Lee emphasized, users don&#8217;t arrive with the product team&#8217;s mental model.
When success depends on unspoken rules, only power users thrive&#8212;everyone else quietly fails.</p><p>This creates a false signal:</p><ul><li><p>The product &#8220;works&#8221; for a small group.</p></li><li><p>Engagement looks healthy at the surface.</p></li><li><p>But learning doesn&#8217;t spread.</p></li></ul><p>Without explicit guidance, the system becomes brittle outside of expert hands.</p><h3>Affordances Matter More Than Capability</h3><p>Several panelists stressed that capability is useless if affordances are unclear.</p><p>Matthew Rastovac described this tension in enterprise contexts. Even highly capable agent systems struggled when users couldn&#8217;t predict behavior or understand why certain actions were taken.</p><p>In those environments, confusion is indistinguishable from risk, and risk is unacceptable.</p><p>When affordances are unclear:</p><ul><li><p>Users hesitate to rely on outputs.</p></li><li><p>Teams introduce manual checks.</p></li><li><p>Automation stalls.</p></li><li><p>Trust erodes.</p></li></ul><p>The system doesn&#8217;t need to be perfect. It needs to be legible.</p><h3>Trust Depends on Understanding, Not Just Accuracy</h3><p>Josh Payne framed this from a commercial perspective. Customers don&#8217;t just want correct outputs. They want to understand why the product helps them.</p><p>If users can&#8217;t explain the value of a system to a colleague, adoption doesn&#8217;t spread, and renewal becomes fragile.</p><p>Trust, in this sense, isn&#8217;t about correctness alone. 
It&#8217;s about predictability, explanation, and confidence.</p><p>Users trust systems they can reason about&#8212;even if those systems are imperfect.</p><h3>&#8220;Good Usage&#8221; Must Be Taught, Not Discovered</h3><p>One of the clearest lessons from the panel was that good usage doesn&#8217;t emerge naturally in AI products.</p><p>If users must discover:</p><ul><li><p>What to ask.</p></li><li><p>How to phrase inputs.</p></li><li><p>How to evaluate outputs.</p></li><li><p>When to intervene.</p></li></ul><p>Most of them won&#8217;t.</p><p>Successful teams made good usage explicit. They:</p><ul><li><p>Constrained inputs.</p></li><li><p>Provided examples.</p></li><li><p>Guided first actions.</p></li><li><p>Surfaced boundaries clearly.</p></li></ul><p>They didn&#8217;t assume users would figure it out.</p><h3>If Users Can&#8217;t Explain It, the Product Doesn&#8217;t Exist</h3><p>This led to one of the bluntest conclusions of the panel:</p><blockquote><p>&#8220;If users can&#8217;t explain how your product helps them, the product doesn&#8217;t exist.&#8221;</p></blockquote><p>Accuracy alone doesn&#8217;t create understanding.</p><p><strong>Understanding creates confidence. Confidence creates habit</strong>.</p><p>Without that chain, even technically impressive systems fail to become products.</p><h3>The Practical Takeaway</h3><p>Model accuracy matters, but user comprehension determines whether accuracy is ever experienced.</p><p>The teams that succeeded didn&#8217;t just build smarter systems. They built systems that taught users how to succeed.</p><p>In AI products, clarity is not UX polish. It&#8217;s a core capability.</p><h2>4.
Prototypes Are Cheap &#8212; Maintenance Is the Real Cost</h2><p>One of the most sobering insights from the panel was that AI has made prototyping deceptively easy.</p><p>With modern models, teams can:</p><ul><li><p>Stand up impressive demos in days.</p></li><li><p>Chain together workflows quickly.</p></li><li><p>Simulate &#8220;end-state&#8221; product behavior early.</p></li></ul><p>This is a genuine gift. It dramatically lowers the barrier to exploration.</p><p>But as multiple speakers warned, it&#8217;s also a trap.</p><h3>The Dangerous Pattern AI Enables</h3><p>The panel described a pattern that has become increasingly common:</p><ol><li><p>Teams prototype quickly.</p></li><li><p>Early demos look strong.</p></li><li><p>Features gain internal and external momentum.</p></li><li><p>The system gets shipped.</p></li><li><p>Long-term maintenance costs quietly explode.</p></li></ol><p>Because AI prototypes look so close to finished products, teams often skip a crucial step: asking whether the system is worth maintaining.</p><p>As Aiden Bai noted, many teams now treat &#8220;we can build this&#8221; as sufficient justification to ship. But in AI, feasibility and sustainability are very different questions.</p><h3>Shipping Is a Commitment, Not a Milestone</h3><p>Once an AI feature ships, it stops being an experiment.</p><p>It becomes:</p><ul><li><p>Something users rely on.</p></li><li><p>Something customers expect to improve.</p></li><li><p>Something that must remain stable.</p></li><li><p>Something that must adapt as models change.</p></li></ul><p>Matthew Rastovac emphasized this from an enterprise perspective. In large organizations, every shipped capability creates an implicit contract. 
Even &#8220;experimental&#8221; features quickly become assumed infrastructure.</p><p>Removing or degrading them later is far harder than never shipping them at all.</p><p>The cost of reversal is high&#8212;both technically and politically.</p><h3>AI Features Age Faster Than Traditional Software</h3><p>Another key distinction surfaced on the panel: AI features don&#8217;t stay still.</p><p>Unlike traditional software, AI systems must evolve alongside:</p><ul><li><p>Changing model behavior.</p></li><li><p>Shifting user expectations.</p></li><li><p>New failure modes.</p></li><li><p>Emerging best practices.</p></li></ul><p>What worked six months ago may feel broken today&#8212;not because the system regressed, but because the surrounding ecosystem moved.</p><p>As Josh Payne pointed out, this makes AI features uniquely expensive to maintain. They require continuous reevaluation, not occasional updates.</p><p>Without active stewardship, quality decays silently.</p><h3>Debugging Gets Harder Over Time, Not Easier</h3><p>Several speakers also highlighted how maintenance cost compounds in non-obvious ways.</p><p>Early on:</p><ul><li><p>Failures are obvious.</p></li><li><p>Edge cases are limited.</p></li><li><p>The system&#8217;s behavior is still well understood.</p></li></ul><p>Over time:</p><ul><li><p>Failures become subtle.</p></li><li><p>Behavior drifts.</p></li><li><p>Assumptions break.</p></li><li><p>No one fully remembers why decisions were made.</p></li></ul><p>Debugging shifts from &#8220;what broke?&#8221; to &#8220;why does this behave like this at all?&#8221;</p><p>That transition is where many AI products stall.</p><h3>The Question Teams Rarely Ask</h3><p>All of this led to one of the most important reframes of the panel.</p><p>The real question isn&#8217;t:</p><blockquote><p>&#8220;Can we build this?&#8221;</p></blockquote><p>With modern AI, the answer is almost always yes.</p><p>The real question is:</p><blockquote><p>&#8220;Are we willing to maintain this 
for years?&#8221;</p></blockquote><p>That means being willing to:</p><ul><li><p>Own its failures.</p></li><li><p>Evolve it as models change.</p></li><li><p>Explain it to users repeatedly.</p></li><li><p>Defend it internally.</p></li><li><p>Invest in its long-term quality.</p></li></ul><p>If the answer is no, shipping the prototype is often a mistake&#8212;no matter how impressive it looks.</p><h3>A More Disciplined Definition of Speed</h3><p>This insight ties directly back to the panel&#8217;s broader theme: real speed is long-term speed.</p><p>Shipping something that creates drag six months later is not velocity. It&#8217;s debt.</p><p>The teams that succeed don&#8217;t ship fewer prototypes. They ship fewer commitments.</p><p>They explore aggressively, but commit selectively.</p><h3>The Practical Takeaway</h3><p>AI makes it easy to build things. It does not make owning them easy.</p><p>Teams that treat every prototype as a potential long-term system make different decisions:</p><ul><li><p>They constrain the scope earlier.</p></li><li><p>They delay shipping until maintenance is understood.</p></li><li><p>They design for evolution, not just launch.</p></li></ul><p>In an era where prototypes are cheap, judgment about what to ship becomes the real competitive advantage.</p><h2>5. Data Reality Beats Synthetic Optimism</h2><p>Many of the production failures discussed on this panel didn&#8217;t stem from model weakness.</p><p>They stemmed from a data mismatch.</p><p>Again and again, speakers described the same underlying issue: systems that looked impressive in controlled testing environments broke down almost immediately when exposed to real-world data.</p><p>The problem wasn&#8217;t intelligence. It was optimism.</p><h3>The Comfort of Clean Data</h3><p>AI prototypes are often built on data that is:</p><ul><li><p>Clean.</p></li><li><p>Structured.</p></li><li><p>Well-labeled.</p></li><li><p>Internally generated.</p></li><li><p>Carefully curated.</p></li></ul><p>This makes early progress feel smooth. Outputs look coherent. Failure rates appear low. The system feels &#8220;ready.&#8221;</p><p>But as Josh Payne noted, this creates a false sense of confidence. Clean data hides the very conditions that define production environments: ambiguity, inconsistency, and noise.</p><p>Synthetic data, in particular, tends to encode the assumptions of the team that generated it. That makes it useful for testing logic, but dangerous for validating feasibility.</p><h3>Real Data Is Messy &#8212; And Honest</h3><p>Once systems encounter real user data, the illusion collapses.</p><p>Matthew Rastovac described how quickly edge cases surface inside enterprise systems. Inputs arrive partially filled, inconsistently formatted, or shaped by legacy processes no one fully understands.</p><p>Data sources conflict.
Human behavior introduces exceptions that no synthetic dataset anticipates.</p><p>In those conditions:</p><ul><li><p>Models hallucinate more often.</p></li><li><p>Confidence signals break down.</p></li><li><p>Workflows fail silently.</p></li><li><p>Trust erodes.</p></li></ul><p>These failures aren&#8217;t rare. They&#8217;re immediate.</p><h3>Generalization Is Not Guaranteed</h3><p>A critical mistake surfaced repeatedly in the panel: assuming that strong performance on one dataset implies strong performance everywhere.</p><p>As Aiden Bai pointed out, model behavior is highly sensitive to distribution shifts. What works well on curated inputs can fail dramatically when:</p><ul><li><p>Vocabulary changes.</p></li><li><p>Structure degrades.</p></li><li><p>Context is incomplete.</p></li><li><p>User intent is unclear.</p></li></ul><p>Generalization is not automatic, and in many cases, it never arrives without deliberate intervention.</p><h3>Feasibility Must Be Proven Early</h3><p>One of the strongest recommendations from the panel was simple, but uncomfortable:</p><blockquote><p>&#8220;Use real data as early as possible.&#8221;</p></blockquote><p>Not after the prototype. Not after the demo. Not after initial traction.</p><p>Early feasibility checks save enormous downstream cost. 
They reveal:</p><ul><li><p>Whether the problem is actually solvable.</p></li><li><p>Where constraints need to be added.</p></li><li><p>How much preprocessing is required.</p></li><li><p>Which failure modes matter most.</p></li></ul><p>Teams that delay real-data testing often spend months optimizing systems that were never viable in the first place.</p><h3>Edge Cases Aren&#8217;t Edge Cases</h3><p>Another subtle but important point: in production, edge cases stop being edges.</p><p>Once a system is deployed:</p><ul><li><p>Rare inputs appear regularly.</p></li><li><p>Unexpected usage becomes normal.</p></li><li><p>Misuse becomes common.</p></li><li><p>Ambiguity becomes the default.</p></li></ul><p>As Linus Lee framed it, production environments don&#8217;t just surface edge cases. They invert them.</p><p>What seemed unlikely in testing becomes inevitable in the wild.</p><p>This is why synthetic optimism fails so reliably. It prepares teams for best-case scenarios in a world dominated by worst-case inputs.</p><h3>Garbage In Still Applies &#8212; Faster Than Ever</h3><p>The panel returned to an old truth, with a modern twist:</p><blockquote><p>&#8220;Garbage in, garbage out still applies. AI just makes the consequences arrive faster.&#8221;</p></blockquote><p>Bad data doesn&#8217;t just degrade performance. It accelerates failure.</p><p>Because AI systems act confidently even when they&#8217;re wrong, poor inputs don&#8217;t produce obvious crashes. They produce plausible errors&#8212;the most dangerous kind.</p><h3>The Practical Takeaway</h3><p>Teams that succeed don&#8217;t avoid messy data.
They confront it immediately.</p><p>They:</p><ul><li><p>Test with real inputs early.</p></li><li><p>Design workflows to absorb noise.</p></li><li><p>Constrain what data is allowed in.</p></li><li><p>Surface uncertainty explicitly.</p></li><li><p>Build around failure, not perfection.</p></li></ul><p>In AI products, optimism is expensive.</p><p>Reality is cheaper&#8212;if you face it early.</p><h2>6. Building an AI Product Means Building Two Things</h2><p>One of the most important conceptual frameworks to emerge from the panel was deceptively simple:</p><blockquote><p>&#8220;When you ship an AI product, you are building two systems at once.&#8221;</p></blockquote><p>Most teams only focus on the first.</p><h3>The First System: The Product Users See</h3><p>The first system is the obvious one:</p><ul><li><p>The interface.</p></li><li><p>The workflows.</p></li><li><p>The outputs.</p></li><li><p>The features customers interact with.</p></li></ul><p>This is the artifact teams demo, launch, and market. It&#8217;s where most effort is visibly spent, and where most AI conversations begin.</p><p>But as the panel made clear, this system alone is not enough.</p><h3>The Second System: The One That Keeps the First Alive</h3><p>The second system is quieter, less visible, and far more decisive.</p><p>It&#8217;s the organizational system that:</p><ul><li><p>Observes how the product behaves in the wild.</p></li><li><p>Detects when outputs degrade.</p></li><li><p>Understands why failures occur.</p></li><li><p>Enables safe iteration.</p></li><li><p>Evolves as models and user expectations change.</p></li></ul><p>This system doesn&#8217;t ship to customers, but without it, the customer-facing product inevitably decays.</p><p>As Matthew Rastovac emphasized from an enterprise standpoint, AI products don&#8217;t just require ongoing support. They require continuous interpretation.</p><p>Outputs need context. Failures need explanation. 
And teams need mechanisms to decide when a system is &#8220;good enough&#8221; versus when it&#8217;s quietly drifting.</p><h3>Why AI Products Are Fundamentally Different</h3><p>In traditional software, the rules are relatively stable:</p><ul><li><p>Logic is deterministic.</p></li><li><p>Behavior changes only when engineers change it.</p></li><li><p>Best practices evolve slowly.</p></li></ul><p>AI breaks all three assumptions.</p><p>As several speakers noted:</p><ul><li><p>Model behavior can shift without code changes.</p></li><li><p>Upgrades introduce new capabilities and new regressions.</p></li><li><p>User expectations evolve as AI becomes more commonplace.</p></li><li><p>Yesterday&#8217;s &#8220;impressive&#8221; becomes today&#8217;s &#8220;table stakes.&#8221;</p></li></ul><p>This means AI products don&#8217;t just age. They mutate.</p><p>Without a strong second system in place, teams lose the ability to reason about what&#8217;s happening inside their own product.</p><h3>The Invisible Work That Actually Determines Success</h3><p>When the panel discussed teams that successfully shipped AI products at scale, the conversation quickly moved away from prompts and models and toward internal processes.</p><p>Long-term success depended far more on:</p><ul><li><p>Observability into real-world usage.</p></li><li><p>Fast feedback loops.</p></li><li><p>Clear ownership of failure modes.</p></li><li><p>Evaluation infrastructure that evolves over time.</p></li><li><p>Teams that actively learn from mistakes.</p></li></ul><p>As Aiden Bai noted, teams that move quickly without these systems often appear productive, until suddenly they aren&#8217;t.</p><p>Progress stalls not because the product is bad, but because no one can confidently change it anymore.</p><h3>Iteration Without Understanding Is Just Thrash</h3><p>Another key insight was that iteration alone is not a virtue.</p><p>Teams can ship frequently and still move backward if they:</p><ul><li><p>Don&#8217;t understand 
why changes help or hurt.</p></li><li><p>Lack signal on the output quality.</p></li><li><p>Can&#8217;t trace failures to causes.</p></li><li><p>Don&#8217;t know which metrics actually matter.</p></li></ul><p>This is where the second system earns its keep. It transforms iteration from guesswork into learning.</p><p>As Linus Lee framed it, the real challenge isn&#8217;t building intelligence. It&#8217;s building understanding around intelligence.</p><p>Without shared understanding inside the team, velocity collapses into churn.</p><h3>Evaluation Is a Living System, Not a One-Time Setup</h3><p>Evaluation came up repeatedly as a core part of this second system, but with an important caveat.</p><p>Evals are not something you &#8220;set and forget.&#8221;</p><p>They must:</p><ul><li><p>Evolve as the product evolves.</p></li><li><p>Reflect real user behavior.</p></li><li><p>Adapt to new use cases.</p></li><li><p>Change as expectations change.</p></li></ul><p>Static evals freeze assumptions in time. Living evals encode learning.</p><p>Teams that treated evaluation as infrastructure, not tooling, were better positioned to move fast without breaking trust.</p><h3>The Real Competitive Advantage</h3><p>By the end of the discussion, a clear pattern had emerged.</p><p>The most successful AI teams weren&#8217;t the ones with:</p><ul><li><p>The biggest models.</p></li><li><p>The cleverest prompts.</p></li><li><p>The most impressive demos.</p></li></ul><p>They were the ones with:</p><ul><li><p>Tight learning loops.</p></li><li><p>Strong internal feedback.</p></li><li><p>Clear ownership.</p></li><li><p>The ability to change their product with confidence.</p></li></ul><p>In other words, they built organizations that could evolve as quickly as their technology.</p><h3>The Practical Takeaway</h3><p>AI products are not static artifacts. 
They are living systems.</p><p>And living systems require:</p><ul><li><p>Observation.</p></li><li><p>Care.</p></li><li><p>Feedback.</p></li><li><p>Adaptation.</p></li></ul><p>If you only build the product users see, you will eventually lose control of it.</p><p>If you build the second system&#8212;the one that understands, evaluates, and evolves the first&#8212;you earn the right to ship AI into the real world.</p><h2>7. Evals Are Automation &#8212; Not Truth</h2><p>Evaluations came up repeatedly on the panel, but not in the way many teams expect.</p><p>Rather than positioning evals as a silver bullet, the speakers shared a more cautious, experience-earned view:</p><blockquote><p>&#8220;Evals scale insight, but they reduce resolution.&#8221;</p></blockquote><p>They are powerful tools. They are also blunt instruments.</p><p>Understanding that tradeoff is critical to building AI products that improve over time instead of calcifying prematurely.</p><h3>What Evals Are Actually Good At</h3><p>At their best, evals do three things extremely well:</p><ul><li><p>They automate human judgment.</p></li><li><p>They enable iteration at scale.</p></li><li><p>They prevent regressions.</p></li></ul><p>Several speakers described evals as essential guardrails. 
They make sure teams don&#8217;t move backward as systems evolve.</p><p>But guardrails are not maps.</p><h3>Where Evals Quietly Fail</h3><p>The panel was equally clear about what evals don&#8217;t do well.</p><p>Evals:</p><ul><li><p>Rely on proxy signals.</p></li><li><p>Encode assumptions that may be wrong.</p></li><li><p>Flatten nuance into binary scores.</p></li><li><p>Struggle with edge cases.</p></li><li><p>Fail to capture intent, context, or taste.</p></li></ul><p>As Daksh Gupta noted elsewhere in the event, once an eval exists, teams tend to optimize for it, even when it no longer reflects reality.</p><p>What started as a helpful abstraction slowly becomes a constraint on thinking.</p><p>The risk isn&#8217;t that evals are inaccurate.</p><p>The risk is that they are confidently incomplete.</p><h3>Resolution vs Scale Is a Real Tradeoff</h3><p>A key mental model that emerged was the idea of resolution.</p><p>Human review has:</p><ul><li><p>High resolution.</p></li><li><p>Strong intuition.</p></li><li><p>Deep contextual awareness.</p></li></ul><p>But it doesn&#8217;t scale.</p><p>Evals, by contrast, have:</p><ul><li><p>Massive scale.</p></li><li><p>Consistency.</p></li><li><p>Speed.</p></li></ul><p>But low resolution.</p><p>As Linus Lee framed it during the discussion, evals compress complex judgment into simplified signals. 
That compression is useful, but it necessarily discards information.</p><p>The mistake teams make is assuming compression is harmless.</p><h3>Evals Can Freeze Bad Assumptions</h3><p>Several speakers warned about introducing evals too early.</p><p>When evals are created before:</p><ul><li><p>Failure modes are understood.</p></li><li><p>Good usage is well defined.</p></li><li><p>The product has stabilized.</p></li></ul><p>They tend to encode guesses, not knowledge.</p><p>From that point on:</p><ul><li><p>The system optimizes toward the eval.</p></li><li><p>Exploration slows.</p></li><li><p>Unexpected behaviors are suppressed.</p></li><li><p>Real learning stalls.</p></li></ul><p>What looks like progress is often just alignment with an incomplete metric.</p><h3>How Great Teams Actually Use Evals</h3><p>The most effective teams on the panel treated evals very differently.</p><p>They used evals as:</p><ul><li><p>Learning accelerators, not arbiters of truth.</p></li><li><p>Ways to scale known insights, not discover new ones.</p></li><li><p>Safety nets, not steering mechanisms.</p></li></ul><p>Human judgment remained central.</p><p>Teams continued to:</p><ul><li><p>Review real outputs.</p></li><li><p>Talk to users directly.</p></li><li><p>Interrogate surprising behavior.</p></li><li><p>Revisit eval criteria frequently.</p></li></ul><p>Evals didn&#8217;t replace judgment. They made judgments faster and more focused.</p><h3>The Real Goal of Evaluation</h3><p>This led to one of the cleanest reframes of the panel:</p><blockquote><p>&#8220;The goal of evals isn&#8217;t perfection. It&#8217;s making humans faster at understanding where models fail.&#8221;</p></blockquote><p>Perfection is a mirage. 
Understanding is durable.</p><p>When evals are used to surface where to look, not what to believe, they unlock speed without sacrificing insight.</p><h3>The Practical Takeaway</h3><p>Evals are infrastructure, not intelligence.</p><p>They are most powerful when:</p><ul><li><p>Grounded in deep domain understanding.</p></li><li><p>Updated as products evolve.</p></li><li><p>Paired with continuous human review.</p></li><li><p>Treated as provisional, not absolute.</p></li></ul><p>Teams that mistake evals for truth slow themselves down.</p><p>Teams that use evals to amplify learning move faster and with confidence.</p><h2>8. Frameworks Encode Values &#8212; Choose Carefully</h2><p>One of the quieter, but most consequential insights from the panel was that framework choice is not a neutral technical decision.</p><p>It&#8217;s philosophical.</p><p>Frameworks don&#8217;t just provide abstractions.
They encode:</p><ul><li><p>Assumptions about how work should be done.</p></li><li><p>Values about speed versus safety.</p></li><li><p>Opinions about who the product is for.</p></li><li><p>Mental models about how systems should evolve.</p></li></ul><p>When teams adopt a framework, they&#8217;re not just choosing tooling. They&#8217;re choosing a worldview.</p><h3>Frameworks Optimize for Something &#8212; Always</h3><p>Several speakers noted that most modern AI frameworks are optimized for a specific phase of development.</p><p>Common priorities include:</p><ul><li><p>Speed of iteration.</p></li><li><p>Ease of onboarding.</p></li><li><p>Approachability for new users.</p></li><li><p>Rapid prototyping.</p></li></ul><p>These are not bad goals. In fact, they&#8217;re often exactly what teams need early on.</p><p>But as the panel emphasized, those same values frequently come into conflict with what production systems require.</p><h3>What Prototyping Frameworks Often Trade Away</h3><p>Frameworks designed for speed and flexibility tend to de-emphasize:</p><ul><li><p>Robustness.</p></li><li><p>Explicit control.</p></li><li><p>Debuggability.</p></li><li><p>Long-term evolvability.</p></li></ul><p>Early on, these tradeoffs are invisible. Everything works. Changes are easy. Velocity feels high.</p><p>Over time, however, the costs surface.</p><p>As Linus Lee pointed out, abstraction layers that hide complexity also hide causality. When something goes wrong, teams struggle to understand why.</p><p>Behavior becomes emergent rather than intentional. Debugging shifts from reasoning to guesswork.</p><p>The framework didn&#8217;t break. It did exactly what it was designed to do.</p><h3>Tooling Shapes How Teams Think</h3><p>A subtle, but important point emerged during the discussion: frameworks don&#8217;t just shape systems. 
They shape teams.</p><p>They influence:</p><ul><li><p>How problems are framed.</p></li><li><p>Where teams look for solutions.</p></li><li><p>Which tradeoffs feel &#8220;normal.&#8221;</p></li><li><p>What kinds of questions get asked.</p></li></ul><p>Frameworks optimized for rapid demos encourage experimentation and breadth. Frameworks optimized for production encourage constraint, observability, and discipline.</p><p>Neither is universally correct. But mismatches are costly.</p><h3>Speed Now vs Speed Later</h3><p>Several speakers highlighted a recurring mistake: optimizing for early velocity at the expense of future movement.</p><p>Frameworks that:</p><ul><li><p>Make it easy to ship quickly.</p></li><li><p>But hard to change direction.</p></li><li><p>Or painful to evolve.</p></li></ul><p>Such frameworks often impose invisible ceilings on long-term speed.</p><p>As Aiden Bai noted earlier in the panel, the fastest teams long-term are not the ones that move fastest on day one. They&#8217;re the ones that preserve optionality.</p><p>Framework choice plays a large role in whether that optionality exists.</p><h3>Adoption Is a Commitment</h3><p>Once a framework is deeply embedded:</p><ul><li><p>Workflows form around it.</p></li><li><p>Team expertise concentrates within it.</p></li><li><p>Migration costs rise.</p></li><li><p>Architectural decisions harden.</p></li></ul><p>At that point, changing frameworks is no longer a refactor.
It&#8217;s a replatforming.</p><p>This is why the panel encouraged teams to treat framework adoption with the same seriousness as core architectural decisions.</p><h3>The Practical Reframe</h3><p>The panel offered a simple, but powerful way to think about frameworks:</p><blockquote><p>&#8220;Ask not just &#8216;What does this framework make easy?&#8217; Ask &#8216;What does it make hard?&#8217;&#8221;</p></blockquote><p>Every framework makes something difficult:</p><ul><li><p>Introspection.</p></li><li><p>Control.</p></li><li><p>Customization.</p></li><li><p>Evolution.</p></li></ul><p>Those tradeoffs only become painful when the product outgrows the framework&#8217;s original intent.</p><h3>The Practical Takeaway</h3><p>Frameworks are leverage, but leverage cuts both ways.</p><p>They can:</p><ul><li><p>Accelerate learning.</p></li><li><p>Reduce boilerplate.</p></li><li><p>Enable fast exploration.</p></li></ul><p>They can also:</p><ul><li><p>Constrain thinking.</p></li><li><p>Obscure failure modes.</p></li><li><p>Slow adaptation over time.</p></li></ul><p>Choosing a framework means choosing which problems you want to solve, and which problems you&#8217;re willing to inherit later.</p><p>In AI products, where change is constant and certainty is rare, that choice deserves more intention than it usually gets.</p><h2>9. 
Task Decomposition Beats End-to-End Autonomy</h2><p>One of the most consistent reframes on the panel challenged a question many AI teams instinctively ask:</p><blockquote><p>&#8220;Can an agent do this end-to-end?&#8221;</p></blockquote><p>The panel suggested a better one:</p><blockquote><p>&#8220;How should this task be factored between the human and the model?&#8221;</p></blockquote><p>That shift, from autonomy to decomposition, turned out to be decisive.</p><h3>End-to-End Autonomy Is a Fragile Goal</h3><p>Several speakers described early attempts to build fully autonomous agents that could:</p><ul><li><p>Take a vague input.</p></li><li><p>Reason through a complex task.</p></li><li><p>Execute multiple steps.</p></li><li><p>Deliver a finished result.</p></li></ul><p>These systems often looked impressive in demos. But they failed in production for predictable reasons.</p><p>End-to-end autonomy concentrates too much responsibility in one place:</p><ul><li><p>Intent interpretation.</p></li><li><p>Decision-making.</p></li><li><p>Execution.</p></li><li><p>Error handling.</p></li></ul><p>When something goes wrong, there&#8217;s no clear boundary for intervention. Failure becomes opaque. Trust collapses quickly.</p><p>As one panelist noted:</p><blockquote><p>&#8220;Users don&#8217;t mind AI helping. They mind AI disappearing into a black box.&#8221;</p></blockquote><h3>Decomposition Creates Control</h3><p>By contrast, the teams that found real success broke complex tasks into smaller, legible pieces.</p><p>Instead of a single autonomous flow, they designed systems with:</p><ul><li><p>Intermediate artifacts.</p></li><li><p>Assistive steps.</p></li><li><p>Explicit checkpoints.</p></li><li><p>Explainable outputs.</p></li></ul><p>Each step answered a narrower question. Each output gave the user something concrete to react to.</p><p>This approach didn&#8217;t just improve reliability. 
It improved collaboration.</p><h3>Humans Are Better Judges Than Executors</h3><p>A recurring insight was that humans and models excel at different parts of the workflow.</p><p>Models are strong at:</p><ul><li><p>Pattern recognition.</p></li><li><p>Synthesis.</p></li><li><p>Drafting.</p></li><li><p>Proposing options.</p></li></ul><p>Humans are strong at:</p><ul><li><p>Judgment.</p></li><li><p>Context.</p></li><li><p>Prioritization.</p></li><li><p>Responsibility.</p></li></ul><p>When tasks are decomposed intentionally, each party does what it does best.</p><p>As Aiden Bai pointed out earlier in the panel, systems that treat AI as a collaborator, not a replacement, tend to scale better.</p><p>Users feel in control. They understand where the system helps and where it defers.</p><h3>Explainability Drives Trust</h3><p>Matthew Rastovac emphasized that explainability isn&#8217;t a compliance requirement. It&#8217;s a usability requirement.</p><p>When users can see:</p><ul><li><p>How outputs were generated.</p></li><li><p>What assumptions were made.</p></li><li><p>Where uncertainty exists.</p></li></ul><p>They&#8217;re far more likely to trust the system, even when it makes mistakes.</p><p>Decomposed systems surface reasoning naturally, because each step has a purpose.</p><p>End-to-end systems hide reasoning, because there&#8217;s nowhere to expose it without breaking the illusion of autonomy.</p><h3>Adoption Follows Legibility</h3><p>Another theme that emerged was adoption speed.</p><p>Systems that relied on full autonomy:</p><ul><li><p>Required more onboarding.</p></li><li><p>Triggered more skepticism.</p></li><li><p>Produced more hesitation.</p></li></ul><p>Systems that offered assistance in steps:</p><ul><li><p>Felt safer.</p></li><li><p>Were easier to learn.</p></li><li><p>Integrated more naturally into existing workflows.</p></li></ul><p>As Josh Payne noted earlier in the panel, adoption isn&#8217;t about how powerful a system is.</p><p>It&#8217;s about how easily 
users can see themselves using it successfully.</p><h3>Human + AI Beats AI Alone</h3><p>Across anecdotes and domains, the conclusion was consistent:</p><blockquote><p>&#8220;Human + AI systems outperform AI-only systems when boundaries are explicit.&#8221;</p></blockquote><p>Explicit boundaries:</p><ul><li><p>Clarify responsibility.</p></li><li><p>Reduce surprise.</p></li><li><p>Enable graceful failure.</p></li><li><p>Preserve user agency.</p></li></ul><p>Autonomy can be added later, once trust, understanding, and structure exist.</p><h3>The Practical Takeaway</h3><p>The question isn&#8217;t whether AI can do something end-to-end.</p><p>It&#8217;s whether it should.</p><p>Teams that default to decomposition:</p><ul><li><p>Ship faster.</p></li><li><p>Build trust earlier.</p></li><li><p>Adapt more easily.</p></li><li><p>Avoid catastrophic failure.</p></li></ul><p>In AI products, autonomy is not the starting point. It&#8217;s the reward for getting everything else right.</p><h2>10. The Hardest Problems Are Still Hard</h2><p>The panel closed on a sobering, but ultimately empowering truth: some problems are still genuinely hard.</p><p>Not hard because teams lack talent. 
Not hard because models aren&#8217;t improving.</p><p>Hard because the problems themselves sit at the edge of what today&#8217;s systems can reliably handle.</p><p>And acknowledging that reality turned out to be a strength, not a weakness.</p><h3>Value and Difficulty Are Still Correlated</h3><p>Several speakers noted a pattern that can be uncomfortable in an era of rapid progress.</p><p>The most valuable problems tend to be:</p><ul><li><p>Deeply contextual.</p></li><li><p>Poorly structured.</p></li><li><p>Full of ambiguity.</p></li><li><p>Dependent on human judgment.</p></li><li><p>Embedded in messy real-world systems.</p></li></ul><p>These are exactly the problems where AI looks promising, and where it most often breaks down in production.</p><p>As Linus Lee framed it earlier, intelligence alone doesn&#8217;t solve these problems.</p><p>They require alignment between models, interfaces, workflows, and human expectations, and that alignment is still hard to achieve.</p><h3>Model Capability Has Limits &#8212; And That&#8217;s Okay</h3><p>The panel was notably clear-eyed about current model limitations.</p><p>Despite dramatic improvements:</p><ul><li><p>Reasoning degrades under uncertainty.</p></li><li><p>Long-horizon tasks remain fragile.</p></li><li><p>Edge cases dominate real usage.</p></li><li><p>Confidence often exceeds correctness.</p></li></ul><p>Pretending these limits don&#8217;t exist leads teams to ship systems that fail silently, and damage trust in the process.</p><p>As Matthew Rastovac noted from an enterprise lens, realism about limitations is often what unlocks adoption.</p><p>Users are far more forgiving of systems that clearly communicate what they can&#8217;t do than systems that promise everything and fail unpredictably.</p><h3>Exploration Is Not the Same as Production</h3><p>A crucial distinction emerged between exploration and production.</p><p>Exploration:</p><ul><li><p>Tolerates failure.</p></li><li><p>Values learning.</p></li><li><p>Embraces 
uncertainty.</p></li><li><p>Rewards ambition.</p></li></ul><p>Production:</p><ul><li><p>Demands reliability.</p></li><li><p>Requires accountability.</p></li><li><p>Exposes weaknesses.</p></li><li><p>Punishes overreach.</p></li></ul><p>The panel emphasized that confusing these two modes is one of the most common causes of AI product failure.</p><p>Exploration is valuable. But production requires restraint.</p><h3>Honest Feasibility Checks Save Time</h3><p>Several speakers described projects that only succeeded once teams stopped asking &#8220;How do we make this work?&#8221; and started asking &#8220;Should this exist right now?&#8221;</p><p>That shift unlocked better decisions:</p><ul><li><p>Reframing the problem.</p></li><li><p>Narrowing scope.</p></li><li><p>Delaying automation.</p></li><li><p>Changing the abstraction.</p></li><li><p>Temporarily walking away.</p></li></ul><p>This wasn&#8217;t failure. It was judgment.</p><p>As Aiden Bai put it earlier in the session, knowing when not to ship is just as important as knowing how to ship quickly.</p><h3>Walking Away Is Sometimes the Fastest Path Forward</h3><p>One of the most counterintuitive takeaways of the panel was that walking away can be a form of progress.</p><p>Teams that succeed long-term:</p><ul><li><p>Revisit problems as models evolve.</p></li><li><p>Reattempt challenges with better tools.</p></li><li><p>Recognize when timing is wrong.</p></li><li><p>Preserve optionality instead of forcing solutions.</p></li></ul><p>Walking away doesn&#8217;t mean abandoning ambition.</p><p>It means sequencing it correctly.</p><h3>The Mature View of AI Product Development</h3><p>By the end of the discussion, a clear philosophy had emerged.</p><p>The best teams:</p><ul><li><p>Push hard where leverage exists.</p></li><li><p>Design carefully where risk is high.</p></li><li><p>Accept limits without resignation.</p></li><li><p>Combine optimism with discipline.</p></li></ul><p>They don&#8217;t mistake possibility for 
readiness.</p><h3>The Final Takeaway</h3><p>AI is expanding what&#8217;s possible, rapidly.</p><p>But not everything possible today is viable today.</p><p>The teams that win are not the ones who chase the hardest problems blindly.</p><p>They&#8217;re the ones who:</p><ul><li><p>Understand the limits of current models.</p></li><li><p>Respect domain complexity.</p></li><li><p>Choose the right problems at the right time.</p></li></ul><p>In an industry driven by acceleration, the panel offered a grounding reminder:</p><p>Progress comes not just from pushing forward, but from knowing when to pause, reframe, and return stronger later.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[How to Ship Reliably With Claude Code When Your Engineers Are AI Agents]]></title><description><![CDATA[A PM-friendly playbook for plan-first agentic development using subagents, guardrails, and multi-model review to turn tickets into safe pull requests.]]></description><link>https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code</link><guid isPermaLink="false">https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code</guid><dc:creator><![CDATA[Nilesh Barla]]></dc:creator><pubDate>Sat, 24 Jan 2026 01:00:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d11efbb8-63de-4cef-a1a5-ef2b0deed64c_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TLDR:</strong> PMs don&#8217;t need &#8220;AI that codes&#8221;; they need a delivery protocol. This blog explains how PMs can ship reliably with Claude Code by using plan-first gates, guardrails, Claude Code subagents, and multi-model review to turn messy tickets into clean, reviewable PRs.
You&#8217;ll learn how to lead the gates and quality system so Claude Code ships safely and consistently.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://go.adaline.ai/rPUz2SX" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qa1j!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qa1j!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png" width="1200" height="450" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:546,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:288175,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://go.adaline.ai/rPUz2SX&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/185523000?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qa1j!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 424w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 848w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 1272w, https://substackcdn.com/image/fetch/$s_!qa1j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8736d371-ec15-49dd-9561-9d56b11437e8_2160x810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Why PMs Need a Delivery Protocol for Agentic Engineering</h2><p>Let me start off with a scenario. </p><p>Let&#8217;s assume yesterday&#8217;s ticket is three lines long and slightly wrong. The AI agent grabs it anyway, starts coding immediately, and opens a PR that &#8220;looks&#8221; complete. Then you find that the diff is noisy, the intent is unclear, and the tests are either missing or <strong>irrelevant</strong>. Engineering does what engineering always does: they don&#8217;t trust it, they ask for a rewrite, and you spend your afternoon translating ambiguity into something reviewable.</p><p>That failure mode is not about capability. It is about leadership. As an AI PM, it is okay not to be an expert at coding, but not being a good leader isn&#8217;t. 
If your team is still deciding which coding agent to standardize on, start with our <a href="https://labs.adaline.ai/p/claude-code-vs-openai-codex">Claude Code vs OpenAI Codex comparison</a>.</p><p>In <strong>agentic engineering</strong>, PMs are no longer just managing people&#8217;s throughput. You are managing a delivery system&#8217;s production reliability, i.e., how predictable, governable, and reviewable work is under speed. The fix is not &#8220;AI that codes.&#8221; It is <strong>the PM Build Protocol</strong>: a plan-first shipping workflow that turns ambiguous intent into structured execution. <a href="https://code.claude.com/docs/en/common-workflows#use-plan-mode-for-safe-code-analysis">Plan Mode</a> in Claude Code exists specifically to force safe analysis and requirement clarification before changes begin. </p><p>To enable Plan Mode, use the following command:</p><p><code>claude --permission-mode plan</code></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZQxk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZQxk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 424w, https://substackcdn.com/image/fetch/$s_!ZQxk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 848w, 
https://substackcdn.com/image/fetch/$s_!ZQxk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 1272w, https://substackcdn.com/image/fetch/$s_!ZQxk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZQxk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png" width="1456" height="551" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:551,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:708026,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/185523000?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZQxk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 424w, https://substackcdn.com/image/fetch/$s_!ZQxk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 
848w, https://substackcdn.com/image/fetch/$s_!ZQxk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 1272w, https://substackcdn.com/image/fetch/$s_!ZQxk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F118d26d0-72a0-4d52-adbb-18357536fdb2_1708x646.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Plan mode in Claude Code.</em></figcaption></figure></div><p>If you recognize these symptoms, you need a 
protocol, not more prompts:</p><ul><li><p>PRs are large, noisy, and hard to review.</p></li><li><p>Engineers say, &#8220;This doesn&#8217;t match the ticket,&#8221; even when it compiles.</p></li><li><p>AI code review becomes a vibes debate instead of a checklist.</p></li><li><p>Reliability issues show up late because verification is not enforced.</p></li><li><p><strong>PM time shifts from product decisions to cleanup and re-explaining intent.</strong></p></li></ul><p>The workflow is: start from the ticket, pass through a <strong>plan gate</strong>, apply <strong>guardrails</strong>, run <strong>subagent review</strong>, run <strong>multi-model review</strong>, and <strong>then open the PR</strong>.</p><p>Open the pull request only after the plan, the guardrails, and the reviews have all aligned the change with the ticket. </p><p>In the next sections, we will operationalize each gate&#8212;how to run Plan Mode as the approval boundary, how to encode guardrails, how to use Claude Code subagents for structured review, and how to add multi-model review so humans only see clean, trustworthy diffs. </p><h2>Plan Mode With Claude Code to Turn a Ticket Into an Execution-Ready Plan</h2><p>Plan Mode is the first place where agentic delivery becomes governable. It is your <strong>go/no-go gate</strong>: no code changes until the model can produce an execution-ready plan that a human can review and approve. Claude Code is explicitly designed to support plan-first behavior before taking actions.</p><p>In plain PM terms, <strong>plans are the unit of work</strong>. Tickets are intent. Diffs are output. </p><p>A plan is the path that makes intent legible and output reviewable. 
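</p><p>Plan Mode can also be made the default for a repository, so every session starts in analysis-only mode without passing a flag each time. A minimal sketch of a project-level <code>.claude/settings.json</code>, following Claude Code&#8217;s documented settings schema (treat the exact file contents as illustrative for your setup):</p><pre><code>{
  "permissions": {
    "defaultMode": "plan"
  }
}</code></pre><p>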
When you treat the plan as the artifact&#8212;especially when the input is a <strong>Linear</strong> issue&#8212;you stop &#8220;AI thrash&#8221; early, and you make engineering trust possible.</p><h3>Plan Output Contract</h3><ul><li><p>Goals and non-goals are stated in one sentence each.</p></li><li><p>Scope boundaries that define what will not be changed.</p></li><li><p>Files or components likely to be touched and why.</p></li><li><p>Assumptions and open questions, labeled as blocking vs non-blocking.</p></li><li><p>Acceptance criteria rewritten as checkboxes that the PR must satisfy.</p></li><li><p>Test approach mapping each acceptance criterion to a test or verification step.</p></li><li><p>Rollout and rollback plan, including flags, monitoring, and safe failure behavior.</p></li></ul><p><strong>Copy/paste prompt box</strong>:</p><pre><code><code>You are in Plan Mode. Do not modify code.
Use the Plan Output Contract format exactly (7 bullets).
Input: &lt;paste Linear ticket + any constraints&gt;.
Ask only blocking questions; if none, proceed to the plan.
Name files/components you expect to touch and why.
List tests and rollout/rollback steps tied to acceptance criteria.
For any Claude Code feature/workflow claim, cite an official Anthropic/Claude Code source.
</code></code></pre><p>Output: An execution-ready plan that can be approved like a spec, then handed to the agent to implement with guardrails.</p><h2>Guardrails That Make AI Coding Reliable in Production</h2><p>Guardrails are how you convert agent autonomy into production reliability. In practice, guardrails are concrete constraints&#8212;permissions, scoped access, allowed tools/commands, data-handling boundaries, and mandatory checks that must pass before work is considered done. I like the best practices for agentic coding from <a href="https://www.anthropic.com/engineering/claude-code-best-practices">Anthropic</a>. It&#8217;s worth checking out. </p><h3>Guardrails Ladder</h3><ol><li><p><strong>Tier 1</strong>: Read-only and analysis.<br>Agent can inspect, explain, and plan, but not write files or run risky commands. See <a href="https://github.com/anthropics/claude-code/issues/8961?utm_source=chatgpt.com">this issue</a> about Claude Code on GitHub, where the agent ignored all instructions and went on to &#8220;&#8230;modify files that should be blocked.&#8221;</p></li><li><p><strong>Tier 2</strong>: Controlled changes in scoped directories.<br><a href="https://www.anthropic.com/engineering/claude-code-best-practices">Agent</a> can edit only within approved paths and use a pre-approved tool set, with prompts for anything outside the allowlist. </p></li><li><p><strong>Tier 3</strong>: PR-ready changes with enforced checks.<br><a href="https://code.claude.com/docs/en/hooks">Agent</a> can produce a PR candidate only after automated checks run via hooks and the workflow produces evidence (tests, lint, and a clear diff narrative). </p></li></ol><p>Non-negotiables:</p><ul><li><p>Secrets are never committed; keys and tokens must be handled via environment variables or a secrets manager, not files in the repo. </p></li><li><p>Directory boundaries are explicit; sensitive paths are disallowed, and the agent&#8217;s working scope is narrowed to the minimum viable surface. 
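</p><p>These boundaries should be written down, not assumed. A minimal sketch of the permissions block in a project-level <code>.claude/settings.json</code>, using Claude Code&#8217;s documented allow/deny rule format (the paths and commands below are illustrative assumptions, not a recommended policy):</p><pre><code>{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)",
      "Edit(./infra/**)"
    ],
    "allow": [
      "Edit(./src/**)",
      "Bash(npm run lint)",
      "Bash(npm run test:*)"
    ]
  }
}</code></pre><p>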
</p></li><li><p>Safe commands are pre-approved through Claude Code&#8217;s permissions system and shared via project settings to standardize behavior. </p></li><li><p>Tests and lint are mandatory; hooks should run checks automatically and fail fast when standards are not met.</p></li><li><p>Logging discipline is enforced; hooks can record tool activity so reviews have an audit trail of what ran and why. </p></li><li><p>Rollback is expected; every change carries a safe failure path, whether that is a flag, a revert strategy, or a limited rollout plan.</p></li></ul><p><strong>Engineers trust diffs that are bounded and verifiable</strong>. Guardrails make the PR smaller, the intent clearer, and the failure modes testable&#8212;so review becomes a checklist, not a debate.</p><p>Opt for a reusable Guardrails Ladder that your team can adopt to standardize autonomy without sacrificing compliance or speed.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://labs.adaline.ai/p/how-to-ship-reliably-with-claude-code?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>Subagents as Your Review Team for Spec Checks, Risk Discovery, and Test Design</h2><p>A single general agent is good at momentum. It is bad at critique. 
When one model both proposes the approach and judges it, you get confident blind spots.</p><p>Claude Code <a href="https://code.claude.com/docs/en/sub-agents">subagents</a> let you split &#8220;doing&#8221; and &#8220;reviewing&#8221; into specialized roles with narrow mandates, so critique becomes structured and repeatable instead of conversational. PMs can treat this like an AI review org chart: small teams, clear responsibilities, crisp outputs.</p><p>To create an agent in CC use &#8220;<code>/agents</code>&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u384!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u384!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 424w, https://substackcdn.com/image/fetch/$s_!u384!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 848w, https://substackcdn.com/image/fetch/$s_!u384!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 1272w, https://substackcdn.com/image/fetch/$s_!u384!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!u384!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png" width="1456" height="614" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:614,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:821831,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://labs.adaline.ai/i/185523000?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u384!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 424w, https://substackcdn.com/image/fetch/$s_!u384!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 848w, https://substackcdn.com/image/fetch/$s_!u384!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 1272w, https://substackcdn.com/image/fetch/$s_!u384!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff240f2c3-56a0-46bf-ba56-96deb495ad63_1704x718.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Subagent Org Chart</h3><ol><li><p><strong>Spec-to-AC (Acceptance Criteria) checker</strong>.</p><ol><li><p><strong>Mission</strong>: Verify the plan or diff satisfies every acceptance criterion and does not expand scope.</p></li><li><p><strong>Output</strong>: A checklist of ACs marked Pass/Fail with one-line evidence per item.</p></li><li><p><strong>Prompt snippet:</strong></p><ol><li><p>Compare this plan/diff to the ticket acceptance criteria.</p></li><li><p>Mark each AC Pass/Fail and cite the exact file/line or plan step.</p></li><li><p>List any scope creep as 
bullets.</p></li></ol></li></ol></li></ol><ol start="2"><li><p><strong>Risk and edge-case hunter.</strong></p><ol><li><p><strong>Mission</strong>: Surface failure modes, regressions, and operational risks before humans review.</p></li><li><p><strong>Output</strong>: Top 5 risks with severity and the test or guardrail that would catch each.</p></li><li><p><strong>Prompt snippet</strong>:</p><ol><li><p>Enumerate edge cases and regression risks from this change.</p></li><li><p>Rank by severity and likelihood.</p></li><li><p>Propose one test per risk.</p></li></ol></li></ol></li></ol><ol start="3"><li><p><strong>Test designer.</strong></p><ol><li><p><strong>Mission</strong>: Translate acceptance criteria into a minimal test plan that proves behavior.</p></li><li><p><strong>Output</strong>: A test matrix mapping AC to the test type and to target location.</p></li><li><p><strong>Prompt snippet</strong>:</p><ol><li><p>For each AC, propose the smallest test that would fail before this change.</p></li><li><p>Name the test type and likely file location.</p></li><li><p>Flag any gaps where behavior is untestable.</p></li></ol></li></ol></li></ol><ol start="4"><li><p><strong>Security and privacy reviewer</strong>.</p><ol><li><p><strong>Mission</strong>: Identify risky data handling, injection surfaces, secrets exposure, and unsafe logging.</p></li><li><p><strong>Output</strong>: Findings grouped by category with recommended mitigations.</p></li><li><p><strong>Prompt snippet</strong>:</p><ol><li><p>Scan for data ingress/egress, auth, secrets, and logging changes.</p></li><li><p>List issues by category and severity.</p></li></ol></li><li><p>Suggest the minimal mitigation per issue.</p></li></ol></li></ol><p><strong>Sequencing note</strong>: run subagents on the plan first (before execution), then rerun on the diff before PR creation to reduce noisy iterations and human review load.</p><div id="youtube2-DNGxMX7ym44" class="youtube-wrap" 
data-attrs="{&quot;videoId&quot;:&quot;DNGxMX7ym44&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/DNGxMX7ym44?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><strong>Output</strong>: A copyable Subagent Org Chart that turns &#8220;AI review&#8221; into an internal review pipeline your engineers can trust.</p><h2>Multi-Model Review to Catch Logic Gaps and Regressions Before Human Review</h2><p>Multi-model review is a practical QA layer, not a philosophical stance. Different models carry different blind spots, so cross-model critique is a cheap way to catch logic gaps and regressions before a human ever opens the diff.</p><p>To make this repeatable, you do not &#8220;ask for a review.&#8221; You assemble a <strong>packet</strong> that reviewers can audit quickly, and you keep it consistent across PRs. In other words, you put together the same set of review details and criteria every time, so reviewers can check the packet fast and know what to expect.</p><p><a href="https://code.claude.com/docs/en/overview?utm_source=chatgpt.com">Claude Code</a> is well-suited to generating this packet because it operates with direct repo context and workflows rather than detached chat snippets. </p><p>Check out this podcast from Lenny Rachitsky, where he and Zevi Arnovitz talk at length about how to use Claude Code and how Zevi uses it to review code. 
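</p><p>Assembling the packet can itself be delegated. A hedged sketch using Claude Code&#8217;s non-interactive print mode (<code>claude -p</code> is documented; the prompt wording and output file are illustrative):</p><pre><code>claude -p "Assemble a review packet for the current branch: plan summary, acceptance criteria, diff summary, test results, edge-case list, rollout/rollback. Output as markdown." &gt; review-packet.md</code></pre><p>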
</p><div id="youtube2-1em64iUFt3U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;1em64iUFt3U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/1em64iUFt3U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Below are examples of what to include in a review packet:</p><ul><li><p>Plan summary.</p></li><li><p>Acceptance criteria.</p></li><li><p>Diff summary.</p></li><li><p>Test results.</p></li><li><p>Edge-case list.</p></li><li><p>Rollout/rollback.</p></li></ul><p>Here are the examples of reviewer questions:</p><ul><li><p>Does the change align with every acceptance criterion without scope creep?</p></li><li><p>Is the core logic correct under normal and edge-case paths?</p></li><li><p>Is error handling explicit, safe, and consistent with existing patterns?</p></li><li><p>Are there any security or privacy risks in data handling, secrets, or logging?</p></li><li><p>Are there performance footguns such as N+1 calls, expensive loops, or unbounded retries?</p></li><li><p>Are tests adequate, minimal, and clearly mapped to acceptance criteria?</p></li><li><p>Is rollback safe, fast, and realistic under incident pressure?</p></li></ul><p>When reviewers disagree, the rule is simple: <strong>tests plus spec win</strong>. If the packet shows AC alignment and passing tests, prefer the path that preserves correctness and rollback safety. 
If risk is high or the change touches sensitive surfaces, escalate to a human reviewer immediately and narrow the scope rather than debating model opinions.</p><p><strong>Output</strong>: A paste-ready Review Packet checklist you can drop into your PR template to make AI code review faster, safer, and more predictable for production reliability.</p><h2>Conclusion</h2><p>Start with the ticket, pass it through a plan gate, apply guardrails, run subagent review, run multi-model review, and then open the PR.</p><p>This is what reliable agentic engineering looks like in practice: not more output, but more control. <strong>PMs lead the gates and the quality system</strong>. You own the plan gate that converts ambiguity into an execution-ready spec. You define guardrails that bound autonomy into safe, verifiable changes. You design the Claude Code subagent reviewers so critique is structured and repeatable. You run a lightweight multi-model audit so humans see clean diffs, not surprises.</p><p>Tomorrow,</p><ul><li><p>Pick one small ticket with clear acceptance criteria.</p></li><li><p>Run plan-first, apply your guardrails, then run subagent review and multi-model review before the PR.</p></li><li><p>Measure one outcome such as review time, rework cycles, or regression risk.</p></li></ul><p>Protocol is better than heroics.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://labs.adaline.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adaline Labs! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item></channel></rss>