Shipping Fast and Iterating at AI Speed | Takeaways for Founders and Product Leaders
Ship fast in AI by learning faster: define “good,” dogfood, stay close to users, and prevent regressions with evals.
TLDR: Shipping Fast and Iterating at AI Speed explores why traditional startup speed advice fails in AI development. The blog argues that real AI speed isn't about moving faster than competitors, but about learning velocity—understanding what "good" looks like and adapting quickly. It covers how short-term velocity destroys long-term progress through technical debt, why correctness is subjective in AI products, and how sustainable speed requires informed restraint, clear ownership, and reversible decisions. Readers will learn concrete principles from industry leaders on building feedback loops, maintaining team confidence through transparency, and designing systems flexible enough to survive the AI ecosystem's rapid changes. The key insight: the fastest teams avoid premature bets and focus on preserving optionality while maintaining strong signals about what matters.
Introduction
Founder Intro: Shipping Fast and Iterating at AI Speed
“Move fast” has always been a startup mantra. In AI, that advice has become dangerously ambiguous.
Teams ship more often than ever. Demos come together in days. Iteration feels constant. And yet, many companies still feel stuck, slowed not by lack of activity but by a lack of clarity.
Panel 4 was designed to unpack that tension. Rather than asking, “How do we ship faster?”, we wanted to ask a more precise question: What does speed actually mean when you’re building AI products, and what quietly destroys it over time?
To explore that, we brought together operators who are shipping at the edge of what’s possible, across very different contexts:
Daksh Gupta, Co-founder and CEO at Greptile, is building AI systems where correctness and iteration speed must coexist.
Evan Owen, Co-founder and CEO at Glue, is navigating fast-moving AI workflows where trust and learning loops matter more than raw throughput.
Ray Jang, Co-founder and CEO at Atria, operates at the intersection of automation, experimentation, and reliability.
Yen Tan, Product Manager at 15Five, is bringing a product and user-centered lens to shipping in high-trust environments.
What emerged was a clear reframing of AI speed. This panel wasn’t about shipping more features or chasing every new model release. It was about learning velocity: how quickly teams understand what good looks like, detect when something feels off, and correct course without eroding trust.
Across the conversation, a consistent theme surfaced:
“Speed without direction is just noise.”
“Sustainable speed comes from tight feedback loops, informed restraint, and organizations designed to learn.”
The sections that follow break down what that looks like in practice, from why dogfooding beats dashboards early, to how feature flags enable safe aggression, to why trust behaves like a finite resource.
If you’re building with AI and feel like you’re moving fast but not forward, this panel offers a grounded perspective on what real velocity actually requires.
1. AI Speed Is About Learning What “Good” Looks Like
The panel opened by dismantling a common misconception: AI speed does not simply mean shipping faster.
Shipping faster is easy. Learning faster is hard.
What separates teams that actually move quickly from those that just move often is how fast they develop a shared understanding of quality.
Speed Comes From Signal, Not Velocity
Across the discussion, speakers converged on a more precise definition of AI speed.
AI speed is defined by:
How quickly teams learn what “good” outputs look like.
How fast they can tell when something feels off.
How early they can course-correct without breaking trust.
As Daksh Gupta, Co-founder and CEO at Greptile, emphasized, most AI teams don’t slow down because they ship too little. They slow down because they don’t know what to aim for.
Without a clear target, iteration becomes noise.
Correctness Is Often Ambiguous in AI Products
In traditional software, correctness is binary. Something works, or it doesn’t.
In AI products, correctness is often subjective.
As Yen Tan, Product Manager at 15Five, described, this ambiguity shows up most clearly in:
Creative workflows,
Generative systems,
Judgment-based tasks,
Assistive experiences.
Outputs can be plausible without being good. They can be technically correct but emotionally wrong. They can pass automated checks and still fail user expectations.
This makes iteration fundamentally harder.
Without Quality Signals, Teams Thrash
Several speakers described a familiar failure mode:
Teams ship quickly,
Outputs look reasonable,
Feedback is vague,
Iteration continues blindly.
As Ray Jang, Co-founder and CEO at Atria, noted, without fast, reliable signals on quality, teams end up oscillating—changing prompts, models, or workflows without knowing whether they’re actually improving anything.
The result is activity without progress.
“Feels Off” Is an Important Signal
One of the more subtle insights from the panel was the importance of intuition early on.
As Evan Owen, Co-founder and CEO at Glue, explained, experienced teams learn to trust early discomfort. When outputs feel off—even if they technically pass—that’s often the first indicator that assumptions are wrong, or constraints are missing.
Teams that move fast don’t ignore that signal. They investigate it immediately.
Speed comes from shortening the gap between:
Noticing something feels wrong,
Understanding why,
Fixing the underlying cause.
Directional Clarity Beats Raw Throughput
The panel repeatedly returned to the idea that speed without direction is wasted motion.
AI makes it easy to:
Generate more outputs,
Try more variations,
Explore more options.
But without a shared definition of “good,” those options don’t converge.
As one speaker summarized:
The fastest teams aren’t the ones shipping the most changes;
they’re the ones learning what to keep.
The Practical Takeaway
AI speed isn’t about how fast you deploy. It’s about how fast you learn.
Teams that truly move quickly:
Define quality early,
Develop strong instincts for “wrong”,
Create tight feedback loops,
Correct course before problems compound.
In AI products, learning velocity beats shipping velocity.
Speed without clarity feels productive—until it isn’t.
2. Short-Term Velocity Can Destroy Long-Term Velocity
One of the most consistent warnings across the panel was a counterintuitive one:
The fastest way to slow down permanently is to optimize too aggressively for short-term speed.
In an ecosystem that rewards quick demos and rapid iteration, this is an easy trap to fall into, and a hard one to escape.
Early Momentum Often Comes From Fragile Choices
Several speakers described how teams often gain early momentum by making expedient decisions:
Choosing frameworks optimized for speed over control.
Hardcoding integrations instead of designing interfaces.
Building around temporary standards.
Overfitting workflows to current model capabilities.
These choices feel rational in the moment. They produce visible progress. They reduce upfront friction.
As Daksh Gupta, Co-founder & CEO of Greptile, explained, many of these decisions aren’t mistakes. They’re unexamined commitments that accumulate quietly.
The Hidden Cost of Expedience
What looks like speed early often shows up later as a constraint.
As products mature, those early shortcuts create:
Architectural lock-in.
Brittle abstractions.
Painful migrations.
Slow, risky changes.
Fear of touching core systems.
As Ray Jang, Co-founder & CEO of Atria, noted, teams often don’t realize they’ve slowed down until they’re already stuck. Every change requires workarounds. Every improvement risks regression. Momentum evaporates.
The system becomes fast to run, but slow to change.
AI Ecosystems Shift Faster Than Architecture
This problem is amplified in AI because the ecosystem itself is moving so quickly:
Models evolve.
APIs change.
Best practices shift.
Capabilities that felt stable six months ago suddenly aren’t.
As Evan Owen, Co-founder & CEO of Glue, pointed out, decisions that assume today’s model behavior will persist are especially dangerous. Overfitting to current capabilities may unlock speed now, but it creates fragility later, precisely when adaptation matters most.
Overfitting Is a Form of Technical Debt
The panel reframed overfitting in a broader sense.
It’s not just about data or prompts. It’s about designing systems that only work under narrow conditions.
Overfit systems:
Assume specific output formats.
Rely on implicit model behavior.
Break when context windows change.
Fail when reasoning patterns shift.
Each assumption tightens the system’s tolerance for change.
True Speed Requires Optionality
The teams that sustained velocity over time shared one trait: informed restraint.
They:
Moved fast where reversibility was high.
Slowed down where decisions were expensive to undo.
Avoided locking in assumptions prematurely.
Designed interfaces, not shortcuts.
As Yen Tan, Product Manager at 15Five, emphasized, speed isn’t just about shipping. It’s about preserving the ability to change direction without breaking everything.
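To make “interfaces, not shortcuts” concrete, here is a minimal sketch of the idea. It is illustrative rather than anything the panel prescribed: the SummaryModel protocol, the adapter class, and the client object are hypothetical, and the point is only that provider-specific and format-specific assumptions live behind one thin boundary.

```python
# Hypothetical sketch: the product depends on a small internal interface,
# not on any one provider's SDK, output format, or context-window behavior.
from typing import Protocol


class SummaryModel(Protocol):
    def summarize(self, text: str, max_words: int) -> str:
        """Return a plain-text summary; format guarantees live here, not downstream."""
        ...


class ProviderBackedSummarizer:
    """One possible adapter. Provider-specific prompting and parsing stay inside it."""

    def __init__(self, client) -> None:
        self._client = client  # placeholder for whichever SDK the team actually uses

    def summarize(self, text: str, max_words: int) -> str:
        raw = self._client.complete(  # hypothetical call; swap in the real SDK method
            f"Summarize in under {max_words} words:\n{text}"
        )
        return raw.strip()  # normalize here so callers never see raw provider output


def write_release_notes(model: SummaryModel, diff: str) -> str:
    # Core product logic only knows the interface, so swapping models or providers
    # is an adapter change, not a rewrite.
    return model.summarize(diff, max_words=120)
```

When the ecosystem shifts, the blast radius is one adapter rather than the whole product, which is exactly the reversibility the panel kept returning to.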
Velocity Is a Function of Confidence
Another subtle insight from the panel was that teams slow down not just because systems are brittle, but because people lose confidence.
When:
Changes feel risky,
Behavior is hard to predict,
Regressions are costly,
teams hesitate. Reviews drag on. Releases slow. Innovation stalls.
Short-term speed that undermines confidence eventually kills momentum.
The Practical Takeaway
AI speed isn’t about maximizing short-term output.
It’s about:
Making reversible decisions quickly.
Deferring irreversible ones thoughtfully.
Preserving optionality.
Designing for change, not permanence.
True AI speed requires restraint: not caution, but judgment.
Move fast.
Just don’t move fast into a corner.
3. Long-Term Speed Requires Informed Foresight
Several speakers emphasized that sustaining velocity over time requires more than execution discipline.
It requires informed foresight.
Not clairvoyance.
Not perfect prediction.
But the ability to make educated bets about where the ecosystem is heading, and where it isn’t.
Speed Over Time Is About Betting on What Endures
In fast-moving AI environments, it’s tempting to treat everything as temporary.
Frameworks change.
Models improve.
Tooling evolves monthly.
“Best practices” have short half-lives.
But as the panel made clear, some things do last longer than others, and knowing the difference is what separates teams that compound velocity from those that reset every six months.
As Daksh Gupta, Co-founder & CEO of Greptile, noted, long-term speed comes from investing in abstractions that survive churn, even when the layers above them change.
The Cost of Dead-End Bets
Several speakers shared examples of teams that moved quickly, but in the wrong direction.
These teams:
Adopted tooling that couldn’t evolve.
Built on standards that never stabilized.
Committed deeply to APIs that were clearly transitional.
Optimized for the current model generation.
Each decision felt reasonable at the time.
Together, they created dead ends.
As Ray Jang, Co-founder & CEO of Atria, explained, the problem isn’t making bets, it’s making bets without understanding their reversibility. Dead-end bets don’t just slow teams down. They force rewrites.
AI Makes Reactivity Expensive
Because the AI landscape changes so quickly, many teams default to being reactive.
New model? Switch immediately.
New framework? Rewrite.
New technique? Adopt everywhere.
The panel warned that this behavior creates motion, but not progress.
As Evan Owen, Co-founder & CEO of Glue, put it, reactive teams feel fast until they realize they’re constantly rebuilding the same system with slightly different parts.
Speed becomes cyclical instead of compounding.
Selective Proactivity Is the Real Advantage
The fastest teams described on the panel weren’t chasing every change.
They were selectively proactive.
They:
Tracked where standards were converging.
Waited for signal before committing.
Designed internal interfaces to absorb change.
Insulated core logic from external volatility.
As Yen Tan, Product Manager at 15Five, emphasized, foresight isn’t about predicting the future, it’s about limiting the blast radius when the future arrives.
Understanding Direction Beats Knowing Timing
Another important reframe from the panel was that timing matters less than direction.
You don’t need to know when a standard will win.
You need to know whether it’s likely to matter.
Teams that understood direction:
Avoided one-off integrations.
Favored open interfaces.
Resisted premature optimization.
Chose boring, stable layers where possible.
That restraint allowed them to move faster later, when clarity emerged.
Foresight Is a Team Skill
Importantly, foresight wasn’t described as a founder superpower.
It was treated as an organizational capability.
Teams built foresight by:
Discussing ecosystem trends openly.
Revisiting architectural assumptions regularly.
Questioning “why this now?”.
Rewarding reversibility over cleverness.
Over time, this created shared intuition, and faster decision-making.
The Practical Takeaway
Long-term AI speed isn’t about reacting faster than everyone else.
It’s about:
Understanding where the ecosystem is heading.
Avoiding bets that trap you.
Investing in abstractions that outlast hype.
Moving early only when it matters.
The fastest teams don’t chase change.
They position themselves so change can’t knock them off balance.
4. Dogfooding Is the Highest-Leverage Evaluation Mechanism
One of the most practical insights from the panel was also one of the simplest:
The best early evaluation system is lived experience.
Before metrics.
Before dashboards.
Before formal eval frameworks.
Teams need to feel their product.
Formal Evals Come Too Late for Early Learning
Several speakers cautioned against jumping too quickly into formal evaluation systems.
Evals are powerful, but only once teams already understand:
What good looks like.
Which failures matter.
Where nuance lives.
Before that understanding exists, evals tend to encode guesses rather than truth.
As Daksh Gupta, Co-founder & CEO of Greptile, emphasized, premature evals often give teams false confidence. They pass checks while the product quietly degrades in ways the metrics don’t capture.
“This Feels Wrong” Is a Real Signal
A recurring phrase on the panel was some version of:
“This feels wrong.”
That instinct, especially from domain experts, surfaced again and again as an early warning signal.
As Yen Tan, Product Manager at 15Five, explained, when people who understand the problem deeply start to hesitate, something important is usually off. The issue might not be obvious. It might not be measurable yet. But ignoring that signal almost always leads to larger failures later.
Early intuition isn’t noise.
It’s compressed experience.
Dogfooding Exposes What Metrics Miss
Dogfooding forces teams to confront the product as it actually behaves, not how they hope it behaves.
When teams use their own product daily:
Subtle regressions surface.
Quality decay becomes obvious.
Friction accumulates visibly.
Edge cases repeat.
As Ray Jang, Co-founder & CEO of Atria, noted, dashboards rarely capture the emotional texture of a product. Dogfooding does.
You notice when:
Outputs start to feel generic.
Responses drift off tone.
Latency becomes irritating.
Trust erodes slightly but consistently.
These are the signals that matter most early.
Shared Intuition Accelerates Teams
Another benefit the panel highlighted was alignment.
Dogfooding builds:
Shared intuition across engineering, product, and GTM.
Common language for quality.
Faster decision-making.
When everyone has felt the pain personally, debates get shorter. Teams don’t argue abstractly about metrics; they argue from experience.
As Evan Owen, Co-founder & CEO of Glue, put it, teams that dogfood aggressively don’t need long spec documents to explain why something needs fixing. Everyone already knows.
When Formal Evals Actually Help
The panel wasn’t dismissive of formal evaluation, just precise about timing.
Formal evals work best when:
Intuition is already strong.
Failure modes are known.
Quality criteria are shared.
The team agrees on tradeoffs.
At that point, evals scale understanding.
Before that point, they obscure it.
The Practical Takeaway
Dogfooding isn’t a culture perk.
It’s an evaluation strategy.
The teams that move fastest:
Live inside their product.
Trust early discomfort.
Use intuition to guide iteration.
Add formal evals once meaning exists.
In AI products, you can’t measure what you don’t yet understand.
Understanding comes first.
Automation follows.
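One lightweight way to honor that sequencing is to make dogfooding observations durable: when something feels off, capture the input, the output, and the human reason in a shared log that can later seed formal evals. The sketch below is a hypothetical illustration, not the panelists’ tooling; the file name, fields, and verdict vocabulary are all assumptions.

```python
# Hypothetical sketch: turn "this feels wrong" moments from dogfooding into
# durable, shareable records that can later seed formal evals.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

GOLDEN_PATH = Path("golden_set.jsonl")  # assumed file name


@dataclass
class DogfoodNote:
    prompt: str    # what the teammate actually asked the product
    output: str    # what it produced
    verdict: str   # "good", "feels_off", or "bad" -- shared team vocabulary, not a metric
    note: str      # the human reason, in plain words
    logged_at: str


def log_dogfood_note(prompt: str, output: str, verdict: str, note: str) -> None:
    """Append one judged example so the whole team can review it later."""
    record = DogfoodNote(
        prompt, output, verdict, note,
        logged_at=datetime.now(timezone.utc).isoformat(),
    )
    with GOLDEN_PATH.open("a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: a founder using the product on their own work notices tone drift.
log_dogfood_note(
    prompt="Draft a check-in question for a struggling report",
    output="Have you considered simply working harder on your deliverables?",
    verdict="feels_off",
    note="On-topic, but the tone would damage trust with the report.",
)
```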
5. Evals Prevent Regression — They Don’t Create Insight
The panel was clear, and notably aligned, on one point:
Evals are often introduced too early.
Not because evals are bad, but because teams frequently expect them to do the wrong job.
What Evals Are Actually Good At
When used correctly, evals are extremely effective.
They:
Prevent systems from getting worse.
Enforce known baselines.
Catch regressions early.
Scale judgment once patterns are understood.
As Ray Jang, Co-founder & CEO of Atria, described, evals are invaluable once a team already knows what quality looks like. At that point, they act as guardrails, ensuring progress doesn’t slip backward as systems evolve.
But guardrails don’t decide where you’re going.
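As a hedged illustration of that guardrail role, the sketch below replays examples a team has already judged good, reusing the hypothetical golden-set log from the dogfooding section, and fails the build if they slip. The generate and passes_known_checks functions are placeholders for the team’s real system and its agreed failure modes.

```python
# Hypothetical sketch: a regression guardrail over previously judged examples.
# It does not decide what "good" means -- it only flags when known-good cases slip.
import json
from pathlib import Path

GOLDEN_PATH = Path("golden_set.jsonl")  # same assumed file as the dogfooding sketch


def generate(prompt: str) -> str:
    # Placeholder: call your actual system here.
    return "stub output"


def passes_known_checks(output: str) -> bool:
    # Placeholder criteria. Real checks should encode failure modes the team has
    # already seen and agreed on (missing fields, tone markers, length, etc.).
    return bool(output.strip()) and len(output.split()) < 400


def run_regression_eval() -> int:
    if not GOLDEN_PATH.exists():
        return 0  # nothing judged yet -- understanding comes before automation
    failures = 0
    for line in GOLDEN_PATH.read_text().splitlines():
        example = json.loads(line)
        if example["verdict"] != "good":
            continue  # only guard cases the team has already judged acceptable
        if not passes_known_checks(generate(example["prompt"])):
            failures += 1
            print(f"REGRESSION: {example['prompt'][:60]}...")
    return failures


if __name__ == "__main__":
    raise SystemExit(1 if run_regression_eval() else 0)
```

Run in CI, a check like this keeps progress from slipping backward; it still says nothing about where quality should go next.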
The Risk of Introducing Evals Too Early
Several speakers warned that early-stage AI teams often reach for evals before they’ve earned them.
When evals are introduced prematurely, they tend to:
Cap quality too early.
Freeze incomplete assumptions.
Discourage creative exploration.
Incentivize optimization against the wrong signals.
As Daksh Gupta, Co-founder & CEO of Greptile, noted, early evals often reflect what teams think matters, not what actually does. Once encoded, those assumptions quietly shape every future decision.
What feels like rigor becomes constraint.
Insight Comes From Humans, Not Metrics
A recurring theme across the panel was that insight emerges from exposure, not automation.
Early-stage AI products benefit far more from:
Human review of outputs.
Direct customer conversations.
Qualitative feedback.
Rapid iteration driven by intuition.
As Yen Tan, Product Manager at 15Five, explained, insight requires context. It requires understanding why something feels wrong, not just that it failed a check. That depth simply can’t be automated early on.
Metrics without understanding are misleading.
Evals Encode Assumptions — Whether You Want Them To or Not
One of the most important cautions from the panel was that evals always encode values.
They define:
What “good” means.
Which failures matter.
Which tradeoffs are acceptable.
When those definitions are immature, evals lock teams into a narrow view of quality.
As Evan Owen, Co-founder & CEO of Glue, put it, once an eval exists, teams naturally optimize for it, even if it no longer reflects reality. Exploration slows. Creativity narrows. Learning stalls.
Guardrails, Not Steering Wheels
This led to one of the clearest metaphors of the panel:
Evals are guardrails, not steering wheels.
They prevent disaster.
They don’t choose a direction.
Teams that try to steer with evals early often end up driving confidently in the wrong direction.
The Practical Takeaway
The fastest AI teams sequence evaluation deliberately.
They:
Learn through humans first.
Build intuition around quality.
Identify stable patterns.
Then encode those patterns into evals.
Used this way, evals accelerate progress without freezing it.
In AI products, understanding precedes automation.
If you automate judgment before you’ve developed it, you don’t move faster, you just lock in ignorance.
6. Teams That Ship Fast Collapse Distance Between Thinking & Doing
A recurring operational insight from the panel was deceptively simple:
Communication is lossy — especially in fast-moving environments.
Every handoff introduces delay. Every translation risks distortion. Every layer adds friction.
The teams that ship fastest aren’t necessarily working harder. They’re working with less distance between thinking and doing.
Speed Comes From Collapsing the Loop
Across examples, the panel highlighted the same pattern:
Teams maximize velocity when:
The same person designs, builds, ships, and iterates.
Ownership spans the full lifecycle of a feature.
Feedback flows directly to the builder.
As Daksh Gupta, Co-founder & CEO of Greptile, emphasized, this collapse of roles doesn’t eliminate rigor — it eliminates delay. Decisions happen where context already lives.
Handoffs Are Hidden Taxes
In theory, specialization increases efficiency. In practice, handoffs impose invisible costs.
Each handoff requires:
Re-explaining intent.
Re-establishing context.
Re-interpreting feedback.
As Ray Jang, Co-founder & CEO of Atria, noted, even perfect documentation can’t fully transmit intuition. What gets lost isn’t just information — it’s judgment.
In AI products, where quality is often subjective and evolving, that loss is especially expensive.
Feedback Is Only Useful If It’s Immediate
Another theme that emerged was the importance of feedback proximity.
When feedback:
Reaches the builder quickly,
Arrives unfiltered,
Includes real user context,
iteration accelerates.
As Yen Tan, Product Manager at 15Five, explained, teams slow down when feedback is delayed, summarized, or abstracted. By the time it reaches the person who can act on it, urgency — and insight — have faded.
Fast teams shorten that path aggressively.
Ownership Creates Judgment
The panel also emphasized that ownership isn’t just about accountability — it’s about learning.
When the same person:
Makes the decision,
Implements the solution,
Observes the outcome,
Feels the failure,
they develop judgment rapidly.
As Evan Owen, Co-founder & CEO of Glue, shared, teams that fragment ownership fragment understanding. No one fully knows why something works — or why it doesn’t.
Judgment accumulates fastest when responsibility is continuous.
Thinking and Execution Belong Together
One of the most resonant reframes of the section was this:
Speed increases not because people work harder — but because thinking and execution happen in the same head.
When design, implementation, and iteration are separated, speed decays. When they’re unified, momentum compounds.
This doesn’t mean eliminating collaboration. It means eliminating unnecessary translation.
The Practical Takeaway
Teams that move fast don’t optimize for efficiency on paper.
They optimize for:
Tight ownership loops.
Minimal handoffs.
Direct feedback.
Continuous learning.
In AI products, where quality signals are subtle and shifting, distance is the enemy of speed.
Collapse the distance — and speed follows.
7. Customer Obsession Beats Process Optimization
Despite the panel’s technical depth, the conversation kept circling back to a simple truth:
Customers are the fastest feedback system available.
No internal process, tool, or framework can compete with direct exposure to real usage.
Process Doesn’t Create Insight — Exposure Does
Many teams try to move faster by refining internal processes:
Better roadmaps.
Tighter sprint rituals.
More detailed specs.
More sophisticated tooling.
The panel was blunt about the limitations of this approach.
As Daksh Gupta, Co-founder & CEO of Greptile, noted, process can reduce chaos — but it doesn’t create understanding. Teams that rely too heavily on internal abstractions often end up optimizing for the wrong problems.
Speed comes from knowing what to build — not just how to build it efficiently.
High-Velocity Teams Stay Close to Users
The fastest teams described on the panel shared one defining habit: constant customer contact.
They:
Talk to users weekly — sometimes daily.
Onboard customers themselves.
Watch real usage in real contexts.
Feel confusion and delight firsthand.
As Evan Owen, Co-founder & CEO of Glue, explained, nothing accelerates learning like watching someone struggle with your product in real time. Feedback becomes concrete. Priorities become obvious.
Abstract Requests Hide Real Needs
Another recurring insight was that customer requests are often misleading.
Users ask for features. They describe symptoms. They propose solutions.
But as Yen Tan, Product Manager at 15Five, pointed out, the real work is understanding why they’re asking. That understanding rarely comes from tickets or surveys. It comes from observing behavior.
Teams that prioritize based on lived feedback move faster than those reacting to abstract input.
Proximity Collapses Feedback Loops
Customer proximity shortens feedback loops in ways no internal system can replicate.
When teams are close to users:
Misalignment is obvious immediately.
Incorrect assumptions are exposed early.
Course correction happens faster.
Iteration becomes confident.
As Ray Jang, Co-founder & CEO of Atria, noted, teams often underestimate how much time they lose by guessing instead of asking — or by interpreting instead of observing.
Obsession Is a Practical Choice
The panel was careful to separate customer obsession from performative empathy.
This isn’t about:
NPS slogans.
Empathy workshops.
Abstract personas.
It’s about:
Proximity.
Frequency.
Firsthand exposure.
Customer obsession isn’t a cultural value. It’s an operational strategy.
The Practical Takeaway
If speed is the goal, customer proximity is the lever.
The teams that ship fastest:
Stay close to real usage.
Trust lived feedback over speculation.
Let customers shape priorities directly.
Reduce internal debate by increasing external clarity.
In AI products, where quality is contextual and evolving, customers are the fastest way to find the truth.
No process can substitute for that.
8. Feature Flags Enable Safe Aggression
One of the most practical themes to emerge from the panel was that shipping fast does not mean shipping recklessly.
High-velocity teams don’t slow down to be careful; they contain the risk instead.
Feature flags surfaced repeatedly as one of the most important tools for making that possible.
Speed Requires the Ability to Contain Risk
AI products introduce uncertainty by default.
Outputs vary.
Behavior shifts.
Edge cases surface unpredictably.
In that environment, shipping changes broadly and permanently is dangerous.
As Ray Jang, Co-founder & CEO of Atria, emphasized, teams that move fast sustainably all share one trait: they can limit blast radius. Feature flags give teams that control.
They allow teams to:
Isolate risk.
Control who sees what.
Roll out changes incrementally.
Pull back instantly if something breaks.
Speed without containment isn’t velocity — it’s gambling.
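A minimal sketch of what that containment can look like is below. It is illustrative only: the flag name, segments, and in-memory flag store are assumptions, and most teams would reach for a managed flag service rather than a dict.

```python
# Hypothetical sketch: a risky AI change gated behind a flag with segment
# targeting, gradual rollout, and an instant kill switch (no deploy needed).
FLAGS = {
    "new_reasoning_pipeline": {
        "enabled": True,                          # global kill switch
        "segments": {"internal", "power_users"},  # who is allowed to see it
        "rollout_percent": 10,                    # gradual exposure within those segments
    }
}


def flag_on(name: str, user_segment: str, user_id: int) -> bool:
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    if user_segment not in flag["segments"]:
        return False
    return user_id % 100 < flag["rollout_percent"]  # deterministic per-user bucket


def stable_pipeline(question: str) -> str:
    return f"[stable answer to] {question}"        # the known-good path everyone trusts


def experimental_pipeline(question: str) -> str:
    return f"[experimental answer to] {question}"  # the bold change being evaluated


def answer(question: str, user_segment: str, user_id: int) -> str:
    # Blast radius is limited to flagged users; everyone else keeps the stable path.
    if flag_on("new_reasoning_pipeline", user_segment, user_id):
        return experimental_pipeline(question)
    return stable_pipeline(question)
```

Flipping the enabled field to False is the rollback: instant, reversible, and invisible to every user who never saw the experiment.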
Flags Turn Experiments Into Reversible Decisions
A recurring insight was that reversibility is the foundation of speed.
Feature flags turn what would otherwise be hard commitments into reversible bets.
As Daksh Gupta, Co-founder & CEO of Greptile, noted, teams are far more willing to experiment aggressively when they know they can turn something off without damage. That psychological safety unlocks real momentum.
Without flags, every experiment feels existential.
With flags, experimentation becomes routine.
Early Adopters Are Not the Same as Everyone Else
Another key point was segmentation.
Not all users want — or tolerate — the same level of experimentation.
Feature flags allow teams to:
Expose new capabilities to power users.
Test with internal teams first.
Learn from early adopters.
Protect broader user trust.
As Yen Tan, Product Manager at 15Five, explained, trust is fragile in AI products. Once users lose confidence, it’s difficult to earn back. Flags allow teams to learn without burning that trust.
Reliability and Experimentation Are Not Opposites
The panel strongly rejected the idea that teams must choose between speed and reliability.
The fastest teams do both — by separating learning from exposure.
Feature flags make that separation explicit.
As Evan Owen, Co-founder & CEO of Glue, shared, flags allow teams to test bold ideas while keeping the core experience stable. Users experience consistency, while teams gain insight.
That balance is what allows iteration at AI speed without chaos.
Safe Aggression Is a Design Principle
What emerged was a broader principle:
Move aggressively — but only where failure is contained.
Feature flags operationalize that principle.
They:
Encourage experimentation.
Reduce fear of shipping.
Protect user trust.
Preserve optionality.
Without them, teams naturally become conservative.
With them, teams can be bold — responsibly.
The Practical Takeaway
Speed in AI products isn’t about recklessness.
It’s about controlled risk.
Teams that ship fast:
Isolate experiments.
Segment exposure.
Learn quickly.
Revert instantly.
Feature flags don’t slow teams down.
They make it safe to move faster.
In an AI-first world, aggression without containment is chaos — but aggression with guardrails is progress.
9. Trust Is a Battery — Spend It Carefully
Across multiple parts of the discussion, trust kept coming up — not as a vague brand concept, but as a finite operational resource.
The panel consistently framed it this way:
Trust behaves like a battery.
It charges slowly.
It drains quickly.
And once it’s depleted, speed collapses.
Early Products Must Earn Trust Before Spending It
The panel was clear that early-stage AI products don’t have the luxury of experimentation at scale.
Before teams can move aggressively, they must:
Nail table-stakes experiences.
Behave predictably.
Avoid surprising failures.
Demonstrate basic reliability.
As Yen Tan, Product Manager at 15Five, noted, users are far more sensitive early on. When trust hasn’t been established yet, even small inconsistencies feel disproportionate.
Early trust isn’t built by novelty.
It’s built on dependability.
Trust Decays Faster Than It Accumulates
Several speakers emphasized how asymmetrical trust really is.
It takes:
Repeated successful interactions,
Consistent behavior,
Clear boundaries
to build trust.
But it takes:
One confusing output,
One silent failure,
One unexplained change
to start draining it.
As Daksh Gupta, Co-founder & CEO of Greptile, pointed out, AI systems feel especially brittle because they present confident outputs even when they’re wrong. That makes trust loss sharper — and recovery harder.
Experimentation Is a Privilege, Not a Right
A recurring theme was that experimentation must be earned.
Once trust is established, teams gain:
Room to experiment.
Tolerance for occasional failure.
Forgiveness for iteration.
User patience during change.
As Ray Jang, Co-founder & CEO of Atria, explained, trusted products can ship imperfect updates and recover quickly. Untrusted products can’t survive even minor missteps.
Trust buys optionality.
Small Mistakes Compound When Trust Is Low
Without trust, every issue feels bigger than it is.
Minor bugs turn into reasons to churn.
Ambiguous behavior reads as incompetence.
Iteration feels like instability.
As Evan Owen, Co-founder & CEO of Glue, shared, teams often underestimate how much damage is caused not by catastrophic failures — but by frequent, low-grade disappointment.
Without trust, those moments stack up fast.
Spend Trust Where Learning Is Highest
The panel also emphasized that trust should be spent intentionally.
When teams do experiment, they should:
Do it where learning is maximized.
Isolate exposure carefully.
Communicate changes clearly.
Roll back quickly when needed.
As Daksh Gupta noted earlier, feature flags and segmentation aren’t just technical tools — they’re trust-management tools.
They allow teams to learn without draining the battery.
The Practical Takeaway
Trust isn’t an abstract virtue in AI products.
It’s fuel.
The fastest teams:
Build trust deliberately.
Protect it aggressively.
Spend it where learning is highest.
Replenish it through reliability.
In an AI-first world, trust determines how fast you’re allowed to move.
Spend it recklessly, and speed disappears.
Spend it wisely, and iteration compounds.
10. Customer Feedback Must Be Filtered, Not Obeyed
One of the final — and most important — clarifications from the panel was this:
Listening to customers is not the same as following them.
High-velocity teams do both — but they do them very differently.
Feedback Is Raw Data, Not Direction
The panel emphasized that customer feedback is inherently noisy.
Users:
Describe symptoms.
Articulate frustrations.
Suggest solutions.
React emotionally to outcomes.
But they rarely diagnose root causes accurately.
As Evan Owen, Co-founder & CEO of Glue, noted, treating every piece of feedback as a directive leads teams to chase surface-level fixes — and lose coherence over time.
Feedback is signal.
Direction requires judgment.
Caring Is Different From Complaining
A key distinction surfaced around how much users actually care.
Many users complain.
Very few are willing to change behavior.
Effective teams learn to distinguish:
Annoyance from urgency.
Requests from necessity.
Opinions from switching behavior.
As Daksh Gupta, Co-founder & CEO of Greptile, explained, the most valuable signals come from moments where users say, “I can’t do my job without this working.” Everything else requires scrutiny.
“Hell Yes” Outcomes Are Rare — and Precious
Several speakers emphasized the importance of identifying “hell yes” moments.
These are moments where:
Users light up.
Value is immediately obvious.
Behavior changes without prompting.
Adoption accelerates naturally.
As Ray Jang, Co-founder & CEO of Atria, shared, teams that optimize for lukewarm satisfaction move slowly. Teams that optimize for undeniable value move decisively.
Mediocre feedback leads to mediocre products.
Surveys Don’t Surface Tradeoffs — Conversations Do
Another clear takeaway was the limitation of surveys.
Surveys:
Flatten nuance.
Encourage safe answers.
Hide tradeoffs.
Tradeoff conversations, by contrast:
Force prioritization.
Surface real constraints.
Reveal what users would give up.
As Yen Tan, Product Manager at 15Five, noted, asking users to choose — not just react — exposes what truly matters.
Speed comes from clarity, not consensus.
Builders Own Diagnosis
The panel repeatedly returned to a simple but powerful responsibility:
Customers describe symptoms.
Builders diagnose causes.
When teams outsource diagnosis to users, they lose control of the product’s direction.
The fastest teams:
Absorb feedback deeply.
Triangulate across users.
Test hypotheses quickly.
Make opinionated decisions.
They don’t abdicate judgment — they sharpen it.
The Practical Takeaway
Customer feedback is indispensable — and dangerous.
Used well, it:
Accelerates learning.
Validates direction.
Surfaces blind spots.
Used poorly, it:
Fragments focus.
Slows decision-making.
Erodes product coherence.
In AI products, especially, where complexity is high and quality is subtle, judgment is the bottleneck — not information.
Listen closely.
Filter aggressively.
Decide decisively.
That’s how teams ship fast — without losing their way.
11. “Minimum Lovable” Beats “Minimum Viable”
One of the most subtle — and powerful — reframings from the panel was this:
In AI products, “viable” is not enough.
What passes as acceptable in traditional software often fails immediately in AI.
AI Outputs Feel Personal — Whether You Intend Them To or Not
AI products don’t just execute instructions.
They respond.
They:
Speak in natural language.
Make suggestions.
Infer intent.
Appear confident.
As a result, users interpret outputs as judgment, not just functionality.
When an AI system gets something wrong, it doesn’t feel like a bug.
It feels like a misunderstanding.
As Yen Tan, Product Manager at 15Five, noted, this makes early impressions far more emotionally charged. Mistakes come from something that seems intelligent, and that makes them scarier.
“Viable” Is a Low Bar for Trust-Heavy Systems
Minimum viable products are designed to answer one question:
Does this work at all?
In AI, that question is insufficient.
Because:
Trust is fragile.
Users don’t know system boundaries.
Failures feel personal.
Confidence amplifies error.
As Daksh Gupta, Co-founder & CEO of Greptile, explained, shipping something that technically works but feels careless or incoherent often does more damage than not shipping at all.
Users don’t wait for it to get better.
They leave.
Lovability Is About Respect, Not Polish
The panel was careful to distinguish lovable from polished.
Lovability doesn’t mean:
Perfect UX.
Flawless outputs.
Exhaustive feature sets.
It means the product feels:
Coherent.
Intentional.
Respectful of user intent.
Reliably useful in its core job.
As Ray Jang, Co-founder & CEO of Atria, shared, users forgive missing features. They don’t forgive feeling misunderstood or dismissed.
Lovability Creates Forgiveness
A recurring insight was that forgiveness is the real early-stage moat.
When a product feels lovable:
Users retry after failure.
They give feedback instead of churning.
They tolerate iteration.
They stay curious.
When a product feels merely viable:
Failures feel unacceptable.
Trust erodes quickly.
Churn accelerates.
As Evan Owen, Co-founder & CEO of Glue, noted, early-stage AI products live or die by whether users believe the team cares.
Lovability communicates care.
Minimum Lovable Sets the Right Floor
The panel ultimately reframed early-stage quality bars.
Instead of asking:
“Is this good enough to ship?”
High-velocity teams ask:
“Is this good enough to earn patience?”
That question leads to different decisions:
Tighter scope.
Clearer boundaries.
Fewer but better use cases.
More intentional defaults.
The Practical Takeaway
AI products don’t get graded like traditional software.
They’re judged as collaborators.
That raises the bar.
Minimum viable gets you tried.
Minimum lovable gets you trusted.
And in an AI-first world, trust is the only thing that lets you move fast without breaking everything that matters.
12. AI Speed Is Organizational, Not Just Technical
As the panel closed, one final theme became unmistakably clear: AI speed is not primarily a tooling problem. It’s an organizational one.
Models matter. Frameworks matter. Infrastructure matters. But none of them determines speed on their own.
Tools Don’t Learn — Teams Do
Throughout the discussion, speakers repeatedly returned to the same observation: Two teams can use the same models, the same frameworks, and the same tools — and move at radically different speeds. The difference isn’t technical sophistication. It’s how the organization learns.
AI speed is driven by:
Team structure.
Ownership models.
Cultural norms.
Decision-making velocity.
How feedback is interpreted and acted on.
As Daksh Gupta, Co-founder & CEO of Greptile, emphasized, teams don’t slow down because prompts are bad — they slow down because decisions get stuck.
Ownership Determines Learning Velocity
One of the strongest predictors of speed discussed on the panel was clear ownership.
Fast teams:
Know who decides.
Know who owns quality.
Know who responds to failure.
Don’t diffuse responsibility.
As Ray Jang, Co-founder & CEO of Atria, noted, ambiguity in ownership creates hesitation. And hesitation compounds quickly in fast-moving AI environments. When no one owns learning, learning slows.
Culture Shapes How Feedback Is Handled
Another recurring insight was that feedback is only as useful as the culture that processes it.
In slower organizations:
Feedback is debated endlessly.
Mistakes trigger defensiveness.
Learning is politicized.
Decisions wait for consensus.
In faster ones:
Feedback is welcome early.
Mistakes are treated as data.
Iteration is normalized.
Decisions move forward with imperfect information.
As Yen Tan, Product Manager at 15Five, explained, psychological safety isn’t just a people concept — it’s a speed multiplier. Teams that feel safe to surface problems do so earlier, when fixes are cheaper.
Decision Velocity Beats Decision Accuracy
The panel also reframed how teams should think about decision quality.
Perfect decisions are rare. Reversible decisions are common.
Fast AI teams:
Make decisions quickly.
Revisit them often.
Correct course early.
Avoid over-indexing on certainty.
As Evan Owen, Co-founder & CEO of Glue, put it, teams that wait for confidence rarely get it. Teams that act and observe learn faster. Speed comes from motion with feedback — not deliberation without data.
Learning Loops Are the Real Differentiator
Across all examples, one pattern dominated:
The fastest AI companies have the tightest learning loops.
They:
Ship small changes.
Observe real behavior.
Absorb feedback directly.
Adjust immediately.
Tooling supports this — but it doesn’t create it.
Learning loops are designed through:
Org structure.
Incentives.
Ownership.
Trust.
The Final Reframe
By the end of the panel, “AI speed” had been fully redefined.
It isn’t about:
Better prompts.
Faster GPUs.
Clever architectures.
It’s about:
Collapsing feedback loops.
Reducing organizational drag.
Empowering decision-makers.
Learning faster than competitors.
The Practical Takeaway
If your AI team feels slow, the bottleneck is rarely technical. It’s usually:
Unclear ownership.
Delayed decisions.
Filtered feedback.
Cultural friction.
The fastest teams don’t just build better systems. They build organizations designed to learn at AI speed. And in an ecosystem where technology converges quickly, learning speed is the only durable advantage left.


