
What Teresa Torres Taught My AI Agent

R. Machado

Sixth post in the Bet-Driven Development series. Start with Post 1 if you missed it.

I picked up Teresa Torres’ Continuous Discovery Habits expecting to find a few useful ideas for product teams. What I found instead was a framework I’d already been building — without knowing it had a name.

The accidental parallel

Torres’ book introduces the Opportunity Solution Tree: a visual framework where a business outcome sits at the top, customer opportunities branch below it, solutions branch below those, and assumption tests sit at the bottom. The idea is simple and powerful — you don’t jump from outcome to solution. You map the opportunity space first, consider multiple solutions, then test the riskiest assumptions before committing to build.

When I mapped this against DevKeel’s bet-driven development cycle, the alignment was almost exact:

Torres calls the top level Desired Outcomes. We call them Project Goals.

Torres calls the next level Opportunities — the needs, pain points, and desires worth exploring. We call that Triage — the space of ideas that get evaluated before any become bets.

Torres calls the solutions level Solutions — three candidates, always compared, never evaluated alone. We call them Bets — time-boxed hypotheses about what to build.

Torres calls the bottom level Assumption Tests — specific experiments that validate the riskiest thing embedded in a solution. We call them Signals — observable, falsifiable criteria that tell you whether the bet is working.

Both frameworks insist you define what success looks like before you build. Both reject “should we do this?” as the wrong question — the right question is “which of these options best addresses the opportunity?” Both treat the framework as a living document that evolves as you learn.

I didn’t design it this way on purpose. It converged because the same problem demands the same shape of solution: if you don’t know what you’re testing, you can’t know what you learned.
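As a concrete sketch, the four-level mapping might look like this as a data model. The class names here (ProjectGoal, Opportunity, Bet, Signal) are illustrative only, not DevKeel's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of the Torres-to-DevKeel mapping.
# These class names are hypothetical, not DevKeel's real data model.

@dataclass
class Signal:                      # Torres: assumption test
    description: str               # observable, falsifiable criterion
    met: Optional[bool] = None     # unknown until the bet resolves

@dataclass
class Bet:                         # Torres: solution
    hypothesis: str                # time-boxed hypothesis about what to build
    signals: List[Signal] = field(default_factory=list)

@dataclass
class Opportunity:                 # DevKeel: triage item
    need: str                      # need, pain point, or desire worth exploring
    bets: List[Bet] = field(default_factory=list)

@dataclass
class ProjectGoal:                 # Torres: desired outcome
    outcome: str
    opportunities: List[Opportunity] = field(default_factory=list)

goal = ProjectGoal(
    outcome="Businesses respond to more reviews",
    opportunities=[Opportunity(
        need="Owners lack time to write responses",
        bets=[Bet(
            hypothesis="AI-drafted replies cut response time in half",
            signals=[Signal("3 of 5 pilot owners accept drafts unedited")],
        )],
    )],
)
```

The point of the shape is the same in both frameworks: a signal's `met` field stays unknown until the test actually runs, which is what keeps the tree honest.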

What Torres gets right that AI tools miss

Torres identifies six prerequisite mindsets for continuous discovery: outcome-oriented, customer-centric, collaborative, visual, experimental, and continuous. Most AI coding tools nail exactly one — continuous. They help you iterate fast. They don’t help you iterate toward something.

Her keystone habit is the weekly customer interview. Everything else — opportunity mapping, ideation, assumption testing, measurement — depends on a steady flow of real stories from real people using real products. Without that input, the tree is theoretical.

This is the gap in AI-assisted development today. Your agent writes code in seconds. It refactors, tests, deploys. But it has no customer signal flowing into the system. It optimizes for completion, not for impact. Noah from the first post in this series built ReplyBot in three hours. Torres would have spent those three hours talking to business owners first.

The anti-patterns she’d see in your workflow

Torres warns about specific anti-patterns. When I read them, I recognized my own worst habits — and saw them everywhere in how developers use AI agents.

“Whether or not” decisions. Torres says never evaluate a single option in isolation. Always compare at least three alternatives before choosing one. But that’s exactly what happens when you open your AI tool and say “build me a notification system.” One idea, straight to code. No alternatives weighed, no opportunity framing. You’re not comparing — you’re confirming.

Noah did this with ReplyBot. He had one idea — AI-generated review responses — and went straight to building. Torres would have asked: what other approaches could address the problem of businesses struggling with review management? Maybe the answer isn’t generating responses. Maybe it’s templating. Maybe it’s triaging which reviews actually need responses. Maybe the problem isn’t reviews at all. You can’t know until you’ve compared.

Testing whole ideas instead of assumptions. Torres argues you should identify the riskiest assumption embedded in a solution and test that in isolation. Most AI workflows ship the entire feature and then discover the assumption was wrong.

Noah built all of ReplyBot before testing whether anyone needed it. Torres would have had him test the riskiest assumption first: “Do businesses spend enough time on review responses that automation would save meaningful time?” That’s a conversation signal — three out of five business owners confirming they spend more than an hour per week. You can test it in a day, not a quarter.

Overcommitting before learning. Torres recommends small, iterative cycles of exploration. AI agents enable the opposite — massive implementation sprints before any validation. The speed makes the overcommitment invisible. You don’t feel like you’re overcommitting when the code writes itself in an afternoon. But you are. Every hour of building is an hour you’re not checking whether you’re building the right thing.

Pursuing too many outcomes at once. Torres limits teams to one or two outcomes. The speed of AI-assisted development makes it tempting to chase five things simultaneously. You can build five features in the time it used to take to build one. But you can’t validate five hypotheses at once. The learning gets diluted across too many fronts, and none of them gets enough attention to produce a real signal.

I’ve done this. DevKeel had five active bets running simultaneously at one point. Infrastructure, domain-aware reviews, skill evolution, multi-agent verification, audit orchestration — all in flight. Torres would have looked at that and asked a quiet, devastating question: “Which one are you actually learning from right now?”

The honest answer was: none of them. I was building on all five and learning from zero.

What we’d already gotten right

Not everything was a gap. Several of Torres’ anti-patterns were already addressed by the bet-driven framework before I’d read the book:

Active bet limits. DevKeel enforces a maximum of one to two active bets. The rest get parked. This directly prevents Torres’ “pursuing too many outcomes” trap. You can’t chase everything. Pick one, learn from it, then pick the next.

Time-boxed bets. Every bet has a timeframe. When it expires, you resolve it — signal met or not — and capture what you learned. This prevents Torres’ “ping-ponging between outcomes” trap, where teams bounce between initiatives without finishing any of them.

“No bet, no build.” Before you write code, you frame the work as a testable hypothesis. This prevents Torres’ “jumping to solutions” trap. You can’t skip the problem framing because the system won’t let you start without a bet.

Signals. You define what success looks like before you build. Specific, observable, falsifiable. This prevents Torres’ “drawing conclusions from shallow learnings” trap. You can’t call something a success by vibes alone — the signal was either met or it wasn’t.

Learnings capture. When a bet resolves, you record what you learned. Win or lose. This prevents Torres’ “not showing your work” trap and builds institutional memory that compounds across sessions.
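The guardrails above compose into a simple lifecycle, sketched here as a hypothetical ledger. Nothing in this snippet is DevKeel's real implementation; the names and the limit of two active bets are assumptions taken from the description above:

```python
from datetime import date, timedelta

# Hypothetical sketch of the bet-driven guardrails described above.
# DevKeel's actual implementation may differ.

MAX_ACTIVE_BETS = 2  # active bet limit: everything else gets parked

class BetLedger:
    def __init__(self):
        self.active = []
        self.learnings = []  # institutional memory, captured win or lose

    def place(self, hypothesis, signal, days):
        # "No bet, no build": work only starts as a time-boxed hypothesis.
        if len(self.active) >= MAX_ACTIVE_BETS:
            raise RuntimeError("Park this idea: max active bets reached")
        bet = {
            "hypothesis": hypothesis,
            "signal": signal,  # specific, observable, falsifiable
            "expires": date.today() + timedelta(days=days),
        }
        self.active.append(bet)
        return bet

    def resolve(self, bet, signal_met, learned):
        # Expiry forces resolution: signal met or not, plus a
        # recorded learning either way.
        self.active.remove(bet)
        self.learnings.append({
            "hypothesis": bet["hypothesis"],
            "signal_met": signal_met,
            "learned": learned,
        })

ledger = BetLedger()
b = ledger.place("AI-drafted replies save owners >1 hr/week",
                 "3 of 5 pilot owners report time saved", days=14)
ledger.resolve(b, signal_met=False,
               learned="Owners want triage, not drafting")
```

A third `place()` call while two bets are in flight raises, which is the whole point: the system refuses to let you dilute the learning.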

The framework had converged on the same guardrails Torres prescribes, arrived at independently through the same pain she describes.

What Torres added that we hadn’t considered

Two things.

Compare-and-contrast at the bet level. Torres insists on three solution candidates per opportunity. Never evaluate one option alone. DevKeel currently lets you create a standalone bet without formally considering alternatives. You frame a hypothesis, define signals, and start building.

Torres would push back: “What else could you build to address this opportunity? What would a completely different approach look like?” The discipline of generating alternatives — even ones you don’t pick — forces you to articulate why the chosen approach is better, not just why it seems reasonable.

This is a coaching improvement, not an architectural one. When you frame a bet, the system could ask: “What did you consider instead?” Not to block you, but to make you think. The best decisions I’ve made were the ones where I had a real alternative on the table. The worst were the ones where I had one idea and fell in love with it.
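That coaching nudge could be as small as a required field at bet-framing time. This `frame_bet` helper is a hypothetical sketch, not a DevKeel API:

```python
# Hypothetical sketch of the "what did you consider instead?" nudge.
# frame_bet is illustrative, not a real DevKeel function.

def frame_bet(hypothesis, alternatives):
    # Torres: compare at least three candidates, never evaluate one alone.
    # Counting the chosen hypothesis itself, that means two alternatives.
    if len(alternatives) < 2:
        raise ValueError(
            "What else could address this opportunity? "
            "List at least two alternatives you considered."
        )
    return {"hypothesis": hypothesis, "considered": alternatives}

bet = frame_bet(
    "AI-generated review responses",
    alternatives=["response templates",
                  "triage which reviews need replies"],
)
```

Even rejected alternatives earn their keep: writing them down is what forces the "why this one?" articulation.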

Structured customer discovery. Torres’ keystone habit is the weekly customer interview. It’s the input that feeds everything else. DevKeel has intelligence processing — structured extraction from articles, transcripts, and market research. But there’s no first-class concept for customer discovery. No nudge to talk to users regularly. No structured format for capturing what they said. No connection between interview findings and which opportunity gets prioritized next.

For a solo developer, “customer” might mean your beta users, your support inbox, or yourself dogfooding the product. The habit still applies. The question Torres asks every week is: “What needs, pain points, and desires matter most to this person?” If you’re not asking that — in some form, to someone — your bets are guesses dressed up as hypotheses.

The deeper lesson

Torres’ framework works because it separates the problem space from the solution space. You map opportunities before you generate solutions. You test assumptions before you build features. You measure impact before you call something done.

AI coding tools collapsed these phases. When you can go from idea to deployed code in minutes, the discipline of separating “what should we build?” from “can we build it?” feels unnecessary. But the speed makes the discipline more important, not less.

Building the wrong thing in a week used to be expensive. Building the wrong thing in an afternoon is cheap — so you do it over and over, burning cycles without learning. The cost isn’t in the building anymore. It’s in the not-knowing.

The developers who get the most out of AI-assisted development won’t be the ones who ship the fastest. They’ll be the ones who learn the fastest. That requires the same habits Torres describes: continuous discovery, structured by a framework, fed by real signal, and measured by outcomes — not output.

Your AI agent is the fastest builder you’ve ever worked with. Give it a keel and a compass, and it might actually take you somewhere worth going.

This is the sixth post in the Bet-Driven Development series. The full framework is documented at devkeel.com/docs, and the Opportunity Solution Tree mapping is available as a DevKeel context entry for any project using the BDD cycle.