
Think in Bets, Not Tasks

R. Machado

Second post in the Bet-Driven Development series. Start here if you missed the first one.

Here’s how most developers plan their work.

They open a doc, or a project board, or just a mental list, and they write down tasks. Build the review form. Add the API integration. Design the dashboard. Set up authentication. Write the tests.

Tasks tell you what to do. They don’t tell you why. They don’t tell you whether it matters. And they definitely don’t tell you when to stop and check if any of it is working.

Noah planned ReplyBot in tasks. I know because I’ve done the same thing on every project I’ve started in the last five years. The to-do list is seductive. Each item you cross off feels like progress. The list gets shorter. The product gets bigger. You’re moving.

But moving isn’t the same as going somewhere.

What Noah never had

Go back to the ReplyBot story from the last post. Noah had a clear idea — AI-powered review responses for local businesses. He had skills. He had tools. He built fast and built well. What he didn’t have was a way to know, before investing three months, whether his idea was right.

If Noah had framed ReplyBot as a bet instead of a task list, it would have looked something like this:

Title: Local businesses need help responding to reviews

Hypothesis: I believe local business owners spend significant time crafting responses to online reviews, and would pay for a tool that does it for them.

Intent: Validate that the core problem exists before building the product.

Timeframe: 1 week.

The signal — how he’d check — would have been simple: 5 business owners confirm that writing review responses is a top-3 pain point for them.

One week. Five phone calls. Zero code.

Instead, Noah wrote a task list. Build sentiment analysis. Add tone matching. Support multiple platforms. Build the dashboard. Each task completed. None of them tested whether the underlying assumption was true.

The task list was a plan for building. The bet would have been a plan for learning.

What a bet actually is

A bet is a focused commitment of your scarce resource — time — to a specific outcome based on something you believe. It’s the bridge between “I have an idea” and “I know this works.”

Every bet has four elements:

Title — what you’re committing to, in plain language. Not a feature spec. Not a technical description. A statement of what you’re trying to learn or prove. “Check-in is the biggest pain point for studio owners.” “Developers will confirm context loss is a top-3 frustration.”

Hypothesis — what you believe will happen. This is the testable claim. It’s what makes a bet different from a task. A task says “build the check-in feature.” A hypothesis says “I believe walk-in yoga studios will use a digital check-in tool daily if it saves them more than 10 minutes.” One is an instruction. The other is a claim you can be right or wrong about.

Intent — why you’re making this bet right now. What decision does it inform? What will you do differently depending on the outcome? If the answer is “nothing, I’ll keep building regardless,” then you don’t have a bet — you have a task with extra paperwork. Intent is what separates going-through-the-motions from genuine inquiry.

Timeframe — when you’ll stop and check. This is the timebox. Two weeks. One month. Whatever’s appropriate. When it expires, you check the signal whether you feel ready or not. The timeframe exists because without it, a bet quietly becomes a project — open-ended, never checked, running forever.

That’s it. Four elements. You can write a bet on a sticky note. The point isn’t the format. The point is the shift in thinking: from “what am I building?” to “what do I believe, and how will I know if I’m right?”
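If it helps to see the shape of it, the four elements can be sketched as a tiny data structure. This is purely illustrative — the names, fields, and dates below are my own invention, not part of any tool — and a sticky note works just as well:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Bet:
    """One bet: a testable claim with a deadline. Illustrative only."""
    title: str       # what you're committing to, in plain language
    hypothesis: str  # the claim you can be right or wrong about
    intent: str      # the decision this bet informs
    check_on: date   # when you stop and check, ready or not

    def is_expired(self, today: date) -> bool:
        # When the timeframe is up, you check the signal. No extensions:
        # otherwise the bet quietly becomes an open-ended project.
        return today >= self.check_on


# Noah's StudioPulse bet, written down as data:
bet = Bet(
    title="Check-in is the #1 operational pain point for studio owners",
    hypothesis="3 of 5 owners will rank check-in as their biggest daily headache",
    intent="Validate the problem is widespread before building",
    check_on=date.today() + timedelta(weeks=2),
)
```

The structure isn't the point — the missing-field discomfort is. If you can't fill in `intent`, you have a task with extra paperwork, not a bet.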

The hypothesis is a conversation, not a form

I want to be clear about something, because this is where frameworks usually die. If making a bet feels like filling out a form — title in this box, hypothesis in that box, timeframe here, submit — you won’t do it. Nobody does. The form sits empty and the code editor stays open.

A bet should emerge from a conversation. With yourself, with a co-founder, with a domain expert, or with your AI agent. You say what you’re thinking about building. The conversation pushes you to articulate why, for whom, and how you’d know if it worked.

It often sounds like this:

“I want to build a check-in tool for fitness studios.”

What makes you think studios need this?

“My friend Priya runs a yoga studio and she’s always complaining about the check-in process.”

What specifically is painful about it?

“She has people sign in on paper. She transfers it to a spreadsheet at the end of the day. Some people don’t sign in at all so her attendance numbers are wrong.”

So your hypothesis is that the manual check-in process is painful enough that studio owners would switch to a digital tool?

“Yeah. But actually, I don’t know if it’s like that at other studios too, or just Priya’s.”

That sounds like the bet: ‘Independent studio owners will confirm that check-in is their biggest operational headache.’ How would you check that?

“I could call five studio owners and ask.”

Timeframe?

“Two weeks.”

That’s a bet. It took two minutes of conversation. It required zero code. And it will tell Noah — this is the StudioPulse story now — whether the problem he’s solving is real before he writes a line of code.

Noah’s second project

After ReplyBot, Noah started StudioPulse differently. He called Priya — a yoga studio owner he knew through a friend — and asked what drove her crazy about running her studio. She talked for 40 minutes about check-in chaos. Walk-ins who didn’t sign in. Attendance numbers that were always wrong. Teachers who couldn’t see who was in class.

But Noah had learned something from ReplyBot: one person’s frustration isn’t proof of a market. So instead of building immediately, he made a bet.

Title: Check-in is the #1 operational pain point for independent studio owners

Hypothesis: 3 out of 5 independent studio owners will rank the check-in/attendance process as their biggest daily headache when asked about operational pain points.

Intent: Validate that this problem is widespread enough to build for, not just Priya’s problem.

Timeframe: 2 weeks.

Two weeks. Ten phone calls. Priya made introductions to other studio owners she knew from teacher training retreats.

The result: four out of five studio owners put check-in in their top two pain points. Three said it was number one. One said scheduling was worse. The signal was met.

But here’s what made this bet valuable beyond the yes/no answer: the conversations revealed things Noah never would have learned from building. One owner mentioned that it was actually the teachers, not the owners, who dealt with check-in. Another said she’d tried a digital solution before but it was too complicated for her front desk volunteer. A third said she’d pay “whatever it takes” if it saved her the end-of-day spreadsheet reconciliation.

These weren’t just validations. They were the raw material for Noah’s next bets: who’s the real user (teachers, not owners), what’s the real constraint (simplicity, not features), and what’s the real value (time savings on reconciliation, not fancy check-in UX). None of this would have surfaced from a task list. It surfaced because Noah was having conversations instead of writing code.

That bet cost Noah two weeks and zero code. Compare that to ReplyBot, where Noah built for three months before having a single conversation with a customer. The first project was guided by a task list. The second was guided by a bet. The difference wasn’t just the outcome — it was the confidence. Noah started building StudioPulse knowing the problem was real, because he’d checked. And he started building with intelligence about the problem that shaped what he built.

The market bet comes first

There’s a hierarchy to bets, and it’s worth understanding early.

The most important bet you’ll ever make isn’t about a feature. It’s about the market. Does this problem exist? Do enough people have it? Would they pay to solve it?

This is the bet Noah skipped with ReplyBot and made with StudioPulse. It’s also the bet most developers skip entirely, because it feels like it’s slowing you down. The code is calling. The AI agent is ready. Five phone calls feel like a waste of time when you could be shipping.

But the market bet is the one with the highest leverage. If the answer is no — the problem isn’t real, or nobody would pay, or you can’t reach the people who have it — then nothing you build afterward matters. Every feature, every line of code, every late night is built on a false foundation.

Three things a market bet should validate:

The problem is real — not just an assumption based on your own experience. Noah assumed businesses needed help with review responses. They didn’t. He assumed studios needed help with check-in. They did. The difference was five phone calls.

People would pay — not just say “that’s cool.” There’s a chasm between “interesting idea” and “I would pay money for that.” Noah’s StudioPulse conversations included the question: “If a tool handled this for you, what would you pay for it?” Two studio owners named specific numbers. One said she’d pay “whatever it takes.” That’s a signal.

You have a niche — not “businesses” or “studios” but something specific enough to own. Noah’s check-in conversations revealed that boutique studios (yoga, pilates, barre) felt the pain most. Big gym-style studios had existing systems. The niche wasn’t “fitness studios.” It was “independent boutique studios with walk-in classes.” Narrow enough that no “invincible developer” building a generic studio management platform would bother competing.

The market bet is the first bet, but it’s not the only kind. Once you’ve validated the market, every feature becomes a bet too. “I believe adding class reminders will reduce no-show rates by 20%.” “I believe walk-in studios value simplicity over feature count.” “I believe teachers, not owners, should be the primary user.” Each one is a hypothesis. Each one has a signal. Each one gets checked.

The framework scales down to individual features and up to entire product directions. But it always starts with the same question: what do I believe, and how will I know?

My first real bet

When I started building what would eventually become DevKeel — back when it was called HypoShip — I made the same mistake Noah made with ReplyBot. I built first. I had an idea for a hypothesis-driven development platform. I loved the concept. I started coding.

A few weeks in, I caught myself. I was doing the thing. Building without checking.

So I stopped and made a bet.

Title: Context loss between AI coding sessions is a top pain point

Hypothesis: Developers who use AI coding tools daily will confirm that losing context between sessions is a top-3 frustration.

Intent: Validate that the core problem exists before building more of the tool.

Timeframe: 1 week.

I talked to eight developers. Seven said yes — context loss was either their top frustration or in their top three. The eighth said his biggest frustration was hallucination, but context loss was fourth. Close enough.

That bet cost me a week of conversations. It justified everything that came after. Not because I was certain DevKeel would succeed — I still had a hundred bets to make about specific features, pricing, positioning, and whether anyone would actually configure an MCP connection for their LLM. But because I knew the foundational problem was real. I wasn’t building on an assumption. I was building on evidence.

The contrast with my earlier approach was stark. Before the bet, I’d been building HypoShip based on my own frustration — which is a real data point, but it’s a sample size of one. After the bet, I had eight data points. Still small. But enough to commit time with confidence instead of hope.

And here’s the thing I didn’t expect: the conversations gave me more than validation. They gave me language. Developers described the context loss problem in ways I hadn’t thought of. One said “I feel like I’m onboarding a new hire every morning — except the new hire is my own tool.” Another said “I’ve started keeping a text file called CONTEXT.md that I paste into every session, and it’s getting longer every week.” Those descriptions shaped how I eventually positioned DevKeel. They came from a bet, not from building.

The “but I want to just build” objection

Let me address this directly, because if you’ve read this far you’re probably feeling it.

The code is calling. Your AI agent is ready. You have a great idea and the tools to make it real in hours. And I’m telling you to make phone calls.

I know. I’ve felt this exact resistance on every project I’ve started. The bet feels like overhead. It feels like the boring part before the exciting part. It feels like a tax on the thing you actually want to do.

Here’s what I’d ask you to consider: the bet isn’t the opposite of building. It’s the targeting system for building. You’re not slowing down — you’re pointing your speed in the right direction.

Five minutes of conversation with a potential user can save you five weeks of building the wrong thing. One week of phone calls can save you three months. This isn’t an exaggeration — it’s literally what happened with both ReplyBot and StudioPulse, and with my own pivot from HypoShip to DevKeel.

The discipline is simple: no bet, no build. If you can’t articulate what you believe and how you’ll check, you’re not ready to code. You’re ready to research, to explore, to talk to people. But you’re not ready to commit engineering time.

This doesn’t mean every bet requires phone calls. Some bets are about features, not markets. “I believe adding email reminders will reduce no-show rates by 20%” is a bet you can check with analytics after building. The market bet — does the problem exist, will people pay — is the one that requires conversations. Feature bets can be faster. But they still need a hypothesis and a signal.

Anti-patterns

A few ways bets go wrong, so you can recognize them in your own thinking:

The forever bet. A bet without a timeframe is just a project. “I bet users will love this feature” — when will you check? Next month? Next year? Never? The timeframe creates accountability. It forces the check.

The obvious bet. “I bet users want a login page.” There’s nothing to learn here. Bets are for things you’re genuinely uncertain about. If the answer is obvious, skip the bet and just build it.

The vague bet. “Make the product better” isn’t a bet. Better how? For whom? Measured how? If you can’t imagine evidence that would prove you wrong, you don’t have a bet — you have a wish.

The too-many-bets trap. If you have four active bets, you have zero focus. One to two active bets at a time. Everything else gets parked. We’ll talk about parking and triage in a later post, but the rule is simple: focus beats breadth.

The vanity bet. A bet with a signal you know you’ll hit. “I believe at least one person will sign up.” That’s not testing anything — it’s a performance of validation. Pick signals that genuinely challenge your hypothesis.
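Because these anti-patterns are mechanical, you can treat them as a checklist. Here's a rough sketch of that checklist as code — every name and heuristic is hypothetical, and real bets deserve more judgment than string matching — but it captures the spirit of the traps above:

```python
def lint_bet(bet: dict, active_bets: int) -> list[str]:
    """Flag the bet anti-patterns described above. Purely illustrative."""
    warnings = []
    # The forever bet: no timeframe means it's just a project.
    if not bet.get("timeframe"):
        warnings.append("forever bet: no timeframe, it will never get checked")
    # The vague bet: a hypothesis you can't be proven wrong about.
    vague_words = ("better", "improve", "great")
    if any(w in bet.get("hypothesis", "").lower() for w in vague_words):
        warnings.append("vague bet: hypothesis isn't falsifiable")
    # The vanity bet: a signal that can't fail.
    if "at least one" in bet.get("signal", "").lower():
        warnings.append("vanity bet: the signal doesn't challenge the hypothesis")
    # The too-many-bets trap: focus beats breadth.
    if active_bets > 2:
        warnings.append("too many bets: park everything beyond one or two")
    return warnings


flags = lint_bet(
    {"hypothesis": "Make the product better", "signal": "at least one signup"},
    active_bets=4,
)
# flags contains warnings for the forever, vague, vanity, and too-many-bets traps
```

You wouldn't ship this; you'd internalize it. The useful move is asking these four questions of every bet before committing time to it.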

Your turn

Take the project you wrote down after Post 1. Now frame it as a bet. Don’t overthink the format — just answer four questions:

  1. What am I committing to? (Title — plain language, not a feature spec)
  2. What do I believe will happen? (Hypothesis — something testable)
  3. Why does this matter right now? (Intent — what decision does this inform?)
  4. When will I stop and check? (Timeframe — a specific date or duration)

Write it down. It doesn’t need to be polished. A rough bet you wrote down is already more intentional than a perfect plan you never articulated.

If this is a brand new project, consider making your first bet a market bet: does the problem exist? Talk to five people. You don’t need a script. Just ask what frustrates them about the thing you’re building for.

If you’re already mid-build, that’s fine. Frame a bet around what you’re working on right now. “I believe [this feature] will [produce this outcome]. I’ll check by [this method] in [this timeframe].”

In two posts, we’ll put this into DevKeel and your AI agent will start coaching you against your bet. But the framework works without any tooling. A notebook is enough. The bet is a thinking tool, not a software feature.

Next in the series: Signals: How Will You Actually Know? — the most skipped step in software development, and why watching three people use your product beats a conversion funnel with no traffic.

References

  • The Bet-Driven Development framework is documented in full at devkeel.com/docs