Signals: How Will You Actually Know?
Third post in the Bet-Driven Development series. Start with Post 1 if you missed it, or catch up on Post 2 for the bet framework.
You made your bet. You wrote down your hypothesis. You set a timeframe.
Now here’s the part most developers skip entirely: how will you actually know?
Not “how will you feel.” Not “how will the demo go.” How will you know — with evidence you could show someone else — whether your bet was right or wrong?
This is the signal. And in my experience, it’s the most skipped step in the entire development process. Not because developers are lazy, but because it’s uncomfortable. Defining your signal means defining how you could be wrong. And defining how you could be wrong means confronting the possibility that you are, in fact, wrong.
It’s much easier to stay in the phase where the hypothesis is still beautiful and untested.
What Noah didn’t define
Go back to ReplyBot. Noah had a hypothesis — businesses need help responding to reviews. He had a timeframe — three months of building. What he never had was a signal.
What would have told him, before a single line of code, that the bet was working? What evidence would prove the problem was real? What would he have seen — specifically, measurably — in the first two weeks if he was right?
He never asked. So the only check he ever got was a dentist who politely told him he’d built the wrong thing.
The absence of a signal isn’t just bad process. It’s a form of self-deception. When you don’t define how you’ll know, you end up defaulting to the most convenient feedback: your own enthusiasm. The code is working, so the bet must be working. The feature looks good, so the direction must be right. You feel like you’re winning because you’re building, and building feels like progress, and progress feels like validation.
But it isn’t.
What a signal actually is
A signal is a specific, observable piece of evidence that tells you whether your hypothesis is holding up.
Good signals have three properties:
Specific. “Users like it” isn’t a signal. “Three out of five studio owners use the check-in flow at least once per day in the first week” is a signal. The more specific you can be, the clearer your verdict will be when you check.
Observable. You have to be able to actually measure it. If your signal requires data you won’t have, or behavior you can’t observe, it’s not a signal — it’s a hope. “Users will find this valuable” isn’t observable. “Users will return to the app on day 7 without a prompt” is.
Falsifiable. The signal has to be capable of telling you you’re wrong. If every possible outcome confirms your bet, the signal isn’t doing any work. Noah could have set a vanity signal — “at least one business owner will say the demo looks interesting.” Every demo he did would have met that. A real signal would have been: “Three out of five business owners confirm that responding to reviews costs them more than one hour per week.” Something that could genuinely come back negative.
When a signal meets all three criteria, it creates a moment of reckoning. Two weeks out, you check. The signal was met, or it wasn’t. You know something you didn’t know before.
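One way to force all three properties at once is to write the signal down as a checkable record before you build anything. Here’s a minimal sketch in Python; the structure and field names are mine, not any DevKeel format, and the values are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Signal:
    metric: str          # specific: a number and a behavior, not a feeling
    observed_via: str    # observable: where the evidence will come from
    fail_condition: str  # falsifiable: what a "no" looks like
    check_on: date       # the moment of reckoning

signal = Signal(
    metric="3 of 5 studio owners rank check-in as their top pain point",
    observed_via="notes from five scheduled owner conversations",
    fail_condition="2 or fewer rank it top, or they rank something else higher",
    check_on=date.today() + timedelta(weeks=2),  # two weeks out, you check
)
```

If you can’t fill in `fail_condition`, the signal isn’t falsifiable yet.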
The three signal types
In practice, signals tend to cluster into three types. Each is best suited to a different kind of bet.
Conversation signals are the fastest and most underused. You define a threshold — how many people, saying what — and you go have those conversations. Noah’s StudioPulse market bet had a conversation signal: three out of five studio owners rank check-in as their top pain point. It’s observable (did they say it or not?), specific (three out of five, not “some”), and falsifiable (two out of five would fail the signal).
Conversation signals are best for market bets and early problem validation. They require almost no infrastructure. They’re also the ones developers resist most, because talking to people is less comfortable than writing code. Every time I’ve pushed through that resistance, the conversations have given me more than I expected — not just yes/no, but the language, the adjacent problems, the edge cases that shaped everything that came after.
Usage signals are measured in behavior, not opinions: what the person does, not what they say. “Five users complete the onboarding flow” is a usage signal. “A studio runs a full day of check-ins without calling me for help” is a usage signal. “A user returns to the app on three consecutive days without a prompt” is a usage signal.
Usage signals are best for feature bets — bets about whether a specific thing you built works. They’re the most honest signals because behavior is harder to fake than opinion. Someone can tell you they love a feature in an interview and never use it in practice. Usage signals catch that gap.
Outcome signals are the highest bar. They ask: did this actually change something? Not “did users use it” but “did it work?” A 20% reduction in no-show rates. A studio owner saving two hours per week on end-of-day reconciliation. Revenue from a specific niche.
Outcome signals are best for larger bets with longer timeframes. They’re harder to hit but also harder to dismiss. When Priya told Noah that StudioPulse had cut her end-of-day reconciliation from 45 minutes to 8 minutes, that was an outcome signal. Not just “she used it.” It worked.
In practice, you’ll often layer these. A bet might have a conversation signal in week one (“three studio owners confirm the problem”), a usage signal in week two (“two studios run a full day of check-ins”), and an outcome signal in month one (“at least one studio reports saving time on reconciliation”).
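Written down, that layering is nothing more than a short checklist with dates attached. A sketch, with the same caveats as above:

```python
from datetime import date, timedelta

start = date.today()  # the day the bet begins

# One bet, three layered signals, checked at widening horizons.
checkpoints = [
    (start + timedelta(weeks=1), "conversation",
     "3 of 5 studio owners confirm the problem"),
    (start + timedelta(weeks=2), "usage",
     "2 studios run a full day of check-ins"),
    (start + timedelta(weeks=4), "outcome",
     "1+ studio reports saving time on reconciliation"),
]

for due, kind, claim in checkpoints:
    print(f"{due}  {kind:<12}  {claim}")
```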
Noah’s StudioPulse signals
After his market bet validated the problem, Noah started building. But before he wrote the first line of code for the check-in flow, he defined what would tell him it was working.
Bet: Digital check-in is something studio owners will actually adopt, not just demo.
Hypothesis: Walk-in studios will actively use digital check-in when we give them a simple enough tool.
Signal: Three studios run at least five consecutive days of check-ins through StudioPulse without reverting to paper.
This is a good signal. It’s specific (three studios, five consecutive days). It’s observable (he can check the database). It’s falsifiable (two studios, or studios that dip in and out, would fail it). And it tests the right thing — not whether the feature exists, but whether it’s good enough to actually change behavior.
Noah also defined what success would not look like. Studios that signed up but used it once wouldn’t count. Studios that used it for simple walk-ins but still kept the paper sheet for complicated situations wouldn’t count. The signal required genuine adoption, not a polite test.
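Checking a signal like this should be cheap. Here’s a sketch of the week-two check, assuming check-in events can be pulled from the database as per-studio sets of dates (the data shape and names are hypothetical):

```python
from datetime import date, timedelta

def longest_streak(days: set[date]) -> int:
    """Longest run of consecutive calendar days with at least one check-in."""
    best = 0
    for d in days:
        if d - timedelta(days=1) in days:
            continue  # not the start of a streak
        run = 1
        while d + timedelta(days=run) in days:
            run += 1
        best = max(best, run)
    return best

def signal_met(checkins_by_studio: dict[str, set[date]]) -> bool:
    """Three studios with at least five consecutive days of check-ins.

    Note: "without reverting to paper" can't come from the database;
    that half of the signal you still have to ask about.
    """
    qualifying = [studio for studio, days in checkins_by_studio.items()
                  if longest_streak(days) >= 5]
    return len(qualifying) >= 3
```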
Two weeks in, he checked. Two studios were running clean. A third had tried it for three days, then gone back to paper — the owner’s front desk volunteer couldn’t figure out the check-in interface without guidance. That wasn’t a failure. It was a signal: the interface was too complicated for volunteers. Noah simplified it. The third studio tried again. By week four, all three were consistent.
The failed check at week two wasn’t a disaster. It was exactly what signals are for — early evidence of a real problem, before he’d built an entire product on top of the broken assumption.
The signal I almost didn’t define
When I was building the first version of DevKeel, I was doing something I’d been doing on every project for years: I was tracking “progress” by looking at features built, not outcomes tested.
I had a bet: developers will find it more valuable to start sessions with context from previous sessions than to start from scratch. The hypothesis was obvious to me — I’d felt the context loss problem myself. I was building to solve my own frustration, which is a decent starting point but a terrible stopping point.
A friend pushed me: “How will you know if you’re right?”
I gave him the developer’s standard non-answer: “I’ll know when people use it.”
“That’s not a signal. When will you check? What will they be doing that tells you it’s working?”
I had to think. What would actually tell me the bet was right? Not signups — people sign up for things they never use. Not good feedback in demos — people are polite. Not even usage in isolation.
Eventually I landed on this: a developer who uses DevKeel across three consecutive sessions, where the third session starts by referencing context stored in the first session, without me prompting them to.
Specific. Observable. Falsifiable. And it tests the right thing — not that they used DevKeel, but that the cross-session context was actually informing their work.
It took two beta users to hit that signal. But when I saw it happen — a user opening a new session and immediately saying “I see my bet from yesterday, let’s continue with the check-in flow” — I knew something real had happened. Not because I felt validated. Because the signal was met.
The uncomfortable part
Here’s where I need to be direct with you, because there’s a common response to this whole framework that sounds reasonable but isn’t.
“I’ll define the signal after I build the prototype. I need to see the thing first to know what to measure.”
I’ve said this. I’ve believed it. It’s how I justified building ReplyBot-style projects more than once. And it’s mostly wrong.
You define the signal before you build because the act of defining it changes what you build. When Noah committed to “three studios using it for five consecutive days without reverting to paper,” it immediately shaped his feature decisions. Simplicity over features — because the signal was about sustained adoption, not wow-factor in a demo. Teacher-facing, not owner-facing — because his conversations had told him teachers were doing the check-in. No analytics dashboard for week one — because the signal didn’t require it.
If he’d built first and defined the signal after, he’d have built a different product. Probably a more impressive-looking one. Probably a less useful one.
The signal is a constraint. Constraints are useful. The discomfort of defining a signal before you build is exactly the feeling of being honest about what you actually need to learn — which is more valuable than the feeling of being busy.
There’s also a subtler reason to define signals early: they protect you from motivated reasoning. When you’ve invested two weeks in a feature, you’ll find a way to decide it worked. The signal you defined before you started is harder to move the goalposts on. It’s the version of you from two weeks ago, before you were emotionally invested, telling current you what counts.
Failed signals are wins
This needs its own paragraph because it runs against every instinct we have.
A signal that comes back negative is not a failure. It’s a success of the framework.
Noah’s first StudioPulse signal — three studios using it consistently — didn’t land cleanly at week two. One studio reverted to paper. That was painful for about a day. But it gave Noah exactly what he needed: a specific, observable problem (the interface was too complicated for volunteers), early enough to fix it before he’d built a whole product on top of the broken assumption.
Compare that to ReplyBot. No signal defined. No check performed. Three months of building, and the “failure” arrived in a demo — not as early feedback that could be acted on, but as a verdict. Game over.
When a signal fails early, you learn something true about the world before it’s expensive. That’s the whole point. The bet is designed to fail fast and cheaply. A signal that comes back “not met” in week two is worth far more than one you never defined and never checked, whose underlying assumption turns out to be wrong in month four.
I track the signals that come back negative as carefully as the ones that land. More carefully, honestly. They’re where the real learning is. Every time I’ve had a bet come back negative, the conversation I had with myself — and with the evidence — reshaped what I built next in ways that a positive signal never quite does.
Lost bets aren’t wasted time. They’re purchased knowledge. The only wasted time is the time you spent building something you never checked.
Before the signal: scope boundaries
There’s one more thing worth defining before you build, and it’s simpler but easy to skip: what’s out of scope for this bet.
Scope boundaries are the things you’re explicitly not building. Parked features. Adjacent ideas. Obvious additions you’re choosing not to make.
They matter because AI coding tools are addiction engines for feature creep. You describe a check-in feature, and the agent suggests adding a payment integration. That seems reasonable. You add a payment integration, and the agent suggests adding scheduling. That seems reasonable too. Two weeks later, you’ve built a full studio management platform that nobody asked for, and you haven’t tested any of it.
Scope boundaries are your defense. Before you build, write down three things you will not build during this bet. Not forever — just for this timeframe. They get parked, not deleted. They resurface when this bet resolves.
Noah’s scope boundaries for the StudioPulse check-in bet:

- No payment processing
- No class scheduling
- No instructor app
- No marketing website
These were all real ideas. Some of them became real features later. But they had nothing to do with whether studios would adopt digital check-in, which was the actual bet. Excluding them kept the build focused on the question being asked.
Every feature you build that isn’t required to test your signal is a distraction. Not a bad idea — just the wrong idea for right now.
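If you keep the bet in a file next to the code, the scope boundaries belong in the same record as the signal, so the parked list is in view every time the agent suggests something shiny. A sketch, structure mine rather than a DevKeel format:

```python
# One bet, its signal, and what's parked. Illustrative structure only.
checkin_bet = {
    "bet": "Studio owners will adopt digital check-in, not just demo it",
    "signal": "3 studios run 5+ consecutive days of check-ins, no paper fallback",
    "check_by": "two weeks from start",
    "out_of_scope": [  # parked, not deleted; revisit when the bet resolves
        "payment processing",
        "class scheduling",
        "instructor app",
        "marketing website",
    ],
}
```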
Your turn
You have a bet from the last post. Now define the signal.
Ask yourself: “If my hypothesis is right, what will I observe?” Be specific. Name a number. Name a behavior. Name a timeframe.
Then ask the harder question: “If my hypothesis is wrong, what will I observe instead?” If you can’t imagine the negative version, your signal is too vague.
One more: “What am I not building during this bet?” Write down three things. Park them. They’ll still be there when this bet resolves.
When you have a specific, observable, falsifiable signal — you’re ready to build.
In the next post, we’ll talk about what actually happens during the build. Not the code — the coaching.
Next in the series: Building With Your Agent — how to keep your AI coding tool oriented to the bet during the build phase, and why the conversation at the start of a session matters as much as the code at the end of it.
References
- The Bet-Driven Development framework is documented in full at devkeel.com/docs
- Signal types and measurement patterns are covered in the DevKeel examples