Decreasing strategy risk via discovery and user research
Product discovery is important, duh. In fact, it’s so important that thousands of founders and product managers spend an extraordinary amount of time trying to zero in on their customers’ exact problems, and to find and validate the exact solutions.
Most of that time is better spent doing something else.
That effort often fails because:
The research takes too long: By the time the result is in and the business decision is made, the market opportunity is gone.
Success criteria are too loose: Research methods don’t eliminate false positives. Everything seems like a good opportunity.
Success criteria are too strict: Promising signals are discarded due to “lack of confidence”.
Every failed discovery initiative I’ve seen failed for one of those reasons: it never led to a valuable decision.
It doesn’t matter how much data about users and their problems you can get. What matters is whether that data gives you enough information to make high-impact business decisions, and to make them quickly.
Here’s a simple test:
If someone on your team has done a lot of research into a new opportunity, and they respond with “We need to do more research” when you probe them for a product strategy, know that the initiative has failed.
At Unity, I once joined a team that had been “researching” an Enterprise SaaS opportunity for 3+ years. They asked me to set up a product discovery program to do more research. Within three months, I recommended closing the initiative, and moved on to another project.
This was a timely outcome, because shortly after, the entire group was dismantled. I wasn’t the only one who believed that they were beating a dead horse.
Since then, I’ve come up with several principles to foolproof product discovery work. Here they are.
Know when enough research is enough
Put another way: Minimize the cost of a decision.
Once you’ve reached an information threshold, more data doesn’t mean a better decision. And if you’ve been doing the research ad nauseam and still don’t have a signal? Well, you’ve got your answer: you’re trying to build something of no value. Move on.
False positives are worse than false negatives
False negatives are reversible, and false positives are not.
If you get a false negative signal and stop the initiative, you can often restart once you get more positive evidence.
If you get a false positive signal and decide to keep investing in an initiative with zero or even negative expected value, you’re putting your company at risk.
Define the "kill the initiative" criteria
You will keep burning precious time and money if you don't do this.
There’s a concept of Escalation of Commitment in decision science. It’s also known as commitment bias. In a nutshell, it’s a behavioral pattern in which people continue a course of action even as they face increasingly negative outcomes. It’s closely related to the sunk cost fallacy.
Simply knowing about this bias doesn’t help — you have to have a way to stop everything.
I’ve known about escalation of commitment for years. Still, I kept investing in product initiatives that should’ve been killed, and supporting other leaders who did the same.
My approach changed only after I built a habit of defining the stopping criteria before kicking off the work.