đ§š WHY SO MANY IDEAS FAIL
đ The Law of Market Failure
đș âMost ideas fail in the marketâeven good ones.â
Alberto Savoia introduces one of the bookâs cornerstone ideas: the Law of Market Failure.
â âThe Law of Market Failure states that most new ideasâregardless of how promising they seemâwill fail in the market.â
This is not just anecdotal; decades of data across industries show a consistent 80% to 90% failure rate for:
- Startups,
- New product launches,
- Marketing campaigns,
- Internal corporate innovations.
â Not Technical Failure â Market Failure
Most teams mistakenly believe that if they build a good product, customers will naturally come. But:
âThe market doesnât care how good your idea is if it doesnât solve a real problem.â
Even the best-engineered, most beautifully designed products fail if nobody wants them.
đ§Ș Three Types of Risk in Innovation
Savoia breaks down innovation risks into 3 types:
- Technical Risk â Can we build it?
- Execution Risk â Can we deliver it on time, on budget?
- Market Risk â Will they use/buy it?
Most teams focus obsessively on the first twoâand neglect the most dangerous one: Market Risk.
â âIf the market doesnât want it, the rest doesnât matter.â
đ The Illusion of Progress
One of the most damaging traps is the illusion of progress. Teams believe theyâre making headway because theyâre:
- Hiring engineers,
- Holding design sprints,
- Completing prototypes,
- Writing code,
- Launching MVPsâŠ
âŠbut theyâre not testing the core assumption: Do people actually want this?
đ„ âWe confuse activity with progress.â
â ïž Real-World Examples:
â Google Wave (2009)
- Massive hype, elite engineering team, beautiful UI.
- But: no one understood the problem it solved.
- Shut down after 1 year.
â Juicero
- High-tech juicer startup. Raised $120M.
- Required proprietary juice packs. A manual squeeze did the same thing.
- Exposed by journalists. Collapsed under ridicule.
â Dropbox
- Before building anything, the founders created a demo video explaining the product.
- Measured signups and interest before investing in infrastructure.
- Pretotyped, then built.
đ§ Mindsets That Lead to Failure
Savoia argues that idea failure is often not a business problem, but a psychology problem.
â âWe donât fail because of poor execution. We fail because of self-deception.â
1ïžâŁ The Reality Distortion Field
The term was popularized at Apple to describe Steve Jobs' ability to bend perception; in the hands of inexperienced innovators, the same distortion leads to disaster.
- Founders fall in love with their vision.
- They filter out negative feedback, dismiss critics, and overestimate demand.
â âWhen you believe in your idea too much, you lose sight of reality.â
Example:
- A team builds an app to gamify learning Chinese.
- Early feedback: âToo gimmicky.â
- They ignore it, saying users âjust need more exposure.â
- After 6 months, usage drops to near-zero.
2ïžâŁ Overconfidence Bias
âBecause we came up with the idea, we assume others will love it too.â
This bias leads to:
- Ignoring competitive research,
- Skipping market validation,
- Launching too soon.
Example:
- A founder says, âIâd use this, so everyone else will too.â
- But you are not your customer.
- Without evidence, this assumption is a gamble, not a strategy.
3ïžâŁ Survivorship Bias
"We focus only on the winners and ignore the graveyard of failed products."
Most books, talks, and blog posts feature success stories. We don't hear about the hundreds of failed apps, A/B tests, or product pivots that died quietly.
đ Case Study: Facebookâs âStoriesâ format succeeded after copying Snapchat. But Facebook had previously launched and killed many storytelling and ephemeral content features. The public sees only the winning version, not the failed experiments.
4ïžâŁ Confirmation Bias
- Innovators selectively look for data that supports their idea.
- They ignore or rationalize contradictory evidence.
âWe become detectives searching for cluesâonly we ignore anything that doesnât fit our theory.â
Example:
- 100 people visit your fake landing page.
- 5 people sign up.
- You tell yourself: âLook, 5% interest!â
- You ignore: â95% bouncedâthey didnât care.â
5ïžâŁ Emotional Attachment to the Idea
đ âThe more time you spend on an idea, the harder it is to let goâeven if itâs wrong.â
This is called the sunk cost fallacy. Teams:
- Keep tweaking,
- Keep iterating,
- Keep believingâŠ
âŠinstead of testing core assumptions or killing the idea early.
â The Better Mindset: Fall in Love With the Problem
One of the most important quotes in the book:
đĄ âFall in love with the problem, not the solution.â
- Stay obsessed with solving a pain or fulfilling a need.
- Be open to changing your solutionâor abandoning itâif it doesnât serve that goal.
- Innovation is not about being right from the start, but about discovering whatâs right through experimentation.
đČ Our Ideas Are Mostly Guesses â And Thatâs Okay
đ§ All Ideas Are Assumptions Until Proven Otherwise
âYour roadmap is a graveyard of guesses.â
Gilad emphasizes that ideation is inherently speculative. Great product thinking isnât about being right from the startâitâs about being adaptive and humble.
đŹ The Expert Myth
âEven the most experienced product leaders are wrong most of the time.â
Experts have pattern recognition, but patterns donât guarantee correctness. Markets change. Contexts shift. You need evidence.
đ Example: At Google, even senior engineers and PMs often failed A/B tests they were confident in. Over time, this eroded reliance on opinion and built a culture of testing everything.
đ HiPPO Decision-Making: A Red Flag
âWhen the Highest Paid Personâs Opinion overrides evidence, the team is flying blind.â
To shift to better product thinking:
- Replace opinions with observations.
- Replace authority with user insight.
- Replace roadmaps with evidence ladders (discussed later in the book).
đ§Ș The Experimentation Backlog
âThe product roadmap is not a delivery queueâitâs a hypothesis list.â
If most ideas fail, we should not treat them as projects to build, but as guesses to test.
đ Real-World Insight: Instead of saying: âWe will launch Feature A in Q2,â say: đ âWe believe Feature A may solve Problem X, and we will test it via a prototype or limited release before scaling.â
đ Output â Outcome â The Root of False Productivity
âïž Output: Building for the Sake of Delivery
âMany teams mistake activity for progress.â
Output is:
- Features shipped
- Sprints completed
- Code written
But none of these guarantee value to users or business.
đ Outcome: Real, Measurable Impact
âOutcome is the real north star: changes in user behavior that create value.â
Examples:
- Increase in daily active users (DAU)
- Reduced churn
- Higher customer satisfaction
- Increased conversion rate
đ Contrast Example:
Metric | Output | Outcome |
---|---|---|
Feature | Rolled out dark mode | 40% of users enable it & use app 10% longer |
Performance | Shipped faster checkout | Conversion rate rises by 15% |
Marketing | Sent newsletter | Open rates rise; reactivation improves |
đ Vanity Metrics: Dangerous Illusions
âJust because itâs measurable doesnât mean itâs meaningful.â
Examples:
- Number of tickets closed
- Story points completed
- Number of deployments
These donât correlate with user or business value. They can make teams feel good, but hide real problems.
đ Shipping â Success
Gilad warns of the release trapâcelebrating launches while ignoring results.
đ Better Practice: Measure adoption, usage, behavior change after launch. Treat releases as experiments, not finish lines.
đ RECAP: Why Most Ideas Fail
Trap | Why It Happens | What to Do Instead |
---|---|---|
Assuming demand | You think âIf I build it, they will comeâ | Use pretotyping to test demand before building |
Illusion of progress | You celebrate activity, not results | Measure engagement, not effort |
Overconfidence | You trust your gut too much | Demand external evidence |
Confirmation bias | You only see the good signals | Track all user behavior, not just highlights |
Emotional attachment | Youâve invested too much to let go | Remember: Killing a bad idea early saves time and money |

Myth | Reality |
---|---|
Ideas are facts | Ideas are guesses to be tested |
Experts know what works | Experts also need validation |
Shipping = success | Impact = success |
Measure speed and volume (output) | Measure value and outcome |
Decide based on opinion or authority | Decide based on evidence and experimentation |
âïž MAKE SURE YOU HAVE THE RIGHT IT
đ Pretotyping vs. Prototyping
đ§ âDonât just build it right. First, make sure youâre building the right âItâ.â
One of the most dangerous myths in product development is the idea that if you build a great product, customers will come. This chapter breaks that myth by contrasting prototyping with a more critical, often ignored step: pretotyping.
đ§Ș What Is a Pretotype?
âPretotyping is about testing the marketâs genuine interest in your ideaâbefore you build anything real or expensive.â
Pretotyping helps you fail fast and cheaply, so you donât succeed at building something no one wants.
Pretotype = Pre + Prototype. It's not a half-built product but a simulation or illusion designed to answer the only question that matters early on:
â âWill they use it if we build it?â
đ§ What Is a Prototype?
âA prototype answers the question: âCan we build it?ââ
Itâs about testing features, functionality, usability, design, etc. It assumes youâve already validated market interestâwhich is often not the case.
đ Why Pretotyping Must Come First
Savoia insists:
â âIt is far cheaper to test an ideaâs desirability than to assume it and risk full development.â
đ Real-World Case Examples:
â Zappos (Shoes Online)
- Founder Nick Swinmurn didnât build an e-commerce system first.
- He went to local shoe stores, took photos, and listed them online.
- When someone ordered, he went and bought the shoes manually.
đ Pretotyping: Validated âWould people buy shoes online?â
â Segway
- $100M+ invested in development.
- World-class design and tech.
- Assumed it would revolutionize transportation.
- Reality: No one knew where to ride it or wanted to change habits.
đ They prototyped brilliantly, but never pretotyped.
đ§ The Right It Tools & Metrics
This chapter introduces a framework of behavioral hypotheses and metrics to help you know if youâre on the path toward The Right It. You donât have to guessâyou test with data.
đ 1. The XYZ Hypothesis
đ âX people will do Y within Z time.â
This is the core of every pretotypeâit forces you to be specific, measurable, and accountable.
đ Why Itâs Powerful:
- Prevents vague hopes like âpeople will love this.â
- Encourages clear, testable predictions.
â Example:
â500 people will click the âJoin Waitlistâ button for our new budgeting app within 5 days.â
â Bad version:
âPeople will probably be interested in our app.â (No numbers, no time frame, no behavior.)
đ§Ș How to Use XYZ Hypotheses Effectively:
- X = How many people?
- Y = What action shows interest? (click, sign up, preorderâŠ)
- Z = In what time frame?
â âIf you canât test it in time and with numbers, itâs not a real hypothesis.â
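To make the structure concrete, here is a minimal Python sketch (the class and field names are mine, not from the book) that captures an XYZ Hypothesis as data and checks it against observed behavior:

```python
from dataclasses import dataclass

@dataclass
class XYZHypothesis:
    x_target: int   # X: how many people must act
    y_action: str   # Y: the observable action (click, sign up, preorder...)
    z_days: int     # Z: the time window

    def statement(self) -> str:
        return f"{self.x_target} people will {self.y_action} within {self.z_days} days."

    def evaluate(self, observed_actions: int) -> bool:
        # Did observed behavior within the window meet or beat the prediction?
        return observed_actions >= self.x_target

# The budgeting-app example from the text:
h = XYZHypothesis(x_target=500, y_action="click 'Join Waitlist'", z_days=5)
print(h.statement())
print("Validated" if h.evaluate(observed_actions=420) else "Not validated")
```

Writing the hypothesis down as data, rather than as a hopeful sentence, forces every test to declare X, Y, and Z up front.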
đ§Ź 2. Market Engagement Hypothesis (MEH)
âThe MEH is where your XYZ Hypothesis meets reality.â
This is your assumption about how the market will respond when given the opportunity to act, even with no real product yet.
đ§Ș How to Run a MEH Test:
- Use a landing page with a fake offer and a CTA (Buy Now, Join Beta).
- Use ads to see if people click on a fake product.
- Track real user behavior, not just traffic or likes.
â Example:
Create a simple site:
"New App: StudyTime - Beat Procrastination. Sign up for early access."
đ Measure signups (Y) over 5 days (Z) from 1,000 visitors (X).
đ 3. Initial Level of Interest (ILI)
âILI = People who take action / People exposed to the test.â
This is your conversion rate, and itâs a direct signal of potential market demand.
â Example:
- 1,000 visitors.
- 80 clicked âSign Upâ.
- đ ILI = 8%
You can now compare this against your target. If your goal was 5%, then 8% is strong validation.
đ âILI transforms qualitative ideas into quantitative traction.â
đ Whatâs a Good ILI?
Savoia doesnât give hard rulesâit varies by marketâbut generally:
- >10% = promising traction.
- 5â10% = further testing or iteration needed.
- <5% = likely weak demand.
đ« âDonât chase unicorns with limp engagement metrics.â
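The ILI arithmetic and the rough decision bands above fit in a few lines of Python. This is a sketch, not a prescription, since the bands vary by market:

```python
def ili(actions: int, exposed: int) -> float:
    # Initial Level of Interest = people who took action / people exposed
    return actions / exposed

def interpret(value: float) -> str:
    # Rough bands from the text; a pre-committed target overrides them.
    if value > 0.10:
        return "promising traction"
    if value >= 0.05:
        return "further testing or iteration needed"
    return "likely weak demand"

score = ili(actions=80, exposed=1000)   # the example above
print(f"ILI = {score:.0%} ({interpret(score)})")
# With a pre-set 5% goal, this 8% would count as strong validation.
```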
đ©âđŹ 4. High-Expectation Customers (HXC)
đ§ âThe best feedback doesnât come from average usersâit comes from the most demanding ones.â
HXCs are:
- Already searching for a solution,
- Deeply familiar with the problem,
- Hard to impress.
But if you win them, youâre likely building something great.
đŻ Why Target HXCs Early?
- They provide blunt, high-signal feedback.
- If they adopt your pretotype, itâs a green light.
- If they ignore or criticize it, youâre in danger.
â Example: Launching a writing AI tool
- Your HXCs = daily writers, bloggers, content creators.
- Place your fake landing page in Reddit/r/copywriting or a writersâ forum.
- Monitor engagement and qualitative comments.
â âIf the people who need it most donât care, why would the general public?â
đ§ Mindset and Measurement Shift: The New Way to Innovate
Old Mindset | The Right It Mindset |
---|---|
Guess and build | Test and measure |
Listen to opinions | Watch actual behavior |
Invest early | Validate early, build later |
Seek praise | Seek honest disinterest or rejection |
Perfect your prototype | Perfect your XYZ and MEH tests first |
â Summary: The Right It Toolkit in Practice
Tool | Purpose | What It Reveals |
---|---|---|
XYZ Hypothesis | Define your expected market behavior | Are you making a testable prediction? |
MEH | Run a quick, fake or simulated test | Will people take action now? |
ILI | Quantify initial traction | How strong is early interest? |
HXC | Stress-test your idea on early adopters | Is it compelling to the most demanding users? |
đ âIf you canât get people to engage with a fake version of your product, they probably wonât care when itâs real.â
đŻ THE IMPACT-FIRST MINDSET
How High-Impact Teams Think, Plan, and Build
This section reorients teams away from the traditional output-driven modelâwhere features and velocity dominate thinkingâto a more effective paradigm: impact-first thinking, where the primary goal is to drive measurable improvements in business and user outcomes through learning, iteration, and evidence.
đ What Are We Optimizing For? â Speed or Value?
â The Default (Flawed) Mental Model
âBuild more, ship faster, and success will follow.â
This model confuses activity with progress. Agile teams, CI/CD pipelines, and sprint velocities become the north star.
However, speed without direction equals waste.
â Reframed Mental Model
âProduct success = delivering value to users and business through validated learning.â
Gilad reframes the question from:
âWhat features are we building this quarter?â to âWhat outcomes are we aiming to achieve?â
đ Analogy: The Compass vs. The Speedometer
- Output-first mindset = checking the speedometer: âHow fast are we going?â
- Impact-first mindset = checking the compass: âAre we heading in the right direction?â
âProgress without direction is just wasted motion.â
âïž Defining Impact â The Two-Sided Equation
đ What Is Impact, Really?
â âImpact = User Value + Business Valueâ
Success only happens when:
- Users benefit meaningfully
- The business gains value (revenue, retention, efficiency, etc.)
đ§ User Value
Value delivered to the end user. It must solve a real problem or enable a meaningful improvement.
Examples:
- Easier onboarding
- Feature discoverability
- Better UX / accessibility
- Saving user time, effort, or money
âIf your product isnât helping users succeed, it wonât survive.â
đŒ Business Value
Concrete, measurable contributions to business objectives:
- Higher conversion rate
- Improved retention
- Increased revenue per user
- Reduced churn or support costs
â âTrue product impact connects user success to business success.â
đ Example: A redesigned signup flow that cuts time in half (user value) and improves conversion from 10% â 14% (business value).
đ§° The GIST Framework â Bridging Vision to Action
Gilad introduces GIST as a flexible, scalable tool to support evidence-driven impact delivery:
GIST = Goals â Ideas â Step-Projects â Tasks
It aligns strategic thinking (Goals) with tactical execution (Tasks) through validated learning.
đ„ G = Goals
â âGoals are clear, measurable impact objectives, not features.â
Examples:
- Increase 30-day user retention from 25% to 35%
- Reduce cart abandonment by 15%
- Boost Net Promoter Score (NPS) from 40 to 60
đ Goals must be:
- Outcome-based (not output-based)
- Quantifiable
- Time-bound
đĄ I = Ideas
âIdeas are hypothesesâguesses about how to reach a goal.â
Most teams confuse ideas with requirements or specs. But in reality:
â âIdeas are just untested beliefs, no matter how logical they seem.â
Teams must:
- Generate multiple ideas (divergent thinking)
- Score them using tools like ICE or RICE
- Track them in an Idea Bank (as covered later in the book)
đ Example: To increase onboarding completion, ideas might include:
- Reduce signup fields
- Add welcome video
- Enable social login
Each is a guessânot a guarantee.
đ§Ș S = Step-Projects
â âFast, inexpensive experiments that test assumptions behind ideas.â
This is where learning happens.
Examples:
- Fake door tests
- Landing page A/B tests
- Concierge MVPs
- Email split tests
âDonât build until you test.â
đ Case Example: Rather than build a complex refer-a-friend system, test user interest with:
- A âRefer a Friendâ button that logs clicks
- Manual coupon delivery (instead of full automation)
đ âStep-projects reduce risk while increasing certainty.â
âïž T = Tasks
âTasks are delivery items created only after validation.â
Once a step-project proves that an idea is viable and impactful, it moves into actual development.
â âDonât start with tasks. Start with goals and validation.â
đ GIST ensures this bottom-up execution aligns with top-down strategy.
đ GIST in Action: Example Scenario
Goal: Reduce time to first value (TTFV) by 30%
Ideas:
- Pre-fill user setup form
- Auto-recommend features based on usage
- Add onboarding video
Step-Projects:
- Run A/B test for pre-filled form
- Email new users with video vs. no video
Tasks:
- Develop pre-fill automation
- Design a recommendation engine
Only after measuring impact do teams commit engineering time.
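One way to picture the GIST hierarchy is as a simple data structure in which tasks are only generated from validated ideas. This Python sketch (names and fields are illustrative, not from the book) encodes that gating rule:

```python
from dataclasses import dataclass, field

@dataclass
class StepProject:
    name: str
    validated: bool = False          # flipped only after the experiment succeeds

@dataclass
class Idea:
    hypothesis: str
    step_projects: list = field(default_factory=list)

    def is_validated(self) -> bool:
        return any(sp.validated for sp in self.step_projects)

@dataclass
class Goal:
    outcome: str                     # outcome-based, quantifiable, time-bound
    ideas: list = field(default_factory=list)

    def ready_for_tasks(self) -> list:
        # Gating rule: only validated ideas graduate into delivery tasks.
        return [i for i in self.ideas if i.is_validated()]

goal = Goal("Reduce time to first value (TTFV) by 30%")
goal.ideas.append(Idea("Pre-filled setup form speeds activation",
                       [StepProject("A/B test pre-filled form", validated=True)]))
goal.ideas.append(Idea("Onboarding video speeds activation",
                       [StepProject("Email video vs. no video")]))
print([i.hypothesis for i in goal.ready_for_tasks()])  # only the pre-fill idea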
đïž Impact-First Planning â Redesigning the Product Planning Process
â Problem with Traditional Roadmaps
âMost roadmaps are just a list of guesses with deadlines.â
These:
- Create false certainty
- Encourage overcommitment
- Limit adaptability
- Reward shipping, not learning
â The Impact-First Alternative
âStart with outcomes. Let solutions emerge from evidence and experimentation.â
Instead of âWeâll build feature X by July,â say:
âWe aim to improve customer activation rate by 20% in Q3.â
This keeps:
- Autonomy for teams to explore options
- Alignment around measurable goals
đŻ OKRs Done Right
Gilad advocates for real OKRs, not task lists disguised as goals.
đ Correct Format:
- Objective: Improve trial-to-paid conversion
- KR1: Increase free trial completion from 50% to 65%
- KR2: Raise checkout conversion from 20% to 28%
â âOKRs should be inspiring, yet grounded in measurable outcomes.â
đ§ Summary â Key Mental Shifts for an Impact-First Culture
â Output-Driven Thinking | â Impact-First Thinking |
---|---|
Deliver more features | Deliver more value |
Plan by timelines & specs | Plan by goals and validated ideas |
Ideas = product requirements | Ideas = testable hypotheses |
Success = shipping | Success = outcome improvement |
Tasks are the start of work | Tasks come last, after goals and experiments |
â Final Reflection from Gilad
âThe job of a product team is not to deliver featuresâitâs to solve problems and create impact.â
This requires:
- A radical mindset shift
- New tools (like GIST, OKRs, Confidence Meters)
- A culture of experimentation, curiosity, and humility
âïž TESTING YOUR IDEA â THE PRETOTYPE METHOD
This part draws on The Right It by Alberto Savoia, covering its core principles with explanations and real-world, practical examples.
đ§° OVERVIEW: Why You Need Pretotyping
After understanding why most ideas fail (Part 1) and learning how to define and frame a testable idea (Part 2), this part focuses on how to actually test itâquickly, cheaply, and before building anything real.
đĄ âA pretotype is not a lesser version of your productâitâs a smarter version of your decision-making process.â
Youâll learn to simulate usage, observe behavior, and collect real engagement data with minimal investment.
đ ïž The Six Pretotype Methods
Each pretotype method is a tactical technique that allows you to test a specific aspect of user interest or behavior with minimal effort.
đč 1. The Fake Door Test
đȘ âPut up a door. If no one tries to open it, donât build the house.â
â What it is:
You create a fake entry point (e.g., a signup button, a âBuy Nowâ page) for a product or feature that doesnât exist yet.
đŻ Goal:
Measure user intent through clicks, signups, or interest, without having to build anything.
đ§Ș How it works:
- Build a landing page or CTA for your idea.
- When users click, show a message like: "Thanks! We're not quite ready yet. Join the waitlist and we'll notify you soon!"
đ§ Example:
A new budgeting app idea â
- You run Google Ads with âTame Your Spending with BudgetBuddy.â
- Visitors click âTry It Freeâ on a basic page.
- You measure clicks, email captures, bounce rate.
â If no one clicks the button, why build the product?
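As an illustration, a fake door can be as small as a web endpoint that counts clicks and shows the waitlist message. Below is a hypothetical sketch using Flask; the routes, copy, and in-memory counter are invented for the example:

```python
# pip install flask
from flask import Flask, redirect

app = Flask(__name__)
clicks = {"try_it_free": 0}          # a real test would persist this

@app.route("/try-it-free")
def fake_door():
    # The "door": the button exists, the product behind it does not.
    clicks["try_it_free"] += 1
    return redirect("/waitlist")

@app.route("/waitlist")
def waitlist():
    return ("Thanks! We're not quite ready yet. "
            "Join the waitlist and we'll notify you soon!")

if __name__ == "__main__":
    app.run()                        # then watch the clicks accumulate
```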
đč 2. The Infiltrator Test
đ”ïž âSlip your idea into an existing environment and watch what happens.â
â What it is:
Insert your product idea into an existing system, platform, or experience without fanfareâand see if people use it naturally.
đŻ Goal:
Validate real-world fit and organic adoption.
đ§Ș How it works:
- Add a small new button, feature, or prompt inside an existing product, store, or page.
- Donât promote itâlet it sit.
- Track interactions or neglect.
đ§ Example:
You want to test a feature that suggests AI-generated job descriptions. You quietly embed a âTry SmartWriterâ button inside your existing HR dashboard.
- Measure how many users discover and click it organically.
đĄ âIf users donât notice or use it when itâs in their flow, it might not matter.â
đč 3. The One-Night Stand Test
đ âOffer your product temporarilyâbecause permanent commitment is expensive.â
â What it is:
Provide your product or service for just one day (or a limited window) and observe demand and behavior.
đŻ Goal:
Quickly simulate real-world conditions with short-term exposure and no long-term risk.
đ§Ș How it works:
- Launch a pop-up event or store, a webinar, or a flash offer.
- Announce it through minimal ads, email, or social.
- Measure engagement: signups, show-ups, purchases.
đ§ Example:
You want to launch a fitness accountability program.
- Offer a 1-day trial event for $5 with a live coach on Zoom.
- Track how many sign up, how many show up, how they react.
đ âA one-night stand reveals if thereâs chemistryâbefore you commit to marriage.â
đč 4. The Pinocchio Test
đȘ” âBuild a fake product that looks realâbut doesnât function.â
â What it is:
You create a non-functional mockup of your product to observe user interaction and test comprehension or desirability.
đŻ Goal:
Find out if people understand and engage with your product concept, even if it doesnât work.
đ§Ș How it works:
- Create a fake version: a dummy app screen, clickable prototype, 3D printed device shell, etc.
- Give it to users.
- Ask them to âuse itâ as if it were real.
đ§ Example:
Testing an âAI-powered grocery list planner.â
- Create a Figma prototype of the app.
- Watch users try to interact.
- Do they know what it does? Where do they click first? Do they seem excited or bored?
â âIf they donât want the fake version, they wonât want the real one either.â
đč 5. The Mechanical Turk
đ§ âSimulate the output of a machineâusing human effort behind the scenes.â
â What it is:
You fake a software feature or hardware automation by doing the task manually, while making it look automated.
đŻ Goal:
Test if people will use the functionality before building complex tech.
đ§Ș How it works:
- You create a front-end or interface that mimics the full product.
- Behind the scenes, your team fulfills requests by hand.
đ§ Example:
You want to build âAI Resume Booster.â
- Users upload resumes and get edits back.
- Instead of AI, your editor does the work manually.
- You test demand before building the AI model.
đĄ âWhy code an algorithm until you know theyâll pay for the outcome?â
đč 6. The Re-label Test
đ âTake an old product. Give it a new name or use case. See what happens.â
â What it is:
You test new positioning or audience segments by repackaging an existing product.
đŻ Goal:
Test if a new use case, niche, or branding resonates better.
đ§Ș How it works:
- Use your current product or someone elseâs.
- Market it to a new group, with new messaging.
- Track conversion and interest.
đ§ Example:
You already sell sleep headphones to travelers. Re-label them:
"Deep Focus Headphones for Remote Developers"
Run new ads. See which segment responds better.
đ âSometimes the product isnât wrongâthe audience is.â
đ Innovation Metrics & Decision Frameworks
đ âPretotyping without measurement is just theater.â
After you run your pretotyping tests, how do you know if your idea is âThe Right Itâ? You need a decision system based on behavior, not bias.
đ§ Key Innovation Metrics
đž XYZ Hypothesis
âX people will do Y within Z time.â
This turns vague ideas into specific predictions.
đž MEH â Market Engagement Hypothesis
"If we offer this, people will do that." It is the bridge between concept and test.
đž ILI â Initial Level of Interest
ILI = (# who take action) / (Total exposed)
A clean metric to quantify real engagement.
đ What to Do With the Data
Outcome | Interpretation | Decision |
---|---|---|
High ILI (10â15%+) | Strong early interest | â Go ahead with MVP |
Medium ILI (5â10%) | Mixed signals | đ Iterate or retest |
Low ILI (<5%) | Weak or no demand | â Kill the idea |
đ„ Focus on Behavior, Not Applause
â Donât fall for vanity metrics like:
- Pageviews
- Likes
- Comments
- Shares
â Focus on real behavior:
- Clicks
- Signups
- Preorders
- Time spent
- Money spent
đ„ âIf they donât pay attention now, they wonât pay later.â
đ§ Final Insight: Pretotyping Is an Innovation Mindset
đĄ âPretotyping is not just a testing toolâitâs a cultural shift.â
It teaches you to:
- Think in experiments
- Kill ideas early, proudly, and cheaply
- Use data, not gut feel
- Learn by simulating, not by shipping
đŹ EVIDENCE-GUIDED DEVELOPMENT
âSuccess is not about being right from the startâitâs about reducing uncertainty smartly.â
In this pivotal section, Gilad lays out the practical mechanics for moving from guesswork and gut-feel to a system of learning, experimentation, and evidence-based decision making. Itâs a shift from certainty-seeking to curiosity-driven building, supported by data and behavioral insight.
đȘ The Evidence Ladder â A Tool for Judging Idea Quality
đŻ The Core Idea
â âNot all ideas are created equal. Their strength lies in the evidence supporting them.â
Most teams donât evaluate the quality of ideasâthey prioritize by influence, intuition, or trends. The Evidence Ladder gives you a clear framework to rank ideas based on the reliability of the evidence backing them.
đ¶ The 5 Levels of the Evidence Ladder
1. Speculation
   đ§ "I just have a feeling this might work." These are pure guesses, unbacked by any validation. â ïž Risk: Building from here without testing leads to waste.
2. Opinions
   đŹ "The VP of Sales thinks we need this." This includes feedback from stakeholders, teammates, even users. But it's subjective and biased. â ïž Still weak evidence until verified through action.
3. User Feedback
   đŁïž "Users told us they want this feature in interviews." Valuable, but still what people say, not what they do. Needs to be tested behaviorally. âïž Better than speculation, but not sufficient on its own.
4. User Behavior
   đ "Users clicked the fake door button at a 15% rate." Behavioral evidence (from analytics, A/B tests, click maps) shows what people actually do. â Strong indicator that the idea creates real engagement.
5. Business Results
   đ° "The idea increased conversions by 12% and improved LTV." The gold standard. When a tested idea leads to measurable business outcomes, confidence is at its highest. â â High-value ideas live here.
đ§ How to Use the Ladder
â âThe higher an idea sits, the more confidently you can pursue it.â
đ Use this as a prioritization filter:
- Low-evidence ideas â run small experiments
- High-evidence ideas â invest further, scale up
- No-evidence ideas â donât put on your roadmap yet
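A sketch of the prioritization filter above, assuming a simple five-level scale (the level names follow the ladder; the recommended moves are one reasonable reading of it, not a rule from the book):

```python
from enum import IntEnum

class Evidence(IntEnum):
    SPECULATION = 1
    OPINION = 2
    USER_FEEDBACK = 3
    USER_BEHAVIOR = 4
    BUSINESS_RESULTS = 5

def next_move(level: Evidence) -> str:
    if level <= Evidence.OPINION:
        return "keep off the roadmap; run a small, cheap experiment first"
    if level <= Evidence.USER_BEHAVIOR:
        return "promising; run a larger test before committing"
    return "high confidence; invest further and scale up"

backlog = {"AI chatbot": Evidence.OPINION,
           "Simplified login": Evidence.USER_BEHAVIOR}
for idea, level in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{idea}: {next_move(level)}")
```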
đ§Ș Testing Ideas with Step Projects â Small Bets, Fast Learning
â The Problem with Big-Bang Development
â âLetâs build the whole feature, then launch and see if it works.â
This approach:
- Consumes months of dev time
- Delays feedback
- Increases risk and emotional attachment
â Step Projects: The Safer, Smarter Alternative
â âStep Projects are mini-experiments designed to test the most critical assumptions of your idea.â
They are:
- Quick (days to weeks)
- Cheap (light engineering or no-code)
- Focused (test one core assumption)
đ Examples of Step Projects
Type | Description | Example |
---|---|---|
Fake Door Test | Show an option that doesnât exist yet | Add âUpgrade to Proâ button, log clicks |
Landing Page Test | A/B test marketing messages or features | Test new pricing tiers on mock pages |
Wizard of Oz MVP | Fake the backend, simulate experience | Manual fulfillment of orders to test demand |
Email Split Test | Measure response to different concepts or CTAs | Test âRefer a Friendâ vs âGet a Bonusâ |
Usability Test | Use Figma/Sketch to simulate flows | Observe users interact with new checkout flow prototype |
đ Step Projects = Fast Learning, Not Fast Building
đ§ âIf your idea is wrong, better to find out in 2 days than in 3 months.â
Key Benefits:
- Protect team bandwidth
- Learn from real behavior
- Build internal culture of experimentation over certainty
đ What Makes a Good Step Project?
- Tests the riskiest assumption first
- Has a clear success metric
- Is time-boxed and simple
- Can be run with minimal disruption
â âDonât ask âCan we build it?â Ask, âShould we build it?ââ
đ Learning from Data â Build the Learning Engine
đ Why Learning Is More Valuable Than Shipping
đ« "Shipping velocity is a vanity metric unless tied to outcomes."
â "Learning velocity is the new superpower of product teams."
đ Sources of Learning Signals
1. Quantitative Data
   - Funnel analytics (e.g., conversion, retention)
   - A/B testing results
   - Heatmaps and session replays
   - Cohort analysis
2. Qualitative Data
   - Customer interviews
   - Usability sessions
   - Customer support tickets
đ Example Loop in Action
Goal: Improve onboarding completion
Idea: Add progress bar
Step Project: A/B test with 20% of new users
Result: 18% lift in completion
Next: Scale feature, monitor long-term impact
â âTeams should run dozens of these loops per quarterânot 1 big risky bet.â
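When reading a step-project result like the 18% lift above, it helps to check that the difference is bigger than noise. Here is a minimal two-proportion z-test in Python; the counts are hypothetical, shaped to match the example, and the test itself is standard statistics rather than something the book prescribes:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: is B's conversion rate really above A's?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# 50% -> 59% completion is the ~18% relative lift from the loop above.
p_a, p_b, z, p = two_proportion_z(conv_a=500, n_a=1000, conv_b=590, n_b=1000)
print(f"control {p_a:.0%} vs progress bar {p_b:.0%}, z={z:.2f}, p={p:.4f}")
```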
đ§ Key Insight
âYour product is not the final outputâyour learning is.â Shipping without learning is just output. Learning leads to outcome.
đïž Prioritizing by Evidence and Value â Choose Wisely
â The Reality of Idea Backlogs
â âMost idea lists are long, unstructured, and driven by emotion.â
Stakeholders push pet features, trends drive FOMO, and teams build based on who shouts the loudest.
â Replace Gut Feel with Scoring Models
đą ICE Scoring
Impact Ă Confidence Ă· Effort
Simple, fast model for quick idea comparisons.
đ Example:
Idea | Impact | Confidence | Effort | ICE Score |
---|---|---|---|---|
Simplified login | 8 | 7 | 3 | 18.7 |
AI chatbot | 6 | 3 | 5 | 3.6 |
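The ICE arithmetic is trivial to automate, which makes it easy to keep a whole backlog ranked. A short sketch reproducing the table above (the function and scores mirror the table; nothing here is from the book itself):

```python
def ice(impact: float, confidence: float, effort: float) -> float:
    # ICE = Impact x Confidence / Effort (each typically scored 1-10)
    return impact * confidence / effort

ideas = {"Simplified login": (8, 7, 3), "AI chatbot": (6, 3, 5)}
for name, scores in sorted(ideas.items(), key=lambda kv: -ice(*kv[1])):
    print(f"{name}: ICE = {ice(*scores):.1f}")
# Simplified login: ICE = 18.7
# AI chatbot: ICE = 3.6
```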
đ¶ Confidence Meter
â âVisual tool that tracks the strength of supporting evidence for each idea.â
Use color-coded tiers:
- đŽ Speculative / Opinion
- đĄ Some user feedback
- đą Behavior-tested / Result-backed
Helpful for:
- Roadmap debates
- Stakeholder discussions
- Justifying prioritization
đ§ RICE Model
Reach Ă Impact Ă Confidence Ă· Effort
Adds scale to the ICE modelâespecially for B2C or large platforms.
đ Example:
Idea | Reach | Impact | Confidence | Effort | RICE Score |
---|---|---|---|---|---|
Auto-suggestions | 5,000 | 6 | 7 | 4 | 52,500 |
New dashboard | 500 | 8 | 8 | 5 | 6,400 |
â âRICE prevents small-impact projects from crowding out high-leverage ones.â
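RICE is the same computation with reach multiplied in. A sketch that reproduces the table (note the table above scores confidence on a 1-10 scale, whereas some RICE variants use a percentage):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    # RICE = Reach x Impact x Confidence / Effort
    return reach * impact * confidence / effort

print(rice(reach=5000, impact=6, confidence=7, effort=4))  # Auto-suggestions -> 52500.0
print(rice(reach=500, impact=8, confidence=8, effort=5))   # New dashboard -> 6400.0
```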
đ§ Giladâs Rule
â âIdeas with low confidence should get tiny tests, not massive investments.â
This saves:
- Time
- Developer energy
- Morale (you fail fast, not late)
đ§ RECAP â FROM GUESSING TO EVIDENCE-LED BUILDING
â Traditional Thinking | â Evidence-Guided Practice |
---|---|
Opinions and authority guide ideas | Evidence and behavior validate ideas |
Roadmaps filled with assumptions | Roadmaps filtered by confidence & testing |
Big launches, slow feedback | Small tests, fast learning cycles |
Prioritize by gut or trends | Prioritize by ICE, RICE, and Confidence Meter |
Learning is accidental | Learning is systematic and fast |
đŹ âThe best teams donât guess betterâthey test better.â
This approach is not about being risk-averseâitâs about taking smarter risks, fast.
đ§ NAVIGATING THE INNOVATION JOURNEY
đ§ The Innovatorâs Journey
đŻ âBringing a new idea to life is not a straight pathâitâs a deeply personal adventure filled with uncertainty, resistance, insight, and transformation.â
Savoia frames the innovation process as a modern adaptation of the Heroâs Journey, a powerful narrative model described by Joseph Campbell. Why? Because innovators, like heroes, must confront fear, resistance, failure, and self-doubtânot just market challenges.
This metaphor is not just poeticâit provides a psychological roadmap that prepares innovators for the ups and downs of product development and internal advocacy.
đ„ Stage 1: The Spark â The Call to Adventure
đĄ âEvery innovation starts with a moment of excitementâa belief that things can be better.â
This is when:
- A frustration with the status quo triggers a creative impulse.
- A personal insight or customer pain point inspires a concept.
- A founder says, âThere has to be a better way.â
â ïž But early excitement is dangerous because:
- You may fall in love with your solution before testing the problem.
- You assume others will see what you see.
- You become vulnerable to confirmation bias.
đ§ âBeware of confusing the emotional thrill of a new idea with actual market demand.â
đ§Ș Stage 2: The Test â Crossing into Reality
đ§Ș âThe moment you run your first pretotyping experiment, you step into the real world.â
Here, the innovator faces the truth:
- Are people interested?
- Will they click, sign up, engage, or pay?
- Or will they ignore, bounce, or criticize?
This stage brings resistance:
- From users (lack of interest),
- From team members (who fear change),
- From yourself (fear of failure or data that contradicts your vision).
đ âThis is the stage where the idea stops being funâand starts being real.â
Yet this is where learning accelerates, if you have the courage to:
- Kill a weak idea quickly,
- Pivot based on feedback,
- Stay emotionally detached.
â âStrong innovators fall in love with the problem, not the solution.â
đ Stage 3: Validation, Iteration, or Rejection
đ âPretotyping shows you whether youâre on the right trackâearly, cheaply, and with brutal honesty.â
This is the decision point:
- If ILI (Initial Level of Interest) is strong:
  - Go deeper,
  - Consider building a prototype or MVP.
- If ILI is weak:
  - Re-test with another pretotyping method.
  - Pivot the positioning, market, or features.
- If there's still no traction:
  - Kill it, without shame.
â Example:
You test an app for time-boxed study sessions with college students.
- Fake Door test yields 500 signups in 3 days. â Signal to move forward.
- A different feature (group voice rooms) yields 1% clickthrough. â Kill that feature or pivot the pitch.
đĄ âPretotyping doesnât just validate ideasâit forces clarity.â
đ Stage 4: Scaling or Strategic Exit â The Return with the Elixir
đ§ âOnce youâve proven people want your idea, the challenge shifts from testing to building.â
This is the moment to:
- Pitch investors or management with data, not dreams.
- Build your MVP with evidence-backed features.
- Hire the right team and expand distribution.
But also recognize:
- Not every idea deserves scaling.
- Sometimes a great concept is too early, too niche, or not strategically aligned.
- Knowing when to exit gracefully is a sign of strategic maturity.
â âThe goal of the innovatorâs journey is not just to launchâbut to discover whatâs worth launching.â
đ§± Organizational Barriers
đ§± âEven the best ideas will fail if the culture punishes curiosity and rewards delivery at all costs.â
In this chapter, Savoia turns his lens to the systemic and structural forces inside companies that make innovation difficultâeven when teams do everything right.
đ§š Barrier 1: Incentives Reward Building, Not Learning
â âIn many companies, building something bad is rewarded more than learning not to build it at all.â
Teams are often pressured to:
- Launch on time rather than test first.
- Hit delivery KPIs instead of insight milestones.
- Look busy, not be effective.
đ§ What this produces:
- Teams ignore weak signals.
- Stakeholders avoid âfailureâ at all costsâeven if it means wasting millions.
đ§Ș Example:
A team builds a product no one tested. It flopsâbut since it was delivered on time, the project manager is promoted.
â âWe must start rewarding people for killing bad ideas early.â
đ Barrier 2: Corporate Theater and the Illusion of Certainty
đ âIn the absence of data, companies perform theaterâcomplete with fake confidence, assumptions, and plans.â
Symptoms of corporate innovation theater:
- 100-slide business cases based on optimistic projections.
- âValidatedâ user personas based on guesswork.
- Gantt charts that schedule creativity.
â âExecutives often ask for certaintyâbut real innovation starts with admitting what you donât know.â
đ± Solution: Build a Culture of Experimentation
đ± âA culture that celebrates learning will always outperform one that punishes failure.â
Organizations that innovate well:
- Allocate small budgets and fast timelines to idea tests.
- Celebrate validated pivots and early kills.
- Create safe-to-fail zones (e.g., 20% time, hack weeks, innovation sprints).
â Real Example: Google X
Googleâs innovation lab encourages teams to kill their own ideas.
âWe reward people who stop projects, not just those who launch them.â
đ âInnovation ROI improves drastically when failure is fast, cheap, and instructive.â
đ Reframing Success and Failure
Traditional Thinking | The Right It Mindset |
---|---|
Delivering = Winning | Validating = Winning |
Failure = Shame | Early failure = Smart learning |
Certainty = Strength | Admission of ignorance = Strength |
Big launch = Progress | Early traction = Real progress |
đ„ âA successful innovation system doesnât prevent failureâit detects and absorbs it early.â
â Final Takeaways
đĄ âInnovation is not just a technical or market challengeâitâs a personal and cultural one.â
For Innovators:
- Embrace uncertainty and feedback.
- Expect emotional highs and lows.
- Be ready to pivot, kill, or defend your ideaâwith data, not ego.
For Leaders and Organizations:
- Create safe spaces to test and fail.
- Reward learning and insights, not just execution.
- Dismantle incentives that promote building The Wrong It.
đ§ âThe Right It isnât found through planningâitâs discovered through testing, humility, and bold honesty.â
đ§ BEHAVIORAL & CULTURAL SHIFTS
đ Shifting Mindsets
â ïž âThe main obstacle to innovation is not technology, tools, or timeâitâs peopleâs habits and mindsets.â
Savoia insists: even the best tools like pretotyping wonât lead to successful innovation unless thereâs a shift in how people think about risk, failure, and experimentation. Behavioral inertiaâclinging to old processes or fearing failureâis the root of slow or false innovation.
đĄ From Execution-First to Experimentation-First
đ§ âIn most organizations, execution is kingâexperimenters are seen as rebels.â
That mindset kills innovation. Instead, we must shift toward:
- Testing ideas before building them
- Investing in uncertainty reduction, not just delivery
- Viewing every initiative as a hypothesis, not a certainty
đ Old Mindset vs. New Mindset:
Old Execution-First Thinking | New Experimentation-First Thinking |
---|---|
âWeâve planned it, letâs build it.â | âLetâs test interest before we build anything.â |
âJust trust your gut.â | âLetâs see what users actually do.â |
âFailure looks bad.â | âFailure is a signalâit guides better decisions.â |
âPerfect the prototype.â | âRun pretotypes first, fast and cheap.â |
â âDonât execute before validating. Donât scale what you havenât tested.â
đ§š Celebrate Early Idea Kills
đ§ âKilling a bad idea early is not failureâitâs high-performance decision-making.â
Many organizations and founders have been trained to fear the optics of failure, even small ones. But this fear leads to:
- Zombie projects that burn money and time,
- Risk-averse behavior (avoiding novel ideas),
- Pressure to justify sunk costs, even when evidence shows the idea doesnât work.
Instead, organizations should actively reward those who run honest experiments and make the call to kill weak ideas.
đ„ âEarly kills save millions later. They should earn medalsânot reprimands.â
â Real-World Example:
A team at a large SaaS company pitched a new dashboard module. Instead of building it, they launched a Fake Door test with 2,000 existing users. Only 17 clicked âLearn More.â The team killed the projectâsaving $400K in dev costs. They were congratulated in a company-wide email.
đ§ âEvery killed idea is a step closer to The Right It.â
đ„ Normalize Micro-Failures to Enable Macro-Success
đ§Ș âInnovation is not about avoiding failureâitâs about failing in ways that are small, fast, cheap, and full of learning.â
Teams that fear failure donât experiment, which means they never discover better paths.
By contrast, innovative teams run micro-experiments constantly:
- 2 versions of a landing page,
- 3 different positioning messages,
- 5 mini-product concepts in one quarter.
These tiny tests are:
- Low-cost,
- Easy to design,
- Extremely informative.
â âA 48-hour failed test beats a 6-month failed launch every time.â
đ Organizational Practices to Shift Culture
đ± âCulture is not what leaders sayâitâs what teams do without asking permission.â
To make experimentation and learning part of culture:
đ Practical Shifts:
- â Make every new initiative start with an XYZ Hypothesis
- â Give product teams monthly mini-budgets just for pretotyping
- â Publicly celebrate pivot stories and honest kills
- â Measure learning velocity, not just shipping velocity
- â Hold âTest-What-Mattersâ weeks or Pretotyping Hackathons
â Cultural Example:
Spotifyâs team uses a âThink It â Build It â Ship It â Tweak Itâ model. They donât ship until âThink Itâ is validatedâand that stage includes MEH tests and engagement metrics, not just wireframes.
đ± The Right It in Practice
đ âThe Right It isnât a theoryâitâs a proven, scalable system for real-world validation.â
This chapter shares case studies and field-tested patterns to demonstrate how pretotyping works across startups, corporations, and solo entrepreneurs.
â Google AdWords â The Mechanical Turk Pretotype
đ§ âThe original AdWords was a fake interface connected to a team of humans.â
Before building the tech:
- Google let users enter ads into a dummy interface.
- Behind the scenes, humans manually placed the ads.
- Result: explosive traction â greenlighted automation build.
â âProof of concept should precede product development.â
â IBM â Enterprise Pretotyping
At IBM, a product team ran a non-functional dashboard demo to test internal use of predictive analytics.
- The dashboard looked realâbut data was mocked up manually.
- They measured engagement, not opinion.
Result:
- 70% clickthrough,
- Executive sponsors signed off.
đ§ âEven in corporate settings, Mechanical Turk and Fake Door tests save huge budgets.â
â Individual Entrepreneurs â Small Bets, Fast Lessons
đĄ âThe Right It is your startup co-founderâeven if youâre solo.â
Examples:
- A UX designer used the One-Night Stand method to test a design-sprint coaching offer on LinkedIn â sold 3 sessions before building a site.
- A developer created a Re-label Test for his focus appâmarketing it as âDeep Work Tool for Lawyersâ â 400% higher conversion.
đŻ âPretotyping gives the solo entrepreneur leverageâdata replaces doubt.â
đ Culture of Evidence-Guided Product Development
â From Authority-Based to Evidence-Based Decisions
â âMany product decisions are still driven by the loudest voice in the room.â
HiPPOs (Highest Paid Personâs Opinions), office politics, and legacy thinking often dominate. Gilad urges a cultural transformation:
â âGood ideas can come from anywhereâbut only evidence can validate them.â
đ Build a Culture of Curiosity and Experiments
â âReplace âproving youâre rightâ with âfinding whatâs true.ââ
Key behaviors of a strong evidence-guided culture:
- Welcoming failure as part of learning
- Celebrating invalidated ideas for saving resources
- Rewarding experiments, not just feature launches
đ Example: A product manager proposes an onboarding chatbot, runs a test, and finds engagement drops. Instead of being blamed, the team is praised for invalidating the wrong bet early.
đ Cultural Slogans to Reinforce Mindset
- âTest before you build.â
- âIdeas are hypotheses, not promises.â
- âCelebrate small failures that prevent big ones.â
â âCulture eats strategy for breakfastâso make experimentation part of your companyâs identity.â
đ„ Empowered Teams â Autonomy with Alignment
đ§± What Empowerment Actually Means
â âEmpowered teams donât just executeâthey solve problems.â
Too often, teams are handed a roadmap of outputs and told to deliver on schedule. This isnât empowermentâitâs execution under constraint.
đĄ Three Pillars of Empowered Teams
1. â Clear Goals
   - Set by leadership through OKRs, North Star Metrics, or GIST Goals
   - Must focus on outcomes, not tasks
2. â Autonomy
   - Teams choose how to solve problems
   - Encourages ownership, creativity, and motivation
3. â Access to Users and Data
   - Teams talk to customers directly
   - Use analytics, A/B testing, and behavior data to learn fast
â âGive teams the why and what success looks like. Let them figure out how.â
â Avoid Command-and-Control Cultures
â âTelling teams what to build makes them disengaged, less creative, and less accountable.â
đ Instead, create âcontext, not control.â
Example: Team A is told to âbuild feature Xâ by Q3. Team B is asked to âincrease user retention by 15%ââand allowed to find the best ideas. â Team B will test, learn, and likely outperform over time.
đ Leadership for Impact â From Director to Enabler
đ Redefine Leadership Roles
â âLeaders should be impact coaches, not taskmasters.â
Traditional leadership = assigning features, approving specs, demanding timelines.
Impact-first leadership = setting goals, enabling learning, clearing obstacles.
đ What Impact-Oriented Leaders Do
- Set the Vision: Provide long-term direction aligned with company strategy.
- Define Outcome Goals: Establish success in terms of user and business value, not features.
- Foster Safety for Experiments: Normalize failure and learning.
- Coach, Donât Command: Help teams grow autonomy and decision-making capacity.
- Resource Learning Loops: Invest in UX research, data infrastructure, experimentation tools.
â âLeadership is about designing the environment where good decisions can emerge.â
â Anti-Patterns to Watch For
- Rewarding delivery over learning
- Killing ideas based on opinion, not data
- Penalizing teams for failed experiments
- Insisting on waterfall-style roadmaps
â âIf leaders donât model evidence-based thinking, teams wonât either.â
đ§© Bringing It All Together â From Framework to Operating System
đ Combine GIST with Lean, Agile, and Discovery
GIST is not a replacement for Agile or Leanâitâs an overlay that ensures alignment from strategy to delivery.
â âGIST makes Agile actually outcome-oriented, not just fast.â
đ How It Integrates
GIST Element | Matches With | Purpose |
---|---|---|
Goals | OKRs / North Star Metric | Strategic focus |
Ideas | Discovery / Ideation | Exploration of possible paths |
Step Projects | MVPs / Experiments | Learn fast, test risk |
Tasks | Agile Sprints / Backlog | Tactical execution |
đ Shift from Launches to Impact Delivery
â "Launch is not the end; it's the start of learning."
â "Great teams don't just ship features; they ship results."
Continuous Impact Delivery means:
- Every sprint is tied to an impact metric
- Every launch includes a feedback loop
- Roadmaps evolve based on whatâs working, not what was promised
â âDonât plan for certaintyâplan for discovery and adaptability.â
đ§ SUMMARY â BUILDING THE ENVIRONMENT FOR IMPACT
â Traditional Org Model | â Evidence-Guided Culture & Structure |
---|---|
Top-down feature mandates | Autonomous teams solving outcome goals |
Leaders assign work | Leaders coach, define vision, and unblock |
Success = shipping features | Success = delivering measurable user/business value |
Failure is punished | Failure is normalized and leveraged for learning |
Roadmaps based on opinions | Roadmaps shaped by ideas + confidence meters |
đĄ âThe best products come not from the best ideasâbut from the best systems to discover and validate ideas.â
If the culture isnât aligned, no framework will succeed. But if the culture enables learning, experimentation, and curiosity, great products become inevitable.
đ Templates
đ§© XYZ Hypothesis Builder
âX people will do Y within Z time.â
Use this to define your test goal:
- 100 developers will click âJoin Waitlistâ within 3 days.
đ§Ș Market Engagement Hypothesis (MEH)
âIf we offer this, people will respond this wayâŠâ
Define the behavior you expect (click, sign up, pay).
đ Initial Level of Interest (ILI) Tracker
Formula: ILI = # who engage / # exposed
Helps compare multiple experiments and make data-informed Go/Kill decisions.
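As a sketch of such a tracker, a few lines of Python can score several experiments against shared Go/Retest/Kill bands; all experiment names and counts below are hypothetical:

```python
experiments = {                                  # (engaged, exposed)
    "Fake Door: budgeting app": (150, 1000),
    "Landing page: StudyTime": (60, 1200),
    "Re-label: focus app for lawyers": (9, 400),
}
for name, (engaged, exposed) in experiments.items():
    rate = engaged / exposed                     # ILI for this experiment
    verdict = "GO" if rate > 0.10 else "RETEST" if rate >= 0.05 else "KILL"
    print(f"{name}: ILI = {rate:.1%} -> {verdict}")
```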
đ§° Tools & Frameworks
Need | Suggested Tools |
---|---|
Landing Pages | Carrd, Webflow, Unbounce |
Analytics & Heatmaps | Google Analytics, Hotjar, Crazy Egg |
Form Builders | Tally, Typeform, Google Forms |
Prototyping | Figma, InVision, Marvel |
No-Code MVPs | Bubble, Glide, Adalo |
A/B Testing | Google Optimize, Optimizely |
â âYou donât need codeâyou need curiosity and a clear hypothesis.â
đ Confidence Meter Template
- Color-coded framework to visualize how much evidence backs each idea
- Helps reduce political debates and focus on facts
đĄ Idea Bank Canvas
- A centralized repository for tracking, scoring, and filtering ideas
- Encourages divergent thinking, then structured convergence
â âLet ideas competeâbased on value and evidence.â
đ§Ș Step Project Tracker
- Keeps all experiments visible, documented, and reviewed
- Tracks learnings, metrics, outcomes, and next steps
đŻ Goal Tracker
- Links team OKRs or outcome goals to ongoing experiments and ideas
- Ensures alignment between strategy and day-to-day execution
â âYou canât scale impact if you donât track whatâs working.â
đŻ Final Takeaway: Culture Eats Tools for Breakfast
đ„ âTools donât drive innovationâmindsets do.â
If your team:
- Experiments fast,
- Embraces micro-failures,
- Prioritizes behavioral validation,
- Kills bad ideas early and proudlyâŠ
Then you are already ahead of 90% of startups and corporate innovation teams.
đ§ âThe Right It is not just a productâitâs a habit of mind.â
Quotes
âFake It Before You Make It,â but over the years I have heard that expression used to justify all sorts of nonsense and unsavory behaviors. Although that phrase occasionally still slips out of my mouth (or keyboard), these days Iâve replaced it with âTest It before you invest in Itâ
âPretotyping is a way to test an idea as quickly and inexpensively as possible by creating artifacts to help us test the hypotheses that âif we build it, they will buy itâ and/or âif we build it, they will use it.â
âTo help you get started, however, I will introduce you to two basic, but useful and reliable metrics that can be applied to practically any idea for new products or services: Initial Level of Interest and Ongoing Level of Interest.â
References
- Savoia, Alberto. The Right It: Why So Many Ideas Fail and How to Make Sure Yours Succeed. https://www.amazon.ca/Right-Many-Ideas-Yours-Succeed-ebook/dp/B07CKRYYZK
- Gilad, Itamar. Evidence-Guided: Creating High-Impact Products in the Face of Uncertainty. https://www.amazon.ca/Evidence-Guided-Creating-Impact-Products-Uncertainty-ebook/dp/B0CJCDP1H7