The Right It by Alberto Savoia: Summary

June 28, 2025

🧹 WHY SO MANY IDEAS FAIL


📉 The Law of Market Failure

đŸ”ș “Most ideas fail in the market—even good ones.”

Alberto Savoia introduces one of the book’s cornerstone ideas: the Law of Market Failure.

✅ “The Law of Market Failure states that most new ideas—regardless of how promising they seem—will fail in the market.”

This is not just anecdotal; decades of data across industries show a consistent 80% to 90% failure rate for:

  • Startups,
  • New product launches,
  • Marketing campaigns,
  • Internal corporate innovations.

❌ Not Technical Failure — Market Failure

Most teams mistakenly believe that if they build a good product, customers will naturally come. But:

“The market doesn’t care how good your idea is if it doesn’t solve a real problem.”

Even the best-engineered, most beautifully designed products fail if nobody wants them.


đŸ§Ș Three Types of Risk in Innovation

Savoia breaks down innovation risks into 3 types:

  1. Technical Risk – Can we build it?
  2. Execution Risk – Can we deliver it on time, on budget?
  3. Market Risk – Will they use/buy it?

Most teams focus obsessively on the first two—and neglect the most dangerous one: Market Risk.

✅ “If the market doesn’t want it, the rest doesn’t matter.”


🎭 The Illusion of Progress

One of the most damaging traps is the illusion of progress. Teams believe they’re making headway because they’re:

  • Hiring engineers,
  • Holding design sprints,
  • Completing prototypes,
  • Writing code,
  • Launching MVPs


but they’re not testing the core assumption: Do people actually want this?

đŸ”„ “We confuse activity with progress.”


⚠ Real-World Examples:

❌ Google Wave (2009)

  • Massive hype, elite engineering team, beautiful UI.
  • But: no one understood the problem it solved.
  • Shut down after 1 year.

❌ Juicero

  • High-tech juicer startup. Raised $120M.
  • Required proprietary juice packs. A manual squeeze did the same thing.
  • Exposed by journalists. Collapsed under ridicule.

✅ Dropbox

  • Before building anything, the founders created a demo video explaining the product.
  • Measured signups and interest before investing in infrastructure.
  • Pretotyped, then built.

🧠 Mindsets That Lead to Failure

Savoia argues that idea failure is often not a business problem, but a psychology problem.

✅ “We don’t fail because of poor execution. We fail because of self-deception.”


1ïžâƒŁ The Reality Distortion Field

The term was coined at Apple to describe Steve Jobs’ ability to bend perception; in the hands of inexperienced innovators, it leads to disaster.

  • Founders fall in love with their vision.
  • They filter out negative feedback, dismiss critics, and overestimate demand.

❗ “When you believe in your idea too much, you lose sight of reality.”

Example:

  • A team builds an app to gamify learning Chinese.
  • Early feedback: “Too gimmicky.”
  • They ignore it, saying users “just need more exposure.”
  • After 6 months, usage drops to near-zero.

2ïžâƒŁ Overconfidence Bias

“Because we came up with the idea, we assume others will love it too.”

This bias leads to:

  • Ignoring competitive research,
  • Skipping market validation,
  • Launching too soon.

Example:

  • A founder says, “I’d use this, so everyone else will too.”
  • But you are not your customer.
  • Without evidence, this assumption is a gamble, not a strategy.

Survivorship Bias

“We focus only on the winners and ignore the graveyard of failed products.” Most books, talks, and blog posts feature success stories. We don’t hear about the hundreds of failed apps, A/B tests, or product pivots that died quietly.

📌 Case Study: Facebook’s “Stories” format succeeded after copying Snapchat. But Facebook had previously launched and killed many storytelling and ephemeral content features. The public sees only the winning version, not the failed experiments.


3ïžâƒŁ Confirmation Bias

  • Innovators selectively look for data that supports their idea.
  • They ignore or rationalize contradictory evidence.

“We become detectives searching for clues—only we ignore anything that doesn’t fit our theory.”

Example:

  • 100 people visit your fake landing page.
  • 5 people sign up.
  • You tell yourself: “Look, 5% interest!”
  • You ignore: “95% bounced—they didn’t care.”

4ïžâƒŁ Emotional Attachment to the Idea

💔 “The more time you spend on an idea, the harder it is to let go—even if it’s wrong.”

This is called the sunk cost fallacy. Teams:

  • Keep tweaking,
  • Keep iterating,
  • Keep believing


instead of testing core assumptions or killing the idea early.


✅ The Better Mindset: Fall in Love With the Problem

One of the most important quotes in the book:

💡 “Fall in love with the problem, not the solution.”

  • Stay obsessed with solving a pain or fulfilling a need.
  • Be open to changing your solution—or abandoning it—if it doesn’t serve that goal.
  • Innovation is not about being right from the start, but about discovering what’s right through experimentation.

đŸŽČ 3. Our Ideas Are Mostly Guesses – And That’s Okay

🚧 All Ideas Are Assumptions Until Proven Otherwise

“Your roadmap is a graveyard of guesses.”

Gilad emphasizes that ideation is inherently speculative. Great product thinking isn’t about being right from the start—it’s about being adaptive and humble.


💬 The Expert Myth

“Even the most experienced product leaders are wrong most of the time.”

Experts have pattern recognition, but patterns don’t guarantee correctness. Markets change. Contexts shift. You need evidence.

📌 Example: At Google, even senior engineers and PMs frequently saw ideas they were confident in fail A/B tests. Over time, this eroded reliance on opinion and built a culture of testing everything.


🔍 HiPPO Decision-Making: A Red Flag

“When the Highest Paid Person’s Opinion overrides evidence, the team is flying blind.”

To shift to better product thinking:

  • Replace opinions with observations.
  • Replace authority with user insight.
  • Replace roadmaps with evidence ladders (discussed later in the book).

đŸ§Ș The Experimentation Backlog

“The product roadmap is not a delivery queue—it’s a hypothesis list.”

If most ideas fail, we should not treat them as projects to build, but as guesses to test.

📌 Real-World Insight: Instead of saying: “We will launch Feature A in Q2,” say: 🔁 “We believe Feature A may solve Problem X, and we will test it via a prototype or limited release before scaling.”


🚀 4. Output ≠ Outcome – The Root of False Productivity

⚙ Output: Building for the Sake of Delivery

“Many teams mistake activity for progress.”

Output is:

  • Features shipped
  • Sprints completed
  • Code written

But none of these guarantee value to users or business.


📈 Outcome: Real, Measurable Impact

“Outcome is the real north star: changes in user behavior that create value.”

Examples:

  • Increase in daily active users (DAU)
  • Reduced churn
  • Higher customer satisfaction
  • Increased conversion rate

📌 Contrast Example:

| Metric | Output | Outcome |
| --- | --- | --- |
| Feature | Rolled out dark mode | 40% of users enable it & use app 10% longer |
| Performance | Shipped faster checkout | Conversion rate rises by 15% |
| Marketing | Sent newsletter | Open rates rise; reactivation improves |

🎭 Vanity Metrics: Dangerous Illusions

“Just because it’s measurable doesn’t mean it’s meaningful.”

Examples:

  • Number of tickets closed
  • Story points completed
  • Number of deployments

These don’t correlate with user or business value. They can make teams feel good, but hide real problems.


🔁 Shipping ≠ Success

Gilad warns of the release trap—celebrating launches while ignoring results.

📌 Better Practice: Measure adoption, usage, behavior change after launch. Treat releases as experiments, not finish lines.


🔄 RECAP: Why Most Ideas Fail

| Trap | Why It Happens | What to Do Instead |
| --- | --- | --- |
| Assuming demand | You think “If I build it, they will come” | Use pretotyping to test demand before building |
| Illusion of progress | You celebrate activity, not results | Measure engagement, not effort |
| Overconfidence | You trust your gut too much | Demand external evidence |
| Confirmation bias | You only see the good signals | Track all user behavior, not just highlights |
| Emotional attachment | You’ve invested too much to let go | Remember: killing a bad idea early saves time and money |

| Old Belief | Better Belief |
| --- | --- |
| Ideas are facts | Ideas are guesses to be tested |
| Experts know what works | Experts also need validation |
| Shipping = success | Impact = success |
| Measure speed and volume (output) | Measure value and outcome |
| Decide based on opinion or authority | Decide based on evidence and experimentation |

⚙ MAKE SURE YOU HAVE THE RIGHT IT

🔍 Pretotyping vs. Prototyping

🧠 “Don’t just build it right. First, make sure you’re building the right ‘It’.”

One of the most dangerous myths in product development is the idea that if you build a great product, customers will come. This chapter breaks that myth by contrasting prototyping with a more critical, often ignored step: pretotyping.


đŸ§Ș What Is a Pretotype?

“Pretotyping is about testing the market’s genuine interest in your idea—before you build anything real or expensive.”

Pretotyping helps you fail fast and cheaply, so you don’t succeed at building something no one wants.

Pretotype = Pre + Prototype. It’s not a half-built product—it’s a simulation or illusion designed to answer the only question that matters early on:

❓ “Will they use it if we build it?”


🔧 What Is a Prototype?

“A prototype answers the question: ‘Can we build it?’”

It’s about testing features, functionality, usability, design, etc. It assumes you’ve already validated market interest—which is often not the case.


📊 Why Pretotyping Must Come First

Savoia insists:

✅ “It is far cheaper to test an idea’s desirability than to assume it and risk full development.”


🚀 Real-World Case Examples:

✅ Zappos (Shoes Online)

  • Founder Nick Swinmurn didn’t build an e-commerce system first.
  • He went to local shoe stores, took photos, and listed them online.
  • When someone ordered, he went and bought the shoes manually.

👉 Pretotyping: Validated “Would people buy shoes online?”


❌ Segway

  • $100M+ invested in development.
  • World-class design and tech.
  • Assumed it would revolutionize transportation.
  • Reality: No one knew where to ride it or wanted to change habits.

👉 They prototyped brilliantly, but never pretotyped.


🔧 The Right It Tools & Metrics

This chapter introduces a framework of behavioral hypotheses and metrics to help you know if you’re on the path toward The Right It. You don’t have to guess—you test with data.


📌 1. The XYZ Hypothesis

📏 “X people will do Y within Z time.”

This is the core of every pretotype—it forces you to be specific, measurable, and accountable.


🔍 Why It’s Powerful:

  • Prevents vague hopes like “people will love this.”
  • Encourages clear, testable predictions.

✅ Example:

“500 people will click the ‘Join Waitlist’ button for our new budgeting app within 5 days.”

❌ Bad version:

“People will probably be interested in our app.” (No numbers, no time frame, no behavior.)


đŸ§Ș How to Use XYZ Hypotheses Effectively:

  • X = How many people?
  • Y = What action shows interest? (click, sign up, preorder
)
  • Z = In what time frame?

✅ “If you can’t test it in time and with numbers, it’s not a real hypothesis.”
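To make the X/Y/Z components concrete, here is a minimal Python sketch of recording a hypothesis and checking it against observed behavior. The class and field names are my own illustration, not something from the book, and the numbers mirror the waitlist example above.

```python
from dataclasses import dataclass

@dataclass
class XYZHypothesis:
    x_people: int   # X: how many people we expect to act
    y_action: str   # Y: the specific action that signals interest
    z_days: int     # Z: the time window for the test

    def evaluate(self, observed_actions: int, elapsed_days: int) -> bool:
        """True if the observed behavior met the prediction within the window."""
        return elapsed_days <= self.z_days and observed_actions >= self.x_people

# "500 people will click 'Join Waitlist' within 5 days."
hypothesis = XYZHypothesis(x_people=500, y_action="click 'Join Waitlist'", z_days=5)
print(hypothesis.evaluate(observed_actions=612, elapsed_days=5))  # True: prediction met
print(hypothesis.evaluate(observed_actions=180, elapsed_days=5))  # False: prediction missed
```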


🧬 2. Market Engagement Hypothesis (MEH)

“The MEH is where your XYZ Hypothesis meets reality.”

This is your assumption about how the market will respond when given the opportunity to act, even with no real product yet.


đŸ§Ș How to Run a MEH Test:

  • Use a landing page with a fake offer and a CTA (Buy Now, Join Beta).
  • Use ads to see if people click on a fake product.
  • Track real user behavior, not just traffic or likes.

✅ Example:

Create a simple site:

“New App: StudyTime — Beat Procrastination. Sign up for early access.” 👉 Measure signups (Y) over 5 days (Z) from 1,000 visitors (X).


📊 3. Initial Level of Interest (ILI)

“ILI = People who take action / People exposed to the test.”

This is your conversion rate, and it’s a direct signal of potential market demand.

✅ Example:

  • 1,000 visitors.
  • 80 clicked “Sign Up”.
  • 👉 ILI = 8%

You can now compare this against your target. If your goal was 5%, then 8% is strong validation.

📈 “ILI transforms qualitative ideas into quantitative traction.”
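As a quick sketch (illustrative numbers, not prescribed by Savoia), ILI is just a conversion-rate calculation compared against the target you committed to before running the test:

```python
def initial_level_of_interest(actions: int, exposed: int) -> float:
    """ILI = people who take action / people exposed to the test."""
    if exposed == 0:
        raise ValueError("Expose the test to at least one person first.")
    return actions / exposed

ili = initial_level_of_interest(actions=80, exposed=1_000)
target = 0.05  # the threshold set before the test

print(f"ILI = {ili:.1%}")  # ILI = 8.0%
print("Strong validation" if ili >= target else "Below target")
```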


🔎 What’s a Good ILI?

Savoia doesn’t give hard rules—it varies by market—but generally:

  • >10% = promising traction.
  • 5–10% = further testing or iteration needed.
  • <5% = likely weak demand.

đŸš« “Don’t chase unicorns with limp engagement metrics.”


đŸ‘©â€đŸ”Ź 4. High-Expectation Customers (HXC)

🧠 “The best feedback doesn’t come from average users—it comes from the most demanding ones.”

HXCs are:

  • Already searching for a solution,
  • Deeply familiar with the problem,
  • Hard to impress.

But if you win them, you’re likely building something great.


🎯 Why Target HXCs Early?

  • They provide blunt, high-signal feedback.
  • If they adopt your pretotype, it’s a green light.
  • If they ignore or criticize it, you’re in danger.

✅ Example: Launching a writing AI tool

  • Your HXCs = daily writers, bloggers, content creators.
  • Place your fake landing page in Reddit/r/copywriting or a writers’ forum.
  • Monitor engagement and qualitative comments.

✅ “If the people who need it most don’t care, why would the general public?”


🧭 Mindset and Measurement Shift: The New Way to Innovate

| Old Mindset | The Right It Mindset |
| --- | --- |
| Guess and build | Test and measure |
| Listen to opinions | Watch actual behavior |
| Invest early | Validate early, build later |
| Seek praise | Seek honest disinterest or rejection |
| Perfect your prototype | Perfect your XYZ and MEH tests first |

✅ Summary: The Right It Toolkit in Practice

| Tool | Purpose | What It Reveals |
| --- | --- | --- |
| XYZ Hypothesis | Define your expected market behavior | Are you making a testable prediction? |
| MEH | Run a quick, fake or simulated test | Will people take action now? |
| ILI | Quantify initial traction | How strong is early interest? |
| HXC | Stress-test your idea on early adopters | Is it compelling to the most demanding users? |

🚀 “If you can’t get people to engage with a fake version of your product, they probably won’t care when it’s real.”


🎯 THE IMPACT-FIRST MINDSET

How High-Impact Teams Think, Plan, and Build

This section reorients teams away from the traditional output-driven model—where features and velocity dominate thinking—to a more effective paradigm: impact-first thinking, where the primary goal is to drive measurable improvements in business and user outcomes through learning, iteration, and evidence.


🔄 What Are We Optimizing For? — Speed or Value?

❌ The Default (Flawed) Mental Model

“Build more, ship faster, and success will follow.”

This model confuses activity with progress. Agile teams, CI/CD pipelines, and sprint velocities become the north star.

However, speed without direction equals waste.

✅ Reframed Mental Model

“Product success = delivering value to users and business through validated learning.”

Gilad reframes the question from:

“What features are we building this quarter?” to “What outcomes are we aiming to achieve?”


📌 Analogy: The Compass vs. The Speedometer

  • Output-first mindset = checking the speedometer: “How fast are we going?”
  • Impact-first mindset = checking the compass: “Are we heading in the right direction?”

“Progress without direction is just wasted motion.”


⚖ Defining Impact — The Two-Sided Equation

🔍 What Is Impact, Really?

✅ “Impact = User Value + Business Value”

Success only happens when:

  • Users benefit meaningfully
  • The business gains value (revenue, retention, efficiency, etc.)

🧍 User Value

Value delivered to the end user. It must solve a real problem or enable a meaningful improvement.

Examples:

  • Easier onboarding
  • Feature discoverability
  • Better UX / accessibility
  • Saving user time, effort, or money

“If your product isn’t helping users succeed, it won’t survive.”


đŸ’Œ Business Value

Concrete, measurable contributions to business objectives:

  • Higher conversion rate
  • Improved retention
  • Increased revenue per user
  • Reduced churn or support costs

✅ “True product impact connects user success to business success.”

📌 Example: A redesigned signup flow that cuts time in half (user value) and improves conversion from 10% → 14% (business value).


🧰 The GIST Framework — Bridging Vision to Action

Gilad introduces GIST as a flexible, scalable tool to support evidence-driven impact delivery:

GIST = Goals → Ideas → Step-Projects → Tasks

It aligns strategic thinking (Goals) with tactical execution (Tasks) through validated learning.


đŸ„… G = Goals

✅ “Goals are clear, measurable impact objectives, not features.”

Examples:

  • Increase 30-day user retention from 25% to 35%
  • Reduce cart abandonment by 15%
  • Boost Net Promoter Score (NPS) from 40 to 60

📌 Goals must be:

  • Outcome-based (not output-based)
  • Quantifiable
  • Time-bound

💡 I = Ideas

“Ideas are hypotheses—guesses about how to reach a goal.”

Most teams confuse ideas with requirements or specs. But in reality:

❗ “Ideas are just untested beliefs, no matter how logical they seem.”

Teams must:

  • Generate multiple ideas (divergent thinking)
  • Score them using tools like ICE or RICE
  • Track them in an Idea Bank (as covered later in the book)

📌 Example: To increase onboarding completion, ideas might include:

  • Reduce signup fields
  • Add welcome video
  • Enable social login

Each is a guess—not a guarantee.


đŸ§Ș S = Step-Projects

✅ “Fast, inexpensive experiments that test assumptions behind ideas.”

This is where learning happens.

Examples:

  • Fake door tests
  • Landing page A/B tests
  • Concierge MVPs
  • Email split tests

“Don’t build until you test.”

📌 Case Example: Rather than build a complex refer-a-friend system, test user interest with:

  • A “Refer a Friend” button that logs clicks
  • Manual coupon delivery (instead of full automation)

🔁 “Step-projects reduce risk while increasing certainty.”


⚙ T = Tasks

“Tasks are delivery items created only after validation.”

Once a step-project proves that an idea is viable and impactful, it moves into actual development.

❌ “Don’t start with tasks. Start with goals and validation.”

📌 GIST ensures this bottom-up execution aligns with top-down strategy.


🔄 GIST in Action: Example Scenario

Goal: Reduce time to first value (TTFV) by 30%

Ideas:

  • Pre-fill user setup form
  • Auto-recommend features based on usage
  • Add onboarding video

Step Projects:

  • Run A/B test for pre-filled form
  • Email new users with video vs. no video

Tasks:

  • Develop pre-fill automation
  • Design a recommendation engine

Only after measuring impact do teams commit engineering time.
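One way to picture how the four GIST layers connect is a small data-model sketch. The class and field names below are my own illustration of the Goals → Ideas → Step-Projects → Tasks chain, not code from the book:

```python
from dataclasses import dataclass, field

@dataclass
class Task:             # delivery work, created only after validation
    description: str

@dataclass
class StepProject:      # a cheap experiment that tests one assumption
    description: str
    success_metric: str
    validated: bool = False

@dataclass
class Idea:             # a hypothesis about how to reach the goal
    description: str
    step_projects: list[StepProject] = field(default_factory=list)
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Goal:             # an outcome, not a feature
    objective: str
    ideas: list[Idea] = field(default_factory=list)

goal = Goal("Reduce time to first value (TTFV) by 30%")
prefill = Idea("Pre-fill user setup form")
prefill.step_projects.append(StepProject("A/B test pre-filled form", "TTFV drop vs. control"))
goal.ideas.append(prefill)

# Tasks are only added once a step-project validates the idea
if all(sp.validated for sp in prefill.step_projects):
    prefill.tasks.append(Task("Develop pre-fill automation"))
```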


đŸ—“ïž Impact-First Planning — Redesigning the Product Planning Process

❌ Problem with Traditional Roadmaps

“Most roadmaps are just a list of guesses with deadlines.”

These:

  • Create false certainty
  • Encourage overcommitment
  • Limit adaptability
  • Reward shipping, not learning

✅ The Impact-First Alternative

“Start with outcomes. Let solutions emerge from evidence and experimentation.”

Instead of “We’ll build feature X by July,” say:

“We aim to improve customer activation rate by 20% in Q3.”

This keeps:

  • Autonomy for teams to explore options
  • Alignment around measurable goals

🎯 OKRs Done Right

Gilad advocates for real OKRs, not task lists disguised as goals.

📌 Correct Format:

  • Objective: Improve trial-to-paid conversion
  • KR1: Increase free trial completion from 50% to 65%
  • KR2: Raise checkout conversion from 20% to 28%

✅ “OKRs should be inspiring, yet grounded in measurable outcomes.”


🧠 Summary — Key Mental Shifts for an Impact-First Culture

| ❌ Output-Driven Thinking | ✅ Impact-First Thinking |
| --- | --- |
| Deliver more features | Deliver more value |
| Plan by timelines & specs | Plan by goals and validated ideas |
| Ideas = product requirements | Ideas = testable hypotheses |
| Success = shipping | Success = outcome improvement |
| Tasks are the start of work | Tasks come last, after goals and experiments |

✅ Final Reflection from Gilad

“The job of a product team is not to deliver features—it’s to solve problems and create impact.”

This requires:

  • A radical mindset shift
  • New tools (like GIST, OKRs, Confidence Meters)
  • A culture of experimentation, curiosity, and humility

⚙ TESTING YOUR IDEA – THE PRETOTYPE METHOD

This part covers the pretotyping methods from The Right It by Alberto Savoia, with highlighted principles, deeper explanations, and practical, real-world examples.


🧰 OVERVIEW: Why You Need Pretotyping

After understanding why most ideas fail (Part 1) and learning how to define and frame a testable idea (Part 2), this part focuses on how to actually test it—quickly, cheaply, and before building anything real.

💡 “A pretotype is not a lesser version of your product—it’s a smarter version of your decision-making process.”

You’ll learn to simulate usage, observe behavior, and collect real engagement data with minimal investment.


đŸ› ïž The Six Pretotype Methods

Each pretotype method is a tactical technique that allows you to test a specific aspect of user interest or behavior with minimal effort.


đŸ”č 1. The Fake Door Test

đŸšȘ “Put up a door. If no one tries to open it, don’t build the house.”

✅ What it is:

You create a fake entry point (e.g., a signup button, a “Buy Now” page) for a product or feature that doesn’t exist yet.

🎯 Goal:

Measure user intent through clicks, signups, or interest, without having to build anything.

đŸ§Ș How it works:

  • Build a landing page or CTA for your idea.

  • When users click, show a message like:

    “Thanks! We’re not quite ready yet. Join the waitlist and we’ll notify you soon!”

🧠 Example:

A new budgeting app idea →

  • You run Google Ads with “Tame Your Spending with BudgetBuddy.”
  • Visitors click “Try It Free” on a basic page.
  • You measure clicks, email captures, bounce rate.

✅ If no one clicks the button, why build the product?
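A Fake Door test needs almost no code: one endpoint that records the click and shows the “not ready yet” message. Here is a minimal sketch using Flask (one of many possible tools; the route name and log file are illustrative, and the BudgetBuddy scenario is the one described above):

```python
from datetime import datetime, timezone
from flask import Flask

app = Flask(__name__)

@app.route("/try-it-free", methods=["POST"])
def fake_door_click():
    # Log every click with a timestamp; this file is the test data.
    with open("fake_door_clicks.csv", "a") as log:
        log.write(f"{datetime.now(timezone.utc).isoformat()}\n")
    # No product exists yet; just capture interest.
    return "Thanks! We're not quite ready yet. Join the waitlist and we'll notify you soon!"

if __name__ == "__main__":
    app.run(port=5000)
```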


đŸ”č 2. The Infiltrator Test

đŸ•”ïž “Slip your idea into an existing environment and watch what happens.”

✅ What it is:

Insert your product idea into an existing system, platform, or experience without fanfare—and see if people use it naturally.

🎯 Goal:

Validate real-world fit and organic adoption.

đŸ§Ș How it works:

  • Add a small new button, feature, or prompt inside an existing product, store, or page.
  • Don’t promote it—let it sit.
  • Track interactions or neglect.

🧠 Example:

You want to test a feature that suggests AI-generated job descriptions. You quietly embed a “Try SmartWriter” button inside your existing HR dashboard.

  • Measure how many users discover and click it organically.

💡 “If users don’t notice or use it when it’s in their flow, it might not matter.”


đŸ”č 3. The One-Night Stand Test

🌙 “Offer your product temporarily—because permanent commitment is expensive.”

✅ What it is:

Provide your product or service for just one day (or a limited window) and observe demand and behavior.

🎯 Goal:

Quickly simulate real-world conditions with short-term exposure and no long-term risk.

đŸ§Ș How it works:

  • Launch a popup, pop-up store, webinar, or flash offer.
  • Announce it through minimal ads, email, or social.
  • Measure engagement: signups, show-ups, purchases.

🧠 Example:

You want to launch a fitness accountability program.

  • Offer a 1-day trial event for $5 with a live coach on Zoom.
  • Track how many sign up, how many show up, how they react.

🔍 “A one-night stand reveals if there’s chemistry—before you commit to marriage.”


đŸ”č 4. The Pinocchio Test

đŸȘ” “Build a fake product that looks real—but doesn’t function.”

✅ What it is:

You create a non-functional mockup of your product to observe user interaction and test comprehension or desirability.

🎯 Goal:

Find out if people understand and engage with your product concept, even if it doesn’t work.

đŸ§Ș How it works:

  • Create a fake version: a dummy app screen, clickable prototype, 3D printed device shell, etc.
  • Give it to users.
  • Ask them to “use it” as if it were real.

🧠 Example:

Testing an “AI-powered grocery list planner.”

  • Create a Figma prototype of the app.
  • Watch users try to interact.
  • Do they know what it does? Where do they click first? Do they seem excited or bored?

✅ “If they don’t want the fake version, they won’t want the real one either.”


đŸ”č 5. The Mechanical Turk

🧠 “Simulate the output of a machine—using human effort behind the scenes.”

✅ What it is:

You fake a software feature or hardware automation by doing the task manually, while making it look automated.

🎯 Goal:

Test if people will use the functionality before building complex tech.

đŸ§Ș How it works:

  • You create a front-end or interface that mimics the full product.
  • Behind the scenes, your team fulfills requests by hand.

🧠 Example:

You want to build “AI Resume Booster.”

  • Users upload resumes and get edits back.
  • Instead of AI, your editor does the work manually.
  • You test demand before building the AI model.

💡 “Why code an algorithm until you know they’ll pay for the outcome?”
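A Mechanical Turk pretotype can be as simple as an automated-looking front end that quietly drops each request into a queue for a human to handle. A minimal sketch, assuming the “AI Resume Booster” example above (the function names are my own):

```python
import queue

manual_work = queue.Queue()  # the "AI": really a to-do list for a human editor

def submit_resume(user_email: str, resume_text: str) -> str:
    """What the user sees: an instant, automated-sounding confirmation."""
    manual_work.put({"email": user_email, "resume": resume_text})
    return "Your resume is being analyzed by our AI. Expect your improved version within 24 hours."

def human_editor_shift() -> None:
    """What actually happens: a person works through the queue by hand."""
    while not manual_work.empty():
        job = manual_work.get()
        print(f"Editing resume for {job['email']} manually...")

submit_resume("alice@example.com", "Objective: ...")
human_editor_shift()
```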


đŸ”č 6. The Re-label Test

🔁 “Take an old product. Give it a new name or use case. See what happens.”

✅ What it is:

You test new positioning or audience segments by repackaging an existing product.

🎯 Goal:

Test if a new use case, niche, or branding resonates better.

đŸ§Ș How it works:

  • Use your current product or someone else’s.
  • Market it to a new group, with new messaging.
  • Track conversion and interest.

🧠 Example:

You already sell sleep headphones to travelers. Re-label them:

“Deep Focus Headphones for Remote Developers”

Run new ads. See which segment responds better.

📈 “Sometimes the product isn’t wrong—the audience is.”


📏 Innovation Metrics & Decision Frameworks

📊 “Pretotyping without measurement is just theater.”

After you run your pretotyping tests, how do you know if your idea is “The Right It”? You need a decision system based on behavior, not bias.


🧠 Key Innovation Metrics

🔾 XYZ Hypothesis

“X people will do Y within Z time.”

This turns vague ideas into specific predictions.

🔾 MEH – Market Engagement Hypothesis

“If we offer this, people will do that.” The bridge between concept and test.

🔾 ILI – Initial Level of Interest

ILI = (# who take action) / (Total exposed)

A clean metric to quantify real engagement.


🔍 What to Do With the Data

| Outcome | Interpretation | Decision |
| --- | --- | --- |
| High ILI (10–15%+) | Strong early interest | ✅ Go ahead with MVP |
| Medium ILI (5–10%) | Mixed signals | 🔁 Iterate or retest |
| Low ILI (<5%) | Weak or no demand | ❌ Kill the idea |
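As a sketch, the Go/Iterate/Kill thresholds from the table can be expressed as one decision function. The cutoffs are the rough ranges above, which, as Savoia notes, vary by market:

```python
def pretotype_decision(ili: float) -> str:
    """Map an Initial Level of Interest to a Go / Iterate / Kill call."""
    if ili >= 0.10:
        return "Go ahead with MVP"   # strong early interest
    if ili >= 0.05:
        return "Iterate or retest"   # mixed signals
    return "Kill the idea"           # weak or no demand

print(pretotype_decision(150 / 1_000))  # 15% -> "Go ahead with MVP"
print(pretotype_decision(80 / 1_000))   # 8%  -> "Iterate or retest"
print(pretotype_decision(12 / 1_000))   # 1.2% -> "Kill the idea"
```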

đŸ’„ Focus on Behavior, Not Applause

❌ Don’t fall for vanity metrics like:

  • Pageviews
  • Likes
  • Comments
  • Shares

✅ Focus on real behavior:

  • Clicks
  • Signups
  • Preorders
  • Time spent
  • Money spent

đŸ”„ “If they don’t pay attention now, they won’t pay later.”


🧭 Final Insight: Pretotyping Is an Innovation Mindset

💡 “Pretotyping is not just a testing tool—it’s a cultural shift.”

It teaches you to:

  • Think in experiments
  • Kill ideas early, proudly, and cheaply
  • Use data, not gut feel
  • Learn by simulating, not by shipping

🔬 EVIDENCE-GUIDED DEVELOPMENT

“Success is not about being right from the start—it’s about reducing uncertainty smartly.”

In this pivotal section, Gilad lays out the practical mechanics for moving from guesswork and gut-feel to a system of learning, experimentation, and evidence-based decision making. It’s a shift from certainty-seeking to curiosity-driven building, supported by data and behavioral insight.


đŸȘœ The Evidence Ladder — A Tool for Judging Idea Quality

🎯 The Core Idea

✅ “Not all ideas are created equal. Their strength lies in the evidence supporting them.”

Most teams don’t evaluate the quality of ideas—they prioritize by influence, intuition, or trends. The Evidence Ladder gives you a clear framework to rank ideas based on the reliability of the evidence backing them.


đŸ“¶ The 5 Levels of the Evidence Ladder

  1. Speculation

    🧠 “I just have a feeling this might work.” These are pure guesses, unbacked by any validation. ⚠ Risk: Building from here without testing leads to waste.

  2. Opinions

    💬 “The VP of Sales thinks we need this.” This includes feedback from stakeholders, teammates, even users. But it’s subjective and biased. ⚠ Still weak evidence until verified through action.

  3. User Feedback

    đŸ—Łïž “Users told us they want this feature in interviews.” Valuable, but still what people say, not what they do. Needs to be tested behaviorally. ✔ Better than speculation, but not sufficient on its own.

  4. User Behavior

    📊 “Users clicked the fake door button at a 15% rate.” Behavioral evidence (from analytics, A/B tests, click maps) shows what people actually do. ✅ Strong indicator that the idea creates real engagement.

  5. Business Results

    💰 “The idea increased conversions by 12% and improved LTV.” The gold standard. When a tested idea leads to measurable business outcomes, confidence is at its highest. ✅✅ High-value ideas live here.


🧠 How to Use the Ladder

✅ “The higher an idea sits, the more confidently you can pursue it.”

📌 Use this as a prioritization filter:

  • Low-evidence ideas → run small experiments
  • High-evidence ideas → invest further, scale up
  • No-evidence ideas → don’t put on your roadmap yet
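A sketch of how the ladder can work as a prioritization filter. The level names follow the list above; the example ideas and the exact filtering rules are illustrative, not from the book:

```python
from enum import IntEnum

class Evidence(IntEnum):
    SPECULATION      = 1
    OPINIONS         = 2
    USER_FEEDBACK    = 3
    USER_BEHAVIOR    = 4
    BUSINESS_RESULTS = 5

ideas = {
    "AI chatbot":       Evidence.SPECULATION,
    "Simplified login": Evidence.USER_BEHAVIOR,
    "Referral program": Evidence.USER_FEEDBACK,
}

for name, level in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    if level >= Evidence.USER_BEHAVIOR:
        plan = "invest further / scale up"
    elif level >= Evidence.USER_FEEDBACK:
        plan = "run a small experiment"
    else:
        plan = "keep off the roadmap until tested"
    print(f"{name}: {level.name.lower()} -> {plan}")
```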

đŸ§Ș Testing Ideas with Step Projects — Small Bets, Fast Learning

❗ The Problem with Big-Bang Development

❌ “Let’s build the whole feature, then launch and see if it works.”

This approach:

  • Consumes months of dev time
  • Delays feedback
  • Increases risk and emotional attachment

✅ Step Projects: The Safer, Smarter Alternative

✅ “Step Projects are mini-experiments designed to test the most critical assumptions of your idea.”

They are:

  • Quick (days to weeks)
  • Cheap (light engineering or no-code)
  • Focused (test one core assumption)

📌 Examples of Step Projects

| Type | Description | Example |
| --- | --- | --- |
| Fake Door Test | Show an option that doesn’t exist yet | Add “Upgrade to Pro” button, log clicks |
| Landing Page Test | A/B test marketing messages or features | Test new pricing tiers on mock pages |
| Wizard of Oz MVP | Fake the backend, simulate experience | Manual fulfillment of orders to test demand |
| Email Split Test | Measure response to different concepts or CTAs | Test “Refer a Friend” vs “Get a Bonus” |
| Usability Test | Use Figma/Sketch to simulate flows | Observe users interact with new checkout flow prototype |

🔁 Step Projects = Fast Learning, Not Fast Building

🧠 “If your idea is wrong, better to find out in 2 days than in 3 months.”

Key Benefits:

  • Protect team bandwidth
  • Learn from real behavior
  • Build internal culture of experimentation over certainty

🔍 What Makes a Good Step Project?

  • Tests the riskiest assumption first
  • Has a clear success metric
  • Is time-boxed and simple
  • Can be run with minimal disruption

✅ “Don’t ask ‘Can we build it?’ Ask, ‘Should we build it?’”


📈 Learning from Data — Build the Learning Engine

🔁 Why Learning Is More Valuable Than Shipping

đŸš« “Shipping velocity is a vanity metric unless tied to outcomes.” ✅ “Learning velocity is the new superpower of product teams.”


📊 Sources of Learning Signals

  1. Quantitative Data

    • Funnel analytics (e.g., conversion, retention)
    • A/B testing results
    • Heatmaps and session replays
    • Cohort analysis
  2. Qualitative Data

    • Customer interviews
    • Usability sessions
    • Customer support tickets

📌 Example Loop in Action

Goal: Improve onboarding completion
Idea: Add progress bar
Step Project: A/B test with 20% of new users
Result: 18% lift in completion
Next: Scale feature, monitor long-term impact

✅ “Teams should run dozens of these loops per quarter—not 1 big risky bet.”
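Reading the A/B result correctly is the hinge of the loop above. A minimal two-proportion z-test sketch using only the Python standard library; the counts are made up to mirror the example’s roughly 18% lift:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided p-value that variant B converts better than control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - NormalDist().cdf(z)

# Control: 800/2000 complete onboarding; progress-bar variant: 944/2000 (~18% lift)
p_value = two_proportion_z(800, 2000, 944, 2000)
print(f"p-value = {p_value:.6f}")  # far below 0.05, so the lift is unlikely to be noise
```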


🧠 Key Insight

“Your product is not the final output—your learning is.” Shipping without learning is just output. Learning leads to outcome.


đŸ—‚ïž Prioritizing by Evidence and Value — Choose Wisely

❌ The Reality of Idea Backlogs

❗ “Most idea lists are long, unstructured, and driven by emotion.”

Stakeholders push pet features, trends drive FOMO, and teams build based on who shouts the loudest.


✅ Replace Gut Feel with Scoring Models

🔱 ICE Scoring

Impact × Confidence Ă· Effort

Simple, fast model for quick idea comparisons.

📌 Example:

| Idea | Impact | Confidence | Effort | ICE Score |
| --- | --- | --- | --- | --- |
| Simplified login | 8 | 7 | 3 | 18.7 |
| AI chatbot | 6 | 3 | 5 | 3.6 |

đŸ“¶ Confidence Meter

✅ “Visual tool that tracks the strength of supporting evidence for each idea.”

Use color-coded tiers:

  • 🔮 Speculative / Opinion
  • 🟡 Some user feedback
  • 🟱 Behavior-tested / Result-backed

Helpful for:

  • Roadmap debates
  • Stakeholder discussions
  • Justifying prioritization

🧠 RICE Model

Reach × Impact × Confidence Ă· Effort

Adds scale to the ICE model—especially for B2C or large platforms.

📌 Example:

| Idea | Reach | Impact | Confidence | Effort | RICE Score |
| --- | --- | --- | --- | --- | --- |
| Auto-suggestions | 5,000 | 6 | 7 | 4 | 52,500 |
| New dashboard | 500 | 8 | 8 | 5 | 6,400 |

✅ “RICE prevents small-impact projects from crowding out high-leverage ones.”
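Both scoring models are one-line formulas. A small sketch that reproduces the numbers in the two tables above (the inputs come straight from those tables):

```python
def ice(impact: float, confidence: float, effort: float) -> float:
    """ICE = Impact x Confidence / Effort."""
    return impact * confidence / effort

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = Reach x Impact x Confidence / Effort."""
    return reach * impact * confidence / effort

print(round(ice(8, 7, 3), 1))   # Simplified login -> 18.7
print(round(ice(6, 3, 5), 1))   # AI chatbot       -> 3.6
print(rice(5_000, 6, 7, 4))     # Auto-suggestions -> 52500.0
print(rice(500, 8, 8, 5))       # New dashboard    -> 6400.0
```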


🧠 Gilad’s Rule

✅ “Ideas with low confidence should get tiny tests, not massive investments.”

This saves:

  • Time
  • Developer energy
  • Morale (you fail fast, not late)

🧠 RECAP — FROM GUESSING TO EVIDENCE-LED BUILDING

| ❌ Traditional Thinking | ✅ Evidence-Guided Practice |
| --- | --- |
| Opinions and authority guide ideas | Evidence and behavior validate ideas |
| Roadmaps filled with assumptions | Roadmaps filtered by confidence & testing |
| Big launches, slow feedback | Small tests, fast learning cycles |
| Prioritize by gut or trends | Prioritize by ICE, RICE, and Confidence Meter |
| Learning is accidental | Learning is systematic and fast |

💬 “The best teams don’t guess better—they test better.”

This approach is not about being risk-averse—it’s about taking smarter risks, fast.


🧭 NAVIGATING THE INNOVATION JOURNEY


🧗 The Innovator’s Journey

🎯 “Bringing a new idea to life is not a straight path—it’s a deeply personal adventure filled with uncertainty, resistance, insight, and transformation.”

Savoia frames the innovation process as a modern adaptation of the Hero’s Journey, a powerful narrative model described by Joseph Campbell. Why? Because innovators, like heroes, must confront fear, resistance, failure, and self-doubt—not just market challenges.

This metaphor is not just poetic—it provides a psychological roadmap that prepares innovators for the ups and downs of product development and internal advocacy.


đŸ”„ Stage 1: The Spark — The Call to Adventure

💡 “Every innovation starts with a moment of excitement—a belief that things can be better.”

This is when:

  • A frustration with the status quo triggers a creative impulse.
  • A personal insight or customer pain point inspires a concept.
  • A founder says, “There has to be a better way.”

⚠ But early excitement is dangerous because:

  • You may fall in love with your solution before testing the problem.
  • You assume others will see what you see.
  • You become vulnerable to confirmation bias.

🧠 “Beware of confusing the emotional thrill of a new idea with actual market demand.”


đŸ§Ș Stage 2: The Test — Crossing into Reality

đŸ§Ș “The moment you run your first pretotyping experiment, you step into the real world.”

Here, the innovator faces the truth:

  • Are people interested?
  • Will they click, sign up, engage, or pay?
  • Or will they ignore, bounce, or criticize?

This stage brings resistance:

  • From users (lack of interest),
  • From team members (who fear change),
  • From yourself (fear of failure or data that contradicts your vision).

🔎 “This is the stage where the idea stops being fun—and starts being real.”

Yet this is where learning accelerates, if you have the courage to:

  • Kill a weak idea quickly,
  • Pivot based on feedback,
  • Stay emotionally detached.

✅ “Strong innovators fall in love with the problem, not the solution.”


🔁 Stage 3: Validation, Iteration, or Rejection

📊 “Pretotyping shows you whether you’re on the right track—early, cheaply, and with brutal honesty.”

This is the decision point:

  • If ILI (Initial Level of Interest) is strong,

    • Go deeper,
    • Consider building a prototype or MVP.
  • If ILI is weak,

    • Re-test with another pretotyping method.
    • Pivot the positioning, market, or features.
  • If there’s still no traction,

    • Kill it, without shame.

✅ Example:

You test an app for time-boxed study sessions with college students.

  • Fake Door test yields 500 signups in 3 days. → Signal to move forward.

A different feature (group voice rooms) yields 1% clickthrough. → Kill that feature or pivot the pitch.

💡 “Pretotyping doesn’t just validate ideas—it forces clarity.”


🚀 Stage 4: Scaling or Strategic Exit — The Return with the Elixir

🧠 “Once you’ve proven people want your idea, the challenge shifts from testing to building.”

This is the moment to:

  • Pitch investors or management with data, not dreams.
  • Build your MVP with evidence-backed features.
  • Hire the right team and expand distribution.

But also recognize:

  • Not every idea deserves scaling.
  • Sometimes a great concept is too early, too niche, or not strategically aligned.
  • Knowing when to exit gracefully is a sign of strategic maturity.

✅ “The goal of the innovator’s journey is not just to launch—but to discover what’s worth launching.”


đŸ§± Organizational Barriers

đŸ§± “Even the best ideas will fail if the culture punishes curiosity and rewards delivery at all costs.”

In this chapter, Savoia turns his lens to the systemic and structural forces inside companies that make innovation difficult—even when teams do everything right.


🧹 Barrier 1: Incentives Reward Building, Not Learning

❌ “In many companies, building something bad is rewarded more than learning not to build it at all.”

Teams are often pressured to:

  • Launch on time rather than test first.
  • Hit delivery KPIs instead of insight milestones.
  • Look busy, not be effective.

🧠 What this produces:

  • Teams ignore weak signals.
  • Stakeholders avoid “failure” at all costs—even if it means wasting millions.

đŸ§Ș Example:

A team builds a product no one tested. It flops—but since it was delivered on time, the project manager is promoted.

✅ “We must start rewarding people for killing bad ideas early.”


🎭 Barrier 2: Corporate Theater and the Illusion of Certainty

🎭 “In the absence of data, companies perform theater—complete with fake confidence, assumptions, and plans.”

Symptoms of corporate innovation theater:

  • 100-slide business cases based on optimistic projections.
  • “Validated” user personas based on guesswork.
  • Gantt charts that schedule creativity.

❌ “Executives often ask for certainty—but real innovation starts with admitting what you don’t know.”


đŸŒ± Solution: Build a Culture of Experimentation

đŸŒ± “A culture that celebrates learning will always outperform one that punishes failure.”

Organizations that innovate well:

  • Allocate small budgets and fast timelines to idea tests.
  • Celebrate validated pivots and early kills.
  • Create safe-to-fail zones (e.g., 20% time, hack weeks, innovation sprints).

✅ Real Example: Google X

Google’s innovation lab encourages teams to kill their own ideas.

“We reward people who stop projects, not just those who launch them.”

📈 “Innovation ROI improves drastically when failure is fast, cheap, and instructive.”


🔁 Reframing Success and Failure

| Traditional Thinking | The Right It Mindset |
| --- | --- |
| Delivering = Winning | Validating = Winning |
| Failure = Shame | Early failure = Smart learning |
| Certainty = Strength | Admission of ignorance = Strength |
| Big launch = Progress | Early traction = Real progress |

đŸ”„ “A successful innovation system doesn’t prevent failure—it detects and absorbs it early.”


✅ Final Takeaways

💡 “Innovation is not just a technical or market challenge—it’s a personal and cultural one.”

For Innovators:

  • Embrace uncertainty and feedback.
  • Expect emotional highs and lows.
  • Be ready to pivot, kill, or defend your idea—with data, not ego.

For Leaders and Organizations:

  • Create safe spaces to test and fail.
  • Reward learning and insights, not just execution.
  • Dismantle incentives that promote building The Wrong It.

🧠 “The Right It isn’t found through planning—it’s discovered through testing, humility, and bold honesty.”


🧠 BEHAVIORAL & CULTURAL SHIFTS


🔄 Shifting Mindsets

⚠ “The main obstacle to innovation is not technology, tools, or time—it’s people’s habits and mindsets.”

Savoia insists: even the best tools like pretotyping won’t lead to successful innovation unless there’s a shift in how people think about risk, failure, and experimentation. Behavioral inertia—clinging to old processes or fearing failure—is the root of slow or false innovation.


💡 From Execution-First to Experimentation-First

🔧 “In most organizations, execution is king—experimenters are seen as rebels.”

That mindset kills innovation. Instead, we must shift toward:

  • Testing ideas before building them
  • Investing in uncertainty reduction, not just delivery
  • Viewing every initiative as a hypothesis, not a certainty

🔁 Old Mindset vs. New Mindset:

| Old Execution-First Thinking | New Experimentation-First Thinking |
| --- | --- |
| “We’ve planned it, let’s build it.” | “Let’s test interest before we build anything.” |
| “Just trust your gut.” | “Let’s see what users actually do.” |
| “Failure looks bad.” | “Failure is a signal—it guides better decisions.” |
| “Perfect the prototype.” | “Run pretotypes first, fast and cheap.” |

✅ “Don’t execute before validating. Don’t scale what you haven’t tested.”


🧹 Celebrate Early Idea Kills

🧠 “Killing a bad idea early is not failure—it’s high-performance decision-making.”

Many organizations and founders have been trained to fear the optics of failure, even small ones. But this fear leads to:

  • Zombie projects that burn money and time,
  • Risk-averse behavior (avoiding novel ideas),
  • Pressure to justify sunk costs, even when evidence shows the idea doesn’t work.

Instead, organizations should actively reward those who run honest experiments and make the call to kill weak ideas.

đŸ”„ “Early kills save millions later. They should earn medals—not reprimands.”

✅ Real-World Example:

A team at a large SaaS company pitched a new dashboard module. Instead of building it, they launched a Fake Door test with 2,000 existing users. Only 17 clicked “Learn More.” The team killed the project—saving $400K in dev costs. They were congratulated in a company-wide email.

🧠 “Every killed idea is a step closer to The Right It.”


đŸ’„ Normalize Micro-Failures to Enable Macro-Success

đŸ§Ș “Innovation is not about avoiding failure—it’s about failing in ways that are small, fast, cheap, and full of learning.”

Teams that fear failure don’t experiment, which means they never discover better paths.

By contrast, innovative teams run micro-experiments constantly:

  • 2 versions of a landing page,
  • 3 different positioning messages,
  • 5 mini-product concepts in one quarter.

These tiny tests are:

  • Low-cost,
  • Easy to design,
  • Extremely informative.

✅ “A 48-hour failed test beats a 6-month failed launch every time.”


🔄 Organizational Practices to Shift Culture

đŸŒ± “Culture is not what leaders say—it’s what teams do without asking permission.”

To make experimentation and learning part of culture:

📌 Practical Shifts:

  • ✅ Make every new initiative start with an XYZ Hypothesis
  • ✅ Give product teams monthly mini-budgets just for pretotyping
  • ✅ Publicly celebrate pivot stories and honest kills
  • ✅ Measure learning velocity, not just shipping velocity
  • ✅ Hold “Test-What-Matters” weeks or Pretotyping Hackathons

✅ Cultural Example:

Spotify’s team uses a “Think It – Build It – Ship It – Tweak It” model. They don’t ship until “Think It” is validated—and that stage includes MEH tests and engagement metrics, not just wireframes.


đŸŒ± The Right It in Practice

🚀 “The Right It isn’t a theory—it’s a proven, scalable system for real-world validation.”

This chapter shares case studies and field-tested patterns to demonstrate how pretotyping works across startups, corporations, and solo entrepreneurs.


✅ Google AdWords – The Mechanical Turk Pretotype

🧠 “The original AdWords was a fake interface connected to a team of humans.”

Before building the tech:

  • Google let users enter ads into a dummy interface.
  • Behind the scenes, humans manually placed the ads.
  • Result: explosive traction → greenlighted automation build.

✅ “Proof of concept should precede product development.”


✅ IBM – Enterprise Pretotyping

At IBM, a product team ran a non-functional dashboard demo to test internal use of predictive analytics.

  • The dashboard looked real—but data was mocked up manually.
  • They measured engagement, not opinion.

Result:

  • 70% clickthrough,
  • Executive sponsors signed off.

🔧 “Even in corporate settings, Mechanical Turk and Fake Door tests save huge budgets.”


✅ Individual Entrepreneurs – Small Bets, Fast Lessons

💡 “The Right It is your startup co-founder—even if you’re solo.”

Examples:

  • A UX designer used One-Night Stand method to test a design sprint coaching offer on LinkedIn → sold 3 sessions before building a site.
  • A developer created a Re-label Test for his focus app—marketing it as “Deep Work Tool for Lawyers” → 400% higher conversion.

🎯 “Pretotyping gives the solo entrepreneur leverage—data replaces doubt.”


🔁 Culture of Evidence-Guided Product Development

❌ From Authority-Based to Evidence-Based Decisions

❗ “Many product decisions are still driven by the loudest voice in the room.”

HiPPOs (Highest Paid Person’s Opinions), office politics, and legacy thinking often dominate. Gilad urges a cultural transformation:

✅ “Good ideas can come from anywhere—but only evidence can validate them.”


🔍 Build a Culture of Curiosity and Experiments

✅ “Replace ‘proving you’re right’ with ‘finding what’s true.’”

Key behaviors of a strong evidence-guided culture:

  • Welcoming failure as part of learning
  • Celebrating invalidated ideas for saving resources
  • Rewarding experiments, not just feature launches

📌 Example: A product manager proposes an onboarding chatbot, runs a test, and finds engagement drops. Instead of being blamed, the team is praised for invalidating the wrong bet early.


🔄 Cultural Slogans to Reinforce Mindset

  • “Test before you build.”
  • “Ideas are hypotheses, not promises.”
  • “Celebrate small failures that prevent big ones.”

✅ “Culture eats strategy for breakfast—so make experimentation part of your company’s identity.”


đŸ‘„ Empowered Teams — Autonomy with Alignment

đŸ§± What Empowerment Actually Means

✅ “Empowered teams don’t just execute—they solve problems.”

Too often, teams are handed a roadmap of outputs and told to deliver on schedule. This isn’t empowerment—it’s execution under constraint.


💡 Three Pillars of Empowered Teams

  1. ✅ Clear Goals

    • Set by leadership through OKRs, North Star Metrics, or GIST Goals
    • Must focus on outcomes, not tasks
  2. ✅ Autonomy

    • Teams choose how to solve problems
    • Encourages ownership, creativity, and motivation
  3. ✅ Access to Users and Data

    • Teams talk to customers directly
    • Use analytics, A/B testing, and behavior data to learn fast

✅ “Give teams the why and what success looks like. Let them figure out how.”


❌ Avoid Command-and-Control Cultures

❗ “Telling teams what to build makes them disengaged, less creative, and less accountable.”

📌 Instead, create “context, not control.”

Example: Team A is told to “build feature X” by Q3. Team B is asked to “increase user retention by 15%”—and allowed to find the best ideas. ✅ Team B will test, learn, and likely outperform over time.


👑 Leadership for Impact — From Director to Enabler

🔄 Redefine Leadership Roles

✅ “Leaders should be impact coaches, not taskmasters.”

Traditional leadership = assigning features, approving specs, demanding timelines.
Impact-first leadership = setting goals, enabling learning, clearing obstacles.


📌 What Impact-Oriented Leaders Do

  • Set the Vision: Provide long-term direction aligned with company strategy.
  • Define Outcome Goals: Establish success in terms of user and business value, not features.
  • Foster Safety for Experiments: Normalize failure and learning.
  • Coach, Don’t Command: Help teams grow autonomy and decision-making capacity.
  • Resource Learning Loops: Invest in UX research, data infrastructure, experimentation tools.

✅ “Leadership is about designing the environment where good decisions can emerge.”


❌ Anti-Patterns to Watch For

  • Rewarding delivery over learning
  • Killing ideas based on opinion, not data
  • Penalizing teams for failed experiments
  • Insisting on waterfall-style roadmaps

❗ “If leaders don’t model evidence-based thinking, teams won’t either.”


đŸ§© Bringing It All Together — From Framework to Operating System

🔄 Combine GIST with Lean, Agile, and Discovery

GIST is not a replacement for Agile or Lean—it’s an overlay that ensures alignment from strategy to delivery.

✅ “GIST makes Agile actually outcome-oriented, not just fast.”


📌 How It Integrates

| GIST Element | Matches With | Purpose |
| --- | --- | --- |
| Goals | OKRs / North Star Metric | Strategic focus |
| Ideas | Discovery / Ideation | Exploration of possible paths |
| Step Projects | MVPs / Experiments | Learn fast, test risk |
| Tasks | Agile Sprints / Backlog | Tactical execution |

📈 Shift from Launches to Impact Delivery

❌ “Launch is not the end—it’s the start of learning.” ✅ “Great teams don’t just ship features—they ship results.”

Continuous Impact Delivery means:

  • Every sprint is tied to an impact metric
  • Every launch includes a feedback loop
  • Roadmaps evolve based on what’s working, not what was promised

✅ “Don’t plan for certainty—plan for discovery and adaptability.”


🧠 SUMMARY — BUILDING THE ENVIRONMENT FOR IMPACT

| ❌ Traditional Org Model | ✅ Evidence-Guided Culture & Structure |
| --- | --- |
| Top-down feature mandates | Autonomous teams solving outcome goals |
| Leaders assign work | Leaders coach, define vision, and unblock |
| Success = shipping features | Success = delivering measurable user/business value |
| Failure is punished | Failure is normalized and leveraged for learning |
| Roadmaps based on opinions | Roadmaps shaped by ideas + confidence meters |

💡 “The best products come not from the best ideas—but from the best systems to discover and validate ideas.”

If the culture isn’t aligned, no framework will succeed. But if the culture enables learning, experimentation, and curiosity, great products become inevitable.


📄 Templates

đŸ§© XYZ Hypothesis Builder

“X people will do Y within Z time.”

Use this to define your test goal:

  • 100 developers will click “Join Waitlist” within 3 days.

đŸ§Ș Market Engagement Hypothesis (MEH)

“If we offer this, people will respond this way
”

Define the behavior you expect (click, sign up, pay).

📊 Initial Level of Interest (ILI) Tracker

Formula: ILI = # who engage / # exposed

Helps compare multiple experiments and make data-informed Go/Kill decisions.


🧰 Tools & Frameworks

| Need | Suggested Tools |
| --- | --- |
| Landing Pages | Carrd, Webflow, Unbounce |
| Analytics & Heatmaps | Google Analytics, Hotjar, Crazy Egg |
| Form Builders | Tally, Typeform, Google Forms |
| Prototyping | Figma, InVision, Marvel |
| No-Code MVPs | Bubble, Glide, Adalo |
| A/B Testing | Google Optimize, Optimizely |

✅ “You don’t need code—you need curiosity and a clear hypothesis.”


📊 Confidence Meter Template

  • Color-coded framework to visualize how much evidence backs each idea
  • Helps reduce political debates and focus on facts

💡 Idea Bank Canvas

  • A centralized repository for tracking, scoring, and filtering ideas
  • Encourages divergent thinking, then structured convergence

✅ “Let ideas compete—based on value and evidence.”


đŸ§Ș Step Project Tracker

  • Keeps all experiments visible, documented, and reviewed
  • Tracks learnings, metrics, outcomes, and next steps

🎯 Goal Tracker

  • Links team OKRs or outcome goals to ongoing experiments and ideas
  • Ensures alignment between strategy and day-to-day execution

✅ “You can’t scale impact if you don’t track what’s working.”

🎯 Final Takeaway: Culture Eats Tools for Breakfast

đŸ”„ “Tools don’t drive innovation—mindsets do.”

If your team:

  • Experiments fast,
  • Embraces micro-failures,
  • Prioritizes behavioral validation,
  • Kills bad ideas early and proudly


Then you are already ahead of 90% of startups and corporate innovation teams.

🧠 “The Right It is not just a product—it’s a habit of mind.”


Quotes

“Fake It Before You Make It,” but over the years I have heard that expression used to justify all sorts of nonsense and unsavory behaviors. Although that phrase occasionally still slips out of my mouth (or keyboard), these days I’ve replaced it with “Test It before you invest in It”

“Pretotyping is a way to test an idea as quickly and inexpensively as possible by creating artifacts to help us test the hypotheses that “if we build it, they will buy it” and/or “if we build it, they will use it.”

“To help you get started, however, I will introduce you to two basic, but useful and reliable metrics that can be applied to practically any idea for new products or services: Initial Level of Interest and Ongoing Level of Interest.”


Written by Tony Vo: father, husband, son, and software developer.