The Strategic Bet Review: How Product Managers Decide What to Keep, Kill, or Double Down On




The Meeting That Changes Everything

Marcus had been running the product team at a mid-stage B2B SaaS company for two years. He looked at the roadmap during a quarterly planning session and counted seven major initiatives in flight. An AI-powered analytics feature that engineering had been building for five months. A self-serve onboarding flow that was supposed to reduce support tickets. A partner integration that sales swore would close three enterprise deals. An internationalization effort that the CEO brought back from a conference. Three more that had been “almost done” for longer than anyone wanted to admit.

Every initiative had a champion. Every champion had a compelling story about why their bet was the one that would move the needle. But when Marcus looked at the numbers — actual adoption, actual revenue impact, actual customer demand — the picture was different. The AI analytics feature had 11 beta users. The self-serve flow was being bypassed by 80% of new accounts. The partner integration had stalled because the partner’s API kept changing.

Marcus was managing product strategy bets the way most product managers do: by addition, never subtraction. Every quarter brought new bets. No quarter killed old ones. The portfolio had grown into a collection of half-resourced initiatives, each moving too slowly to prove or disprove anything.

This is the problem the Strategic Bet Review solves. It is a structured practice that forces product managers to evaluate every active strategic bet against clear criteria — and make the hard call about what to keep, what to kill, and what deserves more resources.

Why Product Strategy Bets Go Wrong

The failure mode is not picking the wrong bets. Every experienced PM has backed an initiative that didn’t work out, and that is a normal cost of operating in uncertainty. The real failure is the inability to recognize when a bet is losing and act on that information.

Research from Harvard Business Review on innovation portfolio management found that companies outperforming their peers typically allocate resources across a 70-20-10 split: 70% to core product improvements, 20% to adjacent opportunities, and 10% to transformational bets. But the finding that surprises most PMs is the inverse return ratio — transformational bets, despite receiving only 10% of resources, can generate up to 70% of the long-term value.

That math only works if you are ruthless about which transformational bets survive. If you let every bet linger indefinitely, you end up spending 10% of your resources across fifteen initiatives instead of two or three, and none of them get enough investment to prove anything.

The psychological barriers are well documented. Sunk cost bias makes teams hold onto initiatives because of what they have already invested rather than what the initiative is likely to return. The IKEA effect causes teams to overvalue work they helped build. And organizational politics means that killing someone’s initiative feels like killing their career trajectory.

The result is what I call “strategy debt” — a growing backlog of underfunded bets that consume resources, fragment team focus, and prevent any single initiative from reaching the scale it needs to succeed or fail decisively.

The Strategic Bet Review Framework

After watching this pattern play out across dozens of product teams over 25 years, I have landed on a quarterly practice that works. It takes about 90 minutes per quarter and pays for itself within the first session.

Step 1: Inventory Your Active Bets

List every initiative that consumes more than 10% of any team member’s time. For each bet, document:

  • The hypothesis: What outcome are we betting this will produce?
  • The time horizon: When did we expect to see signal?
  • The investment to date: People-weeks, not dollars. Dollars obscure the real cost, which is opportunity cost.
  • The evidence so far: Usage data, customer feedback, revenue impact — whatever signals exist.

Most PMs are stunned at this step. They think they have four or five active bets. They actually have eight to twelve, because some bets have been running so long they have become invisible background work.
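The inventory in Step 1 is easy to keep honest if each bet is captured as a small structured record rather than a roadmap bullet. Here is a minimal sketch in Python; the field names and example bets are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Bet:
    """One strategic bet in the portfolio inventory.

    Field names are illustrative, not a prescribed schema.
    Investment is tracked in people-weeks, since dollars
    obscure the real cost: opportunity cost.
    """
    name: str
    hypothesis: str                # outcome we are betting this produces
    horizon_weeks: int             # when we expected to see signal
    invested_person_weeks: float   # opportunity cost to date
    evidence: list[str] = field(default_factory=list)  # usage, feedback, revenue

# Hypothetical inventory entries, echoing the opening story
inventory = [
    Bet("AI analytics", "Power users adopt analytics weekly",
        horizon_weeks=12, invested_person_weeks=80,
        evidence=["11 beta users"]),
    Bet("Self-serve onboarding", "Support tickets drop 30%",
        horizon_weeks=8, invested_person_weeks=40,
        evidence=["80% of new accounts bypass the flow"]),
]
```

A bet with an empty `evidence` list after its time horizon has passed is itself a signal worth flagging in the review.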

Step 2: Apply the Three-Signal Test

For each bet, evaluate three signals:

Signal 1 — Customer Pull. Are customers actively asking for this, using early versions of it, or changing their behavior because of it? A bet with no customer pull after a reasonable time horizon is a bet based on internal conviction, not market reality.

Signal 2 — Strategic Leverage. If this bet succeeds, does it create a compounding advantage — a moat, a platform effect, a data asset? Or does it produce a one-time gain that competitors can replicate in a quarter?

Signal 3 — Execution Velocity. Is the team making meaningful progress, or has the initiative stalled? Stalled bets are almost never resource problems. They are signal problems — the team does not know what to build next because the hypothesis is too vague.

Step 3: Sort Into Three Buckets

Based on the three signals, sort each bet:

  • Keep and resource: Strong signals on at least two of three dimensions. These deserve dedicated teams, not fractional allocation.
  • Time-box and test: Mixed signals. Set a 6-8 week deadline with specific milestones. If the milestones are not hit, the bet moves to the kill bucket automatically.
  • Kill and redeploy: Weak signals or no signals after a reasonable time horizon. Kill it, celebrate the learning, and redeploy the team to a stronger bet.

The key discipline: set kill criteria before the review, not during it. When you define the conditions under which a bet gets killed while you are still emotionally neutral, you make better decisions than when you are in a room full of people defending their work.
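The sorting rule in Step 3 can be written down as a small function, which is useful precisely because it removes room for in-the-room negotiation. The two-of-three threshold below is one reasonable reading of the framework, not a hard rule:

```python
def sort_bet(customer_pull: bool, strategic_leverage: bool,
             execution_velocity: bool) -> str:
    """Sort a bet into a bucket from its three signals.

    Thresholds are one plausible interpretation:
    two or more strong signals -> keep, exactly one -> time-box,
    none -> kill. Teams should agree on the rule before the review.
    """
    strong = sum([customer_pull, strategic_leverage, execution_velocity])
    if strong >= 2:
        return "keep and resource"
    if strong == 1:
        return "time-box and test"
    return "kill and redeploy"

# Strong pull and leverage, stalled execution: still worth keeping
bucket = sort_bet(customer_pull=True, strategic_leverage=True,
                  execution_velocity=False)
# -> "keep and resource"
```

Scoring each signal independently before discussion, as suggested later in the first-review checklist, keeps the inputs to this function honest.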

Step 4: Rebalance the Portfolio

After sorting, check your allocation against the 70-20-10 guideline. If you are spending 50% of your capacity on transformational bets and 30% on core improvements, your product is probably getting less reliable while your bets are moving too slowly. If you are spending 95% on core, you are optimizing a product that will be disrupted.

The right ratio depends on your company’s stage and market dynamics. Early-stage startups might run 40-30-30. Mature products in stable markets might run 80-15-5. The point is not to hit a magic number — it is to make the allocation intentional rather than accidental.
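The rebalancing check in Step 4 is simple arithmetic, and making it explicit helps spot drift each quarter. This sketch assumes three capacity categories named `core`, `adjacent`, and `transformational`; the default target is the 70-20-10 guideline, which you would adjust for your stage:

```python
def allocation_gap(capacity: dict[str, float],
                   target=(0.70, 0.20, 0.10)) -> dict[str, float]:
    """Compare the actual capacity split against a target ratio.

    `capacity` maps category name -> people-weeks (or any consistent
    unit). Returns each category's share minus its target share;
    positive means overweight, negative means underweight.
    """
    total = sum(capacity.values())
    targets = dict(zip(("core", "adjacent", "transformational"), target))
    return {k: round(capacity[k] / total - targets[k], 2) for k in targets}

# Hypothetical portfolio overweight on transformational bets
gaps = allocation_gap({"core": 30, "adjacent": 20, "transformational": 50})
# gaps["transformational"] is +0.40: far above the 10% guideline
```

An early-stage startup targeting 40-30-30 would simply pass `target=(0.40, 0.30, 0.30)`; the point, as above, is that the allocation is intentional.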

Applying the Framework: Before and After

Consider a product team at a healthcare technology company. They have six initiatives in flight: a redesigned clinical dashboard, an API for third-party integrations, a patient-facing mobile app, a billing automation tool, a compliance reporting module, and an AI-powered alert system for patient risk scoring.

Before the Strategic Bet Review, every initiative has equal status on the roadmap. Each one has a product manager spending 30% of their time on it. Engineering resources are split across all six. Weekly standups cover all six at surface level. Nothing is shipping fast enough to generate real user feedback.

After applying the framework:

The clinical dashboard redesign shows strong customer pull (three enterprise clients have made renewal contingent on it) and high strategic leverage (it is the daily-use surface that drives retention). It moves to “keep and resource” — it gets a dedicated squad.

The API for third-party integrations has moderate customer pull but high strategic leverage. It moves to “time-box and test” with a 6-week sprint to ship a minimum integration with one partner and measure adoption.

The patient-facing mobile app has been in development for seven months with no beta users. Customer interviews reveal that patients prefer using the existing patient portal. It gets killed, and the team is reassigned to the dashboard squad.

The billing automation tool has one internal champion but no customer requests. Killed.

The compliance reporting module has regulatory deadlines driving it — a unique kind of “customer pull” that is non-negotiable. Kept and resourced.

The AI alert system has high transformational potential but weak execution velocity because the data pipeline is not ready. Time-boxed: 8 weeks to validate the data pipeline feasibility. Kill criteria defined up front.

Six initiatives become two fully-resourced priorities, two time-boxed experiments, and two killed bets that free up nearly 40% of the team’s capacity.

How to Run Your First Strategic Bet Review

Do not try to overhaul your entire portfolio the first time. Start with this:

This week, open a spreadsheet and list every initiative your team is actively working on. For each one, write the hypothesis in a single sentence. If you cannot articulate the hypothesis, that is your first signal.

Next week, run a 90-minute session with your product leadership team. Walk through the three-signal test for each bet. Do not debate solutions — just score the signals honestly. Have each person score independently before discussing to avoid anchoring.

Before the session ends, identify at least one bet to kill and one to fully resource. The most common mistake in the first review is killing nothing. If your first review does not kill at least one initiative, you are being too conservative. As one innovation governance framework puts it: if you are not killing 50-70% of early-stage transformational bets, you are either not taking enough risk or not governing effectively.

Then schedule the next review for 90 days out. Put it on the calendar now. The practice only works with a consistent cadence.

The best product strategies are not defined by the bets you make. They are defined by the bets you have the discipline to walk away from. Your roadmap is not a to-do list — it is a portfolio, and portfolios require active management.

Frequently Asked Questions

How often should product managers review their strategic bets?

Quarterly is the right cadence for most teams. Monthly reviews are too frequent — bets need time to generate signal. Annual reviews are too infrequent — by the time you review, you have spent 12 months of resources on bets that should have been killed at month four. The 90-day cycle gives enough time for evidence to accumulate while keeping the feedback loop tight enough to prevent significant waste. Between formal reviews, track your product roadmap signals weekly so the quarterly conversation is grounded in data rather than opinions.

What is the difference between a strategic bet and a regular feature?

A strategic bet is an initiative where the outcome is uncertain and the investment is significant — typically more than a single sprint’s worth of effort. Regular features have predictable outcomes because they are extensions of proven patterns. The distinction matters because bets require different governance. Features can be prioritized with RICE scoring or similar frameworks. Bets need hypothesis-driven management with explicit kill criteria and time horizons.

How do you kill a strategic bet without demoralizing the team?

Frame the kill as a portfolio decision, not a performance judgment. The team did their job — they generated the evidence that informed the decision. Celebrate what was learned and make the redeployment immediate and visible. The worst thing you can do is let a killed initiative linger in zombie status where the team knows it is dead but nobody has officially said so. Be direct, be fast, and give the team something meaningful to move to.

What if leadership insists on keeping a bet that the data says should be killed?

This happens constantly. The best approach is to separate the signal conversation from the decision conversation. Present the three-signal assessment factually. If leadership still wants to keep the bet, ask for explicit kill criteria and a time-box. This turns an emotional disagreement into a testable commitment. “We will keep this initiative for 8 more weeks, and if we do not see X by that date, we will revisit.” Most of the time, the data wins — it just needs a deadline to win against.

How does the Strategic Bet Review relate to OKR planning?

The Strategic Bet Review feeds directly into OKR planning for product teams. Your active bets should map to your key results. If a bet does not connect to a key result, either the bet is misaligned or your OKRs are not reflecting your actual strategy. Run the bet review before OKR planning each quarter so your objectives reflect your true portfolio rather than an aspirational list of everything you wish you could do. This also ensures your product strategy stays aligned across teams.

Ty Sutherland

Ty Sutherland is the editor of Product Management Resources. With a quarter-century of product expertise under his belt, Ty is a seasoned veteran in the world of product management. A dedicated student of lean principles, he is driven by the ambition to transform organizations into Exponential Organizations (ExO) with a massive transformative purpose. Ty's passion isn't just limited to theory; he's an avid experimenter, always eager to try out a myriad of products and services. While he has a soft spot for tools that enhance the lives of product managers, his curiosity knows no bounds. If you're ever looking for him online, there's a good chance he's scouring his favorite site, Product Hunt, for the next big thing. Join Ty as he navigates the ever-evolving product landscape, sharing insights, reviews, and invaluable lessons from his vast experience.
