RICE prioritization: how top product teams rank features without arguments

Most product teams have a version of the same meeting. Someone pitches a feature they’re convinced will change everything. Someone else has data showing a different initiative drives more revenue. The engineering lead points out that both estimates are wildly optimistic. An hour later, the team leaves with no decision, a longer backlog, and less trust in the process. RICE prioritization exists because Intercom’s product team got tired of exactly this dynamic — and built a scoring model that replaced opinion-driven debates with a repeatable, transparent framework that any team can adopt in a single sprint.

This guide covers how RICE scoring actually works, where teams get the inputs wrong, how to run the calculation with real numbers, and when RICE is the right framework versus when you should reach for something else entirely.

What Is RICE Prioritization?

RICE prioritization is a scoring framework that evaluates product initiatives across four dimensions: Reach, Impact, Confidence, and Effort. Each initiative gets a numerical score calculated as:

RICE Score = (Reach × Impact × Confidence) ÷ Effort

The result is a single number you can use to rank competing features, experiments, or projects on a consistent scale. Higher scores indicate initiatives that deliver more value per unit of work. Lower scores surface the projects that sound exciting in a brainstorm but don’t hold up under scrutiny.
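The formula is simple enough to express as a one-line function. This is an illustrative sketch, not code from Intercom; the signature and validation are my own additions:

```python
def rice_score(reach, impact, confidence, effort):
    """Return the RICE score: (Reach x Impact x Confidence) / Effort.

    reach:      users who will encounter the initiative per quarter (raw count)
    impact:     0.25, 0.5, 1, 2, or 3 on Intercom's five-point scale
    confidence: evidence level as a fraction, e.g. 0.8 for 80%
    effort:     total person-months across all functions
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort
```

Keeping Confidence as a fraction (0.8, not 80) means it acts as a direct discount on the Reach-times-Impact product, which is exactly the penalty on guesswork the framework intends.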

What separates RICE from simpler frameworks is the Confidence factor. Most prioritization models ask “how impactful is this?” and “how hard is it?” RICE adds a critical question: “how sure are you about those estimates?” That single addition changes the conversation from advocacy to honesty.

The Origin of RICE: Why Intercom Built It

Sean McBride and the product team at Intercom developed RICE because they found existing frameworks either too simplistic or too subjective. Their team was growing, more stakeholders were pitching ideas, and they needed a common language for comparing initiatives that were fundamentally different — a new onboarding flow versus a billing system overhaul versus a mobile push notification feature.

The breakthrough was making Confidence explicit. Before RICE, their team would estimate impact and effort but never formally acknowledge how much guesswork was involved. Projects backed by customer data and projects based on a PM’s intuition were scored identically. RICE fixed that by penalizing uncertainty directly in the formula.

The Four Factors of RICE Scoring

Reach: How Many People Will This Touch?

Reach measures the number of users or customers who will encounter this initiative within a defined time period — typically one quarter. The key word is encounter, not benefit from. You’re estimating exposure, not outcome.

How to measure it: Use your product analytics. If you’re improving the checkout flow, Reach is the number of users who hit checkout per quarter. If you’re building a new reporting dashboard, Reach is the number of users who would navigate to reporting.

Common trap: Teams often default to “all users” for anything that touches the main product surface. A settings page redesign doesn’t reach all users — it reaches the percentage who actually open settings. For most products, that’s 10-20% of active users, not 100%.

Scale: Use raw numbers (e.g., 5,000 users/quarter), not percentages. This keeps comparisons grounded in real volume.

Impact: How Much Will Each Person Be Affected?

Impact captures the magnitude of effect on each person reached. Intercom’s original framework uses a five-point scale:

Score  Label    What It Means
-----  -------  -------------
3      Massive  Fundamentally changes how users experience the product
2      High     Noticeably improves a core workflow
1      Medium   Incremental improvement users will notice
0.5    Low      Minor improvement, may go unnoticed
0.25   Minimal  Barely perceptible change

How to calibrate: Before your first scoring session, pick three shipped features your team agrees were high-impact and three that were low-impact. Use those as reference points so everyone scores on the same scale.

Common trap: Impact inflation. When every feature gets a 2 or 3 because “it’s important to us,” the framework loses its ability to differentiate. If everything is high impact, nothing is.

Confidence: How Sure Are You?

Confidence is a percentage that represents how much evidence backs your Reach and Impact estimates. This is the factor most teams get wrong — and the one that matters most.

Confidence  Evidence Level
----------  --------------
100%        Strong quantitative data: analytics, A/B test results, validated research
80%         Solid qualitative signals: user interviews, support ticket patterns, survey data
50%         Educated guess based on domain experience but limited direct evidence
Below 50%   Pure speculation — Intercom calls these “moonshots”

The rule: If you can’t point to specific evidence for your Reach and Impact scores, your Confidence should not exceed 50%. Most teams default to 80% because the idea was “discussed in a meeting.” Discussion is not evidence.

Common trap: Anchoring Confidence to conviction rather than evidence. A PM who is personally passionate about a feature will rate Confidence high because they believe in it. Confidence should reflect data, not enthusiasm.

Effort: What Will This Actually Cost?

Effort quantifies the total work required in person-months. One person working for two months = 2 person-months. Two people working for one month = 2 person-months.

Critical detail: Effort must include all functions, not just engineering. Design, QA, documentation, data science, and marketing launch support all count. A feature that takes one engineer two weeks but also needs a designer for a week and QA for three days is not “0.5 person-months” — it’s closer to 1.

How to estimate: Break the initiative into phases (design, build, test, launch) and estimate each separately. Sum them. Then add 20% for the integration work and unexpected complexity that every estimate misses.

Scale: Use person-months as the unit. Round to the nearest 0.5 for small initiatives.
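The phase-by-phase method above reduces to a few lines of arithmetic. The phase values here are hypothetical, chosen only to show the mechanics of the 20% buffer and the 0.5 rounding:

```python
# Phase estimates in person-months (hypothetical values for illustration)
phases = {"design": 0.5, "build": 2.0, "test": 0.5, "launch": 0.25}

raw_total = sum(phases.values())   # 3.25 person-months before buffer
buffered = raw_total * 1.20        # add 20% for integration work and surprises

# Round to the nearest 0.5 person-month for the final Effort input
effort = round(buffered * 2) / 2   # 3.9 rounds to 4.0
```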

How to Calculate a RICE Score (With a Worked Example)

Let’s score a real initiative: adding a CSV export feature to a reporting dashboard.

Factor      Value                Reasoning
------      -----                ---------
Reach       2,000 users/quarter  Based on analytics: 2,000 unique users access reporting per quarter
Impact      1 (Medium)           Saves time but doesn’t change core workflow
Confidence  80%                  Support tickets confirm demand; 47 requests in last quarter
Effort      1.5 person-months    1 month engineering + 2 weeks design/QA

RICE Score = (2,000 × 1 × 0.8) ÷ 1.5 = 1,067

Now compare that to: rebuilding the onboarding flow.

Factor      Value                Reasoning
------      -----                ---------
Reach       8,000 users/quarter  All new signups go through onboarding
Impact      2 (High)             Current onboarding has 40% drop-off; improvement could lift activation
Confidence  50%                  No A/B test data yet; estimate based on competitor benchmarks
Effort      6 person-months      Full redesign across 3 engineers, 1 designer, QA

RICE Score = (8,000 × 2 × 0.5) ÷ 6 = 1,333

The onboarding rebuild scores higher despite being four times the effort, because the Reach and Impact are dramatically larger. But notice that Confidence is 50% — if the team ran a prototype test and validated the impact assumption, that Confidence could jump to 80%, pushing the score to 2,133. That insight alone justifies investing a week in a prototype before committing six months to the full build.
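Running the numbers directly makes the Confidence sensitivity visible. The `rice_score` helper below is an illustrative sketch of the formula, not an official implementation:

```python
def rice_score(reach, impact, confidence, effort):
    # RICE = (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort

csv_export = rice_score(2000, 1, 0.8, 1.5)  # about 1,067
onboarding = rice_score(8000, 2, 0.5, 6)    # about 1,333

# If a prototype test lifted Confidence from 50% to 80%:
validated = rice_score(8000, 2, 0.8, 6)     # about 2,133
```

A single input change, Confidence moving from 0.5 to 0.8, raises the onboarding score by 60% without touching Reach, Impact, or Effort, which is why de-risking work like a prototype can be the highest-leverage next step.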

RICE Scoring in Practice: A Full Backlog Comparison

Here’s what a real prioritization session output looks like when you score five competing initiatives:

Initiative                 Reach   Impact  Confidence  Effort  RICE Score  Rank
----------                 -----   ------  ----------  ------  ----------  ----
Onboarding rebuild         8,000   2       50%         6       1,333       1
CSV export                 2,000   1       80%         1.5     1,067       2
Mobile push notifications  12,000  0.5     80%         3       1,600       —
Admin audit log            500     2       100%        2       500         4
Dark mode                  15,000  0.25    90%         4       844         3

Wait — mobile push notifications scored highest at 1,600 but is ranked with a dash. Why? Because RICE scores are a starting point for conversation, not a final decision. If push notifications require a new infrastructure dependency that blocks other work for two quarters, that context matters. The score surfaces the opportunity; the team makes the call.

This is why experienced PMs treat RICE as a decision-support tool rather than a decision-making tool. The framework reduces bias and creates transparency, but it cannot account for dependencies, strategic bets, or market timing on its own.
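A scoring session like the one above is easy to reproduce in a few lines. This sketch recomputes and ranks the five initiatives from the table; the helper function and data layout are my own illustration:

```python
def rice_score(reach, impact, confidence, effort):
    # RICE = (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort

# (name, reach, impact, confidence, effort) per the backlog table
backlog = [
    ("Onboarding rebuild",        8000,  2,    0.50, 6),
    ("CSV export",                2000,  1,    0.80, 1.5),
    ("Mobile push notifications", 12000, 0.5,  0.80, 3),
    ("Admin audit log",           500,   2,    1.00, 2),
    ("Dark mode",                 15000, 0.25, 0.90, 4),
]

# Sort by score, highest first; this is the starting order, not the decision
ranked = sorted(
    ((name, round(rice_score(r, i, c, e))) for name, r, i, c, e in backlog),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score}")
```

The sorted output puts mobile push notifications first at 1,600, which is exactly the point where human judgment about dependencies and strategy takes over from the spreadsheet.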

The Five Mistakes That Break RICE Scoring

1. Scoring Everything at 80% Confidence

The most common failure mode. Teams treat 80% as the default because “we talked about it.” Reserve 80% for initiatives backed by real qualitative evidence — user interviews, support data, or usage analytics. If you don’t have that evidence, you’re at 50% or below.

2. Forgetting Non-Engineering Effort

A feature that takes two engineers three weeks sounds like 1.5 person-months of Effort. But if it also needs a designer for two weeks, a week of QA, documentation updates, and a marketing launch — real Effort is closer to 3.5 person-months, which cuts the RICE score by more than half.

3. Using Reach as a Popularity Contest

Reach is not “how many users would like this.” It’s “how many users will encounter this in a quarter.” A backend performance improvement that makes the entire app faster has high Reach. A niche workflow improvement for power users has low Reach. Neither is inherently better — that’s what Impact is for.

4. Scoring in Isolation Instead of Calibrating as a Team

When one PM scores their features alone and another PM scores theirs alone, the scales drift. One PM’s “High Impact” is another’s “Medium.” Always score collaboratively, or at minimum, calibrate by scoring three reference features together before splitting up.

5. Never Updating Scores

RICE scores are estimates, and estimates improve with new information. After a feature ships, compare actual Reach and Impact to your predictions. That feedback loop is what makes your next round of scoring more accurate. Teams that score once and never revisit build a culture of performative precision.
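The post-ship comparison can be as simple as a ratio of actual to predicted values. The numbers here are hypothetical, purely to illustrate the feedback loop:

```python
# Hypothetical post-launch retro: compare predicted vs. actual inputs
predicted = {"reach": 2000, "impact": 1}
actual = {"reach": 1400, "impact": 1}

# A ratio below 1.0 means the prediction was too optimistic
reach_accuracy = actual["reach"] / predicted["reach"]  # 0.7: overestimated by ~30%
```

Tracking even one ratio per shipped feature gives the team a concrete calibration record to consult in the next scoring session.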

RICE vs. ICE vs. WSJF vs. MoSCoW: When to Use Each

RICE isn’t the only prioritization framework, and it’s not always the best one. Here’s when each framework fits:

Framework  Best For                                   Team Size        Speed                Key Strength
---------  --------                                   ---------        -----                ------------
RICE       Feature prioritization with customer data  10-50 people     5-10 min/item        Confidence factor penalizes guesswork
ICE        Growth experiments, rapid iteration        Under 10 people  2-3 min/item         Speed — score and move
WSJF       Time-sensitive work, SAFe environments     50+ people       10-15 min/item       Captures urgency and cost of delay
MoSCoW     Scope negotiation within a fixed release   Any size         Fast for scope cuts  Forces binary keep/cut decisions

ICE (Impact, Confidence, Ease) drops Reach and replaces Effort with Ease, making it faster but less precise. It works well for growth teams running dozens of small experiments where speed of scoring matters more than accuracy.

WSJF (Weighted Shortest Job First) adds urgency through a “Cost of Delay” factor. If your team operates in an environment with regulatory deadlines, competitive windows, or contractual obligations, WSJF captures time pressure that RICE ignores.

MoSCoW (Must, Should, Could, Won’t) isn’t a scoring framework — it’s a classification tool. Use it when you’ve already decided what to build and need to negotiate scope within a fixed timebox. It pairs well with RICE: use RICE to decide what to build, then MoSCoW to decide how much of it to ship in the next release.

For a deeper comparison of prioritization approaches, see our guide to the most useful product management frameworks.

How to Introduce RICE to Your Team

If your team hasn’t used RICE before, don’t roll it out on your entire backlog. Start small:

Week 1: Pick five initiatives the team is already debating. Score them together in a 45-minute session. Walk through each factor. Expect disagreements — that’s the point. The conversation surfaces assumptions that were previously invisible.

Week 2: Have each PM score their own features independently, then compare in a group calibration session. Look for where scores diverge and discuss why.

Week 3: Apply RICE to your next quarterly planning session. Use scores as the starting order for discussion, not the final decision.

Ongoing: After each cycle, run a quick retro — which scores were accurate? Which were way off? Tighten your calibration based on real outcomes.

The goal is not perfect scores. The goal is a shared language for trade-offs that replaces “I think this is more important” with “here’s why I scored this higher, and here’s what I’d need to see to change my mind.”

Build RICE into your product roadmap process and combine it with OKRs for product teams to ensure your highest-scored initiatives align with quarterly objectives.

Frequently Asked Questions

What does RICE stand for in product management?

RICE stands for Reach, Impact, Confidence, and Effort. It’s a prioritization framework developed by Sean McBride at Intercom that gives each feature or initiative a numerical score based on these four factors. The score is calculated as (Reach × Impact × Confidence) ÷ Effort, producing a single number that allows teams to rank and compare competing initiatives on a consistent scale.

How do you calculate a RICE score?

RICE Score = (Reach × Impact × Confidence) ÷ Effort. Reach is the number of users who will encounter the initiative per quarter. Impact is the effect per user on a scale from 0.25 (minimal) to 3 (massive). Confidence is a percentage reflecting how much evidence supports your estimates (100% for hard data, 50% for educated guesses). Effort is total work required in person-months across all functions — engineering, design, QA, and launch support.

What are the biggest mistakes teams make with RICE scoring?

The most common mistakes are defaulting to 80% Confidence without evidence, only counting engineering time in Effort estimates, inflating Impact scores because a feature “feels important,” scoring in isolation rather than calibrating as a team, and never updating scores as new data becomes available. These errors compound — a single inflated Confidence score can double a RICE score and push the wrong initiative to the top of the roadmap.

When should you use RICE instead of other prioritization frameworks?

RICE works best for mid-size teams (10-50 people) with enough customer data to estimate Reach and Impact with reasonable accuracy. It’s ideal for comparing features on a product roadmap. For rapid growth experiments, ICE is faster. For time-sensitive work with regulatory or competitive deadlines, WSJF captures urgency better. For scope negotiation within a fixed release, MoSCoW is more practical. Many teams use RICE for quarterly planning and simpler frameworks for day-to-day decisions.

Can RICE prioritization work for non-product teams?

Yes. RICE works anywhere you need to compare competing initiatives with limited resources. Marketing teams use it to prioritize campaigns (Reach = audience size, Impact = expected conversion lift). Engineering platform teams use it to prioritize technical debt (Reach = number of teams affected, Impact = developer productivity gain). The framework is flexible — the key is defining what Reach and Impact mean for your specific context and sticking to those definitions consistently.

Your Next Step

Pick three features currently sitting in your backlog. Score each one using RICE — right now, before your next planning session. Don’t overthink the numbers. Use your analytics for Reach, be honest about Confidence, and include all functions in your Effort estimate.

Then bring those scores to your next team discussion. Don’t present them as decisions. Present them as conversation starters: “Here’s how I scored these, and here’s what I’d need to see to change my mind.” That framing turns RICE from a spreadsheet exercise into a shared language for making trade-offs — which is the real value of any prioritization framework.

Ty Sutherland

Ty Sutherland is the editor of Product Management Resources. With a quarter-century of product expertise under his belt, Ty is a seasoned veteran in the world of product management. A dedicated student of lean principles, he is driven by the ambition to transform organizations into Exponential Organizations (ExO) with a massive transformative purpose. Ty's passion isn't just limited to theory; he's an avid experimenter, always eager to try out a myriad of products and services. While he has a soft spot for tools that enhance the lives of product managers, his curiosity knows no bounds. If you're ever looking for him online, there's a good chance he's scouring his favorite site, Product Hunt, for the next big thing. Join Ty as he navigates the ever-evolving product landscape, sharing insights, reviews, and invaluable lessons from his vast experience.
