Your OKRs are probably just a feature list with a corporate rebrand
Here’s what happens in most product organisations every quarter: leadership announces it’s “OKR season.” Teams scramble to reframe their already-planned features as Objectives. They add some percentage signs to make things look measurable. Everyone agrees these are “ambitious but achievable.” Three months later, nobody can explain what actually changed because of those OKRs.
If your OKRs for product teams feel like bureaucratic overhead rather than strategic clarity, you’re not using them wrong. You’re using a corrupted version that’s become standard practice. The problem isn’t the framework — it’s that most implementations gut the one thing that makes OKRs useful.
The core mistake: measuring what you ship instead of what changes
The distinction between outputs and outcomes sounds obvious until you try to write a Key Result. Then suddenly “Launch the new checkout flow” feels specific, achievable, and measurable. “Increase conversion rate from 2.1% to 2.8%” feels terrifying because you can’t fully control it.
That fear is exactly the point.
Output-based Key Results let you declare victory by shipping code. Outcome-based Key Results force you to prove that shipping code actually mattered. The difference isn’t semantic — it changes what your team optimises for.
When your KR is “Launch onboarding redesign,” you’ll ship the redesign. When your KR is “Reduce time-to-first-value from 14 days to 7 days,” you might ship a redesign. Or you might discover that a simple email sequence gets you 80% of the improvement at 10% of the cost. Or you might learn that the real problem is your pricing page confusing people before they even sign up.
Outcome-based OKRs keep you honest about whether your brilliant solution actually solves the problem.
Bad vs good product OKRs: specific examples
Abstract principles are useless without concrete examples. Here’s what the output-to-outcome shift looks like in practice:
Example 1: Feature delivery disguised as an OKR
Bad Key Result: “Launch onboarding flow redesign by end of Q2”
This tells you nothing about whether the redesign achieved anything. You can hit this KR while making onboarding worse. Teams regularly do.
Good Key Result: “Reduce median time-to-first-value from 14 days to 7 days”
Now you need to actually measure whether new users are reaching their “aha moment” faster. The redesign might be one way to achieve this. It might not be the best way. The KR doesn’t care about your solution — it cares about the result.
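An outcome KR like this also changes what you instrument. A minimal sketch of how you might compute median time-to-first-value from signup and first-value timestamps — the data shape and event records here are hypothetical, and “first value” stands in for whatever your product’s aha moment is:

```python
from datetime import datetime
from statistics import median

# Hypothetical records: (user_id, signup_time, first_value_time)
events = [
    ("u1", datetime(2024, 4, 1), datetime(2024, 4, 16)),
    ("u2", datetime(2024, 4, 2), datetime(2024, 4, 8)),
    ("u3", datetime(2024, 4, 3), datetime(2024, 4, 12)),
]

def median_time_to_first_value(events):
    """Median days between signup and the first-value event."""
    days = [(first_value - signup).days for _, signup, first_value in events]
    return median(days)

print(median_time_to_first_value(events))  # 9 days for this sample
```

Once this number is tracked weekly, any initiative — redesign, email sequence, pricing-page fix — gets judged against the same yardstick.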
Example 2: Activity metrics masquerading as learning
Bad Key Result: “Conduct 20 customer interviews this quarter”
You can conduct 20 interviews and learn nothing. You can also learn everything you need from 5 conversations if you ask the right questions and talk to the right people.
Good Key Result: “Identify and validate 3 high-confidence discovery opportunities for Q3 development”
This forces you to define what “validated” means before you start. It connects research activity to a concrete output that influences future work. The 20 interviews might be necessary to get there — or 8 might be enough.
Example 3: Technical metrics without business meaning
Bad Key Result: “Reduce API response time to under 200ms”
Unless you’re selling infrastructure, nobody cares about your API latency. They care about what slow responses cause.
Good Key Result: “Reduce checkout abandonment caused by timeout errors from 3.2% to under 1%”
This connects technical work to business impact. It also means you need to actually measure whether timeout errors cause abandonment — you might discover they don’t, and the real issue is something else entirely.
Example 4: Engagement theatre
Bad Key Result: “Increase DAU/MAU ratio from 0.15 to 0.25”
This sounds outcome-focused, but it’s often just a vanity metric dressed up. What does a higher DAU/MAU ratio actually mean for your business? For some products, it means retention. For others, it means you’ve added annoying notifications that force people to check the app daily without actually using it.
Good Key Result: “Increase percentage of users completing at least one core workflow per week from 34% to 50%”
This defines engagement in terms of actual value delivery. Users doing the thing your product exists to help them do.
Why “70% completion is good” ruins everything
Google popularised the idea that hitting 70% of your OKRs means you set them correctly. Hit 100% and you sandbagged. Hit 40% and you were delusional.
This sounds reasonable until you watch it corrupt an organisation in real time.
Here’s what actually happens: teams learn that setting ambitious goals leads to “failure” in performance reviews. Managers learn that their teams look bad if KR completion is below 60%. So everyone inflates their baseline metrics, sets conservative targets, and games the denominator.
The result? A company full of teams hitting 70% of their carefully sandbagged goals while actual business metrics stagnate.
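To see concretely how the denominator gets gamed, here’s the standard linear KR scoring formula (score scales from 0 at the baseline to 1 at the target) with illustrative numbers — the conversion figures are made up for the example:

```python
def kr_score(baseline, target, actual):
    """Linear OKR score: 0.0 at baseline, 1.0 at target, clamped to [0, 1]."""
    progress = (actual - baseline) / (target - baseline)
    return max(0.0, min(1.0, progress))

# Honest KR: conversion was 2.1%, ambitious target 2.8%, quarter ends at 2.4%
print(round(kr_score(2.1, 2.8, 2.4), 2))   # 0.43 — reads as "failure" under review pressure

# Sandbagged KR: same actual result, but a padded baseline and modest target
print(round(kr_score(2.3, 2.45, 2.4), 2))  # 0.67 — lands in the "well-calibrated" zone
```

Same business result, very different score — which is exactly why the scoring system stops meaning anything once teams optimise for it.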
The “70% is good” rule only works when two conditions are true:
- Leadership genuinely doesn’t punish teams for missing ambitious targets
- There’s calibration across teams so “ambitious” means roughly the same thing everywhere
Most organisations satisfy neither condition. If yours doesn’t, either fix the cultural problem or stop pretending the scoring system means anything. Spotify famously abandoned OKR scoring entirely because the gaming became worse than no scores at all.
How OKRs connect to your product roadmap
One of the most common questions: “If OKRs are outcomes, where do the features go?”
Your [INTERNAL_LINK: product roadmap] still exists. It just serves a different purpose.
OKRs set the destination: “We’re trying to get here” (outcome we want to achieve)
Roadmap shows the route: “Here’s how we currently think we’ll get there” (initiatives we believe will drive the outcome)
The relationship is directional. OKRs should inform roadmap, not the other way around. If you’re writing OKRs to justify a roadmap you already decided on, you’ve inverted the relationship and gutted the value.
A healthy workflow looks like:
- Leadership sets strategic OKRs defining what outcomes matter
- Product teams propose Key Results they believe they can influence
- Teams use [INTERNAL_LINK: product discovery] to identify which initiatives might drive those Key Results
- The roadmap captures the current best-guess initiatives, with explicit links to which KRs they support
- When an initiative fails to move a KR, you try something else — the roadmap changes, the OKR doesn’t
This is why outcome-based OKRs require roadmap flexibility. If your roadmap is locked 6 months out and can’t change based on what you learn, outcome-based OKRs will just frustrate everyone.
OKRs and product discovery: the Teresa Torres connection
Teresa Torres’ Continuous Discovery Habits framework fits naturally with outcome-based OKRs, though she’d probably argue the OKR framing is secondary to the habits themselves.
Her model maps cleanly:
- Outcome: Your Key Result — the measurable change you’re trying to create
- Opportunity: The customer problem or need that, if addressed, could drive the outcome
- Solution: Your hypothesis for how to address the opportunity
- Experiment: How you’ll test whether the solution actually works
The key insight from Torres: you should be running continuous discovery against your OKRs, not just conducting discovery during “discovery phase” and then shipping during “delivery phase.”
Weekly touchpoints with customers, small experiments, interview snapshots — all aimed at understanding whether your current bets are moving the outcome and what you should try next if they’re not.
This is why activity-based KRs like “Conduct 20 customer interviews” miss the point. [INTERNAL_LINK: product discovery] isn’t an activity to complete. It’s an ongoing practice in service of outcomes.
The right cadence for product OKRs
Most companies default to quarterly OKRs for everyone. This is usually wrong for product teams.
Here’s a better model:
Annual: Strategic outcomes (company/business unit level)
These are the big bets. “Become the default solution for enterprise customers in healthcare.” “Achieve profitability in the SMB segment.” They shouldn’t change unless the strategy changes.
Connect these to your [INTERNAL_LINK: north star metric] — the one number that, if it moves, indicates you’re winning.
Quarterly: Product team OKRs
Each product team commits to Key Results that ladder up to strategic outcomes. “Increase enterprise healthcare trial-to-paid conversion from 12% to 20%.” These are specific enough to be measurable within 90 days but connected to bigger-picture goals.
Quarterly is right for most product work because it’s long enough to ship meaningful experiments and measure results, but short enough to course-correct if you’re wrong.
Weekly: Check-ins, not new OKRs
Weekly OKRs are a mistake. Instead, run weekly check-ins on quarterly OKRs:
- What did we ship last week that should influence this KR?
- Is the metric moving? If not, why?
- What’s our confidence level that current initiatives will hit the target?
- What should we try next?
These check-ins surface problems early. If you’re 6 weeks into the quarter and nothing’s moving, you need to know now, not at the retrospective.
What to do when executives hand you output OKRs
Let’s be realistic: many product teams don’t control their OKRs. The CEO decided the company is “launching the mobile app” this quarter. Your job is to make it happen, not to question whether a mobile app is the right solution.
You have a few options, in order of political capital required:
Option 1: Accept the output, add your own outcome (low risk)
Take the mandated deliverable and add an outcome-based KR that helps you measure success. “Launch mobile app” becomes “Launch mobile app AND achieve 10% of existing user base as monthly active mobile users within 60 days.”
This doesn’t challenge the mandate but gives you something to optimise against. It also gives you evidence for future conversations about whether mandates are working.
Option 2: Negotiate the outcome behind the output (medium risk)
Try to understand what outcome leadership actually wants. “Why mobile app?” might reveal they’re trying to improve retention, reach a new market segment, or respond to competitor pressure.
If you can surface the real goal, you might get permission to define the Key Result in outcome terms, even if the solution (mobile app) is fixed.
Option 3: Accept defeat and document (for hostile environments)
Sometimes the culture is too broken to change from your position. In that case, track both the output and the outcome that should have been measured. After a few quarters of “we launched the thing but nothing got better,” you’ll have evidence to push for change — or evidence that this isn’t an organisation where product thinking will ever be valued.
When OKRs are the wrong tool entirely
OKRs aren’t universal. They actively hurt in some contexts:
Very early-stage startups (pre-product-market fit)
When you’re still discovering what to build and for whom, the overhead of formal OKRs adds process without clarity. Your goal is learning, and the things you’ll learn this quarter might completely change what matters next quarter.
Better approach: have one clear question you’re trying to answer and ship experiments as fast as possible. Formalise OKRs once you know what business you’re in.
Pure maintenance or platform teams
If your team’s job is keeping systems running and responding to requests from other teams, quarterly outcome-based goals often feel forced. Your work is inherently reactive.
Better approach: service-level objectives (SLOs) and capacity allocation. “95% of requests completed within SLA” and “30% of time allocated to tech debt reduction” might serve you better than manufacturing an outcome KR every quarter.
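SLO attainment is also straightforward to compute from request data. A sketch, assuming you have per-request latencies in milliseconds and a hypothetical 500ms SLA threshold:

```python
def slo_attainment(latencies_ms, sla_ms=500):
    """Fraction of requests completed within the SLA threshold."""
    within = sum(1 for latency in latencies_ms if latency <= sla_ms)
    return within / len(latencies_ms)

# Hypothetical sample of request latencies (ms)
latencies = [120, 340, 90, 610, 480, 200, 950, 310, 450, 150]
attainment = slo_attainment(latencies)
print(f"{attainment:.0%} of requests within SLA")  # 80% — below a 95% objective
```

The point is that this number is a standing health target, not a quarterly bet — it belongs in an SLO dashboard, not an OKR document.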
Organisations where strategy changes constantly
If leadership pivots strategy every few weeks, quarterly OKRs become fiction immediately. You’ll spend more time rewriting OKRs than doing actual work.
Better approach: fix the strategy problem, or acknowledge you’re in execution-chaos mode and just focus on shipping until leadership stabilises. OKRs require a minimum level of strategic coherence to be useful.
Making the shift: what to do this quarter
If your OKRs currently look like feature lists with metrics, here’s how to start fixing them:
Step 1: For each current KR, ask: “Could we hit this KR without improving anything for customers or the business?” If yes, it’s an output.
Step 2: Find the outcome behind each output. Why does this feature matter? What should change when it ships? That change is your real KR.
Step 3: Make sure you can actually measure the outcome. If you can’t measure “user satisfaction with onboarding,” you can’t use it as a KR. Find a proxy metric you can track weekly.
Step 4: Have the sandbagging conversation explicitly. If leadership punishes teams for missing ambitious targets, either change that dynamic or stop pretending your OKRs are ambitious.
Step 5: Start weekly check-ins against your Key Results. Even if your OKRs are still imperfect, the habit of regularly asking “is this metric moving?” will surface problems faster.
OKRs work when they force you to define success before you start building, measure whether you’re achieving it, and change course when you’re not. Everything else — the scoring, the cadence, the tooling — is just implementation detail.
Get the output/outcome distinction right, and the rest becomes much easier. Get it wrong, and no amount of process will save you from shipping features that don’t matter.
Frequently asked questions
What are OKRs in product management?
OKRs (Objectives and Key Results) are a goal-setting framework where Objectives are qualitative goals and Key Results are measurable outcomes that tell you if you’ve achieved the objective. Product teams use OKRs to focus on outcomes over outputs.
What is an example of a product OKR?
Objective: Increase customer retention in our core segment. Key Results: Reduce 30-day churn from 8% to 5%. Increase 90-day retention from 60% to 75%. Achieve an NPS of 45+.
How often should you set product OKRs?
Quarterly OKRs are most common for product teams. Annual OKRs set strategic direction; quarterly OKRs translate strategy into near-term focus. Monthly is too frequent to show meaningful movement; annual alone is too slow to allow course correction.
What are common OKR mistakes in product management?
Setting output OKRs instead of outcome OKRs (‘launch 3 features’ vs ‘increase activation by 20%’), setting too many OKRs (stick to 1-3 objectives), treating OKRs as a to-do list, and failing to review progress weekly.
