AI for product managers: the workflows saving 10+ hours a week in 2025



You’re shipping slower than your engineering team can build. That’s the new problem in 2025, and it’s exactly backwards from how product development worked for the past two decades.

The bottleneck has shifted. AI coding assistants — Cursor, GitHub Copilot, Replit Agent — have compressed what used to take a sprint into what now takes a day. Your engineers aren’t waiting on you to write specs anymore; they’re waiting on you to decide. To synthesize the customer feedback. To prioritize the roadmap. To make the call.

And here’s the uncomfortable truth: if you’re still doing customer research the 2023 way, still writing PRDs from scratch, still manually tagging interview transcripts — you’re the slowest person on your team. The data backs this up: 65% of product professionals have already integrated AI into their daily workflows. The other 35% aren’t principled holdouts. They’re falling behind.

This post covers the specific ways AI creates leverage for PMs in 2025 — not the theoretical possibilities, but the actual workflows saving experienced PMs 10+ hours a week right now. I’ll also clarify something the industry keeps conflating: the difference between PMs who use AI and PMs who build AI products. These are different roles with different skills and different salary bands.

Two categories of AI for product managers (stop conflating them)

The phrase “AI product manager” means two completely different things depending on who’s saying it, and this confusion is costing people career opportunities and salary negotiations.

AI-powered PMs: traditional PMs who use AI as a tool

This is most of you, and it should be all of you. An AI-powered PM is a product manager working on any product — fintech, healthcare, e-commerce, developer tools — who uses AI to accelerate their existing workflows. You’re still doing [INTERNAL_LINK: customer discovery], still writing [INTERNAL_LINK: product requirements documents], still managing stakeholders. You’re just doing it faster.

The mental model: AI as a senior analyst who never sleeps, never gets annoyed at repetitive requests, and can process information at a scale you physically cannot.

This isn’t about replacing your judgment. It’s about getting to the judgment faster. You still decide what to build. AI just compresses the time between “I have a hypothesis” and “I have enough information to validate or kill it.”

AI PMs: PMs who build AI-native products

This is a specialized role with different requirements. An AI PM — sometimes called an ML PM or AI/ML Product Manager — builds products where machine learning is the core value proposition. Think recommendation engines, fraud detection systems, computer vision features, or LLM-powered applications.

The job is fundamentally different because the product development lifecycle is different. You’re not shipping deterministic features; you’re shipping probabilistic systems. You need to understand model training, evaluation metrics, data pipelines, and the weird ways ML systems fail.

According to Lenny Rachitsky’s 2025 research, demand for AI PMs is growing “insanely fast,” with salary premiums of 10-40% compared to traditional PM roles at the same level. That’s real money — at a $180K base, the premium could mean an extra $18K-$72K annually.

But here’s what most career advice gets wrong: you don’t become an AI PM by taking a machine learning course. You become one by shipping ML features at a company that’s building them. The path usually runs through being an AI-powered PM first, then transitioning to AI-native products once you’ve built the technical context.

For more on this career path, see [INTERNAL_LINK: AI product manager skills] and [INTERNAL_LINK: how to become an AI product manager].

Where AI creates the most leverage for PMs

Not all PM tasks benefit equally from AI. The biggest time savings come from tasks that are information-dense, repetitive, or require processing more text than a human can reasonably read. Here’s where the leverage is real, ranked by actual hours saved per week.

Customer feedback synthesis: the biggest win

This is where I’d start if you’re new to AI workflows. The ROI is immediate and obvious.

The old way: You export 500 NPS comments from your analytics tool. You read through them, maybe sampling, because who has time for all 500? You highlight patterns, create a spreadsheet, group by theme. This takes 3-4 hours if you're thorough, and you miss things because you're human.

The new way: You paste those 500 comments into Claude. You ask it to identify the top 5 themes with representative quotes and sentiment breakdown. You get a structured analysis in 2 minutes. Then you spend 20 minutes validating the themes against your product intuition and checking for patterns the AI might have weighted incorrectly.

Time saved: 3+ hours per analysis. And you probably run more analyses because the friction dropped.

The same pattern applies to app store reviews, support tickets, sales call notes, and churn surveys. Anywhere you have unstructured customer voice data, AI compresses the synthesis step by an order of magnitude.
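If you run this synthesis often, the paste step itself is worth scripting. Here's a minimal Python sketch that packs raw comments into synthesis prompts under a rough size budget, so each batch fits in one model conversation. The character budget, the theme count, and the template wording are all assumptions you'd tune for your own tool and model:

```python
# Pack raw feedback comments into synthesis prompts that fit a rough
# character budget, so each batch can be pasted into (or sent to) an
# LLM in one go. The 60_000-character budget is an assumption; tune
# it for whatever model and interface you use.

TEMPLATE = (
    "Analyze the customer feedback below and provide:\n"
    "1. The top 5 themes, ranked by frequency, with representative quotes\n"
    "2. Sentiment breakdown (positive/negative/neutral) with percentages\n"
    "3. Feature requests mentioned more than once\n\n"
    "Feedback:\n{body}"
)

def build_batches(comments, budget=60_000):
    """Greedily pack comments into prompt strings under `budget` chars."""
    batches, current = [], []
    size = len(TEMPLATE)
    for comment in comments:
        line = f"- {comment.strip()}\n"
        if current and size + len(line) > budget:
            batches.append(TEMPLATE.format(body="".join(current)))
            current, size = [], len(TEMPLATE)
        current.append(line)
        size += len(line)
    if current:
        batches.append(TEMPLATE.format(body="".join(current)))
    return batches

# Example: 500 short comments fit comfortably in a single batch.
prompts = build_batches([f"Comment {i}: export is slow" for i in range(500)])
```

The point of the script isn't automation for its own sake — it's that once the friction of assembling the prompt drops to zero, you actually run the analysis weekly instead of quarterly.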

See [INTERNAL_LINK: AI customer feedback analysis] for specific workflows.

PRD writing: draft in 20 minutes instead of 2 hours

Most PMs spend 2-3 hours writing a solid [INTERNAL_LINK: product requirements document] from scratch. The actual writing is maybe 30% of that time. The rest is staring at a blank doc, context-switching to check Slack, and convincing yourself you need another coffee before you can start.

AI changes the starting point. Instead of a blank doc, you start with a coherent draft that captures 70-80% of what needs to be said. Your job shifts from writing to editing — which is cognitively easier and significantly faster.

But here’s what most PRD-writing guides miss: the real value isn’t the time saved on the first draft. It’s the edge cases.

When you prompt Claude or ChatGPT to write a PRD, then ask it to identify edge cases, failure modes, and questions engineering will ask — it catches things you’d miss. Not because it’s smarter than you, but because it’s systematic in a way that’s hard to be when you’re the person who came up with the feature idea. You’re too close to it.

I’ve seen AI surface questions like “What happens if the user starts this flow on mobile and finishes on desktop?” or “How does this interact with users who have two-factor authentication disabled?” — the kind of questions that would have come up in the first engineering review anyway, except now you’ve already answered them in the spec.

Time saved: 1-2 hours per PRD, plus fewer revision cycles.

See [INTERNAL_LINK: AI PRD writing] for my exact prompting approach.

Competitive research: half-day projects in 30 minutes

Traditional competitive research meant opening 15 browser tabs, skimming company blogs, hunting for press releases, and piecing together a narrative. It was tedious and incomplete because you were limited by what you could find and read in the time you had.

Perplexity changed this. The difference between Perplexity and standard ChatGPT or Claude for research is that Perplexity is built for real-time web access with citations. You’re not getting information from a training cutoff; you’re getting current data with links you can verify.

Ask Perplexity: “What pricing changes has Notion made in the last 12 months, and how did users respond?” You get a sourced answer in 30 seconds that would have taken an hour of Googling.

Ask: “What features has Linear shipped in Q1 2025, and which ones were most discussed on Twitter?” Same pattern — synthesis that’s grounded in current, verifiable sources.

Time saved: 2-4 hours per competitive deep-dive.

The limitation: Perplexity is only as good as publicly available information. For competitive intelligence that lives behind paywalls or in industry Slacks, you still need human networks.

Interview and meeting prep: stress-test your thinking

Here’s an underrated use case: using AI as a sparring partner before high-stakes conversations.

Before an executive roadmap review, I’ll paste my roadmap priorities into Claude and ask it to play the role of a skeptical CFO. “Push back on my prioritization. What questions would you ask about the ROI of item #2? Where am I being too optimistic about the timeline?”

The AI doesn’t have real executive judgment, but it’s surprisingly good at generating the structure of executive questions. It’ll ask about opportunity cost, resource allocation, competitive timing — the obvious hard questions you should have prepared answers for.

Same pattern works for customer interviews. Before a discovery call, I’ll brief Claude on what I know about the customer and ask it to generate the questions they’re likely to ask about our product direction. Then I stress-test my answers.

Time saved: hard to quantify, but the meetings go better.

Meeting notes and follow-ups: the most boring time sink

If you’re still manually writing meeting notes after calls, you’re doing unpaid administrative work that a computer can do better than you.

Tools like Granola (which Lenny Rachitsky has mentioned as part of his workflow) and Otter transcribe meetings and generate structured summaries. The better ones integrate with Notion or your documentation tool of choice, so the notes land where they need to be without copy-pasting.

The math: the average PM has 8-12 meetings per week. If you’re spending 15 minutes writing notes and follow-ups per meeting, that’s 2-3 hours weekly on pure administrative work. AI cuts this to a quick review and edit — maybe 3 minutes per meeting.
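That math is easy to sanity-check. A throwaway sketch, using the article's own assumptions (8-12 meetings a week, ~15 minutes of manual notes each, ~3 minutes with an AI-assisted review):

```python
# Back-of-envelope check on the weekly meeting-notes math.
# Inputs are assumptions from the text: 8-12 meetings per week,
# ~15 minutes of manual notes each, ~3 minutes with AI review.

def weekly_admin_hours(meetings, minutes_per_meeting):
    return meetings * minutes_per_meeting / 60

manual_low  = weekly_admin_hours(8, 15)   # 2.0 hours of manual notes
manual_high = weekly_admin_hours(12, 15)  # 3.0 hours of manual notes
ai_high     = weekly_admin_hours(12, 3)   # 0.6 hours with AI review
saved_high  = manual_high - ai_high       # up to ~2.4 hours saved
```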

Time saved: 2+ hours per week.

The caveat: these tools require your meeting participants to consent to recording. Some companies and customers have policies against this. Know your constraints before automating.

The tools that are actually worth it in 2025

The AI tool landscape is crowded and confusing. Here’s my opinionated list of what’s actually worth your time, based on using all of these daily for the past year.

Claude: best for long-document analysis and nuanced reasoning

Best for: PRD writing, analyzing lengthy documents, nuanced reasoning tasks, anything where you want thoughtful output rather than fast output.

Claude (made by Anthropic) has one of the largest context windows among the major LLMs, which means it can hold more information in a single conversation. This matters when you’re pasting a 40-page competitive analysis or a quarter’s worth of customer feedback.

I find Claude’s reasoning is more careful than ChatGPT’s — it’s more likely to caveat appropriately and less likely to confidently hallucinate. For PM work where nuance matters, that’s valuable.

The downside: Claude is slower than ChatGPT and has more limited integrations. If you want plugins and real-time web access, look elsewhere.

ChatGPT: best for brainstorming and versatility

Best for: Brainstorming, quick tasks, plugin integrations, image generation, anything where speed and flexibility matter more than depth.

ChatGPT is still the most versatile general-purpose AI. The plugin ecosystem means it can connect to tools you already use. The mobile app is more polished. For quick ideation and first-draft thinking, it’s often faster than Claude.

I use ChatGPT for brainstorming user interview questions, generating test data for specs, and quick sanity checks. When I need to think out loud with a capable partner who responds in two seconds, ChatGPT is the choice.

Perplexity: best for real-time research

Best for: Any research task that requires current information. Competitive analysis, market research, fact-checking.

Perplexity is what Google should have become. You ask a question, you get a synthesized answer with citations. Unlike ChatGPT or Claude, you can verify the sources because they’re linked.

For PM work, this means you can trust Perplexity’s output more for factual claims. When it says “Figma raised their enterprise pricing by 20% in March 2025,” you can click the source and confirm it.

The limitation: Perplexity is best for research, not reasoning. I wouldn’t use it for PRD writing or strategic thinking.

Notion AI: best for embedded documentation workflows

Best for: PMs whose team already lives in Notion.

Notion AI is less powerful than Claude or ChatGPT as a standalone reasoning engine. But if your team’s documentation, specs, and roadmaps live in Notion anyway, the embedded AI means you don’t have to copy-paste between tools.

The killer feature: you can ask Notion AI questions about your workspace. “Summarize all the customer feedback we’ve collected about the mobile app” — and it searches across your pages to answer.

Dovetail: best for qualitative research synthesis

Best for: Teams doing regular user interviews and needing to synthesize insights across conversations.

Dovetail is purpose-built for user research, which means it handles the specific workflows — tagging, coding, finding patterns across interviews — better than general-purpose AI. If you’re running 5+ user interviews per month, the specialized tooling pays for itself.

For more on research tools, see [INTERNAL_LINK: AI product research tools].

Granola: best for meeting intelligence

Best for: PMs in meeting-heavy roles who want automated notes without the friction of traditional transcription tools.

Granola works differently from Otter — it runs locally and uses your own notes as context for AI summaries. This means better outputs because the AI has both the transcript and your annotations. It’s the tool that senior PMs I know have been raving about in 2025.

5 prompts PMs should have saved

Generic prompts produce generic outputs. These are the specific prompts I use regularly, refined over hundreds of iterations.

Prompt 1: Customer feedback synthesis

I'm going to paste [NUMBER] pieces of customer feedback about [PRODUCT/FEATURE]. 

Analyze this feedback and provide:
1. The top 5 themes, ranked by frequency, with 2-3 representative quotes each
2. Sentiment breakdown (positive/negative/neutral) with percentages
3. Any surprising or contradictory patterns
4. Specific feature requests mentioned more than once
5. Questions I should explore in follow-up research

Here's the feedback:
[PASTE FEEDBACK]

Prompt 2: PRD first draft

Write a PRD for [FEATURE NAME]. 

Context:
- Product: [YOUR PRODUCT]
- Target user: [USER PERSONA]
- Problem being solved: [PROBLEM STATEMENT]
- Success metrics: [HOW WE'LL MEASURE SUCCESS]
- Known constraints: [TECHNICAL, TIMELINE, OR RESOURCE CONSTRAINTS]

Include these sections:
1. Problem statement and user impact
2. Proposed solution (high-level)
3. User stories with acceptance criteria
4. Edge cases and error states
5. Out of scope (what we're explicitly NOT doing)
6. Open questions for engineering

After the draft, list 10 questions engineering will likely ask that I haven't addressed.

Prompt 3: Executive roadmap challenge

You are a skeptical [CFO/CEO/CPO] reviewing my product roadmap for Q[X]. Your job is to stress-test my prioritization.

Here's my proposed roadmap with rationale:
[PASTE ROADMAP]

Ask me the 10 hardest questions about:
- Why these priorities over others
- Resource allocation
- ROI assumptions
- Competitive timing
- Dependencies and risks

Be specific and challenging. Don't accept vague answers.

Prompt 4: Competitive positioning analysis

I need to understand how [COMPETITOR] positions against [MY PRODUCT].

Research and provide:
1. Their current pricing and packaging
2. Key messaging on their website — what problems do they claim to solve?
3. Recent product announcements (last 6 months)
4. How they differentiate from competitors like us
5. Common criticisms from users (check G2, Reddit, Twitter)
6. Where they appear to be investing (based on job postings, announcements)

Cite sources for factual claims.

Prompt 5: User interview question generator

I'm conducting user interviews to understand [RESEARCH GOAL].

Target user: [PERSONA DESCRIPTION]
What I already believe: [CURRENT HYPOTHESES]
What would change my mind: [WHAT I'M TRYING TO VALIDATE OR INVALIDATE]

Generate 15 open-ended interview questions that:
- Don't lead the user toward my existing hypotheses
- Focus on past behavior, not future intentions
- Go deeper on emotional and motivational aspects
- Include follow-up probes for each question

Order them from rapport-building to more probing questions.

For more prompts, see [INTERNAL_LINK: ChatGPT prompts for product managers] and [INTERNAL_LINK: AI product management prompts].

What AI cannot replace

AI hype cycles tend to overclaim, so let me be specific about what AI is bad at — and why these limitations are structural, not temporary.

Judgment under ambiguity

AI can synthesize information and present options. It cannot make the judgment call when reasonable people disagree. “Should we prioritize enterprise features over consumer growth?” is not a question AI can answer because the answer depends on strategy, context, and company values that the AI doesn’t have access to.

The PM job is fundamentally about judgment calls where the data is incomplete. AI can compress the time to get to the decision point, but you still have to decide.

Stakeholder dynamics

AI doesn’t know that your VP of Engineering is skeptical of your roadmap because of what happened two quarters ago. It doesn’t know that the CEO responds better to bottom-up ROI arguments than top-down strategic framing. It doesn’t know who has political capital and who’s on thin ice.

Organizational context is the dark matter of product management. It determines what gets prioritized, what gets resourced, and what gets killed regardless of merit. AI has no model for this.

Customer empathy

AI can analyze what customers say. It cannot feel the frustration in a user’s voice during an interview. It cannot notice the difference between a customer who’s annoyed and a customer who’s about to churn. It cannot build the kind of relationship where a customer tells you what they actually think instead of what they think you want to hear.

The best PMs I know have an intuitive model of their users that goes beyond data. They can predict how a user will react to a design change before running the test. This intuition is built through direct exposure, not synthesis. AI can accelerate the synthesis, but it cannot replace the exposure.

Cross-functional leadership

PMs lead without authority. That’s a human skill — influence, persuasion, trust-building, conflict resolution. AI cannot sit in a room with a frustrated engineering lead and navigate toward alignment. It cannot notice when a designer is disengaged and draw them back into the conversation.

Frequently asked questions

How can product managers use AI?

PMs are using AI to synthesize customer feedback at scale, draft PRDs and user stories, run competitive research, prepare for meetings, generate roadmap options, and stress-test prioritization decisions. The biggest wins come from using AI as a thinking partner, not just a writing tool.

What is an AI product manager?

There are two meanings: a PM who uses AI tools to work more efficiently, and a PM who builds AI-powered products. The first is increasingly expected of all PMs; the second is a specialized role commanding 10-40% higher salaries.

Will AI replace product managers?

No — but PMs who use AI will replace those who don’t. AI can automate routine tasks (status updates, templates, basic research) but can’t replace the judgment, customer empathy, and stakeholder alignment that define the PM role.

What AI tools should product managers use?

Claude and ChatGPT for writing and analysis, Perplexity for real-time research, Notion AI for embedded documentation, Dovetail or Productboard AI for user research synthesis, and Granola or Otter.ai for meeting intelligence.

Ty Sutherland

Ty Sutherland is the editor of Product Management Resources. With a quarter-century of product expertise under his belt, Ty is a seasoned veteran in the world of product management. A dedicated student of lean principles, he is driven by the ambition to transform organizations into Exponential Organizations (ExO) with a massive transformative purpose. Ty's passion isn't just limited to theory; he's an avid experimenter, always eager to try out a myriad of products and services. While he has a soft spot for tools that enhance the lives of product managers, his curiosity knows no bounds. If you're ever looking for him online, there's a good chance he's scouring his favorite site, Product Hunt, for the next big thing. Join Ty as he navigates the ever-evolving product landscape, sharing insights, reviews, and invaluable lessons from his vast experience.
