Skill Guides

Claude Prompts for Product Managers: 30 That Actually Work

Ron Yang · 17 min read

What you'll learn: 30 Claude prompts for product managers across six PM workflows — from discovery to stakeholder communication. Each prompt is copy-paste ready, and each one gets dramatically better with context files.

Every PM has a collection of prompts that sort of work. You paste your feature brief into Claude, get something back, spend 20 minutes editing, and move on. Functional, but barely.

The problem isn't that your prompts are bad. It's that prompts alone hit a ceiling. When Claude doesn't know your product, your users, or your competitive landscape, the best prompt in the world produces generic output. You're spending half your time on setup — explaining who you are — and half on cleanup.

This article gives you 30 prompts that produce meaningfully better output than what most PMs are getting. But the real unlock isn't any individual prompt. It's the structure underneath.

The biggest prompting mistake I see: PMs copy a prompt, paste it into a chat window, have a conversation, then copy and paste the output somewhere else. That's not an efficient way to use AI — that's treating AI like Google search. The shift from prompts to skills is where the real leverage lives.


The Prompt Ceiling (and How to Break Through It)

A prompt is a one-shot instruction. A skill is a prompt plus context plus a framework plus consistent output structure. The difference matters.

Approach | Setup Time | Output Quality | Consistency
Ad hoc prompting | 5-10 min per use | Variable | Low
Good prompts (this article) | 2-3 min per use | Good | Medium
Skills + context files | 0 min per use | High | High

Every prompt below works in Claude.ai or Claude Code. But each one works dramatically better when Claude already has your context files loaded. If you're running Claude Code with context files, these prompts become almost unnecessary — the skills handle the framework and context automatically.

Think of these prompts as the bridge: useful now, and a preview of what becomes effortless when you set up the infrastructure.

These prompts work today. Context files and skills make them unnecessary tomorrow. Start with the prompts. Graduate to the system.


Discovery Prompts (1-6)

1. Interview Synthesis

I conducted [N] customer interviews about [topic]. Here are the transcripts:
[paste transcripts or key notes]

Synthesize these into:
1. Top 3-5 themes with supporting evidence (direct quotes where possible)
2. Patterns that confirm our existing assumptions about [persona]
3. Signals that challenge our current understanding
4. Recommended next steps for research or product action

Be specific. I don't want "customers want better UX." I want "4 of 6 users
couldn't find the export button and attempted at least 2 workarounds."

2. Customer Problem Validation

I believe [target persona] has this problem: [problem statement]

Here's the evidence I have:
[paste interview notes, support tickets, survey data, or anecdotal signals]

Evaluate the strength of this evidence:
- How confident should I be that this is a real problem vs. a vocal minority?
- What counter-evidence or alternative explanations should I consider?
- What additional validation would strengthen or weaken this hypothesis?
- If I had to bet, is this worth building for?

Be direct. I'd rather hear "your evidence is thin" than "there are
opportunities to gather more data."

3. Jobs-to-Be-Done Extraction

Based on these customer interview notes, extract the jobs-to-be-done
using the format: "When [situation], I want to [motivation],
so I can [expected outcome]."

Interview notes:
[paste notes]

For each JTBD:
- Classify as functional, emotional, or social
- Rate confidence (strong evidence vs. inferred)
- Note which interview(s) support it
- Identify where the current solution falls short

4. Persona Draft

Help me draft a user persona based on these inputs:
- Role: [job title]
- Company size: [range]
- Key behaviors I've observed: [list]
- Pain points from interviews: [list]
- Current tools they use: [list]

Structure the persona with: background, goals, frustrations,
current workflow, decision criteria, and a "day in the life" scenario.
Make it specific enough that my team can reference this person by name
in design reviews and actually mean something.

5. Survey Question Generator

I'm running a survey to validate [hypothesis] among [audience].

Goal: [what decision this survey will inform]
Sample size target: [N]
Distribution: [email, in-app, etc.]

Generate 8-12 questions that:
- Mix closed (quantitative) and open (qualitative)
- Avoid leading or biased phrasing
- Include at least 2 questions that could disprove my hypothesis
- End with a question that identifies potential interview candidates

6. Feedback Pattern Analysis

Here are [N] pieces of customer feedback from [source: support tickets,
NPS comments, app reviews, etc.]:

[paste feedback]

Analyze for:
1. Recurring themes (group by frequency)
2. Severity indicators (frustrated vs. annoyed vs. suggesting)
3. Feature requests vs. bug reports vs. workflow complaints
4. Segments that appear over-represented
5. The one pattern I should act on this sprint

Related: "AI-Powered Discovery: How Claude Code Handles User Research" covers the full discovery workflow with skills that automate these prompts permanently.


Strategy Prompts (7-12)

7. OKR Drafting

I need to draft OKRs for [team/product] for [quarter].

Company-level priorities this quarter:
[list 2-3 company priorities]

Our team's focus areas:
[list 2-3 focus areas]

Constraints: [team size, known dependencies, tech debt commitments]

Draft 2-3 Objectives with 3-4 Key Results each. Each KR should be:
- Measurable (specific number, not "improve")
- Ambitious but achievable (70% confidence)
- Leading indicators where possible (not just lagging)

Flag any potential conflicts between objectives.

8. Competitive Positioning

My product: [brief description]
My target user: [persona]
Main competitor: [competitor name and brief description]

Analyze where I win and where they win across:
- Core value proposition
- Feature depth vs. breadth
- Pricing and packaging
- Target market overlap
- Switching costs

Be opinionated. "Both tools offer project management" is useless.
I need "They win on enterprise permissions; you win on time-to-value
for teams under 20."

9. Market Problem Assessment

I'm evaluating whether [problem] is worth solving for [audience].

What I know so far:
[paste any research, signals, or data]

Help me assess this using Marty Cagan's four risks:
1. Value risk — Will customers actually want this?
2. Usability risk — Can they figure out how to use it?
3. Feasibility risk — Can we build it?
4. Business viability — Does it work for our business?

For each risk, rate HIGH / MEDIUM / LOW with specific reasoning.

10. Strategic Narrative

I need to present our product strategy to [audience: board, exec team,
engineering org].

Our strategy in plain language: [2-3 sentences]
Key bets we're making: [list]
What we're explicitly NOT doing: [list]
Evidence supporting our direction: [data points, customer signals]

Write a strategic narrative (500-800 words) that:
- Opens with the market context that makes our strategy inevitable
- Clearly states our position and the trade-offs we've accepted
- Connects strategy to specific actions this quarter
- Ends with what success looks like in 12 months

11. Go-to-Market Quick Plan

We're launching [feature/product] in [timeframe].

Target audience: [who]
Core value prop: [what it does for them]
Distribution channels available: [list]
Budget: [range or "minimal"]
Success metric: [what would make this a win]

Draft a GTM plan covering: positioning, messaging (headline + 3 bullets),
launch sequence (pre-launch, launch day, post-launch),
channel strategy, and the one metric we should obsess over.

12. Pricing Framework

I'm setting pricing for [product/feature].

Current model: [free / freemium / paid / etc.]
Target customer: [persona and company size]
Competitors charge: [list competitor pricing]
Our cost to serve: [if known]
Value delivered: [what outcome the customer gets]

Recommend a pricing structure using the Van Westendorp or
value-based pricing framework. Include: price point(s),
packaging tiers (if applicable), and the positioning rationale
for why this price is right for this audience.

Specs & Documentation Prompts (13-18)

13. PRD First Draft

Write a PRD for [feature name].

Problem: [what user problem this solves]
User: [which persona this is for]
Success looks like: [measurable outcome]
Constraints: [technical, timeline, or resource constraints]
Related features: [what already exists that this connects to]

Include:
- Problem statement (why this matters now)
- User stories with acceptance criteria
- Scope (what's in v1, what's explicitly out)
- Success metrics (leading + lagging)
- Edge cases and open questions
- Dependencies

Don't include implementation details unless they affect the product
experience. This is for the PM-engineering handoff.

14. User Story Expansion

I have this high-level user story:
"As a [role], I want to [action] so that [outcome]."

Break this into 4-8 smaller user stories with acceptance criteria.
For each sub-story:
- Write the story in the same format
- Add 3-5 acceptance criteria (Given/When/Then where possible)
- Flag dependencies on other stories
- Estimate complexity as Small / Medium / Large

15. Technical Spec Translation

I have this PRD section:
[paste PRD excerpt]

Translate this into a technical spec outline that an engineering lead
can review. Include:
- System components affected
- Data model changes (if any)
- API endpoints needed (if any)
- Key technical decisions to make
- Performance considerations
- Questions for engineering

I'm a PM, not an engineer. Keep the spec readable but technically
specific enough that eng doesn't have to guess what I mean.

16. Release Notes

We just shipped these changes:
[paste changelog, PR descriptions, or bullet points]

Target audience: [customers / internal team / both]
Tone: [professional / conversational / minimal]

Write release notes that:
- Lead with the user benefit, not the feature name
- Group changes by impact (major / minor / fixes)
- Skip internal refactoring that doesn't affect users
- Include one sentence per change (not a paragraph)

17. Edge Case Generator

I'm speccing [feature]. Here's the happy path:
[describe the intended user flow]

Generate 15-20 edge cases I should consider, organized by:
- Input edge cases (empty states, max limits, special characters)
- State edge cases (concurrent users, offline, mid-workflow interruptions)
- Permission edge cases (wrong role, expired access, shared accounts)
- Integration edge cases (third-party failures, API timeouts, data sync)

For each, briefly note the expected behavior (what should happen).

18. API Documentation

I need to document this API endpoint for our developer docs:

Endpoint: [method + path]
Purpose: [what it does]
Auth: [how it's authenticated]
Request body: [paste schema or example]
Response: [paste schema or example]

Write documentation that includes: description, authentication,
request parameters (with types and required/optional), example
request, example response, error codes, and rate limits.

Related: "Writing PRDs with AI: Frameworks That Actually Work" covers the full spec writing workflow, including how the /prd-generator skill handles this end-to-end with your product context.


Analysis Prompts (19-24)

19. RICE Prioritization

I need to prioritize these features for next quarter:
[list 5-10 features with brief descriptions]

Score each using RICE:
- Reach: How many users does this affect per quarter?
- Impact: How much does this move the needle? (3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal)
- Confidence: How sure are we? (100%=high, 80%=medium, 50%=low)
- Effort: Person-months to build

Show your reasoning for each score. Flag where you're making assumptions
I should validate. Rank by RICE score, then add a "gut check" column
for anything the math might be getting wrong.
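
If you want to check Claude's arithmetic (or run the ranking yourself before asking for the gut check), the RICE formula is a one-liner. A minimal sketch, with hypothetical feature names and numbers for illustration:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per quarter
    impact: 3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal
    confidence: 1.0=high, 0.8=medium, 0.5=low
    effort: person-months
    """
    return (reach * impact * confidence) / effort

# Hypothetical backlog, not from this article
features = [
    ("Bulk export", 2000, 1, 0.8, 2),
    ("SSO", 300, 3, 1.0, 4),
    ("Dark mode", 5000, 0.5, 0.5, 1),
]

ranked = sorted(features, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *args in ranked:
    print(f"{name}: {rice_score(*args):.0f}")
# Dark mode: 1250, Bulk export: 800, SSO: 225
```

Note how the low-confidence, high-reach item tops the ranking here. That is exactly the kind of result the "gut check" column exists to catch.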

20. Funnel Analysis Framework

Here's our conversion funnel data:
[paste funnel stages with numbers]

Analyze:
1. Where is the biggest absolute drop-off?
2. Where is the biggest proportional drop-off?
3. What's the expected conversion for each stage in our industry?
4. Which stage has the highest leverage (improvement here moves revenue most)?
5. What are 3 hypotheses for why the biggest drop-off is happening?
6. What would I measure to validate each hypothesis?
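
Questions 1 and 2 frequently point at different stages, which is why the prompt asks for both. A minimal sketch with hypothetical funnel numbers:

```python
# Hypothetical funnel: (stage name, users reaching that stage)
funnel = [
    ("Visit", 10000),
    ("Signup", 1200),
    ("Activate", 900),
    ("Subscribe", 90),
]

for (name_a, n_a), (name_b, n_b) in zip(funnel, funnel[1:]):
    absolute = n_a - n_b            # raw users lost at this step
    proportional = 1 - n_b / n_a    # share of the step's entrants lost
    print(f"{name_a} -> {name_b}: lost {absolute} ({proportional:.0%})")
```

With these numbers the biggest absolute drop is Visit -> Signup (8,800 users) while the biggest proportional drop is Activate -> Subscribe (90%), two different stages suggesting two different hypotheses.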

21. A/B Test Design

I want to test [hypothesis].

Current state: [what exists now]
Proposed change: [what we'd test]
Primary metric: [what we'd measure]
Traffic: [monthly visitors/users to this surface]

Design the experiment:
- Null and alternative hypotheses
- Sample size needed (with assumptions)
- Expected runtime to reach significance
- Guardrail metrics (what shouldn't get worse)
- Segmentation recommendations
- What result would make us ship vs. iterate vs. kill
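
The sample-size step is the one PMs most often hand-wave. A back-of-the-envelope sketch using the standard two-proportion z-test approximation; the 4% baseline and 5% target are hypothetical:

```python
from math import sqrt, ceil

def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96,  # two-sided alpha = 0.05
                        z_beta: float = 0.84) -> int:
    """Approximate users needed per variant for a two-proportion z-test
    at 95% confidence and 80% power (the default z values)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical: 4% baseline conversion, hoping to detect a lift to 5%
n = sample_size_per_arm(0.04, 0.05)
print(n, "users per arm")  # on the order of several thousand per arm
```

Runtime then falls out directly: 2 × n divided by the monthly traffic to the surface. If that number is six months, the honest answer is usually to test a bigger change, not run a longer test.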

22. Metrics Framework

I need a metrics framework for [feature/product area].

Our North Star metric: [metric]
Business goal: [what we're trying to achieve]

Build a metrics hierarchy:
1. North Star (1 metric)
2. Input metrics (3-5 leading indicators that drive the North Star)
3. Health metrics (3-5 guardrails that shouldn't degrade)
4. Feature metrics (2-3 specific to this feature)

For each metric: definition, measurement method, target, and
data source. Flag any metrics that require new instrumentation.

23. Churn Analysis Framework

We're seeing [churn rate] monthly churn. Here's what I know:
[paste any data: cohort analysis, cancellation reasons, usage patterns]

Help me structure the analysis:
1. Segment churn by [time-based cohort, plan tier, acquisition source]
2. Identify the most likely churn drivers based on the data
3. Suggest 3 retention experiments ranked by expected impact
4. Define the metrics I'd track to know if each experiment is working
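
Step 1, segmenting churn, is often just a group-by you can run before involving an analyst. A minimal sketch over hypothetical customer records:

```python
from collections import defaultdict

# Hypothetical records: (plan tier, churned this month?)
customers = [
    ("free", True), ("free", False), ("free", True),
    ("pro", False), ("pro", False), ("pro", True),
    ("enterprise", False), ("enterprise", False),
]

by_plan = defaultdict(lambda: [0, 0])  # plan -> [churned, total]
for plan, churned in customers:
    by_plan[plan][0] += churned
    by_plan[plan][1] += 1

for plan, (churned, total) in sorted(by_plan.items()):
    print(f"{plan}: {churned / total:.0%} churn ({churned}/{total})")
```

The same pattern works for time-based cohorts or acquisition source; swap the grouping key and rerun.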

24. Build vs. Buy Decision

We need [capability]. I'm deciding between building it ourselves
and using [specific vendor/tool].

Build option:
- Estimated effort: [person-months]
- Our team's expertise in this area: [high/medium/low]
- Maintenance burden: [expected ongoing cost]

Buy option:
- Vendor: [name]
- Cost: [pricing]
- Integration complexity: [high/medium/low]
- Lock-in risk: [how hard to switch later]

Analyze this using a build-vs-buy framework. Include total cost of
ownership over 12 and 24 months. Recommend one option with reasoning.
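
The 12- and 24-month TCO comparison is worth doing explicitly, because a high upfront build cost and a low monthly vendor fee cross over at different horizons. A minimal sketch with hypothetical costs:

```python
def build_tco(months: int, build_cost: float, monthly_maintenance: float) -> float:
    """Build: upfront engineering cost plus ongoing maintenance."""
    return build_cost + monthly_maintenance * months

def buy_tco(months: int, monthly_fee: float, integration_cost: float) -> float:
    """Buy: one-time integration cost plus the subscription."""
    return integration_cost + monthly_fee * months

# Hypothetical: 3 person-months at $15k each to build, $3k/mo maintenance,
# versus a $2k/mo vendor with a $10k integration project
for horizon in (12, 24):
    b = build_tco(horizon, build_cost=45_000, monthly_maintenance=3_000)
    v = buy_tco(horizon, monthly_fee=2_000, integration_cost=10_000)
    print(f"{horizon} mo: build ${b:,.0f} vs buy ${v:,.0f}")
```

With these numbers, buy wins at both horizons ($34k vs $81k at 12 months; $58k vs $117k at 24). The sketch ignores real but hard-to-price factors like lock-in risk and roadmap control, which is why the prompt asks for a reasoned recommendation, not just the arithmetic.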

Communication Prompts (25-30)

25. Weekly Status Update

Here's what happened this week on [project/product]:
[paste bullet points, notes, or raw updates]

Write a stakeholder update that covers:
1. What shipped (with user impact, not just feature names)
2. What's in progress (with expected completion)
3. What's blocked (with what unblocks it)
4. Decisions needed from stakeholders
5. Key metrics movement (if applicable)

Keep it under 300 words. Lead with what matters most.
No filler sentences like "the team made great progress."

26. Executive Summary

I need to summarize [document/analysis/research] for [audience: CEO,
board, VP Engineering].

Source material:
[paste or summarize the full document]

Write an executive summary (200-400 words) that:
- Opens with the "so what" — why this matters to this audience
- Includes the 3-5 key findings or recommendations
- Ends with clear next steps or ask
- Uses the language this audience cares about (revenue for CEO,
  capacity for VP Eng, market for board)

27. Stakeholder Alignment Email

I need to align stakeholders on [decision/direction].

Context: [brief background]
My recommendation: [what I think we should do]
Key trade-offs: [what we're giving up]
Who needs to agree: [list stakeholders and their concerns]

Write an email that:
- Frames the decision clearly in the first paragraph
- Acknowledges each stakeholder's likely concern
- Presents the recommendation with supporting evidence
- Asks for specific feedback by [date]
- Is under 500 words

28. Feature Announcement (Internal)

We just shipped [feature]. Write an internal announcement for
[Slack/email/all-hands].

What it does: [user-facing description]
Why we built it: [strategic reason]
Who it affects: [which users/segments]
How to use it: [brief instructions or link]
Known limitations: [what it doesn't do yet]
Credit: [team/individuals who built it]

Tone: [celebratory but informational / matter-of-fact / etc.]
Length: [Slack-appropriate (150 words) / email-appropriate (300 words)]

29. Difficult Conversation Prep

I need to have a conversation with [stakeholder role] about [topic].

The situation: [what's happening]
Their likely perspective: [what they probably think/want]
My perspective: [what I think the right path is]
The tension: [where we likely disagree]

Help me prepare:
1. The best opening line (acknowledge their position first)
2. Key points I need to make (ordered by importance)
3. Questions I should ask to understand their constraints
4. The outcome I should aim for (realistic, not ideal)
5. What to do if the conversation goes sideways

30. Board Deck Narrative

I need to write the product section of our board deck.

Quarter in review: [Q]
Key wins: [list]
Key misses: [list]
Key metrics: [list with actuals vs. targets]
Next quarter priorities: [list]

Write a narrative (500-700 words) that:
- Opens with the strategic context (where the product sits)
- Celebrates wins honestly (not inflated)
- Addresses misses directly (with root cause and fix)
- Connects metrics to the story (not just numbers)
- Transitions to next quarter with clear conviction

Related: "Stakeholder Communication with AI" breaks down the full communication workflow, including how skills like /executive-update-generator handle recurring updates without prompting.


From Prompts to System

These 30 prompts work. They'll produce better output than what most PMs get from AI today. But they're still prompts — you're setting context, specifying format, and managing quality every time.

The next step is turning this into infrastructure that runs without setup:

  1. Set up context files — so Claude already knows your product, users, and competitors
  2. Install PM skills — so the framework, format, and context loading happen automatically
  3. Build the PM Operating System — so every PM on your team produces consistent, high-quality output

Build this for your team → We set up the full infrastructure — context files, shared skills, and workflows — so your team graduates from individual prompts to a shared PM operating system. See how it works →

The prompts in this article are the starting point. The PM OS is where you end up.

Download the PRD Generator free →


FAQ

Can I use these prompts in ChatGPT too?

Yes. Every prompt here works in any AI chat tool. But the output quality will vary based on how much context you provide. Claude Code with context files produces consistently better output because the context is always loaded.

How do I know which prompt to use for my situation?

Start with the workflow: are you doing discovery, strategy, specs, analysis, or communication? Then pick the prompt that matches the specific artifact you need. If you're unsure, the complete PM guide maps skills to workflows.

Do these prompts replace PM skills?

They're complementary. Use prompts when you need quick, one-off output. Use skills when you need consistent, repeatable output that reads your product context. Most PMs start with prompts and graduate to skills as they set up their context files.

What's the biggest mistake PMs make with AI prompts?

Not providing enough context. A prompt without product context produces generic output. The fix is either: (a) add context to each prompt manually, or (b) set up context files so the context is always there. Option B wins at scale.


About the Author

Ron Yang is the founder of mySecond — he builds and manages PM Operating Systems for product teams. Prior to mySecond, he led product at Aha! and is a product advisor to 25+ companies.

Browse the skills directory →

Set up Claude Code for PM work →

Try this in your workflow today

Download the related skill and run it in Claude Code. Free skills are available now — no account required.

Get the Skill