What you'll learn: How to chain Claude skills into complete PM workflows — from discovery through delivery. Five workflows, each with the specific skills involved, what they produce, and how each step feeds the next.
Individual skills are useful. Run /prd-generator, get a PRD. Run /competitive-profile-builder, get a competitor profile. Each one saves time on a specific task.
But PMs don't work in isolated tasks. Discovery feeds strategy. Strategy feeds specs. Specs feed launch plans. Communication threads through everything. The real leverage comes from running workflows — sequences of skills where each step's output feeds the next step's input.
This is the difference between "I use AI for some PM tasks" and "my entire PM workflow runs on Claude." This article shows you what the second one looks like, workflow by workflow.
Of the workflows covered here, batch interview analysis in Discovery surprised me the most. There are so many data points across transcripts that are genuinely hard to navigate and synthesize manually. Having AI process all of them at once — finding patterns across interviews that I would have missed reviewing them one at a time — was the moment the system proved its value.
If you haven't set up Claude Code yet, start with the complete setup guide and come back here when you're ready to chain skills together.
How Workflows Work in Claude Code
A workflow is a sequence of skills that share your project's context files. Because every skill reads the same company.md, product.md, personas.md, and competitors.md, the outputs are naturally connected. The PRD references the same personas as the research synthesis. The roadmap references the same strategic priorities as the OKRs.
You don't need to copy output from one skill and paste it into another. The context files are the connective tissue. When you update your research findings in your project, the next skill you run has access to them automatically.
Workflows aren't a special feature you configure. They're what happens naturally when skills share context. The skills are the steps. The context files are the glue.
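To make "shared context" concrete, here is one possible project layout. The four context files and the discovery/inputs/ folder are from this system; the exact folder names and the CLAUDE.md placement are illustrative — follow whatever conventions your setup guide established:

```text
pm-workspace/
├── CLAUDE.md            # project instructions Claude reads on every run (illustrative)
├── context/
│   ├── company.md       # mission, strategy, priorities
│   ├── product.md       # product overview, validated problems
│   ├── personas.md      # personas every skill references
│   └── competitors.md   # competitive landscape
└── discovery/
    └── inputs/          # raw interview transcripts
```

Because every skill reads the same files, updating one file upgrades every workflow that touches it.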
Workflow 1: Discovery → Insights → Opportunity
When to use: You've conducted customer interviews and need to turn raw data into actionable product direction.
The problem this solves: Research synthesis is where PM teams lose the most value. Interviews get recorded but not synthesized. Synthesis is done by whoever has time, in whatever format they prefer. Insights decay because they're locked in a Google Doc nobody revisits.
The Workflow
Step 1: Synthesize interviews — Run /research-synthesis-engine with interview transcripts in your discovery/inputs/ folder. It reads the transcripts alongside your persona files and produces thematic analysis with supporting quotes, organized by persona and topic.
Step 2: Extract jobs-to-be-done — Run /jtbd-extractor on the same transcripts. It identifies functional, emotional, and social jobs using the JTBD framework, with confidence ratings based on the strength of evidence.
Step 3: Validate the market problem — Run /market-problem-validator with the synthesized insights. It applies Marty Cagan's four-risk framework (value, usability, feasibility, viability) and rates the strength of your evidence for each.
Step 4: Update your context — Run /enhance-context with the research outputs. It updates your persona files with new signals and enriches your product context with validated problems.
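Run end to end, the four steps above are four commands inside a Claude Code session. The sketch below is illustrative — whether a skill takes explicit file arguments depends on how it's defined; many read the inputs folder from context automatically:

```text
> /research-synthesis-engine discovery/inputs/
> /jtbd-extractor discovery/inputs/
> /market-problem-validator
> /enhance-context
```

Each command finishes before you run the next, so you can review and correct the output at every step rather than at the end.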
What You Get
- Thematic analysis with quotes and evidence
- JTBD map tied to personas
- Risk assessment with confidence levels
- Updated context files that make every subsequent skill smarter
Time Impact
Manual synthesis of 8-10 interviews: 6-8 hours. This workflow: 30-45 minutes, including review and edits.
Related — AI-Powered Discovery: How Claude Code Handles User Research goes deeper on the discovery workflow with before/after examples and tips for structuring transcripts.
Workflow 2: Strategy → OKRs → Roadmap
When to use: Quarterly planning. You need to translate company strategy into team objectives and a buildable roadmap.
The problem this solves: Quarterly planning consumes multiple days. OKR drafting takes half a day. Roadmap building takes another. Alignment between company strategy and team execution depends on the PM remembering everything from the strategy meeting.
The Workflow
Step 1: Draft OKRs — Run /okr-coach with your company priorities and team focus areas. It produces 2-3 objectives with measurable key results, flags potential conflicts, and pressure-tests whether the KRs are actually leading indicators or just activity metrics.
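The leading-indicator versus activity-metric distinction is worth seeing concretely. A hypothetical before/after of the kind of pressure-testing this step applies (objective and numbers invented for illustration):

```text
Objective: Make onboarding self-serve for mid-market teams

Weak KR (activity metric):       Ship 5 onboarding improvements this quarter
Stronger KR (leading indicator): Raise week-1 activation from 34% to 50%
```

The weak version is satisfied by doing work; the stronger version is satisfied only if the work changed user behavior.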
Step 2: Prioritize the backlog — Run /prioritization-engine with your feature backlog and the OKRs from Step 1. It applies RICE scoring calibrated to your strategic objectives — features that map to KRs score higher on Impact.
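The calibration in Step 2 can be sketched in a few lines of Python. This is a simplified model of RICE scoring, not the skill's actual implementation — the feature names, reach numbers, and the size of the OKR-alignment boost are all invented for illustration:

```python
# Simplified RICE scorer: Reach x Impact x Confidence / Effort,
# with Impact boosted for features that map to a key result.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: int         # users affected per quarter
    impact: float      # 0.25 minimal, 0.5 low, 1 medium, 2 high, 3 massive
    confidence: float  # 0.0-1.0, strength of evidence
    effort: float      # person-months
    maps_to_kr: bool   # does it advance a key result from Step 1?

def rice(f: Feature, kr_boost: float = 1.5) -> float:
    """Classic RICE score, with Impact boosted for OKR-aligned features."""
    impact = f.impact * kr_boost if f.maps_to_kr else f.impact
    return f.reach * impact * f.confidence / f.effort

backlog = [
    Feature("SSO support", reach=1200, impact=2, confidence=0.8, effort=3, maps_to_kr=True),
    Feature("Dark mode", reach=5000, impact=0.5, confidence=0.9, effort=2, maps_to_kr=False),
]
for f in sorted(backlog, key=rice, reverse=True):
    print(f"{f.name}: {rice(f):.0f}")
```

Note that alignment raises a feature's score but doesn't guarantee it wins — high-reach, low-effort work can still outrank a strategic bet, which is exactly the trade-off the PM review should catch.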
Step 3: Build the roadmap — Run /roadmap-builder with the prioritized backlog. It produces a Now/Next/Later roadmap tied to objectives, with dependencies mapped and capacity considerations flagged.
Step 4: Write the positioning — Run /positioning-statement-generator if the roadmap includes launches that need market positioning. It applies April Dunford's framework against your competitive context.
What You Get
- OKRs aligned to company strategy with measurable KRs
- Prioritized feature list with RICE scores and rationale
- Buildable roadmap connected to objectives
- Positioning statements for planned launches
Time Impact
Full quarterly planning cycle: 2-3 days. This workflow: 2-3 hours, including review sessions with leadership.
Workflow 3: Feature Idea → Spec → Engineering Handoff
When to use: You have a validated feature idea and need to take it from concept to a spec that engineering can build from.
The problem this solves: The spec writing process is where PM teams have the widest quality variance. Senior PMs write thorough specs. Junior PMs miss edge cases. Different PMs use different formats. Engineering asks the same clarifying questions on every spec because every spec has different gaps.
The Workflow
Step 1: Decompose the feature — Run /feature-decomposition-tool with the feature description. It breaks the feature into shippable increments, identifies dependencies between them, and suggests what belongs in v1 versus later iterations.
Step 2: Write the PRD — Run /prd-generator with the v1 scope from Step 1. It produces a complete PRD with problem statement, user stories, acceptance criteria, success metrics, edge cases, and open questions — all referencing your personas and competitive context.
Step 3: Generate user stories — Run /user-story-writer on the PRD sections that need more granular stories. It expands each feature area into specific stories with Given/When/Then acceptance criteria.
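A story expanded this way typically looks like the following (the feature and values are illustrative, not output from the skill):

```text
Story: As an admin, I can bulk-invite teammates by uploading a CSV.

Given an admin on the team settings page
When they upload a CSV with 50 valid email addresses
Then 50 invitations are sent and a confirmation summary is shown

Given a CSV containing a malformed email address
When the admin uploads it
Then valid rows are queued and invalid rows are listed with reasons
```

The second scenario is the kind of failure path that Given/When/Then forces you to write down and that free-form stories tend to omit.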
Step 4: Identify edge cases — Review the PRD's edge case section and supplement with a targeted pass using Claude: "What edge cases am I missing for [specific user flow]?" The combination of the skill's systematic coverage and a targeted follow-up catches what each approach alone would miss.
Step 5: Create the launch plan — If this feature has external visibility, run /launch-checklist-generator to produce a pre-launch, launch-day, and post-launch checklist with owners and deadlines.
What You Get
- Feature decomposition with v1 scope definition
- Complete PRD in consistent format
- Granular user stories with acceptance criteria
- Edge case coverage
- Launch checklist (if applicable)
Time Impact
Full spec cycle for a medium feature: 8-12 hours. This workflow: 2-3 hours, including PM review and judgment calls on scope and trade-offs.
Tip — The highest-leverage improvement isn't the first draft speed. It's the consistency. When every spec uses the same structure, engineering stops asking "where's the success metrics section?" and starts asking "should we change the approach for edge case #7?" Better questions, better products.
Workflow 4: Competitive Event → Analysis → Response
When to use: A competitor launches something, raises funding, changes pricing, or makes a move that your team needs to evaluate and respond to.
The problem this solves: Competitive intelligence is either reactive (someone mentions it in Slack) or stale (the competitor doc is 6 months old). When a competitor does something notable, the PM who notices it writes up a quick summary that may or may not reach the right people.
The Workflow
Step 1: Build the competitor profile — Run /competitive-profile-builder with the competitor name and the specific event context. It produces a structured analysis covering positioning, strengths, weaknesses, and your differentiation — updated to reflect the new development.
Step 2: Assess market impact — Run /landscape-mapper to see how this move shifts the competitive landscape. It maps where you and competitors sit across key dimensions and highlights where positioning gaps have opened or closed.
Step 3: Run a SWOT analysis — Run /swot-analysis-generator focused specifically on the competitive threat. It produces a SWOT that accounts for the new development and suggests strategic responses.
Step 4: Write the stakeholder brief — Run /executive-update-generator focused on the competitive development. It produces a structured brief: what happened, why it matters, what it means for us, and recommended response.
Step 5: Update competitive context — Run /enhance-context with the new competitive analysis to update your competitors.md. Every skill that runs after this point will know about the competitive change.
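After Step 5, the relevant entry in competitors.md might gain a dated note along these lines. The structure, competitor, and dates are illustrative — use whatever format your context files already follow:

```markdown
## Acme Analytics
- Positioning: self-serve analytics for SMBs
- 2024-06-12: Launched usage-based pricing; undercuts our Starter tier.
  Response: stakeholder brief shared; positioning review scheduled.
```

Dating each entry matters: it lets the next skill (and the next PM) distinguish fresh intelligence from stale assumptions.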
What You Get
- Updated competitor profile
- Landscape map showing competitive shifts
- SWOT analysis with strategic response options
- Stakeholder brief ready to share
- Updated competitive context for all future skills
Time Impact
Manual competitive analysis cycle: 4-6 hours. This workflow: 1-2 hours, including strategic evaluation and response decisions.
Related — Competitive Intelligence with AI: The Complete PM Playbook covers the full competitive intelligence workflow, including how to set up ongoing monitoring.
Workflow 5: Recurring Communication Cadence
When to use: Every week. The stakeholder updates, sprint reviews, and status communications that consume Friday afternoons.
The problem this solves: Communication work is high-frequency and low-complexity — but it still takes time. A weekly status update takes 45 minutes. A sprint retro agenda takes 20 minutes. Board deck inputs take hours. None of this is creative work, but all of it has to happen.
The Workflow
Weekly cadence:
- Monday: Run /meeting-agenda for your team standup or planning meeting. It produces a focused agenda with time allocations and required pre-reads.
- Wednesday: Run /sprint-retro-facilitator before your retro. It structures the session with focused prompts and a framework that avoids "what went well / what didn't" fatigue.
- Friday: Run /executive-update-generator for your weekly stakeholder update. It reads your project context and produces a structured update: shipped, in-progress, blocked, decisions needed.

Quarterly:
- Run /board-deck-generator for the product section of the board deck. It synthesizes the quarter's narrative: wins, misses, metrics vs. targets, and next quarter's conviction.
What You Get
- Consistent weekly artifacts without Friday-afternoon scrambles
- Meeting agendas that are focused instead of "let's see what comes up"
- Quarterly board inputs that connect metrics to narrative
Time Impact
Weekly communication overhead: 3-4 hours. With this cadence: 45 minutes to an hour, mostly review and personalization.
Connecting the Workflows
These five workflows aren't isolated. They share context and build on each other:
- Discovery insights (Workflow 1) feed strategy and OKRs (Workflow 2)
- OKRs and roadmap (Workflow 2) scope what goes into specs (Workflow 3)
- Competitive intelligence (Workflow 4) informs positioning in Workflow 2 and differentiation in Workflow 3
- Communication (Workflow 5) reports on progress across all other workflows
The connecting layer is your context files. Every time you run /enhance-context, the insights from one workflow become available to every other workflow. Over months, this accumulation is what turns isolated AI usage into a PM operating system.
Watch out — These workflows assume your context files are populated and reasonably current. Stale or thin context files produce generic output regardless of how good the skill is. If you haven't updated your context in months, start there before expecting strong results from any workflow.
Build this for your team → We set up the full workflow infrastructure — context files, skills, and the connections between them — so your PM team runs end-to-end workflows from day one. See how it works →
For the full picture of how this system architecture works, see The PM Operating System Built on Claude.
Getting Started
Don't try to run all five workflows at once. Pick the one that causes the most pain:
- If your team loses insights after interviews: Start with Workflow 1 (Discovery)
- If quarterly planning takes too long: Start with Workflow 2 (Strategy)
- If spec quality varies across PMs: Start with Workflow 3 (Specs)
- If competitive intelligence is stale: Start with Workflow 4 (Competitive)
- If Fridays are consumed by status updates: Start with Workflow 5 (Communication)
Set up your context files, install the skills for that workflow, and run it for two weeks. Once one workflow is producing value, add the next.
The complete PM guide covers initial setup. The skills directory has 70+ skills across all five workflows.
FAQ
Do I need all five workflows running to get value?
No. Each workflow stands alone. Most PMs start with one and add others over weeks or months. The compounding effect kicks in when you have two or more workflows sharing context.
How much time should I budget for learning each workflow?
About 30 minutes for your first run of any workflow. After the first time, each subsequent run is 10-15 minutes of review and editing. The skills do the heavy lifting.
Can different PMs on my team run different workflows?
Yes — and this is one of the strengths of shared context files. One PM can run discovery skills while another runs spec skills, and both benefit from the same product context. The outputs are naturally consistent because the context is shared.
What if a skill's output isn't quite right?
Edit it. Skills produce a strong first draft, not a final artifact. The PM's judgment — scope decisions, trade-off calls, stakeholder-specific framing — still matters. The skill eliminates the blank page and the setup time. You add the judgment.
About the Author
Ron Yang is the founder of mySecond — he builds and manages PM Operating Systems for product teams. Prior to mySecond, he led product at Aha! and is a product advisor to 25+ companies.