Skill Guides

Writing PRDs with AI: Frameworks That Actually Work

Ron Yang · 12 min read

What you'll learn: How Claude Code applies structured PM frameworks to PRDs, the three spec approaches that cover most needs, and the five-step workflow from problem statement to engineering handoff.

A PRD is only as good as the framework behind it. And the dirty secret of most AI-written PRDs is that there's no framework at all. A PM opens ChatGPT, describes a feature, and asks for a PRD. What comes back looks like a PRD — it has sections, it has user stories, it has success metrics. But it's built on nothing. No product context. No competitive awareness. No structured thinking about risks, assumptions, or value.

The result is a document that takes an hour to edit into something useful — which defeats the purpose of using AI in the first place.

Writing PRDs with AI works when the framework is built into the tool, not bolted on by the PM at prompt time. This article covers how Claude Code handles specs — which frameworks it applies, how they change the output, and what the workflow looks like from problem statement to engineering handoff.

This is the deep dive on the Specs and Documentation category from 7 Types of PM Work You Can Automate with Claude Code. If you haven't read that overview, it covers all seven categories of PM work where automation makes a difference.

There was a period where many in the industry felt PRDs were no longer needed. Prototyping tools got powerful enough that teams started skipping documentation entirely. But as agentic development has accelerated in recent months, teams are rediscovering why clear PRDs and specs matter. When AI agents are helping build your product, a well-structured plan isn't optional — it's more important than ever.


The PRD Problem

PRDs have two audiences: the PM who writes them and the engineer who builds from them. The tension between these audiences creates most of the problems.

PMs want PRDs that capture their thinking — the strategic rationale, the user research, the competitive pressure that makes this feature necessary. Engineers want PRDs that tell them what to build — clear requirements, defined scope, explicit edge cases, testable acceptance criteria.

A good PRD serves both. Most don't. And when PMs use generic AI to write specs, the output skews heavily toward the PM audience (strategic narrative, user context) while underserving the engineering audience (precise requirements, edge cases, integration points).

This isn't a model quality issue. It's a framework issue. ChatGPT doesn't know which PRD framework your team uses or what your engineers need. Claude Code skills solve this by encoding the framework — every run applies it, every PRD has the same structure and depth.


The Frameworks That Matter

Three PRD frameworks cover most of what PM teams need. Each one serves a different purpose and produces a different kind of document.

Cagan's Problem-First Approach

Marty Cagan's approach starts with the problem, not the solution. A PRD built on this framework begins with a clear statement of what problem exists, who has it, and why solving it matters — before describing any feature.

This sounds obvious. In practice, most PRDs jump to the solution. "We're building a notification center with these features..." skips the question that should come first: "What problem does a notification center solve, and is it the right problem to solve?"

The /prd-generator skill applies this structure by default. It begins with the problem statement, connects it to specific personas from your context files, establishes the value proposition (why solving this matters for the business), and only then describes the proposed solution.

The output includes:

  • A problem statement grounded in user evidence
  • A value assessment covering user value, business value, and technical feasibility
  • Success metrics tied to the problem (not the solution)
  • Risk identification — what assumptions are we making, and what could go wrong

For engineering, this means the "why" is clear before the "what." Engineers can evaluate trade-offs better when they understand the underlying problem, because they'll often see technical approaches the PM didn't consider.

Risk-Weighted Specifications

Some features carry more risk than others. A minor UI improvement has different risk characteristics than a new payment flow or a data migration. The risk-weighted approach treats spec depth as a function of risk — higher-risk features get deeper specs.

The /risk-register-builder skill identifies and categorizes risks across four dimensions: technical risk (can we build it?), user risk (will they use it?), business risk (will it move the metrics?), and integration risk (will it play well with existing systems?).

When combined with /prd-generator, the output adjusts spec depth to match the risk profile. A low-risk feature gets a lightweight spec — enough for engineering to build confidently, not so much that the PM spent more time specifying than it takes to build. A high-risk feature gets a detailed spec with explicit assumption testing, rollback plans, and staged rollout recommendations.
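To make the four dimensions concrete, here is an invented example of how risk register entries might look for a hypothetical in-app payments feature. The risks, ratings, and mitigations are illustrative, not actual skill output:

```markdown
| Dimension   | Example risk                                  | Likelihood | Impact | Mitigation                         |
|-------------|-----------------------------------------------|------------|--------|------------------------------------|
| Technical   | Payment provider API rate limits under load   | Medium     | High   | Load-test against sandbox first    |
| User        | Users abandon checkout at the new 3DS step    | High       | High   | Staged rollout, monitor drop-off   |
| Business    | Transaction fees erode margin on small orders | Medium     | Medium | Set a minimum order value          |
| Integration | Webhook retries create duplicate order records| Medium     | High   | Idempotency keys on order creation |
```

A feature with a table like this — multiple High-impact entries — would warrant the deep-spec treatment; a table full of Low/Low entries signals a lightweight spec is enough.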

Tip — This prevents the two most common spec failures: over-specifying simple things (wasting PM time) and under-specifying complex things (creating engineering rework). Match spec depth to risk.

User-Story-Driven Specs

Some teams don't use traditional PRDs. They work in user stories with acceptance criteria, organized by epic. The documents look different, but the underlying need is the same: clear requirements that engineering can build from.

The /user-story-writer skill generates stories in standard format — "As a [persona], I want [capability], so that [outcome]" — with acceptance criteria, edge cases, and dependency notes. Because the skill reads your personas.md context file, the stories reference real user archetypes, not generic roles.

The output organizes stories by priority and dependency, flags stories that depend on other stories or systems, and includes acceptance criteria specific enough for QA to test against.
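As an illustration of the format — the persona, feature, and criteria below are invented, not taken from any actual skill output — a generated story might look like:

```markdown
## Story: Saved filter views (P1)

As a **Data Team Lead**, I want to save a set of dashboard filters
as a named view, so that I can return to my weekly reporting setup
without rebuilding it each time.

**Acceptance criteria**
- A signed-in user can save the current filter state under a custom name
- Saved views appear in a dropdown, most recently used first
- Deleting a view does not affect the underlying dashboard data

**Edge cases**
- Duplicate view names: append a numeric suffix rather than overwrite
- A filter referencing a deleted data source: show the view as broken, not hidden

**Dependencies**
- Requires filter-state serialization from the platform team
```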

For teams using agile processes with sprint-level planning, this format feeds directly into the backlog without reformatting. The PM reviews and adjusts priority, adds context, and the stories are ready for estimation.


The Spec Workflow

Writing a PRD isn't a single skill run. It's a workflow with distinct steps, each producing an artifact that feeds the next.

Step 1: Start with the One-Pager

Before writing a full PRD, validate the idea with stakeholders. The /one-pager-creator skill produces a concise feature proposal: problem, proposed solution, expected impact, and key risks. One page. Five-minute read.

The one-pager is an alignment tool. Share it before investing days in a detailed spec. If the strategic direction is wrong, you'd rather find out before writing 15 pages of requirements.
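A minimal skeleton, mirroring the four sections named above, might look like this — the "Ask" line is a common addition many teams include, not necessarily something the skill generates:

```markdown
# One-Pager: <feature name>

**Problem** — Who is affected, what they can't do today, and the evidence (2-3 sentences).

**Proposed solution** — The smallest version that addresses the problem (2-3 sentences).

**Expected impact** — The metric you expect to move, and a rough estimate of by how much.

**Key risks** — The top 2-3 assumptions that, if wrong, kill the idea.

**Ask** — What you need from stakeholders: a go/no-go, a design review, an engineering gut-check.
```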

Step 2: Generate the PRD

With alignment confirmed, run /prd-generator. The skill reads your product context, persona files, and competitive landscape. It applies the Cagan problem-first framework and produces a structured PRD.

The output includes:

  • Problem statement with user evidence
  • Solution overview with scope boundaries
  • User stories with acceptance criteria
  • Success metrics with measurement methodology
  • Risks and assumptions
  • Dependencies and integration points

The PRD references your actual product — not a hypothetical one. The personas are your personas. The competitive context is your competitive context. The metrics connect to your existing measurement framework.

Step 3: Decompose into Shippable Increments

A monolithic PRD often describes more than one sprint's worth of work. The /feature-decomposition-tool skill breaks the feature into shippable increments — each one independently valuable, each one buildable within a sprint.

This step transforms a PRD from a planning document into an execution plan. Engineering can estimate each increment independently. The team can ship the highest-value piece first and iterate based on real usage data.

Step 4: Generate Technical Context

For features with significant technical complexity, the /technical-spec-writer skill produces the engineering-facing companion document. Architecture decisions, API contracts, data model changes, migration plans — the technical details that engineers need but that don't belong in a product-focused PRD.

Not every feature needs a technical spec. Simple UI changes and content updates don't. But for features that touch data models, integrate with external systems, or require infrastructure changes, the technical spec prevents the "we didn't realize this was complex" surprise that derails sprints.

Step 5: Pressure-Test with Stakeholder Simulation

Before sharing the PRD with real stakeholders, run /stakeholder-simulator. The skill reads your PRD and simulates how different stakeholders — engineering leads, designers, executives, customers — would respond. It flags ambiguities, missing sections, and arguments that won't land.

Watch out — This step catches the problems that are obvious to everyone except the author. The "what about edge case X?" from engineering. The "how does this connect to Q3 goals?" from the VP. Better to surface these in simulation than in the review meeting.


Why Context Files Change PRD Quality

The single biggest difference between a generic AI-written PRD and one from Claude Code is context. Not prompt engineering. Not model quality. Context.

When /prd-generator runs, it reads your context files:

  • product.md tells it what's already built, what's in flight, and what your product does
  • personas.md tells it who your users are, what they need, and how they talk about their problems
  • competitors.md tells it what alternatives exist and where you differentiate
  • company.md tells it your strategic priorities and constraints
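As a sketch of what one of these files might contain — the persona below is invented for illustration — a personas.md entry could look like:

```markdown
# Personas

## Maya — Mid-Market Ops Manager
- **Role:** Runs day-to-day operations for a 200-person company
- **Goals:** Reduce manual reporting; prove tooling ROI to her VP
- **Pain points:** Data scattered across five tools; no single source of truth
- **How she talks about it:** "I spend Friday afternoons copy-pasting numbers."
```

The more specific the entries, the more the generated PRD reads like it was written by someone who knows your users.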

A PRD written with this context references real things. The user stories mention your actual personas. The competitive section references your actual competitors. The success metrics connect to your actual KPIs.

Example — A context-aware PRD needs 10-15 minutes of PM review and refinement. A generic PRD needs 30-60 minutes of context-injection, where the PM essentially rewrites the parts that reference the real world. The AI saved typing time but not total time.

This is also where standardizing PM quality becomes tangible. When every PM runs the same skill against the same context files, every PRD references the same product reality. Engineering gets consistent inputs. The Head of Product reviews consistent documents. Quality stops depending on which PM got the assignment.

Watch out — Framework-driven PRDs handle structure and context well, but they don't replace the PM's judgment on scope trade-offs, sequencing decisions, or the "why now" timing question. The skill produces a strong draft; the PM still needs to pressure-test whether it's the right thing to build, in the right order, at the right depth.


What the Output Actually Looks Like

A PRD from /prd-generator isn't a wall of text. It's a structured document with clear sections, each serving a specific purpose. Here's the skeleton:

Problem Statement — What problem exists, who has it, evidence that it's real. Grounded in persona data and user research.

Value Assessment — Why solving this matters. User value, business value, technical feasibility. The case for building this instead of something else.

Proposed Solution — What we're building. Scope boundaries — what's in, what's explicitly out. Enough detail for engineering to estimate, not so much that it constrains design.

User Stories — Structured stories with acceptance criteria. Referenced to specific personas. Organized by priority.

Success Metrics — How we'll know this worked. Leading and lagging indicators. Measurement methodology and timeline.

Risks and Assumptions — What we're betting on. What could go wrong. How we'd know early if assumptions are wrong.

Dependencies — What this feature requires from other teams, systems, or timelines. What blocks what.

Every section is generated from your context. Every section follows the same framework. Every PRD looks the same structurally, while the content reflects the specific feature.


Getting Started

If you write PRDs regularly, start here. Run /prd-generator on a feature you're currently specifying. Compare the output to what you would have written manually. The structural difference and the context-awareness are immediately visible.

If your team has inconsistent PRD quality — different PMs producing specs at different depths — the path to consistency starts with a shared skill, not a shared template. Templates provide structure. Skills provide structure, framework, and context. Read how to standardize PM quality for the full picture.

If you haven't set up Claude Code yet, the setup guide walks through installation and context file creation step by step. Context files are especially important for PRD quality — they're the difference between a generic spec and one that references your product reality.

For the full picture of what's possible across all seven PM automation categories, the hub article covers discovery through stakeholder communication. And if you want to see where your team's spec quality stands relative to other dimensions, the PM Team Maturity Assessment scores your team across nine dimensions.

Build this for your team → We set up and manage PM Operating Systems for product teams — shared spec frameworks that make every PRD consistent regardless of which PM writes it. See how it works →

The complete spec writing toolkit — along with 70+ other PM skills across discovery, strategy, competitive intelligence, planning, data, and communication — is available in the PM Operating System.


About the Author

Ron Yang is the founder of mySecond — he builds and manages PM Operating Systems for product teams. Prior to mySecond, he led product at Aha! and is a product advisor to 25+ companies.

Browse the skills directory →

Set up Claude Code for PM work →

Try this in your workflow today

Download the related skill and run it in Claude Code. Free skills are available now — no account required.

Get the Skill