[Diagram: a generic PRD (missing context, vague personas, no strategy link) plus context files (company.md, product.md, personas.md) produces a context-rich PRD]

Why Your AI-Generated PRDs Are Generic (And How to Fix It in One Step)

Ron Yang · March 12, 2026 · 14 min read

Your AI-generated PRDs are generic because the AI knows nothing about your product, your users, or your competitive landscape. The fix is not a better prompt. The fix is persistent context — structured files that give AI the knowledge it needs before you ever ask for a PRD. Load context once, and every PRD after that is specific to your product.

That is the entire article in three sentences. Now let me show you why this matters and how to do it.


The Problem: AI PRDs That Could Be About Any Product

You have done this. Every PM has done this.

You open ChatGPT or Claude. You type something like "Write me a PRD for a notifications feature." You get back 800 words of text that reads like it was written by someone who has never seen your product. Generic user stories. Placeholder metrics. A competitive section that mentions "competitors in the space" without naming a single one.

Here is what a generic AI PRD looks like:

Problem Statement: Users need a better way to receive notifications about important updates. Currently, users may miss critical information, leading to reduced engagement and satisfaction.

Target Users: All users of the platform who need to stay informed about relevant updates and changes.

Success Metrics:

  • Increase notification engagement rate by 20%
  • Reduce time-to-action on critical updates
  • Improve user satisfaction scores

Competitive Landscape: Several competitors in the space offer notification features. We should aim to provide a best-in-class experience.

Be honest: would you put your name on that? Would your engineering team read that and know what to build?

That PRD could be about Slack, Figma, a healthcare platform, or a dog-walking app. It tells engineering nothing. It has no opinion. It has no grounding in who your users actually are or what they actually struggle with.

And yet this is what most PMs get when they use AI for PRD generation. Not because the AI is bad — but because the AI is uninformed.


Why Context Is the Missing Ingredient (Not Better Prompts)

The instinct when you get a generic PRD from AI is to write a longer prompt. Add more detail. Specify the format. Include examples.

This helps, marginally. But it misses the real problem.

The real problem is that every time you start a new AI conversation, you are starting from zero. The AI does not know:

  • What your product does or who it serves
  • What your users' actual pain points are (in their words)
  • What your competitors offer and where the gaps are
  • What your company's strategic priorities are this quarter
  • What features are already on your roadmap

Without this knowledge, the AI will always produce generic output. No amount of prompt engineering fixes a knowledge gap.

"I don't yet have a reliable system that encodes company context, personas, and competitive landscape in a way that allows AI to generate structured outputs I fully trust. As a result, I'm still acting as the 'human glue' between insights and execution."

— PM building AI workflows in life sciences

That quote nails it. Most PMs are the "human glue" — manually translating product knowledge into every AI interaction, then manually editing the output to make it accurate. The AI handles the formatting. The PM handles everything that matters.

Context engineering flips this. Instead of re-explaining your product in every prompt, you structure your product knowledge into persistent files that the AI reads automatically. Company context. Product state. User personas. Competitive landscape. The AI reads these before generating anything.

The result: PRDs that reference your actual users, your actual metrics, your actual competitive gaps — without you typing a word of context.


Before and After: Generic PRD vs. Context-Aware PRD

This is where the difference becomes visceral. Same feature request. Same AI. The only variable is whether the AI has context.

Without Context: "Write a PRD for a workload balancer feature"

# PRD: Workload Balancer

## Problem
Teams often struggle with uneven distribution of work across
team members. This leads to burnout for some team members while
others are underutilized. A workload balancing feature would
help managers distribute tasks more evenly.

## Target Users
- Team managers who oversee multiple team members
- Individual contributors who feel overloaded

## Success Metrics
- Reduce workload imbalance by 30%
- Improve team satisfaction scores
- Decrease employee burnout indicators

## Proposed Solution
Implement a dashboard that visualizes workload distribution
across team members and suggests rebalancing actions.

With Context: Same request, with company.md, product.md, personas.md, competitors.md, and goals.md loaded

# PRD: Workload Balancer

**Status:** Draft
**Owner:** Sarah Chen
**Target Release:** Q2 2026
**Availability:** Business tier

## Context
- **Roadmap:** Workload Balancer is marked as Planned for Q2
  in product.md, tied to the Operations Efficiency theme.
- **Persona pain:** Jordan (Agency Operations Manager) reports
  "finding out about overloaded team members only when deadlines
  slip." Median team size: 12-18 across accounts.
- **Strategic fit:** Directly supports company goal of reducing
  churn in mid-market agency segment (ARR $40K-$80K).
- **Competitive:** Monday.com offers basic workload views but
  no predictive balancing. Asana's workload feature requires
  manual capacity entry. Neither surfaces historical patterns.

## Problem
Agency operations managers at mid-market firms (our largest
churn segment) cannot see who is overloaded until a deadline
slips. They manage 12-18 people across 4-6 accounts and rely
on standups and gut feel to distribute work. By the time they
detect imbalance, the damage — missed deliverables, frustrated
clients, burned-out team members — is already done.

## Evidence
- 23% of churned accounts in Q4 cited "team management
  limitations" in exit surveys
- Support tickets mentioning "workload" or "capacity" increased
  42% QoQ
- 8 of 12 discovery interviews mentioned workload visibility
  as a top-3 need
- Assumed: predictive balancing (vs. reactive) would reduce
  time-to-detect overload from ~5 days to <1 day

## Success Criteria

### Lagging Indicators
| Metric | Current | Target | Timeframe |
|--------|---------|--------|-----------|
| Mid-market churn rate | 4.2%/mo | 3.0%/mo | 90 days |
| "Team management" exit mentions | 23% | <10% | 120 days |

### Leading Indicators
| Metric | Target | Predicts |
|--------|--------|----------|
| Daily active usage of balancer | 60% of ops managers | Adoption |
| Rebalance actions taken/week | >2 per manager | Value delivery |
| Time from assignment to rebalance | <24 hours | Reduced overload |

Read both. The first one is a template. The second one is a plan your engineering team can build from.

The difference is not the prompt. The difference is that the AI knew the product, the personas, the competitive landscape, and the strategic priorities before it wrote a single word.


How to Set Up Persistent Context for PRD Generation

The fix is straightforward. You create five structured markdown files that capture your product knowledge, and you put them where your AI tool can read them automatically.

The Five Context Files

company.md — Your mission, strategic priorities, current goals, and constraints. This tells AI why certain features matter more than others.

product.md — What your product does today, your current roadmap, your tech stack, your pricing tiers. This prevents the AI from proposing features you already have or that conflict with your architecture.

personas.md — Your actual user personas with real pain points, jobs to be done, and quotes from research. This is what transforms "Target Users: all users of the platform" into "Jordan, an Agency Operations Manager managing 12-18 people across 4-6 accounts."

competitors.md — Who you compete with, what they offer, where the gaps are. This turns "several competitors in the space" into "Monday.com offers basic workload views but no predictive balancing."

goals.md — Your quarterly objectives, OKRs, and what's deprioritized. This ensures the PRD connects to what your team is actually trying to achieve right now — not generic best practices.
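As a concrete sketch, here is what a single entry in personas.md might look like, built from the Jordan example used throughout this article. The headings are illustrative, not a required schema:

```markdown
## Jordan — Agency Operations Manager

- **Team:** manages 12-18 people across 4-6 client accounts
- **Top pain:** "finding out about overloaded team members only
  when deadlines slip"
- **Jobs to be done:** spot capacity problems before a deliverable
  is at risk; distribute incoming work without relying on standups
  and gut feel
- **Segment:** mid-market agencies (ARR $40K-$80K), currently the
  largest churn segment
```

The point is not the format. It is that pain points appear in the user's own words, with numbers attached, so the AI can quote them instead of inventing placeholders.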

The Key Principle: Load Once, Use Everywhere

The power of persistent context is that you set it up once and every skill benefits. Your PRD generator uses it. Your competitive analysis uses it. Your roadmap review uses it. Your stakeholder simulator uses it.

"Honestly, everything. I spend a significant amount of time keeping requirement documentation up to date based on decisions made in meetings."

— PM leading AI strategy at a large digital health company

That time spent keeping documentation current is exactly what context files solve. Update your context once, and every AI-generated artifact — PRDs, roadmaps, competitive briefs — automatically reflects the latest state.


The Marty Cagan Problem: Why Structure Matters as Much as Context

Context alone is not enough. You also need the right structure.

Most AI PRD generators (and most PRD templates) start with the solution. "Build a workload dashboard." This is backwards. Marty Cagan has been saying this for years in Inspired and Empowered: start with the problem, validate risks, then define the solution.

A context-aware PRD skill should embed this structure automatically:

  1. Start with what the context files reveal — What does the roadmap say? What persona pain does this address? What do competitors already offer?
  2. Define the problem before the solution — Who has this problem? How do you know? What evidence exists?
  3. Separate validated evidence from assumptions — Mark what you know vs. what you are inferring. Flag assumptions for validation.
  4. Assess risks across four dimensions — Value (will users want this?), Usability (can they figure it out?), Feasibility (can engineering build it?), Viability (does it work for the business?).
  5. Include leading indicators, not just lagging ones — What early signals predict success before launch?
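Laid out as a document skeleton, those five steps might look like this. This is a sketch of one possible template, not a canonical format:

```markdown
# PRD: <feature>

## Context            <!-- step 1: what the context files reveal -->
## Problem            <!-- step 2: who has it, and how we know -->
## Evidence           <!-- step 3: validated findings vs. flagged assumptions -->
## Risks              <!-- step 4: value / usability / feasibility / viability -->
## Success Criteria   <!-- step 5: leading indicators alongside lagging ones -->
```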

This structure is not something you should have to remember and type into a prompt every time. It should be embedded in the skill itself.


Why Frameworks Embedded in Skills Beat Ad-Hoc Prompting

Here is the uncomfortable truth about "how to write a PRD with AI" tutorials: they teach you to write a better prompt. They do not teach you to build a system.

A prompt is a one-time thing. You write it, you use it, you forget it. Next time you write a PRD, you start from scratch — maybe you remember to include the risk framework, maybe you don't. Maybe you remember to check competitive context, maybe you don't.

A skill is a system. It encodes the framework, the structure, the context-loading behavior, and the output format into a reusable artifact. Every time you run it, you get the full Cagan risk framework, the full context lookup, the full evidence-vs-assumptions separation. You do not have to remember anything.

"I currently spend a significant amount of time manually translating research and commercial insights into structured PRDs. Even though I'm using AI to gather intelligence, the step from insight to a clear, high-quality PRD is still inconsistent and heavily manual."

— PM transitioning to AI PM in pharma

The inconsistency that PM describes is the natural result of ad-hoc prompting. Sometimes you write a thorough prompt, sometimes you are in a rush. The output quality fluctuates with your effort. A skill removes that variance. The framework runs every time, whether you are having a sharp day or a scattered one.

"PRD writing and synthesis of qualitative user research data"

— PM between jobs, on where they spend the most time

When the most time-consuming parts of your job are PRD writing and research synthesis, those are exactly the workflows to systematize first. Not with a better prompt — with a persistent skill that encodes your standards, your context, and your framework every time.

This is the difference between using AI as a chat tool and using AI as an operating system. Chat gives you answers. An operating system gives you consistent, context-aware outputs that compound over time.


What This Looks Like in Practice

With mySecond's PM Operating System, the PRD workflow works like this:

  1. You run /prd-generator and describe the feature you are thinking about
  2. The skill automatically reads your context files — company.md, product.md, personas.md, competitors.md, goals.md
  3. It tells you what it found: "I see this feature is on your Q2 roadmap. Your persona Jordan mentions this pain point. Monday.com has a partial solution."
  4. It asks only for what is missing — not information it already has
  5. It generates a PRD using Marty Cagan's problem-first structure with your real users, real metrics, and real competitive landscape
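Steps 2 and 4 — read whatever context exists, then ask only for what is missing — can be sketched in a few lines of Python. This is an illustrative sketch, not mySecond's actual implementation; `load_context` and `build_prompt` are hypothetical names:

```python
from pathlib import Path

# The five context files described earlier in this article.
CONTEXT_FILES = ["company.md", "product.md", "personas.md",
                 "competitors.md", "goals.md"]

def load_context(context_dir):
    """Read every context file that exists; report the ones that don't.

    Returns (context, missing): context maps filename -> contents,
    missing lists the files the skill would have to ask the user about.
    """
    context, missing = {}, []
    for name in CONTEXT_FILES:
        path = Path(context_dir) / name
        if path.is_file():
            context[name] = path.read_text(encoding="utf-8")
        else:
            missing.append(name)
    return context, missing

def build_prompt(feature_request, context):
    """Prepend the loaded context so the model sees product knowledge
    before it ever sees the PRD request."""
    sections = [f"## {name}\n{body}" for name, body in context.items()]
    sections.append(f"## Request\nWrite a PRD for: {feature_request}")
    return "\n\n".join(sections)
```

The mechanism is deliberately boring: read files, prepend them, flag gaps. The leverage comes from the files existing at all.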

No copy-pasting context into a chat window. No re-explaining your product. No remembering which framework to use.

The context is persistent. The framework is embedded. The output is specific to your product every single time.


Frequently Asked Questions

How is a context-aware AI PRD different from a ChatGPT PRD?

A ChatGPT PRD starts from zero knowledge about your product. You get generic user stories, placeholder metrics, and vague competitive references. A context-aware PRD starts from structured knowledge of your company, product, users, and competitors — producing output that references your actual personas, real competitive gaps, and specific strategic priorities. The AI is the same. The knowledge is different.

What context files do I need for good AI-generated PRDs?

Five files cover the critical knowledge: company.md (mission, strategy), product.md (current state, roadmap, constraints), personas.md (user pain points, jobs to be done), competitors.md (competitive landscape, gaps, positioning), and goals.md (quarterly objectives, OKRs, what's deprioritized). These are structured markdown files that your AI reads before generating any output. You create them once and update them as your product evolves — every PRD and every other PM artifact benefits automatically.

Does a better prompt fix generic AI PRDs?

No. A better prompt improves a single output, but it does not fix the underlying problem. Every new AI session starts from zero knowledge about your product, your users, and your competitive landscape. You end up re-explaining the same context every time. Persistent context files solve the problem structurally — you load your product knowledge once, and every subsequent PRD benefits automatically without re-typing anything.

What framework should an AI PRD follow?

The strongest AI PRDs follow Marty Cagan's problem-first approach from Inspired: start with the problem and evidence, assess value/usability/feasibility/viability risks before defining the solution, and separate validated evidence from assumptions. This structure should be embedded in your PRD skill so it runs automatically — not typed into each prompt manually.

Can I use context-aware PRD generation with Claude or ChatGPT?

Yes, but the implementation differs. Claude Code supports persistent context natively through project files — you place company.md, product.md, personas.md, competitors.md, and goals.md in your project, and Claude reads them automatically before generating output. With ChatGPT, you would need to paste context into each conversation or use custom GPTs. The key principle is the same regardless of tool: structured, persistent product knowledge produces dramatically better PRDs than ad-hoc prompting.
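Under the Claude Code approach described above, the setup can be as simple as a small directory of markdown files. This layout is one plausible arrangement, not a required structure — Claude Code reads a project's CLAUDE.md automatically, and that file can point to the rest:

```text
your-product/
├── CLAUDE.md            # tells Claude to read the context/ files before PM tasks
└── context/
    ├── company.md       # mission, strategy, constraints
    ├── product.md       # current state, roadmap, pricing tiers
    ├── personas.md      # user pain points, jobs to be done
    ├── competitors.md   # landscape, gaps, positioning
    └── goals.md         # quarterly OKRs, what's deprioritized
```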

How long does it take to set up context files for AI PRD generation?

Initial setup takes 2-4 hours to create the five core context files: company.md, product.md, personas.md, competitors.md, and goals.md. Most PMs already have this information scattered across decks, docs, and their own heads — the work is structuring it into markdown files the AI can read. After the initial setup, maintenance is minimal: update the files when strategy shifts, new personas emerge, or the competitive landscape changes. Every minute invested in context pays back across every AI-generated artifact.



mySecond's /prd-generator skill generates context-aware PRDs using your product, personas, and competitive landscape. Browse all 70+ PM skills at mysecond.ai/skills.


Ron Yang is a product leader and the founder of mySecond, the PM Operating System built on Claude. He builds PM infrastructure for product teams at growing companies.