What you'll learn: The five-stage AI-powered discovery workflow, from research planning through actionable insights. How Claude Code collapses the synthesis bottleneck from days to minutes without replacing the human parts of discovery.
Discovery is the first thing that gets cut when timelines compress. Not because it doesn't matter — every Head of Product will tell you it's critical. It gets cut because it takes too long. Conducting interviews is slow. Synthesizing transcripts is slower. Connecting themes back to product decisions takes a kind of structured analysis that most PM teams never find time for. AI user research with Claude Code changes the math on all of this — not by replacing the human parts of discovery, but by collapsing the structured work between "raw data" and "decision."
This is the deep dive on the Discovery category from 7 Types of PM Work You Can Automate with Claude Code. If you haven't read that overview, it covers all seven categories. This article focuses on the one that most PM teams get wrong — and where the time savings are the most dramatic.
What excites me most about AI in the research phase is the ability to distill insights from massive amounts of data. Finding key patterns across dozens of transcripts, hundreds of support tickets, thousands of feedback entries — that simply wasn't feasible before because of the time it would have taken. AI doesn't replace the human work of conducting great interviews. It makes the previously impossible analysis practical.
The Discovery Problem for Product Teams
When nobody owns research full-time, discovery happens in the gaps between sprint planning and stakeholder updates. A PM carves out a week to run eight customer interviews. They take notes during each call. Maybe they record the sessions. Maybe they write up a quick summary afterward.
Then what?
The notes sit in a Google Doc. The recordings sit in a folder. Each PM has their own mental model of what customers said. Two weeks later, someone asks "what did we learn from the last round of interviews?" and the answer is a verbal summary from whichever PM happens to remember the most.
This is the discovery gap. It's not that PM teams skip research. It's that the pipeline between "talking to customers" and "acting on what they said" breaks down. Interviews happen. Synthesis doesn't. And when synthesis doesn't happen, insights stay locked in individual PMs' heads instead of flowing into PRDs, roadmaps, and prioritization decisions.
The cost is invisible but real. The PM Team Maturity Assessment scores teams across nine dimensions — and Discovery is where most teams score lowest. Not because they don't care about research. Because the overhead of turning raw interviews into structured, actionable insights exceeds what a 3-person PM team can absorb alongside everything else they're responsible for.
The discovery gap isn't that PM teams skip research. It's that the pipeline between "talking to customers" and "acting on what they said" breaks down. AI fixes this by collapsing the synthesis bottleneck from days to minutes.
What AI-Powered Discovery Actually Looks Like
The workflow has five stages. Each one maps to a specific skill, and they chain together — the output of one feeds into the next.
Stage 1: Research Planning
Before interviews happen, you need a plan. What are you trying to learn? Which personas should you talk to? What questions will surface the information you need?
The /interview-guide-creator skill generates structured interview guides based on your research objectives and your existing persona files. You describe what you're investigating — say, "why are mid-market customers churning after the first 90 days" — and the skill produces a guide with opening questions, probing follow-ups, and topic areas organized by research objective.
The guide isn't generic. Because the skill reads your personas.md file, it knows who your users are and tailors questions to their context. A guide for enterprise buyers asks different questions than a guide for individual contributors, even when the research objective is the same.
Stage 2: During Research
This is the human part. You conduct the interviews. You ask the questions. You notice when someone hesitates. You follow the thread when a customer says something unexpected.
What changes with an AI-powered workflow is what you do with the raw material. Instead of trying to synthesize on the fly or writing up quick summaries after each call, you drop transcripts and notes into a discovery/inputs/ folder in your project directory. That's it. The structured analysis happens in the next stage.
If you're recording calls and using a transcription service, the transcripts go directly into the folder. If you're taking manual notes, those go in too. The synthesis engine works with both.
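The mechanics here are deliberately simple. As a minimal sketch: the `discovery/inputs/` folder name comes from the article, but the transcript filenames and contents below are hypothetical examples, not a required format.

```shell
# Create the inputs folder inside your project directory
# (folder name from the article; everything below it is illustrative).
mkdir -p discovery/inputs

# A transcript exported from a transcription service (hypothetical file).
printf 'PM: What prompted you to look at alternatives?\nCustomer: ...\n' \
  > discovery/inputs/acme-interview-01.txt

# Manual notes from a call go in the same folder (hypothetical file).
printf '# Notes: Globex onboarding call\n- Hesitated on pricing question\n' \
  > discovery/inputs/globex-notes.md

# Everything in this folder gets picked up in the synthesis stage.
ls discovery/inputs
```

Transcripts and manual notes can sit side by side; the synthesis stage reads the whole folder, so no per-file setup is needed.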
Stage 3: Synthesis
This is where the bottleneck breaks.
The /research-synthesis-engine skill reads every transcript and note file in your discovery folder. It doesn't just summarize — it applies a structured synthesis framework. Themes emerge from patterns across multiple interviews, not from one PM's memory of what felt important. Each theme is supported by specific evidence: direct quotes, behavioral observations, frequency counts.
The output includes thematic analysis organized by research objective, pattern mapping across personas, opportunity identification with supporting evidence, and contradiction flagging where different user segments report conflicting needs.
Watch out — Synthesis quality depends directly on transcript quality. Sparse notes, heavily paraphrased accounts, or transcripts from poorly structured interviews produce weaker output. The tool amplifies whatever signal is in the data — including noise. Invest in good interview technique and detailed transcripts before expecting strong synthesis.
A synthesis that would take a PM a full day of careful reading and structuring runs in minutes. The PM still reviews the output, challenges the themes, and decides what matters. But the structured analysis between "pile of transcripts" and "organized themes" is handled.
Stage 4: Multi-Source Analysis
Interviews aren't the only input to discovery. Support tickets pile up. NPS responses accumulate. App store reviews and G2 feedback contain signal alongside the noise.
The /customer-feedback-analyzer skill processes feedback from multiple sources — support tickets, NPS responses, reviews, and survey data — alongside your interview transcripts. It surfaces patterns that no single source would reveal: the feature request that shows up in both customer interviews and support tickets, the pain point that drives both churn and negative reviews.
This is where PM teams gain the most leverage. Without automation, multi-source analysis almost never happens. No PM has time to read 200 support tickets, 50 NPS responses, and 12 interview transcripts, then synthesize them into a unified picture. With the skill, the multi-source synthesis runs against all of it at once.
Stage 5: Turning Research into Action
Insights are only useful if they flow into the artifacts that drive product decisions. This is where the discovery workflow connects to the rest of your PM operating system.
Research themes feed into /user-story-writer, which generates user stories grounded in actual customer evidence — not guesses about what users want. The /jobs-to-be-done-mapper takes interview data and maps it to the JTBD framework, identifying the functional, emotional, and social jobs your customers are hiring your product to do. And /assumption-mapper takes the assumptions embedded in your product plans and maps them against what the research actually supports.
The output of discovery becomes the input to planning and specs. No copy-pasting between documents. No "I think I remember a customer saying something about this." The chain from raw interview to structured user story is traceable.
The Before and After
Here's what the same research cycle looks like with and without AI-powered discovery.
Before: A PM interviews 8 customers over two weeks. They take notes during each call, write quick summaries after a few of them, and skip the summaries for the rest because they ran out of time. A week later, they sit down to synthesize. They re-read what they can find, try to remember the conversations that weren't documented well, and draft a themes document in a Google Doc. The synthesis takes 6-8 hours spread across two days. The themes are real but incomplete — they reflect what the PM remembers, not the full picture across all 8 interviews. Three of the transcripts were never reviewed carefully because the PM ran out of time.
After: The same PM conducts the same 8 interviews. Transcripts go into discovery/inputs/. They run /research-synthesis-engine. In 15 minutes, they have a structured synthesis that covers all 8 transcripts — every one analyzed with equal depth. Themes are mapped to personas. Each theme has supporting quotes with attribution. Contradictions between segments are flagged. The PM spends 30 minutes reviewing, adjusting emphasis, and adding their own qualitative observations from being in the room. Total synthesis time: under an hour, down from 6-8 hours.
Example — The bigger win isn't speed. It's completeness: every transcript got equal attention, and no interviews were skipped because time ran out.
Key Skills for Discovery
Eight skills form the core of the AI-powered discovery workflow. Here's what each one does and when to reach for it.
Research Synthesis Engine processes multiple interview transcripts into thematic analysis with supporting evidence, persona mapping, and opportunity identification. This is the single highest-impact discovery skill. If you only use one, use this one. Run it after every research cycle.
Interview Guide Creator generates structured interview guides based on your research objectives and existing persona files. Use it before a research round to ensure your questions are aligned with what you need to learn and tailored to the people you're talking to.
Customer Feedback Analyzer analyzes feedback from multiple sources — support tickets, NPS data, app reviews, survey responses — and synthesizes patterns across them. Use it when you need to combine qualitative interview data with quantitative feedback signals for a complete picture.
Jobs-to-be-Done Mapper takes interview data and maps it to the JTBD framework, identifying functional, emotional, and social jobs. Use it when you need to move from "what customers said" to "what customers are trying to accomplish" — the level of abstraction where product strategy happens.
User Story Writer generates user stories with acceptance criteria from research insights. Use it to bridge the gap between discovery and specs — turning themes and opportunities into structured stories that engineering can work from.
Persona Builder creates research-backed personas from interview data. Use it when your existing personas are stale or when research reveals a segment you haven't characterized yet. The output maps to your personas.md context file.
Assumption Mapper identifies and categorizes the assumptions embedded in your product plans, then maps them against available evidence. Use it before committing to a major feature to surface what you're assuming versus what you've validated.
Experiment Designer designs experiments to test product hypotheses with clear success criteria, sample sizes, and measurement plans. Use it when discovery surfaces an opportunity and you need to validate it before building.
How Context Files Make Discovery Better
Here's the thing about generic AI research tools: they don't know your product. Every synthesis starts from scratch — you have to explain who your users are, what you're building, and what matters.
With Claude Code, your context files do that work permanently. Your personas.md describes your user segments. Your product.md describes what you're building and why. Your competitors.md describes the landscape.
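If you're starting from scratch, a hypothetical scaffold might look like the following. The three filenames come from the article; the headings and bullet points inside each file are illustrative placeholders, not a required schema.

```shell
# Context files live in your Claude Code project directory.
# Filenames from the article; the contents are placeholder sketches.
printf '# Personas\n\n## Mid-market admin\n- Owns the renewal decision\n- Lives in the dashboard daily\n' > personas.md

printf '# Product\n\nWhat we are building, for whom, and why it wins.\n' > product.md

printf '# Competitors\n\n## Competitor A\n- Strong on reporting, weak on integrations\n' > competitors.md

ls *.md
```

Once these exist, every skill in the workflow reads them automatically; there's nothing to re-explain at the start of each synthesis.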
When /research-synthesis-engine processes your interview transcripts, it maps themes to the personas defined in your context files — not to generic user archetypes. When it identifies competitive mentions, it connects them to your actual competitive landscape. When it flags opportunities, it frames them against your product's existing positioning.
This is the difference between AI synthesis that produces a generic document and AI synthesis that produces something your team can act on immediately. The context files eliminate the gap between "here are some themes" and "here's what these themes mean for our product."
Tip — The context compounds over time. Each research cycle enriches your understanding. Your persona files get sharper. And every subsequent synthesis is more useful because it's running against richer context.
Getting Started with AI Discovery
Don't try to implement the full five-stage workflow on day one. Start with the single skill that unlocks the most value: /research-synthesis-engine.
If you have interview transcripts sitting in a folder — from last week, last month, whenever — that's your starting point. Drop them into your project directory. Run the skill. See what comes back. Most PMs are surprised by how much signal was sitting in transcripts they thought they'd already processed.
If you don't have transcripts yet, start with /interview-guide-creator before your next research round. Plan the research with the skill, conduct the interviews, then synthesize with the engine. One full cycle will show you what AI-powered discovery feels like in practice.
If you haven't set up Claude Code yet, the setup guide walks through installation and context file creation step by step. Context files are what make every skill work better — they're worth the upfront investment.
For the full picture of what's possible across all seven PM automation categories, the hub article covers everything from discovery to stakeholder communication. And if you want to see where your team's discovery process stands relative to other dimensions, the PM Team Maturity Assessment scores your team across nine dimensions and pinpoints exactly where the gaps are.
Build this for your team → We set up and manage PM Operating Systems for product teams — discovery infrastructure that turns every research cycle into accumulated intelligence. See how it works →
The complete discovery skill set — along with 70+ other PM skills across strategy, competitive intelligence, specs, planning, data, and communication — is available in the PM Operating System.
About the Author
Ron Yang is the founder of mySecond — he builds and manages PM Operating Systems for product teams. Prior to mySecond, he led product at Aha! and is a product advisor to 25+ companies.