Note: Ben Horowitz wrote the original version of this document in 2002. Everything he said about knowing the market, defining the "what," and operating from knowledge and confidence still applies. This is the update for a world where AI changed the PM's leverage, but not the job.
Good AI product managers start with the problem. They map the workflow, trace the impact chain, and identify where things break before they ever mention AI. A good AI product manager asks: does this problem require intelligence — synthesis, prediction, pattern recognition, generation — or would better UX, a simpler workflow, or an integration solve it without AI? Half the "AI opportunities" that product teams identify are actually UX problems in an AI costume. Good AI product managers name them and set them aside.
Bad AI product managers start with the technology. They pitch "let's add AI to this" without validating the pain, without mapping the workflow, without asking whether the problem requires intelligence at all. They read a competitor's press release about their new AI feature and add "AI" to three roadmap items by lunch. They conflate "AI can do this" with "AI should do this." Bad AI product managers don't have kill criteria. Every AI idea makes the roadmap. Features launch, metrics don't move, and nobody can explain why.
Good AI product managers test every AI opportunity against three lenses before committing resources: durability (will this problem survive the next foundation model upgrade?), data (do we have proprietary inputs or are we using the same public data as everyone?), and trust (will users accept AI for this decision?). Score below threshold? Kill it and move on. Bad AI product managers don't test opportunities. They don't even know the lenses exist.
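To make the gate concrete, here is a minimal sketch in Python. The three lenses come from the paragraph above; the 1-to-5 scale, the equal weighting, and the kill threshold are assumptions made for the example, not a canonical rubric.

```python
# Minimal sketch of a three-lens gate for an AI opportunity.
# The lenses (durability, data, trust) come from the essay; the 1-5 scale,
# equal weighting, and the kill threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class OpportunityScore:
    durability: int  # 1-5: survives the next foundation model upgrade?
    data: int        # 1-5: proprietary inputs, or the same public data as everyone?
    trust: int       # 1-5: will users accept AI for this decision?

def should_proceed(score: OpportunityScore, threshold: float = 3.5) -> bool:
    """Kill anything that averages below threshold or bottoms out on any single lens."""
    lenses = (score.durability, score.data, score.trust)
    return min(lenses) >= 2 and sum(lenses) / len(lenses) >= threshold

# Example: strong data moat, shaky durability -> average lands below threshold, kill it.
print(should_proceed(OpportunityScore(durability=2, data=5, trust=3)))  # False
```

The exact weights matter less than the fact that the check happens before engineering resources are committed, and that "kill it" is an acceptable outcome.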
Good AI product managers maintain living context — company strategy, product positioning, personas, competitive landscape — documented and accessible to every AI tool they use. Every AI conversation starts with shared understanding. Outputs are relevant from the first interaction because the AI already knows the product, the users, and the market. Bad AI product managers start every AI conversation from zero. They copy-paste the same company background into every prompt, every tool, every time, and then blame the AI for generic output.
Good AI product managers practice context engineering, not prompt engineering. Prompt engineering is asking a better question. Context engineering is making sure the AI has the right information before you ask any question at all. The difference matters. Most PMs who complain about AI output quality have a context problem, not a prompt problem. Bad AI product managers think the answer is better prompts. They collect prompt templates. They share prompt libraries. They optimize the question without ever fixing the information the AI is working from.
Good AI product managers treat context like infrastructure. It gets maintained, updated, and versioned. When the competitive landscape shifts, the context shifts with it. When a new persona emerges from discovery, the files get updated. The system gets smarter over time because the knowledge it operates on stays current. Bad AI product managers treat context like a one-time setup. They wrote a company description six months ago. The product has changed three times since.
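One possible shape for "context as infrastructure" is a small versioned repository that every AI tool reads from before it does anything else. The file names and update cadences below are illustrative, not a prescription from the original essay.

```
product-context/
├── strategy.md        # company strategy and current bets, reviewed quarterly
├── positioning.md     # product positioning and messaging
├── personas/          # one file per persona, updated after each discovery cycle
├── competitive.md     # competitive landscape, refreshed when the market shifts
└── glossary.md        # internal terms so AI output matches how the team talks
```

The specific files are not the point. The point is that the context lives somewhere shared, has a commit history, and changes when the product changes.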
Good AI product managers use AI to accelerate discovery — synthesizing interview transcripts, pulling patterns across conversations, pressure-testing assumptions against market data. They analyze 20 customer interviews in an afternoon and surface themes that would have taken weeks. But they make their own judgment calls. Bad AI product managers use AI to skip discovery. They ask Claude for customer insights without talking to customers first.
Good AI product managers treat AI outputs as first drafts. Every synthesis gets reviewed, challenged, refined. The AI handles volume. The PM provides judgment. Bad AI product managers treat AI outputs as decisions. They paste a transcript, ask for insights, and put whatever comes back into the next presentation without verifying whether the AI missed what matters most.
Good AI product managers know that AI can identify patterns but can't assess which patterns matter. It can summarize what customers said but can't determine which customers were telling you what you wanted to hear. It can generate hypotheses but can't design the experiment that validates them. Bad AI product managers don't make this distinction. They ship AI-generated analysis as their own analysis. They ship volume when leadership is asking for signal.
Good AI product managers know when discovery requires a human conversation, not an AI analysis. Some insights only emerge when a customer pauses, contradicts themselves, or gets emotional about a problem. AI can transcribe that moment. It can't recognize its significance. Bad AI product managers default to AI for everything and miss the insights that only come from being in the room.
Good AI product managers automate the work that doesn't require a PM — status updates, competitive monitoring, weekly metrics pulls, stakeholder reports — the operational overhead that eats 30-40% of a PM's week. Competitive intel refreshes on a schedule. Metrics land in a shared doc every Monday. Status updates compile from the project tracker and format themselves for the leadership audience. This frees PMs for customer conversations, strategic decisions, and the messy ambiguous problems AI can't solve autonomously. Bad AI product managers automate the wrong things. They build elaborate AI workflows for tasks that needed a 5-minute conversation. They spend two hours crafting the perfect prompt for a ten-minute problem.
Good AI product managers know when not to use AI. Simple problems get simple solutions. Not every workflow needs an AI layer. Bad AI product managers use AI for everything. Every email gets AI-drafted. Every meeting gets AI-summarized. Every decision gets an AI recommendation. The PM becomes a router for AI outputs instead of a decision-maker. More tools, more workflows, more complexity — productivity theater.
Good AI product managers think about data moats early. Before committing engineering resources, they ask: if a competitor uses the same model with the same public data, what do we have that they don't? If the answer is "nothing," they either find proprietary data or reframe the opportunity. Bad AI product managers build on public APIs with public data and call it a product. Their "AI advantage" lasts until the next startup demo day.
Good AI product managers design for flywheel effects. Every user interaction makes the system smarter. Every correction feeds back. Day 1,000 is meaningfully better than day 1, and the gap between you and a new entrant widens with every interaction. Bad AI product managers ship AI features that work exactly the same on day 1,000 as day 1. No learning. No improvement. No compounding advantage.
Good AI product managers model unit economics before launch — cost per successful outcome, not just cost per API call — and set alerting thresholds before they need them. Bad AI product managers don't model costs. The demo works at 100 users. At 10,000 users, the API bill is five figures a month and nobody projected it.
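A back-of-the-envelope sketch of the difference between cost per API call and cost per successful outcome. Every number below is hypothetical; the shape of the calculation is what matters, including setting the alert threshold before launch.

```python
# Back-of-the-envelope unit economics for an AI feature.
# All numbers are hypothetical; the point is the shape of the calculation.

cost_per_call = 0.04        # blended API cost per request, in dollars
calls_per_attempt = 3       # e.g. retrieval + generation + validation passes
success_rate = 0.70         # share of attempts that produce a usable outcome
users = 10_000
attempts_per_user_month = 20

cost_per_attempt = cost_per_call * calls_per_attempt
cost_per_successful_outcome = cost_per_attempt / success_rate   # ~$0.17, not $0.04

monthly_cost = users * attempts_per_user_month * cost_per_attempt
print(f"cost per successful outcome: ${cost_per_successful_outcome:.2f}")
print(f"projected monthly API bill:  ${monthly_cost:,.0f}")

# Set the alert before you need it.
BUDGET_ALERT = 20_000  # dollars per month, hypothetical threshold
if monthly_cost > BUDGET_ALERT:
    print("ALERT: projected spend exceeds budget threshold")
```

Run with these assumed numbers, the per-call price looks harmless while the per-outcome cost is roughly four times higher and the monthly bill clears five figures, which is exactly the surprise the paragraph above describes.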
Good AI product managers design for transparency. When AI is wrong — and it will be wrong — users know why and can correct it. They build review workflows, confidence indicators, and feedback mechanisms into the product. Bad AI product managers hide the AI behind a black box. No explanation. No confidence signals. No way to correct mistakes. When it fails, users lose trust in the entire product, not just the AI feature.
Good AI product managers map failure modes before launch. For every AI feature: what goes wrong, what triggers it, what's the downstream damage, how do we detect it, and what's the mitigation? Bad AI product managers discover failure modes from customer support tickets. The first time AI confidently produces something wrong, they scramble.
Good AI product managers frame AI correctly in the product experience. "AI-generated draft for your review" builds trust. "AI-powered decision engine" erodes it. Users want to feel in control, and good PMs give them that control. Bad AI product managers over-promise with words like "intelligent" and "automated" before the feature reliably delivers. The gap between expectation and reality damages trust more than honest framing ever would.
Good AI product managers plan for progressive trust-building. Start conservative. Let users override everything. As accuracy proves itself over hundreds of interactions, gradually expand what the AI handles autonomously. Trust is earned interaction by interaction, not declared in a product announcement. Bad AI product managers launch at full autonomy on day one and wonder why users don't trust the output.
Good AI product managers build PM operating systems — shared context, reusable skills, documented workflows — that make their entire team dangerous. When a new PM joins, the system brings them up to speed. When a PM leaves, the knowledge stays. The operation doesn't depend on any one person's prompt library or AI tricks. Bad AI product managers keep their tricks in their own head. They have a personal collection of prompts, workflows, and systems that make them productive. When they leave, the team goes back to zero.
Good AI product managers turn personal workflows into team infrastructure. Skills that any PM can run. Context that any PM can reference. Outputs that are consistent regardless of who produces them. Bad AI product managers optimize for individual output and call it productivity. They're 3x faster. The team isn't.
Good AI product managers think about maturity tiers. Are we Ad Hoc — every PM inventing their own process? Assisted — AI helps individuals but not the team? Running a PM OS — shared infrastructure, consistent output? Autonomous — the system runs and improves itself? Most teams are stuck at Assisted. Good PMs close that gap. Bad AI product managers don't think about tiers. They don't even know the question exists.
Good AI product managers know that Operations is the dimension that makes every other dimension durable. You can score high on Discovery today, but if that capability lives in one PM instead of a system, you're one resignation away from a zero. Bad AI product managers skip Operations because it's not exciting. Discovery is exciting. Strategy is exciting. Operations is what makes them last.
Good AI product managers scale their judgment across the team. Bad AI product managers scale their individual output and call it progress.
Horowitz's original essay was about discipline. Good PMs had it, bad PMs didn't. But in 2002, the gap was hard to measure. A disciplined PM might ship slightly better features, slightly faster. The difference showed up over quarters.
AI changed the math.
In 2026, a good AI product manager finishes in 20 minutes the competitive analysis that takes a bad one two weeks. A good AI product manager produces a PRD grounded in customer evidence, competitive context, and strategic positioning — while a bad one produces a generic document that could apply to any product. A good AI product manager builds a system that makes every PM on the team better — while a bad one keeps their tricks to themselves.
The original essay said good product managers "know the market, the product, the product line, and the competition extremely well and operate from a strong basis of knowledge and confidence."
That hasn't changed.
What changed is the cost of not doing it. In 2002, a bad PM wasted weeks. In 2026, a bad PM wastes weeks while a good PM ships in hours.
AI didn't raise the floor. It raised the ceiling — and made the gap visible to everyone.
Where do you stand?
The PM Team Maturity Assessment scores your team across 9 dimensions — Discovery, Strategy, Competitive, Planning, Specs, Data, Communication, Launch, and Operations. It takes five minutes and shows you exactly where AI can close the gaps vs. where you're still running on individual heroics.
Each gap point costs roughly 50 hours per PM per year. A team of 3 PMs with a typical score loses over $95,000 a year in productivity to operational gaps. The assessment calculates this for your specific team.
Ron Yang is the founder of mySecond — he builds and manages PM Operating Systems for product teams. Prior to mySecond, he led product at Aha! and is a product advisor to 25+ companies.