When to Use Extended Thinking Mode


Claude offers an "extended thinking" mode for complex reasoning. Here's when to use it with mySecond.

What Is Extended Thinking?

Extended thinking gives Claude more time to reason through complex problems before responding.

Regular mode:

  • Claude responds quickly (5-15 seconds)
  • Minimal reasoning time
  • Good for straightforward tasks

Extended thinking:

  • Claude shows its reasoning process first
  • Then provides the answer
  • Takes longer (30 seconds to 2 minutes)
  • Better quality on complex tasks

When enabled, you'll see:

[Thinking: Feature A has higher reach but B has strategic value for enterprise tier. However, A unlocks B later. Checking dependencies... A first, then B.]

Priority 1: Feature A (RICE: 85)
- Why first: Unlocks technical foundation for B

When to Use Extended Thinking

✅ Use For

Strategic planning:

  • /quarterly-planning-template — Balancing multiple OKRs
  • /roadmap-builder — Sequencing with dependencies
  • /ai-product-strategy — Build vs buy tradeoffs

Complex analysis:

  • /landscape-mapper — Synthesizing competitive positions
  • /prioritization-engine — Multi-variable prioritization
  • /multi-review — Resolving conflicting stakeholder feedback

Edge case reasoning:

  • /devils-advocate — Finding non-obvious flaws
  • /risk-register-builder — Identifying failure scenarios
  • /swot-analysis-generator — Strategic implications

Multi-step workflows:

  • When chaining 3+ skills together
  • Decision trees with many branches
  • Synthesis of conflicting data

❌ Don't Use For

Simple formatting:

  • /meeting-notes-processor — Straightforward summarization
  • /release-notes-pro — Template filling
  • /executive-update-generator — Status reporting

Known patterns:

  • /user-story-writer — Well-defined format
  • /sprint-planning-assistant — Standard estimation
  • /jtbd-extractor — Pattern matching

Quick tasks:

  • One-sentence answers
  • Repetitive work
  • Low-stakes outputs

Learning/exploring:

  • First time using a skill (see results fast)
  • Iterating quickly on drafts

Skills That Benefit Most

| Skill | Extended Thinking Value | Why |
| --- | --- | --- |
| /prd-generator | Medium | Helps with problem-solution fit, risk assessment |
| /prioritization-engine | High | Multiple variables to balance, tradeoffs |
| /competitive-profile-builder | Low | Mostly research synthesis |
| /multi-review | High | Complex stakeholder dynamics, conflict resolution |
| /quarterly-planning-template | High | Strategic tradeoffs, resource allocation |
| /devils-advocate | High | Finding non-obvious flaws, edge cases |
| /landscape-mapper | Medium | Positioning strategy, market dynamics |
| /roadmap-builder | Medium | Dependency sequencing, resource constraints |
| /user-story-writer | Low | Template-based, well-defined format |
| /release-notes-pro | Low | Straightforward summarization |

Cost Implications

Extended thinking adds "thinking tokens" to your usage, and they count toward your total token consumption just like input and output tokens.

Example: Competitive Analysis

Regular mode:

  • Input tokens: 3,000
  • Output tokens: 2,000
  • Total: 5,000 tokens (~$0.30 with Sonnet)

Extended thinking:

  • Input tokens: 3,000
  • Thinking tokens: 5,000
  • Output tokens: 2,000
  • Total: 10,000 tokens (~$0.60 with Sonnet)

Cost increase: 2× in this example (the thinking tokens match the combined input and output)
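The arithmetic above can be sketched as a small calculator. The flat rate of $0.06 per 1,000 tokens is an illustrative assumption chosen to reproduce this guide's round figures; real pricing bills input, output, and thinking tokens at different per-type rates.

```python
def estimate_cost(input_toks, output_toks, thinking_toks=0, usd_per_1k=0.06):
    """Return (total tokens, approximate USD cost) at a flat illustrative rate.

    Assumption: real Anthropic pricing differs by token type; the flat
    $0.06/1K here only mirrors this guide's round example numbers.
    """
    total = input_toks + thinking_toks + output_toks
    return total, round(total / 1000 * usd_per_1k, 2)

print(estimate_cost(3000, 2000))        # regular mode:      (5000, 0.3)
print(estimate_cost(3000, 2000, 5000))  # extended thinking: (10000, 0.6)
```

The thinking tokens are the only new term; when they roughly equal input plus output, total cost doubles.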


Is It Worth It?

✅ Worth the extra cost:

Example 1: Quarterly Planning

  • Extended thinking cost: +$0.50
  • Time saved: 2 hours
  • Decision impact: $50K+ budget allocation
  • Quality improvement: Catches resource conflicts
  • Verdict: Yes, $0.50 is worth avoiding a $50K mistake

Example 2: Multi-Review

  • Extended thinking cost: +$0.30
  • Time saved: N/A (still faster than manual)
  • Decision impact: Pre-empts stakeholder concerns
  • Quality improvement: Finds consensus path
  • Verdict: Yes, saves re-work cycles

❌ Not worth the extra cost:

Example 1: Status Update

  • Extended thinking cost: +$0.10
  • Time saved: 5 minutes
  • Decision impact: Low (routine communication)
  • Quality improvement: Minimal
  • Verdict: No, use regular mode

Example 2: Release Notes

  • Extended thinking cost: +$0.15
  • Time saved: 10 minutes
  • Decision impact: Low (template filling)
  • Quality improvement: None
  • Verdict: No, not needed
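The verdicts above boil down to stakes versus cost. As a rough sketch of that judgment (the 1,000× stakes-to-cost threshold is an arbitrary illustration, not a rule from this guide):

```python
def worth_extended_thinking(extra_cost_usd, decision_impact_usd,
                            quality_gain_expected=True):
    """Illustrative heuristic: pay for thinking when the decision's stakes
    dwarf the extra token cost AND the task actually benefits from deeper
    reasoning. The 1000x threshold is an assumption for illustration."""
    return quality_gain_expected and decision_impact_usd >= 1000 * extra_cost_usd

print(worth_extended_thinking(0.50, 50_000))   # quarterly planning -> True
print(worth_extended_thinking(0.10, 50,
                              quality_gain_expected=False))  # status update -> False
```

In practice the quality question matters more than the threshold: template-filling tasks fail the check regardless of cost.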

How to Enable Extended Thinking

In Claude Code Desktop:

  1. Open Settings → Model
  2. Toggle "Extended thinking" ON
  3. Run skills as normal

Note: The setting applies to all skills until you toggle it off.


In Claude Code CLI:

claude --extended-thinking /prd-generator

Or set as default:

export CLAUDE_EXTENDED_THINKING=true
claude

Examples with Comparison

Example 1: Prioritization Engine

Task: Prioritize 5 features with multiple constraints

Regular mode output:

Priority 1: Feature A (RICE score: 85)
Priority 2: Feature B (RICE score: 72)
Priority 3: Feature C (RICE score: 68)

Extended thinking output:

[Thinking: Feature A has highest RICE but B unlocks enterprise tier (strategic priority from company.md). However, A is a technical dependency for B. Also, C has customer commitments from Q4 that affect timeline. Let me check dependencies...

A must come first (enables B). B before C (enterprise priority). D and E can be parallel with C.]

Priority 1: Feature A (RICE: 85)
- Why first: Unlocks technical foundation for B
- Timeline: 6 weeks
- Dependencies: None

Priority 2: Feature B (RICE: 72)
- Why second: Strategic enterprise value, depends on A
- Timeline: 4 weeks (after A ships)
- Dependencies: Feature A

Priority 3: Feature C (RICE: 68)
- Why third: Customer commitment, can start in parallel with B
- Timeline: 8 weeks
- Dependencies: None

**Sequencing rationale:** A→B captures enterprise tier (company priority), C runs parallel to B to meet customer commitment.

Value add:

  • Caught dependency that changes priority order
  • Incorporated strategic priority from context
  • Identified parallel work opportunity
  • Explained reasoning (helps stakeholder buy-in)
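The RICE scores in this example follow the standard RICE formula (reach × impact × confidence ÷ effort). The inputs below are hypothetical, and the skill's internal scale may differ:

```python
def rice_score(reach, impact, confidence, effort):
    """Standard RICE prioritization score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical inputs: 2,000 users/quarter, impact 2 (high),
# 80% confidence, 2 person-months of effort
print(rice_score(2000, 2, 0.8, 2))  # 1600.0
```

Note that RICE alone produced the wrong order here; extended thinking added the dependency and strategic context that the raw scores miss.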

Example 2: Devil's Advocate Review

Task: Challenge a PRD for AI-powered project planning

Regular mode output:

## Critical Questions

1. How accurate does the AI need to be for users to trust it?
2. What happens when AI is wrong?
3. Do we have enough training data?

Extended thinking output:

[Thinking: AI project planning is high-stakes (affects client promises). If wrong, agency loses client trust. But according to product.md, they already have "AI Project Planner (beta)" with some usage. Let me think about failure modes...

If AI underestimates, agency overpromises to client → blown deadlines → churn.
If AI overestimates, agency loses bids → revenue impact.
Competitor Monday.com doesn't have this (from competitors.md), so there's differentiation value.

But beta suggests early adopters may be forgiving. Real risk is moving from "beta" to "general availability" where all customers expect reliability...]

## Critical Questions

1. **What accuracy threshold moves this from "beta" to "generally available"?**
   - Why this matters: Beta users tolerate errors, GA users don't
   - What if you're wrong: Premature GA → customer churn
   - Recommended validation: Track AI vs actual timeline accuracy for 50+ projects

2. **How do agencies currently handle AI errors in beta?**
   - Why this matters: Current workarounds may not scale
   - What if you're wrong: Manual override process becomes bottleneck
   - Recommended validation: Interview 5 beta users about error handling

3. **What's the cost of a bad estimate to an agency?**
   - Why this matters: High cost = high accuracy bar
   - What if you're wrong: Underestimate risk, ship prematurely
   - Recommended validation: Calculate lost revenue from blown timelines

## Potential Failure Scenarios

**Scenario 1: AI consistently underestimates complex projects**
- Likelihood: Medium (new project types lack training data)
- Impact: High (client trust, agency revenue)
- Mitigation: Manual review required for projects >$50K

**Scenario 2: Agencies game the system**
- Likelihood: Low but possible
- Impact: Medium (degrades model over time)
- Mitigation: Track estimate override patterns, flag anomalies

Value add:

  • Connected PRD to existing product context (already in beta)
  • Considered competitive dynamics (differentiation opportunity)
  • Identified transition risk (beta → GA)
  • Quantified validation experiments
  • Found non-obvious failure mode (gaming)

Best Practices

1. Use Selectively

Don't enable extended thinking for all skills. Toggle it on for complex tasks, off for simple ones.

Efficient workflow:

[Regular mode]
/meeting-notes-processor
/executive-update-generator
/user-story-writer

[Switch to extended thinking]
/quarterly-planning-template
/prioritization-engine
/multi-review

[Back to regular mode]
/release-notes-pro
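One way to script the toggling above, as a sketch: build each CLI invocation with the `--extended-thinking` flag shown earlier only for the skills that justify it (the skill list here is just this example's, not a fixed recommendation).

```python
# Skills from this workflow that justify extended thinking
NEEDS_THINKING = {
    "/quarterly-planning-template",
    "/prioritization-engine",
    "/multi-review",
}

def build_command(skill):
    """Build the CLI invocation, adding --extended-thinking only where it pays off."""
    if skill in NEEDS_THINKING:
        return f"claude --extended-thinking {skill}"
    return f"claude {skill}"

print(build_command("/meeting-notes-processor"))  # claude /meeting-notes-processor
print(build_command("/multi-review"))             # claude --extended-thinking /multi-review
```

This keeps the per-task flag explicit instead of flipping the global setting back and forth.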

2. Provide Rich Context

Extended thinking works best when Claude has context to reason about.

Thin context:

Prioritize these 5 features.

→ Extended thinking has little to work with

Rich context:

Prioritize these 5 features.

Context from company.md:
- Strategic priority: Move upmarket (enterprise tier)
- Resource constraint: 2 engineers for 6 weeks

Constraints:
- Feature B requires Feature A (technical dependency)
- Customer commitment: Feature C by end of quarter

→ Extended thinking can reason about tradeoffs


3. Review the Thinking

When extended thinking is enabled, Claude shows its reasoning. Read it to:

  • Verify assumptions are correct
  • Spot if Claude missed important context
  • Understand why it made certain recommendations

Example: Catching mistakes

[Thinking: According to competitors.md, Asana has AI features...]

→ Wait, competitors.md says Asana is evaluating AI but hasn't shipped. Correct the context before proceeding.


4. Compare Outputs

For important decisions, run the same task with and without extended thinking. Compare:

  • Did extended thinking catch something regular mode missed?
  • Is the quality improvement worth the extra cost/time?

Example:

# First pass (regular mode)
/prioritization-engine

[Review output]

# Second pass (extended thinking)
/prioritization-engine

[Compare: Did extended thinking find better sequencing?]

Combining Extended Thinking with Agent Teams

Combining agent teams with extended thinking is the most powerful setup, and also the most expensive.

When to combine:

  • Highest-stakes decisions
  • Complex multi-variable analysis
  • Resolving contradictory data sources

Cost multiplier:

  • Agent teams: 5-7× (parallel instances)
  • Extended thinking: 2× (thinking tokens)
  • Combined: 10-14× baseline cost

Example: Competitive Intelligence with Extended Thinking

  • Regular single analysis: $0.50
  • Agent team (5 competitors): $2.50 (5×)
  • Agent team + extended thinking: $5.00 (10×)
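The multipliers compound multiplicatively, so a quick sanity check of the figures above (all dollar amounts are this guide's illustrative numbers):

```python
# Illustrative figures from this guide: how the multipliers compound
baseline = 0.50   # single competitor analysis, regular mode (USD)
AGENT_TEAM = 5    # parallel instances -> roughly 5x token usage
THINKING = 2      # extended thinking -> roughly 2x token usage

team_cost = baseline * AGENT_TEAM                 # 2.50
combined_cost = baseline * AGENT_TEAM * THINKING  # 5.00
print(f"team: ${team_cost:.2f}, combined: ${combined_cost:.2f}")
```

With 7 agents instead of 5, the same compounding gives the 14× upper bound.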

Worth it?

  • If decision affects $100K+ budget: Yes
  • If routine market research: No

Troubleshooting

Extended Thinking Takes Too Long

Problem: Responses taking 2-3 minutes

Solutions:

  1. Disable for simpler tasks
  2. Reduce context file size (less to reason about)
  3. Check internet connection (slower connection = slower responses)

Extended Thinking Not Showing Reasoning

Problem: Not seeing [Thinking: ...] output

Solutions:

  1. Verify extended thinking is enabled in settings
  2. Some skills may not benefit (template-based tasks)
  3. Try a more complex task that requires reasoning

Extended Thinking Output Same as Regular

Problem: No quality improvement despite extended thinking

Causes:

  1. Task is too simple (doesn't require complex reasoning)
  2. Context is thin (nothing to reason about)
  3. Output format is rigid (skill is template-based)

Solutions:

  • Use extended thinking only for strategic/complex tasks
  • Enrich context files
  • Try a skill that requires judgment (prioritization, devil's advocate, etc.)


Last updated: February 2026