Appendix A: AI Toolkit for Product Directors
This appendix provides ready-to-use frameworks, checklists, and templates that translate the book's principles into daily practice. Unlike the chapters that explore the "why" and "how," this toolkit focuses on the "what exactly do I do Monday morning." Each tool has been refined through real-world application, and all are designed to work in AI-augmented workflows where Claude Code or similar tools serve as your intelligent partner.
Think of this appendix as your operational reference. Dog-ear the pages. Photocopy the checklists. Adapt the templates to your context. The best toolkit is one that gets worn from use.
Section 1: Vision and Strategy Toolkit
The AI-Era Vision Canvas
A compelling product vision in the AI era must answer questions that did not exist five years ago. This canvas helps you articulate a vision that accounts for both human value creation and AI capability evolution.
Core Vision Statement
Write in the format: "We will [transformation verb] [target users] by [unique approach] so they can [outcome they deeply care about]."
Example: "We will empower small business owners by combining human financial advisors with AI-powered analysis so they can make investment decisions with the confidence of a Fortune 500 CFO."
AI Capability Assumptions
Document the AI capabilities your vision depends on, along with your confidence level and timeline assumptions.
Human Value Anchors
Identify three to five human needs your product addresses that will remain valuable regardless of AI advancement. These are your strategic anchors.
Strategic Moat Analysis
In the AI era, traditional moats erode faster. Evaluate your sustainable advantages:
Strategy Stress Test Questions
Before finalizing any strategic plan, work through these questions with your leadership team:
On AI disruption resilience:
What happens to our strategy if AI capabilities advance twice as fast as we expect? What if they stall for five years?
On build vs. integrate:
For each AI component in our product, have we rigorously evaluated whether to build proprietary models, fine-tune foundation models, or integrate via API? What is our framework for revisiting these decisions?
On competitive response:
If a well-funded competitor launched tomorrow with AI-native architecture and no legacy constraints, which of our customers would they capture first? What would it take to win them back?
On talent implications:
Does our strategy require AI talent we cannot realistically hire? What partnerships or upskilling programs close the gap?
On ethical boundaries:
Where do we draw lines on AI use that competitors might cross? What principles are non-negotiable even if they cost us market share?
Quarterly Strategy Review Protocol
Every quarter, dedicate one leadership meeting to a structured strategy review:
Pre-meeting preparation (assign to team members):
Research team: scan for AI capability announcements that affect your space.
Competitive intelligence: analyze competitor AI feature launches and positioning.
Product team: review customer feedback mentioning AI expectations or concerns.
Engineering leads: compile internal AI initiative progress and learnings.
Meeting structure (three hours recommended):
First hour: Review the external landscape. What has changed in AI capabilities, competitor moves, and customer expectations?
Second hour: Assess strategic fit. Does our current strategy still make sense given the new landscape? Score each strategic initiative on a 1-5 scale for continued relevance.
Third hour: Decide on adaptations. What strategic pivots or accelerations do we need? Assign owners and timelines.
Post-meeting output:
Updated strategy document with change log. Communication plan for organization. Revised OKRs if needed.
Section 2: AI-Augmented Communication Toolkit
The Communication Mode Selector
Different situations demand different communication approaches. This selector helps you choose the right mode and leverage AI appropriately for each.
Verbal Communications
Written Communications
The Memo Framework
Memos remain the most powerful tool for making people think before they meet. Here is a structure optimized for AI-assisted drafting:
Section 1: The Decision or Discussion Required
State in one sentence what you need from the reader. Not "I want to discuss our pricing strategy" but "I need your input on whether to raise prices 15% for enterprise customers in Q2."
Section 2: Context (AI-assisted)
This section benefits most from AI drafting. Ask Claude to synthesize relevant background: market conditions, historical decisions, competitive landscape. Then edit aggressively. Include only context that directly informs the decision.
Section 3: Options Analysis
Present two to four options. For each, include a brief description, the key advantages, the key risks, your confidence level in the analysis, and what you would need to know to increase confidence.
Section 4: Your Recommendation
Take a position. Even if you are genuinely uncertain, stake out a view. This gives readers something to react to.
Section 5: Questions for Discussion
List two to four specific questions you want addressed. Frame them to elicit substance, not just agreement.
AI Drafting Prompt Template:
"I need to write a memo about [topic] for [audience]. The decision/discussion required is [specific ask]. Key context includes [bullets]. I'm leaning toward [preliminary view] because [reasoning]. Draft a memo following this structure: Decision Required, Context, Options Analysis, Recommendation, Questions for Discussion. Keep context to 200 words maximum. For each option, include 2-3 advantages and 2-3 risks."
The Executive Update Template
For stakeholders who receive dozens of updates daily, structure is kindness. This template puts the most important information first.
Line 1: Status indicator
Use a simple system: "On Track," "At Risk," or "Blocked."
Lines 2-3: The headline
One to two sentences capturing the most important thing they need to know.
Paragraph 1: Key accomplishments this period
Three to five bullets, each starting with an action verb.
Paragraph 2: Decisions needed or risks to flag
Be explicit about what you need from them.
Paragraph 3 (optional): Detailed context
For those who want to go deeper. Most executives will stop reading before this.
AI Enhancement:
Before sending, ask Claude: "Review this executive update. Is the most important information in the first three sentences? Are there any buried leads that should be elevated? Is anything included that doesn't serve the reader's needs?"
Presentation Preparation Checklist
When preparing for any significant presentation:
Before drafting slides:
What is the one thing I need the audience to believe or do after this presentation? What do they currently believe? What evidence would be most persuasive to this specific audience? What objections will they raise?
AI-assisted preparation:
Generate the strongest counterarguments to my position. Suggest three different opening hooks for this audience. What questions should I prepare for? Review my draft and identify where I'm most likely to lose the audience.
Delivery reminders:
Prepare your opening and closing cold, without notes. For complex arguments, use the structure: "I want to make [n] points. First... Second... Third..." Silence is more powerful than filler words. Use it. If presenting data, state the conclusion first, then show the evidence.
Stakeholder Communication Matrix
Map your key stakeholders and their communication preferences:
Section 3: Product Function Checklists
The Five Roles of Product: AI-Era Checklist
The Product function serves five essential roles. This checklist helps you evaluate how well you are fulfilling each, and where AI can enhance your effectiveness.
Role 1: Articulating the Vision
Weekly self-assessment:
Can every team member articulate our product vision in their own words? Have I communicated the vision in the past week through multiple channels? Is our vision specific enough to guide decisions but broad enough to inspire?
AI enhancement opportunities:
Use AI to translate vision into language optimized for different audiences. Have AI identify gaps between stated vision and recent feature decisions. Generate "vision stress tests" by asking AI to find contradictions.
Warning signs of dysfunction:
Teams making decisions that conflict with vision. New hires confused about what we're building. Vision deck unchanged for more than two quarters.
Role 2: Bringing the Customer to the Table
Weekly self-assessment:
Have I talked to a customer this week? Have I shared customer insights with stakeholders? Are we making any decisions without customer evidence?
AI enhancement opportunities:
AI can synthesize large volumes of customer feedback. AI can identify patterns in support tickets and feature requests. AI can help prepare for customer conversations by analyzing account history.
Warning signs of dysfunction:
Last customer interview was more than two weeks ago. Product decisions justified by "we think users want..." Customer research used only to validate existing plans.
Role 3: Bringing Order and Focus
Weekly self-assessment:
Does every active initiative connect to a stated objective? Have we explicitly said "no" to something this week? Can any team member explain why we're not working on X?
AI enhancement opportunities:
AI can help maintain and query prioritization frameworks. AI can flag initiatives that have drifted from original scope. AI can model resource allocation scenarios.
Warning signs of dysfunction:
Team cannot name current priorities. Everything is "high priority." Work happens without explicit product decisions.
Role 4: Setting the Rhythm
Weekly self-assessment:
Did we ship something this week? Are our ceremonies (standups, planning, retros) adding value? Is the team's velocity predictable?
AI enhancement opportunities:
AI can identify bottlenecks by analyzing cycle time patterns. AI can prepare sprint planning by pre-analyzing tickets. AI can generate release notes and stakeholder updates.
Warning signs of dysfunction:
Surprise delays are common. Team members unclear on current sprint goals. Retrospectives keep surfacing the same issues.
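The bottleneck analysis mentioned above needs little more than ticket timestamps. A minimal sketch, assuming you can export (started, finished) pairs from your tracker; the nearest-rank percentile method and field shapes are illustrative:

```python
from datetime import datetime

def cycle_times_days(tickets: list[tuple[datetime, datetime]]) -> list[float]:
    """Cycle time in days for each (started, finished) ticket pair, sorted ascending."""
    return sorted((end - start).total_seconds() / 86400 for start, end in tickets)

def percentile(sorted_values: list[float], p: float) -> float:
    """Nearest-rank percentile (p in 0-100) over pre-sorted values."""
    idx = min(len(sorted_values) - 1, int(round(p / 100 * (len(sorted_values) - 1))))
    return sorted_values[idx]
```

A widening gap between the p50 and p90 cycle times is often the earliest quantitative sign of the "surprise delays" warning above: most tickets still flow normally while a tail of stuck work grows.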
Role 5: Driving Customer Experience
Weekly self-assessment:
Have I used our product as a customer would this week? Are we measuring the moments that matter to users? Is design debt acknowledged and tracked?
AI enhancement opportunities:
AI can analyze session recordings at scale. AI can identify UX patterns associated with drop-off. AI can generate accessibility audits.
Warning signs of dysfunction:
Last dogfooding session was months ago. UX debt keeps growing. Customer effort score declining.
The First 30 Days: AI Readiness Assessment
When joining a new organization or taking on expanded responsibilities, conduct this assessment in your first month.
Week 1: Data Infrastructure Audit
Week 2: AI Capability Assessment
Week 3: Team AI Fluency Evaluation
For each team member, assess:
Comfort level with AI tools (1-5). Understanding of AI capabilities and limitations (1-5). Current use of AI in their workflow (specific examples). Learning orientation toward AI (resistant, neutral, eager).
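Once the per-member assessments are collected, rolling them up is trivial. A sketch of the aggregation, with field names chosen to mirror the assessment areas above (the names and structure are illustrative assumptions):

```python
from statistics import mean

def team_fluency_summary(assessments: list[dict]) -> dict:
    """Summarize per-member AI fluency scores and flag members rated 'resistant'.

    Each assessment dict (illustrative shape): {"name": str, "comfort": 1-5,
    "understanding": 1-5, "orientation": "resistant" | "neutral" | "eager"}.
    """
    return {
        "avg_comfort": mean(a["comfort"] for a in assessments),
        "avg_understanding": mean(a["understanding"] for a in assessments),
        "needs_support": [a["name"] for a in assessments
                          if a["orientation"] == "resistant"],
    }
```

The "needs_support" list is deliberately named to frame resistance as a coaching opportunity rather than a performance problem.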
Week 4: Quick Wins Identification
Look for opportunities that meet these criteria: High manual effort currently. Structured, repeatable process. Good historical data available. Low risk if AI makes errors. Clear success metrics.
Common quick wins: Customer feedback categorization. Bug report triage. Release note generation. Meeting summary automation. Documentation updates.
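When several candidate quick wins compete for attention, a simple weighted score over the five criteria above keeps the comparison honest. The weights here are illustrative assumptions, not prescriptions — tune them to your context:

```python
# Illustrative weights for the five quick-win criteria (assumptions, not
# from the text) -- adjust to reflect your own risk and effort profile.
CRITERIA_WEIGHTS = {
    "manual_effort": 0.3,    # high manual effort currently
    "repeatable": 0.2,       # structured, repeatable process
    "data_available": 0.2,   # good historical data available
    "low_error_risk": 0.2,   # low risk if AI makes errors
    "clear_metrics": 0.1,    # clear success metrics
}

def quick_win_score(ratings: dict[str, int]) -> float:
    """Weighted score from 1-5 ratings per criterion; higher = better candidate."""
    return sum(CRITERIA_WEIGHTS[k] * ratings.get(k, 1) for k in CRITERIA_WEIGHTS)
```

Scoring each candidate takes minutes and surfaces disagreements early: if two stakeholders rate "low risk if AI makes errors" differently, that conversation is worth having before any build starts.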
Product Review Meeting Checklist
Before any product review or decision meeting:
Pre-meeting requirements:
All materials distributed 24 hours in advance. Clear decision or feedback required stated in invite. Relevant metrics included and explained. AI-generated summary of user feedback related to topic.
During meeting:
Decision owner identified at start. Time-boxed discussion (use AI timer prompts if needed). Capture decisions and action items in real-time. Parking lot visible for off-topic items.
Post-meeting requirements:
Decisions documented within 4 hours. Action items assigned with due dates. Follow-up items scheduled. Meeting recording transcribed and summarized by AI.
Section 4: Team Operating Rhythms
Weekly Operating Cadence
A well-designed weekly rhythm creates predictability while preserving flexibility. Here is a template calibrated for AI-augmented teams:
Monday
Morning: AI-assisted weekly prep. Review automated dashboards. AI summarizes weekend customer feedback and incidents. AI prepares prioritized list of items needing attention.
Late morning: Product leadership sync. Review progress against quarterly objectives. Surface cross-team dependencies. Make any needed priority adjustments.
Tuesday
Morning: Focus time for product managers (protect this). Dedicated time for strategic work, customer research, and deep thinking. AI handles routine requests during this window.
Afternoon: One-on-ones with team members. Use AI to prepare by summarizing recent work and flagging discussion topics.
Wednesday
Morning: Product-Engineering coordination. Review technical decisions. Address any specification questions. AI pre-analyzes sprint metrics for discussion.
Afternoon: Stakeholder meetings as needed. AI prepares briefing documents and anticipated questions.
Thursday
Morning: Customer research and insights. Customer interviews. Review AI-analyzed feedback patterns. Synthesize learnings into actionable insights.
Afternoon: Product team meeting. Deep dive on one topic (rotating). Training or skill-building. AI presents data and trend analysis.
Friday
Morning: Documentation and planning. Update roadmaps and status documents. AI-assisted preparation for next week. Write weekly summary for stakeholders.
Afternoon: Flex time. Handle overflow. Learning and development. Informal team bonding.
OKR Operating System
OKRs work best when they're living documents, not quarterly exercises. Here's how to maintain them as an operating system:
Quarterly OKR Setting (allocate one full day)
Morning session: Review and reflect. How did we perform against last quarter's OKRs? What did we learn about our ability to predict outcomes? AI analyzes patterns in OKR performance across past quarters.
Afternoon session: Set new OKRs. Start with company-level objectives. Draft team OKRs that ladder up. Stress-test each key result: Is it measurable? Is it ambitious but achievable? Will we know within the quarter if we're on track?
Weekly OKR Check-ins (15 minutes in team meeting)
For each key result: Report current status using red (at risk), yellow (needs attention), green (on track). Share one specific action planned this week to advance progress. Flag any blockers or dependencies.
AI support: Automate status calculation where possible. AI prepares weekly OKR status email. AI flags when metrics are trending toward at-risk.
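The status calculation itself is easy to automate. A minimal sketch, assuming linear pacing toward the key-result target over the quarter; the tolerance thresholds are illustrative, not from the text:

```python
def okr_status(progress: float, quarter_elapsed: float,
               tolerance: float = 0.15) -> str:
    """Classify a key result as green / yellow / red.

    progress: fraction of the key-result target achieved (0.0-1.0).
    quarter_elapsed: fraction of the quarter that has passed (0.0-1.0).
    tolerance: illustrative slack before a KR is flagged (an assumption).
    """
    expected = quarter_elapsed  # assumes linear pacing toward the target
    if progress >= expected - tolerance:
        return "green"       # on track
    if progress >= expected - 2 * tolerance:
        return "yellow"      # needs attention
    return "red"             # at risk
```

Running this daily against live metrics gives you the "trending toward at-risk" flag mentioned above: a key result that slips from green to yellow mid-week gets attention before the weekly check-in.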
Monthly OKR Deep Dives (one hour)
Review each objective in depth. Are the key results still the right measures? What have we learned about what actually drives progress? Do we need to adjust targets based on new information?
Roadmap Governance Framework
Roadmaps are promises we make to ourselves and stakeholders. This framework keeps them honest and useful:
Roadmap Types and Their Uses
Roadmap Change Protocol
When something needs to change: Assess impact on committed deliverables. Calculate the trade-off (what gets cut or delayed). Prepare the communication using the format: what changed, why, what we're doing instead, impact on timeline.
AI-assisted analysis: Model scenarios for different trade-offs. Identify downstream dependencies affected. Draft stakeholder communications.
Roadmap Health Metrics
Track these monthly: Prediction accuracy (did we deliver what we said we would?). Churn rate (how often do items move between lanes?). Stakeholder satisfaction (do they find the roadmap useful?). Customer alignment (does the roadmap address top customer needs?).
Workshop and Offsite Templates
Training Workshop (2 hours)
Purpose: Skill-sharing within the product team.
Format:
Opening (10 minutes): Why this skill matters now.
Concept (20 minutes): Core framework or approach.
Demo (20 minutes): Facilitator demonstrates.
Practice (40 minutes): Participants apply to real problems.
Debrief (20 minutes): Discuss learnings and applications.
Follow-up (10 minutes): Resources and next steps.
AI support: Generate practice scenarios. Create reference materials. Summarize session for those who missed it.
Strategy Offsite (full day)
Morning: Landscape and Context
External trends presentation (AI-assisted research synthesis). Customer insight review. Competitive positioning assessment.
Midday: Strategic Options
Generate options (aim for quantity). Initial evaluation against criteria. Small group deep dives on top options.
Afternoon: Decision and Planning
Full group reconvenes. Debate and decide. Draft 90-day action plan. Assign owners and milestones.
AI support: Pre-offsite research package. Real-time option evaluation against criteria. Post-offsite documentation.
Section 5: Evaluation and Hiring Frameworks
Product Manager Assessment Framework
When evaluating product managers, whether for hiring or performance reviews, assess across these dimensions. In the AI era, traditional skills remain essential while new capabilities become differentiating.
Foundation Skills (Required for all levels)
AI Fluency Assessment
Evaluate across these areas:
Level-Specific Evaluation
Interview Guide: Product Manager
Screen (30 minutes)
Background and motivation. Why product management? Why this company? Role fit assessment.
Product Sense (45-60 minutes)
Case study: improve an existing product. Look for: customer focus, structured thinking, creativity, trade-off awareness.
Sample prompt: "Let's say you're the PM for [product they likely use]. Usage has plateaued. Walk me through how you would diagnose the problem and develop a plan."
Execution (45-60 minutes)
Deep dive on past project. Look for: ownership, cross-functional leadership, handling of setbacks, measurement approach.
Sample questions: "Tell me about a product you shipped that you're proud of. What was your specific contribution? What would you do differently?"
AI and Technical Fluency (30-45 minutes)
Understanding of AI capabilities and limitations. Experience working with technical teams. Ability to translate between technical and business contexts.
Sample prompts: "How have you used AI in your product work? Walk me through a specific example." And: "Imagine you're building a feature that uses AI to [relevant example]. What questions would you ask the ML team?"
Collaboration and Leadership (30-45 minutes)
Stakeholder management. Conflict resolution. Team dynamics.
Sample questions: "Tell me about a time you disagreed with engineering on an approach. How did you handle it?" And: "How do you build relationships with stakeholders who are skeptical of product?"
Performance Review Framework
Quarterly Check-ins (lightweight)
Review: Progress against OKRs (quantitative). Key accomplishments and learnings. Challenges and support needed. Development focus for next quarter.
Manager preparation (AI-assisted): Summarize team member's visible accomplishments. Identify recognition opportunities. Flag any patterns in feedback or collaboration.
Annual Review (comprehensive)
Part 1: Results assessment. OKR achievement across quarters. Business impact delivered. Quality of shipped work.
Part 2: Skills assessment. Progress against development plan. Foundation skills evaluation. AI fluency growth.
Part 3: Future focus. Career aspirations and timeline. Development priorities for next year. Stretch opportunities to explore.
Team Composition Framework
When building or restructuring a product team, ensure coverage across these dimensions:
Skill Distribution Matrix
Experience Mix
Target distribution: 25-35% with less than 2 years of PM experience (learning, energy, fresh perspective); 40-50% with 2-7 years (core execution capacity); 20-30% with more than 7 years (mentorship, strategic perspective).
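Checking a roster against these bands is a one-function exercise. A sketch, with band keys and thresholds taken from the targets above (the labels and code shape are illustrative, and bands are guidance, not a policy engine):

```python
# Target bands from the experience-mix distribution above.
TARGET_BANDS = {
    "junior": (0.25, 0.35),   # less than 2 years PM experience
    "mid": (0.40, 0.50),      # 2-7 years PM experience
    "senior": (0.20, 0.30),   # more than 7 years PM experience
}

def mix_gaps(counts: dict[str, int]) -> dict[str, str]:
    """Flag bands where the team's actual share falls outside the target range."""
    total = sum(counts.values())
    gaps = {}
    for band, (lo, hi) in TARGET_BANDS.items():
        share = counts.get(band, 0) / total if total else 0.0
        if share < lo:
            gaps[band] = "under"
        elif share > hi:
            gaps[band] = "over"
    return gaps
```

An "under" flag on the junior band is worth as much attention as one on the senior band: a team with no early-career PMs loses the fresh perspective the distribution is designed to protect.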
Hiring Prioritization Matrix
When you need to hire, prioritize based on:
Onboarding Checklist: New Product Manager
Week 1: Orientation
Complete HR and systems setup. Meet team members and key stakeholders. Review product documentation and strategy decks. Get access to all relevant tools (including AI tools). Shadow customer calls and team meetings.
Week 2: Deep Dive
Study the product deeply, as a user. Review last quarter's OKRs and outcomes. Understand the technical architecture (overview level). Begin customer research. Start using AI tools in daily workflow.
Week 3: Initial Contributions
Own a small, well-scoped initiative. Present initial observations to team. Begin regular one-on-ones. Document questions and learning gaps.
Week 4: Integration
Take on primary responsibilities. Deliver first visible contribution. Give and receive feedback on onboarding. Set 90-day objectives.
Manager Responsibilities During Onboarding
Daily: Check-in (15 minutes).
Weekly: Extended one-on-one (60 minutes).
End of Week 4: Formal onboarding review.
End of Month 3: First performance check-in.
Using This Toolkit
These frameworks and checklists are starting points, not destinations. Adapt them to your context:
Start with one section that addresses your most pressing need. Implement for one quarter, then evaluate and adjust. Share what works with your team and encourage their adaptations.
AI can help you customize these tools. Try prompts like: "Adapt this OKR framework for a team that ships weekly instead of quarterly" or "Modify this interview guide for a growth-focused PM role."
The best toolkit is one you actually use. If a checklist gathers dust, either the checklist needs improvement or you've outgrown the need it addressed. Both are signs of progress.