18

The New PM Skillset


The skills that made great Product Managers in 2020 remain necessary but are no longer sufficient. Back then, the best PMs excelled at customer empathy, stakeholder management, prioritization, and working effectively with engineers. These skills still matter. But the ground has shifted beneath us.

In 2026, every PM works alongside AI systems daily. They use AI to draft PRDs, synthesize user research, analyze data, and prototype features. The PMs who thrive are those who have developed a new layer of competencies on top of the traditional foundation. This chapter defines what those competencies are and how to develop them.

The Shift: From Executor to Orchestrator

Traditional product management was largely about execution. You gathered requirements, wrote specs, prioritized the backlog, and coordinated between design, engineering, and business stakeholders. The work was labor-intensive. A PM's output was directly proportional to their hours and energy.

AI changes this equation fundamentally. When you can generate a first draft of a PRD in minutes, synthesize hundreds of customer interviews in an hour, or prototype an interface before lunch, the bottleneck shifts. The scarce resource is no longer execution capacity. It's judgment.

The modern PM's value lies in knowing what to build, not in the mechanics of documenting it. It lies in recognizing which AI output is brilliant versus which is confidently wrong. It lies in orchestrating a combination of human creativity, AI capabilities, and engineering resources to solve problems that matter.

This requires a new skillset built on five pillars: prompt engineering, technical fluency, AI evaluation, collaboration with ML engineers, and orchestration thinking.

Prompt Engineering as a PM Skill

Prompt engineering is the art of communicating effectively with AI systems. For PMs, this has become as fundamental as writing a good user story or running an effective meeting.

The best PM prompts share several characteristics. They provide rich context about the product, the user, and the business situation. They specify the format and depth of response needed. They include examples of good output when available. And they break complex requests into manageable steps.

Consider the difference between asking an AI to "analyze our churn data" versus providing a prompt that includes your current churn rate, the segments you care about, your hypotheses about why users leave, and the specific decisions you're trying to inform. The first prompt yields generic insights. The second yields actionable analysis.
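The contrast can be made concrete. Here is a minimal sketch of the second kind of prompt, assembled from structured inputs; the field names, figures, and wording are all illustrative placeholders, not a prescribed template:

```python
# Build a context-rich churn-analysis prompt from structured inputs.
# All values below are hypothetical examples.
def build_churn_prompt(churn_rate, segments, hypotheses, decision):
    """Assemble a prompt that carries context, hypotheses, and the decision at stake."""
    return "\n".join([
        "You are helping a PM analyze churn data.",
        f"Current monthly churn rate: {churn_rate:.1%}.",
        f"Segments of interest: {', '.join(segments)}.",
        "Hypotheses about why users leave:",
        *[f"- {h}" for h in hypotheses],
        f"Decision this analysis should inform: {decision}",
        "Respond with the top three likely drivers, each with supporting",
        "evidence and a confidence level, in under 300 words.",
    ])

vague = "Analyze our churn data."
rich = build_churn_prompt(
    churn_rate=0.042,
    segments=["self-serve", "mid-market"],
    hypotheses=["onboarding friction", "missing integrations"],
    decision="whether to prioritize onboarding rework next quarter",
)
```

The vague version leaves the model to guess what matters; the rich version encodes the same context you would give a human analyst.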

Developing prompt engineering skill requires deliberate practice. Keep a library of prompts that work well for your common tasks: competitive analysis, user research synthesis, PRD drafting, data interpretation. Iterate on these prompts as you learn what produces better results. Share effective prompts with your team so the entire product organization levels up together.

One pattern that works particularly well is what I call "progressive refinement." Start with a broad prompt to generate initial thinking, then use follow-up prompts to dive deeper into the most promising threads. This mirrors how you might work with a skilled analyst: first get the landscape view, then zoom into the areas that warrant investigation.
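In a chat-style interface, progressive refinement is just a growing message history: one broad opening turn, then follow-ups that zoom into a thread. The sketch below uses a placeholder `respond` function so the flow runs without a network call; any real chat API would take its place:

```python
# Progressive refinement as a growing message history.
# respond() is a stub standing in for a real chat-model call.
def respond(history):
    # Placeholder: echoes the latest question so the loop is runnable.
    return f"(model reply to: {history[-1]['content'][:40]})"

history = [{"role": "user",
            "content": "Map the competitive landscape for our note-taking app."}]
history.append({"role": "assistant", "content": respond(history)})

# Zoom into the most promising thread from the broad answer.
history.append({"role": "user",
                "content": "Go deeper on the collaboration-features gap."})
history.append({"role": "assistant", "content": respond(history)})
```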

The meta-skill here is recognizing that prompting is a dialogue, not a command. The PMs who get the most from AI treat it as a collaboration where they guide, react, and refine rather than simply requesting output.

Technical Fluency in the AI Era

Product managers have always needed technical fluency. You didn't need to write code, but you needed to understand systems well enough to have credible conversations with engineers and make informed trade-off decisions.

The technical landscape PMs must understand has expanded significantly. You now need familiarity with concepts that barely existed five years ago.

Start with foundation models and how they work at a high level. You should understand what large language models can and cannot do, why they sometimes hallucinate, and how context windows affect their capabilities. You don't need to understand transformer architecture in detail, but you should grasp why a 200,000 token context window matters for your product versus a 4,000 token one.
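A back-of-envelope calculation makes the context-window point tangible. The sketch below uses the common rough rule of about four characters per token for English text (an approximation; real tokenizers vary) to check whether a batch of interview notes fits in a small versus a large window:

```python
# Rough token estimate: ~4 characters per token for English text.
# This is an approximation, not a real tokenizer.
def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)

interview = "word " * 3000          # ~15,000 characters of interview notes
tokens_per_interview = rough_tokens(interview)

small_window = 4_000                # one interview barely fits
large_window = 200_000              # dozens of interviews fit at once
fits_small = tokens_per_interview <= small_window
fits_fifty = 50 * tokens_per_interview <= large_window
```

One interview squeaks into a 4,000-token window; fifty of them fit comfortably into a 200,000-token one, which is the difference between summarizing notes one at a time and synthesizing an entire research study in a single pass.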

Understand the build versus buy versus API decision for AI capabilities. When should you fine-tune a model for your specific use case? When is a general-purpose API sufficient? When does it make sense to build proprietary models? These decisions have massive implications for cost, capability, and competitive positioning.

Learn the basics of RAG (Retrieval-Augmented Generation) and why it matters. Many AI features in products work by combining language models with retrieval systems that fetch relevant information. Understanding this pattern helps you evaluate what's feasible and scope features appropriately.

Develop intuition for AI costs. Token-based pricing, compute costs for training and inference, the trade-offs between model size and performance. These economics shape what's viable to build.
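The arithmetic is simple enough to do in a spreadsheet, but doing it once builds the intuition. The per-token prices and usage figures below are made-up placeholders; substitute your provider's actual rates:

```python
# Rough monthly cost estimate under token-based pricing.
# All rates and volumes are hypothetical placeholders.
input_price_per_1k = 0.003    # dollars per 1,000 input tokens
output_price_per_1k = 0.015   # dollars per 1,000 output tokens

requests_per_month = 500_000
avg_input_tokens = 1_200      # prompt plus retrieved context
avg_output_tokens = 300

monthly_cost = requests_per_month * (
    avg_input_tokens / 1000 * input_price_per_1k
    + avg_output_tokens / 1000 * output_price_per_1k
)
```

Note how input tokens dominate here despite the lower per-token rate, because retrieved context inflates the prompt; trimming context or caching it often moves the bill more than switching models.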

Finally, understand AI safety and alignment at a conceptual level. What are the risks when AI systems behave unexpectedly? How do companies like Anthropic approach building AI that's helpful, harmless, and honest? As a PM, you'll make decisions about how AI features behave in edge cases. Some grounding in safety thinking helps you make those decisions well.


Evaluating AI Features Before Shipping

Traditional feature evaluation focused on usability, performance, and business metrics. Does the feature solve the user's problem? Is it fast enough? Does it move our KPIs?

AI features require additional evaluation dimensions. The output is probabilistic rather than deterministic. The same input can produce different outputs. Edge cases are harder to enumerate because the input space is vast.

Effective AI evaluation starts with defining clear success criteria before building. What does "good enough" look like for this AI feature? A customer service bot might need 95% accuracy on common questions but can gracefully hand off edge cases to humans. A medical information feature might need much higher accuracy with explicit uncertainty communication.

Build evaluation datasets early. Collect examples of inputs your feature will encounter, along with what good outputs look like. Test systematically against these examples throughout development, not just at the end.
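An evaluation dataset plus a pass bar can be as lightweight as a list of input-output pairs and a loop. In the sketch below, `classify_intent` is a keyword stub standing in for the real AI feature, and the examples and threshold are illustrative:

```python
# Minimal eval harness: run the feature over a golden dataset and
# compare accuracy against a pre-agreed threshold.
def classify_intent(text):
    # Stub standing in for the model under test.
    return "refund" if "refund" in text.lower() else "other"

eval_set = [
    ("I want my money back, please refund me", "refund"),
    ("How do I change my password?", "other"),
    ("Refund the annual plan", "refund"),
    ("What are your business hours?", "other"),
]

correct = sum(classify_intent(x) == expected for x, expected in eval_set)
accuracy = correct / len(eval_set)
meets_bar = accuracy >= 0.95   # the "good enough" bar agreed before building
```

Running this after every prompt or model change turns "did we make it better?" from a debate into a measurement.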

Pay special attention to failure modes. When the AI feature fails, how does it fail? Does it fail gracefully with appropriate uncertainty signals? Or does it fail confidently in ways that could harm users or your business? A feature that says "I'm not sure, let me connect you with a human" when confused is far better than one that confidently provides wrong information.

Test with real users earlier than you might with traditional features. AI behavior in the wild often differs from behavior in controlled testing. User prompts are messier, context is incomplete, edge cases emerge that you didn't anticipate.

Finally, plan for ongoing evaluation. AI features can drift over time as underlying models update or as user behavior shifts. Build monitoring that alerts you when output quality degrades.
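The monitoring itself need not be elaborate to start. One simple pattern, sketched here with illustrative thresholds, compares a rolling window of recent quality scores against the launch baseline and flags drift:

```python
# Drift check: flag when the rolling mean of recent quality scores
# falls more than `tolerance` below the launch baseline.
from collections import deque

def drift_alert(scores, baseline=0.95, tolerance=0.05, window=100):
    recent = deque(scores, maxlen=window)   # keep only the last `window` scores
    return sum(recent) / len(recent) < baseline - tolerance

healthy = [1] * 95 + [0] * 5     # 95% recent accuracy: within tolerance
degraded = [1] * 85 + [0] * 15   # 85% recent accuracy: should alert
```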

Working with ML Engineers

The collaboration between PMs and engineers has always been central to the role. Working with ML engineers requires understanding their world well enough to be an effective partner.

ML development cycles differ from traditional software development. Training models takes time. Experiments frequently fail. The path from "this might work" to "this reliably works in production" is longer and less predictable than shipping a traditional feature.

Learn the vocabulary. Understand what your ML engineers mean when they discuss training data, validation sets, precision versus recall, latency requirements, and model drift. You don't need to implement these concepts, but you need to discuss them intelligently.
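Precision and recall are the most common sticking point in that vocabulary, and a toy calculation pins them down: precision asks, of the items we flagged, how many were right; recall asks, of the items we should have flagged, how many we caught. The predictions below are invented for illustration:

```python
# Precision vs. recall on a toy set of binary predictions.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]   # model output (1 = flagged)
labels = [1, 0, 0, 1, 1, 0, 1, 0]   # ground truth

tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))  # true positives
fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))  # false positives
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))  # false negatives

precision = tp / (tp + fp)   # of what we flagged, how much was right
recall = tp / (tp + fn)      # of what we should have flagged, how much we caught
```

Which one matters more is a product decision: a spam filter that flags real mail (low precision) annoys users, while a fraud detector that misses cases (low recall) costs money.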

Help with the data problem. ML systems are only as good as their training data. PMs often have better access to user data, domain expertise, and business context than ML engineers. Use this to help define what good training data looks like, identify sources of high-quality examples, and spot gaps in coverage.

Set realistic expectations with stakeholders. ML projects have higher uncertainty than traditional engineering projects. Help your organization understand that ML timelines are estimates, not commitments, and that some experiments will fail. The PM's job is to create space for the learning that ML development requires while maintaining accountability for outcomes.

Collaborate on product decisions that affect model behavior. When do you show AI-generated content to users? How do you handle low-confidence predictions? What feedback loops can you build to improve the model over time? These decisions live at the intersection of product thinking and ML expertise.

The PM as Orchestrator

The highest expression of the new PM skillset is orchestration: knowing how to combine human capabilities, AI capabilities, and traditional software to solve problems optimally.

Some tasks are best done by humans. Creative leaps, ethical judgments, relationship building, and situations requiring genuine empathy remain fundamentally human. Other tasks are best done by AI. Synthesis of large volumes of information, pattern recognition in data, and generating variations on a theme are AI strengths. Many tasks are best done by humans and AI together, each contributing what they do best.

The orchestrator PM develops intuition for these divisions. They design workflows that route work to the right capability at the right time. They build products that combine AI capabilities with human oversight. They structure their own work to leverage AI for throughput while reserving their judgment for decisions that matter.

This requires holding two mental models simultaneously. You need to understand what AI can do today, including its genuine capabilities and limitations. You also need to imagine what AI might do tomorrow as capabilities rapidly improve. Building products at this intersection means making bets about which human activities will remain valuable and which will be augmented or replaced.

The best orchestrator PMs I know share several traits. They experiment constantly with new AI tools and capabilities. They maintain healthy skepticism about AI output, checking important conclusions rather than accepting them uncritically. They invest in developing their uniquely human skills, like building relationships and exercising judgment, knowing these become more valuable as AI handles more routine work. And they stay curious about where the technology is heading, updating their mental models as the landscape evolves.

Developing These Skills

These five competencies (prompt engineering, technical fluency, AI evaluation, ML collaboration, and orchestration) are learnable. They require deliberate practice, not innate talent.

Start by using AI tools extensively in your own work. Draft documents with AI assistance. Analyze data using AI. Prototype features. The hands-on experience builds intuition faster than reading about these tools.

Build relationships with ML engineers and researchers. Ask them to explain their work. Shadow their process. The goal isn't to become an ML engineer but to develop enough shared context for effective partnership.

Read broadly about AI capabilities and limitations. Follow the research from organizations like Anthropic, OpenAI, and Google DeepMind. The field moves quickly. Monthly reading habits compound into substantial knowledge over time.

Experiment with failure. Try using AI for tasks where it might not work well. Understanding the boundaries of AI capability, where it excels and where it falls short, is as valuable as understanding its strengths.

Finally, teach others. Explaining these concepts to your team, your stakeholders, and your organization forces you to clarify your own thinking and identifies gaps in your understanding.

Conclusion

The PM role has always evolved with technology. The shift to mobile required new skills. The rise of data-driven product development required new skills. The AI era requires another evolution.

The PMs who thrive will be those who embrace this evolution enthusiastically while staying grounded in the fundamentals. Customer empathy still matters. Clear thinking still matters. Effective collaboration still matters. These new skills layer on top of the foundation, expanding what a skilled PM can accomplish.

The alternative, resisting the change and hoping traditional PM skills remain sufficient, is a losing strategy. AI capabilities will continue to expand. PMs who don't develop fluency with these tools will find themselves increasingly limited in what they can accomplish, outpaced by peers who have embraced the new skillset.

The good news is that most PMs already have the foundational skills this transition requires. You know how to learn new tools. You know how to work with technical teams. You know how to adapt to changing circumstances. Apply those same capabilities to developing AI fluency, and you'll be well-positioned for the decade ahead.