Roadmap & Priorities
The Roadmap Paradox
Every stakeholder wants a roadmap. Investors want to know what you're building next quarter. Sales wants to promise features to close deals. Executives want predictability for planning. Engineers want clarity about what they're working toward.
And yet, the best product organizations treat roadmaps with deep skepticism.
The paradox is real: you need a roadmap to coordinate and communicate, but you shouldn't believe in it too strongly. The moment you treat a roadmap as a commitment rather than a hypothesis, you've stopped learning and started executing blindly.
This tension has always existed. But AI amplifies it dramatically. When you can prototype in days instead of months, when you can test ideas at a fraction of the previous cost, when the landscape shifts faster than quarterly planning cycles, traditional roadmapping becomes not just limiting but actively harmful.
This chapter explores how to think about roadmaps and priorities in an era where the cost of trying has collapsed but the need for direction remains.
The Anti-Roadmap
The term "anti-roadmap" comes from Intercom, the customer messaging company that built one of the most influential product cultures of the 2010s. Des Traynor, Intercom's co-founder, articulated a philosophy that challenged conventional roadmap thinking.
The core insight: traditional roadmaps are a form of organizational theater. They create the appearance of control and predictability in a world that offers neither. Teams spend weeks crafting detailed plans that become obsolete within months. Stakeholders treat roadmap items as promises, then feel betrayed when priorities shift. The roadmap becomes a political document rather than a planning tool.
Intercom's alternative was to focus on problems rather than solutions, and on outcomes rather than outputs. Instead of committing to "build feature X in Q2," commit to "solve problem Y for customer segment Z." The former locks you into a specific solution before you've validated it. The latter preserves flexibility to find the best solution through iteration.
The anti-roadmap doesn't mean having no plan. It means holding your plan loosely. It means treating every roadmap item as a hypothesis to be validated, not a promise to be kept. It means building systems that can respond to learning rather than systems that punish deviation from the plan.
Intercom operationalized this through several practices. They published their product principles openly, so stakeholders understood how decisions were made. They communicated themes and problem areas rather than feature lists. They reserved the right to change direction when they learned something new, and they explained why that flexibility served customers better than rigid commitment.
The anti-roadmap mindset requires confidence. You have to believe that your judgment, applied continuously to emerging information, will produce better outcomes than your predictions, made months in advance with limited information. This is uncomfortable for organizations that crave certainty. But it's honest about how product development actually works.
Companies that rigidly execute their roadmaps often ship exactly what they planned and miss entirely what the market actually needed. Companies that hold roadmaps loosely ship what matters, even when it wasn't in the original plan.
The anti-roadmap is not chaos. It's disciplined flexibility. You still need direction. You still need priorities. You just don't need false certainty.
Sources of Ideas
Before discussing how to prioritize, consider where ideas come from in the first place. A healthy product organization draws from multiple sources, each with different strengths and blindnesses.
Gut Feeling and Intuition
Product intuition is real. After years of building products and talking to users, experienced product leaders develop a sense for what will work. They notice patterns. They feel when something is off. They have hunches worth exploring.
The danger is treating intuition as sufficient. Gut feelings are hypotheses, not conclusions. They're starting points for investigation, not endpoints for decision-making. The best product leaders trust their intuition enough to explore it and doubt it enough to test it.
AI can augment intuition by rapidly testing hunches. That feature idea you've been mulling over? Describe it to Claude and explore the implications. Prototype it in an afternoon. Get user feedback within a week. Your intuition generated the hypothesis. AI helps you validate or invalidate it quickly.
Iterations on What Exists
Most product work isn't revolutionary. It's evolutionary. You have a product that works reasonably well, and you make it work better. Faster. Easier. More reliable. More delightful.
Iteration is underrated. The compound effect of continuous improvement is enormous. A product that gets 10% better every quarter more than doubles in two years; it becomes unrecognizable.
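The compounding claim is easy to check with a few lines of arithmetic. This is just the standard compound-growth calculation, with the 10%-per-quarter figure taken from the text:

```python
# Compound effect of a 10% improvement per quarter over two years.
quarterly_gain = 0.10
quarters = 8  # two years

multiplier = (1 + quarterly_gain) ** quarters
print(f"After {quarters} quarters: {multiplier:.2f}x the starting point")
# 1.10 ** 8 is roughly 2.14 -- the product has more than doubled.
```

The same math explains why small, steady improvements beat occasional big pushes: the gains multiply rather than add.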
AI accelerates iteration by reducing the cost of small improvements. Fixes that weren't worth the engineering investment at the old cost become obviously worthwhile at the new cost. You can polish more surfaces, smooth more edges, address more friction points.
Experiments and Bets
Some ideas need to be tested, not debated. You can argue endlessly about whether users want a feature, or you can build a simple version and find out.
Experiments require a different mindset than execution. You're not trying to ship something perfect. You're trying to learn something quickly. The experiment succeeds if you learn, regardless of whether the feature succeeds.
AI transforms experimentation by collapsing the cost of trying. An experiment that would have required two engineers for a month might now require one PM with AI assistance for a week. When experiments are cheap, you can run more of them. When you run more experiments, you learn faster.
Technology-Driven Possibilities
Sometimes new technology enables things that weren't previously possible. The technology doesn't tell you what to build, but it expands the space of what you could build.
This is particularly relevant now. AI capabilities are expanding rapidly. Features that were science fiction two years ago are now straightforward to implement. Product Directors need to stay current on what's becoming possible and imaginative about how those capabilities could serve users.
The risk is building technology showcases rather than user solutions. Just because you can add AI to a feature doesn't mean you should. The question is always: does this serve users better? Technology is an enabler, not a justification.
Research and Discovery
As discussed in Chapter 6, research reveals what users need, what competitors are doing, and what the market is becoming. Good research surfaces opportunities you wouldn't have found otherwise.
Research should be a continuous input to roadmap thinking, not a periodic exercise. When you're constantly learning about users and markets, your roadmap naturally evolves to reflect that learning.
AI as Brainstorming Partner and Critic
Beyond accelerating validation, AI serves as a powerful tool for generating and pressure-testing ideas.
Brainstorming at scale. When exploring a problem space, AI can generate dozens of potential approaches in minutes. Describe the user problem you're trying to solve, the constraints you're working within, and ask for ideas. You'll get a range of options, some obvious, some surprising, that expand your thinking beyond what you would have considered alone.
The value isn't that AI generates better ideas than humans. It's that AI generates more ideas faster, creating raw material for human judgment to work with. You can explore a broader solution space before narrowing down.
The devil's advocate. Perhaps AI's most valuable role in ideation is challenging ideas. AI is remarkably good at articulating why an idea might not work.
Present your favorite feature concept to Claude and ask: "What are all the reasons this might fail? What am I not considering? Who would hate this and why? What are the second-order consequences I'm missing?"
This is harder to get from human colleagues. People are polite. They don't want to be negative. They have political considerations. They might not want to challenge the boss's idea. AI has none of these inhibitions. It will tell you honestly and thoroughly why your idea has problems.
This isn't about letting AI veto ideas. It's about surfacing objections early, when you can address them, rather than late, when you've already committed resources. The best ideas survive rigorous challenge. Weak ideas should be identified before you build them, not after.
Stress-testing assumptions. Every product idea rests on assumptions. Users will understand this interface. The technology will perform at scale. The market will value this capability. Customers will pay this price.
AI can help identify and examine these assumptions. Ask: "What assumptions am I making? Which are most uncertain? What would happen if each assumption proved wrong?" This systematic examination often reveals risks that intuition missed.
From 666 to Faster Cycles
For years, I used a framework I called 666: six-year vision, six-month roadmap, six-week cycles. This structure served well in the pre-AI era. It balanced long-term direction with medium-term planning and short-term execution.
The Original 666 Model
Six-year vision. Where are you going? What does success look like? The six-year horizon provided direction without false precision. It connected to the vision work discussed in Chapter 3, painting a picture of the future you're building toward.
Six-month roadmap. What major themes and initiatives matter in the medium term? Six months was long enough to accomplish something meaningful but short enough that you weren't committing to things you couldn't possibly know. At this level, you thought in terms of problems to solve and outcomes to achieve rather than features to ship.
Six-week cycles. What are you doing right now? Six-week cycles provided rhythm for actual work. Long enough to accomplish something substantial. Short enough to course-correct quickly. Each cycle had clear goals, and at the end of each cycle, you assessed what you learned and decided what to do next.
Why 666 No Longer Fits
The 666 model assumed a certain pace of learning and execution. Six weeks was enough time to build something meaningful, test it with users, and draw conclusions. Six months was a reasonable horizon for planning because the world didn't change faster than that.
AI has compressed these timelines.
What took six weeks now takes two. A team using AI effectively can prototype, test, and iterate multiple times in the span of a traditional cycle. The learning that previously happened over a quarter now happens in weeks.
More importantly, the cost of planning has shifted relative to the cost of doing. When execution was expensive, extensive planning made sense. You wanted to be confident before committing resources. When execution is cheap, you can often just try things rather than plan them. The planning overhead that was justified at old execution costs becomes waste at new execution costs.
The New Cadence
The principles behind 666 remain sound. You still need long-term direction, medium-term focus, and short-term execution rhythm. But the specific timeframes need compression.
Long-term vision: three to five years. The acceleration of AI means that six-year horizons feel increasingly speculative. Three to five years maintains strategic direction while acknowledging faster change. Your vision should still be stable, but you're projecting into a future that arrives more quickly.
Medium-term roadmap: six to twelve weeks. The six-month roadmap becomes a twelve-week roadmap, or even shorter for fast-moving products. You're committing to themes and focus areas for the next quarter, not the next two quarters. This matches the pace at which you're learning.
Short-term cycles: one to two weeks. Six-week cycles compress to one or two-week sprints. The goal isn't to do less. It's to learn faster. Each cycle is a complete loop of hypothesis, build, test, learn. More cycles mean more learning.
Some teams are moving even faster. When AI enables same-day prototyping, the "cycle" might be measured in days rather than weeks. The principle is to match your planning cadence to your learning cadence. If you can learn something in a week, don't plan in six-week increments.
Adapting Your Rhythm
The right cadence depends on your context. Early-stage products searching for product-market fit should move as fast as possible. Mature products with established users might maintain longer cycles to avoid disruption.
The key is to regularly ask: are we planning at the right pace for how fast we're learning? If your plans are consistently obsolete before they're executed, your planning horizon is too long. If you're constantly scrambling without direction, it's too short.
AI is a tool. The cadence should serve your goals, not the other way around.
AI-Transformed Prioritization
Traditional prioritization frameworks like RICE (Reach, Impact, Confidence, Effort) or ICE (Impact, Confidence, Ease) attempt to quantify the value of different options. They're useful but limited. The inputs are guesses. The outputs are false precision.
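For readers unfamiliar with these frameworks, here is a minimal RICE scoring sketch. The field names follow the standard formula; the sample items and numbers are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # users affected per period
    impact: float      # e.g. 0.25 (minimal) to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # person-weeks

def rice_score(idea: Idea) -> float:
    # RICE = (Reach * Impact * Confidence) / Effort
    return idea.reach * idea.impact * idea.confidence / idea.effort

backlog = [
    Idea("Onboarding revamp", reach=5000, impact=2.0, confidence=0.8, effort=6),
    Idea("Dark mode", reach=9000, impact=0.5, confidence=0.9, effort=3),
]
for idea in sorted(backlog, key=rice_score, reverse=True):
    print(f"{idea.name}: {rice_score(idea):.0f}")
```

Notice how much the output depends on the inputs: nudge any guess by 20% and the ranking can flip. That is the "false precision" the text warns about, and why validation matters more than scoring.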
AI doesn't eliminate the uncertainty in prioritization, but it can reduce it.
Rapid Validation
The biggest change is how quickly you can move from "we think this might be valuable" to "we know whether this is valuable."
Before committing significant resources to a roadmap item, use AI to accelerate validation. Build a prototype. Run it past users. Analyze the feedback. What would have taken a quarter now takes weeks. You can validate more ideas before committing to any of them.
This changes the prioritization calculation. Previously, you had to prioritize based on assumptions because validation was expensive. Now you can prioritize based on evidence because validation is cheap.
Cost Reassessment
Every prioritization framework includes some measure of cost or effort. AI changes these calculations dramatically.
Features that were expensive are now cheap. Analyses that required dedicated analysts can now be done by anyone. Prototypes that required engineering time can now be generated from descriptions.
This means revisiting your assumptions about what's worth doing. Ideas that were correctly deprioritized under old cost structures might be obviously worthwhile under new ones. Conduct a periodic review of your backlog with fresh eyes on the effort estimates.
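A backlog review can be as mechanical as re-running the same score with updated effort estimates. In this hypothetical sketch (the numbers are invented for illustration), an AI-assisted drop in effort turns a correctly deprioritized idea into a clear winner:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog item, scored under old and new effort estimates.
# Value assumptions are unchanged; only the cost of building it has moved.
old_score = rice(reach=2000, impact=1.0, confidence=0.7, effort=12)
new_score = rice(reach=2000, impact=1.0, confidence=0.7, effort=2)

print(f"At the old effort estimate: {old_score:.0f}")
print(f"At the new effort estimate: {new_score:.0f}")
# A 6x drop in effort makes the same idea 6x more attractive.
```

The point is not the specific numbers but the discipline: stale effort estimates silently distort every priority decision built on top of them.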
Scenario Modeling
AI can help you think through the implications of prioritization choices. Describe your options and constraints to Claude. Ask it to model different scenarios. What happens if you prioritize growth over retention? What happens if you focus on enterprise over SMB? What are the second-order effects of each choice?
This isn't about AI making prioritization decisions. It's about AI helping you think more thoroughly about the decisions you're making. The goal is better human judgment, not replaced human judgment.
Opportunity Cost Analysis
Every choice to do something is a choice not to do something else. Opportunity cost is real but often ignored because it's hard to quantify.
AI can help surface opportunity costs. When you're considering a major initiative, ask: what else could we do with these resources? What are we giving up? Are there faster paths to the same outcome? This kind of analysis was always possible but rarely done because it was time-consuming. AI makes it practical.
Managing Stakeholder Expectations
Roadmaps exist partly for internal planning and partly for external communication. Managing stakeholder expectations is a critical skill for Product Directors.
The Commitment Problem
Stakeholders want commitments. "When will feature X ship?" "Can I promise this to the customer?" "What can I tell the board?"
The pressure to commit is intense. And commitments feel good in the moment. They create clarity. They satisfy the person asking. They make you seem confident and in control.
But false commitments create larger problems later. When you commit to something you're not certain about and then miss it, you damage trust. When you commit to building something specific and then learn it's the wrong thing, you either ship the wrong thing or break your commitment. Neither outcome is good.
The discipline is to commit to outcomes rather than outputs. "We're focused on improving retention" is a commitment you can keep. "We'll ship the new dashboard by March" is a prediction that might be wrong.
Communicating Uncertainty
The alternative to false certainty is honest uncertainty. This requires more sophisticated communication but builds more durable trust.
Be explicit about what you know and don't know. "Based on current understanding, we believe X, but we're still validating that." "Our best estimate is Q2, but that depends on what we learn in the next cycle." "This is our current priority, but we reserve the right to adjust as we learn."
Some stakeholders will push back on this uncertainty. They want guarantees. Help them understand why guarantees would be false. The world is uncertain. Pretending otherwise doesn't make it less so. It just means you're lying.
The Roadmap as Communication Tool
Different audiences need different roadmap views. The board needs high-level themes and strategic direction. Sales needs enough detail to have customer conversations. Engineering needs specific enough guidance to plan their work.
Create multiple views rather than one roadmap that tries to serve everyone. Use AI to help generate and maintain these views. A single source of truth about priorities can feed multiple presentations tailored to different audiences.
Continuous Reprioritization
In a fast-moving environment, priorities should be revisited continuously, not just at planning cycles.
Triggers for Reprioritization
Certain events should trigger immediate reconsideration of priorities. A major competitor move. A significant change in user behavior. A technological breakthrough. A shift in company strategy. A key assumption proving wrong.
Build awareness systems that surface these triggers. Use the research automation discussed in Chapter 6 to maintain continuous awareness of market changes. When something significant happens, don't wait for the next planning cycle. Assess the implications and adjust if needed.
The Discipline of Stopping
One of the hardest prioritization decisions is stopping something already in progress. Sunk cost fallacy is powerful. You've invested time and resources. Stopping feels like admitting failure.
But continuing something that shouldn't be continued is worse than stopping it. The resources you're spending on the wrong thing are resources you're not spending on the right thing.
Create explicit checkpoints for in-progress work. At each checkpoint, ask: knowing what we know now, would we start this? If the answer is no, have the courage to stop.
Saying No
Prioritization is as much about what you don't do as what you do. Every yes implies many nos. The discipline of prioritization is the discipline of saying no.
This is uncomfortable. Saying no disappoints people. It closes doors. It requires confidence that you're making the right choice.
But saying yes to everything is not a strategy. It's an absence of strategy. If everything is a priority, nothing is.
Use your frameworks. Trust your judgment. Say no clearly and kindly. And be willing to revisit if circumstances change.
The Product Director's Roadmap Role
As a Product Director, you're not personally writing roadmaps for every product. You're creating the conditions for good roadmapping across your organization.
This means establishing frameworks and processes that teams can use. It means setting expectations about how roadmaps should be held and communicated. It means modeling the right relationship with uncertainty: confident enough to provide direction, humble enough to adjust when you learn.
It also means protecting your teams from unreasonable demands for false certainty. When executives or stakeholders push for commitments that can't responsibly be made, part of your job is to push back. Explain why flexibility matters. Advocate for planning approaches that serve the company's actual interests, not just its desire for predictability.
And it means ensuring that AI capabilities are being used to improve planning, not just execution. Teams should be using AI to validate ideas faster, to model scenarios, to analyze tradeoffs, to challenge their own thinking. If they're not, help them build those capabilities.
The New Planning Mindset
The AI era demands a different relationship with planning.
The old model: spend significant time creating detailed plans, then execute against those plans, then assess results at the end.
The new model: create lightweight plans, start executing immediately, learn continuously, and adjust constantly.
Planning doesn't disappear. But it becomes less about predicting the future and more about creating frameworks for navigating uncertainty. Less about committing to specific outputs and more about committing to learning processes.
The roadmap is a living hypothesis about what will create value. Every cycle, you test that hypothesis. Every cycle, you learn. Every cycle, you adjust.
This is faster. It's also more honest. You're not pretending to know things you don't know. You're building systems that learn and adapt.
Product Directors who embrace this mindset will build organizations that consistently find and deliver value. Those who cling to traditional planning will find themselves executing brilliantly against plans that no longer matter.
The future belongs to those who can hold direction and flexibility simultaneously. Who can provide the clarity organizations need while preserving the adaptability that markets demand.
That's the new roadmap discipline. Clear enough to align. Loose enough to learn.