Prioritization Beyond Algorithms
The problem of prioritization comes up in many of my coaching discussions with product leaders, and in almost every product forum. We want it to be a trivial mechanical process: pick a metric (usually current revenue), estimate ROI for the entire backlog, then do whatever scores highest. But that very rarely works in practice. Prioritization fits into a larger strategic and organizational framework.
Some frequent symptoms:
- Requests for a magical universal prioritization spreadsheet. I don’t believe in one, since every product has its own goals/definition of success; point in the lifecycle; history of good/poor technical investments; economic and revenue urgency; and competitive situation.
- Forcing unlike things into one (financial) metric. It takes different kinds of work to deliver and support and grow a software product: innovative capabilities, competitive table stakes, bug fixes, scalable infrastructure, intellectually honest discovery, crazy technical experiments… Different categories of work each need appropriate (different) metrics. Otherwise, we’ll only work on shiny new features and wander into market irrelevance or technical bankruptcy. I’ve never seen one uniform metric for all requests produce good results.
- Wanting to rough-size everything in the backlog. The notion is that if we can estimate “value” for a thousand pending tickets — plus t-shirt development sizing for each — our spreadsheets will make our decisions for us. The best investments will sort to the top, and we’ll grab the individual items with the highest stand-alone ROI. (I’ve never found this to work in practice.)
So it seems worth putting the algorithmic part of prioritization into a larger context. Here’s what I see:
- Strategy comes before priorities. If we’re shifting focus from SMBs to enterprises, then we’ll make one set of product choices; if from enterprises to SMBs then very different choices. Cash cow products get less investment; early adopter offerings get more. We choose which audiences matter most. Any numerical ranking should be anchored in fundamental product/market strategy.
- There are usually 4 or 5 different kinds of requests in the backlog, each needing an appropriate metric or evaluation model. We can’t easily compare customer-visible-feature development vs. bug fixes/test automation vs. software infrastructure vs. discovery/experimentation vs. bespoke one-off projects for our largest customer. If we judge everything in our queue based on incremental current-quarter revenue potential, we end up with brittle systems and feature glut and declining usability. (“The 1990s called, and they want their user interface back.”)
- It’s not practical (or possible) to do deep analysis and serious valuation of an entire backlog. Even if our executives and stakeholders say they want every submitted idea to get serious scrutiny (market survey plus business impact analysis plus end-user validation with mockups), clock math makes this absurd. That demand often rests on the (misguided) idea that quick-and-dirty ROI SWAGs for hundreds of backlog items will convert a strategic problem into a mathematical one, eliminating human insight and hard trade-offs. But investing 8 hours each into 200 ideas is a full year of product management time – even before we draft our first epic or formally assign a team to build something. And typically tickets/requests arrive faster than we can review the existing ones, so we never catch up. (The first sketch after this list runs the numbers.)
- Confusion between precision and accuracy. Most prioritization schemes use several inputs, each of which is semi-quantitative and has huge error bars. For example, if we’re doing a first-pass assignment of incremental revenue and development costs, it’s important to remember that these are wild guesses (at best). We can reformat a spreadsheet to show 6 decimal places, but €4.238159M ± €3M is indistinguishable from €3.922867M ± €3M. T-shirt sizing vaguely separates Smalls from Mediums and Larges, but doesn’t help us force-rank the Smalls. (The second sketch below shows just how completely those error bars overlap.)
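To make the clock math concrete, here’s a minimal back-of-the-envelope sketch. The 8-hour reviews and 200-item backlog come from the bullets above; productive hours per week and the idea arrival rate are assumptions I’ve picked for illustration.

```python
# Clock math for "seriously review the whole backlog."
IDEAS_IN_BACKLOG = 200
HOURS_PER_SERIOUS_REVIEW = 8        # market survey + impact analysis + mockups
PRODUCTIVE_HOURS_PER_WEEK = 30      # assumption: after meetings and escalations

review_hours = IDEAS_IN_BACKLOG * HOURS_PER_SERIOUS_REVIEW      # 1,600 hours
review_weeks = review_hours / PRODUCTIVE_HOURS_PER_WEEK         # ~53 weeks
print(f"~{review_weeks:.0f} weeks to seriously review today's backlog")

# Worse: if ideas arrive faster than we can review them, we never catch up.
NEW_IDEAS_PER_WEEK = 6                                          # assumption
reviewed_per_week = PRODUCTIVE_HOURS_PER_WEEK / HOURS_PER_SERIOUS_REVIEW
if NEW_IDEAS_PER_WEEK > reviewed_per_week:
    print("Arrival rate exceeds review rate: the queue only grows.")
```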
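And a companion sketch of precision vs. accuracy, using the ±€3M error bars from the example above:

```python
# Two revenue guesses that differ only at the decimals (figures in €M),
# each carrying the ±€3M uncertainty from the example above.
est_a, est_b, error = 4.238159, 3.922867, 3.0

a_low, a_high = est_a - error, est_a + error
b_low, b_high = est_b - error, est_b + error
overlap = min(a_high, b_high) - max(a_low, b_low)

print(f"A: €{a_low:.1f}M to €{a_high:.1f}M")    # €1.2M to €7.2M
print(f"B: €{b_low:.1f}M to €{b_high:.1f}M")    # €0.9M to €6.9M
print(f"Intervals overlap across €{overlap:.1f}M of a €{2 * error:.0f}M span")
# Ranking A above B on the 6th decimal place is precision without accuracy.
```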
What To Do?
There are various ways to break down prioritization before we try to apply rankings or algorithms. A few favorites:
- Allocate fractions of the development budget for various kinds of work. Before we consider assorted demands for specific bug fixes or partner integrations, let’s set a budget for all bug fixes or all partner integrations. (You’ll need unified product and engineering leadership with strong political skills to sell this at the executive level.)
That lets some stakeholder groups help us prioritize the areas they care most about. A productive discussion with Customer Support/Success might include “we have 35 story points each sprint for bug fixes. Which 2-3 bugs would you put first?” With Engineering, we might try “where could we put 5 developer days against refactoring that would have the highest impact on velocity?” (The budgeting sketch after this list shows the mechanics.)
- Apply some quick, old-fashioned, qualitative guesswork to pick a dozen or two items from your infinite backlog. We generally have enough horse sense to grab 3 sprints’ worth of stuff as our choice set for next sprint. Or apply Teresa Torres’ Opportunity Solution Tree model.
- Then try a “count the digits” approach to value estimation. Timebox 5-10 minutes per item on your reduced list for an order-of-magnitude value SWAG. This intentionally avoids deep analysis, instead guesstimating how many digits are likely on the cost and benefit sides. Does this new product have $10k or $100k or $1M or $10M potential? Do we ballpark the implementation effort at 2 weeks * 3 people (= $20k), or 10 weeks * 5 people (= $200k), or 50 weeks * 30 people (= $5M)? (See the digit-counting sketch after this list.) That avoids delaying a $10M opportunity for a $50k interrupt.
- From there, we can put most of our brain power and discovery against a handful of candidates: going deep and considering lots of strategic implications. We can devote a few weeks of serious exploration to each of 4 options – not possible for 200 items.
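Here’s a minimal sketch of the budgeting idea from the first bullet. The category names and percentages are mine, purely illustrative; every product will slice this differently.

```python
# Allocate fractions of each sprint's capacity by kind of work *before*
# debating individual tickets. All numbers are illustrative assumptions.
SPRINT_POINTS = 100

ALLOCATION = {
    "new features":         0.40,
    "bug fixes":            0.20,  # Support/Success force-ranks the top bugs
    "infrastructure":       0.20,  # Engineering picks the refactoring targets
    "discovery":            0.10,
    "partner integrations": 0.10,
}

for kind, fraction in ALLOCATION.items():
    print(f"{kind:22s} {round(SPRINT_POINTS * fraction):3d} points this sprint")
# Each stakeholder group then prioritizes within its own budget, rather than
# competing against every other kind of work for the same points.
```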
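And a sketch of the “count the digits” SWAG. The per-person-week cost is an assumption chosen to roughly match the ballparks above (2 weeks * 3 people ≈ $20k), and the function names are hypothetical.

```python
import math

PERSON_WEEK_USD = 3500  # assumed loaded cost; 2 weeks * 3 people ≈ $21k

def magnitude(amount_usd: float) -> int:
    """Order of magnitude: $10k -> 4, $100k -> 5, $1M -> 6, $10M -> 7."""
    return math.floor(math.log10(amount_usd))

def swag(value_usd: float, weeks: float, people: int) -> str:
    """Compare value and cost by digit count, not by decimals."""
    cost_usd = weeks * people * PERSON_WEEK_USD
    if magnitude(value_usd) > magnitude(cost_usd):
        verdict = "worth deeper discovery"
    else:
        verdict = "same ballpark: needs judgment, not decimals"
    return f"value ~10^{magnitude(value_usd)}, cost ~10^{magnitude(cost_usd)}: {verdict}"

# The $10M opportunity vs. the $50k interrupt from the bullet above:
print(swag(10_000_000, weeks=10, people=5))  # cost ~$175k: pursue it
print(swag(50_000, weeks=2, people=3))       # cost ~$21k: marginal
```

A two-digit gap between value and cost is a clear signal; anything closer deserves human judgment rather than more spreadsheet precision.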
At some point, you’ll want to apply your favorite portfolio analysis tool: there are at least 20 techniques, including Kano, Speedboat, and Theme Scoring. Or Steve Johnson’s IDEA/E. Go wild!
Sound Byte
Prioritization happens in a broad organizational and market context. We’re usually not successful applying algorithms or ranking schemes until we’ve addressed some of those bigger questions.