Towards the end of 2018, I found myself struggling with the new demands of my job - for the past four years, I had been operating in a relatively simple world, with a single major customer and a small engineering team (~25 engineers).
Six months earlier, I had started leading product management for a bigger team with ~100 major customers and ~200 engineers. While I felt like we had been doing an okay job and were prioritizing the right things, I knew we could do a lot better. Additionally, at this scale, it was becoming difficult to communicate how the prioritization process worked.
So over the past few months, I've been thinking about how to build a loosely-democratic, technocratic, and believability-oriented system for deciding which projects to prioritize over time.
Note that this system may not be ideal for other organizations or other product managers: I'm a hybrid Business / Technical Product Manager per Stripe's description, so it's clearly biased by those perspectives. Our operating environment is also probably quite different from most. It's characterized by a substantial disconnect between users and payers, which leads to a lot of indirection between value creation and value capture; large per-client integration teams, which lead to indirection between user feedback and the product team; and very discontinuous revenue growth, given the relatively small number of large-scale (~1M ARR) customers.
Effectively, there are two models: a cost-oriented model driven primarily by engineering, and a value-oriented model driven primarily by sales. The goal of the system is to order a list of projects by their business value, and then achieve maximum throughput and long-term velocity from the engineering team against those projects.
A visualization of the current value model is represented in this prioritization matrix:
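To make the value model concrete, here's a minimal sketch of that matrix reduced to code: per-account votes multiplied by a strategic weight per account, summed into a single business value vector. The accounts, projects, votes, and weights below are invented for illustration, and the real inputs are richer than a couple of dicts.

```python
# Hypothetical sketch: combine per-account feature votes with the
# business's strategic weight for each account into one value vector.
ACCOUNT_VOTES = {
    # account -> {project: votes from that account}
    "acme":    {"reporting_api": 5, "sso": 3, "audit_log": 1},
    "globex":  {"reporting_api": 2, "sso": 4},
    "initech": {"sso": 1, "audit_log": 5},
}

STRATEGIC_WEIGHTS = {  # how heavily the business weights each account's votes
    "acme": 3.0,
    "globex": 1.5,
    "initech": 1.0,
}

def business_value_vector(votes, weights):
    """Weighted sum of per-account votes, producing one value per project."""
    value = {}
    for account, projects in votes.items():
        w = weights.get(account, 1.0)
        for project, v in projects.items():
            value[project] = value.get(project, 0.0) + w * v
    return value

print(business_value_vector(ACCOUNT_VOTES, STRATEGIC_WEIGHTS))
# -> {'reporting_api': 18.0, 'sso': 16.0, 'audit_log': 8.0}
```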
On the cost side, the projects are arranged in a dependency graph modeled after the concept of a tech tree, a game mechanic common in many strategy games. Currently, dependencies are firm, though I've considered adding bonuses and soft dependencies in the future to better express the distinction between MUST and SHOULD dependencies. This graph is expressed as a DOT file, which produces an intuitive visualization, with green nodes representing active projects, grey nodes representing completed projects, and yellow nodes representing available projects. Each project is given a rough cost on a Fibonacci scale.
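For illustration, here's a rough sketch of how such a graph might be kept as data and emitted as DOT, using the coloring convention above. The project names, costs, and statuses (including the "planned" placeholder) are invented, and this isn't necessarily how the real file is maintained.

```python
# Hypothetical sketch: the tech tree as data, rendered to DOT.
# Node color follows the convention above: green=active, grey=completed,
# yellow=available (all dependencies active or completed).
PROJECTS = {
    # name: (Fibonacci cost, status, [firm dependencies])
    "storage_layer":  (8,  "completed", []),
    "event_pipeline": (13, "active",    ["storage_layer"]),
    "reporting_api":  (5,  "planned",   ["event_pipeline"]),
    "sso":            (8,  "planned",   ["storage_layer"]),
    "audit_log":      (3,  "planned",   ["storage_layer"]),
}

COLORS = {"active": "green", "completed": "grey", "available": "yellow"}

def effective_status(name):
    _, status, deps = PROJECTS[name]
    if status == "planned" and all(PROJECTS[d][1] in ("active", "completed")
                                   for d in deps):
        return "available"
    return status

def to_dot(projects):
    lines = ["digraph roadmap {"]
    for name, (cost, status, deps) in projects.items():
        color = COLORS.get(effective_status(name), "white")
        lines.append(f'  "{name}" [label="{name}\\ncost={cost}", '
                     f'style=filled, fillcolor={color}];')
        lines.extend(f'  "{dep}" -> "{name}";' for dep in deps)
    lines.append("}")
    return "\n".join(lines)

# Render with Graphviz: python roadmap.py | dot -Tsvg -o roadmap.svg
print(to_dot(PROJECTS))
```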
A visualization of the current cost model is represented in this dependency graph:
Then, the value vector is applied to the graph, and value is propagated through the dependencies to sort all of the available projects (projects where all of the dependencies are either active or completed). I've played around with various algorithms - the one I was happiest with rolls the value back to all parents equally rather than splitting it, and doesn't dissipate. This seems to result in the most intuitive ranking, but without distinguishing between MUST / SHOULD dependencies it can end up overvaluing infrastructure projects - dissipation can help with this, at the cost of losing expressivity in very deep trees.
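Here's a rough sketch of that propagation, using the same invented projects as above. It takes the description literally: each project's value is added in full to every project it depends on, directly or transitively, with no decay, and available projects are then sorted by the rolled-up total. Treat it as a toy model rather than the production implementation, which has to care about cost, ties, and messier graphs.

```python
from functools import lru_cache

# Hypothetical sketch of the roll-back propagation: every project's value is
# added in full to each of its parents (no splitting), with no dissipation,
# and available projects are then sorted by their rolled-up value.
DEPS = {
    # project -> the projects it depends on (its parents in the tech tree)
    "storage_layer":  [],
    "event_pipeline": ["storage_layer"],
    "reporting_api":  ["event_pipeline"],
    "sso":            ["storage_layer"],
    "audit_log":      ["storage_layer"],
}

VALUE = {  # business value vector, e.g. the weighted account votes above
    "storage_layer": 0, "event_pipeline": 0,
    "reporting_api": 18.0, "sso": 16.0, "audit_log": 8.0,
}

STATUS = {"storage_layer": "completed", "event_pipeline": "active",
          "reporting_api": "planned", "sso": "planned", "audit_log": "planned"}

@lru_cache(maxsize=None)
def rolled_up_value(project):
    """Own value plus the rolled-up value of everything that depends on it."""
    dependents = [p for p, deps in DEPS.items() if project in deps]
    # Each dependent's rolled-up value is added in full to every one of its
    # parents rather than split among them, and nothing decays with depth;
    # this is exactly why deep infrastructure nodes can end up overvalued.
    return VALUE[project] + sum(rolled_up_value(p) for p in dependents)

def available(project):
    """All dependencies are either active or completed."""
    return STATUS[project] == "planned" and all(
        STATUS[d] in ("active", "completed") for d in DEPS[project])

ranked = sorted((p for p in DEPS if available(p)),
                key=rolled_up_value, reverse=True)
print([(p, rolled_up_value(p)) for p in ranked])
# -> [('reporting_api', 18.0), ('sso', 16.0), ('audit_log', 8.0)]
```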
So the result is a series of value vectors for the projects:
- Vectors representing the votes from each individual account
- A vector which represents the business' strategic weights applied to the per-account votes
- A graph-propagated value vector that takes into account engineering cost and cross-project dependencies
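A simplified sketch of how cost can fold into that last vector is to rank available projects by rolled-up value per Fibonacci cost point. The numbers carry over from the toy examples above, and the single division is illustrative rather than definitive.

```python
# Hypothetical sketch: fold Fibonacci cost into the graph vector by ranking
# available projects on rolled-up value per cost point. The rolled-up values
# come from the propagation sketch above.
ROLLED_UP = {"reporting_api": 18.0, "sso": 16.0, "audit_log": 8.0}
COST      = {"reporting_api": 5,    "sso": 8,    "audit_log": 3}

graph_vector = sorted(((p, ROLLED_UP[p] / COST[p]) for p in ROLLED_UP),
                      key=lambda pair: pair[1], reverse=True)
for project, score in graph_vector:
    print(f"{project:15s} {score:.2f}")
# reporting_api   3.60
# audit_log       2.67
# sso             2.00
```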
Having these three vectors makes it really easy to talk with a wide array of stakeholders, and has made it a lot easier to discuss why we're doing what we're doing across the company. Additionally, when the graph vector gets wildly out of order versus the business vector, we can look hard at the graph to try to figure out cleverer paths to get to the value more cheaply.
Additionally, for staffing, I've found that people tend to find it meaningful to flow through the graph: each project moves naturally into the next, creating a strong professional growth narrative and helping project teams retain context and mentor new teammates.
More than anything, the model is transparent - we spend a lot more time talking about the inputs and the framework than about the outputs, yielding much more substantive conversations.
Some limitations of the model that I'd like to fix:
- Doesn't take into account constellation building and project complexity. I've considered adding a complexity metric to enumerate projects which need a delicate and fragile constellation of team members in place to ensure success.
- This model doesn't express time value or cost of delay particularly well, and is best suited for peacetime rather than wartime.
- Most people don't understand it - there's a fundamental tension between accuracy, legibility, and simplicity. I probably could have made it simpler, but I opted for a more accurate, less immediately legible model in order to express the factors I found most critical.