South Star Metrics
Misused metrics have resulted in some of the worst UX decisions ever.
We’ve all heard the term “north star metric”: the overarching measure a product team is supposed to move, because it ties directly to business objectives and/or customer value. But sometimes over-rotating on a single number produces an anti-pattern with unintended side effects - I refer to this scenario as a “south star metric”.
Let me walk you through an example.
During my Amazon tenure, I spent a lot of time on hiring - at my peak I was doing 5 phone screens and 1 onsite a week…for a solid year (topic for a future post). You reach a point where you have canned questions and all the answers start to sound the same (not in a bad way - thanks to the interview-prep industry, many candidates simply know how to give a good-but-not-contentious answer). One of my pet questions:
“walk me through a time when you had a decision to make, and the data in your hands didn’t match the intuition in your head - how did you reconcile?”
My rationale for asking this question:
get a baseline for whether the candidate is data-driven / informed / aware
get a sense for whether the candidate has honed product intuition
get a flavor for any decision-making frameworks the candidate uses
get an idea of how the candidate communicates with stakeholders
Getting back to the story…one of the most amazing responses to this question came from a Microsoft PM. The candidate had been on the Windows Update team, and their goal was to increase the % of Windows machines running the latest version of the OS. The # had historically hovered around 80% no matter what the team did, and there was a strong desire to get it over 90% (the theory being that machines on the latest software were less susceptible to security exploits, less likely to report bugs, etc.). The prevailing approach had long been to prompt the user to install the pending update (either right away or at a scheduled time) - but a pesky 20% just wouldn’t. So…
“we decided to just stop asking users!”
I nearly fell out of my chair. I had in front of me one of the masterminds behind one of the most annoying, feature-not-a-bug productivity issues in corporate life - the dreaded reboot while you’re in the middle of typing. When I prompted the candidate to explain how the team had reconciled the experience trade-off, the response was “well, the metric”.
South. Star. Metric.
This is just one instance of metrics misuse to the point of counter-productivity. Some other scenarios I’ve seen:
business metrics negatively affecting customer experience (e.g. ad load goes up, CSAT goes down)
product team demoralized from inability to impact GTM metrics (e.g. revenue is a function of demand pipeline / sales execution and not just product value)
boosting one part of a chain of metrics without thinking end-to-end (e.g. running a campaign for signups with no corresponding effort to lift conversion - see the sketch after this list)
the metric is so urgent that the team is dinged for spending cycles on work that doesn’t move the needle (e.g. exploratory workstreams that pay off down the road)
metrics that are inconsequential when viewed through a broader lens (e.g. optimizing perf of technology X while all new development happens on Y)
metrics that are nonsensical because they destroy the business (e.g. reducing the # of production incidents / rollbacks by imposing a moratorium that drives deploys to ~0)
metrics for 2 different products that are incongruent (e.g. time on site is at odds with search results quality, yet many sites try to optimize both at the same time)
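To make the chain-of-metrics point concrete, here’s a minimal sketch - all the visitor counts and rates below are made up purely for illustration - showing how a signup campaign can double a top-of-funnel metric while the end-to-end outcome doesn’t budge:

```python
# Hypothetical funnel numbers, purely illustrative - not real data.

def paying_customers(visitors: int, signup_rate: float, conversion_rate: float) -> int:
    """End-to-end outcome: visitors -> signups -> paying customers."""
    signups = visitors * signup_rate
    return int(signups * conversion_rate)

# Baseline: 100k visitors, 10% sign up, 5% of signups convert to paid.
baseline = paying_customers(100_000, 0.10, 0.05)   # 500 paying customers

# Campaign doubles signups, but the extra signups are lower-intent,
# so conversion halves. The "signups" metric looks great...
campaign = paying_customers(100_000, 0.20, 0.025)  # ...still 500 paying customers

print(baseline, campaign)  # 500 500 - the needle that matters didn't move
```

The campaign “worked” by its own metric, yet the business outcome is unchanged - which is exactly why the number you optimize needs to sit at the end of the chain, not the middle.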
These are all topics for future exploration, and each has a war story (of course) behind it. By the way, none of this is meant to imply that data-driven product development is suspect - but when setting north star metrics, it’s critical that they:
exist for a long enough time horizon
connect business <-> customer value
account for the end-to-end journey
I’d love to hear from readers about their own experiences with south star metrics - please chime in via comments👇🏽. And if you enjoyed this post, please consider subscribing.
further reading / references
an overview of north star metrics from Amplitude
the pitfalls of north star metrics / more ways they can lead you astray
Goodhart’s Law: when a measure becomes a target it ceases to be a good measure
Brian Balfour has a solid list of common mistakes in defining metrics
childish drawing / interpretation