

Metrics Malfunction
Six months ago I wrote about South Star Metrics (i.e. bastardized North Star Metrics), and an interesting question came up about it on Twitter:
There are numerous constraints, pressures, and incentives surrounding product teams, and they can conspire to drive this well-intentioned but dysfunctional behavior. Let me count the ways…
Many teams are on a journey to be more rigorous, but they sometimes view being data-* (data-informed, data-driven, data-influenced) as the end goal. The introduction of a metric is viewed as the first and last step on that journey, the completion of a transformation from not being data-* to being data-*. The reality is that it takes time to reset a charter around a metric, and reps to realign the team’s operating model.
Other teams allow outside factors to determine their metrics, as opposed to their unique (and more comprehensive) understanding of their actual and prospective users. The “outside factors” can range from internal leadership who are not privy to the team’s domain knowledge, to external competitors whose choices influence thinking. If you think of a product team as a learning entity, not leveraging that learning when constructing metrics is silly.
Another latent but recurring issue is the complexity of data management in any organization of meaningful size. The accuracy, quality, completeness, and cost of the data required to track a metric are serious considerations. Some teams simply can’t instrument what they’d like to, so they settle for proxies or for what’s possible.
Finally, there is significant change management overhead whenever you introduce / update a metric. There are many organizational hurdles for a metrics champion to overcome, including inertia, stubbornness, and disbelief. Sometimes you’re just forced to pick something because of process deadlines (e.g. annual planning or quarterly review). It’s only natural to over-index on metrics that might be easy to understand (legibility) and rally around (synchronicity).
To recap, I see 7 root causes of metrics malfunction:
lack of rigor
time pressure
desire for simplicity
copying competition
change management
top-down directives
instrument-ability
Again, falling into one of these traps doesn’t mean a team is malfunctioning - it just means they’re not done iterating.
I’d love to hear from readers about their own experiences with avoiding / succumbing to metrics malfunction - please chime in via comments👇. And if you enjoyed this post, please consider subscribing.
further reading / references
Goodhart’s Law: when a measure becomes a target it ceases to be a good measure
my prior post on South Star Metrics and how metrics misuse affects UX / CX
I’ve talked about legibility and synchronicity before wrt product strategy
an interesting read on the “curse” of success metrics and how they jinx learning
a team’s relationship with metrics evolves with experience