
Three rules to pick a primary experiment metric


Today, I’m going to share three rules for picking a primary metric to use in experiments.


A primary metric is the main focus of an experiment. It’s what you and others look at to decide whether the changes you’re making are a success, or a flop. So it’s crucial to pick the right one.


Unfortunately, most teams aren’t given much guidance on how to pick one. In fact, “what should our primary metric be?” is one of the most common questions I get when I start to work with new product and growth teams.


Your primary metric is your guiding star, so pick well!


So why is this important choice so difficult? Well, in my experience, many teams struggle with at least one of these:


  • Not knowing the purpose of a primary metric

  • Wanting to feel in control of outcomes

  • Focussing too much on downstream financials


Fortunately, these are common problems and I’ve been through all of them. So let’s dig into the three rules I recommend following to pick your first great primary metric:


Rule 1: Stick to one!

Imagine being on a rocket to the moon and hearing the pilots debate which direction to go.


This represents the most common problem I see: teams using all sorts of primary metrics.


It’s just like everyone having different ideas of what matters to grow a business, and so going in many different directions.


The purpose of a primary metric is to be a common guide for your teams and business. Something that aligns everyone on the direction they should be going.


The gold-standard term for one metric that defines this one direction is the Overall Evaluation Criterion (OEC), popularised by experimentation legend Ronny Kohavi at Microsoft:



To share a concrete example of my own: I worked with a growth org where many teams were focussed on all sorts of optimisations to drive better retention. During my problem-discovery work, I learned that one of the biggest complaints was teams “interfering with each others’ goals.” Odd, given they were all meant to be driving retention?! It turned out they were all using different primary metrics that were various proxies for retention, and sometimes at odds with each other. Working with some brilliant data scientists on the team, we helped everyone rally behind a single primary metric (one that met rules 2 and 3 too).


Looking back over 6 months of experiments, we saw that growth org achieve a whopping 1.5x increase in the impact they were having on this retention metric.


So Rule 1? Pick one!


Rule 2: Make it a “North Star”


OK, you should pick one, but what should it be?


Now, everyone has different definitions but, for this context, my go-to is Amplitude’s North Star Metric framework. Amplitude’s VP of marketing, Sandhya Hegde, summarised it nicely in this article, where this screenshot comes from:


Summary of what makes a good North Star metric from Amplitude's framework

If you’re working on your primary metric, take a few minutes to read that article and learn more. It’ll be well worth it!


Just be sure that whatever you come up with fits the context of experimentation (usually by making it something that can be computed per user).


An example from my time working at a major e-commerce site: everyone rallied behind a metric, “Net purchasers.” This was measured by the number of experiment participants who made at least one purchase (that wasn’t later cancelled within the experiment runtime). It was a perfect example of a North Star metric because it:


  1. It measured customers finding value by making a purchase (that wasn’t cancelled).

  2. It represented the core product strategy of helping anyone find at least something they wanted (relevant to that business’s context).

  3. It was a clear leading indicator of revenue.
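To make the per-user idea concrete, here’s a minimal sketch of how a “Net purchasers” count per experiment variant could be computed. This is my own illustration, not the e-commerce site’s actual pipeline — the field names (`user_id`, `variant`, `cancelled`) are hypothetical:

```python
# Hypothetical sketch: count "Net purchasers" per experiment variant.
# A participant counts if they made >= 1 purchase that wasn't cancelled
# within the experiment runtime.

from collections import defaultdict

def net_purchasers(participants, purchases):
    """participants: dict of user_id -> variant.
    purchases: list of dicts with keys 'user_id' and 'cancelled' (bool).
    Returns a dict of variant -> number of net purchasers."""
    # Users with at least one non-cancelled purchase
    purchasers = {p["user_id"] for p in purchases if not p["cancelled"]}

    counts = defaultdict(int)
    for user_id, variant in participants.items():
        if user_id in purchasers:
            counts[variant] += 1
    return dict(counts)

participants = {"u1": "control", "u2": "control", "u3": "treatment"}
purchases = [
    {"user_id": "u1", "cancelled": False},
    {"user_id": "u2", "cancelled": True},   # cancelled, so u2 doesn't count
    {"user_id": "u3", "cancelled": False},
]
print(net_purchasers(participants, purchases))
# → {'control': 1, 'treatment': 1}
```

Note the metric is binary per user (purchased or not), which keeps it simple to explain and robust to a few users making many purchases.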


So Rule 2? Make it a “North Star!” (well, like an Amplitude North Star 😊)


Rule 3: Keep it Simple


You’ve picked one, it fits the bill of a "North Star", but there’s one last thing to remember:


Make it simple.


It’s natural that, after trying to capture everything you want in one metric while meeting all the criteria of a North Star, teams end up with some pretty complicated metrics.


Now, it can be OK if your business has been experimenting for years and iterated towards something complicated. For example, I’ve worked with a number of companies to introduce machine-learning based primary metrics.


However, for most businesses, you want to keep it super simple, because simplicity enables understanding, trust, high-quality usage, and good decisions. Complexity degrades all of these.



To quickly check if you have a simple metric, tell a few people the name of your metric. Then ask them these questions and see if the answers are fairly easy:


  • “How do you think it’s calculated?”

  • “What do you think it means if it goes up or down?”

  • “Is there a change we care about that it might miss?” (maybe slightly harder)


If your team can easily explain how the metric is calculated, what it means when it goes up or down, and the edge cases it will miss (and when they might need to make decisions that don’t align with results), then you’re in a good place.


If not, it’s best to simplify things, even if it means picking a metric you think is a little less desirable. It’s better to know the gaps of a simple but imperfect metric than to have no idea what’s going on with a more complicated one.


Great Primary Metrics Get Teams Rowing Together


At the end of the day, the purpose of a primary metric is to help many teams row faster in the same direction. So follow these three rules if you want your primary metric to help your teams achieve a lot, while your competitors fire off in all sorts of directions:


  1. Pick one

  2. Make it a “North Star”

  3. Keep it Simple


Happy metric mining and go smash those goals!


If you found this blog post useful, subscribe below to be notified of the next one 👇

Until next time, thanks for reading! 👋
