I’m at a startup and I’ve been asked to define the metrics strategy for our product. What questions should I be asking to define our metrics strategy and what tips can you share so I avoid the common pitfalls?
Answer from Eleanor Stribling, Group Product Manager at Zendesk
When you’re defining a metrics strategy for a startup, your three main challenges are Time, Resources, and Clarity.
- Time, because unless your startup is incredibly well-funded, there will be a thousand things to do at any moment for any member of the team, and there’s a lot of pressure to move quickly to crank out product. Incorporating metrics into your process takes time and reflection.
- Resources, because even if you find great software to help you track your metrics, you may need engineering time to add tracking to your application or to pull data. Oftentimes, the ability to generate detailed reporting isn’t a high priority for founding teams.
- Clarity, because figuring out what’s important to measure in product and how it relates to the company’s overall goals can be difficult. Your ability to achieve clarity depends a lot on the way your executive team and the Board have defined success, how much that definition changes from quarter to quarter, and how fully formed your roadmap and key performance indicators (KPIs) are.
There’s also another aspect to the Clarity challenge that isn’t unique to startups: figuring out how to map the metrics to product features and linking them to outcomes. It’s never an exact science, but when time and resources are tight, and you’re pivoting, it’s extra tricky. Be prepared to review frequently and be flexible in your approach.
As you define your strategy, it helps to keep these constraints in mind because you will have to make conscious trade-offs between them.
What questions should I be asking to define our metrics strategy?
Understanding how the executive team and product leadership define success helps ensure that you identify product metrics that will resonate with them and help drive resource allocations and decision-making. Here are some questions you can ask them:
- What are the three most important indicators of success for the business for the next two quarters? Business plans can change quickly in a startup, so it’s useful to start with shorter timeframes.
- How does product factor into these metrics? Product success is often closely tied to company metrics, but it’s important to understand the context and what you can influence. Let’s say your startup wants to increase revenue by 25% in the next six months. To do that you will need new customers. If the big feature you’re working on is considered important to securing some or all of these new accounts, it’s tied to the company goals.
- If there are important, product-specific metrics, can they be rolled up into the company’s key metrics? Product-specific metrics are things like the uptime of your application or how responsive you are to feature requests or bug reports from customers. Going back to our earlier example of the goal to increase revenue by 25%, does the executive team consider the product-specific metrics a contributing factor? If they do, report them as an input to the revenue goal. If they don’t, keep tracking, but that information might be most useful only to product and engineering.
- Is it important to you to prioritize engineering work based on these metrics, or is there room for other indicators? This information will help you determine if you should be focused mainly on driving completion of the features that will bring in revenue or if you can free up some time to work on technical projects and small improvements.
You will also need to understand what data is available, how easily that data can be retrieved, and what metrics can be derived. Engineers and/or data analysts should be able to help you answer these questions:
- What can we measure today, and how easy is it to access that data? For example, can anyone pull data with some minimal training or is that only available to engineers? The answers to these questions will help you identify the most realistic data options you have today.
- What would it take to add more metrics or change how they are calculated? If company goals shift every quarter or two, will it be expensive in person-hours or cash to change the metrics you’re using? This information will help prepare you to answer questions from your executive team and make recommendations if the company goals change.
- Do we have documentation of what different data fields mean? If not, what would it take to create and maintain documentation? Data can be complicated; maybe your Database Administrator didn’t come up with intuitive field names, and people might have questions about how you got that number. Documentation helps build a common understanding of what the metrics mean, ensures that not just one person can interpret them, and adds credibility to the data you use.
Together, the answers to these two sets of questions will help you decide how best to work within the constraints of time, resources, and clarity to arrive at a useful and workable metrics strategy.
What should go into the metrics strategy plan itself?
If you need to document your plans, create a word processing doc or slide deck with the following headers:
- Goals. In a couple of sentences, explain why you are collecting this data and what you intend to use it for.
- Metrics. Populate a table with the name, definition, purpose, and calculation methodology for each metric on your list.
- Key people. Enumerate the people who will pull, analyze, consume, and maintain this data.
- Sharing. Outline how frequently this data will be updated, where it can be found, and who is allowed to access it.
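The plan itself can live in a doc or deck, but it can help to keep the Metrics table in a structured form so it’s easy to render and keep consistent. Here is a minimal sketch in Python; the metric names, definitions, and formulas are hypothetical examples, not prescriptions.

```python
# A hypothetical Metrics table expressed as data, so it can be rendered
# into a doc or slide and kept in version control. All entries below are
# illustrative placeholders.
metrics_plan = [
    {
        "name": "Trial-to-paid conversion",
        "definition": "Share of trial accounts that purchase a plan",
        "purpose": "Leading indicator for the quarterly revenue goal",
        "calculation": "paid_conversions / trial_signups",
    },
    {
        "name": "Weekly active accounts",
        "definition": "Accounts with at least one session in the past 7 days",
        "purpose": "Health check on engagement across the customer base",
        "calculation": "count(distinct account_id, last_7_days)",
    },
]

def render_table(rows):
    """Render the plan as a simple plain-text table for sharing."""
    lines = []
    for row in rows:
        lines.append(f"{row['name']}: {row['definition']} "
                     f"(why: {row['purpose']}; how: {row['calculation']})")
    return "\n".join(lines)
```

Keeping the table as data means the same source feeds your doc, your dashboard labels, and your documentation of field meanings.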
You will need to work these measures into your product development process, ideally at the specification development stage. Not every initiative will drive every metric, but having a clear plan for how metrics are used at a high level will help you determine the specifics of each project.
What tips can you share so I avoid the common pitfalls?
All startups are different, but there are a few things you can do in almost every situation to make sure your metrics strategy has staying power:
- Keep it simple and easy. As much as possible, use metrics that aren’t too arduous to obtain or maintain. Most startups don’t have the resources to spend hours pulling and analyzing data.
- Set expectations about reporting content and timing. You don’t want the task of creating these reports to take over your life.
- Get executive buy-in. Make sure the executive team is on board with what you are measuring and what you are not.
- Have a point of view on what’s important going into your meetings with other people in the company. You want inputs and direction, not someone else to write the plan for you!
Defining a metrics strategy may seem daunting, especially when you’re faced with the constraints of a startup environment. However, the process of defining the strategy is in itself a valuable exercise because it forces the executive, product, and engineering teams to come to a common understanding and get clear alignment on the company’s priorities, the business strategy, and the product roadmap. The metrics strategy is useful only if it helps you measure the things that actually matter.
I hope the process and tips I’ve outlined above help you define and execute your startup’s metrics strategy with confidence. Good luck!
Answer from Sara Nofeliyan, Senior Product Manager at Varo
A Metrics Strategy should start with the following components: a definition of success, a way to instrument your product, and a rubric for evaluating the product’s performance. I describe each step below so you can define a strategy from scratch, and close with a few pitfalls to avoid.
Use your mission to define success
Use your company’s mission to guide the Why of what you’re doing, and use that guidance to inform What you’re trying to achieve. A clear understanding of the Why simplifies the task of defining success when you look at the behaviors and actions that can be tracked.
For example, if your company’s mission is to create a world with no stray dogs, then your key action is placing dogs in their forever homes. You can also try this sentence structure: We want to increase the adoption of (feature) to/by (x rate) because we know that users who adopt this are (y times) more likely to (receive z benefit that ties to our mission). This statement turns your key actions into a key performance indicator (KPI) that is specific, measurable, attainable, realistic, and timely (SMART).
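The sentence template above can be translated directly into a measurable check. Below is a minimal sketch for the dog-adoption example; the numbers and the 75% target are made up for illustration.

```python
# Hypothetical numbers for the dog-adoption example: turn the SMART
# sentence into a rate you can compute and a target you can check.
def adoption_rate(completed_adoptions, started_applications):
    """Share of started adoption applications that complete."""
    if started_applications == 0:
        return 0.0
    return completed_adoptions / started_applications

current_rate = adoption_rate(completed_adoptions=120,
                             started_applications=200)  # 0.6
target_rate = 0.75  # "increase adoption of the feature to 75%"

on_track = current_rate >= target_rate  # False: 0.6 < 0.75
```

The value of the exercise is that "success" stops being a feeling and becomes a number with a target and a deadline.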
Instrument for a clear view
Instrumenting, monitoring, and measuring your product’s KPIs is critical to your metrics strategy. To gauge your success in achieving the example mission to create a world without stray dogs, you want to have a measurable event for a pet’s adoption.
There could be a few key actions (or data about the actions) to measure — such as the start and completion of the adoption process or the total number of rescue dogs in a facility. Invest the time to instrument products, clearly defining what will be measured and how it will be tested, and create a common language around your KPIs that teams will use cross-functionally.
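A minimal instrumentation sketch of the idea above: emit named events at the key points of the adoption flow, then derive a rate from the event log. The event names and the in-memory log are illustrative; in practice these calls would go to your analytics backend.

```python
# Record named product events, then compute a funnel rate from them.
# The in-memory list stands in for a real analytics pipeline.
from collections import Counter

event_log = []

def track(event, **properties):
    """Record a named product event with arbitrary properties."""
    event_log.append({"event": event, **properties})

# Instrument the two ends of the adoption funnel (hypothetical IDs).
track("adoption_started", dog_id="d-101", user_id="u-1")
track("adoption_started", dog_id="d-102", user_id="u-2")
track("adoption_completed", dog_id="d-101", user_id="u-1")

counts = Counter(e["event"] for e in event_log)
completion_rate = counts["adoption_completed"] / counts["adoption_started"]  # 0.5
```

Naming the events up front, and documenting them, is what creates the common cross-functional language the author describes.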
A share-out of product metrics is always useful, so make your metrics available to team members and stakeholders. Steer clear of lengthy emails or posts in Slack that individuals might miss. Instead, create standardized reports that you can point to during stand-ups, or use a single slide for your all-hands or any product review meeting. The key is to provide a venue for people to ask questions and stay informed.
Regular share-outs of any product output — whether it’s research, wireframes, or metrics — will help you leverage the perspectives and creativity of your full team by creating opportunities for people to react to things in their own way.
With a common language for what success means and with definitions for your key actions, you are likely to see trends in the data. Avoid the temptation to define success based on usage trends that have already surfaced. You can resist this temptation by creating your rubric, or your lens, to evaluate this information up front.
Your rubric should consist of a set of rates or aggregate values that are the closest representation of the means to achieve your mission. Ask questions like: “Does it matter how many users do this or how many times a day users do this? Is it critical to our success that we have a certain critical mass of users adopt this feature? Or is a steady rate of adoption more important in the long term? Why is an ‘active’ cohort defined this way?” After you answer these questions, you can document a clear set of metrics that you will evaluate consistently before, during, and after product launches.
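One way to keep the rubric consistent across launches is to express it as a set of named checks over aggregate values. The checks and thresholds below are hypothetical; the point is that the same lens gets applied every time rather than being reinvented after the data arrives.

```python
# A rubric expressed as named pass/fail checks over a metrics snapshot.
# Thresholds are illustrative placeholders.
def evaluate_rubric(metrics):
    """Apply the same evaluation lens to any snapshot of metrics."""
    return {
        "enough_adopting_users": metrics["weekly_adopters"] >= 500,
        "steady_adoption_growth": metrics["weekly_adoption_growth"] >= 0.02,
        "active_cohort_healthy": metrics["active_7d_ratio"] >= 0.40,
    }

snapshot = {
    "weekly_adopters": 620,
    "weekly_adoption_growth": 0.01,
    "active_7d_ratio": 0.45,
}
results = evaluate_rubric(snapshot)
# Passes on adopters and active cohort, fails on growth rate.
```

Because the rubric is written down before launch, a surprising trend prompts a question ("why is growth below 2%?") instead of a quiet redefinition of success.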
Avoid these pitfalls
In this section, I share some common pitfalls that I’ve encountered — and that you should make an effort to avoid — as you define your metrics.
- Shortsighted or misaligned metrics. To run with the mission to create a world without stray dogs, you might not want to base your success on the number of adoption agreements signed if several owners come back and are not able to keep their new pet for the long-term. Your data may show that three months is the key inflection point that indicates a new pet owner is fully committed to keeping their new pet, so counting the number of adoptions that reach the three-month mark is a far more useful success metric than just the number of adoption agreements signed.
- Dishonest or narrow metrics. I refer to these as “vanity metrics” because they make your product look good at a glance, but the metrics don’t reflect the actual success of your users or of the product. Examples of vanity metrics include cumulative metrics or rates that don’t make sense (e.g., top-level growth without context on marketing spend, or total logins per month when frequent, daily engagement per user is what drives your business).
People rarely set out to create dishonest metrics intentionally. They typically come about when you get comfortable and depend on a never-ending stream of good news. Avoid this trap by finding and using the ratios that align with your primary product use cases and balancing these with a holistic view of your platform.
- Stale metrics. As your product develops and your onboarding techniques, interactions, or even business model changes, you’ll want to stop and recalibrate. For example, as the product matures, the reliability of the service and retention of users may become a steering force for the business, and replace the original KPIs for growth and acquisition.
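The vanity-metric pitfall is easy to see with numbers. Cumulative counts only ever go up, so they always look like good news, while a ratio tied to real engagement can fall at the same time. The figures below are made up for illustration.

```python
# Cumulative totals vs. a per-user ratio over three months.
# Illustrative numbers only.
monthly_logins = [10_000, 12_000, 15_000]     # total logins per month
monthly_active_users = [2_000, 3_000, 5_000]  # users who logged in at all

cumulative_logins = sum(monthly_logins)  # 37,000: always "up and to the right"

logins_per_active_user = [
    logins / users
    for logins, users in zip(monthly_logins, monthly_active_users)
]
# [5.0, 4.0, 3.0]: engagement per user is actually declining
```

The total login count tells a happy story every month; the ratio reveals that each user is engaging less, which is the signal that matters if daily engagement drives your business.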
After you’ve gone through the exercise of aligning your metrics around the mission, instrumenting your platform, and evaluating the product’s performance, you should have an idea of where to begin with feature improvements.
At the end of the day, if your product isn’t serving the needs of its users, then it isn’t achieving its purpose. Your product will change, and sometimes this means you’ll need to iterate on your metrics strategy as well. The intuition you build by creating a metrics strategy is a muscle that can help you make day-to-day decisions, so embrace the iterations.
Good luck and I hope this helps you create a strong, defensible strategy to steer your team!
Answer from Karmel Elshinnawi, Lead Product Manager at Consensys
The best way to begin crafting your metrics strategy is to think about the big picture and identify the most important metrics for the customer, the product, and the company. With the wide availability of tools and software to collect and analyse data, it’s far too easy to collect data without a clear set of goals. By focusing on the big picture, we avoid that trap.
I recommend the following steps as you think about your metrics strategy:
- Define your North Star metrics
- Align on important measurement categories
- Define the input and output metrics
- Use the right tools to get the job done
Define your North Star metrics
North Star metrics are the key set of metrics that measure the success of your product. They provide a mechanism that (a) aligns the entire company on measures that define product success, and (b) communicates the company’s progress towards that goal.
When you define your North Star metrics, keep in mind that they can change over time. The task of defining these metrics is not a one-time exercise.
LinkedIn published a great example of setting North Star metrics that align with their product vision for the Endorsements feature. They describe how they had originally optimized and focused too much on one type of North Star metric (i.e., count metrics, such as total endorsements given, unique endorsers, and unique recipients) and missed defining metrics that assess how well the feature was achieving the product vision (i.e., metrics that measure the quality of the endorsements given).
To quote their paper: “Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience.”
Align on important measurement categories
To ensure that our North Star metrics do a comprehensive job of measuring the success of our product, we must align these metrics on important measurement categories.
While there are many different ways to achieve this alignment, I like to use the HEART framework, which was developed by a research team at Google. With this framework, you can be confident that you have the foundational elements of what you need to measure.
- Happiness: a measure of user attitude or satisfaction.
- Engagement: a measure of how much a user interacts with a product.
- Adoption: the number of new users over a certain time frame.
- Retention: keeping your existing users for x amount of time.
- Task Success: can be broken down into smaller components; for example: time spent on any given task (can the process be improved?).
I find this article about UX metrics to be a great introduction to the HEART framework.
For example, if your North Star is getting people to log their workout sessions three times a week, then you might want to focus on Engagement and Task Success.
Define the input and output metrics
Think of your North Star metrics as the output metrics. Your input metrics are the actions needed to get to that outcome. To develop the input metrics, think about what people would have to do to produce the desired output metrics. The lowest level of input metrics should be something you can develop tests for and track progress against.
I find it very helpful to picture this relationship as a tree: the North Star output metric sits at the top, and the input metrics branch out beneath it as the levers that feed it.
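A small sketch of that relationship in code, using the workout-logging example from earlier. The input metrics and the levers attached to them are hypothetical; the point is that each lowest-level input is something a team can run an experiment against.

```python
# Output metric (the North Star) with the input metrics that feed it.
# Metric names and levers are illustrative placeholders.
north_star = "users logging 3+ workouts per week"

# Each input metric is paired with a lever a team could test directly.
input_metrics = {
    "onboarding_completion_rate": "simplify the setup flow",
    "reminder_opt_in_rate": "ask for notification permission in context",
    "median_log_time_seconds": "reduce taps needed to save a workout",
}

def all_inputs_testable(levers):
    """The lowest-level inputs should each have a lever to experiment on."""
    return all(bool(lever) for lever in levers.values())
```

If an input metric has no lever attached, it isn’t really an input; it’s just another number you watch.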
Use the right tools to get the job done
Now that you have an understanding of what you want to measure, you can start looking for the right tools. There are many tools available, so I recommend doing competitive research and thinking about the engineering effort needed to implement your metrics strategy. Some tools are developer-heavy, some aren’t.
Things to consider as you search for tools:
- Number of data points/month. Tools can get very pricey if you don’t monitor your usage carefully, especially for vanity metrics like page views.
- Funnel tracking. Do they have out-of-the-box funnels you can use immediately?
- User data. What user data can they track? Do you want to track it? How easy is it for you to stop tracking it?
- Auto-tracking capabilities. Does the tool automatically track any data?
- Reporting capabilities. What reports can you generate? Do they provide insights or just numbers?
Once you’ve selected a tool, you will most likely develop an analytics spec sheet to help your engineers measure the right things with the right triggers. A spec sheet is a way for you to communicate to the engineering team what you want to measure and why. Mixpanel offers a sample implementation tracking sheet that you can use as a starting point.
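A spec sheet can also live as structured data rather than a spreadsheet. Here is a minimal sketch of what one row per event might look like; the event names, triggers, and properties are hypothetical, and the exact columns will depend on the tool you chose.

```python
# A hypothetical analytics spec sheet: one entry per event, recording
# what fires it, what properties ride along, and why it matters.
# Engineers implement from this; PMs review it for coverage.
spec_sheet = [
    {
        "event": "workout_log_started",
        "trigger": "User taps 'Log workout' on the home screen",
        "properties": ["user_id", "entry_point", "timestamp"],
        "why": "Top of the logging funnel (input metric)",
    },
    {
        "event": "workout_log_completed",
        "trigger": "User saves a workout entry",
        "properties": ["user_id", "workout_type", "duration_seconds"],
        "why": "Feeds the North Star: 3+ logs per user per week",
    },
]

def missing_fields(rows):
    """Flag spec entries that are missing any required column."""
    required = {"event", "trigger", "properties", "why"}
    return [r.get("event", "?") for r in rows if not required <= r.keys()]
```

The "why" column is the part most spec sheets skip, and it’s the part that keeps engineers from silently dropping an event that looks unimportant.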
A Metrics Strategy is a perfect opportunity to get your team and the product aligned with the goals that are important to the business. If you start with the North Star alignment, you can get everyone working towards metrics that will give you a comprehensive understanding of your product’s user base. Keep in mind, this is an iterative process and your Metrics Strategy will continue to grow and evolve just as your product does.