
3 cognitive biases skewing your growth experiments

The best growth marketers are obsessed with human behavior but rarely think about what cognitive biases are affecting their own work.
The best growth marketers are obsessed with human behavior.

They’re constantly assigning meaning to what users are doing while thinking about how best to leverage psychological principles to attract and convert customers.

However, they rarely direct these astute insights inwards to think about what cognitive biases might be affecting their own work. Even the best in the business often get in their own way, letting their personal biases creep into how they are interpreting experiments.

As a growth marketer, I’ve experienced every one of these three cognitive biases and reckon you have as well.

1. Self-serving bias

One of the best documented psychological phenomena—self-serving bias—is also one of the most likely to affect a growth marketer.

We attribute success to something we did, but failure to external variables outside of our control.

In everyday life, you likely experience the self-serving bias without even realizing it. You win a basketball game because you played great. You lose because the ref was unfair.

Do you find yourself following this line of thinking when analyzing your KPIs?

Conversions skyrocket, and you give yourself a mental pat on the back. This improvement must be because of that new experiment you launched or the redesign you championed.

But does your thought process shift when the numbers are less cheery?

When my conversion rate dipped this month, my first thoughts were of what external changes could have occurred. Had we gotten a different distribution of traffic? Did our content not drive hard enough to the product? Did a million potential MQLs get food poisoning in unison?

Probably not.

The reality, for both the positives and the negatives, is likely somewhere in between.

Even if you’re tackling growth on a small team, there will always be variables outside of your control that could be affecting your data. However, by following correct testing methodology, you should be able to have a relatively accurate idea of your impact on these KPIs.

Has every test you’ve run boosted conversions by 25% over a control group, yet conversions are down overall? External factors are likely in play. Numbers dipping and not sure why? Start by owning it and work from there.
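As a rough sketch of what “correct testing methodology” can look like in practice, here is a two-proportion z-test comparing a variant’s conversion rate against a control group. The function name and the example numbers are my own illustration, not from the article; the formula is the standard pooled z-test for two proportions.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))              # two-sided p-value

# Hypothetical run: variant converted 120/1000 visitors vs. control's 90/1000
p = two_proportion_z_test(120, 1000, 90, 1000)
print(f"p-value: {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the lift is unlikely to be pure noise; a high p-value is your cue to credit external factors rather than yourself.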

Giving yourself credit for success is important, but your idea of ‘success’ likely needs to shift.

Creating an experiment that somehow doubles MQLs might seem good in the short term, but if you don’t know how it happened, you won’t be able to replicate it.

Instead, focus on creating experiments that provide better insight into the way your customer thinks about your product, even if the results go against your initial hypothesis.

2. Anchoring

Experiments are exciting.

Moments after every new launch, I immediately hit the refresh button. More than once.

After about 25 minutes, the least statistically significant results on the planet roll in. And I eat them up. Immediately I start assigning meaning to everything. I reevaluate my hypothesis, break it in two, and build a dozen more in its place.

All before any substantiated data has built up.

While I have consciously worked to shake this initial knee-jerk reaction, it still lingers in my subconscious and affects the way I view my final results. If the test initially came up positive, I’m secretly expecting (and nudging) the final result to come up positive as well.

This is because of anchoring—the tendency we all have to rely too heavily on the first bit of information we get.

This doesn’t just hit during the experimentation part of a growth marketer’s day.

Think about the benchmark metrics you use. You likely set them up a while ago, when your company was in a very different place than it currently is. Maybe you were hitting a 10% conversion rate in 2016, but that was before you burst open the top of the funnel and flooded your site with more visitors. Now you are getting more MQLs with a lower conversion rate but still have that original number seared into your mind.
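To see why anchoring on the old rate misleads, run the arithmetic. These numbers are purely illustrative (not from the article): a smaller funnel at the “anchor” rate versus today’s wider funnel at a lower rate.

```python
# Purely illustrative numbers: 2016's narrow funnel vs. today's wider one
old_visitors, old_rate = 1_000, 0.10   # the rate seared into your mind
new_visitors, new_rate = 3_000, 0.05   # lower rate, but far more traffic

old_mqls = old_visitors * old_rate     # 100 MQLs back then
new_mqls = new_visitors * new_rate     # 150 MQLs now, despite the lower rate

print(old_mqls, new_mqls)
```

Judged against the anchored 10% benchmark, the new funnel looks like a failure; judged on MQLs, it’s a 50% improvement.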

Curb your enthusiasm.

Create rules for yourself that you religiously adhere to. Don’t share results with teammates until your data is statistically significant or at least has been strongly trending in a direction after a significant amount of time. Give context to all of your data, ensuring that the benchmarks you’re using are still measuring what they’re supposed to be.
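One concrete rule worth adopting: compute a required sample size before launch, and refuse to peek at (or share) results until each arm has reached it. The sketch below uses the standard power calculation for comparing two proportions; the function name and default parameters are my own, not the article’s.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base: float, lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per arm to detect `lift` over baseline rate `p_base`."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)    # ~1.96 for a 5% significance level
    z_b = nd.inv_cdf(power)            # ~0.84 for 80% power
    p_var = p_base + lift
    var_sum = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_a + z_b) ** 2 * var_sum / lift ** 2)

# Baseline 10% conversion; we want to detect a 2-point absolute lift
n = sample_size_per_arm(0.10, 0.02)
print(n)
```

Until both arms cross that threshold, any “trend” you see is exactly the kind of early anchor this section warns about.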

3. Confirmation Bias

Good growth methodology leaves itself extremely vulnerable to confirmation bias—the idea that when we start off with existing beliefs or theories, we interpret any evidence that follows as confirmation of what we already thought.

Your friend Simon has just set you up on yet another blind date. Despite his good intentions, Simon has always set you up with the most insufferable people. When you meet Karen, you tell yourself you’ll keep your mind open. However, with the preconceived notion that she will be just the worst, you are subconsciously looking to confirm this belief. Odds are that if you’re looking to be put off, you’ll find something off-putting.

As growth marketers, we start by creating a hypothesis that we hopefully believe a great deal in. Then when evaluating results, we subconsciously pick out the data that confirms our hypothesis and devalue the evidence that goes against it.

When starting out with a strong hypothesis (as you should), it’s very hard not to secretly hope it comes to fruition—you’ve got a horse in the race. And while you logically know that you’ll follow the data, it’s easy to start viewing that data through a lens that favors the evidence confirming your hypothesis.

This isn’t an easy bias to shake, but it begins by looking for ways to challenge your initial beliefs. To continue our dating analogy, when meeting Karen, tell yourself that she might be your soulmate and look for the evidence. When analyzing your results, assume that your hypothesis is dead wrong and make it your new goal to prove it.

So why bother?

Cognitive biases hit us all harder than we’d like to admit.

Many think that they are too logical or experienced to be pushed and pulled by these biases. Or they accept the existence of these psychological influences but feel helpless against their perceived inevitability.

However, being a growth marketer puts you in a perfect position to understand and rise above these biases. All it takes is shifting the focus of your psych analysis inwards, bringing in a little objectivity, and valuing knowledge over a set result.

Rustin Nethercott
Growth Marketing Consultant
Rustin is a growth marketing consultant at RSN Growth Consulting. A proud Vermonter lost in Boston, Rustin can most often be found wandering aimlessly around Trader Joe’s, hitchhiking to Burlington, or yelling out bad tech puns.