Customer feedback is absolutely essential. It’s also unreliable.
Many companies rightly consider customers a critical source of input and place a lot of stock in customer surveys and interviews. And that makes sense—after all, these are the people who will decide the fate of your product. Of course you should listen to what they say.
But should you trust it?
The truth is, customers (and humans in general) rarely know what they want. No matter how well-intentioned, people are unreliable narrators of their own stories, and most are actually quite bad at predicting their future behavior or explaining the cause of their actions. Treating what your customers tell you as gospel can lead to some disastrous results.
In contrast, observing your users can provide the sort of insights that take products from good to brilliant. Paying attention to how your users behave will reveal their true preferences and help you establish patterns in their actions.
Here, we look at why it’s important to take what your customers say with a grain of salt, and how you can use data to vet product and design ideas more effectively.
Users don’t really know what they want
Very few people are knowingly deceitful. When asked to recall their experiences, most people will do their best to answer honestly. But in fact, much of what they tell you is likely to be inaccurate.
The problem is a psychological one. Most humans suffer from a cognitive bias known as introspection illusion.
Essentially, humans falsely believe they have direct and honest insight into how and why they behave the way they do.
When people are asked to explain why they did something, their answers are often illogical and inconsistent—or made up entirely in order to fill a gap in memory. The same is true when they’re asked to predict what they want or how they’ll act in the future.
This is why observation is a necessary part of the research process. You should always confirm any customer feedback you’re given with behavioral data before spending precious time and resources on developing a new feature or launching a new marketing campaign.
Jakob Nielsen makes a strong case for watching what users do rather than listening to what they say in his short essay, “The First Rule of Usability? Don’t Listen to Users.”
He explains: “In speculative surveys, people are simply guessing how they might act or which features they may like; it doesn’t mean they’ll actually use or like them in real life.”
Examples of misleading user feedback
Walmart’s “clean” aisles
In 2009, Walmart surveyed their customers and asked a simple question: “Would you like Walmart to be less cluttered?” Naturally, people said yes.
So Walmart cleaned up the aisles, and people rejoiced, which is to say customer satisfaction rose. But sales dropped—a lot.
One analyst estimated that Walmart lost over $1.85 billion because of the changes. Even if that analyst were off by 50% (unlikely), the losses would still be catastrophic.
So what happened?
1: Walmart asked the wrong question.
Savvy designers and researchers will easily spot the flaw in the retail giant’s survey: instead of asking an open-ended question and mining the answers for ideas, Walmart gave people a binary choice between clean and not-as-clean.
But more importantly...
2: Walmart believed people knew what they wanted.
Of course people want things to be cleaner. That holds true in most scenarios.
But cleanliness is not the reason people go to Walmart. They go to be overwhelmed by the smorgasbord of cheap products. Providing an endless supply of cheap options is how Walmart became Walmart.
Yes, the question was poorly constructed. But the result illustrates that you can’t treat customer feedback as absolute truth. Every idea sourced from customers should be tested.
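One common way to test a customer-sourced idea is to run it as an experiment and compare conversion rates between a control group and a variant. Below is a minimal sketch of that comparison using a two-proportion z-test; the numbers and function name are hypothetical, and a real experiment would also need proper sample-size planning:

```python
# Minimal sketch: checking whether a variant built from a customer
# suggestion actually beats the control. All data here are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 120 conversions out of 2,000 sessions.
# Variant (the suggested change): 150 out of 2,000.
z, p = two_proportion_z(conv_a=120, n_a=2000, conv_b=150, n_b=2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # ship only if the lift holds up
```

Here the variant’s lift looks promising but the p-value hovers just above 0.05, which is exactly the kind of result that observation and further testing, not customer enthusiasm, should adjudicate.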
Emails for every interaction
Misleading or misinformed customer feedback happens all the time.
As the head of user experience at his company, Rowan Bradley received his fair share of off-the-wall suggestions from well-meaning customers. But one in particular stood out:
One user suggested that the company should send an email every time a specific event happened in their campaign—not a summary email, mind you, but an individual email every single time that action happened.
Of course, that would be a disaster. The number of emails would become overwhelming almost immediately.
Rather than take the suggestion at face value, Rowan dug a little deeper into why the user wanted all those emails in the first place. What he and his team found was that their users were logging into their accounts every morning to monitor changes in their campaigns. What their customers actually needed were notifications within the platform, not a torrent of automated emails.
The initial suggestion underlines how ill-equipped most users are to envision the features they actually need.
Using observational research for validation
Just because you can’t take everything customers say at face value doesn’t mean you should stop listening. It means you should also pay attention to what they’re telling you through their actions.
There are a number of research and testing methods you can use to confirm or refute user suggestions (or your own design decisions) with data. Here are a few prominent techniques:
Moderated Usability Testing
Moderated usability testing requires people to complete certain tasks while under the observation of a UX researcher or designer. As participants navigate the user interface, moderators can ask questions, provide context, and answer questions.
Moderated testing is typically employed early in the design process because it gives researchers an opportunity to ask questions and dig deeper into the users’ behavior.
The user feedback that is generated by moderated testing is valuable, because people aren’t guessing or trying to explain an abstraction; they’re articulating an experience that’s fresh in their mind.
Unmoderated Usability Testing
Unmoderated usability testing allows participants to freely interact with your product without constant guidance. As part of this testing, participants are asked to vocalize their actions as they navigate the UI.
Because researchers aren’t able to gather as much context from unmoderated testing, this method is typically reserved for the later stages of the design process. Metrics like task completion rate, drop-off rate, and time to completion are used to quantify the effectiveness of the design being tested.
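Those metrics fall out directly from raw session records. A minimal sketch, with hypothetical field names and data:

```python
# Minimal sketch: computing unmoderated-testing metrics from session logs.
# The field names and sample data here are hypothetical.

sessions = [
    {"completed": True,  "seconds": 42.0},
    {"completed": True,  "seconds": 67.5},
    {"completed": False, "seconds": 120.0},  # participant dropped off
    {"completed": True,  "seconds": 55.0},
]

total = len(sessions)
completed = [s for s in sessions if s["completed"]]

task_completion_rate = len(completed) / total   # share who finished the task
drop_off_rate = 1 - task_completion_rate        # share who abandoned it
avg_time_to_completion = sum(s["seconds"] for s in completed) / len(completed)

print(f"completion rate: {task_completion_rate:.0%}")     # 75%
print(f"drop-off rate:   {drop_off_rate:.0%}")            # 25%
print(f"avg time:        {avg_time_to_completion:.1f}s")  # 54.8s
```

In practice a testing tool computes these for you, but the definitions are worth keeping straight: time to completion should be averaged only over sessions that actually completed the task.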
Both moderated and unmoderated usability tests can be done remotely or in-person.
To perform usability testing successfully, you’re going to need the right tools. There are now plenty of powerful options designed to help everyone from marketers to designers get the insights they need from observational research.
These tools vary in their exact feature sets, but most include the essentials for usability testing: session recording, heat mapping, and behavioral analytics.
Contextual Analysis
An offshoot of field studies, the contextual analysis method involves traveling to the users’ natural habitat—whether that’s an office or their home—and observing how they currently accomplish tasks.
While usability testing normally requires that you already have a prototype in place, contextual analysis allows you to observe how your audience completes their goals with the tools currently available to them.
This method provides you with a deeper view of what’s happening in the users’ world as they use your product. Contextual analysis is particularly useful for uncovering ideas and validating concepts for complex projects, like designing enterprise UX.
In the age of user-centered design, it may seem odd to hear that you should treat user feedback with skepticism.
But part of a UX designer’s job is being able to distinguish between what users think they want and what they really need. You need to know your users better than they know themselves. The only way to consistently achieve that lofty goal is to observe how people behave—not just what they say.