In User Research, Don’t Stop at “Yes” or “No”

Summary: Product stakeholders often see user research as a tool to validate already-made decisions. But binary findings that merely confirm or reject a design provide little value.

The Temptation of Binary Validation

User experience professionals understand that research and design form two halves of a feedback loop: observing users’ behaviors and needs leads to design solutions that address those needs, and testing those solutions informs the next iteration of the design.

However, our business stakeholders frequently envision this process as linear. Rather than beginning with discovery to produce insights that will inform design decisions, stakeholders often rush to define a solution idea based on assumptions. 

By the time stakeholders involve UX, that idea is already fully formed. Instead of insights that will help refine it, stakeholders ask user researchers to merely validate the proposed design vision. In other words, they are looking for a simple “yes” answer before development can begin.

Performed this way, research provides negligible value. The merit of a solution can’t be measured by a binary yes-or-no answer, and one of the most important reasons to do research is precisely to uncover the nuance between these two extremes.

“Yes” and “No” Are Not Trustworthy

Demanding that user research validate existing decisions can be worse than doing no research at all. Because coming back with anything but a “yes, this solution is great” puts delivery deadlines at risk, researchers can be (consciously or unconsciously) pressured into skimming over the issues that user-testing participants encounter and focusing only on affirmative findings.

When researchers are fixated on validation, a participant responding with “yeah, I like it” sounds like a stopping point: after all, the response validates the design decisions behind the solution. A researcher might even probe the participant for downsides and hear “no, I wouldn’t change anything about it, it’s good.”

But that reply could be misleading.

There is nothing easier than getting a positive response in a user interview, even without meaning to. Users will rarely say no to new features or improvements, especially if the framing does not ask them to make any tradeoffs. And the phrasing of questions can nudge participants into saying what you want them to say: questions like “What did you find difficult?” or “How easy was it?” imply the desired answer.

Moreover, research participants often want to please you, especially if they think you designed the product they are testing. Rather than share their honest opinion, they might end up saying what they think you want to hear.

The response “I think people would use it” is a common indicator that the participant is trying to spare your feelings. But the participant’s ability to imagine a person who might use the product is not data, just speculation; you should never ask a research participant to imagine someone else’s actions. The only data you can gather from that response is the implicit admission that the participant would not use it themselves.

Attitudinal Data Needs to Be Supplemented with Behavioral Data

Regardless of how your participants react to the idea being validated, listening only to their opinions has a key limitation: what people say is not what they do. People are bad at predicting their future behavior, which will be heavily influenced by factors that they won’t be thinking about in the moment. Even if participants think that the idea fits an abstract use case, it may still fall apart in a specific real-world situation.

Asking participants about the solution idea gathers only one type of data: attitudinal. For additional rigor, you should try to collect behavioral data as well. Wherever possible, research techniques like prototype testing will give you insight into not only what participants tell you about the solution, but also how they might actually use it.

But when research is viewed through the lens of validation, even behavioral data can give the team an inaccurate impression. The same set of incentives that flattens attitudinal data into “yes/no” can flatten behavioral data into “users were able to successfully complete the task.” When that happens, the pressure to produce a “yes” may bias the team toward presenting participants with only the least objectionable use cases and proposals.

A researcher prepared for a richer range of insights, however, can test a far more provocative perspective or a more complex scenario, creating an opportunity to learn much more from participants by seeing where they end up drawing the line. That way, it is the ways in which participants succeed or fail that are instructive, rather than the mere fact of success or failure.

Binary Validation Is Not Actionable

The purpose of research is to inform decisions. That means the insights coming out of research must be actionable and inspire next steps. A binary answer accomplishes the opposite: it creates the impression of finality. 

Researchers asked to perform binary validation will gather binary data: quotes and measurements that back one result or the other. But without insight into why, both “yes” (we succeeded) and “no” (we failed) are dead ends. This is why teams engaged in validation are pressured to come back with a “yes”: a flat-out “no” means that all the work up to that point was a waste.

But a hypothesis will rarely be completely right or completely wrong, and the data gathered through research needs to capture that nuance. Teams that learn how their idea succeeded and, simultaneously, how it failed gain much more value from their research, because the findings create a complete picture of the product’s strengths and weaknesses. Any “no” becomes far easier for stakeholders to swallow, because it comes with a caveat (“not in this context”) or a path towards remediation (“not without this feature”).

A product leader using research solely as a tool for validation will end up either getting far too low a return on investment (ROI) or throwing away the promising parts of a solution along with the mistaken assumptions. A nuanced understanding of the solution’s strengths and weaknesses, by contrast, creates an iteration mindset that will lead the team towards much greater value.
