Tips to avoid bias in your usability test results

You have made the case for working usability testing into the design process. You have gotten buy-in from the business. Well done! Now make sure you get the best possible data by avoiding missteps that can bias your results and ultimately undermine both your efforts and confidence in your work.

The tips I will discuss are:

● How to select representative participants

● How to phrase tasks so you do not lead the user

● How to ask objective follow-up questions that don’t lead or make the user feel stupid

Follow these basic steps and you will greatly increase your chances of success.

How to select representative participants

The first thing to consider when recruiting is how targeted a user group you need. Not every usability study requires a highly targeted group; it depends on how much knowledge of the domain you are testing is required. If you do need to recruit a targeted demographic, you will need a list of potential participants. These might be existing customers, in which case you can probably find a list of email addresses. It is important to “scrub” this list to remove anyone who has not opted in to being contacted.

You can also set up a dialog box on your website asking users whether they would be interested in participating in a compensated study. Because this is an opt-in, you can skip the scrubbing step. A crowdtesting platform such as Passbrains will do the recruiting work for you, but it is still your job to make sure participants have the level of knowledge required to produce good data. No platform is going to know your users as well as you do.

How to phrase tasks so you do not lead the user

In my experience, this is the most common error leading to bad data. Here’s an example: I am testing an ecommerce site and want to know how easy the checkout process is. The checkout process has five steps, and the button in the cart is labeled “Checkout.” A poor wording for the task would be “Find an iPhone to buy and checkout” -- you have included the label of the button in your task and have thus biased the result. Better wording would be simply “Find an iPhone to buy.” This is a very basic example -- I have seen far more egregious ones -- but I think you get the idea.

Aside from leading the participant by repeating button labels and other calls to action in the task itself, another common pitfall is giving too much information in the task. Using the example above, it would be leading to say “Choose an iPhone from the product catalog, add it to the cart, and then checkout.” You have told the participant a) where to find the iPhone, b) that they need to add it to the cart, and, as we’ve already learned, c) the label of the Checkout button. Err on the side of less is more when writing your tasks.

How to ask objective follow-up questions that don’t lead or make the user feel stupid

Whether you are running an in-person moderated test, a remote moderated test, or even a remote unmoderated test, you have the opportunity to ask follow-up questions. The greatest advantage of moderated testing is that it allows you to “get into the head” of the participant in the moment they are having trouble. With unmoderated tests you won’t have that opportunity, so you have to think through potential stumbling blocks in advance and ask about them afterward. As with phrasing tasks, a good rule of thumb is “less is more.” It is not your job to help participants recover by telling them how to complete a task -- if you do, your data is no good. It is your job to get to the root cause of why they are having trouble without leading them. Using our shopping example, if a participant is struggling to find the right iPhone, you would not say “Why don’t you try clicking the iPhones tab at the top.” If you are moderating the test, your job is to sit and watch for a while as they think through the problem. Give them some time. If they are not vocalizing, ask “What are you thinking?” as they work through the problem. If you are conducting an unmoderated study, work in a post-task question to the effect of “If you encountered any issues in selecting a product, tell us more about that.”

Another best practice is to preface your study with wording to the effect of: “This is not a test of your knowledge. There are no wrong answers. We are testing the system, not you. Please be candid with your feedback; you won’t hurt anybody’s feelings.” This puts the participant at ease. If you have ever sat on “the other side of the desk,” you know it can feel like being in the spotlight, and that can be nerve-wracking. So make participants feel comfortable right from the start.

About the author:

Warren Croce
Principal UX Designer at Gazelle
Principal at Warren Croce Design
Boston, Massachusetts, USA

Warren gets great satisfaction from knowing that he can help people through design. He believes customer empathy and a desire to simplify are two of the most important traits that a designer must possess. Warren received his BFA in Communication Design from Pratt Institute in 1990 and has been designing professionally ever since. He spent over twelve years at Intuit, most recently as Principal Designer and Team Manager for a team of eleven designers, usability engineers, and writers. Warren joined Gazelle in 2014, where he is responsible for the user experience of both the Trade-In and Direct Store sites. Previously, he worked as an independent consultant for two years. Warren has recently begun offering free, in-person, UX-design mentoring to small groups. He is also a fine artist with a studio in Boston.
