Are You Really Prepared for Your Usability Study? The Three Steps for Success

by Christine Perfetti
on June 22, 2010

Over the years, many design teams have come to me requesting
usability testing consulting services. One of the reasons they reach
out is that they think usability testing must be a complex and
scientific process. As a result, they’d prefer to have an outside
consulting company conduct their tests.

The first thing I tell people is that usability testing is not a
complicated process. It’s a technique that anyone can learn with
training and lots of practice.

At its core, a usability test involves putting a person in front of
the product and watching what they do. The goal is to observe how
well users accomplish their goals with a product. At Perfetti Media,
the bulk of our work focuses on teaching clients that they can
quickly start gathering user feedback to help make informed design
decisions. Once teams learn the basics for conducting usability
tests, they find that testing is a straightforward process.

We consider a usability testing project a success if, after working
with us, the team considers testing so valuable that they decide to
bring the practice in-house. When teams contact us for a second
project, I typically recommend they consider hiring an in-house
resource instead of continuing ongoing consulting work with us. I
suggest this because, in my experience, the most successful teams
conduct their own testing throughout the design process.

If you’re thinking about bringing your usability testing in-house,
you’ll want to take the time to prepare appropriately for your first
study. In this article, I’ll be sharing the steps for ensuring
you’re really prepared for your usability study:

    1. Writing a test plan

    2. Recruiting the right participants

    3. Creating tasks

After completing these steps, you can begin your usability study
with confidence.

Step 1: Writing a Usability Test Plan

At the beginning of any usability testing project, you’ll want to
meet with your design team, engineers, and organization’s
stakeholders to identify what they hope to learn from the usability
test.

It’s essential to have everyone who has an impact on design
decisions attend the planning meeting. This gets everyone on the
same page and ensures you won’t encounter any surprises when the
stakeholders observe the test sessions.

We typically allot two hours for the planning meeting. We start by
giving an overview of the testing process so everyone understands
what’s involved. This also gives the team an opportunity to ask any
questions they have about the usability study.

We then work to define the focus of the test and outline the
research questions we’d like to answer. With a large and complex
product, it would be impossible to evaluate all concerns and issues
in one usability test project.

Because of this, we ask the stakeholders and design team what their
biggest issues, risks, and concerns are with the product. We also
ask the team which product features they know the least about in terms
of how users will interact with them. After getting the team’s feedback, we
have a much better sense of where to focus the usability study.

Based on what we learn from the planning meeting, we write a test
plan to outline the goal of the tests, the research questions, and
the methodology. The usability test plan ends up being the blueprint
for the test we’ll run. It can be a formal or informal document, but
typically includes the following sections:

  • Goals of the study
  • Research questions and issues
  • Description of the target audience and key user behaviors we’ll recruit for
  • Test method
  • Project schedule and timeline
  • Tasks we’ll ask users to complete
  • Data we’ll collect

You can see a wonderful example of a usability test plan (and other
excellent testing resources) on the Usability.gov web site.
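
If it helps to keep the plan’s pieces in one place while you draft it, here is a minimal sketch in Python of the sections above captured as a simple structure. The field names and sample values are invented for illustration, not a template we use; treat it as one way to make sure nothing is missing before the planning meeting ends.

    # A minimal sketch of a usability test plan as a simple data structure.
    # All field names and sample values are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UsabilityTestPlan:
        goals: List[str] = field(default_factory=list)               # goals of the study
        research_questions: List[str] = field(default_factory=list)  # questions and issues to answer
        target_audience: str = ""                                     # audience and key behaviors to recruit for
        test_method: str = ""                                         # e.g., moderated think-aloud sessions
        schedule: str = ""                                            # project schedule and timeline
        tasks: List[str] = field(default_factory=list)                # tasks users will be asked to complete
        data_to_collect: List[str] = field(default_factory=list)      # e.g., task success, observed stumbling points

    plan = UsabilityTestPlan(
        goals=["Learn whether first-time visitors can complete checkout"],
        research_questions=["Where do users hesitate during payment?"],
        target_audience="People who have bought products online in the past six months",
        test_method="Moderated, think-aloud sessions in the lab",
        schedule="Planning meeting week 1; sessions week 3; findings review week 4",
        tasks=["Buy a gift for a friend and have it shipped to their address"],
        data_to_collect=["Task completion", "Observed stumbling points", "Participant comments"],
    )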

Step 2: Recruiting Users

When preparing your first usability study, you’ll want to find
people who are representative of your target audience. You have a
couple of options for doing this. With in-house recruitment, you can
assign someone internally to find your users. Or, you can outsource
to a reputable recruitment agency. We use both approaches at
Perfetti Media and they’ve worked well for us. Usability Works
and AlphaBuzz are two of the organizations we
recommend for their recruitment services.

When thinking about the right participants to recruit for a study,
many teams start by focusing on demographics, such as age, gender,
or ethnicity. Unfortunately, in most cases, recruiting for
demographics will be one of the least effective ways to find the
most appropriate users for your tests.

For example, if you were recruiting for a usability study of a video
game, what demographic would first come to mind? People who play
video games are typically assumed to be males between 13 and 24.
But if we recruited for that demographic alone, we might mistakenly
recruit participants who don’t actually play video games. Additionally,
many women play video games, and we would risk missing their feedback
if we recruited only from that narrow demographic.

In our work, we only focus on demographics when it’s a critical
component of the target audience, such as when we’re recruiting for
a university portal devoted to college students or a social security
benefits web site devoted to elderly users. Rather than focusing on
demographics, we focus on the specific behaviors the target audience
exhibits and their levels of tool knowledge and domain knowledge.

For any recruitment project, we work with the client to answer the
following questions:

    1. What are all of the specific behaviors we’re looking for in our users?
    2. What level of tool knowledge do users need?
    3. What level of domain knowledge do users need?

The first question we ask our clients is, “What are the key
behaviors of your audience?” For example, if we were recruiting for
a video game system, we’d want to find people who play video games.
If we were recruiting for an e-commerce site, we’d want to find
people who use the internet and have bought products online before.

Users’ behaviors are also informed by their amount of tool and
domain knowledge. The users’ tool knowledge, the level of
familiarity users have working with particular tools or products,
can have a huge impact on how they behave. We recruit for tool
knowledge when the test includes intermediate or advanced features
and techniques. For example, if we are recruiting for a video game,
we may need to find people who already know how to play on a
specific game console, such as the Wii.

We recruit for domain knowledge when users need to know and
understand specific information to use the product. For example, in
our recruitment for a video game, if we were testing the adventure
game, Myst, we may want to find people who are already familiar with
the game and how it’s played.

When finding users, we start by asking clients if they have a
database with existing users and prospects. This is always a great
place to start. We’ve also had a lot of success with recruitment
firms, Craigslist, user groups, temp agencies, real estate agencies,
and even friends and family.

Once you’ve found some potential recruits, it’s necessary to screen
the people by phone before bringing them into the lab. To help with
this process, we develop a screener to qualify or disqualify
potential users. The screener is a script that helps the recruiter
apply the requirements we’re seeking in users. The parts of a
screener script include:

  • Greeting and the purpose of the study
  • Qualifying questions to assess whether we’ve found a user who exhibits the right behaviors
  • Scheduling dates and times for the test session
  • Compensation information

You can see sample test screeners on the Usability.gov web site.
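
To make the qualify-or-disqualify decision concrete, here is a minimal sketch of screener logic in Python. The criteria and field names are hypothetical (an imaginary e-commerce study), not part of any screener we use; in practice, a recruiter asks these as conversational questions over the phone.

    # A minimal sketch: qualify a candidate based on behaviors, tool
    # knowledge, and domain knowledge. All criteria are hypothetical.
    from typing import Dict

    def qualifies(answers: Dict[str, bool]) -> bool:
        """Return True if the candidate meets the recruiting requirements."""
        has_behavior = answers.get("bought_online_last_6_months", False)          # key behavior
        has_tool_knowledge = answers.get("comfortable_using_web_browser", False)  # tool knowledge
        has_domain_knowledge = answers.get("familiar_with_product_category", False)  # domain knowledge
        return has_behavior and has_tool_knowledge and has_domain_knowledge

    # Example: a candidate screened by phone
    candidate = {
        "bought_online_last_6_months": True,
        "comfortable_using_web_browser": True,
        "familiar_with_product_category": False,
    }
    print("Schedule this candidate?", qualifies(candidate))  # prints False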

Step 3: Creating Tasks

The tasks you create for a usability study are essential for
gathering the right data. The tasks determine what you’ll test and
impact which parts of the design your team fixes. If you give users
the wrong tasks, you risk focusing on the wrong parts of the design.
Even worse, you may provide your design team with misleading
recommendations. Yet teams often spend little time preparing for
this crucial step.

When creating tasks for a usability study, you’ll want to start by
asking yourself the following questions:

    1. What are your users’ goals with the product?

    List out the specific actions users most commonly complete with the
    product.

    2. What are your business goals?

    Think about all of the different ways your product or web site will
    increase revenue or reduce organizational costs. The best tasks
    focus on areas of the design crucial to your organization’s business
    goals.

    3. What are the greatest risks with the design?

    If there are areas of the design where you have little knowledge of
    how users interact with them, address those areas in your tasks.

There are three types of tasks for a usability test: verb-based
tasks, scavenger hunt tasks, and interview-based tasks. Jared M.
Spool and his team of researchers at User Interface Engineering
first introduced this framework for designing tasks back in 1999.

Verb-based tasks

Verb-based tasks ask users to accomplish a specific action with the
product. Verb-based tasks are most commonly used to test software,
hardware, and web applications. For example, for an email system, we
might ask users to:

  • Respond to the email you just received from Kate Austin
  • Write a note to your mother
  • Copy the text of this page to another document
  • Send the message from Kate to your friend, Lisa

All of the tasks begin with a verb and ask users to complete a
specific action. Verb-based tasks effectively evaluate the product’s
functionality and give teams the capability to test multiple users
on the same tasks. Before the advent of the Web, almost all tasks
for evaluating products were verb-based tasks.

Scavenger hunt tasks

Unlike verb-based tasks, we don’t use scavenger hunt tasks to
evaluate software or web application functionality. Instead,
scavenger hunt tasks help us to assess content-rich systems such as
CRMs, rich data displays, and information-rich web sites.

With scavenger hunt tasks, we ask users to find a specific piece of
information. These tasks help design teams evaluate whether users
find and understand the product’s content. The tasks almost always
begin with the verb, “find.” Some examples of scavenger hunt tasks:

You were at a party last week. The discussion turned to recipes for
authentic Italian pasta dishes. Go to the Food Network site and find
an Italian recipe for pasta.

The doctor stops by on morning rounds and wants to know how much Mr.
LaFleur’s blood pressure has been out of the normal range during the
night. Find a record of Mr. LaFleur’s blood pressure.

The downside of traditional tasks, such as verb-based and scavenger
hunt tasks, is that it’s challenging for teams to predict whether
they’ve chosen realistic tasks for users to accomplish. Because of
this, teams risk giving users tasks to complete with the product
that aren’t related to what they would actually do in a real-life
situation.

Interview-based Tasks

To address the limitations of verb-based and scavenger hunt tasks,
we use interview-based tasks, a task methodology developed by User
Interface Engineering. With interview-based tasks, we interview
users before and during the test to uncover users’ real goals with a
product. During the recruitment phase, we screen candidates to
ensure they have the appropriate interests before they come to the
lab.

With interview-based tasks, when users first arrive for the test
session, we don’t actually know what we’ll specifically be asking
them to do during the session. Instead, at the beginning of the
test, we interview users to get a better idea of how they use a
product. For example, when evaluating an investment web site, we
would start by asking the users specific questions, such as:

You mentioned you were interested in investing some money. 

  • How much money are we talking about?
  • What kinds of investments do you have in mind?
  • Do you have a retirement plan?
  • What are the worries you need to address?
  • How do you evaluate a potential investment?

Based on the users’ specific responses to the questions, we’ll work
with them during the session to create tasks that are relevant to
their specific needs. While we won’t ask all users to complete the
same tasks, we get a very good sense of how the product works for
users in the real world.

Interview-based tasks work best with sites and products that are
almost ready to ship and populated with real information and data.
Without real content or data for users to manipulate, it’s
impossible to mirror the true experience for the user.

After writing your test plan, recruiting your users, and creating
your tasks, you’ll be ready to run your sessions. You’ll also be
confident you’re fully prepared to gather rigorous data from the
sessions.

Share Your Thoughts with Us

What challenges have you had conducting usability tests? How have your
testing techniques worked for you? I’d love to hear what you’re
doing on the UIE Brain Sparks blog.