
Honing Your Usability Testing Skills: An Interview with Ginny Redish

by Christine Perfetti

If you’ve ever done any usability testing, then you’ve been influenced by Ginny Redish’s pioneering work. Ginny is a world-renowned usability expert and co-author of the books “A Practical Guide to Usability Testing” and “User and Task Analysis for Interface Design.”

While preparing for Ginny’s full-day seminar at the User Interface 9 Conference, UIE’s Christine Perfetti had the opportunity to ask Ginny about her thoughts on the best practices surrounding usability testing. Here is what Ginny had to say about her experiences.

UIE: How has usability testing evolved as a technique over the past 10 years?

Ginny: More and more companies and government agencies are doing usability testing. And, they are doing more of it, starting earlier in the process, and testing iteratively. Over the past 10 years, usability testing has become more integrated into the development process, especially in web design. As a result, we are doing more informal, formative, diagnostic usability testing now than ever before.

Has your philosophy changed at all since you first began usability testing?

No—and yes. My philosophy of usability testing has always been that it is the best way to find out how well a draft or prototype or product is doing for its users. I’ve always believed that usability is about helping designers and developers create products where users can quickly and easily find what they need and understand what they find. I’ve always believed that usability specialists should be part of the product team from the beginning and should do testing with the team—not as the “usability police.”

But my philosophy of how to do usability testing has also expanded over the last 20 years. When I started out, almost all usability testing was done in a formal lab with a very “hands off” approach to interactions between participant and facilitator. Today, the line between usability testing and field studies has blurred quite a bit, and I typically sit with the participant. Depending on the stage the product is in, I may engage in much more dialogue than I did when I started out. I’ve done usability testing in conference rooms and cubicles; I even did one this summer in an airport hangar.

Your work focuses on helping usability practitioners to improve their usability testing skills. What recommendations would you give first-timers? Are there any common mistakes you see with design teams just starting out with usability testing?

Some aspects that I find design teams often need help with are:

  • Thinking about the issues—what you want to learn from the usability test
  • Writing good scenarios—that test the web site or product without giving away too much
  • Facilitating comfortably—knowing when to talk and when not to, how to ask neutral questions, how to keep participants thinking aloud
  • Taking good notes without missing anything critical
  • Reporting results so that the right people act on them

That’s why, in my full-day seminar at UI9, we’ll work on all the aspects of planning, preparing, facilitating, note-taking, analyzing, and communicating results.

What recommendations do you have for usability professionals who want to get their development team on board and excited about usability testing?

Involve them. Be a team together. Invite them to observe. If you have an observation room for them, put out candy or other food as an incentive for them to come.

Have someone in the observation room with them to monitor their conversation and, if necessary, join in to keep them from jumping to conclusions too quickly. If you do not have an observation room, set up a schedule so that you only have a few observers in the room at one time. Give them brief instructions on how to behave.

In either case, give them note-taking forms and a bit of instruction on how to use them. Involve them in a debriefing right after the testing; involve them in helping to find good solutions to the problems. Include some positives in the report.

In your experience, what are the typical costs associated with usability testing? What are the budgeting considerations?

I’m always surprised when people think usability testing costs a lot of money. Yes, it costs money—but it’s usually a tiny fraction of the money being spent on the technical side of a project. Whether these are internal costs—or hard dollars you give out to someone—depends on how much you can do in-house. The major costs for usability testing are:

  • Time for internal people (and a usability consultant if you need one) to plan, prepare, conduct, analyze, and report
  • Recruiting and paying participants (You may find it is less expensive to use a recruiting firm than to have internal people spend lots of time on the phone trying to find the right people.)
  • Facilities (if you rent a lab, for example)—but you can also do usability testing in a conference room or even at people’s desks

Many usability practitioners believe that five to eight users are enough to find the majority of usability problems on web sites. In your experience, how many users are enough for testing?

“How many users” is one of the great controversies in usability testing today. I think it’s a false controversy because there is no one answer. The number you need depends on the type of testing you are doing, where in the development process you are, how many different types of users the web site or other product has, and a few other factors. I have slides in the seminar that explain all the factors you need to consider to decide how many users you need.

The type of usability testing that we’ll focus on during the seminar day is the type that most companies and agencies are doing—testing prototypes of web sites or documents or software in the design stages for the purpose of finding and fixing major problems. My experience over many, many years of that type of usability testing is that you’ll find the major problems with relatively few users (I usually say six to twelve).

Many design teams attempt to use heuristic evaluations as a less expensive alternative to usability testing. What are your thoughts on this method?

Usability testing is only one of many techniques in the usability professional’s toolbox. You can use a heuristic evaluation (having one or more experts review the product) to catch major and obvious flaws in a product—if there are any. If the developers are willing to accept the results, they can bring a better product to usability testing. However, a heuristic evaluation is not an alternative to usability testing. A heuristic evaluation is just a prediction of what users will do. Until you see the real users, you don’t know whether those predictions are right. Only usability testing shows you where the real problems are.

You recently conducted two usability studies with Mary Theofanos and colleagues at the National Cancer Institute researching how vision-impaired users interact with web sites. What were some of your key learnings from this research?

Our first study was with 16 blind users; our second study was with 10 low-vision users. In both, we were watching and listening as our participants used web sites.

From our first study, we came up with many specific guidelines to help web designers and developers really make sites accessible—not just meet the letter of the law. One seemingly simple but very important learning is that many blind users do not know what Skip Navigation means. They want to skip over “all that stuff” at the top of each page, but they don’t click on the link that would help them do that. You can help them by changing the name of the link—and I’ll talk about how well different names worked.

Our second study was even more interesting because we found that the needs of low-vision users are so diverse that simple solutions are not going to help everyone. Adding accessibility on after a web site has been developed is not working. We need a new paradigm for thinking about “experience equity”—making web sites work for everyone.

Thanks, Ginny.

About the Author

Christine Perfetti picked up on these approaches, refined them, and started using them in her daily work at leading companies like Acquia and Carbonite. Not only has she built successful design teams who’ve created business-changing products, but she has also transformed a design team from a siloed group into collaborative partners who bridge product management and engineering.
