
The Hunt for Missing Expectations

by Jared M. Spool

Thanks to Einar Solbakken of ScienceLakes for the Danish translation of this article.

Few things in business are worse than a bookkeeper scorned, and this bookkeeper was angry. She was upset because she’d wasted what seemed like 20 minutes looking for something so simple and obvious that it just had to be there. But it wasn’t.

The bookkeeper in our study was looking for a way to indicate a double underline. When you have a column of numbers, divided into sections (such as income and expenses), you use a single underline above the total of each section. For the grand total, at the very bottom, the standard practice is to put two underlines.

The practice of double underlines for grand totals predates computers and spreadsheets. One can look at ledgers from the industrial age and see the double underline sitting just below the ubiquitous “bottom line.”

That’s what our participant wanted. She had her income section, all neatly totaled, followed by an equally neat expense section. The formula of subtracting expenses from income left a hauntingly small profit line. This line begged for the double underline.

Yet nowhere in Google Spreadsheet could she find such a thing. It was a spreadsheet, just like Microsoft Excel, Lotus 1-2-3, and the others that came before it.

However, for some unknown reason, the developers of Google Spreadsheet had either left out or hidden the double underline. The bookkeeper couldn’t imagine they’d leave such an obvious thing out of the design, so she searched for it.

And searched for it.

And searched for it.

All to no avail. It just wasn’t there. The developers simply hadn’t included this function.

A double underline is not a difficult thing to implement. Perhaps the developers never thought of it, or perhaps they knew about it but wanted to keep the interface lean, so it didn’t make the cut.
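As it happens, the implementation really is tiny. In current versions of Google Sheets, a double border is one Apps Script call away. Here’s a minimal sketch, assuming a hypothetical grand-total cell at B12; the function name and cell reference are illustrative, not from the study:

```typescript
// Minimal Google Apps Script sketch: draw a double underline beneath a
// grand-total cell. Range.setBorder and SpreadsheetApp.BorderStyle.DOUBLE
// are the relevant Apps Script APIs; the cell reference B12 is assumed
// purely for illustration.
function doubleUnderlineGrandTotal() {
  const sheet = SpreadsheetApp.getActiveSheet();
  const grandTotal = sheet.getRange("B12"); // hypothetical grand-total cell

  // Arguments: top, left, bottom, right, vertical, horizontal, color, style.
  // Passing null leaves a border unchanged; only the bottom border is set,
  // using the double-line style.
  grandTotal.setBorder(
    null, null, true, null, null, null,
    "black", SpreadsheetApp.BorderStyle.DOUBLE
  );
}
```

The cost of the feature was never in the code; it was in never hearing the expectation.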

The Death of a Thousand Cuts

That didn’t stop this user from expecting the double underline to be there and, subsequently, wasting precious minutes searching for it. This simple omission upset her tremendously, and she told us she didn’t like using the application.

There were other features and design elements she searched for. Each one took time. Each one distracted her from her goal of producing what should’ve been a simple profit and loss statement, something she’d done a million times in her job.

The design was dying the death of a thousand cuts. No single missing feature was a big deal, but added together, they made the design less desirable to work with. This is the problem with missing expectations.

Missing Expectations vs. Failed Expectations

In a hair salon we recently visited, the receptionist was despondent. As clients came in, she’d glance at the screen that should have shown her day’s appointments, frown, then ask each client their name and what service they’d booked.

A few days earlier, the hard disk had crashed on the server the salon used for its appointment booking application. If there were backups, nobody at the salon or at the software provider knew how to restore them. All the salon’s data was gone.

Customers would show up for appointments the salon had no record of. The staff were trying to recreate everything, but it was extremely frustrating.

The salon’s owner had expected the salon’s data to be safe. Hardware failures have been a fact of computing life for decades, and the owner had every expectation that the software provider would take all precautions to ensure no loss of data or time when something catastrophic happened. Yet here they were, facing a business catastrophe of epic proportions.

The data loss was a failed expectation, while Google Spreadsheet’s double underline problem was a missing expectation. Users and customers don’t care about the difference, but teams should, because you look for the two in different places.

Failed expectations are uncovered by careful introspection of our designs. We can look at each functional component and ask, “How will users be disappointed or disrupted if this feature fails them?” Even if we miss one the first time, we’ll hear about it when it rears its head, and we’ll be alert to ensure it doesn’t happen again.

Missing expectations are more fiendish than failed expectations. They are much harder to spot, and introspection isn’t likely to get us there. Instead, this is where solid user research comes to the rescue.

Surveys and Usability Testing Won’t Help Much

When many folks reach into their user research toolbox, the first tools to emerge are surveys and usability testing. However, neither is much help in discovering potential missing expectations.

For example, imagine I was building a new hotel and I asked you what you’d expect to be in the ideal hotel room. It’s unlikely you’d think to mention “a working bathroom” because you’d just assume I’d already know that. However, I’m betting you’d be surprised if there was no bathroom when you checked in.

Watching users work with the design, the core practice known as usability testing, also struggles to uncover important missing expectations. It seems like it would be perfect, but the way we construct the tasks gets in our way.

To keep the testing simple and under control, we often define the outcomes we want. For example, in testing Google Spreadsheet, we might have a profit and loss statement we’d want participants to create. To make it clear what we were expecting, we might show them the final report we’d like them to produce.

Since we never thought about the importance of double underlines, our sample final report wouldn’t have them. Our participant, wanting to do what we’d asked of her, would be unlikely to add double underlines. Our bias is reflected in the test results, and we won’t uncover the missing expectation.

Milking Any Customer Contact

One good source that’s probably flowing with missing expectations is the stream of calls and questions coming into customer service. Users, frustrated by what they can’t find, may regularly reach out to service reps with inquiries about the features they expected to find.

We’ve found most organizations don’t do a good job of collecting this information. Because a missing feature isn’t a problem the reps can fix, it usually doesn’t get logged and categorized. Even when these inquiries are collected, they’re usually reduced to counts of requests, with no exploration of why users need what they’re asking for. Without answering why the user needs it, any attempt to implement it will likely get it wrong.

Upgrading the customer support mechanism to collect and categorize these inquiries can raise awareness of missing expectations, which, in turn, helps justify further research.
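What might that upgrade look like? Here’s a minimal sketch of a support log that records the why alongside the what. The record shape, field names, and example values are all hypothetical, not drawn from any particular support tool:

```typescript
// Hypothetical sketch of a support log that captures the "why" behind
// each missing-expectation inquiry, not just a count of requests.
interface ExpectationInquiry {
  productArea: string;      // e.g., "formatting"
  expectedFeature: string;  // what the user assumed would be there
  userGoal: string;         // the why: what the user was trying to produce
  workaround?: string;      // how the user coped, if they did
  reportedAt: Date;
}

const inquiries: ExpectationInquiry[] = [];

// Reps log the goal alongside the request, so the record is more than a tally.
function logInquiry(inquiry: ExpectationInquiry): void {
  inquiries.push(inquiry);
}

// Example entry, echoing the bookkeeper's case from the study.
logInquiry({
  productArea: "formatting",
  expectedFeature: "double underline",
  userGoal: "mark the grand total on a profit and loss statement",
  reportedAt: new Date(),
});
```

The essential field is userGoal: counts tell you something is being asked for, but only the why tells you what to build.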

Upping our Ethnography Game

Armed with the list of possible missing expectations gleaned from the customer service department, teams can head into the field to observe how these needs manifest themselves. This is when we really learn what we can do better in our designs.

Walking into a customer site for the first time is much like waking in the middle of the night and turning on the bathroom light: it’s too bright to see anything meaningful until your eyes adjust. The flood of contextual information from those initial customer visits overwhelms those who are new at it, making it difficult to articulate what they’ve learned.

However, with repeated visits to other customers, you start to see patterns emerge. The definition of the missing expectation becomes clear. It’ll show up slightly differently in each location, but the basic theme will be there.

Regular ethnographic activities are critical for any team that wants to ensure it meets expectations. The investment here is easy to justify through reduced support costs, better customer engagement, and solid differentiation from those competitors that don’t go into the field.

Advanced Lab Techniques: Interview-Based Tasks

Having been in the field, the team can then create similar environments in the lab. Using a technique known as interview-based task design, the team can surface missing expectations by more closely simulating what users really do, thereby uncovering what users believe they need to get the work done.

Instead of dictating exactly what the usability test participant will do, the moderator starts each session by interviewing the participant, asking how, in the past, they’ve performed the activities the design supports. The moderator dives deep into what led to the activity and how it turned out for that participant.

Based on what they learn, the moderator works with the participant to create a task that mirrors what the participant did last time. Then the participant sets out to recreate that past experience with the design.

It’s in the recreation of the work that missing expectations become pronounced. Since the co-created task is a better match for the participant’s previous experience, they’ll fall into their own habits. Once they are comfortable working as they always have, any place where the design has missed a feature or implemented something awkwardly will quickly emerge.

This is a more sophisticated approach than standard usability testing. Many practitioners aren’t comfortable improvising tasks while making sure they don’t lead the participant down a path that only confirms the team’s existing biases. First-timers need to practice a bit to get it to work.

With a good combination of ethnographic field work and advanced lab techniques like interview-based tasks, teams can avoid missing important expectations. Even expectations that seem minor at first, the ones that contribute to the death of a thousand cuts, can be discovered and seamlessly integrated into the design.

About the Author

Jared M. Spool is a co-founder of Center Centre and the founder of UIE. In 2016, with Dr. Leslie Jensen-Inman, he opened Center Centre, a new design school in Chattanooga, TN to create the next generation of industry-ready UX Designers. They created a revolutionary approach to vocational training, infusing Jared’s decades of UX experience with Leslie’s mastery of experience-based learning methodologies.
