
The Road to Recommendation

by Jared M. Spool

The user moved their mouse all over the page and finally clicked on Search, declaring, “I wish there was a link that had what I wanted.” And there was. Exactly what the user wanted. However, the user would have had to scroll to see it, and they never did.

During the debriefing meeting, a team member said what everyone was thinking: “Our users don’t like to scroll. We need to make all of our pages fit on one screen.” Everyone but me, that is.

Common Problem: Jumping to Inferences Too Quickly

It wasn’t that I’m particularly disagreeable. (I probably am, but that’s not the point.) It’s that I didn’t feel we had enough information to go from “the user didn’t see a link because they didn’t scroll” to “our users don’t like to scroll.” That’s a huge jump and I wasn’t ready to make it.

When working with teams, we see this same problem all the time. They are so anxious to fix things—fix anything—they rush the process of analyzing what they saw. And rushing, as we’ve been taught since grammar school, can get us in trouble.

The Road to Recommendation

To get a recommendation for change, we need to slow down and go through the four steps: Observation, Inference, Opinion, and finally Recommendation.

Observations are what we see and hear. They are objective elements. Anybody who saw or heard the same thing would report the same observations. (Most of the time.)

Seeing the user move their mouse but never scroll is an observation. Hearing the user say they wished a particular link was present is an observation. Observations form the basis of our analysis. Everything rests upon them.

Inferences are why we think the observations happened. We’re making a guess (possibly an educated guess) about the causality of the observation. “The user didn’t scroll because, like many users, they don’t like to scroll,” would be one inference we could draw.

Opinions are our feelings about what is really happening. “Our pages are too long and often require scrolling,” is an opinion that follows naturally from the inference above. We form opinions from our inferences and we use opinions to form our recommendations.

Recommendations are why we’re doing the analysis in the first place. We want to change the design for the better, so we need to know what to change. “Make all the pages fit on a single screen,” is the recommendation that, upon approval, is the catalyst for improvement.

Hopefully. If we did it right.
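
If it helps to keep the four steps distinct during analysis, here’s one way to record them separately. This is purely a sketch in Python; the structure and field names are invented for illustration, not part of the process itself:

    from dataclasses import dataclass

    @dataclass
    class AnalysisThread:
        """One trip down the Road to Recommendation (illustrative structure)."""
        observations: list[str]  # objective: what we saw and heard
        inference: str           # our guess at why the observations happened
        opinion: str             # what we feel is really going on
        recommendation: str      # the change we propose, once vetted

    # The scrolling example from above, written out as one thread:
    scrolling = AnalysisThread(
        observations=[
            "User moved the mouse all over the page, then clicked on Search.",
            "User never scrolled.",
            'User said, "I wish there was a link that had what I wanted."',
        ],
        inference="Users don't like to scroll.",
        opinion="Our pages are too long and often require scrolling.",
        recommendation="Make all the pages fit on a single screen.",
    )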

Observations From All Over

Observations don’t only come from watching users in studies. You can draw observations from other types of analysis.

The designers at WellsFargo.com were poring over their search logs one day. To their surprise, they discovered the most common search term wasn’t “checking” or “loans”, but a blank entry. People were searching for nothing. A lot of people. All the time.
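
You can reproduce this kind of check with a few lines of code. Here’s a rough sketch (the log format and file name are assumptions for illustration, not Wells Fargo’s actual logs) that tallies query strings and keeps blank queries as their own bucket:

    from collections import Counter

    def top_search_terms(log_path: str, n: int = 10) -> list[tuple[str, int]]:
        """Tally one query per line and return the n most frequent.

        Assumes a plain-text log with one raw query string per line;
        real logs would need their own parsing.
        """
        counts: Counter[str] = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                query = line.rstrip("\n").strip()
                # Count blank queries as their own bucket instead of skipping
                # them: "people searching for nothing" is exactly the signal.
                counts[query if query else "<blank>"] += 1
        return counts.most_common(n)

    for term, count in top_search_terms("search_queries.log"):
        print(f"{count:8d}  {term}")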

It turns out the reason this was happening was the home page design: it had an empty box, followed by a button labeled “Search”. The box was just a rectangle, not unlike the many other rectangles found throughout the site’s motif, and it offered no clue that you could type in it.

[Image: The Wells Fargo homepage, full of rectangles]
[Image: Closeup of the search box on the Wells Fargo homepage]

While Internet-savvy users correctly deduced the purpose of the rectangle, many users wanting to search the site would just click on the button first, thereby creating the “blank” entry in the search log.

Bad Inferences Spawn Detrimental Recommendations

The trick is to draw a good inference, one that leads to a recommendation that improves the interface. But hasty inferences that haven’t been properly vetted can lead to recommendations that don’t improve the interface or, in the worst case, make it worse.

A while back, an e-commerce team we worked with looked through their logs and found that people were abandoning thousands of dollars’ worth of products in their shopping carts. They were leaving the checkout process at the page that asked for a credit card.
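
Spotting where users bail out of a flow like this is a simple funnel calculation. A minimal sketch, with hypothetical step names and made-up session data standing in for the real logs:

    # Each session is the ordered list of checkout steps a user reached.
    CHECKOUT_STEPS = ["cart", "shipping_address", "credit_card", "confirmation"]

    sessions = [
        ["cart", "shipping_address", "credit_card"],                  # abandoned at credit card
        ["cart", "shipping_address", "credit_card", "confirmation"],  # completed
        ["cart", "shipping_address", "credit_card"],                  # abandoned at credit card
        ["cart"],                                                     # abandoned at cart
    ]

    reached = {step: 0 for step in CHECKOUT_STEPS}
    for session in sessions:
        for step in session:
            reached[step] += 1

    # Users lost between two steps abandoned on the earlier step's page.
    prev = len(sessions)
    for step in CHECKOUT_STEPS:
        print(f"{step:20s} reached {reached[step]:3d}   lost {prev - reached[step]:3d} since previous step")
        prev = reached[step]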

The site, which belonged to a successful, established catalog retailer, was the company’s first venture into Internet sales. The team’s product manager, already sensitive about the site having been put together quickly with many “compromises”, was quick to point to the lack of a secure transaction capability as the reason users were abandoning their shopping. This inference would turn out to be wrong.

But the team wouldn’t learn that for a while. Instead, they immediately formed the opinion that they needed more secure transaction capabilities and spent thousands of dollars building them. When that didn’t change the abandonment rate, they overhauled the messaging on the site, making sure that, on every page, it was loud and clear that the site was “Secure”. Of course, that didn’t help either.

It turns out the real problem was that users weren’t told the shipping costs until after they entered their credit card. And nowhere did the credit card screen tell users they could back out of the transaction if they didn’t like those costs. Afraid of purchasing something without knowing the total cost, users were unwilling to enter their credit card information.

And the worst of it was that their shipping rates were the best in the business—often free. The users had no way of learning this good news without entering their credit card information.

While they probably needed to make the transactions secure eventually, a ton of money was wasted because it was treated as an urgent priority. Not to mention the sales lost while the team focused on fixing the wrong problem.

Looking into Alternative Inferences

Let’s return to our scrolling problem. There were three observations:

  1. The user waved their mouse around the screen for a long time, finally clicking on Search.
  2. The user didn’t scroll.
  3. The user said, “I wish there was a link that had what I wanted.”

The team members drew the inference that this user didn’t like to scroll. Was that the only inference they could draw?

Another inference is that the design of the page communicated something to the user that discouraged scrolling. For example, there was a large margin at the bottom of the screen with a horizontal rule running right through it. We’ve seen, in other studies, users stop scrolling when they encounter large empty spaces or long horizontal rules, because it ‘feels’ like the page has come to an end. Could the margin be causing the problem?

Another possibility is iceberg syndrome, which occurs when users assume that what’s visible on the screen is the best and most useful of everything available. If what’s visible isn’t valuable, they assume the rest is even less valuable. They know there is more to see; they just don’t see why they should bother looking. So, they don’t scroll. Could that be the cause?

From this one set of observations, we could come to three independent inferences:

  1. This user doesn’t like to scroll and will miss anything below the visible page.
  2. The large empty space and horizontal rule stopped this user from scrolling further.
  3. The seemingly unrelated and useless visible content stopped this user from scrolling further.

Validating Inferences is Critical

Each inference would lead the team to a completely different course of action, so it’s important to make sure we’ve validated our inferences before we continue.

The first question we should always ask, when drawing inferences from our observations, is, “What are all the different causes for these behaviors?” We should state all the alternative inferences we can for each set of significant observations we come across.

Once we’ve compiled our list of alternative inferences, we now need to deduce which one is most likely. This is not just about having a statistically significant data sample. This is about collecting enough data to prove or disprove a given inference.

For example, we might look at the user’s behavior on other pages that didn’t have large margins at the bottom. Did the user scroll on those pages?

Often, we’ll compare multiple types of data sources. For example, after observing users missing critical links in usability testing, we might look in the site logs to see if other users are traveling between the two pages in question. If the logs agree with what we see in testing, that can be helpful. If not, we need to search for another cause of the problem.
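
A quick check like that might look like the sketch below, which counts direct transitions between two pages in a simplified, hypothetical clickstream of (user, page) events in time order; real site logs would need their own parsing:

    def transition_count(events: list[tuple[str, str]], src: str, dst: str) -> int:
        """Count how often users go directly from page src to page dst.

        `events` is a time-ordered list of (user_id, page) pairs; this
        simplified shape stands in for whatever the real logs hold.
        """
        last_page: dict[str, str] = {}
        count = 0
        for user, page in events:
            if last_page.get(user) == src and page == dst:
                count += 1
            last_page[user] = page
        return count

    events = [
        ("u1", "/products"), ("u1", "/search"),  # u1 never follows the link under test
        ("u2", "/products"), ("u2", "/detail"),  # u2 does
        ("u3", "/products"), ("u3", "/search"),
    ]
    print(transition_count(events, "/products", "/detail"))  # -> 1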

Sometimes, we’ll construct quick prototypes to see if we can get different behaviors. We could construct a page with different copy or without the empty spaces and try that with a few users to see if their behavior changes.

Producing Solid Recommendations

Effecting positive change is our end goal. Having solid recommendations is how we make that happen. Ensuring we’ve done our due diligence to produce those recommendations is absolutely critical.

By the time we’re producing a recommendation, we want to have a bundle of evidence to support it. We’ll have formed an army of inferences, only to pick and choose those that the evidence tells us are the strongest and most likely. If we’re not sure, well, we go back to the well and get more evidence.

That’s how we travel down the Road to Recommendation.

About the Author

Jared M. Spool is a co-founder of Center Centre and the founder of UIE. In 2016, with Dr. Leslie Jensen-Inman, he opened Center Centre, a new design school in Chattanooga, TN to create the next generation of industry-ready UX Designers. They created a revolutionary approach to vocational training, infusing Jared’s decades of UX experience with Leslie’s mastery of experience-based learning methodologies.
