
UX Metrics: Identify Trackable Footprints and Avoid the Woozles

by Jared M. Spool


One of my favorite childhood stories comes from A.A. Milne’s Winnie-the-Pooh. It’s a story in which Pooh and Piglet set out to hunt a Woozle.

At the start, Piglet comes upon Pooh wandering through the 100-Acre Woods and joins him for the stroll. Pooh shows Piglet footprints in the snow that he’s following; he believes they’re the tracks of a Woozle, which he’d very much like to catch. Piglet and Pooh excitedly continue to follow the tracks.

After a short while, Piglet and Pooh notice a second set of footprints join the first. “Oh, there must be two Woozles,” Pooh deduces. They continue to follow along, only to suddenly come across a point where a third set of footprints joins the tracks. However, this set is different, so there’s a bit of discussion about it possibly being two Woozles and a Wizzle or possibly two Wizzles and a Woozle.

Eventually, the always-hero of the story, Christopher Robin, arrives to ask Pooh what he’s up to, saying he’d been watching the two of them walk in circles around a big oak tree. Pooh explains their adventure in tracking the Woozles and Wizzles, only to realize that the tracks they’d been following all along were his own and Piglet’s.

Why Out-of-the-Box Analytics Aren’t Helpful for UX Metrics

As I watch teams struggling to identify key UX metrics, I sometimes think they are behaving like Pooh and Piglet strolling through the 100-Acre Woods. The teams are hunting for a measure (or two or three) they can use to show that the investment their organization is making into improving the user experience is paying off. Key UX metrics are hard to find and, like the elusive Woozle, difficult to capture.

Teams often start with the metrics that come out of the box. Tools like Google Analytics come with metrics that have important-sounding names, like Unique Visitors, Bounce Rate, and Time on Page. However, most teams quickly realize these metrics don’t actually track anything that’s meaningful to the users’ experience.

Sure, the Bounce Rate, which supposedly measures whether someone leaves the site immediately or stays, sounds like it says something important about how people interact with the design. (I say ‘supposedly’ because it only does so if the site has been correctly instrumented and that, it turns out, rarely happens.) Did the person who left the site do so because they were confused and gave up? Or was it because the site did exactly what it was supposed to and they were happily moving along in their adventure?

A high Bounce Rate might be ‘bad’ (and warrant being lowered) or the same rate might be ‘good’ (and warrant doing more of the same). We can’t tell which from this number.

Bounce Rate isn’t the only culprit among the out-of-the-box metrics. They all have the same problem.

Lots of Unique Visitors are ‘good’ if those visitors are exactly the people the business wants exploring the design. Yet, if a counted visitor is a person who mistakenly clicked on an ad (and approximately 50% of mobile ads, for example, are clicked on by mistake), then the total number of Unique Visitors doesn’t mean what we think it means. An increase in Unique Visitors isn’t always a good thing.

Maybe Time On Page is a good thing, because we think it says people spent a lot of time looking around at all our good stuff? Or is Time On Page a bad thing, because we think it says people spent a lot of time being really confused about what’s on the page? Just by looking at Time On Page, we don’t know whether it’s better for that time to increase or decrease. What do we do differently?

Woozles and UX Metrics

Out-of-the-box metrics themselves are only observations. They don’t have a story that helps us understand if a better design should increase them or decrease them. Without the story, we don’t know what to do differently.

To compensate, teams use an inference. Inferences are stories we craft when direct observation doesn’t fill in what we need.

Pooh and Piglet had an observation: the tracks kept increasing. They inferred the tracks were those of a Woozle, then two, then two plus a Wizzle. There was no direct observational evidence that there were Woozles or Wizzles anywhere in the 100-Acre Woods. And without Christopher Robin’s help, they would’ve kept hunting the Woozles and Wizzles forever, to no avail.

When we say things like “Well, they left the site because our content is boring,” we’ve made an inference just like Pooh. There’s no observational evidence that our site is boring. The inference is a Woozle and by acting on it, we’ve set off on a Woozle hunt.

Hunting a Transaction Security Woozle

Years ago, when e-commerce was a very young thing indeed, we were contacted by the e-commerce manager for a major U.S. office supply chain. He was looking for any information we had about how shoppers decide if a site is safe to purchase from. Did we know any good ways to convince their shoppers that they’d implemented a secure shopping environment?

In those days, the evening news was filled with stories about people possibly getting scammed by fake internet web sites and how everyone had to be extra careful before shopping online. The manager told me they’d made a major investment in secure transaction technology, but that they were afraid users weren’t noticing and that was affecting their sales.

I asked the manager how they knew security concerns caused them to lose sales. He told me they’d seen a huge drop-off in their checkout sequence when people were asked to enter their payment information. He said his entire team was convinced shoppers perceived the site wasn’t secure and were abandoning.

To make matters worse, the team had believed it was a security issue for a while and had taken expensive steps to fix it. They revamped their entire e-commerce transaction system because they’d felt it wasn’t handling SSL certificates well enough. And they paid a lot of money to several third-party trust partners to validate and verify their security.

And despite all the money and time they spent, none of these improvements seemed to reduce the drop-off rate at the payment page. What could we do to help?

We told them we’d help with user research. We started our research as we often do, by watching real shoppers buy something on the site. The team had never done this before; it was an entirely new experience for them.

Acting On Observations, not Inferences

The first shopper in our study was a small business owner who, coincidentally, was in the market for a new high-end color printer. He wanted a $2,400 unit and was quite excited to see it on sale for $200 off. He was all set to make the purchase and started his way through checkout. Sure enough, he stopped when he got to the payment information page.

However, it wasn’t the security that stopped him cold. In fact, he didn’t care about the security at all. He assumed the retailer was legitimate and the site was secure.

Instead, the reason the shopper stopped was that this high-end printer weighed 140 lbs. A printer that heavy would be costly to ship. And he didn’t know what the shipping costs were. There was no way he’d enter his payment information until he saw the total cost of the purchase, not just the price tag of the unit. Because he didn’t know the shipping costs, he wouldn’t continue.

What the shopper didn’t know (and couldn’t know) was that the next page would tell him the shipping costs, and he could say no to the transaction if he thought it was too expensive. Nor did he realize that purchases over $25 had free shipping. (The site mentioned it in a tiny, ad-looking box on the homepage, which he never noticed because he was focused on finding his printer. It was never mentioned again.)

Turns out he wasn’t the only participant in our study to get stuck on the payment information page for not knowing the shipping costs. A significant portion of the participants got stuck there too.

And sure enough, the team acted on that observation by making it clear which purchases had free shipping and moving the shipping calculator earlier in the checkout process. From that, they saw a huge reduction in page abandonments on the credit card screen, which resulted in millions of dollars of additional sales completed on the site.

The team’s observation (users abandoning at the payment information page) had been correct. But, their inference that the site wasn’t secure enough was wrong. Thinking it was a problem of transactional security was a very expensive Woozle hunt.

Hunting a Comparison Shopping Woozle

In another e-commerce study, this one for a major clothing retailer, we were tasked with tracking down yet another Woozle. This time, the team was convinced that their product descriptions weren’t doing the job. They reasoned that when a shopper is in the store, seeing the product up close and trying it on is important to the sale. How could the team duplicate that experience online?

The team was just days away from signing several expensive contracts with service providers who offered virtual models and other tools that promised better sales. The team wanted us to confirm that these investments would indeed provide increased sales.

We went into our study focusing on the information on the product description pages. At that time, many of us in the e-commerce world assumed online shoppers behaved like people shopping in a store, visiting each product’s page as if they were taking the item off the shelf and inspecting it for purchase. They’d only choose which product to buy after comparing several, picking the best of what they found.

The pattern we expected to see from our online shoppers was that they’d bounce from one product page to the next, carefully studying each one. And we did see some shoppers bounce from one product page to the next.

However, we also saw many purchase without bouncing. They’d go to the page that listed all the products in that category (such as all the men’s shirts) and choose the most interesting product. If that product met their needs, they’d purchase it without looking elsewhere for comparison.

They only went on to other product pages when the product they’d just visited didn’t meet their needs. The comparison wasn’t actually a comparison. It was a straight-out elimination search.

The product comparison idea was also a Woozle. When we watched shoppers actually shop, we never saw the Woozle. We saw something completely different.

Pogosticking

Those shoppers were jumping back and forth between product and gallery pages (a gallery page is what we call a category page, because it’s often a gallery of products). The shoppers were eliminating one product, then the next, until they found the one they wanted. We gave that up-and-down motion between product and gallery pages the name “pogosticking.” And the more they pogosticked, the less likely they seemed to find an ideal product to purchase.

We went back to the analytic data we’d collected during our study. Sure enough, we saw the pogosticking pattern there too.

66% of all purchases happened with the shopper visiting only a single product page. Of the shoppers who visited more than one product page, the more pages they visited, the less likely they were to purchase at all.

What we observed, and confirmed through the analytics, was that our shoppers weren’t looking to product pages to compare and subsequently decide. Instead, they had already decided while looking at the gallery page. When a product’s listing on the gallery page surfaced the details the shopper needed to decide, they were more likely to purchase that item.

Hunting the product comparison Woozle wouldn’t work. Increasing the detail on the product description or implementing virtual models wasn’t what would increase the retailer’s sales. Improving the gallery pages would.

Identify Trackable Footprints and Avoid the Woozles

One great thing about pogosticking is that it gives us a very clear set of footprints to track. We can classify which pages are product pages and which are gallery pages. Then we can look for instances where users jump between them. We can measure how many purchases (or, at minimum, add-to-cart events) happen with pogosticking and how many happen without.

We can also break it up by category. Maybe men’s shirts see less pogosticking than women’s shoes? If so, we can study the differences in how shoppers interact with those pages. Using clear footprints as the UX metrics we track gives us a way to tell when our designs are working well and when they could use improvement. They tell us whether the changes we’re making are improving the design or not.
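To make these footprints concrete, here’s a minimal sketch in Python of how a team might tally pogosticking in its page-view logs and compare conversion between sessions that pogostick and those that don’t. The session data layout, page-type labels, and function names are all hypothetical; classifying URLs into gallery and product pages is assumed to happen upstream.

```python
from collections import defaultdict

def count_pogosticks(page_types):
    """Count hops from a product page back up to a gallery page."""
    return sum(
        1
        for prev, curr in zip(page_types, page_types[1:])
        if prev == "product" and curr == "gallery"
    )

def conversion_by_pogosticking(sessions):
    """Compare purchase rates for sessions with and without pogosticking.

    sessions: iterable of (page_types, purchased) pairs, where page_types
    is the ordered list of page types one shopper visited.
    """
    tallies = defaultdict(lambda: [0, 0])  # bucket -> [purchases, sessions]
    for page_types, purchased in sessions:
        bucket = "pogosticked" if count_pogosticks(page_types) else "direct"
        tallies[bucket][0] += int(purchased)
        tallies[bucket][1] += 1
    return {bucket: bought / total for bucket, (bought, total) in tallies.items()}

# Tiny worked example: one direct purchase, one pogosticking session with none.
sessions = [
    (["gallery", "product"], True),
    (["gallery", "product", "gallery", "product", "gallery"], False),
]
print(conversion_by_pogosticking(sessions))
# {'direct': 1.0, 'pogosticked': 0.0}
```

The same tally could be keyed by product category to surface differences like the men’s-shirts-versus-women’s-shoes comparison above.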

Pogosticking footprints don’t only apply to e-commerce. When a news site lists its current stories, that’s a gallery page. When a customer support site lists pages where customers get answers, that’s a gallery page too. It’s possible we could use a similar footprint to measure those pages’ effectiveness.

Not all observations turn nicely into clear footprints. For example, it would be hard to code up a trackable footprint from the observation that people stop shopping when they don’t know the shipping costs. It’s hard to tell, from the recorded log file activity, which abandoning users cared about the shipping costs.

However, we observed that knowing the shipping costs could increase sales. We can watch whether the easy-to-track footprint (people who stop shopping at the payment information page) changes when we move shipping information to a better place in the checkout flow. If we think shipping cost calculation is a major cause, this number should go down when we make those costs easier to discover.
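As a sketch of what tracking that footprint might look like (again, the checkout step names and data layout here are hypothetical), we could compute the payment-page abandonment rate before and after the change:

```python
# Hypothetical funnel data: for each session, the furthest checkout step reached.
CHECKOUT_STEPS = ["cart", "shipping", "payment", "confirmation"]

def payment_abandonment_rate(furthest_steps):
    """Share of sessions that reached the payment step but went no further."""
    payment_idx = CHECKOUT_STEPS.index("payment")
    reached = sum(1 for step in furthest_steps
                  if CHECKOUT_STEPS.index(step) >= payment_idx)
    stopped = furthest_steps.count("payment")
    return stopped / reached if reached else 0.0

# Compare the footprint before and after surfacing shipping costs earlier.
before = ["payment", "payment", "confirmation", "cart", "payment"]
after = ["confirmation", "payment", "confirmation", "shipping", "confirmation"]
print(payment_abandonment_rate(before))  # 0.75 -- 3 of the 4 who reached payment stopped
print(payment_abandonment_rate(after))   # 0.25 -- 1 of 4 stopped
```

If the shipping-cost observation is right, the second number should drop after the change, just as the office supply retailer saw.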

Trackable Footprints Come From User Observation

The best UX metrics we’ve ever found have all come from observing users. We see users do something and ask, “How often does that happen in real life?” This is how we’ve uncovered hidden treasure troves of millions of dollars in retail websites. We’ve also found under-read content on content marketing sites and under-utilized functionality in enterprise applications. And we’ve found workflows that were overly complicated, creating undue friction that kept users from achieving their objectives.

When we’ve just drawn inferences from analytics, we end up chasing Woozles and getting nowhere. By starting with user observations and identifying footprints to build our UX metrics, we create powerful tools for measuring how our designs work.

About the Author

Jared M. Spool is a co-founder of Center Centre and the founder of UIE. In 2016, with Dr. Leslie Jensen-Inman, he opened Center Centre, a new design school in Chattanooga, TN to create the next generation of industry-ready UX Designers. They created a revolutionary approach to vocational training, infusing Jared’s decades of UX experience with Leslie’s mastery of experience-based learning methodologies.
