
The Challenge of Identifying UX Success Metrics

by Jared M. Spool

A city’s IT department is building a new online parking-violation payment system. For the first time, instead of sending checks in the mail or paying at the City Clerk’s Office, people can pay their parking tickets through the city’s website or a mobile app. One question hung over the design team: how would they know if the new system is an improvement over the existing one?

In the design team’s discovery research, they met with people who had recently received parking tickets. The recipients talked through what it was like to get a ticket and how they made sure it got paid.

In their research, many people said it was difficult to pay by mail. For some, this was the first time they’d handwritten a bank check in months. Just locating their checkbook took effort.

If they didn’t want to pay by check, they had to go to the City Clerk’s Office. While the office accepted credit card payments, the parking violators had to take time off work to go in. It was very inconvenient (and, ironically, required them to pay for more parking).

Because the tickets were difficult to pay, many parking violators missed the 30-day payment deadline. They’d get hit with a late-payment penalty. If they still didn’t pay, they’d put their car registration at risk. In the worst cases, they could be subject to a warrant and arrest. None of this was a good experience.

The team hopes the new system will solve these challenges. It’s a straightforward e-commerce solution. Other municipalities have already implemented similar systems. The team has many models to explore.

Measuring UX success starts with identifying the outcome.

The system will be expensive to build and maintain. The City Council wants assurances that the investment will be worth it. They asked for data to show that the new system is, in fact, an improvement over the existing system.

When measuring the effects of implementing a user experience, we need to look at the intended outcomes of the design. A UX outcome is the change we see in the world because we’ve done a great job implementing our design.

In the case of the ticket payment system, the design team’s intended outcome was helping people pay sooner. This would eliminate any late fees and the associated hassles that come from missing the 30-day payment deadline.

Multiple user roles mean multiple intended outcomes.

When designs have multiple user roles, there are likely different intended outcomes for each role. After all, we want to improve everyone’s lives with our designs.

For example, the new payment system will hopefully reduce the burden on the City Clerk’s Office. Right now, a substantial part of the office employees’ time is spent processing in-person and mailed-in payments. The online system should reduce that processing time.

Also, parking officers currently write tickets by hand. Their handwriting can be hard to read, especially in cold weather when they’re wearing gloves, which makes the tickets difficult for the clerks to decipher.

The team identified all the changes they hoped their design would make. From these outcomes, they built their key measures.

Measuring the evidence of behavior change.

The medium of designers is behavior. — Robert Fabricant

We achieve our design’s outcomes when our users behave differently. As designers, we can only change a user’s behavior by changing our designs.

The city’s design team wants users to pay through their new system. They can measure success by the percentage of ticket recipients choosing to pay online, versus mailing in checks or showing up at the City Clerk’s Office. They can also track whether the number of people missing the 30-day payment deadline decreases, as they hypothesize it will.
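
If the payment records are available in structured form, both measures reduce to simple counts. Here’s a minimal sketch in Python, assuming hypothetical records with a payment channel, an issue date, and a payment date; the field names and data are invented for illustration, not the city’s actual schema.

```python
from datetime import date, timedelta

# Hypothetical payment records; field names and values are invented for illustration.
payments = [
    {"channel": "online", "issued": date(2024, 5, 1), "paid": date(2024, 5, 3)},
    {"channel": "mail",   "issued": date(2024, 5, 2), "paid": date(2024, 6, 15)},
    {"channel": "clerk",  "issued": date(2024, 5, 4), "paid": date(2024, 5, 20)},
]

DEADLINE = timedelta(days=30)

# Share of tickets paid through the new online channel.
online_share = sum(p["channel"] == "online" for p in payments) / len(payments)

# Number of tickets paid after the 30-day deadline (late-fee territory).
late_count = sum(p["paid"] - p["issued"] > DEADLINE for p in payments)

print(f"Paid online: {online_share:.0%}")
print(f"Missed the 30-day deadline: {late_count}")
```

Tracked month over month, rising online share and a shrinking count of late payments would be the evidence of the behavior change the team is after.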

This means users have to know the new payment method exists. Those users need to access it easily. They need to complete the payment online without encountering obstacles.

In the end, the team and their design will succeed if they see an increase in the system’s usage. If the percentage of transactions using the system steadily increases, the design team can report to the City Council that the system is working.

The team can also report improvements from the City Clerk’s Office. Do the office employees spend less time processing tickets, now that many of the transactions are handled online? Is time spent handling transcription errors reduced, because tickets are now digitally created instead of handwritten?

Transactions make UX success metrics easy to identify.

Measuring the improvements from the online parking-violation payment system is straightforward. The team can see how to measure outcomes because they can see each transaction.

While researching the existing payment system, the design team can see where users encountered friction during their transactions. If they can find something measurable in that friction, they can use that measure to track improvements.

For example, they can measure the time it takes clerks to transcribe handwritten tickets. And they can count the errors that resulted from hard-to-read handwriting. By tracking those measurements of friction, the team can set a target to aim for.

The team can also work with parking violators to learn the burdens of making payments through the old system. They can measure how much time it takes to travel to the clerk’s office or to locate their bank checkbook. They can measure the costs of late fees and penalties. These measures can be aggregated and reported.

Transactional systems are the easiest to measure, because a transaction is a clear change in the world. When working on transactional systems, teams only need to consider a few scenarios. And each scenario has a clear end-state.

When the end-state is clear, measuring the end-state is simple. This makes the UX success metrics easy to collect and report.

Non-transactional outcomes are harder to measure.

What happens when the transactions aren’t clear? UX success metrics become more difficult to pin down.

As the city’s IT department is working on the new parking-violation payment system, they’re also redesigning the city’s Zoning Board information system. The Zoning Board system contains all the minutes, decisions, and regulations that the board has compiled over its many years of existence.

The existing repository is a collection of PDFs organized by date. It’s clear that the organization and format of this information are suboptimal.

But it’s not clear what an improved system should look like. When the team started the project, they didn’t understand how anyone used the repository of zoning information.

When we don’t know what our improved designs will be used for, it’s very difficult to understand what the outcomes are. What changes do we need to see in the world? Without answers to these questions, we can’t put UX success metrics in place for our project.

Gaining deep awareness to identify UX success metrics.

Picking the right metric to track is important. The team can pick a variety of metrics to show how users are interacting with the system. The team could monitor the number of visitors, how long they spend on pages, or how many times they use the search functionality.

But none of these metrics tell the team whether the design is helpful to the users. And without an understanding of why the users need the system, the team can’t determine if the users have achieved their goals.

For the person who receives a parking ticket, we know their goal is to pay off the ticket and make it go away. Yet, for the person who comes to the Zoning Board information system, what do they need to achieve? What needs to change in their world for their interaction with the information system to be a success?

To learn this, the team needed to gain deep awareness about the people who use this system. They started by talking to the clerks in the Zoning Board office. Who was calling with questions? What questions were they asking? Was the information they needed in the existing PDFs?

Then the team contacted the people who were calling into the office. What information were they seeking? Why did they need that information? Once they had it, what would they do with it? What triggered the need for this information at this time?

All of that information generated a deep awareness of how people were using the information today. It also showed how the current PDF-based structure of the information created friction as people pursued their individual goals.

UX success metrics are iterative too.

In their research, the team found that many of the zoning board callers were interested in a specific property. They either wanted to know the existing zoning requirements for that property, or they had requested a change in zoning to be considered by the board. Those interested in changing a property’s zoning were often interested in the status of their request.

If the team made these two goals easier to accomplish, they’d reduce the number of calls coming into the Zoning Board office. This would free up the office staff to do other work.

This gave the team two sets of UX success metrics for the Zoning Board information system. One, they could track the number of searches for specific properties and zoning change requests. Two, they could track the number of calls the clerks received with questions about specific properties or change requests.
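
One way to picture these two metric sets is as a simple period-over-period report. The sketch below assumes hypothetical monthly tallies pulled from the system’s search logs and the clerks’ call log; the metric names and numbers are invented for illustration.

```python
# Hypothetical monthly tallies; metric names and numbers are invented for illustration.
monthly_metrics = {
    "2024-06": {"property_searches": 420, "status_searches": 130, "property_calls": 95},
    "2024-07": {"property_searches": 510, "status_searches": 170, "property_calls": 62},
}

def month_over_month(metric: str) -> float:
    """Percent change in a metric between the two most recent months."""
    months = sorted(monthly_metrics)
    previous, current = (monthly_metrics[m][metric] for m in months[-2:])
    return (current - previous) / previous * 100

# Searches going up while property-specific calls go down is the change the team hopes to see.
for metric in ("property_searches", "status_searches", "property_calls"):
    print(f"{metric}: {month_over_month(metric):+.1f}% month over month")
```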

As the system was implemented, they monitored the types of questions the clerks were receiving. When callers asked more sophisticated questions, the team considered improvements to the system. The incoming calls became the driver for improvements, with the reduction in those questions as a new UX success metric.

As we learn more about what our users need from our designs, we need to constantly revisit our UX success metrics. Over time, the metrics themselves will grow more sophisticated.

Choosing UX success metrics can’t be one-and-done.

A UX success metric is a specific type of metric: one that tracks the outcomes our users want to achieve. Some outcomes, as in the case of the parking ticket payment system, are easy to determine from the start.

But many outcomes require us to up the maturity of our research capability. The metrics we create to track those outcomes need to adapt to what we learn in the research.

For many products and services, we can’t pick a set of metrics at the outset and say, “Ok, that’s how we’ll always measure success.” If we do that, we’ll lock ourselves into a simplified notion of success. That’s how we unintentionally leave a door open for competitors to steal our business from us.

Our metrics need to grow as our understanding of our users grows. And with every growth spurt, we’ll become one step closer to a design-mature organization.

About the Author

Jared M. Spool is a co-founder of Center Centre and the founder of UIE. In 2016, with Dr. Leslie Jensen-Inman, he opened Center Centre, a new design school in Chattanooga, TN to create the next generation of industry-ready UX Designers. They created a revolutionary approach to vocational training, infusing Jared’s decades of UX experience with Leslie’s mastery of experience-based learning methodologies.
