Measuring Usability: The Basics
Usability is not a luxury. If your website drives revenue, no matter how big or small, you have a vested interest in turning motivated users into buyers, and that requires understanding the needs and expectations of those users. Fortunately, you don't need a big budget or a team of men in white coats wielding eye-tracking lasers to get a handle on your website's usability.
Assessing your current usability means knowing where to look and creating measurable benchmarks and goals. This article will help you assess your current usability, quickly and easily, as well as give you some basic tools for improving your user experience and, with it, your bottom line. I'm going to break it down into three major areas: (1) Web analytics, (2) Conversion rates, and (3) Split (A/B) testing.
(1) Web Analytics
If you have a commercial website of any kind, you're already collecting data in the form of traffic logs. Odds are good that you also have an analytics package to analyze those logs, either through your host or a third party, like Google Analytics. If you don't, talk to your hosting company or see the Resources section at the end of this article.

You can get quite a few clues about usability from your existing traffic logs and web analytics. Although analytics have been around just about as long as websites, they've matured a lot in the last few years, and we're still just beginning to tap into their value.
It's All Relative
Whenever you're taking a fresh look at your web analytics, it's important to keep in mind that there are seldom "right" answers. Getting caught up in absolutes and what any given number should be is a good way to make yourself crazy. Websites vary wildly, and you need to have a good understanding of your own site's baselines. For any given metric, focus on improvement and gradually sorting out the story behind the number.
So, what are some of the numbers that matter? Following are a few of the metrics that speak to usability issues.
Metric: Pages per Visit
One of your first clues to your website's usability is whether or not your visitors stay on your site long enough to see what you have to offer. Web surfers are notoriously impatient, and this is probably the biggest battle of website usability. Tracking the average number of pages per visit is a good starting point to understanding whether or not your visitors are sticking around. Again, don't focus too much on what a "good" number is. If you have a blog where most of the visitors hit the home page, getting them to 2-3 pages/visit could be fantastic. If you've got an e-commerce site where the product ordering page is 5 layers deep, that same 2-3 page/visit average is probably a bad sign.
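If you ever want to pull this number from raw logs yourself, the math is just an average over visits. Here's a minimal sketch in Python, using made-up pageview counts per visit (any real analytics package will report this number for you):

```python
# A minimal sketch of the pages-per-visit calculation, using
# hypothetical session data: each entry is the number of pages
# viewed during one visit.
sessions = [1, 4, 2, 7, 1, 3, 1, 2]  # pageviews per visit

avg_pages_per_visit = sum(sessions) / len(sessions)
print(f"Average pages/visit: {avg_pages_per_visit:.1f}")  # 2.6
```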
Metric: Time on Site
Similarly, the average time each visitor spends on your site is a decent indicator of how positive that visitor's experience is. Practically speaking, the more time people spend on your site, the better, especially if you're trying to sell a product directly.
Metric: Bounce Rate
Your "bounce rate" is an indicator of how many people leave immediately after hitting your site (or a particular page on your site). It should come as no surprise that you want to drive the bounce rate down. It's important to note that many factors can conspire to inflate the bounce rate. If a lot of people hit your site and immediately leave, you may have general usability issues. You may also have marketing campaigns or search engine results that violate user expectations. In other words, it's important that the expectations that you create in an ad are matched when a visitor clicks through, or you can expect a high bounce rate.
Tool: Exit Pages
This is less of a metric and more of a tool for finding your trouble spots, but most analytics packages will tell you where your visitors are exiting. Specifically, look for points in your process that seem like unnatural exit points. It's not uncommon for people to exit at major entry points (like your home page), but if a large number of users are dropping out at the second page of your shopping cart, for example, you probably have a usability problem. Finding and fixing these problem points can have a major impact on your bottom line.
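If your analytics package doesn't surface an exit-page report, you can approximate one from visit paths: the exit page is simply the last page in each visit. A sketch, with hypothetical page paths:

```python
from collections import Counter

# Each visit is an ordered list of pages viewed; the exit page is
# the last page in the list. These paths are made-up examples.
visits = [
    ["/home", "/products", "/cart"],
    ["/home"],
    ["/products", "/cart", "/checkout-step2"],
    ["/home", "/products", "/cart", "/checkout-step2"],
]

exit_counts = Counter(visit[-1] for visit in visits)
for page, count in exit_counts.most_common():
    print(f"{page}: {count} exits")
```

In this toy example, "/checkout-step2" tops the list, which is exactly the kind of unnatural exit point worth investigating.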
Don't Panic
If you haven't looked at your analytics for a while or haven't looked at any of these particular numbers before, don't panic. Website visitors are a fickle bunch, and your bounce rate and other numbers may make your heart skip a beat or two the first time you see them. Focus on the big picture and work towards improvement. Website usability is, ideally, an ongoing process of optimization, and today's numbers are just a benchmark for figuring out where you'd like to be tomorrow.
(2) Conversion Rates
Let's pretend you've dug through your analytics and are getting a handle on how usable your site currently is. That's a good starting point, and it will help you establish some benchmarks, but the next step is to start thinking about your goals. What does usability success look like? In the world of website usability, the gold standard has become something we call the Conversion Rate (CR).

Simply put, the Conversion Rate is a measure of how many visitors convert into buyers, where "buyers" can be loosely defined to mean people who reach any target goal, including making a purchase, filling out a contact form, downloading a document, etc. If you're operating a commercial website, you have an action you want people to take, and the role of strategic usability is to make it as easy as possible for them to achieve that goal.
The Long Version
Of course, that's just the quick and dirty definition. Technically, the conversion rate is the ratio of conversions to visitors, expressed as a percentage. Let's say you're tracking actual online buyers of a product. If, in a given month, your website had 5,000 visitors and 125 of them purchased a product, your conversion rate for that month would be 2.5% (125/5,000).
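The whole formula fits in one line of code; here's that same example as a quick sanity check:

```python
# The conversion rate from the example above: 125 buyers out of
# 5,000 visitors in a given month.
visitors = 5000
conversions = 125

conversion_rate = conversions / visitors
print(f"CR: {conversion_rate:.1%}")  # 2.5%
```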
There are a couple of details that are worth mentioning. When pulling the visitor side of the equation from analytics, I generally prefer Unique Visitors. It may inflate your CR slightly, but for tracking over time, it's a much less noisy number than overall visitors. For conversions, you'll probably have to look outside of your analytics, although many packages are starting to support tracking conversions/goals. If you have a commercial website, you may already have a mechanism to track purchases. Bear in mind that you should measure the total number of visitors who converted, not how many purchases they made (i.e. one person buying 14 items is still just one conversion).
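Here's a sketch of that deduplication, using hypothetical purchase records keyed by a visitor ID:

```python
# Count converted visitors, not purchases. Each (visitor_id, item)
# pair is a hypothetical purchase record; one visitor buying many
# items still counts as a single conversion.
purchases = [
    ("visitor_42", "widget"),
    ("visitor_42", "gadget"),   # same visitor, second purchase
    ("visitor_99", "widget"),
]

converted_visitors = {visitor_id for visitor_id, _ in purchases}
print(f"Conversions: {len(converted_visitors)}")  # 2, not 3
```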
Tracking Changes in CR
As soon as you start tracking CR, you're probably going to get a bit carried away (it happens with any new measurement). Let me try to head that off with some advice. First, as with just about any website metric, try to track changes over longer time periods. Comparing daily numbers is an exercise in futility. Personally, I like to track both weekly and monthly numbers (separately). Note that some analytics packages compute unique visitors differently depending on the time period, so only compare like time periods (months to months and weeks to weeks).
Just as importantly, let me share the mantra my experimental psychology professors drilled into me: "correlation does not imply causation". If you change something on your website, and CR goes up or down, don't assume it's because of what you did. Just like overall traffic, many factors can affect conversions, including weekends and holidays, consumer confidence, marketing campaigns, etc.
So, how do you determine if changes to your website actually do have an impact on conversions? For that, you're going to need to run an experiment, using something we like to call the A/B or "split" test.
(3) Split (A/B) Testing
Most of the time, people make changes to their websites and never look back. They take an educated guess and hope for the best. Ideally, though, you'd like to know whether a change was for the better or worse, in a real, quantifiable sense. Enter the split test, sometimes called the "A/B" test. The name is pretty self-explanatory: you have two versions of something (copy, a graphic, a layout element, etc.), show version A to one group of visitors and version B to another, and find out which performs better.

Why Split Test?
So, why not just run version A for a while, then run version B for a while, and see what works better (sometimes called "sequential" testing)? The main reason is that something may have happened during those two different time periods to make visitors behave differently. People may go on vacation, the market may slump, you may have a major marketing campaign, the Fed might raise rates, Starbucks might introduce a size bigger than "Venti", etc. In the end, you want to have some confidence that any difference you measure between groups A and B is because of what you changed, not because of some external factors muddying up your results.
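How do you actually split the traffic? One common approach (a sketch of the general technique, not any particular tool's method) is to hash a stable visitor ID, such as a cookie value, into a bucket, so each visitor consistently sees the same version on every return visit:

```python
import hashlib

# Deterministic 50/50 bucketing: the same visitor ID always maps to
# the same bucket, so returning visitors see a consistent version.
# The test name is included so different tests split independently.
def assign_bucket(visitor_id: str, test_name: str = "homepage-test") -> str:
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_bucket("visitor_42"))  # always the same answer for this visitor
```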
What Should You Test?
Split-testing originated in the advertising world and is often used on the web to test subtle differences in advertisements and landing pages (the pages people arrive on when they click on an ad). You might test changes in copy, layout, colors, button shapes, or just about any page element that potentially has an impact on your visitors.
As for whether you should test big changes or small ones, there seem to be two camps on this subject. One side, which I tend towards, believes in starting small. That way, if you do find that B is better than A (or vice versa), you'll have some idea of why. In the long term, that will allow you to make more educated guesses about future changes. The other side says that you should go for big changes, as you'll get more bang for your buck (and your time). I can't completely deny the logic of that; it often comes down to what you need to accomplish and how long you have to accomplish it. A stable site that does pretty well may want to stick to an evolutionary process of incremental changes. A site that's having trouble or needs a major overhaul may require more radical methods.
Finding Significance
A split test is essentially an experiment: you present two options to two groups of people and measure how those groups react. Let's say you observe the following outcome: version A converts at 1.5% and version B converts at 2.5%. That sounds good, but are the two groups really different? Simply put, maybe not; if the groups aren't big enough, an apparent difference like that may be nothing more than noise. In statistics, we call a reliable difference "significant". A significant difference is one that, to the best of our ability to measure it, represents a real difference and not just poor measurement or individual differences between the people who happen to be in the groups.
Understanding all of the mathematics of significance is well beyond the scope of this article, but your best defense is to collect plenty of data. Make sure that (1) your groups are split roughly 50/50 (in a straight A/B test), and (2) you measure a decent chunk of conversions. The reality is that it may take thousands of visitors for you to get a reliable result, depending on the size of the difference that you observe. For a clear answer, you'll need to consult an expert or at least use a split-test calculator (see the Resources section below).
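If you're curious what a split-test calculator is doing under the hood, the standard approach is a two-proportion z-test. Here's a sketch using the 1.5% vs. 2.5% example above, with hypothetical group sizes:

```python
import math

# Two-proportion z-test: the math behind a typical split-test
# calculator. Group sizes are hypothetical; conversion counts match
# the 1.5% vs. 2.5% example above.
n_a, conv_a = 4000, 60    # version A: 60/4000 = 1.5%
n_b, conv_b = 4000, 100   # version B: 100/4000 = 2.5%

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# |z| > 1.96 corresponds to significance at the conventional 95% level
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")  # z = 3.19, True
```

With smaller groups, say 400 visitors each, the same percentages would not clear the bar, which is exactly why collecting plenty of data matters.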
Some Advice Before You Split
From my days as an experimental psychologist, I know how tempting it can be to peek at the data every chance you get. Don't. Bias is a powerful force, and it's best to let the test run its course and not interpret the data until you're done. Too often, people start to see the results trending the way they'd like, decide they've got enough information, and cut the experiment off early. Also, try not to make changes during the test, no matter how small they seem. If you make a mistake in the testing procedure (that would bias the results), start over, as painful as that may be.
Beyond The Basics
Hearing the voices of your users across the internet isn't easy, but hopefully I've given you the incentive and a few of the tools to start listening better. Pay attention to your web analytics, define your goals, use conversion rates to measure those goals, and start testing changes that may have an impact on potential buyers. Good usability is a process and a habit; you may make a few mistakes along the way, but if you consistently make an effort to improve your user experience, you'll eventually be rewarded.

Additional Resources
1. Google Analytics
2. Split Test Calculator