
Human Factors International

The Business of UX Metrics


How to measure and manage the user experience

White paper

Phil H. Goddard, Ph.D., CUA
Executive Director, Human Factors International, Inc.

July 30, 2007

Human Factors International, 410 West Lowe, PO Box 2020, Fairfield, IA 52556
800-242-4480, hfi@humanfactors.com, www.humanfactors.com
© 2007 Human Factors International, Inc.


Table of contents

About the author
Introduction
    The UX manager's dilemma: a scenario
Evolution of the best practice review
    Discovering opportunity in the routine expert review
    Foundations for understanding UX: the VIMM equation
    Frameworks for evaluating best practice in UX design
The scorecard phenomenon
    The organizing power of the scorecard
    The surprising business impact of the grade score
    Tracking UX benchmarks across design iterations
    Tailoring scorecards for specific site types
    Scorecarding with both eyes open
Defining the page-level UX
    Shifting from site-wide to page-level scorecards
    Integrating general and page-level best practice scores
    Competitive reviews at the page level
    Tracking page-level metrics over time
Defining scenario metrics
    Quantifying successful user flow
Exploring metrics for brand perception and persuasion
    Evaluating deeper dimensions of the UX
The business of UX metrics
    Integrating metrics into a UX dashboard
    Elements of a UX metrics framework
    Tracking UX in business time
Summary
Appendix: Best Practice Review Scorecard (Web Site)

Human Factors International
1-800-242-4480 US/Canada, 1-641-472-4480
+44 (0)20 7953 4010 Europe, +91 (22) 4017 0400 Asia
hfi@humanfactors.com, www.humanfactors.com


About the author

Phil H. Goddard, Ph.D., CUA, Executive Director, Human Factors International

Phil Goddard is HFI's Executive Director, Western Region, with 15 years of experience in usability, 14 of those with HFI. Phil was on the research faculty at the University of Maryland, College Park, and a post-doctoral fellow with the National Institutes of Health and the Medical College of Pennsylvania. He has a Ph.D. in Cognitive Psychology, an M.S. in Cognitive Psychobiology, and a B.S. in Psychology/Biology. As director of training, Phil developed HFI's training curriculum and usability analyst certification program (CUA). He has published in scientific journals in the field of aging and cognitive performance, and served as a reviewer for NCI's usability guidelines. Phil is an expert in all facets of usability engineering and user experience design, including techniques for concept design, feasibility, customer definition, site/application design and assessment, strategic development, usability metrics, usability infrastructure development, management, consulting, and mentoring. Recent projects have included financial services, health insurance, consumer user interfaces, corporate intranets, and e-commerce and corporate identity sites. Recent clients include Dell, Macy's, Wal-Mart, Toyota, Chevron, Symantec, Kaiser Permanente, IndyMac Bank, DirecTV, Wellpoint, and CapGroup.


Introduction

I remember when the field of information architecture was first introduced to usability practitioners years ago. It was a huge shift in the way we approached UI design, and it revolutionized the way we develop effective sites to this day. I believe the topic of User eXperience (UX) metrics has the same potential.

Of course, UX metrics aren't new. Some could argue that usability metrics have reached the level of international standardization. Customer satisfaction metrics routinely drive e-business marketing strategies. And Web site analytics are becoming a routine part of UX evaluation toolsets. But the connections between these metrics are fragmented or missing, and the integration of UX metrics into an accessible metrics framework is an opportunity waiting to be capitalized on.

In this paper I focus on a narrow band of UX metrics I call best practice metrics, derived from our experience routinely evaluating business-critical Web sites. After a brief review of how best practice frameworks emerged at HFI, I share some insights and anecdotes about how our expert reviews are being consumed and acted upon by business folks. Most surprisingly, the impact of a simple graded scorecard approach has turned a routine report into a powerful strategic planning and communication vehicle between UX teams and the business. I present a challenge, and an opportunity, for UX managers to turn existing UX metrics into an integrated UX dashboard that resonates with the business and from which more informed, strategic design decisions can be made.

The UX manager's dilemma: a scenario

Let's start with a typical scenario that we encounter in our consulting work today. Imagine you are the UX manager at a large eCommerce company. Your team supports a variety of projects:

1) Corporate site
2) Intranet: employee and sales portals
3) eCommerce site
4) Product software and services
5) Call center

You have a usability lab and a small usability team, and you use outside agencies for site design.
Your business groups want continuous updates to their pages and sites. Meanwhile, the product groups are updating their UIs and need help. With all this, you have a wide variety of design review requests, user requirements projects, and testing projects.


Your eCommerce site has been very successful, but it is taking all your resources, which means there has been little or no attention on the corporate site. You have no resources to dedicate to the individual business groups' demands, and the outside design team has no background in usability. If you turn any of these teams away, however, they might never come back for usability and design assistance again. How do you provide the business leaders, and yourself, with an understanding of the state of UX for each of these teams? How does the organization know which business areas need the most design and usability help?

Evolution of the best practice review

Discovering opportunity in the routine expert review

Your usability operation delivers quantitative and qualitative reports. The usability testing (UT) data is powerful: it tells you where and why users are having problems. However, usability testing takes time, costs are relatively high, and you have to plan in advance to fit tests into your development cycle effectively. Your Web analytics tool is cranking out everything happening on the server. After an initial investment, you've got a relatively cheap tool delivering reams of data. However, the reports don't tell you why users are dropping off your site, and you need expertise to make valid inferences from the observations.

Question (rhetorical): Is there an evaluation tool that approximates the insight power of the UT, yet in a faster timeframe and with less resource investment? The answer is yes. The expert review (ER) fills this assessment gap. It offers feedback at the speed of design (and design iteration) and can be done with a relatively low investment. At HFI, the expert review (or heuristic review) has been used routinely to provide an x-ray of an existing design. The review is conducted by independent reviewers following a review framework based on research and experience in good UI design. The ER can deliver insights about where problems exist in the design, why, and who might experience them. I want to outline the research foundations that formed the basis for HFI's ER process. These foundations, or standards of practice, made the development of metrics possible.

Foundations for understanding UX: the VIMM equation

As HFI experimented with approaches, ways of teaching, and techniques for communicating good user experience design, some constructs rose to the top. One of those was VIMM, an acronym for Visual, Intellectual, Memory, and Motor. Human factors engineering models focus on designing from the point of view of the user. One way to model the known capabilities and limitations of the human system is VIMM. VIMM is a way to focus on the amount of work a design forces the user to do. Does a particular interaction place high demands on the user's memory, or require a lot of motor work because the user has to switch back and forth from mouse to keyboard? By focusing on design factors that affect VIMM, we can define best practices for usability and ways to evaluate them.

Gradually, VIMM became part of the expert review framework with our clients. We called it the VIMM equation.

Frameworks for evaluating best practice in UX design

While developing our training programs, another organizing construct evolved: NCPI, an acronym for Navigation, Content, Presentation, and Interaction. NCPI categories offered a framework to help evaluate UI design and communicate best practice to diverse teams.


Since NCPI elements can be evaluated independently, we often encounter:

- Beautiful designs that don't serve useful content
- Good navigation systems with poor information architecture
- Rich content/features with seriously flawed interaction models

NCPI became a reliable, repeatable, and extensible framework for our expert reviews. If you're an experienced UX practitioner, this may be nothing terribly new or exciting; we've all been doing this for years, right?

The scorecard phenomenon

The organizing power of the scorecard

The turning point, and the relevance to metrics discussions, came when our expert reviews introduced the scorecard. Our reviews had always provided in-depth analysis of design issues and recommendations for redesign (by category), which amounted to a huge deck of slides. However, once digested and acted upon, the deck sometimes faded into obscurity. So we developed a scorecard that not only summarized the issues, but also gave a quantitative metric by category and overall.

Reviewer 1 scorecard

Reviewer 2 scorecard

In practice, two reviewers independently review the UI using the same scorecard (see excerpt below). We use a 100-point scale, weighting each item and group as a fraction of the overall 100-point score.
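As a concrete sketch of this weighting arithmetic: assuming (hypothetically) that each scorecard item carries a point weight and a pass/fail rating, a category grade is the weight earned within that category, and the site grade is the category grades summed against a possible 100. The categories, items, and weights below are illustrative, not HFI's actual scorecard.

```python
# Hypothetical sketch of the 100-point scorecard arithmetic described above.
# Item weights within a category sum to that category's share of the 100
# points; names and weights here are illustrative only.

def category_score(items):
    """Sum the weight earned on each (weight, rating_0_or_1) item."""
    return sum(weight * rating for weight, rating in items)

def site_grade(scorecard):
    """Add the category scores together to get the grade for the whole site."""
    return sum(category_score(items) for items in scorecard.values())

scorecard = {
    "Navigation":   [(10, 1), (10, 0), (5, 1)],   # 15 of 25 points earned
    "Content":      [(15, 1), (10, 1)],           # 25 of 25 points earned
    "Presentation": [(10, 0), (15, 1)],           # 15 of 25 points earned
    "Interaction":  [(25, 0)],                    # 0 of 25 points earned
}

print(site_grade(scorecard))  # 55 out of a possible 100
```

A failed item simply forfeits its weight, so the grade directly reflects how much of the best practice ideal the design has earned.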


For example, if the team needs to assess the corporate site, the usability practitioners evaluate the site (or appropriate sub-sites) and provide a score out of the total possible points for each design category. This becomes the grade for the site in that category. Add the scores for each category together, and you get the grade for the entire site.

The surprising business impact of the grade score

In a recent meeting with a client, our team presented a scorecard of the existing site as an initial benchmark. The score was low: around 40 out of 100. The business teams looked at each other and said, "Did we just get a C- or a D+?" After a light laugh around the group, the Senior VP took a deep breath and acknowledged they knew their site was off track. He appreciated 1) confirmation of it, and 2) a clear framework for understanding the problems and how to fix them.

For the last three to four years, in many different engagements, the clear and surprising outcome has been the value the business groups gain from this simple grade/score approach. The strengths of the grade approach are that it:

1) is familiar (we're entrenched in thinking this way)
2) has immediate impact (e-Business groups manage processes and changes using these measures, knowing where to apply their efforts and show success)
3) can be tracked as a benchmark
4) lends itself easily to competitive/comparative reviews

Tracking UX benchmarks across design iterations

With a standardized scale and routine measurement process, it's possible to benchmark and track site-level best practices across time (see figure below). 45 trending to 60 is great information for management, just as 45 trending to 35 is also great information.

Tracking benchmarks across site design iteration over time


HFI routinely starts our redesign projects with a benchmarking exercise and applies the same benchmark at the end of the redesign.

Tailoring scorecards for specific site types

HFI has developed scorecards for different site types and applications, each with a grade-oriented format, including:

1) large-scale information sites
2) eCommerce Web sites
3) Web applications (applications in a browser)
4) software applications (applications on a desktop)
5) financial services

General best practice scorecards focus the items on the goals and design challenges appropriate for a specific type of site. The following is taken from a general scorecard for an eCommerce site.

Scorecarding with both eyes open

The role of expertise. Expertise makes a difference in the interpretation of the review items. The scorecard must be thoroughly understood, preferably by someone with experience in the best practice research/literature and in UI design.

Inter-rater reliability. This refers to the consistency with which different raters score a site or page, and is related to expertise. Within HFI, we have seen reasonably similar scores between reviewers. When reviewer scores deviate, this becomes an opportunity to discuss the divergence and what meaning it might offer. Keeping the evaluated attributes as specific as possible also helps limit interpretation variance.

Directional indication vs. absolute value. If you push any best practice guideline too far, you break it. Likewise, putting numbers to these guidelines risks misrepresenting them as absolute values. View the scores as directional indicators, not absolute values. Their value is in their relation to each other and to themselves over time. If you are a practitioner, this gives a powerful structure to your design evaluation process. If you are a business owner, demand this sort of systematic measurement from your design and usability team.
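Inter-rater consistency can also be checked mechanically. A minimal sketch, with hypothetical category scores, that flags categories where two reviewers diverge by more than a chosen tolerance and then merges the totals:

```python
# Illustrative check of inter-rater consistency between two independent
# reviewers' category scores. All values and the tolerance are hypothetical.

reviewer_1 = {"Navigation": 18, "Content": 22, "Presentation": 14, "Interaction": 12}
reviewer_2 = {"Navigation": 16, "Content": 21, "Presentation": 20, "Interaction": 13}

def divergent_categories(r1, r2, tolerance=3):
    """Return categories whose scores differ by more than `tolerance` points,
    i.e., the items worth a reviewer discussion before scores are merged."""
    return [c for c in r1 if abs(r1[c] - r2[c]) > tolerance]

def merged_score(r1, r2):
    """A simple consensus score: the average of the two reviewers' totals."""
    return (sum(r1.values()) + sum(r2.values())) / 2

print(divergent_categories(reviewer_1, reviewer_2))  # ['Presentation']
print(merged_score(reviewer_1, reviewer_2))          # 68.0
```

In this sketch the Presentation gap (14 vs. 20) is exactly the kind of divergence the text suggests discussing before the consensus score is reported.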


The scaly issue of scales. Related to the relative nature of the score is the issue of scorecard scales. Our current scorecards do a good job of specifying and scoring the attributes that contribute to each category score (e.g., good feedback messages). However, some attributes have less freedom to move because the individual score only ranges from 0 to 1 (i.e., the scale has no gradations). We are working on a scorecard tool that separates the scoring from the weighting, which makes creating the tool more complex but improves the rating process.

Experts are not users. Experts will often flag challenges that users don't experience as problems; conversely, users hit many problems that experts don't view as problems (or don't even see!). Remember, this is a complement to usability testing, not a replacement.

Feature agnostic. The best practice scorecard is not suited to counting features or functions. Even if a site has many features or tools, it may still score poorly on the best practice review. What is accounted for is whether the right function is provided at the right time.
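One possible shape for the "separate the scoring from the weighting" tool mentioned above: each attribute gets a graded rating (here 0 to 4) independent of its point weight, so an attribute is no longer stuck at all-or-nothing. The scale and weights are assumptions for illustration, not the tool HFI describes.

```python
# Sketch of separating a graded rating from the item's weight. The 0-4
# rating scale and the 8-point weight are illustrative assumptions.

def weighted_item_score(rating, max_rating, weight):
    """Scale a graded rating into the item's share of the category points."""
    return weight * (rating / max_rating)

# "Good feedback messages" worth 8 points, rated 3 out of 4 rather than
# being forced to an all-or-nothing 0 or 1:
score = weighted_item_score(rating=3, max_rating=4, weight=8)
print(score)  # 6.0
```

The weight stays a property of the scorecard design, while the rating stays a property of the review, which is exactly the separation the text argues improves the rating process.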

Defining the page-level UX

Shifting from site-wide to page-level scorecards A user experiences multiple pages when visiting a Web site or using a software application. Using a general scorecard that summarizes an average score may not capture the fact that there could be some good pages and some bad pages. This realization, combined with the fact that unique types of pages have a unique purpose, suggested we focus on page-level metrics too. The first step was to define the purpose of an individual page or collection of pages for a specific site (e.g., browse/search/compare, search and manage results, buy/transact, get support). The following example shows a hypothetical mapping of the secure portion of a transactional site (e.g., eCommerce or financial services).

Hypothetical page hierarchy (secure banking site)


This type of site meta-architecture allows us to identify specific page types within a context. It also lets us articulate a purpose for each page in light of the user experience we're trying to create. For example, upon logging into a secure banking site, the user lands on their account overview page (tier 1 secure landing page type) with the ability to a) view their current holdings in summary form, b) begin a frequent transaction (quick link), or c) see financial instruments they don't have but that might be valuable to them (cross-sell). Moving into their banking accounts (secure category page), they can move between bank accounts easily, perform bank transactions, AND drill down into the set of transactions for a specific bank account (transaction overview page).

From this page map we can define a set of measurable criteria that reflect best practice in UX for that type of page. These criteria are based on the purpose of the page and what users bring to that page in terms of goals, expectations, tasks, etc. Similar to the best practice components in the general scorecard, these page-type-specific components account for usability best practices, heuristics, and the latest research in the field. They are also logically grouped together for easy understanding.

In our introductory scenario, you were challenged to build a set of metrics for an eCommerce site. Now you have a framework to begin. First, identify page types:

1) Product management pages (users looking for products)
   - Category
   - Sub-category
   - Product overview
   - Product details
2) User and account management pages (users setting up and managing their account)
   - Login or register
   - Account or account-type overview
   - Account details
3) Purchasing pages (users buying products)
   - Shopping cart
   - Checkout
   - Customer service


Second, create categorized, measurable criteria that reflect both common and unique attributes for each type of page. For example:

1) Product management pages (users looking for products)
   - Common categories: Navigation, Content, Presentation, and Interaction (NCPI)
   - Unique page categories: product content and presentation, product interaction
2) Purchasing pages (users buying products)
   - Common categories: Navigation, Content, Presentation, and Interaction
   - Unique page categories: shopping cart content and presentation, payment and order confirmation

Sample portion of metrics for elements of online banking:

Integrating general and page-level best practice scores

Throughout the process, HFI continues to refine our scorecards, starting with the general scorecards and adding specificity relevant to the situation. The result is a balanced usability measurement:

- Common metrics for every page: 50 points
- Unique metrics for unique pages: 50 points

In the end, the measure of usability (or score) accounts for both the common metrics that apply across the entire site (the core set of usability design attributes) and the metrics unique to the various page types (reflecting best practice criteria for the specialized purpose of each page). Now we have a systematic, repeatable measurement and review framework. The business and design teams can send us individual pages, as well as groups of pages, and we can score them and send back the list of issues and recommendations. Again, this allows the organization to make decisions about which areas of the site need more or less focus, depending on their scores.
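The balanced 50/50 composition can be sketched as follows. Page types and raw scores are hypothetical; the point is only that each page grade sums a common half and a page-type-specific half, and that ranking the resulting grades shows where to focus first.

```python
# Sketch of the balanced page score described above: 50 points from the
# common (NCPI) metrics that apply to every page, 50 from the metrics
# unique to the page type. Page names and scores are hypothetical.

def page_score(common_score, unique_score):
    """Both halves are scored out of 50; the page grade is their sum out of 100."""
    assert 0 <= common_score <= 50 and 0 <= unique_score <= 50
    return common_score + unique_score

pages = {
    "product details": page_score(common_score=38, unique_score=31),  # 69
    "shopping cart":   page_score(common_score=41, unique_score=22),  # 63
    "checkout":        page_score(common_score=29, unique_score=18),  # 47
}

# Rank pages so the organization can see where to focus redesign effort first.
worst_first = sorted(pages, key=pages.get)
print(worst_first)  # ['checkout', 'shopping cart', 'product details']
```

Because every page lands on the same 100-point scale, pages of different types remain directly comparable, which is what makes the prioritization meaningful.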


By evaluating the page types over time, we can identify which areas of the site are trending towards improvement and which are trending towards usability collapse.

Competitive reviews at the page level

By scoring your page types against peer or competitor sites with the same page types, we can also produce comparative ratings. The page scorecards give you the ability not only to review designs, but also to benchmark against competitors. The example below shows an excerpt from a product page scorecard, comparing three different sites on the efficacy of their product pages. The scorecard on the left is the common metric review (worth 50 points); on the right are the unique product page metrics (worth 50 points). Standardizing on the 100-point scale allows for comparisons across general scores and page-level scores.

Tracking page-level metrics over time

The resulting power of the scoring system is that sites and pages can be measured and tracked over time, in design time or development time. The figure below highlights the potential tracking for a collection of non-secure pages over each quarter. This view allows for tracking of directional trends and, potentially, can be plotted alongside other significant metrics: usability testing, Web analytics, industry news, profit and loss, etc.

Page-level trend metrics


In this format we can track page-level metrics (e.g., the audience landing page) and sets of pages (e.g., portal vs. secure), and plot them against success criteria (e.g., hitting a score of 80). These page metrics can be correlated with Web analytics data, UT data, conversion rates, and customer satisfaction ratings, and plotted as standardized scores on a common scale (discussed below).

Defining scenario metrics

Quantifying successful user flow

The best practice metrics approach takes a design perspective. But there is another important perspective, or component, of a good review: the scenario review. While a best practice design review naturally focuses on design, the scenario review focuses on users and their task flows. When conducting scenario-based usability testing, we focus the tests on specific tasks and identify measurements for success by task:

- Success rates (e.g., 0 = unsuccessful, 1 = success with difficulty, 2 = success)
- Number of steps to accomplish the task
- Number of errors made in accomplishing the task
- Scoring the user comments
- Measuring the time to accomplish tasks
- Scoring the satisfaction level using self-report rating scales

The results are a score by task, across scenarios, and for sections of the site. The score helps the organization know where to focus its efforts, and the observations are turned into inferences about design problems and opportunities.

However, what usability testing lacks in ease of implementation, scenario-based expert reviews make possible. This is the process of usability experts evaluating an interface from the perspective of actual users, accomplishing the tasks that users accomplish with the site. The scenario metric approach is an excellent complement to usability testing, as it provides expert observation and inferences in a time-efficient manner. This is done through role-playing a specific persona performing a specific task. The interface is then scored using a scorecard to provide the measure of usability.
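One hypothetical way to roll the task measurements listed above into a single 0-to-100 task score is a weighted composite. The weighting below (50 points for success, 25 for step efficiency, 15 for errors, 10 for satisfaction) is an illustrative assumption, not a published HFI formula.

```python
# Hypothetical roll-up of per-task scenario measurements into one 0-100
# task score. All weights below are illustrative assumptions.

def task_score(success, steps, optimal_steps, errors, satisfaction_1_to_5):
    """success: 0 = failed, 1 = succeeded with difficulty, 2 = succeeded."""
    success_pts = 50 * (success / 2)                       # up to 50 points
    efficiency_pts = 25 * min(1.0, optimal_steps / steps)  # up to 25 points
    error_pts = max(0, 15 - 5 * errors)                    # up to 15 points
    satisfaction_pts = 10 * (satisfaction_1_to_5 - 1) / 4  # up to 10 points
    return success_pts + efficiency_pts + error_pts + satisfaction_pts

# A user who succeeded with difficulty: 8 steps against an optimal 6,
# 1 error, and a satisfaction rating of 3 out of 5:
print(task_score(success=1, steps=8, optimal_steps=6, errors=1,
                 satisfaction_1_to_5=3))  # 58.75
```

Whatever the exact weights, keeping them fixed across tasks and across review rounds is what makes the resulting task scores comparable over time.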


This scorecard adds measures of flow, such as:

- Number of steps
- Time (including the time to understand the organization of the content and to read it)
- Effort (to understand and use the information)
- Navigational signposting (clarity of: where am I / how did I get here / where can I go / how do I get there)

The scenario review can also include measures of success and failure, such as whether or not the user can accomplish a specific task. This makes sure the scenarios are truly about what the user is trying to do, not about what the application does.

Revisiting our eCommerce dilemma, you could combine these task-sequence scores and the page-type-specific scorecards to conduct competitive (comparative) analysis. This helps remove some of the subjectivity from the evaluation (e.g., comparing how your site lets customers read user reviews of a product vs. how competitors enable this activity). Because you're evaluating a common task with common measures, comparing the two sites on that task can be very informative, even if the actual procedure is different.

Some things must be kept in mind when conducting a scenario metric approach:

- Persona view: the team must have a common understanding of who the user is. Personas are a great way to create this understanding and head off arguments that "John Smith would never do that."
- Scenario-based: the team must agree on common, realistic scenarios. Use the same ones from the usability tests and ensure the evaluator stays on script.
- Keep the user's perspective: this requires solid personas and scenarios so that the evaluator focuses on how a real user interacts with the design.


- Scoring a sequence: evaluators must keep in mind that they are scoring a sequence, not just individual pages. The navigation may be clearly organized across the top of the page, but if the user cannot get there from here, that impacts the score.
- Use the best practice metrics: keep employing the metrics as the scoring system, to avoid ratings such as "that seemed easy" or "it was confusing at first, but now I know how it works."
- Keep the goal in mind: the scenario review is really a cognitive walkthrough, to be used when a usability test is not an available option. Treat the scores as what they are: measures of usability based on an expert evaluation. Evaluate over time to understand whether the score is trending up or down. But don't confuse these results with those that could be gained by putting real users in front of the site and having them accomplish tasks.

Exploring metrics for brand perception and persuasion

Evaluating deeper dimensions of the UX HFI is adding new measures of the user experience that go beyond traditional usability. For example, our new PETscan methodology evaluates site design to improve conversion on the basis of persuasion, emotion, and trust. As these additional measures are identified and developed, they complement and deepen our existing usability measurements and can be added to the review scorecards as appropriate. Below is an excerpt from a scenario review scorecard that includes measures of brand perception and persuasion.

To evaluate brand perception, reviewers must know the target brand attributes the design is intended to convey. These can be collected from the marketing/creative team prior to the review. Evaluating the persuasion components of a design focuses on the elements that trigger the user to take an action, as well as those that dissuade or stop the user from acting.


HFI has adapted our testing labs and test instruments to the PETscan, including eye tracking measures. As these advances become more routine, the metrics will show up in other formats like our expert reviews.

The business of UX metrics

Integrating metrics into a UX dashboard

In the end, UX metrics come in many forms, representing different UX touchpoints, channels, and data formats:

- Web analytics (real-time): focus on what and where things are happening
- Usability testing (cyclic): focus on what and why things are happening
- Surveys (periodic): measures of customer satisfaction; indirect measures of user experience
- Expert review best practice and scenario metrics (design time): an intermediate view of what, where, and why

The challenge and opportunity for a UX manager is to turn the outcomes of these evaluations into an integrated, strategic framework that businesses can use to make more informed design decisions: essentially, an integrated UX dashboard.

Elements of a UX metrics framework

Outlined above are potential customer touchpoints for a UX metrics reporting framework. Building a UX dashboard requires that measures of UX across these touchpoints be based on clear customer definitions, a standard scorecard for each metric type, and success criteria defined by the business.
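Getting heterogeneous metrics onto the dashboard's common scale can be as simple as a linear rescaling. A minimal sketch, where the source ranges (a 1-to-5 satisfaction survey, a 0-to-1 task success rate) are assumptions for illustration:

```python
# Minimal sketch of putting heterogeneous UX metrics onto one 0-100
# dashboard scale. The raw values and their source ranges are hypothetical.

def to_common_scale(value, lo, hi):
    """Linearly rescale a raw metric from [lo, hi] onto 0-100."""
    return 100 * (value - lo) / (hi - lo)

dashboard = {
    "expert review": to_common_scale(62, 0, 100),   # already on 0-100
    "satisfaction":  to_common_scale(4.1, 1, 5),    # 1-5 survey scale
    "task success":  to_common_scale(0.85, 0, 1),   # usability-test rate
}
print({name: round(score, 1) for name, score in dashboard.items()})
```

Once each metric type lives on the same scale, the trend lines described below can be plotted together and compared directly.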


From this, for example, you could create a map of the eCommerce site and place each metric outcome on it as an overlay. These overlays create contours: the deeper the valleys, the more room there is for UX improvement. If you notice steep drop-offs in UX on product pages, measured by both UT and our metrics-based expert reviews, that is a key area that needs design and usability focus.

Tracking UX in business time

An effective UX dashboard allows the business to compare UX metrics over time and across sites (or competitors). This allows the organization to see the comparisons and understand when UX measures are trending up and when they are trending down. A hypothetical drill-down into the eCommerce metrics portion of the dashboard might reveal the following (see below): across the last four quarters, customer satisfaction ratings are increasing, with a strong jump from Q1 to Q2 (associated with tactical improvements in the product pages and with getting product and bonus offers into the cart easily). The ER reviews and the UT are tracking fairly closely, and trending with the increase in satisfaction ratings, but scores on the secure landing page suggest we can improve this area for frequent online transactors, who find it difficult to get a detailed transaction history.

This graphic is crude, but it is only intended to make the point. The power of the UX dashboard is to provide an integrated decision-making tool for measuring and managing change effectively.
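The "valleys" idea from the overlay can be made concrete. In this hypothetical sketch (the page names, the scores, and the 70-point threshold are invented for illustration), a page counts as a valley only when every measure, ER and UT alike, falls below the business threshold, and the valleys are ranked deepest first:

```python
# Invented page-level scores (0-100) from expert review (ER) and
# usability testing (UT), both on the same scorecard scale.
page_scores = {
    "home":           {"ER": 85, "UT": 82},
    "product":        {"ER": 58, "UT": 54},
    "cart":           {"ER": 76, "UT": 79},
    "secure_landing": {"ER": 62, "UT": 60},
}

def deepest_valleys(scores, threshold=70):
    """Pages where every measure falls below the threshold, ranked by
    their lowest score: the 'valleys' that warrant design focus."""
    depth = {
        page: min(s.values())
        for page, s in scores.items()
        if all(v < threshold for v in s.values())
    }
    return sorted(depth, key=depth.get)

focus_areas = deepest_valleys(page_scores)
```

With the sample numbers this surfaces the product pages first and the secure landing page second, mirroring the drop-offs described in the drill-down. Requiring agreement across both measures is a deliberate (and debatable) design choice: it keeps a single noisy score from triggering a redesign.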


Summary

This discussion of UX metrics started with a narrow focus: the UX manager dilemma. To satisfy his or her need to provide the business with effective assessment options in design time, we reintroduced the expert review (ER) as a simple and effective means of filling an assessment gap.

Out of the ER process emerged the scorecard approach. The best practice scorecard provides a common language for business teams to use when discussing where the UX stands against the ideal 100-point score, against their competitors, and against themselves over time. Surprisingly, the grade score approach has had a significant impact as a reporting framework and communication vehicle between UX teams and the business. The scorecard:

1) is familiar (we are entrenched in thinking this way)
2) has immediate impact (e-Business groups manage processes and changes using these measures, knowing where to apply their efforts and show success)
3) can be tracked as a benchmark
4) lends itself easily to competitive and comparative reviews

From the business point of view, if you can't measure it, you can't manage it.

We then discussed how scorecarding adapts easily to a variety of site types and can be applied across sites, within sites, or at the page level, providing the flexibility to give feedback in design time. We also discussed scenario metrics, which complement the best practice aspect of the design by adding the ability to measure users and user flow through a collection of pages, complementing usability testing and reporting.

Out of this we introduced the idea of an integrated UX metrics framework: a standardized scoring system applied across channels and touchpoints that can ultimately be expressed as a UX dashboard. An integrated approach empowers UX teams and the business to make informed decisions about where the organization's evaluation and redesign efforts need to be focused.
This, in turn, has the potential to significantly change how businesses measure and manage the design of a successful user experience.


Appendix: Best Practice Review Scorecard (Web Site)


Scores are recorded per usability/design issue (points possible in parentheses), with an item-score subtotal per site topic and a 100-point total.

1. Home Page (item score: x of 10)
   - Identity and value proposition clear, positive (2)
   - Compelling content (2)
   - Primary navigation clear and well placed (2)
   - Strong visual affordances (2)
   - Speedy download (2)

2. Navigation (item score: x of 30)
   - Clear, comprehensible structure (6)
   - Works as an integrated whole (6)
   - Each page provides a sense of place (6)
   - Supports frequent, common tasks (6)
   - Effective Search design, placement, and results (6)

3. Content (item score: x of 30)
   - Categorization scheme works (6)
   - Categories are distinctive (6)
   - Categories are descriptive (2)
   - Categories are balanced (2)
   - Categories promote important content (2)
   - Editorial style: copy is scannable (6)
   - Editorial style: copy is in summary/detail format (6)

4. Presentation (item score: x of 15)
   - Layout: balance of consistency and variety (1)
   - Layout: visual complexity (1)
   - Layout: page hierarchy (1)
   - Layout: alignment (1)
   - Layout: grouping (1)
   - Color: for attention (2)
   - Color: for grouping (1)
   - Color: for highlighting (1)
   - Graphics: support brand (1)
   - Graphics: provide content (1)
   - Graphics: support layout (1)
   - Graphics: augment navigation (1)
   - Typography: clear type hierarchy (1)
   - Typography: legibility (1)

5. Interaction (item score: x of 15)
   - Strong visual affordances for selection (4)
   - Conventional control behavior (4)
   - Speed of interaction (4)
   - Browser support / technical nuances (1)
   - Prevent errors: format cues, controls, help (1)
   - Good feedback messages (1)

TOTAL (x of 100)
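The scorecard's arithmetic is simple enough to automate. The sketch below (the helper name and data layout are illustrative, not part of the scorecard itself) transcribes the point weights from the appendix and confirms that the subtotals (10, 30, 30, 15, 15) sum to the ideal 100-point score; a real review would pass in the points actually awarded per item instead of the maxima.

```python
# Maximum points per item, transcribed from the appendix scorecard,
# in the order the items appear under each site topic.
max_points = {
    "Home Page":    [2, 2, 2, 2, 2],                              # x of 10
    "Navigation":   [6, 6, 6, 6, 6],                              # x of 30
    "Content":      [6, 6, 2, 2, 2, 6, 6],                        # x of 30
    "Presentation": [1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1],   # x of 15
    "Interaction":  [4, 4, 4, 1, 1, 1],                           # x of 15
}

def score_site(awarded):
    """Sum awarded item scores per topic and overall (0-100)."""
    subtotals = {topic: sum(items) for topic, items in awarded.items()}
    return subtotals, sum(subtotals.values())

# Sanity check: a perfect review earns the full ideal score.
subtotals, total = score_site(max_points)
```

Keeping the weights in one structure like this also makes it easy to produce the benchmark and competitive comparisons discussed earlier, since every site is graded against the same 100-point frame.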

