

Beyond User-Centered Design and User Experience: Designing for User Performance
Larry L. Constantine, IDSA
Chief Scientist, Constantine & Lockwood, Ltd.
Director, Laboratory for Usage-centered Software Engineering, University of Madeira, Funchal, Portugal

Preprint of an article appearing in Cutter IT Journal, 17 (2), February 2004. © 2004, L. L. Constantine.

User-centered design is everywhere in the IT world. Just as it was once fashionable to tout "user friendly" interfaces, these days nearly everyone has jumped on the user-centered bandwagon or is running to catch up. The bandwagon is a roomy one, and user-centered design can be almost anything in practice so long as it adheres to the core philosophy of keeping users at the center of the software development process.

This focus on users as the central subject certainly seems to be a step forward from the technology-centered focus of bygone days, when users were all too often regarded as an annoyance to be ignored as much as possible. However, the frustrations of everyday experience with even the best of modern software products and Web sites tell us that something is still badly wrong with the picture. You need only reflect on how many times a day you click on the wrong object, or miss a step in a sequence, or forget where a function is buried, or curse the way some feature works to recognize how far modern user interfaces fall short of their potential.

Putting users at the center of the picture and using techniques that focus on them and their experience may look like reasonable, decent, and proper things to do, but despite the good intentions and noble efforts of designers, progress in usability remains unremarkable. Instead of breakthroughs in performance and leaps forward in what can be accomplished with computers, users are often left with a level of tolerable mediocrity marked by missing functions, frequent and irrelevant interruptions by modal messages that belabor the obvious, and multi-click detours to complete the most mundane of tasks. Breakthroughs in usability are possible, however. Consider these examples:

• In a debriefing at the end of the first day of live use of a newly designed medical records application, a nursing professional was moved almost to tears because, despite a new system and only the briefest training, she found she was already getting more time with her patients.

• After seeing a new classroom information system, a veteran teacher declared that for the first time he felt that designers had understood what classroom teachers really do and what they needed from a computer system.




• Using a radically redesigned integrated development environment, programmers of automation applications were able to complete a typical mix of tasks in half as many steps as previously.

To produce such radically outstanding results consistently may require that designers and developers radically rethink how they approach the process of visual and interaction design. It may require them to consider the unthinkable: that there could be something wrong with user-centered design and its preoccupation with users and user experience. Among usability professionals, user-centered design is so established that even to hint at problems in its premises or practices is regarded as sacrilege. As a co-inventor of usage-centered design (Constantine and Lockwood, 1999; Constantine and Lockwood, 2002a), I have more than once been the target of such accusations. After one session at a SIGCHI conference, an audience member suggested on an evaluation form that I never be allowed to speak at the conference again because I had questioned some of the received wisdom regarding the role of user studies and usability testing in user-centered design!

User-Centered Approaches

Fully forewarned that I may be treading the path to design apostasy, I want to explore what user-centered design gets right and where it goes wrong, and to suggest some ways to fix what is wrong with the process. Where user-centered design gets it right is the easy part. Involving end users and learning about their real needs is a good idea, no two ways about it. Spending time upfront to understand user requirements is an absolute prerequisite for sound design practice, irrespective of your approach or philosophy. Absent the goad of a user-centered approach, many projects would plunge too quickly into software design and construction. The result is the illusion of progress ("We're in the first week and we're already coding!") purchased at the price of premature commitment to particular solutions that invariably compromise utility and usability. ("Too late to fix that, it's already hard coded.")

To understand what might be wrong with user-centered design and what needs to be done about it, we first need to understand better just what user-centered design is. Beyond its requisite focus on users, user-centered design gets a bit fuzzy. Crisp textbook definitions aside, user-centered design in practice is a rather cluttered collection of loosely related techniques and approaches having in common little more than a shared focus on users, user input, and user involvement. While it may be different things in the hands of different practitioners, at its core, user-centered design is distinguished by a few common practices: user studies, user feedback, and user testing. Through various techniques and tools ranging widely in formality and sophistication, user-centered design seeks to understand users as thoroughly as practical. Initial user studies provide the essential input for iterative prototyping driven by user feedback, which is followed by user testing and, one hopes, further refinement of the product. That's it. Partisan adherents of particular variants of user-centered design may argue that this characterization omits some cherished technique, tool, or activity central to

their preferred approach, but this admittedly oversimplified view highlights what user-centered design is really about and where it goes wrong. Let's start with design.

Design in User-Centered Design

A skeptical analysis might conclude that none of its core practices (user studies, user feedback, and user testing) really has very much to do with design itself. Despite its name, there is not much design in user-centered design. Indeed, books on user-centered design often have much to say about users, user studies, human perception and cognition, human-machine interaction, user interface standards and guidelines, and usability testing but relatively little to say about design or design processes.

The dirty secret that few advocates and practitioners will admit to is that user-centered design in practice is largely trial-and-error design. Stripped of semi-scientific rhetoric and professional self-justification, most user-centered design involves little more than good guessing followed by repeated corrections and adjustments guided by checking with users. Once the mandatory user studies are out of the way, a potentially workable solution is quickly sketched as a paper prototype. Little has been written about how this initial idea is conceived, and few designers can articulate the mental legerdemain involved in its creation, but once you have something, you put it in front of one or more users to find out what is wrong with it. Basically, iterative refinement based on paper prototyping relies on users to tell the designers what is wrong and how to get it right.

Done well, it certainly helps users to feel involved and empowered. It can also be reassuring to designers, particularly if they are unsure about their own guesses or lack complete confidence in their design skills. After all, the end product is the result of real feedback from real users. ("Well, we did the right thing and got user feedback even if the result didn't work.") For similar reasons, repeated refinement through iterative prototyping is reassuring to clients and management. They get to see early evidence of apparent progress; it may not be real code, but at least they get screen mockups. Furthermore, designers can defend the ultimately delivered design as being based on real data: real information from real users.

So, what's wrong with iterative prototyping with user feedback? Here are the basic flaws and failings.

• It contributes to the illusion of progress.
• It encourages premature preoccupation with detail.
• It discourages courage.
• It relies excessively on users.

As already hinted at, the flurry of paper prototypes can contribute to the illusion of progress. Decisions are being made and design artifacts are being generated, but this does not mean that real progress is being made toward a first-rate solution. For one thing, at the stage when realistic or representative paper prototypes are typically produced (prototypes that are recognizable and make sense to users), it is usually too early in the game to be worrying about what the screens will look like and what functions they will present. The early involvement of users and the need for them to be able to interpret and react to the prototype forces premature investment in realism

and the details of the design. Early paper prototyping encourages hurried decisions about details at the level of individual screens and widgets without first considering what screens, in what arrangement, supporting what functions, would best serve user needs. Designers are frequently seduced away from the more abstract and often less exciting work of first mapping out a sound overall architecture for the user interface. When paper prototypes are constructed early (often ahead of, or even instead of, a full analysis of user requirements), detailed decisions can rely too heavily on unverified assumptions.

Unlike a properly behaved iterative computer algorithm, the cycle of feedback and change in iterative prototyping does not necessarily converge toward a better solution. Users change their minds, and different users will have different views and perspectives to offer. I have seen designs oscillate between solutions as one user rejects an approach and lobbies for its alternative, followed by another user whose feedback is the reverse. Any number of times I have seen designers reinvent discarded design ideas, blithely oblivious to the fact that they are going around in circles.

Because it depends so heavily on the comments and contributions of users, iterative prototyping is vulnerable to the peculiarities of particular users. If the same user or users are involved repeatedly, the design can end up excessively tailored to their biases. Alternatively, varying the users from round to round can lead the design to jump from variation to variation without significant improvement. If early prototypes are also shown to clients to demonstrate progress or garner support, the design becomes vulnerable to the whims and biases of vocal or powerful influences. In one case, an egregiously bad Web design nearly went into production because an early design concept had been favored and championed by the company president.

Perhaps most damning and least recognized among the limitations of user-centered design is the way it subtly discourages courage. Courage is one of the central tenets of extreme programming and agile development methods (Beck, 2001). Cooper (2003) advocates courageous programming that is decisive in response to user actions instead of saddling users with a plethora of irrelevant alerts and ineffective confirmation messages that reflect hesitant design. User-centered design, however, makes it too easy for designers to abdicate responsibility in deference to user preference, user opinion, and user bias. In truth, it is hard to stick with something you know works when users are screwing up their faces at it. What if you are wrong? What if you are not as good a designer as you thought you were? It takes real courage and conviction to stand up for an innovative design in the face of users who complain that it is not what they expected, or who want it to work just like some other software, or who object to certain sorts of features as a matter of course. It takes responsible judgment to know when to listen to users and when to ignore them. Designers and developers need to keep in mind that anything genuinely new and improved entails risk, and real progress invariably provokes resistance. Most users are inherently conservative when it comes to user interface design. Many would prefer a bad but familiar design to a better but unfamiliar one.
That so many systems have been so difficult to learn and so nearly impossible to master makes users all the more reluctant to take on the burden of learning to use anything new. Assigning too much weight to user input, as user-centered design is prone to do, helps sustain the tyranny

of legacy systems and legacy users. Without the overthrow of this tyranny, established but ineffective designs are perpetuated and progress in usability is impeded. I have seen designers back off from clever and effective solutions because of a single round of user feedback. All too often they settle for something not only less creative but also less effective.

This is not to imply that user feedback is a bad idea or that users should be ignored altogether. In the absence of feedback from the real world, designers are left in a vacuum where design fantasies can send them spinning off into outer space. User feedback helps designers avoid really stupid mistakes and really bad design, but it also tends to put the brakes on creativity. Taken together, putting users at the center and making user feedback pivotal in a process of early prototyping with paper designs intended to resemble or represent real interfaces tends to favor solutions that are acceptable but uninspired at best. User-centered design thus can hamper real progress.

Studying Users

User feedback on designs may be the pivotal process in user-centered design, but it is not the whole story. Although some designers on some projects may plunge into paper prototyping on day one, the tenets of user-centered design call for user studies to kick off the process. Here, too, making users themselves the primary focus is one of the problems with user-centered design in practice.

As with user design feedback, user studies are a good idea. You cannot do a good design without knowing something about your users. The question is just what you need to know. Field techniques are varied and may incorporate many things: structured and unstructured observations, formal and informal interviews, surveys and questionnaires, analysis of artifacts, ethnographic investigation, requirements gathering, and the like. Unfortunately, thoroughgoing user studies can generate an overwhelming volume of data of many kinds, all of which must be organized, digested, and understood. In the interest of deep understanding, more information is assumed to be better. However, in the midst of such bounty, it can be all too easy for key information to be missed or lost. Ultimately, only some of the findings and conclusions based on some portion of the gathered data will be relevant for design of an effective user interface. The dilemma is to figure out what to focus on and what to ignore.

User-centered approaches typically represent users in suitably user-centered ways: through detailed profiles of user types or as recognizable stereotypes. Personas (Cooper, 2003) are a popular form for representing the results from user studies. A persona comprises a realistic and believable constructed description of a single archetypal user. Typically, personas incorporate considerable detail (such as background, personality, personal history and experience, attitudes, or habits) that makes them seem more real and understandable but can be a gratuitous distraction in the context of creating a well-designed user interface.

From the standpoint of effective visual and interaction design, a few questions stand out as most important regarding users. What are they trying to accomplish, and what do they need from the system in order to accomplish it? What is or will be their relationship with the system? Other things about users (their personality, socioeconomic status, personal preferences, work environment, and so forth) may be interesting and of some relevance, but they are of distinctly lesser importance.

To the extent that user-centered approaches do attend to the real work of users and what it is really about, they tend to do so in decidedly user-centered ways that depict hypothetical but realistic users and make them central characters in a story that is easily understood by users. They employ realistic scenarios, customer stories, user narratives, and even movie-style storyboards that seem authentic and are accessible to users. Unfortunately, while these may appeal to and serve some of the interests of users, stories in their several forms have some disadvantages for designers. In the interest of verisimilitude, scenarios are often fleshed out with superfluous detail that enhances their appeal but can also obscure essentials. Scenarios often incorporate detours into exceptional or unusual cases in order to be more comprehensive in describing work, but this practice can also give undue attention to such uncommon or special cases.

What is wrong with user studies as part of user-centered design is not that they do not deliver but that they are prone to delivering too much and to casting it in the wrong form. Focusing on users broadly necessarily entails not focusing more narrowly or sharply. The most critically important information is too easily missed amidst mountains of data or obscured by fascinating but less-important findings. Moreover, it costs extra to gather a surplus of information and takes extra time to analyze it and extract the crucial bits. Indeed, the most commonly heard management objection to upfront field investigation is that it costs too much and delays the start of the real work of design and construction. Arguments that building the wrong system is even more costly and time-consuming usually fall on deaf ears. Consequently, user studies are often rushed or shortchanged for lack of time and resources. However useful they may be, ambitious and thorough ethnographic methods, such as contextual inquiry (Beyer and Holtzblatt, 1998), are apt to be regarded as luxuries beyond the budget of many projects.

Like user feedback, user studies may offer little to help designers distinguish user wants from user needs. Indeed, it is common in both contexts to ask users what they want or would like to see. If the interviewer or designer then asks whether users really need something, the answer tends to be yes, regardless of actual importance or demonstrable impact. In addressing actual users and actual situations, field techniques also tend to concentrate more on how things are done than on why. Why something is done the way it is, or why it is done at all, gets closer to the core of what users truly need. Good questions from a good interviewer or evaluator certainly help, but user studies in themselves do not distill genuine needs from the mish-mash of user wishes and fantasies and the mess of current practice.

User Testing

Raising questions about the practice and value of user feedback and user studies may border on sacrilege, but even to look skeptically askance at user testing is tantamount to professional heresy. User testing is the absolute centerpiece of usability efforts in many organizations. The good part, once again, is the easy part. User testing is useful. In many cases it can uncover subtle but serious usability problems that are apt to be missed by designers as well as expert evaluators.
Some formal usability testing with real or representative users is always a good idea. However, it is not a good idea to put too many eggs in the user testing basket and depend on it as the primary means for improving usability.

The problems with usability testing are the problems with testing of any form. Testing comes too late. By the time software is available for live testing with users, it is typically too late to change many things. Relative to some important alternatives, like usability inspections, usability testing is expensive for the information it yields. Then there is the coverage issue. It is impossible to do enough tests with enough scenarios to thoroughly exercise all the interaction paths in any interestingly complex system.

Perhaps the biggest problem with usability testing is that it reinforces some of the problems with user studies and user-driven design. Particularly as currently practiced under the influence of discount usability (Nielsen, 1993), with only a small number of user subjects, user testing can make the experiences of particular users, who may or may not be representative, unduly important in shaping the outcome. While it can be quite effective at uncovering localized design flaws and suggesting directions for correction, it is less effective at exposing problems in the fundamental organization of the user interface, even when this is actually the root cause of user difficulties.

User testing as typically carried out can also tend to favor pedestrian solutions over innovation. Nearly all user testing is based on a single encounter with the system by any one user. Since user subjects are seldom trained on the system and rarely given opportunities to practice or build skills, standard solutions usually fare better, even if a non-standard design might ultimately lead to greater productivity or fewer errors. Testers also tend to interpret user hesitation or uncertainty as indicative of usability problems. In the widely used think-aloud test protocol, if a user says "What's that?" or "I'm not sure," it can count against a design feature, even if the user performs well or would have no difficulty on a second attempt.

Because usability tests are focused more on user experience than user performance, they seldom if ever look at a second or third encounter with a given part of the user interface. This shortcoming is particularly significant in light of what has been learned about making user interfaces self-teaching through so-called instructive interaction (Constantine and Lockwood, 2002b). Observation of users interacting with novel user interfaces has shown that certain design practices can enable users to learn and remember how to use a completely unconventional user interface after only a single trial. However, designs based on instructive interaction may not fare well in conventional usability testing even in cases where they work extremely well in actual use.

From User Experience to User Performance

It would be unfair to point out flaws and limitations without suggesting some potential fixes. Here is a series of well-defined practices that are readily incorporated into what most practitioners regard as user-centered design and that can substantially improve the process and its results. Some may already be in the repertoire of forward-thinking user-centered designers. All are relatively easy to learn.


• Exploratory modeling to make user studies more efficient and tightly focused.
• Comprehensive task modeling to capture and represent essential user needs.
• Deriving initial designs from models rather than by magic.
• Abstract prototyping to defer introduction of realistic details.
• Usability inspections to multiply the effectiveness of user testing.
• Elevating user performance over user experience.

Model-driven exploration. Exploratory modeling (Windl, 2002a) is a technique for speeding up and simplifying the process of user study and requirements definition. Where common ethnographic approaches to user study begin with observation and data gathering followed by building models based on the data, exploratory modeling reverses the order, beginning with preliminary modeling on which to base subsequent user study. Provisional models of users and user tasks are first constructed in order to help identify holes in understanding, formulate questions, and highlight areas of ambiguity, uncertainty, or outright confusion. These admittedly incomplete models are used to clarify priorities and guide the investigation and observation by sharpening the focus onto key areas. Provisional models are then refined based on findings from a more rapid and pointed data-gathering process. As originally proposed and most commonly practiced, exploratory modeling constructs condensed and simplified inventories of essential needs: the roles that users will play in relation to the planned system and the tasks that must be supported in order for users to successfully perform those roles. Instead of a protracted investigation producing a potentially overwhelming surplus of data, model-driven exploration quickly and efficiently delivers answers to the most important questions. In principle, model-driven exploration entails a risk that modelers will not realize where information is missing or will not recognize what is unknown or misunderstood, but in practice this does not seem to be a problem. In any event, it is almost invariably cheaper and easier to go back and fill in a few blanks missed in a tightly focused initial inquiry than to gather great quantities of superfluous data that must nevertheless be processed and understood before it can be discarded.

Complete task modeling. Of all the things that visual and interaction designers and other usability professionals need to understand about users, none is more important than their intentions. What are the tasks that users intend to accomplish with the product? Guided by a task model that maps out all the user tasks and how they are interconnected, designers are in a better position to put features that support user tasks in the most appropriate places. Task cases and essential use cases (Constantine, 1995; Constantine and Lockwood, 2001) offer a more precise and less wordy alternative to user-centered models like scenarios and customer stories. These models have a structure that focuses on the intentions of users and the responsibilities of the system in support of those needs. Their fine-grained and succinct format helps promote compact but comprehensive modeling of task essentials. They have a long record of success in leading to world-class designs that enhance user performance (Windl, 2002b).
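To make the shape of these models concrete, here is a minimal sketch in Python. The data structures are illustrative assumptions for this preprint, not code from any published toolkit, and the sample task is the widely used "getting cash" essential use case; the point is only that each step records an abstract user intention or system responsibility and nothing about screens or widgets.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    actor: str   # "user" or "system"
    text: str    # an intention or a responsibility, stated abstractly

@dataclass
class TaskCase:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class UserRole:
    name: str
    tasks: list[TaskCase] = field(default_factory=list)

# An essential use case in the classic intention/responsibility style:
# no widgets, no screens, no technology assumptions.
getting_cash = TaskCase("getting cash", steps=[
    Step("user",   "identify self"),
    Step("system", "verify identity"),
    Step("system", "offer choices"),
    Step("user",   "choose"),
    Step("system", "dispense cash"),
    Step("user",   "take cash"),
])

# The role inventory from exploratory modeling simply collects the
# task cases each role must be able to perform.
account_holder = UserRole("account holder", tasks=[getting_cash])
```

Because the model is this spare, missing tasks and ambiguous responsibilities stand out early, which is precisely what exploratory modeling exploits.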

Model-driven abstract prototyping. As most widely practiced, good user interface design is often a mix of widget wizardry and software sorcery tempered by the fire of trial-and-error refinement. The most talented and skilled designers can almost always pull a rabbit out of the hat and produce a good approximation with the first paper prototype, but lesser wizards and apprentices are often left in the dark as to how to get from user data to user interface design. Tools and techniques that help designers derive initial designs directly from well-formed task models not only take the magic out of the process but lead toward better designs. Abstract prototypes (Constantine, 1998) based on canonical abstract components (Constantine, 2003) are one proven approach that does precisely that. Instead of a rough sketch intended to resemble or suggest an actual user interface, abstract prototypes represent the tools and materials to be presented by a user interface, stripped of details of appearance and behavior. An abstract view selector, for instance, represents a needed capability that might ultimately be realized as a menu, a drop-down list, or even a dialog box. The important thing initially is that a view selector is needed in particular places in the user interface to enable users to perform certain well-defined tasks. Abstract prototypes make it easier for designers to get the important aspects of the content and organization of the user interface right while deferring details about what the bits and pieces will look like and exactly how they will operate.

Canonical abstract components offer designers a toolkit of standard components from which to construct their abstract prototypes. Standardization facilitates comparisons and the recognition of common problems and solutions that can contribute to the compendium of design patterns (Constantine, 2003). Abstract prototypes are easily derived directly from good task models: the steps within well-constructed task cases clearly imply the need for particular abstract tools or materials, and abstract components in turn suggest particular real components and design solutions. One large leap of magical transformation that can require advanced training in widget wizardry is thus replaced by two small steps of direct translation that are much easier to learn and to master. A brief sketch of this two-step translation appears below, after the discussion of inspections.

Usability inspections. For identifying software defects, code inspections and structured walkthroughs have repeatedly proved to be more efficient and cost-effective than testing. Based on similar principles and premises, collaborative usability inspection (Constantine and Lockwood, 1999; Lockwood and Constantine, 2003) is a systematic technique developed and refined specifically for identifying usability defects. Through a highly structured procedure with explicit rules and well-defined roles, collaborative usability inspections identify more usability problems more quickly and at an earlier stage than usability testing. Collaborative inspections have a number of advantages. Experienced inspection teams can identify upwards of 100 usability defects per hour. Inspections require only one or two users or user surrogates to be effective. Inspections are good at identifying numerous small problems that in combination can substantially degrade user performance. Usability inspections do not replace user testing, but, by leading to a cleaner and more refined product through inspection of designs and prototypes, they can markedly reduce the amount of testing needed.
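Returning to abstract prototyping, here is a minimal sketch in Python of the two-step translation. The mapping tables are illustrative assumptions rather than the published canonical component set, but they show how task-case steps suggest abstract components, which in turn admit several concrete realizations.

```python
# Step 1: each intention or responsibility in a task case implies an
# abstract tool or material, named without committing to any widget.
step_to_abstract = {
    "present the frames captured so far": "collection",
    "switch between captured frames": "view selector",
    "save a snapshot of the current board": "action",
    "delete a frame saved in error": "action",
}

# Step 2: each abstract component admits several concrete realizations;
# choosing among them is deferred until the content and organization
# of the interface have been settled.
abstract_to_concrete = {
    "collection": ["thumbnail grid", "list view"],
    "view selector": ["menu", "drop-down list", "tab row", "dialog box"],
    "action": ["toolbar button", "context-menu item"],
}

for step, component in step_to_abstract.items():
    options = ", ".join(abstract_to_concrete[component])
    print(f"{step}\n  -> abstract: {component}\n  -> candidates: {options}")
```

Note that the second mapping is one-to-many: the abstract prototype fixes what must be present and where, while the choice among candidate widgets remains a separate, later decision.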

Designing for use. So far, the suggestions for process improvement represent additional or alternative techniques, but simple changes in practice are not the whole story. All of the proposals are grounded in a common theme that is more radical than mere technique. Ultimately, designers need to begin designing for use rather than designing for users. As long as users are center stage, designers will be distracted from the real action where dramatic improvement is possible. Although it may be tempting to try, one cannot have it both ways. If your attention is on users, then it is not on use. If users are at the center, then other matters are made peripheral. If some things are in focus, others are blurred.

The choice of focus matters. It is well established that designers and developers tend to optimize for whatever factor is the focus of their attention at the expense of whatever is not. For usability and usefulness, uses are more important than users, and supporting effective user performance is more important than promoting good user experience. If designers concentrate their efforts on creating a good user experience, they all too easily fail to support good user performance, which is arguably the most important contributor to the experience. No matter how amusing the descriptions or pleasant the graphics, if users fail to find the product they seek on a Web site, it's a bad experience. If an elegant design prevents me from getting my work done on time, it's a bad experience. Regardless of how many features a product might offer, if most of them are irrelevant and important ones are missing or impossible to find, the user experience is negative.

User performance, not user experience, is also the design outcome that translates most directly into real value. It means increased sales, more work completed, fewer errors, faster learning, and better retention of skills. The seemingly nobler but decidedly fuzzier goal of good experience may or may not have anything to do with the real value of a system. Good experience in e-commerce shopping, for instance, means nothing if users repeatedly fail to enter their credit card details correctly. It is precisely such mundane tasks that tend to get short shrift when attention is on the broad landscape of users and the global goal of good user experience. Many user experience designers seem to forget that the single most important factor in user satisfaction is goal satisfaction: did users accomplish what they intended or needed to accomplish?

Dramatic improvements in support of user performance are made possible by concentrating on essentials and downplaying non-essentials, by conducting efficient and tightly targeted inquiry, by building compact models that highlight the most important elements and forego flowery verbiage, and by following systematic processes that translate models into results. A decade of experience on diverse projects demonstrates that these practices of conceptual economy lead to better systems that actually cost less to design and develop.

Perhaps the most persuasive reason for constructing a comprehensive task model that represents all the real needs of the users is that it not only can save you from creating features that are not needed or will just get in the way, but can also keep you from missing vital functions. For example, in my work as a designer I often use a portable electronic whiteboard attachment that captures in digital form the work my team does on any whiteboard. The system has a reasonably straightforward user interface marred by one or two really stupid design choices. The button to save a snapshot frame of the current whiteboard is marked by an obscure icon intended to represent the completely non-intuitive idea of a tag. That button is adjacent to the button to start afresh with a blank board. This minor defect becomes a fundamental design flaw because an absolutely critical function is missing from the software. If the user ever inadvertently presses new instead of tag, there is no way to recover, because the software allows you to add frames to a file but not to delete them. A client team recently wasted more than an hour trying to find a workaround that would enable them to resume work on the design on one board after having moved the capture bar to another board to work on an alternative drawing. They ended up having to manually trace over a complex drawing in order to re-enter it in a fresh file. It is inconceivable that a workable task model based on essential use cases would have missed this absolutely essential function. Ignorant of the design flaw, I have repeatedly recommended this digital whiteboard; I will not recommend it again.

Remarks

There, it is done. The heretical theses have been posted in hopes of stirring a reformation. User-centered design is a good idea in need of improvement. The needed improvement is found in practices that put uses rather than users at the center of design and in changing the prime objective from enhancing user experience to enhancing user performance. For the record, this is the basis of usage-centered design, the approach responsible for the breakthrough examples given earlier.

References
Beyer, H., and Holtzblatt, K. (1998) Contextual Design: Defining Customer-Centered Systems. San Francisco: Morgan Kaufmann.

Constantine, L. L. (1995) Essential Modeling: Use Cases for User Interfaces. ACM interactions, 2 (2), March/April.

Constantine, L. L. (1998) Rapid Abstract Prototyping. Software Development, 6 (11), November. Reprinted in S. Ambler and L. Constantine, eds., The Unified Process Elaboration Phase: Best Practices in Implementing the UP. Lawrence, KS: CMP Books, 2000.

Constantine, L. L. (2003) Canonical Abstract Prototypes for Abstract Visual and Interaction Design. In J. Jorge, N. Jardim Nunes, and J. Falcao e Cunha, eds., Interactive Systems: Design, Specification, and Verification. Proceedings, 10th International Workshop, DSV-IS 2003, Funchal, Madeira Island, Portugal, 11-13 June 2003. Lecture Notes in Computer Science, Vol. 2844. Springer-Verlag.

Constantine, L. L., and Lockwood, L. A. D. (1999) Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design. Boston: Addison-Wesley.

Constantine, L. L., and Lockwood, L. A. D. (2001) Structure and Style in Use Cases for User Interfaces. In M. van Harmelan, ed., Object Modeling and User Interface Design. Boston: Addison-Wesley.

Constantine, L. L., and Lockwood, L. A. D. (2002a) Usage-Centered Engineering for Web Applications. IEEE Software, 19 (2), March/April.

Constantine, L. L., and Lockwood, L. A. D. (2002b) Instructive Interaction. User Experience, 1 (3), Winter.

Cooper, A., and Reimann, R. M. (2003) About Face 2.0: The Essentials of Interaction Design. New York: Wiley.

Lockwood, L. A. D., and Constantine, L. L. (2003) Usability by Inspection: Collaborative Techniques for Software and Web Applications. In L. Constantine, ed., Performance by Design: Proceedings of forUSE 2003, Second International Conference on Usage-Centered Design. Rowley, MA: Ampersand Press.

Nielsen, J. (1993) Usability Engineering. Boston: Academic Press.

Windl, H. (2002a) Usage-Centered Exploration: Speeding the Initial Design Process. In L. Constantine, ed., forUSE 2002: Proceedings of the First International Conference on Usage-Centered, Task-Centered, and Performance-Centered Design. Rowley, MA: Ampersand Press.

Windl, H. (2002b) Designing a Winner: Creating STEP 7 Lite with Usage-Centered Design. In L. Constantine, ed., forUSE 2002: Proceedings of the First International Conference on Usage-Centered, Task-Centered, and Performance-Centered Design. Rowley, MA: Ampersand Press.
