Joseph L. Gabbard, M.S. & Deborah Hix, Ph.D.
Virginia Tech Systems Research Center
Department of Computer Science
Blacksburg, Virginia 24061
{gabbard, hix}@vt.edu
540/231-3559, 540/231-6199
In Collaboration With
Steve Kobiela (1), Brian Stearns (2), & Laurie Waisel (3), Ph.D.
(1) Boeing/Autometric, Hampton VA
(2) Boeing/Autometric, Springfield VA
(3) Concurrent Technologies Corporation, Johnstown PA
September 2001
DOMINANT BATTLESPACE COMMAND (DBC) USABILITY ENGINEERING
FY 2001 FINAL REPORT

TABLE OF CONTENTS
Executive Summary
1. Review of FY 2001 Usability Engineering Activities
2. Formative Evaluation, August 2001
3. Differences between Formative Evaluation and Wargames/Exercises
4. Recommendations for FY 2002 Usability Engineering Activities
EXECUTIVE SUMMARY
This DBC Usability Engineering FY 2001 Final Report summarizes usability engineering activities performed by Virginia Tech for DBC from Fall 2000 through September 2001. Section 1 gives an overview of the main usability evaluations (two heuristic evaluations and one formative evaluation) we conducted. The heuristic evaluations were reported in detail earlier in the year. Section 2 gives a detailed report of the formative evaluation conducted in August 2001. Because this activity has not been reported previously, Section 2 is the longest section of this document.

An interesting observation from our evaluations, especially the formative evaluation, was that usability data collected at wargames/exercises (such as FBE-India) and usability data collected during heuristic and formative evaluations are complementary, with little replication. Reasons for this are discussed in Section 3.

This report concludes (Section 4) by presenting three main recommendations for DBC usability engineering activities, to be led by Virginia Tech, for FY 2002. These recommendations are based on our year's work with DBC, and include:
- hiring a graphics designer for the DBC development team,
- performing focused, quick evaluations of visualization techniques (e.g., for reducing clutter) for DBC, and
- performing focused, quick evaluations of user interaction techniques and devices (e.g., Spaceball, auxiliary touch screen) for DBC.
1. REVIEW OF FY 2001 USABILITY ENGINEERING ACTIVITIES

Due Date     Status
14 Feb 01    Delivered on schedule
05 Mar 01    Delivered on 29 March 2001
19 Mar 01    Delivered on 15 March 2001
31 May 01    Delivered on 29 June 2001
30 Jun 01    Delivered on 23 May 2001
15 Aug 01    Completed on 27 Aug 2001
15 Sep 01    Completed on 29 Aug 2001
15 Oct 01    Delivered by 30 Sept 2001 (this is the current document)
1.1. Summary: First Heuristic Evaluation
Conducted in February 2001 and in March 2001 at Virginia Tech. This evaluation was conducted by personnel from Virginia Tech (J. Gabbard and D. Hix) and CTC (L. Waisel). The purpose of this evaluation was to identify DBC console usability issues and to provide a prioritized list of issues and recommendations for design changes to the console's user interface. It focused only on DBC console user interaction, and did not evaluate any workbench components. This first heuristic evaluation was conducted in two phases:
Phase 1: In February 2001, this phase was performed by Gabbard and Hix, based on Virginia Tech's expertise with VE user interfaces and user interaction, the usability engineering evaluation process, and a working knowledge of human-computer interaction principles in general. This evaluation did not use any specific user scenarios. Results of this evaluation were presented in a First Heuristic Evaluation Interim Report delivered by Virginia Tech on 14 February 2001.

Phase 2: In March 2001, this phase was performed by Gabbard, Hix, and Waisel, based on scenarios and scripts provided by CTC and Autometric personnel to Virginia Tech personnel. Results of this evaluation were presented in a First Heuristic Evaluation Final Report delivered by Virginia Tech on 29 March 2001. This final report contained our combined findings from both phases of DBC's first heuristic evaluation.

In summary, the evaluation showed that many areas of the DBC console user interface are well-designed. Further, in several years of working with military systems in general and Naval systems in particular, we have not seen any other system with DBC's breadth of functionality, and especially not any running on a PC. Inclusion of the workbench in DBC lends even more power and uniqueness to DBC. Pervasive use of the Explorer browser metaphor on the console should make many aspects of DBC seem familiar to console operators. Redesign suggestions from the Interim Report (Phase 1) had already improved various aspects of the console user interface by the time we evaluated with scenarios during Phase 2. However, in both Phases 1 and 2, we found some notable deviations from the Explorer metaphor, various other inconsistencies (some cosmetic, some more structural), and a number of task flow, file handling, and object visualization issues. We grouped identified usability issues into nine usability categories:
1. Conceptual Design
2. Layout and Structure
3. Language and Labels
4. Messages and Feedback
5. Visual Elements
6. Navigation and Work Flow
7. Functionality
8. Error Prevention
9. Scalability
Within each of these nine usability categories, we listed all observed usability issues, organized according to the specific design guidelines related to each issue. All issues were prioritized and discussed in detail in the report. Finally, a summary and list of next steps for both Virginia Tech and CTC/Autometric were given. Our major recommendation for next steps was to proceed with a second heuristic evaluation that would encompass the workbench components of DBC.
1.2. Summary: Second Heuristic Evaluation
Conducted in June 2001 at Virginia Tech. The purpose of this evaluation was to focus on workbench usability only; it did not include any further evaluation of the DBC console.

In May 2001, as a result of delays in getting DBC workbench software to Virginia Tech, all involved parties at CTC, Autometric, and Virginia Tech agreed to a revised plan to get results of this heuristic evaluation to the development team as quickly as possible. Our goal was to give developers several additional weeks to make revisions to the DBC user interface, based on these results, before we conducted formative evaluations late in the summer. Specifically, all team members agreed to minimize the effort of writing an in-depth report, which takes quite a bit of time, so we delivered an abbreviated and quickly written document in which we reported all observed usability issues but did not include extensive detail on background, method, summaries, etc. This report was delivered on 29 June 2001, just two days after the second heuristic evaluation was completed.

This evaluation was conducted by Gabbard, Hix, and Waisel by reviewing, using, and commenting on DBC in a systematic manner. Because DBC workbench components were in an early stage of development, no scripts (such as were used in the Phase 2 heuristic evaluation of the DBC console) were used. We first examined, in detail, workbench interaction with the Spaceball alone, then interaction with the Stylus alone, and finally combined (two-handed) Spaceball/Stylus interaction. Our approach was to look initially at what user tasks the DBC workbench currently supports, and then to brainstorm about currently missing user tasks we think should be added. Since many usability issues that we observed covered several usability categories (e.g., look, location, and behavior, which are often closely connected in a design), issues were reported by device (Spaceball and Stylus) rather than by usability category, as we did in prior reports.

In summary, we found the Spaceball to be a highly flexible multi-degree-of-freedom device that (once users know the interaction metaphor on which its design for workbench navigation is based) generally supported effective and efficient interaction at the workbench. However, the Spaceball appeared to have some hardware/driver problems, in that the translation axis (zooming) appeared to change from one run to the next. Moreover, slight problems with the hardware/driver made translation difficult, since the earth often moved one direction then another, and at other times appeared to jump back and forth in place. Further, the brief user instructions did not explicitly indicate whether the egocentric navigation metaphor (i.e., the user's head moves in relation to the world view) was different for the horizontal and vertical settings (obviously, the exocentric metaphor is slightly different in these two modes).

The Stylus is used in DBC as a pointing device for selecting and manipulating objects. However, we found the Stylus currently very uncomfortable to use, with no visual cues to indicate where it was pointing. Evaluators had a great deal of difficulty pointing at and selecting an intended object. Use of the Stylus at this point was so inadequate that we felt it simply could not be used in a formative evaluation. Our major recommendation for next steps was to proceed with a formative evaluation of DBC.
1.3. Summary: Formative Evaluation
Conducted in August 2001 at Autometric, Springfield, Virginia. The final major deliverable for FY 2001 was a formative evaluation to study usability of the DBC user interface with representative users.

Prior to conducting the formative evaluation, we prepared an extensive formative evaluation plan detailing protocol, types and recruiting of users, and data collection and analysis issues, as well as evaluation constraints and expected reporting procedures. It also included activities and decisions to be accomplished prior to the evaluations. Many of these decisions required team consensus, since they directly affected the overall nature and scope of the formative evaluation. This plan was delivered by Virginia Tech on 23 May 2001.

We made the decision to focus on the Watch Officer role at the workbench, and to focus on visualizations of DBC for this evaluation (see further discussion of this decision in Section 2.1). Next we developed detailed scenarios for several different types of user tasks. Kobiela did the majority of the work on creating scenarios, scripts, and data sets to support the formative evaluation. After several different discussions, we finally decided that the formative evaluation sessions would be held at Autometric's facility in Springfield, Virginia (rather than at Virginia Tech), in late August 2001.

The very enthusiastic evaluation team consisted of Kobiela (subject matter expert, scenario writer, "steno boy" :>, and evaluation session leader) and Stearns (console operator and general counsel) from Autometric; Waisel (observer/data collection) from CTC; and Gabbard and Hix (organizers/co-ordinators for sessions, plus observer/data collection) from Virginia Tech. Three Watch Officer users were recruited (M. Sullivan, M. Rayome, and L. Waisel helped with recruiting) and participated in individual evaluation sessions. Two of these users proved to be more effective than we could have imagined, giving us volumes of qualitative data about usability of the DBC workbench visualizations.

In summary, we found that DBC visualizations give a great deal of information, but Watch Officers need a way to control the clutter. They also want some control themselves over the workbench (e.g., navigation of the map, drill down for additional details, user-defined functions and/or views), without having to give orders constantly to a console operator. We further discovered a strong need for role-based views of visualizations and dynamic (time-changing) information, as well as "what if" capability. Because all these usability issues are detailed in Section 2, we do not discuss them further here. From this evaluation, major recommendations for next steps include acquiring a skilled graphics designer for our team, and designing, evaluating, comparing, and choosing among different visualization techniques (e.g., for dynamic information) and interaction techniques/devices (e.g., for navigation). Another major finding was the substantial difference between the kinds of data collected at exercises/wargames such as FBE-India and the data collected in a formative evaluation such as this (discussed in Section 3).
usability engineering activities, and readily agreed to make ROTC instructors and, if ever appropriate, students available to us as subjects. Kobiela reviewed credentials of some of the instructors Snyder mentioned, and confirmed that they are, in fact, appropriate to serve as Watch Officer subjects. M. Rayome confirmed that M. Sullivan was tasked with determining availability of subjects for our proposed August dates for the formative evaluation. Thus, we found that both of our subject-recruiting options were viable. Our final decision was to conduct the evaluation sessions at Autometric's Springfield facility in late August, with Autometric responsible for subject recruiting.
visualizations, acquire situational awareness, and make decisions. This evaluation did not address console/operations. Further, we had been asked to teach CTC and Autometric personnel the basics of conducting formative evaluations.
Day 1, Monday, 27 August 2001: Entire evaluation team made sure everything was in order (materials, hardware, software, etc.) and finalized all aspects of the study; observers were instructed on data collection and deportment during sessions, etc.
Day 2, Tuesday, 28 August 2001: Team ran two sessions (one morning, one afternoon).
Day 3, Wednesday, 29 August 2001: In the morning, the team ran one more session; in the afternoon, they began data analysis.

Location: Autometric Springfield facilities.

The very enthusiastic usability evaluation team and their roles consisted of:
- Steve Kobiela (subject matter expert and evaluation session leader)
- Brian Stearns (console operator)
- Deborah Hix (session co-ordinator and data collection)
- Joe Gabbard (session co-ordinator and data collection)
- Laurie Waisel (data collection)
Scott Burk attended portions of two sessions. The user from the first session observed session three.
Users
Types of users: Users for this DBC formative evaluation were recruited to be representative of the types of military personnel who would be expected to be typical users of DBC. As explained in Section 2.1, we chose the role of Watch Officer.

Number of evaluation sessions: From our (and others') experience with formative evaluations, it is generally useful to plan for three to four users per iteration. Employing more users typically does not produce substantially increased returns in terms of usability findings. We initially expected to have four user sessions, but recruiting proved more difficult than we expected. We conducted three user sessions, each lasting about three hours.

Comment on number of users: Note that there is no claim that this is a statistically valid study. Formative usability evaluations such as this are performed with a small number of carefully chosen (not randomly chosen) users, representing expected users of the application being evaluated. The point is not to perform statistical tests, but rather to extract as much information as possible from a few representative users. Typically, three to five such users discover 80% of the major usability issues of an application.
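The 80% rule of thumb is consistent with the problem-discovery model of Nielsen and Landauer, in which each user independently exposes a given problem with average probability L, so the expected proportion of problems found by n users is

    P(n) = 1 - (1 - L)^n

With the commonly reported average of L = 0.31, P(3) = 1 - 0.69^3 = 0.67 and P(5) = 1 - 0.69^5 = 0.84, in line with the figure quoted above.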
User Tasks
Creating user tasks: We created a number of representative user tasks, each written to determine some specific piece of usability information about a particular important area of the DBC workbench user interface. Representative user tasks are those most likely to be performed most frequently by typical users of DBC. S. Kobiela, as subject matter expert, led this effort. Because this first evaluation could not be exhaustive in its coverage of DBC visualizations and capabilities, we decided to focus on situational awareness (not planning, etc.), as this is a major aspect of a Watch Officer's responsibility. We used the following general approach for developing user scenarios/tasks:
1. Introduction to DBC
2. Easy tasks (e.g., "show me," simple manipulations)
3. Analytical, more complex tasks (e.g., interpret visualizations, answer questions about the situation; ISR usage and weapons/target pairings)

Characteristics of user tasks: Because a team of users interacts with DBC's two-screen console and a workbench, DBC user tasks are inherently more complex than those of a typical single-screen, single-user GUI. The tasks in this evaluation
needed to represent both the most likely and the most mission-critical tasks that a DBC team would be expected to perform. Further, because we decided to focus on visualizations on the workbench, users had no physical interaction with DBC but, rather, issued orders to the console operator. Creating and refining these more complex, but noninteractive, DBC user tasks involved more iteration of task-writing than might normally be expected for evaluation of a simpler GUI. Further, scenarios, detailed scripts, and data sets had to be created to support the tasks. Again, Kobiela took the lead on this activity.
Method and Protocol for Sessions
Final preparation for sessions: Although much work was done in advance to make this evaluation run smoothly, a number of activities could not be finalized until just before the sessions. As mentioned earlier, on Monday, 27 August 2001, the entire evaluation team spent a very long day in Springfield finalizing scenario documents, conducting one very effective pilot test, revising protocols based on the pilot, and finally producing all needed protocol documents and handling other logistics for the evaluation sessions.

Getting each session started: When each participant arrived, they were greeted and shown to the location of the DBC set-up. Each was asked to sign a non-disclosure form and an informed consent form (telling them their rights as a participant in the study), and to complete a short demographic questionnaire. Evaluation session activities were then explained to them by Hix.

Task performance: Participants were videotaped during their sessions. Participants were asked to think aloud while they worked, to indicate what they liked/did not like, what they were expecting, etc. Each participant was given some training and then was given, one at a time, each user task.

Session roles: As already mentioned, Kobiela led each session, following prepared scripts of tasks. As the operator, B. Stearns drove DBC and responded to requests from the Watch Officer user during sessions. This precluded confounding the study by trying to evaluate the console and visualization components simultaneously. Waisel, Gabbard, and Hix collected data.

Length of each session: Each session lasted approximately three hours.

Post-session follow-up with users: After each participant performed the tasks, they were asked some final general questions, to assess their opinion of and satisfaction with DBC.
Qualitative data: Critical incident data formed the bulk of what we collected during each session. A critical incident is something that happens while a participant is working that has a significant effect, either positive or negative, on task performance or user satisfaction. This is arguably the single most important type of usability evaluation data. A negative critical incident (typically the type of most importance to usability) is a problem encountered. A positive critical incident is an occurrence that causes a participant to express satisfaction or closure. We collected large volumes of qualitative data from all three participants.

Quantitative data: Typically, numerical data (e.g., time to perform a task, number of errors made during task performance) are also collected. Because of the unusual nature of the tasks (i.e., no actual user interaction with the workbench), we decided not to collect any quantitative data.

Analysis of data: Immediately following the final (third) evaluation session, the evaluation team compiled and discussed their findings to create an initial list of usability issues.
2.3. Formative Evaluation Results
Our analysis and recommendations are presented as high-level, overarching usability issues, rather than low-level, discrete user interaction issues. Motivation for this is twofold: (1) this was the first evaluation to be conducted with representative users, and (2) users did not have direct interaction with DBC, but rather gave orders to the console operator. As a result, detailed user interaction issues did not emerge; rather, we observed several key usability issues, as reported below. As we look in the future at user-based interactions on the workbench, we will study lower levels of interaction and other design details. Our main usability observations are summarized in Sections 2.3.1 through 2.3.3 in three areas:
- role-based information architecture,
- visual clutter management, and
- user task allocation.

How To Read the Following Tables
Each subsection (2.3.1 through 2.3.3) begins with a descriptive summary of the general usability issue for that section, followed by a list and discussion of specific usability issues and representative observations from the formative evaluation sessions. Representative observations include a label denoting the user/participant number associated with that observation (e.g., P1 means we observed this from participant 1 in the first usability session). Many of these comments are lifted almost verbatim from notes taken during evaluation sessions, so they are not always the best grammar!

Each subsection also includes exemplars of the types of observations that might translate into DBC requirements. That is, the exemplars represent high-level requirements that, in turn, can be refined into several detailed requirements suitable for a programmatic requirements definition. For example, an exemplar requirement that reads "The user shall be able to customize display of sensors" may be further refined into several detailed requirements such as "The user shall be able to select whether sensor coverage or weapon ranges extend over land and/or water" and "The system will have the ability to show different colors that differentiate among types of sensor detections." We caution that the exemplars given below may not be final DBC requirements and may change based on further analyses, inputs, evaluations, etc.
2.3.1. Role-based Information Architecture Formative evaluation sessions combined with observations from FBE-India strongly suggest that a role-based information architecture is needed to support the diverse set of data, information, and visualization requirements inherent in an operational setting. A role-based information architecture allows DBC to support customization of the user interface for each user or class (i.e., role) of users to more effectively support user task performance. For example, the type of detailed information presented, as well as how it is presented (levels of abstraction and detail) during drill down, may differ from user class to user class (and possibly even from user to user). Likewise, the manner in which key visualization components are presented may also be customized from user (class) to user (class) or, if appropriate, from task to task.
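To make the notion concrete, below is a minimal illustrative sketch (in Python; all class, field, and layer names are hypothetical, not part of DBC's design) of how per-role display preferences might be represented so that the same underlying data is filtered and presented differently per role.

```python
# Illustrative sketch only: one way per-role display preferences might be
# represented. All names here are hypothetical, not part of DBC's design.
from dataclasses import dataclass, field

@dataclass
class RoleProfile:
    """Display preferences for one user role (e.g., Watch Officer)."""
    role: str
    visible_layers: set = field(default_factory=set)   # e.g., {"sensor_domes", "tracks"}
    drill_down_detail: str = "summary"                 # "summary" | "breakout" | "icons"
    distance_style: str = "length"                     # length, time, or constraint-based
    alert_types: set = field(default_factory=set)      # dynamic events this role tracks

WATCH_OFFICER = RoleProfile(
    role="watch_officer",
    visible_layers={"sensor_domes", "tracks", "threat_rings"},
    drill_down_detail="breakout",
    alert_types={"new_contact", "dome_penetration"},
)

def show_layer(layer: str, profile: RoleProfile) -> bool:
    """Draw an object only if its layer is enabled for this role."""
    return layer in profile.visible_layers

print(show_layer("sensor_domes", WATCH_OFFICER))   # True
print(show_layer("logistics", WATCH_OFFICER))      # False
```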
Usability Issues: Role-based Information Architecture
1. Support Role-based Dynamic Information: Dynamic information, such as that transmitted via information feeds, is typically critical to performance and should be continuously monitored by the system and subsequently conveyed to the user. Different users (and/or user roles) may wish to be aware of certain types of information whilst ignoring other types. A role-based information architecture should support varying types of dynamic information and varying ways in which it is presented across user roles. Some suggested examples of dynamic interface components that could be role-based include:
- intelligent visual/audio cues and alerts (e.g., what, when, how),
- visualization of time latency/lateness of information (e.g., what and how), and
- uncertainty visualization (e.g., uncertainty rings: what and how).
(A minimal sketch of role-based alert routing follows the observations below.)
P1: User didn't see any dynamic information the first time through a simulated run. The second time, he thought new objects were simply underneath other ones. User suggestions on presenting potential cues for notifying the commander: for the first few seconds, have them [cues] bold and flashing, or have them appear in a square with a white background for a few seconds, then fade automatically. User feels that audio cues are possible; he doesn't think they would be too annoying or cluttering. Could use different audio cues depending upon asset/enemy. User remarks: want to make sure that any introductory beeps or fading only happen in the viewing frustum. There should be a method for restricting the beeps, even if they are nearby but not on screen. Spatial audio could be used to denote direction. You should be able to turn beeps on and off.
P2: User wished to see some type of indication of time lateness, e.g., some type of color/hatching in the icon. User remarks: maybe use text like 1M for 1 minute, 2D for 2 days, placed right below the label for an icon, which the user can toggle on/off.
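As an illustration of issue 1, the sketch below (Python; event names, cue names, and the subscription set are hypothetical, not DBC's) shows one way role-based routing of dynamic information might combine the users' suggestions: suppressing unsubscribed information types, flashing-then-fading new objects, and restricting beeps to events in the viewing frustum.

```python
# Hypothetical sketch of role-based routing of dynamic information (issue 1);
# event names, cue names, and the subscription set are illustrative only.
def cues_for_event(event_type: str, on_screen: bool, subscribed: set) -> list:
    """Per the users' remarks above: suppress information types this role has
    not subscribed to, show a brief flash-then-fade visual cue, and restrict
    audio beeps to events inside the viewing frustum."""
    if event_type not in subscribed:
        return []                      # this role ignores this information type
    cues = ["flash_then_fade"]         # bold/flashing for a few seconds, then fade
    if on_screen:
        cues.append("audio_beep")      # beeps only for on-screen events
    return cues

print(cues_for_event("new_contact", True, {"new_contact", "dome_penetration"}))
print(cues_for_event("logistics_update", True, {"new_contact"}))  # -> []
```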
2. Support Real-Time Planning on the Workbench: While DBC has great potential for off-line planning and mission rehearsal, there are also opportunities to provide support for real-time planning. One example is "what if" positioning of assets.
P1: User wants to perform real-time planning; comments that friendly sensor coverage is a good 3D representation of range and coverage, but wants to swing (reposition) sensors to see if gaps in coverage could be easily closed. User wants to be able to do real-time what ifs, e.g., wants to move a friendly asset around to see "what if."
P3: User suggests the ability to drag ships with a smart pen (or other smart pointer). User comments that there should be a big label on screen to show WHAT IF mode (e.g., classification bar). User also wants to be able to save results as an image for subsequent use in a PowerPoint brief.
3. Employ Visual and Textual Role-based Organization to Support Effective Drill Down: As users drill down into the rich set of information provided by DBC, they should be presented with text, symbology, and visualizations that are optimized for their specific roles. For example, when selecting an icon representing an aggregate of enemy sites, some user roles may wish to see a high-level description of the whole aggregate, while other user roles may wish to see a textual breakout of each component, and still others may wish to see visual icon representations (and the specific locations) of each component. Further examples of visual (graphical) and textual role-based information for drill down include:
- distances between objects and user-specified points: some user roles may wish to see the distance represented as length, others as time, others as a function of environmental constraints (e.g., range rings and occlusions),
- use of symbology, including the aggregation and level of detail needed per role, and
- content of break-out boxes for various objects (e.g., track info), which should be role-based, or at least presented in a role-based hierarchical manner.
P1: Wants to see a call-out box that includes information on file, squadron ops center telephone and other contact info, personnel, etc.
P1: User remarks: could use standard air traffic control symbology, to make adoption easy for users.
P1: User comments on drill down and textual call-out boxes: for example, if we have an aircraft, I would like to click on the aircraft and show a call-out box; it could show live data in yellow, admin (on file) data in white, or live data with a bullet with last-update fields next to it. The type of information the user desires includes: call sign, mission type, munitions or specialized equipment, altitude, speed, heading, time on station for aircraft, op squad, contact info, email address, and whether they are mission ready.
P1: User wants to click on an enemy target (weapon/target pairing task) to see what it is and what we are aiming at it. User wants the option to drill down from the workbench.
P2: User remarks: How many missiles are on board? What types of arms are onboard? User would like to see imagery of the target, e.g., to see buildings.
P2: User wants to click on a line and bring up the attack plan. User wants to know distances between point a and point b.
P3: User wants to know strategic munitions (Tomahawks, SAMs): what types, how many left, etc., both in total and by individual site. For each asset, show how many are shot and how many are left, then at the bottom of the screen show overall totals.
4. Allow Users to Create and Save Role-based Views: Depending upon the user role and the particular setting and context, differing views of the world may be appropriate. Here we define a view to include more than a geometric perspective: also what types of information are visible or available, as well as how information is presented.
P1: Prowler crew would want to isolate a single target and its bubble and look from the view a pilot would take.
P2: User would like to search for all SA-6s and have them flash; or alternatively, have the computer cull all other objects on request.
P2: User wishes to see speed and course indicators and altitude at a glance, without drilling down. User also feels that there is other information the system could provide (e.g., via popup window), e.g., onboard arms, number of missiles in silo, name of captain, database of onboard officers.
5. Support Role-based Sensor Dome Representations: Various means of representing sensor domes are needed to support efficient situational awareness, depending upon user role and specific user tasks/goals. A role-based architecture (as it relates to sensor dome representations) should support easy customization of and access to, for example:
- number and types of domes displayed (e.g., early warning versus target acquisition),
- manner in which domes appear and disappear,
- representation of dome types (color, opacity, etc.), and
- representation of lines of delineation along dome edges: some user roles may wish to use terrain masking, others may wish to see simple circles, and others may wish to see terrain masking at various elevations.
P1: Wants to see just SA-6 sites with their sensor domes. Wants to take out bubbles and leave rings in place. Wants to reserve bubble use for looking at a single site, e.g., for approach.
P1: User remarks: was 3 domes, but now only 2 rings; this is confusing to him. Wants to use different colors for each dome; thinks this would make it easier to see. Also consider using a gradation of color.
P1: User remarks: Can you show me ranges of arms from different altitudes? Thinks having range numbers presented would be useful.
P1: Wondered about segmenting domes/circles and showing angles at each interval. From the top down, we could show distances from center.
P3: User wants shading according to what dome a threat (e.g., enemy air asset) is currently in. User suggests color coding the track and line of detection to denote which dome the threat is in. User also suggests that when the enemy air asset is painted, DBC should change the line of detection in some other way (e.g., make it wider, flash it, etc.). User also wishes to peel away sensor domes as threats enter them, and to see dark lines/circles at the height of the aircraft versus at sea/site level.
6. Allow Users to Cross-Reference Objects and Grid Cells: A role-based architecture should support the ability to cross-reference objects and grid references. This would allow a user to immediately know what objects are in a given grid cell; currently all objects are simply listed and grouped in a single hierarchy (based on the parent data folder, not spatial location). Indeed, employing the role-based architecture at a higher level implies that each object be classified, characterized, and cross-referenceable in many different ways. (A minimal sketch of such an index follows below.)
P2: User wishes to click on the sensor dome/site directly, e.g., click on the one in grid F5, as opposed to guessing which one is in F5.
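As an illustration of issue 6, the sketch below (Python; all names hypothetical, not DBC's) shows a minimal secondary spatial index that would let a user ask directly for the objects in a given grid cell, alongside the existing folder hierarchy.

```python
# Illustrative sketch of cross-referencing objects by grid cell (issue 6);
# a simple secondary spatial index alongside the existing folder hierarchy.
from collections import defaultdict

class GridIndex:
    """Maps grid cell labels (e.g., 'F5') to the objects located in them."""
    def __init__(self):
        self._cells = defaultdict(set)

    def place(self, obj_id: str, cell: str) -> None:
        self._cells[cell].add(obj_id)

    def objects_in(self, cell: str) -> set:
        """Answer 'click on the one in grid F5' without guessing."""
        return set(self._cells[cell])

index = GridIndex()
index.place("sa6_site_3", "F5")
assert index.objects_in("F5") == {"sa6_site_3"}
```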
Below are some exemplar DBC requirements based on the usability issues presented above. Each requirement is labeled with a number used as a cross-reference between the usability issues (above) and the exemplar DBC requirements.

Exemplar DBC Requirements: Role-based Information Architecture
Ref.  Description
1.    The system shall provide cues for appropriate events.
2.    The system shall provide "what if" capabilities for real-time planning.
3.    The system shall support appropriate drill down capabilities.
3.    The system shall support drill down for textual display of detailed information (e.g., about an object or location).
5.    The user shall be able to customize display of sensors.
2.3.2. Visual Clutter Management
Formative evaluations indicate that one of the most significant opportunities for DBC improvement lies in appropriate management of visual clutter. Clearly, a system that supports visualization of a very large and very rich data set needs methods (some implicit, some explicit) for handling display clutter; it is certainly not reasonable to simply display everything. Some of the visual clutter can be managed by adopting a role-based information architecture (as presented previously), but other techniques must be employed to further alleviate the problem. Some examples of these are presented in the table below.
Usability Issues: Visual Clutter Management
7. Control/Remove Textual Clutter of Labels: In situations where many information items are co-located in a small area (relative to zooming level), textual clutter of labels piled on top of each other often renders the numerous labels unreadable. A more appropriate placement/labeling technique, along with role-based information culling and representations, is among the possible solutions (see the sketch following this issue's observations). Another possibility is to implement a Magic Lens metaphor that allows users to place a magnifying-glass-type window over the cluttered area to reveal subsequent levels of detail and/or abstraction.
P1: User wants the system to move the label text further from the icons and tie the text to the icon via a line. He feels that this would help with the textual clutter.
P3: User remarks: need balance between clutter and context.
P3: User wants to have custom settings per view; e.g., labels on in this view, labels off in that view.
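As an illustration of the leader-line suggestion in issue 7, the sketch below (Python; the radial-placement strategy and all names are our hypothetical choices, not DBC's) spreads co-located labels on a circle around their icons, each tied back to its icon by a line.

```python
# Hypothetical sketch of the leader-line idea in issue 7: push overlapping
# labels radially away from their icons and connect each with a line.
import math

def offset_labels(icons, radius=20.0):
    """Given icons clustered at (x, y) positions, spread their labels evenly
    on a circle of the given radius; each (icon, label_pos) pair implies a
    leader line drawn between the two points."""
    n = len(icons)
    placed = []
    for i, (x, y) in enumerate(icons):
        angle = 2 * math.pi * i / max(n, 1)
        placed.append(((x, y), (x + radius * math.cos(angle),
                                y + radius * math.sin(angle))))
    return placed

# Three co-located icons get three distinct, readable label positions.
print(offset_labels([(0, 0), (1, 0), (0, 1)]))
```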
8. Support Direct Navigation and Viewpoint Changes: To help users distinguish details between and within areas of clutter, one possibility is to allow users to instantly swap between detailed, low-altitude views and general, god's-eye views. Another possibility is to allow workbench users to directly control their point of view. These techniques allow a workbench user to quickly and easily position the view to optimize and customize their own visual context.
P1: User wishes to change viewpoint to better fit his role/goals (e.g., lose every threat except the SA-6 sites).
P2: User is interested in some navigation control at the workbench; user remarks: it would be faster, don't need an operator (for navigation).
P3: User does not want to save viewpoints himself, but wants to be able to name and recall views (perhaps via voice recognition); when saving, he works with the operator.
9. Support Visual Aggregation of Objects: Along with a role-based architecture, visual aggregation of objects supports progressive disclosure and progressive understanding (drill down) of information. Visual representations of aggregates can further be used to drill down into lower levels of (textual) detail. For example, when the user selects a specific aggregate, a breakout box could appear that contains more detailed textual information, based potentially on the role-based approach. (A zoom-based sketch follows the observation below.)
P3: User wants to be able to abstract groups of information; when zoomed out, there may be one icon and label to represent a group of targets/threats, but when clicked on or zoomed in, specific components/items appear separately. From a management hierarchy perspective, the user does not need to know that there is a 5-story building with a window in a specific location, just needs to know that the information is there for others to drill down to as desired.
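As an illustration of issue 9, the sketch below (Python; the zoom scale and all names are hypothetical) collapses a group of objects to a single aggregate icon when zoomed out and discloses the individual components when zoomed in.

```python
# Illustrative sketch of zoom-dependent visual aggregation (issue 9):
# beyond a threshold zoom level, a group of targets collapses to one icon.
def visible_items(group_members, group_label, zoom, breakout_zoom=0.5):
    """When zoomed out, show a single aggregate icon; when zoomed in past
    breakout_zoom (hypothetical scale: 0 = global view, 1 = close-up),
    disclose the individual components."""
    if zoom < breakout_zoom:
        return [f"{group_label} ({len(group_members)} items)"]
    return group_members

print(visible_items(["SA-6 #1", "SA-6 #2", "SA-6 #3"], "SA-6 battery", zoom=0.2))
print(visible_items(["SA-6 #1", "SA-6 #2", "SA-6 #3"], "SA-6 battery", zoom=0.8))
```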
Below are some exemplar DBC requirements based on the visual clutter management usability issues presented above.

Exemplar DBC Requirements: Visual Clutter Management
Ref.    Description
7,8,9   The system shall support user- and system-management of visual clutter.
8       The system shall support direct navigation and viewpoint changes.
9       The system shall support visual aggregation of objects.
2.3.3. User Task Allocation
One of the more challenging usability aspects of DBC (or any multi-user system) is allocating features and functionality among DBC's multiple (usually two) users, i.e., the console user and the workbench user. The formative evaluation identified three key recommendation areas that address user task allocation, presented in the table below.

Usability Issues: User Task Allocation
10. Support (at least limited) User Interaction on the Workbench: While some commanders may prefer to issue console orders for almost every aspect of DBC interaction, the formative evaluation indicates that most commanders would like to have, and would make use of, at least some limited, simple functionality at the workbench. Moreover, many of these functions, if available on the workbench, will likely speed up user task performance and accuracy by giving the workbench user direct control. For example, when performing fine-grained point-of-view positioning, a workbench user can tweak the view to their liking much more easily than by continuously refining the point-of-view via verbal protocol to the console operator (i.e., "a little to the left, now up," etc.). Three main areas of desirable user interaction at the workbench emerged from the formative evaluation:
- navigation and viewpoint control: this includes fine-grained manipulation of the user's point-of-view, as well as the ability to instantly access specific views;
- selection and drill down: workbench users should be able to interact with the workbench directly, allowing them to select objects (e.g., for query) as well as to perform drill down tasks; and
- user-defined functions: this includes the ability for workbench users to map device components (e.g., buttons) to DBC functions. This would allow workbench users to, for example, set views and switch among views (a minimal sketch follows the observations below).
We recommend that any functionality/user interaction developed for the workbench be evaluated via quick, focused, discrete usability evaluations. See Section 4 for more details. (This recommendation is closely related to usability issue 12 below.)
P1: User remarks: easy pan and zoom for the commander would be nice. User wants to see support of a smart ruler or smart legend.
P3: User feels that user interaction on the workbench with real-world props would be very useful. User also feels that DBC should support user interaction with hands and fingers.
P3: User wants the ability to do things himself on the workbench. User remarks: this way we can explore more, like surfing the net, without worrying about what the console operator may think (e.g., why are we doing this?).
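As an illustration of the user-defined functions area of issue 10, the sketch below (Python; all names hypothetical, not DBC's API) maps device buttons, such as those on the Spaceball, to workbench actions like recalling saved views.

```python
# Hypothetical sketch of user-defined functions (issue 10): mapping device
# components such as Spaceball buttons to workbench actions.
class ButtonMap:
    """Lets a workbench user bind device buttons to functions, e.g., saved
    views, so they need not route every request through the console operator."""
    def __init__(self):
        self._bindings = {}

    def bind(self, button: int, action) -> None:
        self._bindings[button] = action

    def press(self, button: int):
        action = self._bindings.get(button)
        return action() if action else None

buttons = ButtonMap()
buttons.bind(1, lambda: "recall view: littoral overview")
buttons.bind(2, lambda: "recall view: SA-6 close-up")
print(buttons.press(1))
```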
11. Employ Visual Feedback/Notification in Combination with Commander's Acknowledgements: As dynamic events unfold, information from remote and local sources needs to be presented to the workbench user in a timely manner. In many cases, the information to be presented may be off-screen or difficult to locate in a sea of visual clutter. Moreover, many of these emerging events may be mission-critical, and may require time-critical responses. As such, DBC should support innovative visual and audio means to notify the workbench user, as well as a means to allow a workbench user to acknowledge an emerging event. In many cases, for example, providing an early warning tone, followed by an explicit notification, may provide the least obtrusive means of notification. The commander's acknowledgement should be monitored by the system, so that if the event is not acknowledged in a timely fashion, the system can further notify (and direct the attention of) the commander (see the sketch following the observations below). Once again, the role-based information architecture may allow DBC users to further define how and when visual feedback and the resulting acknowledgements are managed.
P1: User wants to run the scenario again, using threat rings to assess the status of a site. If you think it's neutralized, then denote the site differently: all the circles could be dashed circles.
P2: User wants some type of visual indication (e.g., flash) to denote dynamic events. User remarks: we could have the visual indication do different things for different semantics; e.g., a new ship/aircraft might flash until acknowledged, while clicking on an SA-6 site might make it flash once or twice and then turn flashing off.
P3: User remarks: if new threats come up on the screen, let the audio alert go to the subplots (specialists working in small specific areas) for potential filtering. The specialist can then make the commander aware of the threat should the specialist decide it warrants attention. This purposefully inserts the human in the loop.
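As an illustration of issue 11, the sketch below (Python; the timings, function names, and messages are hypothetical) shows the notify-then-escalate pattern: an unobtrusive early cue, a wait for the commander's acknowledgement, and escalation if none arrives in time.

```python
# Illustrative sketch of notification-with-acknowledgement (issue 11):
# an early-warning tone, then escalation if the commander does not
# acknowledge in time. Timings here are hypothetical placeholders.
import time

def notify_with_escalation(event, acknowledged, timeout_s=5.0, poll_s=0.5):
    """Fire an unobtrusive cue first; if acknowledged() stays False past
    timeout_s, escalate to a more attention-directing notification."""
    print(f"early warning tone: {event}")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if acknowledged():
            return "acknowledged"
        time.sleep(poll_s)
    print(f"ESCALATED alert, directing commander's attention: {event}")
    return "escalated"

# Simulated commander who never acknowledges: the alert escalates.
print(notify_with_escalation("new enemy air contact", lambda: False, timeout_s=1.0))
```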
12. Explore the Use of Innovative Workbench User Interface Components: To provide efficient user task performance on the workbench, a number of user interface components should be examined. Specifically, there is a need to examine appropriate user interaction devices (e.g., Spaceball, Stylus, mouse, etc.), as well as different means of displaying visual windows (e.g., the Magic Lens mentioned earlier). As with usability issue 10 above, we recommend that any functionality/user interaction/device developed and used on the workbench be evaluated via quick, focused, discrete usability evaluations, to determine which devices best support the identified workbench tasks. Some users recommended (and developers, even prior to the formative evaluation, had already considered the possibility of) auxiliary displays near the workbench. Potential display devices include a touch screen, small monitors mounted along the back of the workbench, and a small LCD display mounted near the front of the workbench. Again, we recommend a series of quick, focused, discrete evaluations be used to determine the appropriateness and usability of such displays, as well as to identify the best way to present information (e.g., text to support drill down, visual indicators/legends) on these auxiliary displays. See Section 4 for more details.
P1: User wants to use a mouse or touchpad to navigate. Would also consider using the mouse to illustrate and annotate the workbench display. Would also want to have a selection of colors or shapes, or potentially changeable shapes, to support rich annotation and illustration depending upon tasks.
P2: User wants voice recognition, or alternatively would like to right-click, for example, on an SA-6 site and choose "show all" or "show similar" or "flash all."
P2: User wants to use a flat panel mounted on the wall to see auxiliary information.
Below are some exemplar DBC requirements based on the user task allocation usability issues presented above.

Exemplar DBC Requirements: User Task Allocation
Ref.  Description
10.   The system shall support customizable user-defined functions (e.g., uncertainty rings, alert acknowledgements).
10.   The system shall support user navigation at the workbench.
10.   The system shall support user creation, storage, and retrieval of viewpoints at the workbench.
11.   The system shall support notification of dynamic events.
11.   The system shall support workbench acknowledgement of notifications.
12.   The system shall provide workbench devices appropriate to workbench user tasks and functionality.
high-quality usability data without going on-board. Formative evaluation provides a very cost-effective process for getting many of the usability bugs out before going on-board, so that a much-improved system can be used in the exercises. At FBE-I, evaluation was much more about system performance than about user performance with the system.

One con of the formative evaluation process: the stress level of a real-world setting is difficult to simulate in the lab. We did not attempt to incorporate any kind of stress into our lab-based sessions.
Thus, we expect that neither usability evaluations nor user-centered observation during exercises/wargames can effectively replace the other. Instead, it is evident that employing both techniques is very effective and can identify a larger (and more diverse) set of usability issues, as well as reinforce significant usability issues. A close examination of how best to modify our usability evaluation process to optimize data collection during an exercise/wargame may provide further insight into this interesting and promising finding.
4. RECOMMENDATIONS FOR FY 2002 USABILITY ENGINEERING ACTIVITIES

Hire a Graphics Designer: One of DBC's primary design goals is to present a rich set of time-critical information to commanders to support situational awareness. The complex nature of the domain (e.g., littoral battlespace situational awareness) dictates that the set of information to be presented is both complex and copious. Thus, we recommend adding to the development team a trained graphics designer: one who specializes in (graphically) presenting complex information in a concise manner, and ideally someone with some military experience/background.
Evaluate Visualization Techniques: In conjunction with the creation of new visualization techniques (and with the aid of a graphics designer), we recommend undertaking small, discrete usability evaluations of emerging visualization techniques. This approach allows newly generated visualization techniques to be empirically evaluated quickly, to identify each technique's strengths and weaknesses in a cost-effective manner. From these evaluations can emerge recommendations on which visualization techniques will likely have the best usability for DBC. Based on observations and analysis of the formative evaluation, we have identified some likely candidates for graphics design and subsequent evaluation, including:
- text/label placement/breakout/deconfliction,
- sensor domes (altitude, distances, opacity, terrain masking, etc.),
- visual and aural alerts, and
- time lateness of dynamic information.
Evaluate Interaction Techniques and Devices: To further refine workbench interaction with DBC, we recommend undertaking small, discrete usability evaluations of interaction techniques and devices (like our previous recommendation for visualizations). This will allow us to further examine task allocation across console and workbench users. These small, discrete evaluations will use simple scenarios of task and interaction components in a simple context, allowing the evaluations to focus on the inherent characteristics of different user interaction techniques and devices. Some candidate workbench interaction techniques/devices, based on our observations at the formative evaluation, include:
- a rudimentary touch screen as an auxiliary console, and
- interaction devices for navigation.