
Eye Tracking in Usability Testing: Is It Worthwhile?

Antti Aaltonen
Department of Computer Science, University of Tampere, Finland
1. Introduction
Two years ago, we purchased an Applied Science Laboratories model 4250R+ eye tracker for our usability lab. The system
accommodates floor-mounted optics, tracking mirrors and extended head movement options. The system does not require contact with the user and
allows the subject to move more freely, which we considered an important issue. Until now, we have mostly used the eye tracker in
basic HCI research and as an additional device in our usability tests. Although we have recorded both the numeric gaze point data and
the gaze cursor superimposed on the video, in usability tests we have analyzed only the video.
2. Possible gains
There are situations in usability tests where we cannot locate the origin of a problem with certainty using the think-aloud method alone.
However, since the point of gaze relates to the subject's focus of attention, we can use an eye tracker to help us identify the cause of the
problem. The gaze data also enhances and backs up observations made with the think-aloud method.
With numeric data, we can estimate the distribution of the time spent on different areas of the screen. This information helps us
understand which screen parts consume time, which information is fetched quickly, and where the user searches for certain information to
complete the task. Gaze data also reveals, for example, whether the user is paying enough attention to a specific area in a problematic
situation, or which screen parts seem difficult or confusing.
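As an illustration of how such a distribution can be derived from the numeric gaze point data, the following sketch (a simplified example, not our actual tool chain; the area names, pixel coordinates and the 50 Hz rate are assumptions made for the example) counts how large a share of the recorded samples falls into each rectangular screen area.

    from collections import Counter

    # Hypothetical, non-overlapping screen areas: name -> (left, top, right, bottom) in pixels.
    AREAS = {
        "menu bar":   (0,   0,  1023,  39),
        "tool panel": (0,  40,   199, 767),
        "work area":  (200, 40, 1023, 767),
    }

    def area_of(x, y):
        # Return the name of the area containing gaze point (x, y), or None if outside all areas.
        for name, (left, top, right, bottom) in AREAS.items():
            if left <= x <= right and top <= y <= bottom:
                return name
        return None

    def dwell_time_share(samples, rate_hz=50):
        # samples: list of (x, y) gaze points recorded at rate_hz.
        # Returns, per area, the seconds spent there and the share of all in-area samples.
        counts = Counter(area_of(x, y) for x, y in samples)
        counts.pop(None, None)               # drop samples that hit no defined area
        total = sum(counts.values()) or 1
        return {name: (n / rate_hz, n / total) for name, n in counts.items()}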
Eye tracking data is detailed enough to show how usage strategies evolve as the test user becomes more familiar with the product.
Analysis of scan paths across subjects may reveal patterns that recur from one test subject to another, which may serve as evidence
of poor screen design. By examining the scan paths we may also understand how the user structures information,
and based on that we can develop better screen layouts or information visualization techniques. Such analysis is impossible without a
quantitative data set.
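One common way to compare scan paths across subjects is to encode each path as a string of area labels and measure the similarity of the strings with an edit distance. The sketch below is a generic example of that idea, not a description of our own analysis; it reuses the hypothetical area_of() helper and area names from the previous sketch.

    def encode_scan_path(fixations, area_of):
        # Turn a list of fixation centres (x, y) into a string of one-letter area codes.
        codes = {"menu bar": "M", "tool panel": "T", "work area": "W"}   # example labels
        return "".join(codes.get(area_of(x, y), "?") for x, y in fixations)

    def edit_distance(a, b):
        # Levenshtein distance between two scan-path strings; a small distance means similar paths.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    # For example, edit_distance("MTWWT", "MTWT") == 1: the two paths differ by one fixation.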
3. The problems
When using an eye tracker in usability testing, test users have to be selected carefully. It is not enough to consider, for example,
their skills; we also have to consider their eyes and vision, because these factors may disturb tracking, produce noisy data and
ruin the analysis. Therefore, a larger pool of test subjects (and test runs) is needed.
Our eye tracker is noisy (e.g., the illuminator fan, the focus motor, and the servo-controlled mirrors), and the noise may distract the test
user. Therefore, we warn the user about it before the testing begins. We also need to explain the procedure in more detail than in a test
without the tracker. Altogether, a test with the tracker takes more time and requires more patience from the test user.
Although our tracking system contains tracking mirrors, the user's head movements cause problems. In addition, we minimize
keyboard usage, since many users tend to look at the keyboard while typing, which usually causes the eye tracker to lose the eye.
The eye tracker requires a clear line of sight between its optics and the user, and therefore the test tasks have to be presented to the test user
on screen so that (s)he does not need to focus on a separate document. This causes difficulties for novice users, who find it difficult to
switch between windows. Presenting test tasks as speech is often not viable, because the user easily forgets complicated tasks. We have
also noticed that when the observer is in the same room with the test subject, the subject tends to turn towards the observer whenever
the observer speaks. Replacing the observer with a loudspeaker and a microphone is not a good solution either, since it is awkward for the test
user.
We have noticed that the tracker needs re-calibration during longer tests. The re-calibrations are done between tasks, but because
this disturbs the flow of the test, we have considered developing an automatic calibration procedure for our system.
4. Analysis
Our numerical analysis is based on dividing the screen into areas of interest. To avoid ambiguity, the areas cannot overlap,
and they require static window dimensions and a single main-window interface. Interpreting the data becomes difficult if the test
software contains scrollable content or multiple windows that can be positioned freely on the screen.
Because of the sampling rate (50 Hz), both the amount and the quality of the data are problematic in the numerical analysis. Firstly, the low
rate makes finding the exact start and end of fixations difficult. However, this problem is more serious when conducting HCI
research than in usability analysis. Secondly, eye movement data includes a great deal of noise due to individual and equipment
variability. Thirdly, although we consider the sampling rate low, it still produces a massive amount of raw data.
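For the first problem, a widely used approach is a dispersion-threshold algorithm that groups consecutive samples into fixations. The sketch below is a simplified, generic version of that idea, not the algorithm we are developing; the dispersion and duration thresholds are example values only.

    def detect_fixations(samples, rate_hz=50, max_dispersion=30, min_duration=0.1):
        # Group consecutive gaze samples into fixations (dispersion-threshold idea).
        # samples: list of (x, y) points recorded at rate_hz; max_dispersion in pixels.
        # Returns (centre_x, centre_y, duration_s) tuples.
        def dispersion(pts):
            xs, ys = zip(*pts)
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        min_len = int(min_duration * rate_hz)      # 0.1 s at 50 Hz -> 5 samples
        fixations, start = [], 0
        while start + min_len <= len(samples):
            end = start + min_len
            if dispersion(samples[start:end]) <= max_dispersion:
                # Grow the window while the points stay close together.
                while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                    end += 1
                xs, ys = zip(*samples[start:end])
                fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), (end - start) / rate_hz))
                start = end
            else:
                start += 1
        return fixations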
5. Discussion
The bottom line is how to convince the customer that eye tracking provides additional value for their money. If we do numerical
analysis in addition to video analysis, the extra time required is considerable and the analysis becomes more expensive. To reduce
analysis time we need dedicated automated software; we are therefore currently developing scan path visualization software that
includes a new fixation recognition algorithm.
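As a rough illustration of the kind of output such software can produce (this sketch is not our tool; it assumes matplotlib and the fixation tuples produced by a detector like the one sketched above), a scan path can be drawn by connecting fixations in temporal order and scaling each circle by fixation duration.

    import matplotlib.pyplot as plt

    def plot_scan_path(fixations, screen=(1024, 768)):
        # fixations: (x, y, duration_s) tuples, e.g. from detect_fixations() above.
        xs = [f[0] for f in fixations]
        ys = [f[1] for f in fixations]
        sizes = [f[2] * 1000 for f in fixations]           # circle size grows with fixation duration
        plt.plot(xs, ys, "-", color="gray", linewidth=1)   # saccades in temporal order
        plt.scatter(xs, ys, s=sizes, alpha=0.5)
        for i, (x, y, _) in enumerate(fixations, 1):
            plt.annotate(str(i), (x, y))                   # fixation order number
        plt.xlim(0, screen[0])
        plt.ylim(screen[1], 0)                             # invert y to match screen coordinates
        plt.show()

The y-axis is inverted so that the plot uses the same top-left origin as the screen coordinates of the gaze data.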
Acknowledgements
These experiences were collected in usability tests carried out jointly with Aulikki Hyrskykari, Saila Ovaska and Kari-Jouko Räihä.
