Conferences in Research and Practice in Information Technology
Volume 139

Proceedings of the
Fourteenth Australasian User Interface Conference
(AUIC 2013), Adelaide, Australia, 29 January – 1 February 2013

Volume 139 in the Conferences in Research and Practice in Information Technology Series.
Published by the Australian Computer Society Inc.
Published in association with the ACM Digital Library.
User Interfaces 2013. Proceedings of the Fourteenth Australasian User Interface Conference (AUIC 2013), Adelaide, Australia, 29 January – 1 February 2013.
Conferences in Research and Practice in Information Technology, Volume 139.
Copyright © 2013, Australian Computer Society. Reproduction for academic, not-for-profit purposes permitted provided the copyright text at the foot of the first page of each paper is included.
Editors:
Ross T. Smith
School of Computer and Information Science
University of South Australia
GPO Box 2471
Adelaide, South Australia 5001
Australia
Email: ross.t.smith@unisa.edu.au
Burkhard C. Wünsche
Department of Computer Science
University of Auckland
Private Bag 92019
Auckland
New Zealand
Email: burkhard@cs.auckland.ac.nz
Series Editors:
Vladimir Estivill-Castro, Griffith University, Queensland
Simeon J. Simoff, University of Western Sydney, NSW
Email: crpit@scm.uws.edu.au
Publisher: Australian Computer Society Inc.
PO Box Q534, QVB Post Office
Sydney 1230
New South Wales
Australia.
Conferences in Research and Practice in Information Technology, Volume 139.
ISSN 1445-1336.
ISBN 978-1-921770-24-1.
Document engineering, January 2013 by CRPIT
On-line proceedings, January 2013 by the University of Western Sydney
Electronic media production, January 2013 by the University of South Australia
The Conferences in Research and Practice in Information Technology series disseminates the results of peer-reviewed
research in all areas of Information Technology. Further details can be found at http://crpit.com/.
Table of Contents
Contributed Papers
Tangible Agile Mapping: Ad-hoc Tangible User Interaction Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . .
James A. Walsh, Stewart von Itzstein and Bruce H. Thomas
Contributed Posters
An Ethnographic Study of a High Cognitive Load Driving Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Robert Wellington, Stefan Marks
Experimental Study of Steer-by-Wire and Response Curves in a Simulated High Speed Vehicle . . . . . 123
Stefan Marks, Robert Wellington
3D Object Surface Tracking Using Partial Shape Templates Trained from a Depth Camera for Spatial
Augmented Reality Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Kazuna Tsuboi, Yuji Oyamada, Maki Sugimoto, Hideo Saito
My Personal Trainer - An iPhone Application for Exercise Monitoring and Analysis . . . . . . . . . . . . . . . 127
Christopher R. Greeff, Joe Yang, Bruce MacDonald, Burkhard C. Wünsche
Interactive vs. Static Location-based Advertisements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Moniek Raijmakers, Suleman Shahid, Omar Mubin
Temporal Evaluation of Aesthetics of User Interfaces as one Component of User Experience . . . . . . . . 131
Marlene Vogel
Preface
It is our great pleasure to welcome you to the 14th Australasian User Interface Conference (AUIC), held
in Adelaide, Australia, January 29th to February 1st 2013, at the University of South Australia. AUIC is
one of 11 co-located conferences that make up the annual Australasian Computer Science Week.
AUIC provides an opportunity for researchers in the areas of User Interfaces, HCI, CSCW, and pervasive computing to present and discuss their latest research, to meet with colleagues and other computer
scientists, and to strengthen the community and explore new projects, technologies and collaborations.
This year we have received a diverse range of submissions from all over the world. Out of 31 submitted
papers, 12 papers were selected for full paper presentations and 6 were selected for posters. The breadth
and quality of the papers reflect the dynamic and innovative research in the field and we are excited to see
the international support.
Accepted papers were rigorously reviewed by the community to ensure high-quality publications. This
year we are excited to announce that all AUIC publications will now be indexed by Scopus (Elsevier) to
help increase their exposure and citation rates.
We offer our sincere thanks to the people who made this year's conference possible: the authors and participants, the program committee members and reviewers, the ACSW organizers, Scopus and the publisher CRPIT (Conferences in Research and Practice in Information Technology).
Ross T. Smith
University of South Australia
Burkhard Wünsche
University of Auckland
AUIC 2013 Programme Chairs
January 2013
Programme Committee
Chairs
Ross T. Smith, University of South Australia, Australia
Burkhard C. Wünsche, University of Auckland, New Zealand
Web Chair
Stefan Marks, AUT University, New Zealand
Members
Mark Apperley, University of Waikato, New Zealand
Robert Amor, University of Auckland, New Zealand
Mark Billinghurst, HITLab, New Zealand
Rachel Blagojevic, University of Auckland, New Zealand
Paul Calder, Flinders University, Australia
David Chen, Griffith University, Australia
Sally Jo Cunningham, University of Waikato, New Zealand
John Grundy, Swinburne University of Technology, Australia
Stewart von Itzstein, University of South Australia, Australia
Christof Lutteroth, University of Auckland, New Zealand
Stuart Marshall, Victoria University of Wellington, New Zealand
Masood Masoodian, University of Waikato, New Zealand
Christian Müller-Tomfelde, CSIRO, Australia
Beryl Plimmer, University of Auckland, New Zealand
Gerald Weber, University of Auckland, New Zealand
Burkhard C. Wünsche, University of Auckland, New Zealand
Organising Committee
Chair
Dr. Ivan Lee
Finance Chair
Dr. Wolfgang Mayer
Publication Chair
Dr. Raymond Choo
Registration Chair
Dr. Jinhai Cai
On behalf of the Organising Committee, it is our pleasure to welcome you to Adelaide and to the 2013
Australasian Computer Science Week (ACSW 2013). Adelaide is the capital city of South Australia, and
it is one of the most liveable cities in the world. ACSW 2013 will be hosted in the City West Campus
of University of South Australia (UniSA), which is situated at the north-west corner of the Adelaide city
centre.
ACSW is the premier event for Computer Science researchers in Australasia. ACSW 2013 consists of conferences covering a wide range of topics in Computer Science and related areas, including:
Australasian Computer Science Conference (ACSC) (Chaired by Bruce Thomas)
Australasian Database Conference (ADC) (Chaired by Hua Wang and Rui Zhang)
Australasian Computing Education Conference (ACE) (Chaired by Angela Carbone and Jacqueline
Whalley)
Australasian Information Security Conference (AISC) (Chaired by Clark Thomborson and Udaya
Parampalli)
Australasian User Interface Conference (AUIC) (Chaired by Ross T. Smith and Burkhard C. Wünsche)
Computing: Australasian Theory Symposium (CATS) (Chaired by Tony Wirth)
Australasian Symposium on Parallel and Distributed Computing (AusPDC) (Chaired by Bahman
Javadi and Saurabh Kumar Garg)
Australasian Workshop on Health Informatics and Knowledge Management (HIKM) (Chaired by Kathleen Gray and Andy Koronios)
Asia-Pacific Conference on Conceptual Modelling (APCCM) (Chaired by Flavio Ferrarotti and Georg
Grossmann)
Australasian Web Conference (AWC2013) (Chaired by Helen Ashman, Michael Sheng and Andrew
Trotman)
In addition to the technical program, we have also put together social activities for further interaction among our participants. A welcome reception will be held at the Rockford Hotel's Rooftop Pool area, to enjoy the fresh air and panoramic views of the cityscape during Adelaide's dry summer season. The conference banquet will be held in the Adelaide Convention Centre's Panorama Suite, to experience an expansive view of Adelaide's serene riverside parklands through the suite's seamless floor-to-ceiling windows.
Organising a conference is an enormous amount of work even with many hands and a very smooth
cooperation, and this year has been no exception. We would like to share with you our gratitude towards
all members of the organising committee for their dedication to the success of ACSW2013. Working like
one person for a common goal in the demanding task of ACSW organisation made us proud that we got
involved in this effort. We also thank all conference co-chairs and reviewers for putting together the conference programs, which are the heart of ACSW. Special thanks go to Alex Potanin, who shared valuable experience in organising ACSW and provided endless help as the steering committee chair. We'd also like to thank
Elyse Perin from UniSA, for her true dedication and tireless work in conference registration and event
organisation. Last, but not least, we would like to thank all speakers and attendees, and we look forward
to several stimulating discussions.
We hope your stay here will be both rewarding and memorable.
Ivan Lee
School of Information Technology & Mathematical Sciences
ACSW2013 General Chair
January, 2013
CORE welcomes all delegates to ACSW2013 in Adelaide. CORE, the peak body representing academic
computer science in Australia and New Zealand, is responsible for the annual ACSW series of meetings,
which are a unique opportunity for our community to network and to discuss research and topics of mutual
interest. The original component conferences - ACSC, ADC, and CATS, which formed the basis of ACSW
in the mid 1990s - now share this week with eight other events - ACE, AISC, AUIC, AusPDC, HIKM,
ACDC, APCCM and AWC which build on the diversity of the Australasian computing community.
In 2013, we have again chosen to feature a small number of keynote speakers from across the discipline:
Wen Gao (AUIC), Riccardo Bellazzi (HIKM), and Divyakant Agrawal (ADC). I thank them for their
contributions to ACSW2013. I also thank invited speakers in some of the individual conferences, and the
CORE award winner Michael Sheng (CORE Chris Wallace Award). The efforts of the conference chairs
and their program committees have led to strong programs in all the conferences, thanks very much for all
your efforts. Thanks are particularly due to Ivan Lee and his colleagues for organising what promises to be
a strong event.
The past year has been turbulent for our disciplines. ERA2012 included conferences as we had pushed
for, but as a peer review discipline. This turned out to be good for our disciplines, with many more
Universities being assessed and an overall improvement in the visibility of research in our disciplines. The
next step must be to improve our relative success rates in ARC grant schemes; the most likely hypothesis for our low rates of success is how harshly we assess each other's proposals, a phenomenon which demonstrably occurs in the US NSF. As a US Head of Department explained to me, in CS "we circle the wagons and shoot within".
Beyond research issues, in 2013 CORE will also need to focus on education issues, including in Schools.
The likelihood that the future will have fewer computers is small, yet where are the numbers of students we need? In the US there has been massive growth in undergraduate CS numbers of 25 to 40% in many
places, which we should aim to replicate. ACSW will feature a joint CORE, ACDICT, NICTA and ACS
discussion on ICT Skills, which will inform our future directions.
CORE's existence is due to the support of the member departments in Australia and New Zealand,
and I thank them for their ongoing contributions, in commitment and in financial support. Finally, I am
grateful to all those who gave their time to CORE in 2012; in particular, I thank Alex Potanin, Alan Fekete,
Aditya Ghose, Justin Zobel, John Grundy, and those of you who contribute to the discussions on the CORE
mailing lists. There are three main lists: csprofs, cshods and members. You are all eligible for the members
list if your department is a member. Please do sign up via http://lists.core.edu.au/mailman/listinfo - we
try to keep the volume low but relevance high in the mailing lists.
I am standing down as President at this ACSW. I have enjoyed the role, and am pleased to have had
some positive impact on ERA2012 during my time. Thank you all for the opportunity to represent you for
the last 3 years.
Tom Gedeon
President, CORE
January, 2013
The Australasian Computer Science Week of conferences has been running in some form continuously
since 1978. This makes it one of the longest running conferences in computer science. The proceedings of
the week have been published as the Australian Computer Science Communications since 1979 (with the
1978 proceedings often referred to as Volume 0). Thus the sequence number of the Australasian Computer
Science Conference is always one greater than the volume of the Communications. Below is a list of the
conferences, their locations and hosts.
2014. Volume 36. Host and Venue - AUT University, Auckland, New Zealand.
2013. Volume 35. Host and Venue - University of South Australia, Adelaide, SA.
2012. Volume 34. Host and Venue - RMIT University, Melbourne, VIC.
2011. Volume 33. Host and Venue - Curtin University of Technology, Perth, WA.
2010. Volume 32. Host and Venue - Queensland University of Technology, Brisbane, QLD.
2009. Volume 31. Host and Venue - Victoria University, Wellington, New Zealand.
2008. Volume 30. Host and Venue - University of Wollongong, NSW.
2007. Volume 29. Host and Venue - University of Ballarat, VIC. First running of HDKM.
2006. Volume 28. Host and Venue - University of Tasmania, TAS.
2005. Volume 27. Host - University of Newcastle, NSW. APBC held separately from 2005.
2004. Volume 26. Host and Venue - University of Otago, Dunedin, New Zealand. First running of APCCM.
2003. Volume 25. Hosts - Flinders University, University of Adelaide and University of South Australia. Venue
- Adelaide Convention Centre, Adelaide, SA. First running of APBC. Incorporation of ACE. ACSAC held
separately from 2003.
2002. Volume 24. Host and Venue - Monash University, Melbourne, VIC.
2001. Volume 23. Hosts - Bond University and Griffith University (Gold Coast). Venue - Gold Coast, QLD.
2000. Volume 22. Hosts - Australian National University and University of Canberra. Venue - ANU, Canberra,
ACT. First running of AUIC.
1999. Volume 21. Host and Venue - University of Auckland, New Zealand.
1998. Volume 20. Hosts - University of Western Australia, Murdoch University, Edith Cowan University and
Curtin University. Venue - Perth, WA.
1997. Volume 19. Hosts - Macquarie University and University of Technology, Sydney. Venue - Sydney, NSW.
ADC held with DASFAA (rather than ACSW) in 1997.
1996. Volume 18. Host - University of Melbourne and RMIT University. Venue - Melbourne, Australia. CATS
joins ACSW.
1995. Volume 17. Hosts - Flinders University, University of Adelaide and University of South Australia. Venue - Glenelg, SA.
1994. Volume 16. Host and Venue - University of Canterbury, Christchurch, New Zealand. CATS run for the first
time separately in Sydney.
1993. Volume 15. Hosts - Griffith University and Queensland University of Technology. Venue - Nathan, QLD.
1992. Volume 14. Host and Venue - University of Tasmania, TAS. (ADC held separately at La Trobe University).
1991. Volume 13. Host and Venue - University of New South Wales, NSW.
1990. Volume 12. Host and Venue - Monash University, Melbourne, VIC. Joined by Database and Information
Systems Conference which in 1992 became ADC (which stayed with ACSW) and ACIS (which now operates
independently).
1989. Volume 11. Host and Venue - University of Wollongong, NSW.
1988. Volume 10. Host and Venue - University of Queensland, QLD.
1987. Volume 9. Host and Venue - Deakin University, VIC.
1986. Volume 8. Host and Venue - Australian National University, Canberra, ACT.
1985. Volume 7. Hosts - University of Melbourne and Monash University. Venue - Melbourne, VIC.
1984. Volume 6. Host and Venue - University of Adelaide, SA.
1983. Volume 5. Host and Venue - University of Sydney, NSW.
1982. Volume 4. Host and Venue - University of Western Australia, WA.
1981. Volume 3. Host and Venue - University of Queensland, QLD.
1980. Volume 2. Host and Venue - Australian National University, Canberra, ACT.
1979. Volume 1. Host and Venue - University of Tasmania, TAS.
1978. Volume 0. Host and Venue - University of New South Wales, NSW.
Conference Acronyms
ACDC    Australasian Computing Doctoral Consortium
ACE     Australasian Computing Education Conference
ACSC    Australasian Computer Science Conference
ACSW    Australasian Computer Science Week
ADC     Australasian Database Conference
AISC    Australasian Information Security Conference
APCCM   Asia-Pacific Conference on Conceptual Modelling
AUIC    Australasian User Interface Conference
AusPDC  Australasian Symposium on Parallel and Distributed Computing
AWC     Australasian Web Conference
CATS    Computing: Australasian Theory Symposium
HIKM    Australasian Workshop on Health Informatics and Knowledge Management
Note that various name changes have occurred, which have been indicated in the Conference Acronyms sections
in respective CRPIT volumes.
We wish to thank the following sponsors for their contribution towards this conference.
AUT University
www.aut.ac.nz
University of South Australia
www.unisa.edu.au
Proceedings of the Fourteenth Australasian User Interface Conference (AUIC2013), Adelaide, Australia
Contributed Papers
Abstract
People naturally externalize mental systems through
physical objects to leverage their spatial intelligence. The
advent of tangible user interfaces has allowed human
computer interaction to utilize these skills. However,
current systems must be written from scratch and designed for a specific purpose, meaning end users cannot extend or repurpose the system. This paper
presents Tangible Agile Mapping, our architecture to
address this problem by allowing tangible systems to be
defined ad-hoc. Our architecture addresses the tangible
ad-hoc definition of objects, properties and rules to
support tangible interactions. This paper also describes
Spatial Augmented Reality TAM as an implementation
of this architecture that utilizes a projector-camera setup
combined with gesture-based navigation to allow users to
create tangible systems from scratch. Results of a user
study show that the architecture and our implementation
are effective in allowing users to develop tangible
systems, even for users with little computing or tangible
experience.
Keywords: Tangible user interfaces, programming by demonstration, organic user interfaces, proxemic interactions, authoring by interaction.
Introduction
Related Work
Derived Challenges

Exploratory Interview

5 Architecture
5.1
5.1.1

[Table: interaction scenarios: isolated interactions (Scenario 1, Scenario 2) and overlapping interactions (Scenario 3, Scenario 4), covering type/group substitution, objects that enable properties, and common updates]

5.1.2 Define types/groups
5.1.3 Properties
5.1.4 Sequential Interactions
5.2 Implementation

6 System Overview
6.1
Figure 3: Introducing a new object: (a) Performing the introduction pose, (b) entering a name, (c) selecting a default colour and (d) confirming the selection.

Figure 4: Creating rules for an interaction: (a) Performing the new interaction pose, (b) selecting objects involved, (c) resolving group selection for substitution and (d) performing the changes as a result of the interaction.

7 Evaluation
As a result, our evaluation was designed to evaluate:
How easily can users grasp the concepts involved in
TAM to convert a scenario into the required structure?
How easily can users then communicate that structure
to the system?
This was evaluated by means of an exploratory user
study. We employ an experimental design similar to Scott
et al. (2005) to gain an understanding of how users
engage with and understand the system. The study
consisted of the participants first being seated at a desk,
watching a video that explained the system and
demonstrated a single, group-based interaction between
three objects to simulate the reaction between hydrogen and chlorine to form hydrogen chloride (two hydrogen
atoms were defined, and a group created so either
hydrogen could trigger the reaction). To ensure equal
7.1 Results

          Mean Groups
Task 1    1.00    0.19
Task 2    1.29    0.33
Task 3    1.46    2.38
Future Work
10 Conclusion
TAM enables the development of, and interaction with, novel TUIs without a development background. By providing an abstracted set of interactions, novice users can program rule-based interactions, utilizing both individual objects and group-based interactions, offering type definition and substitution. Our system allows users
11 Acknowledgements
The authors would like to thank Thuong Hoang and
Markus Broecker for proofreading the paper and the
reviewers for their feedback and ideas.
12 References
BORCHERS, J., RINGEL, M., TYLER, J. & FOX, A.
2002. Interactive Workspaces: A Framework for
Physical and Graphical User Interface
Prototyping. IEEE Wireless Communications, 9,
64-69.
DEY, A. K., HAMID, R., BECKMANN, C., LI, I. &
HSU, D. 2004. a CAPpella: programming by
demonstration of context-aware applications.
Proc. of the SIGCHI conference on Human
factors in computing systems. Vienna, Austria:
ACM.
FITZMAURICE, G. W. 1996. Graspable User
Interfaces. Doctor of Philosophy, University of
Toronto.
FITZMAURICE, G. W., ISHII, H. & BUXTON, W. A.
S. 1995. Bricks: laying the foundations for
graspable user interfaces. Proc. of the SIGCHI
conference on Human factors in computing
systems. Denver, Colorado, United States: ACM
Press/Addison-Wesley Publishing Co.
GREENBERG, S. & FITCHETT, C. 2001. Phidgets: easy
development of physical interfaces through
physical widgets. Proc. of the 14th annual ACM
symposium on User interface software and
technology. Orlando, Florida: ACM.
HACKER, W. 1994. Action regulation theory and
occupational psychology: Review of German
empirical research since 1987. German Journal
of Psychology, 18, 91-120.
HALBERT, D. C. 1984. Programming by example.
Doctoral Dissertation, University of California.
HOLMAN, D. & VERTEGAAL, R. 2008. Organic user
interfaces: designing computers in any way,
shape, or form. Communications of the ACM,
51, 48-55.
ISHII, H. & ULLMER, B. 1997. Tangible bits: towards
seamless interfaces between people, bits and
atoms. Proc. of the SIGCHI conference on
Human factors in computing systems. Atlanta,
Georgia, United States: ACM.
IZADI, S., KIM, D., HILLIGES, O., MOLYNEAUX, D.,
NEWCOMBE, R., KOHLI, P., SHOTTON, J.,
HODGES, S., FREEMAN, D., DAVISON, A. &
FITZGIBBON, A. 2011. KinectFusion: real-time 3D reconstruction and interaction using a
Abstract
We present vsInk, a plug-in that affords digital ink
annotation in the Visual Studio code editor. Annotations
can be added in the same window as the editor and
automatically reflow when the underlying code changes.
The plug-in uses recognisers built using machine learning
to improve the accuracy of the annotation's anchor. The user evaluation shows that the core functionality is sound.
Keywords: Digital ink, code annotation, Visual Studio.
Introduction
Related Work
3 Requirements
3.1
3.2
3.3
3.4
3.5 Navigation

4 Implementation

vsInk has been implemented using C#, WPF and the .NET 4 framework. It uses the Visual Studio 2010 SDK for integration with Visual Studio 2010. It consists of a single package that can be installed directly into the IDE. While it has been designed to be used on a Tablet PC, it can be used with a mouse on any Windows PC. Figure 2 shows the main elements in the user interface.

This section describes the five major features of vsInk: editor integration, grouping annotations, anchoring annotations, annotation adornments and navigation.

4.1 Editor Integration
Figure 6: The Visual Studio editor. The grey box shows the viewport region used by Visual Studio.
4.2 Grouping Annotations

[Figure: an annotation's bounding box and its 30px boundary region]

4.3 Anchoring Annotations
[Table 1: linker types (line horizontal, line vertical, line diagonal, circle, brace, arrow) with example strokes]
4.4 Annotation Adornments
Each annotation can have a number of associated adornments. These adornments are slightly different from the
Visual Studio adornments in two ways: they are
associated with an annotation rather than an adornment
layer and their visibility is controlled by vsInk. There are
two default adornments in vsInk: the boundary region
indicator and the anchor indicator. In addition vsInk
allows for third parties to write their own custom
adornments. An example of a custom adornment is
provided in the project; it displays the user name of the
person who added the annotation.
When an annotation is added a factory class for each
adornment is called to generate the adornments for the
new annotation. This process is called for both loading
annotations (e.g. when a document is opened) and for a
user adding a new annotation. Each adornment is then
added to a sub-layer of the ink canvas. The sub-layer is
needed to prevent the adornments from being selected
and directly modified by the user. Custom adornments
can be added to vsInk by adding a new factory class.
Adornments are positioned using a similar process to
ink strokes. If an annotation is hidden during a canvas
update all the associated adornments are hidden as well.
If the annotation is visible then each adornment for the
annotation is called to update its location. Adornments
typically update their position using the details from the
annotation (e.g. the bounding box or similar).
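The hide-and-reposition pass described above can be sketched in a few lines. This is only an illustrative Python model, not the actual vsInk C# implementation: the names (Annotation, Adornment, canvas_update) and the bounding-box-based placement are assumptions.

```python
class Adornment:
    """A visual decoration attached to one annotation (illustrative API)."""
    def __init__(self):
        self.visible = True
        self.position = (0, 0)

    def update_location(self, annotation):
        # Reposition using details from the annotation, e.g. its bounding box
        # (here: placed just to the right of the annotation).
        x, y, w, h = annotation.bounding_box
        self.position = (x + w, y)


class Annotation:
    def __init__(self, bounding_box):
        self.bounding_box = bounding_box  # (x, y, width, height)
        self.visible = True
        self.adornments = []


def canvas_update(annotations):
    """Mirror each annotation's visibility onto its adornments; visible
    adornments then recompute their own location."""
    for ann in annotations:
        for ad in ann.adornments:
            ad.visible = ann.visible
            if ann.visible:
                ad.update_location(ann)
```

A canvas update thus never moves adornments directly; each adornment derives its position from its annotation, which is what lets hidden annotations take their adornments with them.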
Table 1: Recognised linker types. The red cross indicates the location of the anchor.
the type of the first stroke. Both RCA and CodeAnnotator
used some simple heuristics for determining the type of
the linking stroke which could be a line or a circle (Chen
and Plimmer, 2007, Priest and Plimmer, 2006) but this
was too limiting, especially as new linker types are
needed.
To overcome this we used Rata.Gesture (Chang et al.,
2012) to recognise the stroke type. Rata.Gesture is a tool
that was developed at the University of Auckland for
generating ink recognisers. Rata works by extracting 115
features for each stroke and then training a model to
classify strokes. This model is then used in the recogniser
for classifying strokes.
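As a toy illustration of that pipeline (and only an illustration: Rata's actual 115 features and trained model are far richer), a stroke classifier can be built from a handful of geometric features and a nearest-centroid model:

```python
import math

def stroke_features(points):
    """A few illustrative per-stroke features; stand-ins for Rata's 115."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    endpoint_dist = math.dist((x0, y0), (xn, yn))
    # Closedness: near 0 for closed shapes (circles), near 1 for lines.
    closedness = endpoint_dist / path_len if path_len else 0.0
    angle = math.atan2(yn - y0, xn - x0)
    return [closedness, abs(math.cos(angle)), abs(math.sin(angle))]

def train_centroids(labelled_strokes):
    """Train a minimal nearest-centroid model from (label, points) pairs."""
    sums, counts = {}, {}
    for label, pts in labelled_strokes:
        f = stroke_features(pts)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(model, points):
    """Label a new stroke by its nearest centroid in feature space."""
    f = stroke_features(points)
    return min(model, key=lambda lab: math.dist(model[lab], f))
```

The shape of the real workflow is the same: extract a fixed-length feature vector per stroke, fit a model to labelled examples, then classify unseen strokes by that model.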
To generate the recogniser for vsInk an informal user
survey was performed to see what the most common
types of linking strokes would be. This produced the list
of strokes in Table 1. Ten users were then asked to
provide ten examples of each stroke, giving a total of 600
strokes to use in training. These strokes were manually
labelled and Rata used to generate the recogniser.
When a new annotation is started the recogniser is
used to classify the type of linker. Each linker type has a
specific anchor location (see Table 1) this location is
used to find the Line# for anchoring.
4.5 Navigating Annotations
There are two parts to navigation collapsed region support and a navigation outline. Collapsed region support
adds an icon to a sub-layer of the ink canvas whenever a
collapsed region contains annotations. The addition or
deletion of the icon is performed during the canvas
update process, which is triggered whenever a region is
changed. This ensures the icon is always up-to-date and
only displayed when there are annotations in a collapsed
region.
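The bookkeeping described above amounts to keeping the icon sub-layer in sync with the editor's regions on every canvas update. A minimal sketch, assuming hypothetical data shapes rather than the real vsInk types:

```python
def update_region_icons(regions, icons):
    """Synchronise the icon sub-layer with the editor's regions: an icon is
    shown exactly for collapsed regions that contain at least one annotation.
    `regions` maps region id -> (is_collapsed, annotation_count);
    `icons` is the set of region ids currently showing an icon."""
    for region_id, (is_collapsed, n_annotations) in regions.items():
        if is_collapsed and n_annotations > 0:
            icons.add(region_id)       # region collapsed over annotations
        else:
            icons.discard(region_id)   # expanded or empty: drop stale icon
    return icons
```

Because the sync runs on every region change, the icon state is recomputed rather than patched, which is what keeps it always up-to-date.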
5 Evaluation

5.1 Methodology
There were eight participants in the study (6 male, 2 female). Four were computer science graduate students, three full-time developers and one a computer science lecturer. All had some experience with Visual Studio, with most participants saying they use it frequently. Participants were evenly split between those who had used pen-based computing before and those who hadn't. All but one of the participants had prior experience reviewing program code. Two rounds of testing were performed: after the first round, the major flaws identified were fixed, and then the second round of testing was performed.
5.2 Results
After the first four subjects the results were reviewed and
a number of issues were identified. Before the second
round of testing changes were made in an attempt to fix
these issues. The main issue found was that strokes were being incorrectly added to existing annotations. During
the tests the opposite (strokes not being added to
annotations correctly) occurred rarely. Therefore the three
changes mentioned (see 4.2 above) were made to the
grouping process.
The other refinements to vsInk were as follows. Some of the participants complained that the lines were too small or the ink too thick. To fix this, the ink thickness was reduced and the line size increased slightly. Another common complaint was that the adornments obscured the code. This was fixed by making all adornments semi-transparent and removing unnecessary ones (e.g. the name of the annotator). Participants also mentioned that the ink navigator distorted the annotations too much, so the amount of distortion was limited to between 20% and 100% of the original size. Observations suggested that the navigation features were not obvious. When a participant selected an annotation in the navigator they did not know which annotation it matched on the document (especially when there were several similar annotations). To fix this, a flash was added to identify the selected annotation.
In addition to the issues mentioned above, there were
other issues noted that were not fixed due to time
constraints. These included: tall annotations disappearing
when the anchor point was out of the viewport, cut/paste
not including the annotations, and annotations not being
included in the undo history.
After the modifications the second set of participants
tested vsInk. We found most of the modifications had the
desired effect and vsInk was easier to use. However there
were still issues with the grouping of strokes into annotations. Using time to group strokes sometimes caused
[Figures: participant ratings for Task 1 and Task 2]
subjects found that the annotations obstructed the underlying code, making it harder to read. While most of the participants understood why vsInk grouped the strokes together, they thought the grouping routine was too inaccurate. In addition to improving the grouping, suggested improvements were being able to selectively hide annotations (maybe by colour), having some form of zoom for the code under the pen, and displaying the code in the navigator window.
Conclusions
References
Abstract
Informed decisions are based on the availability of
information and the ability of decision-makers to
manipulate this information. More often than not, the
decision-relevant information is subject to uncertainty
arising from different sources. Consequently, decisions
involve an undeniable amount of risk. An effective
visualisation tool to support informed decision-making
must enable users to not only distil information, but also
explore the uncertainty and risk involved in their
decisions. In this paper, we present VisIDM, an
information visualisation tool to support informed
decision-making (IDM) under uncertainty and risk. It
aims to portray information about the decision problem
and facilitate its analysis and exploration at different
levels of detail. It also aims to facilitate the integration of
uncertainty and risk into the decision-making process and
allow users to experiment with multiple what-if
scenarios. We evaluate the utility of VisIDM through a
qualitative user study. The results provide valuable
insights into the benefits and drawbacks of VisIDM for
assisting people to make informed decisions and raising
their awareness of uncertainty and risk involved in their
decisions.
Keywords: Information visualisation, Interaction design, Informed decision-making, Uncertainty, Risk.
Introduction
Related Work
3
3.1
http://maps.google.com
3.2
In addition to the aforementioned information, decision-makers need to be able to explore and compare
alternatives at different levels of detail. The presence of
uncertainty in the values of input variables implies that
there are many possible realisations (or values) for each
input variable. This gives rise to the presence of many
possible scenarios, where each scenario represents a
possible combination of all values of input variables, one
for each variable (Marco et al., 2008). In this situation,
the visualisation tool should allow the generation of all
possible scenarios. This requires facilities for enabling
decision-makers to provide their own estimates of the
values for each uncertain variable and its distribution. In
addition, it requires computational facilities for
propagating all uncertainties through models and criteria
used in decision-making. Once all uncertainties are
propagated through the models, the visualisation tool
should then provide decision-makers with a complete
picture of all generated scenarios and the distribution of
uncertainties and risks anticipated to exist in these
scenarios. At the same time, it should let decision-makers interact with the decision model, allowing experimentation with different possible what-if scenarios and exploration of the outcomes and risks
associated with alternatives under these scenarios. The
ability to analyse what-if scenarios is a key requirement
for developing understanding about the implications of
uncertainty, which in turn leads to making more informed
and justifiable decisions (French, 2003).
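The scenario-generation and propagation requirements above can be sketched in a few lines. This is an illustrative sketch only, not VisIDM's implementation; the samplers, the model, and all function names are hypothetical:

```python
import random

def sample_scenarios(uncertain_inputs, n_scenarios, seed=0):
    """Generate possible scenarios: each scenario picks one value for
    every uncertain input variable, drawn from the decision-maker's
    estimated distribution for that variable."""
    rng = random.Random(seed)
    return [
        {name: sampler(rng) for name, sampler in uncertain_inputs.items()}
        for _ in range(n_scenarios)
    ]

def propagate(model, scenarios):
    """Propagate every scenario through the decision model."""
    return [model(**s) for s in scenarios]

def what_if(model, scenarios, **fixed):
    """What-if analysis: pin some variables to chosen values and re-run
    the remaining uncertainty through the model."""
    return [model(**{**s, **fixed}) for s in scenarios]
```

Plotting the distribution of the propagated outcomes then gives the "complete picture" of generated scenarios described above, while what_if supports the interactive exploration.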
3.3
[Figure: the decision-maker specifies the input uncertainties and the risk criterion, which feed into the decision.]
Description of VisIDM
4.1
Figure 2: The Decision Bars (left) and the Risk Explorer (right).
To predict and analyse the profitability of an investment, a financial model for investment decision-making called Net Present Value (NPV) is commonly
used (Magni, 2009; Tziralis et al., 2009). The NPV model
is emphasised in many textbooks as a theoretically and
practically sound decision model (e.g. Copeland &
Weston, 1983; Koller et al., 2005). It represents the
difference between the present value of all cash inflows
(profits) and cash outflows (costs) over the life of the
investment, all discounted at a particular rate of return
(Magni, 2009). The purpose of NPV is basically to
estimate the extent to which the profits of an investment
exceed its costs. A positive NPV indicates that the
investment is profitable, while a negative NPV indicates
that the investment is making a loss. A basic version of
calculating NPV is given by Equation 1:

NPV = \sum_{t=1}^{n} \frac{CI_t - CO_t}{(1 + r)^t} - C_0    (1)

where
C_0 is the initial investment,
n is the total time of the investment,
r is the discount rate (the rate of return that could be earned on the investment),
CI_t is the cash inflow at time t, and
CO_t is the cash outflow at time t.
As shown in Equation 1, in its basic form, the NPV
model consists of five input variables. In practice, each of
these variables is subject to uncertainty because the
information available on their values is usually based on
predictions, and fluctuations may occur in the future.
Consequently, the investment decision can lead to many
possible outcomes (i.e. different values of NPV). Since
not all possible outcomes are equally desirable to the
decision-maker, the investment decision involves a
degree of risk. The risk is present because there is a
chance that the investment decision can lead to an
undesirable rather than a desirable outcome.
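A direct transcription of Equation 1 makes the model concrete. The function name and the list-based representation of cash flows are my own choices for illustration, not part of the paper:

```python
def npv(c0, n, r, inflows, outflows):
    """Net Present Value per Equation 1: the discounted difference
    between cash inflows and outflows over periods t = 1..n, less the
    initial investment c0."""
    total = -c0
    for t in range(1, n + 1):
        total += (inflows[t - 1] - outflows[t - 1]) / (1 + r) ** t
    return total
```

A positive result signals a profitable investment; sampling the five inputs and re-evaluating npv per scenario yields the distribution of outcomes discussed above.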
4.2
Decision Bars
4.3
Risk Explorer
4.3.1
Providing an Overview of the Uncertainty and Risk of Undesirable Outcomes
Figure 5: A screenshot of Risk Explorer after selecting alternatives 1 and 2 for further exploration and
comparison.
4.3.2
Figure 6: A screenshot of Risk Explorer after exploring alternatives 2 and 5 under an initial investment of $35,000 and a discount rate of 10%.
User Study
5.1
5.2
5.2.1
Decision-Making Processes
5.2.2
Acknowledgements
We would like to acknowledge all participants without
whom the study would not have been completed.
References
Richard Watson
Abstract
Management of the increasingly large collections of
files and other electronic artifacts held on desktop as
well as enterprise systems is becoming more difficult.
Organisation and searching using extensive metadata
is an emerging solution, but is predicated upon the
development of appropriate interfaces for metadata
management. In this paper we seek to advance the
state of the art by proposing a set of design principles
for metadata interfaces. We do this by first defining
the abstract operations required, then reviewing the
functionality and interfaces of current applications
with respect to these operations, before extending the
observed best practice to create a generic set of guidelines. We also present a novel direct manipulation interface for higher level metadata manipulation that
addresses shortcomings observed in the sampled software.
1
Introduction
Computer users of all kinds are storing an ever increasing number of files (Agrawal et al. 2007). The
usage ranges from the straightforward personal storage of generic media files to the specialised storage of
outcomes of scientific observations or simulations and
includes diverse and increasingly mandated archival
storage of corporate and government agency documents.
While the increasing aggregate size of stored files
presents significant challenges in storing the bitstreams (Rosenthal 2010), there are other important
and complex issues related to the growing number of
files, most prominently the attendant problem of (a)
organising and (b) locating individual files within a
file store. The traditional hierarchical file system is
no longer able to support either the kinds of organisation or the search strategies that users need (Seltzer &
Murphy 2009). Alternate, post-hierarchical file system architectures have been proposed (e.g. Ames et
al. 2006, Dekeyser et al. 2008, Gifford et al. 1991, Padioleau & Ridoux 2003, Rizzo 2004, Seltzer & Murphy 2009) whose logical organisation is based on a
rich collection of file metadata rather than the familiar nested directory structure.
This problem, how to organise and find growing numbers of electronic artifacts, extends beyond the desktop file system. A huge number of files are now
Copyright © 2013, Australian Computer Society, Inc. This paper appeared at the 14th Australasian User Interface Conference (AUIC 2013), Adelaide, Australia, January 2013. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 139, Ross T. Smith and Burkhard Wuensche, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
2. Some kinds of metadata are inherently subjective rather than objective; since the values for
such attributes depend purely on the user, there
is no software process that can obtain them.
An obvious example is the rating tag that is associated with music or image files. More generally (Sease & McDonald 2011), individual users
catalogue (i.e. attach metadata to) files in idiosyncratic ways that suit their own organisational and retrieval strategies.
3. Searches based on automatically extracted metadata (such as document keywords or a user's contextual access history) may well locate a single file or range of similar files, but only a well-organised set of manually assigned metadata is
likely to return logically-related collections of
files. The argument here is that an automatic
system would attempt to cluster files according
to extracted metadata; however, the number of
metadata attributes is relatively large, and values for many would be missing for various files.
Clustering is ineffective when the multidimensional space is sparsely populated, so this approach is unlikely to be able to retrieve collections without user input.
Consider as an example a freelance software engineer who works on several long-running projects
concurrently. A time-based search in a flat
document store is likely to return files that belong to more than one project. If the freelancer
only works from home, other searches based
on automatically extracted contextual metadata
(e.g. location, or audio being played while working (Hailpern et al. 2011)) are unlikely to be able
to cluster files exactly around projects. Again,
the user will need to supply the project details
not as directory names, but as metadata attribute values.
4. Finally, the simple fact that a large number of applications exist that allow users to modify metadata (together, perhaps, with the perceived popularity of those applications that manage to do
it well) is proof of a need for such systems.
Given our assertion of the importance of user-centric
metadata, and recognising that users may be reluctant to commit effort to metadata creation, we arrive
at the central issue addressed in this paper: how to
increase the likelihood that users will supply metadata? Our thesis is that (1) there is a clear need to
develop powerful and intuitive interfaces for actively
empowering users to capture metadata, and (2) very
few such interfaces currently exist.
Organisation In this paper we will first propose a
framework (Section 2) including definitions and then
in Section 3 proceed to assess a set of software titles
(representing the state-of-the-art in the area of metadata manipulation interface design) with respect to
the framework. Having identified a hiatus in the capabilities of assessed software, we then add to the
state-of-the-art by introducing a tightly-focused prototype in Section 4. Both the software assessment
and the prototype then lead to a number of guides or
principles in Section 5 for the design of user interfaces
for updating metadata.
Scope This paper deals with interface design issues
for systems that allow users to create and modify
metadata. The related issue of query interfaces will
not be considered in detail. Most of the systems examined are desktop applications that manage files on
a local file system. We also consider different target objects: email and cloud-based file systems. Although web-based systems are examined briefly we
note that, even with the advent of AJAX-ified interfaces, these are frequently less rich in terms of interface design. Mobile apps have not been considered, as
touch interfaces are arguably not (yet) optimized to
manipulate sets of objects, and screen size limitations
are a significant limiting factor.
Contributions The contributions made through
this paper include: (1) a proposed framework for
assessing metadata manipulation interfaces; (2) assessment of a number of relevant software titles; (3)
the presentation of a partial prototype that addresses
identified shortcomings; and (4) the enumeration of a
concrete set of guiding principles for UI design in this
context.
2
Framework
2.1
Metadata
T ::= [(Attr, V)]
V ::= S | [S] | T
S ::= string | int | ... | E
Update language
Example: Consider a metadata store that is represented by the nested relation R(Id, Name, Extension,
{Tag}, {Address(No, Street, City)}) containing the
following files (attribute types can be inferred from
the schema signature and the example rows): {(1,
P9104060akt, JPG, {John, Home, Food}, {(17, Main
St, Seattle)}), (2, IMG1384, JPG, {Ann, Work, office, Meeting}, DC), (3, notes, DOC, {Letter, Support, Sam}, {(1, Baker St, Brisbane)})}.
The following statement updates the metadata
for one file:
CHANGE SET Name:='ann at work',
Tags:=Tags+{'Client','lunch'} - {'office'}
WHERE 'Ann' IN Tags AND Name='IMG1384'
The following statement updates the complex Address attribute:
CHANGE SET Addresses.No:=7
WHERE 'Seattle' IN Addresses.City
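The semantics of these CHANGE statements can be mimicked with a small in-memory interpreter. This is a sketch of the semantics only, not the authors' update language implementation; the dictionary-based store and function names are mine:

```python
def change(store, updates, where):
    """CHANGE SET ... WHERE ...: apply each update function to the named
    attribute of every record satisfying the condition."""
    for rec in store:
        if where(rec):
            for attr, fn in updates.items():
                rec[attr] = fn(rec[attr])

# The first example statement, re-expressed against a one-record store:
store = [{"Id": 2, "Name": "IMG1384",
          "Tags": {"Ann", "Work", "office", "Meeting"}}]
change(store,
       updates={"Name": lambda _: "ann at work",
                "Tags": lambda tags: (tags | {"Client", "lunch"}) - {"office"}},
       where=lambda rec: "Ann" in rec["Tags"] and rec["Name"] == "IMG1384")
```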
2.2
To be able to present a language and operations to update metadata for a set of objects, we propose a simple logical data model for a metadata store, based on
the definition of user-centric metadata given above.
(Table: update operations classified by number of attributes (1 or >1), number of files (1 or >1), and kind of value: single value, many values, or complex value.)
GUI operations
in a string-valued attribute. Instead, a typical interface would present a character-at-a-time editing interface to the underlying API. A typical constraint may be that a list of values represents a set, prohibiting duplicate values. Users may also be constrained
in choice of value, picking from a predetermined set
of appropriate values.
More examples of user interface functionality will
be seen in Section 3 where we examine some example
metadata manipulation interfaces.
Note that in database parlance, we have so far only
described instance data manipulation operations. An
advanced metadata management system would also
allow schema operations, such as defining new attributes, or new enumerated data types. In the Introduction we referred to this as user-defined metadata.
We believe that these schema operations are necessary to build a truly capable system; such operations are orthogonal to the instance operations that are the focus of this paper.
3
Evaluating interfaces
In this section we report the results of a critical evaluation of a number of applications that display and
manipulate metadata. Based on this evaluation, we
propose guidelines for developers of advanced interfaces. Criteria for selection were that applications were readily available (typically open source or bundled with a major operating system), representative
of the application domain, and that they collectively
handled a broad range of file types. We have summarised the results here and intend to publish a more
detailed analysis in the future. With a single exception (gmail) these are file-handling applications; because of this we often use the word "file" instead of the more generic "object" when referring to the artifact being described.
3.1
Applications
Thirteen desktop and three web applications were selected. Some applications are designed to manage
metadata for specific file formats (image, video, audio, bibliography) while others are not format specific. Without claiming to be exhaustive, we have chosen a set of applications that we believe to be among
the best representatives of commercial and freeware
software across the range of domains.
Table 1 lists the applications selected by application domain and by platform. Some applications were available on both Windows and Linux platforms (Picasa, Tabbles, Clementine); the table shows the version tested. All programs were tested on the authors' computers except Adobe Bridge; we relied mainly on Adobe instructional material to evaluate this product.
3.2
Table 1: Applications

Type     Application        Ver    Code  Platform
Image    ACDSee             12     AC    Windows
Image    Picasa             3.9    Pic   Windows
Image    Adobe Bridge              Br    N/A
Image    iTag               476    Tag   Windows
Image    Shotwell           0.6.1  Sw    Linux
Image    flickr.com                flkr  Web
Video    Personal VideoDB          Vdb   Windows
Video    Usher              1.1.4  Us    MacOSX
Music    iTunes             10     iTu   MacOSX
Music    MP3tag             2.49   Mp3   Windows
Music    Clementine         1.0    Cl    Linux
Biblio   Papers             2.1    Pap   MacOSX
Mail     gmail.com                 gml   Web
Generic  Explorer           7      Exp   Windows
Generic  Tabbles            2.0.6  Tab   Windows
Generic  box.com                   box   Web
Range of operations
Selecting files/objects
in Section 2.3. This is unsurprising given the expressive power of the condition expression; however, there
is scope for applications to use a Query-By-Example
(QBE)-type approach (Zloof 1975) to increase the selection capabilities for users. We will return to this
issue in Section 4.
3.3.2
Assessment
Value management
Advanced features
3.7
Discussion
Updatable views
Prototype
To illustrate the proposal we have made in this section, we briefly present a prototype interface that we
developed in the context of a metadata-based file system
(Dekeyser et al. 2008). Note that the implementation
did not focus on the other issues identified in Section 3; it is purely meant to demonstrate the notion
of saveable updatable views as a clean alternative to
templates.
The prototype was developed on top of a technology preview of Microsoft's WinFS. The main feature
is a file browser application which allows (1) the listing of objects in the file store, (2) a simplified mechanism to capture rich metadata, and (3) the creation
of virtual folders (view definitions).
Figure 3 illustrates the use of virtual folders as a
means to capture metadata through a drag and drop
operation. The screenshots of the prototype show
that initially four Photo objects were selected from
the "Photos" folder and subsequently dragged into the virtual folder "Photos with Comments Family Holiday". The second screen then depicts the content of the latter, and shows that the four objects
have obtained the necessary metadata to belong in
the virtual folder.
Dekeyser (2005) first proposed this novel drag and
drop approach to metadata manipulation and the
technique has been independently implemented (Kandel et al. 2008) in a system that allows field biologists
to annotate large collections of photographs. While
targeted at a particular problem rather than a generic
file system, their system established through extensive user experience the viability of the concept.
The Query-by-Example interface is illustrated in
Figure 4. It is possible to create a propositional calculus style query that is a set of relational expressions
between attributes and values that are joined by conjunctive or disjunctive logical operators. A new query
(view) is initially anonymous ("Untitled") but can be
assigned a meaningful name.
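Such a propositional-style query over relational expressions is straightforward to represent and evaluate. A sketch under my own encoding; the prototype's internal representation is not described in this detail:

```python
import operator

# Relational operators usable in a leaf expression.
OPS = {
    "=": operator.eq,
    "!=": operator.ne,
    "<": operator.lt,
    ">": operator.gt,
    "contains": operator.contains,  # set/list-valued attribute contains value
}

def evaluate(query, record):
    """Evaluate a query tree against one record's metadata.

    A query is either a node ("and" | "or", [subqueries]) joining
    subqueries conjunctively or disjunctively, or a leaf
    (attr, op, value) comparing an attribute to a value."""
    if query[0] == "and":
        return all(evaluate(q, record) for q in query[1])
    if query[0] == "or":
        return any(evaluate(q, record) for q in query[1])
    attr, op, value = query
    return OPS[op](record[attr], value)
```

A saved view is then just a named query; the virtual folder's membership is the set of records for which evaluate returns True.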
Design principles
Minimise work
Figure 3: (a) Dragging photos into the virtual folder "Photos with comment Family Holiday"; (b) result after the drag operation, showing that metadata has been updated to make the photos appear in this virtual folder.
5.3
Application: Support typed attributes, and particularly user enumerations rather than a string type.
Adopting a typed metadata system, similar to the
definition in Section 2.1, offers significant advantages.
Typing of attributes assists in display and interpretation (e.g. sort order, non-textual displays) of values, and enables provision of appropriate aggregation
functions for each type. It also facilitates input validation, and other non-UI features such as specialised
storage index construction. Typing does not necessarily require a cumbersome "declare an attribute" modal window, as types can be inferred from user actions and a sophisticated interface could provide hints about expected types.
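One way to infer types from user actions, as suggested above. The classification rules and the enumeration heuristic are illustrative assumptions, not a prescription:

```python
def infer_type(values):
    """Guess an attribute type from string values the user has entered,
    avoiding an explicit declare-an-attribute dialog."""
    def classify(v):
        try:
            int(v)
            return "int"
        except ValueError:
            pass
        try:
            float(v)
            return "float"
        except ValueError:
            return "string"

    kinds = {classify(v) for v in values}
    if kinds == {"int"}:
        return "int"
    if kinds <= {"int", "float"}:
        return "float"
    # Heuristic: few distinct strings reused across many entries
    # suggests a user enumeration rather than free text.
    if len(set(values)) <= max(3, len(values) // 3):
        return "enum"
    return "string"
```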
Application: Provide an operation to change the
representation of a value.
Values may need to be renamed to better reflect
meaning. Value renaming is a global operation that
can affect attribute values of many files. Normally
renaming to an existing name would cause an error,
but it is useful to identify value merge as a special case
of rename. This is a shorthand for "set attribute value to new for all files with attribute value old" followed by deletion of the old value.
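A sketch of rename with the merge special case, for set-valued attributes (the store layout and function name are hypothetical, not from the paper):

```python
def rename_value(store, attr, old, new):
    """Globally rename value `old` to `new` in set-valued attribute
    `attr`. When `new` already exists somewhere, this degenerates to a
    merge: files tagged `old` gain `new`, and `old` disappears."""
    for rec in store:
        values = rec.get(attr, set())
        if old in values:
            values.discard(old)
            values.add(new)

# Merging tag "hol" into the pre-existing tag "holiday":
store = [{"Tags": {"hol", "beach"}}, {"Tags": {"holiday"}}, {"Tags": {"work"}}]
rename_value(store, "Tags", "hol", "holiday")
```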
Application: Provide an operation to delete a value from an enumerated attribute type.
If the value is currently associated with a file attribute then confirmation should be sought before
proceeding.
5.4
Conclusions
of the 23rd IEEE / 14th NASA Goddard Conference on Mass Storage Systems and Technologies.
Gentner, D. & Nielsen, J. (1996), The anti-Mac interface, Commun. ACM 39, 70-82.
Gyssens, M. & van Gucht, D. (1988), The powerset algebra as a result of adding programming constructs to the nested relational algebra, in Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, SIGMOD '88, ACM, New York, NY, USA, pp. 225-232.
Hailpern, J., Jitkoff, N., Warr, A., Karahalios, K., Sesek, R. & Shkrob, N. (2011), YouPivot: improving recall with contextual search, in Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, CHI '11, ACM, New York, NY, USA, pp. 1521-1530.
Kandel, S., Paepcke, A., Theobald, M., Garcia-Molina, H. & Abelson, E. (2008), PhotoSpread: a spreadsheet for managing photos, in Proceedings of the Twenty-sixth Annual SIGCHI Conference on Human Factors in Computing Systems, CHI '08, ACM, New York, NY, USA, pp. 1749-1758.
Karger, D. R. & Quan, D. (2004), Haystack: a user interface for creating, browsing, and organizing arbitrary semistructured information, in CHI '04 Extended Abstracts on Human Factors in Computing Systems, CHI EA '04, ACM, New York, NY, USA, pp. 777-778.
Korth, H. & Roth, M. (1987), Query languages for nested relational databases, Technical Report TR-87-45, Department of Computer Science, The University of Texas at Austin.
Merrett, T. H. (2005), A nested relation implementation for semistructured data, Technical report, McGill University.
Padioleau, Y. & Ridoux, O. (2003), A logic file system, in Proceedings of the USENIX 2003 Annual Technical Conference, General Track, pp. 99-112.
Rizzo, T. (2004), WinFS 101: Introducing the New Windows File System, http://msdn.microsoft.com/en-US/library/aa480687.aspx.
Sease, R. & McDonald, D. W. (2011), The organization of home media, ACM Trans. Comput.-Hum. Interact. 18, 9:1-9:20.
Seltzer, M. & Murphy, N. (2009), Hierarchical file systems are dead, in Proceedings of the 12th Conference on Hot Topics in Operating Systems, HotOS'09, USENIX Association, Berkeley, CA, USA.
Soules, C. A. N. & Ganger, G. R. (2003), Why can't I find my files? New methods for automating attribute assignment, in Proceedings of the Ninth Workshop on Hot Topics in Operating Systems, USENIX Association.
Soules, C. A. N. & Ganger, G. R. (2005), Connections: using context to enhance file search, in Proceedings of the Twentieth ACM Symposium on Operating Systems Principles, SOSP '05, ACM, New York, NY, USA, pp. 119-132.
Zloof, M. M. (1975), Query-by-example: the invocation and definition of tables and forms, in Proceedings of the 1st International Conference on Very Large Data Bases, VLDB '75, ACM, New York, NY, USA, pp. 1-24.
Abstract
This paper introduces research into the presence of
temporal information in email that relates to time
obligations, such as deadlines, events and tasks. A user
study was undertaken which involved a survey,
observations and interviews to understand current user
strategies for temporal information management and
awareness generation in email. The study also focused on
current difficulties faced in temporal information
organisation. The results are divided across trends
identified in use of the inbox, calendar, tasks list and
projects as well as general temporal information
organisation difficulties. Current problematic conventions
and opportunities for future integration are discussed, and strong support for careful visual representation of temporal information is established.
Keywords: Time, Temporal information, email, User
Interface, Information management, Awareness.
Introduction
2
2.1
Related Works
Understanding Email Use
2.2
2.3
2.4
Methodology
3.1
Survey
3.2
Observations
3.3
Interviews
Results
4.1
4.1.1
The Inbox
Style of Inbox: Breadth or Depth of Feature Visibility
as the best email application they had used (by seven out
of twelve interview participants, with 43% of survey
participants using the client). One implication derived
from this distinction was in regards to feature visibility.
The feature knowledge participants displayed during observations of Gmail use originated from the fact that Gmail's smaller, but more focused, feature set was highly visible (e.g. search, tagging, chat and priority inbox, all
accessible from the inbox). By comparison, Outlook's more robust feature set, including sophisticated scheduling, filtering and task management, had to be found through menus, tabs, dialogue boxes and screens that remove users from the context of the inbox. When
asked how they recall project emails during observations,
Gmail users were usually one click away from
performing a sort or search. Interestingly, one of Gmail's few discreet features, the date-recognition pane that appears circumstantially in the message view, was rarely noticed by observation participants (three out of eleven Gmail users noticed the feature and only one had used it).
4.1.2
4.2
4.2.1
4.3
4.3.1
The Calendar
All-or-Nothing Calendar Use
4.3.2
4.3.3
Calendars: rigidity of continued practice
4.3.4
4.4
4.4.1
Projects
Big Association For a Little-known
Feature
4.5
4.5.1
4.5.2
4.5.3
Discussion
Conclusion
References
Abstract
The past decade has seen healthcare costs rising faster
than government expenditure in most developed
countries. Various telehealth solutions have been
proposed to make healthcare services more efficient and
cost-effective. However, existing telehealth systems are
focused on treating diseases instead of preventing them,
suffer from high initial costs, lack extensibility, and do
not address the social and psychological needs of
patients. To address these shortcomings, we have
employed a user-centred approach and leveraged Web 2.0
technologies to develop Healthcare4Life (HC4L), an
online telehealth system targeted at seniors. In this paper,
we report the results of a 6-week user study involving 43
seniors aged 60 and above. The results indicate that
seniors welcome the opportunity of using online tools for
managing their health, and that they are able to use such
tools effectively. Functionalities should be tailored
towards individual needs (health conditions). Users have
strong opinions about the type of information they would
like to submit and share. Social networking
functionalities are desired, but should have a clear
purpose such as social games or exchanging information,
rather than broadcasting emotions and opinions. The
study suggests that the system positively changes the
attitude of users towards their health management, i.e.
users realise that their health is not controlled by health professionals, but that they have the power to positively affect their well-being.
Keywords: Telehealth, senior citizens, perceived ease-of-use, behavioural change, Web 2.0.
Introduction
2
2.1
2.2
Section: Description

Activities: To share information about one's activities with the HC4L applications, and to view and comment on the activities of HC4L friends (allowing users to motivate friends with positive comments).
Health Apps: To access health applications added by third-party developers. Patients can add applications from the applications directory and remove them from their profile.
Profile: To enable patients to create an online health profile, which will enable other patients of similar interest or disease to locate them in the system. It also presents a summary of recent health applications used by the user.
Mail: To send mails to friends and other members in the HC4L network.
Friends: To access friends' profile pages, find and add new friends, and invite others to join HC4L.
Settings: To change password and profile privacy settings, and to delete the user account.
3
3.1
Methodology
Procedure
Assessment milestone      Content of questionnaire                         Completed (n)
Initial Meeting           Demographics, MHLC                               43
End of Week 3             MHLC, IMI, SUS                                   24
End of Week 6             Additional Likert scale and open-ended items     21
3.2
Instrumentation
4
4.1
Results
Socio-demographic Characteristics
4.2
Number of Logins
Interest/Enjoyment
1 I enjoyed using the system very much.
2 I thought the system was boring. (R)
3 I would describe the system as very interesting.
Perceived Competence
1 I think I am pretty good at using the system.
2 After working with the system for a while, I felt pretty
competent.
3 I couldn't do very well with the system. (R)
Effort/Importance
1 I put a lot of effort into learning how to use the system.
2 It was important to me to learn how to use the system
well.
3 I didn't put much energy into using the system. (R)
Pressure/Tension
1 I did not feel nervous at all while using the system. (R)
2 I felt very tense while using the system.
3 I was anxious while interacting with the system.
Value/Usefulness
1 I think that the system is useful for managing my health
from home.
2 I think it is important to use the system because it can
help me to become more involved with my healthcare.
3 I would be willing to use the system again because it has
some value to me.
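Items marked (R) above are reverse-scored before a subscale mean is computed. A sketch of this standard scoring step; the 7-point scale width is an assumption, as the paper does not state it at this point:

```python
def subscale_score(responses, reversed_items, scale_max=7):
    """Mean IMI subscale score. `responses` lists the answers to the
    subscale's items in order (each in 1..scale_max); item positions
    named in `reversed_items` (1-based) are reverse-scored as
    (scale_max + 1) - v before averaging."""
    adjusted = [
        (scale_max + 1 - v) if i in reversed_items else v
        for i, v in enumerate(responses, start=1)
    ]
    return sum(adjusted) / len(adjusted)
```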
the system, and felt that the system has some value or
utility for them. The pressure/tension subscale obtained a
low score indicating that the participants did not
experience stress while using the system. There are
significant differences between age groups for the scores
for perceived competence and value/usefulness. Seniors
of age range 60-69 consider themselves more competent
and find the system more valuable than older seniors.
Subscale              All (n = 24)   Age 60-69 (n = 12)   Age 70-85 (n = 12)
Interest/Enjoyment    4.40 ± 1.68    4.42 ± 1.73          4.39 ± 1.70
Perceived Competence  4.39 ± 1.78    4.89 ± 1.52          3.89 ± 1.94
Effort/Importance     4.11 ± 1.58    4.11 ± 1.57          4.11 ± 1.56
Pressure/Tension      2.61 ± 1.56    2.67 ± 1.45          2.56 ± 1.69
Value/Usefulness      4.25 ± 1.81    4.53 ± 1.83          3.97 ± 1.75
4.3
Change in Attitude
(Table: change in attitude scores; row labels not recoverable from the extraction.)
M:     .04      -.29      -.10
SD:    1.04     1.27      1.23
Range: -4 to 2  -10 to 6  -6 to 5
Positive Responses
"I like the idea of it."
"It is easy to use."
"The health applications are a great help to keep track of one's health."
Negative Responses
"Sorting out calories values for foods seems a lot of trouble" (Calorie Calculator).
"I'm not so keen on the social Facebook-like aspects of the system."
"Limited applications."
4.4
Motivation
Discussion
No. Statement (n, M, SD, % Agree*)
1. HC4L encourages me to be better aware of my health. (n = 15, M = 4.27, SD = 1.44, 80%)
2. (statement not recoverable from the extraction) (n = 15, M = 3.93, SD = 1.28, 80%)
3. (statement not recoverable from the extraction) (n = 18, M = 4.17, SD = 1.47, 72%)
4. A system like HC4L that provides access to a variety of health applications will reduce the need to use different websites for managing health. (n = 18, M = 3.89, SD = 1.78, 72%)
5. (statement not recoverable from the extraction) (n = 17, M = 3.82, SD = 1.67, 65%)
6. HC4L has the potential to help seniors to deal with social isolation. (n = 18, M = 3.94, SD = 1.35, 61%)
7. (statement not recoverable from the extraction) (n = 18, M = 3.56, SD = 1.69, 56%)
8. (statement not recoverable from the extraction) (n = 16, M = 3.06, SD = 1.57, 56%)
9. HC4L allows me to get in touch with other patients with a similar disease or health problem. (n = 15, M = 3.6, SD = 1.45, 53%)
10. The social features of HC4L (e.g. making friends, sharing activity updates with each other, playing social games, etc.) motivated me to use the system. (n = 15, M = 2.6, SD = 1.45, 33%)
11. (statement not recoverable from the extraction) (n = 13, M = 2.54, SD = 1.76, 31%)
*Percent Agree (%) = Strongly Agree, Moderately Agree & Slightly Agree responses combined
Limitations
Conclusion
Acknowledgements
References
stewart.vonitzstein@unisa.edu.au
Abstract
We describe new techniques for constraint-driven design using spatial augmented reality (SAR), using projectors to animate a physical prop. The goal is to bring the designer into the visual working space, interacting directly with a dynamic design, allowing for intuitive interactions while gaining access to affordances through the use of physical objects. We address the current industrial design process, identifying the areas we intend to improve with the use of SAR. To corroborate our hypothesis, we have created a prototype system called SARventor. Within this paper, we describe the constraint theory we have applied, the interaction techniques devised to help illustrate our ideas and goals, and finally the combination of all input and output tasks provided by SARventor.
To validate the new techniques, an evaluation of the prototype system was conducted. The results of this evaluation indicate promise for a system allowing a dynamic design solution within SAR. Design experts see potential in leveraging SAR to assist the collaborative process during industrial design sessions, offering a high-fidelity, transparent application that presents enhanced insight into critical design decisions to the project's stakeholders. Through the rich availability of affordances in SAR, designers and stakeholders have the opportunity to see first-hand the effects of a proposed design while considering both ergonomic and safety requirements.
Keywords: Industrial Design Process, Spatial Augmented
Reality, Tangible User Interface.
Introduction
Related Work
3.1
Constraint Logic
(1)
To illustrate our design solution, we also allow the distances between elements to be changed. We have implemented this on an 'as is' basis. We initially calculate the current direction between elements A and B, as seen in equation 2.
(2)
This direction is then used within equation 3, along
with the designer's input distance, to provide a new point
the required distance away from point A.
(3)
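The two steps above can be sketched as follows (a minimal sketch; the function name and list-based point representation are ours, not SARventor's code):

```python
import math

def apply_distance_constraint(a, b, distance):
    """Place a point `distance` units from A along the current A->B
    direction: equation (2) computes the unit direction, equation (3)
    offsets A by the designer's input distance."""
    d = [bi - ai for ai, bi in zip(a, b)]                # vector A -> B
    length = math.sqrt(sum(c * c for c in d))
    d = [c / length for c in d]                          # equation (2)
    return [ai + distance * di for ai, di in zip(a, d)]  # equation (3)

print(apply_distance_constraint([0, 0, 0], [3, 0, 0], 5.0))  # [5.0, 0.0, 0.0]
```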
The parallel constraint is described in Figure 2. The
first row in the hierarchical table allows parallelisation on
an arbitrary axis, rather than constrained to either X, Y or
Z planes. The parallel tool will always conform the
[Figure 2: hierarchical table of plane combinations (Arbitrary Plane; XY, YZ, XZ)]
(4)
(5)
(6)
(7)
To provide the same functionality to the designer as provided with the distance constraint, a change constraint is also provided. Our implementation uses a rotation matrix to allow the inner angle between projection elements to be changed. The matrix, as seen in (4), is 3x3 and uses values determined by the angle input by the user: the chosen angle is converted from degrees to radians and used to produce both c (the cosine of the angle) and s (the sine of the angle).
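The matrix in (4) can be sketched as follows; we assume rotation about the Z axis for illustration, since the paper does not state the axis, with c and s as defined above:

```python
import math

def rotation_z(angle_deg):
    """3x3 rotation matrix as in (4): the input angle is converted from
    degrees to radians, then c = cos(angle) and s = sin(angle)."""
    theta = math.radians(angle_deg)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

m = rotation_z(90.0)  # maps the X axis onto the Y axis
```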
3.2
Scene Logic
3.3
Tangible Interaction
[Table: mapping between digital modes and physical markers; recoverable entries include Mode Marker (in toolbox), Action Tool (unselected), Distance Constraint (Distance Mode Marker contacts Square Mode Marker) and Action Tool (selected), with marker shapes Circle and Square]
4
Expert Analysis
4.1
Results
References
Haoyang Feng
Robert Amor2
Burkhard C. Wünsche3
Abstract
Traditional music education places a large emphasis
on individual practice. Studies have shown that individual practice is frequently not very productive due
to limited feedback and students lacking interest and
motivation. In this paper we explore the use of augmented reality to create an immersive experience that improves the learning efficiency of beginner piano students. The objective is to stimulate the development of notation literacy and to create motivation by presenting as a game a task often perceived as a chore. This is done by identifying successful concepts from existing systems and merging them into a
new system designed to be used with a head mounted
display. The student is able to visually monitor their
practice and have fun while doing so. An informal
user study indicates that the system initially puts
some pressure on users, but that participants find it
helpful and believe that it improves learning.
Keywords: music education, augmented reality, cognitive overlap, human-computer interaction
1
Introduction
Music is an important part of virtually every culture and society. Musical traditions have been taught
and passed down through generations. Traditionally,
Western culture has placed a large emphasis on music education. For example, the New Zealand Curriculum (New Zealand Ministry of Education 2007)
defines music as a fundamental form of expression
and states that music along with all other forms of
art help stimulate creativity.
Traditional music education focuses on individual
practice assisted by an instructor. Due to time and
financial constraints most students only have one lesson per week (Percival et al. 2007). For beginner
students, this lesson usually lasts half an hour, and
the majority of time spent with the instrument is
without any supervision from an instructor. Sanchez
et al. (1990) note that during these unsupervised
practice times students may play wrong notes, wrong
rhythms, or simply forget the instructor's comments
Copyright (c) 2013, Australian Computer Society, Inc. This paper appeared at the 14th Australasian User Interface Conference (AUIC 2013), Adelaide, Australia, January-February 2013. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 139, Ross Smith and Burkhard C. Wünsche, Ed. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
2
Related Work
A review of the literature revealed a number of interesting systems for computer-based music education.
Systems for piano teaching include Piano Tutor (Dannenberg et al. 1990), pianoFORTE (Smoliar et al.
1995), the AR Piano Tutor (Barakonyi & Schmalstieg 2005), and Piano AR (Huang 2011). Several
applications for teaching other instruments have been
developed (Cakmakci et al. 2003, Motokawa & Saito
2006). We will review the Digital Violin Tutor (Yin
et al. 2005) in more detail due to its interesting use
of VR and visualisation concepts for creating a cognitive overlap between hand/finger motions and the
resulting notes.
The Piano Tutor was developed by Dannenberg
et al. (1990) in collaboration with two music teachers. The application uses a standard MIDI interface
to connect a piano (electronic or otherwise) to the
computer in order to obtain the performance data.
MIDI was chosen because it transfers a wealth of performance related information including the velocity at
which a note is played (which can be used to gauge
dynamics) and even information about how pedals
are used. An expert system was developed to provide feedback on the user's performance. Instructions
and scores are displayed on a computer screen placed
in front of the user. User performance is primarily
graded according to accuracy in pitch, timing and dynamics. Instead of presenting any errors directly to
the user, the expert system determines the most significant errors and guides the user through mistakes
one by one.
Smoliar et al. (1995) developed pianoFORTE,
which focuses on teaching the interpretation of music
rather than the basic skills. The authors note that
music is neither the notes on a printed page nor the
motor skills required for the proper technical execution. Rather, because music is an art form, there
is an emotional aspect that computers cannot teach
or analyse. The system introduces more advanced
analysis functionalities, such as the accuracy of articulation and synchronisation of chords. Articulation describes how individual notes are to be played.
For example, staccato indicates a note that is separate from neighbouring notes while legato indicates
notes that are smoothly transitioned between with
no silence between them. Synchronisation refers to
whether notes in a chord are played simultaneously
and whether notes of equal length are played evenly.
These characteristics form the basis of advanced musical performance abilities. In terms of utilised technologies, pianoFORTE uses a hardware set-up similar to that of Piano Tutor.
The AR Piano Tutor by Barakonyi & Schmalstieg (2005) is based on a fishtank AR setup
(PC+monitor+webcam), where the physical MIDI
keyboard is tracked with the help of a single optical
marker. This puts limitations on the permissible size
of the keyboard, since for large pianos the user's view
might not contain the marker. The application uses
a MIDI interface to capture the order and the timing of the piano key presses. The AR interface gives
instant visual feedback over the real keyboard, e.g.,
the note corresponding to a pressed key or wrongly
pressed or missed keys. Vice versa, the keys corresponding to a chord can be highlighted before playing the chord, thus creating a mental connection between sounds and keys.
A more recent system presented by Huang (2011)
focuses on improving the hardware set-up of an AR
piano teaching system, by employing fast and accurate markerless tracking. The main innovation with
regard to the visual interface is the use of virtual fingers,
represented by simple cylinders, to indicate the hand
position and keys to be played.
Because MIDI was created for use with equipment
with a rather flexible form of input (such as pianos,
synthesisers and computers), a purely analogue instrument such as the violin cannot use MIDI to interface with a computer. The Digital Violin Tutor (Yin
et al. 2005) contains a transcriber module capable
of converting the analogue music signal to individual
notes. Feedback is generated by comparing the student's transcribed performance to either a score or the teacher's transcribed performance. The software
provides an extensive array of visualisations: An animation of the fingerboard shows a student how to
position their fingers to produce the desired notes,
and a 3D animated character is provided to stimulate
interest and motivation in students.
3
Requirements Analysis
3.1
Target Audience
Similar to the Piano Tutor, we target beginner students, with the goal of teaching notation literacy and
basic skills. This is arguably the largest user group,
and is likely to benefit most from an affordable and
fun-to-use system.
3.2
Instrument choice
3.3
Feedback
4
Design
4.1
Physical Setup
Based on the requirements, the physical setup comprises one electronic keyboard, one head mounted display with camera, and one computer for processing.
The user wears the head mounted display and sits in
front of the keyboard. The keyboard connects to the
computer using a MIDI interface. The head mounted
display connects to the computer using a USB interface. The head mounted display we use for this
project is a Trivisio ARvision-3D HMD1. These are
video see-through displays in that the displays are
not optically transparent. The video captured by the
cameras in front of the device must be projected onto
the display to create the augmented reality effect.
The keyboard we use for this project is a generic electronic keyboard with MIDI out. Figure 1 illustrates
the interactions between these hardware components.
4.2
User Interface
As explained in the requirements analysis, the representation of notes in the system must visually indicate which key each written note corresponds to.
We drew inspiration from music and rhythm games
and Karaoke videos, where text and music are synchronised using visual cues. In our system each note
is represented as a line above the corresponding key,
where the length of the line represents the duration
of the note. The notes approach the keys in the AR
view in a steady tempo. When the note reaches the
4.3.1
Feature Detection
The feature detection step can be performed by directly analysing the camera image using computer
vision techniques. An alternative solution is to use
fiducial markers and to define features within a coordinate system defined by the markers. Feature detection using markers is easier to implement and usually more stable, but often less precise and requires
some user effort for setting up the system (placing
markers, calibration). In our application a markerless
solution is particularly problematic, since the camera view only shows a section of the keyboard, which
makes it impossible to distinguish between keys in different octaves. A unique identification of keys would
either require global information (e.g., from the background) or initialisation using a unique position (e.g.,
the boundary of the keyboard) followed by continuous tracking. We hence chose a marker-based solution
based on the ARToolkit software. The software uses
markers with a big black border, which can be easily identified in the camera view and hence can be
scaled to a sufficiently small size. NyARToolkit is capable of detecting the position and orientation (otherwise known as the pose) of each marker and returns
a homogeneous 3D transformation matrix required to
translate and rotate an object in 3D space so that it
is directly on top of the detected marker. Because
this matrix is a standard mathematical notation, it
can be used directly in OpenTK.
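Applying such a homogeneous transform to model geometry amounts to one matrix-vector product per vertex; a sketch (the nested-list matrix layout and the function name are illustrative, not NyARToolkit's or OpenTK's API):

```python
def apply_pose(T, p):
    """Apply a 4x4 homogeneous transform T (as returned by the marker
    tracker) to a 3D point p, rotating and translating it so the object
    sits on top of the detected marker."""
    x, y, z = p
    return [row[0] * x + row[1] * y + row[2] * z + row[3] for row in T[:3]]

# A translation-only pose: marker detected at (10, 0, 5)
T = [[1, 0, 0, 10],
     [0, 1, 0, 0],
     [0, 0, 1, 5],
     [0, 0, 0, 1]]
print(apply_pose(T, (1, 2, 3)))  # [11, 2, 8]
```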
4.3.2
Registration
The MIDI interface is used to obtain the user's performance for analysis. MIDI is an event-based format;
each time a key is pressed or released, a digital signal
containing information about the way the note was
played is sent to the computer. This information includes the note that was played and the velocity at
which the note was played. A high velocity indicates
a loud sound, while a low velocity indicates a soft
sound. The time at which the note was played can be
inferred from when the event was received. MIDI also
supports information about other keyboard functionalities, such as pedals or a synthesiser's knobs, but this information was outside this project's scope.
The users performance must be compared against
some reference model in order to assess it. Since MIDI
is capable of storing such detailed information, we decided to use recorded MIDI files of the music pieces
as reference models. This allows evaluating the user's note accuracy and rhythm accuracy. Other information, such as dynamics or articulation, can be added, but as explained previously, was considered too advanced for beginners.
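The note- and rhythm-accuracy comparison can be sketched as follows (the event representation, the names, and the 0.25 s timing tolerance are our own assumptions, not the system's actual parameters):

```python
def grade_performance(played, reference, tolerance=0.25):
    """Mark each reference note-on event (time_s, midi_note) as correct
    if the same pitch was played within `tolerance` seconds of it."""
    feedback = []
    for ref_time, ref_note in reference:
        hit = any(note == ref_note and abs(t - ref_time) <= tolerance
                  for t, note in played)
        feedback.append((ref_note, "correct" if hit else "missed"))
    return feedback

reference = [(0.0, 60), (0.5, 62), (1.0, 64)]   # C4, D4, E4
played = [(0.05, 60), (0.60, 62), (1.00, 65)]   # last note is wrong
print(grade_performance(played, reference))
```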
Feedback is important as it allows the user to
learn from mistakes and to set goals for future practice. Real-time feedback on note playing accuracy
is provided by colour coding the note visualisations
in the AR view as illustrated in figure 6. Colour is
the most appropriate visual attribute for representing
this information, since colours are perceived preattentively (Healey & Enns 2012), colours such as red and
green have an intuitive meaning, colours do not use
extra screen space (as opposed to size and shape),
and colour changes are less distracting than changes
of other visual attributes (such as shape).
5
Implementation
6
Results
6.1
User Study
6.2
Discussion
7
Future Work
8
Conclusion
References
Azuma, R. T. (1997), A survey of augmented reality, Presence: Teleoperators and Virtual Environments 6(4), 355-385.
Barakonyi, I. & Schmalstieg, D. (2005), Augmented reality agents in the development pipeline of computer entertainment, in Proceedings of the 4th International Conference on Entertainment Computing, ICEC'05, Springer-Verlag, Berlin, Heidelberg, pp. 345-356.
Steve Reeves
Andrea Schweer
(1,2)
1
Introduction
There have been many investigations into the effectiveness of different types of usability testing and evaluation techniques, see for example (Nielsen & Landauer 1993) and (Doubleday et al. 1997) as well as
research into the most effective ways of running the
various types of studies (numbers of participants, expertise of testers, time and cost considerations etc.)
(Nielsen 1994), (Lewis 2006). Our interest, however,
is in a particular type of usability study, that of user
evaluations. We are interested in how such studies are
developed, e.g. what is the basis for the activities performed by the participants? In particular, given an
implementation (or partial implementation) to test,
is there a difference between the sort of study the developer of the system under test might produce and
Copyright (c) 2013, Australian Computer Society, Inc. This paper appeared at the Fourteenth Australasian User Interface Conference (AUIC 2013), Adelaide, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 139, Ross T. Smith and Burkhard Wuensche, Ed. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
3.1
Goals
The first study had two main goals. The first was to
detect any serious usability flaws in the Digital Parrot's user interface before using it in a long-term user
study. We wanted to test how well the software could
be used by novice users given minimal instructions.
This mode of operation is not the typical mode for a
system such as the Digital Parrot but was chosen to
cut down on the time required by the participants as
it removed the need to include a training period.
The second goal was to find out whether the participants would understand the visualisations and the
purpose of the four different navigators.
3.2
Expectations
Results
The median SUS score of the Digital Parrot as determined in the usability test is 65 (min = 30, max = 92.5, IQR = 35), below the cut-off point for an acceptable SUS score (which is 70). The overall score of 65 corresponds to a rating between 'ok' and 'good' on Bangor et al.'s adjective scale (Bangor et al. 2009). The median SUS score in the graph condition alone is 80 (min = 42.5, max = 92.5, IQR = 40), which indicates an acceptable user experience and corresponds to a rating between 'good' and 'excellent' on the adjective scale. The median SUS score in the list condition is 57.5 (min = 30, max = 77.5, IQR = 42.5). The difference in SUS scores is not statistically significant but does reflect our observation that users in the list condition found the system harder to use than those in the graph condition.
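For context, a SUS score is computed from the ten five-point questionnaire items by summing (r - 1) for odd items and (5 - r) for even items, then scaling by 2.5; a sketch of the standard scoring (not code from the study):

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses -> a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```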
The Models
We began the modelling process by examining screenshots of the Digital Parrot. This enabled us to identify the widgets used in the various windows and dialogues of the system, which provided the outline for the first set of models. We used presentation models and
presentation interaction models (PIMs) from the work
described in (Bowen & Reeves 2008) as they provide a
way of formally describing UI designs and UIs with a
defined process for generating abstract tests from the
models (Bowen & Reeves 2009). Presentation models
describe each dialogue or window of a software system
in terms of its component widgets, and each widget
is described as a tuple consisting of a name, a widget
category and a set of the behaviours exhibited by that
widget. Behaviours are separated into S-behaviours,
which relate to system functionality (i.e. behaviours
that change the state of the underlying system) and
I-behaviours that relate to interface functionality (i.e.
behaviours relating to navigation or appearance of the
UI).
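These widget tuples map naturally onto a small data structure; a sketch based on the paper's FindWindow presentation model (this is not the PIMed tool's internal representation):

```python
# Widgets are (name, category, behaviours) tuples; "S_" behaviours change
# system state, "I_" behaviours affect only the interface.
find_window = [
    ("SStringEntry", "Entry", ()),
    ("HighlightButton", "ActionControl", ("S_HighlightItem",)),
    ("FMinIcon", "ActionControl", ("I_FMinToggle",)),
    ("FMaxIcon", "ActionControl", ("I_FMaxToggle",)),
    ("FXIcon", "ActionControl", ("I_Main",)),
]

def s_behaviours(model):
    """System (state-changing) behaviours exhibited by a presentation model."""
    return {b for _, _, bs in model for b in bs if b.startswith("S_")}

def i_behaviours(model):
    """Interface (navigation/appearance) behaviours of a presentation model."""
    return {b for _, _, bs in model for b in bs if b.startswith("I_")}
```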
Once we had discovered the structure of the UI
and created the initial model we then spent time using
the software and discovering what each of the identified widgets did in order to identify the behaviours to
add to the model. For some parts of the system this
was relatively easy, but occasionally we were unable
to determine the behaviour by interaction alone. For
example, the screenshot in figure 5 shows the Find
dialogue from the Digital Parrot, from which we developed the following presentation model:
FindWindow is
(SStringEntry, Entry, ())
(HighlightButton, ActionControl,
(S_HighlightItem))
(FMinIcon, ActionControl, (I_FMinToggle))
(FMaxIcon, ActionControl, (I_FMaxToggle))
(FXIcon, ActionControl, (I_Main))
(HSCKey, ActionControl, (S_HighlightItem))
(TSCKey, ActionControl, (?))
We were unable to determine what the behaviour of
the shortcut key option Alt-T was and so marked
the model with a ? as a placeholder. Once the
presentation models were complete we moved on to
the second set of models, the PIMs, which describe
the navigation of the interface. Each presentation
model is represented by a state in the PIM and transitions between states are labelled with I-behaviours
(the UI navigational behaviours) from those presentation models. PIMs are described using the Charts
language (Reeve 2005), which enables each part of
the system to be modelled within a single, sequential
chart that can then be composed together or embedded in states of other models to build the complete
model of the entire system. Figure 6 shows one of
the PIMs representing part of the navigation of the
Find dialogue and Main window.
In the simplest case, a system with five different
windows would be described by a PIM with five states
(each state representing the presentation model for
one of the windows). However, this assumes that each
of the windows is modal and does not interact with
[Figure 6: PIM fragment with states MainandFind, MainandMinFind, MinMainandFind and MinMainandMinFind; transitions labelled with the I-behaviours I_FMinToggle, I_FMaxToggle, I_MainMinToggle and I_MainMaxToggle]
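A PIM of this kind is essentially a state machine whose transitions are I-behaviours; a partial sketch reconstructed from the figure's labels (the exact transition set in Figure 6 may differ):

```python
# (state, I-behaviour) -> next state; each state is a presentation model.
pim = {
    ("MainandFind", "I_FMinToggle"): "MainandMinFind",
    ("MainandMinFind", "I_FMaxToggle"): "MainandFind",
    ("MainandFind", "I_MainMinToggle"): "MinMainandFind",
    ("MainandMinFind", "I_MainMinToggle"): "MinMainandMinFind",
    ("MinMainandMinFind", "I_FMaxToggle"): "MinMainandFind",
    ("MinMainandFind", "I_MainMaxToggle"): "MainandFind",
}

def step(state, behaviour):
    """Follow a transition; undefined behaviours leave the state unchanged."""
    return pim.get((state, behaviour), state)

print(step("MainandFind", "I_FMinToggle"))  # MainandMinFind
```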
HighlightItems ↦ SelectItems
PointerMode ↦ TogglePointerMode
CurrentTrail ↦ SelectCurrentTrailItems
SelectItemMenu ↦ MenuChoice
ZoomInTL ↦ TimelineZoomInSubset
ZoomOutTL ↦ TimelineZoomOutSuperset
SelectItemsByTime ↦ SelectItemsByTime
FitSelectionByTime ↦ FitItemsByTime
HighlightItem ↦ SelectByName
AddToTrail ↦ AddToTrail
ZoomInMap ↦ RestrictByLocation
ZoomOutMap ↦ RestrictByLocation
Histogram ↦ UpdateHistogram
Abstract tests are based on the conditions that are required to hold in order to bring about the behaviour
given in the models. The tool which we use for creating the presentation models and PIMs, called PIMed
(PIMed 2009), has the ability to automatically generate a set of abstract tests from the models, but for this
work we derived them manually using the process described in (Bowen & Reeves 2009). Tests are given
in first-order logic. The informal, intended meanings of the predicates can initially be deduced from their names, and are subsequently formalised when
the tests are instantiated. For example, two of the
tests that were derived from the presentation model
and PIM of the Find dialogue and MainandFind
are:
State(MainFind) ⇒
  Visible(FXIcon) ∧ Active(FXIcon) ∧
  hasBehaviour(FXIcon, I_Main)

State(MainFind) ⇒
  Visible(HighlightButton) ∧ Active(HighlightButton) ∧
  hasBehaviour(HighlightButton, S_HighlightItem)
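Once instantiated, such a test can be checked mechanically against a concrete UI description; a sketch using hypothetical dictionaries (this is not PIMed's representation):

```python
def holds(ui, state, widget, behaviour):
    """True iff, in `state`, `widget` is visible, active and exhibits
    `behaviour` -- the shape of the abstract tests above."""
    w = ui[state][widget]
    return w["visible"] and w["active"] and behaviour in w["behaviours"]

ui = {
    "MainFind": {
        "FXIcon": {"visible": True, "active": True, "behaviours": {"I_Main"}},
        "HighlightButton": {"visible": True, "active": True,
                            "behaviours": {"S_HighlightItem"}},
    }
}
print(holds(ui, "MainFind", "FXIcon", "I_Main"))  # True
```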
The first defines the condition that when the system is in the MainFind state a widget called FXIcon
should be visible and available for interaction (active) and when interacted with should generate the
interaction behaviour called I_Main, whilst the second requires that in the same state a widget called
HighlightButton is similarly visible and available
for interaction and generates the system behaviour
S_HighlightItem. When we come to instantiate the
As with the first study, ethical consent to run the second study was sought, and given. We recruited our
participants from the same target group, and in the
same way, as for the initial study with the added criterion that no one who had participated in the initial
study would be eligible to participate in the second.
We used a between-groups methodology with
the ten participants being randomly assigned either
the Graph view version of the software or the List
view. The study was run as an observational study
with notes being taken of how participants completed, or attempted to complete, each task. We also
recorded how easily we perceived they completed the
tasks and subsequently compared this with the participants' perceptions. Upon completion of the tasks
the participants were asked to complete the questionnaire and provide any other feedback they had regarding any aspect of the study. Each study took, on
average, an hour to complete.
4.7
Results
One of the things we have achieved by this experiment is an understanding of how formal models might
be used to develop a framework for developing user
evaluations. This work shows that a study produced
in such a way is as good (and in some cases better)
at discovering both usability and functional problems
with software. It is also clear, however, that the type
of study produced does not allow for analysis of utility and learnability from the perspective of a user
encouraged to interact as they choose with software.
Some of the advantages of this approach are: the
ability to clearly identify the scope of the study
with respect to the navigational possibilities of the
software-under-test (via the PIM); a framework to
identify relevant user tasks (via the abstract tests);
a mechanism to support creation of oracles for inputs/outputs to tasks (via the specification). This
supports our initial goal of supporting development
of evaluation studies by someone other than the software developer as it provides structured information
to support this. However, it also leads to an artificial
approach to interacting with the software and does
not take into account the ability of participants to
7
Conclusion
It seems clear that there is no one-size-fits-all approach to developing evaluation studies, as the underlying goals and intentions must play a part in how
the tasks are structured. However, it does appear
that the use of formal models in the ways shown here
can provide a structure for determining what those
tasks should be and suggests ways of organising them
to maximise interaction. Perhaps using both methods
(traditional and formally based) is the best way forward. Certainly there are benefits to be found from
taking the formal approach, and for developers with
no expertise in developing evaluation studies this process may prove supportive and help them by providing a framework to work within. Similarly for formal
practitioners who might otherwise consider usability
testing and evaluation as too informal to be useful the
formal structure might persuade them to reconsider
and include this important step within their work.
The benefits of a more traditional approach are the
ability to tailor the study for discovery as well as evaluation, something the formally devised study in its current form was not at all good at. Blending the
two would be a valuable way forward so that we can
use the formal models as a framework to devise structured and repeatable evaluations, and then extend
or develop the study with a more human-centred approach that allows for the other benefits of evaluation
that would otherwise be lost.
8
Future Work
Acknowledgments
Abstract
3D display technologies improve perception and
interaction with 3D scenes, and hence can make
applications more effective and efficient. This is achieved
by simulating depth cues used by the human visual
system for 3D perception. The types of depth cues employed and the characteristics of a 3D display technology affect its usability for different applications. In this paper
we review, analyze and categorize 3D display
technologies and applications, with the goal of assisting
application developers in selecting and exploiting the
most suitable technology.
Our first contribution is a classification of depth cues
that incorporates their strengths and limitations. These
factors have not been considered in previous
contributions, but they are important considerations when
selecting depth cues for an application. The second
contribution is a classification of display technologies that
highlights their advantages and disadvantages, as well as
their requirements. We also provide examples of suitable
applications for each technology. This information helps
system developers to select an appropriate display
technology for their applications.
Keywords: classification, depth cues, stereo perception,
3D display technologies, applications of 3D display
technologies
Introduction
Depth Cues
2.1
2.2
3D Display Technologies
3.1
Stereoscopic Display
3.1.1
Stereo Pair
3.1.2
Autostereoscopic
forth. The right and left images are merged in the brain
using transverse (crossed) or parallel (uncrossed) viewing.
However, some viewers are not able to perceive 3D
images from autostereograms. Autostereograms are used
for steganography and entertainment books (Tsuda et al.
2008).
Holographic Stereogram. Images are stored on a
holographic film shaped as a cylinder, and provide motion
parallax as a viewer can see different perspectives of the
same scene when moving around the cylinder (Halle
1988).
Holographic stereograms are normally used for clinical,
educational, mathematical and engineering applications
and in space exploration. The method has some
constraints that limit its usage. For example, if viewers step further away from holographic stereograms with short viewing distances, the size of the image changes or the image distorts (Watson 1992, Halle 1994, ZebraImaging 2012).
Parallax Barrier. In this technique the left and right images are divided into slices and placed behind a barrier of vertical slits. The viewers have to be positioned in front of the image so that the barrier directs the right and left images to their corresponding eyes (Pollack, n.d.).
Forming the images in a cylindrical or panoramic shape
can provide motion parallax as viewers are able to see
different perspectives by changing their position.
However, the number of images that can be provided is
limited, so horizontal movement beyond a certain point
will cause image flipping (McAllister 1993).
Lenticular Sheets. Lenticular sheets consist of small
semi-cylindrical lenses, called lenticules, that direct
each of the right and left images to their corresponding
eyes. Because its mechanism is based on refraction rather
than occlusion, the resulting images look brighter
(LenstarLenticular 2007).
Alternating Pairs (VISIDEP). This method is based on
vertical parallax: images are presented to the viewer with a
fast rocking motion to help viewers fuse them into 3D
images. Because it uses vertical parallax, the method avoids
image flicker and ghosting.
VISIDEP was used for computer-generated terrain
models and molecular models. However, not all
viewers were able to fuse the vertical-parallax images into
a 3D image, and the method was limited in terms of
implementation speed and image quality, so it is no longer
in use (Hodges 1985).
3.2 Real 3D Display
3.2.1
3.2.2
3.2.3 Holographic Display
Depth cues with their strength, effective range, applicability to static (S) and/or animated (A) imagery, and limitations:

Accommodation: Weak (McAllister 1993); 0-2 m (McAllister 1993); S&A (McAllister 1993).
Convergence: Weak (McAllister 1993); 0-10 m (McAllister 1993); S&A (McAllister 1993).
Binocular Parallax (Stereopsis): Strong (Kaufman et al. 2006); 2.5-20 m (Kaufman et al. 2006); S&A (McAllister 1993).
Movement (Motion) Parallax: 0-∞ (Mikkola et al. 2010); A (McAllister 1993). Limitations: 1. Any extra movement of the viewer or the scene creates powerful and independent depth cues (Mather 2006). 2. Does not work for static objects (McAllister 1993).
Depth from Defocus: 0-∞ (Mather 1996); S (Mather 1996). Limitations: 1. Depth of field also depends on pupil size, so the estimated depth may be inaccurate (Mather 2006). 2. Human eyes cannot detect small differences in a blurry scene (Mather 2006).
Retinal Image Size: Strong (Howard 2012); 0-∞ (Bardel 2001). Limitation: the change in retinal size for distances over 2 m is very small (Mather 2006).
Linear Perspective: 0-∞ (Bardel 2001); S&A (Mather 2006). Limitation: works well only for parallel or continuous lines stretching towards the horizon (Mather 2006).
Texture Gradient: Strong (Howard 2012); 0-∞ (Bardel 2001); S&A (Mather 2006). Limitation: only reliable when the scene consists of elements of the same size, volume and shape; texture cues also vary more slowly for a taller viewer than for a shorter one (Mather 2006).
Overlapping: 0-∞ (Bardel 2001); S&A (McAllister 1993). Limitation: provides only the ordering of objects, not accurate depth information (McAllister 1993).
Aerial Perspective: 0-∞ (Bardel 2001); S&A (Mather 2006).
Colour: Weak (McAllister 1993); 0-∞ (McAllister 1993); S&A (McAllister 1993). Limitations: 1. Objects at the same depth but with different colours are perceived at different depths. 2. Brighter objects appear to be closer (McAllister 1993).
Shadowing and Shading: S&A (McAllister 1993).
[Table: classification of 3D display techniques by category and by the physical depth cues they exploit. Stereo-pair techniques (time-parallel or field-sequential; polarized or non-polarized) include transparency viewers, head-mounted displays, anaglyph, fish-tank VR, vectographs, StereoJet, ChromaDepth, the eclipse method (active shutter system), ImmersaDesk and interference filter technology; all exploit binocular parallax, with motion parallax added where head tracking is available. Autostereoscopic techniques include free viewing, holographic stereograms, lenticular sheets and parallax barriers (binocular parallax, plus motion parallax if panoramic) and alternating pairs (VISIDEP). Multiplanar swept-volume techniques include the oscillating planar mirror, varifocal mirror and rotating mirror; static volume displays form their own category. Hardware notes from the table: Vectograph sheets come in rolls of two thousand feet for ~US$37,000 (Friedhoff et al. 2010); box-shaped binoculars mounted on sensors to simulate movement in the virtual world cost US$10,000-85,000 depending on their degrees of freedom (McAllister 1993); stereo sync output is provided by, e.g., the Z-Screen by StereoGraphics Ltd. (McAllister 1993); normal PCs can use an emitter (~US$400) plus a software program to enhance their screen update frequency and convert left and right images into an appropriate format; autostereogram design software (e.g. Stereoptica, XenoDream) is priced at US$15-120 (BrotherSoft 2012); static volume displays need a transparent medium and a laser or infrared projector (Stevens 2011, Hambling 2006); VISIDEP needs two vertically mounted cameras with similar frame rates and lenses (Hodges 1985).]
[Table: comparison of display techniques by number of supported users, characteristics, and application examples. Anaglyph (non multi-user): very cheap; can be viewed on any colour display; does not require special hardware; but most colour information is lost during the colour reproduction process, long use of anaglyph glasses causes headache or nausea, no head tracking is provided, image crosstalk occurs most of the time, and ghosting is possible if colours are not adjusted properly; applications include advertisements, postcards, 3D comics, scientific charts, demographic diagrams and anatomical studies (McAllister 1993, Okoshi 1976, Planar3D 2012). Time-parallel single-user displays that may require a head-tracking system are used for augmented reality and video games (Jorke et al. 2008, McAllister 1993). Field-sequential multi-user displays with active polarizers offer high 3D resolution and preservation of colour, and are used for video games and movies (Okoshi 1976, Dzignlight 2012, Penna 1988, Farrell et al. 1987, Perron and Wolf 2008). Cheap passive circular-polarized displays are used for mechanical designs, radiology, amusement parks and educational applications (ASC Scientific 2011, Planar3D 2012, Penna 1988, Dzignlight 2012, StereoJet 2012, Chromatek A n.d., McAllister 1993). The fully immersive CAVE requires wearing shutter glasses and gloves for interacting with the virtual environment; it is used for flight simulation, studying circumstances that are impossible or expensive to implement in the real world (serious games), pilot training, studies on human interaction with specific conceptual environments, molecular modeling, crystallography, product design and radiology (McAllister 1993, Planar3D 2012, Penna 1988). Cheap autostereoscopic displays suit limited budgets and are used for business cards, postcards, decoration ornaments, panoramic images, molecular modeling, educational applications and advertisements (LandRover 2010, Planar3D 2012, BBC News 2004, Chantal et al. 2010, Watson 1992).]
Business.com, 2012, Head Mounted Displays Pricing and Costs, accessed 19 August 2012, http://www.business.com/guides/headmounted-displays-hmd-pricing-and-costs-38625/
Media College, 2010, Stereo-Blind: People Who Can't See 3D, accessed 30 August 2012, http://www.mediacollege.com/3d/depthperception/stereoblind.html
Evans, H., Buckland, G. and Lefer, D.: They Made America: From the Steam Engine to the Search Engine: Two Centuries of Innovators
Farrell, J., Benson, B. and Haynie, C. (1987): Predicting Flicker Threshold for Video Display Terminals. Hewlett Packard, Proc. SID, Vol. 28/4, August
Kaufman, L., Kaufman, J., Noble, R., Edlund, S., Bai, S. and King, T. (2006): Perceptual Distance and Constancy of Size and Stereoptic Depth. Spatial Vision, Vol. 19, No. 5, 23 January 2006
Abstract
Ajax, as one of the technological pillars of Web 2.0, has
revolutionized the way that users access content and
interact with each other on the Web. Unfortunately, many
developers appear to be inspired by what is
technologically possible through Ajax disregarding good
design practice and fundamental usability theories. The
key usability challenges of Ajax have been noted in the
research literature with some technical solutions and
design advice available on developer forums. What is
unclear is how commercial Ajax developers respond to
these issues. This paper presents the results of an
empirical study of four commercial web sites that utilize
Ajax technologies. The study investigated two usability
issues in Ajax with the results contrasted in relation to the
general usability principles of consistency, learnability
and feedback.
The results of the study found inconsistencies in how the
sites managed the usability issues and demonstrated that
combinations of the issues have a detrimental effect on
user performance and satisfaction. The findings also
suggest that developers may not be consistently responding
to the available advice and guidelines. The paper concludes
with several recommendations for Ajax developers to
improve the usability of their Web applications.
Keywords: Ajax, usability, world-wide web.
Introduction
The World Wide Web has evolved in both size and uses
well beyond the initial conceptions of its creators. The
rapid growth in the size of the Web has driven the need
for innovation in interface technologies to support users
in navigating and interacting with the increasing amount
of diverse and rich information. The technological
innovations over the past decade have strived to provide
users with a supportive interface to access information on
the Web with ease including new Web 2.0 models of
interaction that allow users to interact, contribute,
collaborate and communicate.
However, with this
innovation there has been an unintended consequence of
increased complexity for users. The new
models of interaction have in some cases required a shift
Copyright 2013, Australian Computer Society, Inc. This
paper appeared at the 14th Australasian User Interface
Conference (AUIC 2013), Adelaide, Australia. Conferences in
Research and Practice in Information Technology (CRPIT),
Vol. 139. Ross T. Smith and Burkhard Wuensche, Eds.
Reproduction for academic, not-for-profit purposes permitted
provided this text is included.
3 AJAX - Challenges
3.1 Usability Principles and Disorientation
3.2 AJAX Usability
3.2.1
3.2.2
Experiment
Method
6 Results
6.1 Task Completion Times
6.2
6.3
6.4 Update Management
6.5 Other Results
Discussion
Conclusion
References
Burkhard Wünsche
Christof Lutteroth
b.wuensche@auckland.ac.nz
lutteroth@cs.auckland.ac.nz
Abstract
Over the last century, virtual reality (VR) technologies
(stereoscopic displays in particular) have repeatedly been
advertised as the future of movies, television, and more
recently, gaming and general HCI. However, after each
wave of commercial VR products, consumer interest in
them has slowly faded away as the novelty of the
experience wore off and its benefits were no longer
perceived as sufficient to outweigh the cost and limitations.
Academic research has shown that the amount of benefit a
VR technology provides depends on the application it is
used for and that, contrary to how these technologies are
often marketed, there is currently no one-size-fits-all 3D
technology. In this paper we present an evaluation
framework designed to determine the quality of depth
cues produced when using a 3D display technology with a
specific application. We also present the results of using
this framework to evaluate some common consumer VR
technologies. Our framework works by evaluating the
technical properties of both the display and application
against a set of quality metrics. This framework can help
identify the 3D display technology which provides the
largest benefit for a desired application.
Keywords: virtual reality, evaluation framework, 3D
displays, 3D applications.
Introduction
[Figure: the mixed-reality continuum, ranging from reality (mostly real) through augmented reality, mediated reality and mixed reality to substitutional and virtual reality (mostly virtual).]
[Figure: taxonomy of depth cues: pictorial (perspective, texture, shading, shadows, relative motion, occlusion), physiological (accommodation, convergence), and parallax (binocular, motion).]
Related Work
Evaluation Framework
3.1 Display Technology
3.2 Application
3.3 Quality Metrics
3.3.1 Soft Metrics
3.3.2
5.1
where … and … denote the application and the display, respectively.
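The quality metrics of Section 3.3 are combined into a single figure of merit for an application/display pair. The following is a hedged sketch of how such an aggregation could look; the function name, the dict representation and the weighted-sum rule are assumptions for illustration, not the authors' exact formula:

```python
def benefit(display_scores, app_weights):
    """Combine per-metric quality scores for a display with the
    weights an application places on each metric.  Both arguments
    are dicts keyed by metric name; a higher result means a better
    display/application match."""
    return sum(score * app_weights.get(metric, 0.0)
               for metric, score in display_scores.items())
```

For example, a display scoring 1.0 on resolution and 0.5 on latency, evaluated for an application that weights resolution twice as heavily as latency, yields a benefit of 2.5.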
Results
Discussion
Findings
5.2
Validity
5.3
Limitations
Future Work
References
Conclusions
9 Appendix A
9.1
http://vcg.isti.cnr.it/Publications/2011/BAADEGMM11 Accessed 6 July 2012.
Qi, W., Taylor, R. M. II, Healey, C. G. and
Martens, J.-B. (2006): A comparison of immersive
HMD, fish tank VR and fish tank with haptics
displays for volume visualization. Proc. of the 3rd
symposium on Applied perception in graphics and
visualization. Massachusetts, USA, 51-58. ACM
Litwiller, T. and LaViola Jr., J. (2011): Evaluating the
benefits of 3D stereo in modern video games. Proc. of
the 2011 annual conference on Human factors in
computing systems. Vancouver, BC, Canada. 2345-2354. ACM
Treadgold, M., Novins, K., Wyvill, G. and Niven, B.
(2001): What do you think you're doing? Measuring
perception in fish tank virtual reality. Proc. Computer
Graphics International. 325-328.
Grossman, T. and Balakrishnan, R. (2006): An evaluation
of depth perception on volumetric displays. Proc. of
the working conference on Advanced visual
interfaces. Venezia, Italy. 193-200, ACM
Blundell, B. (2012): On Exemplar 3D Display
Technologies.
http://www.barrygblundell.com/upload/BBlundellWhi
tePaper.pdf Retrieved 7 February 2012.
Pimenta, W. and Santos, L. (2012): A Comprehensive
Taxonomy for Three-dimensional Displays. WSCG
2012 Communication Proceedings. 139-146
Wanger, L., Ferwerda, J. and Greenberg, D. (1992):
Perceiving spatial relationships in computer-generated
images. Computer Graphics and Applications. 12:44-58. IEEE
9.2
Display Technologies
Swept volume
Sparse integral multiview (one view per user)
Dense integral multiview (many views per user)
Light-field (hypothetical display capable of producing at least a 4D light field)
Head-coupled perspective
Fish-tank VR
Head-mounted display
Tracked head-mounted display
Anaglyph stereoscopy
Line-interlace polarised stereoscopy
Temporally-interlaced stereoscopy
Parallax-barrier autostereoscopy
Applications
Cinema
Home theatre
TV console gaming
TV console motion gaming
Mobile gaming
Mobile videotelephony
Information kiosk
Desktop gaming
9.3 Requirements
Number of viewers
Display portability
Variable binocular parallax produced
Variable convergence produced
Variable motion parallax produced
9.4 Quality Metrics
9.4.1 General
System cost
Cost of users
Rendering computation cost
More views rendered than seen
Scene depth accuracy
Headgear needed
9.4.2 Pictorial
Spatial resolution
9.4.3 Motion Parallax
9.4.4 Binocular Parallax
Amount of wobble
Stereo inversion
9.4.5 Accommodation
A/C breakdown
9.4.6 Convergence
10 Appendix B
[Table: the full evaluation matrix. Each row pairs a display technology (Anaglyph Stereoscopy, Dense Integral Multiview, Fish-tank VR, Head-coupled Perspective, Head-mounted Display, Light Field) with an application (Cinema, Console Gaming, Desktop PC CAD, Desktop PC Gaming, Desktop Videotelephony, Home Theatre, Information Kiosk, Mobile Gaming, Mobile Videotelephony, Motion Console Gaming) and lists numeric scores for the quality metrics: resolution, occlusion, refresh rate, colour distortion, brightness, number of axes, latency, magnitude, continuity, wasted views, per-user cost, system cost, headgear, render cost, depth accuracy, convergence, wobble, stereo non-invertibility and enhancement, grouped under General, Pictorial, Motion Parallax and Binocular Parallax.]
Contributed Posters
Stefan Marks
Abstract
This poster outlines ethnographic research into the
design of an environment to study a land speed record
vehicle or, more generally, a vehicle posing a high cognitive load for the user. The challenges of empirical
research in the design of unique artefacts are
discussed, where we may have neither the artefact available in its real context to study, nor key informants
with direct relevant experience. We also describe
findings from the preliminary design studies and from the
study of the design of the yoke for driving steer-by-wire.
Keywords: Yoke, Steering, Ethnography, Cognitive
Load
1
Introduction
The task for the research team was to create an environment to undertake research on the cockpit design
of a Land Speed Record vehicle, this being inspired
by the public launch of the New Zealand Jetblack
land speed record project, and our growing awareness
of numerous land speed record projects sprouting up
around the globe. We have conceptualised this research project slightly more broadly as undertaking
research on vehicle user interaction in a high cognitive
load environment. This gives us a unique environment, in contrast to the majority of current research
that concentrates on what is considered a normal cognitive load, where the focus is on attention, fatigue,
and distractions (Ho & Spence 2008). Our focus, in contrast,
is on an innately higher-risk activity with a cognitive load bordering on extreme.
So how do you go about researching artefacts that
don't exist? Every method for undertaking empirical
research is a limited representation of reality, a simplification, and for a reality that is hypothetical this
potentially exacerbates these limitations. We have
predominantly been using an ethnographic process
(Wellington 2011) to gather data from participants
and experts about the elements we have been designing; the specific design is only the focal point of the broader context that we are interested in. Too much emphasis
has been placed on the role of ethnography to provide sociological requirements in the design process
(Dourish 2006). The objective here is also to explore
Figure 1: A still extracted from video of a participant's run. Note the right thumb activating the button for additional thrust.
the HCI theory related to this context and activities
in an analytical way.
For our project, having its foundations in a Land
Speed Record vehicle, the cohort of potential drivers
has traditionally come from available and willing airforce pilots. Therefore, we interviewed actual air force
pilots in the cockpits of various military aircraft, and
were able to discuss the design of the control systems,
as well as the potential concepts a land speed record
vehicle driver would need to be aware of in controlling his or her vehicle. We used an ethnographic data
collection method for gathering the knowledge of experts, in which a conversational style is preferred over
structured questions, and the researcher/interviewer
says as little as possible to make sure that they are
collecting the interviewee's truth rather than confirming their own.
Later in the research, once we had built our simulator, the research participants were informed that
we were interested in their opinions on any
aspect of the simulation or the design. We can then place
more significance on anything they volunteer specifically about the yoke or steering, as this suggests
it had some importance to their experience, rather
than being something they had to come up with because they
were prompted. When participants start discussing something they felt in the steering, we
have to restrain ourselves from asking too many questions or explaining the technical details of the device,
and simply try to capture what they are saying.
Findings
Robert Wellington
Abstract
In this poster, we outline a research study of the steering system for a potential land speed record vehicle.
We built a cockpit enclosure to simulate the interior space and employed a game engine to create
a suitable virtual simulation and appropriate physical behaviour of the vehicle, giving a realistic experience with a level of difficulty that represents
the challenge of such a task. With this setup, we
conducted experiments on different linear and non-linear steering response curves to find the most suitable steering configuration.
The results suggest that linear steering curves with
a high steering ratio are better suited than non-linear
curves, regardless of their gradient.
Keywords: Yoke, Steering, High Cognitive Load
1
Introduction
The task for the research team was to create an environment to undertake research on the cockpit design
of a Land Speed Record vehicle, this being inspired
by the public launch of the New Zealand Jetblack
land speed record project, and our growing awareness
of numerous land speed record projects sprouting up
around the globe.
Creating this environment elevates the sensitivity
of the quality of the user interaction design significantly, and will allow us to trial and evaluate many
designs and gather rich data. The aim of our research
in collecting this data is targeted at developing theory rather than just evaluating a set of designs or
undertaking a requirements gathering activity. We
do intend to develop the simulation to be as close
to the physical reality as possible, as the land speed
record context provides something concrete for participants driving in the simulator to imagine, and a
target context for participants to relate their experiences. Making this context explicit then provides a
fixed reference point to combine the variety of experiences of the participants that have ranged from games
enthusiasts, pilots, drag race drivers, and engineers,
to general office staff and students.
Steering Design
The steering of a land speed record vehicle is very different from that of a standard automobile. Instead of a standard steering wheel, a yoke is used for controlling the
vehicle. The rotation range of the yoke is limited to
about 90 to at most 180 degrees, since the pilot constantly has to keep both hands on it; a larger motion
range would result in crossed arms or uncomfortable
rotation angles of the arm and hand joints. In addition,
the maximum range of the steering angle of the front
wheels of the vehicle is very limited as well, as the vehicle is designed primarily to drive a straight course
without any bends. In our simulation, we found that
during most runs the front wheels were rarely rotated
more than 1 degree.
While there is a significant body of research into
vehicle control via steering wheels, yokes, and joysticks, e.g., (McDowell et al. 2007, Hill et al. 2007) in
the context of military vehicles, we were not able to
find any research output in the context of high-speed
land vehicles such as Jetblack.
For the experiments, we implemented a steering module with two parameters: an adjustable
yoke/front wheel transfer ratio, and an adjustable response curve. The steering module expects the yoke
input as a value between -1 and 1 and translates it to
an intermediate value in the same range by applying
a simple power function with an adjustable exponent.
An exponent of 1 results in a linear curve while higher
exponents (e.g., 1.5 or 2) result in the nonlinear curves
shown in Figure 1.
The intermediate value is then multiplied by a factor that represents the steering ratio (e.g., 1:30 or
1:60), the ratio between the yoke input angle and the front wheel angle.
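The two-parameter steering module described above can be sketched as follows; the function name and default values are illustrative, not taken from the simulator's source:

```python
import math

def steer(yoke_input, exponent=1.0, ratio=1/45):
    """Translate a yoke input in [-1, 1] into a front-wheel angle
    fraction.  An exponent of 1 gives a linear response curve;
    higher exponents (e.g. 1.5 or 2) flatten the curve around the
    centre, giving finer control for small corrections.  The result
    is scaled by the yoke/front-wheel transfer ratio."""
    # Apply the power function to the magnitude and preserve the
    # sign, so that left (negative) inputs behave symmetrically.
    intermediate = math.copysign(abs(yoke_input) ** exponent, yoke_input)
    return intermediate * ratio
```

With `exponent=1` and `ratio=1/45` this reproduces a plain 1:45 linear configuration; with `exponent=2` a half-deflection of the yoke turns the wheels only a quarter as far as full deflection would.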
Methodology
Config.  Power          Ratio
1*       1 (linear)     1:20
2        1 (linear)     1:30
3        1 (linear)     1:45
4        1 (linear)     1:60
5        1.5            1:45
6        2 (quadratic)  1:45
7*       3 (cubic)      1:45
The configurations were randomised and changed after every run. Configurations marked with an asterisk were
only tested once or twice, to check whether participants
would notice such extreme values.
Data was logged for every timestep of the physical simulation, which ran at 200 steps per second.
As a measure of the stability of a run, we evaluated
the average lateral velocity of the vehicle during the acceleration phase at speeds above 500 km/h.
Above this speed, the randomised simulated turbulence and side wind had a major effect on the stability of the vehicle, and therefore required the most steering
influence from the participants.
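As a sketch, the stability measure described above might be computed from the logged timesteps like this; the names are illustrative, and averaging the absolute lateral velocity is an assumption rather than the authors' stated formula:

```python
def stability(log, speed_threshold=500 / 3.6):
    """Average absolute lateral velocity (m/s) over the timesteps in
    which forward speed exceeds the threshold (default 500 km/h
    converted to m/s).  `log` is a sequence of (forward_speed,
    lateral_velocity) pairs in m/s; lower values indicate a more
    stable run."""
    samples = [abs(lat) for fwd, lat in log if fwd > speed_threshold]
    return sum(samples) / len(samples) if samples else 0.0
```

Applied to the logged 200 Hz simulation data per run, this yields a single comparable number per configuration, matching the per-configuration averages reported in the Results section.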
Results
Config.  Avg. lateral velocity
1        0.537 m/s
2        0.572 m/s
3        0.396 m/s
4        0.394 m/s
5        0.522 m/s
6        0.588 m/s
7        1.5611 m/s
Abstract
We present a 3D object tracking method using a single
depth camera for Spatial Augmented Reality (SAR). The
drastic changes of illumination in a SAR environment
make object tracking difficult. Our method uses a depth
camera to train and track the 3D physical object. The
training allows marker-less tracking of the moving object
under illumination changes. The tracking is a combination
of feature-based matching and frame-sequential matching
of point clouds. Our method allows users to adapt 3D
objects of their choice into a dynamic SAR environment.
Keywords: Spatial Augmented Reality, Depth Camera, 3D
object, tracking, real-time, point cloud
Introduction
Related Works
Proposed method
3.1
The point clouds of the object from different views, called
partial shape templates, are trained first. The relations
between the templates are labelled and trained along with
them. This relationship is used to
guess the new template when the object view changes and
re-initialization is necessary. The trained point clouds are
meshed into a polygon model for surface projection; the
tracking method only uses the point clouds of the templates
to track the surface of the object.
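The re-initialization step can be sketched as a lookup in the trained relation graph. This is a hedged illustration: the function name, the dict representation of the relations, and the scoring callback are assumptions, not the paper's implementation:

```python
def reinitialize(current_template, relations, match_score):
    """When the object view changes and tracking from the current
    partial shape template fails, consider the templates labelled
    as adjacent to it in the trained relation graph and return the
    one whose point cloud best matches the current depth frame.

    relations   -- dict mapping a template id to adjacent template ids
    match_score -- callback returning a matching score for a template
    """
    candidates = [current_template] + list(relations.get(current_template, []))
    return max(candidates, key=match_score)
```

Restricting the search to the current template's neighbours (rather than all trained views) reflects the idea that the object's view changes gradually between frames.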
3.2
3.3
3.4
Experiment Result
Conclusion
References
Abstract
The obesity epidemic facing the Western world has
been a topic of numerous discussions and research
projects. One major issue preventing people from
becoming more active and following health care recommendations is an increasingly busy life style and
the lack of motivation, training, and available supervision. While the use of personal trainers increases
in popularity, they are often expensive and must be
scheduled in advance. In this research we developed
a smartphone application, which assists users with
learning and monitoring exercises. A key feature of
the application is a novel algorithm for analysing accelerometer data and automatically counting repetitive exercises. This allows users to perform exercises
anywhere and anytime, while doing other activities
at the same time. The recording of exercise data allows users to track their performance, monitor improvements, and compare it with their goals and the
performance of other users, which increases motivation. A usability study and feedback from a public
exhibition indicates that users like the concept and
find it helpful for supporting their exercise regime.
The counting algorithm has an acceptable accuracy
for many application scenarios, but has limitations
with regard to complex exercises, small numbers of
repetitions, and poorly performed exercises.
Keywords: accelerometer, activity monitoring, signal
processing, exercise performance, fitness application,
human-computer interfaces
1
Introduction
The worldwide prevalence of obesity almost doubled between 1980 and 2008 and has reached an estimated half a billion men and women over the age of 20 (World Health Organization 2012). Exercise helps to fight obesity, but is often not performed due to a lack of motivation (American Psychological Association 2011). Motivation can be increased by enabling users to self-monitor and record exercise performance and to set goals. For example, an evaluation of 26 studies with a total of 2767 participants found that pedometer users increased their physical activity by 26.9% (Bravata et al. 2007).
Copyright (c) 2013, Australian Computer Society, Inc. This paper appeared at the 14th Australasian User Interface Conference (AUIC 2013), Adelaide, Australia, January-February 2013. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 139, Ross Smith and Burkhard C. Wünsche, Ed. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
In this research we present an iPhone-based personal trainer application that assists users with performing exercises correctly, self-monitoring them, and evaluating performance. A key contribution is a novel algorithm for counting repetitive exercises. This helps users easily keep track of daily exercise data, and encourages them to perform simple exercises frequently, e.g., during work breaks and recreational activities.
2 Design

2.1 Software Architecture
The application is designed with a three-tier architecture. The presentation tier consists of two parts: the exercise view gives feedback to the user while performing exercises, whereas the data view is used for planning and monitoring exercises and provides instructions and visualisations of recorded data. The repetition counter generates information about user performance based on the selected exercise and an analysis of accelerometer data. The database manager is responsible for saving and retrieving information (e.g., educational material and exercise performance data).
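The division of responsibilities described above can be sketched as plain classes. All class and method names below are illustrative assumptions, not taken from the paper; the actual application is an iPhone app whose code is not shown.

```python
class DatabaseManager:
    """Data tier: saves and retrieves exercise material and results."""
    def __init__(self):
        self._store = {"exercises": {}, "performance": []}

    def save_performance(self, record):
        self._store["performance"].append(record)

    def load_exercise(self, name):
        return self._store["exercises"].get(name)


class RepetitionCounter:
    """Logic tier: derives performance information from accelerometer data."""
    def count(self, samples, threshold=1.0):
        # Toy stand-in for the paper's counting algorithm: count
        # upward crossings of a fixed acceleration threshold.
        reps, above = 0, False
        for v in samples:
            if v > threshold and not above:
                reps, above = reps + 1, True
            elif v <= threshold:
                above = False
        return reps


class ExerciseView:
    """Presentation tier: live feedback while the user exercises."""
    def feedback(self, reps):
        return f"Repetitions so far: {reps}"
```

In this sketch the exercise view and data view would both sit in the presentation tier, talking to the repetition counter and database manager respectively.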
2.2 Counting Algorithm
Figure 1: The steps of the counting algorithm: (a) Raw accelerometer data of a user performing ten arm curls with the iPhone strapped to the lower arm. (b) Acceleration coordinate with the largest standard deviation. (c) Data after simplification with the Douglas-Peucker algorithm and computation of the mean value and standard deviation of its sample points. The number of repetitions is computed as the number of cycles with sections below the lower threshold followed by a section above the upper threshold. The above graph represents 10 repetitions of arm curls.
additionally requiring that the cycle time (the distance between peaks and valleys) is above 0.5 seconds and moderately regular, i.e., very short and very long cycles are not counted. This restriction is acceptable since an interview with an exercise therapist confirmed that exercises are most useful when performed with a smooth, moderate motion.
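One possible reading of the pipeline above (axis selection, Douglas-Peucker simplification, threshold crossings, cycle-time filtering) can be sketched in Python. The simplification tolerance and the placement of the thresholds at one standard deviation around the mean of the raw signal are assumptions; the paper does not give numeric parameters.

```python
import math

def douglas_peucker(points, epsilon):
    """Polyline simplification; points is a list of (t, value) pairs."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # Perpendicular distance from (x0, y0) to the chord.
        d = abs(dy * x0 - dx * y0 + x2 * y1 - y2 * x1) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right

def count_repetitions(samples, dt, epsilon=0.1, min_cycle=0.5):
    """samples: (ax, ay, az) triples recorded every dt seconds.
    Returns the estimated number of repetitions."""
    def std(vals):
        m = sum(vals) / len(vals)
        return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

    # (b) Keep the acceleration axis with the largest standard deviation.
    signal = max(zip(*samples), key=std)
    mu, sigma = sum(signal) / len(signal), std(signal)
    lo, hi = mu - sigma, mu + sigma

    # (c) Simplify the chosen axis with Douglas-Peucker.
    pts = douglas_peucker([(i * dt, v) for i, v in enumerate(signal)], epsilon)

    # Count cycles: a section below the lower threshold followed by a
    # section above the upper one, rejecting cycles shorter than min_cycle.
    reps, below, last_t = 0, False, None
    for t, v in pts:
        if v < lo:
            below = True
        elif v > hi and below:
            if last_t is None or t - last_t >= min_cycle:
                reps += 1
                last_t = t
            below = False
    return reps
```

On a synthetic arm-curl-like signal (a sinusoid on one axis, near-constant values on the others) this correctly recovers the number of cycles, while jerky or irregular input produces the kinds of miscounts the paper reports.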
3 Results

3.1 Methodology
Overall, users were satisfied with the design and information content of the application. Users regarded the application as only moderately useful, but slightly more than half of the participants could imagine downloading and using the application, if available. Subsequently we presented the application at a public display at the university. The visitor feedback was overwhelmingly positive and several visitors were keen to buy the application on the Apple App Store.
4 Conclusion
We have presented a novel iPhone application assisting users with getting physically active by providing information on simple exercises and automatically recording exercise performance. A key contribution is a novel algorithm for analysing accelerometer data in order to detect the number of repetitions in an exercise performance.
A user study confirmed that the algorithm is satisfactorily accurate for simple smooth motions and high numbers of repetitions. Problems exist for complex motions, exercises with a very low number of repetitions, and exercises performed with jerky and irregular motions.
More work needs to be done to make the presented prototype useful in practice, and in particular to achieve behavioural change. The counting algorithm needs to be improved to make it work for a larger range of exercises, and to make it more stable with regard to low numbers of repetitions and irregular, jerky motions. Usability and motivation could be improved by adding voice activation and feedback. A controlled long-term study is necessary to measure behavioural change, such as more frequent or longer exercises when using the application.
References

American Psychological Association (2011), Stress in America findings. http://www.apa.org/news/press/releases/stress/national-report.pdf, last retrieved 25th August 2012.

Bravata, D. M., Smith-Spangler, C., Sundaram, V., Gienger, A. L., Lin, N., Lewis, R., Stave, C. D., Olkin, I. & Sirard, J. R. (2007), Using pedometers to increase physical activity and improve health: A systematic review, JAMA: The Journal of the American Medical Association 298(19), 2296-2304.

World Health Organization (2012), World health statistics 2012. http://www.who.int/gho/publications/world_health_statistics/EN_WHS2012_Full.pdf, last retrieved 25th August 2012.
{s.shahid@uvt.nl, o.mubin@uws.edu.au}
Abstract
Our research focuses on analysing how users perceive different mediums of advertisement on their mobile devices. Such advertisements are also called location-based advertisements (LBAs) as they relay brand and product information to mobile phones that are in the vicinity. We investigated two different ways of presenting marketing information (static vs. interactive). Our results clearly showed that interactive LBAs (clickable advertisements with additional information) were preferred to static LBAs.
Keywords: Location-based ads, mobile commerce
Results

Measure              Static   Pop-up   t-test         p
Trust                5.28     5.74     t(18) = 3.03   < .05
Informative          3.80     6.0      t(18) = 7.14   < .001
Purchase intention   3.92     5.60     t(18) = 3.16   < .01
Overall liking       5.57     6.61     t(18) = 9.03   < .001
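The reported values (t statistics with 18 degrees of freedom, i.e., 19 paired observations) are consistent with paired-samples t-tests comparing the two presentation conditions. A minimal pure-Python sketch of how such a statistic is computed; the data passed in below is illustrative only:

```python
import math

def paired_t(xs, ys):
    """Paired-samples t statistic and degrees of freedom (n - 1)
    for two equal-length lists of per-participant ratings."""
    assert len(xs) == len(ys)
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample variance of the differences (Bessel's correction).
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1
```

The resulting t would then be compared against the t distribution with the returned degrees of freedom to obtain the p-values reported above.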
Abstract
User experience (UX) is gaining more and more relevance for the design of interactive systems, but the real character, drivers, and influences of UX have not yet been sufficiently described. Different theoretical models try to explain UX in more detail, but essential definitions are still missing regarding influencing factors such as temporal aspects. UX is increasingly seen as a dynamic phenomenon that can be subdivided into different phases (Pohlmeyer, 2011; Karapanos, Zimmerman, Forlizzi, & Martens, 2009; ISO 9241-210). To gain more knowledge about temporal changes in UX, an experiment was conducted examining the influence of exposure on the evaluation of aesthetics as one hedonic component of UX. The study focused on a pre-use situation including an anticipated experience of the user; no interaction took place. We found that repeated mere exposure (Zajonc, 1969) significantly influences the evaluation of aesthetics over time.
Keywords: Mere-Exposure Effect, Dynamics of User
Experience, Evaluation, Aesthetics.
Author Index
Amor, Robert, 73
Bowen, Judy, 81
Chow, Jonathan, 73
Churcher, Clare, 23
Daradkeh, Mohammad, 23
Dekeyser, Stijn, 33
Dhillon, Jaspaljeet Singh, 53
Feng, Haoyang, 73
Greeff, Christopher, 127
Irlitti, Andrew, 63
Lutteroth, Christof, 53, 91, 111
MacDonald, Bruce, 127
Maher, Mary Lou, 43
Marks, Stefan, 121, 123
McKinnon, Alan, 23
Mehrabi, Mostafa, 91
Mubin, Omar, 129
Walsh, James A., 3
Watson, Richard, 33
Wellington, Robert, 121, 123
Wünsche, Burkhard C., iii, 53, 73, 91, 111, 127
ISSN 1445-1336
Listed below are some of the latest volumes published in the ACS Series Conferences in Research and
Practice in Information Technology. The full text of most papers (in either PDF or Postscript format) is
available at the series website http://crpit.com.
Volume 113 - Computer Science 2011
Edited by Mark Reynolds, The University of Western Australia, Australia. January 2011. 978-1-920682-93-4.
Security