Haptic Navigation Aids for the Visually Impaired

Daniel Innala Ahlmark

ISSN 1402-1544
ISBN 978-91-7583-605-8 (print)
ISBN 978-91-7583-606-5 (pdf)

Industrial Electronics
Supervisors:
Kalevi Hyyppä, Jan van Deventer, Ulrik Röijezon
European Union
Structural Funds
Printed by Luleå University of Technology, Graphic Production 2016
Luleå 2016
www.ltu.se
To my mother
Abstract
Assistive technologies have improved the situation in society for visually impaired individuals. The rapid development of the last few decades has made both work and education much more accessible. Despite this, moving about independently remains a major challenge, one that at worst can lead to isolation and a decreased quality of life.
To aid with this task, there are devices that help the user avoid obstacles (most notably the white cane), as well as navigation aids such as accessible GPS devices. The white cane is the quintessential aid and is much appreciated, but solutions that try to convey distance and direction to obstacles farther away have not made a big impact among the visually impaired. One
fundamental challenge is how to present such information non-visually. Sounds and
synthetic speech are typically utilised, but feedback through the sense of touch (haptics)
is also used, often in the form of vibrations. Haptic feedback is appealing because it
does not block or distort sounds from the environment that are important for non-visual
navigation. Additionally, touch is a natural channel for information about surrounding
objects, something the white cane so successfully utilises.
This doctoral thesis explores the question above by presenting the development and
evaluations of different types of haptic navigation aids. The goal has been to attain a
simple user experience that mimics that of the white cane. The idea is that a navigation
aid able to do this should have a fair chance of being successful on the market. The
evaluations of the developed prototypes have primarily been qualitative, focusing on judging the feasibility of the solutions. The prototypes were evaluated at a very early stage, with visually impaired study participants.
Results from the evaluations indicate that haptic feedback can lead to solutions that
are both easy to understand and use. Since the evaluations were done at an early stage in
the development, the participants have also provided valuable feedback regarding design
and functionality. They have also noted many scenarios throughout their daily lives
where such navigation aids would be of use.
The thesis documents these results, together with ideas and thoughts that have emerged and been tested during the development process. This information contributes to the body of knowledge on different means of conveying information about surrounding objects non-visually.
Contents
Abstract v
Contents vii
Acknowledgements xi
Summary of Included Papers xiii
List of Figures xvii
Part I 1
Chapter 1 – Introduction 3
1.1 Overview – Five Years of Questions . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 The Beginning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Next Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 The Second Prototype . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.4 The LaserNavigator . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.5 Two Trials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.6 The Finish Line? . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Aims, Contributions and Delimitations . . . . . . . . . . . . . . . . . . . 10
1.3 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 2 – Background 13
2.1 Visual Impairments and Assistive Technologies . . . . . . . . . . . . . . . 13
2.1.1 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Perception, Proprioception and Haptics . . . . . . . . . . . . . . . . . . . 15
2.2.1 Spatial Perception . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.2 The Sense of Touch and Proprioception . . . . . . . . . . . . . . . 16
2.2.3 Haptic Feedback Technologies . . . . . . . . . . . . . . . . . . . . 17
Chapter 3 – Related Work 19
3.1 Navigation Aids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 GPS Devices and Smartphone Applications . . . . . . . . . . . . . 19
3.1.2 Devices Sensing the Surrounding Environment . . . . . . . . . . . 20
3.1.3 Sensory Substitution Systems . . . . . . . . . . . . . . . . . . . . 21
3.1.4 Prepared Environment Solutions . . . . . . . . . . . . . . . . . . . 23
3.1.5 Location Fingerprinting . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Scientific Studies Involving Visually Impaired Participants . . . . . . . . 24
Chapter 4 – The Virtual White Cane 27
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2.1 Haptic Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Field Trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 5 – LaserNavigator 33
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.3.1 Additional Features and Miscellaneous Notes . . . . . . . . . . . . 36
5.3.2 Manual Length Adjustment . . . . . . . . . . . . . . . . . . . . . 36
5.4 Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.4.1 Simple Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.4.2 Complex Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.5 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.6 Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Chapter 6 – Discussion 41
Chapter 7 – Conclusions 45
References 47
Part II 51
Paper A – Presentation of Spatial Information in Navigation Aids
for the Visually Impaired 53
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3 Non-visual Spatial Perception . . . . . . . . . . . . . . . . . . . . . . . . 57
4 Navigation Aids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.1 Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Auditory Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder 67
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3 The Virtual White Cane . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.2 Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.3 Dynamic Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . 76
4 Field Trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Paper C – An Initial Field Trial of a Haptic Navigation System for
Persons with a Visual Impairment 83
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
1.1 Delimitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.2 Test Set-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.3 Field trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.4 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.5 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.1 Findings from the interviews . . . . . . . . . . . . . . . . . . . . . 90
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Paper D – A Haptic Navigation Aid for the Visually Impaired – Part
1: Indoor Evaluation of the LaserNavigator 97
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.2 Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.3 Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.4 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.5 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.1 Daniel’s Comments . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Paper E – A Haptic Navigation Aid for the Visually Impaired – Part
2: Outdoor Evaluation of the LaserNavigator 115
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.2 Trial Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.3 Observations And Interviews . . . . . . . . . . . . . . . . . . . . . 121
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.2 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1 Daniel’s Comments . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Paper F – Developing a Laser Navigation Aid for Persons with Visual
Impairment 129
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
2 Navigation Aid Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3 Laser Navigators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
3.1 LaserNavigator Evaluations . . . . . . . . . . . . . . . . . . . . . 136
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.1 Intuitive Navigation Aid . . . . . . . . . . . . . . . . . . . . . . . 141
4.2 Sensor Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.3 System Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4 Three Research Paths . . . . . . . . . . . . . . . . . . . . . . . . 143
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Acknowledgements
This doctoral thesis describes five years of work with navigation aids for visually impaired
individuals. The work has been carried out at the Department of Computer Science,
Electrical and Space Engineering at Luleå University of Technology. I wish to thank
Centrum för medicinsk teknik och fysik (CMTF) for financial support, provided through
the European Union.
The multidisciplinary nature of the project has allowed me to work with many differ-
ent people with diverse backgrounds. This has been a great catalyst for creativity, and
has made the work much more fun, interesting and meaningful.
First and foremost, I want to thank my principal supervisor Kalevi Hyyppä, whose
great skill, knowledge and creativity have been key assets for the project from start to
finish. For me, his ever-present support and assistance have been a large comfort in a
world that, to a new doctoral student, can at times be both harsh and confusing. I would
also like to thank my assistant supervisors: Håkan Fredriksson, Jan van Deventer and
Ulrik Röijezon. They have brought fresh views to the project and have helped make the
results both broader in scope and richer in detail.
Further, Maria Prellwitz, Jenny Röding and Lars Nyberg were instrumental in the
work with the first evaluation and its associated article; a great experience and learning
process. Maria has continued to aid the qualitative analysis process in the later evalua-
tions. I am grateful for that as the articles are far more interesting now than they would
otherwise have been.
I am also grateful to Mikael Larsmark, Henrik Mäkitaavola and Andreas Lindner for
their work on the LaserNavigator. Further, I would like to acknowledge the support of
teachers and other staff at the university who have helped me on the sometimes winding
path that started 11 years ago and now comes to an end in the form of this dissertation.
Thank you!
Summary of Included Papers
Paper A – Presentation of Spatial Information in Navigation
Aids for the Visually Impaired
Daniel Innala Ahlmark and Kalevi Hyyppä
Purpose: The purpose of this article is to present some guidelines on how different
means of information presentation can be used when conveying spatial information non-
visually. The aim is to further the understanding of the qualities navigation aids for
visually impaired individuals should possess.
Design/methodology/approach: A background in non-visual spatial perception is
provided, and existing commercial and non-commercial navigation aids are examined
from a user interaction perspective, based on how individuals with a visual impairment
perceive and understand space.
Findings: The discussions on non-visual spatial perception and navigation aids lead to
some user interaction design suggestions.
Originality/value: This paper examines navigation aids from the perspective of non-
visual spatial perception. The presented design suggestions can serve as basic guidelines
for the design of such solutions.
Published in: Proceedings of the 2013 Workshop on Advanced Robotics and its So-
cial Impacts, Tokyo, Japan.

Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder
In its current form, the white cane has been used by visually impaired people for al-
most a century. It is one of the most basic yet useful navigation aids, mainly because of
its simplicity and intuitive usage. For people who have a motion impairment in addition
to a visual one, requiring a wheelchair or a walker, the white cane is impractical, leading
to human assistance being a necessity. This paper presents the prototype of a virtual
white cane using a laser rangefinder to scan the environment and a haptic interface to
present this information to the user. Using the virtual white cane, the user is able to
“poke” at obstacles several meters ahead without physical contact with the obstacle.
By using a haptic interface, the interaction is very similar to how a regular white cane
is used. This paper also presents the results from an initial field trial conducted with six
people with a visual impairment.

Paper C – An Initial Field Trial of a Haptic Navigation System for Persons with a Visual Impairment
Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi Hyyppä
Purpose: The purpose of the presented field trial was to describe conceptions of feasi-
bility of a haptic navigation system for persons with a visual impairment.
Design/methodology/approach: Six persons with a visual impairment who were
white cane users were tasked with traversing a predetermined route in a corridor en-
vironment using the haptic navigation system. To see whether white cane experience
translated to using the system, the participants received no prior training. The proce-
dures were video-recorded, and the participants were interviewed about their conceptions
of using the system. The interviews were analyzed using content analysis, where induc-
tively generated codes that emerged from the data were clustered together and formulated
into categories.
Findings: The participants quickly figured out how to use the system, and soon adopted
their own usage technique. Despite this, locating objects was difficult. The interviews
highlighted the desire to be able to feel at a distance, with several scenarios presented
to illustrate current problems. The participants noted that their previous white cane
experience helped, but that it nevertheless would take a lot of practice to master using
this system. The potential for the device to increase security in unfamiliar environments
was mentioned. Practical problems with the prototype were also discussed, notably the
lack of auditory feedback.
Originality/value: One novel aspect of this field trial is the way it was carried out.
Prior training was intentionally not provided, which means that the findings reflect im-
mediate user experiences. The findings confirm the value of being able to perceive things
beyond the range of the white cane; at the same time, the participants expressed concerns
about that ability. Another key feature is that the prototype should be seen as a navi-
gation aid rather than an obstacle avoidance device, despite the interaction similarities
with the white cane. As such, the intent is not to replace the white cane as a primary
means of detecting obstacles.
Paper D – A Haptic Navigation Aid for the Visually Impaired –
Part 1: Indoor Evaluation of the LaserNavigator
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan van
Deventer, Kalevi Hyyppä
To be submitted.

Paper E – A Haptic Navigation Aid for the Visually Impaired – Part 2: Outdoor Evaluation of the LaserNavigator
To be submitted.
Negotiating the outdoors can be a difficult challenge for individuals who are visually
impaired. The environment is dynamic, which at times can make even the familiar route
unfamiliar. This article presents the second evaluation of the LaserNavigator, a
newly developed prototype built to work like a “virtual white cane” with an easily ad-
justable length. The user can quickly adjust this length from a few metres up to 50 m.
The intended use of the device is as a navigation aid, helping with perceiving distant
landmarks needed to e.g. cross an open space and reach the right destination. This sec-
ond evaluation was carried out in an outdoor environment, with the same participants
who partook in the indoor study, described in part one of the series. The participants
used the LaserNavigator while walking a rectangular route among a cluster of buildings.
The walks were filmed, and after the trial the participants were interviewed about their
conceptions of usability of the device. Results from observations and interviews show
that while the device is designed with the white cane in mind, one can learn to see the
device as something different. An example of this difference is that the LaserNavigator
enables keeping track of buildings on both sides of a street. The device was seen as most
useful in familiar environments, and in particular when crossing open spaces or walking
along e.g. a building or a fence. The prototype was too heavy, and all participants requested some feedback on how they were pointing the device, as they all had difficulties holding it horizontally.

Paper F – Developing a Laser Navigation Aid for Persons with Visual Impairment
To be submitted.
This article presents the development of a new navigation aid for visually impaired per-
sons (VIPs) that uses a laser range finder and electronic proprioception to convey the
VIPs’ physical surroundings. It is denominated LaserNavigator. In addition to the tech-
nical contributions, an essential result is a set of reflections leading to what an “intuitive”
handheld navigation aid for VIPs could be. These reflections are influenced by field trials
in which VIPs have evaluated the LaserNavigator indoors and outdoors. The trials di-
vulged technology-centric misconceptions regarding how VIPs use the device to sense the
environment and how that physical environment information should be provided back to
the user. The set of reflections relies on a literature review of other navigation aids, which provides interesting insights into what is possible when combining different concepts.
List of Figures
1.1 The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder,
used in the first prototype. . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 A picture of the second prototype: the LaserNavigator. . . . . . . . . . . 8
3.1 The UltraCane, a white cane augmented with ultrasonic sensors and haptic
feedback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic
feedback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 This figure shows the virtual white cane on the MICA (Mobile Internet
Connected Assistant) wheelchair. . . . . . . . . . . . . . . . . . . . . . . 28
4.2 The Novint Falcon haptic display. . . . . . . . . . . . . . . . . . . . . . . 29
4.3 A simple environment (a) is scanned to produce data, plotted in (b). These
data are used to produce the model depicted in (c). . . . . . . . . . . . . 30
5.1 A picture of the latest version of the LaserNavigator. The primary com-
ponents are the laser rangefinder (1), the ultrasound sensor (2), the loud-
speaker (3), and the button under a spring (4) used in manual length
adjustment mode to adjust the “cane length”. . . . . . . . . . . . . . . . 34
5.2 Basic architecture diagram showing the various components of the Laser-
Navigator and how they communicate with each other. . . . . . . . . . . 35
B.1 The virtual white cane. This figure depicts the system currently set up on
the MICA wheelchair. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
B.2 The Novint Falcon, joystick and SICK LMS111. . . . . . . . . . . . . . . 73
B.3 The X3D scenegraph. This diagram shows the nodes of the scene and the
relationship among them. The transform (data) node is passed as a refer-
ence to the Python script (described below). Note that nodes containing
configuration information or lighting settings are omitted. . . . . . . . . . 74
B.4 The ith wall segment, internally composed of two triangles. . . . . . . . . 75
B.5 The virtual white cane as mounted on a movable table. The left hand
is used to steer the table while the right hand probes the environment
through the haptic interface. . . . . . . . . . . . . . . . . . . . . . . . . . 77
B.6 The virtual white cane in use. This is a screenshot of the application
depicting a corner of an office, with a door being slightly open. The user’s
”cane tip”, represented by the white sphere, is exploring this door. . . . . 79
C.1 The prototype navigation aid mounted on a movable table. The Novint
Falcon haptic interface is used with the right hand to feel where walls and
obstacles are located. The white sphere visible on the computer screen is a
representation of the position of the grip of the haptic interface. The grip
can be moved freely as long as the white sphere does not touch any obsta-
cle, at which point forces are generated to counteract further movement
”into“ the obstacle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
D.1 A photo of the LaserNavigator, showing the laser rangefinder (1), ultra-
sound sensor (2) and the loudspeaker (3). . . . . . . . . . . . . . . . . . . 100
D.2 The two reflectors (spherical and cube corner) used alternately to improve
the body–device measurements. . . . . . . . . . . . . . . . . . . . . . . . 100
D.3 A picture of the makeshift room as viewed from outside the entrance door. 103
D.4 One of the researchers (Daniel) trying out the trial task. The entrance
door is visible in the figure. . . . . . . . . . . . . . . . . . . . . . . . . . 104
D.5 Movement tracks for each participant and attempt, obtained by the reflec-
tor markers on the sternum. The entrance door is marked by the point
labelled start, and the target door is the other point, door. Note that the
start point appears inside the room because the motion capture cameras
were unable to see part of the walk. Additionally, attempt 3 by partici-
pant B does not show the walk back to the entrance door due to a data
corruption issue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
D.6 This figure shows the three attempts of participant B, with the additional
red line indicating the position of the LaserNavigator. Note that attempt
3 is incomplete due to data corruption. . . . . . . . . . . . . . . . . . . . 107
E.1 A picture of the LaserNavigator, showing the laser rangefinder (1), the
ultrasound sensor (2), the loudspeaker (3), and the button under a spring
(4) used for adjusting the “cane length”. . . . . . . . . . . . . . . . . . . 119
E.2 The tactile model used by the participants to familiarise themselves with
the route. The route starts at (1) and is represented by a thread. Us-
ing the walls of buildings (B1) and (B2) as references, the participants
walked towards (2), where they found a few downward stairs lined by a
fence. Turning 90 degrees to the right and continuing, following the wall of
building (B2), the next point of interest was at (3). Here, another fence on
the right side could be used as a reference when taking the soft 90-degree
turn. The path from (3) to (6) is through an alley lined with sparsely
spaced trees. Along this path, the participants encountered the two simu-
lated crossings (4) and (5), in addition to the bus stop (B5). At (6) there
was a large snowdrift whose presence guided the participants into the next
90-degree turn. Building B4 was the cue to perform yet another turn, and
then walk straight back to the starting point (1), located just past the end
of (B3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
E.3 This figure shows three images captured from the videos. From left to
right, these were captured: just before reaching (6); just before (5), with
one of the makeshift traffic light poles visible on the right; between (3)
and (4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
F.1 Indoor evaluation. Motion capture cameras at the top with unique reflec-
tive identifier on chest, head, LaserNavigator and white cane. Door 3 is
closed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
F.2 Paths (black) taken by the three participants (one per row) over three
indoor trials. The red line shows how they used the LaserNavigator. . . . 138
F.3 Model of the outdoor trial environment. . . . . . . . . . . . . . . . . . . 140
Part I
Chapter 1
Introduction
After some more inquiry I had a clearer perspective, and knew that doctoral studies were something I would be interested in pursuing. I had figured out that the idea was to work on some project, focusing on some very specific problem, solving it, and writing a lot about it. Thus, when I graduated from the master of science programme, I thought I had a pretty good idea of what lay ahead. I did not.
While asking around at the department, I soon met my principal supervisor-to-be, and came to hear of a project that immediately sparked great interest in me. This
project was called the Sighted Wheelchair, and the idea was to enable visually impaired
individuals to drive a powered wheelchair using the sense of touch to detect obstacles.
At this time, the project had already started, and an initial proof-of-concept system had
been developed and was just about to be tested. The system scanned its surrounding
environment with a laser rangefinder and allowed the user to perceive these scans by
touch. My first connection to the project was as a tester of that prototype.
One day while walking through the campus corridors I passed by a team doing some-
thing curious – not an unusual encounter at a university. Then from behind me I heard
someone call “excuse me” after which followed a conversation ending with me enthusi-
astically saying something along the lines of “I would love to”. This was the first time
I encountered a haptic interface (the Novint Falcon), and the experience was amazing.
Here was a device through which you could experience computer generated content; a
device that was like a computer screen for the hand. The Falcon was originally mar-
keted as a gaming device, although it seemed not to cause the great excitement in that
market one might have initially expected. Shortly after my encounter with the Falcon,
in February of 2011, I joined the project as a research engineer with the task of further
developing the software.
Figure 1.1: The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder, used in the first prototype.

With great eagerness I started looking for the pieces I needed for the software puzzle. That picturesque metaphor hints that yet again this was a case of subproblem management. The laser rangefinder would continually scan the environment, and this needed to be reflected both in a graphical model that was displayed on a screen, and in a haptic model that the user would probe with the Falcon. The biggest challenge was to find a good way to present a haptic model that would constantly be changing. A situation can occur where, as the user pushes the handle of the Falcon towards a dynamically changing object, the object changes in a way that leaves the probing position inside the object rather than on its outside surface. This is a known issue with these kinds of haptic interfaces, and is at its core a consequence of one intriguing idea: the user is using the same modality (i.e. touch) for both input (moving the handle) and output (having the
user’s hand by remaining stationary. Fortunately, such directed force feedback is not the
only way to provide haptic stimuli, but would any other kind be comparable, or only a
bad compromise? Also, a laser rangefinder able to automatically scan the room as used
on the wheelchair would be far too bulky for a portable device, which points to another
problem: how, and what, to measure? At this point, the number of new unresolved
issues had grown to the extent where they could easily make up another doctoral project
or two. It would seem that a major part of a doctoral student’s work is to pose new
and relevant questions. The hierarchical nature of things strikes again, making sure that
every completed puzzle is shattered – broken down into much smaller pieces than before.
ranges while still retaining the ability to discriminate closer objects? Another physicist
comes to mind here, Heisenberg, as it seems we would have to choose one at the cost of the
other. Commercial navigation aids such as the UltraCane and MiniGuide (see 3.1) have a button or switch that allows the user to set a maximum distance the device reacts to (think “virtual cane length”), but we opted for a completely different approach.
An ultrasound sensor is mounted on the device, facing the user. This continually
measures the distance from the device to the closest point on the user’s body. Instead of
using a button or switch to set a maximum distance, we could now use the body–device
measurement, which meant that the user could vary said length simply by moving the
arm closer to or further away from their body. A physical analogy would be a very long
telescopic white cane whose length would automatically be varied when the user moves
it further away from or closer to their body. This way, when the user wants to examine
close objects, they hold the device close, whereas if they want to detect distant buildings
for example, they would reach out far with the device.
Having this additional parameter raised the question of how to relate it to the “cane
length”. A simple solution that turned out to be quite acceptable indoors is to multiply
the body–device distance by 10, meaning that if the user holds the device 50 cm out from
their body, they would have a 5 m long “virtual cane”.
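As a sketch, this mapping is just a multiplication (the function name and the choice of centimetres are illustrative, not taken from the actual firmware):

```python
def virtual_cane_length_cm(body_device_cm: float, scale: float = 10.0) -> float:
    """Map the measured body-device distance to a 'virtual cane' length.

    With the indoor scale factor of 10, holding the device 50 cm from
    the body yields a 5 m (500 cm) virtual cane.
    """
    return body_device_cm * scale

print(virtual_cane_length_cm(50.0))  # 500.0, i.e. a 5 m cane
```

The same function covers the outdoor mode described later, simply by passing a larger scale factor.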
Having the body–device distance provides another interesting opportunity. Instead
of trying to convey the actual device–object distance with vibrations, we could let the
user infer that based on how far they held out their arm. Similarly, when hitting an
object with a white cane, the distance to the object is established by knowing the length
of the cane and how far away from the body it is held. This idea was intriguing, and
prompted me to look further into the way human beings know where their limbs are
without looking at them, known as proprioception.
The experiments led me to several alternatives which I found feasible. Those could be
divided into two categories: simple and complex feedback. In simple feedback mode, the
vibrations only signal the presence of an object, whereas complex mode tries to convey
the distance to said object as well. I personally prefer the elegance of simple feedback,
because it behaves very much like a physical cane. In this case, the distance to the object
is inferred from the length of the cane and how far out it is held.
better experience.
I implemented the different feedback techniques I had previously developed, and
through testing concluded that I was still in favour of the simpler alternatives. My
justification for this is that simple haptic feedback is intuitive, something we are used to.
Voice guidance from GPS devices is similarly intuitive, as we can draw upon experi-
ences gained from communicating with fellow human beings. Note that simple feedback
techniques do not necessarily equate to a rich experience. Feedback can be made highly
complex, providing a lot more information, but at the cost of requiring much more train-
ing to use efficiently. If we accept a long training period, it may seem that a complex
solution is always better, but we need to look at other factors such as enjoyment. Is
the system fun to use? As a thought experiment, consider this art metaphor. Imagine
looking at a beautiful painting. Now, we can use a camera to capture that painting with
accurate colours and very high resolution. Then we could take this information and con-
vey it by a series of audio frequencies, corresponding to the colours. This way, we have
reproduced the painting, but it probably does not sound as beautiful as it looks. This
is not because the camera is too poor; rather, the impressions from our senses have
evolved to be far more than the sensory information itself. Such a sensory substitution
device would make a beautiful painting accessible to someone who has never seen, but
there likely is better “music” out there.
the automatic mode leads to a compressed depth perception, which is manageable with
practice, but is not intuitive. The modified LaserNavigator has a button, which when
pressed will set the length based on the body–device distance. Additionally, a certain
number of feedback “ticks” are given to tell the user roughly how long the “virtual cane”
is.
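A minimal sketch of this manual mode follows. The function names, the assumption that the frozen length uses the same scale factor as automatic mode, and the tick granularity (one tick per metre) are mine; the thesis does not specify these values.

```python
def set_cane_length_cm(body_device_cm: float, scale: float = 10.0) -> float:
    """On button press, freeze the 'virtual cane' length based on the
    current body-device distance (instead of tracking it continually)."""
    return body_device_cm * scale

def feedback_ticks(cane_length_cm: float, cm_per_tick: float = 100.0) -> int:
    """Number of feedback 'ticks' telling the user roughly how long the
    cane is; one tick per metre is an assumed granularity."""
    return max(1, round(cane_length_cm / cm_per_tick))

length = set_cane_length_cm(30.0)  # device held 30 cm out -> 3 m cane
print(feedback_ticks(length))      # 3
```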
With the improved LaserNavigator, it was soon time for the outdoor trial. The
participants who had performed the indoor trial partook in this new test, which consisted
of walking a closed path among a cluster of buildings on campus. Before the actual walks,
the participants got some time to familiarise themselves with the environment with the
help of a tactile model. All participants liked the changes made to the LaserNavigator,
and one participant in particular really enjoyed the experience and the ability to use a
“very long cane” to keep to the path. One aspect which I find intriguing surfaced during
this trial, namely the distinction between an obstacle avoidance device and a navigation
aid, and how these two kinds of devices are linked. This is something we have given
much thought to during the project. The distinction becomes blurred when having access to
great range, where some objects are used as a guiding reference, and would not otherwise
be seen as an obstacle to go to and poke with the white cane. From both observations
and interviews from this latest trial, it appears that the participants went through this
kind of thought process. At first, the device was seen as a “new kind of white cane”,
with mostly limitations; later, it was reinterpreted as a navigation aid, at which point
possibilities surfaced. Given the design
choice of trying to mimic the interaction with a white cane, it is perhaps not surprising
that the participants thought of the device in terms of a cane, with implied limitations.
The fact that this familiarity can be utilised is encouraging as it can lead to an easier-
to-learn device. The challenge then is to go beyond the familiar concept and realise that
the device is something more – something different from a white cane.
athletics class at school when running around the oval track. I remember times when,
exhausted, I’d reached the finish line and heard, “and you thought you were done?”
Let us hope that in this case, the track is an inward spiral.
The main contributions of this thesis are in the field of user interaction, more specif-
ically on the problem of how to convey spatial information non-visually, primarily to
visually impaired individuals. While this thesis focuses on navigation aids for the vi-
sually impaired, they are not the only group that benefits from this work. Non-visual
interaction focused towards navigation is of interest to e.g. firefighters as well, who can
end up in situations where they have to find their way around in a smoke-filled build-
ing. In addition, advances in non-visual interaction in general is useful for anyone on
the move. Oulasvirta et al. [2] note that when mobile, cognitive resources are allocated
to monitor the react to contextual events, leading to interactions being done in short
bursts with interruptions inbetween. In particular, vision is typically occupied with nav-
igation and obstacle avoidance (not to mention driving a car), thus using a mobile device
simultaneously may lead to accidents.
While the focus for this work has been on haptic solutions, other sensory channels
(notably audition) are also relevant for navigation aids. Haptics is an appealing choice
for the specific task of conveying object location information, and audition has important
drawbacks in this regard (see chapter 2 and paper A).
In numerous places throughout this text, both commercial and prototype navigation
aids are mentioned. These do not form an exhaustive list, but are chosen based on the
novelty they bring to the discussion, be it an interaction or functionality aspect.
1.3 Terminology
Terms for visual impairment, as well as for disabilities in a more general sense, are
many and subject to change over time. For example, the term handicapped might
be offensive today, despite the fact that it originated as a replacement for
other terms. Throughout this text, visual impairment and visually impaired are used.
They refer specifically to the underlying problem, the impairment, and this may in turn
be the reason for a disability.
A further challenge is classifying degrees of visual impairment. Terms such as blind,
low vision, partially sighted and mobility vision are troublesome as they are not clearly
defined. Such definitions are not easily established even if objective eye measurements are
used. For this thesis, precise judgement of visual ability (acuity) is not important, but
the categorisation is. In a navigation context, the key piece of information is how vision
aids the navigation task. A person who can see some light has an advantage over a person
unable to see light, and a person able to discern close objects has further advantages.
Throughout this thesis and unless otherwise stated, visually impaired is used to denote an
individual or group of individuals with a disadvantage in a navigation context compared
to what is considered normal sight.
1.4 Thesis Structure
Part I
• Chapter 1 contains a personal chronicle of events, some notes on terminology, and
the scope of the thesis.
• Chapter 4 describes the Virtual White Cane and the conducted evaluation.
• Chapter 6 discusses results and the research questions formulated in this chapter.
Part II
• Paper A discusses non-visual spatial perception in a navigation context, and pro-
poses some interaction design guidelines.
• Paper D is the first part in a series of two about the LaserNavigator. This paper
focuses on the first indoor trial.
• Paper E is the second part regarding evaluating the LaserNavigator, this time in
an outdoor setting.
also very limited. Because of its short range, it does not aid significantly in navigation.
2.1.1 Navigation
Navigating independently in unfamiliar environments is a challenge for visually impaired
individuals. The difficulty of going to new places independently might decrease partic-
ipation in society and can have a negative impact on personal quality of life [5].
The degree to which this affects a certain individual is, however, a very personal matter.
Some are adventurous and quite successful in overcoming many challenges, while others
might not even wish to try. The point is that people who are visually impaired are at a
disadvantage to begin with.
The emphasis on unfamiliar environments is intentional, as it is possible to learn how
to negotiate well-known environments with confidence and security. Even so, the world
is a dynamic place, and some day the familiar environment might have changed in such
a way as to be unfamiliar. As an example, this happens in areas that have a lot of snow
during the winters.
Navigation is difficult without sight as the bulk of cues necessary for the task are
visual in nature. This is especially true outdoors, where useful landmarks include specific
buildings and street signs. Inside buildings there are a lot of landmarks that are available
without sight, such as the structure of the building (walls, corridors, floors), changes in
floor material and environmental sounds. Even so, if the building is unfamiliar, any signs
and maps that may be found inside are usually not accessible without sight.
There are two parts to the navigation problem: firstly, the current position needs to be
known; secondly, the way to go. There are various ways to identify the current position,
but one way to think of them is as fingerprints. A location is identified by some
unique feature, such as a specific building nearby. Without sight, it is usually difficult to
tell a building from any other, and so other landmarks obtained through local exploration
may be necessary to establish the current location. The next problem, knowing where
to go, can then be described as knowing how to move through a series of locations to
reach the final location. This requires relating locations to one another in space. Vision
is excellent at doing this because of its long range. It is often possible to directly see the
next location. This is not possible without sight, at least not directly. The range of touch
is too limited, while sound, although having a much greater range, does not often provide
unique enough fingerprints of locations. The solution to this problem is to use one’s own
movements to relate locations to one another in space. Unfortunately, human beings are
not very good at determining their position solely based on their own movements [6].
Without vision to correct for this inaccuracy, visually impaired individuals must instead
have many identifiable locations close to each other. Consider the task of getting from a
certain building to another (visible) building. With sight there is usually no need for
any intermediate steps between them. In contrast, the same route without sight will
likely consist of multiple landmarks (typically intersections and turns). Additionally, a
means to avoid obstacles along the way is necessary.
The problem of obstacle avoidance is closely related to the navigation problem. Vision
solves both by being able to look at distant landmarks as well as close-by obstacles. The
white cane, on the other hand, is an obstacle avoidance device working at close proximity
to the user. An obstacle avoidance device which possesses a great reach could address
this issue, as well as aid in navigation. The prototypes presented in this thesis provide
an extended reach, limited only by the specifications of the range sensors.
To fill the gap that the missing vision creates, audition and touch are used. Audition is
surprisingly capable, and some people become very proficient using it (see e.g. [10]). Note
that audition is not only used to judge the position of objects that make sounds, but also
silent objects. This ability, often referred to as human echolocation, can provide some of
the missing information about the environment, a typical example being knowing where
nearby walls are.
The presence of a large piece of furniture in a room can be inferred from the way
its physical properties affect sounds in the room, but aside from its presence, there is
usually nothing auditory showing that it is a bookcase. Many things can be logically
inferred, but to get a direct experience it may be necessary to use the sense of touch.
While audition can provide some of the large-scale spatial knowledge, touch can give
the details about objects. Note that unlike vision, exploring a room by touch implies
walking around in the room, which means that the relationship among objects has to be
maintained by this self-movement.
to get a grasp of the position and size of an object, it is through those mechanoreceptors
one can perceive the texture and shape of the object. The next section discusses the
technologies that make use of these physiological systems: haptics.
the features of the Trekker family of products mentioned above. Another such solution
that has generated lots of attention recently is BlindSquare [20]. The surge of interest
in this app may be due to the fact that unlike Ariadne and typical GPS solutions,
BlindSquare uses crowdsourced data from OpenStreetMap and FourSquare. The use of
these services makes the app into a “Wikipedia for maps” where user contribution is key
to success. This overcomes one of the fundamental limitations of most GPS systems:
the use of static data. Additionally, BlindSquare is trying to overcome the limitations
of using GPS indoors by instead placing Bluetooth beacons with relevant information
throughout the building. The team has demonstrated this usage in a shopping centre, where
the beacons contain information about the stores and other relevant landmarks such as
escalators and elevators. BlindSquare and other similarly connected solutions have large
potential, but users must understand the implications of open data.
Figure 3.1: The UltraCane, a white cane augmented with ultrasonic sensors and haptic feedback.
the handle come to a stop as they try to move it “into” an obstacle, much like using a
white cane.
Figure 3.2: A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic feedback.
was commercialised as BrainPort [26]. Despite seemingly incredible results, we can pose
the question of why these solutions have not taken off. Lenay et al. [27] wrote on this subject.
They further note that while sensory substitution is a good term from a publicity
and marketing perspective, it unfortunately is misleading in many ways. One issue the
authors raise is whether one can properly call it substitution. The term seems to imply
that a simple transformation of stimuli from one sense to another can bring with it all
3.1. Navigation Aids 23
the abilities and sensory awareness acquired from a lifetime’s experience with that sense.
The authors write:
“Certainly, these devices made it possible to carry out certain tasks which
would otherwise have been impossible for them. However, this was not the
fundamental desire which motivated the blind persons who lent themselves to
these experiments. A blind person can well find personal fulfilment irrespec-
tive of these tasks for which vision is necessary. What a blind person who
accepts to undergo the learning of a coupling device is really looking for, is
rather the sort of knowledge and experience that sighted persons tell him so
much about: the marvels of the visible world. What the blind person hopes
for, is the joy of this experiential domain which has hitherto remained beyond
his ken.” — Lenay et al. [27]
A much more complete navigation aid can be created if the environment in which the
user is supposed to navigate is involved in the process. The first impression of such
systems might be that they are proof of concept implementations rather than practical
solutions, but from a pervasive computing perspective where sensors and microcomputers
are ubiquitous, the idea becomes more plausible.
An early example of this kind of system is Drishti [28]. It uses a wearable computer
with speech input and output to communicate with the user, who can ask questions and
give commands concerning the environment in which the system is operating. The system
uses GPS to locate the user outdoors, and is connected to a server through a wireless
network. This server has a detailed geographical database on surrounding buildings and
other relevant navigation information. Drishti also works similarly indoors, but using
an ultrasound positioning system with beacons placed around the environment. This,
together with a detailed database, enables the system to be queried for the location of a
piece of furniture, for example.
Radio Frequency Identification (RFID) technology has also been utilised. An example
is the robot guide RG by Kulyukin et al. [29], which uses RFID tags scattered throughout
an indoor environment to guide the user. Of note is that in this case, true to its name,
the robot guides the user, and it does so using a potential field (PF) algorithm. Such
algorithms work by associating each known obstacle with a repulsive field, and the target
(goal) with an attractive field. The navigator, RG in this case, is then treated as a
particle in this field, affected by the repulsive and attractive forces.
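The idea can be sketched in a few lines. This is a generic two-dimensional version with illustrative gain constants and step size, not the algorithm actually used by RG, whose details are in [29]:

```python
import math

def pf_step(pos, goal, obstacles, k_att=1.0, k_rep=10.0, step=0.1):
    """One update of a basic potential-field navigator: the goal attracts
    proportionally to distance, while each obstacle repels with a force
    that falls off with the square of the distance."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d2 = dx * dx + dy * dy
        if d2 > 1e-9:
            d = math.sqrt(d2)
            fx += k_rep * dx / (d2 * d)  # unit vector away from obstacle, scaled by 1/d^2
            fy += k_rep * dy / (d2 * d)
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# The navigator curves around the obstacle on its way to the goal:
p = (0.0, 0.0)
for _ in range(200):
    p = pf_step(p, goal=(10.0, 0.0), obstacles=[(5.0, 0.5)])
```

Note that such methods are known to have local minima (the navigator can stall where the attractive and repulsive forces cancel), which practical implementations have to handle.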
BlindSquare [20] deserves a mention in this category as well, since it supports Blue-
tooth beacons for indoor navigation. This takes the inverse approach compared to the
robot guide, in that the user is the one in charge.
24 Related Work
“The three groups performed similarly when asked to judge perspective while
imagining a new point of observation. However, locomoting to the new point
greatly facilitated the judgments of the sighted and late-blinded subjects, but
not those of the early-blinded subjects. The findings indicate that visual ex-
perience plays an important role in the development of sensitivity to changes
in perspective structure when walking without vision.” — Rieser et al. [31]
¹ Blind since birth.
² Having had better sight.
“From the results of this task, we conclude that at least some of the congeni-
tally blind observers are able to update the location of known objects during
locomotion as well as blindfolded sighted observers.” — Loomis et al. [6]
They further discussed this discrepancy, noting that many studies recruit participants
through schools and agencies for the blind, where many adults are unable to travel
independently. This can lead to a biased selection wherein the participants have worse
mobility skills than average, thus negatively affecting the results of that group.
They also note that this could be the other way around:
“Although we too obtained our subjects largely with the assistance of the
Braille Institute, we sought subjects who were able to travel independently;
accordingly, our selection procedure may have been biased toward subjects
with better-than-average mobility skills.” — Loomis et al. [6]
The authors further argue that the mentioned issues with the selection process, together
with the fact that sample sizes are often small, mean that research on the question of how
visual experience affects spatial ability is not as definitive as it may seem.
Chapter 4
The Virtual White Cane
The virtual white cane was the name given to the first prototype navigation aid
(figure 4.1). The licentiate thesis [32] and papers A, B and C are about the virtual white
cane. This chapter summarises that work.
4.1 Overview
The concept of the virtual white cane originated from previous research on an autonomous
powered wheelchair, MICA (Mobile Internet Connected Assistant) [33]. MICA was
equipped with a SICK LMS111 laser rangefinder [34] able to determine the distance to
every object in a horizontal plane in front of and slightly to the sides of the wheelchair.
Based on these measurements, MICA was programmed to drive autonomously without
hitting obstacles. From this, the idea of giving a visually impaired wheelchair user access
to this information and thus the ability to drive the wheelchair was born.
The laser rangefinder scanning the environment was already present. The main ques-
tion then was how to convey the range information to a visually impaired individual. This
was done with a Novint Falcon haptic interface [35]. A laptop gathers range information
from the rangefinder and builds a three-dimensional model which is then transmitted to
the haptic interface for presentation, as well as displayed graphically on the computer
screen. Figure 4.3 shows the transformation from scanned data to model.
The Novint Falcon (depicted in figure 4.2) is a haptic interface geared towards the
gaming audience. As such, it is an evolution from force-feedback joysticks that can
vibrate to signal certain events in a game. A haptic interface, on the other hand, can
simulate touching objects. This is accomplished by the user moving the handle (hereafter
referred to as the grip) of the device around in a limited volume known as the workspace
(in the case of the Falcon, about 1 dm³). The haptic interface contains electric
motors which work to counteract the user’s movements, and can thus simulate the feeling
of bumping into a wall at a certain position in the workspace volume. The reason for
using a haptic interface is that it can provide an interaction that is very similar to that
of the white cane. The Novint Falcon was chosen specifically as it had sufficiently good
specifications for the prototype, and was easily available at a low cost.

Figure 4.1: This figure shows the virtual white cane on the MICA (Mobile Internet Connected
Assistant) wheelchair.
The SICK LMS111 is a laser rangefinder manufactured for industrial use. Using time-
of-flight technology (measuring flight times of reflected pulses) it is able to determine an
object’s position at a range of up to 20 metres and with an error of a few centimetres.
The unit uses a rotating mirror to obtain a field of view of 270°. Unfortunately, this field
of view is limited to a single plane (in our case chosen to be parallel to the floor), and
is thus not a fully three-dimensional scan. This has not been an issue for the current
prototype: in a controlled indoor environment, where walls are vertical, there is no need
to feel at different heights to navigate.
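The time-of-flight principle itself is simple: the measured quantity is the round-trip travel time of a light pulse, and the distance follows from the speed of light. The sketch below shows only this principle, not the LMS111’s internal processing:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """One-way distance from a round-trip time-of-flight measurement:
    the pulse travels to the object and back, hence the factor 1/2."""
    return C * round_trip_s / 2.0

# An object 20 m away reflects the pulse after roughly 133 nanoseconds:
t = 2.0 * 20.0 / C
print(tof_distance_m(t))  # 20.0
```

The tiny flight times involved are why centimetre accuracy requires timing electronics with sub-nanosecond resolution.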
4.2 Software
There are many software libraries for haptics available. Our primary requirement was
that it must be able to handle a rapidly changing dynamic model, which is not the case for
all available libraries. We ended up choosing H3D API, developed by SenseGraphics [36],
as it came with a lot of functionality we needed out of the box. H3D is also open-source,
and can easily be extended with Python scripts or C++ libraries.
The biggest challenge related to haptics was to overcome a phenomenon known as
haptic fall-through. Haptic interfaces such as the Falcon act as both input and output
devices at the same time. While the device has to at all times figure out, based on grip
position, what kind of force to apply, the motions and forces the user exerts on the grip
can be used to affect the displayed objects. At any instant in time, a situation may
arise where the user pushes the grip, and the system determines that the grip is now
behind the surface that was being felt, thus not sending any force to the grip. To work
around this issue, haptic rendering settings were carefully chosen, as explained in the
next section.
Figure 4.3: A simple environment (a) is scanned to produce data, plotted in (b). These data
are used to produce the model depicted in (c).
it and the grip is calculated. The resulting distance is used in a formula analogous to
Hooke’s law for springs to calculate a force. This force is applied in order to return the
grip’s position to that of the god-object.
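In code, the spring-like force computation described above might look as follows. The stiffness value is illustrative, and a real renderer performs this at a high update rate against the full model:

```python
def god_object_force(grip_pos, god_pos, k=300.0):
    """Restoring force on the grip, analogous to Hooke's law F = k*x:
    proportional to the displacement between the grip position and the
    god-object point constrained to the model's surface. The stiffness
    k (N/m) is an illustrative value, not that of the prototype."""
    return tuple(k * (g - p) for g, p in zip(god_pos, grip_pos))

# Grip pushed 5 mm past the surface along z -> force pushing it back out:
print(god_object_force((0.0, 0.0, -0.005), (0.0, 0.0, 0.0)))  # (0.0, 0.0, 1.5)
```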
The god-object renderer described above is efficient and easy to implement, but suffers
from some problems. If the model being touched has tiny holes in it, the god-object, being
a single point, would fall through the surface. Even if there are no holes in the model, the
problem of haptic fall-through is not uncommon. To address this, one can extend the god-
object idea to an actual object, having physical dimensions. The Ruspini renderer [37]
is such an approach, where the object is a sphere of variable size. The Ruspini renderer
solves most of the fall-through problems, but is not as easy to implement nor as processor-
efficient as the god-object renderer.
For a more in-depth explanation of haptic rendering, see the article Haptic Rendering:
Introductory Concepts by Salisbury et al. [38].
Chapter 5
LaserNavigator
This chapter presents the second prototype, dubbed the LaserNavigator (figure 5.1).
5.1 Overview
The idea behind this prototype came about after the successful evaluation of the Virtual
White Cane. The aim was to create a device that retains as much of the intuitive haptic
feedback as possible, while not relying on a wheelchair, thus broadening the potential
user base.
The intended use of the LaserNavigator is as a complement to the standard white
cane. The device is used to get an idea of one’s surroundings when needed. As an example,
imagine crossing a large open space. Unless there is some clearly discernible feature of
the ground to follow, the white cane will not be of much use. There may be a lamppost
somewhere that could help one move in the proper direction, but it is easy to miss when
walking with no references. With the LaserNavigator, one can keep track of where
that lamppost or other significant landmarks are located, be it at 1 or 50 metres.
The user interface of the LaserNavigator is designed with the white cane in mind.
When aiming the device at an object, the device measures the distance to it, and may or
may not alert the user of its presence through vibrations. Whether to do so is determined
by a number representing the length of an imagined cane pointing towards the object.
This value is obtained by measuring the distance from the device to the closest point
on the user’s body. The effect is that the user can vary the length of their “virtual
cane” by moving the device away from or towards their body. The advantages of doing
this compared to simply conveying the measured distance directly are many. Presenting
distances in a large range such as up to 50 m accurately through vibrations would require
a complex feedback mode needing much user training. Additionally, a cane of such
length would almost always be in contact with something, leading to a constant stream
of feedback that can be seen as obtrusive.
Figure 5.1: A picture of the latest version of the LaserNavigator. The primary components are
the laser rangefinder (1), the ultrasound sensor (2), the loudspeaker (3), and the button under
a spring (4) used in manual length adjustment mode to adjust the “cane length”.
5.2 Hardware
The hardware architecture of the LaserNavigator is depicted in figure 5.2. The core of
the system is an LPC1765¹ microcontroller from NXP Semiconductors [39]. Mounted on
the main board are also a Bluetooth module and an inertial measurement unit (IMU).
To measure distances, an SF02/F laser rangefinder from Lightware Optoelectronics [40]
is used, connected to the main board through serial UART. The SF02/F has a range of
up to 50 m, while still able to take up to 32 measurements per second. A further benefit
to using a laser rangefinder is the low beam divergence, making it possible to accurately
detect edges of objects.
For haptic feedback, a small loudspeaker is used, connected to a simple digital switch
circuit driven by one of the general purpose input/output (GPIO) pins on the LPC1765.
The reason for using a speaker instead of a conventional vibration actuator lies in the
response time. As the user scans their surroundings with the LaserNavigator, it is crucial
that it react quickly, which the typical actuators do not. The two most common types
of electromechanical units (eccentric rotating mass (ERM) and linear resonant actuator
(LRA)) have a response time of several tens of milliseconds [41]. While there are much
faster options using either piezoelectric or electrochemical principles, they require much
more complex electronics to drive than their electromechanical counterparts. A loud-
speaker, on the other hand, is easy to drive and also has a quick response time (< 1 ms).
The drawback (or advantage depending on application) is the generated sound, but when
¹ The board features an ARM Cortex-M3 processor running at 100 MHz.
(Diagram components: smartphone, USB, battery, Bluetooth, programming UART, two
range finders (UART), 3D gyros and accelerometers (SPI), the micro-controller unit
(MCU), and the vibrator (PWM).)
Figure 5.2: Basic architecture diagram showing the various components of the LaserNavigator
and how they communicate with each other.
5.3 Software
The software has three primary functions: to read data from the laser and ultrasound
sensors; to process these data; and to provide haptic feedback through the loudspeaker.
The application is a mixture of C and C++ code, and is organised in three layers of
abstraction. The lowest layer consists of C code to initialise the microcontroller and set
up all peripherals. On top of that is the hardware abstraction layer (HAL), which exposes
the underlying hardware through C++ functions organised in files based on peripheral
(e.g. vibrator, bluetooth, laser). Finally, the top layer comprises the main application,
primarily managed through two functions: app_init (runs once at startup) and app_loop
(runs continually).
• On starting the device, the current battery voltage is conveyed as a series of short
vibration bursts. For example, 5.2 V would be conveyed with five bursts with short
pauses in between, followed by a longer interruption, and then two short bursts
again.
• The device can enter a stand-by mode where no vibrations are emitted. This
happens automatically when the device has been stationary for a few seconds.
This is accomplished using data from the on-board accelerometer. Every time a
new value is obtained, it is pushed onto a double-ended queue (deque) of fixed size.
To determine whether to enter or exit stand-by mode, the standard deviation of
the deque is checked against a predetermined threshold.
• It is possible to switch from indoor mode (scale factor 10) to outdoor mode (scale
factor 50) without reprogramming the device. The mode is switched when the
device is in stand-by mode and both the laser and ultrasound measurements are
less than 10 cm. Thus, with the device stationary, one can put both hands close
to the range sensors and trigger the mode change. The device signals the entered
mode by short Morse codes emitted through the loudspeaker.
• Abrupt variations in the ultrasound measurement can occur, but during normal
operation the distance will not change that rapidly. A new measurement is skipped if it
differs by more than 60 cm from the previous measurement.
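As an illustration of the startup battery signalling described in the first bullet, a minimal sketch follows; the function name and the rounding scheme are assumptions, not the thesis code:

```cpp
#include <utility>

// Hypothetical helper: split a battery voltage (in volts) into the two
// burst counts signalled at startup, e.g. 5.2 V -> five bursts, a longer
// pause, then two bursts.
std::pair<int, int> battery_bursts(double voltage) {
    int tenths_total = static_cast<int>(voltage * 10.0 + 0.5); // round to nearest 0.1 V
    return { tenths_total / 10, tenths_total % 10 };
}
```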
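The stand-by test described in the second bullet can be sketched as follows; the class name, window size, and threshold are illustrative, not the actual values used in the device:

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// Keeps a fixed-size window of accelerometer readings and reports stand-by
// when the standard deviation of the window falls below a threshold.
class StandbyDetector {
public:
    StandbyDetector(std::size_t window, double threshold)
        : window_(window), threshold_(threshold) {}

    // Push a new reading; the oldest value is dropped once the deque is full.
    void push(double value) {
        samples_.push_back(value);
        if (samples_.size() > window_) samples_.pop_front();
    }

    // Stand-by only once the window is full and the readings are nearly constant.
    bool standby() const {
        if (samples_.size() < window_) return false;
        double mean = 0.0;
        for (double v : samples_) mean += v;
        mean /= samples_.size();
        double var = 0.0;
        for (double v : samples_) var += (v - mean) * (v - mean);
        var /= samples_.size();
        return std::sqrt(var) < threshold_;
    }

private:
    std::deque<double> samples_;
    std::size_t window_;
    double threshold_;
};
```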
trials, one participant put it this way: “The technology has to exist for me, not the other
way around.”
While such complex feedback is typical, a relevant question to ask is why this is the
case. It may be an obvious, almost instinctive choice, as this kind of feedback exists in
vehicle reverse warning systems, for example.
5.5 Algorithms
Following is an algorithmic description of the different approaches outlined above. In the
text below, the operator := is used to indicate variable assignment.
In the main program file, besides app_init and app_loop, there is also a function
called sensorcallback. Recent sensor values are accessed and filtered here, and the
values from the range sensors are converted to cm. In the following text, the relevant
variables are d (ultrasound measurement) and D (laser measurement).
In sensorcallback, d is filtered to remove the occasionally occurring very large value
(> 300). This is accomplished by only accepting a new value d if |d − dp| < 60, where dp
is the previous value.
This also means that in automatic length adjustment mode, if the device is angled
so the body isn’t in view of the ultrasound sensor, those measurements will usually be
ignored. Thus, the device can still be used properly in those cases.
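The acceptance test above can be sketched as a small helper; the function name is an assumption, while the 60 cm limit is from the text:

```cpp
#include <cmath>

// Spike filter for the ultrasound distance d (in cm): a new reading is
// accepted only if it lies within 60 cm of the previously accepted value;
// otherwise the old value is kept.
double filter_ultrasound(double d_new, double d_prev) {
    const double kMaxJump = 60.0;  // cm
    return (std::fabs(d_new - d_prev) < kMaxJump) ? d_new : d_prev;
}
```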
The next task is to calculate the “cane length” (l) based on d. For the evaluations,
this was done using l := kd where k is a constant with a value of 10 indoors and 50
outdoors. Previously, we also tried letting k be different linear functions of d, notably
k := (d/5 + 3) indoors. The effect of a linear k is that l grows quadratically with d,
enabling e.g. finer depth-wise details to be perceived at shorter distances. With the
example function above, the device held 10 cm from the body would lead to l = 50, and
held at 60 cm, l = 900.
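The length computation can be sketched as follows; the function names are assumptions, while the constants are those given above:

```cpp
// Cane length l := k*d, with k = 10 indoors and 50 outdoors as in the evaluations.
double cane_length(double d, bool outdoor) {
    const double k = outdoor ? 50.0 : 10.0;
    return k * d;
}

// Earlier indoor variant with k linear in d, making l quadratic in d.
double cane_length_linear(double d) {
    const double k = d / 5.0 + 3.0;  // k := (d/5 + 3)
    return k * d;
}
```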
In the simple feedback mode, the “cane length” l is very important, as it is the
parameter needed to determine the distance to an object. In the complex feedback
mode, however, it is only a way of filtering what is detected, as the distance is conveyed
haptically. In that mode, a cane that is “too long” can still be used to detect close-by
objects. It is important to note that when the hand is moved closer to an object, not only
does the “cane length” increase, but the laser distance also decreases. This leads
to a compressed depth perception, which can be avoided by not adjusting the length
continually. While the body is stationary, an invariant quantity is d + c + D, where c is
the distance between the laser and ultrasound sensors.
In the case of simple feedback with a threshold zone, a further task is to establish
how large a distance interval this zone should span. Let T denote the zone size, expressed
in cm. The more intense feedback occurs close to the point where the length (l) and
laser distance (D) are equal, i.e. when |D − l| < T . A constant T works well when k
is a constant, but if it is a linear function, the zone will feel different depending on d,
because of the compressed depth perception mentioned previously. To solve this, T can
also be a linear function of d.
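The zone test itself reduces to a single comparison; a sketch with assumed names:

```cpp
#include <cmath>

// Simple feedback with a threshold zone: feedback intensifies when the laser
// distance D is within T cm of the cane length l, i.e. |D - l| < T.
// T may itself be made a linear function of d, as noted above.
bool in_threshold_zone(double D, double l, double T) {
    return std::fabs(D - l) < T;
}
```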
For complex feedback, the next task is to determine the burst frequency, which is
expressed as the time between bursts (tb) in the program. This value can reasonably range
from a few ms to a few hundred ms. If the burst frequency should be an indication of
absolute distance regardless of other settings, tb should be a function of D only. Simply
letting tb := D is a good start at short ranges, but it is difficult to discern small differences.
Thus, one might want a function where the derivative, t′b(D), is larger at closer distances,
and decreases slightly with distance.² To accomplish this, a function composed of two
linear segments was tested: a steeper one up to 50 cm, and then one with a smaller slope.
That is,

    tb(D) := aD        if D < 50,
             bD + m    otherwise,                    (5.1)

where a > b, and m is chosen based on a and b so that the lines intersect at D = 50.
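Equation (5.1) can be implemented directly; here the slopes a and b are illustrative parameters, and m is derived so that the two segments meet at D = 50:

```cpp
// Piecewise-linear burst interval t_b(D): steeper slope a below 50 cm,
// shallower slope b beyond, with m = 50*(a - b) making t_b continuous at D = 50.
double burst_interval(double D, double a, double b) {
    const double m = 50.0 * (a - b);
    return (D < 50.0) ? a * D : b * D + m;
}
```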
5.6 Evaluations
Two evaluations of the LaserNavigator were performed in order to determine the fea-
sibility of the device, and get general user feedback. The same three blind individuals
participated in both evaluations.
The first evaluation (see paper D) was carried out in a prepared indoor environment,
where the participants were given the task of finding doorways. For that evaluation, the
LaserNavigator used automatic length adjustment, and we tested simple feedback both with
and without a threshold zone. The results showed that the participants were able to de-
tect the doorways with the LaserNavigator, but the device required more training than
we anticipated. Practical issues were also identified, and one notable change made to the
device after the trial was to introduce manual length adjustment mode. The participants
also identified scenarios where they would find such a navigation aid useful.
The second evaluation (see paper E) was performed outdoors, where the participants
walked a predetermined route. This time, the LaserNavigator was set to manual length
adjustment and simple feedback with no threshold zone. All participants thought that
the LaserNavigator had improved since the indoor trial. They were able to use the device
confidently while following the walls of buildings and walking straight, but needed a lot
of instructions during turns and other more challenging situations. They expressed the
need for more information than the device provided, but also noted that it might be
useful in familiar environments.
²This follows the same principle as when setting l, where finer details at close proximity are given
priority.
Chapter 6
Discussion
It should now be easy to dismiss the idea that blind individuals cannot have a working
spatial model. Still of interest, however, are the reasons behind such ideas. A couple of
hundred years ago, the societal views on blindness were different, and that in itself has
likely been an important factor. Today, assistive technologies have made it possible for
blind people to contribute to society in far more ways, yet at least as important are the
more positive views. Considering spatial perception, we know that this ability does not
somehow just magically manifest, but is developed, like most other abilities. Seen in this
light, we can ask how, and whether, blind people develop this ability. It is important
that such training is provided today, but it was likely not provided centuries ago. Those who are
blind or have a visual impairment affecting their ability to travel independently should
be encouraged and assisted in developing their basic mobility skills. Navigation aids can
be of great help, but are not a substitute for core mobility skills. Barriers to independent
travel come not only from missing navigation information but can also stem from an
inherent insecurity. Developing core mobility skills is likely to have a large impact on
this. Then, navigation aids can be used to add knowledge that the missing vision has
made inaccessible.
What is this knowledge, and how should it be presented? This is one of the primary
research questions of interest for this dissertation. One of the participants in one of our
evaluations said that when you walk outside without sight “there are no references”.
These “references” are one type of knowledge that can be of great help. Such knowledge
answers the question “where am I?” in terms of nearby points of interest. Another piece
of information difficult to access without sight is the nature of buildings and objects
around. The white cane can be used to poke at the walls of a building, but what
building is it? GPS devices can help with this, in addition to the challenge of how to get
to a certain place. Further, one needs a means of avoiding obstacles along the way. This
is typically handled by the white cane and/or a guide dog.
The prototypes presented in this dissertation have been attempts at providing the
missing “references”. A scenario illustrating the need for this is when walking across an
open space, with the intention of reaching a landmark such as a lamppost on the other
side. The cane may not be of much use in that case, nor would a GPS. The open
space also means that there are not many, if any, useful auditory cues. Someone skilled
in acoustic echolocation could hear the presence of the lamppost if close, but walking
straight across an open space without any references is a difficult task.
The second research question asks what can feasibly be done with current technologies.
While there are many options for haptic feedback, the natural kind of direct force feedback
used e.g. for the Virtual White Cane would be very difficult to provide in a portable
device. That said, it is not impossible. The CyArm (see [24]) does this, but requires
one part of the system to be attached to the user, with a string extending out to the
handheld part of the device. Even so, haptic feedback technologies today cannot match
the richness experienced when using the white cane. Photography does this well for the
visual domain, but what about the haptic? Haptography is apparently not just a made-up
word [43], and could be of interest to future navigation aids.
During the outdoor trial of the LaserNavigator, one researcher was always beside the
participant and gave any instructions and information needed. One participant remarked
that it was this information that needs to be provided. Indeed, a good navigation aid is a
sighted human being. They can see the world and give relevant guidance and information,
adapted to the specific individual being guided. A promising technological alternative is
image processing techniques. Smartphone apps such as TapTapSee [44] show promise,
but the information given about an image is nowhere near detailed enough to serve as
navigation instructions, if it is even correct.
The third research question is about users’ conceptions. First off, it bears mentioning
that the evaluations with potential users of navigation aids have been very valuable
in many ways. Observing how the participants are able to complete the task is one
aspect, but assessing task performance and similar quantitative measures has not been
the objective of the evaluations. We wanted to know users’ conceptions of feasibility of
the devices, and where such devices would help them in their daily lives. These questions
would have been impossible to explore, had we opted to evaluate the system with e.g.
blindfolded sighted subjects, as is sometimes done.
The evaluation of the Virtual White Cane showed the promise of such a device. The
participants found the experience interesting and fun, and easily understood how to use
the haptic interface to probe their environment. The evaluations of the LaserNavigator
showed again that the interaction is promising, and the participants were affected by
their familiarity with the white cane. This was intentional, but brought with it a new
kind of challenge. Despite the white cane-like interaction, the device is not a white cane
replacement, nor is it just “a very long cane”. The key to using the LaserNavigator as
intended is to think outside the white cane metaphor. An example of this is the ability
to keep track of two walls, on both sides, at once. Another is the idea of using as a
landmark any object, even one that might not be of relevance to a white cane user.
The evaluations also showed many practical problems. In particular, the LaserNavi-
gator was too heavy and it was difficult to hold it horizontally. Both of these concerns
should not be difficult to address. Another, more fundamental concern raised by the
participants was the lack of rich feedback. “How do I know if I am detecting a tree or a
lamppost?” was a question that cropped up. With the white cane, this is easily done,
as hitting the object will produce a characteristic sound. Additionally, dragging the
cane tip across the object would allow one to get a feeling for the object’s texture. The
LaserNavigator can do neither of these things. If thought of as a “very long white cane”
these are certainly major drawbacks, but when the device is seen as a navigation aid,
these become less of an issue. The participants thought the LaserNavigator would be
most useful in familiar environments, and in that case the familiarity would help with
identifying whatever is felt with the device. In less familiar environments, the device may
still provide some security as one can use it to keep track of one’s own location, assisting
the inherent path integration ability by expanding the small circle of detection offered
by the white cane alone.
Chapter 7
Conclusions
In an area where most solution attempts do not go very far, and those that do have
not made a big impact, basic studies are important. The experiments with the Virtual
White Cane and the LaserNavigator have shed light on many aspects that were clouded
or even invisible, ranging from theoretical conundrums to practical issues.
The Virtual White Cane showed the feasibility of a haptic interface conveying infor-
mation about nearby objects. The development and evaluations of the LaserNavigator
further showed that a handheld device doing the same is possible, albeit practically
much more challenging to develop. The LaserNavigator takes a step beyond the currently
available navigation aids by allowing a much greater detection range and by providing
the possibility to examine the shapes of objects within that range in much detail. The
development of the LaserNavigator has led to a patent application [45].
The following summarises the work by answering the research questions posed in the
introduction.
• What are users’ conceptions of such technologies? Users are able to draw
upon their experiences (in our case white cane use), making the interaction some-
what familiar. They are able to perceive objects, but using the technology effec-
tively requires practice.
There is always more work to be done. To improve the state of navigation aids,
there are many aspects that need to be studied further, across many different fields
[1] “The sighted wheelchair - successful first test drive of ‘sighted’ wheelchair (YouTube
video),” http://www.youtube.com/watch?v=eXMWpa4zYRY, 2011, accessed 2014-
02-24.
[4] T. Pey, F. Nzegwu, and G. Dooley, “Functionality and the needs of blind and par-
tially sighted adults in the UK: a survey,” Reading, UK: The Guide Dogs for the
Blind Association, 2007.
[8] H. Lotze, Metaphysic: in three books, ontology, cosmology, and psychology. Claren-
don Press, 1884, vol. 3.
[11] U. Proske and S. C. Gandevia, “The proprioceptive senses: their roles in signaling
body shape, body position and movement, and muscle force,” Physiological reviews,
vol. 92, no. 4, pp. 1651–1697, 2012.
[12] R. J. Stone, “Haptic feedback: A brief history from telepresence to virtual reality,”
in Haptic Human-Computer Interaction. Springer, 2001, pp. 1–16.
[17] R. Velazquez and S. Gutierrez, “New test structure for tactile display using laterally
driven tactors,” in Instrumentation and Measurement Technol. Conf. Proc., May
2008, pp. 1381–1386.
[21] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.
[23] R. Farcy and Y. Bellik, “Locomotion assistance for the blind,” in Universal Access
and Assistive Technology. Springer, 2002, pp. 277–284.
[24] J. Akita, T. Komatsu, K. Ito, T. Ono, and M. Okamoto, “Cyarm: haptic sensing
device for spatial localization on basis of exploration by arms,” Advances in Human-
Computer Interaction, vol. 2009, p. 6, 2009.
[41] E. Siegel, “Haptics technology: picking up good vibrations,” EETimes, http://www.eetimes.com/document.asp?doc_id=1278948, July 24, 2011, accessed 2016-04-05.
[45] K. Hyyppä, “Device and methods thereof,” Patent application, Swedish Patent and
Registration Office, 1 650 541-4, April 22, 2016.
Part II
Paper A
Presentation of Spatial Information
in Navigation Aids for the Visually
Impaired
Authors:
Daniel Innala Ahlmark and Kalevi Hyyppä
© 2015, Emerald Group Publishing Limited. Reprinted with permission.
Presentation of Spatial Information in Navigation
Aids for the Visually Impaired
Abstract
Purpose: The purpose of this article is to present some guidelines on how different
means of information presentation can be used when conveying spatial information non-
visually. The aim is to further the understanding of the qualities navigation aids for
visually impaired individuals should possess.
Design/methodology/approach: A background in non-visual spatial perception is
provided, and existing commercial and non-commercial navigation aids are examined
from a user interaction perspective, based on how individuals with a visual impairment
perceive and understand space.
Findings: The discussions on non-visual spatial perception and navigation aids lead to
some user interaction design suggestions.
Originality/value: This paper examines navigation aids from the perspective of non-
visual spatial perception. The presented design suggestions can serve as basic guidelines
for the design of such solutions.
1 Introduction
Assistive technology has made it possible for people with a visual impairment to navi-
gate the web, but negotiating unfamiliar physical environments independently is often a
major challenge. Much of the information that provides a sense of location (e.g. signs,
maps, buildings and other landmarks) is visual in nature, and thus is not available
to many visually impaired individuals. Often, a white cane is used to avoid obstacles,
and to aid in finding and following the kinds of landmarks that are useful to the visually
impaired. Examples of these include kerbs, lampposts, walls, and changes in ground
material. Additionally, environmental sounds provide a sense of context, and the taps
from the cane can be useful as the short sound pulses emitted enable limited acoustic
echolocation. The cane is easy to use and trust due to its simplicity, but it is only able
to convey information about obstacles at close proximity. This restricted reach does not
significantly aid navigation, as that task is more dependent on knowledge about things
farther away, such as doors in a hallway or buildings and roads. One of the authors
has long personal experience of the navigation problem as he has been visually impaired
(Leber’s congenital amaurosis) since birth.
Many technological navigation aids—also known as electronic travel aids (ETAs)—
have been developed and produced, but they have not been widely adopted by the visually
impaired community. In order for a product to succeed, the benefit it provides must
outweigh the effort and risks involved in using it. The latter factor is of critical importance
in a system whose job it is to guide the user reliably through a world filled with potentially
dangerous hazards.
A major challenge faced when designing a navigation aid is how to present spatial
information by non-visual means. Positioning systems and range sensors can provide
the needed information, but care must be taken in presenting it to the user. Firstly,
there is no easy sensory translation from the highly-spatial visual sense, and secondly,
the interaction should be as intuitive as possible. This not only minimises training times
and risks, but also increases comfort and security.
The purpose of this article is to review the literature on navigation aids, focusing on
the issues of user interaction. The goal is to further the understanding of the qualities
navigation aids should possess, and possibly shed light on the reasons for the weak adop-
tion of past and present solutions. To accomplish this, several solutions are presented
and discussed based on the interaction modes. There are many solutions not mentioned
herein; solutions that employ similar means of interaction to the ones presented were ex-
cluded. To aid in the discussion, some background information on how space is perceived
non-visually is also presented. The focus for this article is on the technological aspects,
but for technology adoption the socio-economic and cultural aspects are equally
important. While the visually impaired are the main target users, non-visual navigation
and obstacle avoidance solutions can be of use to sighted individuals, for instance to
firefighters operating in smoke-filled buildings.
Section 2 describes the literature selection process. Section 3 contains some back-
ground information on non-visual spatial perception. This, together with section 4, which
examines some commercial and prototype navigation aids, serves as background to the
discussion in section 5. Lastly, section 6 concludes the paper with some guidelines on how
different modes of interaction should be utilised.
2 Methods
Database searches were made to find relevant literature. Scopus, Google Scholar and
Web of Science were primarily used, with keywords such as navigation aids, visually
impaired, assistive technology, haptic, audio, speech, blind and user interaction. Articles
were then selected based on user interaction. The goal was to have articles representing
novel uses of different interaction modes, thus many articles presenting similar solutions
were excluded. The purpose was to have literature supporting the later discussion, rather
than presenting a comprehensive overview.
As an example, the search string “navigation aid” AND “visually impaired” yielded
41 unique articles in Scopus. Of those, 39
3 Non-visual Spatial Perception
fatigue. Multiple sound sources making noise all the time can be both distracting and
tiring. Also, the real environmental sounds should not be blocked out or distorted [5].
The way visually impaired people perceive and understand the space around them
should be taken into account when designing navigation aids. The next section describes
some commercial and non-commercial navigation aids that utilise haptics and/or audio.
4 Navigation Aids
Electronic travel aids come in numerous shapes and sizes ranging from small wearable and
hand-held devices designed to accomplish a very specific thing, to complex multi-sensor
and multi-interface devices. For the purpose of this article, the devices presented below
are grouped based on how they communicate with the user. An important distinction to
keep in mind is that some devices use positioning (such as GPS) while others are obstacle
avoidance devices sensing the environment. These two kinds of devices complement
each other perfectly, as obstacle avoidance devices do not give travel directions, and
positioning devices (typically based on GPS) rely on stored map data that can provide
travel instructions, but need to be kept up to date. Further, the GPS system does not
work indoors and cannot by itself give precise movement directions relative to the user’s
current orientation. GPS devices can overcome the latter limitation by incorporating a
magnetometer or by utilising the user’s direction of motion.
of interaction resembling a white cane was feasible and easy to learn for visually impaired
users familiar with the regular white cane.
5 Discussion
Some of the solutions mentioned in the previous section are commercially available, the
least expensive being the smartphone apps (provided the user already has a smartphone).
Despite this, the adoption of this kind of assistive technology has not been great. Com-
pare this to the smartphones themselves, which are used by many non-sighted individu-
als. Even touch-screen devices can be and are used by the blind, thanks to screen reader
software.
The reason for the weak adoption of navigation aids appears not to have been sci-
entifically investigated. More generally, there seems to be a lack of scientifically sound
studies on the impact of assistive technology for the visually impaired. In a 2011 synthe-
sis article by Kelly and Smith [16] on the impact of assistive technology in education, 256
studies were examined, but only a few articles were deemed to follow proper evidence-
based research practices. Going even further in the generalisation, one can find a lot
written about technology acceptance in a general sense. Models such as the Technology
Acceptance Model (TAM) [17] are well-established, but it is not clear how these apply
to persons with disabilities.
Despite the lack of studies on adoption in this specific case, some things can be said
based on how individuals with a visual impairment perceive space, and the solutions they
presently employ. It should no longer be questionable that non-sighted people have a
working world model. This model is, however, constructed differently from that of a
sighted individual, which is important to keep in mind when planning user interaction.
For example, consider the “where am I?” function mentioned
in the previous section. This function can be more or less useful depending on how the
surrounding points of interest are presented. A non-sighted individual would be more
likely to benefit from a presentation that reads like a step-by-step trip, as this favours
the “bottom up” way of learning about one’s surroundings.
Some things can be learnt by comparing the technological solutions to a sighted human
being who knows a specific route. This person is able to give the same instructions
as a GPS device, but can adapt the verbosity of these instructions based on current
needs and preferences. Additionally, this person can actively see what is going on in
the environment, and can assist if, for example, the planned route is blocked or if some
unexpected obstacle has to be negotiated. All of this is possible with vision alone, but
is difficult to replicate with the other senses. Ideally, a navigation aid should have the
ability to adapt its instructions in the same way a human guide can.
Most of the available solutions use speech output. This interaction works well on a
high level, providing general directions and address information. There are, however,
fundamental limitations that speech interfaces possess. Interpreting speech is a slow
process that requires much mental effort [18], and accurately describing an environment
in detail is difficult to do with a sensible amount of speech [19]. Non-speech auditory
cues have the advantage that they can convey complex information much faster, but they
still require much mental effort to process in addition to more training. Headphones are
typically used to receive this kind of feedback, but they generate their own problems
as they (at least partially) block out sounds from the environment that are useful to a
visually impaired person. Strothotte et al. [5] noted that many potential users of their
system (MoBIC) expressed worries about using headphones for precisely this reason.
Complex auditory representations such as those used in The vOICe for Android [15]
require much training, and their suitability for long-term use is questionable.
Haptic feedback is a promising option as humans have evolved to instinctively know
how to avoid obstacles by touch. While the typical vibration feedback widely employed
today does not easily convey complex information, it works well for conveying alerts.
Various tactile displays are being developed [20, 21] that could
be very useful for navigation purposes. For instance, nearby walls could be displayed in
real-time on a tactile display. This would be very similar to looking at a close-up map on
a smartphone or GPS device. The usefulness of tactile maps on paper has been studied,
with mostly positive outcomes [22]. Even so, the efficiency of real-time tactile maps is
not guaranteed.
Interaction issues aside, there are many practical problems that need to be solved
to minimize the effort involved in using the technology. In these regards, much can be
learnt from the white cane. The cane is very natural to use; it behaves like an extended
arm. It is easy to know the benefits and limitations of the cane, and it is obvious if
the cane suddenly stops working, i.e. it breaks. A navigation aid, by comparison,
might provide more information than the cane, but requires more
training to use efficiently. Additionally, there is an issue of security. It is not easy to
tell if the information given by the system is accurate or even true. Devices that aim to
replace the white cane face a much tougher challenge than those wishing to complement
the cane.
When conducting scientific evaluations, care should be taken when drawing conclu-
sions based on sighted (usually blindfolded) individuals’ experiences. While such studies
are certainly useful, one should be careful when applying these to non-sighted persons.
For example, studies have shown that visually impaired individuals perform better at
exploring objects by touch [23] and are better at using spatial audio [24]. As a result,
one should expect sighted participants’ performances to underestimate
those of visually impaired persons. Care must also be taken when comparing the
experience provided by a certain navigation aid to that of a sighted person’s unaided
experience. This comparison is of limited value as it rests on the assumption that one
should try to mimic the experience of sight, rather than what is provided by sight. This
assumption is valid if the user in question has the experience of sighted navigation to
draw upon, but does not hold for people who have been blind since birth. The benefits
and issues of navigation aids need to be understood from a non-visual perspective. One
should not try to impose a visual world model on someone who already has a perfectly
working, albeit different, spatial model.
6 Conclusions
The purpose of this article was to look into the means present solutions employ to present
spatial information non-visually. The goal was to suggest some design guidelines based
on the present solutions and on how non-visual spatial perception works. A secondary
goal was to shed light on the reasons for the weak adoption of navigation aids. While
technology adoption has been studied in general, there is a research gap to be filled when
it comes to navigation aids for the visually impaired. Though the previous discussion
mentioned several issues regarding information presentation, it is not clear if or how these
contribute to the weak adoption. Further, there are a multitude of non-technological
aspects that affect adoption as well. Looking back only a couple of decades, a central
technological issue was how to make a system employing sensors practically feasible.
Components were bulky and needed to be powered by large batteries. Today, this is less
of an issue, as sensors are getting so small they can be woven into clothes. Even though
spatial information can now easily be collected and processed in real-time, the problem
of how to convey this information non-visually remains. Many solutions have been tried,
with mixed results, but there are no clear guidelines on how this interaction should be
done. There are guidelines on how different kinds of information should be displayed in
a graphical user interface on a computer screen. Similarly, there should be guidelines on
how to convey different types of spatial information non-visually. The primary means of
doing this are through audio and touch. Audio technology is quite mature today, whereas
solutions based on haptics still have a lot of room for improvement. As audio and touch
both have their unique advantages, it is likely they both will play an important role in
future navigation aids, but it is not clear yet what kind of feedback is best suited to one
modality or the other. A further issue for investigation is how to code the information
such that it is easily understood and efficient to use.
Design choices should stem from an understanding of how visually impaired individ-
uals perceive and understand the space around them. From a visual point of view, it
is easy to make assumptions that are invalid from the perspective of non-visual spatial
understanding. It is encouraging to see studies conclude that lack of vision per se does
not affect spatial ability negatively. This stresses the importance of training visually
impaired individuals to navigate independently.
Below are some important points summarised from the previous discussion:
• Use speech with caution. Speech can convey complex information but requires
much concentration and is time-consuming. It should therefore not be used in
critical situations that require quick actions.
• Real-time tactile maps will be possible. Tactile displays have the potential to
provide real-time tactile maps, but using such maps effectively likely requires much
training for individuals who are not used to this kind of spatial view.
• Keep the interaction intuitive. An intuitive interaction not only minimises needed
training, but also the risks involved in using the system. For obstacle avoidance,
one should try to exploit the natural ways humans have evolved to avoid obstacles.
• Systems should adapt. Ideally, systems should have the ability to adapt their
instructions based on preferences and situational needs. The difference in prefer-
ences is likely large, as there are many types and degrees of visual impairment, and
thus users will have very different navigation experiences.
Acknowledgement
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology—both in Sweden—and by the European
Union Objective 2 North Sweden structural fund.
References
[1] V. Morash, A. E. Connell Pensky, A. U. Alfaro, and A. McKerracher, “A review of
haptic spatial abilities in the blind,” Spatial Cognition and Computation, vol. 12,
no. 2-3, pp. 83–95, 2012.
[6] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.
[8] S. Ertan, C. Lee, A. Willets, H. Tan, and A. Pentland, “A wearable haptic navi-
gation guidance system,” in Wearable Computers, 1998. Digest of Papers. Second
International Symposium on, 1998, pp. 164–165.
[9] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.
[13] GT Sonification Lab, “SWAN: System for wearable audio navigation,” http://sonify.
psych.gatech.edu/research/swan/, accessed 2014-02-24.
[14] B. Jameson and R. Manduchi, “Watch your head: A wearable collision warning
system for the blind,” in Sensors, 2010 IEEE, 2010, pp. 1922–1927.
[16] S. M. Kelly and D. W. Smith, “The impact of assistive technology on the
educational performance of students with visual impairments: A synthesis of the
research,” Journal of Visual Impairment & Blindness, vol. 105, no. 2, pp. 73–83,
2011.
[17] F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of
information technology,” MIS Quarterly, vol. 13, no. 3, pp. 319–340, 1989.
[18] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.
[24] R. W. Massof, “Auditory assistive devices for the blind,” in Proc. Int. Conf. Auditory
Display, 2003, pp. 271–275.
Paper B
Obstacle Avoidance Using Haptics
and a Laser Rangefinder
Authors:
Daniel Innala Ahlmark, Håkan Fredriksson and Kalevi Hyyppä
© 2013, IEEE. Reprinted with permission.
Obstacle Avoidance Using Haptics and a Laser
Rangefinder
Abstract
In its current form, the white cane has been used by visually impaired people for almost
a century. It is one of the most basic yet useful navigation aids, mainly because of its
simplicity and intuitive usage. For people who have a motion impairment in addition to
a visual one, requiring a wheelchair or a walker, the white cane is impractical, leading to
human assistance being a necessity. This paper presents the prototype of a virtual white
cane using a laser rangefinder to scan the environment and a haptic interface to present
this information to the user. Using the virtual white cane, the user is able to “poke” at
obstacles several meters ahead without physical contact with the obstacle. By using
a haptic interface, the interaction is very similar to how a regular white cane is used.
This paper also presents the results from an initial field trial conducted with six people
with a visual impairment.
1 Introduction
During the last few decades, people with a visual impairment have benefited greatly
from the technological development. Assistive technologies have made it possible for
children with a visual impairment to do schoolwork along with their sighted classmates,
and later pick a career from a list that, largely due to assistive technologies, is expanding.
Technological innovations specifically designed for people with a visual impairment also
aid in daily tasks, boosting confidence and independence.
While recent development has made it possible for a person with a visual impairment
to navigate the web with ease, navigating the physical world is still a major challenge.
The white cane is still the obvious aid to use. It is easy to operate and trust because it
behaves like an extended arm. The cane also provides auditory information that helps
with identifying the touched material as well as acoustic echolocation. For someone who,
in addition to a visual impairment, is in need of a wheelchair or a walker, the cane is
impractical to use and therefore navigating independently of another person might be
an impossible task. The system presented in this paper, henceforth referred to as ’the
virtual white cane’, is an attempt to address this problem using haptic technology and
a laser rangefinder. This system makes it possible to detect obstacles without physically
hitting them, and the length of the virtual cane can be varied based on user preference
and situational needs. Figure B.1 shows the system in use.
Figure B.1: The virtual white cane. This figure depicts the system currently set up on the
MICA wheelchair.
Haptic technology (the technology of the sense of touch) opens up new possibilities of
human-machine interaction. Haptics can be used to enhance the experience of a virtual
world when coupled with other modalities such as sight and sound [1], as well as for
many stand-alone applications such as surgical simulations [2]. Haptic technology also
paves way for innovative applications in the field of assistive technology. People with
a visual impairment use the sense of touch extensively; reading braille and navigating
with a white cane are two diverse scenarios where feedback through touch is the common
element. Using a haptic interface, a person with a visual impairment can experience
three-dimensional models without the need to have a physical model built. For the
virtual white cane, a haptic interface was a natural choice as the interaction resembles
the way a regular white cane is used. This should result in a system that is intuitive to
use for someone who has previous experience using a traditional white cane.
The next section discusses previous work concerning haptics and obstacle avoidance
systems for people with a visual impairment. Section 3 is devoted to the hardware and
software architecture of the system. Section 4 presents results from an initial field trial,
and section 5 concludes the paper and gives some pointers to future work.
2 Related Work
The idea of presenting visual information to people with a visual impairment through a
haptic interface is an appealing one. This idea has been applied to a number of different
scenarios during recent years. Fritz et al. [3] used haptic interaction to present scientific
data, while Moustakas et al. [4] applied the idea to maps.
Models that are changing in time pose additional challenges. The problem of ren-
dering dynamic objects haptically was investigated by e.g. Diego Ruspini and Oussama
Khatib [5], who built a system capable of rendering dynamic models, albeit with many
restrictions. When presenting dynamic information (such as in our case a model of the
immediate environment) through a haptic interface, care must be taken to minimize a
phenomenon referred to as haptic fall-through, where it is sometimes possible to end up
behind (fall through) a solid surface (see section 3.3 for more details). Minimizing this
is of critical importance in applications where the user does not see the screen, as it
would be difficult to realize that the haptic probe is behind a surface. Gunnar Jansson
at Uppsala University in Sweden has studied basic issues concerning visually impaired
peoples’ use of haptic displays [6]. He notes that being able to look at a visual display
while operating the haptic device increases the performance with said device significantly.
The difficulty lies in the fact that there is only one point of contact between the virtual
model and the user.
When it comes to sensing the environment numerous possibilities exist. Ultrasound
has been used in devices such as the UltraCane [7], and Yan and Manduchi [8] used a
laser rangefinder in a triangulation approach by surface tracking. Depth-measuring (3D)
cameras are appealing, but presently have a narrow field of view, relatively low accuracy,
and a limited range compared to laser rangefinders. These cameras undergo constant
improvements and will likely be a viable alternative in a few years. Indeed, consumer-
grade devices such as the Microsoft Kinect have been employed as range sensors for mobile
robots (see e.g. [9]). The Kinect is relatively cheap, but suffers from the same problems
as other 3D cameras at present [10].
Spatial information as used in navigation and obstacle avoidance systems can be con-
veyed in a number of ways. This is a primary issue when designing a system specifically
for the visually impaired, perhaps evidenced by the fact that few systems are widely
adopted despite many having been developed. Speech has often been used, and while it
is a viable option in many cases, it is difficult to present spatial information accurately
through speech [11]. Additionally, interpreting speech is time-consuming and requires a
lot of mental effort [12]. Using non-speech auditory signals can speed up the process,
but care must be taken in how this audio is presented to the user, as headphones make
it more difficult to perceive useful sounds from the environment [13].
Haptic feedback avoids occupying the auditory channel and may thus provide a
better user experience. The virtual white cane presented in this paper provides haptic
feedback decoupled from the steering process, so that a person with a visual impair-
ment can “poke” at the environment like when using a white cane. Some important
considerations when designing such a system are:
• Ease of use. The system should be intuitive to use. This factor is especially valuable
in an obstacle avoidance system because human beings know how to avoid obstacles
intuitively. Minimized training and better adoption of the technology should follow
from an intuitive design.
3.1 Hardware
The virtual white cane consists of a haptic display (Novint Falcon [16]), a laser rangefinder
(SICK LMS111 [17]), and a laptop (MSI GT663R [18] with an Intel Core i7-740QM run-
ning at 1.73 GHz, 8GB RAM and an NVIDIA GeForce GTX 460M graphics card). These
components, depicted in figure B.2, are currently mounted on the electric wheelchair
MICA (Mobile Internet Connected Assistant), which has been used for numerous re-
search projects at Luleå University of Technology over the years [19, 20, 21]. MICA is
steered using a joystick in one hand, and the Falcon is used to feel the environment with
the other.
The laser rangefinder is mounted so that it scans a horizontal plane of 270 degrees
in front of the wheelchair. The distance information is transmitted to the laptop over
an ethernet connection at 50 Hz and contains 541 angle-distance pairs (θ, r), yielding an
angular resolution of half a degree. The LMS111 can measure distances up to 20 meters
with an error within three centimeters. This information is used by the software to build a
three-dimensional representation of the environment. This representation assumes that
for each angle θ, the range r will be the same regardless of height. This assumption
works fairly well in a corridor environment where most potential obstacles that could be
missed are stacked against the walls. This representation is then displayed graphically
as well as transmitted to the haptic device, enabling the user to touch the environment
continuously.
Scenegraph View
Figure B.3: The X3D scenegraph. This diagram shows the nodes of the scene and the rela-
tionship among them. The transform (data) node is passed as a reference to the Python script
(described below). Note that nodes containing configuration information or lighting settings are
omitted.

The X3D scenegraph, depicted in figure B.3, contains configuration information com-
prised of haptic rendering settings (see section 3.3) as well as properties of static objects.
Since the bottom of the Novint Falcon’s workspace is not flat, a “floor” is drawn at a
height where maximum horizontal motion of the Falcon’s handle is possible without any
bumps. This makes using the system more intuitive since this artificial floor behaves like
the real floor, and the user can focus on finding obstacles without getting distracted by
the shape of the haptic workspace. At program start up, this floor is drawn at a low
(outside the haptic workspace) height, and is then moved slowly upwards to the desig-
nated floor coordinate in a couple of seconds. This movement is done to make sure the
haptic proxy (the rendered sphere representing the position of the haptic device) does
not end up underneath the floor when the program starts.
The scenegraph also contains a Python script node. This script handles all dynamics
of the program by overriding the node’s traverseSG method. This method executes once
every scenegraph loop, making it possible to use it for obtaining, filtering and rendering
new range data.
Python Script
The Python script fetches data from the laser rangefinder continually, then builds and
renders the model of this data graphically and haptically. It renders the data by creating
an indexed triangle set node and attaching it to the transform (data) node it gets from
the scenegraph.
The model can be thought of as a set of tall, connected rectangles where each rectangle
is positioned and angled based on two adjacent laser measurements. Below is a simplified
version of the algorithm buildModel, which outputs a set of vertices representing the
model. From this list of points, the wall segments are built as shown in figure B.4.

Figure B.4: The ith wall segment, internally composed of two triangles.

For rendering purposes, each tall rectangle is divided into two triangles. The coordinate
system is defined as follows: Sitting in the wheelchair, the positive x-axis is to the right,
y-axis is up and the z-axis points backwards.
Algorithm 1 buildModel
Require: a = an array of n laser data points where the index represents angles from 0
to n/2 degrees, h = the height of the walls
Ensure: v = a set of 2n vertices representing the triangles to be rendered
for i = 0 to n − 1 do
    r ← a[i]
    θ ← (π/180) · (i/2)
    convert (r, θ) to cartesian coordinates (x, z)
    v[i] ← vector(x, 0, z)
    v[n + i] ← vector(x, h, z)
end for
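As an illustration, Algorithm 1 could be written in Python roughly as follows. This is a sketch, not the system's actual code: the polar-to-Cartesian sign conventions and the helper name are our assumptions.

```python
import math

def build_model(a, h):
    """Sketch of Algorithm 1 (buildModel).

    a: array of n laser ranges, where index i corresponds to an angle
       of i/2 degrees (half-degree resolution); h: wall height.
    Returns 2n vertices: v[i] at floor level, v[n + i] at height h.
    """
    n = len(a)
    v = [None] * (2 * n)
    for i in range(n):
        r = a[i]
        theta = math.pi / 180.0 * (i / 2.0)   # theta <- (pi/180) * (i/2)
        # Polar to Cartesian in the x-z plane; the sign convention here
        # (x right, z backwards) is our assumption.
        x = r * math.cos(theta)
        z = -r * math.sin(theta)
        v[i] = (x, 0.0, z)                    # bottom vertex of beam i
        v[n + i] = (x, h, z)                  # top vertex of beam i
    return v

# A full 541-beam scan of constant 2 m range, with 2.5 m wall height.
v = build_model([2.0] * 541, 2.5)
```

Adjacent vertex pairs (v[i], v[n+i]) and (v[i+1], v[n+i+1]) then form the two triangles of each wall segment, as in figure B.4.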
In our current implementation, laser data is passed through three filters before the
model is built. These filters—a spatial low-pass filter, a spatial median filter and a time-
domain median filter—serve two purposes: Firstly, the laser data is subject to some
noise which is noticeable visually and haptically. Secondly, the filters are used to prevent
too sudden changes to the model in order to minimize haptic fall-through (see the next
section for an explanation of this).
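A minimal sketch of such a filter chain is shown below. This is our own illustration, not the thesis's code; the window sizes and the smoothing weight are assumptions.

```python
from statistics import median

def spatial_lowpass(scan, k=0.25):
    """3-tap smoothing across neighbouring beams; k is an assumed weight."""
    out = list(scan)
    for i in range(1, len(scan) - 1):
        out[i] = k * scan[i - 1] + (1 - 2 * k) * scan[i] + k * scan[i + 1]
    return out

def spatial_median(scan, w=3):
    """Median over a w-beam window; suppresses isolated range spikes."""
    half = w // 2
    return [median(scan[max(0, i - half):i + half + 1])
            for i in range(len(scan))]

def time_median(history):
    """Beam-wise median over the last few scans (history: list of scans)."""
    return [median(beam) for beam in zip(*history)]

# A single noisy spike vanishes under the spatial median:
clean = spatial_median([1.0, 1.0, 9.0, 1.0, 1.0])
```

The spatial filters smooth each scan across neighbouring beams, while the time-domain median compares the same beam across consecutive scans, which is what damps sudden model changes.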
To minimize fall-through, several measures are in place:
• The haptic renderer has been chosen with this issue in mind. The renderer chosen for
the virtual white cane was created by Diego Ruspini [23]. This renderer treats the
proxy as a sphere rather than a single point (usually referred to as a god-object),
which made a big difference when it came to fall-through. The proxy radius had a
large influence on this problem; a large proxy can cope better with larger changes
in the model since it is less likely that a change is bigger than the proxy radius.
On the other hand, the larger the proxy is, the less haptic resolution is possible.
• Any large coordinate changes are linearly interpolated over time. This means that
sudden changes are smoothed out, preventing a change that would be bigger than
the proxy. As a trade-off, any rapid and large changes in the model will be unnec-
essarily delayed.
• Three different filters (spatial low-pass and median, time-domain median) are ap-
plied to the data to remove spurious values and reduce fast changes. These filters
delay all changes in the model slightly and have some impact on the application’s
frame rate.
Having these restrictions in place avoids most fall-through problems, but does so at
the cost of haptic resolution and a slow-reacting model, which has been acceptable in the
early tests.
4 Field Trial
In order to assess the feasibility of haptics as a means of presenting information about
nearby obstacles to people with a visual impairment, a field trial with six participants
(ages 52–83) was conducted. All participants were blind (one since birth) and were
white cane users. Since none of the participants were used to a wheelchair, the system
was mounted on a table on wheels (see figure B.5). A crutch handle with support for
the arm was attached to the left side of the table (from the user’s perspective) so that
it could be steered with the left hand and arm, while the right hand used the haptic
interface.
Figure B.5: The virtual white cane as mounted on a movable table. The left hand is used to
steer the table while the right hand probes the environment through the haptic interface.
The trial took place in a corridor environment at the Luleå University of Technology
campus. The trial consisted of an acquaintance phase of a few minutes where the par-
ticipants learnt how to use the system, and a second phase where they were to traverse
a couple of corridors, trying to stay clear of the walls and avoiding doors and other ob-
stacles along the way. The second phases were video-recorded, and the participants were
interviewed afterwards.
All users grasped the idea of how to use the system very quickly. When interviewed,
they stated that they thought their previous white cane experience helped them use this
system. This supports the notion that the virtual white cane is intuitive to use and easy
to understand for someone who is familiar with the white cane. While the participants
understood how to use the system, they had difficulties accurately determining the dis-
tances and angles to obstacles they touched. This made it tricky to perform maneuvers
that require high precision such as passing through doorways. It is worth noting that the
participants quickly adopted their own technique of using the system. Most notably, a
pattern emerged where a user would trace back and forth along one wall, then sweep (at
a close distance) to the other wall, and repeat this procedure starting from that wall.
None of the users expressed discomfort or insecurity, but comments were made re-
garding the clumsiness of the prototype and that it required both physical and mental
effort to use. An upcoming article (see [24]; title may change) will present a more detailed
account of these interviews.
5 Conclusions
Figure B.6 shows a screenshot of the application in use. The field trial demonstrated the
feasibility of haptic interaction for obstacle avoidance, but many areas of improvement
were also identified. The difficulty in determining the precise location of obstacles could
be due to the fact that none of the users had practiced this earlier. Since a small
movement of the haptic grip translates to a larger motion in the physical world, a scale
factor between the real world and the model has to be learned. This is further complicated
by the placement of the laser rangefinder and haptic device relative to the user. As the
model is viewed through the perspective of the laser rangefinder, and perceived through
a directionless grip held with the right hand, a translation has to be learned in addition
to the scale factor in order to properly match the model with the real world. A practice
phase specifically made for learning this correspondence might be in order; however, the
point of the performed field trial was to provide as little training as possible.
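To make the scale and translation concrete, the correspondence can be thought of as an affine mapping from world coordinates (in the laser rangefinder's frame) to the Falcon's workspace. The numbers below are hypothetical, chosen only for illustration.

```python
def world_to_workspace(x_world, z_world, scale=0.02, offset=(0.0, -0.05)):
    """Hypothetical mapping: 2 cm of grip travel per metre of world
    distance, plus an offset accounting for where the device sits
    relative to the user. Neither value is from the thesis."""
    return (scale * x_world + offset[0], scale * z_world + offset[1])

# An obstacle 3 m ahead and 1.5 m to the right corresponds to only a
# few centimetres of grip motion; this compression is what must be learned.
wx, wz = world_to_workspace(1.5, 3.0)
```

The point of the sketch is that both parameters (scale and offset) are initially unknown to the user, which is consistent with the difficulty the participants had in judging distances and angles.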
The way the model is built and the restrictions placed on it in order to minimize haptic
fall-through have several drawbacks. Since the obstacle model is built as a coherent,
deformable surface, a moving object such as a person walking slowly from side to side
in front of the laser rangefinder will cause large, rapid changes in the model. As the
person moves, rectangles representing obstacles farther back are rapidly shifted forward
to represent the person, and vice versa. This means that even some slow motions are
unnecessarily delayed in the model as its rate of deformation is restricted. Since the
haptic proxy is a large sphere, the spatial resolution that can be perceived is also limited.
Several directions for future work have been identified:
• Data acquisition. Some other sensor(s) should be used in order to gather real
three-dimensional measurements. 3D time-of-flight cameras look promising but are
currently too limited in field of view and signal-to-noise ratio for this application.
• Haptic feedback. The most prominent problem with the current system regarding
haptics is haptic fall-through. The current approach of interpolating changes avoids
most fall-through problems but severely degrades the user experience in several
ways. One solution is to use a two-dimensional tactile display instead of a haptic
interface such as the Falcon. Such displays have been explored in many forms over
the years [25, 26, 27]. One big advantage of such displays is that multiple fingers
can be used to feel the model at once. Also, fall-through would not be an issue. On
the flip side, the inability of such displays to display three-dimensional information
and their current state of development makes haptic interfaces such as the Falcon
a better choice under present circumstances.
Figure B.6: The virtual white cane in use. This is a screenshot of the application depicting a
corner of an office, with a door slightly open. The user’s “cane tip”, represented by the
white sphere, is exploring this door.
• Data model and performance. At present the model is built as a single deformable
object. Performance is likely suffering because of this. Different strategies to rep-
resent the data should be investigated. This issue becomes critical once three-
dimensional information is available due in part to the greater amount of informa-
tion itself but also because of the filtering that needs to be performed.
• Ease of use. A user study focusing on model settings (scale and translation primar-
ily) may lead to some average settings that work best for most users, thus reducing
training times further for a large subset of users.
Acknowledgment
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology–both in Sweden–and by the European
Union Objective 2 North Sweden structural fund.
References
[1] A. Lécuyer, P. Mobuchon, C. Mégard, J. Perret, C. Andriot, and J.-P. Colinot,
“HOMERE: a multimodal system for visually impaired people to explore virtual
environments,” in Proc. IEEE VR, 2003, pp. 251–258.
[5] D. Ruspini and O. Khatib, “Dynamic models for haptic rendering systems,”
accessed 2014-02-24. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/
download?doi=10.1.1.127.5804&rep=rep1&type=pdf
[6] G. Jansson, “Basic issues concerning visually impaired people’s use of haptic dis-
plays,” in The 3rd International Conf. Disability, Virtual Reality and Assoc. Tech-
nol., Alghero, Sardinia, Italy, Sep. 2000, pp. 33–38.
[7] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.
[8] D. Yuan and R. Manduchi, “Dynamic environment exploration using a virtual white
cane,” in Proc. 2005 IEEE Computer Society Conf. Computer Vision and Pattern
Recognition (CVPR’05). Washington, DC, USA: IEEE Computer Society, 2005,
pp. 243–249.
[9] D. Correa, D. Sciotti, M. Prado, D. Sales, D. Wolf, and F. Osorio, “Mobile robots
navigation in indoor environments using kinect sensor,” in 2012 Second Brazilian
Conf. Critical Embedded Systems (CBSEC), May 2012, pp. 36–41.
[10] K. Khoshelham and S. O. Elberink, “Accuracy and resolution of kinect depth data
for indoor mapping applications,” Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.
[12] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.
[15] J. Staton and M. Huber, “An assistive navigation paradigm using force feedback,” in
IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Nov. 2009,
pp. 119–125.
[20] K. Hyyppä, “On a laser anglemeter for mobile robot navigation,” Ph.D. dissertation,
Luleå University of Technology, Luleå, Sweden, 1993.
[21] S. Rönnbäck, “On methods for assistive mobile robots,” Ph.D. dissertation, Luleå
University of Technology, Luleå, Sweden, 2006.
[23] D. C. Ruspini, K. Kolarov, and O. Khatib, “The haptic display of complex graphical
environments,” in Proc. 24th Annu. Conf. Computer Graphics and Interactive Tech-
niques. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997,
pp. 345–352.
[24] D. Innala Ahlmark, M. Prellwitz, J. Röding, L. Nyberg, and K. Hyyppä, “An initial
field trial of a haptic navigation system for persons with a visual impairment,”
Journal of Assistive Technologies, vol. 9, no. 4, pp. 199–206, 2015.
[26] R. Velazquez and S. Gutierrez, “New test structure for tactile display using laterally
driven tactors,” in Instrumentation and Measurement Technol. Conf. Proc., May
2008, pp. 1381–1386.
Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi Hyyppä
© 2015, Emerald Group Publishing Limited. Reprinted with permission.
An Initial Field Trial of a Haptic Navigation System
for Persons with a Visual Impairment
Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi
Hyyppä
Abstract
Purpose: The purpose of the presented field trial was to describe conceptions of feasi-
bility of a haptic navigation system for persons with a visual impairment.
Design/methodology/approach: Six persons with a visual impairment who were
white cane users were tasked with traversing a predetermined route in a corridor en-
vironment using the haptic navigation system. To see whether white cane experience
translated to using the system, the participants received no prior training. The proce-
dures were video-recorded, and the participants were interviewed about their conceptions
of using the system. The interviews were analyzed using content analysis, where induc-
tively generated codes that emerged from the data were clustered together and formulated
into categories.
Findings: The participants quickly figured out how to use the system, and soon adopted
their own usage technique. Despite this, locating objects was difficult. The interviews
highlighted the desire to be able to feel at a distance, with several scenarios presented
to illustrate current problems. The participants noted that their previous white cane
experience helped, but that it nevertheless would take a lot of practice to master using
this system. The potential for the device to increase security in unfamiliar environments
was mentioned. Practical problems with the prototype were also discussed, notably the
lack of auditory feedback.
Originality/value: One novel aspect of this field trial is the way it was carried out.
Prior training was intentionally not provided, which means that the findings reflect im-
mediate user experiences. The findings confirm the value of being able to perceive things
beyond the range of the white cane; at the same time, the participants expressed concerns
about that ability. Another key feature is that the prototype should be seen as a navi-
gation aid rather than an obstacle avoidance device, despite the interaction similarities
with the white cane. As such, the intent is not to replace the white cane as a primary
means of detecting obstacles.
1 Introduction
Vision provides the ability to identify danger and obstacles at a distance, and also aids in
the identification and location of objects in the environment. Vision also grants essential
information used for postural control, motion control and handling things in the environ-
ment [1]. According to the World Health Organization there are 285 million people with
a visual impairment (VI) in the world [2]. The International Classification of Diseases
(ICD) defines four vision categories: normal vision, moderate visual impairment, severe
visual impairment, and blindness. Throughout this article, the term ’visual impairment’
is used in accordance with the ICD, that is, it implies all categories except normal vision.
Studies [3, 4] have shown that mobility is compromised for persons with VI and that
this in turn affects many daily activities, such as shopping and going for walks. Problems
with obstacles like bicycle stands, awnings and bricks in the pavement can limit
outdoor activities. Not being able to go to a variety of places independently and the
fear of unfamiliar environments can also limit activities. There are also studies [5, 6]
that have shown that mobility problems affect the quality of life of persons with VI in a
negative way as a result of activity limitations.
For persons with VI, the primary aid is the white cane, which provides a direct
experience of obstacles at close proximity. This aid can provide the user with a lot of
valuable information about their environment. During the last couple of decades, persons
with VI have benefited from the development of technological devices. Many of these
have the potential to support a better quality of life for individuals with VI and enhance
their ability to participate fully in daily activities and to live independently [7].
Technological solutions ranging from accessible GPS devices such as the Trekker
Breeze [8] to extensions of the white cane that use ultrasound (e.g. UltraCane [9])
are available, but have not been widely adopted. Most of them involve a great deal of
effort and are not intuitive for persons with VI [10, 11]. Therefore there is a need to
focus on solutions that are usable and that enable the user to make appropriate and
timely decisions [12, 13, 14, 10]. The majority of current solutions use speech interfaces
to interact with users with VI, but informing the user of nearby obstacles with sufficient
detail is difficult and takes a lot of time [15] compared to the quick and intuitive reaction
attained when hitting an obstacle with a white cane.
Due to the problems with speech for spatial information, we chose a haptic interface
to communicate nearby obstacles. The present prototype consists of a laser rangefinder,
a haptic interface, and a laptop (see figure 1). The laser rangefinder obtains distances to
nearby objects. This information is then made into a three-dimensional model, which in
turn is transmitted to a Novint Falcon [16] haptic interface for presentation. This way
a user can feel obstacles several meters in front of them, much in the same way they
could with a white cane. To do this, the user moves the grip of the haptic interface,
and because the interface uses force feedback to counteract grip movements, contours
of obstacles and walls can be traced. The laptop that runs the software also displays a
graphical representation of the model and shows the current probing position (the grip
of the haptic interface) as a white sphere. More information about the system itself can
be found in an earlier article [17]. A hand-held version is currently being developed.
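The "counteracting force" behaviour described above can be illustrated with a minimal penalty-force sketch. This is a generic illustration of haptic force rendering, not the authors' implementation and not the Novint Falcon API; the function name and stiffness value are assumptions chosen for the example:

```python
def contact_force(probe_pos, surface_pos, normal, stiffness=400.0):
    """Penalty-based haptic force: if the probe point has penetrated an
    obstacle surface, push back along the outward surface normal in
    proportion to the penetration depth; otherwise apply no force."""
    # Penetration depth along the normal (positive when inside the obstacle)
    depth = sum((s - p) * n for s, p, n in zip(surface_pos, probe_pos, normal))
    if depth <= 0.0:
        return (0.0, 0.0, 0.0)  # free space: the grip moves freely
    return tuple(stiffness * depth * n for n in normal)

# A wall at z = 0 whose outward normal faces the user (+z).
# Pushing the probe 1 cm "into" the wall yields a restoring force along +z:
force = contact_force((0.0, 0.0, -0.01), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(force)  # → (0.0, 0.0, 4.0): 4 N pushing the grip back out
```

In a real haptic device a force like this is recomputed every servo cycle (such loops typically run near 1 kHz), which is what makes a stiff virtual surface feel like a solid contour that can be traced.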
Early field trials in the development of this navigation aid are carried out in order to explore
its potential. The goal is to make the system intuitive for persons who are users of the
white cane today. To reach this goal, input for further development from potential users is
essential. Thus, the aim of this study is to describe conceptions of the system’s feasibility
from an end-user perspective.
1.1 Delimitations
The point of this field trial was to get early feedback from potential end-users. Since the
prototype might change considerably, we chose to focus on the qualitative aspects rather
than performance metrics at this stage. A further aim was to assess how white cane
experience translated to using our prototype, as the interaction possesses similarities to
that of the cane. Because of this, the participants were not given an extended
familiarization phase, and as such we cannot at this stage draw conclusions on
the effects of training.
The current prototype has several known limitations. As the laser rangefinder was
mounted horizontally, it is not possible to detect drops or small obstacles on the ground.
Additionally, no audible feedback from touching an obstacle is generated. These factors
pose a major problem if one intends to replace obstacle-avoidance devices such as the
white cane, but we see a continuation of this device as a navigation aid complementing
the cane.
2 Methods
This initial field trial was carried out with six persons with VI. Each participant completed
a single trial following a standardized two-part procedure: an initial acquaintance part and
a problem-solving part. Both parts were video-recorded, and the
participants were interviewed about their conceptions of using the prototype. Finally, all
gathered data were analyzed qualitatively.
2.1 Participants
The six participants in the study all had at least five years of experience using a white cane,
were able to move around without assistance and could communicate their experiences
verbally. The persons were recruited with help from the regional ombudsman for persons
with visual impairments in northern Sweden. Ethical approval for this study was given
by the Regional Ethical Review Board, Umeå, Sweden (Dnr 2010-139-31).
Figure C.1: The prototype navigation aid mounted on a movable table. The Novint Falcon
haptic interface is used with the right hand to feel where walls and obstacles are located. The
white sphere visible on the computer screen is a representation of the position of the grip of
the haptic interface. The grip can be moved freely as long as the white sphere does not touch
any obstacle, at which point forces are generated to counteract further movement “into” the
obstacle.
The laptop was placed on top of the table, which made it easy to observe—both during the
trial and on the recorded videos—the model of the surroundings and what the users were
touching. The current position of the grip of the haptic interface was represented by a
white sphere clearly visible on the screen.
and the soda can, pick up the can, and then turn around and walk back to the starting
point. This was done with only minor instructions or assistance from the researchers.
This problem solving part was accomplished on average in 10 minutes (range 6 to 14
minutes).
2.4 Interviews
The interviews with each participant took place directly after the trial. A semi-structured
interview guide was used with nine questions regarding the participants’ conceptions of
the solution’s feasibility. The focus of the interviews was on the participants’ concep-
tions of using the device in relation to the use of the white cane, and on what they
thought needed to be done to improve the usability of the system. Each interview took
approximately 45 minutes and was recorded and transcribed verbatim.
3 Results
During the acquaintance part of the trial, all participants had an initial phase in which
they visibly acquainted themselves with the equipment and how to use it in order to
feel the area in front of them. In this phase, lasting from one to seven minutes, they
all needed verbal cues or physical help in order not to collide with the walls or other
obstacles. In this phase they also developed their own pattern of probing the area.
Two participants used a passive pattern, making only sparse probing attempts
with the device. They had difficulties navigating in the corridor and needed frequent
verbal cues and physical assistance. One of these participants chose not to perform the
problem solving part, and the other was not able to get any effective help from the
system.
Three of the participants had an active pattern in which they clearly navigated
by actively using the aid after the initial phase. They employed a horizontal U-shaped
pattern, one at a rather low frequency and the other two at rather high frequencies. Two
used one wall as a reference surface, feeling sideways towards the other wall at regular
intervals and more often when approaching a door, while the third constantly moved
the grip, alternately feeling the walls on each side. During the problem solving part,
these participants navigated well between the walls and managed door openings with the
exception that one participant lost spatial orientation when negotiating one of the
doorways.
One participant showed a very active and efficient pattern, moving the grip frequently
from side to side, but also forwards and backwards, in a flexible way using different
frequencies, directions and amplitudes depending on the situation. This participant was
able to identify small obstacles beside the actual course. During the problem solving
phase, this participant cleared the walls and most doorways without any problems and
needed verbal guidance only in order to find the way towards the narrow doorway after the
90 degree turn. Still, this participant had the same problems as the others with obstacles
in the very near vicinity at the sides, and needed verbal assistance when coming close to
the table and reaching for the can.
4 Discussion
This initial field trial showed that most of the participants, despite being introduced to
the prototype for the first time, quickly understood how to use it. The participants’
conceptions were in general positive; they appreciated the ability to feel at a distance,
although perceiving the actual range was difficult. They also remarked on the absence
of auditory cues.
The literature lacks reports on trials of similar systems. Sharma et al. [19] de-
scribed a trial for an obstacle avoidance system where blindfolded people used a powered
wheelchair to navigate an obstacle course. They demonstrated, as do our results, that
systems that can provide users with essential navigation information covering distances
beyond the reach of a cane might be valuable to support safe mobility.
A remarkable fact is that all participants quickly adopted their own usage technique.
This implies an intuitive learning process which could be attributed to the concept of
the system, but also to the fact that the participants were experienced cane users. The
U-shaped pattern that emerged in the participants’ use of the system could also be seen
as a limited use of it, not utilizing the full potential of scanning the total area in front of
them. It must be emphasized that the participants used the prototype for the first time,
and it is possible that a prolonged use would have made them aware of this opportunity.
While the participants quickly became familiar with how to use the system, they all
had difficulties with range perception. This meant that when performing high-precision
maneuvers such as passing through a narrow doorway, positioning themselves at a proper
angle was troublesome. Again, the fact that none of the users had prior training with
the system is important in this respect; it might be that they simply had not had enough
experience to precisely judge the scaling between the small movements of the haptic
grip and distances in the physical world. Another important factor to consider is the
position of the laser rangefinder and haptic interface relative to the user. In our case, the
laser rangefinder was positioned about half a meter directly in front of the user, while the
haptic interface was closer, but more to the right of the user. This means that in addition
to having to learn the scaling between the physical world and the haptic representation,
an additional sideways translation is required in order to properly match the physical
world with the virtual model.
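The scaling and sideways translation discussed above amount to a simple affine mapping between the haptic workspace and the physical world. The sketch below is illustrative only; the scale factor and offsets are made-up numbers for the example, not measurements from the prototype:

```python
def haptic_to_world(haptic_point, scale, device_offset):
    """Map a point in the haptic workspace to physical-world coordinates:
    scale the small grip movements up, then translate by the device's
    position relative to the user."""
    return tuple(scale * h + o for h, o in zip(haptic_point, device_offset))

# Hypothetical numbers: workspace scaled 20x; device 0.3 m to the user's
# right (x) and 0.5 m ahead (z). A 5 cm grip offset then maps to 1 m of
# world distance before the 0.3 m sideways offset is added:
print(haptic_to_world((0.05, 0.0, 0.10), 20.0, (0.3, 0.0, 0.5)))
# → (1.3, 0.0, 2.5)
```

The point of the example is that the user must internalise both the scale and the offset before felt positions can be matched to real ones, which is consistent with the range-perception difficulties observed.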
Based on the participants’ descriptions of using the device and its feasibility, it seems
that it can provide both a direct experience of an environment and a sort of tactile
map, owing to the possibility of feeling at a distance. Studies by Espinosa et al. [20]
have shown that being able to combine these two approaches constitutes a useful way to
orientate in unfamiliar environments. A longing to explore unfamiliar environments was
expressed by the participants in this study, and was something they saw the system could
aid in. Assistive technologies have the potential to enhance quality of life via improved
autonomy, safety and by decreasing social isolation [10].
This study must be seen as a first field trial and has, as such, a certain number of
limitations. A very early prototype was tried, which has effects on the usability for the
participants. Nevertheless, we believed that such an early trial would bring us important
knowledge for further development. The reason for not offering the participants the
opportunity of a longer familiarization with the system was that we wanted to get an
impression of how intuitive the system was to learn to use. The fact that this was a
very early stage trial also motivated us to choose a qualitative and open approach in
describing both user experience and actual performance when using the prototype.
Regarding the trustworthiness of the findings from the interviews, one limitation is the
sample size. A larger number of participants might have widened the range of experiences,
however, all six of the participants did describe similar conceptions of the system. To
strengthen the trustworthiness, the analysis of the transcribed data was discussed among
the authors and representative quotations were chosen to increase the credibility of the
results [21].
We also would like to emphasize that the participants represented potential users, and
were not people with normal vision being blindfolded. This is important as we wanted
to get the experiences from people who do not rely on visual information for navigation
and who were used to another haptic instrument: the white cane. In this respect, we
are aware of the findings of Patla [22], who demonstrated that for individuals with
normal vision whose sight was partially or completely restricted, haptic information has
to match the quantity and immediacy of visual information in order to support well-
controlled motor performance. How haptic information affects motor control in persons
not used to relying on visual information needs to be studied specifically.
In conclusion, this early field trial indicated promising usability of the device from
an end-user perspective. We would like to emphasize the participants’ appreciation of
the ability to feel the environment at ranges beyond white cane range and the swift
acquaintance phase, which may be due to the cane-like interaction. The trial also gave
important perspectives from the users on issues for further development of the system.
Acknowledgement
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology—both in Sweden—and by the European
Union Objective 2 North Sweden structural fund.
References
[1] A. Shumway-Cook and M. H. Woollacott, Motor control: translating research into
clinical practice. Wolters Kluwer Health, 2007.
[9] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.
[12] B. Ando, “A smart multisensor approach to assist blind people in specific urban nav-
igation tasks,” Neural Systems and Rehabilitation Engineering, IEEE Transactions
on, vol. 16, no. 6, pp. 592–594, Dec 2008.
[13] B. Ando and S. Graziani, “Multisensor strategies to assist blind people: A clear-path
indicator,” IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 8,
pp. 2488–2494, Aug 2009.
[14] L. A. Guerrero, F. Vasquez, and S. F. Ochoa, “An indoor navigation system for the
visually impaired,” Sensors, vol. 12, no. 6, pp. 8236–8258, 2012.
[15] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.
[17] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.
[20] M. A. Espinosa and E. Ochaita, “Using tactile maps to improve the practical spatial
knowledge of adults who are blind.” Journal of Visual Impairment & Blindness,
vol. 92, no. 5, pp. 338–45, 1998.
Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan van
Deventer and Kalevi Hyyppä
To be submitted.
A Haptic Navigation Aid for the Visually Impaired –
Part 1: Indoor Evaluation of the LaserNavigator
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan
van Deventer, Kalevi Hyyppä
Abstract
1 Introduction
Navigation is an ability that is largely mediated by vision. Visual impairments thus
limit this ability [1], which can lead to a decreased quality of life [2]. The white cane
is an excellent solution at close proximity and near the ground, but there is a lack of
accurate and user-friendly options for ranges greater than the cane’s length. The few
commercial products that provide this have not achieved widespread adoption among
visually impaired individuals [3], thus making further innovation and research all the more
important for the development of such devices. Factors such as security and usability,
in addition to technical issues about how distance information should be presented
non-visually, need to be evaluated in order to create a better navigation aid.
Navigation aids, often referred to as electronic travel aids (ETAs), are available rang-
ing from small handheld devices and smartphone apps to extended white canes. An
example of a handheld device is the Miniguide [4], which uses ultrasound to measure
the distance to objects the device is pointed at.
Figure D.1: A photo of the LaserNavigator, showing the laser rangefinder (1), ultrasound sensor
(2) and the loudspeaker (3).
Figure D.2: The two reflectors (spherical and cube corner) used alternately to improve the
body–device measurements.
This distance is then reported by bursts of vibrations, where the burst frequency is
related to the measured distance. Another
solution possessing similar functionality is the UltraCane [5], albeit as a custom-built
white cane. Being based on ultrasound, these devices have the disadvantage of a limited
range (a few metres) and a significant beam spread (15° or greater).
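To see why the beam spread matters, one can compute the width of the sensing footprint at a given range. The function below is a purely geometric sketch (footprint diameter of an idealised conical beam), not part of any product's software; the 0.2° figure is the laser rangefinder spread introduced later in this paper:

```python
import math

def beam_diameter(distance_m, spread_deg):
    """Diameter of an idealised conical sensing beam at a given distance,
    for the stated full beam-spread angle."""
    return 2.0 * distance_m * math.tan(math.radians(spread_deg) / 2.0)

# At 10 m, a 15-degree ultrasound beam covers a very wide area, while a
# 0.2-degree laser beam stays narrow enough to single out a lamppost:
print(round(beam_diameter(10.0, 15.0), 2))  # → 2.63 (metres)
print(round(beam_diameter(10.0, 0.2), 3))   # → 0.035 (metres)
```

A lamppost roughly 15 cm wide is lost inside a 2.6 m ultrasound footprint but is clearly resolvable with a 3.5 cm laser spot.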
Handheld GPS units such as the Trekker family of products [6] as well as accessible
smartphone apps [7, 8, 9] with similar features are available. They ultimately depend
on the accuracy of the GPS system and on the limitations of stored maps. Smartphone
apps such as BlindSquare [9] try to overcome the latter limitation by connecting to open
online services, which means they can respond to changes in the environment, provided
someone has altered the information to reflect these changes.
This paper presents a first evaluation of a newly developed handheld navigation aid
dubbed the LaserNavigator (depicted in figure D.1). Its name refers to the fact that
the device uses a laser rangefinder to measure the distance to objects from the user’s
hand. Because it is an optical system, rather than e.g. an ultrasonic system as often
utilised [5, 4], it can measure very large distances (up to 50 m) with high accuracy (error
less than 10 cm + 1% of range) and with a beam spread of 0.2°. The beam spread is of
particular importance in this case as one intended application of the device is to determine
the direction and approximate distance to a distant landmark such as a lamppost. The
user gets haptic feedback through one finger placed on top of a small loudspeaker¹. The
vibrations are not directly related to the distance. Instead, the user is able to vary the
maximum distance of interest. If an object is beyond this selected distance (virtual “cane
length”), no vibrations will be emitted even though an object might be measured. When
the object is at or closer than the selected distance, the speaker membrane will emit
short vibration pulses. These are not dependent on the measured distance, but merely
signal the presence of an object.
To vary the “cane length”, the user moves their arm holding the LaserNavigator
closer to or further away from their body. The device possesses an ultrasound sensor
determining the body–device distance. This value is then multiplied by a constant factor
and is then set to be the “cane length”. This way, the user can seamlessly vary the
desired reach without any interruption or additional input methods. A way to visualise
how the system works is to think of a telescopic white cane that automatically expands
or contracts depending on how far away from the body the user is holding it. In the
presented indoor trials, the scale factor was set to 10, so that a body–device distance
of 50 cm would equate to having a 5 m long “virtual cane”. More information on the
development of the LaserNavigator will be published in an upcoming paper [10].
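The "virtual cane" mapping just described can be sketched in a few lines. The function names and structure are illustrative assumptions; only the scale factor of 10 and the threshold behaviour come from the text:

```python
def virtual_cane_length(body_device_distance_m, scale_factor=10.0):
    """The virtual 'cane length' is the body-to-device distance (measured
    by the ultrasound sensor) multiplied by a constant scale factor."""
    return scale_factor * body_device_distance_m

def should_vibrate(object_distance_m, body_device_distance_m, scale_factor=10.0):
    """Single-frequency feedback: pulse only when the measured object lies
    at or within the current virtual cane length."""
    return object_distance_m <= virtual_cane_length(body_device_distance_m, scale_factor)

# Holding the device 50 cm from the body gives a 5 m virtual cane:
print(virtual_cane_length(0.5))   # → 5.0
print(should_vibrate(3.0, 0.5))   # → True: object within reach, vibrate
print(should_vibrate(8.0, 0.5))   # → False: beyond the cane tip, stay silent
```

Because the cane length is recomputed continuously from the arm position, moving the arm in and out changes the reach seamlessly, with no buttons or other input needed.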
During the trials, two kinds of vibration feedback were experimented with. Contrary
to a real cane, there is nothing stopping the user from pushing far “into” an object. This
means that when the device is vibrating, the user has to pull their arm back until they
locate the threshold where the device stops vibrating. At that point, the user can infer
the approximate distance to the object by knowing the position of their hand relative to
their body. This feedback type, where vibrations only signal the presence of an object,
is denoted single frequency feedback in the remainder of this text. An additional mode,
dual frequency feedback, was added, where the device vibrates at a higher frequency when
the “cane tip” is at the boundary of an object. In this mode, a lower frequency tells the
user that they need to pull back.
¹ A speaker was chosen instead of more conventional vibration actuators because of its
quick response time.
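The two feedback modes can be contrasted in a short sketch. The boundary tolerance below is an assumption made for illustration; the text does not specify how wide the boundary region is:

```python
from enum import Enum

class Feedback(Enum):
    NONE = 0  # object beyond the virtual cane length: no vibration
    HIGH = 1  # "cane tip" at the object's boundary: high-frequency pulses
    LOW = 2   # pushed "into" the object: low-frequency pulses (pull back)

def dual_frequency_feedback(object_distance_m, cane_length_m, boundary_m=0.2):
    """Dual-frequency mode: high frequency right at the boundary, low
    frequency once the virtual cane has overshot into the object.
    boundary_m is a hypothetical tolerance, not a documented value."""
    if object_distance_m > cane_length_m:
        return Feedback.NONE
    if object_distance_m >= cane_length_m - boundary_m:
        return Feedback.HIGH
    return Feedback.LOW

print(dual_frequency_feedback(6.0, 5.0))  # → Feedback.NONE
print(dual_frequency_feedback(4.9, 5.0))  # → Feedback.HIGH
print(dual_frequency_feedback(3.0, 5.0))  # → Feedback.LOW
```

In single-frequency mode the middle case collapses into the last one, which is why the user must sweep the arm back to locate the threshold where vibration stops.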
To improve the accuracy of the body–device distance measurement, the participants
alternately wore one of two specially manufactured reflectors shown in figure D.2. These
serve as stable reflective surfaces for the ultrasound, which would otherwise reflect off the
user’s clothes. The cube corner reflector gives off very strong reflections, and the idea
was to decrease the power of the emitted ultrasound so that only those reflections would
be detected. The cube corner reflector was tested by one (the last) participant.
The paper is organised as follows. Section 2 describes the participants, the test
environment and study protocol. Section 3 shows the results from observations and
participant interviews. These are then summarised and discussed in section 4.
1.1 Purpose
The purpose of this study was to get early feedback from potential users. We wanted
to understand users’ conceptions of usability of the LaserNavigator in an indoor envi-
ronment, later intending to perform another trial outdoors. We were also interested in
identifying movement patterns and strategies employed when using the device, as these
can suggest important changes to the design of the system.
2 Methods
This section characterises the participants, test setup and assessments.
2.1 Participants
The study participants were recruited via the local district of the Swedish National
Association for the Visually Impaired (Synskadades riksförbund, SRF). An information
letter was sent out to the district members, and three participants expressed interest
in participating in the study. The selection criterion was that participants be visually
impaired and able to move about independently, with or without a white cane.
The participants, 2 females and 1 male, were 60, 72 and 78 years old, respectively. All
were blind², thus no one was likely able to use visual cues to aid in the task. Participants
B and C were adventitiously blind, having been blind for 2 and 7 years, respectively.
Also noteworthy is that while all three used white canes, participant C used it only as a
walking cane. Additionally, participant B had a guide dog and used a GPS device daily.
Participants A and B were comfortable with walking around in familiar territory on their
own, while participant C said he never leaves the home by himself.
Due to the low number of participants, we opted for an evolving strategy rather
than looking to obtain statistically comparable results. This meant that we altered both
the LaserNavigator and the test environment after each test, based on feedback from
the previous participant. To highlight this process, the three participants are discussed
separately in the results section below.
² The term is used in accordance with the International Classification of Diseases (see [11]).
Figure D.3: A picture of the makeshift room as viewed from outside the entrance door.
2.3 Task
Upon arrival, the participants had about an hour to familiarise themselves and train
with the LaserNavigator. During this time, they also had the opportunity to practice
the actual trial task multiple times. Following this, the trial proceeded as follows:
Figure D.4: One of the researchers (Daniel) trying out the trial task. The entrance door is
visible in the figure.
2.4 Observations
For looking at movement patterns and strategies, all trials were filmed by one of the
researchers, as well as being recorded by the motion capture cameras at 100 frames per
second. The participants were also encouraged to explain their thoughts and actions
during the trials.
To analyse these data, the videos were watched independently by the researchers, and
key points regarding movement patterns and strategies were noted and then summarised.
The motion capture data are used to illustrate the results (figures D.5 and D.6).
2.5 Interviews
After the trial, each participant was interviewed based on a semi-structured ten-question
interview guide focusing on their conceptions of using the prototype. The interviews
were then transcribed verbatim, and analysed based on content analysis as described by
Graneheim and Lundman [13].
3 Results
This section describes results from the observations and interviews. The observation
results are based on video and motion capture material, and are discussed separately for
each participant to highlight the changes made to the LaserNavigator and set-up. The
findings from the interviews are summarised in three categories: benefits, challenges and
future possibilities.
3.1 Observations
The following text outlines general movement patterns, strategies and other movement
behaviours of interest obtained by looking at the recorded videos. Figure D.5 shows
position graphs for all nine trial runs. The results are discussed separately for each
participant as the system and set-up were slightly altered from trial to trial.
Participant A
The first participant used the spherical reflector (figure D.2) on the body, and the Laser-
Navigator was set to single frequency feedback.
Generally, this participant was very conscious of the way she moved, and seemed
to have a good sense of the location of known things (notably the entrance door) at
all times. She also used a specific strategy in all three attempts, walking about the
room in a clockwise fashion. In general she moved quite fast compared to the other two
participants.
She easily found the entrance door from the starting position, but had a harder time
finding the other open door in the room. She seemed to think the room was circular, which
may be due to corners being similar to open doors when probed with the LaserNavigator
unless one carefully traces the walls. While performing the tasks she effectively used
both her white cane and the LaserNavigator. She alternated between sweeping and the
in-and-out movements with the device.
Figure D.5: Movement tracks for each participant and attempt, obtained by the reflector markers
on the sternum. The entrance door is marked by the point labelled start, and the target door
is the other point, door. Note that the start point appears inside the room because the motion
capture cameras were unable to see part of the walk. Additionally, attempt 3 by participant B
does not show the walk back to the entrance door due to a data corruption issue.
Figure D.6: This figure shows the three attempts of participant B, with the additional red line
indicating the position of the LaserNavigator. Note that attempt 3 is incomplete due to data
corruption.
On the third attempt she managed to detect the correct door without any additional
circuits about the room. She then found her way back to the entrance without difficulties.
This was a noticeable improvement from the preceding attempts. One notable issue was
that she sometimes held the device either too far out to the side or at a steep angle,
which meant that the ultrasound sensor would not measure the proper distance.
Participant B
The second participant also used the spherical reflector (figure D.2), but we altered the
LaserNavigator to use dual frequency feedback. The idea behind this was to make it
easier to know the actual distance to the walls, which should help in differentiating
corners from doors.
This participant found the experience tiring, both physically and mentally. Holding the
navigator pointed horizontally was demanding, and as a result the device often detected
the floor. She went through the room at a slow pace,
without any explicit strategy, as opposed to participant A who explicitly used a clockwise
movement strategy. In general she used too large motions with the navigator to detect
the doorways, and also often used a very long “virtual cane”. These two factors meant
she rarely found the actual distance to the walls, and as such the additional frequency
feedback was of little, if any, help. She almost never used her white cane
during the tasks. Figure D.6 shows the three attempts by participant B with an additional
curve representing the position of the LaserNavigator. The figure shows the transition
from vigorous sweeping in the first attempt to more subdued movements in the third.
Participant C
The final participant used the cube corner reflector (figure D.2). Additionally, the Laser-
Navigator was switched back to single frequency feedback. Of note is that this participant
did not use a white cane in the conventional way. Instead, he used it as a walking cane, al-
though during the tests he only used the LaserNavigator, relying on one of the researchers
3.2 Interviews
The analysis of the interviews resulted in three categories: benefits, challenges and future
possibilities. Benefits are advantages that the current prototype provided during the
trial, and good points about the prototype itself. Challenges refer to statements ranging
from practical problems with the current prototype to more general usability concerns.
The final category, future possibilities, encapsulates ideas and scenarios where a further-
developed LaserNavigator would be helpful. The statements below are illustrated with
quotations from the participants.
Benefits
The participants noted that the device helped them find the doorways; “else I would
have done it like I always do: I walk until I reach a wall and then I follow that wall.”
One participant noted that with a little practice the device became easier to use with
each attempt, although one would have to move a little slower than usual.
All participants noted that the vibrations were clear and easy to discern. Noteworthy
is the fact that participant C stated that he used the emitted audio rather than paying
attention to the vibrations, whereas participant A said she “did not have time” to pay
attention to the sounds. Participant C expressed the general feeling that “it was a good
prototype”. Also noted was the benefit of not having to use the device continuously; the
device was seen more as something you pull out from time to time to check your bearings,
a complement to the white cane.
Challenges
The general opinion was that the task was difficult, and two participants noted specifically
the difficulties and frustrations associated with finding the corners of the room. One said
that using the device gradually became easier, while another said it was more tiring than
it helped. One expressed the opinion that “even the slightest technological aids are in
the way. What you need is the cane.”
All participants noted many practical issues with the prototype. In particular, par-
ticipant B felt it was really tiring to use the device due to its weight and also pondered
how to practically use the system in conjunction with the cane, a GPS device and/or a
guide dog: “You have to use the cane, and then you are supposed to use this too in some
way. One hand is being used by the cane, so how do you practically use this at the same
time?”
One question asked during the interviews was how the participants handled getting
lost, if they had any special strategies. One remarked: “then I’ll go back and soon I’ll
find my way again”.
The guide dog user (B) simply stated that “I’ll just put the harness on and say ‘go
home’.”
Participant C did not describe any strategy, but instead told a story to highlight
the fact that one can get lost even in smaller spaces. He described “getting lost on the
balcony” whilst bringing something from the balcony into the house.
Future Possibilities
All participants had improvement suggestions and presented situations from their daily
activities where they thought such an improved device would be of use. One such scenario
was going out of the house to put up laundry. Going out was easy, but finding the way
back was far more difficult. Similar scenarios included finding the door to a shopping
mall or finding a landmark such as a lamppost. In particular, one wished for the option
to filter objects so only the things of interest would be detectable. “There are so many
details outside: bikes, waste bins, trees, flower boxes, decorations, other people... you
might want to find that particular lamppost in all that mess.”
The guide dog owner (B) in the group noted that one cannot always count on having
a guide dog, and that a future LaserNavigator could work as a temporary solution, for
instance when waiting to get a new guide dog. The same participant also described the
following, very specific, scenario: “if I’m walking outside, I may get information about
where the walls of houses are, so I know when I pass a house; it [the LaserNavigator]
vibrates. One might also encounter a low wall or other low obstacles by the house, and
through the vibrations be able to feel depth.”
Participant C had a very specific idea of his ideal system in mind, and thus offered
many suggestions. These included mounting the device on a walking cane, getting
auditory feedback through a headset, and having a button announce the exact distance
measured by speech. Another idea that came up was for the device to somehow announce
compass directions, which could easily be added as the required sensors are already
present.
4 Discussion
This first evaluation of the handheld LaserNavigator has contributed several important
initial results. It has shown that the device can provide valuable information about the
surroundings, in this case finding doorways in a relatively unknown
environment. It was also found that the LaserNavigator was more usable in a more pre-
dictable situation. This can be exemplified by the fact that all three participants quickly
learned to find and navigate through the entrance door from the starting point outside
the makeshift room, which was the same for each trial. It is likely that the navigator is
most usable in fairly predictable contexts, that is, when the user has a good idea of where
to direct the navigator and what to look for, at least in an initial learning stage. Navigating
in a more unpredictable environment will, however, probably need a lot more practice to
learn to scan with the device and comprehend the information.
The participants were mostly able to find the doorways, thus showing the ability to
integrate the new information provided to them by the LaserNavigator. The participants’
conceptions of usability of the device were mixed; the current device was difficult to use,
but the concept was met with interest and many areas of use were identified. Participant
B, being used to a GPS device, detailed a scenario where she got lost and had to retrace
her steps until she started receiving familiar GPS instructions. On such occasions, where
one has just left a familiar path for the unfamiliar, the LaserNavigator might help in
identifying a known landmark and thus establish a sense of location. Open spaces were
also discussed, where the white cane might not provide much information, yet there is
an important landmark somewhere in that open space which could be detected by the
LaserNavigator.
All participants encountered practical issues with the prototype. In addition to being
heavy and unbalanced, one issue common to all three participants was how to hold the
LaserNavigator horizontally. This may be difficult without any visual feedback or a lot
of practice, but at least two participants may have performed worse in this regard due
to the effort of holding the device straight, thus often pointing it at a downward angle.
The weight and balance issues can be mitigated by redesigning the prototype with this
in mind, and sensors for determining the pointing angle are present.
As for the different reflector and feedback types, we did not notice any obvious effects.
The study did not attempt to specifically measure the impact of these changes, and the
small sample size and diversity would have made it difficult to draw any conclusions
in this regard. The additional level of feedback (dual frequency) does provide more
information, but no participant attained the level of skill required to appreciate this
additional information. The idea of having an easily variable “cane length” would be a
new concept even to white cane users, and as such needs training to use effectively.
As mentioned in the introduction, visual input is of major importance for navigation
in 3D space. Development of accurate and versatile navigation aids for the visually
impaired is therefore of the highest relevance and importance.
Future research will include additional training and navigation in an outdoor scenario
to shed further light on the overall conceptions of the LaserNavigator.
Acknowledgements
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology – both in Sweden – and by the European
Union Objective 2 North Sweden structural fund. We would also like to thank Dar-
iusz Kominiak for managing the motion capture system and aiding in the planning and
construction of the makeshift room.
Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer and Kalevi
Hyyppä
To be submitted.
A Haptic Navigation Aid for the Visually Impaired –
Part 2: Outdoor Evaluation of the LaserNavigator
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer, Kalevi
Hyyppä
Abstract
Negotiating the outdoors can be a difficult challenge for individuals who are visually
impaired. The environment is dynamic, which at times can make even the familiar
route unfamiliar. This article presents the second part evaluation of the LaserNavigator,
a newly developed prototype built to work like a “virtual white cane” with an easily
adjustable length. The user can quickly adjust this length from a few metres up to 50
m. The intended use of the device is as a navigation aid, helping with perceiving distant
landmarks needed to e.g. cross an open space and reach the right destination. This
second evaluation was carried out in an outdoor environment, with the same participants
who partook in the indoor study, described in part one of the series. The participants
used the LaserNavigator while walking a rectangular route among a cluster of buildings.
The walks were filmed, and after the trial the participants were interviewed about their
conceptions of usability of the device. Results from observations and interviews show that
while the device is designed with the white cane in mind, one can learn to see the device
as something different. An example of this difference is that the LaserNavigator enables
keeping track of buildings on both sides of a street. The device was seen as most useful in
familiar environments, and in particular when crossing open spaces or walking along e.g.
a building or a fence. The prototype was too heavy, and all participants requested some
feedback on how they were pointing the device, as they all had difficulties with holding
it horizontally.
1 Introduction
Independent navigation is a challenge for individuals with a visual impairment. Without
sight, the range at which landmarks can be detected is short, and a route that is relatively
simple for the sighted might therefore be a complex route with many landmarks for the
visually impaired. Additionally, the environment changes all the time, which means that
one day the trusty landmark, or even the whole route, may be out of reach. Further,
helpful information such as signs is not usually accessible without sight, and if it happens
to be, one has to know to look for it. These challenges might mean that a person who
has a visual impairment chooses to stay at home [1], or has to postpone the excursion
until sighted assistance can be brought along.
Figure E.1: A picture of the LaserNavigator, showing the laser rangefinder (1), the ultrasound
sensor (2), the loudspeaker (3), and the button under a spring (4) used for adjusting the “cane
length”.
The LaserNavigator has two modes of operation: a length-adjustment mode and a main
(navigation) mode. First, the user enters the length adjustment mode where they choose a
desired length of the imagined cane. This mode is active while the user holds down the
button on top of the handle in figure E.1. While in this mode, the device uses its second
range measurement unit (an ultrasound sensor [10]) to measure the distance from the
device to the user’s body. On releasing the button, the device takes the last body–device
measurement, multiplies it by 50, and uses the result as the “cane length” for the main
usage mode. In the main mode, if the laser detects an object at or closer than this
cane length, the device signals this to the user by vibrations. The vibration pattern
is a series of short repeating bursts with a fixed frequency. Note that this frequency
is not varied based on the measured distance; the vibrations convey the presence of an
object. More information on the development of the LaserNavigator will be published in
an upcoming paper [11].
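The two-mode operation described above can be summarised in a short sketch. This is an illustrative reconstruction rather than the actual firmware; the function names, units, and the handling of the ×50 multiplier are assumptions based only on the description in the text.

```python
# Illustrative sketch of the LaserNavigator's two modes, as described above.
# Not the actual firmware; names and units are assumptions for illustration.

CANE_LENGTH_MULTIPLIER = 50  # last body-device distance is scaled by 50

def update_cane_length(body_device_distance_m: float) -> float:
    """Length-adjustment mode: on button release, the last ultrasound
    body-device measurement, multiplied by 50, becomes the cane length."""
    return body_device_distance_m * CANE_LENGTH_MULTIPLIER

def should_vibrate(laser_distance_m: float, cane_length_m: float) -> bool:
    """Main mode: vibrate (fixed-frequency bursts) when the laser detects
    an object at or closer than the current cane length."""
    return laser_distance_m <= cane_length_m
```

Holding the device half a metre from the body would thus give a 25 m cane, consistent with the few-metres-to-50-m range stated in the abstract.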
The remainder of the article is organised as follows. Section 2 characterises the par-
ticipants and the study. Section 3 describes the results from observations and interviews.
These are then discussed in section 4, which concludes the paper.
1.1 Purpose
The purpose of this study was to better understand users’ conceptions of usability of
the LaserNavigator in an outdoor context. This knowledge, together with the results
from the indoor study, contributes to further development of the device and towards
understanding how such a device can be useful in different scenarios.
Figure E.2: The tactile model used by the participants to familiarise themselves with the route.
The route starts at (1) and is represented by a thread. Using the walls of buildings (B1) and
(B2) as references, the participants walked towards (2), where they found a few downward stairs
lined by a fence. Turning 90 degrees to the right and continuing, following the wall of building
(B2), the next point of interest was at (3). Here, another fence on the right side could be used
as a reference when taking the soft 90-degree turn. The path from (3) to (6) is through an alley
lined with sparsely spaced trees. Along this path, the participants encountered the two simulated
crossings (4) and (5), in addition to the bus stop (B5). At (6) there was a large snowdrift
whose presence guided the participants into the next 90-degree turn. Building B4 was the cue
to perform yet another turn, and then walk straight back to the starting point (1), located just
past the end of (B3).
Figure E.3: This figure shows three images captured from the videos. From left to right, these
were captured: just before reaching (6); just before (5), with one of the makeshift traffic light
poles visible on the right; between (3) and (4).
2 Methods
This section characterises the participants and describes the study.
2.1 Participants
The three participants were the same ones who participated in our earlier indoor evalu-
ation, described in part 1 of the article series [8]. As such, they had tried the LaserNav-
igator before, albeit an earlier version with some significant differences.
The participants were recruited from the local district of the Swedish Association
for the Visually Impaired (SRF). They were two females and one male, aged 60, 72 and 78,
respectively. All were blind, with participants B and C being adventitiously blind. All
three used white canes, but participant C used it only as a walking cane. Additionally,
participant B had a guide dog and used a GPS device daily. Participants A and B were
comfortable with walking around in familiar territory on their own, while participant C
said he never leaves the home by himself.
The interviews were transcribed verbatim and subsequently analysed using content
analysis as described by Graneheim and Lundman [13].
3 Results
This section describes the results from the observations and interviews.
3.1 Observations
All participants needed a lot of instructions during the walk. While they brought their
white canes, they did not use them that much, instead concentrating on using the Laser-
Navigator. They used the device to find a “corridor”, i.e. an open path straight ahead
with “walls” on both sides. When they found this, they seemed confident in walking
through, following one or both of the “walls”, whether they were actual walls or trees.
Figure E.3 shows three pictures from the trials captured from the video recordings.
Generally, all participants had difficulties holding the LaserNavigator horizontally,
and often needed instructions to angle the device up or down. Below are some comments
specific to each participant.
Participant A brought her white cane, but did not use it regularly, instead concentrat-
ing on the LaserNavigator. She mainly used the LaserNavigator to find open space where
she could walk, and having found that, mostly held the device fixed while she walked
straight. She initially used too wide and quick movements with the LaserNavigator, but
later assumed a more calculated and controlled use. When instructed to walk to a land-
mark she found, she was able to move there without much difficulty. The second time
around, participant A was more confident and walked the route considerably quicker.
Participant B found it very fatiguing to hold the LaserNavigator, and walked the
route only once, during which she often had to pause for a while to rest. She had
her white cane but did not use it regularly. She walked at a normal walking pace, and
sometimes missed important landmarks and had to be stopped and given the relevant
information. She had difficulties finding the walkway in the alley, as she used too large
and fast movements with the device. Because of this, she also missed small landmarks
like the traffic light poles, which she needed a lot of help to find. During the later part
of the walk, she let go of her white cane and used the left hand to help steady the right,
holding the LaserNavigator.
Participant C mainly used the LaserNavigator while stationary at first, but later
incorporated the use while walking. The first time around he did not bring his cane, but
during the second time he used it to support himself and to help him with the stairs.
Due to the cold weather he wore gloves for a short time, after which he removed them
and commented that he did not feel the vibrations through them. Noteworthy is that he
generally seemed to listen for the feedback more than he felt it. As with participant A,
the second time around was considerably quicker.
3.2 Interviews
This section lays out the findings from the interviews. Three categories were formulated,
and the findings below are grouped based on those. The analysis shows that with practice,
one can learn to see the device as more than a white cane, despite the similarities which
inevitably affect the initial impressions. All participants spoke of “the complex outdoors”
and the challenges of getting enough information for safe and accurate travel. They also
discussed the prototype itself, noting that there was room for improvement. Following is
a more detailed description of the participants’ conceptions of using the LaserNavigator.
The participants said that the LaserNavigator would be most useful in familiar envi-
ronments. They said that following the walls of the buildings worked well, but found it
challenging as soon as this familiarity was replaced by the unfamiliar. The tactile model
of the environment was an attempt to increase familiarity, and was seen as helpful. “I
memorized it, and had it in mind while I walked.”
The participants also noted several situations where the LaserNavigator would be
useful, such as in the woods, in a tree-lined alley, or finding the small walkway from the
town square. Participant A described one typical situation: finding her way back into
her house after being out in the back yard. She expressed that she felt the main idea
with the LaserNavigator to be using it in larger open spaces and finding out “here I can
go, here I can’t.”
4 Discussion
This second evaluation of the LaserNavigator has widened our understanding of users’
conceptions of the device, by testing it in a more realistic outdoor scenario. While the
concept of the LaserNavigator has much in common with a white cane, it is not intended
to serve the same function. It is highly interesting to observe the shift in conception of
the device from the participants’ point of view. The device was designed with white cane
users in mind, and the fact that the initial conceptions went along those lines suggests
that this similarity in concept was successful. On the flip side, the participants now had to
make the transition from the idea of a “virtual cane” to that of a navigation aid. Realising
the possibility to sense something at a great distance compared to the cane is one thing,
but knowing how to use the device as a navigation aid requires a mental transition to a
conception of the device as a new kind of aid. For example, “The LaserNavigator told
me where I should go,” was the way participant A described the device, thus viewing it
as being different from the white cane.
Feedback was discussed both in the interviews and during the trials, with one concern
being how to know what object is being felt, and when to feel for something. If one
considers only the feedback and no extra knowledge, this is indeed an issue. The current
feedback signals the presence of an object at a certain angle and approximate distance,
and by carefully probing the object, it is possible to get a sense of its size and shape. The
rest is left to the user’s knowledge, and perhaps aided by a GPS system. It is perhaps
because of this that the participants expressed that the LaserNavigator would be most
useful in familiar environments.
Future work will need to address the practical issues with the LaserNavigator, and
look into adding a mode where the device helps the user point it horizontally. The
latter feature is possible to implement in the current system as the hardware includes
an accelerometer and a gyro. The primary question is how to give the feedback. On the
subject of feedback, one participant in particular spoke of the need for more information
when using the device. A camera and an earpiece were suggested, with the idea that
these components might provide some of the information that the researcher walking
with the participants did. Image processing techniques and machine learning algorithms
are advancing rapidly, and there are applications today, such as TapTapSee for iPhone,
that try to describe the contents of a picture [14]. While the current solutions are unable
to give the rich and accurate descriptions likely sought by the participant, this kind of
technology is certainly something to keep an eye on.
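The horizontal-pointing assist discussed above could, for instance, estimate the pitch of the pointing axis from the existing accelerometer's reading of the gravity vector. The sketch below is a hypothetical illustration; the axis convention, units, and tolerance are assumptions, and a real implementation would also fuse the gyro to reject acceleration from hand movement.

```python
import math

# Hypothetical horizontal-pointing assist using the existing accelerometer.
# Convention (assumed): x is the pointing axis; readings are in m/s^2 while
# the device is held still, so the accelerometer measures gravity only.

def pointing_pitch_deg(ax: float, ay: float, az: float) -> float:
    """Pitch of the pointing axis relative to horizontal, estimated from a
    stationary accelerometer reading of the gravity vector."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

def is_level(ax: float, ay: float, az: float, tolerance_deg: float = 5.0) -> bool:
    """True when the device is held close enough to horizontal."""
    return abs(pointing_pitch_deg(ax, ay, az)) <= tolerance_deg
```

How to signal the result to the user (extra vibration pattern, sound, or simply suppressing feedback while tilted) remains exactly the open question raised in the text.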
The change from automatic length adjustment to manual has been a big one.
Being well-trained with the device, I did not personally find the automatic
length adjustment difficult, but I do understand the drawbacks. The concept
of automatic length adjustment would be unfamiliar even to a cane user, and
depth perception would be compressed. In fact, when I first started using the
recent manual mode, I had become so accustomed to the compressed depth
perception that all objects felt extremely large, depth-wise. It did not take
long, however, to adapt to manual mode, and the benefits it provides regarding
ease of learning are evident from the participants’ comments. I agree with
the participants on the need for more information while walking. This is
a general problem the LaserNavigator does not address.
Acknowledgements
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology – both in Sweden – and by the European
Union Objective 2 North Sweden structural fund.
References
[1] D. M. Brouwer, G. Sadlo, K. Winding, and M. I. G. Hanneman, “Limitation in
mobility: Experiences of visually impaired older people,” British Journal of Occu-
pational Therapy, vol. 71, no. 10, pp. 414–421, 2008.
[2] T. Pey, F. Nzegwu, and G. Dooley, “Functionality and the needs of blind and par-
tially sighted adults in the UK: a survey,” Reading, UK: The Guide Dogs for the
Blind Association, 2007.
[6] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.
[11] J. van Deventer, D. Innala Ahlmark, and K. Hyyppä, “Developing a Laser Naviga-
tion Aid for Persons with Visual Impairment,” To be published, 2016.
Authors:
Jan van Deventer, Daniel Innala Ahlmark and Kalevi Hyyppä
To be submitted.
Developing a Laser Navigation Aid for Persons with
Visual Impairment
Abstract
This article presents the development of a new navigation aid for visually impaired per-
sons (VIPs) that uses a laser range finder and electronic proprioception to convey the
VIPs’ physical surroundings. It is dubbed the LaserNavigator. In addition to the tech-
nical contributions, an essential result is a set of reflections leading to what an “intuitive”
handheld navigation aid for VIPs could be. These reflections are influenced by field trials
in which VIPs have evaluated the LaserNavigator indoors and outdoors. The trials
revealed technology-centric misconceptions regarding how VIPs use the device to sense the
environment and how that physical environment information should be provided back
to the user. The set of reflections relies on a literature review of other navigation aids,
which provide interesting insights on what is possible when combining different concepts.
1 Introduction
The World Health Organization estimates that there are 285 million visually impaired
persons worldwide, of which 39 million are blind and 246 million have low vision [1].
This very large population could be reduced, as the impairment’s root cause is often
disease. Nonetheless, this population exists and deserves to be assisted. Assistance
or aid to visually impaired persons (VIPs) comes in different forms. From a technology-
centric view, navigation aids are a valuable form of assistance, which can provide a
feeling of independence to the VIPs. Engineers and researchers can find purpose and
pleasure in developing technical solutions or aids to the problem of perceiving the physical
surroundings for persons with visual impairment. This problem can be further divided
into two parts: sensing and feedback.
To understand what is really needed by VIPs and what is technically possible is a
difficult task. A person in need of a navigation aid does not necessarily know what
is technically possible, while a technical person can only imagine what a good navigation
aid could be, and can lose focus of the true user-centric solution [2]. This can
be witnessed in the collection of available devices designed to help the visually impaired
navigate, none of which is a true reference [3].
The need for such devices is clear when one considers how much information a mere
glimpse of a scene provides about the surroundings to those who are not visually impaired. We
seek the means to provide clear information to persons who have a visual impairment
without overloading their remaining sensory inputs. But as Ackoff points out: “Successful
problem solving requires finding the right solution to the right problem. We fail more
often because we solve the wrong problem than because we get the wrong solution to
the right problem” [4]. As we reflect on the development of the navigation aid, one can
wonder if technological solutions address the “right” problem or are they the solutions
to the misunderstood concept of an intuitive navigation aid.
The purpose of this paper is twofold. Firstly, it presents the development of a naviga-
tion aid that uses a laser range finder to discern the surroundings and electronic sensors
to establish the location and movement of the user’s hand. The device is dubbed Laser-
Navigator. Secondly, it tells about the struggle to define what an intuitive navigation
aid is or could be, which we initially address by stating some basic requirements.
A system engineering approach requires a set of system requirements for a navigation
aid to guide research and development towards a “Eureka!” solution. The requirements
for the device could start with “a system should be intuitive”. It should provide as much
information as fast as possible without overloading the user. It should not interfere with
the other senses of the user. It should be very light to enable use over a longer period and
yet offer enough battery power to be used for at least a day. And, to make it available to
many, one could also wish for low cost. These requirements do need further refinements
and extensions, yet can assist in the evaluation of navigation aids.
There is a non-electronic solution that meets all of these requirements: the white cane.
It is intuitive, as one can quickly use it without any extensive training. It is light. It does
not run out of batteries. It provides information about what is around the user and even
about the ground texture. It generates gentle sounds that help the user locate nearby
walls without muffling the environment. It even communicates with others as it clearly
flags the situation, e.g., when walking down a crowded street, the crowd parts away as
a white cane user approaches. But, it does have a limitation: it is about a meter to a
meter and a half long, depending on the size of the user. Adding the requirement that
a user should be able to discern his or her physical surroundings beyond the length of a
white cane justifies the development of an electronic navigation aid.
The paper is organized in three main sections. Following the introduction, we have
a short review of some existing navigation aids, some of which became products while
others are research projects that seem to have entered a hibernating state. Each of the
ones mentioned here has an innovation that contributes to our search for the utopian
navigation aid. The second section covers our development journey starting with the
“Sighted Wheelchair” and includes trials with VIPs. The third section is a discussion,
which combines the LaserNavigator with the other concepts to describe a next generation
of navigation aids. The article naturally ends with a conclusion and acknowledgements.
The UltraCane is a research innovation that did become a commercial product [5]. It
incorporates two ultrasonic sensors integrated in a white cane while providing feedback
to the user by two vibrating buttons and audible beeps. Both ultrasonic sensors are
aimed forward and are based on acoustic time-of-flight methods. The lower sensor looks
straight ahead of the white cane to warn of upcoming obstacles, while the upper one looks
more upward to detect any obstacle that could hit the user above the belt. The haptic
feedback consists of vibration bursts whose frequency is dependent on the measured
distance, and the user is required to undergo training to use the UltraCane efficiently.
With training comes proficiency, as when a sighted person learns to drive a car or ride a bicycle.
Hoyle points out that a comparison of the UltraCane’s performance with
other navigation aids is only relevant if the user has been trained with both devices [6]. The UltraCane
is quite heavy, so much so that it has a ball bearing with a ball at the ground end to ease
sweeping with the cane. With its wide ultrasonic beam, it does detect a pole in front, but
not necessarily an open door, as it senses the frame of the door instead.
Another commercial navigation aid is the MiniGuide [7]. It is a small and light
handheld ultrasonic rangefinder that has an intermittent vibration when an object is
detected within a certain distance. This distance could be referred to as the “virtual”
cane length, which can be adjusted when entering a setup mode. As an object gets closer
within the cane length, the intermittent vibration bursts get closer to each other. The
device also has a 3.5 mm audio jack output to be used with headphones, which offers
a finer depth perception through a tone as a function of distance. Some users might
find the use of headphones disturbing, as hearing provides other information about the
environment. The MiniGuide also has difficulty detecting open doors at large
distances, as it too uses a wide-beam ultrasonic sensor.
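The MiniGuide-style feedback scheme described above can be sketched as a simple mapping from distance to burst spacing. This is an illustrative reconstruction, not the MiniGuide's actual firmware; the interval endpoints are assumed values chosen for illustration.

```python
# Illustrative sketch of the feedback scheme described above: within the
# virtual cane length, the interval between vibration bursts shrinks as the
# object gets closer. Endpoint values are assumptions, not MiniGuide specs.

MIN_INTERVAL_S = 0.05  # burst spacing when the object is very close (assumed)
MAX_INTERVAL_S = 0.60  # burst spacing at the virtual cane length (assumed)

def burst_interval(distance_m: float, cane_length_m: float):
    """Seconds between vibration bursts, or None when nothing is in range."""
    if distance_m > cane_length_m:
        return None  # no object within the virtual cane: no vibration
    fraction = max(distance_m, 0.0) / cane_length_m  # 0 = touching, 1 = at limit
    return MIN_INTERVAL_S + fraction * (MAX_INTERVAL_S - MIN_INTERVAL_S)
```

The optional headphone tone mentioned in the text would map the same distance onto pitch instead of burst spacing, trading situational hearing for finer depth resolution.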
Gallo et al. demonstrated an augmented white cane with, among other features, a
narrow-beam, long-distance IR sensor to detect objects ahead and a flywheel in the role
of torque reaction as haptic feedback [8]. The first feature addresses the issue of open-door
detection, while the second provides some torque feedback: the flywheel delivers a
vertical torque impulse when abruptly stopped. The flywheel has a complex stopping
mechanism, and the publication mentions neither the weight of the device, which is added
to the white cane, nor its battery life.
Amemiya developed a haptic direction indicator based on a kinesthetic perception
method called the “pseudo-attraction force” technique [9]. The device exploits the non-
linear relationship between perceived and physical acceleration to generate a force sen-
sation. The user holding the navigation aid feels a gentle pull or a push in his or her
hand, which guides the user to a destination. Amemiya uses a GPS (Global Positioning
System) signal to sense where the user is as it guides the user to a desired destination.
Hemmert et al. presented a comparison of other devices that point to the directions
of interest [10]. One device uses weight shifting while another communicates by changing
shape. The comparison also included a device with a graphical display containing a
pointing arrow towards the direction of interest. In trials with non-VIPs, the latter
proved to be the best only when the user paid attention to the display, and not when other tasks had
to be performed at the same time.
3 Laser Navigators
We begin our development reflections with the “Sighted Wheelchair” [14, 15]. It was
a successful project that enabled a visually impaired person, who additionally needs to
use an electric wheelchair, to navigate freely. The system on the wheelchair provides
haptic feedback to the wheelchair operator through a Novint Falcon [16]. The Falcon is a
game controller with high-fidelity three-dimensional haptic force feedback, that measures
the user’s hand location and movement. For environment sensing, the wheelchair has a
SICK LMS111 laser rangefinder that scans a frontal angle of 270 degrees up to 20
meters [17]. The combination of range finder and game controller provides an intuitive
feedback to the wheelchair operator. For example, if a person steps in front of the
wheelchair 10 meters away, the Falcon’s ball grip pushes back on the operator’s hand, who can
then feel in three-dimensions what is in front of him/her. If a pillar is on the side, the
Falcon provides a resistance when the user moves her/his hand to the side.
Since we are easily biased towards our own R&D achievements, the system had to be tested
with external VIPs. Following an ethical approval procedure, we were unsuccessful in
our search for blind persons who used wheelchairs. Training VIPs to drive an electric
wheelchair in order to test the device would undesirably affect the results. The Falcon,
laser scanner, the associated computer and power supply are too large and heavy to be
considered portable, so we decided to use a rolling table to let the experimental
subjects evaluate the proposed solution. It worked, but because a rolling table is
cumbersome to move around, it was not the success we sought; it was more of an anticlimax.
With the yearning for a larger impact, the Sighted Wheelchair was set aside at the
demonstration stage in search of a handheld equivalent: the intuitive navigation aid.
With the combination of the Falcon and the SICK laser, it is possible to experience with
amazement the three-dimensional world beyond the length of a white cane. Answering
the difficult question of why this works is necessary to develop the handheld version. To
address this, in hindsight, one can break the idea down into two parts: sensing and feedback.
The Sighted Wheelchair senses the world in front of itself with the SICK scanning laser, and
the position of the user’s hand with the Falcon. The Sighted Wheelchair provides clear
haptic feedback, in three dimensions, to the user as if the user could palpate the scanned
world.
From the sensing side, the replacement of the SICK laser with a laser rangefinder is
an obvious choice. Being a narrow beam device, it provides sharp and accurate distance
information even beyond the 10 meters of an ultrasonic rangefinder. The narrow beam
width addresses the issue of detecting an open door, especially compared to the devices
using wide beam ultrasonic sensors. This laser rangefinder had to weigh little and not
threaten the eyesight of other people if the device is pointed at their faces. Getting the
distance in front of the user is one thing; what about knowing where the user’s hand
is with respect to the user? The Falcon has a fixed reference frame since it is attached
to the wheelchair, but the handheld device is only held by the hand of the user. As
an analogy to proprioception, where a person usually knows where one’s own hand is
without looking, the device needs to know where it is with respect to the user and how it
moved. We refer to this as electronic proprioception. To achieve this, we began by using
an ultrasonic sensor aimed backwards at the user.
The Falcon is able to provide force feedback in three dimensions because it relies on
the principle that every action has an equal and opposite reaction. It can do that because
it is anchored to a heavy wheelchair and powered by a large battery. The handheld device
has neither the luxury of weight nor large battery reserves. We initially followed
the standard vibration feedback to communicate with the user.
Our first prototype consisted of a laser rangefinder, an ultrasonic range finder, an
Arduino prototyping platform and an Android telephone. The telephone provided power
to the navigation aid, a visual interface to the developer as well as the vibration feedback.
The Arduino Mega 2560 ADK was the electronic communication center of the navigation
aid [18]. It communicated with the phone through its USB port, to the laser range finder
via a UART serial port and to the ultrasonic sensor through a signal pin. The rearward
ultrasonic sensor was a Parallax PING with a center frequency of 40 kHz [19]. A short
start pulse was sent out on the signal pin of the Arduino, while a timer counted the
time required for an echo to return. The frontal range finder considered was an SF01
Laser Rangefinder from LightWare Optoelectronics [20]. It could detect targets over 60
meters away with a resolution of 1 centimeter and had an update rate of eight readings
per second. The laser light emitted from the pulsed laser is invisible, with a wavelength
of 850 nm, an average power of 11 mW and a peak power of 14 W. It was soon replaced
by the SF02/F, a lightweight laser rangefinder from the same manufacturer. There was a
range reduction from 60 to 50 meters, but more importantly, a weight reduction from
185 g to 69 g. The update rate increased to 12 readings per second. The first prototype
served its purpose well. It confirmed the concept of electronic proprioception for the
navigation aid, i.e. it knew how far it was from the body, providing a distance d. With
the frontal laser rangefinder providing D, the user could sense the proportional depth by
moving their hand back and forth to find where D = kd with k being a constant gain.
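As an illustration, the matching condition D = kd can be sketched in a few lines of code. This is a hypothetical sketch: the function name, centimeter units and tolerance are our own assumptions and not part of the actual firmware.

```python
# Hypothetical sketch of the D = kd matching described above. The device
# signals the user when the frontal laser distance D equals the
# hand-to-body distance d scaled by the constant gain k. The default
# gain and the tolerance are our own assumptions.

def should_signal(D_cm: float, d_cm: float, k: float = 50.0,
                  tolerance_cm: float = 10.0) -> bool:
    """Return True when the scaled hand distance matches the scene depth."""
    return abs(D_cm - k * d_cm) <= tolerance_cm

# With k = 50, a wall at 10 m (D = 1000 cm) is "felt" when the hand is
# about 20 cm from the body, since 50 * 20 cm = 1000 cm.
```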
Our first prototype had two major drawbacks. The first one was its weight: tolerable
for some time, it was unacceptable for longer test periods. The second drawback was the
lack of software flexibility close to the embedded hardware, e.g., control of the vibrating
oscillator.
A second prototype was then developed. The second prototype shed the telephone
and the Arduino from the device. Its computational power came from a Cortex M3
micro-controller, which was programmed via a JTAG interface. It was powered either by
its micro-USB interface or by batteries, which were placed into the device’s handle
to provide better balance. It was enhanced with a 3D accelerometer, a 3D gyroscope
and a Bluetooth module. The accelerometer provided acceleration in three directions
and inclination with respect to gravity. The first use of the accelerometer was to put
the device in a sleep mode when it rested on a table. The gyroscope informed about the
rate of rotation of the device. The Bluetooth module communicated wirelessly to mobile
phones or computers, which was useful in development as well as interacting with other
devices. The haptic feedback was an eccentric rotating mass (ERM) vibration motor.
Being much lighter and better balanced, this prototype became easier to manipulate.
As with any engineering project, once the big issues are resolved, smaller ones take
center stage. Quickly, two response-type problems were revealed. They were most evident
when informing the user of the presence of a pole or tree when sweeping sideways.
The first issue had to do with the vibrating motor, which was too slow to start and
stop vibrating. To address this, the vibrating motor was cast aside in favor of a small
loudspeaker onto which the user rested his/her index finger. The next problem was the
update rate of the laser; 12 readings per second was insufficient. Luckily, LightWare
Optoelectronics had just released a new version of the SF02/F with 32 measurements
per second, which we purchased immediately.
Satisfied with our creation, we had to label it to be able to refer to it. The designation
LaserNavigator seemed fitting, as it is a navigation aid using a laser rangefinder.
Figure F.1: Indoor evaluation. Motion capture cameras at the top with unique reflective
identifiers on chest, head, LaserNavigator and white cane. Door 3 is closed.
To find trial participants, we contacted the local district of the Swedish National
Association for the Visually Impaired (SRF) and solicited volunteers to test the Laser-
Navigator. Three persons kindly volunteered. They were all between sixty and eighty
years old. Two were women and the third was a man.
Indoor Evaluation
An indoor experiment was designed around a staged room with doors, which was located
within a larger laboratory room. The walls of the staged room were made of plywood
and only 1.2 meters high to have a real “feel” with a white cane and the LaserNavigator,
while enabling a three dimensional motion capture system to follow the participants’
bodies, heads and the LaserNavigator through the trials (cf. figure F.1). The tracking
was made possible with a Vicon Bonita system [21]. The evaluation scenario asked each
VIP, starting from the same place in the lab, to find the entrance to the staged room
and enter it. Once in the room, each participant had to find a second open door, go to
it, and then return to the first door; i.e., there were only two doors open per trial. There
were three trials per participant, all of them being additionally video recorded.
After a short training session, each participant performed three tests and was interviewed afterwards.
Figure F.2: Paths (black) taken by the three participants (one per row) over three indoor trials. The red line shows how they used the LaserNavigator.
It was interesting to see that there was a very good correlation between
the participants’ perceptions of the tests as communicated through the interviews and
their performance captured with the motion capture system. Figure F.2 shows, in black,
the path each participant (rows) took during each trial (columns). In red, it shows how
the LaserNavigator was used. If we consider the second row of figure F.2, participant B
found the trials difficult and tiresome. In the first trial, one clearly sees wide sweeps, and
by the third trial, there is minimal sweeping. Participant A liked the device and learned
to use it quickly, which is obvious as one looks at the evolution of the path lengths in the
first row of figure F.2.
As Manduchi and Kurniawan point out, experimental trials for assistive technology
are essential to grasp the users’ perspectives [2]. In our case, the indoor trials revealed
some wrong inferences in our proposed navigation aid. The first one had to do with how
the navigation aid was used, while the second one had to do with shape perception.
The first one was a surprise, as one of the authors, and the main system developer, is
visually impaired and uses a white cane daily to navigate. The misinterpretation was an
indication that our research was engrossed in depth perception, i.e., technology-centric.
The research on the CyArm seems to have had the same focus. The idea was to move
the LaserNavigator back and forth between the target and the user, seeking to satisfy
the equation D = kd in order to perceive the three-dimensional scene. Associated with
this idea, the electronic proprioception was with respect to the frontal or coronal plane of
the body.
the body. During the trials, the VIPs used a sideways sweeping movement to scan the
field in front of them, as they would with a white cane. The electronic proprioception
must then be with respect to the axis of rotation of the sweeping movement rather than
the coronal plane of the body. This axis of rotation is a vertical line going through the joint
where the head of the humerus bone yokes into the shoulder.
The second discovery had to do with the VIPs having difficulties differentiating an
open door from following a wall towards a corner of the room. Some analysis had to be
done to really pin down the issue, define it and propose a remedy. The door opening
is an abrupt change of distance. Sweeping along a wall towards a corner also increases
the target distance; in the latter case, however, the distance change is gentler. Solutions
to these issues are presented in the discussion below on intuitive torque feedback.
As we prepared for the outdoor trials, we added a micro-switch to the handle to
address the observation that the users swept the LaserNavigator rather than prodded the
scene. When the switch is pressed down, the LaserNavigator records the distance d
between the hand and the frontal plane of the user while providing pulsed feedback.
When the switch is released, the cane “length” l is fixed at kd. The pulsed feedback
indicates l with one meter for every pulse when operating in indoor mode, and five
meters for every pulse when in outdoor mode. The LaserNavigator then behaves like the
MiniGuide, except that the cane length l can be adjusted on the fly. For the outdoor
trials, the gain k was set to 50 such that l could be varied from less than 5 meters to 40
meters.
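The micro-switch behaviour described above can be summarised in a short sketch. The class and attribute names are our own, and the pulse spacings follow the text: one meter per pulse indoors, five meters per pulse outdoors.

```python
# Sketch of the micro-switch logic described above. While the switch is
# pressed, the cane "length" l tracks k*d and pulsed feedback indicates
# l; on release, l stays frozen so the device behaves like a
# fixed-length cane. Names are our own, not from the actual firmware.

PULSE_SPACING_M = {"indoor": 1.0, "outdoor": 5.0}

class CaneLength:
    def __init__(self, k: float = 50.0, mode: str = "outdoor"):
        self.k = k
        self.mode = mode
        self.l_m = 0.0       # current cane "length" l in meters

    def update(self, pressed: bool, d_m: float) -> int:
        """Track l = k*d while pressed; return the pulse count for l."""
        if pressed:
            self.l_m = self.k * d_m
        # one pulse per meter indoors, one per five meters outdoors
        return int(self.l_m // PULSE_SPACING_M[self.mode])

cane = CaneLength()
cane.update(True, 0.4)    # hand 40 cm out: l = 20 m, 4 outdoor pulses
cane.update(False, 0.0)   # switch released: l stays fixed at 20 m
```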
Outdoor Evaluation
The outdoor evaluation of the LaserNavigator was set around a parking lot on the outskirts
of the university (cf. figure F.3). The gravel-covered asphalt path around the parking
lot was rectangular, with one set of stairs with three steps (2). The parking lot had three
buildings on it (B2, B3, B4). On the outer sides, there was a major university building
(B1), a road on two sides and a dirt mound with trees on the fourth side. The dirt
mound was covered with snow at the time of the evaluation. Along the roadside, the
sidewalk had trees on either side and was intersected by two access drives to the parking
lot (4 & 5). A pair of poles was placed at each access drive to simulate traffic lights.
The participants were provided with a 3D scaled model of the buildings, parking lot
and trees, which they could feel to get an idea of their navigation task. This model
is shown in figure F.3. They got a chance to practice and get reacquainted with the
LaserNavigator indoors along long corridors. Two volunteers performed the navigation
task twice, with less and less support from the accompanying researcher. The third
volunteer performed it only once and found the exercise tiresome.
The scaled model turned out to be most helpful to the participants who became blind
later in life. With the LaserNavigator, they could detect where the buildings and
the buildings’ corners were. They very often used the ability to change the cane length on
the fly to resolve objects at known distances. Being able to have a 40-meter cane allowed
them to feel a corridor in between the trees along the path parallel to the road. They
even discovered that the branches went over the path when aiming the device upward.
An interesting outcome from the post-trial interviews was that the VIPs grasped the
navigation aid as a “white cane” and were startled during the tests to discover that
they were feeling the world far away.
Another revelation was the difference between distorted depth and real depth.
Having a fixed cane length, although it was set by electronic proprioception, gave a
different perception than the continuous length adjustment used in the indoor trials.
With the latter, a depth of 1 m at 20 m with a gain k = 50 gives a perceived depth
∆d = (2100 cm − 2000 cm)/50 = 2 cm. With a fixed cane length of 20 m and a gain
k = 50, the difference remains 1 m. The fixed-length cane provides real depth while the
continuous length adjustment gives distorted depth. We were not able to evaluate which
method the VIPs preferred. Akita et al. showed, through their tests, that users could
correctly estimate depth with a distorted depth method [13].
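The two perception modes compared above can be put side by side numerically. This is a minimal illustration with the outdoor gain k = 50; the function names are our own.

```python
# Minimal illustration of the two depth modes compared above, using the
# outdoor gain k = 50. Function names are our own.

def depth_continuous(D1_cm: float, D2_cm: float, k: float = 50.0) -> float:
    """Continuous length adjustment: scene depth is compressed by k."""
    return (D2_cm - D1_cm) / k

def depth_fixed(D1_cm: float, D2_cm: float) -> float:
    """Fixed cane length: scene depth maps one-to-one onto hand travel."""
    return D2_cm - D1_cm

# A 1 m step in the scene at 20 m:
# continuous: (2100 - 2000) / 50 = 2 cm of hand travel (distorted depth)
# fixed:      2100 - 2000 = 100 cm (real depth)
```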
4 Discussion
When blessed with the faculty of sight, it is so easy and instantaneous to perceive one’s
physical surroundings. It is quite difficult to provide this information to VIPs, as
exemplified by our trials described above. Developing an electronic navigation aid to provide
this information is a challenge. Many, including us, have attempted to meet this
challenge. Contemplating these attempts, which include the Sighted Wheelchair, brings
some insights that we share here.
Hoyle, along with Gallo et al., added a wide beam ultrasonic sensor to warn VIPs of
protruding objects above the waist. Sensor fusion can go further and come closer to the
utopian intuitive handheld electronic navigation aid. If we take the example of the 3D
accelerometer and gyroscope onboard the LaserNavigator, we could provide interesting
feedback. When, in a steady state, the laser rangefinder suddenly measures a shorter
distance D, e.g., when someone walks in front of the VIP, the cane should push back on
the holding hand. When sweeping across a pole or a door frame, which is detected by the
accelerometer and gyroscope, the measured distance D is also shortened. The feedback
should then be a torque about a vertical axis.
The accelerometer can also detect when a tired user points the navigation aid
downwards such that the measurement is of the ground. The warning feedback could be
a short vibration.
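The sensor-fusion ideas above could be organised as a simple decision rule. This is a speculative sketch: the thresholds, the feedback labels and the function name are our own assumptions, not tested values.

```python
# Speculative sketch of the sensor-fusion feedback discussed above: a
# sudden drop in D while the device is steady suggests a push-back
# force, the same drop during a sweep suggests a torque cue, and
# pointing at the ground triggers a warning vibration. Thresholds and
# names are our own assumptions.

def select_feedback(delta_D_m: float, sweep_rate_dps: float,
                    pointing_down: bool) -> str:
    if pointing_down:
        return "vibration"        # tired user aiming at the ground
    if delta_D_m < -0.5:          # measured distance suddenly shortened
        if abs(sweep_rate_dps) > 30.0:
            return "torque"       # pole or door frame crossed mid-sweep
        return "push"             # someone stepped in front of the VIP
    return "none"
```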
In our indoor trials, we noticed that the users were sweeping the scene with the
LaserNavigator. We realized that the electronic proprioception should be with respect to
the vertical axis going through the shoulder rather than the frontal plane. We successfully
experimented with a second electronic unit that also had a 3D accelerometer, which could
be worn on the upper arm. The horizontal distance of the elbow from the vertical shoulder
axis is simply the product of the upper arm length with the ratio of the acceleration
normal to the upper arm to the acceleration due to gravity. To be clearer, let the
3D accelerometer measure the acceleration a_z along the upper arm, a_x the acceleration
perpendicular to the arm aiming forward and a_y the acceleration pointing to the side.
When the upper arm is aligned with the vertical axis, a_z is equal but opposite to the
acceleration due to gravity, i.e. −g, which is −9.8 m/s², and a_x and a_y are 0 m/s².
As the elbow is moved backwards, the distance between the vertical shoulder axis and
the elbow in the x direction is

e_x = L (a_x / g),    (1)

where L is the upper arm’s length. The same goes for e_y and a_y as the elbow is moved
to the side away from the body. The information must then be communicated from the
wearable electronic unit to the navigation aid. This can be done with a wired serial
connection or, in our case, using the Bluetooth modules on each electronic unit.
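Equation (1) is straightforward to apply in code, under the quasi-static assumption stated above. The function name and default arm length are our own choices for illustration.

```python
# Sketch of equation (1): the elbow's horizontal offset from the
# vertical shoulder axis, for a quasi-static arm, computed from the
# upper-arm accelerometer. The function name and the default arm length
# are our own assumptions.

G = 9.8  # magnitude of the acceleration due to gravity, m/s^2

def elbow_offset(a_x: float, a_y: float, L: float = 0.3) -> tuple:
    """Return (e_x, e_y) in meters, where a_x and a_y are the
    accelerations normal to the upper arm (m/s^2) and L is the
    upper-arm length (m)."""
    return (L * a_x / G, L * a_y / G)

# Arm vertical: a_x = a_y = 0, so the elbow sits on the shoulder axis.
# Elbow raised until a_x = g: e_x = L, i.e. the upper arm is horizontal.
```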
Wireless communication between different units offers a new set of possibilities, where
imagination is the limit. What could be done by combining a navigation aid with a
magnetometer (compass) and Bluetooth along with a phone running BlindSquare? During
the feedback interviews from the trials, the VIPs described different needs, such as finding
the way back to the house after hanging laundry outdoors. In other words, information
fusion from different existing systems could lead to interesting capabilities.
5 Conclusions
We did not develop an intuitive handheld navigation aid for VIPs. But, in our attempt
to do so, we managed to define what an intuitive handheld navigation aid is, at least for
ourselves, and are working towards that goal. It is a handheld device that provides
feedback similar to a white cane but with a sensing range beyond two meters from the
user. Looking at other research ideas, we find good haptic feedback concepts that are
interesting to consider. The LaserNavigator we developed uses a frontal laser rangefinder,
which provides long distance coverage with a narrow beam that addresses the open-door-
in-the-wall issue. The LaserNavigator uses electronic proprioception to continuously or
manually adjust the length of the “cane”. Electronic proprioception was achieved with a
backwards-aimed ultrasonic sensor, alone or in combination with a 3D accelerometer worn
on the upper arm. Indoor trials with VIPs revealed that proprioception should be done
with respect to the vertical axis going through the shoulder rather than the frontal or
coronal plane. Outdoor trials disclosed that light poles and trees are difficult to detect
without torque feedback. We reckon that sensor fusion, low-power force and torque
feedback, and system integration are three parallel research paths to continue the search
towards intuitive navigation aids.
Acknowledgement
We are thankful to the SRF (Swedish National Association for the Visually Impaired) and
their volunteers. We are grateful for the initial funding of the project by the Centrum
för medicinsk teknik och fysik (CMTF) with the European Union Objective 2 North
Sweden structural fund. Financial support was then provided by the divisions of Health
Sciences and EISLAB at Luleå University of Technology. We are thankful to the Kempe
Foundation for financially supporting the acquisition of the motion capture system used
during the indoor experiments.
References
[1] World Health Organization, “Visual impairment and blindness,” http://www.who.
int/mediacentre/factsheets/fs282/en/, November 2014, accessed 2016-03-21.
[2] R. Manduchi and S. Kurniawan, Eds., Assistive technology for blindness and low
vision. ISBN: 9781439871539, Boca Raton, FL, USA: CRC Press, 2012.
[3] T. Pey, F. Nzegwu, and G. Dooley, Functionality and the needs of blind and partially
sighted adults in the UK: a survey. Guide Dogs for the Blind Association, 2007.
[4] R. L. Ackoff, Redesigning the future. ISBN: 0471002968, New York, USA: John
Wiley & Sons, 1974.
[5] B. Hoyle and D. Waters, Assistive Technology for Visually Impaired and Blind Peo-
ple. London: Springer London, 2008, ch. Mobility AT: The Batcane (UltraCane),
pp. 209–229.
[6] B. S. Hoyle, “Letter to the editor,” Assistive Technology, vol. 25, no. 1, pp. 58–59,
2013.
[9] T. Amemiya, Kinesthetic Cues that Lead the Way. INTECH Open Access Publisher,
2011. [Online]. Available: http://cdn.intechopen.com/pdfs-wm/14997.pdf
[13] J. Akita, T. Komatsu, K. Ito, T. Ono, and M. Okamoto, “CyArm: Haptic sensing
device for spatial localization on basis of exploration by arms,” Advances in Human-
Computer Interaction, vol. 2009, pp. 1–6, 2009.
[14] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.
[15] “The Sighted Wheelchair – successful first test drive of ‘sighted’ wheelchair (YouTube
video),” http://www.youtube.com/watch?v=eXMWpa4zYRY, 2011, accessed 2014-
02-24.
[22] M. Sakai, Y. Fukui, and N. Nakamura, “Effective output patterns for torque display
“GyroCube”,” in 13th International Conference on Artificial Reality and Telexis-
tence, 2003, pp. 160–165.