DOCTORAL THESIS

Haptic Navigation Aids for the Visually Impaired

Daniel Innala Ahlmark

Industrial Electronics
Division of EISLAB
Dept. of Computer Science, Electrical and Space Engineering
Luleå University of Technology
Luleå, Sweden

Supervisors:
Kalevi Hyyppä, Jan van Deventer, Ulrik Röijezon

European Union Structural Funds
Printed by Luleå University of Technology, Graphic Production 2016

ISSN 1402-1544
ISBN 978-91-7583-605-8 (print)
ISBN 978-91-7583-606-5 (pdf)
Luleå 2016
www.ltu.se
To my mother

Abstract

Assistive technologies have improved the situation in society for visually impaired individuals. The rapid development over the last few decades has made both work and education much more accessible. Despite this, moving about independently is still a major challenge, one that at worst can lead to isolation and a decreased quality of life.
To aid in the above task, devices exist to help avoid obstacles (notably the white cane),
and navigation aids such as accessible GPS devices. The white cane is the quintessential
aid and is much appreciated, but solutions trying to convey distance and direction to
obstacles further away have not made a big impact among the visually impaired. One
fundamental challenge is how to present such information non-visually. Sounds and
synthetic speech are typically utilised, but feedback through the sense of touch (haptics)
is also used, often in the form of vibrations. Haptic feedback is appealing because it
does not block or distort sounds from the environment that are important for non-visual
navigation. Additionally, touch is a natural channel for information about surrounding
objects, something the white cane so successfully utilises.
This doctoral thesis explores the question above by presenting the development and
evaluations of different types of haptic navigation aids. The goal has been to attain a
simple user experience that mimics that of the white cane. The idea is that a navigation
aid able to do this should have a fair chance of being successful on the market. The
evaluations of the developed prototypes have primarily been qualitative, focusing on
judging the feasibility of the developed solutions. They have been evaluated at a very
early stage, with visually impaired study participants.
Results from the evaluations indicate that haptic feedback can lead to solutions that
are both easy to understand and use. Since the evaluations were done at an early stage in
the development, the participants have also provided valuable feedback regarding design
and functionality. They have also noted many scenarios throughout their daily lives
where such navigation aids would be of use.
The thesis documents these results, together with ideas and thoughts that have emerged
and been tested during the development process. This information contributes to the
body of knowledge on different means of conveying information about surrounding ob-
jects non-visually.

Contents

Abstract v
Contents vii
Acknowledgements xi
Summary of Included Papers xiii
List of Figures xvii

Part I 1
Chapter 1 – Introduction 3
1.1 Overview – Five Years of Questions . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 The Beginning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Next Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 The Second Prototype . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.4 The LaserNavigator . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.5 Two Trials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.6 The Finish Line? . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Aims, Contributions and Delimitations . . . . . . . . . . . . . . . . . . . 10
1.3 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 2 – Background 13
2.1 Visual Impairments and Assistive Technologies . . . . . . . . . . . . . . . 13
2.1.1 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Perception, Proprioception and Haptics . . . . . . . . . . . . . . . . . . . 15
2.2.1 Spatial Perception . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.2 The Sense of Touch and Proprioception . . . . . . . . . . . . . . . 16
2.2.3 Haptic Feedback Technologies . . . . . . . . . . . . . . . . . . . . 17
Chapter 3 – Related Work 19
3.1 Navigation Aids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 GPS Devices and Smartphone Applications . . . . . . . . . . . . . 19

3.1.2 Devices Sensing the Surrounding Environment . . . . . . . . . . . 20
3.1.3 Sensory Substitution Systems . . . . . . . . . . . . . . . . . . . . 21
3.1.4 Prepared Environment Solutions . . . . . . . . . . . . . . . . . . . 23
3.1.5 Location Fingerprinting . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Scientific Studies Involving Visually Impaired Participants . . . . . . . . 24
Chapter 4 – The Virtual White Cane 27
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2.1 Haptic Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Field Trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 5 – LaserNavigator 33
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.3.1 Additional Features and Miscellaneous Notes . . . . . . . . . . . . 36
5.3.2 Manual Length Adjustment . . . . . . . . . . . . . . . . . . . . . 36
5.4 Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.4.1 Simple Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.4.2 Complex Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.5 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.6 Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Chapter 6 – Discussion 41
Chapter 7 – Conclusions 45
References 47

Part II 51
Paper A – Presentation of Spatial Information in Navigation Aids
for the Visually Impaired 53
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3 Non-visual Spatial Perception . . . . . . . . . . . . . . . . . . . . . . . . 57
4 Navigation Aids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.1 Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Auditory Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder 67
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3 The Virtual White Cane . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.2 Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.3 Dynamic Haptic Feedback . . . . . . . . . . . . . . . . . . . . . . 76
4 Field Trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Paper C – An Initial Field Trial of a Haptic Navigation System for
Persons with a Visual Impairment 83
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
1.1 Delimitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.2 Test Set-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.3 Field trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.4 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.5 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.1 Findings from the interviews . . . . . . . . . . . . . . . . . . . . . 90
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Paper D – A Haptic Navigation Aid for the Visually Impaired – Part
1: Indoor Evaluation of the LaserNavigator 97
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.2 Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.3 Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.4 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.5 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.1 Daniel’s Comments . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Paper E – A Haptic Navigation Aid for the Visually Impaired – Part
2: Outdoor Evaluation of the LaserNavigator 115
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

2.2 Trial Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.3 Observations And Interviews . . . . . . . . . . . . . . . . . . . . . 121
3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.2 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1 Daniel’s Comments . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Paper F – Developing a Laser Navigation Aid for Persons with Visual
Impairment 129
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
2 Navigation Aid Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3 Laser Navigators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
3.1 LaserNavigator Evaluations . . . . . . . . . . . . . . . . . . . . . 136
4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.1 Intuitive Navigation Aid . . . . . . . . . . . . . . . . . . . . . . . 141
4.2 Sensor Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.3 System Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4 Three Research Paths . . . . . . . . . . . . . . . . . . . . . . . . 143
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Acknowledgements

This doctoral thesis describes five years of work with navigation aids for visually impaired
individuals. The work has been carried out at the Department of Computer Science,
Electrical and Space Engineering at Luleå University of Technology. I wish to thank
Centrum för medicinsk teknik och fysik (CMTF) for financial support, provided through
the European Union.
The multidisciplinary nature of the project has allowed me to work with many differ-
ent people with diverse backgrounds. This has been a great catalyst for creativity, and
has made the work much more fun, interesting and meaningful.
First and foremost, I want to thank my principal supervisor Kalevi Hyyppä, whose
great skill, knowledge and creativity have been key assets for the project from start to
finish. For me, his ever-present support and assistance have been a large comfort in a
world that, to a new doctoral student, can at times be both harsh and confusing. I would
also like to thank my assistant supervisors: Håkan Fredriksson, Jan van Deventer and
Ulrik Röijezon. They have brought fresh views to the project and have helped make the
results both broader in scope and richer in detail.
Further, Maria Prellwitz, Jenny Röding and Lars Nyberg were instrumental in the
work with the first evaluation and its associated article; a great experience and learning
process. Maria has continued to aid the qualitative analysis process in the later evalua-
tions. I am grateful for that as the articles are far more interesting now than they would
otherwise have been.
I am also grateful to Mikael Larsmark, Henrik Mäkitaavola and Andreas Lindner for
their work on the LaserNavigator. Further, I would like to acknowledge the support of
teachers and other staff at the university who have helped me on the sometimes winding
path that started 11 years ago and now comes to an end in the form of this dissertation.

Thank you!

Luleå, May 2016


Daniel Innala Ahlmark

Summary of Included Papers
Paper A – Presentation of Spatial Information in Navigation
Aids for the Visually Impaired
Daniel Innala Ahlmark and Kalevi Hyyppä

Published in: Journal of Assistive Technologies, 9(3), 2015, pp. 174–181.

Purpose: The purpose of this article is to present some guidelines on how different
means of information presentation can be used when conveying spatial information non-
visually. The aim is to further the understanding of the qualities navigation aids for
visually impaired individuals should possess.
Design/methodology/approach: A background in non-visual spatial perception is
provided, and existing commercial and non-commercial navigation aids are examined
from a user interaction perspective, based on how individuals with a visual impairment
perceive and understand space.
Findings: The discussions on non-visual spatial perception and navigation aids lead to
some user interaction design suggestions.
Originality/value: This paper examines navigation aids from the perspective of non-
visual spatial perception. The presented design suggestions can serve as basic guidelines
for the design of such solutions.

Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder
Daniel Innala Ahlmark, Håkan Fredriksson and Kalevi Hyyppä

Published in: Proceedings of the 2013 Workshop on Advanced Robotics and its So-
cial Impacts, Tokyo, Japan.

In its current form, the white cane has been used by visually impaired people for al-
most a century. It is one of the most basic yet useful navigation aids, mainly because of
its simplicity and intuitive usage. For people who have a motion impairment in addition
to a visual one, requiring a wheelchair or a walker, the white cane is impractical, leading
to human assistance being a necessity. This paper presents the prototype of a virtual
white cane using a laser rangefinder to scan the environment and a haptic interface to present this information to the user. Using the virtual white cane, the user is able to “poke” at obstacles several meters ahead without physical contact with the obstacle.
By using a haptic interface, the interaction is very similar to how a regular white cane
is used. This paper also presents the results from an initial field trial conducted with six
people with a visual impairment.

Paper C – An Initial Field Trial of a Haptic Navigation System for Persons with a Visual Impairment

Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi Hyyppä

Published in: Journal of Assistive Technologies, 9(4), 2015, pp. 199–206.

Purpose: The purpose of the presented field trial was to describe conceptions of feasi-
bility of a haptic navigation system for persons with a visual impairment.
Design/methodology/approach: Six persons with a visual impairment who were
white cane users were tasked with traversing a predetermined route in a corridor en-
vironment using the haptic navigation system. To see whether white cane experience
translated to using the system, the participants received no prior training. The proce-
dures were video-recorded, and the participants were interviewed about their conceptions
of using the system. The interviews were analyzed using content analysis, where induc-
tively generated codes that emerged from the data were clustered together and formulated
into categories.
Findings: The participants quickly figured out how to use the system, and soon adopted
their own usage technique. Despite this, locating objects was difficult. The interviews
highlighted the desire to be able to feel at a distance, with several scenarios presented
to illustrate current problems. The participants noted that their previous white cane
experience helped, but that it nevertheless would take a lot of practice to master using
this system. The potential for the device to increase security in unfamiliar environments
was mentioned. Practical problems with the prototype were also discussed, notably the
lack of auditory feedback.
Originality/value: One novel aspect of this field trial is the way it was carried out.
Prior training was intentionally not provided, which means that the findings reflect im-
mediate user experiences. The findings confirm the value of being able to perceive things
beyond the range of the white cane; at the same time, the participants expressed concerns
about that ability. Another key feature is that the prototype should be seen as a navi-
gation aid rather than an obstacle avoidance device, despite the interaction similarities
with the white cane. As such, the intent is not to replace the white cane as a primary
means of detecting obstacles.

Paper D – A Haptic Navigation Aid for the Visually Impaired –
Part 1: Indoor Evaluation of the LaserNavigator
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan van
Deventer, Kalevi Hyyppä

To be submitted.

Navigation ability in individuals with a visual impairment is diminished as it is largely mediated by vision. Navigation aids based on technology have been developed for decades,
although to this day most of them have not reached a wide impact and use among the
visually impaired. This paper presents a first evaluation of the LaserNavigator, a newly
developed prototype built to work like a “virtual white cane” with an easily adjustable
length. This length is automatically set based on the distance from the user’s body to
the handheld LaserNavigator. The study participants went through three attempts at a
predetermined task carried out in an indoor makeshift room. The task was to locate a
randomly positioned door opening. During the task, the participants’ movements were
recorded both on video and by a motion capture system. After the trial, the partici-
pants were interviewed about their conceptions of usability of the device. Results from
observations and interviews show potential for this kind of device, but also highlight
many practical issues with the present prototype. The device helped in locating the door
opening, but it was too heavy and the idea of automatic length adjustment was difficult
to get used to with the short practice time provided. The participants also identified
scenarios where such a device would be useful.

Paper E – A Haptic Navigation Aid for the Visually Impaired – Part 2: Outdoor Evaluation of the LaserNavigator
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer, Kalevi Hyyppä

To be submitted.

Negotiating the outdoors can be a difficult challenge for individuals who are visually
impaired. The environment is dynamic, which at times can make even the familiar route
unfamiliar. This article presents the second of two evaluations of the LaserNavigator, a
newly developed prototype built to work like a “virtual white cane” with an easily ad-
justable length. The user can quickly adjust this length from a few metres up to 50 m.
The intended use of the device is as a navigation aid, helping with perceiving distant
landmarks needed to e.g. cross an open space and reach the right destination. This sec-
ond evaluation was carried out in an outdoor environment, with the same participants
who partook in the indoor study, described in part one of the series. The participants
used the LaserNavigator while walking a rectangular route among a cluster of buildings.
The walks were filmed, and after the trial the participants were interviewed about their

xv
conceptions of usability of the device. Results from observations and interviews show
that while the device is designed with the white cane in mind, one can learn to see the
device as something different. An example of this difference is that the LaserNavigator
enables keeping track of buildings on both sides of a street. The device was seen as most
useful in familiar environments, and in particular when crossing open spaces or walking
along e.g. a building or a fence. The prototype was too heavy, and all participants requested some feedback on how they were pointing the device, as they all had difficulties holding it horizontally.

Paper F – Developing a Laser Navigation Aid for Persons with Visual Impairment
Jan van Deventer, Daniel Innala Ahlmark, Kalevi Hyyppä

To be submitted.

This article presents the development of a new navigation aid for visually impaired per-
sons (VIPs) that uses a laser range finder and electronic proprioception to convey the
VIPs’ physical surroundings. It is denominated LaserNavigator. In addition to the tech-
nical contributions, an essential result is a set of reflections leading to what an “intuitive”
handheld navigation aid for VIPs could be. These reflections are influenced by field trials
in which VIPs have evaluated the LaserNavigator indoors and outdoors. The trials revealed technology-centric misconceptions regarding how VIPs use the device to sense the environment and how information about that physical environment should be provided back to the user. The set of reflections relies on a literature review of other navigation aids, which provides interesting insights into what is possible when combining different concepts.

List of Figures

1.1 The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder,
used in the first prototype. . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 A picture of the second prototype: the LaserNavigator. . . . . . . . . . . 8
3.1 The UltraCane, a white cane augmented with ultrasonic sensors and haptic
feedback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic
feedback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 This figure shows the virtual white cane on the MICA (Mobile Internet
Connected Assistant) wheelchair. . . . . . . . . . . . . . . . . . . . . . . 28
4.2 The Novint Falcon haptic display. . . . . . . . . . . . . . . . . . . . . . . 29
4.3 A simple environment (a) is scanned to produce data, plotted in (b). These
data are used to produce the model depicted in (c). . . . . . . . . . . . . 30
5.1 A picture of the latest version of the LaserNavigator. The primary com-
ponents are the laser rangefinder (1), the ultrasound sensor (2), the loud-
speaker (3), and the button under a spring (4) used in manual length
adjustment mode to adjust the “cane length”. . . . . . . . . . . . . . . . 34
5.2 Basic architecture diagram showing the various components of the Laser-
Navigator and how they communicate with each other. . . . . . . . . . . 35
B.1 The virtual white cane. This figure depicts the system currently set up on
the MICA wheelchair. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
B.2 The Novint Falcon, joystick and SICK LMS111. . . . . . . . . . . . . . . 73
B.3 The X3D scenegraph. This diagram shows the nodes of the scene and the
relationship among them. The transform (data) node is passed as a refer-
ence to the Python script (described below). Note that nodes containing
configuration information or lighting settings are omitted. . . . . . . . . . 74
B.4 The ith wall segment, internally composed of two triangles. . . . . . . . . 75
B.5 The virtual white cane as mounted on a movable table. The left hand
is used to steer the table while the right hand probes the environment
through the haptic interface. . . . . . . . . . . . . . . . . . . . . . . . . . 77
B.6 The virtual white cane in use. This is a screenshot of the application
depicting a corner of an office, with a door being slightly open. The user’s
“cane tip”, represented by the white sphere, is exploring this door. . . . . 79

C.1 The prototype navigation aid mounted on a movable table. The Novint
Falcon haptic interface is used with the right hand to feel where walls and
obstacles are located. The white sphere visible on the computer screen is a
representation of the position of the grip of the haptic interface. The grip
can be moved freely as long as the white sphere does not touch any obsta-
cle, at which point forces are generated to counteract further movement
“into” the obstacle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
D.1 A photo of the LaserNavigator, showing the laser rangefinder (1), ultra-
sound sensor (2) and the loudspeaker (3). . . . . . . . . . . . . . . . . . . 100
D.2 The two reflectors (spherical and cube corner) used alternately to improve
the body–device measurements. . . . . . . . . . . . . . . . . . . . . . . . 100
D.3 A picture of the makeshift room as viewed from outside the entrance door. 103
D.4 One of the researchers (Daniel) trying out the trial task. The entrance
door is visible in the figure. . . . . . . . . . . . . . . . . . . . . . . . . . 104
D.5 Movement tracks for each participant and attempt, obtained by the reflec-
tor markers on the sternum. The entrance door is marked by the point
labelled start, and the target door is the other point, door. Note that the
start point appears inside the room because the motion capture cameras
were unable to see part of the walk. Additionally, attempt 3 by partici-
pant B does not show the walk back to the entrance door due to a data
corruption issue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
D.6 This figure shows the three attempts of participant B, with the additional
red line indicating the position of the LaserNavigator. Note that attempt
3 is incomplete due to data corruption. . . . . . . . . . . . . . . . . . . . 107
E.1 A picture of the LaserNavigator, showing the laser rangefinder (1), the
ultrasound sensor (2), the loudspeaker (3), and the button under a spring
(4) used for adjusting the “cane length”. . . . . . . . . . . . . . . . . . . 119
E.2 The tactile model used by the participants to familiarise themselves with
the route. The route starts at (1) and is represented by a thread. Us-
ing the walls of buildings (B1) and (B2) as references, the participants
walked towards (2), where they found a few downward stairs lined by a
fence. Turning 90 degrees to the right and continuing, following the wall of
building (B2), the next point of interest was at (3). Here, another fence on
the right side could be used as a reference when taking the soft 90-degree
turn. The path from (3) to (6) is through an alley lined with sparsely
spaced trees. Along this path, the participants encountered the two simu-
lated crossings (4) and (5), in addition to the bus stop (B5). At (6) there
was a large snowdrift whose presence guided the participants into the next
90-degree turn. Building B4 was the cue to perform yet another turn, and
then walk straight back to the starting point (1), located just past the end
of (B3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

E.3 This figure shows three images captured from the videos. From left to
right, these were captured: just before reaching (6); just before (5), with
one of the makeshift traffic light poles visible on the right; between (3)
and (4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
F.1 Indoor evaluation. Motion capture cameras at the top with unique reflec-
tive identifier on chest, head, LaserNavigator and white cane. Door 3 is
closed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
F.2 Paths (black) taken by the three participants (one per row) over three
indoor trials. The red line shows how they used the LaserNavigator. . . . 138
F.3 Model of the outdoor trial environment. . . . . . . . . . . . . . . . . . . 140

Part I

Chapter 1
Introduction

“The only thing worse than being blind is having sight but no vision.”
Helen Keller

1.1 Overview – Five Years of Questions


This section presents my personal chronicle of events spanning from my master’s thesis
to this dissertation. The purpose is to give a light-weight introduction, and to highlight
the underlying thought process and steps that are often not visible in scientific writing.
Being my personal story, this section also serves to outline my own contributions to the
project that is really a true team effort.

1.1.1 The Beginning


What does a doctoral student do?
Some time during the later period of my computer science studies I found myself com-
pletely open to the idea of a post-degree continuation in research. Back then I only had
the general idea of what that meant, so questions such as the one above naturally formed
in my mind.
In engineering studies, you quickly encounter the idea of breaking down problems into
smaller pieces which when solved will allow you to solve the larger problem. This not only
allows you to tackle more manageable pieces one at a time, but also makes it possible to
distribute subproblems across a team of people. This is thanks to the hierarchical nature
of things.
So, what about research? A doctoral student does research in order to become an
independent researcher. I had started asking around, and the preceding answer was the
one I often received. Still, I was not satisfied; you do research, but what does that
mean exactly? To answer my original question, I now had to answer a subquestion. The
hierarchical nature of things shows up.


After some more inquiry I had a clearer perspective, and knew that doctoral studies
was something I would be interested in pursuing. I had figured out that the idea was to
work on some project, focussing on some very specific problem, solving it, and writing a
lot about it. Thus when I graduated from the master of science programme I thought I
had a pretty good idea what would be ahead. I did not.
While asking around at the department, I soon met my future principal supervisor,
and came to hear of a project that immediately sparked great interest in me. This
project was called the Sighted Wheelchair, and the idea was to enable visually impaired
individuals to drive a powered wheelchair using the sense of touch to detect obstacles.
At this time, the project had already started, and an initial proof-of-concept system had
been developed and was just about to be tested. The system scanned its surrounding
environment with a laser rangefinder and allowed the user to perceive these scans by
touch. My first connection to the project was as a tester of that prototype.
One day while walking through the campus corridors I passed by a team doing some-
thing curious – not an unusual encounter at a university. Then from behind me I heard
someone call “excuse me” after which followed a conversation ending with me enthusi-
astically saying something along the lines of “I would love to”. This was the first time
I encountered a haptic interface (the Novint Falcon), and the experience was amazing.
Here was a device through which you could experience computer generated content; a
device that was like a computer screen for the hand. The Falcon was originally mar-
keted as a gaming device, although it seemed not to cause the great excitement in that
market one might have initially expected. Shortly after my encounter with the Falcon,
in February of 2011, I joined the project as a research engineer with the task of further
developing the software.
With great eagerness I started looking for the pieces I needed for the software puzzle. That picturesque metaphor is hinting that yet again this was a case of subproblem management. The laser rangefinder would continually scan the environment, and this needed to be reflected both in a graphical model that was displayed on a screen, and in a haptic model that the user would probe with the Falcon. The biggest challenge was to find a good way to present a haptic model that would constantly be changing. A situation can happen where, as the user pushes the handle of the Falcon towards a dynamically changing object, the object changes in a way that leaves the probing position inside the object rather than on the outside surface. This is a known issue with these kinds of haptic interfaces, and is at its core a consequence of one intriguing idea: the user is using the same modality (i.e. touch) for both input (moving the handle) and output (having the handle pushed on by the device). The outcome is described in section 4.2.1.

Figure 1.1: The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder, used in the first prototype.


After a couple of months the first prototype was ready. A short video is available
online which shows the system in use [1]. Were we done? An issue had been identified,
and a solution had been presented. As a first prototype, there were naturally many
practical issues that would need to be dealt with before the system could be put in
production. Nevertheless, a solution to the original issue was presented. At this point I
started noticing one big difference between research problems and problems you might
encounter in an undergraduate textbook: you do not have the solutions manual. This is
obvious, for if you did, the problem simply would not be a new one. There is another
important consequence of this though: you usually do not have the solutions manual for
the subproblems you divide your problem into either. This leads to more questions, which in order to be answered inevitably lead to even more questions. It started occurring to
me just how deep one can look into a problem that at first glance seems very simple. At
the time I had only worked with this for a few months, but I would have years ahead to
look into the problems.
The initial prototype was completed, but was it any good? The scientifically apt
question we wanted to investigate was whether the user interaction was deemed feasible
by the users. More specifically, is haptics a feasible choice to present spatial information
non-visually? The Falcon made it possible to “hit” objects in a way that much resembles
how a white cane is used, and as such we thought it valuable to look into whether
experienced white cane users could use this system without much training. With these
questions in mind we decided to perform a qualitative evaluation with visually impaired
participants. Details about that event can be found in paper C.

1.1.2 Next Steps


The evaluation showed that haptics seemed indeed to be a good way to convey informa-
tion about one’s surroundings. After all, the white cane which is so ubiquitous among
visually impaired is a haptic device, albeit a very basic one. Its very basic nature also
makes it successful, as it is easy to learn, easy to use, and easy to trust.
The initial prototype had its drawbacks, most notably the fact that the majority of visually impaired individuals who would benefit from such a navigation aid do not use a wheelchair. This led us in the direction of a portable prototype, later
manifesting in the form of the LaserNavigator. While writing articles about the first
prototype and attending conferences (notably a robotics conference in Tokyo), I was also
involved in developing software for the next prototype.
The team working on the project had grown and changed, but the core ideas were
still the same: we wanted to make a portable device with a user interface retaining the
simplicity of the first prototype. Unfortunately for us, Newton’s third law makes creating
such an interface a challenge. For the user to feel a force (as was the case with the first
prototype), there has to be an equal but opposite force; the device cannot push your
hand back unless it has something to push against. In the case of the first prototype, the
whole system was mounted on a wheelchair, meaning the haptic robot could push on the
user’s hand by remaining stationary. Fortunately, such directed force feedback is not the
only way to provide haptic stimuli, but would any other kind be comparable, or only a
bad compromise? Also, a laser rangefinder able to automatically scan the room as used
on the wheelchair would be far too bulky for a portable device, which points to another
problem: how, and what, to measure? At this point, the number of new unresolved
issues had grown to the extent where they could easily make up another doctoral project
or two. It would seem that a major part of a doctoral student’s work is to pose new
and relevant questions. The hierarchical nature of things strikes again, making sure that
every completed puzzle is shattered – broken down into much smaller pieces than before.

1.1.3 The Second Prototype


To quickly test some ideas, we used a smartphone connected to an Arduino electronics
prototyping board. The phone’s built-in vibrator initially served as haptic feedback, and
the Arduino board was connected to a laser rangefinder, albeit not a scanning one, as a
distance measurement unit. Vibration actuators are not uncommon in navigation aids
(see e.g. Miniguide in section 3.1), but on the other hand ultrasound is typically used to
measure distances. The notion of using ultrasound in those cases is perfectly legitimate
given the purpose of many such devices is to alert the user to the presence and often
distance to nearby obstacles. Our core idea was a bit different, and makes ultrasound a
poor option.
Imagine taking a quick peek inside a room. This short glance already gives enough
information to be able to move about the room safely. Our brains have superb processing
abilities for this very purpose, making the task of moving about safely effortless. Without
vision, exploring a room needs to be facilitated through audition and touch, where the
auditory landscape (the soundscape) can provide the big picture while moving about and
feeling objects are the key to the details. The equivalent of the quick peek is a far more
laborious process of systematically moving about and building the mental view of the
room piece by piece. The white cane extends the reach of the arm, but at the cost of
surface details, though even with the cane the range is limited compared to the eyes.
The idea of providing a “longer cane” seemed a perfect fit for a laser rangefinder. The
unit we first used was able to measure distances up to 50 m, 12 times per second, with
a maximum error of about 10 cm at 10 m.
With a hardware platform ready, the next issue was how to use the vibration actuator
effectively. A very interesting stage of the project followed wherein I experimented with
different ideas. My goal was to find the way that felt most promising, so that we later
could perform another evaluation with target users.
In the typical case of this kind of vibration feedback, some parameter is varied de-
pending on the measured distance, the most commonly used ones being frequency or
burst frequency. One novel thing which made my experiments even more interesting was
yet another parameter in the equation: another distance measurement.
Because of the great range of the laser rangefinder (50 m), the system would have a
use outside in large open spaces, but how do we provide meaningful feedback for such
ranges while still retaining the ability to discriminate closer objects? Another physicist
comes to mind here, Heisenberg, as it seems we would have to choose one at the cost of the
other. Commercial navigation aids such as the UltraCane and MiniGuide (see 3.1) have a
button or switch to allow the user to set a maximum distance that the device is reacting
to (think “virtual cane length”), but we opted for a completely different approach.
On the device and facing the user, an ultrasound sensor is mounted. This continually
measures the distance from the device to the closest point on the user’s body. Instead of
using a button or switch to set a maximum distance, we could now use the body–device
measurement, which meant that the user could vary said length simply by moving the
arm closer to or further away from their body. A physical analogy would be a very long
telescopic white cane whose length would automatically be varied when the user moves
it further away from or closer to their body. This way, when the user wants to examine
close objects, they hold the device close, whereas if they want to detect distant buildings
for example, they would reach out far with the device.
Having this additional parameter raised the question of how to relate it to the “cane length”. A simple solution that turned out to be quite acceptable indoors is to multiply the body–device distance by 10, meaning that if the user holds the device 50 cm out from their body, they would have a 5 m long “virtual cane”.
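As a rough sketch of this mapping (the factor of 10 is taken from the text above; the function and variable names are my own illustration and not taken from the prototype software):

def virtual_cane_length(body_device_distance_m, scale=10.0):
    # Map the ultrasound-measured body-device distance to a "virtual cane length".
    # With scale = 10, holding the device 0.5 m from the body gives a 5 m cane.
    return scale * body_device_distance_m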
Having the body–device distance provides another interesting opportunity. Instead
of trying to convey the actual device–object distance with vibrations, we could let the
user infer that based on how far they held out their arm. Similarly, when hitting an
object with a white cane, the distance to the object is established by knowing the length
of the cane and how far away from the body it is held. This idea was intriguing, and
prompted me to look further into the way human beings know where their limbs are
without looking at them, known as proprioception.
The experiments led me to several alternatives which I found feasible. Those could be
divided into two categories: simple and complex feedback. In simple feedback mode, the
vibrations only signal the presence of an object, whereas complex mode tries to convey
the distance to said object as well. I personally prefer the elegance of simple feedback,
because it behaves very much like a physical cane. In this case, the distance to the object
is inferred from the length of the cane and how far out it is held.
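To make the distinction concrete, below is a minimal sketch of the two feedback categories, assuming the burst-frequency style of coding mentioned earlier; the numeric parameters are illustrative assumptions rather than the values used in the prototype.

def simple_feedback(laser_distance_m, cane_length_m):
    # Simple mode: signal only the presence of an object within the virtual cane.
    return laser_distance_m <= cane_length_m

def complex_feedback(laser_distance_m, cane_length_m,
                     min_period_s=0.05, max_period_s=0.5):
    # Complex mode: also encode the distance to the object, here as the period
    # between feedback bursts (a closer object gives faster bursts).
    # Returns None when nothing is within the virtual cane.
    if laser_distance_m > cane_length_m:
        return None
    fraction = laser_distance_m / cane_length_m  # 0 near the hand, 1 at the cane tip
    return min_period_s + fraction * (max_period_s - min_period_s)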

1.1.4 The LaserNavigator


The next step of the development process was to skip the phone and build something
custom. The phone added considerable weight to the system, and the control of the
vibrator was limited. With a lot of help from many people, we soon swapped the phone
for a custom microcontroller and vibration actuator. At this point there were some
issues to attend to: the update rate of 12 Hz for the laser felt too slow, and the spin-up
time of the vibrator was too significant. Fortunately, an updated laser rangefinder had
become available, featuring an update rate of 32 times per second. As for vibrations,
we attached a small loudspeaker on which the user places their finger. Speakers have an
insignificant reaction time, and the increased update rate of the laser provided a much
better experience.
I implemented the different feedback techniques I had previously developed, and
through testing concluded that I was still in favour of the simpler alternatives. My
justification for this is that simple haptic feedback is intuitive, something we are used to.
Voice guidance from GPS devices is similarly intuitive, as we can draw upon experi-
ences gained from communicating with fellow human beings. Note that simple feedback
techniques do not necessarily equate to a rich experience. Feedback can be made highly
complex, providing a lot more information, but at the cost of requiring much more train-
ing to use efficiently. If we accept a long training period, it may seem that a complex
solution is always better, but we need to look at other factors such as enjoyment. Is
the system fun to use? As a thought experiment, consider this art metaphor. Imagine
looking at a beautiful painting. Now, we can use a camera to capture that painting with
accurate colours and very high resolution. Then we could take this information and con-
vey it by a series of audio frequencies, corresponding to the colours. This way, we have
reproduced the painting, but it probably does not sound as beautiful as it looks. This is not because the camera is bad; rather, the impressions from our senses have evolved to be far more than the sensory information itself. Such a sensory substitution
device would make a beautiful painting accessible to someone who has never seen, but
there likely is better “music” out there.

1.1.5 Two Trials


In the late autumn of 2015, trial time was once again upon us. This time, we wanted to perform two trials: one initial indoor trial as a first feasibility check and a first opportunity for potential users to influence development, then a more elaborate outdoor trial to see how the LaserNavigator would work in a more practical setting.

Figure 1.2: A picture of the second prototype: the LaserNavigator.

The indoor task was finding doorways, and was carried out in the Field Robotics Lab (FROST Lab) at the department. A makeshift rectangular room with walls and doors was constructed inside the lab, and the participants had to find and walk to a randomly chosen open door. The task turned out to be more difficult and time-consuming
than we had expected, with more training needed than was provided. Additionally, we
received a lot of feedback regarding the LaserNavigator itself and its potential uses in ev-
eryday situations. One big decision made after that trial was to dismiss automatic length
adjustment in favour of a manual mode, controlled by a button. We noticed that the au-
tomatic mode was difficult to grasp, and a manual mode where the length is fixed during
use behaves more like a real cane, and should thus be easier to understand. Additionally,
the automatic mode leads to a compressed depth perception, which is manageable with
practice, but is not intuitive. The modified LaserNavigator has a button, which when
pressed will set the length based on the body–device distance. Additionally, a certain
number of feedback “ticks” are given to tell the user roughly how long the “virtual cane”
is.
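A minimal sketch of this manual mode, assuming the same factor of 10 as in the automatic mode and one confirmation tick per metre of cane length (both assumptions for illustration only):

def on_length_button_pressed(body_device_distance_m, scale=10.0, metres_per_tick=1.0):
    # A button press freezes the "virtual cane" length based on the current
    # body-device distance; a number of feedback ticks then tells the user
    # roughly how long the cane now is.
    cane_length_m = scale * body_device_distance_m
    ticks = max(1, round(cane_length_m / metres_per_tick))
    return cane_length_m, ticks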
With the improved LaserNavigator, it was soon time for the outdoor trial. The
participants who had performed the indoor trial partook in this new test, which consisted
of walking a closed path among a cluster of buildings on campus. Before the actual walks,
the participants got some time to familiarise themselves with the environment with the
help of a tactile model. All participants liked the changes made to the LaserNavigator,
and one participant in particular really enjoyed the experience and the ability to use a
“very long cane” to keep to the path. One aspect which I find intriguing surfaced during
this trial, namely the distinction between an obstacle avoidance device and a navigation
aid, and how these two kinds of devices are linked. This is something we have given a lot of thought to during the project. The distinction becomes blurred when one has access to great range, where some objects are used as a guiding reference and would not otherwise
be seen as an obstacle to go to and poke with the white cane. From both observations
and interviews from this latest trial, it appears that the participants went through this
kind of thought process. At first, the device was seen as a “new kind of white cane”. It
seems that the device is first seen as a white cane with mostly limitations, but is later
reinterpreted as a navigation aid, at which point possibilities surface. Given the design
choice of trying to mimic the interaction with a white cane, it is perhaps not surprising
that the participants thought of the device in terms of a cane, with implied limitations.
The fact that this familiarity can be utilised is encouraging as it can lead to an easier-
to-learn device. The challenge then is to go beyond the familiar concept and realise that
the device is something more – something different from a white cane.

1.1.6 The Finish Line?


The evaluations of the LaserNavigator mark the final part of my time as a doctoral student. When I started working on this project, it was like having a small eternity
ahead. In hindsight, it is easy to see just how small this “eternity” was, and it is time
to reflect on what has been accomplished. This project has contributed to the body of
knowledge concerning navigation aids from the perspective of potential users. At the
start of the project, when I scoured the scientific literature on this subject, I was left
with some concerns about the lack of answers to some basic questions, and the often not
so prominent user participation. During the years, we have obtained knowledge based
on very early trials, with potential target users. In particular, the final trial shows that
it is possible to mimic the interaction of a white cane, but use it for a different purpose.
The interviews have also given us many ideas of what constitutes a good navigation aid.
Finally, we can ask: “are we done yet?”
The hierarchical nature of things assures that there is always the next challenge to
tackle, and that the answer to the above question might never be “yes”. It is just like
athletics class at school when running around the oval track. I remember times when,
exhausted, I’d reached the finish line and heard, “and you thought you were done?”
Let us hope that in this case, the track is an inward spiral.

1.2 Aims, Contributions and Delimitations


The aim of the work described in this thesis was to further the understanding of how
spatial information should be presented non-visually. To investigate this, navigation aids
have been developed, and subsequently evaluated, with visually impaired individuals.
This can be formulated as the following research questions:

• How should spatial information be presented non-visually?

• What can feasibly be done with current haptic feedback technologies?

• What are users’ conceptions of such technologies?

The main contributions of this thesis are in the field of user interaction, more specif-
ically on the problem of how to convey spatial information non-visually, primarily to
visually impaired individuals. While this thesis focuses on navigation aids for the vi-
sually impaired, they are not the only group that benefits from this work. Non-visual
interaction focused towards navigation is of interest to e.g. firefighters as well, who can
end up in situations where they have to find their way around in a smoke-filled build-
ing. In addition, advances in non-visual interaction in general are useful for anyone on the move. Oulasvirta et al. [2] note that when mobile, cognitive resources are allocated to monitoring and reacting to contextual events, leading to interactions being done in short bursts with interruptions in between. In particular, vision is typically occupied with nav-
igation and obstacle avoidance (not to mention driving a car), thus using a mobile device
simultaneously may lead to accidents.
While the focus for this work has been on haptic solutions, other sensory channels
(notably audition) are also relevant for navigation aids. Haptics is an appealing choice
for the specific task of conveying object location information, and audition has important
drawbacks in this regard (see chapter 2 and paper A).
In numerous places throughout this text, both commercial and prototype navigation
aids are mentioned. These do not form an exhaustive list, but are chosen based on the
novelty they bring to the discussion, be it an interaction or functionality aspect.

1.3 Terminology
Terms regarding visual impairment, as well as disabilities in a more general sense, are many and subject to change over time. For example, the term handicapped might be offensive today, despite the fact that it originated as a replacement for
other terms. Throughout this text, visual impairment and visually impaired are used.
They refer specifically to the underlying problem, the impairment, and this may in turn
be the reason for a disability.
A further challenge is classifying degrees of visual impairment. Terms such as blind,
low vision, partially sighted and mobility vision are troublesome as they are not clearly
defined. Such definitions are not easily established even if objective eye measurements are
used. For this thesis, precise judgement of visual ability (acuity) is not important, but
the categorisation is. In a navigation context, the key piece of information is how vision
aids the navigation task. A person who can see some light has an advantage over a person
unable to see light, and a person able to discern close objects has further advantages.
Throughout this thesis and unless otherwise stated, visually impaired is used to denote an
individual or group of individuals with a disadvantage in a navigation context compared
to what is considered normal sight.

1.4 Thesis Structure


The thesis is organised as follows:

Part I
• Chapter 1 contains a personal chronicle of events, some notes on terminology, and
scope of the thesis.

• Chapter 2 provides a background on visual impairment and non-visual navigation. It also discusses the physiological systems relevant to this task, as well as haptic feedback technologies.

• Chapter 3 discusses non-visual spatial perception and existing navigation aids, both commercial and research prototypes.

• Chapter 4 describes the Virtual White Cane and the conducted evaluation.

• Chapter 5 is about the LaserNavigator and the two associated evaluations.

• Chapter 6 discusses results and the research questions formulated in this chapter.

• Chapter 7 concludes the first part of the thesis.

Part II
• Paper A discusses non-visual spatial perception in a navigation context, and pro-
poses some interaction design guidelines.

• Paper B describes the Virtual White Cane in more detail.

• Paper C is about the Virtual White Cane field trial.



• Paper D is the first part in a series of two about the LaserNavigator. This paper
focuses on the first indoor trial.

• Paper E is the second part regarding evaluating the LaserNavigator, this time in
an outdoor setting.

• Paper F presents a summary and reflections on the development and evaluation process for the entire project.

Chapter 2
Background

2.1 Visual Impairments and Assistive Technologies


Vision is a primary sense in many tasks, thus it comes as no surprise that losing it has a
large impact on an individual’s life. The World Health Organization (WHO) maintains a
so-called fact sheet containing estimates on the number of visually impaired individuals
and the nature of impairments. The October 2013 fact sheet [3] estimates the total number of people with any kind of visual impairment at 285 million, and that figure is
not likely to decrease as the world population gets older. Fortunately, WHO notes that
visual impairments as a result of infectious diseases are decreasing, and that as many as
80% of cases could be cured or avoided.
Thankfully, assistive technology has played and continues to play an important role in making sure
that visually impaired people are able to take part in society and live more indepen-
dently. Louis Braille brought reading to the visually impaired community, and a couple
of hundred years later people are using his system, together with synthetic speech and
screen magnification, to read web pages and write doctoral theses. Devices that talk
or make other sounds are abundant today, ranging from bank ATMs to thermometers,
egg timers and liquid level indicators to put on cups. Despite all of these successful
innovations, there is still no solution for independent navigation that has reached a wide
impact [4]. Such a solution would help visually impaired people move about indepen-
dently, which should improve the quality of life. A technological solution could either
replace or complement the age-old solution: the white cane.
It has likely been known a long time that poking at objects with a stick is a good idea.
The white cane, as it is known today, got its current appearance about a hundred years
ago, although canes of various forms have presumably been used for centuries. Visually
impaired individuals rely extensively on touch, and a cane is a natural extension of the
arm. It is easy to learn, easy to use, and if it breaks you immediately know it. These
characteristics have made sure that the cane has stood the test of time. Despite it being
close to perfect at what it does, notifying the user of close-by obstacles, the white cane is
also very limited. Because of its short range, it does not aid significantly in navigation.

2.1.1 Navigation
Navigating independently in unfamiliar environments is a challenge for visually impaired
individuals. The difficulty of going to new places independently might decrease partici-
pation in society and can have a negative impact on personal quality of life [5].
The degree to which this affects a certain individual is a very personal matter though.
Some are adventurous and quite successful in overcoming many challenges, while others
might not even wish to try. The point is that people who are visually impaired are at a
disadvantage to begin with.
The emphasis on unfamiliar environments is intentional, as it is possible to learn how
to negotiate well-known environments with confidence and security. Even so, the world
is a dynamic place, and some day the familiar environment might have changed in such
a way as to be unfamiliar. As an example, this happens in areas that have a lot of snow
during the winters.
Navigation is difficult without sight as the bulk of cues necessary for the task are
visual in nature. This is especially true outdoors, where useful landmarks include specific
buildings and street signs. Inside buildings there are a lot of landmarks that are available
without sight, such as the structure of the building (walls, corridors, floors), changes in
floor material and environmental sounds. Even so, if the building is unfamiliar, any signs
and maps that may be found inside are usually not accessible without sight.
There are two parts to the navigation problem: firstly, the current position needs to be
known; secondly, the way to go. There are various ways to identify the current position,
but one way to think is to view them as fingerprints. A location is identified by some
unique feature, such as a specific building nearby. Without sight, it is usually difficult to
tell a building from any other, and so other landmarks obtained through local exploration
may be necessary to establish the current location. The next problem, knowing where
to go, can then be described as knowing how to move through a series of locations to
reach the final location. This requires relating locations to one another in space. Vision
is excellent at doing this because of its long range. It is often possible to directly see the
next location. This is not possible without sight, at least not directly. The range of touch
is too limited, while sound, although having a much greater range, does not often provide
unique enough fingerprints of locations. The solution to this problem is to use one’s own
movements to relate locations to one another in space. Unfortunately, human beings are
not very good at determining their position solely based on their own movements [6].
Without vision to correct for this inaccuracy, visually impaired individuals must instead
have many identifiable locations close to each other. Consider the task of getting from a
certain building to another (visible) building. With sight there is usually no need to use
any intermediate steps between them. On the contrary, the same route without sight will
likely consist of multiple landmarks (typically intersections and turns). Additionally, a
means to avoid obstacles along the way is necessary.
The problem of obstacle avoidance is closely related to the navigation problem. Vision
solves both by being able to look at distant landmarks as well as close-by obstacles. The
white cane, on the other hand, is an obstacle avoidance device working at close proximity
to the user. An obstacle avoidance device which possesses a great reach could address
this issue, as well as aid in navigation. The prototypes presented in this thesis provide
an extended reach, limited only by the specifications of the range sensors.

2.2 Perception, Proprioception and Haptics


This section gives a brief introduction to the physiological systems that are relevant for
this work. Firstly, spatial perception is discussed in the context of navigation. Secondly,
the sense of touch is explained in the sections on proprioception and haptics.

2.2.1 Spatial Perception


Spatial perception, or more broadly spatial cognition, is concerned with how we perceive
and understand the space around us. This entails being able to gather information about
the immediate environment (e.g. seeing where objects are in a room), and organising
this into a mental representation, often referred to as a cognitive map.
Vision plays a major role in gathering spatial information. A single glance about a
room can provide all necessary knowledge about where objects are located as well as
many properties of said objects. Furthermore, thanks to the way vision has evolved, this
process is almost instantaneous and completely effortless.
For individuals not possessing sufficient vision to gather this information, spatial
knowledge can be very challenging to acquire. A natural question to pose is whether
blind individuals have a decreased spatial ability, as the primary means of gathering such
knowledge is diminished. This can fortunately be tested by using a knowledge acquisi-
tion phase not dependent on vision. Schmidt et al. [7] did this by verbally describing
environments to both sighted and blind individuals. The participants were then tested
on their knowledge of this environment. While the researchers found that the average
performance in the blind group was worse than for the sighted group, they also noticed
that those blind individuals who were more independent in their daily lives (walking
about by themselves) performed equally well to their sighted peers. This suggests that
the mental resources and tools for spatial perception are not inherently related to vision.
It also highlights the importance of spatial training for the visually impaired.
A hundred years ago, this kind of training was not usually provided. In fact, it was
even questioned whether blind people could perceive space at all. Lotze [8] expressed the
opinion that space is inherently a visual phenomenon incomprehensible without vision.
This extremely negative view was perhaps not as odd back then when blind people were
not encouraged to develop spatial skills, but is absurd today when we see blind people
walking about on the streets by themselves. The question still remains, how do they do
it?
In their article A Review of Haptic Spatial Abilities in the Blind [9], Morash et al.
give a more detailed historical account as well as an overview of contemporary studies.

To fill the gap that the missing vision creates, audition and touch are used. Audition is
surprisingly capable, and some people become very proficient using it (see e.g. [10]). Note
that audition is not only used to judge the position of objects that make sounds, but also
silent objects. This ability, often referred to as human echolocation, can provide some of
the missing information about the environment, a typical example being knowing where
nearby walls are.
The presence of a piece of large furniture in a room can be inferred from the way
its physical properties affect sounds in the room, but aside from its presence, there is
usually nothing auditory showing that it is a bookcase. Many things can be logically
inferred, but to get a direct experience it may be necessary to use the sense of touch.
While audition can provide some of the large-scale spatial knowledge, touch can give
the details about objects. Note that unlike vision, exploring a room by touch implies
walking around in the room, which means that the relationship among objects has to be
maintained by this self-movement.

2.2.2 The Sense of Touch and Proprioception


What is the sense of touch? The answer to that question is not so readily apparent
compared to vision or audition. What we colloquially refer to as touch are in fact many
types of stimuli, often coinciding. As an example, consider an object such as a mobile
phone. Tactile mechanoreceptors (specialised nerve endings) in the skin enable the feeling
of texture; the screen might feel smoother than the back. Thermoreceptors mediate the
feeling of temperature; the phone might feel warmer when its battery is being charged.
Proprioceptors, found mostly in muscles and joints, tell the brain where parts of the
body are located; by holding the phone you know how big it is, even without looking at
it, and you feel its weight. These and a few other receptors combine to create our sense
of touch.
Touch can provide much of the details that vision can, yet audition is incapable of.
Because of this, touch is key in such diverse tasks as reading braille and finding a door. In
particular, proprioception is crucial when walking with a white cane. The cane behaves
like an extended arm, and despite not touching objects directly, the proprioceptive and
auditory feedback provided is often enough to give a good characterisation of the surface.
Proprioception (from the Latin proprius meaning one’s own, and perception) is our
sense of where our body parts are and how they move. A detailed description, along with
a historical account, can be found in a review article by Proske and Gandevia [11].
Inside muscles, sensory receptors known as muscle spindles detect muscle length and
changes in the length of the muscle. This information, along with data obtained from
receptors in joints and tendons, is conveyed through the central nervous system (CNS)
to the brain where it is processed and perceived consciously or unconsciously. Similarly,
receptors in our inner ears (collectively known as the vestibular system) detect rotations
and accelerations of the head.
Another important aspect of touch is tactile sensation, mediated by nerve cells (notably
mechanoreceptors) spread throughout the skin. While proprioception can be used
to get a grasp of the position and size of an object, it is through those mechanoreceptors
one can perceive the texture and shape of the object. The next section discusses the
technologies that make use of these physiological systems: haptics.

2.2.3 Haptic Feedback Technologies


Haptic (from the Greek haptós) literally means ’to grasp something’, and the field of hap-
tic technology is often referred to simply as haptics. Incorporating haptics into products
is nothing new, yet for a long time throughout its brief history, such technologies were
only available in very specific applications. Early examples include conveying external
forces through springs and masses to aircraft pilots, or remotely operating a robotic arm
handling hazardous materials in a nuclear plant. A summary of the historical develop-
ment of haptic technologies can be found in a paper by Stone [12].
One of the most common encounters people have with haptic feedback today is with
their mobile phones demanding attention by vibrating. Such units are often electrome-
chanical systems, either unbalanced electric motors (known as Eccentric Rotating Mass
(ERM) actuators), or mass-and-spring systems (referred to as Linear Resonant Actuators
(LRAs)). For a summary and comparison of these and other types of vibration actuators,
see [13].
Haptic interfaces similar to that used by the Virtual White Cane (see papers B
and C) have found their place in surgical simulations (e.g. the Moog Simodont Den-
tal Trainer [14]), but should also be of interest to the visually impaired community. Such
interfaces work by letting the user move a handle around in a certain workspace volume,
with the interface able to exert forces on that handle depending on its position. This means
that it is possible to experience a three-dimensional model by tracing its contours with
the grip. Note that this mechanism provides true directional force feedback, as opposed
to just vibrating.
For the visually impaired, one key development was the refreshable braille display,
which did the equivalent transition of going from static paper to a dynamic computer
screen. Haptics has also been incorporated into navigation aids (see section 3.1), typically
in the form of vibration actuators to give some important alert such as the presence
of an obstacle or a direction change. One large advantage of using haptic feedback
in a navigation aid is that it does not interfere with the audio from the environment.
A drawback is that while complex information can be conveyed haptically, doing so
efficiently is difficult, and would likely require much training on behalf of the users.
While touch input in the form of touchscreens is now common, the corresponding haptic output
is still missing. Work on tactile displays is ongoing, and is at a stage where many
innovative ideas are tested (see e.g. [15, 16, 17]). When mature, this technology will be
very significant for the visually impaired, perhaps as significant as the Braille display.
Chapter 3
Related Work

3.1 Navigation Aids


Over the last decades, many attempts at creating a navigation aid for the visually
impaired have been made. These come in numerous different forms and functions, and are
alternatively known as electronic travel aids (ETAs) or orientation and mobility (ORM)
aids. While there have been many innovative ideas, no device to date has become as
ubiquitous as the white cane, most attaining only minor impact among the visually
impaired. A 2007 UK survey [4] conducted with 1428 visually impaired individuals
showed that only 2% of them used any kind of electronic travel aid, yet almost half (48%)
of the participants expressed that they had some difficulty going out by themselves.
Below is an overview of some navigation aids, both research prototypes and commer-
cial products.

3.1.1 GPS Devices and Smartphone Applications


The Global Positioning System (GPS) has since its military infancy reached widespread
public use. Thus it may not come as a surprise that much effort has been put into bringing
this technology to visually impaired individuals. Perhaps one of the most successful GPS
devices offering a completely non-visual interaction is the Trekker family of products by
Humanware (e.g. the Trekker Breeze [18]). The most basic use of such devices is when
the user simply walks about with the device turned on, whereby it will announce street
names, intersections and close-by points of interest by synthetic speech. Additionally,
typical turn-by-turn guidance behaviour is also possible, and some special functions are
provided. Examples include “Where am I?” that describes the user’s location relative
to close-by streets and points of interests such as shops, and a retrace function allowing
users to retrace their steps back to a known point in the route where they went astray.
The advent of accessible smartphones led to apps specifically designed for visually
impaired users. GPS applications such as Ariadne [19] are available, providing many of
the features of the Trekker family of products mentioned above. Another such solution
that has generated lots of attention recently is BlindSquare [20]. The surge of interest
in this app may be due to the fact that unlike Ariadne and typical GPS solutions,
BlindSquare uses crowdsourced data from OpenStreetMap and FourSquare. The use of
these services makes the app into a “Wikipedia for maps” where user contribution is key
to success. This overcomes one of the fundamental limitations of most GPS systems:
the use of static data. Additionally, BlindSquare is trying to overcome the limitations
of using GPS indoors by instead placing Bluetooth beacons with relevant information
throughout the building. The team has demonstrated this usage in a shopping centre, where
the beacons contain information about the stores and other relevant landmarks such as
escalators and elevators. BlindSquare and other similarly connected solutions have large
potential, but users must understand the implications of open data.

3.1.2 Devices Sensing the Surrounding Environment


As an alternative to relying on stored maps, devices can use sensors to acquire essential in-
formation about the environment surrounding the user. Challenges with such approaches
include what information should be collected and how, and also the manner in which said
information is presented to the user. Many such devices are designed to alert the user of
obstacles beyond the reach of the white cane.
Typically, sensing solutions utilise ultrasonic sensors to measure the distance to nearby
objects. Such devices come in the form of extensions to the white cane (e.g. Ultra-
Cane [21], figure 3.1) or standalone complementary units such as Miniguide [22], shown
in figure 3.2. Both of these have a selectable maximum range (up to 4 m for UltraCane
and 8 m for Miniguide) beyond which objects are not reported. Similarly, both devices
convey the distance by vibrating in short bursts whose frequencies vary with the mea-
sured distance. A major difference between the two is that UltraCane has two vibration
actuators as well as two ultrasound sensors, one measuring forwards while the purpose
of the other is to alert the user of obstacles at head-height. An important property of
ultrasound sensors is the beam spread, which may or may not be advantageous depending
on what is desired. They are excellent for alerting the user of present objects, but are a
poor choice if detailed information is desired. In such cases, optical sensors are a better
option.
Besides ultrasound, optical systems are used, albeit less frequently. One example
is Teletact [23], which uses a triangulation approach: a laser diode emits light that
reflects off obstacles and returns at an angle depending on the distance, which is detected by an array of
photodetectors. The distance is conveyed by a series of vibration actuators or by musical
tones. Advantages of optical sensors include accuracy and range. Another advantage is
the insignificant beam divergence, making it possible to determine directions precisely.
Another device in this category deserving a special mention is CyARM [24] – not
because of its sensor approach but instead the way feedback is handled. Instead of the
typical vibrations, CyARM has a wire connecting the device to the body of the user.
The tension of this wire can be controlled by the device, meaning that the user can feel

Figure 3.1: The UltraCane, a white cane augmented with ultrasonic sensors and haptic feedback.

the handle come to a stop as they try to move it “into” an obstacle, much like using a
white cane.

3.1.3 Sensory Substitution Systems


The brain has a remarkable way of adapting to accommodate new circumstances. This
ability, neuroplasticity, has one wondering how large these adaptations can be. In the
1970s, Bach-y-Rita [25] devised a tactile-visual sensory substitution system (TVSS) where
pictures taken by a video camera were transformed to “tactile images” displayed by a
matrix of vibration actuators worn by the user. Initial reports seemed very promising,
and more recently a similar system (except the actuators being placed on the tongue)

Figure 3.2: A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic feedback.

was commercialised as BrainPort [26]. Despite seemingly incredible results, we can pose
the question of why these solutions have not taken off. Lenay et al. [27] wrote on this:

“However, once the initial flush of enthusiasm has passed, it is legitimate


to ask why these devices, first developed in the 1960’s, have not passed into
general widespread use in the daily life of the blind community. Paradoxically,
an analysis of the possible reasons for this relative failure raises some of the
most interesting questions concerning these devices. One way of addressing
this question is to critically discuss the very term “sensory substitution”,
which carries with it the ambiguity, and even the illusory aspect, of the aim
of these techniques.” — Lenay et al. [27]

They further note that while sensory substitution is a good term from a publicity
and marketing perspective, it unfortunately is misleading in many ways. One issue the
authors raise is whether one can properly call it substitution. The term seems to imply
that a simple transformation of stimuli from one sense to another can bring with it all
the abilities and sensory awareness acquired from a lifetime’s experience with that sense.
The authors write:

“Certainly, these devices made it possible to carry out certain tasks which
would otherwise have been impossible for them. However, this was not the
fundamental desire which motivated the blind persons who lent themselves to
these experiments. A blind person can well find personal fulfilment irrespec-
tive of these tasks for which vision is necessary. What a blind person who
accepts to undergo the learning of a coupling device is really looking for, is
rather the sort of knowledge and experience that sighted persons tell him so
much about: the marvels of the visible world. What the blind person hopes
for, is the joy of this experiential domain which has hitherto remained beyond
his ken.” — Lenay et al. [27]

3.1.4 Prepared Environment Solutions

A much more complete navigation aid can be created if the environment in which the
user is supposed to navigate is involved in the process. The first impression of such
systems might be that they are proof of concept implementations rather than practical
solutions, but from a pervasive computing perspective where sensors and microcomputers
are ubiquitous, the idea becomes more plausible.
An early example of this kind of system is Drishti [28]. It uses a wearable computer
with speech input and output to communicate with the user, who can ask questions and
give commands concerning the environment in which the system is operating. The system
uses GPS to locate the user outdoors, and is connected to a server through a wireless
network. This server has a detailed geographical database on surrounding buildings and
other relevant navigation information. Drishti also works similarly indoors, but using
an ultrasound positioning system with beacons placed around the environment. This,
together with a detailed database, enable the system to be queried for the location of a
piece of furniture, for example.
Radio Frequency Identification (RFID) technology has also been utilised. An example
is the robot guide RG by Kulyukin et al. [29], which uses RFID tags scattered throughout
an indoor environment to guide the user. Of note is that in this case, true to its name,
the robot guides the user, and it does so using a potential field (PF) algorithm. Such
algorithms work by associating each known obstacle with a repulsive field, and the target
(goal) with an attractive field. The navigator, RG in this case, is then treated as a
particle in this field, affected by the repulsive and attractive forces.
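To illustrate the basic principle, a minimal two-dimensional potential field computation could look like the sketch below. This is only a generic illustration, with arbitrary gain constants and obstacle influence radius; it is not the implementation used by RG.

    #include <cmath>
    #include <vector>

    struct Vec2 { double x, y; };

    // Attractive force: pulls the navigator towards the goal, proportional
    // to the distance from the goal.
    Vec2 attractiveForce(const Vec2& pos, const Vec2& goal, double kAtt) {
        return { kAtt * (goal.x - pos.x), kAtt * (goal.y - pos.y) };
    }

    // Repulsive force from one obstacle, active only within the influence
    // radius d0 and growing as the obstacle gets closer.
    Vec2 repulsiveForce(const Vec2& pos, const Vec2& obst, double kRep, double d0) {
        double dx = pos.x - obst.x, dy = pos.y - obst.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d >= d0 || d < 1e-6) return { 0.0, 0.0 };
        double mag = kRep * (1.0 / d - 1.0 / d0) / (d * d * d);
        return { mag * dx, mag * dy };
    }

    // The navigator is treated as a particle moving along the sum of the
    // attractive and repulsive contributions.
    Vec2 totalForce(const Vec2& pos, const Vec2& goal,
                    const std::vector<Vec2>& obstacles) {
        Vec2 f = attractiveForce(pos, goal, 1.0);
        for (const Vec2& o : obstacles) {
            Vec2 r = repulsiveForce(pos, o, 100.0, 2.0);
            f.x += r.x;
            f.y += r.y;
        }
        return f;
    }
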
BlindSquare [20] deserves a mention in this category as well, since it supports Blue-
tooth beacons for indoor navigation. This takes the inverse approach compared to the
robot guide, in that the user is the one in charge.

3.1.5 Location Fingerprinting


Positioning systems are not the only way to determine location. An alternative approach
is location fingerprinting, which relies on the idea that a location can be inferred from
any data as long as these data are unique to that location.
An example of this approach is a system developed by Golding and Lesh [30], which
uses temperature, magnetic field, light and motion data to fingerprint locations. To work,
the system needs to be trained on locations, as it uses a machine learning algorithm to
classify locations. The authors claim a 2% error rate in location determination.

3.2 Scientific Studies Involving Visually Impaired Participants
Scientific studies need, by the very definition of proper scientific methodology, to be
critically evaluated. Every part of the process, from participant selection through study
design to data analyses, has the potential to bias the results. When wishing to determine
whether one device or procedure is superior to another, we try to isolate the process
under study from other factors that may affect the outcome. Ideally, participants should
have the same prior level of experience with the process under study, including all small
subprocesses that may be involved. This is practically infeasible, and as a remedy we
opt for large sample sizes from which we draw statistical conclusions instead of absolute
conclusions. Given an unbiased participant selection process and a flawless study design,
statistical results have the ability to highlight differences between e.g. a group of blind
people versus one of sighted people performing a task. This selection needs to be done
very carefully where visually impaired participants are concerned, as the sample sizes
are often small and very heterogeneous. The rest of this section describes two studies
highlighting the potential issues.
People are able to navigate based on two kinds of information: external information
(primarily visual), and internal information (proprioceptive and vestibular). Navigation
by using the internal sense of motion is known as path integration or dead reckoning,
and a natural question to ask is whether the path integration ability of congenitally
blind¹ people is reduced since there is no visual sense to correct for errors in the path
integration ability. Two studies investigating this are Rieser et al. [31] and Loomis
et al. [6]. Both studies compared performance on path integration tasks among three
participant groups: blindfolded sighted, adventitiously blind² and congenitally blind.
Quoting Rieser et al. [31]:

“The three groups performed similarly when asked to judge perspective while
imagining a new point of observation. However, locomoting to the new point
greatly facilitated the judgments of the sighted and late-blinded subjects, but
not those of the early-blinded subjects. The findings indicate that visual ex-
perience plays an important role in the development of sensitivity to changes
in perspective structure when walking without vision.” — Rieser et al. [31]

¹ Blind since birth.
² Having had better sight.

Loomis et al. [6] noted a different outcome:

“From the results of this task, we conclude that at least some of the congeni-
tally blind observers are able to update the location of known objects during
locomotion as well as blindfolded sighted observers.” — Loomis et al. [6]

They further discussed this discrepancy, noting that many studies recruit participants
through schools and agencies for the blind, where many adults are unable to travel
independently. This can lead to a biased selection wherein the participants have worse
mobility skills than average, thus negatively affecting the results of that group.
They also note that this could be the other way around:

“Although we too obtained our subjects largely with the assistance of the
Braille Institute, we sought subjects who were able to travel independently;
accordingly, our selection procedure may have been biased toward subjects
with better-than-average mobility skills.” — Loomis et al. [6]

The authors further argue that the mentioned issues with the selection process and
the fact that sample sizes often are small, means that research on the question of how
visual experience affects spatial ability is not as definitive as it may seem.
Chapter 4
The Virtual White Cane

The virtual white cane was the name given to the first prototype navigation aid
(figure 4.1). The licentiate thesis [32] and papers A, B and C are about the virtual white
cane. This section summarises that work.

4.1 Overview
The concept of the virtual white cane originated from previous research on an autonomous
powered wheelchair, MICA (Mobile Internet Connected Assistant) [33]. MICA was
equipped with a SICK LMS111 laser rangefinder [34] able to determine the distance to
every object in a horizontal plane in front of and slightly to the sides of the wheelchair.
Based on these measurements, MICA was programmed to drive autonomously without
hitting obstacles. From this, the idea of giving a visually impaired wheelchair user access
to this information and thus the ability to drive the wheelchair was born.
The laser rangefinder scanning the environment was already present. The main ques-
tion then was how to convey the range information to a visually impaired individual. This
was done with a Novint Falcon haptic interface [35]. A laptop gathers range information
from the rangefinder and builds a three-dimensional model which is then transmitted to
the haptic interface for presentation, as well as displayed graphically on the computer
screen. Figure 4.3 shows the transformation from scanned data to model.
The Novint Falcon (depicted in figure 4.2) is a haptic interface geared towards the
gaming audience. As such, it is an evolution from force-feedback joysticks that can
vibrate to signal certain events in a game. A haptic interface, on the other hand, can
simulate touching objects. This is accomplished by the user moving the handle (hereafter
referred to as the grip) of the device around in a limited volume known as the workspace
(in the case of the Falcon this is about 1 dm³). The haptic interface contains electric
motors which work to counteract the user’s movements, and can thus simulate the feeling
of bumping into a wall at a certain position in the workspace volume. The reason for
using a haptic interface is that it can provide an interaction that is very similar to that
of the white cane. The Novint Falcon was chosen specifically as it had sufficiently good


Figure 4.1: This figure shows the virtual white cane on the MICA (Mobile Internet Connected
Assistant) wheelchair.

specifications for the prototype, and was easily available at a low cost.
The SICK LMS111 is a laser rangefinder manufactured for industrial use. Using time-
of-flight technology (measuring flight times of reflected pulses) it is able to determine an
object’s position at a range of up to 20 metres and with an error of a few centimetres.
The unit uses a rotating mirror to obtain a field of view of 270°. Unfortunately, this field
of view is limited to a single plane (in our case chosen to be parallel to the floor), and
is thus not a fully three-dimensional scan. This has not been an issue for the current
prototype: in a controlled indoor environment there is no need to be able to feel at
different heights in order to navigate, since walls are vertical.

4.2 Software
There are many software libraries for haptics available. Our primary requirement was
that it must be able to handle a rapidly changing dynamic model, which is not the case for
all available libraries. We ended up choosing H3D API, developed by SenseGraphics [36],
as it came with a lot of functionality we needed out of the box. H3D is also open-source,
and can easily be extended with Python scripts or C++ libraries.
The biggest challenge related to haptics was to overcome a phenomenon known as
haptic fall-through. Haptic interfaces such as the Falcon act as both input and output

Figure 4.2: The Novint Falcon haptic display.

devices at the same time. While the device has to at all times figure out, based on grip
position, what kind of force to apply, the motions and forces the user exerts on the grip
can be used to affect the displayed objects. At any instant in time, a situation may
arise where the user pushes the grip, and the system determines that the grip is now
behind the surface that was being felt, thus not sending any force to the grip. To work
around this issue, haptic rendering settings were carefully chosen, as explained in the
next section.

4.2.1 Haptic Rendering


Rendering is the process of going from a model of what is to be displayed, to the actual
output. In the case of visual displays, the job of the rendering algorithm is to decide
which pixels to turn on. There are multiple ways of doing this, as is the case for haptics.
Broadly, haptic rendering approaches can be classified as either volume or surface meth-
ods. Volume rendering is used for volumetric source data such as medical scans, whereas
surface methods are used to render surface models. For the Virtual White Cane, the
surfaces of obstacles and walls are the important aspects, thus we chose a surface-based
method.
At any given time, the haptic rendering algorithm has to decide which force vector, if
any, to send to the haptic interface. The most straightforward solution to this problem is
often referred to as the god-object renderer. Consider an infinitely small object (the god-
object) that is present in a virtual scene, and let the position of this object be the same
as that of the haptic grip. Now, as the haptic grip is moved, the software gets notified
about this movement, and can update the position of the god-object accordingly. This
happens continually, until the god-object hits a surface. If the haptic grip is moved
into the surface, the god-object stays on the surface, and the position difference between

Figure 4.3: A simple environment (a) is scanned to produce data, plotted in (b). These data
are used to produce the model depicted in (c).

it and the grip is calculated. The resulting distance is used in a formula analogous to
Hooke’s law for springs to calculate a force. This force is applied in order to return the
grip’s position to that of the god-object.
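A minimal sketch of this force computation is given below. The vector type, function name and stiffness value are illustrative assumptions, not code taken from the H3D API.

    #include <array>

    using Vec3 = std::array<double, 3>;

    // Spring-like restoring force (the Hooke's law analogue): pull the grip
    // back towards the god-object resting on the touched surface.
    Vec3 restoringForce(const Vec3& godObjectPos, const Vec3& gripPos,
                        double stiffness /* e.g. in N/m */) {
        Vec3 force{};
        for (int i = 0; i < 3; ++i) {
            // Displacement of the grip "into" the surface, scaled by the stiffness.
            force[i] = stiffness * (godObjectPos[i] - gripPos[i]);
        }
        return force;  // sent to the haptic interface every rendering cycle
    }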

The god-object renderer described above is efficient and easy to implement, but suffers
from some problems. If the model being touched has tiny holes in it, the god-object, being
a single point, would fall through the surface. Even if there are no holes in the model, the
problem of haptic fall-through is not uncommon. To address this, one can extend the god-
object idea to an actual object, having physical dimensions. The Ruspini renderer [37]
is such an approach, where the object is a sphere of variable size. The Ruspini renderer
solves most of the fall-through problems, but is not as easy to implement nor as processor-
efficient as the god-object renderer.

For a more in-depth explanation of haptic rendering, see the article Haptic Rendering:
Introductory Concepts by Salisbury et al. [38].

4.3 Field Trial


When the prototype was working satisfactorily, we wanted to get feedback from potential
users on the feasibility of this kind of interaction, as well as input for future development
of the system. To get this feedback, we performed a field trial which is fully described
in paper C. This field trial was different from typical ones in that we wanted to get
immediate feedback from people who had never used the system before. The hypothesis
was that white cane users should be able to figure out this system easily, and the best
way to test this hypothesis was to have white cane users try the system. We had six
white cane users use the Virtual White Cane to navigate a predetermined route, and
interviewed them about their conceptions afterwards. The trial procedures were also
video-recorded for later observation.
Based on how quickly the six participants familiarised themselves with the system,
we concluded that spatial information can feasibly be conveyed using a haptic interface.
The participants had no difficulties understanding how the system worked, and expressed
no worries about trying it out. Later, during the interviews, they confirmed that they
thought their regular white cane experience helped them use the Virtual White Cane.
Despite this ease of use, actual range perception was very difficult. The participants
had trouble gauging the position of objects they felt, which led to difficulties negotiating
doorways and keeping straight in corridors. There are many possible reasons for this,
but it is important to remember that the participants did not receive any training in
using the system. The mental translation from the small haptic model to the physical
surroundings needs practice, but it may be possible to ease this learning process by
figuring out optimal model settings.
Chapter 5
LaserNavigator

This chapter presents the second prototype, dubbed the LaserNavigator (figure 5.1).

5.1 Overview
The idea behind this prototype came about after the successful evaluation of the Virtual
White Cane. The aim was to create a device that retains as much of the intuitive haptic
feedback as possible, while not relying on a wheelchair, thus broadening the potential
user base.
The intended use of the LaserNavigator is as a complement to the standard white
cane. The device is used to get an idea of one's surroundings when needed. As an example,
imagine crossing a large open space. Unless there is some clearly discernible feature of
the ground to follow, the white cane will not be of much use. There may be a lamppost
somewhere that could help moving in the proper direction, but it can be easy to miss if
just walking with no references. With the LaserNavigator, one can keep track of where
that lamppost or other significant landmarks are located, be it at 1 or 50 metres.
The user interface of the LaserNavigator is designed with the white cane in mind.
When aiming the device at an object, the device measures the distance to it, and may or
may not alert the user of its presence through vibrations. Whether to do so is determined
by a number representing the length of an imagined cane pointing towards the object.
This value is obtained by measuring the distance from the device to the closest point
on the user’s body. The effect is that the user can vary the length of their “virtual
cane” by moving the device away from or towards their body. The advantages of doing
this compared to simply conveying the measured distance directly are many. Presenting
distances in a large range such as up to 50 m accurately through vibrations would require
a complex feedback mode needing much user training. Additionally, a cane of such
length would almost always be in contact with something, leading to a constant stream
of feedback that can be seen as obtrusive.


Figure 5.1: A picture of the latest version of the LaserNavigator. The primary components are
the laser rangefinder (1), the ultrasound sensor (2), the loudspeaker (3), and the button under
a spring (4) used in manual length adjustment mode to adjust the “cane length”.

5.2 Hardware
The hardware architecture of the LaserNavigator is depicted in figure 5.2. The core of
the system is an LPC1765¹ microcontroller from NXP Semiconductors [39]. Mounted on
the main board are also a Bluetooth module and an inertial measurement unit (IMU).
To measure distances, an SF02/F laser rangefinder from Lightware Optoelectronics [40]
is used, connected to the main board through serial UART. The SF02/F has a range of
up to 50 m, while still able to take up to 32 measurements per second. A further benefit
to using a laser rangefinder is the low beam divergence, making it possible to accurately
detect edges of objects.
For haptic feedback, a small loudspeaker is used, connected to a simple digital switch
circuit driven by one of the general purpose input/output (GPIO) pins on the LPC1765.
The reason for using a speaker instead of a conventional vibration actuator lies in the
response time. As the user scans their surroundings with the LaserNavigator, it is crucial
that it react quickly, which the typical actuators do not. The two most common types
of electromechanical units (eccentric rotating mass (ERM) and linear resonant actuator
(LRA)) have a response time of several tens of milliseconds [41]. While there are much
faster options using either piezoelectric or electrochemical principles, they require much
more complex electronics to drive than their electromechanical counterparts. A loud-
speaker, on the other hand, is easy to drive and also has a quick response time (< 1 ms).
The drawback (or advantage depending on application) is the generated sound, but when
¹ The board features an ARM Cortex-M3 processor running at 100 MHz.

[Figure 5.2 is a block diagram: the microcontroller unit (MCU) is connected to the two range finders, the 3D gyros and accelerometers, and the vibrator, and communicates with a smartphone through a Bluetooth module; a USB connection is used for programming, and the system is powered by a battery. The links use UART, SPI, ADC and PWM interfaces.]

Figure 5.2: Basic architecture diagram showing the various components of the LaserNavigator
and how they communicate with each other.

a finger is held on the speaker membrane the sound is muffled significantly.


To measure the body–device distance, a PING))) ultrasonic sensor is utilised [42].
Here, in contrast to the device–object distance, the larger beam divergence is not a
problem, as the purpose is only to measure the distance to the user’s body.
After the initial indoor trial (see paper D), a small tactile button was attached to
allow setting the “cane length” (see section 5.3.2 below).
Finally, the device can be powered either by a USB connection or by two 3 V CR123A
lithium batteries connected in series.

5.3 Software
The software has three primary functions: to read data from the laser and ultrasound
sensors; to process these data, and to provide haptic feedback through the loudspeaker.
The application is a mixture of C and C++ code, and is organised in three layers of
abstraction. The lowest layer consists of C code to initialise the microcontroller and set
up all peripherals. On top of that is the hardware abstraction layer (HAL), which exposes
the underlying hardware through C++ functions organised in files based on peripheral
(e.g. vibrator, bluetooth, laser). Finally, the top layer comprises the main application,
primarily managed through two functions: app_init (runs once at startup) and app_loop
(runs continually).
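As an illustration of this structure, the top application layer could look roughly like the sketch below. The names app_init and app_loop follow the description above, while the three HAL functions are stubs invented for the example, and the feedback decision is simplified (the actual logic is described in section 5.5).

    // Hypothetical HAL stubs, standing in for the real per-peripheral functions.
    int  laser_read_cm()      { return 0; }  // device-object distance D, in cm
    int  ultrasound_read_cm() { return 0; }  // body-device distance d, in cm
    void vibrator_burst()     {}             // short burst on the loudspeaker

    void app_init() {
        // Runs once at startup: peripheral initialisation through the HAL goes here.
    }

    void app_loop() {
        // Runs continually: read the sensors and decide whether to give feedback.
        int D = laser_read_cm();
        int d = ultrasound_read_cm();
        int l = 10 * d;           // indoor "cane length" (scale factor 10, section 5.5)
        if (D <= l) {
            vibrator_burst();     // the "virtual cane" has reached an object
        }
    }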

5.3.1 Additional Features and Miscellaneous Notes


Some additional features and notes:

• On starting the device, the current battery voltage is conveyed as a series of short
vibration bursts. For example, 5.2 V would be conveyed with five bursts with short
pauses in between, followed by a longer interruption, and then two short bursts
again.

• The device can enter a stand-by mode where no vibrations are emitted. This
happens automatically when the device has been stationary for a few seconds.
This is accomplished using data from the on-board accelerometer. Every time a
new value is obtained, it is pushed onto a double-ended queue (deque) of fixed size.
To determine whether to enter or exit stand-by mode, the standard deviation of
the deque is checked against a predetermined threshold (a sketch of this check is given after this list).

• It is possible to switch from indoor mode (scale factor 10) to outdoor mode (scale
factor 50) without reprogramming the device. The mode is switched when the
device is in stand-by mode and both the laser and ultrasound measurements are
less than 10 cm. Thus, with the device stationary, one can put both hands close
to the range sensors and trigger the mode change. The device signals the entered
mode by short Morse codes emitted through the loudspeaker.

• Abrupt variations in the ultrasound measurement can occur, but during normal
operation they will not change that rapidly. A new measurement is skipped if it
differs by more than 60 cm from the previous measurement.
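A sketch of the stand-by check mentioned in the list above follows. The window size and the threshold value are illustrative assumptions, not the values used in the device.

    #include <cmath>
    #include <cstddef>
    #include <deque>

    static std::deque<double> window;            // recent accelerometer values
    static const std::size_t kWindowSize = 32;   // assumed window length
    static const double kStillThreshold = 0.02;  // assumed std-dev threshold

    // Push a new accelerometer value and return true if the device appears
    // stationary, i.e. stand-by mode should be entered.
    bool updateStandby(double accelValue) {
        window.push_back(accelValue);
        if (window.size() > kWindowSize) window.pop_front();

        double mean = 0.0;
        for (double v : window) mean += v;
        mean /= window.size();

        double var = 0.0;
        for (double v : window) var += (v - mean) * (v - mean);
        var /= window.size();

        // A low standard deviation over the window means little movement.
        return std::sqrt(var) < kStillThreshold;
    }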

5.3.2 Manual Length Adjustment


Based on feedback from the first indoor trial (see paper D) we added a small tactile
button on the handle of the device. This allows the separation of length adjustment and
regular use.
In this version, the length is fixed during regular use, and when the user wishes to
change it, the button is pushed and held down. When this occurs, the length is set as
before, based on the body–device distance. Additionally, the haptic feedback serves a
new function while in the length adjustment mode. Periodically, the speaker emits a
series of bursts, whose number tells the user the approximate length which would be set
upon releasing the button. This means that the user can hold down the button, and
move the device back and forth until a desired length is found (conveyed as a certain
number of bursts), then release the button and continue using the device normally.
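A sketch of this behaviour is given below. The function names, the outdoor scale factor 50 (see section 5.5) and the one-burst-per-metre resolution are assumptions made for illustration only.

    void emitBursts(int n) { /* placeholder: n short bursts on the loudspeaker */ }

    static int caneLengthCm = 500;   // the fixed "cane length" used during regular operation
    static const int kScale = 50;    // outdoor scale factor

    // Called repeatedly while the button is held down: the candidate length
    // follows the body-device distance, and its approximate value is conveyed.
    void whileLengthButtonHeld(int bodyDeviceDistanceCm) {
        int candidate = kScale * bodyDeviceDistanceCm;
        emitBursts(candidate / 100);   // e.g. one burst per metre of candidate length
    }

    // Called once when the button is released: the length is fixed at that value.
    void onLengthButtonReleased(int bodyDeviceDistanceCm) {
        caneLengthCm = kScale * bodyDeviceDistanceCm;
    }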

5.4 Haptic Feedback


The hardware of the LaserNavigator enables at least two completely different approaches
to conveying a distance: it can either be done through vibrations or inferred through the
body–device distance (proprioception).

5.4.1 Simple Feedback


The mode using the body–device distance is simple as the only task left to the haptic
feedback is to signal when the user has hit an object. Imagine hitting something with the
white cane. The feedback of hitting the object together with the knowledge of the length
of the cane and the awareness of how far the arm has reached combine into knowledge of
how far away the object is. An additional thing the LaserNavigator does is increase this
“cane length” proportionally to the body–device distance, giving the possibility to have
a “very long cane” when the arm is extended, while retaining a “short cane” at close
proximity.
A drawback with simple feedback is apparent when considering the following scenario.
Imagine using the LaserNavigator in this mode to determine how far away another person
is. This would be done by pointing the device in the person’s direction, and then extend-
ing the arm outward from the body until haptic feedback is received. At that point, the
distance to the person can be inferred from the body–device distance. If, however, the
person were to start moving straight towards the user, the vibrations will continue indif-
ferently, and the user would have to actively track the vibrating/non-vibrating threshold
to follow the movement of the person. In contrast, if the person were moving away, the
vibrations would cease, signalling the need to extend the arm further. One solution to
this issue is to have two levels of feedback: one when an object is reached, and another
when the “virtual cane tip” is too far inside or behind the object. Thus if the person in
the above scenario moves towards the user, a shift in feedback would signal the need to
pull the arm back towards the body. This added level of feedback is referred to as the
threshold zone in the algorithmic section below.

5.4.2 Complex Feedback


A different approach, one which is often used in commercial navigation aids, is to let
the haptic feedback signal the currently measured distance in some way, typically by
varying the burst frequency. This feedback is complex in the sense that the haptic
feedback both conveys the presence of and distance to an object simultaneously. Note
that proprioception typically still plays the role of conveying direction.
We have experimented with this kind of feedback, but have not used it during the
evaluations. Complex feedback has the drawback of requiring more training, and one of
the goals of the LaserNavigator is that it should be easy to learn and require minimal
effort to use.
While there is no question that complex feedback can be effective after extensive
training, we have to think in terms of the users’ needs and desires. During the outdoor
trials, one participant put it this way: “The technology has to exist for me, not the other
way around.”
While such complex feedback is typical, a relevant question to ask is why this is the
case. It may be an obvious, almost instinctive choice, as this kind of feedback exists in
vehicle reverse warning systems, for example.

5.5 Algorithms
Following is an algorithmic description of the different approaches outlined above. In the
text below, the operator := is used to indicate variable assignment.
In the main program file, besides app_init and app_loop, there is also a function
called sensorcallback. Recent sensor values are accessed and filtered here, and the
values from the range sensors are converted to cm. In the following text, the relevant
variables are d (ultrasound measurement) and D (laser measurement).
In sensorcallback, d is filtered to remove the occasionally occurring very large value
(> 300). This is accomplished by only accepting a new value d if |d − dp | < 60, where dp
is the previous value.
This also means that in automatic length adjustment mode, if the device is angled
so the body isn’t in view of the ultrasound sensor, those measurements will usually be
ignored. Thus, the device can still be used properly in those cases.
The next task is to calculate the “cane length” (l) based on d. For the evaluations,
this was done using l := kd where k is a constant with a value of 10 indoors and 50
outdoors. Previously, we also tried letting k be different linear functions of d, notably
k := (d/5 + 3) indoors. The effect of a linear function is that l grows roughly as d²,
enabling e.g. finer depth-wise details to be perceived at shorter distances. With the
example function above, the device held 10 cm from the body would lead to l = 50, and
held at 60 cm, l = 900.
In the simple feedback mode, the “cane length” l is very important, as it is the
parameter needed to determine the distance to an object. In the complex feedback
mode, however, it is only a way of filtering what is detected, as the distance is conveyed
haptically. In that mode, a cane that is “too long” can still be used to detect close-by
objects. It is important to note that when the hand is moved closer to an object, not only
does the “cane length” increase but the laser distance also decreases. This leads
to a compressed depth perception, which can be avoided by not adjusting the length
continually. While being stationary, an invariant distance is d + c + D where c is the
distance between the laser and ultrasound sensors.
In the case of simple feedback with a threshold zone, a further task is to establish
how large (a distance interval) this zone should be. Let T denote the zone size, expressed
in cm. The more intense feedback occurs close to the point where the length (l) and
laser distance (D) are equal, i.e. when |D − l| < T . A constant T works well when k
is a constant, but if it is a linear function, the zone will feel differently depending on d,
because of the compressed depth perception mentioned previously. To solve this, T can
also be a linear function of d.
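Put together, simple feedback with a threshold zone can be sketched as follows, with constant k and T as used in the evaluations; the two feedback functions are placeholders invented for the example.

    #include <cstdlib>

    void emitIntenseFeedback() { /* placeholder: feedback inside the threshold zone */ }
    void emitNormalFeedback()  { /* placeholder: feedback when well inside the object */ }

    // d: body-device distance (cm), D: laser distance (cm),
    // k: scale factor (10 indoors, 50 outdoors), T: threshold zone size (cm).
    void updateSimpleFeedback(int d, int D, int k, int T) {
        int l = k * d;                  // "cane length"
        if (std::abs(D - l) < T) {
            emitIntenseFeedback();      // close to the point where l and D coincide
        } else if (D < l) {
            emitNormalFeedback();       // the "cane tip" is well inside the object
        }
        // otherwise the object is out of reach and no feedback is given
    }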

For complex feedback, the next task is to determine the burst frequency, which is
expressed as time between bursts (tb) in the program. This value can reasonably range
from a few ms to a few hundred ms. If the burst frequency should be an indication of
absolute distance regardless of other settings, tb should be a function of D only. Simply
letting tb := D is a good start at short ranges, but it is difficult to discern small differences.
Thus, one might want a function where the derivative, t′b(D), is larger at closer distances,
and decreases slightly with distance². To accomplish this, a function composed of two
linear segments was tested: a steeper one up to 50 cm, and then one with a smaller slope.
That is,
$$t_b(D) := \begin{cases} aD, & \text{if } D < 50 \\ bD + m, & \text{otherwise} \end{cases} \qquad (5.1)$$
where a > b, and m is chosen based on a and b so that the lines intersect at D = 50.
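A sketch of equation (5.1) is shown below. The slope values a and b are illustrative assumptions (only a > b matters), and m is computed so that the two segments meet at D = 50.

    // Time between bursts (ms) as a function of the laser distance D (cm).
    double burstInterval(double D) {
        const double a = 2.0;                 // assumed steeper slope for D < 50 cm
        const double b = 0.5;                 // assumed gentler slope for larger D
        const double m = (a - b) * 50.0;      // makes the two segments intersect at D = 50
        return (D < 50.0) ? a * D : b * D + m;
    }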

5.6 Evaluations
Two evaluations of the LaserNavigator were performed in order to determine the fea-
sibility of the device, and get general user feedback. The same three blind individuals
participated in both evaluations.
The first evaluation (see paper D) was carried out in a prepared indoor environment,
where the participants got the task of finding doorways. For that evaluation, the Laser-
Navigator used automatic length adjustment, and we tested simple feedback both with
and without a threshold zone. The results showed that the participants were able to de-
tect the doorways with the LaserNavigator, but the device required more training than
we anticipated. Practical issues were also identified, and one notable change made to the
device after the trial was to introduce manual length adjustment mode. The participants
also identified scenarios where they would find such a navigation aid useful.
The second evaluation (see paper E) was performed outdoors, where the participants
walked a predetermined route. This time, the LaserNavigator was set to manual length
adjustment and simple feedback with no threshold zone. All participants thought that
the LaserNavigator had improved since the indoor trial. They were able to use the device
confidently while following the walls of buildings and walking straight, but needed a lot
of instructions during turns and other more challenging situations. They expressed the
need for more information than the device provided, but also noted that it might be
useful in familiar environments.

² This follows the same principle as when setting l, where finer details at close proximity are given
priority.
Chapter 6

Discussion

It should now be easy to dismiss the idea that blind individuals cannot have a working
spatial model. Still of interest, however, are the reasons behind such ideas. A couple of
hundred years ago, the societal views on blindness were different, and that in itself has
likely been an important factor. Today, assistive technologies have made it possible for
blind people to contribute to society in far more ways, yet at least as important are the
more positive views. Considering spatial perception, we know that this ability does not
somehow just magically manifest, but is developed, like most other abilities. Seen in this
light, we can ask how, and if, blind people develop this ability. It is important that
such training is provided, something that was likely not the case centuries ago. Those who are
blind or have a visual impairment affecting their ability to travel independently should
be encouraged and assisted in developing their basic mobility skills. Navigation aids can
be of great help, but are not a substitute for core mobility skills. Barriers to independent
travel come not only from missing navigation information but can also stem from an
inherent insecurity. Developing core mobility skills is likely to have a large impact on
this. Then, navigation aids can be used to add knowledge that the missing vision has
made inaccessible.
What is this knowledge, and how should it be presented? This is one of the primary
research questions of interest for this dissertation. One of the participants in one of our
evaluations said that when you walk outside without sight “there are no references”.
These “references” are one type of knowledge that can be of great help. Such knowledge
answers the question “where am I?” in terms of nearby points of interest. Another piece
of information difficult to access without sight is the nature of buildings and objects
around. The white cane can be used to poke at the walls of a building, but what
building is it? GPS devices can help with this, in addition to the challenge of how to get
to a certain place. Further, one needs a means of avoiding obstacles along the way. This
is typically handled by the white cane and/or a guide dog.
The prototypes presented in this dissertation have been attempts at providing the
missing “references”. A scenario illustrating the need for this is when walking across an
open space, with the intention of reaching a landmark such as a lamppost on the other side. The cane may not be of much use in that case, and neither would a GPS. The open
space also means that there are not many, if any, useful auditory cues. Someone skilled
in acoustic echolocation could hear the presence of the lamppost if close, but walking
straight across an open space without any references is a difficult task.
The second research question asks what can feasibly be done with current technologies.
While there are many options for haptic feedback, the natural kind of direct force feedback
used e.g. for the Virtual White Cane would be very difficult to provide in a portable
device. That said, it is not impossible. The CyArm (see [24]) does this, but requires
one part of the system to be attached to the user, with a string extending out to the
handheld part of the device. Even so, haptic feedback technologies today cannot match
the richness experienced when using the white cane. Photography captures this kind of richness well in the visual domain, but what about the haptic domain? Haptography is apparently not just a made-up
word [43], and could be of interest to future navigation aids.
During the outdoor trial of the LaserNavigator, one researcher was always beside the
participant and gave any instructions and information needed. One participant remarked
that it was precisely this information that needed to be provided. Indeed, a good navigation aid is a
sighted human being. They can see the world and give relevant guidance and information,
adapted to the specific individual being guided. A promising technological alternative is
image processing. Smartphone apps such as TapTapSee [44] show promise,
but the information given about an image is nowhere near detailed enough to serve as
navigation instructions, if it is even correct.
The third research question is about users’ conceptions. First off, it bears mentioning
that the evaluations with potential users of navigation aids have been very valuable
in many ways. Observing how the participants are able to complete the task is one
aspect, but assessing task performance and similar quantitative measures has not been the objective of the evaluations. We wanted to know users’ conceptions of the feasibility of
the devices, and where such devices would help them in their daily lives. These questions
would have been impossible to explore, had we opted to evaluate the system with e.g.
blindfolded sighted subjects, as is sometimes done.
The evaluation of the Virtual White Cane showed the promise of such a device. The
participants found the experience interesting and fun, and easily understood how to use
the haptic interface to probe their environment. The evaluations of the LaserNavigator
showed again that the interaction is promising, and the participants were affected by
their familiarity with the white cane. This was intentional, but brought with it a new
kind of challenge. Despite the white cane-like interaction, the device is not a white cane
replacement, nor is it just “a very long cane”. The key to using the LaserNavigator as
intended is to think outside the white cane metaphor. An example of this is the ability
to keep track of two walls, one on each side, at once. Another is the idea of using as a landmark any object, even one that would not normally be of relevance to a white cane user.
The evaluations also showed many practical problems. In particular, the LaserNavi-
gator was too heavy and it was difficult to hold horizontally. Neither of these concerns should be difficult to address. Another, more fundamental concern raised by the par-
ticipants was the lack of rich feedback. “How do I know if I am detecting a tree or a lamppost?” was a question that cropped up. With the white cane, this distinction is easy to make,
as hitting the object will produce a characteristic sound. Additionally, dragging the
cane tip across the object would allow one to get a feeling for the object’s texture. The
LaserNavigator can do neither of these things. If thought of as a “very long white cane”
these are certainly major drawbacks, but when the device is seen as a navigation aid,
these become less of an issue. The participants thought the LaserNavigator would be
most useful in familiar environments, and in that case the familiarity would help with
identifying whatever is felt with the device. In less familiar environments, the device may
still provide some security as one can use it to keep track of one’s own location, assisting
the inherent path integration ability by expanding the small circle of detection offered
by the white cane alone.
Chapter 7
Conclusions

In an area where most solution attempts do not go very far, and those that do have
not made a big impact, basic studies are important. The experiments with the Virtual
White Cane and the LaserNavigator have shed light on many aspects that were clouded
or even invisible, ranging from theoretical conundrums to practical issues.
The Virtual White Cane showed the feasibility of a haptic interface conveying infor-
mation about nearby objects. The development and evaluations of the LaserNavigator
further showed that a handheld device doing the same is possible, albeit practically
much more challenging to develop. The LaserNavigator takes a step beyond the cur-
rently available navigation aids by allowing a much greater detection range and by providing the possibility to examine the shapes of objects within that range in considerable detail. The
development of the LaserNavigator has led to a patent application [45].
The following summarises the work by answering the research questions posed in the
introduction.

• How should spatial information be presented non-visually? Auditory and haptic interfaces have their respective strengths and weaknesses. Audio should be used with caution so as not to block environmental sounds or be too obtrusive. Haptics have major advantages when conveying the location of nearby objects.

• What can feasibly be done with current haptic feedback technologies? Knowledge of the distance and direction to nearby objects can be conveyed through haptics in such a way that the interaction benefits from the experience gained through the sense of touch.

• What are users’ conceptions of such technologies? Users are able to draw
upon their experiences (in our case white cane use), making the interaction some-
what familiar. They are able to perceive objects, but using the technology effec-
tively requires practice.

There is always more work to be done. To improve the state of navigation aids,
there are many aspects that need to be studied further, across many different fields of research. More groundwork on non-visual spatial perception is needed to allow for better interaction design decisions. Technology research then has to figure out how to
implement these design ideas in a practical way. Implementations need to be evaluated
by potential users, which is especially important when most researchers are not potential
users themselves. Finally, there are social, economic and cultural issues that need to be addressed so that users will want to use the system and are given the possibility
to do so.
References

[1] “The sighted wheelchair - successful first test drive of ”sighted” wheelchair (YouTube
video),” http://www.youtube.com/watch?v=eXMWpa4zYRY, 2011, accessed 2014-
02-24.

[2] A. Oulasvirta, S. Tamminen, V. Roto, and J. Kuorelahti, “Interaction in 4-second bursts: the fragmented nature of attentional resources in mobile hci,” in Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 2005, pp. 919–928.

[3] World Health Organization, “Fact sheet, n282,” http://www.who.int/mediacentre/factsheets/fs282/en/, 2014, accessed 2016-03-21.

[4] T. Pey, F. Nzegwu, and G. Dooley, “Functionality and the needs of blind and partially sighted adults in the uk: a survey,” Reading, UK: The Guide Dogs for the Blind Association, 2007.

[5] D. M. Brouwer, G. Sadlo, K. Winding, and M. I. G. Hanneman, “Limitation in mobility: Experiences of visually impaired older people,” British Journal of Occupational Therapy, vol. 71, no. 10, pp. 414–421, 2008.

[6] J. M. Loomis, R. L. Klatzky, R. G. Golledge, J. G. Cicinelli, J. W. Pellegrino, and P. A. Fry, “Nonvisual navigation by blind and sighted: assessment of path integration ability,” Journal of Experimental Psychology, vol. 122, no. 1, pp. 73–91, 1993.

[7] S. Schmidt, C. Tinti, M. Fantino, I. C. Mammarella, and C. Cornoldi, “Spatial representations in blind people: The role of strategies and mobility skills,” Acta Psychologica, vol. 142, no. 1, pp. 43–50, 2013.

[8] H. Lotze, Metaphysic: in three books, ontology, cosmology, and psychology. Clarendon Press, 1884, vol. 3.

[9] V. Morash, A. E. Connell Pensky, A. U. Alfaro, and A. McKerracher, “A review of haptic spatial abilities in the blind,” Spatial Cognition and Computation, vol. 12, no. 2-3, pp. 83–95, 2012.

[10] “World Access for the Blind,” http://www.worldaccessfortheblind.org/, 2016, accessed 2016-03-21.

[11] U. Proske and S. C. Gandevia, “The proprioceptive senses: their roles in signaling
body shape, body position and movement, and muscle force,” Physiological reviews,
vol. 92, no. 4, pp. 1651–1697, 2012.

[12] R. J. Stone, “Haptic feedback: A brief history from telepresence to virtual reality,”
in Haptic Human-Computer Interaction. Springer, 2001, pp. 1–16.

[13] V. Huotari, “Tactile Actuator Technologies,” http://www.uta.fi/sis/tie/him/schedule/Vesa Huotari presentation.pdf, accessed 2016-05-03.

[14] Moog, “Simodont Dental Trainer,” http://www.moog.com/markets/medical-dental-simulation/haptic-technology-in-the-moog-simodont-dental-trainer/, accessed 2016-04-07.

[15] J. Rantala, K. Myllymaa, R. Raisamo, J. Lylykangas, V. Surakka, P. Shull, and M. Cutkosky, “Presenting spatial tactile messages with a hand-held device,” in IEEE World Haptics Conf. (WHC), Jun. 2011, pp. 101–106.

[16] A. Yamamoto, S. Nagasawa, H. Yamamoto, and T. Higuchi, “Electrostatic tactile display with thin film slider and its application to tactile telepresentation systems,” IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 2, pp. 168–177, Mar–Apr 2006.

[17] R. Velazquez and S. Gutierrez, “New test structure for tactile display using laterally driven tactors,” in Instrumentation and Measurement Technol. Conf. Proc., May 2008, pp. 1381–1386.

[18] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld talking gps.html, accessed 2016-03-21.

[19] L. Ciaffoni, “Ariadne GPS,” http://www.ariadnegps.eu/, accessed 2016-03-21.

[20] “BlindSquare,” http://blindsquare.com/, 2016, accessed 2016-03-21.

[21] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,” http://www.ultracane.com/, accessed 2016-03-21.

[22] GDP Research, “The miniguide mobility aid,” http://www.gdp-research.com.au/minig 1.htm, accessed 2016-03-21.

[23] R. Farcy and Y. Bellik, “Locomotion assistance for the blind,” in Universal Access
and Assistive Technology. Springer, 2002, pp. 277–284.

[24] J. Akita, T. Komatsu, K. Ito, T. Ono, and M. Okamoto, “Cyarm: haptic sensing
device for spatial localization on basis of exploration by arms,” Advances in Human-
Computer Interaction, vol. 2009, p. 6, 2009.
[25] C. C. Collins and P. Bach-y Rita, “Transmission of pictorial information through the skin,” Advances in biological and medical physics, vol. 14, pp. 285–315, 1973.
[26] P. Bach-y Rita, M. E. Tyler, and K. A. Kaczmarek, “Seeing with the brain,” Inter-
national journal of human-computer interaction, vol. 15, no. 2, pp. 285–295, 2003.
[27] C. Lenay, O. Gapenne, S. Hanneton, C. Marque, and C. Genouëlle, “Sensory sub-
stitution: limits and perspectives,” Touching for knowing, pp. 275–292, 2003.
[28] L. Ran, S. Helal, and S. Moore, “Drishti: an integrated indoor/outdoor blind nav-
igation system and service,” in Pervasive Computing and Communications, 2004.
PerCom 2004. Proceedings of the Second IEEE Annual Conference on. IEEE, 2004,
pp. 23–30.
[29] V. Kulyukin, C. Gharpure, J. Nicholson, and S. Pavithran, “Rfid in robot-
assisted indoor navigation for the visually impaired,” in Intelligent Robots and Sys-
tems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on,
vol. 2. IEEE, 2004, pp. 1979–1984.
[30] A. R. Golding and N. Lesh, “Indoor navigation using a diverse set of cheap, wearable
sensors,” in Wearable Computers, 1999. Digest of Papers. The Third International
Symposium on. IEEE, 1999, pp. 29–36.
[31] J. J. Rieser, D. A. Guth, and E. W. Hill, “Sensitivity to perspective structure while
walking without vision,” Perception, vol. 15, no. 2, pp. 173–188, 1986.
[32] D. Innala Ahlmark, “The development of a virtual white cane using a haptic inter-
face,” Licentiate thesis, Luleå University of Technology, 2014.
[33] S. Rönnbäck, J. Piekkari, K. Hyyppä, L. Haakapää, V. Kammunen, and S. Koskinen,
“Mica - mobile internet connected assistant,” in First International Conference on
Lifestyle, Health and Technolgy, Luleå University of Technology, June 2005.
[34] SICK AG, “Laser measurement systems of the lms100 product family,”
https://www.sick.com/media/dox/3/03/403/online help Laser Measurement
Systems of the LMS100 Product Family en IM0031403.PDF, November 2010,
accessed 2016-05-04.
[35] Novint Technologies Inc, “Novint Falcon,” http://www.novint.com/index.php/
novintfalcon, accessed 2014-02-24.
[36] SenseGraphics AB, “Open source haptics - H3D.org,” http://www.h3dapi.org/, ac-
cessed 2014-02-24.
[37] D. C. Ruspini, K. Kolarov, and O. Khatib, “The haptic display of complex graphical
environments,” in Proc. 24th Annu. Conf. Computer Graphics and Interactive Tech-
niques. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997,
pp. 345–352.
[38] K. Salisbury, F. Conti, and F. Barbagli, “Haptic rendering: introductory concepts,” Computer Graphics and Applications, IEEE, vol. 24, no. 2, pp. 24–32, March 2004.

[39] NXP Semiconductors, “LPC1700 32-bit MCUs,” http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/lpc-cortex-m-mcus/lpc-cortex-m3-mcus/lpc1300-cortex-m3-mcus:MC 1403790745385, accessed 2016-05-03.

[40] Lightware Optoelectronics, “SF02/F (50 m),” http://www.lightware.co.za/shop/en/drone-altimeters/7-sf02f.html, accessed 2016-04-29.

[41] E. Siegel, “Haptics technology: picking up good vibrations,” EETimes, http://www.eetimes.com/document.asp?doc id=1278948, July 24, 2011, accessed 2016-04-05.

[42] Parallax Inc., “PING))) Ultrasonic Distance Sensor,” https://www.parallax.com/product/28015, accessed 2016-05-02.

[43] K. J. Kuchenbecker, J. Romano, and W. McMahan, “Haptography: Capturing and recreating the rich feel of real surfaces,” in Robotics Research. Springer, 2011, pp. 245–260.

[44] “TapTapSee - Blind and Visually Impaired Camera,” http://www.taptapseeapp.com/, accessed 2016-05-02.

[45] K. Hyyppä, “Device and methods thereof,” Patent application, Swedish Patent and
Registration Office, 1 650 541-4, April 22, 2016.
Part II

Paper A
Presentation of Spatial Information
in Navigation Aids for the Visually
Impaired

Authors:
Daniel Innala Ahlmark and Kalevi Hyyppä

Reformatted version of paper originally published in:


Journal of Assistive Technologies, 9(3), 2015, pp. 174–181.


© 2015, Emerald Group Publishing Limited. Reprinted with permission.

Presentation of Spatial Information in Navigation
Aids for the Visually Impaired

Daniel Innala Ahlmark and Kalevi Hyyppä

Abstract

Purpose: The purpose of this article is to present some guidelines on how different
means of information presentation can be used when conveying spatial information non-
visually. The aim is to further the understanding of the qualities navigation aids for
visually impaired individuals should possess.
Design/methodology/approach: A background in non-visual spatial perception is
provided, and existing commercial and non-commercial navigation aids are examined
from a user interaction perspective, based on how individuals with a visual impairment
perceive and understand space.
Findings: The discussions on non-visual spatial perception and navigation aids lead to
some user interaction design suggestions.
Originality/value: This paper examines navigation aids from the perspective of non-
visual spatial perception. The presented design suggestions can serve as basic guidelines
for the design of such solutions.

1 Introduction
Assistive technology has made it possible for people with a visual impairment to navi-
gate the web, but negotiating unfamiliar physical environments independently is often a
major challenge. Much of the information that provides a sense of location (e.g. signs,
maps, buildings and other landmarks) is visual in nature, and thus is not available
to many visually impaired individuals. Often, a white cane is used to avoid obstacles,
and to aid in finding and following the kinds of landmarks that are useful to the visually
impaired. Examples of these include kerbs, lampposts, walls, and changes in ground
material. Additionally, environmental sounds provide a sense of context, and the taps
from the cane can be useful as the short sound pulses emitted enable limited acoustic
echolocation. The cane is easy to use and trust due to its simplicity, but it is only able
to convey information about obstacles at close proximity. This restricted reach does not
significantly aid navigation, as that task is more dependent on knowledge about things
farther away, such as doors in a hallway or buildings and roads. One of the authors
has long personal experience of the navigation problem as he has been visually impaired
(Leber’s congenital amaurosis) since birth.
Many technological navigation aids—also known as electronic travel aids (ETAs)—
have been developed and produced, but they have not been widely adopted by the visually impaired community. In order for a product to succeed, the benefit it provides must
outweigh the effort and risks involved in using it. The latter factor is of critical importance
in a system whose job it is to guide the user reliably through a world filled with potentially
dangerous hazards.
A major challenge faced when designing a navigation aid is how to present spatial
information by non-visual means. Positioning systems and range sensors can provide
the needed information, but care must be taken in presenting it to the user. Firstly,
there is no easy sensory translation from the highly-spatial visual sense, and secondly,
the interaction should be as intuitive as possible. This not only minimises training times
and risks, but also increases comfort and security.
The purpose of this article is to review the literature on navigation aids, focusing on
the issues of user interaction. The goal is to further the understanding of the qualities
navigation aids should possess, and possibly shed light on the reasons for the weak adop-
tion of past and present solutions. To accomplish this, several solutions are presented
and discussed based on the interaction modes. There are many solutions not mentioned
herein; solutions that employ similar means of interaction to the ones presented were ex-
cluded. To aid in the discussion, some background information on how space is perceived
non-visually is also presented. The focus for this article is on the technological aspects,
but for technology adoption the socio-economical and cultural aspects are equally im-
portant. While the visually impaired are the main target users, non-visual navigation
and obstacle avoidance solutions can be of use to sighted individuals, for instance to
firefighters operating in smoke-filled buildings.
Section 2 describes the literature selection process. Section 3 contains some back-
ground information on non-visual spatial perception. This, together with section 4, which examines some commercial and prototype navigation aids, serves as background to the dis-
cussion in section 5. Lastly, section 6 concludes the paper with some guidelines on how
different modes of interaction should be utilised.

2 Methods

Database searches were made to find relevant literature. Scopus, Google Scholar and
Web of Science were primarily used, with keywords such as navigation aids, visually
impaired, assistive technology, haptic, audio, speech, blind and user interaction. Articles
were then selected based on user interaction. The goal was to have articles representing
novel uses of different interaction modes, thus many articles presenting similar solutions
were excluded. The purpose was to have literature supporting the later discussion, rather
than presenting a comprehensive overview.
As an example, the search string “navigation aid” AND “visually impaired” yielded
41 unique articles in Scopus. Of those, 39

3 Non-visual Spatial Perception


The interaction design of a navigation aid should be based on how individuals with
a visual impairment perceive and understand the space around them. A reasonable
question to ask is whether spatial ability is diminished in people with severe vision loss.
It is not illogical to assume that the lack of eyesight would have a negative impact on
spatial ability, as neither sounds nor touch can mimic the reach and accuracy of vision. It
is therefore noteworthy that a recent review by Morash et al. [1] on this subject concluded
that, on the contrary, the spatial ability of visually impaired individuals is not inferior to
that of sighted persons, although it works differently. Another recent study by Schmidt
et al. [2], concluded that the mental imagery created from spatial descriptions can convey
an equally well-working spatial model for visually impaired individuals. A particularly
interesting insight this study provides is that while many blind participants performed
worse at the task, those whose performance was equal to that of sighted persons were
more independent, and were thus more used to encountering spatial challenges. This
suggests that sight loss per se does not hamper spatial ability; that in fact this ability
can be trained to the level of sighted individuals.
Even though spatial understanding does not seem to pose a problem, a fundamental
issue is how to effectively convey such understanding using other senses than sight. The
review by Morash et al. [1] concentrates on haptic (touch) spatial perception, presenting
several historical arguments on the inferiority of this modality. It has been argued that
a prominent problem with haptic spatial perception is the fact that it is an inherently
sequential process. When exploring a room by touch, one has to focus on each object in
turn. The conclusion was that touch cannot accurately convey the spatial relationships
among objects, compared to vision where a single glance encompasses a larger scene with
multiple objects. The problems with this argument, as noted in the review, are evident
if considering the vastly different “fields of view” provided by touch and vision. When
a braille letter (composed of multiple raised dots) is read, it is not a sequential process.
There is no need to consciously feel each dot and then elaborately map out the relative
positions of those in the mind. Touch is only sequential when considering objects that are
too large for its “field of view”, just as vision is sequential when the scene is too large for
a single glance to contain. In fact, at the higher level of unconscious sensory processing,
vision has been shown to be sequential even for a single scene. When looking at a scene,
the eyes focus on each object in turn, albeit very rapidly and unconsciously [3]. With
vision, the scene is constructed in a “top down” manner, whereas a haptic explorer must
build the scene “bottom up” by relating each object to others as they are discovered.
Besides touch, spatial audio is used extensively by visually impaired individuals. The
sounds from the environment help with getting the big picture, and can also aid in
localisation [4]. Even smells that are specific to a particular place can add a small piece
to the spatial puzzle. Audio is perhaps the closest substitute to vision in that it provides
both an understanding of what is making the sound, and where it is emanating from.
Unfortunately, the localisation aspect is not that accurate, and a navigation system
employing spatial sounds to represent obstacles has to overcome the challenge of user fatigue. Multiple sound sources making noise all the time can be both distracting and
tiring. Also, the real environmental sounds should not be blocked out or distorted [5].
The way visually impaired people perceive and understand the space around them
should be taken into account when designing navigation aids. The next section describes
some commercial and non-commercial navigation aids that utilise haptics and/or audio.

4 Navigation Aids
Electronic travel aids come in numerous shapes and sizes ranging from small wearable and
hand-held devices designed to accomplish a very specific thing, to complex multi-sensor
and multi-interface devices. For the purpose of this article, the devices presented below
are grouped based on how they communicate with the user. An important distinction to
keep in mind is that some devices use positioning (such as GPS) while others are obstacle
avoidance devices sensing the environment. These two kinds of devices complement
each other perfectly, as obstacle avoidance devices do not give travel directions, and
positioning devices (typically based on GPS) rely on stored map data that can provide
travel instructions, but need to be kept up to date. Further, the GPS system does not
work indoors and cannot by itself give precise movement directions relative to the user’s
current orientation. GPS devices can overcome the latter limitation by incorporating a
magnetometer or through utilising the user’s direction of motion.
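As a rough illustration of how the direction of motion can stand in for a compass, the sketch below estimates the current heading from two consecutive position fixes and converts the bearing to the next waypoint into a relative turn. The bearing formula is the standard initial-bearing calculation between two coordinates; the function names and the example coordinates are our own and purely illustrative.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees clockwise from north) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def turn_toward(prev_fix, curr_fix, waypoint):
    """Relative turn in degrees (positive = right) toward the waypoint,
    using the direction of motion between the two fixes as the heading."""
    heading = bearing_deg(*prev_fix, *curr_fix)
    target = bearing_deg(*curr_fix, *waypoint)
    return (target - heading + 180.0) % 360.0 - 180.0

# Walking north; the waypoint lies to the north-east, so roughly +45 (turn right) is reported.
print(turn_toward((65.6170, 22.1370), (65.6179, 22.1370), (65.6188, 22.1392)))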

4.1 Haptic Feedback


Haptics, being the primary way to explore one’s surroundings non-visually, has been dif-
ficult to incorporate into navigation aids. The typical manifestation of haptics is in
the form of vibration feedback, which is primarily used to convey simple alerts. Exam-
ples of navigation aids utilising this kind of feedback include the UltraCane [6] and the
Miniguide [7]. These two devices work on the same principle, but the UltraCane is an
extension of a regular white cane, whereas the Miniguide is a complementary unit. Both
employ ultrasound to measure the distance to obstacles, and both present this informa-
tion through vibrating in bursts. The time between these bursts increases as the distance
to the measured obstacle increases. This kind of feedback has also been used for route
guidance. Ertan et al. [8] used a grid of 4-by-4 vibration motors embedded in a vest to
signal directions. This was accomplished by turning the motors on and off in specific
patterns to signal a given direction.
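A minimal sketch of this distance-to-pulse-rate coding is given below; the thresholds, and the sensor and motor calls in the commented loop, are illustrative placeholders and are not taken from the UltraCane or the Miniguide.

def burst_pause_s(distance_m, min_d=0.3, max_d=4.0, min_pause=0.05, max_pause=1.0):
    """Map a measured obstacle distance to the pause between vibration bursts:
    the closer the obstacle, the shorter the pause (all values illustrative)."""
    d = min(max(distance_m, min_d), max_d)      # clamp to the assumed sensing range
    t = (d - min_d) / (max_d - min_d)           # 0 (near) .. 1 (far)
    return min_pause + t * (max_pause - min_pause)

# A hypothetical feedback loop (sensor and motor functions are placeholders):
# while True:
#     pulse_vibration_motor(duration_s=0.05)
#     time.sleep(burst_pause_s(read_ultrasound_distance_m()))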
Vibration feedback is limited when it comes to presenting more detailed information.
Another option for haptic feedback is to use a haptic interface. These interfaces have been
used primarily for surgical simulations, but are more and more used for virtual reality
applications and gaming. The Virtual White Cane [9] used such a haptic interface to
convey spatial information. The system was mounted on a wheelchair and used a laser
rangefinder to obtain range data in a horizontal plane of 270° centred in the forward
direction. A three-dimensional model was constructed from these data, and the haptic
interface was used to explore this model by touch. A field trial concluded that this kind of interaction resembling a white cane was feasible and easy to learn for visually impaired
users familiar with the regular white cane.

4.2 Auditory Feedback


The most widely used method of conveying complex information non-visually is through
audio. Among audio-based solutions, devices based on GPS are the most common. Most GPS apps and
devices designed for sighted users present information by displaying a map on a screen,
and can provide eyes-free access by announcing turn-by-turn directions with synthetic
or recorded phrases of speech. Devices specifically tailored to the visually impaired usu-
ally rely solely on speech synthesis as output, and buttons and/or speech recognition as
input. Efforts have been made to improve the usefulness of this mode of interaction.
For example, the Trekker Breeze [10] offers a “Where am I?” function that describes
the current position based on close-by landmarks. Additionally, a retrace feature is pro-
vided, allowing someone who has gone astray to retrace their steps back to the intended
route. These days much of this functionality can be provided through apps, as evidenced
by Ariadne GPS for the iPhone [11] and Loadstone GPS for S60 Nokia handsets [12].
An alternative to speech for route guidance can be found in the System for Wearable
Navigation (SWAN) [13]. The SWAN system uses stereo headphones equipped with a
device (magnetometer) that keeps track of the orientation of the head. Based on the
relation between the next waypoint and the direction the user is facing, virtual auditory
“beacons” are positioned in stereo space.
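The principle can be sketched as follows: the waypoint bearing is compared with the measured head orientation, and the result is turned into left/right gains with a constant-power pan law. This is only an illustration of the idea, not SWAN’s actual audio pipeline, and the function name and angle conventions are our own.

import math

def beacon_gains(head_yaw_deg, waypoint_bearing_deg):
    """Return (left_gain, right_gain) for a beacon sound, based on where the
    next waypoint lies relative to the direction the head is facing."""
    rel = (waypoint_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0  # -180..180
    rel = max(-90.0, min(90.0, rel))          # collapse the rear half onto the sides
    a = math.radians((rel + 90.0) / 2.0)      # 0 (full left) .. 90 degrees (full right)
    return math.cos(a), math.sin(a)

# Waypoint 30 degrees to the right of the head: louder in the right ear.
print(beacon_gains(head_yaw_deg=0.0, waypoint_bearing_deg=30.0))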
For obstacle avoidance, Jameson and Manduchi [14] developed a wearable device that
alerts the user of obstacles at head-height. An acoustic warning signal is emitted when
an obstacle is sensed (by ultrasound) to be inside a predetermined range. Simple auditory cues like these are often used, together with or as an alternative to vibration feedback, but there are exceptions, such as The vOICe for Android [15], which converts images it continually
captures from the camera into short snippets of sound. These sound snippets contain
multiple frequencies corresponding to pixels in the image.
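As a rough sketch of this kind of image-to-sound mapping (not The vOICe’s actual algorithm), the code below plays the columns of a grayscale image from left to right, letting each row contribute a sine tone whose loudness follows the pixel brightness; all parameter values are arbitrary.

import numpy as np

def image_to_sound(image, duration_s=1.0, sample_rate=22050, f_low=200.0, f_high=4000.0):
    """Convert a grayscale image (rows x cols, values 0..1) into a mono sound clip:
    columns are swept left to right, rows map to sine frequencies (top = high pitch)
    and pixel brightness controls the amplitude of each sine."""
    rows, cols = image.shape
    freqs = np.linspace(f_high, f_low, rows)
    t = np.arange(int(duration_s * sample_rate / cols)) / sample_rate
    columns = []
    for c in range(cols):
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        columns.append(tones.sum(axis=0))
    clip = np.concatenate(columns)
    peak = np.max(np.abs(clip))
    return clip / peak if peak > 0 else clip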

5 Discussion
Some of the solutions mentioned in the previous section are commercially available, the
least expensive being the smartphone apps (provided the user already has a smartphone).
Despite this, the adoption of this kind of assistive technology has not been great. Com-
pare this to the smartphones themselves, which are used by many non-sighted individu-
als. Even touch-screen devices can be and are used by the blind, thanks to screen reader
software.
The reason for the weak adoption of navigation aids appears not to have been sci-
entifically investigated. More generally, there seems to be a lack of scientifically sound
studies on the impact of assistive technology for the visually impaired. In a 2011 synthe-
sis article by Kelly and Smith [16] on the impact of assistive technology in education, 256 studies were examined, but only a few articles were deemed to follow proper evidence-
based research practices. Going even further in the generalisation, one can find a lot
written about technology acceptance in a general sense. Models such as the Technology
Acceptance Model (TAM) [17] are well-established, but it is not clear how these apply
to persons with disabilities.
Despite the lack of studies on adoption in this specific case, some things can be said
based on how individuals with a visual impairment perceive space, and the solutions they
presently employ. It should no longer be questionable that non-sighted people have a
working world model. It is, however, important to note that this model is constructed differently from that of a sighted individual, something to keep in mind when planning user interaction. For example, consider the “where am I?” function mentioned
in the previous section. This function can be more or less useful depending on how the
surrounding points of interest are presented. A non-sighted individual would be more
likely to benefit from a presentation that reads like a step-by-step trip, as this favours
the “bottom up” way of learning about one’s surroundings.
Some things can be learnt by comparing the technological solutions to a sighted human
being who knows a specific route. This person is able to give the same instructions
as a GPS device, but can adapt the verbosity of these instructions based on current
needs and preferences. Additionally, this person can actively see what is going on in
the environment, and can assist if, for example, the planned route is blocked or if some
unexpected obstacle has to be negotiated. All of this is possible with vision alone, but
is difficult to replicate with the other senses. Ideally, a navigation aid should have the
ability to adapt its instructions in the same way a human guide can.
Most of the available solutions use speech output. This interaction works well on a
high level, providing general directions and address information. There are, however,
fundamental limitations that speech interfaces possess. Interpreting speech is a slow
process that requires much mental effort [18], and accurately describing an environment
in detail is difficult to do with a sensible amount of speech [19]. Non-speech auditory
cues have the advantage that they can convey complex information much faster, but they
still require much mental effort to process in addition to more training. Headphones are
typically used to receive this kind of feedback, but they generate their own problems
as they (at least partially) block out sounds from the environment that are useful to a
visually impaired person. Strothotte et al. [5] noted that many potential users of their
system (MoBIC) expressed worries about using headphones for precisely this reason.
Complex auditory representations such as those used in The vOICe for Android [15] require much training, and their suitability for long-term use is questionable.
Haptic feedback is a promising option as humans have evolved to instinctively know
how to avoid obstacles by touch. While the typical vibration feedback widely employed
today does not easily convey complex information, it works well in conveying alerts of
various kinds. Tactile displays of various kinds are being developed [20, 21] that could
be very useful for navigation purposes. For instance, nearby walls could be displayed in
real-time on a tactile display. This would be very similar to looking at a close-up map on
a smartphone or GPS device. The usefulness of tactile maps on paper has been studied, with mostly positive outcomes [22]. Even so, the efficiency of real-time tactile maps is
not guaranteed.
Interaction issues aside, there are many practical problems that need to be solved
to minimize the effort involved in using the technology. In these regards, much can be
learnt from the white cane. The cane is very natural to use; it behaves like an extended
arm. It is easy to know the benefits and limitations of the cane, and it is obvious if
the cane suddenly stops working, i.e. it breaks. This can be compared to a navigation
aid, which, although it might provide more information than the cane, requires more
training to use efficiently. Additionally, there is an issue of security. It is not easy to
tell if the information given by the system is accurate or even true. Devices that aim to
replace the white cane face a much tougher challenge than those wishing to complement
the cane.
When conducting scientific evaluations, care should be taken when drawing conclu-
sions based on sighted (usually blindfolded) individuals’ experiences. While such studies
are certainly useful, one should be careful when applying these to non-sighted persons.
For example, studies have shown that visually impaired individuals perform better at
exploring objects by touch [23] and are better at using spatial audio [24]. As a result,
one should expect sighted participants’ performance to be worse
than that of visually impaired persons. Care must also be taken when comparing the
experience provided by a certain navigation aid to that of a sighted person’s unaided
experience. This comparison is of limited value as it rests on the assumption that one
should try to mimic the experience of sight, rather than what is provided by sight. This
assumption is valid if the user in question has the experience of sighted navigation to
draw upon, but does not hold for people who have been blind since birth. The benefits
and issues of navigation aids need to be understood from a non-visual perspective. One
should not try to impose a visual world model on someone who already has a perfectly
working, albeit different, spatial model.

6 Conclusions
The purpose of this article was to look into the means present solutions employ to present
spatial information non-visually. The goal was to suggest some design guidelines based
on the present solutions and on how non-visual spatial perception works. A secondary
goal was to shed light on the reasons for the weak adoption of navigation aids. While
technology adoption has been studied in general, there is a research gap to be filled when
it comes to navigation aids for the visually impaired. Though the previous discussion
mentioned several issues regarding information presentation, it is not clear if or how these
contribute to the weak adoption. Further, there are a multitude of non-technological
aspects that affect adoption as well. Looking back only a couple of decades, a central
technological issue was how to make a system employing sensors practically feasible.
Components were bulky and needed to be powered by large batteries. Today, this is less
of an issue, as sensors are getting so small they can be woven into clothes. Even though
spatial information can now easily be collected and processed in real-time, the problem of how to convey this information non-visually remains. Many solutions have been tried,
with mixed results, but there are no clear guidelines on how this interaction should be
done. There are guidelines on how different kinds of information should be displayed in
a graphical user interface on a computer screen. Similarly, there should be guidelines on
how to convey different types of spatial information non-visually. The primary means of
doing this are through audio and touch. Audio technology is quite mature today, whereas
solutions based on haptics still have a lot of room for improvement. As audio and touch
both have their unique advantages, it is likely they both will play an important role in
future navigation aids, but it is not clear yet what kind of feedback is best suited to one
modality or the other. A further issue for investigation is how to code the information
such that it is easily understood and efficient to use.
Design choices should stem from an understanding of how visually impaired individ-
uals perceive and understand the space around them. From a visual point of view, it
is easy to make assumptions that are invalid from the perspective of non-visual spatial
understanding. It is encouraging to see studies conclude that lack of vision per se does
not affect spatial ability negatively. This stresses the importance of training visually
impaired individuals to navigate independently.
Below are some important points summarised from the previous discussion:

• Use speech with caution. Speech can convey complex information but requires
much concentration and is time-consuming. It should therefore not be used in
critical situations that require quick actions.

• Headphones block environmental sounds. If using audio, headphones should be used with caution as they block useful sounds from the environment. Bone
conduction headphones that do not cover the ears can help when the system is
silent, but any audio it emits will compete with environmental sounds for the user’s
attention.

• Non-speech audio is effective, but requires training. Complex pieces of information can be rapidly delivered through non-speech audio, at the cost of more
needed training.

• Be careful with continuous audio. Continuous auditory feedback can be both distracting and annoying.

• Consider vibrations for alerts. Vibration feedback is a viable alternative to non-speech audio as alert signals. More complex information can be conveyed at
the cost of more needed training.

• Real-time tactile maps will be possible. Tactile displays have the potential to
provide real-time tactile maps, but using such maps effectively likely requires much
training for individuals who are not used to this kind of spatial view.

• Strive for an intuitive interaction. Regardless of the means used to present spatial information, one should strive for an intuitive interaction. This not only minimises needed training, but also the risks involved in using the system. For obstacle avoidance, one should try to exploit the natural ways humans have evolved to avoid obstacles.

• Systems should adapt. Ideally, systems should have the ability to adapt their
instructions based on preferences and situational needs. The difference in prefer-
ences is likely large, as there are many types and degrees of visual impairment, and
thus users will have very different navigation experiences.

• Be careful when drawing conclusions from sighted individuals’ experiences. When conducting evaluations with sighted participants, one must be careful
when drawing general conclusions. Non-sighted individuals have more experience
of using other senses besides vision for spatial tasks. Additionally, one must not
forget that the prior navigation experiences of non-sighted compared to sighted in-
dividuals can categorically differ. In other words, assumptions made from a sighted
point of view do not necessarily hold for non-sighted individuals. For these reasons
it is important to conduct evaluations with the target users, or when not possible
to do so, carefully limit the applicability of conclusions drawn based on sighted
(including blindfolded) individuals’ experiences.

Acknowledgement
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology—both in Sweden—and by the European
Union Objective 2 North Sweden structural fund.

References
[1] V. Morash, A. E. Connell Pensky, A. U. Alfaro, and A. McKerracher, “A review of
haptic spatial abilities in the blind,” Spatial Cognition and Computation, vol. 12,
no. 2-3, pp. 83–95, 2012.

[2] S. Schmidt, C. Tinti, M. Fantino, I. C. Mammarella, and C. Cornoldi, “Spatial representations in blind people: The role of strategies and mobility skills,” Acta Psychologica, vol. 142, no. 1, pp. 43–50, 2013.

[3] S. Martinez-Conde, S. L. Macknik, and D. H. Hubel, “The role of fixational eye movements in visual perception,” Nat. Rev. Neurosci., pp. 229–240, Mar 2004.

[4] J. C. Middlebrooks and D. M. Green, “Sound localization by human listeners,” Annual review of psychology, vol. 42, no. 1, pp. 135–159, 1991.

[5] T. Strothotte, S. Fritz, R. Michel, A. Raab, H. Petrie, V. Johnson, L. Reichert, and A. Schalt, “Development of dialogue systems for a mobility aid for blind people: initial design and usability testing,” in Proc. 2nd Annu ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996, pp. 139–144.

[6] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.

[7] GDP Research, “The miniguide mobility aid,” http://www.gdp-research.com.au/minig 1.htm, accessed 2016-03-21.

[8] S. Ertan, C. Lee, A. Willets, H. Tan, and A. Pentland, “A wearable haptic navi-
gation guidance system,” in Wearable Computers, 1998. Digest of Papers. Second
International Symposium on, 1998, pp. 164–165.

[9] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.

[10] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld talking gps.html, accessed 2016-03-21.

[11] L. Ciaffoni, “Ariadne GPS,” http://www.ariadnegps.eu/, accessed 2016-03-21.

[12] Loadstone GPS Team, “Loadstone GPS,” http://www.loadstone-gps.com/, accessed 2016-03-21.

[13] GT Sonification Lab, “SWAN: System for wearable audio navigation,” http://sonify.
psych.gatech.edu/research/swan/, accessed 2014-02-24.

[14] B. Jameson and R. Manduchi, “Watch your head: A wearable collision warning
system for the blind,” in Sensors, 2010 IEEE, 2010, pp. 1922–1927.

[15] P. B. L. Meijer, “The voice for android,” http://www.artificialvision.com/android.htm, accessed 2014-02-24.

[16] M. Kelly, Stacy and W. Smith, Derrick, “The impact of assistive technology on the
educational performance of students with visual impairments: A synthesis of the
research.” Journal of Visual Impairment & Blindness, vol. 105, no. 2, pp. 73–83,
2011.

[17] F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of
information technology,” MIS Quarterly, vol. 13, no. 3, pp. 319–340, 1989.

[18] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.
[19] N. Franklin, “Language as a means of constructing and conveying cognitive maps,” in The construction of cognitive maps. Springer, 1996, pp. 275–295.

[20] J. Rantala, K. Myllymaa, R. Raisamo, J. Lylykangas, V. Surakka, P. Shull, and M. Cutkosky, “Presenting spatial tactile messages with a hand-held device,” in IEEE World Haptics Conf. (WHC), Jun. 2011, pp. 101–106.

[21] A. Yamamoto, S. Nagasawa, H. Yamamoto, and T. Higuchi, “Electrostatic tactile display with thin film slider and its application to tactile telepresentation systems,” IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 2, pp. 168–177, Mar–Apr 2006.

[22] M. A. Espinosa, S. Ungar, E. Ochaíta, M. Blades, and C. Spencer, “Comparing methods for introducing blind and visually impaired people to unfamiliar urban environments,” Journal of Environmental Psychology, vol. 18, no. 3, pp. 277–287, 1998.

[23] A. Vinter, V. Fernandes, O. Orlandi, and P. Morgan, “Exploratory procedures of tactile images in visually impaired and blindfolded sighted children: How they relate to their consequent performance in drawing,” Research in Developmental Disabilities, vol. 33, no. 6, pp. 1819–1831, 2012.

[24] R. W. Massof, “Auditory assistive devices for the blind,” in Proc. Int. Conf. Auditory
Display, 2003, pp. 271–275.
Paper B
Obstacle Avoidance Using Haptics
and a Laser Rangefinder

Authors:
Daniel Innala Ahlmark, Håkan Fredriksson and Kalevi Hyyppä

Reformatted version of paper originally published in:


Proceedings of the 2013 Workshop on Advanced Robotics and its Social Impacts, Tokyo,
Japan.


© 2013, IEEE. Reprinted with permission.

Obstacle Avoidance Using Haptics and a Laser
Rangefinder

Daniel Innala Ahlmark, Håkan Fredriksson, Kalevi Hyyppä

Abstract

In its current form, the white cane has been used by visually impaired people for almost
a century. It is one of the most basic yet useful navigation aids, mainly because of its
simplicity and intuitive usage. For people who have a motion impairment in addition to
a visual one, requiring a wheelchair or a walker, the white cane is impractical, leading to
human assistance being a necessity. This paper presents the prototype of a virtual white
cane using a laser rangefinder to scan the environment and a haptic interface to present
this information to the user. Using the virtual white cane, the user is able to ”poke” at
obstacles several meters ahead without physical contact with the obstacle. By using
a haptic interface, the interaction is very similar to how a regular white cane is used.
This paper also presents the results from an initial field trial conducted with six people
with a visual impairment.

1 Introduction
During the last few decades, people with a visual impairment have benefited greatly
from the technological development. Assistive technologies have made it possible for
children with a visual impairment to do schoolwork along with their sighted classmates,
and later pick a career from a list that–largely due to assistive technologies–is expanding.
Technological innovations specifically designed for people with a visual impairment also
aid in daily tasks, boosting confidence and independence.
While recent development has made it possible for a person with a visual impairment
to navigate the web with ease, navigating the physical world is still a major challenge.
The white cane is still the obvious aid to use. It is easy to operate and trust because it
behaves like an extended arm. The cane also provides auditory information that helps
with identifying the touched material as well as acoustic echolocation. For someone who,
in addition to a visual impairment, is in need of a wheelchair or a walker, the cane is
impractical to use and therefore navigating independently of another person might be
an impossible task. The system presented in this paper, henceforth referred to as ’the
virtual white cane’, is an attempt to address this problem using haptic technology and
a laser rangefinder. This system makes it possible to detect obstacles without physically
hitting them, and the length of the virtual cane can be varied based on user preference
and situational needs. Figure B.1 shows the system in use.


Figure B.1: The virtual white cane. This figure depicts the system currently set up on the
MICA wheelchair.

Haptic technology (the technology of the sense of touch) opens up new possibilities of
human-machine interaction. Haptics can be used to enhance the experience of a virtual
world when coupled with other modalities such as sight and sound [1], as well as for
many stand-alone applications such as surgical simulations [2]. Haptic technology also
paves way for innovative applications in the field of assistive technology. People with
a visual impairment use the sense of touch extensively; reading braille and navigating
with a white cane are two diverse scenarios where feedback through touch is the common
element. Using a haptic interface, a person with a visual impairment can experience
three-dimensional models without the need to have a physical model built. For the
virtual white cane, a haptic interface was a natural choice as the interaction resembles
the way a regular white cane is used. This should result in a system that is intuitive to
use for someone who has previous experience using a traditional white cane.

The next section discusses previous work concerning haptics and obstacle avoidance
systems for people with a visual impairment. Section 3 is devoted to the hardware and
software architecture of the system. Section 4 presents results from an initial field trial,
and section 5 concludes the paper and gives some pointers to future work.

2 Related Work
The idea of presenting visual information to people with a visual impairment through a
haptic interface is an appealing one. This idea has been applied to a number of different
scenarios during recent years. Fritz et al. [3] used haptic interaction to present scientific
data, while Moustakas et al. [4] applied the idea to maps.
Models that are changing in time pose additional challenges. The problem of ren-
dering dynamic objects haptically was investigated by e.g. Diego Ruspini and Oussama
Khatib [5], who built a system capable of rendering dynamic models, albeit with many
restrictions. When presenting dynamic information (such as in our case a model of the
immediate environment) through a haptic interface, care must be taken to minimize a
phenomenon referred to as haptic fall-through, where it is sometimes possible to end up
behind (fall through) a solid surface (see section 3.3 for more details). Minimizing this
is of critical importance in applications where the user does not see the screen, as it
would be difficult to realize that the haptic probe is behind a surface. Gunnar Jansson
at Uppsala University in Sweden has studied basic issues concerning visually impaired
peoples’ use of haptic displays [6]. He notes that being able to look at a visual display
while operating the haptic device increases the performance with said device significantly.
The difficulty lies in the fact that there is only one point of contact between the virtual
model and the user.
When it comes to sensing the environment numerous possibilities exist. Ultrasound
has been used in devices such as the UltraCane [7], and Yan and Manduchi [8] used a
laser rangefinder in a triangulation approach by surface tracking. Depth-measuring (3D)
cameras are appealing, but presently have a narrow field of view, relatively low accuracy,
and a limited range compared to laser rangefinders. These cameras undergo constant
improvements and will likely be a viable alternative in a few years. Indeed, consumer-
grade devices such as the Microsoft Kinect has been employed as range-sensors for mobile
robots (see e.g. [9]). The Kinect is relatively cheap, but suffers from the same problems
as other 3D cameras at present [10].
Spatial information as used in navigation and obstacle avoidance systems can be con-
veyed in a number of ways. This is a primary issue when designing a system specifically
for the visually impaired, perhaps evidencing the fact that not many systems are widely
adopted despite many having been developed. Speech has often been used, and while it
is a viable option in many cases, it is difficult to present spatial information accurately
through speech [11]. Additionally, interpreting speech is time-consuming and requires a
lot of mental effort [12]. Using non-speech auditory signals can speed up the process,
but care must be taken in how this audio is presented to the user, as headphones make
it more difficult to perceive useful sounds from the environment [13].

3 The Virtual White Cane


Published studies on the subject of obstacle avoidance utilizing force feedback [14, 15]
indicate that adding force feedback to steering controls leads to fewer collisions and a better user experience. The virtual white cane presented in this paper provides haptic
feedback decoupled from the steering process, so that a person with a visual impair-
ment can ”poke” at the environment like when using a white cane. Some important
considerations when designing such a system are:

• Reliability. A system behaving unexpectedly immediately decreases the trust in said system and might even cause an accident. To become adopted, the benefit
and reliability must outweigh the risk and effort associated with using the system.
If some problem should arise, the user should immediately be alerted; an error
message displayed on a computer monitor is not sufficient.

• Ease of use. The system should be intuitive to use. This factor is especially valuable
in an obstacle avoidance system because human beings know how to avoid obstacles
intuitively. Minimized training and better adoption of the technology should follow
from an intuitive design.

• The system should respond as quickly as possible to changes in the environment. This feature has been the focus for our current prototype. Providing immediate
haptic feedback through a haptic interface turned out to be a challenge (see section
3.3).

3.1 Hardware
The virtual white cane consists of a haptic display (Novint Falcon [16]), a laser rangefinder
(SICK LMS111 [17]), and a laptop (MSI GT663R [18] with an Intel Core i7-740QM run-
ning at 1.73 GHz, 8GB RAM and an NVIDIA GeForce GTX 460M graphics card). These
components, depicted in figure B.2, are currently mounted on the electric wheelchair
MICA (Mobile Internet Connected Assistant), which has been used for numerous re-
search projects at Luleå University of Technology over the years [19, 20, 21]. MICA is
steered using a joystick in one hand, and the Falcon is used to feel the environment with
the other.
The laser rangefinder is mounted so that it scans a horizontal plane of 270 degrees
in front of the wheelchair. The distance information is transmitted to the laptop over
an ethernet connection at 50 Hz and contains 541 angle-distance pairs (θ, r), yielding an
angular resolution of half a degree. The LMS111 can measure distances up to 20 meters
with an error within three centimeters. This information is used by the software to build a
three-dimensional representation of the environment. This representation assumes that
for each angle θ, the range r will be the same regardless of height. This assumption
works fairly well in a corridor environment where most potential obstacles that could be
missed are stacked against the walls. This representation is then displayed graphically
as well as transmitted to the haptic device, enabling the user to touch the environment
continuously.

Figure B.2: The Novint Falcon, joystick and SICK LMS111.

3.2 Software Architecture


The software is built on the open-source H3DAPI platform which is developed by Sense-
Graphics AB [22]. H3D is a scenegraph-API based on the X3D 3D-graphics standard,
enabling rapid construction of haptics-enabled 3D scenes. At the core of such an API
is the scenegraph: a tree-like data structure where each node can define anything
from global properties and scene lighting to properties of geometric objects as well as
the objects themselves. To render a scene described by a scenegraph, the program tra-
verses this graph, rendering each node as it is encountered. This concept makes it easy
to perform a common action on multiple nodes by letting them be child nodes of a node
containing the action. For example, in order to move a group of geometric objects a
certain distance, it is sufficient to let the geometric nodes be children of a transform
node defining the translation.
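
As a concrete illustration of this grouping mechanism, the fragment below shows two shapes
placed as children of a single Transform node, so that both move together when the translation
changes. It is written here as a plain Python string; the translation value and the shapes are
arbitrary examples rather than nodes from the actual scene.

# Illustrative X3D fragment (not part of the actual scenegraph): both shapes
# follow the parent Transform, so changing its translation moves them together.
GROUPED_SHAPES = """
<Transform translation="0.5 0 0">
  <Shape><Appearance><Material/></Appearance><Box size="0.1 0.1 0.1"/></Shape>
  <Shape><Appearance><Material/></Appearance><Sphere radius="0.05"/></Shape>
</Transform>
"""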
H3DAPI provides the possibility of extension through custom-written program mod-
ules (which are scenegraph nodes). These nodes can either be defined in scripts (using
the Python language), or compiled into dynamically linked libraries from C++ source
code. Our current implementation uses a customized node defined in a Python script
that repeatedly gets new data from the laser rangefinder and renders it.

Scenegraph View
The X3D scenegraph, depicted in figure B.3, contains configuration information com-
prised of haptic rendering settings (see section 3.3) as well as properties of static objects.

Figure B.3: The X3D scenegraph. This diagram shows the nodes of the scene and the relationship
among them. The transform (data) node is passed as a reference to the Python script (described
below). Note that nodes containing configuration information or lighting settings are omitted.

Since the bottom of the Novint Falcon’s workspace is not flat, a ”floor” is drawn at a
height where maximum horizontal motion of the Falcon’s handle is possible without any
bumps. This makes using the system more intuitive since this artificial floor behaves like
the real floor, and the user can focus on finding obstacles without getting distracted by
the shape of the haptic workspace. At program start-up, this floor is drawn at a low
height (outside the haptic workspace) and is then moved slowly upwards to the desig-
nated floor coordinate over a couple of seconds. This movement is done to make sure the
haptic proxy (the rendered sphere representing the position of the haptic device) does
not end up underneath the floor when the program starts.
The scenegraph also contains a Python script node. This script handles all dynamics
of the program by overriding the node’s traverseSG method. This method executes once
every scenegraph loop, making it possible to use it for obtaining, filtering and rendering
new range data.

Python Script
The Python script fetches data from the laser rangefinder continually, then builds and
renders the model of this data graphically and haptically. It renders the data by creating
an indexed triangle set node and attaching it to the transform (data) node it gets from
the scenegraph.
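
A minimal Python skeleton of this per-frame update is given below. The rangefinder and
renderer objects and the build_wall_vertices helper are hypothetical placeholders used only for
illustration; in the actual script these steps run inside the traverseSG method, and rendering is
done by rebuilding the indexed triangle set node.

def update_once(rangefinder, filters, renderer, wall_height):
    # One pass of the per-frame update described above: fetch, filter, rebuild, render.
    scan = rangefinder.read()          # placeholder: the latest 541 range readings
    for f in filters:                  # the spatial and time-domain filters (see below)
        scan = f(scan)
    vertices = build_wall_vertices(scan, wall_height)  # placeholder for Algorithm 1 below
    renderer.show(vertices)            # placeholder: update the indexed triangle set node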
The model can be thought of as a set of tall, connected rectangles where each rectangle
is positioned and angled based on two adjacent laser measurements. Below is a simplified
version of the algorithm buildModel, which outputs a set of vertices representing the
model. From this list of points, the wall segments are built as shown in figure B.4.

Figure B.4: The ith wall segment, internally composed of two triangles.

For rendering purposes, each tall rectangle is divided into two triangles. The coordinate
system is defined as follows: Sitting in the wheelchair, the positive x-axis is to the right,
y-axis is up and the z-axis points backwards.

Algorithm 1 buildModel
Require: a = an array of n laser data points, where index i corresponds to the angle
  i/2 degrees (0 to n/2 degrees); h = the height of the walls
Ensure: v = a set of 2n vertices representing the triangles to be rendered
  for i = 0 to n − 1 do
    r ← a[i]
    θ ← (π/180) · (i/2)
    convert (r, θ) to cartesian coordinates (x, z)
    v[i] ← vector(x, 0, z)
    v[n + i] ← vector(x, h, z)
  end for
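
For reference, Algorithm 1 can be written as the short, runnable Python sketch below. The
angular offset that aligns the scan's zero-degree reading with the wheelchair coordinate system
is an assumption made for illustration, since it depends on how the rangefinder is mounted; the
actual implementation feeds the resulting vertices into an indexed triangle set node.

import math

def build_model(a, h, angular_offset=math.radians(-135.0)):
    # Python sketch of Algorithm 1 (buildModel).
    # a: n range readings, where reading i corresponds to the angle i/2 degrees.
    # h: wall height in metres.
    # Returns 2n vertices: v[i] is the floor point and v[n + i] the top point of
    # measurement i, in the wheelchair frame (x right, y up, z backwards).
    # The angular_offset of -135 degrees is an assumed mounting offset.
    n = len(a)
    v = [None] * (2 * n)
    for i in range(n):
        r = a[i]
        theta = math.radians(i / 2.0) + angular_offset
        x = r * math.sin(theta)
        z = -r * math.cos(theta)  # obstacles straight ahead get negative z (z points backwards)
        v[i] = (x, 0.0, z)
        v[n + i] = (x, h, z)
    return v

Calling build_model on a filtered scan yields the vertex list from which the triangle pairs shown
in figure B.4 are formed.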

In our current implementation, laser data is passed through three filters before the
model is built. These filters—a spatial low-pass filter, a spatial median filter and a time-
domain median filter—serve two purposes: firstly, to suppress the noise in the laser data,
which is noticeable both visually and haptically; secondly, to prevent too sudden changes
to the model in order to minimize haptic fall-through (see the next section for an
explanation of this).
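
The filter parameters are not specified above, so the following Python sketch merely illustrates
the three filter types on a single 541-point scan; the window sizes, the low-pass kernel and the
number of buffered scans are assumptions that would need tuning, not the values used in the
prototype.

import numpy as np

def spatial_lowpass(scan, kernel=(0.25, 0.5, 0.25)):
    # Weighted moving average over neighbouring angles (spatial low-pass).
    return np.convolve(scan, kernel, mode="same")

def spatial_median(scan, width=3):
    # Median over a sliding window of adjacent angles; removes single-sample spikes.
    pad = width // 2
    padded = np.pad(scan, pad, mode="edge")
    return np.array([np.median(padded[i:i + width]) for i in range(len(scan))])

def temporal_median(recent_scans):
    # Per-angle median over the last few scans (time-domain median filter).
    return np.median(np.asarray(recent_scans), axis=0)

In this sketch, each new scan would first be smoothed by the two spatial filters, after which the
time-domain median is taken over a short buffer of the most recent filtered scans before the
model is rebuilt.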

3.3 Dynamic Haptic Feedback


The biggest challenge so far has been to provide satisfactory continual haptic feedback.
The haptic display of dynamically changing nontrivial models is an area of haptic ren-
dering that could see much improvement. The most prominent issue is the fall-through
phenomenon where the haptic proxy goes through a moving object. When an object is
deforming or moving rapidly, time instances occur where the haptic probe is moved to a
position where there is no triangle to intercept it at the current instant in time, thus no
force is sent to the haptic device. This issue is critical in an obstacle avoidance system
such as the virtual white cane where the user does not see the screen, thus having a
harder time detecting fall-through.
To minimize the occurrence of fall-through, three actions have been taken:

• The haptic renderer was chosen with this issue in mind. The renderer used for
the virtual white cane was created by Diego Ruspini [23]. This renderer treats the
proxy as a sphere rather than a single point (usually referred to as a god-object),
which made a big difference when it came to fall-through. The proxy radius had a
large influence on this problem; a large proxy can cope better with larger changes
in the model since it is less likely that a change is bigger than the proxy radius.
On the other hand, the larger the proxy is, the less haptic resolution is possible.

• Any large coordinate changes are linearly interpolated over time. This means that
sudden changes are smoothed out, preventing per-frame changes larger than the
proxy radius (a sketch of this capping appears at the end of this section). As a
trade-off, rapid and large changes in the model are unnecessarily delayed.

• Three different filters (spatial low-pass and median, time-domain median) are ap-
plied to the data to remove spurious readings and reduce fast changes. These filters
delay all changes in the model slightly and have some impact on the application’s
frame rate.

Having these restrictions in place avoids most fall-through problems, but does so at
the cost of haptic resolution and a slow-reacting model, which has been acceptable in the
early tests.
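
One simple way to realise the interpolation mentioned in the second point above is to cap how
far any vertex may move per scenegraph loop, as in the Python sketch below; the function name
and the step limit are illustrative assumptions and not the exact implementation used in the
prototype.

def limit_model_change(previous, target, max_step=0.02):
    # Rate-limit vertex motion: move each coordinate towards its target by at
    # most max_step (metres) per scenegraph loop, so that the surface never
    # jumps further than the proxy radius within a single frame.
    # The max_step value here is an assumption, not the tuned value.
    def approach(p, t):
        return p + max(-max_step, min(max_step, t - p))

    return [tuple(approach(p, t) for p, t in zip(prev_v, tgt_v))
            for prev_v, tgt_v in zip(previous, target)]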

4 Field Trial
In order to assess the feasibility of haptics as a means of presenting information about
nearby obstacles to people with a visual impairment, a field trial with six participants
(ages 52–83) was conducted. All participants were blind (one since birth) and were
white cane users. Since none of the participants were used to a wheelchair, the system
was mounted on a table on wheels (see figure B.5). A crutch handle with support for
the arm was attached to the left side of the table (from the user’s perspective) so that
it could be steered with the left hand and arm, while the right hand used the haptic
interface.

Figure B.5: The virtual white cane as mounted on a movable table. The left hand is used to
steer the table while the right hand probes the environment through the haptic interface.

The trial took place in a corridor environment at the Luleå University of Technology
campus. The trial consisted of an acquaintance phase of a few minutes where the par-
ticipants learnt how to use the system, and a second phase where they were to traverse
a couple of corridors, trying to stay clear of the walls and avoiding doors and other ob-
stacles along the way. The second phase was video-recorded, and the participants were
interviewed afterwards.
All users grasped the idea of how to use the system very quickly. When interviewed,
they stated that they thought their previous white cane experience helped them use this
system. This supports the notion that the virtual white cane is intuitive to use and easy
to understand for someone who is familiar with the white cane. While the participants
understood how to use the system, they had difficulties accurately determining the dis-
tances and angles to obstacles they touched. This made it tricky to perform maneuvers
that require high precision such as passing through doorways. It is worth noting that the
participants quickly adopted their own technique of using the system. Most notably, a
pattern emerged where a user would trace back and forth along one wall, then sweep (at
a close distance) to the other wall, and then repeat the procedure starting from that wall.
None of the users expressed discomfort or insecurity, but comments were made re-
garding the clumsiness of the prototype and that it required both physical and mental
effort to use. An upcoming article (see [24]; title may change) will present a more detailed
report on the field trial.

5 Conclusions
Figure B.6 shows a screenshot of the application in use. The field trial demonstrated the
feasibility of haptic interaction for obstacle avoidance, but many areas of improvement
were also identified. The difficulty in determining the precise location of obstacles could
be due to the fact that none of the users had practiced this earlier. Since a small
movement of the haptic grip translates to a larger motion in the physical world, a scale
factor between the real world and the model has to be learned. This is further complicated
by the placement of the laser rangefinder and haptic device relative to the user. As the
model is viewed through the perspective of the laser rangefinder, and perceived through
a directionless grip held with the right hand, a translation has to be learned in addition
to the scale factor in order to properly match the model with the real world. A practice
phase specifically made for learning this correspondence might be in order; however, the
point of the performed field trial was to provide as little training as possible.
The way the model is built and the restrictions placed on it in order to minimize haptic
fall-through have several drawbacks. Since the obstacle model is built as a coherent,
deformable surface, a moving object such as a person walking slowly from side to side
in front of the laser rangefinder will cause large, rapid changes in the model. As the
person moves, rectangles representing obstacles farther back are rapidly shifted forward
to represent the person, and vice versa. This means that even some slow motions are
unnecessarily delayed in the model as its rate of deformation is restricted. Since the
haptic proxy is a large sphere, the spatial resolution that can be perceived is also limited.

5.1 Future Work


The virtual white cane is still in its early development stage. Below are some pointers to
future work:

• Data acquisition. Some other sensor(s) should be used in order to gather real
three-dimensional measurements. 3D time-of-flight cameras look promising but are
currently too limited in field of view and signal to noise ratio for this application.
• Haptic feedback. The most prominent problem with the current system regarding
haptics is haptic fall-through. The current approach of interpolating changes avoids
most fall-through problems but severely degrades the user experience in several
ways. One solution is to use a two-dimensional tactile display instead of a haptic
interface such as the Falcon. Such displays have been explored in many forms over
the years [25, 26, 27]. One big advantage of such displays is that multiple fingers
can be used to feel the model at once. Also, fall-through would not be an issue. On
the flip side, the inability of such displays to display three-dimensional information
and their current state of development make haptic interfaces such as the Falcon
a better choice under present circumstances.

Figure B.6: The virtual white cane in use. This is a screenshot of the application depicting a
corner of an office, with a door being slightly open. The user’s ”cane tip”, represented by the
white sphere, is exploring this door.

• Data model and performance. At present the model is built as a single deformable
object. Performance is likely suffering because of this. Different strategies to rep-
resent the data should be investigated. This issue becomes critical once three-
dimensional information is available due in part to the greater amount of informa-
tion itself but also because of the filtering that needs to be performed.

• Ease of use. A user study focusing on model settings (scale and translation primar-
ily) may lead to some average settings that work best for most users, thus reducing
training times further for a large subset of users.

• Other interfaces. It might be beneficial to add additional interaction means (e.g.
auditory cues) to the system. These could be used to alert the user that they are
about to collide with an obstacle. Such a feature becomes more useful when a full
three-dimensional model of the surroundings is available. Additionally, auditory
feedback has been shown to have an effect on haptic perception [28].

Acknowledgment
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology–both in Sweden–and by the European
Union Objective 2 North Sweden structural fund.

References
[1] A. Lécuyer, P. Mobuchon, C. Mégard, J. Perret, C. Andriot, and J.-P. Colinot,
“Homere: a multimodal system for visually impaired people to explore virtual
environments,” in Proc. IEEE VR, 2003, pp. 251–258.

[2] M. Eriksson, M. Dixon, and J. Wikander, “A haptic VR milling surgery simulator–


using high-resolution CT-data,” Stud. Health, Technol., Inform., vol. 119, pp. 138–
143, 2006.

[3] J. P. Fritz, T. P. Way, and K. E. Barner, “Haptic representation of scientific data


for visually impaired or blind persons,” in Technology and Persons With Disabilities
Conf., 1996.

[4] K. Moustakas, G. Nikolakis, K. Kostopoulos, D. Tzovaras, and M. Strintzis, “Haptic


rendering of visual data for the visually impaired,” Multimedia, IEEE, vol. 14, no. 1,
pp. 62–72, Jan–Mar 2007.

[5] D. Ruspini and O. Khatib, “Dynamic models for haptic rendering systems,”
accessed 2014-02-24. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/
download?doi=10.1.1.127.5804&rep=rep1&type=pdf

[6] G. Jansson, “Basic issues concerning visually impaired people’s use of haptic dis-
plays,” in The 3rd International Conf. Disability, Virtual Reality and Assoc. Tech-
nol., Alghero, Sardinia, Italy, Sep. 2000, pp. 33–38.

[7] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.

[8] D. Yuan and R. Manduchi, “Dynamic environment exploration using a virtual white
cane,” in Proc. 2005 IEEE Computer Society Conf. Computer Vision and Pattern
Recognition (CVPR’05). Washington, DC, USA: IEEE Computer Society, 2005,
pp. 243–249.

[9] D. Correa, D. Sciotti, M. Prado, D. Sales, D. Wolf, and F. Osorio, “Mobile robots
navigation in indoor environments using kinect sensor,” in 2012 Second Brazilian
Conf. Critical Embedded Systems (CBSEC), May 2012, pp. 36–41.

[10] K. Khoshelham and S. O. Elberink, “Accuracy and resolution of kinect depth data
for indoor mapping applications,” Sensors, vol. 12, no. 2, pp. 1437–1454, 2012.

[11] N. Franklin, “Language as a means of constructing and conveying cognitive maps,”


in The construction of cognitive maps. Springer, 1996, pp. 275–295.

[12] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.

[13] T. Strothotte, S. Fritz, R. Michel, A. Raab, H. Petrie, V. Johnson, L. Reichert,


and A. Schalt, “Development of dialogue systems for a mobility aid for blind peo-
ple: initial design and usability testing,” in Proc. 2nd Annu ACM Conf. Assistive
Technologies. New York, NY, USA: ACM, 1996, pp. 139–144.

[14] A. Fattouh, M. Sahnoun, and G. Bourhis, “Force feedback joystick control of a


powered wheelchair: preliminary study,” in IEEE Int. Conf. Systems, Man and
Cybernetics, vol. 3, Oct. 2004, pp. 2640–2645.

[15] J. Staton and M. Huber, “An assistive navigation paradigm using force feedback,” in
IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Nov. 2009,
pp. 119–125.

[16] Novint Technologies Inc, “Novint Falcon,” http://www.novint.com/index.php/


novintfalcon, accessed 2014-02-24.

[17] SICK Inc., “LMS100 and LMS111,” http://www.sick.com/us/en-us/home/


products/product news/laser measurement systems/Pages/lms100.aspx, accessed
2014-02-24.

[18] MSI, “MSI global - notebook and tablet - GT663,” http://www.msi.com/product/


nb/GT663.html, accessed 2014-02-24.

[19] H. Fredriksson, “Laser on kinetic operator,” Ph.D. dissertation, Luleå University of


Technology, Luleå, Sweden, 2010.

[20] K. Hyyppä, “On a laser anglemeter for mobile robot navigation,” Ph.D. dissertation,
Luleå University of Technology, Luleå, Sweden, 1993.

[21] S. Rönnbäck, “On methods for assistive mobile robots,” Ph.D. dissertation, Luleå
University of Technology, Luleå, Sweden, 2006.

[22] SenseGraphics AB, “Open source haptics - H3D.org,” http://www.h3dapi.org/, ac-


cessed 2014-02-24.

[23] D. C. Ruspini, K. Kolarov, and O. Khatib, “The haptic display of complex graphical
environments,” in Proc. 24th Annu. Conf. Computer Graphics and Interactive Tech-
niques. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997,
pp. 345–352.

[24] D. Innala Ahlmark, M. Prellwitz, J. Röding, L. Nyberg, and K. Hyyppä, “An initial
field trial of a haptic navigation system for persons with a visual impairment,”
Journal of Assistive Technologies, vol. 9, no. 4, pp. 199–206, 2015.

[25] J. Rantala, K. Myllymaa, R. Raisamo, J. Lylykangas, V. Surakka, P. Shull, and


M. Cutkosky, “Presenting spatial tactile messages with a hand-held device,” in IEEE
World Haptics Conf. (WHC), Jun. 2011, pp. 101–106.

[26] R. Velazquez and S. Gutierrez, “New test structure for tactile display using laterally
driven tactors,” in Instrumentation and Measurement Technol. Conf. Proc., May
2008, pp. 1381–1386.

[27] A. Yamamoto, S. Nagasawa, H. Yamamoto, and T. Higuchi, “Electrostatic tactile


display with thin film slider and its application to tactile telepresentation systems,”
IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 2, pp. 168–177,
Mar–Apr 2006.

[28] F. Avanzini and P. Crosato, “Haptic-auditory rendering and perception of contact


stiffness,” in Haptic and Audio Interaction Design, LNCS, 2006, pp. 24–35.
Paper C
An Initial Field Trial of a Haptic
Navigation System for Persons
with a Visual Impairment

Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi Hyyppä

Reformatted version of paper originally published in:


Journal of Assistive Technologies, 9(4), 2015, pp. 199–206.


© 2015, Emerald Group Publishing Limited. Reprinted with permission.

An Initial Field Trial of a Haptic Navigation System
for Persons with a Visual Impairment

Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi
Hyyppä

Abstract

Purpose: The purpose of the presented field trial was to describe conceptions of feasi-
bility of a haptic navigation system for persons with a visual impairment.
Design/methodology/approach: Six persons with a visual impairment who were
white cane users were tasked with traversing a predetermined route in a corridor en-
vironment using the haptic navigation system. To see whether white cane experience
translated to using the system, the participants received no prior training. The proce-
dures were video-recorded, and the participants were interviewed about their conceptions
of using the system. The interviews were analyzed using content analysis, where induc-
tively generated codes that emerged from the data were clustered together and formulated
into categories.
Findings: The participants quickly figured out how to use the system, and soon adopted
their own usage technique. Despite this, locating objects was difficult. The interviews
highlighted the desire to be able to feel at a distance, with several scenarios presented
to illustrate current problems. The participants noted that their previous white cane
experience helped, but that it nevertheless would take a lot of practice to master using
this system. The potential for the device to increase security in unfamiliar environments
was mentioned. Practical problems with the prototype were also discussed, notably the
lack of auditory feedback.
Originality/value: One novel aspect of this field trial is the way it was carried out.
Prior training was intentionally not provided, which means that the findings reflect im-
mediate user experiences. The findings confirm the value of being able to perceive things
beyond the range of the white cane; at the same time, the participants expressed concerns
about that ability. Another key feature is that the prototype should be seen as a navi-
gation aid rather than an obstacle avoidance device, despite the interaction similarities
with the white cane. As such, the intent is not to replace the white cane as a primary
means of detecting obstacles.

1 Introduction
Vision provides the ability to identify danger and obstacles at a distance, and also aids in
the identification and location of objects in the environment. Vision also grants essential
information used for postural control, motion control and handling things in the environ-
ment [1]. According to the World Health Organization there are 285 million people with
a visual impairment (VI) in the world [2]. The International Classification of Diseases
(ICD) defines four vision categories: normal vision, moderate visual impairment, severe
visual impairment, and blindness. Throughout this article, the term ’visual impairment’
is used in accordance with the ICD, that is, it implies all categories except normal vision.
Studies [3, 4] have shown that mobility is compromised for persons with VI and that
this in turn affects many daily activities, such as shopping and going for walks. Problems
with obstacles like bicycle stands, awnings and bricks in the pavement can cause limited
outdoor activities. Not being able to go to a variety of places independently and the
fear of unfamiliar environments can also limit activities. There are also studies [5, 6]
that have shown that mobility problems affect the quality of life of persons with VI in a
negative way as a result of activity limitations.
For persons with VI, the primary aid is the white cane, which provides a direct
experience of obstacles at close proximity. This aid can provide the user with a lot of
valuable information about their environment. During the last couple of decades, persons
with VI have benefited from the development of technological devices. Many of these
have the potential to support a better quality of life for individuals with VI and enhance
their ability to participate fully in daily activities and to live independently [7].
Technological solutions ranging from accessible GPS devices such as the Trekker
Breeze [8] to extensions of the white cane that use ultrasound (e.g. UltraCane [9])
are available, but have not been widely adopted. Most of them involve a great deal of
effort and are not intuitive for persons with VI [10, 11]. Therefore there is a need to
focus on solutions that are usable and that enable the user to make appropriate and
timely decisions [12, 13, 14, 10]. The majority of current solutions use speech interfaces
to interact with users with VI, but informing the user of nearby obstacles with sufficient
detail is difficult and takes a lot of time [15] compared to the quick and intuitive reaction
attained when hitting an obstacle with a white cane.
Due to the problems with speech for spatial information, we chose a haptic interface
to communicate nearby obstacles. The present prototype consists of a laser rangefinder,
a haptic interface, and a laptop (see figure 1). The laser rangefinder obtains distances to
nearby objects. This information is then made into a three-dimensional model, which in
turn is transmitted to a Novint Falcon [16] haptic interface for presentation. This way
a user can feel obstacles several meters in front of them, much in the same way they
could with a white cane. To do this, the user moves the grip of the haptic interface,
and because the interface uses force feedback to counteract grip movements, contours
of obstacles and walls can be traced. The laptop that runs the software also displays a
graphical representation of the model and shows the current probing position (the grip
of the haptic interface) as a white sphere. More information about the system itself can
be found in an earlier article [17]. A hand-held version is currently being developed.
Early field trials in the development of this navigation aid are done in order to explore
its potential. The goal is to make the system intuitive for persons who are users of the
white cane today. To reach this goal, input for further development from potential users is
essential. Thus, the aim of this study is to describe conceptions of the system’s feasibility
from an end-user perspective.

1.1 Delimitations
The point of this field trial was to get early feedback from potential end-users. Since the
prototype might change considerably, we chose to focus on the qualitative aspects rather
than performance metrics at this stage. A further aim was to assess how white cane
experience translated to using our prototype, as the interaction possesses similarities to
that of the cane. Because of this, the participants did not have the opportunity of an
extended familiarization phase, and as such we cannot at this stage draw conclusions on
the effects of training.
The current prototype has several known limitations. As the laser rangefinder was
mounted horizontally, it is not possible to detect drops or small obstacles on the ground.
Additionally, no audible feedback from touching an obstacle is generated. These factors
pose a major problem if one intends to replace obstacle-avoidance devices such as the
white cane, but we see a continuation of this device as a navigation aid complementing
the cane.

2 Methods
This initial field trial was carried out with six persons with VI. Participants made a one-
shot trial during a standardized procedure in two parts: one initial, acquaintance part
and one problem solving part. Both of these procedures were video-recorded, and the
participants were interviewed about their conceptions of using the prototype. Finally, all
gathered data were analyzed qualitatively.

2.1 Participants
The 6 participants in the study all had at least five years of experience using a white cane,
were able to move around without assistance and could communicate their experiences
verbally. The persons were recruited with help from the regional ombudsman for persons
with visual impairments in northern Sweden. Ethical approval for this study was given
by the Regional Ethical Review Board, Umeå, Sweden (Dnr 2010-139-31).

2.2 Test Set-up


The system components were mounted on a table on wheels as depicted in figure 1. A
crutch handle was attached to the left side of the table (from the perspective of the user)
so that it was possible to steer it with the left hand and arm. The haptic interface was
fastened to the surface of the table and was operated by the right hand, with the arm
resting on a foam pad glued to the edge of the table. The laser rangefinder was attached
to the front of the table so that it scanned at a height of about 80 cm. Finally, the
laptop was placed on top of the table which made it easy to observe—both during the
trial and on the recorded videos—the model of the surroundings and what the users were
touching. The current position of the grip of the haptic interface was represented by a
white sphere clearly visible on the screen.

Figure C.1: The prototype navigation aid mounted on a movable table. The Novint Falcon
haptic interface is used with the right hand to feel where walls and obstacles are located. The
white sphere visible on the computer screen is a representation of the position of the grip of
the haptic interface. The grip can be moved freely as long as the white sphere does not touch
any obstacle, at which point forces are generated to counteract further movement ”into“ the
obstacle.

2.3 Field trial


Before starting the trial, each participant received information about the system and
instructions regarding how to use it from one of the researchers. The trials were recorded
on video shot obliquely from behind, so that the participants’ way of using the prototype
was visible on the videos. The first acquaintance part of the field trial was performed
in a corridor environment (visible in figure 1) with obstacles stacked against the walls.
The task was to walk a 37 m long and 2.4 m wide corridor, passing through two 1.8 m
wide doorways, to turn around at an open space at the end of the corridor and then walk
back again. Along the corridor, a few objects (chairs, sofas, and a waste bin) were placed
along the walls. After accomplishing this, the participants began the second, problem
solving part in which they walked through a 1.8 m wide doorway, into a 3.2 m wide
corridor, turned right after 1.5 m and passed through a narrow (0.9 m) doorway, thereby
entering a classroom (5 m by 5.5 m) cleared of furniture except for a small table half-way
along the right wall upon which a soda can was placed. The task was to find the table
and the soda can, pick up the can, and then turn around and walk back to the starting
point. This was done with few instructions or minor assistance from the researchers.
This problem solving part was accomplished on average in 10 minutes (range 6 to 14
minutes).

2.4 Interviews
The interviews with each participant took place directly after the trial. A semi-structured
interview guide was used with nine questions regarding the participants’ conceptions of
the solution’s feasibility. The focus of the interviews was on the participants’ concep-
tions of using the device in relation to the use of the white cane, and on what they
thought needed to be done to improve the usability of the system. Each interview took
approximately 45 minutes and was recorded and transcribed verbatim.

2.5 Data analysis


Video recordings of each participant’s trial were examined with regard to how the participants
acquainted themselves with the device, how they used it to navigate, and how they
succeeded in clearing obstacles and doorways and finding the soda can. The participants’
performance while using the prototype was displayed on the computer screen which
was constantly visible on the recorded video. Similarities and differences in observed
performance were identified and described qualitatively.
To analyze the interviews, content analysis inspired by Graneheim and Lundman [18]
was used. The text was divided into meaning units, which were then condensed. The
condensed meaning units were assigned inductively generated codes that emerged from
the data. These codes were then clustered together and sorted into different categories.
After that, three different main categories were formulated.

3 Results
During the acquaintance part of the trial, all participants had an initial phase in which
they obviously acquainted themselves with the equipment and how to use it in order to
feel the area in front of them. In this phase, lasting from one to seven minutes, they
all needed verbal cues or physical help in order not to collide with the walls or other
obstacles. In this phase they also developed their own pattern of probing the area.
Two participants used a passive pattern, making few and scarce probing attempts
with the device. They had difficulties navigating in the corridor and needed frequent
verbal cues and physical assistance. One of these participants chose not to perform the
problem solving part, and the other was not able to get any effective help from the
system.
Three of the participants had an active pattern in which they obviously navigated
by actively using the aid after the initial phase. They employed a horizontal U-shaped
pattern, one with a rather low, and the other two with a rather high frequency. Two
used one wall as a reference surface, feeling sideways towards the other wall in regular
intervals and more often when approaching a door, while the third constantly moved
the grip, alternately feeling the walls on each side. During the problem solving part,
these participants navigated well between the walls and managed door openings with the
exception that one participant lost spatial orientation when negotiating one of the
doorways.
One participant showed a very active and efficient pattern, moving the grip frequently
from side to side, but also forwards and backwards, in a flexible way using different
frequencies, directions and amplitudes depending on the situation. This participant was
able to identify small obstacles beside the actual course. During the problem solving
phase, this participant cleared the walls and most doorways without any problems and
needed verbal guidance only in order to find the way towards the narrow doorway after the
90 degree turn. Still, this participant had the same problems as the others with obstacles
in the very near vicinity at the sides, and needed verbal assistance when coming close to
the table and reaching for the can.

3.1 Findings from the interviews


The content analysis resulted in three categories: to be able to feel at a distance; not
without a lot of practice; the need to feel secure in unfamiliar environments. These
categories are presented in the text that follows and illustrated with quotations from the
interviewed participants.

To be able to feel at a distance


In this category the participants described their conception of how it “felt” to use the
system. The walls and corners were easy to detect; the ability to feel “in time” what
was coming up, like a door or a corner, gave the participants a chance to get a broader
perspective of the environment around them. This was according to the participants
better than having to actually hit something with the regular cane to know it existed.
“To feel an obstacle well ahead of time, so that you know something is coming is an
advantage.”
With the prototype, range perception was difficult. The participants commented that
the range was too large and that it was difficult for them to judge distances. To be able
to feel at a greater distance compared to the white cane was met with mixed feelings.
One of the risks with the device as a “longer cane” was that it could be easier to lose
one’s orientation. Another was that it required a lot of concentration, which in turn might
mean using up too many mental resources. One participant described the problem this way:
“If it is 20 meters, something has to tell you that, because it is difficult to know how far
away something at 20 meters is”.
In order to make the device more usable, the participants discussed what distance it
should reach; to feel 20 meters ahead was considered too far. According to the partic-
ipants, 4–5 meters would be a better choice. Being able to vary the distance and to
receive some sort of auditory feedback were suggested as ways to make it more usable.

Not without a lot of practice


The participants’ conception of the prototype was that it would take a lot of practice
to learn. The need to become more familiar with it was important according to the
participants. This would increase the feeling of security: “[...] it is not until you get used
to it [the device] one might start trusting it more”. The participants also discussed that
with practice one would not need to concentrate as much as when trying it out for the
first time.
The fear of not being able to walk in a straight line and losing focus when not
navigating against a wall was discussed as well as how the system would work outdoors.
How it would work in an unfamiliar environment was the challenge and something some
of the participants wanted to try while others felt that they needed only their white cane.
Using the device had some aspects in common with the white cane. For instance, the
participants remarked that they used the same technique as with the cane and that it
felt better than they had expected. They also described the test as an interesting and
fun experience. Nevertheless, the white cane was easier to move to the sides and the
feedback from the prototype was harder to interpret.

The need to feel secure in unfamiliar environments


In this category the participants described some positive and negative aspects of the
prototype. It could lead to increased security if practical problems are solved. In order
to feel secure with it, one would need to trust its technical features. When able to perceive
more distant objects, security could increase by being able to orient oneself when lost.
Being aware of obstacles earlier in time could increase the feeling of security. The fact
that you could not accidentally hit people’s legs with the system was another positive
aspect.
Another aspect of security the participants described was the ability to locate things
in a room upon entering it. This was something that the participants thought could be
very useful in new surroundings. In unfamiliar environments, the need to train in each
specific location is a must regardless of aid.
“For example to find a place when you enter an unfamiliar environment: when you visit
someone or in a waiting room and places like that, to find a chair to sit on.”
To be able to read unfamiliar surroundings better could result in greater indepen-
dence, which in turn could result in trying to venture out more and expanding one’s regularly
visited territory. One participant described it this way: “One can learn more about one’s
surroundings. One can be more impulsive. Now I can go there by myself. You will be
able to go to the pharmacy in your area. If you have to have assistance you will have to
apply for it ahead of time and agree on what time, and then you have to arrange your
life accordingly. But if I wish to do it right now, that can never be arranged.”
The lack of auditory feedback when hitting something was yet another problem the
participants conceived. Not being able to feel the surface texture, and losing all the infor-
mation that auditory feedback gives with the white cane, made the prototype less usable.
“With the regular cane I can feel a pot hole and I can feel where the stairs start.”

4 Discussion
This initial field trial showed that most of the participants, despite being introduced to
the prototype for the first time, quickly understood how to use it. The participants’
conceptions were in general positive; they appreciated the ability to feel at a distance,
while perceiving the actual range was difficult. The absence of any auditory cues was
also noted.
The literature lacks reports on trials of similar systems. Sharma et al. [19] de-
scribed a trial for an obstacle avoidance system where blindfolded people used a powered
wheelchair to navigate an obstacle course. They demonstrated, as do our results, that
systems that can provide users with essential navigation information covering distances
beyond the reach of a cane might be valuable to support safe mobility.
A remarkable fact is that all participants quickly adopted their own usage technique.
This implies an intuitive learning process which could be attributed to the concept of
the system, but also to the fact that the participants were experienced cane users. The
U-shaped pattern that emerged in the participants’ use of the system could also be seen
as a limited use of it, not utilizing the full potential of scanning the total area in front of
them. It must be emphasized that the participants used the prototype for the first time,
and it is possible that a prolonged use would have made them aware of this opportunity.
While the participants quickly became familiar with how to use the system, they all
had difficulties with range perception. This meant that when performing high-precision
maneuvers such as passing through a narrow doorway, positioning themselves at a proper
angle was troublesome. Again, the fact that none of the users had prior training with
the system is important in this respect; it might be that they simply had not had enough
experience to precisely judge the scaling between the small movements of the haptic
grip and distances in the physical world. Another important factor to consider is the
position of the laser rangefinder and haptic interface relative to the user. In our case, the
laser rangefinder was positioned about half a meter directly in front of the user, while the
haptic interface was closer, but more to the right of the user. This means that in addition
to having to learn the scaling between the physical world and the haptic representation,
an additional sideways translation is required in order to properly match the physical
world with the virtual model.
Based on the participants’ descriptions of using the device and its feasibility, it seems
that it can provide a combination of a direct experience of an environment and a sort
of tactile map, owing to the possibility of feeling at a distance. Studies by Espinosa et al. [20]
have shown that being able to combine these two approaches constitutes a useful way to
orientate in unfamiliar environments. A longing to explore unfamiliar environments was
expressed by the participants in this study, and was something they saw the system could
aid in. Assistive technologies have the potential to enhance quality of life via improved
autonomy, safety and by decreasing social isolation [10].
This study must be seen as a first field trial and has, as such, a certain number of
limitations. A very early prototype was tried, which has effects on the usability for the
participants. Nevertheless, we believed that such an early trial would bring us important
knowledge for further development. The reason for not offering the participants the
opportunity of a longer familiarization with the system was that we wanted to get an
impression of how intuitive the system was to learn to use. The fact that this was a
very early stage trial also motivated us to choose a qualitative and open approach in
describing both user experience and actual performance when using the prototype.
Regarding the trustworthiness of the findings from the interviews, one limitation is the
sample size. A larger number of participants might have widened the range of experiences;
however, all six of the participants did describe similar conceptions of the system. To
strengthen the trustworthiness, the analysis of the transcribed data was discussed among
the authors and representative quotations were chosen to increase the credibility of the
results [21].
We also would like to emphasize that the participants represented potential users, and
were not people with normal vision being blindfolded. This is important as we wanted
to get the experiences from people who do not rely on visual information for navigation
and who were used to another haptic instrument: the white cane. In this respect, we
are aware of the findings of Patla [22], who demonstrated that among individuals with
normal vision that was partially or completely restricted, information provided by haptic
systems has to match the quantity and immediacy provided by the visual system in order
to support a well-controlled motor performance. How haptic information affects motor
control in persons not used to relying on visual information needs to be studied specifically.
In conclusion, this early field trial gave an indication of the expected usability of the device from
an end-user perspective. We would like to emphasize the participants’ appreciation of
the ability to feel the environment at ranges beyond white cane range and the swift
acquaintance phase, which may be due to the cane-like interaction. The trial also gave
important perspectives from the users on issues for further development of the system.

Acknowledgement
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology—both in Sweden—and by the European
Union Objective 2 North Sweden structural fund.

References
[1] A. Shumway-Cook and M. H. Woollacott, Motor control: translating research into
clinical practice. Wolters Kluwer Health, 2007.

[2] World Health Organization, “Fact sheet no. 282,” http://www.who.int/mediacentre/


factsheets/fs282/en/, 2014, accessed 2016-03-21.

[3] D. M. Brouwer, G. Sadlo, K. Winding, and M. I. G. Hanneman, “Limitation in


mobility: Experiences of visually impaired older people,” British Journal of Occu-
pational Therapy, vol. 71, no. 10, pp. 414–421, 2008.

[4] E. L. Lamoureux, J. B. Hassell, and J. E. Keeffe, “The impact of diabetic retinopathy


on participation in daily living,” Archives of Ophthalmology, vol. 122, no. 1, p. 84,
2004.

[5] J. Desrosiers, M.-C. Wanet-Defalque, K. Témisjian, J. Gresset, M.-F. Dubois, J. Re-


naud, C. Vincent, J. Rousseau, M. Carignan, and O. Overbury, “Participation in
daily activities and social roles of older adults with visual impairment,” Disability
& Rehabilitation, vol. 31, no. 15, pp. 1227–1234, 2009.

[6] S. R. Nyman, B. Dibb, C. R. Victor, and M. A. Gosney, “Emotional well-being


and adjustment to vision loss in later life: a meta-synthesis of qualitative studies,”
Disability and Rehabilitation, vol. 34, no. 12, pp. 971–981, 2012.

[7] E. J. Steel and L. P. de Witte, “Advances in european assistive technology service


delivery and recommendations for further improvement,” Technology and Disability,
vol. 23, no. 3, pp. 131–138, 2011.

[8] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/


blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld
talking gps.html, accessed 2016-03-21.

[9] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,”
http://www.ultracane.com/, accessed 2016-03-21.

[10] L. Hakobyan, J. Lumsden, D. O’Sullivan, and H. Bartlett, “Mobile assistive tech-


nologies for the visually impaired,” Survey of Ophthalmology, vol. 58, no. 6, pp.
513–528, 2013.

[11] N. A. Bradley and M. D. Dunlop, “An experimental investigation into wayfinding


directions for visually impaired people,” Personal Ubiquitous Computing, vol. 9,
no. 6, pp. 395–403, Nov. 2005.

[12] B. Ando, “A smart multisensor approach to assist blind people in specific urban nav-
igation tasks,” Neural Systems and Rehabilitation Engineering, IEEE Transactions
on, vol. 16, no. 6, pp. 592–594, Dec 2008.

[13] B. Ando and S. Graziani, “Multisensor strategies to assist blind people: A clear-path
indicator,” IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 8,
pp. 2488–2494, Aug 2009.

[14] L. A. Guerrero, F. Vasquez, and S. F. Ochoa, “An indoor navigation system for the
visually impaired,” Sensors, vol. 12, no. 6, pp. 8236–8258, 2012.

[15] I. Pitt and A. Edwards, “Improving the usability of speech-based interfaces for blind
users,” in Int. ACM Conf. Assistive Technologies. New York, NY, USA: ACM, 1996,
pp. 124–130.

[16] Novint Technologies Inc, “Novint Falcon,” http://www.novint.com/index.php/


novintfalcon, accessed 2014-02-24.

[17] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.

[18] U. H. Graneheim and B. Lundman, “Qualitative content analysis in nursing research:


concepts, procedures and measures to achieve trustworthiness,” Nurse Education
Today, vol. 24, no. 2, pp. 105–112, 2004.

[19] V. Sharma, R. C. Simpson, E. F. LoPresti, and M. Schmeler, “Clinical evaluation


of semiautonomous smart wheelchair architecture (drive-safe system) with visually
impaired individuals,” Journal of Rehabilitation Research and Development, vol. 49,
no. 1, p. 35, 2012.

[20] M. A. Espinosa and E. Ochaita, “Using tactile maps to improve the practical spatial
knowledge of adults who are blind.” Journal of Visual Impairment & Blindness,
vol. 92, no. 5, pp. 338–45, 1998.

[21] Y. S. Lincoln, Naturalistic inquiry. Sage, 1985, vol. 75.

[22] A. E. Patla, T. C. Davies, and E. Niechwiej, “Obstacle avoidance during locomotion


using haptic information in normally sighted humans,” Experimental Brain Research,
vol. 155, no. 2, pp. 173–185, 2004.
Paper D
A Haptic Navigation Aid for the
Visually Impaired – Part 1: Indoor
Evaluation of the LaserNavigator

Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan van
Deventer and Kalevi Hyyppä

To be submitted.

A Haptic Navigation Aid for the Visually Impaired –
Part 1: Indoor Evaluation of the LaserNavigator

Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan
van Deventer, Kalevi Hyyppä

Abstract

Navigation ability in individuals with a visual impairment is diminished as it is largely
mediated by vision. Navigation aids based on technology have been developed for decades,
although to this day most of them have not reached a wide impact and use among the
visually impaired. This paper presents a first evaluation of the LaserNavigator, a newly
developed prototype built to work like a “virtual white cane” with an easily adjustable
length. This length is automatically set based on the distance from the user’s body to
the handheld LaserNavigator. The study participants went through three attempts at a
predetermined task carried out in an indoor makeshift room. The task was to locate a
randomly positioned door opening. During the task, the participants’ movements were
recorded both on video and by a motion capture system. After the trial, the partici-
pants were interviewed about their conceptions of usability of the device. Results from
observations and interviews show potential for this kind of device, but also highlight
many practical issues with the present prototype. The device helped in locating the door
opening, but it was too heavy and the idea of automatic length adjustment was difficult
to get used to with the short practice time provided. The participants also identified
scenarios where such a device would be useful.

1 Introduction
Navigation is an ability that is largely mediated by vision. Visual impairments thus
limit this ability [1], which can lead to a decreased quality of life [2]. The white cane
is an excellent solution at close proximity and near the ground, but there is a lack of
accurate and user-friendly options for ranges greater than the cane’s length. The few
commercial products that provide this have not reached a wide impact and use among
visually impaired individuals [3], thus making further innovation and research all the more
important for the development of such devices. Factors such as security and usability,
in addition to technical issues about how distance information should be presented non-
visually need to be evaluated in order to create a better navigation aid.
Navigation aids, often referred to as electronic travel aids (ETAs), are available rang-
ing from small handheld devices and smartphone apps to extended white canes. An
example of a handheld device is the Miniguide [4], which uses ultrasound to measure
the distance to objects the device is pointed at. This distance is then reported by bursts
of vibrations where the burst frequency is related to the measured distance. Another
solution possessing similar functionality is the UltraCane [5], albeit as a custom-built
white cane. Being based on ultrasound, these devices have the disadvantage of a limited
range (a few metres) and a significant beam spread (15◦ or greater).

Figure D.1: A photo of the LaserNavigator, showing the laser rangefinder (1), ultrasound sensor
(2) and the loudspeaker (3).

Figure D.2: The two reflectors (spherical and cube corner) used alternately to improve the
body–device measurements.
Handheld GPS units such as the Trekker family of products [6] as well as accessible
smartphone apps [7, 8, 9] with similar features are available. They depend ultimately
on the accuracy of the GPS system, and the limitations of stored maps. Smartphone
apps such as BlindSquare [9] try to overcome the latter limitation by connecting to open
online services, which means they can respond to changes in the environment, provided
someone has altered the information to reflect these changes.
This paper presents a first evaluation of a newly developed handheld navigation aid
dubbed the LaserNavigator (depicted in figure D.1). Its name refers to the fact that
the device uses a laser rangefinder to measure the distance to objects from the user’s
hand. Because it is an optical system, rather than e.g. an ultrasonic system as often
utilised [5, 4], it can measure very large distances (up to 50 m) with high accuracy (error
less than 10 cm + 1% of range) and with a beam spread of 0.2◦ . The beam spread is of
particular importance in this case as one intended application of the device is to determine
the direction and approximate distance to a distant landmark such as a lamppost. The
user gets haptic feedback through one finger placed on top of a small loudspeaker1 . The
vibrations are not directly related to the distance. Instead, the user is able to vary the
maximum distance of interest. If an object is beyond this selected distance (virtual “cane
length”), no vibrations will be emitted even though an object might be measured. When
the object is at or closer than the selected distance, the speaker membrane will emit
short vibration pulses. These are not dependent on the measured distance, but merely
signal the presence of an object.
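
To illustrate why the narrow beam matters for pointing at a distant landmark, the width of the
measurement spot can be approximated as 2d·tan(spread/2). The short Python sketch below
applies this to the figures quoted above; the chosen distances are only illustrative.

import math

def spot_width(distance_m, beam_spread_deg):
    # Approximate width of the measurement spot at a given distance.
    return 2.0 * distance_m * math.tan(math.radians(beam_spread_deg) / 2.0)

print(round(spot_width(20.0, 0.2), 2))   # laser, 0.2 degrees at 20 m: about 0.07 m
print(round(spot_width(5.0, 15.0), 2))   # ultrasound, 15 degrees at 5 m: about 1.32 m

A spot a few centimetres wide at lamppost range is what makes it plausible to single out such a
landmark, whereas the ultrasonic spot already covers more than a metre at a few metres' distance.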
To vary the “cane length”, the user moves their arm holding the LaserNavigator
closer to or further away from their body. The device possesses an ultrasound sensor
determining the body–device distance. This value is then multiplied by a constant factor
and is then set to be the “cane length”. This way, the user can seamlessly vary the
desired reach without any interruption or additional input methods. A way to visualise
how the system works is to think of a telescopic white cane that automatically expands
or contracts depending on how far away from the body the user is holding it. In the
presented indoor trials, the scale factor was set to 10, so that a body–device distance
of 50 cm would equate to having a 5 m long “virtual cane”. More information on the
development of the LaserNavigator will be published in an upcoming paper [10].
During the trials, two kinds of vibration feedback were experimented with. Contrary
to a real cane, there is nothing stopping the user from pushing far “into” an object. This
means that when the device is vibrating, the user has to pull their arm back until they
locate the threshold where the device stops vibrating. At that point, the user can infer
the approximate distance to the object by knowing the position of their hand relative to
their body. This feedback type, where vibrations only signal the presence of an object,
is denoted single frequency feedback in the remainder of this text. An additional mode,
dual frequency feedback, was added, where the device vibrates at a higher frequency when
the “cane tip” is at the boundary of an object. In this mode, a lower frequency tells the
user that they need to pull back.

1 A speaker was chosen instead of more conventional vibration actuators because of its quick response
time.
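
As a summary of the interplay between the two distance measurements and the two feedback
modes, the following Python sketch mimics the decision logic described above; the function name,
the boundary tolerance and the returned labels are illustrative assumptions rather than the actual
firmware interface.

def feedback(laser_distance, body_device_distance,
             scale=10.0, boundary_tolerance=0.2, dual_frequency=True):
    # All distances in metres. The virtual "cane length" is the body-device
    # distance multiplied by the scale factor (10 in the indoor trials).
    cane_length = scale * body_device_distance
    if laser_distance > cane_length:
        return "silent"                  # nothing within the virtual cane
    if not dual_frequency:
        return "pulses"                  # single frequency: an object is present
    if cane_length - laser_distance <= boundary_tolerance:
        return "high-frequency pulses"   # "cane tip" is right at the object boundary
    return "low-frequency pulses"        # pushed "into" the object: pull the arm back

# Example: an object 4.4 m away with the hand held 0.45 m from the body
# (cane length 4.5 m) yields high-frequency pulses in dual frequency mode.
print(feedback(4.4, 0.45))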
To improve the accuracy of the body–device distance measurement, the participants
alternately wore one of two specially manufactured reflectors shown in figure D.2. These
serve as stable reflective surfaces for the ultrasound, which would otherwise reflect off the
user’s clothes. The cube corner reflector gives off very strong reflections, and the idea
was to decrease the power of the emitted ultrasound so that only those reflections would
be detected. The cube corner reflector was tested by one (the last) participant.
The paper is organised as follows. Section 2 describes the participants, the test
environment and study protocol. Section 3 shows the results from observations and
participant interviews. These are then summarised and discussed in section 4.

1.1 Purpose
The purpose of this study was to get early feedback from potential users. We wanted
to understand users’ conceptions of usability of the LaserNavigator in an indoor envi-
ronment, later intending to perform another trial outdoors. We were also interested in
identifying movement patterns and strategies employed when using the device, as these
can suggest important changes to the design of the system.

2 Methods
This section characterises the participants, test setup and assessments.

2.1 Participants
The study participants were recruited via the local district of the Swedish National
Association for the Visually Impaired (Synskadades riksförbund, SRF). An information
letter was sent out to the district members, and three participants expressed their interest in participating in the study. The selection criterion was that participants be visually
impaired and able to move about independently, with or without a white cane.
The participants, 2 females and 1 male, were 60, 72 and 78 years old, respectively. All
were blind (the term is used in accordance with the International Classification of Diseases; see [11]), thus no one was likely able to use visual cues to aid in the task. Participants
B and C were adventitiously blind, having been blind for 2 and 7 years, respectively.
Also noteworthy is that while all three used white canes, participant C used it only as a
walking cane. Additionally, participant B had a guide dog and used a GPS device daily.
Participants A and B were comfortable with walking around in familiar territory on their
own, while participant C said he never leaves the home by himself.
Due to the low number of participants, we opted for an evolving strategy rather
than looking to obtain statistically comparable results. This meant that we altered both
the LaserNavigator and the test environment after each test, based on feedback from
the previous participant. To highlight this process, the three participants are discussed
separately in the results section below.

Figure D.3: A picture of the makeshift room as viewed from outside the entrance door.

2.2 Test Environment


The tests were performed at the Field Robotics Lab (FROST Lab) at Luleå University of
Technology. The lab is a large room with a set of Vicon Bonita motion capture cameras
mounted near the ceiling [12]. Inside the lab, a makeshift room measuring 5.6 by 8.8 m was
constructed of plywood with a wall height of 1.2 m. The shorter sides of the room had
one door while the longer had two (see figure D.3). The low walls were detectable with
the LaserNavigator, while still allowing the ceiling-mounted motion capture cameras [12]
to track markers moving about inside the room. Reflector markers were placed on the
participant (sternum and head), the white cane and the LaserNavigator. The photo
shown in figure D.4 was taken during our initial tests of the setup.

2.3 Task
Upon arrival, the participants had about an hour to familiarise themselves and train
with the LaserNavigator. During this time, they also had the opportunity to practice
the actual trial task multiple times. Following this, the trial proceeded as follows:
Figure D.4: One of the researchers (Daniel) trying out the trial task. The entrance door is visible in the figure.

1. The participants were positioned at a predetermined spot outside the makeshift room, close to the entrance door, designated as the starting position.
2. They then located the entrance door (same one each time), and entered the room.
3. Next, the task was to locate another, randomly opened door, and move to that
opening.
4. Finally, the task was to find their way back to the entrance door again, and exit
back through to the starting position.
These steps were repeated three times by each participant, with each trial being
recorded both as video and as motion capture data from the ceiling-mounted cameras.
This design allowed for both predictable and unpredictable elements within the task.
The first part, i.e. finding the entrance door from the same starting position each trial, can be considered a rather predictable element of the task after a few practice trials. The next part, however, i.e. finding a randomly opened door somewhere in the room, can be considered more unpredictable. The last part of the task, i.e. finding the way back to the entrance door, can be considered either predictable or unpredictable depending on the participant's sense of location (the sense of knowing how they have moved around inside the room and where they are in relation to the entrance door).

2.4 Observations
For looking at movement patterns and strategies, all trials were filmed by one of the
researchers, as well as being recorded by the motion capture cameras at 100 frames per
second. The participants were also encouraged to explain their thoughts and actions
during the trials.
To analyse these data, the videos were watched independently by the researchers, and
key points regarding movement patterns and strategies were noted and then summarised.
The motion capture data are used to illustrate the results (figures D.5 and D.6).
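As an aside, the position graphs in figures D.5 and D.6 were produced from the sternum-marker positions recorded by the motion capture system. A minimal sketch of how such a track can be plotted is shown below; the file name, export format and column layout are assumptions for illustration only, not the actual Vicon export used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed export: one row per frame (100 Hz), with the sternum marker's x and y
# positions in metres as the first two columns.
data = np.loadtxt("participant_A_attempt_1.csv", delimiter=",", skiprows=1)
x, y = data[:, 0], data[:, 1]

plt.plot(x, y, linewidth=1)
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.axis("equal")
plt.title("Sternum marker track")
plt.show()
```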

2.5 Interviews
After the trial, each participant was interviewed based on a semi-structured ten-question
interview guide focusing on their conceptions of using the prototype. The interviews
were then transcribed verbatim, and analysed using content analysis as described by
Graneheim and Lundman [13].

3 Results
This section describes results from the observations and interviews. The observation
results are based on video and motion capture material, and are discussed separately for
each participant to highlight the changes made to the LaserNavigator and set-up. The
findings from the interviews are summarised in three categories: benefits, challenges and
future possibilities.

3.1 Observations
The following text outlines general movement patterns, strategies and other movement
behaviours of interest obtained by looking at the recorded videos. Figure D.5 shows
position graphs for all nine trial runs. The results are discussed separately for each
participant as the system and set-up were slightly altered from trial to trial.

Participant A
The first participant used the spherical reflector (figure D.2) on the body, and the Laser-
Navigator was set to single frequency feedback.
Generally, this participant was very conscious of the way she moved, and seemed
to have a good sense of the location of known things (notably the entrance door) at
all times. She also used a specific strategy in all three attempts, walking about the
room in a clockwise fashion. In general she moved quite fast compared to the other two
participants.
She easily found the entrance door from the starting position, but had a harder time
finding the other open door in the room. She seemed to think the room was circular, which
may be due to corners being similar to open doors when probed with the LaserNavigator
unless one carefully traces the walls. While performing the tasks she effectively used
both her white cane and the LaserNavigator. She alternated between sweeping and the
in-and-out movements with the device.

Figure D.5: Movement tracks for each participant and attempt, obtained by the reflector markers
on the sternum. The entrance door is marked by the point labelled start, and the target door
is the other point, door. Note that the start point appears inside the room because the motion
capture cameras were unable to see part of the walk. Additionally, attempt 3 by participant B
does not show the walk back to the entrance door due to a data corruption issue.

Figure D.6: This figure shows the three attempts of participant B, with the additional red line
indicating the position of the LaserNavigator. Note that attempt 3 is incomplete due to data
corruption.

On the third attempt she managed to detect the correct door without any additional
circuits about the room. She then found her way back to the entrance without difficulties.
This was a noticeable improvement from the preceding attempts. One notable issue was
that she sometimes held the device either too far out to the side or at a steep angle,
which meant that the ultrasound sensor would not measure the proper distance.

Participant B
The second participant also used the spherical reflector (figure D.2), but we altered the
LaserNavigator to use dual frequency feedback. The idea behind this was to make it
easier to know the actual distance to the walls, which should help in differentiating
corners from doors.
This participant found the experience tiring, both physically and mentally. Holding the navigator pointing horizontally was demanding, and she therefore often detected the floor with the device. She went through the room at a slow pace, without any explicit strategy, as opposed to participant A who explicitly used a clockwise movement strategy. In general she used too large motions with the navigator to detect the doorways, and also often used a very long “virtual cane”. These two factors meant she rarely found the actual distance to the walls, and as such the additional frequency feedback employed was of little if any help. She almost never used her white cane
during the tasks. Figure D.6 shows the three attempts by participant B with an additional
curve representing the position of the LaserNavigator. The figure shows the transition
from vigorous sweeping in the first attempt to more subdued movements in the third.

Participant C
The final participant used the cube corner reflector (figure D.2). Additionally, the Laser-
Navigator was switched back to single frequency feedback. Of note is that this participant
did not use a white cane in the conventional way. Instead, he used it as a walking cane, al-
though during the tests he only used the LaserNavigator, relying on one of the researchers
to assure safety while moving around and exploring the room.


This participant used the LaserNavigator in a very systematic and cautious way,
preferring to remain stationary and scan his surroundings before moving, something he seemed reluctant to do. Because of his careful use of the device, he often found both
doors and corners, though he had trouble discriminating between the two. He also
had difficulties holding the navigator horizontally, sometimes detecting the floor and
sometimes pointing over the makeshift walls. He seemed not to use any particular strategy
when moving about, and did not keep track of where he entered.

Notes on the Unpredictable vs Predictable Parts of the Task


All three participants relatively quickly learned to find and navigate through the entrance
door from the starting point, which was the same spot for each trial. This indicates that it
is easier to learn to navigate with the LaserNavigator in a more predictable environment
and situation. Finding the way back again to the entrance door to exit the room as the
final part of each trial would also seem to be a more predictable task compared to finding
the randomly opened door inside the room. This was true for participant A who had a
well-developed sense of location, but not as much for participants B and C who found it
more difficult to perceive their location inside the room in relation to where they entered
it.

3.2 Interviews
The analysis of the interviews resulted in three categories: benefits, challenges and future
possibilities. Benefits are advantages that the current prototype provided during the
trial, and good points about the prototype itself. Challenges refer to statements ranging
from practical problems with the current prototype to more general usability concerns.
The final category, future possibilities, encapsulates ideas and scenarios where a further-
developed LaserNavigator would be helpful. The statements below are illustrated with
quotations from the participants.

Benefits
The participants noted that the device helped them find the doorways; “else I would
have done it like I always do: I walk until I reach a wall and then I follow that wall.”
One participant noted that with a little practice the device became easier to use with
each attempt, although one would have to move a little slower than usual.
All participants noted that the vibrations were clear and easy to discern. Noteworthy
is the fact that participant C stated that he used the emitted audio rather than paying
attention to the vibrations, whereas participant A said she “did not have time” to pay
attention to the sounds. Participant C expressed the general feeling that “it was a good
prototype”. Also noted was the benefit of not having to use the device continuously; the
device was seen more as something you pull out from time to time to check your bearings,
a complement to the white cane.

Challenges
The general opinion was that the task was difficult, and two participants noted specifically
the difficulties and frustrations associated with finding the corners of the room. One said
that using the device gradually became easier, while another said it was more tiring than
it helped. One expressed the opinion that “even the slightest technological aids are in
the way. What you need is the cane.”
All participants noted many practical issues with the prototype. In particular, par-
ticipant B felt it was really tiring to use the device due to its weight and also pondered
how to practically use the system in conjunction with the cane, a GPS device and/or a
guide dog: “You have to use the cane, and then you are supposed to use this too in some
way. One hand is being used by the cane, so how do you practically use this at the same
time?”
One question asked during the interviews was how the participants handled getting
lost, if they had any special strategies. One remarked: “then I’ll go back and soon I’ll
find my way again”.
The guide dog user (B) simply stated that “I’ll just put the harness on and say ‘go
home’.”
Participant C did not describe any strategy, but instead told a story to highlight
the fact that one can get lost even in smaller spaces. He described “getting lost on the
balcony” whilst bringing something from the balcony into the house.

Future Possibilities
All participants had improvement suggestions and presented situations from their daily
activities where they thought such an improved device would be of use. One such scenario
was going out of the house to put up laundry. Going out was easy, but finding the way
back was far more difficult. Similar scenarios included finding the door to a shopping
mall or finding a landmark such as a lamppost. In particular, one wished for the option
to filter objects so only the things of interest would be detectable. “There are so many
details outside: bikes, waste bins, trees, flower boxes, decorations, other people... you
might want to find that particular lamppost in all that mess.”
The guide dog owner (B) in the group noted that one cannot always count on having
a guide dog, and that a future LaserNavigator could work as a temporary solution, for
instance when waiting to get a new guide dog. The same participant also described the
following, very specific, scenario: “if I’m walking outside, I may get information about
where the walls of houses are, so I know when I pass a house; it [the LaserNavigator]
vibrates. One might also encounter a low wall or other low obstacles by the house, and
through the vibrations be able to feel depth.”
Participant C had a very specific idea of his ideal system in mind, and thus put forward many suggestions. These included mounting the device on a walking cane, getting
auditory feedback through a headset, and having a button announce the exact distance
measured by speech. Another idea that came up was for the device to somehow announce
compass directions, which could easily be added as the required sensors are already
present.

4 Discussion
This first evaluation of the handheld LaserNavigator has yielded several important initial results. It has shown that the device can provide valuable information about the surroundings, in this case finding doorways in a relatively unknown
environment. It was also found that the LaserNavigator was more usable in a more pre-
dictable situation. This can be exemplified by the fact that all three participants quickly
learned to find and navigate through the entrance door from the starting point outside
the makeshift room, which was the same for each trial. It is likely that the navigator is
most usable in fairly predictable contexts, that is when having a good idea of where to
direct the navigator and what to look for, at least in an initial learning stage. Navigating
in a more unpredictable environment will, however, probably need a lot more practice to
learn to scan with the device and comprehend the information.
The participants were mostly able to find the doorways, thus showing the ability to
integrate the new information provided to them by the LaserNavigator. The participants’
conceptions of usability of the device were mixed; the current device was difficult to use,
but the concept was met with interest and many areas of use were identified. Participant
B, being used to a GPS device, detailed a scenario where she got lost and had to retrace
her steps until she started receiving familiar GPS instructions. On such occasions, where
one has just left a familiar path for the unfamiliar, the LaserNavigator might help in
identifying a known landmark and thus establish a sense of location. Open spaces were
also discussed, where the white cane might not provide much information, yet there is
an important landmark somewhere in that open space which could be detected by the
LaserNavigator.
All participants encountered practical issues with the prototype. In addition to being
heavy and unbalanced, one issue common to all three participants was how to hold the
LaserNavigator horizontally. This may be difficult without any visual feedback or a lot
of practice, but at least two participants may have performed worse in this regard due
to the effort of holding the device straight, thus often pointing it at a downward angle.
The weight and balance issues can be mitigated by redesigning the prototype with this
in mind, and sensors for determining the pointing angle are present.
As for the different reflector and feedback types, we did not notice any obvious effects.
The study did not attempt to specifically measure the impact of these changes, and the
small sample size and diversity would have made it difficult to draw any conclusions
in this regard. The additional level of feedback (dual frequency) does provide more
information, but no participant attained the level of skill required to appreciate this
additional information. The idea of having an easily variable “cane length” would be a
new concept even to white cane users, and as such needs training to use effectively.
As mentioned in the introduction, visual input is of major importance for navigation
in 3D space. Development of accurate and versatile navigation aids for the visually
impaired is therefore of highest relevance and importance. As in controlling movements
in general, navigation in 3D space involves integration of several sensory systems. Apart
from vision, safe and efficient navigation also involves sensory input from vestibular,
somatosensory (including proprioception and tactile senses) and auditory systems. This
means that the novel information from the navigation aid needs to be transmitted via
one or several of the available sensory systems and integrated with all sensory information
relevant for solving the task. For example, the LaserNavigator presented in this study
relies on proprioception from hand, arm and trunk, as well as precise motor commands
to adjust the length of the “virtual cane”. Also, the information from the device is
transmitted via tactile vibration to the index finger holding the device. Processing and
integration of this information is a highly important task for the central nervous system
(CNS) to achieve useful information for navigation. Results from this study and others
(e.g. [14]) illustrate that this is not trivial, and that the CNS needs to adapt and learn
how to process and integrate this sensory information. The CNS also needs to learn to
create optimal motor commands for precise and efficient movements with the navigation
aid. Our beliefs are that the plasticity of the CNS will allow for learning to integrate the
augmented information from technological devices such as the LaserNavigator with all
sensory systems for safe and efficient navigation. This will however need practice, more
practice than was given in this study, to gain full use of the navigation aid.
The participants discussed training during the interviews, with one of them (partic-
ipant A) talking about an improvement experienced during the trial itself. The main
difficulty seemed to be getting used to the back-and-forward motion, and then efficiently
combining that with the sweeping typically used with the white cane. In this respect, it
is interesting to examine the graphs in figure D.5. The improvement can be seen from
the trials of participant A, while the latter attempts of participant B show the effort she
experienced, thus using the device less actively (figure D.6). The graphs show character-
istic movement patterns for each participant, but no general strategy emerged, in part
likely due to insufficient time to practice with the LaserNavigator.
The participant selection was made from a group with a great inherent degree of
diversity. The way participants are selected from such a group needs careful consideration,
as discussed by Loomis et al. [15]. This is particularly important for studies looking to
find group differences, whereas in our case the low number of participants makes such
methods statistically inappropriate. There are many factors that greatly influence the
performance in a task such as the one described herein, with independent mobility skills
likely being a large contributor. The study methodology was chosen with these limitations
in mind.
The feedback from the participants has led to immediate changes to the LaserNav-
igator. The difficulties in combining the back-and-forth movements with the sweeping
motion motivate us to separate the two. One will be a length adjustment mode where
the back-and-forth motion is used to set the “cane length”. The other and main mode
will not adjust the cane length based on arm position. This behaviour is more like a real
cane, and should improve depth perception. The key is an efficient method to vary the
length, which is a necessity if one expects to use the device both at short and very long
ranges. This will require further study.

Future research will include additional training and navigation in an outdoor scenario
to shed further light on the overall conceptions of the LaserNavigator.

4.1 Daniel’s Comments


The first author, Daniel, has a severe visual impairment known as Leber’s congenital
amaurosis. Following are his reflections on using the LaserNavigator in general, and on
the trial task in particular:

One of the most appealing yet challenging aspects of the LaserNavigator is that it provides access to knowledge about surrounding objects at ranges
beyond that of the white cane. The expanded range is a great advantage,
but as the information is limited to distance and direction, the tricky part
is trying to piece together the perceived information into a useful cognitive
map of the environment. Having used the LaserNavigator far more than the
participants, I was easily able to detect the doorways, with one caveat: when
positioned at a steep angle to the doorway, it was very difficult to detect.
Differentiating doorways from corners does take some effort, and is again
an angular phenomenon. When standing with a corner straight ahead and
sweeping the LaserNavigator side-to-side, the distance to the walls changes,
and to tell whether an abrupt pause in feedback is a door or corner requires
active tracing of the walls. In terms of feedback mode I prefer the dual
frequency feedback as it helps with this very task.

Acknowledgements
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology – both in Sweden – and by the European
Union Objective 2 North Sweden structural fund. We would also like to thank Dar-
iusz Kominiak for managing the motion capture system and aiding in the planning and
construction of the makeshift room.

References
[1] D. M. Brouwer, G. Sadlo, K. Winding, and M. I. G. Hanneman, “Limitation in mobility: Experiences of visually impaired older people,” British Journal of Occupational Therapy, vol. 71, no. 10, pp. 414–421, 2008.

[2] S. R. Nyman, B. Dibb, C. R. Victor, and M. A. Gosney, “Emotional well-being and adjustment to vision loss in later life: a meta-synthesis of qualitative studies,” Disability and Rehabilitation, vol. 34, no. 12, pp. 971–981, 2012.

[3] T. Pey, F. Nzegwu, and G. Dooley, “Functionality and the needs of blind and partially sighted adults in the uk: a survey,” Reading, UK: The Guide Dogs for the Blind Association, 2007.

[4] GDP Research, “The miniguide mobility aid,” http://www.gdp-research.com.au/minig 1.htm, accessed 2016-03-21.

[5] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,” http://www.ultracane.com/, accessed 2016-03-21.

[6] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld talking gps.html, accessed 2016-03-21.

[7] L. Ciaffoni, “Ariadne GPS,” http://www.ariadnegps.eu/, accessed 2016-03-21.

[8] Loadstone GPS Team, “Loadstone GPS,” http://www.loadstone-gps.com/, accessed 2016-03-21.

[9] “BlindSquare,” http://blindsquare.com/, 2016, accessed 2016-03-21.

[10] J. van Deventer, D. Innala Ahlmark, and K. Hyyppä, “Developing a Laser Navigation Aid for Persons with Visual Impairment,” To be published, 2016.

[11] World Health Organization, “Fact sheet, n282,” http://www.who.int/mediacentre/factsheets/fs282/en/, 2014, accessed 2016-03-21.

[12] VICON, “Bonita Motion Capture Camera,” http://www.vicon.com/products/camera-systems/bonita, accessed 2016-03-21.

[13] U. H. Graneheim and B. Lundman, “Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness,” Nurse Education Today, vol. 24, no. 2, pp. 105–112, 2004.

[14] R. Farcy and Y. Bellik, “Locomotion assistance for the blind,” in Universal Access and Assistive Technology. Springer, 2002, pp. 277–284.

[15] J. M. Loomis, R. L. Klatzky, R. G. Golledge, J. G. Cicinelli, J. W. Pellegrino, and P. A. Fry, “Nonvisual navigation by blind and sighted: assessment of path integration ability,” Journal of Experimental Psychology, vol. 122, no. 1, pp. 73–91, 1993.
Paper E
A Haptic Navigation Aid for the
Visually Impaired – Part 2:
Outdoor Evaluation of the
LaserNavigator

Authors:
Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer and Kalevi
Hyyppä

To be submitted.

A Haptic Navigation Aid for the Visually Impaired –
Part 2: Outdoor Evaluation of the LaserNavigator

Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer, Kalevi
Hyyppä

Abstract

Negotiating the outdoors can be a difficult challenge for individuals who are visually
impaired. The environment is dynamic, which at times can make even the familiar
route unfamiliar. This article presents the second part of the evaluation of the LaserNavigator,
a newly developed prototype built to work like a “virtual white cane” with an easily
adjustable length. The user can quickly adjust this length from a few metres up to 50
m. The intended use of the device is as a navigation aid, helping with perceiving distant
landmarks needed to e.g. cross an open space and reach the right destination. This
second evaluation was carried out in an outdoor environment, with the same participants
who partook in the indoor study, described in part one of the series. The participants
used the LaserNavigator while walking a rectangular route among a cluster of buildings.
The walks were filmed, and after the trial the participants were interviewed about their
conceptions of usability of the device. Results from observations and interviews show that
while the device is designed with the white cane in mind, one can learn to see the device
as something different. An example of this difference is that the LaserNavigator enables
keeping track of buildings on both sides of a street. The device was seen as most useful in
familiar environments, and in particular when crossing open spaces or walking along e.g.
a building or a fence. The prototype was too heavy, and all participants requested some
feedback on how they were pointing the device, as they all had difficulties with holding
it horizontally.

1 Introduction
Independent navigation is a challenge for individuals with a visual impairment. Without
sight, the range at which landmarks can be detected is short, and therefore a relatively simple
route for the sighted might be a complex route with many landmarks for the visually
impaired. Additionally, things in the environment change all the time, which means that
some day the trusty landmark may be out of reach, not to mention the whole route.
Further, helpful information such as signs are not usually accessible without sight, and if
they happen to be, you have to know to look for them. These challenges might mean that
a person who has a visual impairment chooses to stay at home [1], or has to schedule
the excursion at a later time when sighted assistance can be brought along.


Numerous attempts have been made to address the above-mentioned challenges. These have unfortunately not made a big impact among the visually impaired [2], though
there are a few exceptions. Firstly, the white cane has been extremely successful, and
has a strong symbolic value associated with it. The reasons for its success might lie in its
simplicity, and also the security it provides, e.g. when notifying the user of a downward
staircase. Secondly, electronic travel aids (ETAs) in the form of accessible GPS devices
(such as the Trekker [3]) and apps (e.g. Ariadne [4] and BlindSquare [5]) are available to
help with finding the way. These two kinds of devices complement each other, but have
individual drawbacks that make certain situations very difficult. For example, consider
crossing an open space. The GPS device or app likely has too little information to guide the user correctly across, and the inherent inaccuracy of the GPS system means one cannot
solely rely on it. The white cane, usually used to follow kerbs or other ground features,
may not provide much information, if any, while crossing the open space. In this case,
closely listening for environmental sounds and echoes might be the only useful cues for
crossing the space. The next landmark might be a lamppost on the other side of the
open space, which would not be easy to find.
There is a third category of navigation aid: devices that try to augment or complement
the white cane with sensors and feedback systems. An example of a modified cane is the
UltraCane [6] and a handheld device is the Miniguide [7]. These are very similar in that
they both use ultrasonic time-of-flight technology to measure the distance to objects,
and alert the user of these by vibration bursts varying in frequency depending on the
measured distance. Typically, these devices possess a range of up to 8 metres. Using
ultrasound also means a significant beam spread (more than 10◦ ), which may or may
not be a problem depending on the intended use of the device. As a way of alerting the
user of close-by obstacles, they work well, but not so for discerning the detailed shape of
objects at long ranges. Further, open spaces might require far greater range.
This article is the second and final part in a series evaluating the LaserNavigator,
a prototype navigation aid based on a laser rangefinder and haptic feedback. The first
article (see [8]) presented an initial indoor evaluation, while this second part features an
outdoor evaluation performed four months after the indoor tests. The LaserNavigator
uses a laser rangefinder to overcome the limited range and significant beam spread of
ultrasonic systems. With the device, it is possible to detect a lamppost across an open
space of up to 50 metres.
True to its name, the LaserNavigator (depicted in figure E.1) uses a laser rangefinder
to measure the distance to objects. Specifically, an SF02/F unit from Lightware Opto-
electronics (see [9]) was chosen, which has a range of 50 m. The unit is able to take 32
measurements per second, with an error less than 10 cm +1% of range, and a low beam
spread of 0.2◦ .
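As a rough illustration of why the beam spread matters (assuming a simple conical beam and the full-angle spreads quoted above), the beam footprint diameter d at range R is approximately

$$d \approx 2R \tan\left(\frac{\theta}{2}\right),$$

so a 10° ultrasonic beam already covers about 2 × 8 m × tan 5° ≈ 1.4 m at its 8 m maximum range, whereas the 0.2° laser beam covers only about 2 × 50 m × tan 0.1° ≈ 0.17 m even at 50 m.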
The feedback consists of vibrations produced by a small loudspeaker (a loudspeaker was chosen instead of a mechanical vibration actuator because of its much quicker response time), on which the user puts their index finger.
Figure E.1: A picture of the LaserNavigator, showing the laser rangefinder (1), the ultrasound sensor (2), the loudspeaker (3), and the button under a spring (4) used for adjusting the “cane length”.

The device operates in two modes: one length adjustment mode, and the main (navigation) mode. First, the user enters the length adjustment mode where they choose a
desired length of the imagined cane. This mode is active while the user holds down the
button on top of the handle in figure E.1. While in this mode, the device uses its second
range measurement unit (an ultrasound sensor [10]) to measure the distance from the
device to the user’s body. On releasing the button, the device takes the last body–device
measurement, multiplies it by 50, and uses the result as the “cane length” for the main
usage mode. In the main mode, if the laser detects objects closer to or equal to this
cane length, the device will signal this to the user by vibrations. The vibration pattern
is a series of short repeating bursts with a fixed frequency. Note that this frequency
is not varied based on the measured distance; the vibrations convey the presence of an
object. More information on the development of the LaserNavigator will be published in
an upcoming paper [11].
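To summarise the two-mode behaviour in compact form, the following sketch mirrors the description above. It is not the actual firmware; the class and method names and the default cane length are invented for this example, while the scale factor of 50 is taken from the text.

```python
LENGTH_SCALE = 50            # the last body-device reading times 50 becomes the cane length
DEFAULT_CANE_LENGTH_M = 5.0  # assumed start-up value, not specified in the text

class LaserNavigatorSketch:
    """Minimal model of the length-adjustment and navigation modes."""

    def __init__(self):
        self.cane_length_m = DEFAULT_CANE_LENGTH_M
        self._last_body_device_m = None

    def update(self, button_pressed, ultrasound_m, laser_m):
        """One measurement cycle; returns True when the speaker should pulse."""
        if button_pressed:
            # Length adjustment mode: track the body-device distance while the button is held.
            self._last_body_device_m = ultrasound_m
            return False
        if self._last_body_device_m is not None:
            # Button released: freeze the new "cane length".
            self.cane_length_m = LENGTH_SCALE * self._last_body_device_m
            self._last_body_device_m = None
        # Main (navigation) mode: signal the presence of objects within the cane length.
        return laser_m is not None and laser_m <= self.cane_length_m
```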
The remainder of the article is organised as follows. Section 2 characterises the par-
ticipants and the study. Section 3 describes the results from observations and interviews.
These are then discussed in section 4, which concludes the paper.

1.1 Purpose
The purpose of this study was to better understand users’ conceptions of usability of
the LaserNavigator in an outdoor context. This knowledge, together with the results
from the indoor study, contributes to further development of the device, and towards
understanding how such a device can be useful in different scenarios.

Figure E.2: The tactile model used by the participants to familiarise themselves with the route.
The route starts at (1) and is represented by a thread. Using the walls of buildings (B1) and
(B2) as references, the participants walked towards (2), where they found a few downward stairs
lined by a fence. Turning 90 degrees to the right and continuing, following the wall of building
(B2), the next point of interest was at (3). Here, another fence on the right side could be used
as a reference when taking the soft 90-degree turn. The path from (3) to (6) is through an alley
lined with sparsely spaced trees. Along this path, the participants encountered the two simulated
crossings (4) and (5), in addition to the bus stop (B5). At (6) there was a large snowdrift
whose presence guided the participants into the next 90-degree turn. Building B4 was the cue
to perform yet another turn, and then walk straight back to the starting point (1), located just
past the end of (B3).

Figure E.3: This figure shows three images captured from the videos. From left to right, these
were captured: just before reaching (6); just before (5), with one of the makeshift traffic light
poles visible on the right; between (3) and (4).

2 Methods
This section characterises the participants and describes the study.

2.1 Participants
The three participants were the same ones who participated in our earlier indoor evalu-
ation, described in part 1 of the article series [8]. As such, they had tried the LaserNav-
igator before, albeit an earlier version with some significant differences.
The participants were recruited from the local district of the Swedish Association
for the Visually Impaired (SRF). They were 2 females and 1 male, aged 60, 72 and 78,
respectively. All were blind (the term is used in accordance with the International Classification of Diseases; see [12]), with participants B and C being adventitiously blind. All
three used white canes, but participant C used it only as a walking cane. Additionally,
participant B had a guide dog and used a GPS device daily. Participants A and B were
comfortable with walking around in familiar territory on their own, while participant C
said he never leaves the home by himself.

2.2 Trial Task


On arrival, the participants got some time to familiarise themselves with the new Laser-
Navigator, and practiced using the length adjustment feature in a corridor. Subsequently,
with assistance from one of the researchers, the participants explored a tactile model of
an outdoor environment (figure E.2) where the trial task would take place. The task
was to walk a 385 m closed path among a cluster of buildings. The route contained
different kinds of landmarks, ranging from walls on both sides to open sides lined by
sparsely spaced trees. The route was rectangular, with one corner being a large curve
instead of an abrupt 90 degree turn. Four vertical pipes were placed in such a way as to
simulate traffic light poles for two imaginary road crossings. The environment and route
are depicted and described in more detail in figure E.2.

2.3 Observations And Interviews


All trials were filmed by one of the researchers following the participants. Similarly, a
closer audio recording was made to capture any live comments made by the participants.
One of the researchers walked with the participants in each trial to give instructions about
buildings, other objects, obstacles and locations. Instructions were also given regarding
suitable length settings and identifying objects. The weather was fairly cold during the
trials; two participants wore finger gloves.
After the trials, the participants were interviewed based on a semi-structured inter-
view guide of ten open-ended questions. The focus for the questions was on conceptions
of usability of the LaserNavigator in an outdoor environment. The interviews were transcribed verbatim and were subsequently analysed using content analysis as described by
Graneheim and Lundman [13].

3 Results
This section describes the results from the observations and interviews.

3.1 Observations
All participants needed a lot of instructions during the walk. While they brought their
white canes, they did not use them that much, instead concentrating on using the Laser-
Navigator. They used the device to find a “corridor”, i.e. an open path straight ahead
with “walls” on both sides. When they found this, they seemed confident in walking
through, following one or both of the “walls”, whether they were actual walls or trees.
Figure E.3 shows three pictures from the trials captured from the video recordings.
Generally, all participants had difficulties holding the LaserNavigator horizontally,
and often needed instructions to angle the device up or down. Below are some comments
specific to each participant.
Participant A brought her white cane, but did not use it regularly, instead concentrat-
ing on the LaserNavigator. She mainly used the LaserNavigator to find open space where
she could walk, and having found that, mostly held the device fixed while she walked
straight. She initially used too wide and quick movements with the LaserNavigator, but
later assumed a more calculated and controlled use. When instructed to walk to a land-
mark she found, she was able to move there without much difficulty. The second time
around, participant A was more confident and walked the route considerably quicker.
Participant B found it very fatiguing to hold the LaserNavigator, and walked the
route one time only, during which she often had to pause for a while to rest. She had
her white cane but did not use it regularly. She walked at a normal walking pace, and
sometimes missed important landmarks and had to be stopped and given the relevant
information. She had difficulties finding the walkway in the alley, as she used too large
and fast movements with the device. Because of this, she also missed small landmarks
like the traffic light poles, which she needed a lot of help to find. During the later part
of the walk, she let go of her white cane and used the left hand to help steady the right,
holding the LaserNavigator.
Participant C mainly used the LaserNavigator while stationary at first, but later
incorporated the use while walking. The first time around he did not bring his cane, but
during the second time he used it to support himself and to help him with the stairs.
Due to the cold weather he used gloves a short time, after which he removed them and
commented that he did not feel the vibrations through them. Noteworthy is that he
generally seemed to listen for the feedback more than he felt it. As with participant A,
the second time around was considerably quicker.

3.2 Interviews
This section lays out the findings from the interviews. Three categories were formulated,
and the findings below are grouped based on those. The analysis shows that with practice,
one can learn to see the device as more than a white cane, despite the similarities which
inevitably affect the initial impressions. All participants spoke of “the complex outdoors”
and the challenges of getting enough information for safe and accurate travel. They also
discussed the prototype itself, noting that there was room for improvement. Following is
a more detailed description of the participants’ conceptions of using the LaserNavigator.

More than a White Cane


The participants’ conception of the LaserNavigator was that it added something more
than what the regular white cane could provide. The participants described that it
seemed easier to walk in a straight line and to walk more freely without having to touch
something with the white cane. The participants also noted that walking the route with
the regular white cane would have been more difficult, since the use of the white cane
requires something to follow, e.g. a wall or kerb. One participant described it this way:
“I always have to have something to follow, now [with the LaserNavigator] I could feel
both sides and walk in the middle.”
Another conception that surfaced was the idea of being able to greatly vary the “cane
length”, which made it different from the regular white cane. This idea was met with
interest, but a question that came up was how to know what length to use. Participant
C saw the device helpful when avoiding obstacles. He felt that he only needed to know
about obstacles five metres ahead of him, no more. Later in the interview, however, he
stated that with the LaserNavigator he could “better keep track of where I am” and that
the LaserNavigator might be of help if he was to walk outside by himself.
The LaserNavigator was seen as having an advantage over the white cane, since the white cane requires the user to walk close to walls, where there could be bicycle stands in the summer and a lot of snow in the winter. All participants expressed the
need to practice more with the device, and two noted an improvement the second time
around the track. In particular, participant A experienced a big improvement during the
trial, saying that “first time around was difficult; second time easier,” and expressed the
general feeling that “it was fun!”

The Complex Outdoors


The participants talked about the differences between indoor and outdoor environments
as well as unfamiliar and familiar situations. Indoor environments were seen as easier,
compared to the outdoors. Participant B described the outdoor experience this way: “it
is like walking in a dark room with a pinpoint of light used to examine the environment.”
Another thing described by participant B regarding the complex outdoors was: “How
do you know when to search for something? How do you know what you find? There are
no references. It’s a mess outdoors.”

The participants said that the LaserNavigator would be most useful in familiar envi-
ronments. They said that following the walls of the buildings worked well, but found it
challenging as soon as this familiarity was replaced by the unfamiliar. The tactile model
of the environment was an attempt to increase familiarity, and was seen as helpful. “I
memorized it, and had it in mind while I walked.”
The participants also noted several situations where the LaserNavigator would be
useful, such as in the woods, in a tree-lined alley, or finding the small walkway from the
town square. Participant A described one typical situation: finding her way back into
her house after being out in the back yard. She expressed that she felt the main idea
with the LaserNavigator to be using it in larger open spaces and finding out “here I can
go, here I can’t.”

Room for Improvement


All participants thought the LaserNavigator had improved since the indoor trial described
in part one of this two-part article series. “This device is far more sophisticated than
the last”, stated one participant. Another participant stated that “one should have one
of these built into the white cane telling you ‘do not go there because there is something
there’.”
The participants also had suggestions on how to improve the device. The device
was still too heavy and it was difficult to hold horizontally, and participant C suggested
adding some sort of feedback to help with this. Also discussed was how to practically
use the device in conjunction with other devices. “You can’t have it in one hand, cane
in the other, guide dog in the third, GPS in the fourth...”
Participant C had specific suggestions based on how he saw the device. He did not
like the current way of adjusting the length, and said he would prefer a thumbwheel.
Participant B added that one needs more feedback than currently provided by the Laser-
Navigator. The participant described a need to know if it was a tree or a lamp post that
she felt with the device. She stated: “The difficulty lies in interpreting the environment
and what it is in the environment. Perhaps some sort of camera communicating to me
through an earpiece what I am passing by.”
The same participant highlighted the need for simplicity, saying: “The technology
should exist for me, and not the other way around.”

4 Discussion
This second evaluation of the LaserNavigator has widened our understanding of users’
conceptions of the device, by testing it in a more realistic outdoor scenario. While the
concept of the LaserNavigator has much in common with a white cane, it is not intended
to serve the same function. It is highly interesting to observe the shift in conception of
the device from the participants’ point of view. The device was designed with white cane
users in mind, and the fact that the initial conceptions went along those lines suggests
that this similarity in concept was successful. On the flipside, the participants now had to
make the transition from the idea of a “virtual cane” to that of a navigation aid. Realising
the possibility to sense something at a great distance compared to the cane is one thing,
but knowing how to use the device as a navigation aid requires a mental transition to a
conception of the device as a new kind of aid. For example, “The LaserNavigator told
me where I should go,” was the way participant A described the device, thus viewing it
as being different from the white cane.
Feedback was discussed both in the interviews and during the trials, with one concern
being how to know what object is being felt, and when to feel for something. If one
considers only the feedback and no extra knowledge, this is indeed an issue. The current
feedback signals the presence of an object at a certain angle and approximate distance,
and by carefully probing the object, it is possible to get a sense of its size and shape. The
rest is left to the user’s knowledge, and perhaps aided by a GPS system. It is perhaps
because of this that the participants expressed that the LaserNavigator would be most
useful in familiar environments.
Future work will need to address the practical issues with the LaserNavigator, and
look into adding a mode where the device helps with pointing it horizontally. The
latter feature is possible to implement in the current system as the hardware includes
an accelerometer and a gyro. The primary question is how to give the feedback. On the
subject of feedback, one participant in particular spoke of the need for more information
when using the device. A camera and an earpiece were suggested, with the idea that
these components might provide some of the information that the researcher walking
with the participants did. Image processing techniques and machine learning algorithms
are advancing rapidly, and there are applications today such as TapTapSee for iPhone
that try to describe the contents of a picture [14]. While the current solutions are unable
to give the rich and accurate descriptions likely sought by the participant, this kind of
technology is certainly something to keep an eye on.
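One conceivable way to realise such a levelling aid is sketched below, under the assumption that the accelerometer data are used directly while the device is held reasonably still (a real implementation would likely fuse the accelerometer and gyro). The axis convention, tolerance and cue names are assumptions made for this example only.

```python
import math

LEVEL_TOLERANCE_DEG = 5.0  # assumed tolerance for "horizontal enough"

def pitch_deg(ax, ay, az):
    """Estimate pitch from gravity; assumed convention: x along the laser beam, z up."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def level_cue(ax, ay, az):
    """Coarse cue telling the user how to correct the pointing angle."""
    p = pitch_deg(ax, ay, az)
    if abs(p) <= LEVEL_TOLERANCE_DEG:
        return "level"
    return "tilt down" if p > 0 else "tilt up"

# Example: under the assumed axis convention, a device pointed slightly upwards
# reports a small positive pitch (about 10 degrees here) and a "tilt down" cue.
print(level_cue(0.17, 0.0, -0.98))
```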

4.1 Daniel’s Comments


The first author, Daniel, has a visual impairment (Leber’s congenital amaurosis) and has
experience using the device while tweaking the software, and has walked the test route a
few times. Below are his comments:

The change from automatic length adjustment to manual has been a big one.
Being well-trained with the device, I did not personally find the automatic
length adjustment difficult, but I do understand the drawbacks. The concept
of automatic length adjustment would be unfamiliar even to a cane user, and
depth perception would be compressed. In fact, when I first started using the
recent manual mode, I had become so accustomed to the compressed depth
perception that all objects felt extremely large, depth-wise. It did not take
long, however, to adapt to manual mode, and the benefit it provides regarding
ease of learning is evident from the participants' comments. I agree with
the participants on the need for more information while walking. This is
a general problem the LaserNavigator does not address, but it is less of an
issue in familiar environments. At present, I see the LaserNavigator as useful
in overcoming specific challenges in known environments, such as finding a
lamppost on the other side of an open space as discussed in the introduction.
It could also be a great device to have for increased security, a device used to
help get one’s bearings when having strayed from the intended path.

Acknowledgements
This work was supported by Centrum för medicinsk teknik och fysik (CMTF) at Umeå
University and Luleå University of Technology – both in Sweden – and by the European
Union Objective 2 North Sweden structural fund.

References
[1] D. M. Brouwer, G. Sadlo, K. Winding, and M. I. G. Hanneman, “Limitation in mobility: Experiences of visually impaired older people,” British Journal of Occupational Therapy, vol. 71, no. 10, pp. 414–421, 2008.

[2] T. Pey, F. Nzegwu, and G. Dooley, “Functionality and the needs of blind and partially sighted adults in the uk: a survey,” Reading, UK: The Guide Dogs for the Blind Association, 2007.

[3] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld talking gps.html, accessed 2016-03-21.

[4] L. Ciaffoni, “Ariadne GPS,” http://www.ariadnegps.eu/, accessed 2016-03-21.

[5] “BlindSquare,” http://blindsquare.com/, 2016, accessed 2016-03-21.

[6] Sound Foresight Technology, “Ultracane - putting the world at your fingertips,” http://www.ultracane.com/, accessed 2016-03-21.

[7] GDP Research, “The miniguide mobility aid,” http://www.gdp-research.com.au/minig 1.htm, accessed 2016-03-21.

[8] D. Innala Ahlmark, M. Prellwitz, U. Röijezon, G. Nikolakopoulos, J. van Deventer, and K. Hyyppä, “A Haptic Navigation Aid for the Visually Impaired – Part 1: Indoor Evaluation of the LaserNavigator,” To be published, 2016.

[9] Lightware Optoelectronics, “SF02/F (50 m),” http://www.lightware.co.za/shop/en/drone-altimeters/7-sf02f.html, accessed 2016-04-29.

[10] Parallax Inc., “PING))) Ultrasonic Distance Sensor,” https://www.parallax.com/product/28015, accessed 2016-05-02.

[11] J. van Deventer, D. Innala Ahlmark, and K. Hyyppä, “Developing a Laser Navigation Aid for Persons with Visual Impairment,” To be published, 2016.

[12] World Health Organization, “Fact sheet, n282,” http://www.who.int/mediacentre/factsheets/fs282/en/, 2014, accessed 2016-03-21.

[13] U. H. Graneheim and B. Lundman, “Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness,” Nurse Education Today, vol. 24, no. 2, pp. 105–112, 2004.

[14] “TapTapSee - Blind and Visually Impaired Camera,” http://www.taptapseeapp.com/, accessed 2016-05-02.
Paper F
Developing a Laser Navigation Aid
for Persons with Visual Impairment

Authors:
Jan van Deventer, Daniel Innala Ahlmark and Kalevi Hyyppä

To be submitted.

Developing a Laser Navigation Aid for Persons with
Visual Impairment

Jan van Deventer, Daniel Innala Ahlmark, Kalevi Hyyppä

Abstract

This article presents the development of a new navigation aid for visually impaired per-
sons (VIPs) that uses a laser range finder and electronic proprioception to convey the
VIPs’ physical surroundings. It is dubbed the LaserNavigator. In addition to the tech-
nical contributions, an essential result is a set of reflections leading to what an “intuitive”
handheld navigation aid for VIPs could be. These reflections are influenced by field trials
in which VIPs have evaluated the LaserNavigator indoors and outdoors. The trials revealed technology-centric misconceptions regarding how VIPs use the device to sense the
environment and how that physical environment information should be provided back
to the user. The set of reflections relies on a literature review of other navigation aids,
which provide interesting insights on what is possible when combining different concepts.

1 Introduction
The World Health Organization estimates that there are 285 million persons who are
visually impaired worldwide, of which 39 million are blind and 246 million have low
vision [1]. This very large population could be reduced as the impairment’s root cause is
often disease. Nonetheless, this population exists and deserves to be assisted. Assistance
or aid to visually impaired persons (VIPs) comes in different forms. From a technology-
centric view, navigation aids are a valuable form of assistance, which can provide a
feeling of independence to the VIPs. Engineers and researchers can find purpose and
pleasure in developing technical solutions or aids to the problem of perceiving the physical
surroundings for persons with visual impairment. This problem can further be divided
into: sensing and feedback.
To understand what is really needed by VIPs and what is technically possible is a
difficult task. A person in need of a navigation aid does not necessarily know what
is technically possible. A technical person, meanwhile, can only imagine what could be a good navigation aid, and can lose focus of the true user-centric solution [2]. This can be witnessed by the collection of available devices designed to help the visually impaired navigate, with none being a true reference [3].
The need for such devices is clear when one considers that a mere glimpse of a scene provides
so much information about the surroundings for those who are not visually impaired. We
seek the means to provide clear information to persons who have a visual impairment
without overloading their remaining sensory inputs. But as Ackoff points out: “Successful
problem solving requires finding the right solution to the right problem. We fail more
often because we solve the wrong problem than because we get the wrong solution to
the right problem” [4]. As we reflect on the development of the navigation aid, one can
wonder whether technological solutions address the “right” problem, or whether they are solutions
to a misunderstood concept of an intuitive navigation aid.
The purpose of this paper is two fold. Firstly, it presents the development of a naviga-
tion aid that uses a laser range finder to discern the surroundings and electronic sensors
to establish the location and movement of the user’s hand. The device is dubbed Laser-
Navigator. Secondly, it tells about the struggle to define what an intuitive navigation
aid is or could be, which we initially address by stating some basic requirements.
A system engineering approach requires a set of system requirements for a navigation
aid to guide research and development towards a “Eureka!” solution. The requirements
for the device could start with “a system should be intuitive”. It should provide as much
information as fast as possible without overloading the user. It should not interfere with
the other senses of the user. It should be very light to enable use over a longer period and
yet offer enough battery power to be used for at least a day. And, to make it available to
many, one could also wish for low cost. These requirements do need further refinements
and extensions, yet can assist in the evaluation of navigation aids.
There is a non-electronic solution that meets all of these requirements: the white cane.
It is intuitive, as one can quickly use it without any extensive training. It is light. It does
not run out of batteries. It provides information about what is around the user and even
about the ground texture. It generates gentle sounds that help the user locate nearby
walls without muffling the environment. It even communicates with others as it clearly
flags the situation, e.g., when walking down a crowded street, the crowd parts away as
a white cane user approaches. But, it does have a limitation: it is about a meter to a
meter and a half long depending on the size of the user. Adding the requirement that
a user should be able to discern his or her physical surroundings beyond the length of a
white cane justifies the development of any electronic navigation aid.
The paper is organized in three main sections. Following the introduction, we have
a short review of some existing navigation aids, some of which became products while
others are research projects that seem to have entered a hibernating state. Each of the
ones mentioned here has an innovation that contributes to our search for the utopian
navigation aid. The second section covers our development journey starting with the
“Sighted Wheelchair” and includes trials with VIPs. The third section is a discussion,
which combines the LaserNavigator with the other concepts to describe a next generation
of navigation aids. The article naturally ends with a conclusion and acknowledgements.

2 Navigation Aid Review


The short review presented here shows that researchers have tried to address the need for
physical environment perception by developing electronic navigation aids. They all sense
the environment and provide feedback to the device user. The order in which the navigation
aids are listed here does not imply any preference by the authors.

The UltraCane is a research innovation that did become a commercial product [5]. It
incorporates two ultrasonic sensors integrated in a white cane while providing feedback
to the user by two vibrating buttons and auditive beeps. Both ultrasonic sensors are
aimed forward and are based on acoustic time of flight methods. The lower sensor looks
straight ahead of the white cane to warn of upcoming obstacles while the upper one looks
more upward to detect any obstacle that could hit the user above the belt. The haptic
feedback consists of vibration bursts whose frequency is dependent on the measured
distance, and the user is required to undergo training to use the UltraCane efficiently.
With training comes proficiency, just as a sighted person learns to drive a car or ride a bicycle.
Hoyle points out that, for a comparison of the UltraCane’s performance with
other navigation aids to be relevant, the user must be trained with both devices [6]. The UltraCane
is quite heavy, so much so that it has a ball bearing with a ball at the ground end to ease
sweeping with the cane. With its wide ultrasonic beam, it does detect a pole in front but
not necessarily an open door as it senses instead the frame of the door.
Another commercial navigation aid is the MiniGuide [7]. It is a small and light
handheld ultrasonic rangefinder that has an intermittent vibration when an object is
detected at a certain distance. This distance could be referred to as the “virtual”
cane length, which can be adjusted when entering a setup mode. As an object gets closer
within the cane length, the intermittent vibration bursts get closer to each other. The
device also has a 3.5 mm audio jack output to be used with headphones, which offers
a finer depth perception through a tone as a function of distance. Some users might
find the use of headphones disturbing, as hearing provides other information about the
environment. The MiniGuide also has difficulty detecting open doors at large
distances, as it too uses a wide-beam ultrasonic sensor.
Gallo et al. demonstrated an augmented white cane with, among other features, a
narrow-beam long-distance IR sensor to detect objects ahead and a flywheel providing
torque reaction as haptic feedback [8]. The first feature addresses the issue of the open
door detection, while the second provides some torque feedback. The flywheel offers a
vertical torque feedback when abruptly stopped. The flywheel has a complex stopping
mechanism and the publication does not mention the weight of the device, which is added
to the white cane. There is also no mention of battery life.
Amemiya developed a haptic direction indicator based on a kinesthetic perception
method called the “pseudo-attraction force” technique [9]. The device exploits the non-
linear relationship between perceived and physical acceleration to generate a force sen-
sation. The user holding the navigation aid feels a gentle pull or a push in his or her
hand, which guides the user to a destination. Amemiya uses a GPS (Global Positioning
System) signal to sense where the user is while guiding him or her to a desired destination.
Hemmert et al. presented a comparison of other devices that point to the directions
of interest [10]. One device uses weight shifting while another communicates by changing
shape. The comparison also included a device with a graphical display containing a
pointing arrow towards the direction of interest. In trials with non-VIPs, the latter
proved to be the best only when paying attention to the display, and not when other tasks had
to be performed at the same time; their comparison used detecting
a traffic light as the additional task. Weight shifting proved to be an interesting form of
feedback.
Anybody who needs directions can take advantage of GPS technologies. When our
sight is unavailable due to driving or being visually impaired, voice guidance is an option.
For VIPs, one can find devices such as the Trekker Breeze+ handheld talking GPS or
smartphone apps such as BlindSquare [11, 12]. These solutions clearly complement the
white cane.
The final navigation aid reviewed here is the CyArm, short for cyber arm [13]. We only
discovered it through a patent search rather than a literature search as it has similarities
to the concepts we pursued. This is quite an exciting device as it enables the user to feel
the depth of the scene pointed to instead of deciphering intermittent buzzing feedback.
From a sensing point of view, the device measures the distance D between the user’s hand
and the target using ultrasound, and it additionally measures the distance d between the
hand and the user. The measurement of d is done via a wire rolled out between the hip
and the unit containing the forward aiming ultrasonic sensor held in the hand. Force
feedback is provided by a motor that reels in the wire until D = kd, where k is a
proportionality gain constant. If D is further reduced, the motor will pull in the line
until d = D/k. Much of the weight of the system was moved from the hand to the hip
in the second version of the device, which consisted of a pouch attached to a belt around
the user’s waist. With this revision, only the ultrasound sensor was left in the hand.
One positive secondary effect of the design is that the user can let go of the handheld
ultrasonic sensor, and it will hang at the waist freeing the user’s hand until the device is
needed again.
The above review provides an adequate background for the discussion section, which
follows the presentation of the development of the LaserNavigator.

3 Laser Navigators
We begin our development reflections with the “Sighted Wheelchair” [14, 15]. It has been
a successful project that enabled a visually impaired person, who additionally needs to
use an electric wheelchair, to navigate freely. The system on the wheelchair provides
haptic feedback to the wheelchair operator through a Novint Falcon [16]. The Falcon is a
game controller with high-fidelity three-dimensional haptic force feedback, that measures
the user’s hand location and movement. For environment sensing, the wheelchair has a
SICK LMS111 laser rangefinder that scans a frontal angle of 270 degrees up to 20
meters [17]. The combination of rangefinder and game controller provides intuitive
feedback to the wheelchair operator. For example, if a person steps in front of the
wheelchair 10 meters away, the Falcon’s ball grip pushes back on the operator’s hand, who can
then feel in three dimensions what is in front of him/her. If a pillar is on the side, the
Falcon provides resistance when the user moves his/her hand to the side.
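
As an illustration of this principle (not the actual implementation used on the Sighted Wheelchair), the sketch below shows how a planar laser scan can be rendered as a repulsive spring force on the grip of a haptic device. The function name, workspace scaling and stiffness values are our own illustrative choices.

```python
import math

def scan_to_force(scan, hand_xy, scale=0.05, stiffness=40.0):
    """Map a planar laser scan to a repulsive force at the haptic grip.

    scan      -- list of (angle_rad, range_m) pairs from the rangefinder
    hand_xy   -- (x, y) position of the grip in metres
    scale     -- metres of grip travel per metre in the world (workspace scaling)
    stiffness -- virtual spring constant, N per metre of penetration
    """
    fx = fy = 0.0
    # The grip position is scaled up to a point in front of the wheelchair.
    px, py = hand_xy[0] / scale, hand_xy[1] / scale
    for angle, r in scan:
        ox, oy = r * math.cos(angle), r * math.sin(angle)   # obstacle point
        dx, dy = px - ox, py - oy
        dist = math.hypot(dx, dy)
        if 1e-6 < dist < 0.3:                               # within 30 cm: push back
            penetration = 0.3 - dist
            fx += stiffness * penetration * dx / dist
            fy += stiffness * penetration * dy / dist
    return fx, fy

# Example: a pillar 2 m ahead; the hand is pushed 9.5 cm forward (1.9 m in the world),
# so the grip is pushed back towards the operator.
scan = [(0.0, 2.0), (0.1, 6.0), (-0.1, 6.0)]
print(scan_to_force(scan, (0.095, 0.0)))
```

The essential idea is that the hand explores a scaled copy of the scanned scene, and any penetration into an obstacle is answered with a proportional force, which is what makes the feedback feel like palpation rather than a coded signal.
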
Since we are easily biased towards our own R&D achievements, the system had to be tested
with external VIPs. Following an ethical approval procedure, we were unsuccessful in
our search for blind persons who used wheelchairs. Training VIPs to drive an electric
wheelchair in order to test the device would undesirably affect the results. The Falcon,
laser scanner, the associated computer and power supply are too large and heavy to be
considered portable, and so it was decided to use a rolling table to let the experimental
subjects evaluate the proposed solution. It worked, but because it is cumbersome to move
a rolling table around, it was not the success we sought; it was more of an anticlimax. With
the yearning for larger impact, the Sighted Wheelchair was set aside at a demonstration
stage in search of a handheld equivalent: the intuitive navigation aid.
With the combination of the Falcon and the SICK laser, it is possible to experience with
amazement the three-dimensional world beyond the length of a white cane. Answering
the difficult question of why this works is necessary to develop the handheld version. To address
this, in hindsight, one can break down the idea in two: sensing and feedback. The
Sighted Wheelchair senses the world in front of itself with the SICK scanning laser, and
the position of the user’s hand with the Falcon. The Sighted Wheelchair provides clear
haptic feedback, in three dimensions, to the user as if the user could palpate the scanned
world.
From the sensing side, the replacement of the SICK laser with a laser rangefinder is
an obvious choice. Being a narrow beam device, it provides sharp and accurate distance
information even beyond the 10 meters of an ultrasonic rangefinder. The narrow beam
width addresses the issue of detecting an open door, especially compared to the devices
using wide beam ultrasonic sensors. This laser rangefinder had to weigh little and not
threaten the eyesight of other people if the device is pointed at their faces. Getting the
distance in front of the user is one thing; knowing where the user’s hand
is with respect to the user’s body is another. The Falcon has a fixed reference frame since it is attached
to the wheelchair, but the handheld device is only held by the hand of the user. As
an analogy to proprioception, where a person usually knows where one’s own hand is
without looking, the device needs to know where it is with respect to the user and how it
moved. We refer to this as electronic proprioception. To achieve this, we began by using
an ultrasonic sensor aimed backwards to the user.
The Falcon is able to provide force feedback in three dimensions because it relies on
the principle that every action has an equal but opposite reaction. It can do that because
it is anchored to a heavy wheelchair and powered by a large battery. The handheld device
has neither the luxury of weight nor large battery reserves. We initially adopted
standard vibration feedback to communicate with the user.
Our first prototype consisted of a laser rangefinder, an ultrasonic range finder, an
Arduino prototyping platform and an Android telephone. The telephone provided power
to the navigation aid, a visual interface to the developer as well as the vibration feedback.
The Arduino Mega 2560 ADK was the electronic communication center of the navigation
aid [18]. It communicated with the phone through its USB port, to the laser range finder
via a UART serial port and to the ultrasonic sensor through a signal pin. The rearward
ultrasonic sensor was a Parallax PING with a center frequency of 40 kHz [19]. A short
start pulse was sent out on the signal pin of the Arduino, while a timer counted the
time required for an echo to return. The frontal range finder considered was an SF01
Laser Rangefinder from LightWare Optoelectronics [20]. It could detect targets over 60
meters away with a resolution of 1 centimeter and had an update rate of eight readings
per second. The laser light emitted from the pulsed laser is invisible, with a wavelength
of 850 nm, an average power of 11 mW and a peak power of 14 W. It was soon replaced
by the SF02/F, a lightweight laser rangefinder from the same manufacturer. There was a
range reduction from 60 to 50 meters but, more importantly, a weight reduction from
185 g to 69 g. The update rate increased to 12 readings per second. The first prototype
served its purpose well. It confirmed the concept of electronic proprioception for the
navigation aid, i.e. it knew how far it was from the body, providing a distance d. With
the frontal laser rangefinder providing D, the user could sense the proportional depth by
moving their hand back and forth to find where D = kd with k being a constant gain.
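
A minimal sketch of this prodding behaviour is given below. The sensor-reading functions are simulated stand-ins (the real prototype read the laser over a UART serial port and the PING))) sensor via a signal pin), and the gain and tolerance values are illustrative rather than those used in the prototype.

```python
import time

K = 5.0          # gain between hand distance d and virtual cane length (illustrative value)
TOLERANCE = 0.10 # metres; how closely D must match k*d before feedback fires

def read_laser_distance():
    # Stand-in for the frontal laser rangefinder; here a wall 6 m ahead.
    return 6.0

def read_ultrasonic_distance(t):
    # Stand-in for the rearward ultrasonic sensor; simulates the hand moving
    # slowly outwards from 0.3 m in front of the body.
    return 0.3 + 0.1 * t

def pulse_feedback(t):
    print(f"pulse at step {t}: virtual cane tip reached the target")

for t in range(13):                      # a short simulated prodding motion
    D = read_laser_distance()            # frontal distance D
    d = read_ultrasonic_distance(t)      # hand-to-body distance d
    if abs(D - K * d) < TOLERANCE:       # feedback when the cane tip (k*d) meets the scene (D)
        pulse_feedback(t)
    time.sleep(1.0 / 12)                 # ~12 readings per second, as in the first prototype
```
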
Our first prototype had two major drawbacks. The first one was its weight. Tolerable
for some time, it was unacceptable for longer test periods. The second drawback was the
lack of software flexibility close to the embedded hardware, e.g., control of the vibrating
oscillator.
A second prototype was then developed, which shed the telephone
and the Arduino from the device. Its computational power came from a Cortex M3
micro-controller and was programmed via a JTAG interface. It was powered either by
its micro-USB interface or by batteries, which were placed in the device’s handle
to provide better balance. It was enhanced with a 3D accelerometer, a 3D gyroscope
and a Bluetooth module. The accelerometer provided acceleration in three directions
and inclination with respect to gravity. The first use of the accelerometer was to put
the device in a sleep mode when it rested on a table. The gyroscope informed about the
rate of rotation of the device. The Bluetooth module communicated wirelessly to mobile
phones or computers, which was useful in development as well as interacting with other
devices. The haptic feedback was an eccentric rotating mass (ERM) vibration motor.
Being much lighter and better balanced, this prototype became easier to manipulate.
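
As an example of how the accelerometer-based sleep mode could work, the sketch below flags the device as resting when the measured acceleration magnitude stays in a narrow band around gravity. The window size and threshold are our own guesses, not the values used in the prototype.

```python
from collections import deque
import math

REST_WINDOW = 50        # number of recent samples considered (illustrative)
REST_THRESHOLD = 0.05   # m/s^2: maximum spread of the magnitude to count as "at rest"

class SleepDetector:
    """Illustrative rest detection from the 3D accelerometer readings."""

    def __init__(self):
        self.samples = deque(maxlen=REST_WINDOW)

    def update(self, ax, ay, az):
        # Magnitude of the measured acceleration; close to g when the device is still.
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        self.samples.append(magnitude)
        if len(self.samples) < REST_WINDOW:
            return False
        # At rest, all recent magnitudes stay within a narrow band around gravity.
        return max(self.samples) - min(self.samples) < REST_THRESHOLD

detector = SleepDetector()
for _ in range(60):                      # simulated readings from a device lying on a table
    if detector.update(0.0, 0.0, 9.81):
        print("entering sleep mode")
        break
```
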
As with any engineering project, once the big issues are resolved, smaller ones take
center stage. Quickly, two response-type problems were revealed. They were most evi-
dent when informing the user of the presence of a pole or tree when sweeping sideways.
The first issue had to do with the vibrating motor, which was too slow to start and
stop vibrating. To address this, the vibrating motor was cast aside in favor of a small
loudspeaker onto which the user rested his/her index finger. The next problem was the
update rate of the laser; 12 readings per second was insufficient. Luckily, LightWare
Optoelectronics had just released a new version of the SF02/F with 32 measurements
per second, which we purchased immediately.
Satisfied with our creation, we had to label it to be able to refer to it. The designation
of LaserNavigator seemed fitting as it was a navigation aid using a laser rangefinder.

3.1 LaserNavigator Evaluations


With a functional LaserNavigator, it was time to evaluate the system with visually im-
paired volunteers. The tests were divided into two environments: indoors and outdoors.
The environment and results are presented in the two following subsections.

Figure F.1: Indoor evaluation. Motion capture cameras at the top with unique reflective iden-
tifier on chest, head, LaserNavigator and white cane. Door 3 is closed.

To find trial participants, we contacted the local district of the Swedish National
Association for the Visually Impaired (SRF) and solicited volunteers to test the Laser-
Navigator. Three persons kindly volunteered. They were all between sixty and eighty
years old. Two were women and the third a man.

Indoor Evaluation
An indoor experiment was designed around a staged room with doors, which was located
within a larger laboratory room. The walls of the staged room were made of plywood
and only 1.2 meters high to have a real “feel” with a white cane and the LaserNavigator,
while enabling a three-dimensional motion capture system to follow the participants’
bodies, heads and the LaserNavigator through the trials (cf. figure F.1). The tracking
was made possible with a Vicon Bonita system [21]. The evaluation scenario asked each
VIP, starting from the same place in the lab, to find the entrance to the staged room
and enter it. Once in the room, each participant had to find a second open door, go to
it, and then return to the first door; i.e., there were only two doors open per trial. There
were three trials per participant, all of them being additionally video recorded.
Figure F.2: Paths (black) taken by the three participants (one per row) over three indoor trials.
The red line shows how they used the LaserNavigator.

After a short training session, each participant performed three tests and was inter-
viewed thereupon. It was interesting to see that there was a very good correlation between
the participants’ perceptions of the tests as communicated through the interviews and
their performance captured with the motion capture system. Figure F.2 shows, in black,
the path each participant (rows) took during each trial (columns). In red, it shows how
the LaserNavigator was used. If we consider the second row of figure F.2, participant B
found the trials difficult and tiresome. In the first trial, one clearly sees wide sweeps, and
by the third trial, there is minimal sweeping. Participant A liked the device and learned
to use it quickly, which is obvious as one looks at the evolution of the paths’ lengths in the first
row of figure F.2.
As Manduchi and Kurniawan point out, experimental trials for assistive technology
are essential to grasp the users’ perspectives [2]. In our case, the indoor trials revealed
some incorrect assumptions in our proposed navigation aid. The first one had to do with how
the navigation aid was used, while the second one had to do with shape perception.
The first one was a surprise as one of the authors, and the main system developer, is
visually impaired and uses a white cane daily to navigate. The misinterpretation was an
indication that our research was focused on depth perception, i.e., technology-centric.
The research of the CyArm seems to have had the same focus. The idea was to move
the LaserNavigator back and forth between the target and the user, seeking to satisfy
the equation D = kd in order to perceive the three-dimensional scene. Associated with
this idea, the electronic proprioception was with respect to the frontal or coronal plane of
the body. During the trials, the VIPs used a sideways sweeping movement to scan the
field in front of them, as they would with a white cane. The electronic proprioception
must then be with respect to the axis of rotation of the sweeping movement rather than the
coronal plane of the body. This axis of rotation is a vertical line going through the joint
where the head of the humerus bone yokes into the shoulder.
The second discovery has to do with the VIPs having difficulties in differentiating an
open door from following a wall towards a corner of the room. Some analysis had to be
done to really pin down the issue, define it and propose a remedy. The door opening
is an abrupt change of distance. Sweeping along a wall towards a corner also increases
the target distance. In the latter case, the distance change is gentler. Solutions to these
issues are presented in the discussion below on intuitive torque feedback.
As we prepared for the outdoor trials, we added a micro-switch to the handle to
address the fact that the users swept the LaserNavigator rather than prodding the
scene. When the switch is pressed down, the LaserNavigator records the distance d
between the hand and the frontal plane of the user while providing pulsed feedback.
When the switch is released, the cane “length” l is fixed at kd. The pulsed feedback
indicates l with one meter for every pulse when operating in indoor mode, and five
meters for every pulse when in outdoor mode. The LaserNavigator then behaves like the
MiniGuide, except that the cane length l can be adjusted on the fly. For the outdoor
trials, the gain k was set to 50 such that l could be varied from less than 5 meters to 40
meters.
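
The behaviour of the switch can be summarised with the following sketch, where the class and function names are ours and the logic is a simplification of the firmware: while the switch is held, the cane length l tracks kd and is announced with one pulse per step; once released, l stays fixed and the device signals whenever the laser distance falls within l.

```python
K = 50                      # gain used in the outdoor trials
PULSE_STEP_INDOOR = 1.0     # metres of cane length per feedback pulse, indoor mode
PULSE_STEP_OUTDOOR = 5.0    # metres of cane length per feedback pulse, outdoor mode

class CaneLengthController:
    """Illustrative model of the micro-switch behaviour described above."""

    def __init__(self, outdoor=True):
        self.step = PULSE_STEP_OUTDOOR if outdoor else PULSE_STEP_INDOOR
        self.length = 0.0               # current virtual cane length l

    def switch_pressed(self, d):
        # While the switch is held, l tracks the hand-to-body distance d.
        self.length = K * d
        return int(self.length // self.step)     # number of pulses announcing l

    def obstacle_within_reach(self, D):
        # After release, l stays fixed and the device vibrates whenever the
        # laser distance D falls within the cane length.
        return D <= self.length

ctrl = CaneLengthController(outdoor=True)
print(ctrl.switch_pressed(0.5))        # hand 0.5 m out -> l = 25 m -> 5 pulses
print(ctrl.obstacle_within_reach(18))  # True: a wall 18 m ahead is "felt"
print(ctrl.obstacle_within_reach(32))  # False: beyond the current cane length
```
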

Outdoor Evaluation
The outdoor evaluation of the LaserNavigator was set around a parking lot on the outskirts
of the university (cf. figure F.3). The gravel-covered asphalt path around the parking
lot was rectangular, with one set of stairs with three steps (2). The parking lot had three
buildings on it (B2, B3, B4). On the outer sides, there was a major university building
(B1), a road on two sides and a dirt mound with trees on the fourth side. The dirt
mound was covered with snow during the time of the evaluation. Along the roadside, the
sidewalk had trees on either side and was intersected by two access drives to the parking
lot (4 & 5). A pair of poles was placed at each access drive to simulate traffic lights.
The participants were provided with a 3D scaled model of the buildings, parking lot
and trees, which they could feel to get an idea of their navigation task. This model
is shown in figure F.3. They got a chance to practice and get reacquainted with the
LaserNavigator indoors along long corridors. Two volunteers performed the navigation
task two times with less and less support from the accompanying researcher. The third
volunteer performed it only once and found the exercise tiresome.

Figure F.3: Model of the outdoor trial environment.
The scaled model turned out to be most helpful to the participants who became blind
later in their life. With the LaserNavigator, they could detect where the buildings and
the buildings’ corners were. They very often used the ability to change the cane length on
the fly to resolve objects at known distances. Being able to have a 40-meter cane allowed
them to feel a corridor in between the trees along the path parallel to the road. They
even discovered that the branches went over the path when aiming the device upward.
An interesting outcome from the post-trial interviews was that the VIPs grasped the
navigation aid as a “white cane” and were startled during the tests to discover that
they were feeling the world far away.
Another new revelation was the difference between distorted depth versus real depth.
Having a fixed length on the cane, although it was set by electronic proprioception, gave a
different feeling of perception than the continuous length adjustment used in the indoor
trials. With the latter, a depth difference of 1 m at a distance of 20 m with a gain k = 50 corresponds
to a hand movement ∆d = (2100 cm − 2000 cm)/50 = 2 cm. With a fixed-length cane of 20 m and a gain k = 50, the
difference remains 1 m. The fixed-length cane provides a real depth while the continuous
length adjustment gives a distorted depth. We were not able to evaluate which methods
the VIPs preferred. Akita et al. showed, through their tests, that users could correctly
estimate depth with a distorted depth method [13].
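
Using ∆D for a depth difference in the scene and ∆d for the difference conveyed to the hand (the subscripts below are our own labels), the two modes can be summarised as

\[
  \Delta d_{\text{hand}} = \frac{\Delta D}{k} = \frac{100\ \text{cm}}{50} = 2\ \text{cm}
  \qquad \text{(continuous adjustment)},
\]
\[
  \Delta d_{\text{felt}} = \Delta D = 1\ \text{m}
  \qquad \text{(fixed cane length } l = kd\text{)}.
\]
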

4 Discussion
When blessed with the faculty of sight, it is so easy and instantaneous to perceive one’s
physical surroundings. It is quite difficult to provide this information to VIPs, as exem-
plified by our trials described above. Developing an electronic navigation aid to provide
this information is a challenge. Many, including us, have attempted to meet this chal-
lenge. Contemplating these attempts, which include the Sighted Wheelchair, brings some
insights that we share here.

4.1 Intuitive Navigation Aid


The first reflections address the idea of an intuitive electronic navigation aid, which
is obvious only in hindsight. The white cane, or any cane, is an intuitive device that
the user does not need much training to begin to use (although training improves
efficiency). The user knows where the cane is and feels forces and torques from the cane
as it touches different surfaces at its tip or on its side. The issue with the white cane is
only its length when trying to perceive the world beyond a meter and a half.
The Sighted Wheelchair is also intuitive; its drawbacks are its weight and size. With
its scanning laser, it has a clear picture of what is in front and it is able to provide the
three dimensional information with force feedback in different directions.
A buzzing vibration is not intuitive feedback, although one can be trained to
interpret it until it becomes second nature. An intuitive electronic navigation aid
should feel like a cane. It should push back when poking an object. It should provide
a torque when sweeping sideways and hitting an open door’s frame. The distinction
between the two makes it clear to the user if he/she is sweeping along the corner of a
room or across an open door. In the former case, the feedback should be a gentle pull
towards the corner and gentle push away from the corner. In the case of sweeping across
an open door, it should be a clear torque feedback about the vertical axis.
This type of feedback has already been described in the publications mentioned above.
The CyArm and the Haptic Handheld Wayfinder do provide longitudinal force feedback
that can indicate prodding, i.e. push and pull. Gallo et al.’s complex flywheel with abrupt
stop does provide torque feedback. One could also consider Sakai et al.’s GyroCube,
although it is maybe a little large [22]. We did experiments with a flywheel mounted to the
vertical axis of a small electrical motor, but have not yet achieved any satisfying performance.
In their comparison, Hemmert et al. pointed to other potential solutions, e.g., weight
shifting, when power and battery life are limited.
Intuitive feedback is attractive, but it must be accompanied with a combination of
sensors.

4.2 Sensor Integration


The laser rangefinder has proved to be quite a desirable instrument because of its narrow
beam and long range. With its narrow beam, it can detect an open door in a wall, both
at short and long distances. With its long range it can provide the feel of a corridor
between trees along a sidewalk. But alone, it provides limited information, so integrating
other sensors is of the essence, especially as technology continually reduces sensor
sizes.

Hoyle along with Gallo et al. added a wide beam ultrasonic sensor to warn VIPs of
protruding objects above the waist. Sensor fusion can go further and come closer to the
utopian intuitive handheld electronic navigation aid. If we take the example of the 3D
accelerometer and gyroscope onboard the LaserNavigator, we could provide interesting
feedback. When, in a steady state, the laser rangefinder suddenly measures a shorter
distance D, e.g., when someone walks in front of the VIP, the cane should push back
the holding hand. When sweeping, which is detected by the accelerometer and gyroscope,
across a pole or a door frame, the measured distance D is also shortened. The feedback
should then be a torque about a vertical axis.
The accelerometer can also detect when a tired user points the navigation aid down-
wards such that the measurement is to the ground. The warning feedback could be a
short vibration.
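
A sketch of such a fusion rule is shown below; the thresholds and the exact conditions are illustrative assumptions rather than a tuned implementation.

```python
import math

# Illustrative thresholds; the actual values would have to be tuned on the device.
DROP_THRESHOLD = 0.5            # m: how much D must shorten between readings to count as "sudden"
SWEEP_RATE = 0.5                # rad/s: yaw rate above which the motion is treated as a sweep
TILT_DOWN = math.radians(-30)   # pitch below which the device is assumed to point at the ground

def choose_feedback(d_prev, d_now, yaw_rate, pitch):
    """Pick a feedback type from the fused laser and IMU readings.

    d_prev, d_now -- consecutive laser distances D (metres)
    yaw_rate      -- rotation rate about the vertical axis from the gyroscope (rad/s)
    pitch         -- device inclination from the accelerometer (rad, negative = downwards)
    """
    if pitch < TILT_DOWN:
        return "short warning vibration"               # user is measuring the ground
    if d_prev - d_now > DROP_THRESHOLD:
        if abs(yaw_rate) > SWEEP_RATE:
            return "torque about the vertical axis"    # swept across a pole or door frame
        return "push back on the holding hand"         # something stepped into the beam
    return "no feedback"

print(choose_feedback(6.0, 2.0, 0.05, 0.0))                # steady hand, person walks in front
print(choose_feedback(6.0, 2.0, 1.2, 0.0))                 # sweeping across a pole
print(choose_feedback(6.0, 5.9, 0.0, math.radians(-45)))   # aimed at the ground
```
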
In our indoor trials, we noticed that the users were sweeping the scene with the
LaserNavigator. We realized that the electronic proprioception should be with respect to
the vertical axis going through the shoulder rather than the frontal plane. We successfully
experimented with a second electronic unit that also had a 3D accelerometer, which could
be worn on the upper arm. The horizontal distance of the elbow from the vertical shoulder
axis is simply the product of the upper arm length with the ratio of the acceleration
normal to the upper arm to the acceleration due to gravity. To be clearer, let the
3D accelerometer measure the acceleration a_z along the upper arm, a_x the acceleration
perpendicular to the arm aiming forward and a_y pointed to the side. When the upper
arm is aligned with the vertical axis, a_z is equal but opposite to the gravity of earth, i.e.
−g, which is −9.8 m/s², and a_x and a_y are 0 m/s². As the elbow is moved backwards, the
distance between the vertical shoulder axis and the elbow in the x direction is

e_x = L · a_x / g,        (1)

where L is the upper arm’s length. The same goes for e_y and a_y as the elbow is moved
to the side away from the body. The information must then be communicated from the
wearable electronic unit to the navigation aid. This can be done with a wired serial
connection or, in our case, using the Bluetooth modules on each electronic unit.
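
Equation (1) translates directly into a small routine; the function below is an illustrative sketch (names and example values are ours), assuming the arm-worn accelerometer is sampled while the arm moves slowly enough that gravity dominates the reading.

```python
G = 9.81  # m/s^2

def elbow_offset(acc_x, acc_y, upper_arm_length):
    """Estimate the horizontal elbow offset from the vertical shoulder axis.

    acc_x, acc_y      -- accelerations perpendicular to the upper arm (m/s^2),
                         measured by the arm-worn accelerometer under quasi-static motion
    upper_arm_length  -- L, the wearer's upper-arm length in metres

    Returns (e_x, e_y) as in equation (1): e_x = L * a_x / g.
    """
    e_x = upper_arm_length * acc_x / G
    e_y = upper_arm_length * acc_y / G
    return e_x, e_y

# Example: a 0.30 m upper arm tilted so that 25% of gravity appears on the x axis,
# giving an elbow offset of 7.5 cm in the forward/backward direction.
print(elbow_offset(0.25 * G, 0.0, 0.30))
```
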

4.3 System Integration

Wireless communication between different units offers a new set of possibilities, where
imagination is the limit. What could be done by combining a navigation aid with a mag-
netometer (compass) and Bluetooth along with a phone running BlindSquare? During the
feedback interviews from the trial, the VIPs described different needs, such as finding
the way back to the house after hanging laundry outdoors. In other words, information
fusion from different existing systems could lead to interesting performances.

4.4 Three Research Paths


As we continue our pursuit of an intuitive handheld electronic navigation aid,
we see three parallel research paths. The first one is the continued development of force
and torque feedback. There are initially incompatible requirements, such as clear
feedback with low power consumption, long battery life, compactness and
low weight. The second research path is sensor fusion, such as combining a laser
rangefinder and an inertial measurement unit (IMU). The latter are getting smaller and
cost less while being offered with impressive software libraries to make their use accessi-
ble. The third parallel path is in system integration, where different systems (e.g., smart
phones, LaserNavigators, wireless earbuds) could be added and removed to enhance nav-
igation and comfort as needed by the user.

5 Conclusions
We did not develop an intuitive handheld navigation aid for VIPs. But in our attempt
to do so, we managed to define what an intuitive handheld navigation aid is, at least for
ourselves, and are working towards that goal. It is a handheld device that provides
feedback similar to a white cane but with a sensing range beyond two meters from the
user. Looking at other research ideas, we find good haptic feedback concepts that are
interesting to consider. The LaserNavigator we developed uses a frontal laser rangefinder,
which provides long-distance coverage with a narrow beam that addresses the open-
door-in-the-wall issue. The LaserNavigator uses electronic proprioception to continuously or
manually adjust the length of the “cane”. Electronic proprioception was achieved by a
backwards-aimed ultrasonic sensor, alone or in combination with a 3D accelerometer worn on
the upper arm. Indoor trials with VIPs revealed that proprioception should be done with
respect to the vertical axis going through the shoulder rather than the frontal or coronal
plane. Outdoor trials disclosed that light poles and trees are difficult to detect without
torque feedback. We reckon that sensor fusion, low power force and torque feedback
and system integration are three parallel research paths to continue the search towards
intuitive navigation aids.

Acknowledgement
We are thankful to the SRF (Swedish National Association for the Visually Impaired) and
their volunteers. We are grateful for the initial funding of the project by the Centrum
för medicinsk teknik och fysik (CMTF) with the European Union Objective 2 North
Sweden structural fund. Financial support was then provided by the divisions of Health
Sciences and EISLAB at Luleå University of Technology. We are thankful to the Kempe
Foundation for financially supporting the acquisition of the motion capture system
used during the indoor experiments.

References
[1] World Health Organization, “Visual impairment and blindness,” http://www.who.int/mediacentre/factsheets/fs282/en/, November 2014, accessed 2016-03-21.

[2] R. Manduchi and S. Kurniawan, Eds., Assistive technology for blindness and low
vision. ISBN: 9781439871539, Boca Raton, FL, USA: CRC Press, 2012.

[3] T. Pey, F. Nzegwu, and G. Dooley, Functionality and the needs of blind and partially
sighted adults in the UK: a survey. Guide Dogs for the Blind Association, 2007.

[4] R. L. Ackoff, Redesigning the future. ISBN: 0471002968, New York, USA: John
Wiley & Sons, 1974.

[5] B. Hoyle and D. Waters, Assistive Technology for Visually Impaired and Blind Peo-
ple. London: Springer London, 2008, ch. Mobility AT: The Batcane (UltraCane),
pp. 209–229.

[6] B. S. Hoyle, “Letter to the editor,” Assistive Technology, vol. 25, no. 1, pp. 58–59,
2013.

[7] GDP Research, “The miniguide mobility aid,” http://www.gdp-research.com.au/minig 1.htm, accessed 2016-03-21.

[8] S. Gallo, D. Chapuis, L. Santos-Carreras, Y. Kim, P. Retornaz, H. Bleuler, and R. Gassert, “Augmented white cane with multimodal haptic feedback,” in International Conference on Biomedical Robotics and Biomechatronics. IEEE, 2010, pp. 149–155.

[9] T. Amemiya, Kinesthetic Cues that Lead the Way. INTECH Open Access Publisher,
2011. [Online]. Available: http://cdn.intechopen.com/pdfs-wm/14997.pdf

[10] F. Hemmert, S. Hamann, M. Löwe, A. Wohlauf, J. Zeipelt, and G. Joost, “Take me by the hand: haptic compasses in mobile devices through shape change and weight shift,” in Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries. ACM, 2010, pp. 671–674.

[11] HumanWare, “Trekker Breeze,” http://www.humanware.com/en-usa/products/blindness/talking gps/trekker breeze/ details/id 101/trekker breeze handheld talking gps.html, accessed 2016-03-21.

[12] “BlindSquare,” http://blindsquare.com/, 2016, accessed 2016-03-21.

[13] J. Akita, T. Komatsu, K. Ito, T. Ono, and M. Okamoto, “CyArm: Haptic sensing
device for spatial localization on basis of exploration by arms,” Advances in Human-
Computer Interaction, vol. 2009, pp. 1–6, 2009.

[14] D. Innala Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle avoidance using hap-
tics and a laser rangefinder,” in Advanced Robotics and its Social Impacts (ARSO),
2013 IEEE Workshop on, 2013, pp. 76–81.

[15] “The sighted wheelchair - successful first test drive of ”sighted” wheelchair (YouTube
video),” http://www.youtube.com/watch?v=eXMWpa4zYRY, 2011, accessed 2014-
02-24.

[16] Novint, “Novint Falcon,” http://www.novint.com/index.php/novintfalcon, November 2012, accessed 2016-03-21.

[17] SICK Inc., “LMS100 and LMS111,” http://www.sick.com/us/en-us/home/products/product news/laser measurement systems/Pages/lms100.aspx, accessed 2014-02-24.

[18] Arduino, “Arduino MEGA ADK,” https://www.arduino.cc/en/Main/ArduinoBoardMegaADK, accessed 2016-04-29.

[19] Parallax Inc., “PING))) Ultrasonic Distance Sensor,” https://www.parallax.com/product/28015, accessed 2016-05-02.

[20] Lightware Optoelectronics, “SF02/F (50 m),” http://www.lightware.co.za/shop/en/drone-altimeters/7-sf02f.html, accessed 2016-04-29.

[21] VICON, “Bonita Motion Capture Camera,” http://www.vicon.com/products/camera-systems/bonita, accessed 2016-03-21.

[22] M. Sakai, Y. Fukui, and N. Nakamura, “Effective output patterns for torque display
“GyroCube”,” in 13th International Conference on Artificial Reality and Telexis-
tence, 2003, pp. 160–165.
