
UNIVERSITY OF KENT

AT CANTERBURY

3-D IMAGING USING OPTICAL COHERENCE RADAR

A thesis submitted to The University of Kent at Canterbury in the subject of Physics for the degree of Doctor of Philosophy.

By Mauritius Seeger

December 1997

© Copyright 1997 by Mauritius Seeger


Contents
List of Tables
List of Figures
Abstract
Acknowledgements

1 Three Dimensional Imaging Techniques
  1.1 Introduction
  1.2 Non-Optical 3-D Measurements
    1.2.1 Stylus Scanning
    1.2.2 NMR and CT
    1.2.3 Ultrasound
  1.3 Optical Techniques
    1.3.1 Stereo Pair Imaging
    1.3.2 Confocal Microscopy
    1.3.3 Fringe Projection Techniques
  1.4 Optical Interferometry
    1.4.1 Two Wavelength
    1.4.2 Electronic Speckle Pattern Interferometry
    1.4.3 Low-Coherence Interferometry
    1.4.4 Channelled Spectrum
  1.5 Interference Detection using a CCD Detector
    1.5.1 Automated Phase Measurement Microscopy
    1.5.2 CCD Based Low-Coherence Interferometry
  1.6 Summary

2 Surface Topography using Coherence Radar
  2.1 Introduction
  2.2 Principles of Coherence Radar
    2.2.1 Phase Stepping
    2.2.2 Surface Finding
  2.3 Experimental System
    2.3.1 Michelson Interferometer
    2.3.2 Imaging Optics
    2.3.3 Translation Devices
  2.4 Data Processing
  2.5 Surface Profile Measurements
  2.6 Noise Thresholding and Surface Interpolation
  2.7 Evaluation of Noise Thresholding
  2.8 Analysis of Hypervelocity Impact Craters
  2.9 Noise
    2.9.1 Phase Stepping Error
    2.9.2 PZT Hysteresis
    2.9.3 Vibrational Noise
    2.9.4 Image Noise
  2.10 Accuracy of Surface Location
  2.11 Empirical Evaluation of Accuracy
  2.12 Conclusion

3 Imaging of Multiple Reflecting Layers
  3.1 Introduction
  3.2 Theoretical Considerations
    3.2.1 Resolving Multiple Layers
    3.2.2 Simulation of Signal Strength from a Multilayer Object
    3.2.3 Effect of the Object Medium on the Measurement
  3.3 Experimental
    3.3.1 Method
    3.3.2 Investigation of 20 Glass Plates
    3.3.3 Solar Cell
  3.4 Conclusion

4 In Vitro Imaging of the Human Ocular Fundus
  4.1 Introduction: Properties of the Human Fundus
    4.1.1 The Human Eye
    4.1.2 Human Fundus Sample and Tissue Preparation
    4.1.3 Optical Properties of the Eye
    4.1.4 Light Scattering in Biological Tissue
    4.1.5 Illumination Wavelength
  4.2 Signal Processing
  4.3 Experimental
    4.3.1 Coherence Radar
    4.3.2 Coherence Profile Broadening through Dispersion
    4.3.3 Fundus Imaging of a Model Eye
    4.3.4 In Vitro Examination of Fundus Layers
  4.4 Discussion
    4.4.1 Data Acquisition and Processing Speed
    4.4.2 Speed Optimisation
    4.4.3 OCT versus CCD Based Interferometry
  4.5 Conclusion

5 Balanced Detection
  5.1 Introduction
  5.2 Balanced Detection
  5.3 Dynamic Range
  5.4 Experimental System
  5.5 Data Processing
  5.6 Experimental Results
  5.7 Conclusion

6 Conclusion
  6.1 Summary
  6.2 Conclusion
  6.3 Future Work

A Digital Imaging System
  A.0.1 CCD Sensor
  A.1 CCD Camera
    A.1.1 The TM520 Video Camera
    A.1.2 The Thomson Linescan Camera
  A.2 Frame Grabber
    A.2.1 The Bit Flow Frame Grabbers
  A.3 Noise
  A.4 Sensitivity

B Publications Arising from this Thesis
  B.1 Refereed Journal Papers
  B.2 Conference Papers

Bibliography

List of Tables
2.1 Values of approximate positional error based on interference amplitude error
4.1 Comparison of the current imaging hardware (see also appendix A) with commercially available high performance components
4.2 Potential acquisition speed for longitudinal and transverse sections when using 10 intensity samples (n = 10)
A.1 Digital imaging system performance determined experimentally at high gain setting

List of Figures
1.1 Stereo pair imaging
1.2 Confocal system
1.3 Formation of Moiré fringes
1.4 Michelson interferometer
1.5 Interference obtained with a low-coherence source
1.6 Transverse scanning in a fiberised low-coherence reflectometer
1.7 Configuration of interference microscopes
1.8 Overview of three dimensional measurement techniques
2.1 Coherence Radar experimental arrangement
2.2 The Coherence Radar experimental system
2.3 Coherence function of the super-luminescent diode at a bias current of 139 mA
2.4 Telecentric telescope
2.5 Flow chart of data acquisition and hardware control
2.6 5 pence coin (the rulings shown are 0.5 mm)
2.7 Surface topography of a 5 pence coin; depth is indicated by colour (scale in µm)
2.8 Profile cross-section at position indicated by dashed line in figure 2.7
2.9 Surface of hemispherical crater, depth indicated by colour (microns)
2.10 Surface profile of hemispherical crater after thresholding. The central spike is a remaining rogue point.
2.11 Surface with missing points interpolated
2.12 Three dimensional representation of crater 1 (head-on impact)
2.13 Surface topography of crater 1 (head-on impact)
2.14 Cross section showing surface profile of crater 1 (position indicated in figure 2.13)
2.15 Photograph of crater 2 resulting from a head-on impact (the rulings shown are 0.5 mm)
2.16 Surface topography of crater 2 (head-on impact) - compare to photograph in figure 2.15
2.17 Surface topography of crater 3 (impact 70° to normal)
2.18 Surface topography of crater 4 (impact 70° to normal)
2.19 Zernike fit of crater 1
2.20 Zernike fit of crater 2
2.21 Zernike fit of crater 3
2.22 Zernike fit of crater 4
2.23 Numerical simulation of low-coherence interferogram
2.24 Error in demodulating Gaussian interference amplitude
2.25 Hysteresis of the PZT material
2.26 Amplitude error as a result of PZT hysteresis
2.27 Numerical simulation of interference in the presence of mechanical vibrations
2.28 Distribution of amplitude error
2.29 Relationship between image noise (σ_ccd) and the resultant amplitude error (σ_A)
2.30 Peak search: relationship between amplitude error (Ea) and position error (Ed)
2.31 Interference amplitude vs. depth along a line of 512 pixels, showing measured surface position
2.32 RMS deviation of surface position from line of best fit
3.1 Interfaces separated by Δd = 11λ
3.2 Interfaces separated by Δd = 11λ + λ/8
3.3 Interfaces separated by Δd = 11λ + λ/4
3.4 Model of multilayer object composed of many identical glass plates
3.5 Interference amplitude versus interface number in a stack of 100 glass plates
3.6 Dynamic range required to detect the interference signal from interface j in a stack of 100 glass slides (200 interfaces)
3.7 Focal plane shift caused by refractive object medium
3.8 Interferogram of first 8 glass plates in a stack of 20
3.9 Average of interference amplitude versus depth (the amplitude is calculated as an average of 10 neighbouring pixels)
3.10 Log of maximum interference amplitude, Ae(j), versus interface number, j
3.11 Extraction of a cross-sectional image from a set of transverse images
3.12 Image of the Hubble Space Telescope solar cell showing the position of the extracted cross section relative to the impact site
3.13 Tomographic image of solar cell (geometric distance is given in µm in parentheses)
3.14 Schematic view of solar cell cross-section
4.1 Anatomy of the human eye (refractive index shown in parentheses)
4.2 Schematic representation of the fundus layers
4.3 Cross section of the fundus tissue container
4.4 Stainless steel sample container
4.5 Fundus tissue in the sample container (scale graduation = 0.5 mm)
4.6 False path interpretation due to photon scattering in a diffusive medium
4.7 Plot of interference amplitude for dispersive and non-dispersive paths
4.8 Experimental arrangement to avoid strong back-reflections at the air-glass boundary
4.9 Orientation of longitudinal and transversal sections relative to the eye
4.10 Experimental arrangement for in vivo imaging using Coherence Radar
4.11 Experimental arrangement to image a model eye using corrective optics
4.12 Longitudinal section of the model fundus
4.13 Post mortem fundus tissue showing the approximate position of longitudinal sections obtained using Coherence Radar
4.14 Longitudinal section (1) of post mortem fundus tissue
4.15 Longitudinal section (2) of post mortem fundus tissue
4.16 Operations performed by Coherence Radar
5.1 Mach-Zehnder interferometer
5.2 Experimental arrangement implementing a balanced Coherence Radar technique
5.3 Data processing
5.4 Intensity variation along the CCD line-scan sensor
5.5 Remaining intensity variation after subtraction of signals from CCD line-scan 1 and 2
5.6 Anatomy of step structure
5.7 Interference produced by a flat mirror
5.8 Interference amplitude
5.9 Interference amplitude and surface profile (white line) of periodic step
5.10 Interference amplitude peaks produced by air-glass reflections in a stack of 20 glass plates
A.1 Noise distribution at maximum gain
A.2 Noise distribution at minimum gain
A.3 Experimental configuration for the measurement of CCD camera sensitivity
A.4 Sensitivity calibration (exposure time 1/60 second)

Abstract

In this thesis we explore the application of optical Coherence Radar to the study of surface topography and transparent multilayer structures. In particular, we explore the potential of Coherence Radar to obtain tomographic images or sections of the human retina in vivo. Coherence Radar is an interferometric method which relies on the use of low-coherence illumination to measure the absolute position of reflective layers. The measurement of surface topography and, in particular, the study and analysis of hypervelocity impacts using Coherence Radar is investigated. We show that the system can deliver topographic measurements with a depth accuracy of about 2 µm and that it is ideally suited for measurements of rough surfaces containing large discontinuities and steep walls where sub-micron accuracy is not required. We describe how Coherence Radar can be used to measure the position of reflecting interfaces in objects which contain many partially transmitting and reflecting layers, and demonstrate its application to the assessment of impact damage in a Hubble Space Telescope solar cell. The measurement of the human retina is investigated. We successfully obtain longitudinal images of post-mortem fundus tissue and show that Coherence Radar can potentially offer an attractive alternative to beam scanning low-coherence systems. Finally, we describe a modified Coherence Radar system implementing balanced detection by using two CCD line-scan cameras and a Mach-Zehnder type interferometer. We show that this technique can significantly reduce the required dynamic range of the analogue to digital converter in the presence of a large number of highly reflective layers.


Acknowledgements


First of all, I would like to thank my supervisor, Dr. Chris Solomon, who has given me invaluable guidance. I couldn't have wished for someone more supportive and understanding. Chris, thanks for all the help! I would also like to thank all my friends in the Physics Department for their moral support and useful advice. In particular Dr. Adrian Podoleanu, George Dobre and Dr. Pippa Salmon, with whom I shared a laboratory and who have been extremely helpful and great fun. Finally, I would like to thank my parents for their help and support.


Chapter 1

Three Dimensional Imaging Techniques


1.1 Introduction

This thesis investigates the use of CCD based low-coherence interferometry to obtain three dimensional images of opaque objects, multilayer structures and biological material. In particular, it aims to assess the feasibility of applying this technique to the study of the human retina in vivo. In order to place this method in context, a brief review of other three dimensional imaging techniques is provided. In the following sections, we first discuss non-optical measurement techniques such as nuclear magnetic resonance imaging and computed tomography (which offer the ability to penetrate opaque substances) and stylus methods which allow profiling with atomic scale resolution. We also review optical methods which allow surface topography measurements and sectioning of translucent scattering material - particularly, a number of interferometric methods which have been applied to the high resolution measurement of surface topographies. Finally, we present a number of volume and surface imaging techniques which are based on low-coherence or white-light interferometry and outline the advantages of using CCD based detection in conjunction with these methods.

1.2 Non-Optical 3-D Measurements

The measurement techniques presented here can be broadly categorised into those that require mechanical contact, such as the stylus scanning methods used for surface analysis, and those which are essentially non-contact methods, such as NMR, CT and ultrasound, which are mainly used in medicine.

1.2.1 Stylus Scanning

Scanning Probe Microscope

The scanning probe microscope (SPM) [1] relies on scanning a very fine probe tip along the surface of a sample. Electrical or magnetic interactions between the probe tip electrode and the sample allow the measurement of electrical conductivity, electronic structure, atomic structure and topography. By mounting the probe tip on a piezoelectric transducer, it can be scanned across the sample in three dimensions. A feedback mechanism adjusts the probe tip height so as to maintain a constant voltage between the tip and the sample. Thus, if the sample is displaced horizontally with respect to the stylus, the probe tip follows the profile of the surface. A measure of the resultant vertical tip displacement and the horizontal position of the sample can then be used to construct a profile of the sample. SPM offers resolution at an atomic scale (≈0.1 nm) over an area of ≈1 µm². The main applications of the SPM include high resolution surface profiling, spectroscopy, electro-chemistry, nanofabrication and lithography. Since the development of the first SPM, the scanning tunnelling microscope (STM), a variety of other non-contact scanning probe microscopes, such as the atomic force microscope (AFM), have been developed [2]. The AFM is especially suited for the inspection of optical surfaces which are non-conducting [3], and although it does not have atomic resolution it offers a superior scanning range of up to 200 µm.

Stylus Contact Scanning

The most widely used instrument for measuring surface topography is the stylus profiler. In contrast to the scanning probe microscope, the stylus makes physical contact with the sample surface. The vertical position of the stylus is then a measure of the surface height at the point of contact between the stylus and the sample. The stylus is loaded with a small force to ensure contact with the surface as the sample or the stylus is moved horizontally at a constant speed. The resultant vertical displacement of the stylus is converted to an electrical signal using a linear variable differential transformer (LVDT) [2]. This height information, together with the horizontal position of the stylus with respect to the sample, is then stored and processed digitally to produce a surface profile. Lateral and height resolutions of 0.5 µm and 0.1 nm respectively, over a lateral range of 40 mm, have been demonstrated [2]. The lateral resolution is limited mainly by the shape and tip size of the stylus, but is also determined by features of the sample, since the resulting surface profile is given by the convolution of the sample surface and the stylus tip. The vertical, or height, resolution is limited by noise in the stylus position sensor (LVDT). The stylus instrument can achieve very high depth resolution over a large surface area and its transverse resolution is superior to most optical methods. However, mechanical contact between the stylus and the sample can cause scratches in soft surfaces and thus reduces the range of possible applications. In addition, the measurement procedure is slow when compared to optical techniques.
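To make the tip-size limitation concrete, the following sketch (not from the original text; all values are illustrative assumptions) simulates the tip-broadening interaction described above. For a flat-ended tip the "convolution" of surface and tip reduces to a running maximum:

```python
import numpy as np

# Minimal sketch, assuming a flat-ended stylus tip: the recorded profile is
# the surface dilated by the tip, i.e. a running maximum over the tip width.
dx = 0.5e-6                                 # lateral sampling (m)
surface = np.zeros(2000)
surface[800:810] = -5e-6                    # a 5 um wide, 5 um deep pit

w = 9                                       # tip width in samples (4.5 um)
measured = np.array([surface[max(i - w // 2, 0): i + w // 2 + 1].max()
                     for i in range(surface.size)])
# The pit now appears only ~1 um wide: features narrower than the tip are
# distorted, illustrating the convolution effect noted in the text.
```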

1.2.2 NMR and CT

Nuclear magnetic resonance (NMR) or magnetic resonance imaging (MRI) and x-ray computed tomography (CT) have been extensively used in medicine due to their ability to image sections of optically opaque media. Both conventional CT and MRI images offer an in-plane resolution of ≈1 mm (FWHM) [4, 5]. Unlike CT, NMR/MRI also conveys information about chemical composition and (blood) flow velocity. Studies of the eye have been performed using CT and NMR/MRI techniques [6]. However, the relatively low resolution of both methods has limited their application to the detection of foreign bodies in the eye and to the elucidation of pathological disease mechanisms [7].


In addition, there are risks associated with CT due to the use of x-rays. The strength of CT and NMR/MRI methods lies in their ability to penetrate optically opaque material over a large distance, allowing the visualisation of hidden structures.

1.2.3 Ultrasound

Ultrasound (US) imaging operates on the principle of determining the round-trip time of sound emitted by a transducer and reflected from a target. Short pulses of high frequency sound are emitted and detected via a transducer so that the echo delay resulting from the target distance can be determined. Imaging can be performed by scanning a directional transducer over the area of interest. Higher frequencies enable a better spatial resolution but reduce the penetration depth. In practice the usable frequency range of ultrasound is limited to 3-10 MHz [4]. Blessing et al. [8] have demonstrated the ability of US to measure the surface roughness of machine tool surfaces with a roughness of 0.1 µm rms. US is routinely used in medical diagnosis, especially for foetal examination during pregnancy. Studies of the eye have been performed using 3-D imaging US [9] and colour Doppler ultrasound [10]. The maximum resolution of conventional ocular ultrasound is limited to 150 µm in typical clinical instruments [7]. Recent developments in high frequency ultrasound, however, have allowed resolutions of 20 µm, at the cost of reduced penetration depth (4 mm) [7]. A severe drawback of US examinations of the eye is the requirement to maintain physical contact between the patient's eye and the transducer via a saline liquid or gel. However, US offers a convenient method for volume imaging in optically opaque media, and requires only a small transducer in contact with the area of interest.
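As a minimal numerical illustration of this ranging principle (the speed of sound and echo delay below are assumed values, not thesis data):

```python
# Ultrasound ranging sketch: target depth from the echo round-trip time,
# z = v * t / 2.
v = 1540.0        # assumed speed of sound in soft tissue (m/s)
t = 26e-6         # assumed round-trip echo delay (s)

z = v * t / 2     # one-way target depth (m)
print(f"target depth = {z * 1e3:.1f} mm")   # -> 20.0 mm
```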

1.3 Optical Techniques

Using optical techniques for 3-D imaging offers a number of advantages, which may be summarised as follows:

- Non-invasive and non-contact measurement procedure
- Refractive and reflective optics are easily designed and widely available
- Low health risk, since visible light is non-ionising

1.3.1 Stereo Pair Imaging

Stereo pair analysis is a technique for obtaining 3-D information from 2-D image pairs. The implementation of the stereo pair method is, in principle, independent of the means by which the stereo images are obtained. Therefore, one may fundamentally consider this as a data processing technique rather than a measurement technique. Three dimensional information can be recovered from two stereo images by a method of pair matching, based on triangulation. A topographic map of the object surface can be derived provided the stereo images show the same portion of the object surface from two distinct angles. The height of a feature, z, is derived by measuring the disparity, d, which is the difference in position of an object feature between the left and right image. They are related by [11]:


d = BF/z    (1.1)

where B and F are the baseline (the distance between the two cameras) and the focal length respectively.

Figure 1.1: Stereo pair imaging

Figure 1.1 shows the production of two stereo images and the resultant disparity between the locations of features in images 1 and 2. To automate the disparity measurement, an algorithm is used to identify, or match, two corresponding features in the images. The probability of finding a correct set of matching features decreases with larger baselines, since the range of disparities is increased. However, given a correct match, the feature height, z, is determined more accurately using larger baselines. Thus, there is a trade-off between resolution and reliability of the measurements. To overcome the problem of incorrect feature matching, a multiple baseline algorithm has been developed which reconstructs a surface from multiple stereo pairs [11]. Since stereo images may be obtained by a number of means, the size of the objects may vary from the very large, such as a city photographed by a passing airplane, to small molecular structures imaged using an electron microscope. However, the stereo pair technique suffers from inherent shadowing of features, as indicated in figure 1.1. This makes the method unsuitable for surfaces containing deep holes or steep walls.
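A minimal sketch of the depth computation implied by equation 1.1 (baseline, focal length and disparity are assumed values):

```python
# Depth from stereo disparity, rearranging equation 1.1 to z = B*F/d.
B = 0.12          # assumed baseline between the two cameras (m)
F = 0.035         # assumed focal length (m)
d = 8.4e-5        # assumed measured disparity on the sensor (m)

z = B * F / d     # feature distance (m)
print(f"feature distance z = {z:.2f} m")    # -> 50.00 m
```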

1.3.2 Confocal Microscopy

Confocal microscopy is a powerful method widely used for non-destructive optical sectioning of thick translucent material or for imaging of opaque surfaces, and has found considerable use in biomedical and materials science applications. The superb sectioning capability of confocal microscopes is achieved by the use of a point detector. As shown in figure 1.2, light originating outside the focal plane is largely rejected by the confocal aperture and does not contribute to the image. The axial position, z, of the section can then be adjusted by altering the plane of the confocal aperture (which is conjugate to the section plane). Since a point detector must be used in order to maintain confocality, either the specimen or the optical arrangement has to be raster scanned to form 2-D images. In conventional microscopes, this is best implemented by the use of a translation stage which moves the object. Greater speeds can be obtained by using a (usually laser) beam-scanning arrangement or a rotating Nipkow disk, which is fitted with many pinholes and which is also used in Tandem Scanning Microscopes (TSM) [12].

Figure 1.2: Confocal system

The depth sectioning property of a confocal arrangement, such as that depicted in figure 1.2, is determined by the intensity response, I(u), of a point object. Paraxial theory [13] predicts the depth point spread function of a confocal arrangement (figure 1.2). For a point object¹, the intensity response, I(u), is given by:

I(u) = [sin(u/4) / (u/4)]⁴    (1.2)

where u is the normalised axial distance, which is related to the real axial distance, z, and the aperture angle α (as shown in figure 1.2) by:

u = (8π/λ) z sin²(α/2)    (1.3)

The sectioning capability, or depth resolution, of a confocal system may be defined as the full width at half maximum (FWHM) of the intensity variation along the z axis, I(z), as given by equation 1.2, and is primarily determined by the angle α shown in figure 1.2.

¹ For a plane object the intensity response is given by I(u) = [sin(u/2) / (u/2)]².
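The depth resolution implied by equations 1.2 and 1.3 can be evaluated numerically; in the sketch below the wavelength and aperture angle are illustrative assumptions:

```python
import numpy as np

# Axial response of a confocal microscope for a point object (equation 1.2)
# with the normalised axial coordinate of equation 1.3.
lam = 0.633e-6           # assumed illumination wavelength (m)
alpha = np.deg2rad(60)   # assumed full aperture angle (rad)

z = np.linspace(-5e-6, 5e-6, 100001)             # axial displacement (m)
u = (8 * np.pi / lam) * z * np.sin(alpha / 2) ** 2
I_axial = np.sinc(u / (4 * np.pi)) ** 4          # np.sinc(x) = sin(pi x)/(pi x)

# Depth resolution = full width at half maximum of I(z)
fwhm = np.ptp(z[I_axial >= 0.5])
print(f"confocal axial FWHM ~ {fwhm * 1e6:.2f} um")   # ~0.79 um here
```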

This depth discrimination allows the study of opaque surface structures and translucent volume samples. Computer analysis can greatly enhance the interpretation and visualisation of confocal images. If consecutive sections of a sample are recorded digitally and stored on a computer, re-projections of the resulting volume data can be performed. Also, by locating the peak intensity along the optic axis, surface profiles with accuracies of < 1 µm [12] can be obtained. Applications in materials science include [14]: examination of surface fracture, lithographic processes, semiconductor materials, integrated circuits, dielectric films, fibre-reinforced composites, power cable insulation, minerals, soils and optical fibres. Confocal imaging is especially well suited to the investigation of structures embedded in diffusive media, due to its capacity to reject scattered light [15]. These include in vivo biomedical applications. Jester et al. [16] have demonstrated live cellular imaging of structures in corneal, kidney, liver, adrenal, thyroid, epididymis, and muscle tissue and in connective tissue of rabbits and rats. One area of special interest is the application of confocal optics in ophthalmic medicine. Traditional fundus cameras are unable to image sections of retinal tissue, and images usually suffer from poor contrast due to scattered light. Using the principle of confocal microscopy, confocal Scanning Laser Ophthalmoscopes (cSLOs) [17] have been successfully used to achieve superior optical sectioning of the human fundus in vivo, with improved contrast due to rejection of scatter. However, the axial resolution is severely limited by the pupil diameter (equation 1.2) and ocular aberrations. Early cSLO prototypes achieved a depth resolution of 300 µm [17, 18], while commercial systems can now obtain resolutions as low as 30 µm [19] when observing the human fundus in vivo. Consequently cSLOs have been able to facilitate early diagnosis of genetically determined disease and age related macular degeneration [20].


1.3.3 Fringe Projection Techniques

Moiré Topography

Moiré fringes are formed if periodic patterns such as two gratings overlap. This is illustrated in figure 1.3. This well known phenomenon has been used to provide contour fringes of 3-D objects. Depending on the size of the object, contour fringes may be produced by illuminating an object of interest via a periodic line grating and observing it from a different angle through the same grating. Larger objects can be investigated if grating lines are projected on the surface and the object is then viewed through a smaller grating in the focal plane of the image. With the advent of electronic image acquisition, new techniques have emerged which superimpose a virtual second grating electronically (by electronic filtering) once the image of the object has been captured.

Figure 1.3: Formation of Moiré fringes

Moiré fringes form topographic lines on the object and thus allow evaluation of the surface height. Since this process was initially designed for subjective interpretation, the Moiré method is poorly suited to automated analysis. However, Idesawa et al. [21] have demonstrated an automated process using a scanning Moiré method. Because of the relative ease with which gratings may be projected onto objects, especially of larger size, Moiré topography is best suited to the measurement of large objects (such as a car) where sub-millimetre resolution is not required.

Fourier Transform Profilometry

Fourier transform profilometry (FTP) is a surface topography method based on fringe projection which overcomes many of the difficulties encountered with Moiré fringe analysis. The optical arrangement of FTP is similar to that used in Moiré topography projections, except that a second grating is not required to produce fringes. Instead, the 3-D object shape is extracted automatically from a digitised image of the object with projected fringes, using an algorithm operating in the spatial frequency domain [22, 23]. A resolution far superior to that of Moiré topography is achieved, but at the cost of a limited range.
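The grating-overlap principle described above can be reproduced numerically; in this sketch (grating periods are illustrative assumptions) two slightly different line gratings are multiplied, producing a beat of period p1·p2/|p1 − p2|:

```python
import numpy as np

# Moire fringe formation sketch: overlapping two square-wave gratings of
# slightly different period yields a low-frequency beat pattern.
x = np.linspace(0, 10e-3, 4000)                            # position (m)
g1 = 0.5 * (1 + np.sign(np.sin(2 * np.pi * x / 0.10e-3)))  # 0.10 mm grating
g2 = 0.5 * (1 + np.sign(np.sin(2 * np.pi * x / 0.11e-3)))  # 0.11 mm grating

moire = g1 * g2   # transmitted intensity; beat period = 0.10*0.11/0.01 = 1.1 mm
```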

1.4 Optical Interferometry

1.4.1 Two Wavelength

Two wavelength interferometry (TWI) provides a means of using light of two wavelengths to obtain an interferogram identical to one produced by a much longer wavelength. This allows the range over which conventional phase measuring interferometry is unambiguous to be extended without sacrificing the height measurement precision [24]. TWI has thus found applications in those areas where conventional interferometry provides an inadequate measurement range. Two wavelength interferometry has been implemented for point to point distance measurements as well as for surface topography. The addition of two interferograms recorded with illumination wavelengths of λ1 and λ2 results in a pattern equivalent to an interferogram recorded using an equivalent wavelength, λeq, such that [25]:

λeq = λ1 λ2 / |λ1 − λ2|    (1.4)


A similar effect may also be obtained by illuminating an interferometer with light of two distinct wavelengths simultaneously, such that beating fringes with λeq are formed. Since the effect of TWI is to increase the wavelength of the illuminating light to an equivalent wavelength, interferograms can be used to determine the topography of smooth surfaces by applying conventional phase measurement and phase unwrapping techniques (see section 1.5.1 on page 14). Although TWI increases the measurement range of interferometers, its accuracy is ultimately limited by the wavelength stability of the two sources.
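A one-line evaluation of equation 1.4 illustrates the range extension (the two source wavelengths below are illustrative assumptions):

```python
# Equivalent wavelength for two-wavelength interferometry (equation 1.4).
lam1 = 632.8e-9                              # e.g. a He-Ne line (m)
lam2 = 611.8e-9                              # second assumed wavelength (m)

lam_eq = lam1 * lam2 / abs(lam1 - lam2)
print(f"equivalent wavelength = {lam_eq * 1e6:.1f} um")   # ~18.4 um
# Phase measurement is now unambiguous over lam_eq/2 rather than lam/2.
```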

1.4.2 Electronic Speckle Pattern Interferometry

Electronic speckle pattern interferometry (ESPI) is the modern equivalent of speckle pattern correlation interferometry (SPCI), since an electronic image sensor (such as a CCD) is used instead of film. The main application of ESPI lies in revealing dynamic displacements of optically rough surfaces in real time. However, it is also possible to obtain surface topographies of static objects using ESPI [26]. ESPI [27] uses a two-beam interferometer and coherent, monochromatic laser illumination. The sample surface is placed in the object arm of the interferometer and the resultant speckle pattern is imaged onto a CCD detector via a lens. When the surface of interest is optically rough, the interference phase will vary randomly from point to point, making it impossible to measure the phase variations. However, if two images of the speckle field are acquired, one of which shows the object in a deformed state (such as would be caused by vibrations), a subtraction of the two images will reveal fringes due to the correlation between the two speckle fields. Areas of minimum intensity are observed where the phase change between the two states is an integer multiple of 2π, and areas of maximum intensity where it is an odd integer multiple of π. Since the observed fringe pattern is analogous to that observed in a conventional interferometer, a measure of the surface deformation can be obtained by using a variety of standard phase stepping and phase unwrapping techniques (see also section 1.5.1 on page 14).
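The subtraction step can be sketched numerically as follows (the speckle field and deformation are synthetic assumptions, not measured data from this thesis):

```python
import numpy as np

# ESPI subtraction sketch: two speckle interferograms recorded before and
# after a deformation are subtracted, revealing correlation fringes.
rng = np.random.default_rng(0)
N = 512
phi = rng.uniform(0, 2 * np.pi, (N, N))        # random speckle phase

x = np.linspace(0, 1, N)
dphi = 6 * np.pi * x[None, :]                  # assumed tilt-like deformation

I_ref = 1 + np.cos(phi)                        # speckle interferogram, state 1
I_def = 1 + np.cos(phi + dphi)                 # after deformation

fringes = np.abs(I_ref - I_def)
# Fringes vanish where dphi is a multiple of 2*pi (fields fully correlated)
# and are brightest, on average, where dphi is an odd multiple of pi.
```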

1.4.3 Low-Coherence Interferometry

Low-coherence interferometry (LCI) offers absolute distance measurement at micrometre scale resolution over a virtually unlimited range. LCI differs from conventional interferometry through the type of source which is used to illuminate the system. Instead of a monochromatic laser, an incoherent source emitting a band of wavelengths is used. Since the electric field vibrations emitted by this type of source are not correlated in time, interference is usually not observed. Interference, however, can be produced if the light is split and recombined so that parts of the same wave, emitted at the same time, are superimposed. In a Michelson interferometer this condition is satisfied if the optical path difference (OPD) between the two interferometer arms is within the coherence length of the source.

Figure 1.4: Michelson interferometer

Let us illustrate this principle by considering a Michelson interferometer such as that shown in figure 1.4. A graph of typical intensity variations caused by changes in the OPD and measured at the detector is shown in figure 1.5². The position of mirror 2 along the optic axis may be changed so that the OPD between the two interferometer arms is zero. The amplitude of the intensity fluctuations then reaches a maximum, as shown in figure 1.5. At large OPDs the amplitude of the fringes decays to zero. Low-coherence interferometry relies on this property to make absolute distance measurements. If mirror 1 in figure 1.4 is replaced with an object of unknown location (along the optic axis), the position of mirror 2 may be changed until interference of maximum amplitude is observed. Since the OPD is zero in this case, the location of the unknown surface can be deduced from the position of mirror 2 (z). Thus, low-coherence interferometry allows convenient point to point measurements of absolute position. The most fundamental difference between low-coherence and conventional interferometry lies in the need for path length scanning (i.e. displacement of the reference mirror or object of interest).

² This graph is representative of most low-coherence sources and was experimentally measured using a high power super-luminescent diode at λ = 830 nm. However, the width and shape of this function may vary depending on the particular source.

Figure 1.5: Interference obtained with a low-coherence source (intensity in arbitrary units versus OPD in units of λ)

Low-coherence methods have been extensively used to measure distances in a variety of applications such as point to point ranging [28], surface topography [29, 30] and tomographic imaging [31, 32]. This section concentrates specifically on those low-coherence techniques which employ a single photo-detector to measure the interference. Low-coherence techniques such as Coherence Radar [29, 33], which use a CCD sensor instead, and which are the main subject of this thesis, are discussed in detail in section 1.5.2. When low-coherence interferometry (LCI) is applied to the measurement of distances, it is sometimes referred to as low-coherence reflectometry (LCR), since it determines the path travelled by light after reflection from objects of unknown position. LCR is also widely associated with techniques that use a single photo-detector. These methods are typically implemented using a low-coherence Michelson or Mach-Zehnder interferometer, and measure the positions of a reflecting object surface or structure embedded in a volume of (scattering) material with a resolution of ≈1 µm. As detection is performed using a single photo-detector, transverse imaging is achieved by the use of beam or object scanning. This allows the additional implementation of confocal optics to further enhance the sectioning capability, as well as the use of balanced detection [34, 35] to allow the detection of very weak signals. Some LCR systems are implemented in fibre, in which case the fibre aperture can conveniently act as a confocal aperture. If the reference mirror or the object of interest is scanned along the optic axis at a constant speed, a measure of the strength of interference can be obtained by demodulating the detected photo-current at the Doppler frequency [28]. If the optical path remains static, a small phase modulation at frequency fc can be introduced in the reference arm to allow heterodyne detection of the interference signal at multiples of fc [36]. Alternatively, the signal can be low-pass filtered at a bandwidth equal to the spread of frequencies produced by scanning the beam across the sample object [37]. A schematic diagram of a fibre based Michelson interferometer is shown in figure 1.6. Two possible methods for transverse image formation can be seen. Figure 1.6a depicts a beam scanning arrangement, in which two orthogonal mirrors deflect the focused beam in the transverse (x-y) direction. If a raster scan is performed the interference signal can be displayed as a 2-D image of the section plane. This section is formed in the plane from which the returned light has to travel a similar optical path to that in the reference arm. Adjusting either the reference mirror position or the object in the axial direction (z) changes the position of the (coherent) plane of interest. Figure 1.6b illustrates an alternative method for transversal (x-y) scanning, which can be achieved by mounting the object on an x-y translation stage.

Figure 1.6: Transverse scanning in a fiberised low-coherence reflectometer (a: x-y beam scanning; b: x-y object translation)

Swanson et al. [7] distinguish between Optical Coherence Tomography (OCT), a method which employs transverse scanning along two axes (x, y or z) in order to achieve 2-D tomographic images, and Optical Coherence Domain Reflectometry (OCDR), where scanning is performed along only one axis. OCT has gained much popularity in the field of ophthalmics, where it complements or even replaces the confocal scanning laser ophthalmoscope (cSLO) (see also section 1.3.2 on page 4) for in vivo studies of the human fundus/retina [7, 38-40]. For these investigations OCT provides higher resolution than is available with any other existing technique [7]. OCT has been successfully applied to investigations of the cornea [7], corneal thickness [41], eye length [35], in vivo frog tadpole anatomy [42] and, for inert materials, to investigations of ceramic defects [43] and the fibre structure of moulded composites [31]. The scanning speed is of particular importance for in vivo applications, as involuntary movements of the subject, especially when observing the eye, can lead to substantial errors due to unwanted displacements. However, a tandem interferometer configuration has been successfully used to compensate for axial movements during in vivo eye length [44] and corneal thickness [41] measurements.
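The behaviour shown in figure 1.5 can be reproduced with a simple numerical model; a Gaussian coherence envelope is assumed here, and all values are illustrative:

```python
import numpy as np

# Sketch of a low-coherence interferogram: fringes at the source wavelength
# under a coherence envelope (the true envelope depends on the source).
lam = 0.83e-6                                # centre wavelength, as for an 830 nm SLD
lc = 10e-6                                   # assumed coherence length (m)

opd = np.linspace(-40e-6, 40e-6, 8001)       # optical path difference (m)
envelope = np.exp(-((opd / lc) ** 2))        # coherence envelope
intensity = 0.5 * (1 + envelope * np.cos(2 * np.pi * opd / lam))
# The fringe amplitude peaks at opd = 0, the property used for ranging.
```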

1.4.4 Channelled Spectrum

The channelled spectrum technique can be used to obtain profiles of surfaces and to image multi-layer structures. It can, in principle, be classed as non-scanning low-coherence interferometry, since it facilitates the measurement of the optical path difference (OPD) without the need for axial scanning and also requires the use of a low-coherence source. When monitoring the spectral properties of light returned from a Michelson or Mach-Zehnder type interferometer with the aid of a dispersive element such as a diffraction grating, a series of peaks can be observed in the spectrum of the source. The spatial frequency and phase of these peaks are related to the OPD [45] in the interferometer. By sensing the line-shape with a one dimensional CCD detector an automated analysis, such as a spatial Fourier transform, can be performed and the OPD can be inferred. The accuracy and resolution are limited by the number of grating lines and the resolution of the CCD sensor. Channelled spectrum methods do not require OPD scanning like conventional low-coherence interferometers. However, they suffer from a restricted depth range and cannot achieve more than one-dimensional image resolution in the transverse direction. Methods based on this technique have been successfully applied to the measurement of surface profiles (with an axial resolution of 0.3 µm over a range of 70 µm) [46, 47] and of structures in multi-layer samples (thickness resolution of > 2 nm and a maximum range of 100 µm) [48].
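A minimal numerical sketch of this Fourier analysis (the sampling range and OPD are assumed values):

```python
import numpy as np

# Channelled-spectrum sketch: interference imprints fringes on the source
# spectrum with a frequency proportional to the OPD; a Fourier transform of
# the spectrum, sampled against wavenumber k, recovers it.
opd = 50e-6                                      # optical path difference (m)
k = np.linspace(7.0e6, 8.0e6, 2048)              # wavenumber samples (rad/m)
spectrum = 1 + np.cos(k * opd)                   # channelled spectrum

f = np.fft.rfftfreq(k.size, d=k[1] - k[0])       # cycles per (rad/m)
peak = np.abs(np.fft.rfft(spectrum - spectrum.mean())).argmax()
opd_est = 2 * np.pi * f[peak]                    # back to a path difference (m)
print(f"recovered OPD ~ {opd_est * 1e6:.1f} um") # ~50 um
```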

1.5 Interference Detection using a CCD Detector

A number of methods discussed in this section originate from standard interference microscopy as developed by Linnik [49] and Mirau. Most commercial microscopes can be modified by the addition of a Mirau microscope objective to yield an improved depth discrimination. The sample can then be seen with multi-coloured white-light fringes across it. A subjective, approximate evaluation of the sample shape can be obtained by observing the straightness and frequency of the fringes. Since the advent of low-cost digital imaging and processing equipment, it has been possible to automate the analysis of these visible fringe patterns in order to gain an objective measure of the sample topography [30, 50]. In these automated profilers, imaging is achieved primarily by the use of a Charge Coupled Device (CCD) sensor. A CCD essentially replaces the eye and allows the objective measurement of intensity and lateral distance across the plane of the sample. The sensor surface is divided into a square grid of picture elements, or pixels, each of which delivers a charge proportional to the number of photons striking it during the exposure time of the sensor. The photon energy is converted to an accumulated static charge by the silicon layer of the CCD. At the end of each exposure the charges can be shifted along the rows and columns of the pixels to produce an analogue output signal. See also appendix A on page 112. Using a suitable analogue to digital converter, the image can be stored and analysed on a computer.

A distinction has to be made between methods employing low-coherence interferometry and those that, although they employ low-coherence sources, analyse the fringes based on the principle of conventional direct phase-measurement. Although the accuracy of conventional interferometry is high, its range may be limited to λ/2 when observing discontinuous surfaces. Low-coherence interferometry, on the other hand, offers a means of absolute distance measurement of both rough and optically smooth surfaces over an almost unlimited range (see also section 1.4.3). A distinction between conventional and low-coherence interferometry can thus be made according to the way in which the interference signal is processed. Direct phase measurement techniques require virtually no axial scanning of the object or reference mirror. The surface topography can be calculated from the phase distribution across the surface. This phase unwrapping process takes into account the phase variations from one pixel to another. Low-coherence interferometry, on the other hand, allows the absolute position of a surface to be measured at each pixel individually. In order to achieve this, however, the object has to be translated through the entire depth range of interest.

1.5.1 Automated Phase Measurement Microscopy

Microscopes traditionally illuminate their samples with an extended white-light (i.e. low spatial and temporal coherence) source, such as a filament or discharge lamp. By introducing colour filters in the illumination path, the coherence length of this light can be increased sufficiently to allow conventional phase measurements over a large depth range. Because white-light illumination reduces unwanted interference between reflections from optical surfaces lying outside the range of interest, this type of illumination has largely been maintained in automated phase measurement microscopes. Three fundamental interferometric configurations have been used in interference microscopy. Figure 1.7 shows Michelson, Mirau and Linnik configurations [51]. The beam-splitter placement is the primary limiting factor in determining the minimum sample-to-microscope-objective distance. This distance in turn determines the maximum magnification of the objective. Values of objective magnification for all three interferometric configurations are shown in figure 1.7. The objective-to-sample distance limitation is avoided by the Linnik arrangement, since the beam-splitter is located before the objective. The drawbacks of this arrangement are the increased back reflection from lens interfaces in the objective and the need to use two identical objective lenses to obtain perfect path matching. Also, because the common optical path is less than in other configurations, the measurements are more prone to noise induced by mechanical vibrations. Although Mirau objectives do not suffer these drawbacks and offer a higher magnification than is possible with Michelson objectives, they can introduce severe aberrations in wide aperture systems [52].

In order to produce a surface topography or profile measurement, the interference pattern recorded by the CCD camera must be interpreted. This is a two stage process consisting of phase-stepping and phase-unwrapping. Phase-stepping computes the phase of the interference based on 3 or more images of phase shifted fringe patterns (interferograms). Phase-unwrapping then determines the true phase from the modulo 2π phase image produced in the previous step. Many phase stepping techniques exist, and for this type of interferometer the most common method is temporal phase shift interferometry [53, 54]. A phase shift can be induced by moving either the reference surface or the object over a small distance (≤ λ/2). By capturing a sequence of phase shifted interferograms, the original phase distribution across the sample object can be computed. A number of algorithms to calculate this phase distribution have been developed. They are usually named according to the number of phase shifted interferograms they require, such as the three-step, four-step, five-step, and multi-step algorithms [26, 54]. To illustrate the general principle of these algorithms, the three step method is described here. The intensity distribution I(x, y), formed by the interference of two coherent light beams, can be described as:

I(x, y) = a(x, y) + b(x, y) cos[φ(x, y)]    (1.5)

where a(x, y) is the background illumination, b(x, y) is the fringe modulation and φ(x, y) is the modulo 2π phase corresponding to the height of the sample surface. In the three-step technique the phase φ(x, y) is calculated based on three intensity distributions, I1, I2 and I3, captured at phase shifts of 0, 2π/3 and 4π/3 respectively, such that:

I1(x, y) = a(x, y) + b(x, y) cos[φ(x, y)]    (1.6)
I2(x, y) = a(x, y) + b(x, y) cos[φ(x, y) + 2π/3]    (1.7)
I3(x, y) = a(x, y) + b(x, y) cos[φ(x, y) + 4π/3]    (1.8)

The modulo 2π phase distribution, φ(x, y), can then be computed using [26, 55]:

φ(x, y) = tan⁻¹[√3 (I2 − I3) / (2 I1 − I2 − I3)]    (1.9)

The phase stepping algorithm described by equations 1.5-1.9 results in a modulo 2π phase map. Before the surface height of the sample can be calculated, any 2π discontinuities must be removed. This is the process of phase unwrapping. Provided the sample surface slope is such that the largest phase change between adjacent pixels is smaller than π, the phase discontinuities can be removed by adding or subtracting multiples of 2π until the phase difference between adjacent pixels is less than π. A number of more sophisticated phase unwrapping methods exist [56], including those that can unwrap the phase of discontinuous surfaces [57]. Once the phase has been unwrapped, the topography of the sample surface, h(x, y), is given by:

h(x, y) = λ φ(x, y) / (4π)    (1.10)

Figure 1.7: Configuration of interference microscopes (Michelson objectives: 1.5X, 2.5X, 5X; Mirau: 10X, 20X, 40X; Linnik: 100X, 200X)
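The complete three-step pipeline of equations 1.6-1.10 can be sketched as follows (synthetic fringe data; the phase steps are applied to the reference arm, i.e. subtracted, so that equation 1.9 holds with the signs as written; wavelength and phase profile are assumed):

```python
import numpy as np

# Three-step phase-stepping sketch (equations 1.6-1.10) on synthetic fringes.
x = np.linspace(0, 1, 256)
phi_true = 8 * np.pi * x**2                      # assumed surface phase (rad)
a, b = 1.0, 0.8                                  # background and modulation

steps = (0, 2 * np.pi / 3, 4 * np.pi / 3)
I1, I2, I3 = (a + b * np.cos(phi_true - s) for s in steps)

phi = np.arctan2(np.sqrt(3) * (I2 - I3), 2 * I1 - I2 - I3)   # equation 1.9
phi = np.unwrap(phi)                             # remove 2*pi discontinuities
height = phi * 0.633e-6 / (4 * np.pi)            # equation 1.10, lam = 633 nm
```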

1.5.2 CCD Based Low-Coherence Interferometry

Conventional automated phase measurement interferometers suffer severe drawbacks if the surface of interest is discontinuous and contains steps larger than λ/2, because phase-unwrapping is then very difficult or impossible. Low-coherence or white-light interferometry has recently emerged as an attractive alternative, and can be implemented conveniently in a Mirau, Linnik or Michelson interferometer utilising a CCD sensor. Subjective evaluation of white-light zero-order fringes has long been applied to the inspection of discontinuous step surfaces and thin films. In 1982 Balasubramanian [58] patented the first test surface measurement system to automate the detection of zero-order fringes. This was achieved by computer based CCD image analysis of the interference in a Twyman-Green interferometer. The principal difference between this type of automated profiling and the conventional phase detection systems discussed in section 1.5.1 is to be found in the process of interferogram analysis. Instead of the two step process of phase stepping and phase unwrapping, low-coherence profilometry simply requires the location of an interference maximum (see also section 1.4.3). In a low-coherence Michelson, for example, interference will be at a maximum only if the path lengths of the two interferometer arms are matched such that the optical path difference (OPD) between them is zero. If one of the mirrors in this interferometer is replaced with a surface of interest, there will be a distribution of OPDs between different parts of the surface and the plane reference mirror. By superimposing the light reflected from the mirror and the sample surface on a CCD sensor, the resultant interference for each part of the surface can be measured. If the measurement is repeated while the sample surface is moved along the optic axis, an interference maximum will be observed at every pixel at some point during the displacement process. The maximum finding process then determines at which object displacement this interference peak occurs, so that a topographic map of the surface can be constructed. Compared to conventional phase interferometry, the low-coherence process requires an accurate long range translation stage as well as an increased image storage and processing capability, due to the added axial scanning which is performed. Although this slows the acquisition process, the advantages which low-coherence interferometry offers, such as long range and absolute distance measurement, outweigh this disadvantage for certain applications. By implementing automated low-coherence interferometry in a Linnik interference microscope system, Davidson et al. [50, 59] have demonstrated an increased axial sectioning capability and lateral resolution as compared to traditional microscopes and confocal scanning laser microscopes. In 1990 Kino et al. [52] presented a method based on a Mirau interferometer which recovers the interference visibility by filtering the signal in the frequency domain, using a fast Fourier transform (FFT) technique. Their method is able to recover the phase as well as the visibility of the interference fringes, but does not implement a surface finding algorithm. Subsequently they presented a similar method using a Hilbert transform [60] to significantly increase processing speed.
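A single-pixel sketch of this maximum-finding step (the envelope width, noise level and surface position are assumed values):

```python
import numpy as np

# Low-coherence surface finding at one pixel: the demodulated fringe
# amplitude is recorded at each axial step and the surface is taken where
# that amplitude peaks.
lc = 10e-6                                   # assumed coherence length (m)
z = np.arange(-30e-6, 30e-6, 0.1e-6)         # axial scan positions (m)
z_surface = 4.2e-6                           # assumed true surface position

amplitude = np.exp(-(((z - z_surface) / lc) ** 2))
amplitude += 0.02 * np.random.default_rng(1).standard_normal(z.size)

z_found = z[np.argmax(amplitude)]            # maximum-finding step
print(f"surface located at {z_found * 1e6:.2f} um")
```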


In 1992 Dresel et al. [29, 33] introduced a method based on a Michelson interferometer which measures rough surfaces over a large transverse range (about 1 cm) and in deep holes. The Coherence Radar arrangement, on which much of the work in this thesis is based, is shown in figure 2.1 on page 21. This system is illuminated with a spatially coherent source (spatially filtered light), which simplifies alignment and increases the visibility of the interference fringes. A collimated beam allows illumination without shadowing and thus makes this method ideal for applications involving the inspection of deep holes. The amplitude of the interference fringes is recovered by the use of a phase stepping algorithm, which records three interferograms with a phase shift of 2π/3 between them. The surface position is then determined by a simple maximum finding algorithm. Due to speckle effects, the surface height resolution is limited to the rms roughness of the surface when examining optically rough samples.

Recent developments have centred mainly around faster and more accurate data processing schemes. By using Fourier transforms and sub-Nyquist sampling of the interference data in the axial direction, de Groot et al. [30, 61, 62] demonstrated increased accuracy (about 0.5 nm) and a reduction of acquisition time for Mirau based profile measurements. The Mirau microscope is now commercially available from a number of companies3.

1.6 Summary

The non-optical, non-contact methods such as nuclear magnetic resonance (NMR), computed tomography (CT) and ultrasound (US) are invaluable tools for medical in vivo diagnosis since they penetrate (opaque) tissue over large distances and offer a resolution sufficient to image most structures of interest. Stylus profiling systems and in particular the scanning probe microscope (SPM) are capable of measuring three dimensional surface profiles with extremely high resolution and therefore are best suited for industrial type applications which require the inspection of very small surface features down to atomic scale. Optical methods allow non-contact measurements, and interferometric techniques in particular offer very high resolution. They are well suited for profiling large or fragile surface structures at high speed. Low-coherence interferometry such as optical coherence tomography (OCT) has been successfully used to investigate biological material. Due to its aperture independent depth resolution, OCT has found considerable application to the study of the human eye (cornea and retina) in vivo.

Figure 1.8 classifies the techniques discussed in this chapter by their ability to measure the volume structure of translucent objects or the topography of surfaces. Also, in order to gain an overview of their performance, they are arranged according to the resolution they deliver. The distinction between volume imaging and surface topography is primarily introduced here since it reflects the fundamental limitation of some techniques in locating more than one surface along the line-of-sight. Surface topography measurements can be presented as a function z(x, y) such that there is only one unique surface height value, z, for each transverse coordinate x, y. Volume imaging methods do not have this limitation and are, in principle, able to resolve the height or depth of several features (z1, z2, ..., zn) at each transverse position (x, y). As indicated in figure 1.8 three
3 Wyko Corporation, Tucson, Arizona; Zygo Corporation, Laurel Brook Road, Middlefield, Connecticut 06455; Phase-Shift Technology, 3480 E. Britannia, Suite 110, Tucson, Arizona 85706

Figure 1.8: Overview of three dimensional measurement techniques, arranged by capability (surface topography versus volume imaging) and by resolution (cm-mm, mm-nm, nm-atomic); the techniques shown are fringe projection, stereo pair imaging, ESPI, two-wavelength, CCD based low-coherence interferometry, automated interference microscopy, stylus scanning, OCT, channelled spectrum, confocal microscopy, CT and NMR


dimensional volume imaging can be achieved by non-optical methods such as US, NMR and CT, by optical methods such as OCT and channelled spectrum, and by confocal methods. Although CCD based low-coherence interferometry should, in principle, fall into the category of volume imaging, in practice it has not been successfully applied in this field. Yet it offers a number of advantages over other such methods. Like OCT, CCD based low-coherence interferometry is able to measure absolute distances with an aperture independent depth resolution at very high precision. In addition, the method offers superior performance in terms of speed due to the parallel nature of the imaging process, and allows a simpler and more robust construction of the apparatus due to the lack of mechanical scanning elements. This thesis attempts in large part to explore the possibility of applying CCD based low-coherence systems to the investigation of volume structures in order to exploit these advantages.

Chapter 2

Surface Topography using Coherence Radar


2.1 Introduction

Surface topography or profile measurements can yield useful information about the quality of fabrication processes [3] and are extensively used for the inspection of optical, automotive and electronic components. These include hard disk substrates, magnetic heads, precision machined and polished surfaces such as gears, bearings, cylinder walls, fuel injector seals, flat and spherical optical components, and etched surface textures on semiconductor wafers [63]. Profiling has also found considerable application in materials science research for the study of fractured surfaces, integrated circuits, dielectric films, fibre-reinforced composites, power cable insulation, minerals, soils and optical fibres [14]. Other applications include the study of machine tool wear [8], verification of surface scattering theories [3] and quality monitoring of aspheric lens and mirror surfaces [25].

High resolution measurements of rough surfaces are best obtained using low-coherence methods since these do not suffer the phase ambiguity of conventional phase-measurement interferometry (see also section 1.5.1). In this chapter we present the results of surface measurement and analysis performed using Coherence Radar [29, 33], a CCD based low-coherence technique which allows the measurement of rough surface topographies with accuracies of 1-2 μm over a virtually unlimited depth range.

The principles of Coherence Radar are introduced in section 2.2 and a detailed description of its experimental implementation is given in section 2.3. In section 2.5 we demonstrate the capability of the system by measuring a 5 pence coin. In sections 2.6 and 2.7 we present and evaluate a new thresholding technique which prevents the formation of rogue data points. In section 2.8, the study of hypervelocity impact craters is investigated and results of this interesting new application are presented. Sections 2.9-2.11 conclude the chapter with an analysis of the various noise sources in the system and their effect on the measurement accuracy.

2.2 Principles of Coherence Radar

In this section, the principles of Coherence Radar are presented. We introduce the fundamentals of low-coherence interferometry and discuss the data processing techniques involved in constructing a three dimensional surface topography.


Figure 2.1: Coherence Radar experimental arrangement (low-coherence fibre source at 830 nm with collimating lens, beam-splitter, PZT mounted reference mirror with ND-filter on a translation stage, sample object on a translation stage at the coherence plane, telecentric telescope of lenses 1 and 2 with aperture stop, and a CCD camera in the image plane feeding a frame grabber and computer)

The Coherence Radar method [29, 33] is based on a low-coherence Michelson interferometer and uses a CCD camera for the detection of interference patterns. A diagram of such an arrangement is given in figure 2.1. A surface of interest is placed in one of the arms of the interferometer such that light from the reference arm is superimposed with light reflected from the sample surface. A telescope images the surface and the superimposed reference wave onto the CCD sensor. Due to the low coherence of the source, the superimposed wavefronts interfere only if the path lengths of the two arms are matched to within the coherence length of the source. If the sample surface is displaced along the optic axis the interference will change depending on which part of the surface satisfies this condition.

The ability for absolute distance measurement arises from the use of a low-coherence or broadband light source in the interferometer. The amplitude of the detected interference is at a maximum if the optical path difference (OPD) is equal to zero. The basis


of topography measurements is the detection of this condition, which requires a measure of the interference amplitude. Coherence Radar uses a method called phase stepping to determine this amplitude.

2.2.1 Phase Stepping

In order to detect the amplitude of the interference at a given object displacement, three CCD images are recorded while the reference mirror is displaced. In the presence of interference there is an associated sinusoidal change in intensity with respect to the reference mirror position. An analysis of the resulting CCD images using the phase stepping algorithm then gives a measure of the interference amplitude at each pixel.

Let us consider the formation of interference in a Michelson with partially coherent illumination at central wavelength \bar{\lambda} and Gaussian power spectrum of width \Delta\lambda (FWHM). If the object and reference beam intensities are I_o and I_r respectively, it can be shown that the output intensity, I, is given by:

I(d) = I_o + I_r + 2\sqrt{I_o I_r}\,\gamma(d) \cos(4\pi d/\bar{\lambda} + \phi)    (2.1)

where d is the position of the object of interest along the optic axis and \gamma(d) is the coherence function1. This may also be expressed as [29]:

I(d) = \bar{I} + A(d) \cos(4\pi d/\bar{\lambda} + \phi)    (2.2)

where A(d) is the amplitude of the interference term as a function of the object displacement, d. This amplitude is detected using the phase stepping algorithm [29], which requires three measurements of the intensity, I(d), such that a relative phase shift of 2\pi/3 exists between each of them (valid for the mean wavelength \bar{\lambda}). The shifts are introduced by moving the reference mirror in steps of \bar{\lambda}/6 along the optic axis. Since a shift in the reference mirror position is equivalent to a shift in the object position, d, the measurements can be described by:

I_i = I(d + i\bar{\lambda}/6),    i = 1, 2, 3.    (2.3)

The interference amplitude can then be computed using:

A(d) = \left[ \sum_{i=1}^{3} \frac{(I_i - \bar{I})^2}{3/2} \right]^{1/2}    (2.4)

where

\bar{I} = \frac{1}{3} \sum_{i=1}^{3} I_i    (2.5)

1 If I_o = I_r, the coherence function \gamma(d) is equal to the fringe visibility, V, defined as V = (I_max - I_min)/(I_max + I_min), where I_max and I_min are the maximum and minimum observed intensities respectively.
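Written out as array code, the three-step demodulation of eqs. 2.3-2.5 is only a few lines. The following sketch (Python with NumPy; an illustration of the algorithm, not the software used in this thesis) assumes the three phase-stepped frames are already available as floating point arrays:

    import numpy as np

    def interference_amplitude(i1, i2, i3):
        # i1, i2, i3: intensity images recorded with the reference mirror
        # displaced in steps of lambda/6, i.e. a 2*pi/3 phase shift (eq. 2.3)
        mean = (i1 + i2 + i3) / 3.0                            # eq. 2.5
        ssq = (i1 - mean)**2 + (i2 - mean)**2 + (i3 - mean)**2
        return np.sqrt(ssq / 1.5)                              # eq. 2.4

Applying this to every pixel of the three frames yields the amplitude image A(d, x, y) used by the surface finding step of section 2.2.2.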


Figure 2.2: The Coherence Radar experimental system

2.2.2 Surface Finding

Since Coherence Radar measures the interference via a two dimensional detector array (CCD), the interference amplitude, A(d), becomes a function not only of the object position, d, but also of the pixel co-ordinates, x, y. The relative height of a surface element conjugate to pixel x, y is then given by the object position, ds, at which the maximum interference amplitude was measured. This is determined by a simple peak search algorithm and yields a surface topography.

2.3 Experimental System

Figure 2.1 shows a schematic diagram of the experimental arrangement used for our implementation of Coherence Radar. A photograph of the equipment employed can be seen in figure 2.2. The Coherence Radar arrangement may conceptually be divided into three functional units: the interferometer, the imaging optics and the translation devices. As indicated in figure 2.2, the interferometer is composed of the low-coherence fibre source (1) and its collimation lens (2), the beam-splitter plate (3), the PZT mounted reference mirror (4) and the object of interest (5). A neutral density filter (10) is also included to attenuate the reference beam intensity. The imaging optics consists of the telecentric telescope and the CCD camera (7). The telecentric telescope, in turn, is composed of two lenses (6a, 6c) and an aperture


stop (6b). Translation devices are used to displace the object during the measurement process (8) and to allow an adjustment of the reference mirror (9).

Figure 2.3: Coherence function (normalised intensity versus OPD in microns) of the super-luminescent diode at a bias current of 139 mA

2.3.1 Michelson Interferometer

The interferometric setup is based on a Michelson interferometer. Illumination of the object is provided by a super-luminescent diode (SLD) which delivers light via a single mode optical fibre in the near-infrared (N-IR) range. The source (1) is a high power low-coherence point source, which is preferable to discharge or filament lamps because of its high spatial coherence and high power, yet low temporal coherence (23 μm at FWHM). At a maximum driving current of 140 mA the source delivers up to 1 mW of power at a mean wavelength of 830 nm (N-IR range). Figure 2.3 shows the temporal coherence function of the source SLD-361 (SUPERLUM Ltd.). Since the light is emitted from the single mode fibre end in only one fundamental mode, it acts as a spatially coherent point source and the light can be collimated by a single lens into a Gaussian beam. Collimated light from the SLD is then incident via the beam-splitter on both the object and reference mirror. This method of collimation was found to be ideal because it assures complete object illumination, even inside deep holes, without shadowing.

A non-polarising plate beam-splitter (3) with a 50/50 transmission/reflection ratio (at λ = 830 nm) is used to divide the wavefront. Although dispersion is not compensated in this type of beam-splitter, the lack of air-glass interfaces at a normal to the transmitted beam eliminates the strong ghost images sometimes observed in dispersion


compensated cube beam splitters. An anti-reflection coating centred at a wavelength of 830 nm on the non-reflecting side of the plate helps reduce unwanted reflections.

The reference mirror (4) is made from a front silvered glass plate to prevent double reflections. Its flatness should be of the order of the maximum depth resolution attainable. To simplify alignment, the mirror is mounted on a two-axis tip-tilt mount with micrometer screw adjustment and on a translation stage (9). Detection of the interference signal requires a small modulation of the reference path (about λ/2), which is accomplished by the expansion of a PZT material on which the reference mirror is mounted. A computer controlled high voltage amplifier is connected to the PZT material to control its expansion. In principle this allows continuous movements over a range of 1 μm with a resolution of 5 nm. However, a non-reproducible hysteresis behaviour of the material was observed which prevents the accurate calibration of the voltage/expansion coefficient and thus limits the positional accuracy to 40 nm.

Light in the reference arm is attenuated by a neutral density filter (10) to equal the intensity of the object beam. However, since the image intensity of the object is not normally uniform, an equal object and reference intensity cannot be achieved at every point in the image.

2.3.2 Imaging Optics

The necessary integration of the imaging optics with the Michelson interferometer imposes a number of constraints on the choice of lens arrangement. Firstly, a beam-splitter of suitable size needs to be accommodated between the imaging optics and the object of interest. This imposes a practical limit on the object to objective distance, and thus prevents the use of high resolution/magnification optics. Secondly, the beam reflected from the reference mirror must be superimposed with the object image. The optics therefore needs to produce an image of the object without altering the divergence of the reference beam. When using a parallel collimated illuminating beam (as indicated in figure 2.1) a telescope is most suited, since this preserves the reference beam divergence for any objective to reference mirror distance. It also has the added benefit of depth independent magnification, which reduces the amount of shadowing when attempting to image the bottom surface of deep holes.

The telescope consists of two lenses (6a, 6c) with focal lengths f = f1 and f = f2 which are separated by f1 + f2. The optical magnification of the telescope is given by M = f2/f1. An adjustable aperture stop (6b) is placed in the focal plane of both lenses. Its diameter controls the angle of accepted light rays (numerical aperture), as shown in figure 2.4. If the stop is aligned with the optic axis, only the reflections from a surface at 90° to the optic axis will be allowed to pass through its centre and reach the detector (CCD camera). It is therefore necessary to align the reference mirror normal to the optic axis, so as not to block the beam. When placing a rough object in the interferometer, the stop diameter will determine the maximum surface slope which can be imaged. The diameter also affects the resolution of the optical system. The diffraction limited resolution, R, is given by [64]:

R = \frac{0.61\,\bar{\lambda}}{\sin(\theta/2)}    (2.6)

where \sin(\theta/2) is the numerical aperture of the system and \bar{\lambda} is the central wavelength of the illuminating light source. It is therefore generally beneficial to operate the


interferometer with a large stop aperture.

Figure 2.4: Telecentric telescope (object of interest, lens 1, aperture stop and lens 2, each separated by the focal length f, with the image plane on the optic axis)

Since the object plane of the telecentric telescope is fixed, the reference mirror position along the optic axis is adjusted so that the coherence plane (at which OPD = 0) coincides with the object plane. A CCD video camera in the image plane converts the incident light pattern into a video signal, which is digitised by a frame grabber for storage and analysis by computer (see also appendix A).
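As a worked illustration of the resolution formula (eq. 2.6), where the aperture value is an assumption for the example rather than a figure quoted in this thesis: with \bar{\lambda} = 830 nm and a numerical aperture of \sin(\theta/2) = 0.1,

R = \frac{0.61 \times 0.83\ \mu m}{0.1} \approx 5.1\ \mu m

so halving the stop diameter roughly doubles the smallest resolvable transverse feature.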

2.3.3 Translation Devices

To allow the surface of interest to be measured, interference must be present between the reference wave and the light reflected from the object. Since the interference is localised to reflections originating from areas of equal height (along the optic axis), the object of interest has to be translated along the optic axis during the data acquisition to cover the range of interest. This is accomplished by mounting the object on a computer controlled translation stage (8). This device has a 1 μm resolution and a 20 cm range. The position feedback signal from this stage is used to define the surface positions during the Coherence Radar measurement. A further translation stage (9) is used for initial adjustments of the reference mirror position. In this way, the coherence plane (see figure 2.1) can be made to coincide with the focal plane of the objective lens (lens 1 in figure 2.4). Periodic adjustments of this may be necessary due to the varying optical path introduced by neutral density filters of varying thickness.

2.4 Data Processing

The flowchart in figure 2.5 outlines the operations performed by the software during a Coherence Radar measurement. This includes both the control of hardware (object


translation stage and PZT actuator) as well as the computation required for the phase stepping and surface finding algorithms discussed in sections 2.2.1 and 2.2.2.

The phase stepping process computes the interference amplitude based on three phase shifted images. This method yields an interference amplitude matrix A(d, x, y), where d is the object translation stage position and x, y are the pixel coordinates of the CCD sensor. The amplitude at every pixel is compared with that measured at the previous object position, d, in order to find the occurrence of a maximum. This surface finding process retains three storage arrays:

1. A(dj+1, x, y) - the interference amplitude at the current object position

2. Am(x, y) - the maximum amplitude encountered up to the last position dj

3. ds(x, y) - the object positions corresponding to the maximum amplitude array Am(x, y)

Once the object is translated along the z-axis from dj to a new position dj+1, the surface finding algorithm performs the following operations: values of the maximum amplitude up to position dj are compared with the amplitudes measured at the current position dj+1. If A(dj+1, x, y) > Am(x, y), ds(x, y) is set equal to dj+1 and Am(x, y) is set equal to A(dj+1, x, y). This process is repeated until the object has been translated through the range of interest. The resultant depth matrix, ds(x, y), then contains a surface height measure for each surface element x, y. The array Am(x, y) may be stored to aid in removal of unresolved surface points (see section 2.6). A minimal sketch of this update step is given below.
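The following Python/NumPy fragment illustrates one iteration of the surface finding update described above. It is an illustrative sketch rather than the original acquisition software; the array names follow the text:

    import numpy as np

    def surface_finding_step(A_new, d_new, A_max, d_s):
        # A_new : interference amplitude A(d_j+1, x, y) at the new position
        # d_new : object translation stage position d_j+1
        # A_max : running maximum amplitude A_m(x, y)
        # d_s   : depth matrix d_s(x, y), positions of the maxima so far
        better = A_new > A_max          # pixels where a new maximum occurs
        d_s[better] = d_new             # record the position of the maximum
        A_max[better] = A_new[better]   # update the running maximum
        return A_max, d_s

Calling this once per object position d reproduces the three-array bookkeeping shown in the flowchart of figure 2.5.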

2.5 Surface Profile Measurements

In an initial experiment designed to establish the correct behaviour of the system, a 5 pence coin (figure 2.6) was measured. This is suitable due to its rough, reflective surface, overall size, and small depth range. A topography measurement was performed by translating the coin over a range of 400 μm in 1 μm steps. The maximum depth resolution is thus limited to 1 μm. The resultant depth matrix, which contains surface positions at each pixel, is presented as a grey-scale image, where the image intensity is a measure of depth. This is shown in figure 2.7, where the image size is 512 by 512 pixels. A profile of the coin surface (figure 2.8) clearly shows the height of the '5' as well as a number of rogue points. Because the scan range in this experiment was not sufficient to reach the surface on which the coin was mounted, a large number of rogue points is observed to the right of figure 2.8 and at the corresponding position at the bottom of figure 2.7. To reduce the amount of rogue data in measurements a thresholding technique was introduced. This is discussed in the next section.


Figure 2.5: Flow chart of data acquisition and hardware control (image acquisition and averaging, reference mirror stepping, phase stepping computation of A(dj+1, x, y), surface finding comparison against Am(x, y), and object translation until the depth range is covered)


Figure 2.6: 5 pence coin (the rulings shown are 0.5mm)

Figure 2.7: Surface topography of a 5 pence coin; depth is indicated by colour (scale in μm)


Figure 2.8: Profile cross-section (depth in microns versus position in pixel number) at the position indicated by the dashed line in figure 2.7

2.6 Noise Thresholding and Surface Interpolation

To assure a reliable topography measurement a noise thresholding algorithm was developed. This determines if the amplitude of the interference is sufficient to allow an accurate measurement. A threshold value proportional to the residual noise is computed for each pixel. If the maximum interference amplitude detected during a topography measurement is smaller than this threshold value no surface position is stored.

During the Coherence Radar measurement process the surface finding algorithm searches for the object position at which the interference amplitude reaches a maximum. If no interference is present or the CCD detector is either under or over exposed, a maximum will not occur and a surface position cannot be found. However, in the presence of noise, a maximum will occur at a random position and will produce an incorrect measurement (rogue point). To make reliable surface measurements, it is important to identify and remove these rogue points. Accordingly, an addition was made to the surface finding algorithm, incorporating a noise threshold. A similar method has also been reported by [65].

The interference amplitude noise threshold value for the pixel position x, y is determined by positioning the object so that no part of its surface returns light coherent with the reference. In the absence of any interference the resultant interference amplitude must therefore consist entirely of noise. This is repeated several times to obtain a reliable measure of the mean and standard deviation of the noise. The mean, \bar{A}(x, y), and standard deviation, \sigma, of the amplitude noise samples are determined for each pixel individually and the threshold value, Ath(x, y), is computed as \bar{A}(x, y) + n\sigma, where


n is determined empirically to give the best noise rejection (see also section 2.7). Once measurement of the surface topography is completed, this threshold value is compared to the maximum detected interference amplitude, Amax(x, y). Any surface element in the depth matrix, ds(x, y), is removed if the underlying amplitude, Amax(x, y), is smaller than the threshold value, Ath(x, y). To allow unambiguous identification of the rogue points, rogue values in the depth matrix, ds(x, y), are set to a unique value reserved for that purpose. However, since the final depth matrix should contain a realistic surface measure at every pixel position, iterative mean filtering is applied. This interpolates the missing points (which have been set to the unique value) by replacing them with the average of neighbouring points. Since the nearest neighbours may also be missing, the process is iterated until all points are interpolated. A sketch of this procedure is given below.
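The combination of per-pixel thresholding and iterative mean filtering might be sketched as follows (Python/NumPy; the threshold factor n = 3 is a placeholder since the thesis determines n empirically, and NaN is used here as the unique value marking rogue points):

    import numpy as np

    def threshold_and_interpolate(d_s, A_max, A_mean, A_sigma, n=3.0):
        # per-pixel threshold A_th = mean + n*sigma of the noise samples
        A_th = A_mean + n * A_sigma
        d = np.where(A_max >= A_th, d_s.astype(float), np.nan)  # NaN = rogue
        # iterative mean filtering: fill gaps from the 4 nearest neighbours
        while np.isnan(d).any():
            p = np.pad(d, 1, mode='edge')
            stack = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                              p[1:-1, :-2], p[1:-1, 2:]])
            neigh = np.nanmean(stack, axis=0)  # all-NaN stays NaN for now
            gaps = np.isnan(d)
            d[gaps] = neigh[gaps]
        return d

Each pass fills every gap that touches at least one valid neighbour, so the loop terminates provided at least one pixel survives the threshold.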

2.7 Evaluation of Noise Thresholding

In this section we present results obtained with the Coherence Radar system by incorporation of the thresholding approach described in the previous section. To this end, its performance is assessed by measuring the surface of an object which would typically produce a large proportion of rogue points. In order to test the thresholding, an object possessing a large reflectivity range had to be found. To create a surface with these properties, a steel ball bearing was forced into an aluminium slab at high pressure to form a hemispherical crater (diameter 8 mm). The metal sample was then partly polished to create a rough, but mostly specularly reflective surface. The reflections from this surface create an image with a large range of intensities, since the steep surface gradients of the crater walls return only a small amount of diffuse light while the surfaces normal to the optic axis reflect a large proportion of the incident illumination.

A measurement of the crater shape was made by translating the object over a range of 5.2 mm in 1 μm steps. The CCD exposure time was adjusted to yield a suitable signal from the largest possible number of pixels. The remaining pixels were either saturated or under-exposed, preventing the detection of a sufficient interference signal in some regions. In this experiment, the threshold value was computed from 100 samples prior to the acquisition process. The value n was empirically determined to yield an acceptable balance between missing and rogue points.

The surface topography in figure 2.9 shows a series of rogues which were removed by this method and replaced with a unique depth value (0), shown as black. The existence of remaining rogue data points can be seen in the surface cross section presented in figure 2.10 (the transverse position of this profile is indicated by the dashed line in figure 2.9). The remaining rogue data points can be attributed to the way in which the threshold value is measured: since the object has to be positioned so that no interference is present, it has to be placed outside the coherence plane (and thus the focal plane) of the interferometer (figure 2.1). During the surface measurement, however, the interfering surface areas always coincide with the focal plane, i.e. the image is out of focus during the noise measurements but in optimal focus during the surface measurement process. A solution to this may be to eliminate interference by blocking the reference arm, rather than displacing the object.


Figure 2.9: Surface of hemispherical crater, depth indicated by colour (microns)

Figure 2.10: Surface profile of hemispherical crater after thresholding (depth in microns versus position in pixel number). The central spike is a remaining rogue point.


Figure 2.11: Surface with missing points interpolated

The data shown in figure 2.9 was subsequently interpolated using the iterative mean filter. A subjective impression of the technique's effectiveness can be gained from figure 2.11, which shows the data after interpolation. Parts of the image show a granular appearance which does not correspond to any real surface features. These correspond to areas where little light was returned from the steep walls of the crater and a breakdown of the method was to be expected. Areas closer to the centre have been interpolated well and are in good agreement with the continuous semi-circular surface of the object. We have found that any remaining rogue points can be removed successfully by median filtering the data.

2.8 Analysis of Hypervelocity Impact Craters

In this section we present a new and interesting application of Coherence Radar: the study and analysis of hypervelocity impact craters.

The physical and chemical properties of natural dust, meteoroids and artificial debris in the space environment are of considerable interest to space science research. A better understanding of particle flux and composition allows the prediction of impact frequency and hazard to space missions and ultimately aids in the development of components suitable for space missions. Post-flight analysis of spacecraft helps to decode the origins of impactors by allowing the study of their chemical signatures, impact flux and impact site morphology. This requires in part the calibration of impact signatures using laboratory generated craters [66]. Missions such as LDEF (Long Duration Exposure Facility), HST (Hubble Space


Telescope) and EURECA (European Retrievable Carrier) have exposed various materials to the space environment. The recovered materials constitute an immense archive of impact data. Many studies of this and other data have been published (see for example the three Post-Retrieval Symposia Proceedings for LDEF [67-69]) and work is ongoing in many laboratories.

Most studies of crater shapes have concentrated on simple measures such as maximum depth, central depth, mean diameter at the ambient plane, circularity index etc., together with qualitative accounts of other features such as lip characteristics or deviations from axial symmetry, e.g. up-range/down-range asymmetry [70]. Only a small fraction of the information contained in the shapes of craters has been extracted and used. If all the information contained in the crater shapes were available, it might prove possible to significantly improve the usefulness of crater morphology for inferring properties of impactors.

Topography measurements made by Coherence Radar can supply invaluable information on the deformation caused at impact. The ability to image large areas of interest with high resolution makes this a particularly useful tool in the investigation of craters generated in the laboratory. However, due to the large amount of raw data made available by Coherence Radar measurements, a meaningful quantitative comparison of crater morphology becomes difficult. In an effort to reduce this data to a few parameters describing the overall impact shape, we have tried to approximate the surface by use of Zernike polynomials. Since this allows the representation of surface shapes in parametric form, it makes it possible to compress all the numerical data to a smaller set of coefficients. Zernike polynomials were primarily chosen because they have a geometry which seems highly suited to the description of typical impact craters. It is expected, therefore, that a good approximation to typical crater shapes may be made with a relatively small number of terms, allowing a convenient parametric description of crater features which may thus enable meaningful categorisation and distinction between different types of impact events.

Surface Measurements

In our experiments, four impact craters were generated in the laboratory using the University of Kent's light-gas gun facility. Four identical impactors consisting of spherical steel ball-bearings of radius 1 mm were fired onto a target consisting of a flat plate of aluminium alloy. The impact velocity of all four impactors was estimated to be between 4.7 and 4.9 km/s. The first two craters (craters 1 and 2) were head-on impacts (angle 0° with respect to the normal) and craters 3 and 4 were formed by inclining the target plate such that its normal made an angle of 70° with respect to the trajectory of the impactor. A photograph of crater 2 is shown in figure 2.15.

The overall lateral dimension of the craters exceeded that of previously examined objects and required a new telescope with lower magnification. The reduced optical magnification invariably led to a reduction in transverse image resolution. This, however, corresponds well with the less stringent resolution requirements of this application. In order to aid a meaningful interpretation of the data, the transverse scale of the depth matrix was calibrated. The object size corresponding to one pixel or element in the depth matrix was evaluated (18.7 ± 0.6 μm) by using a standard USAF resolution chart.


Figure 2.12: Three dimensional representation of crater 1 (head-on impact)

Figure 2.13: Surface topography of crater 1 (head-on impact); transverse position in mm, depth scale in μm


Figure 2.14: Cross-section showing the surface profile of crater 1 (object displacement in μm versus transverse position in mm; position indicated in figure 2.13)

Figure 2.15: Photograph of crater 2 resulting from a head-on impact (the rulings shown are 0.5mm)


Figure 2.16: Surface topography of crater 2 (head-on impact) - compare to the photograph in figure 2.15
Figure 2.17: Surface topography of crater 3 (impact at 70° to normal)


Figure 2.18: Surface topography of crater 4 (impact at 70° to normal)

The topography of each crater was measured over a depth range of 10 mm using 3 μm steps. In order to reduce noise, each of the 3 images required for the phase stepping process was averaged over 20 measurements. The processing time required to average these images and compute the interference amplitude at each object position (using a 512 by 512 image) was approximately 7 seconds. The acquisition of one crater required processing of 3333 x 3 x 20 images (3 images averaged over 20 measurements for each interference amplitude matrix), giving a total of approximately 7 hours. The surface data was interpolated after thresholding (see also section 2.6) and median filtered to remove any remaining rogue points.

Figures 2.13, 2.16, 2.17 and 2.18 show a 6.73 mm by 6.73 mm area of craters 1 to 4 respectively. The data is presented as a colour coded (scale in μm) representation of the surface height. For comparison, the surface topography of crater 1 (figure 2.13) is also represented by a three dimensional plot (figure 2.12). A cross-sectional profile of crater 1 (figure 2.14) offers yet another representation of the data and shows the steep walls created by the impact. A photograph of crater 2 confirms the similarity between the original crater and the experimental data.

The Zernike Circular Polynomials

The Zernike polynomials have found considerable application in optics, notably for describing the aberrations of imaging systems and in the statistical analysis of the aberrations produced by turbulence in the earth's atmosphere [71]. The Zernike circular polynomials are a complete set of two-dimensional functions defined on and orthogonal over the unit radius circle [64]. They are defined by


Z_j(r, \theta) = \sqrt{2(n + 1)}\, R_n^m(r) \cos(m\theta)    for j even
Z_j(r, \theta) = \sqrt{2(n + 1)}\, R_n^m(r) \sin(m\theta)    for j odd

where

R_n^m(r) = \sum_{s=0}^{(n-m)/2} \frac{(-1)^s (n - s)!}{s!\,\left(\frac{n+m}{2} - s\right)!\,\left(\frac{n-m}{2} - s\right)!}\, r^{n-2s}    (2.7)

The values of n and m are integral and must satisfy the following relations:

m \le n,    n - |m| = even

Use of the index j permits a convenient mode ordering in terms of the radial order n and the azimuthal order m - for a given value of n, modes with a lower value of m are by convention ordered first. The orthogonality of the Zernike functions is expressed by

\int\!\!\int dx\, dy\, Z_j(x, y)\, W(x, y)\, Z_k(x, y) = \delta_{jk}    (2.8)

where the weight function is W(x, y) = 1/\pi and the domain of integration is the unit radius disk (x^2 + y^2 \le 1). Accordingly, we designate a circle enclosing the region of interest and approximate the depth function of the crater, d_s(x, y), by the Zernike expansion up to the Nth term

d_s(x, y) = \sum_{j=1}^{N} a_j Z_j(x, y)    (2.9)

The expansion coefficients are then generated easily by use of the orthogonality relation of eq. 2.8 as

a_j = \int\!\!\int d_s(x, y)\, W(x, y)\, Z_j(x, y)\, dx\, dy    (2.10)
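In practice eq. 2.10 has to be evaluated as a discrete sum over the pixels of the depth matrix. The Python/NumPy sketch below is illustrative only; it assumes m > 0 modes with the \sqrt{2(n+1)} normalisation quoted above, a square depth matrix, and a region of interest already scaled to the unit disk, none of which are prescribed by the thesis:

    import numpy as np
    from math import factorial

    def radial_poly(n, m, r):
        # R_n^m(r) of eq. 2.7; requires m <= n and n - m even
        R = np.zeros_like(r)
        for s in range((n - m)//2 + 1):
            c = ((-1)**s * factorial(n - s)
                 / (factorial(s) * factorial((n + m)//2 - s)
                                 * factorial((n - m)//2 - s)))
            R += c * r**(n - 2*s)
        return R

    def zernike_coefficient(depth, n, m, even=True):
        # discrete approximation of a_j in eq. 2.10 on a pixel grid
        N = depth.shape[0]
        y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]    # unit-disk coordinates
        r, theta = np.hypot(x, y), np.arctan2(y, x)
        disk = r <= 1.0
        ang = np.cos(m*theta) if even else np.sin(m*theta)
        Z = np.sqrt(2.0*(n + 1)) * radial_poly(n, m, r) * ang
        dA = (2.0/(N - 1))**2                    # pixel area on the disk
        W = 1.0/np.pi                            # weight function, eq. 2.8
        return np.sum(depth[disk] * W * Z[disk]) * dA

A simultaneous least-squares fit of all N modes is more robust when the data only approximately covers the disk, but the direct projection above is the literal counterpart of eq. 2.10.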

Zernike Decomposition of Laboratory Craters

The Zernike decomposition of the craters 1-4 was calculated through use of eq. 2.10. Figures 2.19 to 2.22 show a three dimensional representation corresponding to the Zernike fit (N = 150 modes) of craters 1-4 respectively, as well as the individual contributions from radially symmetric and azimuthally dependent terms. In general the Zernike fits are excellent approximations. The normalised mean-squared deviation2 of all craters is just under 0.05. The only significant discrepancy between the original data and the Zernike approximations occurs where the fit does not quite achieve the very steep walls of the craters. The coefficients of the different Zernike modes can give an indication of the basic shapes, as well as of the higher order properties. It was found for example (as would reasonably be expected) that the contribution from azimuthally dependent modes was higher in the fits of the craters produced by oblique impacts (70° to normal) than
2 The normalised mean-squared deviation is the mean-squared deviation between the fit and the data divided by the squared deviation from the zero baseline.


Figure 2.19: Zernike fit of crater 1

Figure 2.20: Zernike fit of crater 2


Figure 2.21: Zernike fit of crater 3

Figure 2.22: Zernike fit of crater 4


in the ones produced by head-on impacts. Thus we expect that the analysis in terms of Zernike polynomials will provide opportunities for fertile comparative studies. The natural matching between Zernike modes and typical crater features, and the inherently large amount of information carried in the coefficients, suggest that they may provide a powerful tool for analysis of crater morphology. This work has led to the publication of two semi-quantitative approaches to analysing the Zernike representation of crater morphologies [72, 73]. Results are very encouraging and it is anticipated that significant further research effort will now be undertaken in this area [74].

2.9 Noise

In order to gain an understanding of the limiting factors determining the accuracy and resolution of surface measurements, we attempt in this section to quantify the amount of noise introduced by components of the Coherence Radar system. Specifically, we will examine the effect on the accuracy of interference amplitude measurements of:

1. Systematic errors produced by phase stepping

2. Inaccurate displacement of the reference mirror due to PZT hysteresis

3. Mechanical vibrations which can induce random path fluctuations

4. Electronic noise present in the digital imaging system.

In each case observed values are used as the basis for a theoretical evaluation of interference amplitude errors. Section 2.10 then examines how these errors influence the accuracy of topography measurements by use of a theoretical model estimating the behaviour of a peak search. Finally, section 2.11 compares these predictions with empirical measurements of topography accuracy.

2.9.1 Phase Stepping Error

Phase stepping (section 2.2.1 on page 22) yields exact results only if the amplitude of interference remains constant for all three intensity samples, I_i = I(d + i\bar{\lambda}/6), where i = 1, 2, 3. Since the amplitude is not constant in practice, interference measurements will contain systematic errors. We will now derive this error numerically. Due to the properties of the source, the interference amplitude may be approximated by a Gaussian function of the form

A(d) = A_m \exp\left(-\frac{d^2}{2\sigma^2}\right)    (2.11)

where d is the object or reference mirror position, A_m is the maximum interference amplitude and \sigma is the standard deviation3. An estimate of the systematic interference amplitude error was computed by applying the phase stepping algorithm to a simulated interference pattern (figure 2.23) of the form:

3 A value of \sigma = 5.6 μm was determined experimentally by fitting a Gaussian function to 10 interference amplitude profiles.


Figure 2.23: Numerical simulation of low-coherence interferogram

Figure 2.24: Error in demodulating Gaussian interference amplitude


I(d) = B + A(d) \cos(4\pi d/\bar{\lambda})    (2.12)

where B is the background intensity (B \ge A_m) and \bar{\lambda} = 830 nm. The interference amplitude, A_ps(d), is then computed using the phase stepping algorithm (as shown in section 2.2.1 on page 22) such that

I_i = I(d + i\bar{\lambda}/6),    i = 1, 2, 3.    (2.13)

The amplitude error, E_a, is then given by A_ps - A. A plot of the normalised amplitude error, E_a/A_m, versus d is shown in figure 2.24. It can be seen that only a relatively small error of 0.9% results from phase stepping, such that its influence on the final result will generally be negligible. A comparison of figures 2.23 and 2.24 indicates a correlation between the slope of the interference amplitude, A(d), and the magnitude of this error.
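The numerical experiment described above is straightforward to reproduce. The following Python/NumPy sketch is an illustration under the stated assumptions (the values of B and A_m are arbitrary; this is not the original analysis code):

    import numpy as np

    lam, sigma = 0.83, 5.6    # mean wavelength, envelope width (microns)
    B, A_m = 1.0, 0.5         # background and peak amplitude (arbitrary)

    def envelope(d):          # Gaussian amplitude, eq. 2.11
        return A_m * np.exp(-d**2 / (2 * sigma**2))

    def intensity(d):         # simulated interferogram, eq. 2.12
        return B + envelope(d) * np.cos(4 * np.pi * d / lam)

    d = np.linspace(-15, 15, 2001)
    I1, I2, I3 = (intensity(d + i * lam / 6) for i in (1, 2, 3))  # eq. 2.13
    Ibar = (I1 + I2 + I3) / 3                                     # eq. 2.5
    A_ps = np.sqrt(((I1 - Ibar)**2 + (I2 - Ibar)**2
                    + (I3 - Ibar)**2) / 1.5)                      # eq. 2.4
    E_a = A_ps - envelope(d)  # systematic amplitude error
    print(np.abs(E_a / A_m).max())  # of order 1e-2, i.e. about 1%

The error vanishes when the envelope is constant across the three samples, which is why its magnitude correlates with the slope of A(d).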

2.9.2 PZT Hysteresis

The phase stepping accuracy can also be adversely affected by positional errors of the PZT mounted reference mirror. The position error incurred by the PZT transducer over a range of 500 nm was observed to be as large as 40 nm due to hysteresis. At a source wavelength of \bar{\lambda} = 830 nm this corresponds to a phase error of \pi/5 radians. In order to simulate the displacement error of the reference mirror, we attempted to model a worst-case hysteresis behaviour. Figure 2.25 shows how the assumed linear behaviour and the actual hysteresis behaviour may be approximated by linear functions relating voltage and PZT expansion. The voltage applied to the PZT material during an experimental measurement is derived assuming a linear behaviour. However, the actual displacement is related to an unknown hysteresis behaviour. The displacement error, E, is then the difference between the actual displacement and the assumed displacement. As shown in figure 2.25, E is proportional to the applied voltage, V, and thus also to the assumed displacement, d (since d is proportional to V). We have thus chosen to model E as a linear function of d such that E(d) = E_0 d, where E_0 is the position error per displacement.

An estimate of the interference amplitude error, E_a, resulting from measurements made with a displacement error in the phase shifting device was computed by applying the phase stepping algorithm to a simulated interference pattern of the form:

I(d) = B + A \cos\left(\frac{4\pi (d + E(d))}{\bar{\lambda}}\right)    (2.14)

where A is the amplitude of the interference signal (assumed to be constant), B is the background intensity (B \ge A) and \bar{\lambda} = 830 nm. The interference amplitude, A_ps(d), is then recovered using the phase stepping algorithm (section 2.2.1). Using the empirically determined value E_0 = 40 nm/500 nm = 0.08, the normalised amplitude error E_a/A is evaluated for a number of reference mirror displacements, d, as shown in figure 2.26. From this it can be seen that the maximum error is as large as 10% and thus may have a significant effect on the final measurement.


Figure 2.25: Hysteresis of the PZT material (assumed linear and actual displacement versus applied voltage, with the displacement errors E(λ/6), E(λ/3) and E(2λ/3) at the voltages V1, V2 and V3)


Figure 2.26: Amplitude error as a result of PZT hysteresis

Figure 2.27: Numerical simulation of interference in the presence of mechanical vibrations


Figure 2.28: Distribution of amplitude error

2.9.3 Vibrational Noise

The entire optical system was supported on a metal honeycomb optical breadboard. It was observed that vibrations transmitted to the apparatus from the floor induced OPD changes of up to 200 nm in the interferometer. The associated interference amplitude error resulting from these vibrations was computed numerically by applying the phase stepping algorithm to a simulated interferogram with varying degrees of added vibrational noise. The interference is given by

I(d) = B + A \cos\left(\frac{4\pi (d + n_v)}{\bar{\lambda}}\right)    (2.15)

where d is the object displacement, A is the amplitude of the interference, B is the background intensity (B \ge A), and n_v is a random uniformly distributed path length variation (±0.1 μm). A plot of I(d) versus d, showing the simulated interference signal, can be seen in figure 2.27. The interference amplitude, A_ps(d), is then computed using the phase stepping algorithm such that

I_i = I(d + i\bar{\lambda}/6),    i = 1, 2, 3.    (2.16)

The distribution of the normalised interference amplitude error, E_a/A, where E_a is A_ps - A as before, is shown in figure 2.28 and has a standard deviation of \sigma = 0.2. This shows that substantial interference amplitude fluctuations of up to 20% can result from mechanical vibrations. Use of vibration isolators may help reduce this effect considerably.

2.9.4 Image Noise

When considering image noise one has to distinguish between fixed pattern noise (variations from pixel to pixel in the same frame) and frame to frame noise (variations from frame to frame at the same pixel). Fortunately, since the data processing algorithm


operates on each pixel independently, fixed pattern noise (sensitivity variation from pixel to pixel) does not affect the accuracy. Interference amplitude measurements are, however, affected by frame to frame noise which is introduced at a number of different stages, such as CCD readout, electronic amplification and digitisation.

An estimate of the relationship between image noise and the associated interference amplitude error was derived numerically by recovering the amplitude of a number of simulated interferograms with varying degrees of added noise. A simulated interference pattern was computed using the following relationship (analogous to equation 2.15):

I(d) = B + A \cos(4\pi d/\bar{\lambda}) + n_{ccd}    (2.17)

where the image noise, n_ccd, was generated randomly with a Gaussian distribution of standard deviation \sigma_ccd. As in the previous sections the interference amplitude, A_ps, was recovered by applying the phase stepping algorithm. The resulting normalised error (E_a/A, where E_a = A_ps - A) and its associated standard deviation (\sigma_A) were then computed for a number of different \sigma_ccd.

Figure 2.29: Relationship between image noise (\sigma_ccd) and the resultant amplitude error (\sigma_A)

A plot of the image noise (\sigma_ccd) versus the resulting interference amplitude noise (\sigma_A) can be seen in figure 2.29 and shows an almost linear relationship between the two. We can now use this relationship to deduce the interference amplitude error for a known amount of image noise. First, however, it is necessary to arrive at a realistic value of \sigma_ccd. Since \sigma_ccd is a dimensionless number, it depends not only on the noise produced in the imaging system (A_ccd), but also on the strength of the interference signal (A) present during the measurement (see equation 2.17).
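A compact way to reproduce the relationship in figure 2.29 is a Monte Carlo over random fringe phases. This Python/NumPy sketch is illustrative only; the background level, trial count and the normalisation A = 1 are assumptions, not values from the thesis:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, B, A = 0.83, 2.0, 1.0    # wavelength (microns); B, A arbitrary

    def amplitude_noise(sigma_ccd, trials=20000):
        # random object positions give uniformly distributed fringe phases
        d = rng.uniform(0.0, lam, trials)[:, None]
        steps = np.arange(1, 4) * lam / 6             # the three phase steps
        I = B + A * np.cos(4*np.pi*(d + steps)/lam)   # eq. 2.17, noise-free
        I += rng.normal(0.0, sigma_ccd, I.shape)      # frame-to-frame noise
        Ibar = I.mean(axis=1, keepdims=True)
        A_ps = np.sqrt(np.sum((I - Ibar)**2, axis=1) / 1.5)
        return np.std((A_ps - A) / A)                 # sigma_A

    for s in (0.1, 0.3, 0.5):
        print(s, amplitude_noise(s))   # shows the near-linear trend

Because A = 1 here, sigma_ccd is directly the noise-to-signal ratio discussed in the text.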


First a worst-case estimate is made. We assumed that the smallest interference amplitude (A) required to allow a valid surface measurement is equal to twice the actual image noise (A_ccd). Thus, in this case we require no actual measurement of image noise, resulting in \sigma_ccd = 0.5 and a corresponding value of \sigma_A = 0.4. A best-case estimate requires a measure of the actual image noise (the standard deviation of this was measured to be 4.5 grey levels4, see also appendix A) such that the best ratio between noise and interference signal may be evaluated. Since the amplitude of the interference signal can be no larger than approximately 125 (the imaging system has an 8-bit digital intensity range of 0 to 255), the smallest possible value of \sigma_ccd (4.5/125) yields \sigma_A of approximately 0.04 (by extrapolating the relationship in figure 2.29), an order of magnitude less than the worst case.

We have determined that the noise in the imaging system results in an interference amplitude error of up to 40%. This is twice as much as that produced by mechanical vibrations (20%) and four times as much as caused by PZT hysteresis (10%). However, depending on the strength of the interference signal this may be substantially smaller, as shown above. Nevertheless the prevention of this kind of error should be of primary concern when designing Coherence Radar systems. One way to reduce the image noise is to average the intensity measurements over N images before phase stepping. The surface profiling software developed for our implementation of Coherence Radar allows this type of averaging, and can thus reduce the noise by a factor of \sqrt{N} (this also reduces noise induced by mechanical vibrations). However, we have found that this is achieved at the cost of a diminished signal. If the mechanical vibrations induce a phase shift of 2\pi, the signal may be lost, since in this case an average of I = B + A \sin(4\pi d/\bar{\lambda} + \phi) over many measurements is equal to \bar{I} = B.

In the following section we will discuss how the errors computed in this section affect the accuracy of the final surface topography measurement.

2.10 Accuracy of Surface Location

Although the surface finding algorithm (a peak search to locate the centre of the coherence function) has no intrinsic sources of error, it is sensitive to the noise present in the interference amplitude measurements. In this section we establish the relationship between the noise in the interference amplitude measurements and the accuracy of the peak location. When a peak search is performed, the maximum accuracy achievable is related to the shape of the interference profile (which approximates to a Gaussian in our case) and the uncertainty of the measurement. Let us assume a Gaussian interference amplitude profile of the form

A(d) = A_m \exp\left(-\frac{(d - d_s)^2}{2\sigma^2}\right)    (2.18)

where d is the object position, A_m is the peak amplitude and \sigma is the standard deviation defined by the coherence length of the source. If, as in figure 2.30, the maximum interference amplitude occurs at the object position d = d_s, and given an interference

where d is the object position, Am is the peak amplitude and is the standard deviation dened by the coherence length of the source. If, as in gure 2.30, the maximum interference amplitude occurs at the object position d = ds , and given an interference
4 This is an 8-bit digital number (in the range of 0-255).


amplitude error, E_a, the largest resulting position error may be approximated by the distance E_d such that A(d_s) = A(d_s + E_d) + E_a. Since A(d_s) = A_m, E_d can be evaluated by solving

1 - \exp\left(-\frac{E_d^2}{2\sigma^2}\right) = \frac{E_a}{A_m}    (2.19)

for E_d:

E_d = \sigma \sqrt{2 \ln\left(\frac{1}{1 - E_a/A_m}\right)},    where \frac{E_a}{A_m} < 1    (2.20)

Figure 2.30: Peak search: relationship between amplitude error (E_a) and position error (E_d)

Source | Section | Interf. amp. error (E_a/A) | Position error (μm)
Phase stepping (inherent) | 2.9.1 | 0.009 | 0.32
PZT hysteresis | 2.9.2 | 0.110 | 1.14
Mechanical vibrations | 2.9.3 | 0.2 | 1.58
Image noise | 2.9.4 | 0.05 - 0.4 | 0.76 - 2.39

Table 2.1: Values of approximate positional error based on interference amplitude error

Using the previously determined value of \sigma = 5.6 μm (based on measurements of the interference profile using our source), the positional accuracy of the peak finding process can be computed given the error values determined in the last section. Table 2.1 summarises all the error sources discussed above and presents the corresponding surface finding error, E_d, determined using equation 2.20.


Errors due to phase stepping are smaller than the smallest step size used in displacing the object over the range of interest and thus may be neglected. PZT hysteresis, mechanical vibrations and image noise, however, produce sufficiently large errors (up to 2.5 μm) to be considered limiting factors in the overall measurement accuracy. Superior depth resolution could be achieved by:

- Decreasing mechanical vibration by introducing isolators.

- Reducing image noise.

- Using translation stages with higher accuracy and resolution.

- Replacing the peak finding method with algorithms already presented for Linnik and Mirau type interferometers, such as centroid finding [75] and frequency domain analysis [62].

To complete the discussion of accuracy, another more fundamental source of error should be considered. Due to the size of a pixel on the CCD sensor, a finite area of the object surface is imaged by each pixel, i.e. there is a limited transverse resolution. During the measurement procedure a single surface height is assigned to each pixel location. Therefore, if the surface of interest is rough, the depth resolution will be limited to the range of surface positions inside that area. However, even if the size of the pixels is decreased sufficiently, the transverse resolution is still limited by aberrations and diffraction. In the case of coherent superposition the finite resolution also gives rise to speckle, which limits the depth resolution to the surface roughness [29].

2.11 Empirical Evaluation of Accuracy

In order to assess the overall accuracy of our Coherence Radar system, the surface position of a flat mirror was measured along a row of pixels and the deviation of this profile from a straight line was determined. This data was acquired by scanning an inclined mirror over a range of 64 μm in 1 μm steps. No averaging was performed. In figure 2.31 the interference amplitude is represented by a grey scale image showing its variation along a row of 512 pixels and over a range of 60 object positions (depth). The result of the amplitude peak search is indicated by a black line and shows the position of the experimentally determined profile. Figure 2.32 shows a plot of the line of best fit through these surface positions. The residuals indicate an RMS deviation from the straight line of best fit of 1.1 μm. Because the test surface (mirror) is not optically rough, the resolution is not limited by the surface roughness and the RMS value can be compared to the figures in table 2.1, showing good agreement.

2.12 Conclusion

In this chapter we have presented the principles of Coherence Radar, described our experimental system and reported results of various surface measurements. We have demonstrated its ability to measure the topography of a rough reflective metal surface and have developed a method to filter noise using thresholding. Results of a new and promising application were presented: the measurement of hypervelocity impact craters and their subsequent analysis using Zernike polynomials.



Figure 2.31: Interference amplitude vs. depth along a line of 512 pixels, showing measured surface position
[Two panels: line of best fit through the data (depth in microns versus pixel position); residuals, RMS deviation = 1.1 microns.]

Figure 2.32: RMS deviation of surface position from line of best fit


This involved measuring surface topographies of objects as large as 9 by 9 mm, containing deep holes and steep walls over a depth range of at least 10 mm. Finally, the accuracy of Coherence Radar was investigated theoretically and compared to empirically determined results. In summary, Coherence Radar is ideally suited for surface measurements of large objects which require high accuracy and a large depth range. Potential applications include the inspection of manufactured parts containing milled slots, drill holes or cracks, comparative analysis of deformations and the study or documentation of biological specimens.

Chapter 3

Imaging of Multiple Reflecting Layers


3.1 Introduction

Building on the work described in chapter 2, the Coherence Radar method is modified to allow tomographic and volume imaging. Scanning confocal microscopy [12, 14] and optical coherence tomography (OCT) [31, 43] are capable of delivering tomographic images of both inert and biological material (see also section 1.4.3 on page 8). However these methods suffer from drawbacks associated with mechanical beam scanning such as vibration, motor wobble [76], path length modulation [77], geometric distortions [78, 79] and slow speed. Coherence Radar overcomes many of these limitations by the use of a detector array (CCD) and thus potentially offers increased speed and stability. Recently a number of new low-coherence methods have emerged which use a CCD array as the detector of interference. These methods are capable of imaging multilayer structures [48], human skin [80], and highly scattering tissue [81] but do not make use of the CCD as a fast two-dimensional imaging device. Rather, the CCD is employed to measure the spectrum of reflected light [48, 80] and the radial distribution of scattered light [81]. Although Swanson [82] describes a technique similar to Coherence Radar, designed for the study of translucent materials, to our knowledge no results have been published to date which document its successful application. Thus we present, for the first time, tomographic images of multilayer structures obtained using CCD based low-coherence interferometry without the need for mechanical transverse scanning [83]. Potential applications may include the investigation of opaque objects embedded in a transparent medium and the location of interfaces between transparent media of different refractive index, such as fibre composite materials and ceramic structures. In addition, there is considerable interest in methods capable of imaging biological tissue, in particular structures in the human eye such as the cornea, lens and retina. We note here that most biological tissue is highly scattering and thus reduces both the intensity of the reflected light as well as the accuracy and resolution of measurements. We therefore defer discussion of such problems to chapter 4 and here concentrate on applications which do not involve scattering media.



Figure 3.1: Interfaces separated by Δd = 11λ

3.2 Theoretical Considerations

Compared to the investigation of opaque object surfaces (chapter 2), the study of translucent materials with multiple reflecting layers makes more stringent demands on signal detection and analysis. An attempt is thus made here to theoretically outline some of the factors affecting the measurement of multilayer structures and to gauge limitations on performance. A number of complications arise as a consequence of moving from surface investigations to studies of translucent volumes.

Axial Resolution: When studying multilayer objects, more than one feature has to be identified along the optic axis. Since several interference maxima can now be observed it is no longer sufficient to use a peak search to locate the position of interfaces.

Weak Interference Signal: Light reflected from a multilayer object contains a coherent as well as an incoherent component. If the number of reflections originating from outside the coherence plane is large, the returned light may contain only a small fraction of coherent light (useful signal) and a detector with a high dynamic range must be used to measure the signal.

Object-Light Interactions: When imaging multi-layered objects using Coherence Radar, light has to travel through the object medium. Physical effects such as delay, refraction, dispersion, scatter and birefringence may affect the measurement.

3.2.1 Resolving Multiple Layers

Requirements for a system which resolves multilayer structures are distinct from those of a profilometer, where an accurate measurement of the surface location is of primary concern. Since there are now several, possibly closely spaced, features along the optic axis, it is no longer sufficient to use a peak search to locate these features.


Figure 3.2: Interfaces separated by Δd = 11λ + λ/8

Figure 3.3: Interfaces separated by Δd = 11λ + λ/4


Ideally, the final data should contain a value for the location of each reflecting interface together with the intensity of the reflection. If only a single reflecting interface is present along the optic axis, a measure of the interference amplitude and a subsequent peak search can locate the position adequately. However, can a peak search still accurately locate several closely spaced reflections? In order to answer this question, let us consider the intensity observed at the output of a low-coherence Michelson interferometer when an object composed of two closely spaced (about 9 µm) reflecting interfaces is placed in one of the arms. Let the position of this object along the optic axis be d (relative to the first interface) and let the spacing between the interfaces be Δd. The interference is modelled by assuming a sinusoidal intensity variation modulated by a Gaussian coherence function, γ(d) = exp(-d²/(2σ²)), where σ = 5.6 µm, such that γ(d) corresponds to a source coherence length of lc = 25 µm (FWHM). The output intensity, I(d), as a function of the object position is then given by:

    I(d) = 1 + V { exp(-d²/(2σ²)) cos(4πd/λ) + exp(-(d - Δd)²/(2σ²)) cos[4π(d - Δd)/λ] }    (3.1)

where V is the visibility of the interference and λ = 0.830 µm. Figures 3.1, 3.2 and 3.3 show a plot of I(d) versus d for a number of interface separations, Δd. A dashed line indicates γ(d) and γ(d - Δd), which correspond to the interference envelope that would be observed for each interface individually. It is interesting to note that fringe beating produces a combined interferogram of unpredictable shape, depending on the value of Δd. A shape equivalent to a simple addition of the interference envelopes (since the sinusoidal terms are in phase) can be observed in figure 3.1. If an additional separation of λ/4 is introduced, the resultant beating effect aids in the distinction of the two interferograms (figure 3.3). However, this also causes the envelope maxima to occur at different positions (compare figures 3.2 and 3.3) even though the separation, Δd, remains essentially constant. In conclusion, it can be said that the ability to differentiate between two reflections becomes difficult, or impossible, if the separation between the interferograms is of the order of the coherence length, lc. Even though the interferograms of individual reflections may still be distinguishable in some cases due to beating effects, the maxima of their envelopes are not a reliable indication of the interface positions.
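Equation 3.1 is easily explored numerically; the following sketch reproduces the kind of curves shown in figures 3.1 to 3.3 (parameter values are those quoted in the text):

    import numpy as np

    LAM, SIGMA, V = 0.830, 5.6, 1.0          # microns; values from the text

    def envelope(x):
        """Gaussian coherence function of equation 3.1."""
        return np.exp(-x ** 2 / (2.0 * SIGMA ** 2))

    def interferogram(d, dd):
        """Two-interface output intensity I(d) of equation 3.1."""
        return 1.0 + V * (envelope(d) * np.cos(4 * np.pi * d / LAM)
                          + envelope(d - dd) * np.cos(4 * np.pi * (d - dd) / LAM))

    d = np.linspace(-15.0, 25.0, 8000)       # object position (microns)
    for dd in (11 * LAM, 11 * LAM + LAM / 8, 11 * LAM + LAM / 4):
        print(f"separation {dd:.3f} um: peak intensity {interferogram(d, dd).max():.3f}")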

3.2.2 Simulation of Signal Strength from a Multilayer Object

In chapter 2 the interference between light from a rough object and from a plane mirror was investigated. When the object surface coincides with the coherence plane, interference will occur, and very high visibility fringes can be measured (see equation 2.1 on page 22). When studying translucent multilayer objects the situation is very different. Coherent light is returned from interfaces within the coherence length of the source; light returned from all other interfaces is incoherent. The result is a small amount of useful signal-carrying light on a large background of incoherent light. To quantify the amount of signal returned from a multilayer object, we consider a simple mathematical model of a translucent object composed of many identical glass plates, as shown in figure 3.4.


Figure 3.4: Model of multilayer object composed of many identical glass plates


The interference of n discrete reflecting and transmitting interfaces separated by a distance Δd can be approximated by reflections from the glass-air boundaries in the object (figure 3.4). We simplify the analysis by making the following assumptions:

- The interface separation, Δd, is large compared to the coherence length of the source
- Light is not scattered or absorbed
- Light is incident as a parallel collimated beam normal to the surface of the glass plates
- No multiple reflections occur between boundaries

Consider placing such a stack of glass plates (containing n interfaces) in the object arm of a Michelson interferometer illuminated by a low-coherence source of central wavelength λ. It can be shown that the output intensity, I(d), as a function of the object position, d, is then given by:

    I(d) = It + Ir + 2√Ir Σ_{j=1}^{n} √Ic(j) γj(d) cos[4π(d - dj)/λ]    (3.2)

where dj is the object position which minimises the optical path difference (OPD) between interface j and the reference mirror, such that the Gaussian coherence function γj(dj) = 1. Ic(j) is the intensity reflected from interface j, It is the total intensity reflected from the object and Ir is the intensity reflected from the reference mirror. The intensity, Ic(j), returned from interface j is dependent on the Fresnel reflectivity (R) and transmissivity (T) of the glass boundaries and may be expressed as

    Ic(j) = I0 R T^(2(j-1))    (3.3)

where I0 is the intensity incident on the object. The intensity of light reflected from all n interfaces (It) is then given by

    It(n) = Σ_{j=1}^{n} Ic(j).    (3.4)

Using equation 3.2 and assuming d = dj, the amplitude of the interference term, i.e. the interference amplitude (see equation 2.2 on page 22), may be expressed as a function of j:

    A(j) = 2√(Ic(j) Ir).    (3.5)

Let us assume that, in order to maximise the visibility, the intensity reflected from the reference beam (Ir) is adjusted to equal the intensity reflected from the object (It). Using equations 3.3 and 3.4 in equation 3.5 we then obtain:

    A(j, n) = 2 I0 R √( Σ_{i=1}^{n} T^(2(i+j-2)) ).    (3.6)

Figure 3.5 shows a plot of A(j, n)/I0 versus j for a stack of 100 glass plates (n = 200). Using the Fresnel equation [84], and assuming T = 1 - R, the values of R and T were evaluated for two cases:


Figure 3.5: Interference amplitude versus interface number in a stack of 100 glass plates

1. The gaps between the glass plates are filled with air (assuming a refractive index of air na = 1); this yields R = 0.043 (T = 0.957).
2. The gaps between the glass plates are filled with water (assuming a refractive index of water nw = 1.322); this yields R = 0.005 (T = 0.995).

Figure 3.5 shows that the decay of A with respect to j is much less rapid for water as compared to air. Since A represents the strength of the signal recorded by the Coherence Radar technique, this suggests that there is a limit to the number of boundaries which may be detected in such an object and that this limit is much lower at high boundary reflectivities, R. In the next section we show that there is indeed such a limit and show how it is determined by the dynamic range of the detector (CCD).

Dynamic Range

When imaging multilayer structures only the coherent part of the returned light is useful. When a large amount of incoherent light is present it is necessary that the detector resolves the small coherent signal superimposed on a large incoherent background. Even though the incoherent light does not contribute any useful signal, it nevertheless contributes to the saturation of the detector. If the light is attenuated sufficiently to prevent saturation, the amplitude of the coherent signal may then be less than the noise floor of the detector. Thus, in practice, the detection of a small coherent signal may not be possible since it is limited by the dynamic range of the detector. The dynamic range may be defined as the ratio between the saturation and noise level of a detector. The dynamic range, R, required to detect interference so that the magnitude of the signal, Imax - Imin, is S times larger than the noise floor, is given by:

    R = S Imax / (Imax - Imin).    (3.7)

Using equation 3.2 and assuming d = dj this becomes


Figure 3.6: Dynamic range required to detect the interference signal from interface j in a stack of 100 glass slides (200 interfaces)

    R(j) = (S/2) [ (Ir + It) / (2√(Ir Ic(j))) + 1 ].    (3.8)

With the help of the model developed in section 3.2.2 we can now derive the dynamic range required to measure the positions of reflective interfaces in a stack of glass plates. Using equations 3.3 and 3.4 in equation 3.8 and again assuming that Ir = It we obtain

    R(j) = (S/2) [ √( Σ_{i=1}^{n} T^(2(i-j)) ) + 1 ].    (3.9)

Values of R(j)/S are evaluated for n = 200 (a stack of 100 glass plates) and plotted as a function of j in figure 3.6.


From figure 3.6 it is evident that a typical CCD signal digitised to 8-bit precision (R = 256:1) does not offer sufficient dynamic range to allow position measurements of air-glass interfaces beyond j = 110. Although these values are based on a simple model, we may conclude that for industrial applications, where objects are composed of a few interfaces of high relative refractive index (such as, say, 10-50 glass-air boundaries), the system should be adequate; but when measuring complex structures consisting of a large number of such interfaces, an imaging system with a much higher dynamic range is required. This is not so for the case of glass-water boundaries: here a low dynamic range is sufficient to image a large number of reflective interfaces.
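The model of equations 3.6 and 3.9 is straightforward to evaluate; the sketch below (with the signal-to-noise factor set to S = 1 for simplicity, an assumption) estimates the deepest interface detectable at 256:1 for the two cases considered:

    import numpy as np

    def amplitude(j, n, R, I0=1.0):
        """A(j, n) of equation 3.6 for interface j of n, boundary reflectivity R."""
        T = 1.0 - R
        i = np.arange(1, n + 1)
        return 2.0 * I0 * R * np.sqrt(np.sum(T ** (2.0 * (i + j - 2))))

    def required_dynamic_range(j, n, R, S=1.0):
        """R(j) of equation 3.9, with the reference matched to the object."""
        T = 1.0 - R
        i = np.arange(1, n + 1)
        return (S / 2.0) * (np.sqrt(np.sum(T ** (2.0 * (i - j)))) + 1.0)

    n = 200                                   # 100 glass plates
    for label, R in (("air gaps", 0.043), ("water gaps", 0.005)):
        dr = [required_dynamic_range(j, n, R) for j in range(1, n + 1)]
        ok = [j for j, r in enumerate(dr, start=1) if r <= 256.0]
        print(label, "-> deepest interface within 256:1 =", ok[-1] if ok else 0)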

3.2.3 Effect of the Object Medium on the Measurement

Delay

Light travelling in a medium of absolute refractive index n > 1 experiences a time delay relative to a path in vacuum (or in air) of the same length, equal to τ = Lg n/c, where Lg is the geometric length of the path and c is the speed of light in vacuum. Thus, an optical path Lo measured using a low-coherence interferometer in this medium corresponds to a geometrical distance given by:

    Lg = Lo/n.    (3.10)

Position measurements performed using a low-coherence interferometer are only accurate if the refractive index profile is the same in both arms. In surface profile measurements this condition is satisfied since light in both arms travels in air. When observing translucent samples, however, light travels through the object medium and the refractive index must be known in order to correct for the delay. Since in practice objects consist of a number of materials with different refractive indices, a correction becomes difficult in most cases.

Refraction

Refraction of light at the object medium boundary can alter the path of light entering a multilayer structure. The most severe consequence of this is a shift in the position of the focus. Figure 3.7 illustrates this in the case of a plane boundary between the surrounding air and the medium of the object. As shown, refraction at the object boundary can be related to the numerical aperture of the imaging system (NA) by using Snell's law

    n1 sin θi = n2 sin θr = NA    (3.11)

where θi and θr are the angles of incidence and refraction respectively, and n1 and n2 are the absolute refractive indices of the surrounding air and the object medium respectively. The position of the focal plane relative to the front of the object (z) is related to the shift in the focal plane relative to its position in air (Δf) by

    z tan θi = a = (z + Δf) tan θr.    (3.12)

Using equations 3.11 and 3.12 we can then express¹ Δf as a function of z:

¹ The author acknowledges the help of George Dobre in deriving this equation.



Figure 3.7: Focal plane shift caused by refractive object medium

    Δf(z) = [ √( (n2² - NA²)/(n1² - NA²) ) - 1 ] z    (3.13)

There is also a shift in the position of the coherence plane, δc(z), given by z(1/n2 - 1), which is opposite in direction to that experienced by the focal plane. Consequently the plane in which the coherent image is formed will be separated from the focus by a geometrical distance δt(z) = Δf(z) - δc(z), equivalent to a path in vacuum of δt(z)/n2. It is possible to adjust the reference mirror position so that δt = 0, but since z changes during the course of the measurement it is difficult to eliminate this effect entirely. Several groups have reported successful simultaneous measurements of refractive index and depth using confocal [85] and transilluminating [86] low-coherence interferometers. These techniques could potentially be implemented to compensate for this coherence and focal plane divergence.
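A short numerical sketch of this focus/coherence plane divergence follows (equation 3.13 and the coherence plane shift quoted above; the numerical aperture NA = 0.1 and the index n2 = 1.51 are assumed values for illustration only):

    import math

    def focal_plane_shift(z, NA=0.1, n1=1.0, n2=1.51):
        """Focal plane shift of equation 3.13 at geometric depth z."""
        return (math.sqrt((n2 ** 2 - NA ** 2) / (n1 ** 2 - NA ** 2)) - 1.0) * z

    def coherence_plane_shift(z, n2=1.51):
        """Coherence plane shift z(1/n2 - 1), opposite in sign to the focal shift."""
        return z * (1.0 / n2 - 1.0)

    z = 1000.0                                # 1 mm into the medium (microns)
    df = focal_plane_shift(z)
    dc = coherence_plane_shift(z)
    print(df, dc, df - dc)                    # the divergence grows linearly with z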

Unbalanced Chromatic Dispersion

The object medium of a translucent sample can also introduce considerable chromatic dispersion. Chromatic dispersion is the dependence of propagation speed on the optical frequency of the light. Since, for the purpose of low-coherence interferometry, a broad band source is used, light passing through dispersive media in the object arm will experience a variable delay, depending on its wavelength. If an identical dispersive process is not present in the reference arm, i.e. the dispersion is not balanced, the observed interference as a function of the optical path difference will deviate from that observed in a balanced system. Shibata et al. [87] predicted theoretically and confirmed experimentally that an unbalanced dispersive path in a two-beam interferometer leads to a significant loss in the degree of coherence as well as broadening of the temporal coherence profile. Both these effects have independent detrimental effects on the measurement accuracy and resolution. Broadening of the coherence profile can lead to a reduction in axial resolution, while a decreased interference amplitude further reduces the maximum number of interfaces which can be detected in a multilayer object (see section 3.2.1). The amount of dispersion depends on the interface of interest and the composition of the object. Since the dispersive effect increases with increasing axial distance into the object, interfaces located at the back of the sample will be resolved less accurately than those closer to the front, effectively imposing a limit on the depth to which an object can be investigated at a given resolution.

Polarisation

Interference of two beams is only possible if both contain electric field components parallel to each other. The direction of a field vector can be changed due to polarisation effects in the object medium. Three effects may affect the polarisation state of a beam:

- Dichroism
- Reflection
- Birefringence

Dichroism can be described as selective absorption, an effect which causes light of a given linear polarisation state to be absorbed while its orthogonal state is transmitted [88]. Dichroism is most commonly exploited to construct linear polarisers, such as polaroid films. Given that the illuminating beam in a two beam interferometer is unpolarised, dichroism in the object medium may weaken the visibility of the interference, but will not cause complete signal loss. Reflection and transmission through a medium is polarisation dependent if the angle of incidence is oblique [88]. Light reflected from one interface may lack an electric field component which is present in the reference beam. In general, however, this effect is small at small angles of incidence. Since the acceptance angle of the telecentric telescope used in Coherence Radar is small and can be adjusted via an aperture stop (see figure 2.4 on page 26), this effect may be reduced to a negligible level. Birefringence causes light to propagate at a polarisation dependent velocity [88]. Birefringent materials possess a slow and a fast axis perpendicular to any propagation direction. Electric field components parallel to the slow and fast axes propagate at two discrete velocities. This may cause two separate low-coherence interferograms to appear, separated by a distance corresponding to the relative optical path shift introduced between the slow and fast propagation. Due to this, interference fading (if the relative phase shift between the fast and slow axis is an integer multiple of π) and coherence profile broadening can occur. Static strain induced birefringence effects observed in fiberised low-coherence interferometers can be compensated by polarisation control [89]. However, since birefringence effects may vary along the transverse extent of the sample, an implementation of this in bulk, CCD based interferometers is not practical.


A more promising method has been presented [90] which allows simultaneous birefringence characterisation and ranging in a fiberised low-coherence Mach-Zehnder interferometer by means of double detection. It may be possible to implement a similar arrangement in bulk, using two CCD detectors.

3.3 Experimental

Two transparent multilayer objects were measured using Coherence Radar [83]. An initial study concentrated on identifying boundary layers formed by a stack of 20 glass plates, allowing a direct comparison with the theoretical predictions presented in section 3.2.2. The study of a second multilayer object, a damaged solar cell (retrieved from the Hubble Space Telescope), demonstrates an application relevant to impact analysis and Space Science research.

3.3.1 Method

The work described in this chapter was completed using essentially the same experimental system already described in section 2.3 on page 23. Although some modifications were made to the imaging optics (in order to achieve a magnification suitable for individual objects of interest), most modifications were made to the data processing software. The peak search process (described in chapter 2) was not implemented for the study of multilayer objects. The main stages in the data processing implemented for translucent object imaging were:

- Acquisition of the intensity signal from the CCD camera at three different phase positions.
- Processing of the three phase-stepped images to yield a measure of interference amplitude at every lateral (x, y) position in the image (as described in section 2.2.1).
- Storage of the interference amplitude A(x, y) at object position d (peak search omitted).

These processing steps are repeated for all object positions (d) along the optic axis (z), yielding a set of data with transverse (x, y) as well as axial (z) extent; a sketch of this acquisition loop is given below. Since volume data cannot be represented adequately, figures in this chapter are displayed as x, y (transverse) or x, z (longitudinal) cross sections.
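The acquisition and processing sequence can be summarised in the following Python sketch (the three hardware callbacks are hypothetical stand-ins for the camera, PZT and translation stage drivers, and a simple variance-based amplitude estimator is used here as a stand-in for the three-step formula of section 2.2.1):

    import numpy as np

    def acquire_volume(grab_frame, set_pzt, translate_object, N, n):
        """Sketch of the Coherence Radar volume acquisition loop."""
        volume = []
        for _ in range(N):                    # object positions along the optic axis
            frames = []
            for i in range(n):                # phase-stepped intensity images
                set_pzt(i)                    # shift the reference mirror phase
                frames.append(grab_frame())
            I = np.stack(frames).astype(np.float32)
            A = np.sqrt(2.0 * ((I - I.mean(axis=0)) ** 2).mean(axis=0))
            volume.append(A)                  # store A(x, y); peak search omitted
            translate_object()                # advance the object by one z step
        return np.stack(volume)               # axes (z, y, x); slice for x,z sections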

3.3.2 Investigation of 20 Glass Plates

In this section an experimental investigation of a stack of 20 glass plates is presented. A tomographic cross section perpendicular to the plane of the glass plates (x, z) allows measurement of plate thickness and separation, and results are compared with the theoretical model presented in section 3.2.2. Using microscope cover slips and paper spacers, a stack was constructed which retains air spaces between the layers of glass. The design of this sample object closely reflects that assumed in the simulation of multilayer objects in section 3.2.2 and shown in figure 3.4.


[Grey-scale interferogram with the optical path position of each glass-air boundary marked on the left (177 to 1982 µm of object position for the first eight plates) and the derived plate thicknesses marked on the right: Glass 1, D = 117 µm; Glass 2, D = 99 µm; Glass 3, D = 103 µm; Glass 4, D = 130 µm; Glass 5, D = 109 µm; Glass 6, D = 115 µm; Glass 7, D = 110 µm; Glass 8, D = 102 µm.]

Figure 3.8: Interferogram of the first 8 glass plates in a stack of 20



Figure 3.9: Average of interference amplitude versus depth (the amplitude is calculated as an average of 10 neighbouring pixels)

For the measurement process the sample was mounted approximately perpendicular to the optic axis. Acquisition of a cross-section (along a line of 512 pixels) was obtained by translating the object over a range of 6 mm along the optic axis (z) in 1 µm steps. CCD images were averaged 5 times prior to phase stepping in order to reduce noise and improve contrast. A part of the collected data is displayed in figure 3.8, which shows interference amplitude represented by grey-scale, such that large amplitudes are darkest. By identifying the glass-air boundaries (seen as dark stripes in figure 3.8) values for the optical path (OP) could be computed for each interface (shown on the left side of figure 3.8). These OP values were then corrected for the refractive index of glass, assuming ng = 1.51, to yield a measure of true plate thickness, D (shown on the right side of figure 3.8). We note that ghost images of the boundaries are present due to multiple reflections between the glass plates. A plot of interference amplitude as a function of object position is shown in figure 3.9. This data was computed by averaging the interference amplitude measured at ten adjacent pixel positions along the transverse direction (see figure 3.8) in order to increase the signal to noise ratio. This data can be compared with the model developed in section 3.2.2. The empirically determined interference amplitude, Ae, can be expressed as

    Ae(j) = k A(j)    (3.14)

where A(j) is the interference amplitude predicted by the model in section 3.2.2 and k is the constant of proportionality relating to the conversion efficiency of the detector used in the experiment.



Figure 3.10: Log of maximum interference amplitude, Ae(j), versus interface number, j

Using equations 3.3 and 3.4, Ae can also be expressed as:

    Ae(j) = 2k √( Ir I0 R T^(2(j-1)) )    (3.15)

where R is the reflectivity, T is the transmissivity (T = 1 - R) and j is the interface number (see figure 3.4). Taking logs of equation 3.15 gives a convenient linear model for comparison with the experimental data:

    ln[Ae(j)] = ln[ 2k √(Ir I0 R) / T ] + j ln[T]    (3.16)

In figure 3.10 the natural logarithm of the experimentally obtained interference amplitude values (peak values of the data in figure 3.9) is plotted versus the interface number j. A least squares linear fit to the data yields ln[T] = -0.0333. The transmissivity, T = 0.967 ± 0.002, may then be compared to a value independently derived using the Fresnel reflection formula (at normal incidence) [84], such that T = 4n/(n+1)² = 0.957, where² n = ng/na = 1.52. The two values of transmissivity are in good accordance with each other and confirm the validity of the model. The observed discrepancy of 1% may be attributed to scatter, absorption, multiple reflections, oblique incidence and noise, all of which are neglected in the model.
² As given by the manufacturer (Chance Propper Ltd, Smethwick, Warley, England), the index of refraction at λ = 546 nm (Hg e-line) is ne = 1.524 ± 0.002 and nd = 1.522 ± 0.002.
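The fit of equation 3.16 amounts to a one-line linear regression; a minimal sketch follows (the synthetic check uses the fitted value T = 0.967 rather than our raw data):

    import numpy as np

    def fit_transmissivity(peak_amplitudes, interface_numbers):
        """Least-squares fit of ln Ae(j) = const + j ln T (equation 3.16);
        returns the boundary transmissivity T."""
        slope, _ = np.polyfit(interface_numbers, np.log(peak_amplitudes), 1)
        return float(np.exp(slope))

    j = np.arange(1, 41)
    print(fit_transmissivity(0.5 * 0.967 ** j, j))   # recovers 0.967 exactly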


3.3.3 Solar Cell

Measurements of impact crater damage to spacecraft caused by micrometeoroids in earth orbit yield information about impactor origin, composition and flux, and are of considerable interest to space science research (see also section 2.8). As part of the maintenance work carried out on the Hubble Space Telescope in December of 1993 [66], a solar cell array was returned to earth after three and a half years in space and subsequently became available for particle impact damage assessment. In comparison to conventional subjective optical inspection or electron microscopy imaging, low-coherence interferometry can reveal 3-D structures within the solar cell layers and damage not visible from the surface. Each solar cell in the array has a dimension of 21.1 mm × 40.5 mm. The sample measured by us contained a small crater of visible diameter 2 mm penetrating the cover glass (figure 3.14). We also observed a crack in the cover glass running along the entire length of the cell and intersecting the crater area. During the measurement the solar cell was mounted approximately perpendicular to the optical axis and aligned so that the crater appeared in the centre of the CCD image. The solar cell was translated along a range of 450 µm at 1 µm steps over a period of approximately one hour. Data was acquired (without averaging) using a 400 by 400 pixel subset of the available 768 × 572 pixel image. The resulting 450 images describe a volume comprising 400 by 400 by 450 elements (or voxels). In principle, cross-sectional tomographic images can be extracted from the volume of data at any conceivable angle. In order to visualise the layers composing the solar cell as well as the damage caused by the impact, a longitudinal section parallel to the optic axis was extracted (as shown in figure 3.13). The transverse position of this cross section is indicated by a line in the image of the solar cell surface in figure 3.12³. The cross-sectional slice can be seen in figure 3.13 and clearly shows the impact damage to the layers of the solar cell. The layers, which can be identified as dark bands of interference amplitude (a large interference amplitude is represented by dark areas), correspond to the interfaces formed by the cover glass, adhesive and BSFR solar cell material. The interface position is indicated at the right of figure 3.13. By correcting for the absolute refractive index of glass (ng = 1.51) a measure of the CMX cover glass thickness was derived (measured value = 145 µm, known value = 150 µm). For comparison, a diagram of the cross sectional solar cell anatomy is shown in figure 3.14 [66].

3.4 Conclusion

We have demonstrated that Coherence Radar can provide useful information about transparent multi-layered structures at scales of a few microns. In order to gauge the potential performance of Coherence Radar for such applications, a theoretical model of an object comprising a stack of glass slides was formulated. Our model predicts an interference signal of poor visibility when investigating a large number of reflective layers. If the reflectivity of these layers is high, low-coherence measurements of their location will be limited by the dynamic range of the detection system.
³ An approximate intensity image, I(x, y), is derived by adding the interference amplitude A(x, y, d) at all sample displacements, d, so that I(x, y) = Σ_{i=1}^{N} A(x, y, di).


Figure 3.11: Extraction of a cross-sectional image from a set of transverse images


Figure 3.12: Image of the Hubble Space Telescope solar cell showing the position of the extracted cross section relative to the impact site

Experimental measurements made on a similar physical model (consisting of 20 glass plates) showed close agreement with the theoretical model. A number of potential problems arising as a consequence of light propagation in the object medium were identified and discussed. The effects of refraction, dispersion and birefringence were not observed to cause any appreciable decrease in the subjective quality of any of the experimentally obtained cross sectional images. A damaged solar cell retrieved from the Hubble Space Telescope was successfully analysed, yielding information about an impact crater not visible from the surface. In conclusion, the system enables non-destructive testing of reflecting interface layers inside transparent objects and offers an attractive alternative to OCT and confocal microscopy.


[Grey-scale x, z section with labels: crater area; cover glass; adhesive; solar cell. Marked optical path values of 219 and 247 µm, with the geometric cover glass thickness (145 µm) in parenthesis; axes: object position (mm) versus transverse position.]

Figure 3.13: Tomographic image of the solar cell (geometric distance is given in µm in parenthesis)


[Layers, from front to back: CMX cover glass (150 microns); DC 93500 adhesive (40 microns); BSFR solar cell (250 microns); RTV S691 (70-80 microns); glass fibre filled with DC 93500 (35 microns); silver mesh (50 microns); glass fibre filled with DC 93500 (35 microns).]

Figure 3.14: Schematic view of the solar cell cross-section

Chapter 4

In Vitro Imaging of the Human Ocular Fundus


4.1 Introduction: Properties of the Human Fundus

Monitoring retinal thickness and retinal nerve fibre layer thickness can aid early diagnosis and therapy control of macular degeneration, glaucoma, macular oedema and other optic neuropathies [91]. Because three dimensional structures are not easily revealed by 2-D images obtained from conventional fundus cameras and ophthalmoscopes, considerable interest in 3-D imaging techniques has developed in recent years. Although non-optical methods such as ultrasound and NMR allow three dimensional imaging of the human eye in vivo, the resolution these methodologies deliver is in general not sufficient for accurate fundus examinations [38]. Confocal microscopy, and in particular confocal laser scanning ophthalmoscopes (cSLOs), have emerged as an attractive alternative due to their outstanding depth discrimination and scatter rejection capabilities. However, the depth resolution of cSLO studies of the fundus is limited by the eye aperture to approximately 200 µm, which corresponds roughly to the thickness of the retina. Low-coherence interferometry offers superior performance due to its aperture independent depth resolution and has been successfully applied to measurements of eye length [35, 92], corneal thickness [41] and fundus thickness [91]. Optical coherence tomography (OCT), in particular, has been widely used to obtain 3-D in vivo fundus images [7, 32, 34, 38-40, 93-95] and ophthalmic OCT systems are now commercially available (see also section 1.4.3 on page 8). In this chapter, we investigate the ability to obtain three dimensional images of biological tissue using CCD based low-coherence interferometry. In particular, we aim to demonstrate the feasibility of human fundus investigations using Coherence Radar [96] by obtaining in vitro images of a post-mortem human retina. To allow interference amplitude recovery, a new algorithm which is robust in the presence of noise is developed. The possibility of obtaining high resolution in vivo images is discussed and optical designs suitable for adapting the system to ocular measurements are presented.

4.1.1 The Human Eye

First, let us take a brief look at the anatomy of the human eye. Figure 4.1 outlines some of the more important features.


[Labelled features: cornea (n = 1.38), aqueous humor (1.33), iris, lens (1.40), vitreous humor (1.34), sclera, choroid, retina, macula, fovea, optic disk, optic nerve.]

Figure 4.1: Anatomy of the human eye (refractive indices shown in parentheses)

The cornea is the main refractive element and its shape is supported by a liquid in the cavity between the cornea and lens, called the aqueous humor. Even though the lens has a relatively high optical density, the surrounding medium reduces its refractive power considerably. It can be deformed by the surrounding muscles to yield the variable optical power required for vision. Similarly, the iris contracts or expands to control the amount of light entering the eye. The largest cavity in the eye is filled with a gel-like substance called the vitreous humor. Adjacent to this, at the back of the eye, lies the fundus.

The Fundus

The fundus, the posterior section of the eye, is comprised of a number of tissue layers which are shown in figure 4.2. The retina is perhaps the most important structure of these, since it is primarily responsible for human vision. The retina is composed of light sensitive cells or photoreceptors, located underneath a layer of supporting blood vessels and nerve fibres. The retina is terminated by a pigment layer which absorbs light transmitted through the photoreceptors in order to prevent backscatter. A special area of the retina, called the macula, is situated at the centre of the fundus and receives images at optimal focus (figure 4.1). The fovea, a small central section of the macula, contains a very high concentration of photoreceptors, yielding a resolution which gives us, amongst other things, the ability to read. Damage to this area may potentially result in blindness and is of considerable interest to ophthalmologists. In the region of the optic disk, nerve fibres connecting the retina and the supporting blood vessels come together to form the optic nerve (figure 4.1). Since damage to the nerves in this area can have severe consequences on vision, the structure of the fundus in the region of the optic disk (or optic nerve head) is also of increased medical interest.


[Labelled layers, from inner to outer: inner limiting membrane, nerve fibre layer, retina, photoreceptors, pigment epithelium, Bruch's membrane, choroidal stroma, sclera.]

Figure 4.2: Schematic representation of the fundus layers

The retina is supported by the choroid, a spongy tissue containing blood vessels, and the sclera (the white of the eye) which lines the outside of the eye. Both of these layers are much less transparent than the retina.

4.1.2 Human Fundus Sample and Tissue Preparation

We obtained several post-mortem human eyes¹, allowing us to perform in vitro experimental investigations of the human ocular fundus using Coherence Radar. At this preliminary stage, there are considerable advantages to using post-mortem as compared to in vivo tissue. Involuntary eye movement, lens accommodation, fundus pulsation due to blood flow and acquisition time constraints (due to patient discomfort) are all eliminated entirely in this way. Further, the lens and vitreous humor can be removed to create direct visual access to the fundus. Unfortunately, in vitro tissue is changed by physical effects following excision. These include tissue alterations due to lack of blood, the effects of decay, dehydration and temperature change. It is thus, for example, difficult, if not impossible, to maintain proper optical transparency and refractive power of the cornea and lens in a post-mortem eye. Also, retinal detachment due to excision pressure is common and was indeed observed in our samples. One of the post mortem eyes was dissected such that a 1 by 1 cm fundus sample in the region of the optic nerve head could be removed. In order to prevent decay, the sample was stored in formaldehyde solution. For the purpose of our investigations using Coherence Radar, a means had to be found to store the sample in the formalin so that it remained visible. This was accomplished by constructing a stainless steel container bounded by a glass window, as shown in figures 4.3 and 4.4. A sample support is provided and can be adjusted to hold the tissue firmly against the glass plate.
¹ The author thanks Dr. Fred Fitzke of the Institute of Ophthalmology, University of London, for the supply of post-mortem tissue samples.


[Labelled parts: glass plate, O-ring seal, thread, formalin, retinal tissue, sample support, stainless steel body.]

Figure 4.3: Cross section of the fundus tissue container

Figure 4.4: Stainless steel sample container


Figure 4.5: Fundus tissue in the sample container (scale graduation = 0.5 mm)

The container is sealed to prevent leakage of liquid and the formation of air bubbles. The retina in our sample was detached in most areas except in the region immediate to the optic nerve head. Several folds of retinal tissue formed as a consequence and these are visible in the photograph of the fundus sample shown in figure 4.5.

4.1.3 Optical Properties of the Eye

In order to examine the structure of the fundus using optical methods, light must pass through the ocular medium and parts of the fundus (figure 4.1). Thus, we will examine the optical properties of the transparent section of the eye, such as the cornea, crystalline lens, aqueous humor and vitreous humor, as well as the properties of the fundus layers.

Ocular Medium

The following physical effects may influence low-coherence measurements of the fundus:

1. Refraction
2. Diffraction
3. Optical aberrations
4. Dispersion

Fundus examinations using conventional fundus cameras, laser scanning ophthalmoscopes, or low-coherence interferometry (such as OCT) are all affected by the static refractive power of the cornea and lens. In addition, measurement is affected by the shape of the lens, which can be changed by the action of the adjacent muscles to provide an image at optimum focus. This involuntary action, also called accommodation, presents a potential problem during in vivo examinations since it may introduce unpredictable changes in the optical path and position of the focal plane.


In order to obtain images of the fundus, suitable optics must be introduced to compensate for the static refractive power of the cornea and the lens (the focal length of the eye is 20 mm). However, the most severe aberrations are introduced by the eye itself and, together with diffraction, limit the resolution of fundus images. Generally, a maximum transverse resolution of 50 µm cannot be exceeded with conventional imaging optics. The vitreous and aqueous humor are transparent water based solutions with a refractive index close to that of water. Thus, dispersion in the eye may be approximated by dispersion in water. To our knowledge, no work has yet been reported which describes significant dispersion effects on low-coherence measurements in the eye. However, in order to estimate the impact of dispersion in the vitreous humor, we have investigated the influence of dispersion in water on the coherence length of the source (see section 4.3.2).

4.1.4 Light Scattering in Biological Tissue

The problematic nature of imaging biological tissue using visible or near-infrared light is mainly due to light scattering by the cell structures in the tissue. Because of this, most biological tissue appears opaque on visual inspection, and conventional optical imaging techniques are not suited to revealing structures within tissue. Scattering of light is caused by small particles, with the effect that photons follow a more or less random path in the medium. The result is a loss of image contrast, which is dependent on the number of scattering centres in the medium and the distance through which the light travels in the medium. Apart from reducing image contrast, scatter also increases the path a photon travels inside the medium. This is of special significance here because Coherence Radar effectively infers the position of a reflective interface from the optical path travelled. Figure 4.6 illustrates the path of a photon in a scattering medium. Since the total path travelled by a photon between points a and b is equivalent to the distance between points b and c, the photon travelling from point a to point b will produce a coherent signal when the coherence plane of the interferometer is adjusted to coincide with point c. According to the optical path measured by the interferometer the reflection has occurred at point c, whereas in fact the light was multiply scattered near the surface of the medium. Thus, a photon path like that shown in figure 4.6 will suffer a depth error roughly equal to the distance b-c. The transverse accuracy will be reduced by an amount equal to the distance a-b.

4.1.5 Illumination Wavelength

The choice of illumination wavelength is important for a number of reasons:

- The ocular medium is opaque to wavelengths above 1200 nm.
- Fundus reflectance is higher at longer wavelengths [97, 98].
- Light penetrates more deeply into the choroid at wavelengths above 650 nm [98].
- Absorption of light in the retina is less at longer wavelengths, resulting in less damage at higher power [98].
- Patient discomfort due to high illumination power can be eliminated by using invisible wavelengths.
- Affordable superluminescent diodes are available at 830 nm, due to the single mode fibre transmission windows.



Figure 4.6: False path interpretation due to photon scattering in a diffusive medium

Given these factors, a wavelength of 830 nm seems suitable for in vivo investigations of the fundus. In vitro imaging, on the other hand, is not bound by the transmission properties of the ocular medium, allowing a much longer wavelength to be employed in order to reduce scatter in the tissue and increase fundus transmission. In practice, however, in vitro studies at 830 nm promise to deliver a good preliminary indication of how suitable this illumination wavelength is, and we have used this wavelength in our experimental investigations.

4.2 Signal Processing

The phase stepping algorithm presented in the previous chapters was highly susceptible to mechanical vibration and image noise since it required accurate phase shifting of the interference signal. Phase stepping was used to compute the interference visibility (or amplitude), based on three intensity images captured at a number of precise reference mirror positions (section 2.2.1 on page 22). The only computationally efficient method for noise reduction was to average these images before phase stepping (alternatively, averaging after phase stepping requires repeated computation of the algorithm). However, when vibrations are present, this leads to a reduction in signal strength. In this section, we present an improved phase stepping algorithm which computes interference amplitude based on a large number of images captured at random phase, rendering the technique immune to mechanical vibration and reference mirror inaccuracies. In addition, this effectively averages measurements and thus reduces the effects of image noise. The new algorithm is very similar to that presented in section 2.2.1, but has the advantage of not requiring phase shifts at precisely calibrated steps. Instead, random movement of the reference mirror or even mechanical vibrations of sufficient amplitude can facilitate accurate measurements of interference amplitude. As in section 2.2.1 on page 22, we may express an interference signal as a function of object displacement, d, given by:


    I(d) = Ī + A(d) cos(4πd/λ + φ)    (4.1)

where Ī is the background intensity, A(d) is the amplitude of the interference signal, λ is the central wavelength and φ is a random phase shift. By a procedure similar to that described in section 2.2.1, it can be shown that the interference amplitude, A(d), can be approximated using n intensity measurements, Ii, observed at random phases φ, such that

    A(d) = √( (2/n) Σ_{i=1}^{n} (Ii - Ī)² )    (4.2)

where

    Ī = (1/n) Σ_{i=1}^{n} Ii.    (4.3)

As in the previous chapters, a PZT mounted reference mirror is used to introduce the random phase shifts. However, in our implementation, the voltage applied to the PZT is ramped over the period of image acquisition to yield an approximate total displacement of λ/2. The phase shift between images is then approximately 2π/n, such that this new algorithm approximates to the old one if n = 3 (see section 2.2.1). A similar algorithm has also been implemented by Chiang et al. [99] for imaging of structures in scattering media.
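A minimal implementation of the random-phase estimator is given below (a sketch, assuming the variance-based form of equations 4.2 and 4.3 as reconstructed above; the synthetic test data are illustrative only):

    import numpy as np

    def interference_amplitude(samples):
        """Amplitude estimate from n intensity samples taken while the
        reference phase drifts over roughly 2*pi (equations 4.2 and 4.3)."""
        I = np.asarray(samples, dtype=float)
        I_mean = I.mean()                                        # equation 4.3
        return float(np.sqrt(2.0 * np.mean((I - I_mean) ** 2)))  # equation 4.2

    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, 2.0 * np.pi, 20)   # n = 20 random phases
    samples = 10.0 + 1.0 * np.cos(phases)        # true amplitude A = 1
    print(interference_amplitude(samples))       # close to 1.0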

4.3 Experimental

4.3.1 Coherence Radar

The experimental system used for fundus imaging is identical to that described in chapter 2. However, in order to achieve greater illumination power a new superluminescent diode (SLD-371) was used. This offers 2 mW power at a wavelength of 830 nm and with a coherence length of lc = 25 µm.

4.3.2 Coherence Profile Broadening through Dispersion

As discussed in section 3.2.3 on page 63, chromatic dispersion in the object medium can cause coherence profile broadening and loss of fringe visibility. Since fundus examinations require light in the object beam to travel through the vitreous humor (about 2 cm), significant dispersion is introduced. In order to assess the loss of axial resolution due to profile broadening, the effect of dispersion in a 2 cm water path was measured. The experiment was performed by placing a water filled fundus container (figure 4.3) in the object arm of the Coherence Radar interferometer, as shown in figure 4.8. The interference signal corresponding to the following reflections was recorded:

1. The front glass-air interface of the tissue container window
2. The second interface of the glass window in a water filled sample container (i.e. the glass-water interface)
3. The polished stainless steel tissue support in the water filled sample container (i.e. the water-steel interface)


[Measured interference envelopes for the air-glass, glass-water (after 2 mm glass) and water-metal (after 2 mm glass + 20 mm water) reflections, each with a Gaussian fit; fitted FWHM values: 11.7 µm, 11.9 µm and 21.7 µm.]

Figure 4.7: Plot of interference amplitude for dispersive and non-dispersive paths



Figure 4.8: Experimental arrangement to avoid strong back-reflections at the air-glass boundary

A dispersion free interferogram was acquired by placing the empty tissue container in the object arm so that the glass window was perpendicular to the optic axis of the interferometer (maximising the returned light). The interference signal was then measured by displacing the container in 2 µm steps over a range sufficient to capture the entire interferogram. Similarly, the signals returned from the back face of the window and from the curved metal-water interface at the back of the sample container were measured. For the latter, the container was rotated by an angle in order to eliminate the air-glass reflections returned from the front and back of the container window (figure 4.8). The resultant three interferograms are shown in figure 4.7. Although dispersion effects due to the glass window are not apparent, a considerable increase in coherence length can be observed due to the path in water. The distance between the metal surface and the inside glass-water boundary is 20 mm, and the light experiences dispersion over a path twice this amount during a round trip. The least squares fits of a Gaussian to the data in figure 4.7 indicate an almost 100% increase in the width of the interferogram due to dispersion. Since light must travel a comparable distance in the eye (about 30 mm), a similar broadening, and thus a reduction in axial resolution, can be expected for in vivo fundus investigation.
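The FWHM figures quoted in figure 4.7 come from Gaussian fits; a sketch of such a fit follows (using SciPy's generic least-squares routine, not necessarily the software used for the original analysis):

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(d, a, d0, w):
        return a * np.exp(-(d - d0) ** 2 / (2.0 * w ** 2))

    def envelope_fwhm(displacement, amplitude):
        """FWHM of a least-squares Gaussian fit to a measured envelope."""
        p0 = (amplitude.max(), displacement[np.argmax(amplitude)], 5.0)
        (a, d0, w), _ = curve_fit(gaussian, displacement, amplitude, p0=p0)
        return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(w)

    # Synthetic check: a 21.7 um FWHM envelope sampled at 2 um steps
    d = np.arange(0.0, 120.0, 2.0)
    w = 21.7 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    print(envelope_fwhm(d, gaussian(d, 100.0, 60.0, w)))   # close to 21.7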


Figure 4.9: Orientation of longitudinal and transverse sections relative to the eye

4.3.3 Fundus Imaging of a Model Eye

As discussed in chapter 3, Coherence Radar is able to acquire transverse as well as longitudinal sections of multilayer structures. Figure 4.9 illustrates the orientation of these sections relative to the optical system. In the following experiments, we have chosen to perform measurements of longitudinal sections since these best demonstrate the depth sectioning capabilities of Coherence Radar. In order to assess the feasibility of in vivo fundus imaging using Coherence Radar, a model eye and a new optical arrangement were constructed to simulate imaging of a human fundus in vivo. Figure 4.11 shows a Coherence Radar system adapted for the investigation of a model eye, consisting of a lens and a model fundus (a stack of glass plates). A correction lens (fc = 50 mm) was used to compensate for refraction in the model eye lens (fe = 30 mm). We suggest that this arrangement (as shown in figure 4.10) be used for fundus imaging in vivo. After placing the model eye in the object arm of the interferometer, we were able to successfully measure the position of reflecting interfaces in the model fundus. This model was constructed from a stack of 20 microscope cover slips with air spaces between them and is identical to the structure measured in section 3.3.2 on page 65. During the measurement, the model eye (lens + microscope slips) was translated over a range of 2.7 mm in 1 µm steps.



Figure 4.10: Experimental arrangement for in vivo imaging using Coherence Radar

Figure 4.11: Experimental arrangement to image a model eye using corrective optics


Figure 4.12: Longitudinal section of the model fundus

The resultant longitudinal cross section presented in figure 4.12 clearly shows the positions of each of the glass-air boundaries² in the stack and confirms that it is, in principle, possible to use Coherence Radar with corrective optics to allow fundus imaging. However, we have found that back-reflections from the corrective lens and model eye lens can easily lead to detector saturation. We propose the use of anti-reflection coated optics in future implementations.

4.3.4 In Vitro Examination of Fundus Layers

Two longitudinal sections of post mortem fundus samples (section 4.1.2) were measured using Coherence Radar. To our knowledge these are the first fundus images obtained using a CCD based low-coherence interferometer [96]. The first cross-section was measured by displacing the sample over a range of 4 mm in 2 µm steps along the optic axis (z). Ten intensity samples were used for phase stepping (n = 10) at each object position (section 4.2). The orientation of the section plane relative to the tissue sample is indicated in figure 4.13 (longitudinal section 1) and the resulting cross-sectional image is shown in figure 4.14. Although the low signal-to-noise conditions make the image rather noisy, it is possible to identify two boundaries along the sample displacement axis, corresponding to the retina and choroid. At the top centre of the image a folding of the retina can be observed (compare to figure 4.13) and the thickness of the retina is measured to be approximately 350 µm (after correcting for the refractive index of water).
² As in figure 3.8, ghost images are visible due to multiple reflections between the glass plates.


[Photograph of the fundus sample with labels: longitudinal section 1, longitudinal section 2, optic nerve head.]

Figure 4.13: Post mortem fundus tissue showing the approximate position of the longitudinal sections obtained using Coherence Radar

A second longitudinal section was obtained by displacing the fundus sample over a range of 5 mm in 2 µm steps. The orientation of the section plane relative to the retina is indicated in figure 4.13 (longitudinal section 2). The resulting image, which was computed using 20 intensity samples at each object position (n = 20), is shown in figure 4.15. In comparison with figure 4.14, which was computed using n = 10, this image shows visibly better contrast and confirms the ability of the new phase stepping algorithm to reduce noise by using a large number of samples. It is possible to identify the optic nerve head as well as reflections from the retina surface and other fundus layers in figure 4.15. We attribute the presence of trailing shadows, which are visible in both images, to multiple scattering within the tissue. In addition, we want to point out that the strong fringe pattern visible in figure 4.14 is due to beating between the CCD pixel readout frequency and the analogue-to-digital conversion sampling rate.

4.4 Discussion

4.4.1 Data Acquisition and Processing Speed

In vivo measurements of the fundus should be performed as quickly as possible in order to reduce the effects of involuntary eye movements, fundus pulsation due to blood flow, and patient discomfort.


Figure 4.14: Longitudinal section (1) of post mortem fundus tissue (sample displacement in μm versus transverse position in μm)


Figure 4.15: Longitudinal section (2) of post mortem fundus tissue (sample displacement in μm versus transverse position in μm)


Figure 4.16: Operations performed by Coherence Radar (acquire an image of x·y pixels and increment the reference mirror position until n images have been taken; compute the interference amplitude from the n·x·y values; store the x·y amplitude values; translate the object; repeat for N cycles)


The maximum acquisition speed of Coherence Radar is primarily limited by the following operations:

- computation of the interference amplitude
- digital image acquisition
- object translation between measurements
- data storage

Figure 4.16 outlines the main operations performed by the Coherence Radar system during a measurement cycle. The total time required to complete this cycle is determined by the performance of the individual hardware elements (such as the computer, CCD camera and frame grabber), and by the number of pixels (x·y) per frame, the number of object displacements (N) and the number of intensity samples (n) acquired at each object position and used for phase stepping. A rough estimate of the cycle time is sketched below.
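As an order-of-magnitude sketch of how these quantities combine, the following Python fragment models one measurement cycle; the per-step overheads are illustrative assumptions, not measured values.

    # Rough model of one Coherence Radar measurement cycle (figure 4.16).
    FRAME_RATE  = 25.0    # Hz, CCIR video frame rate
    T_COMPUTE   = 0.5     # s, amplitude computation per object position (assumed)
    T_TRANSLATE = 0.1     # s, discrete object translation (assumed)
    T_STORE     = 0.05    # s, storing the x*y amplitude values (assumed)

    def total_time(N, n):
        """Total acquisition time for N object positions, n phase steps each."""
        per_position = n / FRAME_RATE + T_COMPUTE + T_TRANSLATE + T_STORE
        return N * per_position

    # e.g. a 4 mm scan in 2 um steps (N = 2000) with n = 10:
    print(f"{total_time(2000, 10) / 60:.0f} minutes")

With these assumed overheads a full longitudinal section takes tens of minutes, which is why the optimisations discussed next are essential for in vivo work.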

4.4.2 Speed Optimisation

Let us examine the maximum speed at which fundus images may be obtained in vivo. The limiting factor in our implementation of Coherence Radar is the computational overhead associated with the calculation of the interference amplitudes during phase stepping. In principle, this may be eliminated by storing all data during the acquisition cycle and performing the necessary computation afterwards. Continuous displacement of the object could also remove the time delay required for discrete object translation. In addition, imaging speed could be improved drastically by replacing the current imaging equipment with a high performance system. However, regardless of the experimental hardware, the minimum CCD exposure time remains a fundamental limit. This minimum exposure time is inversely proportional to the power incident on each pixel and to the sensitivity of the CCD sensor. In turn, the incident optical power is determined by the illumination power at the eye, the fundus reflectivity, and the imaging system (including the beam-splitter ratio).

In our experiments we were limited to a minimum exposure time of 1/60 s at 2 mW illumination power. Let us use this figure as an approximate indication of the amount of light reflected from the fundus and incident on the CCD sensor. Assuming that the required exposure time scales inversely with the illumination power and the camera sensitivity, we may estimate the performance of the system under various conditions.

First, let us consider the maximum safe illumination power recommended for fundus examinations. According to the experimental laser dose which results in a 50% probability of creating a lesion, termed ED50, the damage threshold for retinal irradiance (at a wavelength of 400-700 nm and an exposure duration of up to one second) is 1 W/cm². To remain on the conservative side, we will assume a maximum irradiance of only 100 mW/cm². In comparison, the maximum irradiance of our sample was 8 mW/cm². Thus we may increase the illumination power by a factor of 100/8 = 12.5 without causing damage to the retina. The maximum power available from the SLD source is currently limited to 2 mW. For future implementations we propose the use of a spatially incoherent extended source, which can deliver substantially more power. In addition, an extended source provides quasi-confocality [100] and the benefit of increased scatter rejection.


                                      Pulnix TM-520 +        DALSA CL-C3    DALSA CA-D1
                                      Bit Flow Data Raptor   (line scan)    (area scan)
Number of pixels                      768 x 572              1024           256 x 256
Frame rate / line rate                25 Hz                  1 kHz          110 Hz
Noise equivalent exposure (NEE)       371 pJ/cm²             125 pJ/cm²     20 pJ/cm²
Saturation equivalent exposure (SEE)  2.46 nJ/cm²            125 nJ/cm²     28 nJ/cm²
Dynamic range (SEE:NEE)               8:1                    1000:1         1400:1
Digital intensity resolution          8-bit                  12-bit         12-bit

Table 4.1: Comparison of the current imaging hardware (see also appendix A) with commercially available high performance components

A large proportion of the light reflected from the retina is lost through reflection at the beam-splitter plate surface. This effect cannot be eliminated entirely. It is, however, possible to increase the amount of light returned from the fundus at the cost of source power. Given a source of unlimited power, the beam-splitter transmission/reflection ratio can be altered so that the transmission from the sample (here the eye) to the detector is very large. By replacing a 50/50 beam-splitter with a 90/10 beam-splitter, for example, a 90/50 = 1.8-fold increase in the amount of light transmitted to the detector is obtained.

In addition, the exposure time may be reduced and the maximum frame/line rate increased by implementing new imaging equipment. Since the requirements for transverse and longitudinal imaging are distinct, we have considered the two cases independently. We propose the use of a line scan camera (1-D CCD) for the acquisition of longitudinal sections and an area scan camera (2-D CCD) for transverse sections (even though both can be achieved using a 2-D sensor). The noise equivalent exposure (NEE) of a CCD camera defines the minimum detectable intensity. By comparing the NEE of the current Pulnix camera to that of potential high performance systems, we can derive a ratio between the current exposure time and a theoretical minimum value. Table 4.1 summarises the performance parameters of the current imaging system (Pulnix camera and Bit Flow frame grabber; see also appendix A) and two potential high performance systems (DALSA CL-C3 and CA-D1 digital cameras).

The current minimum exposure time may in principle be reduced by a factor of 12.5 if the illumination power is increased to the maximum safe threshold. A further factor of 1.8 can be gained by implementing a 90/10 beam-splitter ratio. In addition, the exposure time may be decreased through the use of high sensitivity CCD cameras. Table 4.2 summarises the potential performance of Coherence Radar made possible by the increased intensity, the new beam-splitter ratio and the use of digital DALSA line and area scan cameras (table 4.1). The acquisition time of both transverse and longitudinal images is limited by the frame or line rate of the CCD cameras. The total amount of data acquired per second is larger when obtaining transverse sections, although these values may change depending on the imaging system. Most importantly, the table shows that Coherence Radar can in principle obtain both transverse and longitudinal sections of the human fundus in vivo with sufficient resolution and dynamic range in an interval no longer than a second, making it suitable for ophthalmic applications and comparable in performance to state of the art OCT systems [94, 95].


                                                   Longitudinal          Transverse
                                                   (DALSA CL-C3)         (DALSA CA-D1)
Increase in sensitivity (vs Pulnix TM-520)         2.97                  18.6
Increase in reflected power (vs current system)    22.5                  22.5
Minimum exposure time                              1/4000 s              1/25100 s
Maximum frame/line rate                            1 kHz                 110 Hz
Object positions (N) per second (at n=10)          100 (line rate limit) 11 (frame rate limit)
Number of sections per second                      1                     11
Data points per second                             100 x 1024            256 x 256 x 11

Table 4.2: Potential acquisition speed for longitudinal and transverse sections when using 10 intensity samples (n=10)
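The derived entries of table 4.2 follow directly from table 4.1 and the factors established above; the short Python check below reproduces them (all input numbers are taken from the text and tables).

    # Reproduce the derived values in table 4.2.
    NEE_PULNIX, NEE_CLC3, NEE_CAD1 = 371.0, 125.0, 20.0   # pJ/cm^2 (table 4.1)
    T_EXP_CURRENT = 1.0 / 60                              # s, current minimum exposure

    power_gain = (100 / 8) * (90 / 50)    # 12.5 (safe illumination) x 1.8 (90/10 BS)

    for name, nee, rate in (("CL-C3 (line scan)", NEE_CLC3, 1000.0),
                            ("CA-D1 (area scan)", NEE_CAD1, 110.0)):
        sens_gain = NEE_PULNIX / nee                      # sensitivity increase
        t_exp = T_EXP_CURRENT / (sens_gain * power_gain)
        print(f"{name}: sensitivity x{sens_gain:.2f}, "
              f"min exposure 1/{1 / t_exp:.0f} s, "
              f"{rate / 10:.0f} object positions per second at n=10")

This yields minimum exposures of about 1/4000 s and 1/25000 s and 100 and 11 object positions per second, in agreement with table 4.2 to rounding.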

4.4.3 OCT versus CCD Based Interferometry

The strength of the CCD based approach lies in its parallel nature. While in an OCT system the surface is illuminated and imaged point by point in a sequential fashion, Coherence Radar illuminates an entire surface and measures the reflected light at all pixels simultaneously. For a beam scanning arrangement, the time required to measure N points is N times the minimum exposure time; for a CCD based system the exposure time is constant regardless of the number of pixels. Thus, if the required image has Nx by Ny points or pixels, the resultant speed advantage of CCD based systems is theoretically equal to Nx × Ny, provided the total illumination power is also increased by the same factor. In practice, however, the acquisition speed is limited by the frame rate of the CCD video camera, as was shown in the last section. Also, CCD detectors suffer from poor dynamic range and low sensitivity when compared to single photo-detectors, and thus require longer exposure times at the same incident intensity. Therefore, while CCD based detection may not offer a speed advantage as large as Nx × Ny in practice, one can expect a performance at least comparable to that of OCT.

In addition, OCT systems generally employ a point detector and are thus confocal. This advantage is not available with CCD based systems, but schemes have recently been reported which allow confocal imaging even in these circumstances by using an extended source [100]. The comparison of Coherence Radar and OCT is further complicated by the safety considerations required for in vivo fundus investigations. As discussed earlier (section 4.1.5), damage threshold values are mainly determined by the power incident on the retina per unit area. Thus, in principle, the power can be increased by the factor Nx × Ny, since the illuminated surface area is increased proportionally. In practice, slightly less power per unit area is permitted in this case, since thermal energy is dissipated more slowly over a larger area. The remaining acquisition time saving due to the implementation of a CCD sensor (parallel imaging) is still substantial, as was shown in the previous section.

In conclusion, the advantage of CCD based low-coherence interferometry over OCT lies in its ability to simultaneously image a large number of pixels in the transverse plane (en face imaging). Although transverse sections best exploit this advantage, the maximum frame rate of commercially available imaging devices limits the acquisition speed such that longitudinal sections may be acquired in a time comparable to that of transverse images. The sketch below illustrates the sequential/parallel trade-off.
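A minimal numerical sketch of this trade-off, with an assumed per-point exposure time, is given below; it is illustrative only and ignores the illumination power scaling discussed above.

    # Sequential (beam scanning) versus parallel (CCD) acquisition time.
    def sequential_time(nx, ny, t_exp):
        """Beam scanning: one point after another, nx*ny exposures."""
        return nx * ny * t_exp

    def parallel_time(t_exp, frame_rate):
        """CCD: one exposure covers all pixels, but no shorter than a frame."""
        return max(t_exp, 1.0 / frame_rate)

    t_exp = 1e-4    # s, assumed single-point minimum exposure
    print(sequential_time(256, 256, t_exp))   # ~6.6 s point by point
    print(parallel_time(t_exp, 110.0))        # ~9 ms for one parallel frame

In practice the CCD advantage is bounded by the frame rate and by the poorer sensitivity of CCD pixels, as noted above.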


4.5 Conclusion

In this chapter we have presented what are, to the best of our knowledge, the first images of in vitro fundus tissue obtained using a CCD based low-coherence method. A robust algorithm for improved signal detection was introduced and successfully applied to extract interference signals from images of low dynamic range and high noise. The method was shown to produce a significant increase in image contrast by means of averaging. The influence of scatter and dispersion in the eye was discussed, and the reduction in depth resolution due to dispersion was experimentally determined.

The ability to adapt Coherence Radar for in vivo measurements of the human eye was discussed and confirmed experimentally by using a model eye. Although our implementation of Coherence Radar was not suitable for in vivo measurement due to its slow acquisition speed and low image contrast, it was shown that the illumination power, and hence the acquisition speed, could be significantly increased without risking damage to the eye. We demonstrated that Coherence Radar is potentially capable of measuring both transverse and longitudinal sections in vivo in less than one second, a performance better than state-of-the-art OCT systems. In order to achieve this performance, the following improvements and modifications of the existing system are required:

- a high sensitivity, low noise CCD camera and frame grabber
- a high power, low-coherence source (possibly a discharge lamp)
- a quasi-confocal arrangement using an extended source

Chapter 5

Balanced Detection
5.1 Introduction

In chapter 3 we have shown that the detection of reflective interfaces in a multilayer object using Coherence Radar is limited by the dynamic range of the CCD camera and of the analogue-to-digital (A/D) converter or frame grabber (see appendix A). In this chapter, we describe a new differential detection method for Coherence Radar which significantly reduces the required dynamic range of the A/D converter. A new experimental system is implemented using two line-scan CCD cameras and a Mach-Zehnder interferometer. We demonstrate the capabilities of this system by measuring the profile of a test surface and the location of several air-glass boundaries in a stack of glass plates [101].

5.2 Balanced Detection

It is well known that a Mach-Zehnder interferometer can provide two differential interference signals which are out of phase by π radians [102]. Let us examine the interference produced by a Mach-Zehnder interferometer such as that depicted in figure 5.1. When using a low-coherence source, the intensities observed by detectors 1 and 2 are [88, 103]:

I_1(d) = 1 + V \gamma(d) \sin\!\left(\frac{2\pi d}{\lambda}\right)    (5.1)

and

I_2(d) = 1 - V \gamma(d) \sin\!\left(\frac{2\pi d}{\lambda}\right),    (5.2)

respectively, where d is the position of a reflecting surface in the object arm, V is the visibility at maximum coherence, γ(d) is the coherence function of the source and λ is the central wavelength. By subtracting the two photodetector outputs, we obtain a differential signal proportional to

I_1(d) - I_2(d) = 2 V \gamma(d) \sin\!\left(\frac{2\pi d}{\lambda}\right)    (5.3)

and thus eliminate any common bias. In practice, to eliminate the effect of detector response or gain variations between the two detectors, the signals are amplified accordingly before subtraction to equalise the background intensity.


Figure 5.1: Mach-Zehnder interferometer (light source, collimating lens, two beam-splitters, mirrors 1 and 2, object, detectors 1 and 2)


This method is called balanced detection and has been widely used in fibre optic sensing [104, 105] and in low-coherence imaging using beam scanning arrangements [35]. In this chapter, we implement Coherence Radar using a Mach-Zehnder interferometer and detect the differential signal with two CCD line-scan sensors. In order to maintain the imaging properties of Coherence Radar, each point in the object plane is imaged identically onto corresponding pixels in the image planes of the two CCD sensors. Both CCD cameras should ideally be geometrically identical and have the same uniform response. However, even in the presence of asymmetries and non-uniform response we may reasonably expect the method to yield superior performance compared to conventional detection. A simulation of the principle is sketched below.
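The following minimal Python simulation of equations 5.1-5.3 illustrates the point (the Gaussian form of the coherence envelope and the visibility value are assumptions made only for illustration): subtracting the two detector outputs removes the common unit background and doubles the fringe term.

    import numpy as np

    lam, lc = 0.83, 20.0                        # wavelength and coherence length, um
    d = np.linspace(-40, 40, 2001)              # position of the reflecting surface, um
    gamma = np.exp(-(d / lc) ** 2)              # assumed coherence function
    fringe = 0.5 * gamma * np.sin(2 * np.pi * d / lam)   # V = 0.5 (assumed)

    I1 = 1 + fringe                             # detector 1, equation 5.1
    I2 = 1 - fringe                             # detector 2, equation 5.2
    diff = I1 - I2                              # equation 5.3

    print(I1.mean(), I2.mean())                 # both close to the bias of 1
    print(diff.mean(), diff.max() / fringe.max())   # bias ~0, amplitude doubled (2)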

5.3 Dynamic Range

The dynamic range, R, of a detector is defined as the ratio of the maximum light intensity at saturation (I_s) to the light intensity that produces an output (I_n) equal to the residual noise in the system (the noise floor), so that

R = \frac{I_s}{I_n}.    (5.4)

In Coherence Radar the required dynamic range, R, of the CCD sensor and the subsequent A/D conversion is determined by the object of interest. As in section 3.2.2 on page 60, let us express the minimum dynamic range necessary to detect interference in terms of the required ratio, S, between an interference signal (I_max - I_min) and the noise floor (I_n), such that

R = \frac{S I_{max}}{I_{max} - I_{min}},    (5.5)

where (I_max - I_min)/S = I_n and I_max = I_s. The minimum dynamic range required to detect interference between reflections from a multilayer object, such as a stack of glass plates, and a reference mirror was derived in section 3.2.2 on page 60 and is given by:

R(j) = \frac{S}{2} \left( \frac{I_r + I_t}{2\sqrt{I_r I_c(j)}} + 1 \right),    (5.6)

where j is the interface number in a stack of n reflecting interfaces, S is the required ratio of the signal to the noise floor, I_r is the reference beam intensity, I_t is the object beam intensity and I_c(j) is the intensity reflected from interface j. Assuming that the reference intensity (I_r) is adjusted to equal the intensity reflected from the object (I_t), equation 5.6 simplifies to:

R = \frac{S}{2} \left( \sqrt{\frac{I_t}{I_c(j)}} + 1 \right).    (5.7)

We now show that the required dynamic range of the A/D conversion can be significantly reduced by the use of differential detection. As shown by equation 5.3, I_max - I_min = I_max applies in general for a differential signal, such that the required dynamic range is always


R = S.    (5.8)

Thus, using differential detection, the dynamic range requirement is relaxed by a factor of

\frac{1}{2} \left( \sqrt{\frac{I_t}{I_c}} + 1 \right).    (5.9)

However, from equation 5.9 it appears that no improvement can be expected when measuring opaque surfaces (i.e. when I_t = I_c). We now show that differential detection can be advantageous even in this case. When measuring objects of non-uniform reflectivity, large intensity fluctuations may be present in the image, i.e. the incident intensity varies with pixel position x. Let us assume interference between light reflected from a non-uniform opaque object (I_t(x)) and a reference beam of uniform intensity (I_r(x) = I_r). The interference at pixel position x is then given by [88, 103]:

I(x, d) = I_t(x) + I_r + 2\sqrt{I_t(x) I_r}\, \gamma(d) \sin\!\left(\frac{4\pi d}{\lambda}\right),    (5.10)

where d is the position of the object, γ(d) is the coherence function of the low-coherence source, and λ is the central wavelength. Let us define the pixel positions x_max and x_min such that I_t(x_max) = I_tmax and I_t(x_min) = I_tmin, where I_tmax and I_tmin are the maximum and minimum object beam intensities respectively. Using equation 5.4, with [I_max(x_min) - I_min(x_min)]/S = I_n and I_s = I_max(x_max), the dynamic range can now be expressed as:

R = \frac{S I_{max}(x_{max})}{I_{max}(x_{min}) - I_{min}(x_{min})},    (5.11)

where I_max(x) and I_min(x) are the maximum and minimum intensities incident on pixel x, respectively. Let us assume that I_r = I_tmax (since this minimises the dynamic range). Substituting equation 5.10 into 5.11 and assuming γ = 1, the required dynamic range of an unbalanced system is given by:

R = S \sqrt{\frac{I_{t\,max}}{I_{t\,min}}}.    (5.12)
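The intermediate algebra, omitted above, is short; the following lines (a sketch under the stated assumptions γ = 1 and I_r = I_tmax) show how equation 5.12 follows from equations 5.10 and 5.11:

    I_{max}(x) = I_t(x) + I_r + 2\sqrt{I_t(x) I_r}, \qquad
    I_{min}(x) = I_t(x) + I_r - 2\sqrt{I_t(x) I_r},

    \text{so that} \quad I_{max}(x_{max}) = 4 I_{t\,max}, \qquad
    I_{max}(x_{min}) - I_{min}(x_{min}) = 4\sqrt{I_{t\,min} I_{t\,max}},

    \text{and equation 5.11 yields} \quad
    R = \frac{S \cdot 4 I_{t\,max}}{4\sqrt{I_{t\,min} I_{t\,max}}}
      = S \sqrt{\frac{I_{t\,max}}{I_{t\,min}}}.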

Assuming balanced detection, we obtain a differential signal (analogous to equation 5.3) proportional to

I_1(x, d) - I_2(x, d) = 4\sqrt{I_t(x) I_r}\, \gamma(d) \sin\!\left(\frac{4\pi d}{\lambda}\right).    (5.13)

Using equation 5.11 and again assuming that γ = 1 and I_r = I_tmax, we can show that the required dynamic range in the case of differential detection is:

R = \frac{S}{2} \sqrt{\frac{I_{t\,max}}{I_{t\,min}}}.    (5.14)

This is an improvement by a factor of two over the unbalanced case (equation 5.12). The numerical comparison below illustrates both regimes.
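The practical consequence of equations 5.7, 5.8, 5.12 and 5.14 is easy to tabulate; the Python sketch below evaluates them for an assumed signal-to-noise-floor requirement S and example intensity ratios.

    import math

    S = 10.0    # required ratio of signal to noise floor (assumed)

    def R_multilayer_unbalanced(It, Ic):
        """Equation 5.7: conventional detection, I_r matched to I_t."""
        return S / 2 * (math.sqrt(It / Ic) + 1)

    def R_surface_unbalanced(It_max, It_min):
        """Equation 5.12: opaque surface of non-uniform reflectivity."""
        return S * math.sqrt(It_max / It_min)

    def R_surface_balanced(It_max, It_min):
        """Equation 5.14: the same object with differential detection."""
        return S / 2 * math.sqrt(It_max / It_min)

    print(R_multilayer_unbalanced(100, 1))   # 55.0; balanced case is simply S = 10
    print(R_surface_unbalanced(100, 1))      # 100.0
    print(R_surface_balanced(100, 1))        # 50.0, the factor-two saving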


5.4 Experimental System

Figure 5.2 illustrates our implementation of Coherence Radar with balanced detection. The system is based on a Mach-Zehnder interferometer but retains the telecentric telescope and collimated illumination described in previous chapters. The collimated illumination beam is reflected by the beam-splitter onto the object of interest (which is mounted on a translation stage) and is transmitted to reference mirror 1. Light returned from the object is imaged by a telecentric telescope in the object arm (lenses 1 and 2) onto two CCD line-scan cameras (Thomson TH 7811A) via an additional beam-splitter. Light travelling in the reference arm (via mirrors 1 and 2) is subject to identical magnification due to a telecentric telescope in the reference arm (lenses 3 and 4) and is also incident on both cameras via the second beam-splitter. Mirrors 1 and 2, as well as the second telecentric telescope (lenses 1 and 2), are mounted on a translation stage to allow the optical path lengths to be equalised initially. The electrical signals generated by the two CCD line-scan cameras are balanced and differentially amplified before being digitised by an A/D converter (with 8-bit precision). Due to the high line rate of the line-scan cameras, the sample object can be displaced at constant speed, and the acquisition time can be reduced significantly.

5.5 Data Processing

The amplitude of the interference signal is recovered using a method which is in principle identical to that described in section 4.2. However, phase shifts are induced exclusively by the continuous displacement of the object along the optic axis (there is no reference mirror shift). The acquisition software was modified to allow post-processing of data collected in real time. As the sample object is displaced over a depth range at a constant speed, v, data from the CCDs is collected and stored continuously (see figure 5.3). The intensity is integrated over the exposure time of the CCD and measured at regular intervals corresponding to the line rate, f_l, of the CCD camera. Given that the object displacement during this exposure time is small, the intensity values I(x, d_i) at pixel position x correspond to regular discrete object positions d_i. For a set of N subsequent object positions d_j, ..., d_{j+N} (adjacent object positions d_i), an average intensity, \bar{I}(x, d_j), is computed:

\bar{I}(x, d_j) = \frac{1}{N} \sum_{i=j}^{j+N} I(x, d_i).    (5.15)

The interference amplitude is then approximated as:

A(x, d_j) = \frac{2}{N} \sum_{i=j}^{j+N} \left| I(x, d_i) - \bar{I}(x, d_j) \right|.    (5.16)

A(x, d_j) characterises the interference amplitude and is independent of the incoherent background \bar{I}(x, d_j). The image contrast can be improved and noise reduced by increasing N. However, the speed of the object translation stage, v, and the line rate of the CCD camera, f_l, should be chosen so that the interference amplitude does not change appreciably over the interval d_j to d_{j+N}, i.e. N \le l_c f_l / v, where l_c is the coherence length. A direct implementation of this windowed estimator is sketched below.
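The windowed estimator of equations 5.15 and 5.16 translates directly into code. The sketch below is a straightforward Python implementation; the exact window indexing convention (N samples starting at d_j) is an assumption of the sketch.

    import numpy as np

    def interference_amplitude(I, N):
        """Amplitude recovery following equations 5.15/5.16.

        I: 2-D array of shape (lines, pixels), one CCD line per object
           position d_i. N: window length, chosen so that N <= l_c * f_l / v.
        Returns A[j, x] = (2/N) * sum over the window of |I - window mean|.
        """
        lines, pixels = I.shape
        A = np.empty((lines - N + 1, pixels))
        for j in range(lines - N + 1):
            window = I[j:j + N]                  # samples d_j ... d_(j+N-1)
            A[j] = 2.0 * np.mean(np.abs(window - window.mean(axis=0)), axis=0)
        return A

    # synthetic check: fringes of 8-line period under a Gaussian coherence envelope
    d = np.arange(400)
    I = 100 + 20 * np.exp(-((d - 200) / 60.0) ** 2) * np.sin(2 * np.pi * d / 8)
    print(interference_amplitude(I[:, None], N=20).argmax())   # window nearest line 200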


Figure 5.2: Experimental arrangement implementing the balanced Coherence Radar technique (SLD fibre source at 830 nm, collimating lens, beam-splitters, telecentric telescopes in the object and reference arms, two linear CCD cameras, differential amplifier, 8-bit 2 MHz A/D converter, frame grabber and computer)


Figure 5.3: Data processing (move the object to its initial position; start moving it at constant speed; digitise and store the signals from the CCD cameras until the depth range is covered; stop the translation stage; compute the interference amplitude)


Figure 5.4: Intensity variation along the CCD line-scan sensor (intensity in 8-bit digital numbers versus pixel number)

5.6 Experimental Results

Using the experimental arrangement in figure 5.2 we investigated a number of reflective sample objects to assess the performance of balanced Coherence Radar. Initially, we assessed the efficiency with which local reflectivity variations in the image can be compensated by balanced detection. A test object with a periodic reflectivity variation (figure 5.9) was placed in the object arm of the interferometer while the reference arm was blocked. The intensity variation due to the object reflectivity was measured using just one CCD camera and can be seen in figure 5.4. Then, using both cameras and balanced detection, we measured the variations again. As the results in figure 5.5 demonstrate, some residual intensity variations remain. Although perfect bias compensation could not be achieved, the amplitude of the variations is reduced by a factor of approximately two compared to the unbalanced case.

A second experiment was performed to demonstrate the ability of the system to measure interference. The intensity variations produced by displacing a flat mirror over a range of 45 μm were measured and are presented as a false colour image in figure 5.7. This shows the sinusoidal variations due to the object displacement as a function of the pixel position. Variations of fringe visibility (or amplitude of the variations) along the object displacement axis are due to the low-coherence property of the source, while the variations along the pixel axis are due to non-uniform illumination.


Figure 5.5: Remaining intensity variation after subtraction of the signals from CCD line-scan cameras 1 and 2 (intensity in 8-bit digital numbers versus pixel number)

Figure 5.6: Anatomy of the step structure (labelled dimensions: 200 μm and 1 mm)


Figure 5.7: Interference produced by a flat mirror (transverse position in pixels versus object position in microns)


Figure 5.8: Interference amplitude (transverse position in pixels versus object position in microns)

In this experiment, the mirror was aligned approximately perpendicular to the optic axis of the interferometer and displaced at a speed of 125 μm/s. Since the line rate of the CCD cameras was 1000 Hz, the intensity was sampled every 125 nm, yielding the very good fringe resolution seen in figure 5.7. The effectiveness of the interference amplitude recovery algorithm (section 5.5) can be seen in figure 5.8, which shows the data after the interference amplitude was computed using N=20. We observe a good correspondence between the large amplitude variation in figure 5.7 and the computed amplitude in figure 5.8.

To demonstrate the surface profiling capabilities of the system we constructed a periodic step structure consisting of alternating mirror and metal surfaces. Figure 5.6 illustrates how this structure was made using a mirror and a metal grating. The structure was placed in the object arm of the system and the interference was measured while translating the object over a range of 330 μm. The acquisition was performed in only 2.2 seconds. A surface profile of the step structure was then obtained by finding the object positions at which the maximum interference amplitude occurs. Figure 5.9 shows the interference amplitude computed using N=10, as well as the position of the surface obtained using the peak search (indicated by a white line). The periodic step structure is clearly visible, and measurements of the step height (approximately 200 μm) and width (approximately 500 μm) correspond well with the dimensions of the structure (figure 5.6). The sampling arithmetic for these settings is summarised below.
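For reference, the sampling arithmetic is trivial to verify; the coherence length below is an assumed value, used only to illustrate the window-length constraint of section 5.5.

    # Sampling interval and maximum window length for continuous displacement.
    v   = 125.0     # um/s, object speed
    f_l = 1000.0    # Hz, CCD line rate
    l_c = 20.0      # um, coherence length (assumed)

    print(f"sampling interval: {v / f_l * 1e3:.0f} nm per line")   # 125 nm
    print(f"window constraint: N <= {l_c * f_l / v:.0f}")          # N <= 160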


Figure 5.9: Interference amplitude and surface profile (white line) of the periodic step structure (object position in microns versus transverse position in microns)


Finally, we demonstrate the ability of the balanced technique to measure multilayer objects. This was investigated by measuring a stack of glass plates (as in section 3.3.2 on page 65). The object, consisting of 20 microscope cover slips of 120 μm thickness, was placed in the object arm and translated over a range of 600 μm at a speed of 125 μm/s. The intensity was sampled at a CCD line rate of 1500 Hz, yielding 7200 lines in just 4.8 seconds. Figure 5.10 shows the amplitude of the interference signal as measured for one pixel. We can clearly see the interference peaks associated with the reflections from the first four air-glass interfaces in the stack of glass plates.

Figure 5.10: Interference amplitude peaks produced by air-glass reflections in a stack of 20 glass plates (interference contrast in 8-bit digital numbers versus object position in microns)

5.7 Conclusion

In this chapter, we have demonstrated a new balanced technique which allows the detection of weak interference signals without using A/D converters of high dynamic range. The advantage gained was derived theoretically:

1. For the investigation of a multilayer object, when a large fraction of the returned light is incoherent.
2. When the object of interest produces an image with large variations of intensity from pixel to pixel.


A new data acquisition and processing scheme was developed which allows continuous object movement and post-processing, thus substantially reducing the required acquisition time. In addition, the following capabilities of the balanced system were verified experimentally:

1. Reduction of static intensity variations across the detector when observing a surface (in the absence of interference).
2. Interference detection using differential signals from two CCD line-scan cameras.
3. Interference amplitude recovery using the new processing scheme.
4. Profiling of discontinuous surfaces.
5. Imaging of multilayer objects.

Chapter 6

Conclusion
6.1 Summary

In this thesis, we have described optical Coherence Radar, a low-coherence interferometric method for three dimensional imaging. We have used this method to study the surface topography of opaque objects and the internal structure of translucent samples, and have explored a number of potential applications. In the past, methods similar in nature to Coherence Radar have been used almost exclusively to measure the surface topography of rough opaque objects. In this thesis we have demonstrated that, with suitable modifications, it is possible to use Coherence Radar to obtain tomographic images or sections of translucent objects composed of several partially reflecting layers. We have successfully obtained longitudinal images of human fundus tissue, demonstrating the potential of the technique as a clinical tool for fundus examinations. In addition, we have demonstrated the use of balanced detection in conjunction with two CCD line-scan cameras, enabling the construction of high performance systems from low-cost, standard components.

In chapter 2 the measurement of surface topography, and in particular the study and analysis of hypervelocity impact craters using Coherence Radar, was investigated. We have shown that the system can deliver topographic measurements with a depth accuracy of about 2 μm. Coherence Radar is ideally suited to measurements of rough surfaces containing large discontinuities and steep walls where sub-micron accuracy is not required. Large objects can be accommodated, and surface measurements over areas of at least 10 by 10 cm may be obtained. Shadowing in the presence of steep walls is avoided through the use of collimated illumination, making Coherence Radar suitable for the inspection of mechanical parts containing drill holes and milled slots. Potential applications could also include the monitoring of manufacturing tolerances, inspection of gears, seals and injection moulds, assessment of deformation, and quality assurance of many types of manufactured parts. The study of impact craters has demonstrated the applicability of Coherence Radar in space science. In addition, the Coherence Radar arrangement for such applications is opto-mechanically simple, robust, and may be constructed from standard low-cost components.

In chapter 3 we have made, to our knowledge, the first successful attempts to measure the position of reflecting layers in partially transmitting objects using Coherence Radar. By displaying the strength of the interference signal measured by Coherence Radar it was possible to locate the boundaries between materials of different



refractive index. In an initial experiment, we obtained a longitudinal tomographic image of a number of thin glass plates arranged in a stack. A theoretical model of such an object predicts a limit to the total number of reflective interfaces which can be measured, and we assume that these results apply generally, even to objects of less regular structure. A first application of multilayer imaging is the evaluation of impact damage sustained by a solar cell retrieved from space. A series of transverse sections obtained using Coherence Radar shows a crater which has penetrated several solar cell layers. Other potential applications may include surface measurements of opaque structures embedded in or covered by a transparent medium, measurement of layer thickness, deformation studies of transparent objects such as plastics, quality assurance, and the monitoring of impurities and inclusions in manufactured parts.

In chapter 4 the measurement of the human fundus layers was investigated. The measurement of retinal thickness and shape is of particular interest to ophthalmology. However, high resolution measurements of the human fundus have, until now, been obtained exclusively by beam scanning low-coherence methods such as OCT. We propose that Coherence Radar can potentially offer an attractive alternative to OCT and have demonstrated this by obtaining two longitudinal sections of post mortem retinal tissue. The use of a CCD sensor offers speed, cost and stability advantages not enjoyed by OCT and other low-coherence systems. In addition, the experimental arrangement is simple and robust. We suggest that the main potential of Coherence Radar may well lie in the area of clinical ophthalmic imaging, where speed is essential. Coherence Radar can potentially acquire a transverse (or en face) image section of the retina in less than 0.1 seconds using eye-safe illumination at 830 nm. This is substantially faster than demonstrated by current OCT systems [94]. Thus, Coherence Radar may provide a convenient means of investigating the human fundus without the need for mechanical scanning. Although our results were of low contrast and required long acquisition times, we estimate that Coherence Radar can offer a significant time advantage over OCT if an improved CCD camera and a high power SLD are implemented.

Chapter 5 describes a modified Coherence Radar system implementing balanced detection, using two CCD line-scan cameras and a Mach-Zehnder type interferometer. This technique significantly reduces the required dynamic range of the analogue-to-digital converter in the presence of a large number of highly reflective layers. Although the alignment of the two identical CCD detectors proved difficult, we were able to use the system to measure the step height of a periodic pattern and to implement a new measurement and data processing technique which significantly increased the acquisition speed of longitudinal sections.

6.2 Conclusion

Coherence Radar is a tool for surface measurements of opaque objects as well as for three dimensional imaging of partially transparent objects and, as such, covers applications similar to those of low-coherence methods employing beam scanning arrangements. Compared to competing techniques it offers significant advantages in opto-mechanical robustness, speed and simplicity. A number of factors currently limit the performance of the technique, in particular the relatively poor scatter rejection and the limited acquisition speed obtainable using standard hardware. However, these factors could be addressed in future developments of the method.


6.3 Future Work

We suggest a number of solutions to these problems. Although the measurement of surface topography was on the whole satisfactory, the acquisition speed and the effectiveness of the thresholding method could be improved substantially. The acquisition speed was primarily limited by the computational overhead associated with phase stepping and could thus be increased by the use of faster hardware, such as dedicated signal processing units. In addition, we suggest that a more efficient and accurate algorithm be used for surface finding, and that the method of threshold evaluation be improved.

The main problems in multilayer imaging, especially in conjunction with scattering materials such as the fundus, are the low contrast of the images and the slow acquisition speed. As we have already shown, the required dynamic range of the analogue-to-digital conversion may be reduced by the use of balanced detection. In addition, we expect that the availability of high performance imaging systems and more powerful computers in the near future will eliminate many of the time constraints related to image acquisition and data processing speed. Currently, the system performance is still limited by the lack of available high power spatially filtered light sources and by the low scatter rejection of the optical arrangement. Both of these problems may potentially be reduced by the use of an extended source and a modified optical arrangement. As demonstrated by Sun et al. [100], the use of an extended source can provide quasi-confocality due to the low spatial coherence of the source. Extended sources are available at much lower cost than SLDs, do not require expensive power supplies and cooling units, and supply very much more power. Furthermore, the quasi-confocality may reduce the appearance of speckle and significantly reduce the effect of scatter in biological tissue.

Appendix A

Digital Imaging System


This appendix describes the components of the digital imaging system employed in our experiments and discusses some of the parameters affecting its performance. The imaging system used in this thesis consists of an analogue CCD camera and a frame grabber. The frame grabber converts the analogue video signal into a digital image so that it can be stored and processed. The basic functions performed by a video camera and frame grabber may be summarised as follows:

- analogue signal processing in the camera
- transmission of data from camera to frame grabber
- analogue to digital conversion
- data storage

A.0.1 CCD Sensor

A CCD sensor is a silicon based semiconductor device which is divided into many small capacitors. Photons incident on the silicon layer produce photoelectrons which are collected by the capacitors. Each picture element (pixel) is composed of several capacitors to facilitate the movement of charge across the device. Once the device has been exposed to light, the photoelectrons which have accumulated in the capacitors can be transferred along a row of adjacent pixels into a storage area of similar structure. This process, called readout, clears the charges and allows a new exposure. In CCDs, noise is introduced mainly as a result of the readout process, but also by thermally generated electrons. The noise floor is an important parameter in determining the sensitivity and dynamic range of a CCD array. Noise may be reduced by reading out more slowly (at the cost of frame rate) and by lowering the temperature through cooling of the CCD sensor. The size of the capacitor which collects the photoelectrons, the full well capacity, limits the exposure before saturation. A large full well capacity and a low noise floor are desirable, since the dynamic range is determined by their ratio.

A.1 CCD Camera

CCD detectors offer high sensitivity and a linear intensity response, but suffer from a poor dynamic range when compared to single photo-detectors. A CCD video camera


combines such a solid state sensor with suitable electronics for continuous readout and signal conditioning. The supplied signal is usually analogue and conforms to a standard video format such as CCIR or RS-170, which deliver interlaced images at a rate of 25-30 full frames per second. The exposure time can be controlled by the readout process of the CCD and usually does not require a mechanical shutter. When operating according to the CCIR standard, 50 interlaced images or half frames are read out from alternating odd or even rows every second, limiting the maximum exposure time to 1/50 second per half frame. In order to adapt the camera to display optimum contrast on a tube monitor, most video cameras offer a selectable non-linear intensity response, termed gamma correction. The electronic gain of the signal can also be increased in most cameras, but invariably at the cost of increased noise.

A.1.1 The TM520 Video Camera

The Pulnix TM520 video camera employs a Sony ICX039BLA 1/2 inch CCD image sensor and operates according to the CCIR monochrome video standard. The CCD has 752(H) by 582(V) effective pixels and supports exposure times from 1/60 to 1/10000 second. Gamma correction and gain settings can be configured internally. In order to obtain accurate intensity measurements the gamma correction was set to deliver a linear response (γ = 1). The gain setting was varied according to the application, but a low gain setting is preferable where possible, since it reduces the noise.

A.1.2 The Thomson Linescan Camera

In chapter 5 two linear CCD cameras were used. Each was assembled from a Thomson TH7811A linear CCD sensor and a Thomson TH 7931D drive module. The sensors have 1728 pixels each (pixel size: 13 by 13 μm) and provide a dynamic range of 6000:1. The linear CCD cameras were able to operate at a maximum line rate of approximately 1 kHz.

A.2 Frame Grabber

The general functions performed by a frame grabber are analogue signal conditioning, analogue to digital conversion, and data communication with a host computer. Some frame grabbers perform a variety of additional functions such as on-board processing, display and storage. The accuracy of the frame grabber affects the quality of the measurements, as it may introduce further noise and distortions and reduce the dynamic range. A frame grabber may be characterised primarily by its intensity resolution and its acquisition speed; there is usually a trade-off between the two. Since the majority of analogue video cameras have a dynamic range of less than 1000:1, most frame grabbers digitise a video signal with only 8-bit resolution, i.e. a maximum dynamic range of 256:1.

A.2.1 The Bit Flow Frame Grabbers

Two frame grabbers were used for the work presented in this thesis. The systems described in chapters 2 to 4 consisted of the Pulnix TM520 video camera and a Bit Flow Video Raptor standard frame grabber. In chapter 5 this was replaced with a Bit Flow Data Raptor.


Figure A.1: Noise distribution at maximum gain (Gaussian fit: standard deviation 8.5, mean 14.8; frequency count versus intensity in 8-bit DN)

The Data Raptor, although identical in many respects, has the additional capability to synchronise and condition signals derived from non-standard video cameras, such as our two Thomson line-scan cameras. Both frame grabbers digitise the video signal with 8-bit accuracy. With a maximum clock speed of 40 MHz they can perform 40 million analogue to digital conversions per second, i.e. digitise images of one million pixels at up to 40 frames per second. This allowed us to acquire 25 full frames per second of standard video in real time. However, since the Video Raptor accepts only standard video signals and its clock speed is fixed, it cannot be matched to the number of pixels on the CCD sensor of the Pulnix camera. As a consequence, the signal consisting of 752 pixels per line was oversampled to yield 768 values. Although no information is lost in this way, the physical location of the CCD pixels does not correspond to the digitised images, and aliasing may occur, resulting in artifacts.

A.3 Noise

We measured the noise of the Pulnix TM520 video camera and Bit Flow Video Raptor by repeatedly recording the signal from one pixel in an image. A noise distribution was then derived using 1000 samples. Since the gain setting of the video camera affects the noise, the noise was investigated for both the maximum and minimum gain settings. A low gain setting was used in the work presented in chapters 2 and 3, whereas a high gain setting was used for the investigation of the human fundus in chapter 4.


Figure A.2: Noise distribution at minimum gain (Gaussian fit: standard deviation 2, mean 9; frequency count versus intensity in 8-bit DN)


Figure A.3: Experimental configuration for the measurement of CCD camera sensitivity (SLD fibre source, collimating lens, removable power meter detector, CCD camera, frame grabber, computer)

The distribution of noise for both low and high gain is shown in figures A.1 and A.2, and the mean and standard deviation were evaluated using a Gaussian fit. We define the noise floor as the mean plus one standard deviation (14.8 + 8.5 = 23.3 DN at high gain).

A.4 Sensitivity

We have also investigated the sensitivity of the Pulnix video camera operating at a high gain setting. The experimental arrangement used for this is shown in figure A.3. The camera was illuminated with a superluminescent diode (SLD) emitting light at a wavelength of 830 nm. A calibrated power meter was used to measure the total power, P_0, incident on the CCD sensor area. The detector was then removed from the beam and the CCD was exposed for 1/60 second. The resulting image was stored for further analysis. The measurements were repeated for different illumination powers. Since the power is not distributed evenly across the area of the CCD device, the power incident per pixel cannot be derived simply by dividing the total power by the CCD sensor area. However, we observed that the intensity distribution approximated a Gaussian. Thus we can describe the power incident per pixel, P(x, y), as a two-dimensional Gaussian function of the form:

P(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \right),    (A.1)

where x and y are the pixel coordinates.


Figure A.4: Sensitivity calibration at an exposure time of 1/60 second (intensity in 8-bit DN versus power per pixel in nW)

Type                       Units     Value
Digital noise level        DN        23.3
NEE                        pJ/cm²    371
Digital saturation level   DN        255
SEE                        pJ/cm²    2456
Dynamic range                        7:1

Table A.1: Digital imaging system performance determined experimentally at high gain setting


The standard deviation, σ, is given by:

\sigma = \frac{r_{1/e}}{\sqrt{2}},    (A.2)

where r_{1/e} = \sqrt{x_{1/e}^2 + y_{1/e}^2} is the radius at which P(x_{1/e}, y_{1/e}) = P_{max}/e, and P_max is the maximum power per pixel. σ can be determined experimentally from the recorded images by finding r_{1/e}. The maximum power per pixel is then given by:

P_{max} = \frac{P_0}{2\pi\sigma^2}    (A.3)

and corresponds to the peak numerical value in the digitised image. Figure A.4 was derived by plotting five such pairs for different powers, and a linear fit was performed to estimate the sensitivity. The relationship between the power incident per pixel, P, and the digital output number (DN) is:

DN = A + B\,P,    (A.4)

where A = 18 ± 35 DN and B = (2.6 × 10⁶ ± 5 × 10⁵) DN/nW. Using the relationship between input power and output digital number (DN), as well as the noise floor at high gain (figure A.1), the noise equivalent exposure (NEE) and saturation equivalent exposure (SEE) can be determined. The NEE is defined as the power per unit area required to generate an output signal equal to the output noise level (23.3 DN); this figure describes the lower limit on detectable light energy. The SEE is the amount of power per unit area which produces an output equal to the saturation level (255 DN). The dynamic range is equivalent to the ratio between SEE and NEE. Values of SEE and NEE are quoted in pJ/cm², determined using the area of one pixel, 4.28 × 10⁻⁵ cm². Equations A.2-A.4 are exercised in the sketch below.
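The calibration chain of equations A.2-A.4 can be exercised with a few lines of Python; the total power and 1/e radius used below are hypothetical example values, not measurements.

    import math

    def sigma_from_radius(r_1e):
        """Equation A.2: sigma = r_(1/e) / sqrt(2), in pixels."""
        return r_1e / math.sqrt(2)

    def peak_power_per_pixel(P0, sigma):
        """Equation A.3: P_max = P0 / (2 pi sigma^2), P0 in nW."""
        return P0 / (2 * math.pi * sigma ** 2)

    def power_to_dn(P, A=18.0, B=2.6e6):
        """Equation A.4: DN = A + B*P (P in nW), clipped at 8-bit saturation."""
        return min(255.0, A + B * P)

    sigma = sigma_from_radius(300.0)            # hypothetical 1/e radius, pixels
    P_max = peak_power_per_pixel(0.1, sigma)    # hypothetical total power, 0.1 nW
    print(sigma, P_max, power_to_dn(P_max))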

Appendix B

Publications Arising from this Thesis


B.1 Refereed Journal Papers

1. L. Kay, A. Podoleanu, M. Seeger, and C. J. Solomon. A new approach to the measurement and analysis of impact craters. International Journal of Impact Engineering, 19(8):739-753, 1996.
2. Adrian Gh. Podoleanu, Mauritius Seeger, George M. Dobre, David J. Webb, and David A. Jackson. Transversal and longitudinal images from the retina of the living eye using low-coherence reflectometry. Journal of Biomedical Optics, 3(1), 1997.
3. Adrian Gh. Podoleanu, George Dobre, Mauritius Seeger, David J. Webb, and David A. Jackson. Low-coherence interferometry for en-face imaging of the retina. Submitted to Laser and Light in Ophthalmology, 1997.
4. C. J. Solomon, M. Seeger, L. Kay, and J. Curtis. Automated compact parametric representation of impact craters. Submitted to International Journal of Impact Engineering, 1997.

B.2 Conference Papers

1. Mauritius Seeger, Adrian Podoleanu, Chris J. Solomon, and David A. Jackson. 3-D low-coherence imaging for multiple-layer industrial surface analysis. In Conference on Lasers and Electro-Optics, volume 9, page 328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.
2. Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. Preliminary results of retinal tissue imaging using the coherence radar technique. In K. T. V. Grattan, editor, Applied Optics and Optoelectronics, pages 64-68, Techno House, Redcliffe Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics Publishing.



3. Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. CCD based low-coherence interferometry using balanced detection. Submitted to the Conference on Lasers and Electro-Optics, 1998.

Bibliography
[1] J. Jahanmir, B. G. Haggar, and J. B. Hayes. The scanning probe microscope. Scanning Microscopy, 6(3):625-660, 1992.
[2] J. F. Song and Theodore V. Vorburger. Stylus profiling at high resolution and low force. Applied Optics, 30(1):42-50, 1991.
[3] Jean M. Bennett, Virgil Elings, and Kevin Kjoller. Recent developments in profiling optical surfaces. Applied Optics, 32(19):3442-3447, 1993.
[4] Christopher John Solomon. Studies of a Semiconductor-Based Compton Camera for Radionuclide Imaging in Biology and Medicine. PhD thesis, Royal Marsden Hospital, Sutton, Surrey, September 1988.
[5] Paul T. Callaghan. Principles of Nuclear Magnetic Resonance Microscopy. Clarendon, Oxford, 1991.
[6] Joseph W. Sassani and Mary D. Osbakken. Anatomic features of the eye disclosed with nuclear magnetic resonance imaging. Arch Ophthalmol, 102:541-546, 1984.
[7] Joseph A. Izatt, Michael R. Hee, David Huang, James G. Fujimoto, Eric A. Swanson, Charles P. Lin, and Joel S. Shuman. Ophthalmic diagnostics using optical coherence tomography. In Ophthalmic Technologies III, volume 1877, pages 136-144. SPIE, 1993.
[8] Gerald V. Blessing, John A. Slotwinski, Donald G. Eitzen, and Harry M. Ryan. Ultrasonic measurements of surface roughness. Applied Optics, 32(19):3433-3437, 1993.
[9] Dónal B. Downey, David A. Nicolle, Morris F. Levin, and Aaron Fenster. Three-dimensional ultrasound imaging of the eye. Eye, 10:75-81, 1996.
[10] Tom H. Williamson and Alon Harris. Color Doppler ultrasound imaging of the eye and orbit. Survey of Ophthalmology, 40(4):255-267, 1996.
[11] M. Okutomi and T. Kanade. A multiple-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(4):353-363, 1993.
[12] M. G. Gee and N. J. McCormick. The application of confocal scanning microscopy to the examination of ceramic wear surfaces. Journal of Physics D: Applied Physics, 25:A230-A235, 1992.
[13] David Shotton, editor. Electronic Light Microscopy: Techniques in Modern Biomedical Microscopy, pages 231-246. Wiley-Liss, 1993.


[14] R. G. King and P. M. Delaney. Confocal microscopy. Materials Forum, 18:21-29, 1994.
[15] D. S. Dilworth, E. N. Leith, and J. L. Lopez. 3-dimensional confocal imaging of objects embedded within thick diffusing media. Applied Optics, 30(14):1796-1803, 1991.
[16] J. V. Jester, P. M. Andrews, W. M. Petroll, M. A. Lemp, and H. D. Cavanagh. In vivo, real-time confocal imaging. Journal of Electron Microscopy Techniques, 18(1):50-60, 1991.
[17] Robert H. Webb, George W. Hughes, and Francois C. Delori. Confocal scanning laser ophthalmoscope. Applied Optics, 26(8):1492-1499, 1987.
[18] W. H. Woon, F. W. Fitzke, A. C. Bird, and J. Marshall. Confocal imaging of the fundus using a scanning laser ophthalmoscope. British Journal of Ophthalmology, 76:470-474, 1992.
[19] Dov Weinberger, Hadas Stiebel, Dan D. Gaton, Ethan Priel, and Yuval Yassur. Three-dimensional measurements of idiopathic macular holes using a scanning laser tomograph. Ophthalmology, 102(10):1445-1449, 1995.
[20] A. von Rückmann, F. W. Fitzke, and A. C. Bird. Distribution of fundus autofluorescence with a scanning laser ophthalmoscope. British Journal of Ophthalmology, 79:407-412, 1995.
[21] Masanori Idesawa, Toyohiko Yatagai, and Takashi Soma. Scanning moiré method and automatic measurement of 3-D shapes. Applied Optics, 16(8):2152-2162, 1977.
[22] Mitsuo Takeda, Hideki Ina, and Seiji Kobayashi. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am., 72(1):156-160, 1982.
[23] Mitsuo Takeda and Kazuhiro Mutoh. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Applied Optics, 22(24):3977-3982, 1983.
[24] Katherine Creath. Step height measurement using two-wavelength phase-shifting interferometry. Applied Optics, 10(9):2113, 1971.
[25] J. C. Wyant. Testing aspherics using two-wavelength holography. Applied Optics, 10(9):2113, 1971.
[26] Hashim Atcha. Optoelectronic Speckle Pattern Interferometry. PhD thesis, Cranfield University, December 1994.
[27] Deepak Uttamchandani and Ivan Andonovic, editors. Principles of Modern Optical Systems, Volume 2. Artech, 1992.
[28] E. A. Swanson, D. Huang, M. R. Hee, J. G. Fujimoto, C. P. Lin, and C. A. Puliafito. High-speed optical coherence domain reflectometry. Optics Letters, 17(2):151-153, 1992.


[29] Thomas Dresel, Gerd Häusler, and Holger Venzke. Three-dimensional sensing of rough surfaces by coherence radar. Applied Optics, 31:919, 1992.
[30] Leslie Deck and Peter de Groot. High-speed noncontact profiler based on scanning white-light interferometry. Applied Optics, 33(31):7334, 1994.
[31] Eric A. Swanson, Michael R. Hee, Guillermo J. Tearney, and James G. Fujimoto. Application of optical coherence tomography in non-destructive evaluation of material microstructure. In Conference on Lasers and Electro-Optics, volume 9, pages 326-327, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.
[32] E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schumann, C. A. Puliafito, and J. G. Fujimoto. In vivo retinal imaging by optical coherence tomography. Optics Letters, 18(21):1864-1866, 1993.
[33] Gerd Häusler and Jochen Neumann. Coherence radar - an accurate 3-D sensor for rough surfaces. In Optics, Illumination and Image Sensing for Machine Vision VII, volume 1822, pages 200-205. SPIE, 1992.
[34] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Fiberised set-up for retinal imaging of the living eye using low coherence interferometry. In Biomedical Applications of Photonics, Savoy Place, London WC2R 0BL, UK, April 1997. IEE, The Institution of Electrical Engineers. Reference Number: 1997/124.
[35] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Fiberised set-up for eye-length measurement. Optics Communications, 137:397-405, 1997.
[36] X. Clivaz, F. Marquis-Weible, R. P. Salathé, R. P. Novák, and H. H. Gilgen. High-resolution reflectometry in biological tissues. Optics Letters, 17(1):4-6, 1992.
[37] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Coherence imaging by use of a Newton rings sampling function. Optics Letters, 21(21):1789-1791, 1996.
[38] Carmen A. Puliafito, Michael R. Hee, Charles P. Lin, Elias Reichel, Joel S. Schuman, Jay S. Duker, Joseph A. Izatt, Eric A. Swanson, and James G. Fujimoto. Imaging of macular diseases with optical coherence tomography. Ophthalmology, 102(2):217-229, 1995.
[39] Joseph A. Izatt, Michael R. Hee, Eric A. Swanson, Charles P. Lin, David Huang, Joel S. Schuman, Carmen A. Puliafito, and James G. Fujimoto. Micrometer-scale resolution imaging of the anterior eye in vivo with optical coherence tomography. Arch Ophthalmol, 112:1584-1589, 1994.
[40] Michael R. Hee, Carmen A. Puliafito, Carlton Wong, Elias Reichel, Jay S. Duker, Joel S. Schuman, Eric A. Swanson, and James G. Fujimoto. Optical coherence tomography of central serous chorioretinopathy. American Journal of Ophthalmology, 120(1):65-74, 1995.
[41] Christoph K. Hitzenberger. Measurement of corneal thickness by low-coherence interferometry. Applied Optics, 31(31):6637-6642, 1992.
[42] Stephen A. Boppart, Gary J. Tearney, Brett Bouma, James G. Fujimoto, and Mark E. Brezinski. Optical coherence tomography of developing embryonic morphology. In Conference on Lasers and Electro-Optics, volume 9, pages 55-56, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.
[43] Mark Bashkansky, M. D. Duncan, Manfred Kahn, J. Reintjes, and Phillip R. Battle. Subsurface defect detection in ceramic materials using an optical gated scatter reflectometer. In Conference on Lasers and Electro-Optics, volume 9, pages 327-328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.


[41] Christoph K. Hitzenberger. Measurement of corneal thickness by low-coherence interferometry. Applied Optics, 31(31):66376642, 1992. [42] Stephen A. Boppart, Gary J. Tearney, Brett Bouma, James G. Fujimoto, and Mark E. Brezinski. Optical coherence tomography of developing embryonic morphology. In Conference on Lasers and Electro-Optics, volume 9, pages 5556, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series. [43] Mark Bashkansky, M. D. Duncan, Manfred Kahn, J. Reintjes, and Phillip R. Battle. Subsurface defect detection in ceramic materials using an optical gated scatter reectometer. In Conference on Lasers and Electro-Optics, volume 9, pages 327328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series. [44] D. N. Wang, S. Chen, K. T. V. Grattan, and A. W. Palmer. A low coherence white light interferometric sensor for eye length measurement. Rev. Sci. Instrum., 66:3438, 1995. [45] A. Gh. Podoleanu, S. R. Taplin, D. J. Webb, and D. A. Jackson. Channelled spectrum display using a CCD array for student laboratory demostartions. European Journal of Physic, 15:266271, 1994. [46] H. Perrin, P. Sandoz, and G. Tribillion. Longitudinally dispersive prolometer. Pure Applied Optics, 4:219, 1995. [47] J. Schwider and Liang Zhou. Dispersive interferometric prolometer. Optics Letters, 19(13):995, 1994. [48] Thomas M. Merklein. High resolution measurement of multilayer structures. Applied Optics, 29(4):505, 1990. [49] W. Linnik. Ein apparat fr mikroskopisch-interferometrische untersuchung reeku tierender objekte (mikrointerferometer). Akad. Nauk. SSSR Dokl., 1:18, 1933. [50] Mark Davidson, Kalman Kaufman, Isaac Mazor, and Felix Cohen. An application of interference microscopy to integrated circuit inspection and metrology. In Proc. of Integrated Circuit Metrology, Inspection, and Process Control, volume 775, page 233. SPIE, 1987. [51] James C. Wyant and Katherine Creath. Advances in interferometric optical proling. Int. J. Machine Tools and Manufacture, 32(12):510, 1992. [52] Gordon S. Kino and Stanley S. C. Chim. Mirau correlation microscope. Applied Optics, 29(26):3775, 1990. [53] S. M. Pandit and N. Jordache. Data-dependent-systems and fourier-transform methods for single-interferogram analysis. Applied Optics, 34(26):59455951, 1995. [54] Z. Wang and P. J. Bryanston-Cross. An algorithm of spatial phase-shifting interferometry. In K.T.V.Grattan, editor, Applied Optics and Optoelectronics, pages 6468, Techno House, Redclie Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics, Institute of Physics Publishing.

[55] J. C. Wyant, B. F. Oreb, and P. Hariharan. Testing aspherics using two-wavelength holography: use of digital electronic techniques. Applied Optics, 23(22):4020–4023, 1984.
[56] Paul J. Caber. Interferometric profiler for rough surfaces. Applied Optics, 32:3438, 1993.
[57] Hong Zhao, Wenyi Chen, and Yushan Tan. Phase-unwrapping algorithm for the measurement of three-dimensional object shapes. Applied Optics, 33(20):4497–4500, 1994.
[58] N. Balasubramanian. Optical system for surface topography measurement. Technical Report 4340306, United States Patent, July 1982.
[59] M. Davidson, K. Kaufman, and I. Mazor. The coherence probe microscope. Solid State Technology, page 57, 1987.
[60] Stanley S. C. Chim and Gordon S. Kino. Three-dimensional image realization in interference microscopy. Applied Optics, 31:2550–2553, 1992.
[61] P. de Groot and Leslie Deck. Three-dimensional imaging by sub-Nyquist sampling of white-light interferograms. Optics Letters, 18(17):1462, 1993.
[62] Peter de Groot and Leslie Deck. Surface profiling by analysis of white-light interferograms in the spatial frequency domain. Journal of Modern Optics, 42(2):389–401, 1995.
[63] Zygo industry applications. Webpage, 1997. http://www.zygo.com/.
[64] Born and Wolf. Principles of Optics, pages 767–772. Pergamon Press, sixth edition, 1993.
[65] Patrick Sandoz and Gilbert Tribillon. Profilometry by zero-order interference fringe identification. Journal of Modern Optics, 40(9):1691–1700, 1993.
[66] Hajime Yano. The Physics and Chemistry of Hypervelocity Impact Signatures on Spacecraft: Meteoroid and Space Debris. PhD thesis, The University of Kent at Canterbury, Canterbury, Kent, UK, September 1995.
[67] A. S. Levine, editor. First Post-Retrieval Symposium, volume 3134 of LDEF - 69 Months in Space. NASA, 1991.
[68] A. S. Levine, editor. Second Post-Retrieval Symposium, volume 3194 of LDEF - 69 Months in Space. NASA, 1992.
[69] A. S. Levine, editor. Third Post-Retrieval Symposium, volume 3275 of LDEF - 69 Months in Space. NASA, 1993.
[70] D. C. Hill, M. F. Rose, S. R. Best, and M. S. Crumpler. The effect of impact angle on craters formed by hypervelocity particles. In Third Post-Retrieval Symposium, volume 3275 of LDEF - 69 Months in Space. NASA, 1993.
[71] R. J. Noll. Zernike polynomials and atmospheric turbulence. J. Opt. Soc. Am., 66:207–211, 1976.

[72] L. Kay, A. Podoleanu, M. Seeger, and C. J. Solomon. A new approach to the measurement and analysis of impact craters. International Journal of Impact Engineering, 19(8):739–753, 1996.
[73] C. J. Solomon, M. Seeger, L. Kay, and J. Curtis. Automated compact parametric representation of impact craters. Submitted to International Journal of Impact Engineering, 1997.
[74] Laurie Kay. Development of a new method for the measurement, analysis and interpretation of impact craters. Grant application made to the Particle Physics and Astronomy Research Council, September 1997.
[75] S. Chen, A. W. Palmer, K. T. V. Grattan, and B. T. Meggitt. Fringe order identification in optical fibre white-light interferometry using centroid algorithm method. Electronics Letters, 28(6):553–555, 1992.
[76] Masao Shimoji. Analysis of a conical optical beam deflector insensitive to motor wobble. Applied Optics, 34(13):2305–2315, 1995.
[77] Yajun Li and Joseph Katz. Laser beam scanning by rotary mirrors. I. Modelling mirror-scanning devices. Applied Optics, 34(28):6403–6415, 1995.
[78] P. J. Brosens. Dynamic mirror distortions in optical scanning. Applied Optics, 11(12):2987–2989, 1972.
[79] R. Hradaynath and A. K. Jaiswal. Distortion in a 2-D scan pattern generated by combining a plane mirror and a regular polygon scanner. Applied Optics, 22(4):615–619, 1983.
[80] M. Bail, Gerd Häusler, J. H. Herrmann, M. W. Lindner, and R. Ringler. Optical coherence tomography with the Spectral Radar - fast optical analysis in volume scatterers by short coherence interferometry. Volume 2925, pages 298–303. SPIE, 1996.
[81] H. Brunner, J. Strohm, M. Hassel, and R. Steiner. Optical coherence tomography (OCT) of human skin with a slow-scan CCD camera. Volume 2626, pages 273–282. SPIE, 1995.
[82] Eric A. Swanson. Method and apparatus for acquiring images using a CCD detector array and no transverse scanner. Technical Report 5465147, United States Patent, November 1995.
[83] Mauritius Seeger, Adrian Podoleanu, Chris J. Solomon, and David A. Jackson. 3-D low-coherence imaging for multiple-layer industrial surface analysis. In Conference on Lasers and Electro-Optics, volume 9, page 328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.
[84] Born and Wolf. Principles of Optics, page 42. Pergamon Press, sixth edition, 1993.
[85] Takashi Fukano and Ichirou Yamaguchi. Simultaneous measurement of thickness and refractive indices of multiple layers by a low-coherence confocal microscope. Optics Letters, 21(23):1942–1944, 1996.

[86] W. V. Sorin and D. F. Gray. Simultaneous thickness and group index measurements using optical low-coherence reflectometry. IEEE Photonics Technology Letters, 4(1):105–107, 1992.
[87] Nori Shibata, Makoto Tsubokawa, Takashi Nakashima, and Shigeyuki Seikai. Temporal coherence properties of a dispersive propagating beam in a fiber-optic interferometer. J. Opt. Soc. Am. A, 4:494–497, 1987.
[88] Eugene Hecht, editor. Optik. Addison-Wesley, 1989.
[89] A. D. Kersey, M. J. Marrone, A. Dandridge, and A. B. Tveten. Optimization and stabilization of visibility in interferometric fiber-optic sensors using input-polarization control. Journal of Lightwave Technology, 6(10):1599–1609, 1988.
[90] Michael R. Hee, David Huang, Eric A. Swanson, and James G. Fujimoto. Polarization-sensitive low-coherence reflectometer for birefringence characterization and ranging. J. Opt. Soc. Am. B, 9(6):903–908, 1992.
[91] Wolfgang Drexler, Christoph K. Hitzenberger, Harald Sattmann, and Adolf F. Fercher. Measurement of the thickness of fundus layers by partial coherence tomography. Optical Engineering, 34(3):701–709, 1995.
[92] A. F. Fercher, K. Mengedoht, and W. Werner. Eye-length measurements by interferometry with partially coherent light. Optics Letters, 13:186–188, 1988.
[93] Michael R. Hee, Joseph A. Izatt, Eric A. Swanson, David Huang, Joel S. Schuman, Charles P. Lin, Carmen A. Puliafito, and James G. Fujimoto. Optical coherence tomography of the human retina. Archives of Ophthalmology, 113:325–332, 1995.
[94] Adrian Gh. Podoleanu, Mauritius Seeger, George M. Dobre, David J. Webb, and David A. Jackson. Transversal and longitudinal images from the retina of the living eye using low-coherence reflectometry. Journal of Biomedical Optics, 3(1), 1997.
[95] Adrian Gh. Podoleanu, George Dobre, Mauritius Seeger, David J. Webb, and David A. Jackson. Low-coherence interferometry for en-face imaging of the retina. Submitted to Lasers and Light in Ophthalmology, 1997.
[96] Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. Preliminary results of retinal tissue imaging using the Coherence Radar technique. In K. T. V. Grattan, editor, Applied Optics and Optoelectronics, pages 64–68, Techno House, Redcliffe Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics, Institute of Physics Publishing.
[97] Francois C. Delori and Kent P. Pflibsen. Spectral reflectance of the human ocular fundus. Applied Optics, 28(6):1061–1077, 1989.
[98] Francois C. Delori and Stephen A. Burns. Fundus reflectance and the measurement of crystalline lens density. J. Opt. Soc. Am. A, 13(2):215–226, 1996.
[99] Hai-Pang Chiang, Wei-Sheng Chang, and Jyhpyng Wang. Imaging through random scattering media by using cw broadband interferometer. Optics Letters, 18(7):546–548, 1993.

[100] P. C. Sun and E. Arons. Nonscanning confocal ranging system. Applied Optics, 34(7):1254–1261, 1995.
[101] Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. CCD based low-coherence interferometry using balanced detection. Submitted to the Conference on Lasers and Electro-Optics, September 1998.
[102] Brian Culshaw and John Dakin, editors. Optical Fibre Sensors: Systems and Applications, Volume 2. Artech House, 1989.
[103] Born and Wolf. Principles of Optics. Pergamon Press, sixth edition, 1993.
[104] W. V. Sorin, D. M. Barns, and S. A. Newton. Optical low-coherence reflectometry with -148 dB sensitivity at 1.55 µm. In Eighth International Conference on Optical Fibre Sensors, pages 1–4, 1993.
[105] K. Takada, A. Himeno, and K. Yukimatsu. Phase-noise and shot-noise limited operation of low coherence optical time domain reflectometry. Applied Physics Letters, 59(20):2483–2485, 1991.
