
Modes of sonography

Several different modes of ultrasound are used in medical imaging.[7] These are:

• A-mode: A-mode is the simplest type of ultrasound. A single transducer scans a line through the body, with the echoes plotted on screen as a function of depth. Therapeutic ultrasound aimed at a specific tumor or calculus also uses A-mode, allowing pinpoint-accurate focusing of the destructive wave energy. (The pulse-echo and Doppler relations behind these modes are sketched in code at the end of this section.)
• B-mode: In B-mode ultrasound, a linear array of transducers simultaneously scans a plane through
the body that can be viewed as a two-dimensional image on screen.
• M-mode: M stands for motion. In M-mode, a rapid sequence of B-mode scans whose images follow each other in sequence on screen enables doctors to see and measure the range of motion, as the organ boundaries that produce reflections move relative to the probe.
• Doppler mode: This mode makes use of the Doppler effect in measuring and visualizing blood flow.
o Color Doppler: Velocity information is presented as a color-coded overlay on top of a B-mode image.
o Continuous Doppler: Doppler information is sampled along a line through the body, and all velocities detected at each time point are presented (on a timeline).
o Pulsed wave (PW) Doppler: Doppler information is sampled from only a small sample volume (defined in the 2D image) and presented on a timeline.
o Duplex: a common name for the simultaneous presentation of 2D and (usually) PW Doppler information. (On modern ultrasound machines, color Doppler is almost always also used, hence the alternative name triplex.)
o 4B mode: A four-dimensional B-mode ultrasound adds time to length, width, and depth, so that a moving three-dimensional image is seen on the monitor.
o In 3D ultrasound (US), several 2D images are acquired by moving the probe across the body surface or rotating inserted probes. 3D mode uses the same basic concept as 2D ultrasound, but rather than taking the image from a single angle, the sonographer acquires a volume image. The volume image displayed on the screen is a software rendering of all of the detected soft tissue, combined by specialized computer software to form a three-dimensional image.
The 3D volume rendering (VR) technique does not rely on segmentation (segmentation techniques are difficult to apply to ultrasound pictures) and makes it possible to obtain clear 3D ultrasound images for clinical diagnosis. A 3D ultrasound produces a still image. Diagnostic US systems with 3D display functions and linear array probes are mainly used for obstetric and abdominal applications. The combination of contrast agents, harmonic imaging and power Doppler greatly improves 3D US reconstructions.

3D imaging gives a better look at the organ being examined and is used for:

• detection of abnormal fetal development, e.g. of the face and limbs

• visualization of e.g. the colon and rectum

• detection of cancerous and benign tumors, e.g. tumors of the prostate gland and breast lesions

• pictures of blood flow in various organs or a fetus


4D mode: 4D ultrasound (also referred to as live 3D ultrasound or 4B mode) is the latest ultrasound technology; the fourth dimension means length, width, and depth over time. 4D ultrasound takes 3D ultrasound images and adds the element of time, so that a moving three-dimensional image is seen on the monitor. A 4D scan takes the same amount of time as a 2D or 3D scan; the difference is the ultrasound equipment being used. One advantage of a 4D fetal ultrasound over 2D mode is that parents can see how their baby will generally look. However, opinions differ over its medical advantages.
To scan a 3D ultrasound image, the probe is swept over the maternal abdomen. A computer takes multiple images and renders the 3D picture. With 4D imaging, the computer takes multiple pictures while the probe is held still, and a 3D image is simultaneously rendered in real time on a monitor.
In most cases, a standard 2D ultrasound is taken first, and the 3D/4D scan capability is added if an abnormality is detected or suspected. The 3D/4D sonogram is then focused on a specific area to provide the details needed to assess and diagnose a suspected problem. A quick 4D scan of the face of the fetus may be performed at the end of a routine exam, providing the parents with a photo.
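
The physics behind these modes reduces to two simple relations: pulse-echo ranging (A-, B- and M-mode) and the Doppler shift (the Doppler modes). The following Python sketch is illustrative only; 1540 m/s is the conventional assumed speed of sound in soft tissue, and all other numbers are made up for the example.

```python
# Illustrative sketch (not from the source) of the two relations behind
# pulse-echo imaging and Doppler ultrasound. 1540 m/s is the standard
# assumed speed of sound in soft tissue; other values are examples.
import math

C_TISSUE = 1540.0  # m/s

def echo_depth_m(round_trip_s: float) -> float:
    """Reflector depth from an echo's round-trip time: d = c * t / 2."""
    return C_TISSUE * round_trip_s / 2.0

def doppler_velocity_ms(f0_hz: float, shift_hz: float, angle_deg: float) -> float:
    """Flow velocity from the Doppler shift: v = c * df / (2 * f0 * cos(theta))."""
    return C_TISSUE * shift_hz / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# An echo returning after 65 microseconds comes from about 5 cm deep.
print(f"depth   : {echo_depth_m(65e-6) * 100:.1f} cm")
# A 1.3 kHz shift on a 4 MHz beam at 60 degrees is about 0.5 m/s of flow.
print(f"velocity: {doppler_velocity_ms(4e6, 1.3e3, 60.0):.2f} m/s")
```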

Magnetic resonance cholangiopancreatography (MRCP) is a medical imaging technique that uses magnetic resonance imaging to visualise the biliary and pancreatic ducts in a non-invasive manner.[1] This procedure can be used to determine whether gallstones are lodged in any of the ducts surrounding the gallbladder.

Comparison to other techniques

In the diagnosis of biliary and pancreatic disorders, MRCP is a much less invasive investigation when
compared to endoscopic retrograde cholangiopancreatography (ERCP). Although both techniques can
image the ductal system in detail, MRCP also allows imaging of the surrounding parenchyma. In a recent study of 269 patients undergoing both ERCP and MRCP, MRCP compared favourably with the more invasive technique[2]. As with other forms of magnetic resonance imaging, appearances can sometimes be deceptive[3].

Functional MRI or functional Magnetic Resonance Imaging (fMRI) is a type of specialized MRI
scan. It measures the hemodynamic response (change in blood flow) related to neural activity in
the brain or spinal cord of humans or other animals. It is one of the most recently developed
forms of neuroimaging. Since the early 1990s, fMRI has come to dominate the brain mapping
field due to its relatively low invasiveness, absence of radiation exposure, and relatively wide
availability.

[Figure: fMRI statistics (yellow) overlaid on an average of the brain anatomies of several humans (gray)]

Background

Since the 1890s it has been known that changes in blood flow and blood oxygenation in the brain (collectively known as hemodynamics) are closely linked to neural activity.[1] When neural cells are active, they increase their consumption of energy from glucose and switch to the less energetically efficient, but more rapid, process of aerobic glycolysis.[2][3] The local response to this energy utilization is an increase in blood flow to regions of increased neural activity, which occurs after a delay of approximately 1–5 seconds. This hemodynamic response rises to a peak over 4–5 seconds before falling back to baseline (and typically undershooting slightly). This leads to local changes in the relative concentration of oxyhemoglobin and deoxyhemoglobin, and to changes in local cerebral blood volume and local cerebral blood flow.
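
The shape described above (a rise peaking over roughly 4–5 seconds, a return to baseline and a slight undershoot) is commonly modeled with a double-gamma hemodynamic response function. The sketch below is a minimal illustration using widely used SPM-style default shape parameters; those defaults are an assumption for illustration, not values taken from this text.

```python
# Minimal sketch of a double-gamma hemodynamic response function (HRF),
# using SPM-style default shape parameters (peak ~5 s, undershoot ~15 s).
# These defaults are an assumption for illustration, not from the text.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, ratio=1.0 / 6.0):
    """HRF modeled as the difference of two gamma densities (t in seconds)."""
    return gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, under_shape)

t = np.arange(0.0, 30.0, 0.1)
hrf = double_gamma_hrf(t)
print(f"peak at ~{t[np.argmax(hrf)]:.1f} s, "
      f"undershoot minimum at ~{t[np.argmin(hrf)]:.1f} s")
```
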
Physiology

As neurons do not have internal reserves of glucose and oxygen, increased neuronal activity requires more glucose and oxygen to be delivered rapidly through the blood stream. Through a process called the hemodynamic response, blood delivers glucose to active neurons and astrocytes at a greater rate than to areas of inactive neurons. The result is a surplus of oxyhemoglobin in the veins of the area and a distinguishable change in the local ratio of oxyhemoglobin to deoxyhemoglobin, the "marker" of BOLD for MRI.[3]

Hemoglobin is diamagnetic when oxygenated (oxyhemoglobin) but paramagnetic when deoxygenated (deoxyhemoglobin).[10] The magnetic resonance (MR) signal of blood is therefore
slightly different depending on the level of oxygenation. Higher BOLD signal intensities arise from
increases in the concentration of oxygenated hemoglobin since the blood magnetic susceptibility
now more closely matches the tissue magnetic susceptibility. By collecting data in an MRI
scanner with sequence parameters sensitive to changes in magnetic susceptibility one can
assess changes in BOLD contrast. These changes can be either positive or negative depending
upon the relative changes in both cerebral blood flow (CBF) and oxygen consumption. Increases
in CBF that outstrip changes in oxygen consumption will lead to increased BOLD signal; conversely, decreases in CBF that outstrip changes in oxygen consumption will cause decreased
BOLD signal intensity. The signal difference is very small, but given many repetitions of a
thought, action or experience, statistical methods can be used to determine the areas of the brain
which reliably show more of this difference as a result, and therefore which areas of the brain are
active during that thought, action or experience.
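
As a concrete illustration of this statistical idea, the following hedged sketch simulates a single voxel during alternating task blocks and tests it with an ordinary least-squares regression, a stripped-down version of the general linear model commonly used in fMRI. A real analysis would also convolve the task with a hemodynamic response function, model drifts, and correct for multiple comparisons.

```python
# Hedged sketch of the statistical idea: regress a noisy voxel time
# series on an alternating task regressor and compute a t-statistic.
# Entirely synthetic; real analyses add HRF convolution, drift terms
# and multiple-comparisons correction.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                           # number of scans
task = ((np.arange(n) // 20) % 2).astype(float)   # 20-scan on/off blocks
voxel = 0.5 * task + rng.normal(0.0, 1.0, n)      # weak effect in noise

X = np.column_stack([task, np.ones(n)])    # design matrix: task + intercept
beta, rss, *_ = np.linalg.lstsq(X, voxel, rcond=None)
sigma2 = rss[0] / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
print(f"task beta = {beta[0]:.2f}, t = {beta[0] / se:.1f}")
```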

Almost all current fMRI research uses BOLD as the method for determining where activity occurs
in the brain as the result of various experiences, but because the signals are relative and not
individually quantitative, some question its rigor.[11] Other methods which propose to measure
neural activity more directly have been attempted (for example measurement of the Oxygen
Extraction Fraction (OEF) in regions of the brain, which measures how much of the
oxyhemoglobin in the blood has been converted to deoxyhemoglobin [12] or direct detection of
magnetic fields generated by neuronal currents[13]), but because the electromagnetic fields
created by an active or firing neuron are so weak, the signal-to-noise ratio is extremely low and
statistical methods used to extract quantitative data have so far been largely unsuccessful.

Neural correlates of BOLD

The precise relationship between neural signals and BOLD is under active research. In general,
changes in BOLD signal are well correlated with changes in blood flow. Numerous studies during
the past several decades have identified a coupling between blood flow and metabolic rate [14];
that is, the blood supply is tightly regulated in space and time to provide the nutrients for brain
metabolism. However, neuroscientists have been seeking a more direct relationship between the
blood supply and the neural inputs/outputs that can be related to observable electrical activity and
circuit models of brain function.

While current data indicate that local field potentials, an index of integrated electrical activity, form
a marginally better correlation with blood flow than the spiking action potentials that are most
directly associated with neural communication [15], no simple measure of electrical activity to date
has provided an adequate correlation with metabolism and the blood supply across a wide
dynamic range. Presumably, this reflects the complex nature of metabolic processes, which form
a superset with regards to electrical activity. Some recent results have suggested that the
increase in cerebral blood flow (CBF) following neural activity is not causally related to the
metabolic demands of the brain region, but rather is driven by the presence of neurotransmitters,
like glutamate[16], serotonin, nitric oxide[17], acetylcholine, dopamine and noradrenaline.
Some other recent results suggest that an initial small, negative dip before the main positive
BOLD signal is more highly localized and also correlates with measured local decreases in tissue
oxygen concentration (perhaps reflecting increased local metabolism during neuron activation)[18][19]. Use of this more localized negative BOLD signal has enabled imaging of human ocular
dominance columns in primary visual cortex, with resolution of about 0.5 mm[20]. One problem with
this technique is that the early negative BOLD signal is small and can only be seen using larger
scanners with magnetic fields of at least 3 Tesla. Further, the signal is much smaller than the
normal BOLD signal, making extraction of the signal from noise more difficult. Also, this initial dip
occurs within 1–2 seconds of stimulus initiation, which may not be captured when signals are recorded at long repetition times (TR). If the TR is sufficiently low, increased speed of the cerebral blood flow response due to consumption of vasoactive drugs (such as caffeine[21]) or natural differences in vascular responsiveness may further obscure observation of the initial dip.

The BOLD signal is composed of CBF contributions from larger arteries and veins, smaller
arterioles and venules, and capillaries. Experimental results indicate that the BOLD signal can be
weighted to the smaller vessels, and hence closer to the active neurons, by using larger magnetic
fields. For example, whereas about 70% of the BOLD signal arises from larger vessels in a 1.5
tesla scanner, about 70% arises from smaller vessels in a 7 tesla scanner[22]. Furthermore, the
size of the BOLD signal increases roughly as the square of the magnetic field strength [23]. Hence
there has been a push for larger field scanners to both improve localization and increase the
signal. A few 7 tesla commercial scanners have become operational, and experimental 8 and 9
tesla scanners are under development.

Technique

BOLD effects are measured using rapid volumetric acquisition of images with contrast weighted
by T1 or T2*. Such images can be acquired with moderately good spatial and temporal resolution;
images are usually taken every 1–4 seconds, and the voxels in the resulting image typically
represent cubes of tissue about 2–4 millimeters on each side in humans. Recent technical
advancements, such as the use of high magnetic fields[24] and multichannel RF reception[25][26][27],
have advanced spatial resolution to the millimeter scale. Although responses to stimuli presented
as close together as one or two seconds can be distinguished from one another, using a method
known as event-related fMRI, the full time course of a BOLD response to a briefly presented
stimulus lasts about 15 seconds for the robust positive response.

fMRI studies draw from many disciplines

fMRI is a highly interdisciplinary research area and many studies draw on knowledge in several
fields:

• Physics: Physical principles underlie fMRI signals and many studies require an
understanding of these underlying principles.
• Psychology: Almost all fMRI studies are essentially cognitive psychological, cognitive
psychophysiological, and/or psychophysical experiments in which the MRI scanner is
used to obtain an extra set of measurements in addition to behavioral or
electroencephalographic measurements.
• Neuroanatomy: The fMRI signals can be put into the context of previous knowledge only
with an understanding of the neuroanatomy.
• Statistics: Correct application of statistics is essential to "tease out" observations and
avoid false-positive results.
• Electrophysiology: Familiarity with neuronal behavior at the electrophysiological level can
help investigators design a useful fMRI study.

Advantages and Disadvantages of fMRI


Like any technique, fMRI has advantages and disadvantages, and in order to be useful, the
experiments that employ it must be carefully designed and conducted to maximize its strengths
and minimize its weaknesses.

Advantages of fMRI

• It can noninvasively record brain signals without risks of radiation inherent in other
scanning methods, such as CT or PET scans.
• It has high spatial resolution. 2–3 mm is typical but resolution can be as good as 1 mm.
• It can record signal from all regions of the brain, unlike EEG/MEG which are biased
towards the cortical surface.
• fMRI is widely used and standard data-analysis approaches have been developed which
allow researchers to compare results across labs.
• fMRI produces compelling images of brain "activation".

Disadvantages of fMRI

• The images produced must be interpreted carefully, since correlation does not imply
causality, and brain processes are complex and often non-localized.
• Statistical methods must be used carefully because they can produce false positives.
One team of researchers studying reactions to pictures of human emotional expressions reported a few activated voxels in the brain of a dead salmon when no correction for multiple comparisons was applied, illustrating the need for rigorous statistical analyses[28] (see the simulation sketched after this list).
• The BOLD signal is only an indirect measure of neural activity, and is therefore
susceptible to influence by non-neural changes in the body.
• BOLD signals are most strongly associated with the input to a given area rather than with
the output. It is therefore possible (although unlikely) that a BOLD signal could be present
in a given area even if there is no single-unit activity.[29]
• fMRI has poor temporal resolution. The BOLD response peaks approximately 5 seconds
after neuronal firing begins in an area. This means that it is hard to distinguish BOLD
responses to different events which occur within a short time window. Careful
experimental design can reduce this problem. Also, some research groups are attempting to combine fMRI signals, which have relatively high spatial resolution, with signals recorded with other techniques, such as electroencephalography (EEG) or magnetoencephalography (MEG), which have higher temporal resolution but worse spatial resolution.
• fMRI has often been used to show activation localized to specific regions, thus minimizing
the distributed nature of processing in neural networks. Several recent multivariate
statistical techniques work around this issue by characterizing interactions between
"active" regions found via traditional univariate techniques.

For these reasons, functional imaging provides insights into neural processing that are complementary to insights from other studies in neurophysiology.

Scanning in practice

Subjects participating in an fMRI experiment are asked to lie still and are usually restrained with
soft pads to prevent movement from disturbing measurements. Some labs also employ bite bars
to reduce motion, although these are unpopular as they can be uncomfortable. Small head
movements can be corrected for in post-processing of the data, but large transient motion cannot
be corrected. Motion in excess of around 3 millimeters results in unusable data. Motion is an issue for all populations, but especially problematic for subjects with certain medical conditions (e.g. Alzheimer's disease or schizophrenia) and for young children. Participants can be
habituated to the scanning environment and trained to remain still in an MRI simulator.
An fMRI experiment usually lasts between 15 minutes and an hour. Depending on the purpose of
study, subjects may view movies, hear sounds, smell odors, perform cognitive tasks such as n-
back, memorization or imagination, press a few buttons, or perform other tasks. Researchers are
required to give detailed instructions and descriptions of the experiment plan to each subject, who
must sign a consent form before the experiment.

Safety is an important issue in all experiments involving MRI. Potential subjects must ensure that
they are able to enter the MRI environment. The MRI scanner is built around an extremely strong
magnet (1.5 tesla or more), so potential subjects must be thoroughly examined for any
ferromagnetic objects (e.g. watches, glasses, hair pins, pacemakers, bone plates and screws,
etc.) before entering the scanning environment.

Related techniques

Aside from BOLD fMRI, there are other related ways to probe brain activity using magnetic
resonance properties:

Diffusion based functional MRI

Neuronal activity produces some immediate physical changes in cell shape that can be detected
because they affect the compartment shape and size for water diffusion. A much improved spatial
and temporal resolution for fMRI data collection has now been achieved by using diffusion MRI
methodology that can detect these changes in neurons.[30] The abrupt onset of increased neuron cell size occurs before the metabolic response commences, is shorter in duration and does not extend significantly beyond the area of the actual cell population involved.[31] This is a diffusion-weighted imaging (DWI) technique. There is some evidence that similar changes in axonal
volume in white matter may accompany activity and this has been observed using a DTI (diffusion
tensor imaging) technique.[32] The future importance of diffusion-based functional techniques
relative to BOLD techniques is not yet clear.

Diffusion MRI is a magnetic resonance imaging (MRI) method that produces in vivo images of
biological tissues weighted with the local microstructural characteristics of water diffusion. The
field of diffusion MRI can be understood in terms of two distinct classes of application—diffusion
weighted MRI and diffusion tensor MRI.

In diffusion weighted imaging (DWI), each image voxel (three dimensional pixel) has an image
intensity that reflects a single best measurement of the rate of water diffusion at that location.
This measurement is more sensitive to early changes after a stroke than more traditional MRI
measurements such as T1 or T2 relaxation rates. DWI is most applicable when the tissue of
interest is dominated by isotropic water movement e.g. grey matter in the cerebral cortex and
major brain nuclei—where the diffusion rate appears to be the same when measured along any
axis.

Diffusion tensor imaging (DTI) is important when a tissue—such as the neural axons of white
matter in the brain or muscle fibers in the heart—has an internal fibrous structure analogous to
the anisotropy of some crystals. Water will then diffuse more rapidly in the direction aligned with
the internal structure, and more slowly as it moves perpendicular to the preferred direction. This
also means that the measured rate of diffusion will differ depending on the direction from which
an observer is looking. In DTI, each voxel therefore has one or more pairs of parameters: a rate
of diffusion and a preferred direction of diffusion—described in terms of three dimensional space
—for which that parameter is valid. The properties of each voxel of a single DTI image are usually calculated by vector or tensor math from six or more different diffusion-weighted acquisitions,
each obtained with a different orientation of the diffusion sensitizing gradients. In some methods,
hundreds of measurements—each making up a complete image—are made to generate a single
resulting calculated image data set. The higher information content of a DTI voxel makes it
extremely sensitive to subtle pathology in the brain. In addition the directional information can be
exploited at a higher level of structure to select and follow neural tracts through the brain—a
process called tractography.[1][2]
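
A minimal sketch of the tensor arithmetic described above, under simplifying assumptions (a single noiseless voxel, six directions, one b-value): build the design matrix from the gradient directions, solve for the six unique tensor elements, and compute fractional anisotropy from the eigenvalues.

```python
# Minimal sketch of a diffusion-tensor fit for one synthetic, noiseless
# voxel: six gradient directions, one b-value, least-squares solution
# for the six unique tensor elements, then FA from the eigenvalues.
import numpy as np

b = 1000.0  # s/mm^2, a typical diffusion weighting
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
g /= np.linalg.norm(g, axis=1, keepdims=True)   # unit gradient directions

D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # fiber aligned with x
S0 = 1000.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Rows of the design matrix: [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz]
A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2 * g[:, 0] * g[:, 1],
                     2 * g[:, 0] * g[:, 2],
                     2 * g[:, 1] * g[:, 2]])
d = np.linalg.lstsq(A, -np.log(S / S0) / b, rcond=None)[0]
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

lam = np.linalg.eigvalsh(D)
fa = np.sqrt(1.5 * np.sum((lam - lam.mean())**2) / np.sum(lam**2))
print(f"FA = {fa:.2f}")   # ~0.80 for this synthetic fiber
```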

A more precise statement of the image acquisition process is that the image intensities at each position are attenuated, depending on the strength (b-value) and direction of the so-called
magnetic diffusion gradient, as well as on the local microstructure in which the water molecules
diffuse. The more attenuated the image is at a given position, the greater diffusion there is in the
direction of the diffusion gradient. In order to measure the tissue's complete diffusion profile, one
needs to repeat the MR scans, applying different directions (and possibly strengths) of the
diffusion gradient for each scan.

Traditionally, in diffusion-weighted imaging (DWI), three gradient directions are applied, sufficient to estimate the trace of the diffusion tensor or 'average diffusivity', a putative measure of edema.
Clinically, trace-weighted images have proven to be very useful to diagnose vascular strokes in
the brain, by early detection (within a couple of minutes) of the hypoxic edema.

More extended diffusion tensor imaging (DTI) scans derive neural tract directional information
from the data using 3D or multidimensional vector algorithms based on three, six, or more
gradient directions, sufficient to compute the diffusion tensor. The diffusion model is a rather
simple model of the diffusion process, assuming homogeneity and linearity of the diffusion within
each image voxel. From the diffusion tensor, diffusion anisotropy measures such as the fractional
anisotropy (FA), can be computed. Moreover, the principal direction of the diffusion tensor can be
used to infer the white-matter connectivity of the brain (i.e. tractography; trying to see which part
of the brain is connected to which other part).

Recently, more advanced models of the diffusion process have been proposed that aim to
overcome the weaknesses of the diffusion tensor model. Amongst others, these include q-space
imaging and generalized diffusion tensor imaging.

Diffusion-weighted imaging

Diffusion-weighted imaging is an MRI method that produces in vivo magnetic resonance images
of biological tissues weighted with the local characteristics of water diffusion.

DWI is a modification of regular MRI techniques, and is an approach which utilizes the
measurement of Brownian motion of molecules. Regular MRI acquisition utilizes the behaviour of
protons in water to generate contrast between clinically relevant features of a particular subject.
The versatile nature of MRI is due to this capability of producing contrast, called weighting. In a
typical T1-weighted image, water molecules in a sample are excited with the imposition of a
strong magnetic field. This causes many of the protons in water molecules to precess
simultaneously, producing signals in MRI. In T2-weighted images, contrast is produced by
measuring the loss of coherence or synchrony between the water protons. When water is in an
environment where it can freely tumble, relaxation tends to take longer. In certain clinical
situations, this can generate contrast between an area of pathology and the surrounding healthy
tissue.

In diffusion-weighted images, instead of a homogeneous magnetic field, the homogeneity is


varied linearly by a pulsed field gradient. Since precession is proportional to the magnet strength,
the protons begin to precess at different rates, resulting in dispersion of the phase and signal
loss. Another gradient pulse is applied in the same direction but with opposite magnitude to
refocus or rephase the spins. The refocusing will not be perfect for protons that have moved
during the time interval between the pulses, and the signal measured by the MRI machine is
reduced. This reduction in signal due to the application of the pulse gradient can be related to the
amount of diffusion that is occurring through the following equation:

S = S0 · exp(−γ²G²δ²(Δ − δ/3)D)

where S0 is the signal intensity without the diffusion weighting, S is the signal with the gradient, γ is the gyromagnetic ratio, G is the strength of the gradient pulse, δ is the duration of the pulse, Δ is the time between the two pulses, and D is the diffusion coefficient.

By rearranging the formula to isolate the diffusion coefficient, it is possible to obtain an idea of the properties of diffusion occurring within a particular voxel (volume picture element). These values, called the apparent diffusion coefficient (ADC), can then be mapped as an image, using diffusion as the contrast.
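
A small worked example of this rearrangement (illustrative parameter values, not from the source): compute the b-value from the gradient parameters, then invert the signal equation to obtain an ADC.

```python
# Worked example of the signal equation above (illustrative values):
# b-value from the gradient parameters, then ADC by inverting S = S0*exp(-b*D).
import math

gamma_p = 2.675e8   # proton gyromagnetic ratio, rad/(s*T)
G = 30e-3           # gradient strength, T/m
delta = 20e-3       # gradient pulse duration, s
Delta = 40e-3       # time between the two pulses, s

b = (gamma_p * G * delta) ** 2 * (Delta - delta / 3.0)  # s/m^2
b_mm = b * 1e-6                                         # s/mm^2

S0, S = 1000.0, 350.0            # unweighted vs diffusion-weighted signal
adc = -math.log(S / S0) / b_mm   # mm^2/s
print(f"b = {b_mm:.0f} s/mm^2, ADC = {adc:.2e} mm^2/s")  # ~1.2e-3, brain-like
```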

The first successful clinical application of DWI was in imaging the brain following stroke in adults.
Areas which were injured during a stroke showed up "darker" on an ADC map compared to
healthy tissue. At about the same time as it became evident to researchers that DWI could be
used to assess the severity of injury in adult stroke patients, they also noticed that ADC values
varied depending on which way the pulse gradient was applied. This orientation-dependent
contrast is generated by diffusion anisotropy, meaning that the diffusion in parts of the brain has
directionality. This may be useful for determining structures in the brain which could restrict the
flow of water in one direction, such as the myelinated axons of nerve cells (which are affected by
multiple sclerosis). However, in imaging the brain following a stroke, it may actually prevent the
injury from being seen. To compensate for this, it is necessary to apply a mathematical operator,
called a tensor, to fully characterize the motion of water in all directions.

Diffusion-weighted images are very useful for diagnosing vascular strokes in the brain. DWI is also used increasingly in the staging of non-small-cell lung cancer, where it is a serious candidate to replace positron emission tomography as the 'gold standard' for this type of disease. Diffusion
tensor imaging is being developed for studying the diseases of the white matter of the brain as
well as for studies of other body tissues (see below).

Diffusion tensor imaging

Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI) technique that enables the
measurement of the restricted diffusion of water in tissue in order to produce neural tract images
instead of using this data solely for the purpose of assigning contrast or colors to pixels in a cross
sectional image. It also provides useful structural information about muscle—including heart
muscle, as well as other tissues such as the prostate.[4]

History

In 1990, Michael Moseley reported that water diffusion in white matter was anisotropic—the effect
of diffusion on proton relaxation varied depending on the orientation of tracts relative to the
orientation of the diffusion gradient applied by the imaging scanner. He also pointed out that this
should best be described by a tensor.[5] Aaron Filler and colleagues reported in 1991 on the use
of MRI for tract tracing in the brain using a contrast agent method but pointed out that Moseley's
report on polarized water diffusion along nerves would affect the development of tract tracing. [6] A
few months after submitting that report, in 1991, the first successful use of diffusion anisotropy
data to carry out the tracing of neural tracts curving through the brain without contrast agents was
accomplished.[1][7][8] Filler and colleagues identified both vector and tensor based methods in the
patents in July 1992,[8] before any other group, but the data for these initial images was obtained
using sets of vector formulas that provide the Euler angles and magnitude for the principal axis of diffusion in a voxel, accurately modeling the axonal directions that cause the restrictions to the direction of diffusion.

The use of mixed contributions from gradients in the three primary orthogonal axes in order to
generate an infinite number of differently oriented gradients for tensor analysis was also identified
in 1992 as the basis for accomplishing tensor descriptions of water diffusion in MRI voxels. [9][10][11]
Both vector and tensor methods provide a "rotationally invariant" measurement—the magnitude
will be the same no matter how the tract is oriented relative to the gradient axes—and both
provide a three dimensional direction in space, however the tensor method is more efficient and
accurate for carrying out tractography.[1] Practically, this class of calculated image places heavy
demands on image registration—all of the images collected should ideally be identically shaped
and positioned so that the calculated composite image will be correct. In the original FORTRAN
program written on a Macintosh computer by Todd Richards in late 1991, all of the tasks of image registration and normalized anisotropy assessment (stated as a fraction of 1 and corrected for a "B0" (non-diffusion) basis), as well as calculation of the Euler angles, image generation and tract tracing, were simplified by initial development with vectors (three diffusion images plus one non-diffusion image) as opposed to the six or more required for a full 2nd-rank tensor analysis.

The use of electromagnetic data acquisitions from six or more directions to construct a tensor
ellipsoid was known from other fields at the time, [12] as was the use of the tensor ellipsoid to
describe diffusion.[13][14] The inventive step of DTI therefore involved two aspects:

1. the application of known methods from other fields for the generation of MRI tensor data;
and
2. the usable introduction of a three dimensional selective neural tract "vector graphic"
concept operating at a macroscopic level above the scale of the image voxel, in a field
where two dimensional pixel imaging (bit mapped graphics) had been the only method
used since MRI was originated.

Single Voxel Spectroscopy

In vivo proton MRS offers the unique opportunity of monitoring human brain metabolism in a non-
invasive manner. Specific brain metabolites, such as glutathione (GSH) or γ-amino butyric acid
(GABA) are of high interest in brain research, as they are involved in a wide range of neurological
and psychiatric diseases, such as epilepsy and schizophrenia.
Conventional 1H spectra of human brain tissue contain signals arising from approximately twenty
low molecular weight metabolites. However, only relatively few of them may be identified
unambiguously or even quantified. One approach to extracting information from overcrowded
spectra is spectral editing, which filters out undesired signal contributions, leading to a tailored
information content of the spectra. Our group has traditionally been active in the development of
spectral editing techniques. Recently, a novel spectral editing acquisition scheme was designed
that detects GSH in human brain tissue with high sensitivity and 3D localization. Using our
technique implemented on a clinical 1.5 Tesla scanner, GSH levels in the brain of schizophrenic
and epileptic patients were quantified and shown to be significantly reduced compared to healthy
volunteers. Applying a different spectral editing technique, the levels of GABA were monitored in
epileptic patients and healthy volunteers to characterize the response of patients to a novel anti-
epileptic drug.
The detection of small changes in the metabolite levels remains a challenge, mainly due to
sensitivity issues. In this respect, the use of higher field strength is a promising approach. Initial
experiments performed on our 3 Tesla unit showed potential, and the small GSH concentration in
healthy brain was readily detectable. The use of higher field strength is also promising for
detecting GABA due to improved sensitivity and favorable spin evolution.
Spectral editing is applicable to a wide variety of further metabolites, e.g. lactate, glutamate, and
potentially taurine. Combined with the advantages of high magnetic fields, further research in this
field seems to hold great promise for numerous applications from basic brain research to drug
design.

In SVS, the signal is received from a volume limited to a single voxel. This acquisition is fairly fast (1
to 3 minutes) and a spectrum is easily obtained. It is performed in three steps:

• Suppression of the water signal: the quantity of hydrogen nuclei in the water molecules in
the human body is such that the water peak at 4.7 ppm “drowns” and masks the
spectroscopic signal from the other metabolites. It is therefore vital to suppress the water
peak to observe the metabolites of interest.
• Selection of the voxel of interest
• Acquisition of the spectrum, for which two types of sequence are available (PRESS:
Point-RESolved Spectroscopy, STEAM: STimulated Echo Acquisition Mode)
Water signal suppression

The most commonly used method to suppress the water peak is CHESS (CHEmical Shift Selective). CHESS consists of applying three pairs of (90° RF pulse + dephasing gradient), one in each spatial direction. The bandwidth of these RF pulses is narrow and centered on the resonance frequency of the water peak, in order to saturate the water signal and preserve the signal from the other metabolites.

Techniques applying a 180° inversion pulse with adapted TI, like those used in FLAIR and STIR
sequences, can also be used to eliminate the water signal (WEFT: Water Elimination Fourier
Transform) or suppress the fat signal in breast spectroscopy, for example. In practice, CHESS is
more commonly used than WEFT.
Principles of volume selection

The analyzed volume is selected by a succession of three selective radiofrequency pulses (accompanied by gradients) in the three directions of space. These pulses determine three
orthogonal planes whose intersection corresponds to the volume studied. Only the signal of this
voxel will be recorded, by selecting only the echo resulting from the series of three radiofrequency
pulses.
PRESS and STEAM sequences

Acquisition of the signal from the selected voxel can be performed using two different types of
sequence (figure 15.6):

STEAM (Stimulated Echo Acquisition Mode)


The three voxel-selection RF pulses have flip angles of 90°. The stimulated echo is recorded from the cumulated effect of the three pulses, thus corresponding to the signal from the voxel of interest only (cf. chapter 6.7.2, Hahn echo and stimulated echo). The TE of the stimulated echo corresponds to double the time interval between the first two pulses. The delay between the second and third RF pulses is the mixing time TM. This technique is particularly adapted to short-TE spectral acquisitions.

PRESS (Point RESolved Spectroscopy)

In the PRESS method, the RF pulses have flip angles of 90° - 180° - 180°. The signal emitted by the voxel of interest is thus a spin echo. The amplitude of this spin echo is twice that of the stimulated echo obtained by STEAM, so the PRESS technique offers a better signal-to-noise ratio than STEAM. It can be used with short TE (15–20 ms) or long TE (135–270 ms).

Whatever the case, spectroscopic signal recording does not use a frequency-encoding readout gradient, as frequency is used to constitute the spectrum (rather than to encode position) after Fourier transform of the signal.
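
As a toy illustration of that last point, the sketch below synthesizes a free-induction decay containing two damped resonances standing in for metabolite signals and Fourier-transforms it into a spectrum; the frequencies, amplitudes and decay constant are arbitrary choices, not values from the text.

```python
# Toy illustration: a free-induction decay with two damped resonances
# (stand-ins for metabolite signals) becomes a spectrum after an FFT.
# Frequencies, amplitudes and the decay constant are arbitrary.
import numpy as np
from scipy.signal import find_peaks

fs = 2000.0                           # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)     # 1 s acquisition
fid = (np.exp(2j * np.pi * 120 * t) +
       0.5 * np.exp(2j * np.pi * 260 * t)) * np.exp(-t / 0.1)

spectrum = np.abs(np.fft.fft(fid))
freqs = np.fft.fftfreq(len(t), 1.0 / fs)

peaks, _ = find_peaks(spectrum[:len(t) // 2], height=50.0, distance=20)
print("peaks near:", freqs[peaks], "Hz")   # ~[120. 260.]
```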

Contrast MR

An injected contrast agent, such as an iron oxide coated with a sugar or starch (to hide it from the body's defense system), causes a local disturbance in the magnetic field that is measurable by the MRI scanner. The signals associated with these kinds of contrast agents are
proportional to the cerebral blood volume. While this semi-invasive method presents a
considerable disadvantage in terms of studying brain function in normal subjects, it enables far
greater detection sensitivity than BOLD signal, which may increase the viability of fMRI in clinical
populations. Other methods of investigating blood volume that do not require an injection are a
subject of current research, although no alternative technique in theory can match the high
sensitivity provided by injection of contrast agent.

Arterial spin labeling

By magnetically labeling the proximal blood supply using "arterial spin labeling" (ASL), the associated signal becomes proportional to cerebral blood flow, or perfusion. This method provides
more quantitative physiological information than BOLD signal, and has the same sensitivity for
detecting task-induced changes in local brain function.[citation needed]

Magnetic resonance spectroscopic imaging

Magnetic resonance spectroscopic imaging (MRS) is another NMR-based process for assessing function within the living brain. MRS takes advantage of the fact that protons (hydrogen atoms) residing in different chemical environments, depending upon the molecule they inhabit (H2O vs. protein, for example), possess slightly different resonant properties (chemical shift). For a given
volume of brain (typically > 1 cubic cm), the distribution of these H resonances can be displayed
as a spectrum.

The area under the peak for each resonance provides a quantitative measure of the relative
abundance of that compound. The largest peak is composed of H2O. However, there are also
discernible peaks for choline, creatine, N-acetylaspartate (NAA) and lactate. Fortuitously, NAA is
mostly inactive within the neuron, serving as a precursor to glutamate and as storage for acetyl
groups (to be used in fatty acid synthesis) — but its relative levels are a reasonable
approximation of neuronal integrity and functional status. Brain diseases (schizophrenia, stroke,
certain tumors, multiple sclerosis) can be characterized by the regional alteration in NAA levels
when compared to healthy subjects. Creatine is used as a relative control value since its levels
remain fairly constant, while choline and lactate levels have been used to evaluate brain tumors.

Perfusion is defined as the passage of fluid through the lymphatic system or blood vessels to an organ or a tissue.[1] The practice of perfusion scanning is the process by which this perfusion can
be observed, recorded and quantified. The term perfusion scanning encompasses a wide range
of medical imaging modalities.

Applications

Being able to observe and quantify perfusion in the human body has been an invaluable step
forward in medicine. With the ability to ascertain data on the blood flow to vital organs such as the heart and the brain, doctors are able to make quicker and more accurate treatment choices for patients. Nuclear medicine has been leading perfusion scanning for some time, although the modality has certain pitfalls. It is often dubbed 'unclear medicine', as the scans produced may appear to the untrained eye as just fluffy and irregular patterns. More recent developments in CT and MRI have meant clearer images and solid data, such as graphs depicting blood flow and blood volume charted over a fixed period of time.

Methods

• CT
• MRI
• Nuclear medicine or NM

CT Perfusion

The measurement of perfusion to an organ by CT is still a relatively new concept. It is most commonly carried out on a specific area of the brain to ascertain information on blood flow following a stroke or intracranial hemorrhage. CT perfusion was pioneered by Dr. Ting-Yim Lee,
a scientist at both the Lawson Health Research Institute and Robarts Research Institute in
London, Ontario, Canada.[2]

MR Perfusion

There are different techniques for detecting perfusion parameters with MRI, the most common being dynamic susceptibility contrast imaging (DSC-MRI) and arterial spin labelling (ASL). In DSC-MRI, a gadolinium contrast agent is injected and a time series of fast T2*-weighted images is acquired. As the gadolinium passes through the tissues, it produces a reduction of T2* intensity depending on the local concentration. The acquired data are then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
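
A minimal sketch of the DSC post-processing chain described above, on a synthetic bolus: the T2* signal drop is converted to a concentration-time curve, from which simple per-voxel parameters follow. Scaling constants and the deconvolution needed for true BF and MTT maps are omitted, so this is illustrative only.

```python
# Sketch of DSC-MRI post-processing on a synthetic bolus (scaling
# constants and the deconvolution needed for true BF/MTT are omitted).
import numpy as np

TE = 0.030                                             # echo time, s
t = np.arange(0.0, 60.0, 1.0)                          # one image per second
S0 = 100.0
bolus = 40.0 * np.exp(-0.5 * ((t - 20.0) / 4.0) ** 2)  # agent concentration
S = S0 * np.exp(-TE * bolus)                           # T2* signal dips with agent

C = -np.log(S / S0) / TE          # relative concentration-time curve
bv = C.sum() * (t[1] - t[0])      # BV ~ area under the curve
ttp = t[np.argmax(C)]             # time to peak
print(f"BV ~ {bv:.0f} a.u., TTP = {ttp:.0f} s")
```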

NM Perfusion

Nuclear medicine uses radioactive isotopes for the diagnosis and treatment of patients. Whereas
radiology provides data mostly on structure, nuclear medicine provides complementary
information about function.[3] All nuclear medicine scans give the referring clinician information on the function of the system they are imaging. However, there are only a few specific
scans which are dedicated to looking at only one organ.

• VQ Scans
• SPECT

VQ Scans

A ventilation/perfusion scan, sometimes called a VQ (V = ventilation, Q = perfusion) scan, is a way of identifying mismatched areas of blood and air supply to the lungs. It is primarily used to detect a pulmonary embolus.

The perfusion part of the study uses a radioisotope tagged to the blood, which shows where in the lungs the blood is perfusing. If the scan shows an area missing a supply, there is a blockage that is not allowing the blood to perfuse that part of the organ.

Myocardial perfusion imaging

Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis
of ischemic heart disease. The underlying principle is that under conditions of stress, diseased
myocardium receives less blood flow than normal myocardium. MPI is one of several types of
cardiac stress test.

A cardiac-specific radiopharmaceutical is administered, e.g. 99mTc-tetrofosmin (Myoview, GE Healthcare) or 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb, now Lantheus Medical Imaging). Following this, the heart rate is raised to induce myocardial stress, either by exercise or pharmacologically with adenosine, dobutamine or dipyridamole (aminophylline can be used to reverse the effects of dipyridamole).

SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and
therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by
comparing stress images to a further set of images obtained at rest. As the radionuclide
redistributes slowly, it is not usually possible to perform both sets of images on the same day,
hence a second attendance is required 1–7 days later (although, with a Tl-201 myocardial perfusion study with dipyridamole, rest images can be acquired as little as two hours post-stress).
However, if stress imaging is normal, it is unnecessary to perform rest imaging, as it too will be
normal – thus stress imaging is normally performed first.

MPI has been demonstrated to have an overall accuracy of about 83% (sensitivity: 85%; specificity: 72%)[1], and is comparable to (or better than) other non-invasive tests for ischemic heart disease, including stress echocardiography.

Dental radiographs, commonly referred to as X-ray films, or informally, X-rays, are pictures of
the teeth, bones, and surrounding soft tissues to screen for and help identify problems with the
teeth, mouth, and jaw. X-ray pictures can show cavities, cancerous or benign masses, hidden
dental structures (such as wisdom teeth), and bone loss that cannot be seen during a visual
examination. Dental X-rays may also be done as follow-up after dental treatments.

A radiographic image is formed by a controlled burst of X-ray radiation which penetrates oral
structures at different levels, depending on varying anatomical densities, before striking the film or
sensor. Teeth appear lighter because less radiation penetrates them to reach the film. Dental
caries, tooth decay, infections and other changes in the bone density, and the periodontal
ligament, appear darker because X-rays readily penetrate these less dense structures. Dental
restorations (fillings, crowns) may appear lighter or darker, depending on the density of the
material.

The dosage of X-ray radiation received by a dental patient is typically small, equivalent to a few days' worth of background environmental radiation exposure, or similar to the dose received during a cross-country airplane flight. Incidental exposure is further reduced by the use of a lead shield or lead apron, sometimes with a lead thyroid collar. Technician exposure is reduced
by stepping out of the room, or behind adequate shielding material, when the X-ray source is
activated.

Once photographic film has been exposed to X-ray radiation, it needs to be developed,
traditionally using a process where the film is exposed to a series of chemicals in a dark room, as
the films are sensitive to normal light. This can be a time-consuming process, and incorrect
exposures or mistakes in the development process can necessitate retakes, exposing the patient
to additional radiation. Digital x-rays, which replace the film with an electronic sensor, address
some of these issues, and are becoming widely used in dentistry as the technology evolves. They
may require less radiation and are processed much more quickly than conventional radiographic films, often being instantly viewable on a computer. However, digital sensors are extremely costly and have historically had poor resolution, though this is much improved in modern sensors.

It is possible for both tooth decay and periodontal disease to be missed during a clinical exam,
and radiographic evaluation of the dental and periodontal tissues is a critical segment of the
comprehensive oral examination. The photographic montage at right depicts a situation in which
extensive decay had been overlooked by a number of dentists prior to radiographic evaluation of
the area.

1. Intraoral radiographic views

Placing the radiographic film or sensor inside the mouth produces an intraoral radiographic view.

1. 1. Periapical view

The periapical view is taken of both anterior and posterior teeth. The objective of this type of view
is to capture the tip of the root on the film. This is often helpful in determining the cause of pain in
a specific tooth, because it allows a dentist to visualize the tooth as well as the surrounding bone
in their entirety. This view is often used to determine the need for endodontic therapy as well as
to visualize the successful progression of endodontic therapy once it is initiated. It can also be used to detect supernumerary and impacted teeth.

The name periapical is derived from the Greek peri, which means "around," and apical, which
means "tip."

1. 2. Bitewing view

The bitewing view is taken to visualize the crowns of the posterior teeth and the height of the
alveolar bone in relation to the cementoenamel junctions, which are the demarcation lines on the
teeth which separate the tooth crown from the tooth root. When there is extensive bone loss, the films may be oriented with their longer dimension on the vertical axis so as to better visualize bone levels in relation to the teeth. Because bitewing views are taken from a more or less
perpendicular angle to the buccal surface of the teeth, they more accurately exhibit the bone
levels than do periapical views. Bitewings of the anterior teeth are not taken.
The name bitewing refers to a little tab of paper or plastic situated in the center of the X-ray film,
which when bitten on, allows the film to hover so that it captures an even amount of maxillary and
mandibular information.

1. 3. Occlusal view

The occlusal view is indicated when there is a desire to reveal the skeletal or pathologic anatomy
of either the floor of the mouth or the palate. The occlusal film, which is about three to four times
the size of the film used to take a periapical or bitewing, is inserted into the mouth so as to
entirely separate the maxillary and mandibular teeth, and the film is exposed either from under
the chin or angled down from the top of the nose. Sometimes, it is placed in the inside of the
cheek to confirm the presence of a sialolith in Stenson's duct, which carries saliva from the
parotid gland. The occlusal view is not included in the standard full mouth series.

1. 4. Full mouth series

A full mouth series is a complete set of intraoral X-rays taken of a patient's teeth and adjacent hard tissue.[1] This is often abbreviated as either FMS or FMX. The full mouth series is composed
of 18 films:

• four bitewings
o two molar bitewings (left and right)
o two premolar bitewings (left and right)
• eight posterior periapicals
o two maxillary molar periapicals (left and right)
o two maxillary premolar periapicals (left and right)
o two mandibular molar periapicals (left and right)
o two mandibular premolar periapicals (left and right)
• six anterior periapicals
o two maxillary canine-lateral incisor periapicals (left and right)
o two mandibular canine-lateral incisor periapicals (left and right)
o two central incisor periapicals (maxillary and mandibular)

2. Extraoral radiographic views

Placing the radiographic film or sensor outside the mouth, on the opposite side of the head from
the X-ray source, produces an extra-oral radiographic view.

A lateral cephalogram is used to evaluate dentofacial proportions and clarify the anatomic basis
for a malocclusion, and an antero-posterior radiograph provides a face-forward view.

2. 1. Panoramic films

Panoramic films are extraoral films, in which the film is exposed while outside the patient's mouth, and they were developed by the United States Army as a quick way to get an overall view of a soldier's oral health. Exposing eighteen films per soldier was very time-consuming, and it was felt that a single panoramic film could speed up the process of examining and assessing the dental health of the soldiers; soldiers with toothaches are not very effective. It was later discovered that while panoramic films can prove very useful in detecting and localizing mandibular fractures and other pathologic entities of the mandible, they are not very good at assessing periodontal bone loss or tooth decay.[2]
3. Computed Tomography

There is increasing use of CT (computed tomography) scans in dentistry, particularly to plan dental implants; there may be significant levels of radiation and potential risk. Specially designed CBCT (cone beam CT) scanners can be used instead, which produce adequate imaging with a tenfold reduction in radiation (Dr. Bernard Friedland, Harvard School of Dental Medicine, 1 Dec 2007).

Virtual colonoscopy (VC) is a medical imaging procedure which uses x-rays and computers to
produce two- and three-dimensional images of the colon (large intestine) from the lowest part, the
rectum, all the way to the lower end of the small intestine and display them on a screen.[1] The
procedure is used to diagnose colon and bowel disease, including polyps, diverticulosis and
cancer. VC is performed via computed tomography (CT), sometimes called a CAT scan, or with
magnetic resonance imaging (MRI).[2]

Not to be confused with a similar procedure called a CT Pneumocolon, a virtual colonoscopy can
provide 3D reconstructed endoluminal views of the bowel.

Procedure

While preparations for VC vary, the patient will usually be asked to take laxatives or other oral
agents at home the day before the procedure to clear stool from the colon. A suppository is also
used to cleanse the rectum of any remaining fecal matter. The patient is also given a solution
designed to coat any residual faeces which may not have been cleared by the laxative. This is called 'faecal tagging'. It allows the viewer (usually a consultant radiologist) examining the 3D images to digitally subtract the leftover faeces, which might otherwise give false-positive results.

VC takes place in the radiology department of a hospital or medical center. The examination
takes about 10 minutes and does not require sedatives.

During the procedure:

• The patient is placed in a supine position on the examination table.
• A thin tube is inserted into the rectum, so that air can be pumped through the tube to inflate the colon for better viewing.
• The table moves through the scanner to produce a series of two-dimensional cross-
sections along the length of the colon. A computer program puts these images together
to create a three-dimensional picture that can be viewed on the video screen.
• The patient is asked to hold his/her breath during the scan to avoid distortion on the
images.
• The scan is then repeated with the patient lying in a prone position.

After the examination, the images produced by the scanner must be processed into a 3D image, with or without a fly-through (a cine program that allows the user to move through the bowel as if performing a conventional colonoscopy). A radiologist evaluates the results to identify any abnormalities.

The patient may resume normal activity after the procedure, but if abnormalities are found and the patient
needs conventional colonoscopy, it may be performed the same day.[3]

Advantages

VC is more comfortable than conventional colonoscopy for some people because it does not use
a colonoscope. As a result, no sedation is needed, and the patient can return to his/her usual
activities or go home after the procedure without the aid of another person. VC provides clearer,
more detailed images than a conventional x-ray using a barium enema, sometimes called a lower
gastrointestinal (GI) series. Further, about 1 in 10 patients will not have a complete evaluation of the right colon (cecum) with conventional colonoscopy.[4] It also takes less time than either
a conventional colonoscopy or a lower GI series.[5]

VC provides a secondary benefit of revealing diseases or abnormalities outside the colon.[6]

Disadvantages

According to an article on niddk.nih.gov, the main disadvantage of VC is that a radiologist cannot take tissue samples (biopsy) or remove polyps during VC, so a conventional colonoscopy must be performed if abnormalities are found. Also, VC does not show as much detail as a conventional colonoscopy, so polyps smaller than 2 to 10 millimeters in diameter may not show up on the images.[7] Furthermore, virtual colonoscopy performed with CT exposes the patient to ionizing radiation; however, some research has demonstrated that ultra-low-dose VC can be just as effective in demonstrating colon and bowel disease, owing to the great difference in x-ray absorption between air and the tissue comprising the inner wall of the colon.

Optical colonoscopy is taken as the "gold standard" for colorectal cancer screening by the vast majority of the medical and research communities. Some radiologists recommend VC as a preferred approach to colorectal screening; however, optical colonoscopy is considered the gold standard by many professionals because it permits complete visualization of the entire colon, providing the opportunity to identify precancerous polyps and cancer and then to perform diagnostic biopsies or therapeutic removal of these lesions as soon as possible.

Osteo CT: Osteoporosis is often diagnosed by measuring a patient's bone mineral density (BMD),
the amount of calcium in regions of the bones. Most methods for measuring BMD (also called
bone densitometry) are fast, non-invasive, painless and available on an outpatient basis. Bone
densitometry can also be used to estimate a patient's risk of fracture. BMD methods involve taking
dual-energy x-ray (DEXA) or CT scans (Osteo CT or QCT) of bones in the spinal column, wrist,
arm or leg. These methods compare the numerical density of the bone (calculated from the image)
with empirical (historical) databases of bone density to determine whether a patient has
osteoporosis and, often, to what degree.
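
In practice, this comparison with a reference database is conventionally expressed as a T-score: the number of standard deviations by which the patient's BMD differs from the mean of a healthy young-adult reference population. The sketch below uses the standard WHO T-score cut-offs, but the numeric reference values are invented placeholders, not clinical data:

```python
def t_score(patient_bmd: float, ref_mean: float, ref_sd: float) -> float:
    """Standard deviations between the patient's BMD and the young-adult reference mean."""
    return (patient_bmd - ref_mean) / ref_sd

def classify(t: float) -> str:
    """WHO-style diagnostic categories based on the T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Hypothetical lumbar-spine BMD values in g/cm^2 (placeholders, not clinical data).
t = t_score(patient_bmd=0.85, ref_mean=1.05, ref_sd=0.12)
print(f"T-score = {t:.2f} -> {classify(t)}")  # T-score = -1.67 -> osteopenia
```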

Densitometry is the quantitative measurement of optical density in light-sensitive materials, such
as photographic paper or film, due to exposure to light. Optical density is a measure of the darkness
of a developed picture; it can be expressed absolutely as the number of dark spots (i.e., developed
silver grains in a film) in a given area, but it is usually a relative value expressed on a scale.

Since density is usually measured by the decrease in the amount of light that passes through a
transparent film, the technique is also called absorptiometry, the measurement of light absorption
through the medium. The corresponding measuring device is called a densitometer
(absorptiometer). The logarithm of the reciprocal of the transmittance is called the absorbance or
density.[1]

DMax and DMin refer to the maximum and minimum density that can be recorded on the
material; the difference between the two is the density range.[1] The density range is related to
the exposure range (dynamic range), the range of light intensity represented by the recording, via
the Hurter–Driffield curve. The dynamic range can be measured in "stops", the binary logarithm
of the ratio of the highest and lowest distinguishable exposures.
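
As a minimal numeric sketch of these definitions (the sample transmittance values below are made up, and a straight-line H–D curve is assumed so that the density range maps directly onto the exposure range):

```python
import math

def density(transmittance: float) -> float:
    """Optical density D = log10(1/T), where T is the transmitted fraction of light."""
    return math.log10(1.0 / transmittance)

# Hypothetical transmittances of the lightest and darkest usable areas of a film.
t_lightest = 0.80   # lightest area passes 80% of the light  -> DMin
t_darkest = 0.001   # darkest area passes 0.1% of the light  -> DMax

d_min = density(t_lightest)        # ~0.10
d_max = density(t_darkest)         # 3.00
density_range = d_max - d_min      # ~2.90

# With a gamma = 1 H-D curve assumed, the exposure range in "stops" is the
# binary logarithm of the ratio of highest to lowest distinguishable exposure.
stops = math.log2(t_lightest / t_darkest)  # ~9.6 stops

print(f"DMin={d_min:.2f} DMax={d_max:.2f} range={density_range:.2f} stops={stops:.1f}")
```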
Uses

Depending on the densitometer's principle of operation, one can have:

• spot densitometry: the value of light absorption is measured at a single spot
• line densitometry: the values of successive spots along one dimension are expressed as a graph
• bidimensional densitometry: the values of light absorption are expressed as a 2D synthetic image, usually using false-color shading
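
These three modes amount to sampling the same density map at different dimensionalities. A sketch, assuming the film has been digitized into a 2D array of transmittance values (NumPy used for convenience; the numbers are invented):

```python
import numpy as np

def density(t):
    """Optical density D = log10(1/T) for a scalar or array of transmittance values."""
    return np.log10(1.0 / np.asarray(t, dtype=float))

# Hypothetical 4x5 patch of transmittance values from a digitized film.
film = np.array([
    [0.80, 0.60, 0.40, 0.60, 0.80],
    [0.60, 0.30, 0.10, 0.30, 0.60],
    [0.60, 0.30, 0.10, 0.30, 0.60],
    [0.80, 0.60, 0.40, 0.60, 0.80],
])

spot = density(film[1, 2])    # spot densitometry: one value at a single location
line = density(film[1, :])    # line densitometry: a 1D profile, shown as a graph
image = density(film)         # bidimensional densitometry: a 2D map, shown in false color

print(spot, line, image, sep="\n")
```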

Dual-energy X-ray absorptiometry is used in medicine to evaluate calcium bone density, which is
altered in several diseases such as osteopenia and osteoporosis. Special devices called bone
densitometers have been developed for this purpose and are in routine clinical use.

Photometry is the measurement of light's brightness, or luminous intensity. Photometry
frequently concerns brightness as perceived by the human eye: it takes into account the eye's
varying sensitivity to different wavelengths and focuses primarily on the visible light spectrum.

In astrophysics and cosmology, photometry is the term used for the study of light emitted by a star
or other astronomical object, and this brightness is generally defined in absolute terms. Because
astronomical objects frequently emit electromagnetic radiation at non-visible wavelengths, the
more precise term for such measurements is radiometry.

Lens (optics)
Lenses can be used to focus light.

A lens is an optical device with perfect or approximate axial symmetry which transmits and
refracts light, converging or diverging the beam. A simple lens consists of a single optical
element. A compound lens is an array of simple lenses (elements) with a common axis; the use
of multiple elements allows more optical aberrations to be corrected than is possible with a single
element. Lenses are typically made of glass or transparent plastic. Elements which refract
electromagnetic radiation outside the visual spectrum are also called lenses: for instance, a
microwave lens can be made from paraffin wax.

Construction of simple lenses

Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres, with the
lens axis ideally perpendicular to both surfaces. Each surface can be convex (bulging outwards
from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of
the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis
passes through the physical centre of the lens, because of the way lenses are manufactured.
Lenses may be cut or ground after manufacturing to give them a different shape or size; the lens
axis may then no longer pass through the physical centre of the lens.
Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two
orthogonal planes. They have a different focal power in different meridians. This is a form of
deliberate astigmatism.

More complex are aspheric lenses. These are lenses where one or both surfaces have a shape
that is neither spherical nor cylindrical. Such lenses can produce images with much less
aberration than standard simple lenses.

Types of simple lenses

Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double
convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of
curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just
concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on
the curvature of the other surface. A lens with one convex and one concave side is convex-
concave or meniscus. It is this type of lens that is most commonly used in corrective lenses.

If the lens is biconvex or plano-convex, a collimated or parallel beam of light travelling parallel to
the lens axis and passing through the lens will be converged (or focused) to a spot on the axis, at
a certain distance behind the lens (known as the focal length). In this case, the lens is called a
positive or converging lens.

If the lens is biconcave or plano-concave, a collimated beam of light passing through the lens is
diverged (spread); the lens is thus called a negative or diverging lens. The beam after passing
through the lens appears to be emanating from a particular point on the axis in front of the lens;
the distance from this point to the lens is also known as the focal length, although it is negative
with respect to the focal length of a converging lens.
Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative
curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface and will
be thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper
convex surface and will be thicker at the centre than at the periphery. An ideal thin lens with two
surfaces of equal curvature would have zero optical power, meaning that it would neither
converge nor diverge light. All real lenses have a nonzero thickness, however, which affects the
optical power. To obtain exactly zero optical power, a meniscus lens must have slightly unequal
curvatures to account for the effect of the lens' thickness.

Lensmaker's equation

The focal length of a lens in air can be calculated from the lensmaker's equation:[10]

\[ \frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n R_1 R_2}\right] \]

where f is the focal length of the lens, n is the refractive index of the lens material,

R1 is the radius of curvature of the lens surface closest to the light source,
R2 is the radius of curvature of the lens surface farthest from the light source, and
d is the thickness of the lens (the distance along the lens axis between the two surface
vertices).
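
A minimal numeric sketch of this formula (the lens parameters below are illustrative, not taken from the text):

```python
def lensmaker_focal_length(n: float, r1: float, r2: float, d: float) -> float:
    """Focal length of a lens in air via the lensmaker's equation.

    Sign convention as in the text: a radius is positive when the surface is
    convex toward the incoming light, negative when concave; use
    float('inf') for a flat surface.
    """
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))
    return 1.0 / inv_f

# Illustrative equiconvex lens: n = 1.5, |R1| = |R2| = 100 mm, 5 mm thick.
f = lensmaker_focal_length(n=1.5, r1=100.0, r2=-100.0, d=5.0)
print(f"f = {f:.1f} mm")  # ~100.8 mm; exactly 100 mm in the thin-lens (d = 0) limit
```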

Sign convention of lens radii R1 and R2

The signs of the lens' radii of curvature indicate whether the corresponding surfaces are convex
or concave. The sign convention used to represent this varies, but in this article if R1 is positive
the first surface is convex, and if R1 is negative the surface is concave. The signs are reversed for
the back surface of the lens: if R2 is positive the surface is concave, and if R2 is negative the
surface is convex. If either radius is infinite, the corresponding surface is flat. With this convention
the signs are determined by the shapes of the lens surfaces, and are independent of the direction
in which light travels through the lens.

Thin lens equation

If d is small compared to R1 and R2, then the thin lens approximation can be made. For a lens in
air, f is then given by

\[ \frac{1}{f} \approx (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2}\right] \][11]

The focal length f is positive for converging lenses and negative for diverging lenses. The
reciprocal of the focal length, 1/f, is the optical power of the lens. If the focal length is in metres,
this gives the optical power in dioptres (inverse metres); for example, a converging lens with
f = 0.5 m has a power of +2 dioptres.

Lenses have the same focal length when light travels from the back to the front as when light
goes from the front to the back, although other properties of the lens, such as its aberrations, are
not necessarily the same in both directions.

Imaging properties

[Figure: a lamp with three visible reflections and one visible projection; two of the reflections are formed by a biconvex lens.]

As mentioned above, a positive or converging lens in air will focus a collimated beam travelling
along the lens axis to a spot (known as the focal point) at a distance f from the lens. Conversely,
a point source of light placed at the focal point will be converted into a collimated beam by the
lens. These two cases are examples of image formation in lenses. In the former case, an object
at an infinite distance (as represented by a collimated beam of waves) is focused to an image at
the focal point of the lens. In the latter, an object at the focal length distance from the lens is
imaged at infinity. The plane perpendicular to the lens axis situated at a distance f from the lens is
called the focal plane.

If the distances from the object to the lens and from the lens to the image are S1 and S2
respectively, for a lens of negligible thickness, in air, the distances are related by the thin lens
formula

\[ \frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f} \]

This can also be put into the "Newtonian" form:

\[ x_1 x_2 = f^2 \][12]

where x1 = S1 − f and x2 = S2 − f.
What this means is that, if an object is placed at a distance S1 along the axis in front of a positive
lens of focal length f, a screen placed at a distance S2 behind the lens will have a sharp image of
the object projected onto it, as long as S1 > f (if the lens-to-screen distance S2 is varied slightly,
the image will become less sharp). This is the principle behind photography and the human eye.
The image in this case is known as a real image.

Note that if S1 < f, then S2 becomes negative, and the image is apparently positioned on the same
side of the lens as the object. Although this kind of image, known as a virtual image, cannot be projected
on a screen, an observer looking through the lens will see the image in its apparent calculated
position. A magnifying glass creates this kind of image.

The magnification of the lens is given by

\[ M = -\frac{S_2}{S_1} = \frac{f}{f - S_1} \]

where M is the magnification factor; if |M| > 1, the image is larger than the object. Notice that the
sign convention here shows that, if M is negative, as it is for real images, the image is upside-down
with respect to the object. For virtual images, M is positive and the image is upright.
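
A small sketch putting the thin lens formula and the magnification together (the numbers are illustrative):

```python
def thin_lens_image(s1: float, f: float) -> tuple[float, float]:
    """Image distance S2 and magnification M for object distance S1 and focal length f."""
    s2 = 1.0 / (1.0 / f - 1.0 / s1)  # rearranged from 1/S1 + 1/S2 = 1/f
    m = -s2 / s1                      # M = -S2/S1
    return s2, m

# Converging lens with f = 100 mm.
print(thin_lens_image(s1=300.0, f=100.0))  # (150.0, -0.5): real, inverted image
print(thin_lens_image(s1=50.0, f=100.0))   # (-100.0, 2.0): virtual, upright image
```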

In the special case that S1 = ∞, then S2 = f and M = −f / ∞ = 0. This corresponds to a collimated
beam being focused to a single spot at the focal point. The size of the image in this case is not
actually zero, since diffraction effects place a lower limit on the size of the image (see Rayleigh
criterion).

The formulas above may also be used for negative (diverging) lenses by using a negative focal
length (f), but for these lenses only virtual images can be formed.

For the case of lenses that are not thin, or for more complicated multi-lens optical systems, the
same formulas can be used, but S1 and S2 are interpreted differently. If the system is in air or
vacuum, S1 and S2 are measured from the front and rear principal planes of the system,
respectively. Imaging in media with an index of refraction greater than 1 is more complicated, and
is beyond the scope of this article.

Compound lenses

Simple lenses are subject to a number of optical aberrations. In many cases these aberrations
can be compensated for to a great extent by using a combination of simple lenses with
complementary aberrations. A compound lens is a collection of simple lenses of different shapes
and made of materials of different refractive indices, arranged one after the other with a common
axis.

The simplest case is where lenses are placed in contact: if the lenses of focal lengths f1 and f2 are
"thin", the combined focal length f of the lenses is given by

\[ \frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} \]

Since 1/f is the power of a lens, it can be seen that the powers of thin lenses in contact are
additive.

If two thin lenses are separated in air by some distance d (where d is smaller than the focal length
of the first lens), the focal length for the combined system is given by

\[ \frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d}{f_1 f_2} \]

The distance from the second lens to the focal point of the combined lenses is called the back
focal length (BFL):

\[ \mathrm{BFL} = \frac{f_2\,(d - f_1)}{d - (f_1 + f_2)} \]

As d tends to zero, the value of the BFL tends to the value of f given for thin lenses in contact.
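
A quick numeric check of these relations, including the d → 0 limit (the focal lengths are illustrative):

```python
def combined_focal_length(f1: float, f2: float, d: float = 0.0) -> float:
    """Focal length of two thin lenses separated by d (d = 0 for lenses in contact)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

def back_focal_length(f1: float, f2: float, d: float) -> float:
    """Distance from the second lens to the combined system's focal point."""
    return f2 * (d - f1) / (d - (f1 + f2))

f1, f2 = 100.0, 50.0
print(combined_focal_length(f1, f2))          # contact: ~33.3 (powers add)
print(combined_focal_length(f1, f2, d=20.0))  # separated by 20: ~38.5
print(back_focal_length(f1, f2, d=20.0))      # ~30.8
print(back_focal_length(f1, f2, d=1e-9))      # ~33.3: tends to the contact value
```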

If the separation distance is equal to the sum of the focal lengths (d = f1+f2), the combined focal
length and BFL are infinite. This corresponds to a pair of lenses that transform a parallel
(collimated) beam into another collimated beam. This type of system is called an afocal system,
since it produces no net convergence or divergence of the beam. Two lenses at this separation
form the simplest type of optical telescope. Although the system does not alter the divergence of
a collimated beam, it does alter the width of the beam. The magnification of such a telescope is
given by

\[ M = -\frac{f_1}{f_2} \]

which is the ratio of the input beam width to the output beam width. Note the sign convention: a
telescope with two convex lenses (f1 > 0, f2 > 0) produces a negative magnification, indicating an
inverted image. A convex plus a concave lens (f1 > 0 > f2) produces a positive magnification, and
the image is upright.
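
A tiny sketch contrasting the two configurations (the focal lengths are illustrative):

```python
def telescope_magnification(f1: float, f2: float) -> float:
    """Angular magnification of an afocal two-lens telescope (separation d = f1 + f2)."""
    return -f1 / f2

print(telescope_magnification(f1=500.0, f2=25.0))   # Keplerian: -20.0 (inverted image)
print(telescope_magnification(f1=500.0, f2=-25.0))  # Galilean: +20.0 (upright image)
```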
Uses of lenses

A single convex lens mounted in a frame with a handle or stand is a magnifying glass. Lenses are
used as prosthetics for the correction of visual impairments such as myopia, hyperopia,
presbyopia, and astigmatism (see corrective lens, contact lens, eyeglasses). Most lenses used
for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric.
They are usually shaped to fit in a roughly oval, not circular, frame; the optical centres are placed
over the eyeballs; and their curvature may not be axially symmetric, in order to correct for
astigmatism. Sunglasses' lenses are designed to attenuate light; sunglass lenses that also correct
visual impairments can be custom made.

Other uses are in imaging systems such as monoculars, binoculars, telescopes, microscopes,
cameras and projectors. Some of these instruments produce a virtual image when applied to the
human eye; others produce a real image which can be captured on photographic film or an
optical sensor, or can be viewed on a screen. In these devices lenses are sometimes paired
with curved mirrors to make a catadioptric system, in which the lens's spherical aberration corrects
the opposite aberration in the mirror (such as Schmidt and meniscus correctors).

Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much
of the visible and infrared light incident on the lens is concentrated into the small image. A large
lens will create enough intensity to burn a flammable object at the focal point. Since ignition can
be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least
2400 years.[13] A modern application is the use of relatively large lenses to concentrate solar
energy on relatively small photovoltaic cells, harvesting more energy without the need to use
larger, more expensive cells.

Radio astronomy and radar systems often use dielectric lenses, commonly called lens antennas,
to refract electromagnetic radiation into a collector antenna.

Lenses can become scratched and abraded. Abrasion resistant coatings are available to help
control this.[14]
