
TERM PAPER OF MATHEMATICS

SUBMITTED TO:- MISS SALONI

SUBMITTED ON:- 12-11-2010

SUBMITTED BY:-
• NAME:- HAOBIJAM BANA
• ROLL NO.:- RG6009A62
• REG. NO.:- 11013742
• SECTION:- G6009
• TOPIC:- APPLICATION OF EIGENVALUES
Acknowledgement

I would like to express my gratitude to all those who made it possible for me to complete this project. I want to thank the Department of Mathematics of Lovely Professional University (Punjab) for giving me permission to commence this project in the first instance, to do the necessary research work, and to use departmental data. I would furthermore like to thank our teacher, Miss Saloni, who gave and confirmed this permission and encouraged me to go ahead with my project. I am indebted to the Honorable Head of Department and our Dean Ma'am for their stimulating support.

I am deeply indebted to all our supervisors, whose help, stimulating suggestions, and encouragement aided me throughout the research for and the writing of this project.

It has been a very interesting task to work on this project; the topic was new to us, and I am almost done with it.

It is our team's work, and I hope our team makes it to the goal.

THANK YOU
INTRODUCTION:-
In mathematics, eigenvalue, eigenvector, and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word "eigen" for "innate", "idiosyncratic", "own". Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors, and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics.

In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector.

These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that

Ax = λx.

The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x.
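These definitions are easy to explore numerically. The following is a minimal Python sketch, assuming NumPy is available (the matrix is an arbitrary example chosen for illustration); np.linalg.eig returns the eigenvalues and eigenvectors, and the eigenvalue equation Ax = λx can then be checked directly:

    import numpy as np

    # A small example matrix acting on the plane.
    A = np.array([[3.0, 1.0],
                  [0.0, 2.0]])

    # eig returns the eigenvalues and a matrix whose columns
    # are the corresponding eigenvectors.
    eigenvalues, eigenvectors = np.linalg.eig(A)

    for i in range(len(eigenvalues)):
        lam = eigenvalues[i]
        x = eigenvectors[:, i]
        # Verify the eigenvalue equation Ax = λx for each pair.
        assert np.allclose(A @ x, lam * x)
        print(lam, x)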

Contents:-

• Overview
• Mathematical definition
• Alternative definition
• Computation of Eigenvalues, and the characteristic equation
• Properties of eigenvectors
• Non-zero values
• Scalar multiples
• Components of eigenvectors
• Linear independence
• Eigenvectors of a diagonal matrix
• Properties of Eigenvalues
• Shear
• Eigen decomposition
• Applications

Overview:-

In linear algebra, there are two kinds of objects: scalars, which are just numbers, and vectors, which can be thought of as arrows and which have both magnitude and direction (though more precisely a vector is a member of a vector space). In place of the ordinary functions of algebra, the most important functions in linear algebra are called "linear transformations," and a linear transformation is usually given by a "matrix," an array of numbers. Thus instead of writing f(x) we write M(v), where M is a matrix and v is a vector. The rules for using a matrix to transform a vector are given in the article on linear algebra.

If the action of a matrix on a (nonzero) vector changes its magnitude but not its direction, then the vector is called an eigenvector of that matrix. Each eigenvector is, in effect, multiplied by a scalar, called the eigenvalue corresponding to that eigenvector. The eigenspace corresponding to one eigenvalue of a given matrix is the set of all eigenvectors of the matrix with that eigenvalue.

An important benefit of knowing the eigenvectors and eigenvalues of a system is that the effect of the action of the matrix on the system can be predicted. Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue.

Many kinds of mathematical objects can be treated as vectors: ordered pairs, functions, harmonic modes, quantum states, and frequencies are examples. In these cases, the concept of direction loses its ordinary meaning and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenface, eigenstate, and eigenfrequency.
corresponding to one eigenvalue

Mathematical definition:-

Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation

Ax = λx

for some scalar λ. In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x.

The key equation in this definition is the eigenvalue equation, Ax = λx. The vector x has the property that its direction is not changed by the transformation A, but that it is only scaled by a factor of λ. Most vectors x will not satisfy such an equation: a typical vector x changes direction when acted on by A, so that Ax is not a scalar multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special scalars λ are Eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors.

Geometrically, the eigenvalue equation means that under the transformation A, eigenvectors experience only changes in magnitude and sign: the direction of Ax is the same as that of x. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction; this is defined as reflection.

Alternative definition:-

Mathematicians sometimes alternatively define eigenvalue in the following way, leading to differences of opinion over what constitutes an "eigenvector."

Alternative definition: Given a linear transformation T on a linear space V over a field F, a scalar λ in F is called an eigenvalue of T if there exists a nonzero vector x in V such that Tx = λx. Any vector u in V which satisfies Tu = λu for some eigenvalue λ is called an eigenvector of T corresponding to λ. The set of all eigenvectors corresponding to λ is called the eigenspace of λ.[4]

Notice that the above definitions are entirely equivalent, except in regard to eigenvectors, where they disagree on whether the zero vector is considered an eigenvector. The convention of allowing the zero vector to be an eigenvector (although still not allowing it to determine Eigenvalues) allows one to avoid repeating the non-zero criterion every time one proves a theorem, and it simplifies the definitions of concepts such as eigenspace, which would otherwise be non-intuitively made up of vectors that are not all eigenvectors.

Computation of Eigenvalues, and the characteristic equation:-

Main article: Characteristic polynomial

When a transformation is represented by a square matrix A, the eigenvalue equation can be expressed as

Ax = λx.

This can be rearranged to

(A − λI)x = 0,

where I is the identity matrix. If there exists an inverse (A − λI)⁻¹, then both sides can be left-multiplied by the inverse to obtain the trivial solution x = 0. Thus we require there to be no inverse, which from linear algebra means that the determinant equals zero:

det(A − λI) = 0.

The determinant requirement is called the characteristic equation (less often, secular equation) of A, and the left-hand side is called the characteristic polynomial. When expanded, this gives a polynomial equation for λ. Neither the eigenvector x nor its components are present in the characteristic equation.

Example 1

The matrix

A = | 2 1 |
    | 1 2 |

defines a linear transformation of the real plane. The Eigenvalues of this transformation are given by the characteristic equation

det(A − λI) = (2 − λ)² − 1 = λ² − 4λ + 3 = 0.

The roots of this equation (i.e. the values of λ for which the equation holds) are λ = 1 and λ = 3. Having found the Eigenvalues, it is possible to find the eigenvectors. Considering first the eigenvalue λ = 3, we have A(x, y) = 3(x, y). After matrix multiplication, this matrix equation represents a system of two linear equations: 2x + y = 3x and x + 2y = 3y. Both equations reduce to the single linear equation x = y. To find an eigenvector, we are free to choose any value for x (except 0), so by picking x = 1 and setting y = x, we find an eigenvector with eigenvalue 3 to be (1, 1).

We can confirm this is an eigenvector with eigenvalue 3 by checking the action of the matrix on this vector: A(1, 1) = (3, 3) = 3(1, 1).
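This worked example can be verified numerically; a sketch assuming NumPy is available, using the characteristic polynomial λ² − 4λ + 3 obtained above by expanding det(A − λI):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Roots of the characteristic polynomial λ^2 − 4λ + 3.
    print(np.roots([1.0, -4.0, 3.0]))   # [3. 1.]

    # The same values from the matrix directly, with eigenvectors.
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                  # [3. 1.]
    print(eigenvectors)                 # columns proportional to (1, 1) and (1, -1)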
Any scalar multiple of this eigenvector will also be an eigenvector with eigenvalue 3.

For the eigenvalue λ = 1, a similar process leads to the equation x = −y, and hence an eigenvector with eigenvalue 1 is given by (1, −1).

Properties of eigenvectors:-

Non-zero values:-

The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. Each eigenvector is associated with a specific eigenvalue. One eigenvalue is associated with an infinite number of eigenvectors, which span, at least, a one-dimensional eigenspace, as follows.

Scalar multiples:-

If x is an eigenvector of the linear transformation A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue, since A(αx) = αAx = αλx = λ(αx). The set of all such eigenvectors belonging to the same scaling factor, λ, is called an eigenline.

More generally, any linear combination of eigenvectors that share the same eigenvalue λ will itself be an eigenvector with eigenvalue λ. Together with the zero vector, the eigenvectors of A with the same eigenvalue form a linear subspace of the vector space called an eigenspace, Eλ. In the case dim(Eλ) = 1, it is called an eigenline and λ is called a scaling factor.
Components of eigenvectors:-

If a basis is defined in a vector space, all vectors can be expressed in terms of components. For finite-dimensional vector spaces with dimension n, linear transformations can be represented with n × n square matrices. Conversely, every such square matrix corresponds to a linear transformation for a given basis. Thus, in a two-dimensional vector space R² fitted with the standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:

| a11 a12 | | x |  =  λ | x |
| a21 a22 | | y |       | y |

where the juxtaposition of matrices denotes matrix multiplication. Since, by definition of λ, det(λI − A) = 0, the solution is not unique: at least one coordinate is a free variable. This results in a whole dimension of λ-associated eigenvectors, as explained in the property above.

Linear independence:-

The eigenvectors corresponding to different Eigenvalues are linearly independent, meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different Eigenvalues. A matrix is said to be defective if it fails to have n linearly independent eigenvectors. All defective matrices have fewer than n distinct Eigenvalues, but not all matrices with fewer than n distinct eigenvalues are defective.

Eigenvectors of a diagonal matrix:-

If a matrix is a diagonal matrix, then its eigenvalues are the numbers on the diagonal and its eigenvectors are the basis vectors (of the standard basis) to which those numbers refer.

Properties of Eigenvalues:-

Let A be an n×n matrix with Eigenvalues λi, i = 1, …, n. Then:

• Trace of A: tr(A) = λ1 + λ2 + … + λn.
• Determinant of A: det(A) = λ1 · λ2 · … · λn.
• The Eigenvalues of A^k are λ1^k, …, λn^k.
• If A = A^H, i.e., A is Hermitian, every eigenvalue is real.
• Every eigenvalue of a unitary matrix has absolute value |λ| = 1.
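The trace and determinant properties are easy to check numerically; a small sketch, assuming NumPy is available, using an arbitrary random matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))

    eigenvalues = np.linalg.eigvals(A)

    # Trace equals the sum of the eigenvalues.
    assert np.isclose(np.trace(A), eigenvalues.sum())
    # Determinant equals the product of the eigenvalues.
    assert np.isclose(np.linalg.det(A), eigenvalues.prod())
    # Eigenvalues of A^2 are squares of those of A, so their sum is tr(A^2).
    assert np.isclose((eigenvalues**2).sum(), np.trace(A @ A))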
Left and right eigenvectors:-
The word eigenvector formally refers to the right eigenvector xR. It is defined by the above eigenvalue equation, AxR = λRxR, and it is the most commonly used eigenvector. However, the left eigenvector xL exists as well, and is defined by xLA = λLxL.
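A left eigenvector of A is simply a right eigenvector of the transpose of A, which gives a convenient way to compute one; a sketch, assuming NumPy is available:

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [1.0, 3.0]])

    # Right eigenvectors: A x = λ x.
    lam_right, xR = np.linalg.eig(A)

    # Left eigenvectors: x A = λ x, i.e. right eigenvectors of A transposed.
    lam_left, xL = np.linalg.eig(A.T)

    # Both give the same eigenvalues (possibly in a different order).
    print(np.sort(lam_right), np.sort(lam_left))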
History:-

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes. Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur. Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real Eigenvalues. This was extended by Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Brioschi proved that the Eigenvalues of orthogonal matrices lie on the unit circle, and Clebsch found the corresponding result for skew-symmetric matrices. Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.

At the start of the 20th century, Hilbert studied the Eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word Eigen to denote Eigenvalues and eigenvectors, in 1904, though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.

The first numerical algorithm for computing Eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis and Vera Kublanovskaya in 1961.
Existence and multiplicity:-

For a given matrix, the number of Eigenvalues and eigenvectors and whether they are distinct depend on the properties of that matrix.

The algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace, i.e. the number of linearly independent eigenvectors with that eigenvalue.

For transformations on real vector spaces, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may include complex numbers with a non-zero imaginary component. For example, a matrix representing a planar rotation of 45 degrees will not leave any non-zero vector pointing in the same direction.

However, over a complex vector space, the fundamental theorem of algebra guarantees that the characteristic polynomial has at least one root, and thus the linear transformation has at least one eigenvalue. Over a complex space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. In a sense, it is possible that there may not be sufficient eigenvectors to span the entire space. This is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates.

As well as distinct roots, the characteristic equation may also have repeated roots. However, having repeated roots does not imply that there are multiple distinct (i.e., linearly independent) eigenvectors with that eigenvalue.

These concepts are illustrated by considering several common geometric transformations.

Shear:-
[Fig.: Horizontal shear. The shear angle φ is given by k = cot φ, where k is the shear factor.]

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line. Shearing a plane figure does not change its area. Shear can be horizontal (along the X axis) or vertical (along the Y axis). In horizontal shear (see figure), a point P of the plane moves parallel to the X axis to the place P′ so that its y coordinate does not change while the x coordinate increments to become x′ = x + ky, where k is called the shear factor.

The matrix of a horizontal shear transformation is

A = | 1 k |
    | 0 1 |

The characteristic equation is

λ² − 2λ + 1 = (1 − λ)² = 0,

which has a single, repeated root λ = 1. Therefore, the eigenvalue λ = 1 has algebraic multiplicity 2. The eigenvector(s) are found as solutions of (A − I)(x, y) = 0, that is, ky = 0. The last equation is equivalent to y = 0, which is a straight line along the x axis. This line represents the one-dimensional eigenspace. In the case of shear, the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a vector along the x axis. The case of vertical shear, with transformation matrix

| 1 0 |
| k 1 |

is dealt with in a similar way; the eigenvector in vertical shear is along the y axis. Applying the shear transformation repeatedly changes the direction of any vector in the plane closer and closer to the direction of the eigenvector.
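Because the shear matrix has a repeated eigenvalue with only a one-dimensional eigenspace, it is a handy test case for the defective-matrix behaviour described above; a small sketch, assuming NumPy is available and taking k = 1:

    import numpy as np

    k = 1.0
    S = np.array([[1.0, k],
                  [0.0, 1.0]])     # horizontal shear

    eigenvalues, eigenvectors = np.linalg.eig(S)
    print(eigenvalues)             # [1. 1.]: algebraic multiplicity 2
    print(eigenvectors)            # both columns lie (up to rounding) along
                                   # the x axis: the eigenspace is one-dimensional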
Uniform scaling and reflection:-

[Fig.: When a surface is stretched equally in all directions (a homothety), each of the radial vectors is an eigenvector.]

As a one-dimensional vector
space, consider an elastic string
or rubber band tied to an
unmoving support at one end.
Pulling the string away from the
point of attachment stretches it
and elongates it by some scaling
factor λ which is a real number.
Each vector on the string is
stretched equally, with the same
scaling factor λ, and although
elongated, it preserves its original
direction.
For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon. All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix

A = | λ 0 |
    | 0 λ |

Expressed in words, the transformation is equivalent to multiplying the length of any vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.

Unequal scaling:-

[Fig.: Vertical shrink (k2 < 1) and horizontal stretch (k1 > 1) of a unit square. Eigenvectors are u1 and u2, and Eigenvalues are λ1 = k1 and λ2 = k2. This transformation orients all vectors towards the principal eigenvector u1.]

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is

A = | k1 0  |
    | 0  k2 |

and the characteristic equation is

(k1 − λ)(k2 − λ) = 0.

The Eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two Eigenvalues are the scaling factors in the two directions. Plugging k1 back into the eigenvalue equation gives one of the eigenvectors: (k2 − k1)y = 0 or, more simply, y = 0. Thus, the eigenspace is the x-axis. Similarly, substituting λ = k2 shows that the corresponding eigenspace is the y-axis. In this case, both Eigenvalues have algebraic and geometric multiplicities equal to 1. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunken in that direction. Negative Eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the Eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scaling.

The figure shows the case where k1 > 1 and 1 > k2 > 0. The rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After applying this stretching/shrinking transformation many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y-axis, which will gradually shrink away to nothing.
Rotation:-

For more details on this topic, see Rotation matrix.

A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any eigenvectors. But this is not necessarily true if we consider the same matrix over a complex vector space.

A counterclockwise rotation in the horizontal plane about the origin at an angle φ is represented by the matrix

R = | cos φ  −sin φ |
    | sin φ   cos φ |

The characteristic equation of R is λ² − 2λ cos φ + 1 = 0. This quadratic equation has discriminant D = 4(cos² φ − 1) = −4 sin² φ, which is a negative number whenever φ is not equal to a multiple of 180°. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, …, is a reflection (uniform scaling by −1). Otherwise, as expected, there are no real Eigenvalues or eigenvectors for rotation in the plane.
Rotation matrices on complex vector spaces:-

The characteristic equation has two complex roots, λ1 and λ2. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional space, we can consider these complex Eigenvalues. The roots are complex conjugates of each other: λ1,2 = cos φ ± i sin φ = e^(±iφ), each with an algebraic multiplicity equal to 1, where i is the imaginary unit.

The first eigenvector is found by substituting the first eigenvalue, λ1, back into the eigenvalue equation, (R − λ1I)(x, y) = 0. The last equation is equivalent to the single equation x = iy, and again we are free to set x = 1 to give the eigenvector (1, −i).

Similarly, substituting in the second eigenvalue gives the single equation x = −iy, and so the eigenvector is given by (1, i).
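These complex conjugate eigenvalues are easy to confirm numerically; a sketch, assuming NumPy is available:

    import numpy as np

    phi = np.pi / 4                      # a 45 degree rotation
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])

    eigenvalues, eigenvectors = np.linalg.eig(R)
    # The eigenvalues are the complex conjugate pair e^{±iφ}.
    print(eigenvalues)                   # approx [0.707+0.707j, 0.707-0.707j]
    print(np.exp(1j * phi), np.exp(-1j * phi))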
Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and again the Eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought of as scaling matrices, with complex scaling factors.

Infinite-dimensional spaces and spectral theory:-

If the vector space is an infinite-dimensional Banach space, the notion of Eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λI)⁻¹ is not defined; that is, such that T − λI has no bounded inverse.

Clearly, if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ²(Z) (that is, the space of all sequences of scalars … a−1, a0, a1, a2, … such that Σ|ai|² converges) has no eigenvalue but does have spectral values.

In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)
The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of Eigenvalues, which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Eigenfunction:-

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, the process of differentiation defines a linear operator, since

d/dt (af(t) + bg(t)) = a df/dt + b dg/dt,

where f(t) and g(t) are differentiable functions, and a and b are constants.

The eigenvalue equation for linear differential operators is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function by a single real variable. We seek a function (equivalent to an infinite-dimensional vector) which, when differentiated, yields a constant times the original function. In this case, the eigenvalue equation becomes the linear differential equation

df/dx = λ f(x).

Here λ is the eigenvalue associated with the function f(x). This eigenvalue equation has a solution for any value of λ. If λ is zero, the solution is f(x) = A, where A is any constant; if λ is non-zero, the solution is the exponential function f(x) = A e^(λx).

If we expand our horizons to complex-valued functions, the value of λ can be any complex number. The spectrum of d/dt is therefore the whole complex plane. This is an example of a continuous spectrum.
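The exponential solution can be checked symbolically; a small sketch, assuming SymPy is available:

    import sympy as sp

    x, lam, A = sp.symbols('x lam A')
    f = A * sp.exp(lam * x)

    # Differentiation sends f to lam * f, so f is an eigenfunction
    # of d/dx with eigenvalue lam.
    assert sp.simplify(sp.diff(f, x) - lam * f) == 0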
Waves on a string:-

[Fig.: The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible Eigenvalues are governed by the length of the string and determine the frequency of oscillation.]

The displacement, h(x, t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

∂²h/∂t² = c² ∂²h/∂x²,

which is a linear partial differential equation, where c is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that h can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:

X″ = −(ω²/c²) X   and   T″ = −ω² T.

Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the Eigenvalues, the eigenfunctions are given by

X = sin(ωx/c + φ)   and   T = sin(ωt + ψ).

If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example), we can constrain the Eigenvalues. For those boundary conditions, we find sin(φ) = 0, and so the phase angle φ = 0 and sin(ωL/c) = 0. Thus, the constant ω is constrained to take one of the values ωn = nπc/L, where n is any integer. Thus the clamped string supports a family of standing waves of the form

h(x, t) = sin(nπx/L) sin(ωn t).
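The same structure can be seen numerically by discretizing the string: the eigenvectors of the finite-difference second-derivative matrix are sampled sine modes. A sketch, assuming NumPy is available:

    import numpy as np

    n = 50                      # interior grid points of a clamped string
    # Second-difference matrix approximating d^2/dx^2 with fixed ends.
    D = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))

    eigenvalues, modes = np.linalg.eigh(D)   # D is symmetric
    # All eigenvalues are negative; each eigenvector samples a standing
    # wave sin(k*pi*x/L), in agreement with the clamped-string solution.
    print(eigenvalues[-3:])     # the three smallest-magnitude eigenvalues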

Applications:-

[Fig.: The wave functions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian, as well as of the angular momentum operator. They are associated with Eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, …) and angular momentum (increasing across: s, p, d, …). The illustration shows the square of the absolute value of the wave functions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.]

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

HψE = EψE,
where H, the Hamiltonian, is a second-order differential operator and ψE, the wave function, is one of its eigenfunctions, corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.

Bra-ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |ψE⟩. In this notation, the Schrödinger equation is

H|ψE⟩ = E|ψE⟩,

where |ψE⟩ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, in the equation above H|ψE⟩ is understood to be the vector obtained by application of the transformation H to |ψE⟩.

Molecular orbital:-

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding Eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their Eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.
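A generalized eigenvalue problem of this form, Fc = εSc, can be solved numerically with a standard routine; a sketch using hypothetical 2×2 matrices as stand-ins (not real molecular data), assuming SciPy is available:

    import numpy as np
    from scipy.linalg import eigh

    # Hypothetical small Fock-like matrix F and overlap matrix S,
    # for illustration only.
    F = np.array([[-1.0, -0.2],
                  [-0.2, -0.5]])
    S = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

    # Solve the generalized eigenvalue problem F c = eps S c.
    eps, C = eigh(F, S)
    print(eps)        # orbital-energy-like eigenvalues
    # The columns of C satisfy F @ C = S @ C @ diag(eps).
    assert np.allclose(F @ C, S @ C @ np.diag(eps))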
Geology and glaciology:-

In geology, especially in the study of glacial till, eigenvectors and Eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a Stereonet on a Wulff Net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 being the secondary, and E3 being the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is planar. If E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.

Principal components analysis:-

Main article: Principal components analysis
See also: Positive semi-definite matrix and Factor analysis

[Fig.: PCA of the multivariate Gaussian distribution centered at (1, 3), with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction.]
The eigendecomposition of a symmetric positive semi-definite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal components analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the Eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal Eigen-basis for the space of the observed data: in this basis, the largest Eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study large data sets, such as those encountered in data mining, chemical research, psychology, and marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q-methodology, the Eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing): the factors with Eigenvalues greater than 1.00 are considered practically significant, that is, as explaining an important amount of the variability in the data, while Eigenvalues less than 1.00 are considered practically insignificant, as explaining only a negligible portion of the data variability. More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
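As a sketch of the procedure (assuming NumPy is available, on synthetic data made up for illustration), PCA reduces to an eigendecomposition of the sample covariance matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    # 200 correlated 2-D samples (illustrative synthetic data).
    X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0],
                                                  [1.0, 1.0]])

    C = np.cov(X, rowvar=False)          # sample covariance matrix (PSD)
    eigenvalues, eigenvectors = np.linalg.eigh(C)

    # Columns of eigenvectors are the principal directions; the
    # eigenvalues are the variances explained along those directions.
    order = np.argsort(eigenvalues)[::-1]
    print(eigenvalues[order])
    print(eigenvectors[:, order])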
Vibration analysis:-

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The Eigenvalues are used to determine the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.
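For instance, for a chain of unit masses joined by unit springs between fixed walls, the squared natural frequencies are the eigenvalues of the stiffness matrix; a sketch, assuming NumPy is available:

    import numpy as np

    # Stiffness matrix K for three unit masses coupled by unit springs
    # (fixed walls at both ends); with unit masses, K x = omega^2 x.
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

    omega_sq, modes = np.linalg.eigh(K)
    print(np.sqrt(omega_sq))    # the natural frequencies
    print(modes)                # columns are the corresponding mode shapes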
Eigenfaces:-

[Fig.: Eigenfaces as examples of eigenvectors.]

Main article: Eigenfaces

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called Eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, Eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to Eigen vision systems determining hand gestures has also been made.

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

Tensor of inertia:-

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress tensor:-

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the Eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal, in this orientation the stress tensor has no shear components; the components it does have are the principal components.

Eigenvalues of graphs:-

The kth principal eigenvector of a graph is defined as the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists.
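A toy version of this computation on a four-page link graph (a sketch, assuming NumPy is available; the graph is made up for illustration, and 0.85 is the commonly quoted damping value):

    import numpy as np

    # Adjacency matrix of a tiny 4-page link graph (illustrative only).
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    # Row-normalize to get the Markov transition matrix, then apply
    # damping: this is the "modification" that guarantees a stationary
    # distribution exists.
    P = A / A.sum(axis=1, keepdims=True)
    d = 0.85
    n = len(P)
    G = d * P + (1 - d) / n

    # Power iteration: the ranks form the principal left eigenvector of G.
    r = np.full(n, 1.0 / n)
    for _ in range(100):
        r = r @ G
    print(r / r.sum())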
