
Functional Analysis Notes, Fall 2014 - Spring 2015

V. Sverak, notes taken by Tianyu Tao


January 24, 2015
Sep 3, 5, 8: Some preliminary discussion about Banach spaces.
A Banach space is a (usually real or complex) vector space $X$ equipped with
a norm $\|\cdot\| : X \to \mathbb{R}$ (positivity, triangle inequality, homogeneity hold) such that $X$ is
complete with respect to the distance function induced by the norm.
An example of a (real) Banach space is the space of continuous functions from $[0, 1]$ to
$\mathbb{R}$, denoted $C([0,1])$, with the sup norm
$$\|f\| = \sup_{x \in [0,1]} |f(x)| = \max_{x \in [0,1]} |f(x)|.$$

There is the notion of separability, meaning the existence of a countable dense
subset. The space $C([0,1])$ is separable: this is a consequence of the Stone-Weierstrass
theorem, since the set of polynomials with rational coefficients serves the purpose. But $C_b(\mathbb{R})$,
the space of bounded continuous real-valued functions on $\mathbb{R}$, is not separable: there
exists an uncountable subset of it whose members are at distance 1 from each
other; consider the set of functions taking only the values 0 or 1 at the integers.
In fact, every separable normed space can be isometrically embedded into
$C([0,1])$; this is the content of the Banach-Mazur theorem. Even a non-separable Banach space can still be embedded into $C(K)$ for some compact $K$.
The rest is devoted to some basic properties of linear maps on normed spaces.
Theorem. Suppose $X, Y$ are normed vector spaces and $f : X \to Y$ is linear; then
the following are equivalent:
(i) $\|f(x)\| \le c\|x\|$ for some constant $c$;
(ii) $f$ is continuous;
(iii) $f$ is continuous at 0;
(iv) $f$ is bounded on some open ball around 0.
Proof. Let me just show some interesting cases:
(iv) $\Rightarrow$ (i): Let $B$ be the open ball around 0 on which $f$ is bounded by some $C$;
rescaling, we may assume $B$ is the unit ball. Then for $x \neq 0$,
$$f(x) = f\Bigl(\tfrac{x}{\|x\|}\,\|x\|\Bigr) = \|x\|\, f\Bigl(\tfrac{x}{\|x\|}\Bigr), \qquad \text{so} \quad \|f(x)\| \le C\|x\|.$$
(i) $\Rightarrow$ (ii): Clear, since such a function is Lipschitz.
(ii) $\Rightarrow$ (iii): Trivial.
(iii) $\Rightarrow$ (iv): Definition of continuity.

When $l$ is a linear functional, meaning the target space $Y$ is $\mathbb{R}$, we have:

Theorem. For a linear functional $l : X \to \mathbb{R}$, the following are equivalent:
(i) $l$ is continuous;
(ii) $\ker l$ is closed.
Proof. Continuity clearly implies closedness of the kernel. On the other hand, if
the kernel is closed (and $l \neq 0$), there exists a ball $B = B(a, \varepsilon)$ disjoint from the
kernel. If we show $l$ is bounded on this ball, it follows that $l$ is bounded on some
ball around 0, which shows continuity.
To see $l$ is bounded on $B$: by symmetry of the ball, $l(B)$ is an interval centered
around $l(a)$ (the image of a convex set under a linear map is again convex), and this
interval does not contain 0; an interval symmetric about $l(a)$ that avoids 0 must be
bounded, thus $l(B)$ is a bounded interval.
To see an example of a non-continuous linear functional, let $X = C([0, 1])$ with
the norm $\|f\| = \int_0^1 |f|$, and $l : X \to \mathbb{R}$ with $l(f) = f(1/2)$. Then there are $f_n$ such
that $\|f_n\| \to 0$ but $l(f_n) \not\to 0$ (think of approximations to the delta function at $\tfrac12$).
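For concreteness, one possible choice (my own illustration, not from the lecture) is the sequence of tent functions
$$f_n(x) = \max\bigl(0,\ 1 - n\,|x - \tfrac12|\bigr), \qquad \|f_n\| = \int_0^1 f_n = \tfrac1n \to 0, \qquad l(f_n) = f_n(\tfrac12) = 1.$$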
Now we talk about the notion of the factor (quotient) space: for $Y$ a subspace of $X$, the factor space
$X/Y$ is the set of cosets $[x] = x + Y$, $x \in X$; two elements $x_1, x_2 \in X$ are in the
same coset iff $x_1 - x_2 \in Y$. The dimension of $X/Y$ is called the codimension of
$Y$ in $X$.
If $Y$ is a closed subspace, it is possible to define a norm on $X/Y$ by
$$\|[x]\| := \inf\{\|x'\| : x' \in [x]\};$$
intuitively, this is just the minimal distance from the affine set $[x]$ to the origin. In
fact, the closedness of $Y$ ensures $X/Y$ is a Banach space, and that the norm is
indeed a norm, not just a semi-norm (otherwise $\|[x]\| = 0$ for any $x$ in the closure of $Y$).
Our next topic is the structure of finite dimensional vector spaces
endowed with the norm topology. First we have:
Lemma. Let $Y \subset X$ be a closed subspace and suppose $b \in X \setminus Y$. Then the space
$$L = Y + \mathbb{R}b = \{y + tb : y \in Y,\ t \in \mathbb{R}\}$$
is still a closed subspace.
Proof. For any $x \in L$ there is a unique $t = t(x) \in \mathbb{R}$ such that $x = y + tb$, and the map
$l : x \mapsto t(x)$ is linear on $L$. The kernel of $l$ is $Y$; since it is closed, by the previous theorem $l$ is
continuous. Take $x_n \in L$ with $x_n \to x$ in $X$; we know $x_n = y_n + t(x_n) b$. Since $x_n$ is
Cauchy and $l$ is bounded, $t(x_n) \to t$ for some $t \in \mathbb{R}$; hence $y_n = x_n - t(x_n) b$
converges as well, to some $y \in Y$ by closedness, and hence $x = y + tb \in L$.
As a corollary, by induction, adding finitely many $\mathbb{R}b_n$ to $Y$ still leaves it closed;
hence we have:
Corollary. Any finite dimensional subspace is closed.
Proof. Let $X$ be such a space with basis $e_1, \dots, e_n$; then $X = \mathbb{R}e_1 + \cdots + \mathbb{R}e_n + \{0\}$, and $\{0\}$ is closed.
And we also have:
Corollary. If $X$ is finite dimensional and $l$ is a linear functional on $X$, then $l$ is
continuous.
Proof. $l^{-1}(0)$ is a subspace of the finite dimensional space $X$, hence finite dimensional, and is closed by the above corollary.
In fact, for any finite dimensional space $X$ of dimension $n$, we have a map
$$f : X \to \mathbb{R}^n$$
which maps a basis of $X$ to the standard basis of $\mathbb{R}^n$ and is extended to all of $X$
by linearity. This map is invertible; it is continuous because its component functions are
continuous linear functionals on $X$ (by the above corollary), and its inverse $a \mapsto \sum_i a_i e_i$
is clearly continuous. Hence any finite dimensional space is homeomorphically isomorphic
to $\mathbb{R}^n$; this says there is only one norm topology on finite dimensional spaces: any
two norms on $\mathbb{R}^n$ are equivalent. (Consider the identity map $i$ on $\mathbb{R}^n$ equipped with the two
different norms; continuity of $i$ and of $i^{-1}$ means both are bounded, and hence the
norms are equivalent.)
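As a concrete instance (a standard computation, added here only for illustration), comparing the Euclidean and sup norms on $\mathbb{R}^n$ gives explicit equivalence constants:
$$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty \qquad \text{for all } x \in \mathbb{R}^n.$$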
The relation between convex sets and norms is subtle; we will discuss it later.
Another result about finite dimensional spaces generalizes the second theorem:
suppose $f : X \to Y$ is linear, where $Y$ is finite dimensional and the kernel of $f$ is closed; then
$f$ is continuous.
Proof. We may write $f(x) = (f_1(x), \dots, f_n(x))$, and $\ker f$ is of finite codimension
in each $\ker f_i$. (To see this, consider the simple case when $Y$ has dimension 2; then
$\ker f$ is of codimension at most 1 in $\ker f_1$: indeed, let $[x_1], [x_2]$ be two nonzero elements
of $\ker f_1 / \ker f$; then $[x_1] = b[x_2]$, where $b = \frac{f_2(x_1)}{f_2(x_2)}$, since $f(x_1 - b x_2) = 0$. The case
$\dim Y = n$ is dealt with similarly.) Hence each $\ker f_i$ is closed, being $\ker f$ plus a finite
dimensional subspace, so each $f_i$ is continuous by the theorem, and
consequently $f$ is.
Sep 10: Today we talk about the problem of complements of subspaces
in Banach spaces. Let $X$ be a vector space and $Y$ a subspace of $X$. An algebraic
complement of $Y$ in $X$ is a subspace $Z$ such that the following two conditions
hold:
(i) $X = Y + Z$;
(ii) $Y \cap Z = \{0\}$.
We write $X = Y \oplus Z$, the direct sum of $Y$ and $Z$. It is equivalent to the Axiom
of Choice that every subspace has a complement. What we are interested in is the
notion of topological complement: let $X = Y \oplus Z$; then we have the projection
maps $\pi_Y : X \to Y$ and $\pi_Z : X \to Z$, and if these are continuous, we say the
space $Y$ is topologically complemented in $X$ by $Z$. We use the notation $\oplus$ to mean
topological complement from here on.
For example, the space $X = \ell^\infty(\mathbb{N})$ of all bounded sequences has the subspace
$c_0 = \{x \in \ell^\infty : \lim_k x_k = 0\}$, which is closed but not complemented (for a proof,
check the question "complement of $c_0$ in $\ell^\infty$" on SE).
When $Y$ is finite dimensional, or when $Y$ is closed and has finite codimension,
we can show $Y$ is complemented; the former requires Hahn-Banach, as follows:
A finite dimensional subspace is complemented: call the subspace $Z$, let $z_1, \dots, z_n$ be a basis of $Z$, and let
$l_1, \dots, l_n$ be the dual basis, meaning they are linear functionals on $Z$ such that
$l_j(z_k) = \delta_{jk}$ (Kronecker delta).
Invoke the Hahn-Banach theorem (which we will discuss later) to extend
each $l_j$ from $Z$ to all of $X$. Consider the projection map $\pi_Z : X \to Z$ defined by
$x \mapsto l_1(x) z_1 + \cdots + l_n(x) z_n$; for $x \in Z$ we have $\pi_Z(x) = x$, so $\pi_Z^2 = \pi_Z$, and we can then
define another map $\pi_Y$ by $\pi_Y(x) = x - \pi_Z(x)$. The complement $Y$ is given by
$\{x : \pi_Z(x) = 0\}$.
For the second claim the proof is not hard.
A closed subspace of finite codimension is complemented: Let $Y$ be the said space.
By assumption we may let $[x_1], \dots, [x_n]$ be a basis of $X/Y$, where $x_i \in X$; put $Z =
\mathrm{span}\{x_1, \dots, x_n\}$. We have $X = Y \oplus Z$: indeed, if $x \in Y \cap Z$, then $[x] = [0]$
and $x = \sum_i a_i x_i$, so $[0] = [x] = \sum_i a_i [x_i]$ shows $a_i = 0$ for all $i$, since the $[x_i]$ are a basis
for $X/Y$; and for any $x \in X$, by definition $[x] - \sum_i a_i [x_i] = [0]$ for some $a_i$, which shows
$x - \sum_i a_i x_i \in Y$. (Continuity of the projections follows because the coefficient functionals
$x \mapsto a_i$ factor through the continuous quotient map $X \to X/Y$, and $X/Y$ is finite dimensional.)
Sep 12, 15, 17: We will start our discussion of compact operators; first let us
review compactness in function spaces.
The following criterion is the most useful one when talking about compactness in metric
spaces in our discussion:
Theorem. A metric space is compact if and only if it is complete and totally bounded,
where totally bounded means that for every $\varepsilon > 0$ there exists a finite cover of the space by
balls of radius $\varepsilon$.
The proof can be found anywhere.
We prove:
Theorem (F. Riesz, 1915 or 1916?). For a normed vector space $X$, $\dim X < \infty$ if and
only if $X$ is locally compact, which is equivalent to the closed unit ball in $X$ being
compact.
Proof. If $X$ is finite dimensional, we already showed $X$ is homeomorphically isomorphic
to $\mathbb{R}^n$ for some $n$, and there the closed unit ball is compact by Heine-Borel.
On the other hand, if $B$, the closed unit ball in $X$, is compact, we may pick
$\theta < 1$ and a finite collection of points $b_1, \dots, b_n$ such that $B$ is covered by the balls $B(b_j, \theta)$.
We shall show the $b_j$ span $X$.
If the $b_j$ do not span $X$, let $Y$ be the span of the $b_j$; $Y$ is a finite dimensional
subspace of $X$ and is therefore a proper closed subspace of $X$. Let $x \notin Y$; then
$\mathrm{dist}(x, Y) = \inf_{y \in Y} \|y - x\| > 0$ by closedness of $Y$. There exists $x' \in Y$ achieving
this infimum (by Heine-Borel again, since the infimum may be taken over a bounded subset
of the finite dimensional $Y$), that is, $\|x - x'\| = \mathrm{dist}(x, Y)$. Set $a = \frac{x - x'}{\|x - x'\|}$; we
have $a \in B$, therefore $\|a - b_j\| < \theta$ for some $b_j$, which shows $\mathrm{dist}(a, Y) < \theta$. But on
the other hand,
$$\mathrm{dist}(a, Y) = \inf_{y \in Y} \|a - y\| = \inf_{y \in Y} \left\| \frac{x - x'}{\|x - x'\|} - y \right\|
= \inf_{y \in Y} \frac{1}{\|x - x'\|} \bigl\| x - x' - \|x - x'\|\, y \bigr\|
= \frac{1}{\|x - x'\|} \inf_{y \in Y} \|x - y\| = 1,$$
since $x' \in Y$. Now choosing $\theta < 1$ we have a contradiction.
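As a sanity check (my own remark, not from the lecture): in $\ell^2(\mathbb{N})$ the standard unit vectors satisfy
$$\|e_n - e_m\| = \sqrt{2} \quad (n \neq m),$$
so no finite collection of balls of radius $\theta < \sqrt{2}/2$ can cover the closed unit ball; this matches the theorem, since $\ell^2$ is infinite dimensional.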


Now let us start talking about compact operators; here is the definition:
Definition. A linear operator $T : X \to Y$ between Banach spaces $X, Y$ is said to
be compact if the image of the unit ball $B$ in $X$ is precompact in $Y$, meaning the
closure of $T(B)$ in $Y$ is compact.
An example would be a finite rank operator: one whose image $T(X)$ is finite
dimensional. Not all compact operators are finite rank: the operator $T :
\ell^2 \to \ell^2$ defined by $(x_1, x_2, \dots) \mapsto (x_1, \tfrac{x_2}{2}, \tfrac{x_3}{3}, \dots)$ is compact but not finite rank; the
compactness of $T$ follows from the fact that the Hilbert cube is compact.
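Alternatively (a quick sketch using the norm-limit fact stated next, rather than the Hilbert cube): let $T_n$ be the finite rank truncation $(x_1, \tfrac{x_2}{2}, \dots, \tfrac{x_n}{n}, 0, 0, \dots)$; then
$$\|T - T_n\| = \sup_{k > n} \frac{1}{k} = \frac{1}{n+1} \to 0,$$
so $T$ is a norm limit of finite rank operators, hence compact.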
The set of compact operators is closed in the topology generated by the
operator norm: if $T_n$ is a sequence of compact operators and $T$ is their norm limit,
$\|T_n - T\| = \sup_{\|x\|_X \le 1} \|(T - T_n)x\|_Y \to 0$ as $n \to \infty$, then $T$ is compact. The
proof of this fact is a good exercise using the criterion of compactness by total
boundedness.
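(Sketch of the exercise, for completeness: given $\varepsilon > 0$, pick $n$ with $\|T - T_n\| < \varepsilon/2$; since $T_n(B)$ is totally bounded it has a finite $\varepsilon/2$-net, and the same points form an $\varepsilon$-net for $T(B)$, so $T(B)$ is totally bounded; its closure is a closed subset of the complete space $Y$, hence compact.)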
It is also an easy exercise to show that sums, scalar multiples, and compositions of compact
operators with bounded linear operators are all compact: the set of compact
operators forms a two-sided ideal in the space of bounded linear operators.
As an example of a compact operator, consider the integral operator
$$T f(x) = \int_a^b k(x, y) f(y)\, dy$$
on the space of continuous functions on $[a, b]$, with the kernel $k(x, y)$ jointly continuous
in $x$ and $y$. One way to show compactness is to approximate the continuous
function $k(x, y)$ by polynomials using the Stone-Weierstrass theorem, together with the fact
that an operator of the form $\int_a^b p(x, y) f(y)\, dy$ with $p$ a polynomial is finite rank, hence compact
(the space of polynomials of degree at most $n$ is finite dimensional, with basis
$1, x, x^2, \dots, x^n$, recall).
More generally, we can prove its compactness using an important theorem, the
Arzela-Ascoli theorem.
Theorem. Let $M$ be a set of functions in $C([a, b])$; then the following are equivalent:
(i) $M$ is totally bounded;
(ii) $M$ is bounded and uniformly equicontinuous: for all $\varepsilon > 0$ there is $\delta > 0$
such that $|h| < \delta$ implies $|f(x + h) - f(x)| < \varepsilon$ for every $f \in M$.
The compactness of $T$ follows by applying Arzela-Ascoli: it suffices to check that $T(B)$ is bounded
and uniformly equicontinuous, but
$$|T f(x + h) - T f(x)| = \left| \int_a^b \bigl(k(x + h, y) - k(x, y)\bigr) f(y)\, dy \right|,$$
and uniform continuity of $k$ applies immediately.
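Spelling the estimate out (a routine step, included for completeness): for $\|f\| \le 1$,
$$|T f(x+h) - T f(x)| \le (b - a) \sup_{y \in [a,b]} |k(x+h, y) - k(x, y)|,$$
and the right-hand side is small uniformly in $x$ and $f$ once $|h|$ is small, by uniform continuity of $k$ on the compact square $[a,b]^2$.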


One can generalize this result; for example $T$ will still be compact when we
extend it to the space $L^2$. Then we need a theorem similar to the one quoted above,
called the Kolmogorov-Riesz compactness theorem, which we state here:
Theorem. Let $M$ be a set of functions in $L^p([a, b])$; then the following are equivalent:
(i) $M$ is totally bounded;
(ii) $M$ is bounded and uniformly equicontinuous in the integral sense: for all
$\varepsilon > 0$ there is $\delta > 0$ such that $|h| < \delta$ implies $\|f(\cdot + h) - f\|_p < \varepsilon$ for
every $f \in M$.
We will discuss Fredholm theory next time.
Sep 19: This time we start talking about Fredholm's theory of integral equations.
We first talk about an abstract equation of the form
$$S x = b,$$
where $x, b$ are elements of the Banach space $X$ and $S = I - T$, with $I$ the
identity operator on $X$ and $T$ a compact operator from $X$ to $X$.
We have
Lemma. Suppose we have an operator $S : X \to X$ as above; then
(i) $\ker S$ is finite dimensional;
(ii) $S(X)$, the range of $S$, is closed and of finite codimension;
(iii) if $\ker S = \{0\}$, then $S : X \to S(X)$ has a continuous inverse.
Proof. (i) $x \in \ker S$ if and only if $x = T x$; thus $x \in T(X)$. But $T$ is compact,
so $T(B)$ is a precompact set for $B$ the unit ball. Also, $T$ restricted to $\ker S$ is the identity
operator, so $T(\ker S \cap B) = \ker S \cap B$; this
shows the unit ball of $\ker S$ is precompact in the norm, therefore
$\ker S$ is finite dimensional, as it is locally compact.
(ii) We first show $S(X)$ is closed. Let $x_n \in X$ be a sequence such that
$y_n = S x_n$ converges to some $y \in X$; we want to show that $y \in S(X)$.
We need to discuss two cases; for brevity let $Z$ denote $\ker S$:

Functional Analysis

Tianyu Tao

Case 1: the $x_n$ are uniformly bounded. After passing to a subsequence we may assume
$T x_n$ converges to some $z \in X$; then $x_n = T x_n + y_n$ converges to $x := z + y$, and by
continuity of $S$, $y = \lim S x_n = S x \in S(X)$.
Case 2: the $x_n$ are not uniformly bounded. Replacing $x_n$ by $x_n - z_n$ for suitable $z_n \in Z$
(which does not change $S x_n$), we may assume $\|x_n\| \le 2\,\mathrm{dist}(x_n, Z)$; so if the previous case
does not apply, we may assume after passing to a subsequence that $\mathrm{dist}(x_n, Z) \to \infty$
as $n$ increases. Put $x_n' = x_n / \mathrm{dist}(x_n, Z)$; then $S x_n' = y_n / \mathrm{dist}(x_n, Z) =:
y_n'$ converges to 0 as $n \to \infty$, since $y_n$ is convergent, while $\mathrm{dist}(x_n', Z) = 1$ and $\|x_n'\| \le 2$.
Now, as in the first case, $x_n' \to x_0$ along a subsequence for some $x_0$, with $S x_0 = 0$, i.e. $x_0 \in Z$, but $\mathrm{dist}(x_0, Z) = 1$, a contradiction.
To show $S(X)$ has finite codimension, we argue by contradiction: if $S(X)$ were of
infinite codimension, we could build a strictly increasing chain of closed subspaces
$S(X) = Y_0 \subset Y_1 \subset Y_2 \subset \cdots$, each obtained from the previous one by adding the span
of one vector (each $Y_k$ is closed, being a closed subspace plus a finite dimensional one).
Pick $b_k \in Y_k$ with $\|b_k\| = 1$ and $\mathrm{dist}(b_k, Y_{k-1}) > 1 - \varepsilon$ by the Riesz lemma; for $k > l$ we
have
$$T b_k - T b_l = b_k - \bigl(b_l + S b_k - S b_l\bigr),$$
and $b_l + S b_k - S b_l \in Y_{k-1}$, so $\|T b_k - T b_l\| > 1 - \varepsilon$. This contradicts the compactness of $T$
and our choice of the $b_k$.
Sep 24: We finished proving parts (i) and (ii) of the lemma last time. For (iii), one notes
it is a consequence of the open mapping theorem, but we can show it directly:
Proof. Suppose $S x_n = y_n$ and $y_n \to y$ for some $y \in X$. Again, we want to show
that $x_n$ converges. Similarly, there are two cases to discuss, which resemble the
discussion from last time very much.
If the $x_n$ are uniformly bounded: $x_n = T x_n + y_n$, and compactness of $T$ ensures we can
pick a subsequence so that $T x_n$ converges; then $x_n$ converges.
If the $x_n$ are not uniformly bounded: recall $\ker S = \{0\}$ here, so put
$x_n' = x_n / \|x_n\|$; since $\|x_n\| \to \infty$ along a subsequence, $S x_n' \to 0$, and we know what to do:
as before $x_n'$ has a convergent subsequence with limit $x_0$, $\|x_0\| = 1$ and $S x_0 = 0$,
contradicting $\ker S = \{0\}$.

Our next step is to prove:

Lemma. Suppose we have an operator $S : X \to X$ as above; put $Z_i = \ker S^i$ and
$Y_i = S^i(X)$. Then:
(i) $\dim Z_k < \infty$;
(ii) $Y_k$ is closed and has finite codimension;
(iii) there exists $k_0$ such that $Z_{k_0 + l} = Z_{k_0}$ and $Y_{k_0 + l} = Y_{k_0}$ for $l = 1, 2, \dots$
Proof. First observe that
$$\{0\} \subset Z_1 \subset Z_2 \subset \cdots \qquad \text{and} \qquad Y_1 \supset Y_2 \supset \cdots.$$
Also, $S^k$ is of the form $I - T_k$ where $T_k$ is a compact operator, so claims (i) and
(ii) are immediate from the last lemma. To show claim (iii), suppose the conclusion does
not hold for the $Z_k$; then we have proper containments $Z_1 \subsetneq Z_2 \subsetneq \cdots$. Let us pick $a_i \in Z_i$ with
$\|a_i\| = 1$ and $\mathrm{dist}(a_{i+1}, Z_i) > 1/2$; this is possible by the Riesz lemma. Consider then
$T a_k - T a_l$ for $k > l$; this is equal to
$$a_k - \bigl(a_l + S a_k - S a_l\bigr).$$
Now $a_k \in Z_k$, while all the terms in the bracket lie in $Z_{k-1}$ (note $S a_k \in Z_{k-1}$ since
$S^{k-1}(S a_k) = S^k a_k = 0$), hence
$\|T a_k - T a_l\| > 1/2$, contradicting the compactness of $T$. The stabilization of the $Y_k$ is
proved similarly.
Our final result will be
Lemma. Let $k_0$ be as in the last lemma. Then $X = Z_{k_0} \oplus Y_{k_0}$, the subspaces $Z_{k_0}, Y_{k_0}$ are
$S$-invariant, and the restriction of $S$ to $Y_{k_0}$ is an isomorphism.
Proof. To show the sum is direct, we must show it is not possible to find a nonzero
$x \in Z_{k_0} \cap Y_{k_0}$; for brevity let $Z, Y$ denote this kernel and range.
Suppose $x \in Z \cap Y$; then $x = S^{k_0} x'$ for some $x'$, and by stabilization
$$0 = S^{k_0} x = S^{2 k_0} x', \quad \text{so } x' \in Z_{2 k_0} = Z_{k_0},$$
hence $x = S^{k_0} x' = 0$.
For $x \in X$: since $S^{k_0} x \in Y_{k_0} = Y_{2 k_0}$, there is $x'$ with $S^{k_0} x = S^{2 k_0} x'$, and then
$$x = \bigl(x - S^{k_0} x'\bigr) + S^{k_0} x', \qquad S^{k_0}\bigl(x - S^{k_0} x'\bigr) = 0,$$
which is the desired decomposition. The isomorphism part follows from the lemma
of last time.

Sep 26: We have proved that for $S = I - T$, where $T$ is compact and $I$ the
identity operator on $X$, we can decompose $X$ as the direct sum $Y \oplus Z$, where $S|_Y$
is an isomorphism, $Z$ is finite dimensional, and both subspaces are $S$-invariant.
A corollary of the above result is that the dimension of $\ker S$ must be equal
to the codimension of the range of $S$. This is the Fredholm alternative for the
operator $S$.
Proof. To see this, observe that by the above, injectivity and surjectivity of $S$ are equivalent:
if $S$ is injective, then so is $S^n$ by induction, hence $Z = \ker S^n = \{0\}$ and
$X = Y = S^n X \subseteq S X$; if $S$ is surjective, then $S^n$ is surjective, so $X = Z \oplus X$
shows $Z = \{0\}$.
Now suppose $x_1, \dots, x_m$ spans $\ker S$ and (the images of) $y_1, \dots, y_n$ span $X/S(X)$.
If $m \le n$, put $T' = T + F$, where $F x_i = y_i$ and $F$ is 0 on a complement of $\ker S$. If
$(T' - I)x = 0$, we must have $F x = S x$; since $[F x] = [S x] = [0]$ in $X/S(X)$ and the $[y_i]$ are
independent, $F x = S x = 0$ by our choice of the $y_i$. Thus
$x = \sum a_i x_i$ as $x \in \ker S$, therefore $0 = F x = \sum a_i F x_i = \sum a_i y_i$ shows $a_i = 0$ for
all $i$, and $x = 0$; so $I - T'$ is injective, hence surjective by the Fredholm alternative for the
compact operator $T'$. But the range of $I - T' = S - F$ is contained in $S(X) + \mathrm{span}\{y_1, \dots, y_m\}$,
so surjectivity forces $m = n$.
(In essence, the image of $F$ complements $S(X)$ and the kernel of $F$ complements $\ker S$.)
If $m \ge n$, similarly put $T' = T + F$ with $F x_i = y_i$ for $i \le n$ and $F x_i = 0$ for $i > n$; this
ensures that $I - T'$ is surjective, hence injective, and since any $x_i$ with $i > n$ would lie in its
kernel, we get $m = n$ again. (Proof shown to us by Dr. Garrett.)
Let's talk about the dual of $X$, denoted $X^*$: the set of all continuous
linear functionals on $X$. It is a normed space with the operator norm
$$\|l\| = \sup_{\|x\| \le 1} |l x|.$$
When $X$ is finite dimensional, $X^*$ agrees with the algebraic dual of $X$ and
has the same dimension as $X$.
There is a natural map from $X$ to its double dual $X^{**}$, the set of continuous linear
functionals on $X^*$, given by the identification $i(x)(l) = l(x)$, also known as the evaluation
map; the Hahn-Banach theorem shows this is an isometric embedding, which we
will prove later.
Suppose we have a linear operator $A : X \to Y$; it induces a dual operator
$A^* : Y^* \to X^*$, defined by $A^*(l)(x) := l(A x)$ for $l \in Y^*$. For $N \subset X$, we define
the set $N^\perp := \{l \in X^* : l|_N = 0\}$, and for $M \subset X^*$ we also define the set
$M_\perp := \{x \in X : l(x) = 0 \ \text{for all } l \in M\}$. Consider then the equation $S x = b$ where
$S = I - T$, $T$ being compact; the following fact is important:
Proposition. $(X/S(X))^* \cong \ker S^*$.
Proof. Let $\Phi : \ker S^* \to (X/S(X))^*$ be the map $l \mapsto \phi$ with $\phi([x]) = l(x)$. This is
well-defined, as $\phi([S x]) = l(S x) = S^*(l)(x) = 0$ for any $x \in X$.
To find the inverse of $\Phi$: for $\phi \in (X/S(X))^*$, let $l \in \ker S^*$ be the linear functional
defined by $l = \phi \circ \pi$, where $\pi : X \to X/S(X)$ is the projection map; to see
$l \in \ker S^*$, note $S^*(l)(x) = l(S x) = \phi(\pi(S x)) = \phi([0]) = 0$ by our choice.
As a consequence, we know $X/S(X)$ is finite dimensional, hence its dual has
the same dimension; this says $\dim \ker S = \dim \ker S^* = \mathrm{codim}\, S(X)$.

Let $\{l_1, \dots, l_n\} = M$ be a basis of $\ker S^*$; then $S(X) = M_\perp$. Indeed,
if $y \in S(X)$, we have $y = S(x)$ for some $x \in X$, and for $l_j \in M$ we have
$l_j(y) = l_j(S(x)) = S^*(l_j)(x) = 0$ since $l_j \in \ker S^*$; this shows $S(X) \subseteq M_\perp = (\ker S^*)_\perp$,
and a dimension count (both have the same finite codimension) shows
$S(X) = (\ker S^*)_\perp$. We may thus rephrase the Fredholm alternative as:
Theorem. For the equation $S x = b$, where $x, b \in X$ and $S = I - T$, we have:
(i) either $S x = 0$ has only the trivial solution $x = 0$, and in this case $S x = b$ is
uniquely solvable for each $b$; or
(ii) $S x = b$ is solvable exactly when $b \in (\ker S^*)_\perp$.
Explanation. We saw before that the dimension of the kernel of $S$ equals the
codimension of the range; this tells us that either $S$ has a non-trivial kernel, or
$S = I - T$ is invertible. Thus $S x = b$ is uniquely solvable for each $b \in X$ if and
only if $S x = 0 \Rightarrow x = 0$. When the kernel is not trivial, we know $S$ is onto its
range $S(X)$, and the characterization of $S(X)$ above shows the theorem holds.

Sep 29: The relationship between $\ker S$, $\ker S^*$ and $X/S(X)$ for a Banach
space is not as clean as in the setting of a Hilbert space, which we will talk about
today.
Definition. A (real) Hilbert space $H$ is a vector space over $\mathbb{R}$ equipped with an
inner product $\langle \cdot, \cdot \rangle : H \times H \to \mathbb{R}$ satisfying positive definiteness, bi-linearity and symmetry,
such that $H$ is complete with respect to the norm induced by the inner product, $\|x\|^2 =
\langle x, x \rangle$; if it is not complete, we call $H$ a pre-Hilbert space.
There are a lot of nice properties of Hilbert spaces:
Lemma. For $K \subset H$ a closed convex subset of a Hilbert space $H$ and
$x \notin K$, there is $x_0 \in K$ such that $\mathrm{dist}(x, K) = \|x - x_0\|$, and $x_0$ is uniquely
characterized by $\langle y - x_0, x - x_0 \rangle \le 0$ for all $y \in K$.
Proof. (This is not always true in a Banach space; for example, see the SE
post "On the norm of a quotient of a Banach space".)
Back to the proof: we pick $x_n \in K$ with $\|x - x_n\|$ converging to the infimum
$d = \mathrm{dist}(x, K)$. We will show $x_n$ is Cauchy. We need the parallelogram law, which
asserts
$$2\bigl(\|a\|^2 + \|b\|^2\bigr) = \|a + b\|^2 + \|a - b\|^2$$
for any $a, b \in H$; this can be shown directly by expanding the norms using the inner
product and its properties.
Put $a = x - x_n$, $b = x - x_m$ for $m, n$ large; we have
$$\tfrac12 \|x - x_n\|^2 + \tfrac12 \|x - x_m\|^2 = \tfrac14\bigl(\|2x - x_n - x_m\|^2 + \|x_n - x_m\|^2\bigr)
= \Bigl\|x - \tfrac{x_m + x_n}{2}\Bigr\|^2 + \tfrac14 \|x_n - x_m\|^2.$$
By convexity, $\tfrac{x_m + x_n}{2} \in K$, thus the first term on the right-hand side is at least $d^2$, so
$$\tfrac14 \|x_n - x_m\|^2 \le (d^2 + \varepsilon) - d^2 = \varepsilon$$
if $n, m$ are large enough that $\|x - x_m\|^2, \|x - x_n\|^2$ are within $\varepsilon$ of $d^2$. This shows
$x_n$ is Cauchy, hence convergent by completeness of $H$, and by closedness of $K$ there is
$x_0 \in K$ achieving this distance.
Consequently:
If $K$ is a subspace, it is automatically convex, and the following observation
holds: the map $P : H \to K$, $x \mapsto x_0$, is the orthogonal projection from $H$
onto $K$. Indeed $\langle y - x_0, x - x_0 \rangle \le 0$ for all $y \in K$, so $\langle z, x - x_0 \rangle \le 0$ for all $z \in K$, and
replacing $z$ by $-z$ gives $\langle z, x - x_0 \rangle = 0$ for all $z \in K$, which shows $x - x_0 \perp K$.
$P$ is a linear map such that $P$ is the identity on $K$ and $P^2 = P$; also $\|x\|^2 =
\|P x\|^2 + \|x - P x\|^2$, and $\|P\| = 1$ if $K$ is non-trivial. Consequently $Q = I - P$
is also a projection of norm 1 (onto $K^\perp$), and $P Q = Q P = 0$. This shows that
any closed subspace $Y$ of $H$ has an orthogonal complement $Y^\perp$; this is in
contrast with the situation in Banach spaces, where not every closed subspace
is complemented.
Perhaps the following theorem is the most important one:
Theorem (Riesz Representation). For any $l \in H^*$, the dual of a Hilbert space $H$,
there exists a unique $a_l \in H$ such that $l(x) = \langle x, a_l \rangle$. In other words, the dual of a
Hilbert space can be identified with the space itself.
Proof. This can be found in Rudin's Real and Complex Analysis. Here we indicate
some different ideas:
The kernel of $l$ (for $l \neq 0$) is a closed subspace of codimension 1; let $Y$ denote this subspace.
We know $H = Y \oplus Y^\perp$, where $Y^\perp$ is the orthogonal complement of $Y$; it
must then have dimension 1 and is isomorphic to $\mathbb{R}$. Consider the projection
$Q : H \to Y^\perp$; it is then a linear functional via our identification between $Y^\perp$
and $\mathbb{R}$. Now $l$ and $Q$ have the same kernel, thus $l$ must be a multiple
of $Q$; that is, we have $Q x = \langle x, b \rangle b$ for a unit vector $b \in Y^\perp$, and $l(x) = a \langle x, b \rangle$ for
some scalar $a$.
For the "same kernel" fact: if $N = \ker f = \ker g$ is of codimension 1, consider $\tilde f, \tilde g : X/N \to \mathbb{R}$
defined by $\tilde f([x]) = f(x)$ and similarly for $\tilde g$; these are well-defined maps by
definition. But $\dim X/N = 1$, so $\tilde f = a \tilde g$; pulling the maps back, we have
$f = a g$.
Alternatively, consider the functional $\Phi(x) = \tfrac12 \|x\|^2 - l(x)$ on $H$; there exists $a$ at which $\Phi$
attains its minimum, in other words
$$\frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} \Phi(a + \varepsilon y) = 0$$
for every $y$, and this says $l(y) = \langle a, y \rangle$. For more details, see the "Existence of a
minimum" notes.

Oct 01: The Hilbert space version of the Fredholm alternative is very simple,
since $H$ is naturally identified with $H^*$ by the Riesz representation theorem. And the
adjoint operator $A^*$ is very easy to define: for $y \in H$, consider the functional
$l(x) = \langle A x, y \rangle$; by Riesz, there exists a unique $b_y$ such that $l(x) = \langle x, b_y \rangle$, and this
defines the adjoint $A^*$ by $A^* y = b_y$. It satisfies the relation
$$\langle A x, y \rangle = \langle x, A^* y \rangle.$$
It is easy to see that $A^{**} = A$, and compactness of $A$ implies that of $A^*$.
Proof. Let $y_n \in H$ be a sequence of points in the unit ball $B$; we want to find a
convergent subsequence of $A^* y_n$. Let $l_y(x) = \langle x, y \rangle$ and consider the family $M = \{l_{y_n} :
\|y_n\| \le 1\}$; this is a bounded, equicontinuous family of functions. We wish to apply the Arzela-Ascoli
theorem on $C(K)$ for some compact $K$: take $K = \overline{A(B)}$. Arzela-Ascoli applies
to the restrictions of the $l_{y_n}$ to $K$, so along a subsequence they converge uniformly on $K$; since
$$\|A^* y_n - A^* y_m\| = \sup_{x \in B} \langle x, A^* y_n - A^* y_m \rangle = \sup_{x \in B} |l_{y_n}(A x) - l_{y_m}(A x)|,$$
the corresponding subsequence of $A^* y_n$ is Cauchy in $H$, which is what we want.
For a Fredholm equation $S x = b$, where $b \in H$ and we require a solution $x \in H$:
in a Hilbert space we have $(Y^\perp)^\perp = Y$ for closed subspaces $Y$. Rewriting the definitions we find
$(\ker S^*)^\perp = S(H)$ and $(\ker S)^\perp = S^*(H)$, so $H = \ker S \oplus S^*(H)$ shows
$\mathrm{codim}\, S^*(H) = \dim \ker S$, and we have
$$\dim \ker S = \dim \ker S^* = \mathrm{codim}\, S(H) = \mathrm{codim}\, S^*(H).$$
Now consider $H = L^2([a, b])$; one can translate the Fredholm theorems we proved
so far to get results for the solvability of integral equations of the form
$$f(x) - \int_a^b k(x, y) f(y)\, dy = g(x),$$
where $g \in H$, $k$ is jointly continuous in $x$ and $y$, and we seek a solution $f \in H$. We
also have the Banach space version, where we take our space to be $C([a, b])$.
Oct 06: We start with an example about the dual of the space $X = C([a, b])$,
the space of continuous functions on the compact interval $[a, b]$. We know
$X^*$ is the space of Radon measures (locally finite, inner regular measures) on $[a, b]$,
where the norm is the total variation.
We have the integral operator
$$T f(x) = \int_a^b k(x, y) f(y)\, dy.$$
The dual operator $T^*$ sends a measure $\mu$ to the functional
$f \mapsto \int T f\, d\mu = \iint k(x, y) f(y)\, dy\, d\mu(x)$, which equals (by Fubini)
$$\int f(y) \int k(x, y)\, d\mu(x)\, dy = \int f(y)\, \varphi(y)\, dy = \int f(y)\, d\nu(y),
\qquad \varphi(y) := \int k(x, y)\, d\mu(x).$$
Thus $T^* \mu = \nu$, the measure with density $\varphi$ with respect to Lebesgue measure.
We now switch to the spectral theory of compact operators. The set-up
is as follows: we have a Banach space $X$ over $\mathbb{C}$, and $L(X)$ the space of continuous
linear operators from $X$ to $X$ with the operator norm $\|A\| = \sup_{\|x\| \le 1} \|A x\|$, which
makes it a Banach space. We denote by $L^*(X)$ the space of invertible bounded linear
operators on $X$.
For $A \in L(X)$, the spectrum of $A$, denoted $\sigma(A)$, is the set of complex
numbers $\lambda$ such that $\lambda - A$ is not in $L^*(X)$.
The spectral radius $\rho(A)$ is the number $\sup_{\lambda \in \sigma(A)} |\lambda|$.
Some important facts about the spectrum and spectral radius:
Theorem. We have:
(i) $\sigma(A)$ is nonempty and compact for $A \in L(X)$;
(ii) $\rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}$; this is called Gelfand's formula.
Before proving these results, some machinery is needed. Observe first that if
$A \in L(X)$ with $\|I - A\| < 1$, then $A \in L^*(X)$. To see this, simply put
$B = I - A$; then $\|B\| < 1$, and the series
$$I + B + B^2 + \cdots$$
converges absolutely and is the inverse of $I - B = A$, by direct computation.
This immediately tells us that $L^*(X)$ is an open set: suppose $A$ is invertible and
let $B$ be such that $\|B\| < \varepsilon$ for some small $\varepsilon$; we want $A - B$ to be invertible as
well. Indeed, rewrite $A - B$ as $A(I - A^{-1} B)$; if $\|B\| < 1/\|A^{-1}\|$ we are in business.
We can find $(A - B)^{-1}$ explicitly when $B$ satisfies this smallness condition; we have
$$(A - B)^{-1} = (I - A^{-1} B)^{-1} A^{-1} = A^{-1} + A^{-1} B A^{-1} + A^{-1} B A^{-1} B A^{-1} + \cdots.$$
Hence, for instance, $(A - \varepsilon B)^{-1} = A^{-1} + \varepsilon A^{-1} B A^{-1} + O(\varepsilon^2)$, etc.
We now consider the function $f : \mathbb{C} \to L(X)$ defined by $f(\lambda) = \lambda - A$ for a
fixed operator $A$. By definition we have
$$\mathbb{C} \setminus \sigma(A) = f^{-1}(L^*(X)),$$
and continuity of $f$ and openness of $L^*$ show that $\sigma(A)$ is closed. We can then establish
the boundedness of $\sigma(A)$ at this point: write $\lambda - A$ as $\lambda(I - \lambda^{-1} A)$; if $\lambda$ is such that
$\|\lambda^{-1} A\| < 1$, then $\lambda - A$ is invertible and $\lambda \notin \sigma(A)$. This shows $\rho(A) \le \|A\|$.
With this, $\sigma(A)$ is compact, since it is both closed and bounded,
by the Heine-Borel theorem.
The main reason we introduce the operator-valued function $f$ is that we are
able to apply complex analysis, especially the theory of holomorphic functions, to
it with pretty much no difficulty here.
Definition. The function $f$ defined above (more generally, any $L(X)$-valued function on an open
set $\Omega \subseteq \mathbb{C}$) is holomorphic if for every $z_0 \in \Omega$ there
exist $\delta > 0$ and coefficients $a_i \in L(X)$ such that
$$f(z) = a_0 + a_1(z - z_0) + a_2(z - z_0)^2 + \cdots,$$
where the series converges absolutely for $|z - z_0| < \delta$.
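As an illustration (a standard expansion, included here as a sketch): the resolvent $\lambda \mapsto (\lambda - A)^{-1}$ is holomorphic on $\mathbb{C} \setminus \sigma(A)$, since for $\lambda_0 \notin \sigma(A)$ we may write
$$(\lambda - A)^{-1} = \bigl((\lambda_0 - A) - (\lambda_0 - \lambda)\bigr)^{-1} = \sum_{n \ge 0} (\lambda_0 - \lambda)^n (\lambda_0 - A)^{-(n+1)},$$
the series converging absolutely for $|\lambda - \lambda_0| < 1/\|(\lambda_0 - A)^{-1}\|$ by the Neumann series argument above.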
Oct 08: Let us continue talking about holomorphy of operator-valued
functions of a complex variable. For holomorphic $f$ as above, we define
$$r = \frac{1}{\limsup_n \|a_n\|^{1/n}}$$
as the radius of convergence of the series at $z_0$. Since $f$ is given by a convergent power
series at each point of its domain, we would like to have Cauchy's formula, and this is true with our
definition. We state it here without proof:
$$f(z) = \frac{1}{2\pi i} \oint_\gamma \frac{f(\zeta)}{\zeta - z}\, d\zeta$$
for a suitably oriented contour $\gamma$. Now let us prove the first claim, that $\sigma(A)$ is
nonempty.

Proof. Let $f : \lambda \mapsto (\lambda - A)^{-1}$, defined off $\sigma(A)$; this is a holomorphic function and $f \to 0$ at $\infty$,
as $\|f(\lambda)\| \le C/|\lambda|$ for some constant $C$ when $|\lambda|$ is large. Indeed, we may write
$$(\lambda - A)^{-1} = \lambda^{-1}\Bigl(I - \frac{A}{\lambda}\Bigr)^{-1} = \lambda^{-1} \sum_{n \ge 0} \frac{A^n}{\lambda^n}$$
if $|\lambda|$ is large enough that the series converges. If $\sigma(A)$ were empty, $f$ would be holomorphic and
bounded on all of $\mathbb{C}$, so Liouville's theorem would apply and $f$ would be constant (in fact zero, being 0 at
infinity), which is absurd. Hence $\sigma(A)$ is not empty: there are $\lambda \in \mathbb{C}$ such that $\lambda - A$
is not invertible.
Let's turn to the proof of Gelfand's formula. We already have the bound
$\rho(A) \le \|A\|$; for brevity let $\rho$ denote the spectral radius of $A$. The next step is to
show $\rho^n \le \|A^n\|$.
Suppose $\lambda^n - A^n \in L^*(X)$; we claim that $\lambda - A$ is also invertible. This means
$\sigma(A)^n \subseteq \sigma(A^n)$.
Proof. Write
$$\lambda^n - A^n = (\lambda - A)\bigl(\lambda^{n-1} + \lambda^{n-2} A + \cdots + \lambda A^{n-2} + A^{n-1}\bigr) = (\lambda - A) B;$$
since $B$ commutes with $\lambda - A$, we get $(\lambda - A)^{-1} = B(\lambda^n - A^n)^{-1}$. Thus if $\lambda^n \notin \sigma(A^n)$ then
$\lambda \notin \sigma(A)$, i.e.
$$\lambda \in \sigma(A) \ \Rightarrow\ \lambda^n \in \sigma(A^n).$$
This shows $|\lambda^n| \le \|A^n\|$ for each $\lambda \in \sigma(A)$; thus $\rho \le \liminf_n \|A^n\|^{1/n}$.
Oct 10: We continue with the proof of Gelfand's formula. We wish to show
$\limsup_n \|A^n\|^{1/n} \le \rho$. To this end we need the following claim: if $A \in L(X)$, then
we have
$$A^n = \frac{1}{2\pi i} \oint_\gamma \lambda^n (\lambda - A)^{-1}\, d\lambda$$
for a suitable contour $\gamma$ (a large circle around $\sigma(A)$, say).
Proof. For the claim, see my homework. The rest is the application of the claim to
show the other direction.
The main point is that we can deform the contour continuously as long
as it does not cross $\sigma(A)$; this can be shown by drawing a big circle around $\sigma(A)$,
chopping the region between the circle and $\gamma$ into pieces, and applying Cauchy's theorem. In
particular, for any $R_0 > \rho$ we have
$$\|A^n\| = \left\| \frac{1}{2\pi i} \oint_{|\lambda| = R_0} \lambda^n (\lambda - A)^{-1}\, d\lambda \right\|
\le \frac{1}{2\pi} \oint_{|\lambda| = R_0} |\lambda|^n \|(\lambda - A)^{-1}\|\, |d\lambda| \le C(R_0)\, R_0^{\,n}$$
for some constant $C(R_0)$ independent of $n$, so
$$\|A^n\|^{1/n} \le C(R_0)^{1/n} R_0 \to R_0,$$
which shows $\limsup \|A^n\|^{1/n} \le R_0$ for every $R_0 > \rho$, hence $\limsup \|A^n\|^{1/n} \le \rho$.
Having proved Gelfand's formula, we are ready to discuss the spectral theory
of compact operators. We know that in the case of finite dimensional spaces, the
spectrum consists of exactly the eigenvalues of the linear operator/matrix $A$. But in
the infinite dimensional case, things are different.
For example, in the case $X = L^2(0, 1)$, let $A$ be the operator
$$A f(x) = x f(x).$$
One can see the spectrum is actually the entire interval $[0, 1]$, but $A$ has no eigenvalues.
We move on to the following
Theorem. Let $X$ be a complex Banach space and let $T \in L(X)$ be compact. Then:
(i) $\lambda \in \sigma(T) \setminus \{0\}$ implies $\lambda$ is isolated in $\sigma(T)$;
(ii) each $\lambda \in \sigma(T) \setminus \{0\}$ is an eigenvalue, and $X = Z_\lambda \oplus Y_\lambda$, where $Z_\lambda =
\ker(\lambda - T)^m$ for some $m$ is finite dimensional and $Y_\lambda = (\lambda - T)^m X$ is
of finite codimension; moreover $Z_\lambda, Y_\lambda$ are $T$-invariant and $\lambda - T$ is an
isomorphism when restricted to $Y_\lambda$.
Oct 13: Let us start proving the theorem from last time; we only discuss the idea
here.
Sketch of proof. Pick $\lambda_0 \in \sigma(T) \setminus \{0\}$; we need to show it is isolated. Rewrite
$\lambda_0 - T$ as $\lambda_0(I - \lambda_0^{-1} T)$, and we may apply our previous results on operators which
are the difference of the identity and a compact operator. In particular, there exist $Z, Y$
such that $X = Z \oplus Y$, $Z$ is finite dimensional, and $\lambda_0 - T$ is an isomorphism when
restricted to $Y$. To show $\lambda_0$ is an isolated point, it is enough to show that
a small perturbation of $\lambda_0$ to $\lambda_0 + \varepsilon$ (with $0 < |\varepsilon|$ small) still makes $\lambda_0 + \varepsilon - T$ invertible;
and for this it suffices to consider $T$ restricted to $Z$ and to $Y$ separately.
The situation on $Z$ is the case of a nilpotent matrix ($\lambda_0 - T$ is nilpotent there), and
adding $\varepsilon I$ with $\varepsilon \neq 0$ to a nilpotent matrix makes it invertible; the situation
on $Y$ is not hard either: we know $\lambda_0 - T$ is already invertible there, and
the set of invertible operators is open, hence a small perturbation of it is still
invertible when restricted to $Y$. In total, a small perturbation of $\lambda_0$ to $\lambda_0 + \varepsilon$ makes
$\lambda_0 + \varepsilon - T$ invertible, so $\lambda_0$ is isolated.
Let us see some examples:
Let $X$ be the space of square summable sequences, $\ell^2(\mathbb{N})$, and let $T$ be the operator
$$(x_1, x_2, \dots, x_n, \dots) \mapsto \Bigl(x_1, \frac{x_2}{2}, \dots, \frac{x_n}{n}, \dots\Bigr).$$
The spectrum is the set $\{\tfrac1n : n = 1, 2, \dots\} \cup \{0\}$. Each $\tfrac1n$ is an eigenvalue
with eigenvector $e_n$ (the $n$-th entry is 1 and 0 elsewhere), but 0 is not an
eigenvalue although it is in the spectrum: the obvious inverse of $T$ is not a
bounded operator!
Consider a slight modification of the above $T$:
$$(x_1, x_2, \dots, x_n, \dots) \mapsto \Bigl(0, x_1, \frac{x_2}{2}, \dots, \frac{x_n}{n}, \dots\Bigr).$$
Now $\lambda - T$ is invertible for each $\lambda \neq 0$: a simple computation shows $\|T^n\| \le
\frac{1}{n!}$, which implies
$$\rho(T) = \lim_n \|T^n\|^{1/n} \le \lim_n \Bigl(\frac{1}{n!}\Bigr)^{1/n} = 0$$
by Gelfand's formula and the Stirling approximation $n! \sim \sqrt{2\pi n}\,(n/e)^n$. So the
only possible member of $\sigma(T)$ is 0, and we know it is there because $\sigma(T)$ is
not empty.
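(A quick supporting computation, added for completeness: writing $(T x)_1 = 0$ and $(T x)_{k+1} = x_k / k$, one finds by induction
$$(T^n x)_{k+n} = \frac{x_k}{k (k+1) \cdots (k+n-1)},$$
so $\|T^n\| = \sup_{k \ge 1} \frac{1}{k(k+1)\cdots(k+n-1)} = \frac{1}{n!}$, attained at $k = 1$.)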
Oct 15: Let us shift our attention from general Banach spaces to Hilbert spaces;
in particular, consider compact self-adjoint operators on a Hilbert space. Here we
have a complete description of the spectrum:
Theorem. Let $X$ be a complex Hilbert space and let $T \in L(X)$ be compact and self-adjoint.
The whole spectral theory of compact operators on Banach spaces of course
applies here, and moreover we have:
(i) each $\lambda \in \sigma(T) \setminus \{0\}$ is real;
(ii) distinct eigenvalues correspond to orthogonal eigenspaces, and $X$ is the Hilbert
sum of the eigenspaces; in this case we say a complete Hilbert basis of eigenvectors exists for
$X$.
Proof. See homework and various notes from Dr. Garrett.
Let us talk briefly about the notion of Hilbert sum. The setting is that we have a
family of Hilbert spaces $X_\alpha$, where $\alpha$ runs over some arbitrary index set. We define the
Hilbert sum space $X$, denoted $\bigoplus_\alpha X_\alpha$, as the set of families
$$\Bigl\{(x_\alpha)_\alpha : \sum_\alpha \|x_\alpha\|^2 < \infty\Bigr\}.$$

It becomes an inner product space after we introduce the inner product
$$\langle (x_\alpha), (y_\alpha) \rangle = \sum_\alpha \langle x_\alpha, y_\alpha \rangle_{X_\alpha}.$$
It is true that $X$ is complete with respect to the norm induced by this inner
product, but we will not prove it here.
Recall that the usual finite dimensional Hilbert space over $\mathbb{R}$ (or $\mathbb{C}$) is just isomorphic
to $\mathbb{R}^n$, the direct sum of $n$ copies of $\mathbb{R}$.
A similar situation occurs if $X$ is infinite dimensional and separable: in this
case we are certain that $X$ is isomorphic to $\bigoplus_{1}^{\infty} \mathbb{R}$. The details can be looked up
in Rudin, for example.
But as Banach spaces such sums might be different when endowed with different
norms; for example, there is the sup norm $\sup_\alpha \|x_\alpha\|$, the 1-norm $\|x\|_1 = \sum_\alpha \|x_\alpha\|$, ...
Oct 17: Today we talk about complex vs. real vector spaces, and the notion of
complexifying a real vector space.
Suppose $X$ is a real vector space; we can make it into a complex space as follows:
just define $X_{\mathbb{C}}$ to be the set $\{x_1 + i x_2 : x_1, x_2 \in X\}$, and define addition and scalar
multiplication in the usual way.
The fancier way (in my opinion) is to think of $X_{\mathbb{C}}$ as the tensor product of
$X$ and $\mathbb{C}$, with the norm
$$\|x + i y\|_{X_{\mathbb{C}}} := \sup_{\theta \in [0, 2\pi)} \|x \cos\theta + y \sin\theta\|_X.$$
One way to think about the tensor product is to view the elements of $X \otimes \mathbb{C}$ as
$\mathbb{R}$-linear maps from $\mathbb{C}$ to $X$; then the norm can be seen as a special case
of the operator norm (a very rough sketch...).
In order to complexify a Hilbert space, one has to be careful with the inner
product; we usually require $\langle \lambda x, y \rangle = \lambda \langle x, y \rangle$ and $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for $\lambda \in \mathbb{C}$. The
natural way to do so is to set
$$\langle x_1 + i y_1,\, x_2 + i y_2 \rangle_{\mathbb{C}} = \langle x_1, x_2 \rangle_{\mathbb{R}} + \langle y_1, y_2 \rangle_{\mathbb{R}} + i\bigl(\langle y_1, x_2 \rangle_{\mathbb{R}} - \langle x_1, y_2 \rangle_{\mathbb{R}}\bigr).$$
For a linear operator $T : X \to X$, we want to extend it after we complexify $X$;
the natural definition is
$$T_{\mathrm{ext}}(x + i y) = T x + i T y.$$
In the process of this extension, some interesting questions arise:
recall that in the case of $X$ finite dimensional, a matrix $A : \mathbb{R}^n \to \mathbb{R}^n$
does not in general have an eigenvalue over $\mathbb{R}$, but it always
has an eigenvalue over $\mathbb{C}$ by the fundamental theorem of algebra. In other words,
the existence of a 1 (complex) dimensional (real 2 dimensional) invariant subspace
depends on the base field, and one might expect more delicate things to happen
when one considers the infinite dimensional case.
We study this question in the case where $A$ is a self-adjoint operator.
Oct 20: So we talked about complexification of real Banach spaces last time
and started talking about self-adjoint operators on a (complex) Hilbert space. Recall
the Hilbert adjoint $A^*$ of a linear operator $A$ is defined by
$$\langle A x, y \rangle = \langle x, A^* y \rangle$$
for any $x, y \in H$; its existence is ensured by Riesz representation (Hilbert
space version). The self-adjoint operators are those with $T^* = T$ (obvious
from the name).
Here is a classical calculation. Suppose $\lambda, \mu$ are eigenvalues of a self-adjoint $T$ with eigenvectors
$x_\lambda, x_\mu$:
$$\lambda \langle x_\lambda, x_\mu \rangle = \langle T x_\lambda, x_\mu \rangle = \langle x_\lambda, T x_\mu \rangle = \langle x_\lambda, \mu x_\mu \rangle = \bar\mu \langle x_\lambda, x_\mu \rangle.$$
This shows, in particular: taking $\mu = \lambda$ and $x_\mu = x_\lambda$ with $\langle x_\lambda, x_\lambda \rangle = 1$ gives $\lambda = \bar\lambda$,
so necessarily $\lambda \in \mathbb{R}$; also, if $\lambda \neq \mu$, then $\langle x_\lambda, x_\mu \rangle = 0$, so different eigenvalues
correspond to orthogonal eigenvectors.
Now recall that if $T$ happens to be compact on a Banach space $X$, we have the
decomposition
$$X = Z_{\lambda_1} \oplus \cdots \oplus Z_{\lambda_n} \oplus Y(\lambda_1, \dots, \lambda_n),$$
where $Z_{\lambda_i} = \ker(\lambda_i - T)^{m_i}$ for some integer $m_i$ and $Y(\lambda_1, \dots, \lambda_n) = \bigcap_i Y_{\lambda_i}$, with
each $Y_{\lambda_i} = \mathrm{range}(\lambda_i I - T)^{m_i}$.
In a Hilbert space, with $T$ moreover self-adjoint, we have
$$H = Z_{\lambda_1} \oplus \cdots \oplus Z_{\lambda_n} \oplus Y(\lambda_1, \dots, \lambda_n)$$
as an orthogonal decomposition, by the remark above.
We have a
Lemma. If $A \in L(H)$ is self-adjoint, then the spectral radius $\rho(A) = \|A\|$.
Proof: By Gelfand's formula, $\rho(A) = \lim_n \|A^n\|^{1/n} \le \|A\|$. For the other direction, we have:
$$\sup_{\|x\| \le 1} \|A x\|^2 = \sup_{\|x\| \le 1} \langle A x, A x \rangle = \sup_{\|x\| \le 1} \langle A^2 x, x \rangle
\le \sup_{\|x\|, \|y\| \le 1} \langle A^2 x, y \rangle = \sup_{\|x\| \le 1} \|A^2 x\|.$$
(In general we have $\|b\| = \sup_{\|y\| \le 1} \langle b, y \rangle$: Cauchy-Schwarz gives $|\langle b, y \rangle| \le
\|b\| \|y\| \le \|b\|$, and equality is achieved by taking $y = b/\|b\|$.) So we
have $\|A\|^2 \le \|A^2\|$, hence $\|A^2\|^2 \le \|A^4\|$, ..., and induction gives $\|A^{2^n}\| = \|A\|^{2^n}$. Using
Gelfand's formula and taking the limit along the subsequence $2^n$, $n = 1, 2, 3, \dots$, we
get $\rho(A) = \|A\|$.
The lemma has some interesting consequences. Consider the decomposition
$$H = Z_{\lambda_1} \oplus \cdots \oplus Z_{\lambda_n} \oplus Y(\lambda_1, \dots, \lambda_n),$$
where the $\lambda_i$ are the eigenvalues of a self-adjoint compact $T \in L(H)$ once more, ordered
so that $|\lambda_n|$ decreases; we know $|\lambda_n| \to 0$ monotonically. Denote $Y(\lambda_1, \dots, \lambda_n)$
by $Y_n$; by the lemma we have $\|T|_{Y_n}\| = \rho(T|_{Y_n}) \le |\lambda_{n+1}| \to 0$, because the spectrum of the
restriction contracts to 0 as $n$ increases. Let $Y = \bigcap_n Y_n$; then $\|T|_Y\| = 0$, thus $Y = \ker T$
(conversely $\ker T \perp Z_\lambda$ for $\lambda \neq 0$, so $\ker T \subseteq Y$), and we
have the complete diagonal decomposition of $H$:
$$H = \Bigl(\bigoplus_{\lambda \in \sigma(T),\ \lambda \neq 0} Z_\lambda\Bigr) \oplus Y.$$
Example. We want to look at the concrete example $H = L^2(0, 1)$. We
consider the boundary value problem
$$-u''(x) = f(x), \qquad u(0) = u(1) = 0. \qquad (*)$$
We know the integral operator $G$ with kernel $G(x, y)$ (the Green's function), such that
$u(x) = \int_0^1 G(x, y) f(y)\, dy$ is a solution to $(*)$; it is
$$G(x, y) = \begin{cases} x(1 - y), & x \le y, \\ y(1 - x), & y \le x. \end{cases}$$
Formally one has $-\frac{\partial^2}{\partial x^2} G(x, y) = \delta(x - y)$, so
$$-u''(x) = \int_0^1 \delta(x - y) f(y)\, dy = f(x).$$
Even if one starts with $f \in L^2(0, 1)$, from $u = G(f)$ a bootstrapping argument immediately shows that
eigenfunctions of $G$ are actually smooth. We want to compute the eigenvalues
of $G$, so we set up the equation
$$G f = \lambda f \ \Longrightarrow\ -(\lambda f)'' = f.$$
Solving this ODE with the boundary values $f(0) = f(1) = 0$, we get $f_k(x) = \sin(k\pi x)$ with
$$\lambda_k = \frac{1}{\pi^2 k^2}.$$
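(Quick check, added for completeness: with $f_k(x) = \sin(k\pi x)$ we have $f_k(0) = f_k(1) = 0$ and $-f_k'' = \pi^2 k^2 f_k$, so $G f_k = \frac{1}{\pi^2 k^2} f_k$, consistent with $\lambda_k = 1/(\pi^2 k^2)$.)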
Oct 22: The discussion continues from last time about the boundary value problem
$-u'' = f$, $u(0) = u(1) = 0$. We considered $G(x, y) = -\tfrac12 |x - y| + L(x, y)$,
where $L(x, y)$ is affine in $x$ and in $y$; the heuristic is that $L(x, y)$ is chosen to
satisfy the specific boundary conditions of our problem, while $G_0(x, y) := -\tfrac12 |x - y|$
satisfies the differential equation $-\partial_x^2 G_0(x, y) = \delta(x - y)$.
Last time we computed $\lambda_k = \frac{1}{\pi^2 k^2}$ with eigenfunctions $f_k(x) = \sin(k\pi x)$; by our
spectral theorem, the space $L^2(0, 1)$ has the decomposition
$$L^2(0, 1) = \bigoplus_{k = 1, 2, \dots} \mathbb{R}\, \sin(k\pi x).$$
Note 0 is not an eigenvalue of $G$, since $G f = 0$ gives $u = 0$, so $f = -u'' = 0$;
this also says the kernel of $G$ is trivial.
The above result says $\{\sqrt{2}\, \sin(k\pi x)\}$ is a Hilbert basis of $L^2(0, 1)$, which is due to
Riesz in 1912....
We are now interested in an object analogous to the trace
of a matrix. Recall that in the finite dimensional case the trace of a matrix is the sum of its
diagonal elements, an important invariant under change of basis; in fact,
if $\lambda_i(A)$ are the eigenvalues of an $n \times n$ matrix $A$, we have
$$\mathrm{Tr}(A) = \sum_{i=1}^n \lambda_i(A).$$
For our operator $G$, an analogue of the trace would be, for example, $\int_0^1 G(x, x)\, dx$,
just by comparison and the instinct of being a mathematician, and one reasonably
thinks we should have
$$\int_0^1 G(x, x)\, dx = \sum_{k=1}^\infty \lambda_k,$$
which is morally correct! Since it leads to the result
$$\sum_{k=1}^\infty \frac{1}{\pi^2 k^2} = \int_0^1 x(1 - x)\, dx = \frac16,
\qquad \text{i.e.} \qquad \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}. \quad [?!!]$$
Similarly one can obtain $\zeta(4)$ by considering $\mathrm{Tr}(G^2)$, in which case $G^2$ has the
kernel $G_2(x, y) = \int_0^1 G(x, z) G(z, y)\, dz$... We will try to prove this in the next few
lectures.
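Carrying this out, as a sketch of what the same reasoning gives: the eigenvalues of $G^2$ are $\lambda_k^2 = 1/(\pi^4 k^4)$, and
$$\sum_{k=1}^\infty \frac{1}{\pi^4 k^4} = \int_0^1 G_2(x, x)\, dx = \int_0^1\!\!\int_0^1 G(x, z)^2\, dz\, dx
= 2 \int_0^1\!\!\int_0^x z^2 (1-x)^2\, dz\, dx = \frac{1}{90},$$
so $\zeta(4) = \pi^4/90$.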

Oct 24: We continue our discussion of the trace formula involving the kernel:
$$\mathrm{Tr}(G) \overset{?}{=} \int_0^1 G(x, x)\, dx.$$

Suppose $\{\phi_k\}$ is a Hilbert basis for $L^2(0, 1)$; then $\{\phi_i(x)\phi_j(y)\}$, $i, j = 1, 2, 3, \dots$, is
a Hilbert basis for $L^2((0, 1) \times (0, 1))$. Recall we have the formula
$$x = \sum_k \langle x, e_k \rangle e_k$$
whenever $\{e_k\}$ is a Hilbert basis for the Hilbert space $H$ and $x \in H$; the above
series converges in the Hilbert space norm.
So let $H = L^2((0, 1)^2)$, with $x = G$ and the Hilbert basis $\{\phi_i(x)\phi_j(y)\}$ (don't confuse
$x \in H$ with $x \in (0, 1)$!). Then, taking the $\phi_k$ to be the eigenfunctions of $G$, the coefficient of $G$ with
respect to the basis element $\phi_i(x)\phi_j(y)$ is
$$\int_0^1\!\!\int_0^1 G(x, y) \phi_i(x) \phi_j(y)\, dx\, dy
= \int_0^1 (G\phi_i)(y)\, \phi_j(y)\, dy = \int_0^1 \lambda_i \phi_i(y) \phi_j(y)\, dy = \lambda_i \delta_{ij}.$$
Hence one has
$$G(x, y) = \sum_i \lambda_i \phi_i(x) \phi_i(y), \qquad (**)$$

With the understanding that the series on the LHS converges in the sense of
L2 ((0, 1)2 ). At this point we can formally do the following calculation:
Z X
Z 1
XZ
X
G(x, x)dx =
i i (x)i (x)dx =
i i (x)i (y)dx =
i
0

This is, of course, not a proof, but it shows if under some assumption the
convergence in () is uniform so that one can freely change the order of limits then
the formula is indeed correct, indeed, we have the
Theorem (Mercer 1909? :) (For simplicity we just consider the case of real vector
2
space) Suppose
R 1 T is an integral operator with kernel k(x, y) on the space L [0, 1],
so T f (x) = 0 k(x, y)f (y)dy, which satisfies the following conditions:
k(x, y) is continuous in x and y and is symmetric: k(x, y) = k(y, x) on
[0, 1]2 ,
T is positive definite: that is hT f, f i 0 for any f L2 [0, 1]
23

Functional Analysis

Tianyu Tao

Then if k are eigenfunctions of T , with norm equals 1 for all k, we have


k(x, y) =

k k (x)k (y)

k=1

Where the series converges uniformly on the square [0, 1]2 .


Proof Sketch: Step 1: We show k(x, x) 0, this can be done by approximating
k(x, x) formally by the integral
Z 1Z 1
k(x, y)f(x0 ) (y)f(x0 ) (x)dydx ( 0 since T is positive definite)
0

Where f(x0 ) are smooth functions with integral equals 1 with support on (x0
, x0 + ), so as  0 it will approach the dirac delta at x0 , so the integral will
converge to k(x0 , x0 ), which remains
Pnnonnegative.
Step 2: Define kn (x, y) =
k=1 k k (x)k (y), write k(x, y) = kn (x, y) +
Rn (x, y), then we claim that Rn is positive:
Z
Rn (x, y)f (y)f (x) 0
P
for any f L2 (0, 1). Indeed, Rn (x, y) =
n+1 k (x)k (y) where the series converges in L2 , one can switch integrals and sums to see the claim hold, then by
same reasoning, Rn (x, x) 0, therefore kn (x, x) k(x, x) for each fixed x and n
and kn (x, x) converges (monotone and bounded by k(x, x)) for each x.
Step 3: We need to show kn (x, y) converges uniformly to the right thing. Fix
y, by Cauchy-Schwartz, for each x [0, 1]:
q
X

!2
|k k (x)k (y)|

k=p

q
X
k=p

!
k 2k (x)

q
X

!
k 2k (y)

k=p

Since kn (y, y) converges for fixed y, the second product can be chosen to be
small (< ), while for each x the first
P term in above product is bounded by k(x, x)
M = max k(x, x), thus k(x, y) = k k (x)(y) where the convergence is uniform
in x for each y fixed. In particular the series does converge pointwise, for any
x, y, to k(x, y). However, on the diagonal, kn (x, x) converges monotone to k(x, x),
hence one can invoke Dinis Theorem, which asserts actually kn (x,Px) converges
uniformly to k(x, x). But by the samePCauchy-Schwartz inequality, qk=p k 2k (x)
can be chosen small, thus kn (x, y) = nk=1 k k (x)k (y) k(x, y) uniformly.
Sorry for going over time...
Oct 27: We started by talking informally about the KdV equation and Lax pairs... and an
open problem about the existence of invariant subspaces (Banach space: no; Hilbert
space: open?).
Last time we finished proving Mercer's theorem; thus we have proved the formula
$$\int_0^1 G(x, x)\, dx = \sum_k \lambda_k. \qquad (\ast)$$
But this proof is somewhat unsatisfying; we should really derive this as an
analogue of the finite dimensional fact that the trace is invariant under a change
of basis. This motivates the following definition:
Definition. Assume $T$ is compact, $T = T^*$ and $\langle T x, x \rangle \ge 0$. We say $T$ is trace
class if there exists a Hilbert basis $\{e_k\}$ such that
$$\sum_k \langle T e_k, e_k \rangle < \infty.$$
One might immediately object to the well-definedness of this definition: why should we
only require that there is one such basis? Indeed, we can show the sum defined above
doesn't depend on the choice of basis.
We mentioned another Hilbert basis for $L^2([0, 1])$: the Haar wavelets (see Wikipedia).
We can see how this helps in proving $(\ast)$: let $Y_N$ be the span of the first $N$
basis elements $e_1, \dots, e_N$, and let $P_N$ be the projection of $H$ onto $Y_N$; then $P_N T|_{Y_N}$ is
a finite dimensional operator (so a matrix), for which we know the trace
is independent of the basis, so we may calculate the trace of $P_N T|_{Y_N}$ in some
convenient basis:
$$\sum_{j=1}^N \langle T e_j, e_j \rangle = \sum_{j=1}^N \langle T e_j', e_j' \rangle = \sum_{j=1}^N \lambda_j$$
if the $e_j'$ are chosen to be those eigenfunctions and the $e_j$ those Haar wavelets.
On the other hand, $\langle T e_j, e_j \rangle$ translates into an integral:
$$\int_0^1\!\!\int_0^1 G(x, y) e_j(x) e_j(y)\, dy\, dx \approx \frac{1}{h} \int_{I_j \times I_j} G(x, y)\, dx\, dy \approx h\, G(x_j, x_j) + (\text{error}),$$
where $I_j$ is an interval of side length $h$ and the error term goes to 0 as $N \to \infty$;
hence $\sum_j \langle T e_j, e_j \rangle$ converts to $\sum_j h\, G(x_j, x_j)$, a Riemann sum [!], which as $N \to \infty$
becomes $\int_0^1 G(x, x)\, dx$. :)

Oct 29: Today we discuss the Hilbert-Schmidt norm for operators on $H$, motivated by
the finite dimensional case, where the H-S norm is defined by
$\|A\|_2 = \bigl(\sum_{ij} a_{ij}^2\bigr)^{1/2}$, with $a_{ij}$ the matrix coefficients in a fixed basis; note this is
independent of the choice of basis since it equals $\mathrm{Tr}(A^* A)$: if
$\tilde A = T^{-1} A T$ with $T$ orthogonal, then $\tilde A^* \tilde A = T^{-1} A^* A T$, which has the same trace.
Definition. For $A \in L(H)$ and $\{e_k\}$ a Hilbert basis, define the Hilbert-Schmidt norm
of $A$, denoted $\|A\|_2$, to be
$$\|A\|_2 = \Bigl(\sum_k \|A e_k\|^2\Bigr)^{1/2}.$$
We must check this is well-defined.
Claim: The definition above is independent of the choice of Hilbert basis.
Proof. Let $\{f_k\}$ be another Hilbert basis of $H$; we need to show
$$\sum_k \|A e_k\|^2 = \sum_k \|A f_k\|^2.$$
We have
$$A e_k = \sum_l \langle A e_k, e_l \rangle e_l,$$
hence by Parseval's identity
$$\sum_k \|A e_k\|^2 = \sum_k \langle A e_k, A e_k \rangle = \sum_{k,l} |\langle A e_k, e_l \rangle|^2
= \sum_{k,l} |\langle e_k, A^* e_l \rangle|^2 = \sum_l \|A^* e_l\|^2.$$
By a similar observation (expanding $A e_k$ in the basis $\{f_l\}$ instead) we have
$$\sum_k \|A e_k\|^2 = \sum_{k,l} |\langle A e_k, f_l \rangle|^2 = \sum_l \|A^* f_l\|^2.$$
Now, by the first observation applied to $A^*$ and the basis $\{f_l\}$, we have
$$\sum_l \|A f_l\|^2 = \sum_l \|A^* f_l\|^2 = \sum_k \|A e_k\|^2.$$
So, having assured ourselves this is a good definition, we consider the collection
of operators $L_2(H) := \{A \in L(H) : \|A\|_2 < \infty\}$, which are called the Hilbert-Schmidt
operators. It is worth noting that the above proof shows $A \in L_2(H)$
iff $A^* \in L_2(H)$.
Now observe that $L_2(H)$ is a Hilbert space, as one can recover the inner product
via the polarization identity, and we have $\langle A, B \rangle = \sum_k \langle A e_k, B e_k \rangle$ for any
Hilbert basis $\{e_k\}$.
So we have $\langle A, B \rangle = \sum_k \langle B^* A e_k, e_k \rangle$, which is $\mathrm{Tr}(B^* A)$.
Observe that $\|A\| \le \|A\|_2$: indeed, $\|A\|^2 = \sup_{\|x\| \le 1} \|A x\|^2$, so pick $x_1$ with $\|x_1\| = 1$ and
$\|A x_1\|^2 \ge \|A\|^2 - \varepsilon$; completing $x_1$ to a Hilbert basis $\{x_k\}$, we get $\|A\|_2^2 =
\sum_k \|A x_k\|^2 \ge \|A\|^2 - \varepsilon$, and we let $\varepsilon \to 0$.
Example: We consider the integral operator $T f = \int_a^b k(x, y) f(y)\, dy$ with $k \in
L^2([a, b]^2)$; we claim that $\|T\|_2 = \|k\|_{L^2}$.
Indeed, pick a Hilbert basis $\{\phi_k\}$ for $L^2([a, b])$ (over $\mathbb{R}$), so that $\{\phi_k(x)\phi_l(y)\}$ forms a
Hilbert basis for $L^2([a, b]^2)$. We have
$$\sum_k \|T \phi_k\|^2 = \sum_{k,l} |\langle T \phi_k, \phi_l \rangle|^2.$$
Since $\langle T \phi_k, \phi_l \rangle = \int\!\!\int k(x, y) \phi_k(y) \phi_l(x)\, dy\, dx$, which is precisely the coefficient of
$k(x, y)$ with respect to the Hilbert basis of $L^2([a, b]^2)$, the above sum equals $\|k\|_{L^2}^2$, by
Parseval.
Note well that the identity operator is not Hilbert-Schmidt (in infinite dimensions), as
$\|I\|_2^2 = \sum_k \|e_k\|^2 = \infty$.
Our last observation is the
Claim: Hilbert-Schmidt operators are compact.
Proof. Indeed, fix a Hilbert basis $\{e_k\}$ and let $P_n$ be the projection of $H$ onto the space
spanned by the first $n$ basis elements. Take $A \in L_2(H)$ and put $A_n = A P_n$; then $A_n$,
as $P_n$ is finite rank, must be finite rank as well, but
$$\|A - A_n\|^2 \le \|A - A_n\|_2^2 = \sum_k \|A e_k - A_n e_k\|^2 = \sum_{k > n} \|A e_k\|^2.$$
Since $A$ is Hilbert-Schmidt, the tail sum converges to 0, so $A$ is the norm limit of the finite rank
operators $A_n$, hence compact.
Remark: this proof shares the same spirit as the proof of the fact that every
Hilbert space has the approximation property (a.p.): any compact operator $T$
on $H$ is a norm limit of finite rank operators; basically one just constructs the finite
rank operators as $T$ composed with suitable projection operators. This does not work
in a general Banach space, as one lacks a good notion of projection; in fact Enflo
gave a counterexample to the a.p. problem for Banach spaces, which is rather
complicated....

Oct 31: We continue the discussion of the trace class. Recall that a positive
definite, self-adjoint compact operator $T$ is trace class if there is a Hilbert basis $\{e_k\}$ such
that
$$\sum_k \langle T e_k, e_k \rangle < \infty.$$
Our goal is to show the sum above does not depend on the choice of basis. The first
approach is to notice that $T = S^2$ for some $S$: we can think of $T$ as the diagonal
matrix $\mathrm{diag}(\lambda_1, \lambda_2, \dots)$, with the $\lambda_i$ its eigenvalues, which are positive by assumption;
then we define $S$ to be the diagonal matrix $\mathrm{diag}(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots)$. In particular $S$
is self-adjoint, with $\sqrt{\lambda_i}$ the eigenvalue corresponding to the same eigenvector as
that of $T$. Then
$$\sum_j \langle T e_j, e_j \rangle = \sum_j \langle S^2 e_j, e_j \rangle = \sum_j \langle S e_j, S e_j \rangle = \|S\|_2^2,$$
whose independence of the basis was proved above.


There is also a direct approach, sketched below: by symmetry, we only need
to show that for $\{e_k\}, \{f_k\}$ two Hilbert bases we have
$$\sum_j \langle T e_j, e_j \rangle \le \sum_j \langle T f_j, f_j \rangle.$$
Fix $n$ and $\varepsilon > 0$; there are $M$ and vectors $e_j' \in \mathrm{span}\{f_1, \dots, f_M\}$, $j = 1, \dots, n$, such that
$$\|e_j - e_j'\| < \varepsilon/n,$$
so that one has
$$\sum_{k=1}^n \langle T e_k, e_k \rangle = \sum_{k=1}^n \langle T e_k', e_k' \rangle + E,$$
where $E \to 0$ as $\varepsilon \to 0$. Now the $e_k'$ lie in $\mathrm{span}\{f_1, \dots, f_M\}$, so by the finite dimensional result
from linear algebra (invariance of the trace under change of orthonormal basis, together with positivity of $T$)
we have, up to another small error absorbed into $E$,
$$\sum_{k=1}^n \langle T e_k', e_k' \rangle \le \sum_{j=1}^M \langle T f_j, f_j \rangle.$$
So $\sum_{k=1}^n \langle T e_k, e_k \rangle \le \sum_j \langle T f_j, f_j \rangle + E$; taking $\varepsilon \to 0$ and then $n \to \infty$ produces the
desired inequality.

P
Observation. Having proved that $\sum_k \langle T e_k, e_k \rangle$ is independent of the basis $\{e_k\}$, let us pick
the $e_k$ to be eigenvectors of $T$; then we see $T$ is trace class if and only if $\sum_k \lambda_k < \infty$.
One thing worth noting is that we sum the eigenvalues with multiplicities!
For $A \in L(H)$, $A^* A$ is positive definite and self-adjoint, and by definition we see
$A^* A$ is trace class if and only if $A$ is Hilbert-Schmidt.
Our next task is to define the square root of certain operators.
Proposition. Let $A \in L(H)$ be positive definite and self-adjoint; then there exists a
unique positive self-adjoint $B$ with $B^2 = A$. We denote $B := \sqrt{A}$.
Proof 1: Consider the power series expansion $\sqrt{1 - x} = 1 - \binom{1/2}{1} x + \binom{1/2}{2} x^2 - \cdots$;
we know that this converges for $|x| \le 1$.
So consider operators $A$ with $\|A\| \le 1$; it is enough to consider
these, by homogeneity. Recall the formula
$$\|A\| = \sup_{\|x\| = 1} |\langle A x, x \rangle|$$
for self-adjoint $A$, and substitute $I - A$ for $A$; note
$$0 \le \langle (I - A)x, x \rangle = \langle x, x \rangle - \langle A x, x \rangle \le \langle x, x \rangle,$$
since $\|A\| \le 1$ and $A$ is positive definite. In particular we must have $\|I - A\| \le 1$,
and consequently, writing $A = I - (I - A)$, the power series
$$I - \binom{1/2}{1}(I - A) + \binom{1/2}{2}(I - A)^2 - \cdots$$
converges absolutely in norm, so it defines an operator, say $B$, and one can
check it has the desired properties. It is a bit tricky to show the uniqueness of $B$.
Suppose $B_1, B_2$ are two square roots of $A$; then $B_1^2 = B_2^2 = A$, so $B_1^3 = A B_1 = B_1 A$.
Note that by construction $B_2$ commutes with any operator which commutes with $A$, since
it is a norm limit of polynomials in $I - A$; thus $B_1$ and $B_2$ commute. The trick then is to write
$$(B_1 - B_2) B_1 (B_1 - B_2) + (B_1 - B_2) B_2 (B_1 - B_2)
= (B_1 - B_2)(B_1 + B_2)(B_1 - B_2)
= (B_1^2 - B_2^2)(B_1 - B_2) = (A - A)(B_1 - B_2) = 0.$$
But $\langle (B_1 - B_2) B_1 (B_1 - B_2) x, x \rangle = \langle B_1 (B_1 - B_2) x, (B_1 - B_2) x \rangle \ge 0$, since $B_1$ is
positive definite and $(B_1 - B_2)$ is self-adjoint; thus both $(B_1 - B_2) B_1 (B_1 - B_2)$
and $(B_1 - B_2) B_2 (B_1 - B_2)$ are 0, and so is their difference $(B_1 - B_2)^3$. But then
$$0 = \|(B_1 - B_2)^3\| \ \Rightarrow\ \|(B_1 - B_2)^4\| = 0 = \|B_1 - B_2\|^4 \ \Rightarrow\ B_1 = B_2,$$
by a lemma from Oct 20.
Proof 2 (sketch): Recall from the homework
$$A^k = \frac{1}{2\pi i} \oint_\gamma \lambda^k (\lambda - A)^{-1}\, d\lambda.$$
The idea is to try to put $k = 1/2$ here; one can do this since we do have a branch
of the logarithm (hence of $\lambda^{1/2}$) on a suitable neighbourhood of $\sigma(A)$...
Now if $A \in L(H)$, then as noted above $A^* A$ is positive definite and self-adjoint, and we
may take its square root, which we denote by $|A|$. We say $A$ (now not necessarily
positive and self-adjoint) is trace class if $|A|$ is trace class (as defined previously);
when $A$ is self-adjoint and positive we do have $|A| = A$ (as $A^* A = A^2$), so the
definition agrees with the previous one.
Remarks.
Both $L_1(H)$ and $L_2(H)$ are two-sided ideals in $L(H)$.
Writing $L_1(H)$ for the space of trace class operators, $A \in L_1(H)$ iff $A =
B_1 B_2$ where $B_1, B_2 \in L_2(H)$, i.e. they are Hilbert-Schmidt. Indeed, one
can prove a result similar to the polar decomposition of complex numbers:
$A = U |A|$, where $U$ is an isometry when restricted to $(\ker U)^\perp$.
On the one hand, if $A = BC$ with $B, C$ Hilbert-Schmidt, then $|A| = U^* B C$, so it is enough to
control a product of two Hilbert-Schmidt operators, and one can use Cauchy-Schwarz:
$$\sum_k \langle B C e_k, e_k \rangle = \sum_k \langle C e_k, B^* e_k \rangle
\le \Bigl(\sum_k \|C e_k\|^2\Bigr)^{1/2} \Bigl(\sum_k \|B^* e_k\|^2\Bigr)^{1/2} = \|C\|_2 \|B^*\|_2.$$
On the other hand, having the polar decomposition allows one to write $A = U |A| = U |A|^{1/2} |A|^{1/2}$, and
$\| |A|^{1/2} \|_2^2 = \|A\|_1$.
A result of Lidskii says that for $T \in L_1(H)$ we have
$$\sum_k \lambda_k(T) = \mathrm{Tr}(T) = \sum_k \langle T e_k, e_k \rangle.$$

Nov 3: Let us see an example of the trace formula we obtained last time. We are
considering the function space $H = L^2(S^1)$, where $S^1$ is the unit circle in $\mathbb{R}^2$;
functions in $H$ can be identified with $2\pi$-periodic, locally $L^2$ functions.
For $a > 0$, we consider the operator
$$L_a := -\frac{\partial^2}{\partial x^2} + a^2.$$
Note $a = 0$ would make $L_a$ non-invertible. Consider then the problem
$$L_a u = f$$
on $H$; notice we don't have boundary conditions. We want to solve for $u$ by writing
$u = G_a f$, where $G_a$ should be the integral operator which inverts $L_a$; this is done,
again, by finding a suitable Green's function.
Recall that a Green's function formally satisfies (in the distributional sense) the
equation
$$L_a G(x, y) = \delta(x - y).$$
Now fix $y = \xi$ and denote the function $x \mapsto G(x, \xi)$ by $u(x)$ for $x \in S^1$. For $x \neq \xi$ we
have $-u'' + a^2 u = 0$; one can solve this ODE quite easily, the solutions being spanned by the
functions $e^{ax}, e^{-ax}$, or in another basis $\cosh(ax), \sinh(ax)$. Notice that we should have rotational
symmetry for the Green's function, $G(x, y) = G(x - \tau, y - \tau)$, and Green's functions are
symmetric, $G(x, y) = G(y, x)$; with this we can write, parametrizing the circle by $x - \xi \in [0, 2\pi]$,
$$u(x) = G(x, \xi) = C \cosh\bigl(a(x - \xi - \pi)\bigr).$$
To determine $C$, we use the property that $u'$ has a jump of $-1$ at $\xi$ (so that $-u''$ produces the delta).
Away from $\xi$, $u'(x) = C a \sinh(a(x - \xi - \pi))$, so we need
$$C a \sinh(-a\pi) - C a \sinh(a\pi) = -1,$$
the two terms being the one-sided derivatives as $x - \xi \to 0^+$ and, traversing the circle backwards, as
$x - \xi \to 2\pi^-$. Hence $C = \frac{1}{2 a \sinh(a\pi)}$, and we have
$$G_a(x, \xi) = \frac{\cosh\bigl(a(x - \xi - \pi)\bigr)}{2 a \sinh(a\pi)} =: k_a(x - \xi), \qquad x - \xi \in [0, 2\pi].$$
Our integral operator $G_a$ is given by
$$G_a f(x) = \int_{S^1} k_a(x - y) f(y)\, dy.$$
Now we can calculate the eigenvalues and eigenfunctions. To do so we follow
the usual trick: convert the integral equation $G_a f = \lambda f$ to the corresponding
differential equation $L_a f = \frac1\lambda f$ ($\lambda = 0$ is not an eigenvalue, as it would
imply $f = 0$), using a bootstrapping argument to establish the regularity of $f$.
Thus we need to solve
$$-f''(x) + a^2 f(x) = \frac{1}{\lambda} f(x) =: \mu f(x).$$
This can be done via undetermined coefficients; we have the following list of
solutions:
when $\mu = a^2$: a simple eigenvalue, corresponding to the eigenfunction $f = 1$;
when $\mu = k^2 + a^2$ ($k \in \mathbb{Z}$, $k \ge 1$): double eigenvalues, corresponding to the
eigenfunctions $\cos kx$, $\sin kx$.
And the spectral decomposition theorem tells us that $\tfrac{1}{\sqrt{2\pi}}, \tfrac{\sin kx}{\sqrt{\pi}}, \tfrac{\cos kx}{\sqrt{\pi}}$ form a
Hilbert basis for $L^2(S^1)$.
The trace formula we proved previously now gives us an interesting identity:
$$\sum_{k \in \mathbb{Z}} \frac{1}{k^2 + a^2} = \int_{S^1} G_a(x, x)\, dx = 2\pi\, k_a(0) = \frac{\pi}{a} \cdot \frac{\cosh(\pi a)}{\sinh(\pi a)}.$$
This can be written as
$$\frac{d}{da}\Bigl(\log a + \sum_{k=1}^\infty \bigl(\log(a^2 + k^2) - \log(k^2)\bigr)\Bigr) = \frac{d}{da} \log \sinh(\pi a),$$
and it gives
$$a \prod_{k=1}^\infty \Bigl(1 + \frac{a^2}{k^2}\Bigr) = C \sinh(\pi a)$$
for some constant $C$. To determine it, let $a \to 0$; we see $C = \frac{1}{\pi}$, hence we have
$$\prod_{k=1}^\infty \Bigl(1 + \frac{a^2}{k^2}\Bigr) = \frac{\sinh(\pi a)}{\pi a}.$$
Now try to put $a = it$...
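Carrying out that substitution (a quick sketch, not worked out in the lecture): with $a = it$ one has $\sinh(\pi i t) = i \sin(\pi t)$ and $a^2 = -t^2$, so the product formula becomes Euler's sine product
$$\prod_{k=1}^\infty \Bigl(1 - \frac{t^2}{k^2}\Bigr) = \frac{\sin(\pi t)}{\pi t}.$$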
Nov 5: Today we talk about self-adjoint operators which are not necessarily
compact. First we claim:
Proposition. If $A \in L(H)$ is self-adjoint, $A = A^*$, then $\sigma(A) \subset \mathbb{R}$.
Proof. If $z = x + i y$ with $y \neq 0$, then $z I - A = x I + i y I - A = y(i I - B)$, where
$B = \tfrac{1}{y}(A - x I)$ is certainly self-adjoint. Thus, without loss of generality, we
only need to show $i - A$ is invertible for $A$ self-adjoint.
Imagining $A$ to be a real number suggests the formula $(i - A)^{-1} = -\frac{i + A}{1 + A^2}$; thus we want to show
that the operator $I + A^2$ is invertible. We need a result:
Claim: if $A$ is self-adjoint with $A \ge \alpha > 0$, i.e. $\langle A x, x \rangle \ge \alpha \|x\|^2$, then $A$ is
invertible.
Proof of claim: Define $\langle x, y \rangle_A = \langle A x, y \rangle$; this is an inner product on $H$ which
induces the same topology as the original inner product $\langle \cdot, \cdot \rangle$, since
$$\alpha \|x\|^2 \le \|x\|_A^2 = \langle x, x \rangle_A = \langle A x, x \rangle \le \|A\| \|x\|^2.$$
Fix $y \in H$ and let $l_y(x) = \langle x, y \rangle$; this is a continuous linear functional on $H$, thus, by
Riesz representation in the Hilbert space $(H, \langle \cdot, \cdot \rangle_A)$, we have $l_y(x) = \langle x, y' \rangle_A$ for a unique $y' \in H$. This means
$$\langle x, y \rangle = \langle A x, y' \rangle = \langle x, A y' \rangle \quad \text{for all } x \in H,$$
so $y = A y'$ and $A$ is surjective. It is clear also that $A$ is injective, since
$\alpha \|x\|^2 \le \langle A x, x \rangle \le \|A x\| \|x\|$ gives $\|x\| \le \frac{1}{\alpha} \|A x\|$, so $A x = 0$ implies $x = 0$. Thus
$A$ is bijective, and the inverse of $A$ satisfies the estimate (denoting the $y'$ above by $A^{-1} y$):
$$\alpha \|y'\|^2 \le \langle A y', y' \rangle = \langle y, y' \rangle \le \|y\| \|y'\|, \qquad \text{so } \|A^{-1} y\| \le \tfrac{1}{\alpha} \|y\|.$$
Hence $\|A^{-1}\| \le \frac{1}{\alpha}$ and $A$ is invertible with bounded inverse.
Now note $I + A^2$ is self-adjoint and positive: $\langle (I + A^2)x, x\rangle = \|Ax\|^2 + \|x\|^2 \ge \|x\|^2$, thus by the claim it is invertible. Define
$$B_1 = -(iI + A)(I + A^2)^{-1}, \qquad B_2 = -(I + A^2)^{-1}(iI + A);$$
it is straightforward to check that $B_1$ and $B_2$ are right and left inverses of $iI - A$, hence $B_1 = B_2$ and $iI - A$ is invertible.
Put $\lambda_+ = \sup_{\|x\|=1}\langle Ax, x\rangle$ and $\lambda_- = \inf_{\|x\|=1}\langle Ax, x\rangle$; then $\sigma(A) \subset [\lambda_-, \lambda_+]$.
We put the Lax-Milgram lemma here, as this is a good place for it:
Theorem (Lax-Milgram Lemma) Let $A \in L(H)$ with $\langle Ax, x\rangle \ge \delta\|x\|^2$ for some $\delta > 0$; then $A$ is invertible with $\|A^{-1}\| \le \frac{1}{\delta}$. If $H$ is over the complex numbers, we require $\operatorname{Re}\langle Ax, x\rangle \ge \delta\|x\|^2$.
Note: the difference between Lax-Milgram and the claim above is that we do not require $A$ to be self-adjoint!
Proof. It suffices to show $A^*A$ and $AA^*$ are invertible (only showing, for example, that $A^*A$ is invertible is not enough; consider the unilateral shift on $l^2$), because $B_1 := (A^*A)^{-1}A^*$ and $B_2 := A^*(AA^*)^{-1}$ will then be a left and a right inverse of $A$.
It is enough to show $\langle A^*Ax, x\rangle \ge \delta^2\|x\|^2$, i.e. to show $\|Ax\|^2 \ge \delta^2\|x\|^2$, as $A^*A$ is self-adjoint. But
$$\delta\|x\|^2 \le \langle Ax, x\rangle \le \|Ax\|\|x\|, \qquad\text{so}\quad \delta\|x\| \le \|Ax\|.$$
Similarly $\|A^*x\|^2 \ge \delta^2\|x\|^2$ (over the reals we have $\langle A^*x, x\rangle = \langle x, Ax\rangle \ge \delta\|x\|^2$).
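As a finite-dimensional illustration (a sketch, not from the lecture): a matrix whose symmetric part is bounded below by $\delta I$ is coercive but need not be self-adjoint, and the bound $\|A^{-1}\| \le 1/\delta$ can be checked numerically.

    import numpy as np

    rng = np.random.default_rng(0)
    n, delta = 6, 0.7

    M = rng.standard_normal((n, n))
    P = M @ M.T                      # symmetric positive semidefinite part
    S = rng.standard_normal((n, n))
    S = S - S.T                      # skew-symmetric: contributes nothing to <Ax, x>
    A = delta * np.eye(n) + P + S    # coercive but not self-adjoint

    # coercivity constant = smallest eigenvalue of the symmetric part (>= delta)
    sym = 0.5 * (A + A.T)
    print("coercivity constant:", np.linalg.eigvalsh(sym).min())

    # Lax-Milgram bound: ||A^{-1}|| <= 1/delta
    print(np.linalg.norm(np.linalg.inv(A), 2), "<=", 1.0/delta)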
The Lax-Milgram Lemma has important applications in showing the existence of solutions for certain types of differential equations.
Example: We consider the space $L^2(a,b)$ and the space $H_0^1(a,b)$, which was defined to be the completion of the test function space on $(a,b)$; it is equivalent to the space
$$\{u: [a,b]\to\mathbb{R} : u \text{ absolutely continuous},\ u(a) = u(b) = 0,\ u' \in L^2(a,b)\}.$$
An inner product on $H_0^1(a,b)$ is $\langle u, v\rangle_{H_0^1} := \int_a^b u'v'$. Note that $H_0^1$ can be identified with the subspace $V \subset L^2(a,b)$ of functions $v$ with $\int_a^b v = 0$, via the isometry $v \mapsto \int_a^x v$.
Now for $f \in L^2$, define $l(v) = \int_a^b f v$ for $v \in H_0^1$; this is a continuous linear functional on $H_0^1$ with respect to the norm induced by $\langle\cdot,\cdot\rangle_{H_0^1}$. By the Riesz representation theorem, there exists a unique $u$ such that
$$\langle u, v\rangle_{H_0^1} = \langle v, f\rangle_{L^2}.$$
Writing out the inner products explicitly as integrals, we see this actually gives (via integration by parts, assuming $u''$ exists)
$$\int_a^b u'v' = \int_a^b f v \quad\Longrightarrow\quad \int_a^b (-u'' - f)v = 0 \ \text{ for all } v \in H_0^1 \quad\Longrightarrow\quad -u'' = f.$$
Next time we will use Lax-Milgram to show this problem has a solution...
Nov 7 We continue with the application of Lax-Milgram to the problem $-u'' = f$. Recall $H_0^1(a,b) \subset C_0(a,b) = \{u \in C([a,b]) : u(a) = u(b) = 0\}$; moreover, the inclusion map $i: H_0^1 \to C_0$ is continuous:
$$\|u\|_{C_0} = \sup_x |u(x)|, \qquad |u(x)| = \Big|\int_a^x u'(s)\,ds\Big| \le (x-a)^{1/2}\Big(\int_a^x |u'|^2\Big)^{1/2}$$
if $x \in (a, \tfrac{a+b}{2})$; if $x \in (\tfrac{a+b}{2}, b)$ we write $u(x) = -\int_x^b u'(s)\,ds$, and we see
$$\|u\|_{C_0} \le \sqrt{\frac{b-a}{2}}\,\|u\|_{H_0^1}.$$
On the other hand, $\|u\|_{L^2} \le (b-a)^{1/2}\|u\|_{C_0}$, hence $H_0^1 \hookrightarrow L^2$. In fact, $H_0^1$ is compactly embedded in $L^2$: the unit ball $\{u : \|u\|_{H_0^1} \le 1\}$ is precompact in $L^2(a,b)$. This is immediate from Arzelà-Ascoli, as we have
$$|u(x) - u(y)| = \Big|\int_x^y u'\Big| \le \|u'\|_{L^2}|x-y|^{1/2} = \|u\|_{H_0^1}|x-y|^{1/2} \le |x-y|^{1/2}.$$
So $u$ is Hölder continuous with Hölder exponent $1/2$... and one sees that $H_0^1$ is actually compactly embedded in $C_0(a,b)$!
Now consider $v \mapsto \int vf$ for $v \in H_0^1$ or $L^2$; this is a continuous linear functional on the corresponding spaces, by the previous estimates. Hence by the Riesz representation theorem, there exists a unique $u \in H_0^1(a,b)$ such that
$$\int u'v' = \int vf \qquad\text{for every } v \in H_0^1.$$
Note this is a weaker notion than $-u'' = f$, as $u''$ does not necessarily exist.
Using this, we will prove a lemma.
Lemma Suppose $w \in L^2_{loc}(I)$, where $I \subset (a,b)$ is some interval. If $\int w\varphi' = 0$ for all $\varphi \in \mathcal{D}(I)$, then $w$ is constant (a.e.).
This result can be rephrased as saying that if $u' = 0$ in the sense of distributions, then $u$ is a constant; this allows us to assert uniqueness of the solution obtained in the weak formulation of our problem $-u'' = f$.
Proof. Pick $x_1, x_2$ which are Lebesgue points of $w$; it is enough to show $w(x_1) = w(x_2)$, as Lebesgue points have full measure. Let $\psi$ be a test function supported in small neighbourhoods $B(x_1), B(x_2)$ of $x_1$ and $x_2$, with $0 < \int_{B(x_1)} \psi = -\int_{B(x_2)} \psi$. For $\varepsilon > 0$ let $\psi_\varepsilon$ be the corresponding rescaled function concentrating this (signed) mass at $x_1$ and $x_2$, and put $\varphi(x) = \int_{a_1}^x \psi_\varepsilon$, where $a_1, x \in I$. Then $\varphi$ is a test function, and by hypothesis
$$0 = \int_I w\varphi' = \int_I w\psi_\varepsilon.$$
On the other hand,
$$\lim_{\varepsilon\to 0} \int_I w\psi_\varepsilon = c\,\big(w(x_1) - w(x_2)\big)$$
for a positive constant $c$, since $x_1, x_2$ are Lebesgue points; hence $w(x_1) = w(x_2)$.

Nov 10 Recall the lemma from last time: if $\int w\varphi' = 0$ for all $\varphi \in \mathcal{D}(a,b)$, then $w$ is constant almost everywhere, assuming only $w \in L^2_{loc}$.
Now suppose $w \in L^2_{loc}$, take $f \in C(a,b)$, and assume
$$\int_a^b w\varphi' = -\int_a^b f\varphi \qquad\text{for all test functions } \varphi.$$
Formally the above says $w' = f$; can we actually prove that $w \in C^1(a,b)$? The answer is yes, and it is an easy yes. Put $\tilde w(x) = \int_a^x f(s)\,ds$; then we know $\tilde w \in C^1$ and $\tilde w' = f$, thus we have $\int \tilde w\varphi' = -\int f\varphi$ for any test function $\varphi$. As a consequence
$$\int (w - \tilde w)\varphi' = 0 \quad\Longrightarrow\quad w - \tilde w = c$$
for some constant $c$, so $w = \tilde w + c \in C^1$.
Now $f \in C(a,b)$ defines a continuous linear functional on $H_0^1(a,b)$ by $v \mapsto \int vf$; using Riesz, we get a unique $u \in H_0^1(a,b)$ such that
$$\int u'v' = \int fv \qquad\text{for all } v \in H_0^1.$$
By the above remark (applied to $w = u'$), we conclude that $u' \in C^1(a,b)$, so $u$ is in $C^2$, and it does make sense then to conclude that $-u'' = f$!
We talked about Euler's equations, where a weak formulation leads to the existence of certain solutions with unexpected properties (compactly supported in space-time...), and then we talked about isometric immersions of $S^2$ into $\mathbb{R}^3$, where requiring the isometry to be $C^1$ is not enough to guarantee that only the trivial (rigid) cases exist, but $C^2$ works. The general philosophy behind this is that in a weak formulation, if you want existence you usually require less regularity, but for uniqueness you want more regularity; what we did above is a lucky coincidence where both work....
One can recover the Green's function for the BVP $-u'' = f$, $u(a) = u(b) = 0$ by a procedure similar to our discussion above.
Fix $y \in (a,b)$ and consider the evaluation functional $e_y(v) = v(y)$ for $v \in H_0^1$; it is given by integration against the Dirac measure $\delta_y$ (characterized by $\delta_y(A) = 1$ if $y \in A$ and $\delta_y(A) = 0$ otherwise):
$$e_y(v) = \int_a^b v\,d\delta_y.$$
We know $|v(y)| \le \|v\|_{C([a,b])} \le C\|v\|_{H_0^1}$, so $e_y$ is a continuous linear functional on $H_0^1(a,b)$; by Riesz there exists $u \in H_0^1$ such that
$$\int u'v' = v(y) \qquad\text{for all } v \in H_0^1.$$

Now, to find $u$, we can take $v$ with $\operatorname{supp} v \subset (a,y)$; this shows $u$ is affine on $(a,y)$; similarly $u$ is affine on $(y,b)$. Moreover, $u$ is continuous, since $H_0^1$ embeds into $C([a,b])$...
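To make this concrete, here is a small numerical sketch (not from the notes; the closed form below is the standard tent-shaped Green's function, consistent with the piecewise affine description above): $G(x,y) = (\min(x,y)-a)(b-\max(x,y))/(b-a)$. The code checks that integrating $f$ against $G$ reproduces a finite-difference solution of $-u'' = f$, $u(a) = u(b) = 0$.

    import numpy as np

    a, b = 0.0, 1.0
    f = lambda x: np.sin(3*x) + x**2              # arbitrary right-hand side

    def G(x, y):
        # tent function: affine on (a, y) and on (y, b), continuous, kink at y
        return (np.minimum(x, y) - a) * (b - np.maximum(x, y)) / (b - a)

    # u(x) = int_a^b G(x, y) f(y) dy   (midpoint rule)
    ny = 4000
    yj = a + (np.arange(ny) + 0.5) * (b - a) / ny
    dy = (b - a) / ny
    xq = np.linspace(a, b, 11)
    u_green = np.array([np.sum(G(x, yj) * f(yj)) * dy for x in xq])

    # compare with a finite-difference solve of -u'' = f, u(a) = u(b) = 0
    n = 2000
    xs = np.linspace(a, b, n + 1)
    h = xs[1] - xs[0]
    A = (2*np.eye(n-1) - np.eye(n-1, k=1) - np.eye(n-1, k=-1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(xs[1:-1]))

    print(np.max(np.abs(u_green - u[::200])))     # small difference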
The last topic for today is the following. Consider the equation $-u'' = f$ for $f \in L^2$; we know a unique weak solution $u \in H_0^1(a,b)$ exists, satisfying
$$\int u'v' = \int fv \qquad\text{for all } v \in H_0^1,$$
and we have the estimate
$$\|u\|_{H_0^1}^2 = \int |u'|^2 = \int uf \le \|u\|_{L^2}\|f\|_{L^2} \le C\|u\|_{H_0^1}\|f\|_{L^2},$$
so that $\|u\|_{H_0^1} \le C\|f\|_{L^2}$.
For given $f \in L^2$, let $Gf$ denote the unique solution $u \in H_0^1$ found above, and call $G$ the solution operator of the equation $-u'' = f$; it can be regarded as an operator from $L^2$ to $L^2$, since $H_0^1 \hookrightarrow L^2$. Now we
Claim: $G$ is compact and self-adjoint.
Nov 12 We give the proof of our claim from last time.
Proof. Compactness is clear: the unit ball $\{f : \|f\|_{L^2} \le 1\}$ in $L^2(a,b)$ is mapped (by the estimate $\|u\|_{H_0^1} \le C\|f\|_{L^2}$) into the set $\{u \in H_0^1 : \|u\|_{H_0^1} \le C\}$, which is a precompact subset of $L^2$, since $H_0^1$ compactly embeds into $L^2$.
Self-adjointness is not much harder: we need to show $\langle Gf, g\rangle = \langle f, Gg\rangle$ for $f, g \in L^2(a,b)$. Let $u = Gf$, $v = Gg$; then $u, v \in H_0^1$, so
$$\langle f, v\rangle = \int u'v' = \int v'u' = \langle g, u\rangle = \langle u, g\rangle \qquad\text{(we are over the reals),}$$
since that is how we obtained our solution operator via Riesz.
Since $G$ is compact and self-adjoint, it is not hard to continue from here by applying the spectral theorems for such operators.... One finds eigenvectors and eigenvalues by solving $Gu = \lambda u$, which is equivalent to finding weak solutions of $-u'' = \frac{1}{\lambda}u$....
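A discrete sketch of this claim (an illustration, not from the notes): the inverse of the finite-difference matrix for $-u''$ with Dirichlet conditions plays the role of $G$; it is symmetric, and its largest eigenvalues approximate $1/(k\pi/(b-a))^2$, in line with the weak eigenvalue problem $-u'' = \frac{1}{\lambda}u$.

    import numpy as np

    a, b, n = 0.0, 1.0, 400
    h = (b - a) / n
    # second-difference matrix for -u'' with Dirichlet conditions (interior nodes)
    L = (2*np.eye(n-1) - np.eye(n-1, k=1) - np.eye(n-1, k=-1)) / h**2
    G = np.linalg.inv(L)                      # discrete solution operator

    print(np.max(np.abs(G - G.T)))            # zero: G is symmetric

    lam = np.sort(np.linalg.eigvalsh(G))[::-1][:5]          # largest eigenvalues
    print(lam)
    print([1.0/(k*np.pi/(b - a))**2 for k in range(1, 6)])  # 1/(k*pi)^2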
Application to systems of ODEs: We can now consider a more general equation, for $u(x) = (u_1(x), \dots, u_n(x)) : (a,b) \to \mathbb{R}^n$:
$$-\frac{d}{dx}\Big(a_{ij}(x)\frac{du_j}{dx}\Big) + b_{ij}(x)\frac{du_j}{dx} - \frac{d}{dx}\big(\tilde b_{ij}(x)u_j(x)\big) + c_{ij}(x)u_j = f_i(x)$$
(summation over $j$ understood),
with Dirichlet boundary conditions $u_i(a) = u_i(b) = 0$, for $1 \le i, j \le n$. The coefficients $a, b, \tilde b, c$ are $L^\infty(a,b)$ functions, and $a_{ij}\xi_i\xi_j \ge k|\xi|^2$ for some $k > 0$.
One can write the above in matrix form:
$$-(Au')' + Bu' - (\tilde Bu)' + Cu = f,$$
where $u, f$ are vector-valued functions on $(a,b)$, $A, B, \tilde B, C$ are matrix-valued, and $A$ is strictly positive definite.
The weak formulation of this equation then becomes (after some integration by parts):
$$\text{(W)}\qquad \int_a^b (Au')\cdot v' + (Bu')\cdot v + (\tilde Bu)\cdot v' + (Cu)\cdot v = \int_a^b f\cdot v$$
for any $v \in H_0^1(a,b)$.


The left hand side of (W) defines a bilinear form $B(u,v)$ on $H_0^1 \times H_0^1$, and by the following estimates¹
$$\int (Au')\cdot v' \le \|A\|\int |u'||v'| \le \|A\|\,\|u\|_{H_0^1}\|v\|_{H_0^1},$$
$$\int (\tilde Bu)\cdot v' \le \|\tilde B\|\,\|u\|_{L^\infty}\|v\|_{H_0^1} \le K\|\tilde B\|\,\|u\|_{H_0^1}\|v\|_{H_0^1},$$
$$\int (Bu')\cdot v \le \|B\|\,\|u\|_{H_0^1}\|v\|_{L^\infty} \le K\|B\|\,\|u\|_{H_0^1}\|v\|_{H_0^1},$$
$$\int (Cu)\cdot v \le \|C\|\,\|u\|_{L^\infty}\|v\|_{L^\infty} \le K\|u\|_{H_0^1}\|v\|_{H_0^1},$$
we have $|B(u,v)| \le K\|u\|_{H_0^1}\|v\|_{H_0^1}$ for some constant $K$. Then by Riesz, $B(u,v) = \langle \mathcal{A}u, v\rangle$ defines a continuous operator $\mathcal{A}$ on $H_0^1(a,b)$.
We wish to apply Lax-Milgram to $\mathcal{A}$; in order to do so, we need $B(u,u) = \langle \mathcal{A}u, u\rangle \ge \delta\|u\|^2$ with $\delta > 0$, but unfortunately this is not true in general; for example, consider $-u'' - ku$ for $k$ large....
However, we do have a remedy for this (a clever trick!):
Lemma There exists $\lambda_0 > 0$ such that for $\lambda > \lambda_0$ we have
$$B(u,u) + \lambda\langle u, u\rangle_{L^2} \ge \delta\|u\|_{H_0^1}^2$$
for some $\delta > 0$.


¹ Here the $u$ are vector-valued functions, so $|u'|$ really means the usual Euclidean norm of the vector $(u_1'(x), \dots, u_n'(x))$, and the integral $\int |u(x)|\,dx$ really means $\int (u_1^2(x) + \cdots + u_n^2(x))^{1/2}\,dx$.

Proof. The key estimate is for the $\int (Bu')\cdot u$ term:
$$\int (Bu')\cdot u \le \|B\|\int |u'||u| \le \|B\|\,\|u'\|_{L^2}\|u\|_{L^2} = \|B\|\big(\varepsilon^{1/2}\|u'\|_{L^2}\big)\big(\varepsilon^{-1/2}\|u\|_{L^2}\big) \le \|B\|\Big(\frac{\varepsilon}{2}\|u'\|_{L^2}^2 + \frac{1}{2\varepsilon}\|u\|_{L^2}^2\Big).$$
A large coefficient in front of $\|u\|_{L^2}^2$ does no harm, since it can be absorbed by the $\lambda\langle u, u\rangle_{L^2}$ term; what does matter is the $\|u'\|_{L^2}^2$ term, which by the above can be taken care of by making $\varepsilon$ small.
Since $A$ is strictly positive definite by assumption, we have $\int (Au')\cdot u' \ge k\int |u'|^2$, and the $\int (Cu)\cdot u$ term is harmless.
So pick $\varepsilon$ small enough that $\varepsilon\|B\|/2,\ \varepsilon\|\tilde B\|/2 \le k/8$; then we have
$$B(u,u) \ge k\int |u'|^2 - \frac{k}{4}\int |u'|^2 - \frac{\|B\|}{2\varepsilon}\int |u|^2 - \frac{\|\tilde B\|}{2\varepsilon}\int |u|^2 - \|C\|\int |u|^2 \ge \frac{3k}{4}\|u\|_{H_0^1}^2 - \lambda_0\langle u, u\rangle_{L^2},$$
with $\lambda_0 = \|C\| + \dfrac{\|B\| + \|\tilde B\|}{2\varepsilon}$.

Hence the form $B(u,v) + \lambda\langle u, v\rangle_{L^2}$ is coercive for $\lambda$ large, and Lax-Milgram can finally be applied to provide a unique $u \in H_0^1(a,b)$ such that
$$B(u,v) + \lambda\langle u, v\rangle_{L^2} = \langle f, v\rangle_{L^2}$$
for all $v \in H_0^1(a,b)$; this is a weak solution of the modified version of (W) (obtained by adding the $\lambda u$ term).
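A crude numerical illustration of the shift trick (a sketch, not from the lecture, using the scalar example $-u'' - \kappa u$ mentioned above): the form $B(u,u) = \int |u'|^2 - \kappa\int u^2$ fails to be coercive for large $\kappa$, but $B(u,u) + \lambda\|u\|_{L^2}^2$ becomes positive once $\lambda$ is large enough.

    import numpy as np

    n, kappa = 200, 60.0
    h = 1.0 / n

    # grid version of B(u,u) = int |u'|^2 - kappa * int u^2, with u(0) = u(1) = 0
    T = 2*np.eye(n-1) - np.eye(n-1, k=1) - np.eye(n-1, k=-1)
    K = T / h                      # u^T K u  ~ int |u'|^2
    M = h * np.eye(n-1)            # u^T M u  ~ int u^2
    Bmat = K - kappa * M

    # smallest Rayleigh quotient B(u,u)/||u||_{L^2}^2 (M is a multiple of I here)
    vals = np.linalg.eigvalsh(Bmat / h)
    print("min B(u,u)/||u||^2 =", vals.min())           # about pi^2 - kappa < 0

    lam = kappa                                          # shift as in the lemma
    print("after the shift    =", (vals + lam).min())    # about pi^2 > 0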

Nov 14 The application of Lax-Milgram to our system of ODEs can be extended to a more general setting. But first let us make precise some points we glossed over last time.
We are considering a bilinear form $B(u,v)$ on $H_0^1(a,b)$; in general, we consider a subspace $X$ with
$$H_0^1 \subseteq X \subseteq H^1,$$
where $X$ is chosen to encode specific boundary conditions. A continuous bilinear form $B(x,y)$ on a Banach space $X$ means we have the inequality
$$|B(x,y)| \le C\|x\|\|y\|$$
for some constant $C$. It is clear how this can be generalized to $n$-linear forms on $X$. Now, for a continuous bilinear form $B$, consider the map
$$x \mapsto B(x, \cdot);$$
this is a continuous map from $X$ to $X^*$, since for each fixed $x$, $B(x,\cdot)$ is a bounded linear functional on $X$ by continuity. We want to ask the inverse question: for each continuous linear functional $M \in X^*$, does there exist $x \in X$ such that $B(x,y) = M(y)$ for all $y \in X$?
If $X$ is a Hilbert space, then $X^*$ is naturally identified with $X$ itself: since $B(x,\cdot) \in X^*$ for each fixed $x$, by Riesz we know there exists a unique $u$ such that $B(x,y) = \langle u, y\rangle$. This implicitly defines an operator $A$ with $u = Ax$; the above then reads
$$B(x,y) = \langle Ax, y\rangle.$$
The inverse question about $M \in X^*$ in this Hilbert-space setting becomes equivalent to whether we can find an inverse of the operator $A$; the Lax-Milgram lemma answers this question positively in the case of $B$ being coercive:
Theorem (Lax-Milgram, bilinear form version) Let $X$ be a Hilbert space and $B(x,y)$ a continuous bilinear form on $X$ which is coercive: there exists $\delta > 0$ such that $B(x,x) \ge \delta\|x\|^2$. Then for each $u^* \in X^*$, letting $\bar u \in X$ be the element representing $u^*$ as in the Riesz theorem, there exists a unique $x \in X$ such that
$$B(x,y) = u^*(y) = \langle \bar u, y\rangle$$
for every $y \in X$, and $\|x\| \le \frac{1}{\delta}\|u^*\|$. Of course, one can say this implicitly defines an operator $G$ with $x = G\bar u$.
To finish the discussion of the application to the ODE system
$$-(Au')' + Bu' - (\tilde Bu)' + Cu = f,$$
where $A \in L^\infty$, $B, \tilde B \in L^2$ and $C \in L^1$, plus $\langle A\xi, \xi\rangle \ge k\|\xi\|^2$ for some $k > 0$, we consider the bilinear form $B$ on $H_0^1$:
$$B(u,v) = \int (Au')\cdot v' + (Bu')\cdot v + (\tilde Bu)\cdot v' + (Cu)\cdot v.$$
We want to prove the following:
Proposition (1-D Gårding's inequality) There exists $\lambda_0$ such that
$$B(u,u) + \lambda_0\langle u, u\rangle_{L^2} \ge \delta\int_a^b |u'|^2$$
for some $\delta > 0$.

Proof. Last time we showed this in the case where $B, \tilde B, C$ have $L^\infty$ entries. Now consider the case where $\|B\|_{L^2}, \|\tilde B\|_{L^2}, \|C\|_{L^1}$ are small.
To see how small they need to be, consider the term $\int_a^b (Bu')\cdot u$ first; we have the estimate
$$\Big|\int_a^b (Bu')\cdot u\Big| \le \|B\|_{L^2}\|u'\|_{L^2}\|u\|_{L^\infty} \le c\|B\|_{L^2}\|u'\|_{L^2}^2$$
for some constant $c$, whose existence was established in the Nov 7 lecture. We can similarly estimate the $\tilde B$ and $C$ terms, and we get
$$\Big|\int (Bu')\cdot u + (\tilde Bu)\cdot u' + (Cu)\cdot u\Big| \le c\big\{\|B\|_{L^2} + \|\tilde B\|_{L^2} + \|C\|_{L^1}\big\}\,\|u'\|_{L^2}^2$$
(recall $\|u'\|_{L^2} = \|u\|_{H_0^1}$). We already know $A$ satisfies $\int (Au')\cdot u' \ge k\|u\|_{H_0^1}^2$, so if we have $c\{\|B\|_{L^2} + \|\tilde B\|_{L^2} + \|C\|_{L^1}\} \le k/2$ we are good.
The general case follows by combining the above with the validity in the $L^\infty$ case: for $B \in L^2$, we may write $B = B_1 + B_2$ where $B_1 \in L^\infty$ and $B_2 \in L^2$ has a small norm. This can be done, for example, by first fixing $M > 0$ and defining $B_1(x) = B(x)$ if $|B(x)| \le M$ and $B_1(x) = 0$ otherwise, and putting $B_2 = B - B_1$; as $M \to \infty$ we have $\int |B_2|^2 \to 0$. Doing similar things to $\tilde B \in L^2$ and $C \in L^1$ yields the result.
Finally we note: to solve
$$Lu = f$$
we instead consider
$$Lu + \lambda u = f + \lambda u,$$
where now the left hand side is invertible by the Proposition and Lax-Milgram; writing $G(Lu + \lambda u) = u$, we obtain the Fredholm equation
$$(I - \lambda G)u = Gf.$$
We can use Fredholm theory to get information about $G$ and then translate it back to $L$; for example we have
$Lu = f$ is uniquely solvable if and only if $Lu = 0$ has only the trivial solution.
$\ker L$ is finite dimensional in general, and $Lu = f$ is solvable iff $f \perp_{L^2} \ker L^*$.
Nov 24 At this point we want to summarize the various objects we have dealt with so far in applying Lax-Milgram to certain types of equations. We have the following.
The spaces $H^1(a,b)$ and $H_0^1(a,b)$, and in general any $X$ with
$$H_0^1(a,b) \subseteq X \subseteq H^1.$$
The choice of $X$ is a way to encode the boundary conditions, as we will see.
The bilinear form $B(u,v)$ on $X$; in the previous lectures we discussed
$$B(u,v) = \int_a^b (Au')\cdot v' + (Bu')\cdot v + (\tilde Bu)\cdot v' + (Cu)\cdot v,$$
where $A, B, \tilde B, C$ are matrices satisfying the conditions specified before. The bilinear form is a straightforward object, as compared to the differential operator below.
The original differential operator associated to our initial differential equation: formally, $L$ is the operator
$$Lu = -\frac{d}{dx}(Au') + Bu' - (\tilde Bu)' + Cu$$
(the minus sign in front of $(\tilde Bu)'$ is just for convenience of discussion), but $L$ is not defined for all $u \in H^1$: the term $(Au')'$ does not necessarily make sense, since we do not know that $u$ has a well-defined second derivative if $u \in H^1(a,b)$.
But, as we will show, it is possible to get a well-defined differential operator just from the bilinear form. This matters because in many situations, for example problems from continuum mechanics, a problem is posed first in the form of a bilinear form, and then one wants to find the associated differential equation and relate other objects to it.
The setting is as follows: we have a continuous bilinear form on $X$; we know this means
$$|B(u,v)| \le C\|u\|_{H^1}\|v\|_{H^1}$$
for some constant $C$ and all $u, v \in X$.
Our goal is to establish:
Proposition For suitable bilinear forms $B$ and suitable fixed $u \in X$, there exists a constant $c$ such that
$$(*)\qquad |B(u,v)| \le c\|v\|_{L^2}$$
for all $v \in X$.
What are some bilinear forms to work with? We shall first consider
$$B(u,v) = \int_a^b u'v'$$
on $X = H_0^1(a,b)$.


Note that without loss of generality it suffices to show $(*)$ holds for smooth $v$, since they are dense in $X$; then we can integrate by parts:
$$\int_a^b u'v' = -\int_a^b u''v.$$
If we fix $u \in H_0^1$ with $u'' \in L^2$, $(*)$ will follow by Cauchy-Schwarz. Notice how the boundary term disappears in the process of integration by parts; that is because we have $X = H_0^1(a,b)$, which secretly tells us that we have zero boundary conditions $v(a) = v(b) = 0$.
We consider another case where $X = H^1(a,b)$ and $B$ is still the same. Now integration by parts gives
$$\int_a^b u'v' = \big(u'(b)v(b) - u'(a)v(a)\big) - \int_a^b u''v.$$
In order to have $(*)$, the only way this can happen is to have $u'(a) = u'(b) = 0$, since the evaluation map $v \mapsto v(a)$ is not continuous with respect to the $L^2$ norm of $v$: the pointwise value of $v$ can be anything!
So $(*)$ will hold if we fix $u$ with $u'' \in L^2$ and $u'(a) = u'(b) = 0$.
Yet another variation: this time $X = H^1(a,b)$, but the bilinear form is changed to
$$B(u,v) = \int_a^b u'v' + C_a u(a)v(a) + C_b u(b)v(b)$$
for some constants $C_a, C_b$. After integration by parts we find that we need $u'(b) + C_b u(b)$ and $-u'(a) + C_a u(a)$ to vanish in order for $(*)$ to hold. This is a Robin boundary condition on $u$ (a discretized sketch follows below).
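Here is a rough P1 finite-element sketch (not from the notes; the parameters are made up) of how this bilinear form encodes the Robin condition without imposing it on the trial space: we assemble $B(u,v) = \int_a^b u'v' + C_a u(a)v(a) + C_b u(b)v(b)$ against $\int f v$ over all hat functions and then check that the computed solution approximately satisfies the Robin conditions.

    import numpy as np

    a_end, b_end = 0.0, 1.0
    Ca, Cb = 2.0, 3.0                       # Robin constants (arbitrary)
    f = lambda x: np.ones_like(x)           # right-hand side of -u'' = f

    n = 400                                 # number of elements
    x = np.linspace(a_end, b_end, n + 1)
    h = x[1] - x[0]

    # P1 hat functions; no Dirichlet constraint, so the boundary terms of B act
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):
        i, j = e, e + 1
        K[np.ix_([i, j], [i, j])] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        F[[i, j]] += 0.5 * h * f(np.array([x[i], x[j]]))

    K[0, 0] += Ca                           # Ca * u(a) v(a)
    K[-1, -1] += Cb                         # Cb * u(b) v(b)

    u = np.linalg.solve(K, F)

    # the Robin conditions appear as "natural" boundary conditions of the form
    print(-(u[1] - u[0]) / h + Ca * u[0])   # -u'(a) + Ca u(a), approximately 0
    print((u[-1] - u[-2]) / h + Cb * u[-1]) #  u'(b) + Cb u(b), approximately 0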

Now we know that in some cases $(*)$ does hold; we want to get a differential operator from $B$. In general the domain of $L$ might not be $X$ itself, but some dense subspace of it. We want to define $\operatorname{Dom}(L) = \{u \in X : |B(u,v)| \le c\|v\|_{L^2} \text{ for some } c\}$, and then define $L$ as a densely defined operator:
Definition For $u \in \operatorname{Dom}(L)$ we have
$$|B(u,v)| \le C_u\|v\|_{L^2}$$
for some constant $C_u$ depending on $u$; then $B(u,\cdot)$ extends to a bounded linear functional on the Hilbert space $L^2(a,b)$, and by the Riesz representation theorem there exists a unique element, which we denote by $Lu \in L^2(a,b)$, such that
$$B(u,v) = \langle Lu, v\rangle_{L^2}.$$
One can compare this $L$ with the $G$ we got from Lax-Milgram last time; moreover, we can consider adjoint problems for $L$, which are well defined precisely because $L$ is densely defined...
Nov 26 Continuing the discussion from last time, we can consider the adjoints of these operators and forms: $B^*(u,v) = B(v,u)$, $L^*$, $G^*$, where $G = L^{-1}$ is the operator obtained from $L$ by Lax-Milgram.
Consider $f, g \in L^2$ (over the reals) and let $B$ be a coercive bilinear form. We know there is a unique $u_0$ satisfying
$$B(u_0, v) = \langle f, v\rangle \qquad\text{for all } v,$$
and $u_0$ is denoted by $Gf$. Similarly, there exists a unique $v_0$ such that
$$B(u, v_0) = B^*(v_0, u) = \langle g, u\rangle = \langle u, g\rangle$$
for any $u$; we shall denote this $v_0$ by $\tilde Gg$. Then
$$\langle f, \tilde Gg\rangle = \langle f, v_0\rangle = B(u_0, v_0) = \langle u_0, g\rangle = \langle Gf, g\rangle.$$
This shows $\tilde G = G^*$, an extension of the familiar property of the adjoint of a matrix, $(A^*)^{-1} = (A^{-1})^*$; here $G$ is the inverse of some differential operator $L$.
Now we recall the Fredholm theorems; we have a bilinear form version: for the equation
$$(**)\qquad B(u,v) = \langle f, v\rangle_{L^2} \qquad\text{(for all } v \in X),$$
we may use the Fredholm alternative to say things like:
$(**)$ has a unique solution iff $B(u,v) = 0$ for all $v \in X$ has only the trivial solution $u = 0$.
$\ker L = \{u : B(u,v) = 0 \text{ for all } v\}$ has finite dimension, and $(**)$ is solvable iff $f \perp \ker L^*$, where $L$ is the differential operator associated to $B$.
This finishes our discussion of Fredholm theory; next time we will talk about Hahn-Banach and separation by convex sets.
Dec 01 As promised earlier in the semester, we will start discussing the Hahn-Banach theorems. We need some preliminary concepts.
Let $X$ be a real Banach space, and let $p: X \to \mathbb{R}$ be a sub-linear functional, meaning
for all $t \ge 0$ we have $p(tx) = tp(x)$;
$p(x+y) \le p(x) + p(y)$.
This says that $p$ is a convex function along lines; the above conditions imply $p(\frac{x+y}{2}) \le \frac{1}{2}(p(x) + p(y))$, which in turn is equivalent to convexity (by a theorem of Sierpiński, measurability alone is enough to guarantee that midpoint convexity implies convexity).
Given a real vector space $X$ and a convex subset $K$ of $X$ with $0 \in K$, we define the Minkowski functional associated to $K$ as
$$p_K(x) := \inf\{t > 0 : \tfrac{x}{t} \in K\}.$$
$p_K$ becomes a sub-linear functional if we require $K$ to be absorbing: the set $\{t : tx \in K\}$ contains a neighborhood of $0$ for each $x \in X$ (easy exercise).
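A small computational sketch of this definition (not from the lecture): given a membership oracle for a convex absorbing set $K$ containing $0$, the value $p_K(x)$ can be approximated by bisection in $t$, using the fact that $x/t \in K$ is monotone in $t$ for convex $K$ with $0 \in K$.

    def minkowski_functional(in_K, x, tol=1e-9):
        """Approximate p_K(x) = inf{t > 0 : x/t in K} by bisection.

        in_K is a membership oracle for a convex set K with 0 in K;
        x is a point of R^n given as a tuple of floats.
        """
        scale = lambda t: tuple(xi / t for xi in x)
        t_hi = 1.0
        while not in_K(scale(t_hi)):     # K absorbing: x/t lands in K for t large
            t_hi *= 2.0
        t_lo = 0.0
        while t_hi - t_lo > tol:
            t_mid = 0.5 * (t_lo + t_hi)
            if in_K(scale(t_mid)):
                t_hi = t_mid
            else:
                t_lo = t_mid
        return t_hi

    # example: K = closed Euclidean unit ball in R^2, so p_K is the Euclidean norm
    ball = lambda p: p[0]**2 + p[1]**2 <= 1.0
    print(minkowski_functional(ball, (3.0, 4.0)))   # approximately 5.0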
Now we can state the Hahn-Banach theorem:
Theorem (Hahn-Banach) Let $X$ be a vector space and $Y \subseteq X$ a subspace. Let $p$ be a sublinear functional on $X$, and $l$ a linear functional on $Y$ which is dominated by $p$: $l(x) \le p(x)$ for all $x \in Y$. Then we can extend $l$ to a linear functional $L$ on $X$ such that $L(x) \le p(x)$ for all $x \in X$.
Before turning to the proof, we shall see some important consequences of Hahn-Banach.
If $X$ is a Banach space, we take $p(x) = \|x\|$. Fix $x_0 \in X$, consider the one-dimensional subspace $Y$ spanned by $x_0$, and define a linear functional $l$ on $Y$ by $l(tx_0) = t\|x_0\|$; then $l(tx_0) \le |t|\|x_0\| = \|tx_0\| = p(tx_0)$, so the domination condition holds. We may apply Hahn-Banach to get a linear functional $L$ on $X$ with $L(x_0) = \|x_0\|$; note $\|l\|_Y = 1$, so $\|L\| = 1$.
The above says: for any $x \in X$, we may find $L_x \in X^*$ such that $\|L_x\| = 1$ and $L_x(x) = \|x\|$; in other words, we have
$$\|x\| = \sup\{L(x) : \|L\| \le 1\}.$$
This means the embedding $i$ of $X$ into its double dual $X^{**}$, where $x$ is identified with the functional $i(x)(l) = l(x)$ for $l \in X^*$, is an isometry: $\|x\| = \|i(x)\|$.
The first item also says that the unit ball $B = \{x : \|x\| < 1\}$ lies on one side of the hyperplane $H = \{x : L(x) = 1\}$ (with $L = L_{x_0}$), since $\|L\| = 1$; in particular $x_0/\|x_0\| \in H$, so $H$ is a supporting hyperplane of the unit ball. If another point $x_1$ of the closed unit ball also lies in $H$, then by convexity the whole segment $tx_0/\|x_0\| + (1-t)x_1$, $t \in [0,1]$, lies in $H$ and on the unit sphere, so the contact set need not be a single point, even in finite dimensions.
Dec 03 Consider the sequence space $c_0 = \{(x_1, x_2, \dots) : \lim_{k\to\infty} x_k = 0\}$ with the sup norm. We want to find the dual of $c_0$. Note first that $c_0$ contains the basis vectors $e_k = (0, \dots, 0, 1, 0, \dots)$, where the $1$ is in the $k$-th spot. Let $l \in c_0^*$ and put $\alpha_k = l(e_k)$. Then
$$l((x_1, x_2, \dots, x_m, 0, \dots)) = \alpha_1 x_1 + \cdots + \alpha_m x_m.$$
We have
$$\|l\| = \sup_{\|x\|\le 1} |l(x)|.$$
This supremum can be approximated by the numbers
$$\sup_{\|x\|\le 1,\ x = (x_1,\dots,x_m,0,\dots)} |l(x)| = \sup_{|x_i|\le 1}\sum_{i=1}^m \alpha_i x_i = \sum_{i=1}^m |\alpha_i|,$$
where the last equality is obtained by picking $x_i$ equal to the sign of $\alpha_i$. Thus the norm of $l$ is $\sum_{j=1}^\infty |\alpha_j|$. One sees that the dual of $c_0$ is the space $l^1$: we identify the member of $c_0^*$ with the sequence $(\alpha_1, \alpha_2, \dots) \in l^1$.
Similarly, we can find the dual of $l^1$ by computing the dual norm via suprema over finite-dimensional subspaces: this time we need to maximize
$$x_1\alpha_1 + \cdots + x_n\alpha_n$$
with the constraint $\|x\|_{l^1} = \sum |x_i| \le 1$. It is natural to put all the weight on one coordinate, the one where $|\alpha_i|$ is largest; hence the norm on the dual of $l^1$ is the sup-norm. In other words, the dual of $l^1$ is $l^\infty$, which is strictly bigger than $c_0$.
This shows $X$ and $X^{**}$ are not necessarily equal. For an example where equality does hold, consider $X = l^p$ for $1 < p < \infty$, the space of $p$-summable sequences with norm $\|x\|_p^p = \sum |x_i|^p$.
We still use the same method to find the dual norm on $l^p$: we will be dealing with the maximization problem of finding
$$\sup_{\|x\|_p \le 1} \alpha_1 x_1 + \cdots + \alpha_m x_m.$$
We use the method of Lagrange multipliers (as a sketch): consider the function
$$\sum_i \alpha_i x_i - \mu\sum_i x_i^p,$$
take the gradient and set it equal to zero; we get $\alpha_i = p\mu\, x_i^{p-1}$, so $x_i = c_i|\alpha_i|^{1/(p-1)}$ (up to normalization), where $c_i$ is the sign of $\alpha_i$. The maximum value works out to
$$\Big(\sum_i |\alpha_i|^q\Big)^{1/q},$$
where $q$ is the conjugate exponent of $p$: $\frac{1}{p} + \frac{1}{q} = 1$. This explains why the dual of $l^p$ is $l^q$.
We shall now prove the Hahn-Banach theorem.
Proof of Hahn-Banach theorem: We pick some $b \notin Y$ and put $Z = \operatorname{span}[Y, b]$; we will show first that we can extend $l$ to $Z$. We have a natural form for the extension of $l$ to $Z$: define $l_Z: Z \to \mathbb{R}$ by $l_Z(y + tb) = l(y) + t\alpha$; the only problem is to choose $\alpha := l_Z(b)$ properly so that the domination condition $l_Z \le p$ holds on $Z$. This leads to the conditions
$$l(y) + t\alpha \le p(y + tb), \qquad l(y) - t\alpha \le p(y - tb)$$
for $t > 0$; hence, using the homogeneity of $p$, $\alpha$ needs to satisfy
$$l(y) - p(y - b) \le \alpha \le p(y' + b) - l(y')$$
for all $y, y' \in Y$. So we need
$$l(y) - p(y - b) \le p(y' + b) - l(y') \qquad\text{for all } y, y' \in Y,$$
i.e. $l(y + y') \le p(y - b) + p(y' + b)$, which follows from $l(y+y') \le p(y+y')$ and the sub-additivity of $p$. Hence all our wishes come true: we can extend the functional $l$ by one dimension.
To see that we can extend $l$ to all of $X$, it is necessary to use some equivalent of the axiom of choice. We define a partial ordering on the set of pairs $(Y, l)$, where $Y$ is a subspace of $X$ and $l$ is a linear functional on $Y$ with $l \le p$ on $Y$, by $(Y, l) \preceq (Y', l')$ if $Y \subseteq Y'$ and $l'$ extends $l$. It is easy to see that every chain of such pairs has an upper bound (the union), so by Zorn's lemma there exists a maximal element $(Y_0, L)$; we must have $Y_0 = X$, otherwise we could extend by one dimension as before, contradicting maximality. Then $L$ is the desired extension.
Dec 05 We talk about another important consequence of Hahn-Banach, namely separation by convex sets. Assume $K_1$ and $K_2$ are two convex sets in $X$, both closed, whose distance is positive: $\operatorname{dist}(K_1, K_2) = \inf_{x\in K_1, y\in K_2}\|x - y\| > 0$. We want to show the following: there exists $l \in X^*$ such that
$$\sup_{x\in K_1} l(x) \le \inf_{y\in K_2} l(y).$$
Proof. Consider the sets $K_1 + B_\varepsilon$, $K_2 + B_\varepsilon$ (where $B_\varepsilon$ is the open ball of radius $\varepsilon$); these two sets still have positive distance if $\varepsilon$ is small, and they have non-empty interior. Let us temporarily denote them by $\tilde K_1, \tilde K_2$, and define $K = \tilde K_1 - \tilde K_2 = \{x_1 - x_2 : x_1 \in \tilde K_1, x_2 \in \tilde K_2\}$. Now $K$ is a convex set such that
$K$ has non-empty interior;
$0 \notin K$.
We pick $x_0$ in the interior of $K$ and put $\tilde K = K - x_0$; then $\tilde K$ contains $0$. Consider the Minkowski functional $p$ associated to $\tilde K$; we have $-x_0 \notin \tilde K$ because $0 \notin K$, which implies $p(-x_0) \ge 1$, since in general $\{x : p_A(x) < 1\} \subseteq A$ for any convex absorbing set $A$.
Now, using Hahn-Banach (the starting space being the one-dimensional space spanned by $x_0$), we pull out a linear functional $l$ such that $l \le p$ on $X$ and $l(-x_0) = p(-x_0) \ge 1$, so that $l(x_0) \le -1$. For $x \in K = x_0 + \tilde K$ we have $p(x - x_0) < 1$ (since $\tilde K$ is open), and since $l$ is dominated by $p$,
$$l(x) = l(x - x_0) + l(x_0) \le p(x - x_0) + l(x_0) < 1 - 1 = 0.$$
Thus $l(x) < 0$ for all $x \in K$, which shows $l(x_1) < l(x_2)$ for $x_1 \in \tilde K_1$, $x_2 \in \tilde K_2$. Note that $\tilde K_1$ and $\tilde K_2$ are the original $K_1, K_2$ enlarged by small balls, so they contain the original $K_1, K_2$; hence this $l$ satisfies our requirement.
Dec 08 The last topic we will be talking about this semester is the weak topology. It is the topology on $X$ which is the coarsest (having the least amount of open sets) that makes every $l \in X^*$ continuous.
This requires a more general notion than that of a normed vector space, since such a topology is not necessarily induced by a norm; instead, we need to discuss topological vector spaces (TVS). We say $X$ is such a space if it is both a vector space and has a topology on it which makes the algebraic operations (scalar multiplication and addition) continuous. This means:
$(X, +)$ is a topological group, and the map $x \mapsto x + a$ is a homeomorphism for any fixed $a \in X$; similarly $x \mapsto \lambda x$ is a homeomorphism for any $\lambda \ne 0$ in the base field.
By the above, any neighborhood of a point $x \in X$ can be translated to a neighborhood of $0$. Continuity of addition and scalar multiplication means that for any neighborhood $U$ of $0$ there exists a neighborhood $V \subseteq U$ of $0$ such that $V + V \subseteq U$, and $U$ is an absorbing neighborhood: there are $\varepsilon > 0$ and a neighborhood $U' \subseteq U$ of $0$ such that $\lambda v \in U$ whenever $|\lambda| < \varepsilon$ and $v \in U'$.
We are mostly interested in the case where $X$ is a locally convex TVS: there is a local basis $\mathcal{U}$ at $0$ such that each $U \in \mathcal{U}$ is convex.
Let us now describe the weak topology on $X$ more precisely. Given any $l \in X^*$ and $\varepsilon > 0$, the set $\{x : |l(x)| < \varepsilon\}$ is a neighborhood of $0$, and these sets form a sub-basis for the weak topology on $X$. In general, a basis of the weak topology consists of finite intersections of elements of a sub-basis: let $l_1, \dots, l_m \in X^*$ and $\varepsilon_1, \dots, \varepsilon_m > 0$, and set
$$O_{l_1,\dots,l_m,\varepsilon_1,\dots,\varepsilon_m} = \{x : |l_i(x)| < \varepsilon_i,\ i = 1,\dots,m\}.$$
These form a locally convex basis at $0$. Any open set is a union of basis elements; this specifies the weak topology on $X$.
Now we talk about the weak* topology on $X^*$: this is the coarsest topology on $X^*$ such that every member of the image of $X$ in the double dual $X^{**}$ under the inclusion map $i(x)(l) = l(x)$ is a continuous linear functional on $X^*$. A sub-basis element is of the form
$$O_{x_i,\varepsilon_i} = \{l \in X^* : |l(x_i)| < \varepsilon_i\},$$
where the $\varepsilon_i$ form some finite collection of positive numbers, and similarly for the $x_i \in X$.
The reason we go through all of this is the following important theorem:
Theorem (Banach-Alaoglu) Let $X$ be a Banach space and $X^*$ its dual; the unit ball $B^* = \{l \in X^* : \|l\|_{X^*} \le 1\}$ of $X^*$ is compact in the weak* topology.
Before turning to the proof of Banach-Alaoglu, we have some comments to make.
If $X$ is separable, then $B^*$ with the weak* topology is metrizable (a fact). Then sequential compactness can be substituted for general compactness on $B^*$: picking a bounded sequence $\{l_i\}_{i=1}^\infty$ in $X^*$, we can find a subsequence $l_{k'}$ such that $l_{k'}$ converges to some $l \in X^*$ in the weak* topology.
We can prove the above directly. Let $(x_j)$ be a dense sequence in $X$ (by separability); by a diagonal selection process there is a subsequence $l_{k'}$ such that $l_{k'}(x_j)$ converges for each fixed $x_j$. Now pick any $x \in X$; we can estimate
$$|l_{k'}(x)| = |l_{k'}(x - x_j + x_j)| \le \|l_{k'}\|\,\|x - x_j\| + |l_{k'}(x_j)|,$$
and an estimate of this kind (applied to differences $l_{k'} - l_{k''}$) shows that the sequence of numbers $l_{k'}(x)$ is Cauchy, hence converges to some $l(x)$, for every $x \in X$. Defining $l$ by $l(x) = \lim l_{k'}(x)$, it is easy to check that $l$ is a bounded linear functional.
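A tiny illustration (not from the notes) of weak* convergence without norm convergence, in the setting $c_0^* = l^1$ from Dec 03: the functionals $l_n = e_n \in l^1$ all have norm $1$, yet $l_n(x) = x_n \to 0$ for every fixed $x \in c_0$.

    import numpy as np

    N = 10000
    x = 1.0 / np.arange(1, N + 1)        # a fixed element of c_0, truncated

    for n in [1, 10, 100, 1000]:
        e_n = np.zeros(N)
        e_n[n - 1] = 1.0                 # the functional l_n = e_n in l^1
        print(n, np.dot(e_n, x), np.sum(np.abs(e_n)))   # l_n(x) -> 0, ||l_n||_1 = 1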
Dec 10 Let us prove the Banach-Alaoglu theorem to end this semester.
Proof of Banach-Alaoglu: We need Tychonov's theorem, which asserts that the space $[-1,1]^A = \{f : A \to [-1,1]\}$ is compact in the product topology; here $A$ is any set. A proof can be found in Folland's real analysis book or in Terence Tao's notes online.
Now consider the closed unit ball $B^*$ in $X^*$ with the weak* topology. Clearly, $B^*$ is a subset of $[-1,1]^B$, where $B$ is the closed unit ball in $X$ with the strong (norm) topology, and the weak* topology on $B^*$ is just the subspace topology inherited from the product topology on $[-1,1]^B$ (here one needs to unravel the definitions a bit).
By Tychonov, we only need to show $B^*$ is closed in $[-1,1]^B$, since a closed subset of a compact set is compact. This can be done using the theory of nets.
As a corollary, suppose $X$ is reflexive: $X = X^{**}$; then the closed unit ball $B$ of $X$ is compact in the weak topology on $X$, because the weak* topology on $X = (X^*)^*$ agrees with the weak topology on $X$.
Jan 21 OK, welcome back. We started by briefly reviewing the definition of the weak topology in general: we have a space $X$ and a set $F$ of functions from $X$ to $\mathbb{R}$, and the weak topology induced by $F$ has a basis of neighbourhoods consisting of sets of the form
$$O_{x, f_1,\dots,f_m,\varepsilon_1,\dots,\varepsilon_m} = \{y \in X : |f_j(x) - f_j(y)| < \varepsilon_j,\ j = 1,\dots,m\},$$
where $f_j \in F$, $x \in X$, $\varepsilon_j > 0$.
Now let $I$ be any index set; we write $\mathbb{R}^I$ for the product equipped with the product topology: this is the topology such that all the projection maps
$$\pi_{i_0}: \mathbb{R}^I \to \mathbb{R}, \qquad \pi_{i_0}\big((x_i)_{i\in I}\big) = x_{i_0}$$
are continuous.
If $F$ is a normed space which possesses a sub-collection $F'$ such that for any $f \in F$ and any $\varepsilon > 0$ there exists $f' \in F'$ with $\|f - f'\| < \varepsilon$, then it is clear that $F$ and $F'$ induce the same weak topology.
Another fact is that if $I$ is countable, then $\mathbb{R}^I$ is a metrizable space, with metric given as follows: for $x = (x_n)_{n\in\mathbb{N}}$, $y = (y_n)_{n\in\mathbb{N}}$, define
$$d(x,y) = \sum_{i=1}^{\infty} 2^{-i}\,\frac{|x_i - y_i|}{1 + |x_i - y_i|};$$

it is a standard exercise to show that d is indeed a metric which induces the product
topology.
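A direct implementation of this metric (a sketch, not from the notes), truncating the series; the discarded tail is at most $2^{-N}$, so the truncation error is controlled.

    def product_metric(x, y, N=60):
        """d(x, y) = sum_{i>=1} 2^{-i} |x_i - y_i| / (1 + |x_i - y_i|), truncated at N.

        x, y: callables i -> float (i = 1, 2, ...) representing points of R^N.
        """
        return sum(2.0**(-i) * abs(x(i) - y(i)) / (1.0 + abs(x(i) - y(i)))
                   for i in range(1, N + 1))

    # example: the distance between (1, 1/2, 1/3, ...) and the zero sequence
    print(product_metric(lambda i: 1.0/i, lambda i: 0.0))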
Now back to Banach spaces. We have in mind a Banach space $X$ and its dual $X^*$. We consider the weak topology on $X$, denoted $(X, w)$, which is the weak topology on $X$ induced by $X^*$, and the weak* topology on $X^*$, which is induced by the image of $X$ in $X^{**}$, denoted $(X^*, w^*)$. Let $B = \bar B_X(0,1)$ be the (norm) closed unit ball in $X$, and $B^*$ the (norm) closed unit ball in $X^*$.
Our first observation is:
Proposition If $X^*$ is separable, then $(B, w)$ is metrizable; if $X$ is separable, then $(B^*, w^*)$ is also metrizable.
Proof. If $X^*$ is separable, let $L$ be a countable dense subset of $X^*$; by the above two observations, on the ball $B$ the weak topology is generated by $L$, and it is metrizable since $L$ is countable, so the subspace topology on $B$ is metrizable. The same proof applies to the second assertion.
Recall the Banach-Alaoglu theorem, which asserts that $B^*$ is compact in the weak* topology on $X^*$; this shows that when $X$ is separable, $(B^*, w^*)$ is a compact metric space, which possesses many convenient topological properties.
We next consider the situation when $X$ is reflexive, i.e. $X = X^{**}$. Now $X = (X^*)^*$, so it carries a weak* topology induced from $X^*$; on the other hand, the weak topology on $X$ is also induced by $X^*$; thus $(X, w) = (X^{**}, w^*)$, and in particular we know that $B$ is compact in the weak topology, since $(B_{X^{**}}, w^*) = (B, w)$.
Theorem The Banach space X is reflexive if and only if (B, w) is compact.
Proof. The above observation already showed one direction, on the other hand,
suppose (B, w) is compact, we claim that the image of B under the canonical
projection map i : X 7 X is dense in (BX , w ), then from (B, w) is compact,
and we know i is an isometry by Hahn-Banach, we have that i(B) = BX , which
shows i is surjective.
To do this, given BX , and l1 , . . . , ln X , we need to show any weak*
neighbourhood of in BX , which is of the form
O,lj ,j = { : |(lj ) (lj )| < j }
contain an element from i(B), that is, we want to find x B such that |(lj )
lj (x)| < .

Let $X_0 = \bigcap_j \ker l_j$; then $X/X_0$ is finite dimensional, with norm
$$\|[x]\| = \inf\{\|x'\| : x' - x \in X_0\}.$$
Let $L$ be the linear span of the $l_j$; it is equal (isomorphic) to $(X/X_0)^*$. Since $X/X_0$ is finite dimensional, it is reflexive, so the restriction $\xi|_L \in B_{L^*} = B_{(X/X_0)^{**}} = B_{X/X_0}$ corresponds to some $[x] \in B_{X/X_0}$ with $i([x]) = \xi|_L$, i.e. $l_j(x) = \xi(l_j)$ for every representative $x$ of $[x]$. Undoing the infimum in the quotient norm (choosing a representative of norm arbitrarily close to $\|[x]\| \le 1$) gives the required estimate.
