UNIT-I

• Elementary row (and column) transformations

• L-U-Decomposition

• Summary

Summary

1. Rank of a matrix: The rank of a matrix is the order r of the largest non-vanishing minor of the matrix.

a) Row transformations:

i) Interchange of the i-th and j-th rows ……. Rij

ii) Multiplication of the i-th row by a non-zero scalar l …. Ri(l)

iii) Addition of l times the elements of the j-th row to the corresponding elements of the i-th row --- Rij(l)

b) Column transformations are similar to (a): replace R by C.

Method I: Echelon Form: Transform the given matrix to an echelon form using elementary transformations. The rank of the matrix is equal to the number of non-zero rows of the echelon form.

Method II: Canonical Form OR Normal Form: Reduce the given matrix A to one of the normal forms

\(\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}\), \(\begin{bmatrix} I_r \\ 0 \end{bmatrix}\), \(\begin{bmatrix} I_r & 0 \end{bmatrix}\) or \(I_r\),

using elementary transformations. Then rank of A = r.
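The two methods can be cross-checked numerically. A minimal sketch, assuming NumPy and SymPy are available; the matrix is illustrative:

```python
import numpy as np
import sympy as sp

A = np.array([[1, 2, 3],
              [2, 4, 6],     # twice row 1, so it adds nothing to the rank
              [1, 0, 1]])

# Method I in effect: SymPy's rref() returns the row-reduced echelon form;
# the number of non-zero rows (= pivots) equals the rank.
R, pivots = sp.Matrix(A).rref()
print(R)              # echelon (RREF) form
print(len(pivots))    # 2 -> rank

# Cross-check with NumPy's numerical rank.
print(np.linalg.matrix_rank(A))   # 2
```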

2. System of linear equations: A system of m linear equations in n unknowns can be written in matrix form as AX = B, where

A = \(\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}\), X = \(\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}\), B = \(\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}\)

If bi = 0 for all i, the system is homogeneous, i.e. B = 0; otherwise it is non-homogeneous. The system is consistent iff the rank of A is equal to the rank of the augmented matrix [A/B].

3. Solution of AX = B — working rule:

ii) If r(A) = r(A/B) = n (n being the number of unknowns), the system is consistent and has a unique solution [in this case │A│ ≠ 0].

iii) If r(A) = r(A/B) < n, the system is consistent and has infinitely many solutions.

iv) If r(A) ≠ r(A/B), the system is inconsistent and has no solution.
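A small sketch of this working rule, assuming NumPy; the example systems are made up for illustration:

```python
import numpy as np

def classify(A, B):
    n = A.shape[1]                                  # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rAB = np.linalg.matrix_rank(np.column_stack([A, B]))
    if rA != rAB:
        return "inconsistent (no solution)"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1.0, 1.0], [1.0, -1.0]])
print(classify(A, np.array([2.0, 0.0])))            # unique solution
print(classify(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([1.0, 3.0])))  # inconsistent
```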

ii) Cramer's Rule [method of determinants]: Let │A│ ≠ 0 and let Δ = │A│. We obtain three more determinants Δ1, Δ2, Δ3 of the matrices obtained by replacing the 1st, 2nd and 3rd columns of A, respectively, by the column matrix B of the system.

Then x1 = Δ1/Δ, x2 = Δ2/Δ, x3 = Δ3/Δ.
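A direct transcription of Cramer's rule for a 3x3 system, assuming NumPy; the system is illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
B = np.array([4.0, 5.0, 6.0])

delta = np.linalg.det(A)          # requires |A| != 0
x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = B                  # replace i-th column by B
    x.append(np.linalg.det(Ai) / delta)   # x_i = delta_i / delta

print(x)
print(np.linalg.solve(A, B))      # cross-check
```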

iii) Gauss–Jordan method: Reduce the augmented matrix (A/B) to the form [I3 X], where I3 is the unit matrix. Then X = [x1 x2 x3]^T is the solution.

5. Gauss elimination:

Step 1. Reduce the augmented matrix by elementary row operations to obtain an upper triangular matrix.

Step 2. The last equation of this matrix gives the value of xn.

Step 3. Back substitution of the unknowns into the other equations gives the other unknowns.
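A sketch of the three steps, assuming NumPy and no pivoting (so the diagonal entries must remain non-zero); illustrative rather than production code:

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Step 1: reduce to an upper triangular matrix.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Steps 2-3: last equation gives x_n, then back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))
```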

6. L-U decomposition method:

ii) Let L = \(\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}\) and U = \(\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}\); then A = LU and LUX = B.

iii) From LU = A, we get both L and U.

iv) From LY = B, we get y1, y2, y3; and then

v) From UX = Y, we get x1, x2, x3.
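A hedged sketch of the L-U method using SciPy; note that lu_factor adds partial pivoting, a slight generalization of the plain A = LU described above:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
B = np.array([10.0, 12.0])

lu, piv = lu_factor(A)      # factor A (= P L U with pivoting)
x = lu_solve((lu, piv), B)  # forward solve LY = B, then back solve UX = Y
print(x)
```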

7. Tri-diagonal matrix: a matrix of the form

A = \(\begin{bmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{bmatrix}\)

8. Homogeneous system AX = 0: with X = [x1, x2, …, xn]^T, the solution of AX = 0 can be found by elementary transformations.

Conclusions:

i) The system AX = 0 is always consistent, since the trivial solution x1 = x2 = … = xn = 0 always exists.

ii) If rank of (A/B) = rank of A = n [│A│ ≠ 0], then the trivial solution is the only solution.

iii) If rank of (A/B) = rank of A = r < n [│A│ = 0], the system has an infinite number of non-trivial solutions involving (n − r) arbitrary constants.
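A sketch of solving AX = 0, assuming SymPy; nullspace() returns a basis of the solution space (here r = 2 < n = 3, so n − r = 1 arbitrary constant appears):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])
print(A.rank())        # 2
print(A.nullspace())   # basis vector(s) of the non-trivial solutions of AX = 0
```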

UNIT-II

EIGEN VALUES, EIGEN VECTORS & THEIR APPLICATIONS

• Eigen values and Eigen vectors

• Cayley–Hamilton theorem

• Diagonalization of a matrix

• Summary

Summary

1. Eigen values & Eigen vectors: Let A = (aij)n×n be a square matrix.

(a) The characteristic equation of A is given by │A − λI│ = 0.

(b) The roots of this equation are λ1, λ2, λ3, …, λn. They are called the Eigen values of A.

(c) A non-zero vector X = [x1, x2, x3, …, xn]^T which satisfies the relation [A − λI]X = 0 (or AX = λX) is called the Eigen vector of A corresponding to λ. Thus each Eigen value has an Eigen vector.

2. Properties of Eigen values:

1. The sum of the Eigen values of a square matrix A is its trace, and their product is │A│.

2. The Eigen values of A and A^T are equal.

3. If A is a non-singular matrix and λ is an Eigen value of A, then 1/λ is an Eigen value of A^{-1}.

4. If λ is an Eigen value of A, then µλ is an Eigen value of µA, where µ is a non-zero scalar.

5. If λ is an Eigen value of A, then λ^m is an Eigen value of A^m, m being any positive integer.

6. The Eigen values of a diagonal matrix are its diagonal elements.

7. If B is a non-singular matrix and A, B are matrices of the same order, then A and B^{-1}AB have the same Eigen values.

8. λ is a characteristic root of a square matrix A iff there exists a non-zero vector X such that AX = λX.

9. If X is an Eigen vector of A corresponding to the Eigen value λ, then cX is also an Eigen vector of A corresponding to λ, c being a non-zero scalar.

10. If X is an Eigen vector of a square matrix A, then X cannot correspond to more than one Eigen value of A.

11. Zero is an Eigen value of a matrix iff it is singular.

12. If λ is an Eigen value of a non-singular matrix A, then │A│/λ is an Eigen value of Adj A.
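A numerical spot-check of properties 1, 2, 3 and 5, assuming NumPy; the matrix is an arbitrary non-singular example:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam = np.linalg.eigvals(A)

print(np.isclose(lam.sum(), np.trace(A)))            # sum = trace
print(np.isclose(lam.prod(), np.linalg.det(A)))      # product = |A|
print(np.allclose(np.sort(lam), np.sort(np.linalg.eigvals(A.T))))   # A vs A^T
print(np.allclose(np.sort(1/lam), np.sort(np.linalg.eigvals(np.linalg.inv(A)))))
print(np.allclose(np.sort(lam**2), np.sort(np.linalg.eigvals(A @ A))))  # lambda^m for A^m
```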

3. Cayley–Hamilton theorem: 'Every square matrix satisfies its own characteristic equation.'

4. To find the inverse of a square matrix A by using the C-H theorem:

Let A be a square matrix and let λ^n + a1λ^{n−1} + a2λ^{n−2} + … + an = 0 ….(1) be the characteristic equation of A (ai, i = 1 to n, are constants).

Then the C-H theorem gives A^n + a1A^{n−1} + … + anI = 0 …(2)

Multiplying (2) by A^{-1}: A^{n−1} + a1A^{n−2} + … + a_{n−1}I + anA^{-1} = 0, so

A^{-1} = (−1/an)[A^{n−1} + a1A^{n−2} + … + a_{n−1}I]

To find higher powers of A: Let m ≥ n be a positive integer. Multiplying (2) by A^{m−n}: A^m + a1A^{m−1} + … + anA^{m−n} = 0, from which we can find A^m in terms of lower powers of A.
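A sketch of this inverse formula, assuming NumPy; np.poly returns the coefficients [1, a1, …, an] of the characteristic polynomial of a square matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
a = np.poly(A)                 # here lambda^2 - 5*lambda - 2 = 0
n = A.shape[0]

# A^{-1} = -(1/an) [A^{n-1} + a1 A^{n-2} + ... + a_{n-1} I]
S = np.zeros_like(A)
for k in range(n):             # accumulate a_k A^{n-1-k}, with a_0 = 1
    S += a[k] * np.linalg.matrix_power(A, n - 1 - k)
A_inv = -S / a[n]

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```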

5. Diagonalization of a matrix: Let A be a square matrix of order n having n linearly independent Eigen vectors. Then there exists a non-singular matrix P such that P^{-1}AP = D is a diagonal matrix, and D = Diag[λ1, λ2, …, λn].

Step 1: Find the Eigen values λi (i = 1, 2, …, n) of A.

Step 2: Find the Eigen vectors Xi corresponding to λi (λi, i = 1, 2, …, n, are distinct).

Step 3: Form the matrix P = [X1 X2 X3 … Xn] whose column vectors Xi are the Eigen vectors of λi. (The matrix P is known as the Modal matrix of A.)

Step 4: Find D = P^{-1}AP = Diag[λ1, λ2, λ3, …, λn]. This is the diagonalization of A. The matrix D is known as the Spectral matrix of A.

Computation of positive powers of A: If m is a positive integer, then A^m = (PDP^{-1})^m = P D^m P^{-1}.
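A sketch of Steps 1–4 and of the power formula, assuming NumPy and distinct Eigen values (so P is invertible); the matrix is illustrative:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, P = np.linalg.eig(A)        # Steps 1-3: Eigen values and modal matrix P

D = np.linalg.inv(P) @ A @ P     # Step 4: spectral matrix D = P^{-1} A P
print(np.round(D, 10))           # diagonal, entries = Eigen values

m = 5                            # A^m = P D^m P^{-1}
Am = P @ np.diag(lam**m) @ np.linalg.inv(P)
print(np.allclose(Am, np.linalg.matrix_power(A, m)))   # True
```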

UNIT-III

LINEAR TRANSFORMATIONS

• Properties of real and complex matrices

• Sylvester's Law

• Summary

Summary

1. Definitions and properties of some real and complex matrices are the following:

1. … characteristic root.

2. The Eigen values of an orthogonal matrix are of unit modulus.

3. The Eigen values of a Hermitian matrix are all real.

4. The Eigen values of a real symmetric matrix are all real.

5. The Eigen values of a skew-Hermitian matrix are either purely imaginary or zero.

6. The Eigen values of a real skew-symmetric matrix are purely imaginary or zero.

7. The Eigen values of a unitary matrix are of unit modulus.

8. If A is a nilpotent matrix, then 0 is the only Eigen value of A.

9. If A is an involutory matrix (A² = I), its possible Eigen values are 1 and −1.

10. If A is an idempotent matrix (A² = A), its possible Eigen values are 0 and 1.

3. Transformations: Y = AX, where X = [x1 x2 … xn]^T and Y = [y1 y2 … yn]^T, transforms the vector X to the vector Y by the matrix A. The transformation is linear.

(i) If A is non-singular (│A│ ≠ 0), then Y = AX is a non-singular transformation.

(ii) Then X = A^{-1}Y is the inverse transformation of Y = AX.

Y = AX is an orthogonal transformation if A is orthogonal: A^T = A^{-1} ⇒ Y^T Y = X^T X,

i.e., Y = AX transforms (x1² + x2² + … + xn²) to (y1² + y2² + … + yn²).

4. Quadratic forms: A homogeneous polynomial of 2nd degree in n variables x1, x2, …, xn is called a quadratic form,

i.e. q = a11x1² + a22x2² + … + annxn² + (a12 + a21)x1x2 + (a13 + a31)x1x3 + …

is a quadratic form in the n variables x1, x2, …, xn.

The symmetric matrix A = (aij) is called the matrix of q, where aij + aji = 2aij is the coefficient of xixj [i.e. aij = aji = ½ × coefficient of xixj].

Then q = X^T A X = [x1 x2 … xn] A [x1 x2 … xn]^T.

(a) If r = n (r = rank of A), q is a non-singular quadratic form.

(b) If r < n, q is singular.

A quadratic form in which the product terms are missing (i.e. all terms are square terms only) is called the canonical form of q,

i.e. q = a1x1² + a2x2² + … + anxn² is a canonical form.

If D = Diag[d1, d2, …, dn] is obtained by diagonalization of A, then q1 = d1x1² + d2x2² + … + drxr² (where r = rank of A) is the canonical form of q = X^T A X.

1. If q = X^T A X is the given quadratic form (in n variables) of rank r, then q1 = d1x1² + d2x2² + … + drxr² is the canonical form of q [each di is +ve, −ve, or zero].

(a) Index: The number of +ve terms in q1 is called the index 's' of the quadratic form q.

(b) The number of −ve terms is r − s.

(c) Signature = s − (r − s) = 2s − r.

2. The quadratic form q is said to be

(a) +ve definite if r = n and s = n

(b) −ve definite if r = n and s = 0

(c) +ve semi-definite if r < n and s = r

(d) −ve semi-definite if r < n and s = 0

(e) indefinite in all other cases.

Principal-minor test: Let M1, M2, …, Mn be the principal minors of A.

(a) q is +ve definite if Mj > 0 for every j ≤ n.

(b) q is −ve definite if M1, M3, M5, … are all −ve and M2, M4, M6, … are all +ve.

(c) q is +ve semi-definite if Mj ≥ 0 for every j ≤ n and at least one Mj = 0.

(d) q is −ve semi-definite if, in case (b), some Mj are = 0.

(e) In all other cases q is indefinite.

Eigen value test: If q = X^T A X is a quadratic form in n variables, then it is

a. +ve definite iff all Eigen values of A are +ve

b. −ve definite iff all Eigen values are −ve

c. +ve semi-definite if all Eigen values are ≥ 0 and at least one Eigen value = 0

d. −ve semi-definite if all Eigen values are ≤ 0 and at least one Eigen value is zero

e. indefinite if A has +ve as well as −ve Eigen values.
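A sketch of this Eigen value test, assuming NumPy; the tolerance is ad hoc and the matrices are illustrative:

```python
import numpy as np

def classify_form(A, tol=1e-12):
    lam = np.linalg.eigvalsh(A)          # eigvalsh: for symmetric matrices
    if np.all(lam > tol):   return "+ve definite"
    if np.all(lam < -tol):  return "-ve definite"
    if np.all(lam >= -tol): return "+ve semi-definite"
    if np.all(lam <= tol):  return "-ve semi-definite"
    return "indefinite"

print(classify_form(np.array([[2.0, -1.0], [-1.0, 2.0]])))   # +ve definite
print(classify_form(np.array([[1.0, 0.0], [0.0, -3.0]])))    # indefinite
```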

10. Methods of reduction of a quadratic form to the canonical form:

(a) Lagrange's method: reduce q to a canonical form by completion of squares.

(b) Diagonalization method: write A = IAI. Apply elementary row transformations on the L.H.S. and on the prefactor of the R.H.S.; apply the corresponding column transformations on the L.H.S. as well as on the post-factor of the R.H.S. Continue this process till the equation is reduced to the form

D = \(\begin{bmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3 \end{bmatrix}\)

i.e., if q = X^T A X with X = [x1 x2 x3]^T and Y = [y1 y2 y3]^T, then q1 = d1y1² + d2y2² + d3y3². Here X = PY is the corresponding transformation.

(c) Orthogonal reduction: (i) Find the Eigen values λi (i = 1, 2, …, n) of A.

(ii) Find the modal matrix B = [X1 X2 … Xn].

(iii) Normalize each column vector Xi of B by dividing it by its magnitude, and write the normalized modal matrix P, which is orthogonal (i.e. P^T = P^{-1}).

(iv) Then X = PY reduces q to q1, where q1 = λ1y1² + λ2y2² + … + λnyn² = Y^T(P^T A P)Y.

(X = PY is known as an orthogonal transformation.)

The rank, index and signature of q are invariant for all normal reductions (Sylvester's law of inertia).
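A sketch of the orthogonal reduction, assuming NumPy; for a symmetric A, eigh already returns a normalized (orthogonal) modal matrix P, so step (iii) is built in:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])   # matrix of q = 3x1^2 + 2x1x2 + 3x2^2
lam, P = np.linalg.eigh(A)

print(lam)                                # canonical coefficients: q1 = 2y1^2 + 4y2^2
print(np.allclose(P.T, np.linalg.inv(P))) # P is orthogonal: P^T = P^{-1}
print(np.round(P.T @ A @ P, 10))          # diag(lam), i.e. q1 = Y^T (P^T A P) Y
```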

Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix A that is equal to its transpose: A^T = A.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right). So if the entries are written as A = (aij), then aij = aji for all indices i and j.

Skew-symmetric matrix

In linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation A^T = −A, or in component form, aij = −aji for all i and j.

Compare this with a symmetric matrix, whose transpose is the same as the matrix: A^T = A.

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly,

each diagonal element of a skew-symmetric matrix must be zero, since each is its own

negative.

Orthogonal matrix

In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in addition to being orthogonal, some people use the term orthonormal to describe such matrices.

Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse, A^T = A^{-1}, or

A^T A = A A^T = I.

EXAMPLE:

\(\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\), \(\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\), \(\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\), \(\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\)

Conjugate transpose

"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint

matrix".

The conjugate transpose of an m-by-n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking the transpose and then taking the complex conjugate of each entry (i.e. negating their imaginary parts but not their real parts). The conjugate transpose is formally defined by

(A*)_{ij} = \(\overline{A_{ji}}\)

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar denotes scalar complex conjugation. (The complex conjugate of a + bi, where a and b are real, is a − bi.)

This can also be written as A* = (Ā)^T = \(\overline{A^T}\), where A^T denotes the transpose and Ā denotes the matrix with complex conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate or transjugate. The conjugate transpose of a matrix A can be denoted by any of these symbols:

A* or A^H;

A^† (sometimes pronounced "A dagger"), universally used in quantum mechanics;

A^+, although this symbol is more commonly used for the Moore–Penrose pseudoinverse.

In some contexts, A* denotes the matrix with complex conjugated entries, and thus the conjugate transpose is denoted by A^{*T} or A^{T*}.


Hermitian matrix

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose — that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

a_{ij} = \(\overline{a_{ji}}\)

If the conjugate transpose of a matrix is denoted by A*, then the Hermitian property can be written concisely as A = A*.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

For example, \(\begin{bmatrix} 3 & 2+i \\ 2-i & 1 \end{bmatrix}\) is a Hermitian matrix.

Skew-Hermitian matrix

In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or antihermitian if its conjugate transpose is equal to its negative.[1] That is, the matrix A is skew-Hermitian if it satisfies the relation

A* = −A

where A* denotes the conjugate transpose of the matrix. In component form, this means that

a_{ij} = −\(\overline{a_{ji}}\)

for all i and j, where a_{ij} is the i,j-th entry of A, and the overline denotes complex conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers.[2]

Unitary matrix

In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition

U*U = UU* = I_n

where I_n is the identity matrix in n dimensions and U* is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors, so a unitary matrix U satisfies

⟨Ux, Uy⟩ = ⟨x, y⟩

for all complex vectors x and y, where ⟨·, ·⟩ stands now for the standard inner product on C^n.
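The definitions above translate directly into numerical checks. A minimal sketch, assuming NumPy; the rotation matrix and the Hermitian example are illustrative:

```python
import numpy as np

def is_symmetric(A):      return np.allclose(A, A.T)
def is_skew_symmetric(A): return np.allclose(A, -A.T)
def is_orthogonal(A):     return np.allclose(A.T @ A, np.eye(A.shape[0]))
def is_hermitian(A):      return np.allclose(A, A.conj().T)
def is_unitary(A):        return np.allclose(A.conj().T @ A, np.eye(A.shape[0]))

t = 0.3
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
H = np.array([[3, 2 + 1j], [2 - 1j, 1]])
print(is_orthogonal(R), is_hermitian(H), is_unitary(R.astype(complex)))
```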

UNIT-IV

• Solution of Algebraic and Transcendental Equations

1. Bisection Method

2. Method of False Position

3. The Iteration Method

4. Newton Raphson Method

• Interpolation

- Finite Differences

- Forward Differences

- Backward Differences

- Central Differences

• Summary

Summary

(i) Bisection method: If a function f(x) is continuous between a and b, and f(a) & f(b) are of opposite signs, then there exists at least one root between a and b. The approximate value of the root between them is

x0 = (a + b)/2

If f(x0) ≠ 0, then the root lies either in [a, (a+b)/2] or in [(a+b)/2, b], depending on whether f(x0) is negative or positive. Again bisect the interval and repeat the same method until the root is obtained to the desired accuracy.
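A minimal bisection sketch; the function f(x) = x³ − x − 2 on [1, 2] and the tolerance are illustrative:

```python
def bisect(f, a, b, tol=1e-8):
    assert f(a) * f(b) < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tol:
        x0 = (a + b) / 2.0
        if f(a) * f(x0) < 0:   # sign change in [a, x0] -> root lies there
            b = x0
        else:                  # otherwise the root lies in [x0, b]
            a = x0
    return (a + b) / 2.0

print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))   # ~1.5213797
```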

(ii) Method of False Position (Regula–Falsi method): This is another method to find the roots of f(x) = 0. In this method, we choose two points a and b such that f(a) and f(b) are of opposite signs, and take the point of intersection of the chord joining them with the x-axis as an approximate root (using y = 0 on the x-axis):

x1 = [a f(b) − b f(a)] / [f(b) − f(a)]

Repeat the same process till the root is obtained to the desired accuracy.

(iii) Iteration method:

If a function f(x) is continuous between a and b, and f(a) & f(b) are of opposite signs, then there exists at least one root between a and b; take x0 = (a + b)/2 as the initial approximation.

We can use this method if f(x) = 0 can be expressed as x = Φ(x) such that │Φ′(x0)│ < 1. Then the successive approximate roots are given by

xn = Φ(xn−1), n = 1, 2, …

(iv) Newton–Raphson method: The successive approximate roots are given by

xn+1 = xn − f(xn)/f′(xn), n = 0, 1, 2, …

where the initial approximation x0 is chosen close to a root of f(x) = 0.
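A Newton–Raphson sketch for the same illustrative equation; the stopping rule (successive iterates agree) follows the summary above:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        x1 = x0 - f(x0) / fprime(x0)   # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    return x0

print(newton(lambda x: x**3 - x - 2, lambda x: 3*x**2 - 1, 1.5))
```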

2. Interpolation

(i) Newton's forward interpolation formula: Let y = f(x) be the function which takes the values y0, y1, y2, …, yn corresponding to the equally spaced values x0, x1, x2, …, xn of x, with h as the interval length between two consecutive points. Then

f(x0 + ph) = yp = y0 + pΔy0 + [p(p−1)/2!]Δ²y0 + [p(p−1)(p−2)/3!]Δ³y0 + … + [p(p−1)(p−2)…(p−n+1)/n!]Δⁿy0, where p = (x − x0)/h.

This is also called the Newton–Gregory forward interpolation formula.
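A sketch of the forward formula on equally spaced data; the difference table is rebuilt column by column (the data are illustrative and cubic, so the result is exact):

```python
import numpy as np

def newton_forward(xs, ys, x):
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    d = np.array(ys, dtype=float)   # holds the k-th forward differences
    term, result, fact = 1.0, d[0], 1.0
    for k in range(1, len(xs)):
        d = np.diff(d)              # next difference column; d[0] = delta^k y0
        term *= (p - (k - 1))       # p(p-1)...(p-k+1)
        fact *= k                   # k!
        result += term / fact * d[0]
    return result

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 8.0, 27.0, 64.0]         # y = x^3 sampled at the nodes
print(newton_forward(xs, ys, 2.5))  # 15.625 = 2.5^3
```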

(ii) Newton's backward interpolation formula:

yp = yn + p∇yn + [p(p+1)/2!]∇²yn + [p(p+1)(p+2)/3!]∇³yn + …, where p = (x − xn)/h.

(iii) Gauss forward interpolation formula: using central differences with δ as the operator, the Gauss forward interpolation formula is

yp = y0 + pδy_{1/2} + [p(p−1)/2!]δ²y0 + [(p+1)p(p−1)/3!]δ³y_{1/2} + [(p+1)p(p−1)(p−2)/4!]δ⁴y0 + …, where p = (x − x0)/h.

(iv) Gauss backward interpolation formula:

yp = y0 + pΔy−1 + [(p+1)p/2!]Δ²y−1 + [(p+1)p(p−1)/3!]Δ³y−2 + [(p+2)(p+1)p(p−1)/4!]Δ⁴y−2 + …

(v) Lagrange's interpolation formula: Let y0, y1, y2, …, yn be the values of y = ƒ(x) corresponding to x0, x1, x2, …, xn (not necessarily equispaced). Then

y = ƒ(x) = [(x − x1)(x − x2)…(x − xn)] / [(x0 − x1)(x0 − x2)…(x0 − xn)] · y0

+ [(x − x0)(x − x2)…(x − xn)] / [(x1 − x0)(x1 − x2)…(x1 − xn)] · y1

+ ……………..

+ [(x − x0)(x − x1)…(x − x_{n−1})] / [(xn − x0)(xn − x1)…(xn − x_{n−1})] · yn
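A direct transcription of Lagrange's formula; the nodes are deliberately unequally spaced and the data are illustrative:

```python
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                num *= (x - xj)     # (x - x0)...(x - xn), skipping x_i
                den *= (xi - xj)    # (x_i - x0)...(x_i - xn), skipping x_i
        total += yi * num / den
    return total

xs = [0.0, 1.0, 4.0]
ys = [1.0, 3.0, 33.0]               # y = 2x^2 + 1 at unequal nodes
print(lagrange(xs, ys, 2.0))        # 9.0, exact for quadratic data
```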

3. Spline interpolation and cubic splines

Divide the interval [a, b] into subintervals [x0, x1], [x1, x2], …, [xn−1, xn], where a = x0 < x1 < x2 < … < xn = b; the points x0, x1, …, xn are called nodes.

(i) Spline function: A spline function of degree n with nodes x0, x1, x2, …, xn is a function F(x) satisfying the following properties:

(a) F(xi) = yi at each node (condition of interpolation);

(b) in each subinterval [xi−1, xi], 1 ≤ i ≤ n, F(x) is a polynomial of degree at most n;

(c) F(x) and its first (n − 1) derivatives are continuous on [a, b].

(ii) Cubic spline: A cubic spline F(x) satisfies:

(a) F(xi) = yi at each node;

(b) in each subinterval [xi−1, xi], 1 ≤ i ≤ n, F(x) is a third-degree (cubic) polynomial;

(c) F(x), F′(x), F″(x) are continuous on [a, b].

OR

Bisection:

i) f(a) = +ve, f(b) = −ve, then c ∈ (a, b)

ii) x0 = (a + b)/2; if f(x0) = 0, then x0 is the root; if f(x0) ≠ 0 and f(x0) = +ve, the root lies in (x0, b); if f(x0) = −ve, the root lies in (a, x0)

iii) if f(x0) = +ve, the second approximation is x1 = (x0 + b)/2; if f(x0) = −ve, then x1 = (a + x0)/2

Continue till we get repeated end values.

Regula–Falsi:

i) f(a) = +ve, f(b) = −ve ∴ c ∈ (a, b)

ii) let x1 = [a f(b) − b f(a)] / [f(b) − f(a)]

a) if f(x1) & f(a) are of opposite signs, then x2 = [a f(x1) − x1 f(a)] / [f(x1) − f(a)]

b) if f(x1) & f(a) are of the same sign, then x2 = [x1 f(b) − b f(x1)] / [f(b) − f(x1)]

Continue up to the required accuracy, i.e. repeated end values.

Iteration:

i) f(a) = +ve, f(b) = −ve ∴ c ∈ (a, b)

ii) write f(x) = 0 as x = φ(x) such that │φ′(x)│ < 1; let x0 = (a + b)/2, then

x1 = φ(x0), x2 = φ(x1), x3 = φ(x2),

and so on up to the required accuracy, i.e. repeated end values.

Newton–Raphson:

i) f(a) = +ve, f(b) = −ve ∴ c ∈ (a, b)

ii) let x0 = (a + b)/2, then

x1 = x0 − f(x0)/f′(x0), x2 = x1 − f(x1)/f′(x1),

and so on up to the required accuracy, i.e. repeated end values.

FINITE DIFFERENCES:

Forward difference: Δy0 = y1 − y0

Backward difference: ∇y1 = y1 − y0

Central difference: δy_{1/2} = y1 − y0

Averaging operator: µyx = ½[y_{x+h/2} + y_{x−h/2}]

Operator relations: E = 1 + Δ, δ = E^{1/2} − E^{−1/2}, Δ = E∇ = ∇E = δE^{1/2}, δ² = Δ∇ = ∇Δ

1. Newton Gregory Forward Interpolation Formula:

y = f(x) = y0 + pΔy0 + [p(p−1)/2!]Δ²y0 + [p(p−1)(p−2)/3!]Δ³y0 + …, where p = (x − x0)/h

2. Newton Gregory Backward Interpolation Formula:

y = f(x) = yn + p∇yn + [p(p+1)/2!]∇²yn + [p(p+1)(p+2)/3!]∇³yn + …, where p = (x − xn)/h

II. Central Difference Interpolation Formulae:

1. Gauss Forward:

yp = y0 + pΔy0 + [p(p−1)/2!]Δ²y−1 + [(p+1)p(p−1)/3!]Δ³y−1 + …, where p = (x − x0)/h

(even differences used: y0 — Δ²y−1 — Δ⁴y−2 — Δ⁶y−3 — Δ⁸y−4)

2. Gauss Backward:

yp = y0 + pΔy−1 + [(p+1)p/2!]Δ²y−1 + [(p+1)p(p−1)/3!]Δ³y−2 + [(p+2)(p+1)p(p−1)/4!]Δ⁴y−2 + …, where p = (x − x0)/h

(odd differences used: Δy−1, Δ³y−2, Δ⁵y−3, Δ⁷y−4)

3. Stirling's:

yp = y0 + p(Δy0 + Δy−1)/2 + (p²/2!)Δ²y−1 + [p(p²−1)/3!](Δ³y−1 + Δ³y−2)/2 + [p²(p²−1)/4!]Δ⁴y−2 + …

(differences used: y0 — Δ²y−1 — Δ⁴y−2 — Δ⁶y−3 — Δ⁸y−4 and Δy0, Δ³y−1, Δ⁵y−2, Δ⁷y−3)

4. Lagrange's formula (unequal intervals):

y = f(x) = [(x − x1)(x − x2)…(x − xn)] / [(x0 − x1)(x0 − x2)…(x0 − xn)] · f(x0)

+ [(x − x0)(x − x2)…(x − xn)] / [(x1 − x0)(x1 − x2)…(x1 − xn)] · f(x1)

+ … + [(x − x0)(x − x1)…(x − x_{n−1})] / [(xn − x0)(xn − x1)…(xn − x_{n−1})] · f(xn)

UNIT-V

• Curve Fitting

• Trapezoidal Rule

• Gaussian Integration

• Summary

Summary

1. Curve Fitting (least squares): e.g. for y = ax² + bx + c, one normal equation is a∑xi² + b∑xi + c·n = ∑yi (i from 1 to n); the full sets of normal equations are listed under "OR" below.

2. Interpolation: by Newton's forward and backward formulae (Unit IV).

3. Numerical differentiation: differentiating the interpolation formulae,

y′(x0) = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + …]

y″(xn) = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + …]

4. Trapezoidal Rule

I = (h/2)[(y0 + yn) + 2(y1 + y2 + … + yn−1)]

where y0, y1, …, yn, i.e. yi = ƒ(xi), are the values corresponding to the arguments x0 = a, x1 = x0 + h, …, xn = x0 + nh = b.

5. Simpson's one-third rule:

I = ∫ƒ(x)dx = (h/3)[(y0 + yn) + 4(y1 + y3 + … + yn−1) + 2(y2 + y4 + … + yn−2)], taken between a and b.

This rule can be applied when the given interval (a, b) is divided into an even number of subintervals of length h.
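Composite sketches of both rules, assuming NumPy, checked against the exact value of the illustrative integral of x² on [0, 1]:

```python
import numpy as np

def trapezoidal(f, a, b, n):
    x = np.linspace(a, b, n + 1); y = f(x); h = (b - a) / n
    return h / 2 * (y[0] + y[-1] + 2 * y[1:-1].sum())

def simpson(f, a, b, n):            # n must be even
    x = np.linspace(a, b, n + 1); y = f(x); h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

f = lambda x: x**2
print(trapezoidal(f, 0, 1, 10), simpson(f, 0, 1, 10))   # ~0.3350, 0.333333...
```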

6. Gaussian Integration

I = ∫ƒ(x)dx = ∑ wi ƒ(xi), i from 1 to n,

where the wi are called weights and the xi are called abscissae. The weights and abscissae are symmetrical with respect to the middle point of the interval.
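A sketch using NumPy's Gauss–Legendre nodes and weights on [−1, 1]; note the symmetry of the abscissae and weights about the midpoint. The integrand is illustrative:

```python
import numpy as np

n = 3
x, w = np.polynomial.legendre.leggauss(n)   # abscissae x_i, weights w_i
print(x, w)                                 # symmetric about 0

f = lambda t: t**4
print(np.sum(w * f(x)), 2/5)   # 3-point rule is exact for degree <= 5
```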

OR

1. Fitting of a Straight Line (y = a + bx):

∑y = na + b∑x

∑xy = a∑x + b∑x²

2. Fitting of a Parabola (y = a + bx + cx²):

∑y = na + b∑x + c∑x²

∑xy = a∑x + b∑x² + c∑x³

∑x²y = a∑x² + b∑x³ + c∑x⁴

3. Fitting of y = a + bx²:

∑y = na + b∑x²
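A sketch of solving the straight-line normal equations, assuming NumPy; np.polyfit (which returns [b, a] for degree 1) is used only as a cross-check. The data are illustrative:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 7.1])

n = len(x)
M = np.array([[n,       x.sum()],            # sum(y)  = n a + b sum(x)
              [x.sum(), (x**2).sum()]])      # sum(xy) = a sum(x) + b sum(x^2)
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(M, rhs)
print(a, b)
print(np.polyfit(x, y, 1))                   # [b, a], for comparison
```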

(NUMERICAL DIFFERENTIATION)

1. Newton Forward: differentiating y = y0 + pΔy0 + [p(p−1)/2!]Δ²y0 + [p(p−1)(p−2)/3!]Δ³y0 + … with p = (x − x0)/h; at x = x0 (p = 0):

dy/dx = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + …]

d²y/dx² = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − …]

d³y/dx³ = (1/h³)[Δ³y0 − (3/2)Δ⁴y0 + …]

2. Newton Backward: at x = xn:

dy/dx = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + …]

d²y/dx² = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + …]

3. Stirling's: at x = x0:

dy/dx = (1/h)[(Δy0 + Δy−1)/2 − (1/6)(Δ³y−1 + Δ³y−2)/2 + …]

d²y/dx² = (1/h²)[Δ²y−1 − (1/12)Δ⁴y−2 + …]

(NUMERICAL INTEGRATION)

General quadrature formula:

∫ from x0 to x0+nh of y dx = h[n·y0 + (n²/2)Δy0 + ((2n³ − 3n²)/12)Δ²y0 + ((n⁴ − 4n³ + 4n²)/24)Δ³y0 + …]

Note: For the trapezoidal rule the number of subintervals may be odd or even.

Note: For Simpson's 1/3 rule the number of subintervals should be even.

Note: For Simpson's 3/8 rule the subintervals should be a multiple of 3.

Boole's rule: ∫y dx = (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 14y4 + 32y5 + 12y6 + …]

Note: the number of subintervals should be a multiple of 4.

Weddle's rule: ∫y dx = (3h/10)[(y0 + yn) + 5(y1 + y5 + y7 + y11 + … + yn−5 + yn−1) + (y2 + y4 + y8 + … + yn−2) + 6(y3 + y9 + y15 + … + yn−3) + 2(y6 + y12 + … + yn−6)]

Note: the number of subintervals should be a multiple of 6.

UNIT-VI

Ordinary Differential Equations

• Picard’s method

• Euler’s method

• Predictor – corrector method

• Summary

Summary

The methods of solving the ordinary differential equation dy/dx = ƒ(x, y), y(x0) = y0, numerically are:

1. Taylor's series method

2. Picard's method

3. Euler's method

4. Runge–Kutta method

5. Milne's predictor–corrector method

1. Taylor's series method: y(x) = y0 + (x − x0)y0′ + [(x − x0)²/2!]y0″ + …, where y0′, y0″, … are obtained by successively differentiating the given differential equation.

2. Picard's method: The solution of dy/dx = ƒ(x, y), y(x0) = y0, is obtained using Picard's method of successive approximations with the help of

y(x) = y0 + ∫ from x0 to x of ƒ(x, y) dx,

which is called an integral equation; the successive approximations are y1 = y0 + ∫ ƒ(x, y0) dx, y2 = y0 + ∫ ƒ(x, y1) dx, …

OR

1. Taylor's series: y1 = y0 + h·y0′ + (h²/2!)y0″ + (h³/3!)y0‴ + …

2. Picard: y2 = y0 + ∫ f(x, y1) dx

3. Euler: y1 = y0 + h f(x0, y0), y2 = y1 + h f(x1, y1), y3 = y2 + h f(x2, y2), …

4. Runge–Kutta Order 4:

y1 = y0 + (1/6)[k1 + 2k2 + 2k3 + k4]

where k1 = h f(x0, y0), k2 = h f(x0 + h/2, y0 + k1/2), k3 = h f(x0 + h/2, y0 + k2/2), k4 = h f(x0 + h, y0 + k3).
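Sketches of Euler's method and RK4 on the illustrative problem y′ = y, y(0) = 1 (exact solution e^x):

```python
import math

def euler(f, x0, y0, h, steps):
    for _ in range(steps):
        y0 += h * f(x0, y0)          # y_{n+1} = y_n + h f(x_n, y_n)
        x0 += h
    return y0

def rk4(f, x0, y0, h, steps):
    for _ in range(steps):
        k1 = h * f(x0, y0)
        k2 = h * f(x0 + h/2, y0 + k1/2)
        k3 = h * f(x0 + h/2, y0 + k2/2)
        k4 = h * f(x0 + h, y0 + k3)
        y0 += (k1 + 2*k2 + 2*k3 + k4) / 6
        x0 += h
    return y0

f = lambda x, y: y
print(euler(f, 0, 1, 0.1, 10), rk4(f, 0, 1, 0.1, 10), math.e)
```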

5. Milne's Predictor–Corrector:

Predictor: y4 = y0 + (4h/3)[2y1′ − y2′ + 2y3′]

Corrector: y4 = y2 + (h/3)[y2′ + 4y3′ + y4′]

UNIT-VII

FOURIER SERIES

• Periodic Functions

• Fourier Series

• Euler’s Formulae

• Summary

Summary

1. Periodic functions

(i) A function ƒ(x) is said to be periodic if there exists a positive number T such that ƒ(x + T) = ƒ(x) for all x ∈ R; T is called the period of ƒ(x).

(ii) A function ƒ(x) is said to be even if ƒ(−x) = ƒ(x), and odd if ƒ(−x) = −ƒ(x).

2. Fourier series: a series of the form ƒ(x) = a0/2 + ∑ (an cos nx + bn sin nx), n from 1 to ∞, where a0, an, bn are constants.

3. Euler's formulae: in (0, 2π),

a0 = (1/π) ∫ from 0 to 2π of ƒ(x) dx, an = (1/π) ∫ ƒ(x) cos nx dx, bn = (1/π) ∫ ƒ(x) sin nx dx;

these formulae are called Euler's formulae.

4. Dirichlet's conditions: Any function ƒ(x) can be represented as a Fourier series if ƒ(x) satisfies the following conditions in the interval:

(i) ƒ(x) and its integrals are finite and single-valued;

(ii) ƒ(x) has a finite number of discontinuities;

(iii) ƒ(x) has a finite number of maxima and minima.

Then the Fourier series converges to ƒ(x) at all points where ƒ(x) is continuous. Also, the series converges to the average of the left limit and the right limit of ƒ(x) at each point of discontinuity of ƒ(x).

If a function ƒ(x) is defined in (c, c + 2l), the Fourier expansion for ƒ(x) is

ƒ(x) = a0/2 + ∑ [an cos(nπx/l) + bn sin(nπx/l)], n from 1 to ∞.

If c = −l, then the interval becomes (−l, l).

(i) If ƒ(x) is an even function in (0, 2π) or (−π, π), the Fourier series for ƒ(x) contains no sine terms (bn = 0):

ƒ(x) = a0/2 + ∑ an cos nx.

(ii) If ƒ(x) is an odd function in (0, 2π) or (−π, π), the Fourier series for ƒ(x) contains only sine terms (a0 = an = 0):

ƒ(x) = ∑ bn sin nx.

(i) Half-range Fourier sine series for ƒ(x) in (0, l):

ƒ(x) = ∑ bn sin(nπx/l), where bn = (2/l) ∫ from 0 to l of ƒ(x) sin(nπx/l) dx, n from 1 to ∞.
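Euler's formulae can be evaluated numerically. A sketch assuming SciPy, for the illustrative odd function ƒ(x) = x on (−π, π), where only sine terms survive:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x
for n in range(1, 4):
    an, _ = quad(lambda x: f(x) * np.cos(n * x), -np.pi, np.pi)
    bn, _ = quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi)
    an, bn = an / np.pi, bn / np.pi
    print(n, round(an, 6), round(bn, 6))   # an = 0, bn = 2(-1)^{n+1}/n
```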

UNIT-VIII

PARTIAL DIFFERENTIAL EQUATIONS

• Formation of partial differential equations by the elimination of arbitrary constants and arbitrary functions

• Laplace’s equation

• Summary

Summary

1. Formation of partial differential equations by the elimination of arbitrary constants and arbitrary functions.

Let f(x, y, z, a, b) = 0 …………(1) be the equation, where a, b are arbitrary constants.

Differentiating partially w.r.t. x and y (with p = ∂z/∂x, q = ∂z/∂y):

∂f/∂x + (∂f/∂z)(∂z/∂x) = 0 ⇒ ∂f/∂x + p ∂f/∂z = 0 ………….(2)

∂f/∂y + (∂f/∂z)(∂z/∂y) = 0 ⇒ ∂f/∂y + q ∂f/∂z = 0 …………..(3)

Eliminating a and b from (1), (2) and (3), we get an equation of the form Φ(x, y, z, p, q) = 0, which is a first-order P.D.E.

Note: If the number of arbitrary constants is more than the number of independent variables, then the result of eliminating the constants will give rise to a P.D.E. of higher order than the first.

Elimination of an arbitrary function: Let Φ(u, v) = 0 ……………..(1)

be the equation, where u, v are functions of x, y, z and Φ is the arbitrary function.

Differentiating (1) partially with respect to x and y:

(∂Φ/∂u)(∂u/∂x + p ∂u/∂z) + (∂Φ/∂v)(∂v/∂x + p ∂v/∂z) = 0

and

(∂Φ/∂u)(∂u/∂y + q ∂u/∂z) + (∂Φ/∂v)(∂v/∂y + q ∂v/∂z) = 0

where p = ∂z/∂x, q = ∂z/∂y. Eliminating ∂Φ/∂u and ∂Φ/∂v from these two equations gives Pp + Qq = R,

where P = (∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)

Q = (∂u/∂z)(∂v/∂x) − (∂u/∂x)(∂v/∂z)

R = (∂u/∂x)(∂v/∂y) − (∂u/∂y)(∂v/∂x)

Pp + Qq = R is called Lagrange's first-order partial differential equation.

2. To solve Lagrange's equation Pp + Qq = R ………(1), where P, Q, R are functions of x, y, z:

first write Lagrange's auxiliary equations (subsidiary equations)

dx/P = dy/Q = dz/R ………………….(2)

Solving (2) gives two independent solutions u = c1 and v = c2, where u, v are functions of x, y, z.

From these two solutions, the general solution is Φ(u, v) = 0.

3. If F(x, y, z, p, q) = 0 ………………(1)

is a non-linear partial differential equation of the first order, then a relation

Φ(x, y, z, a, b) = 0 ……………..(2)

which contains as many arbitrary constants as the number of independent variables is called the complete integral.

A solution obtained from (2) by giving particular values to the constants is called a particular integral.

Singular integral: Differentiate (2) partially w.r.t. 'a' and 'b' and then equate to zero:

∂Φ/∂a = 0 ……………..(3)

∂Φ/∂b = 0 ………............(4)

Elimination of 'a' and 'b' from (2), (3), (4) gives an equation of the form f(x, y, z) = 0, which is called the singular integral.

4. Standard forms of non-linear first-order partial differential equations:

(i) Standard form I: The equation of the form f(p, q) = 0 (i.e., the equation in terms of p and q only) is called standard type I.

The solution of the equation is z = ax + by + c, with a = ∂z/∂x and b = ∂z/∂y.

Now replacing p = ∂z/∂x by a and q = ∂z/∂y by b in the given P.D.E.:

F(a, b) = 0 ⇒ b = Φ(a)

z = ax + Φ(a)y + c is called the complete integral.

(ii) Standard form II: The equation of the form ƒ(x, y, p, q) = 0 …(1) is called standard type II. Separate it as f1(x, p) = f2(y, q) = a (say).

From these two equations we get p = Φ1(x, a) and q = Φ2(y, a).

Substituting in dz = p dx + q dy and integrating:

z = ∫Φ1(x, a) dx + ∫Φ2(y, a) dy + b.

(iii) Standard form III: The equation of the form ƒ(z, p, q) = 0 …..(1)

Substituting q = ap ………..(2)

in (1), we get

p = Φ(z) ………………(3)

and from (2), (3): q = aΦ(z) ……………….(4)

Then dz = p dx + q dy = Φ(z) dx + aΦ(z) dy, so

dz/Φ(z) = dx + a dy;

integrating, ∫dz/Φ(z) = x + ay + b is the complete integral.

(iv) Standard form IV: The equation of the form z = px + qy + f(p, q) is called Clairaut's equation.

Its complete integral is z = ax + by + f(a, b). For the singular integral, differentiate partially w.r.t. 'a' and 'b':

0 = x + ∂f/∂a ……………..(3)

0 = y + ∂f/∂b ……………..(4)

The elimination of a, b from (3), (4) gives the singular integral.

(1) One-dimensional wave equation: ∂²u/∂t² = c² ∂²u/∂x²

(2) Two-dimensional wave equation: ∂²u/∂x² + ∂²u/∂y² = (1/c²) ∂²u/∂t²

(3) One-dimensional heat equation: ∂u/∂t = c² ∂²u/∂x²

(4) Laplace's equation: ∂²u/∂x² + ∂²u/∂y² = 0

Problems which satisfy certain initial and boundary conditions are called boundary-value problems. A suitable method to solve such problems is the method of separation of variables, also known as the product method.
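A symbolic check, assuming SymPy, that a separated product solution satisfies the one-dimensional heat equation ∂u/∂t = c²∂²u/∂x²; n and c are free symbols:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
n = sp.symbols('n', positive=True)

u = sp.sin(n*x) * sp.exp(-c**2 * n**2 * t)       # X(x) * T(t) product form
residual = sp.diff(u, t) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))                      # 0, so u solves the equation
```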
