
Question 1: (a) A can solve 90% of the problems given in a book and B can solve 70%.

What is the probability that at least one of them will solve a problem selected at random?

Ans: P(A cannot solve) = 1 − 0.90 = 0.10 and P(B cannot solve) = 1 − 0.70 = 0.30. Assuming the two work independently,

P(at least one can solve the problem) = 1 − P(neither can solve the problem)
                                      = 1 − (0.10 × 0.30)
                                      = 1 − 0.03
                                      = 0.97, i.e. 97%.
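A one-line C check of this computation (an added illustration, not part of the original answer):

#include <stdio.h>

int main(void)
{
    double pA = 0.90, pB = 0.70;
    /* P(at least one solves) = 1 - P(A fails) * P(B fails), assuming independence */
    printf("P(at least one solves) = %.2f\n", 1.0 - (1.0 - pA) * (1.0 - pB));
    return 0;
}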

(b) When a computer uses number base 2, how many significant decimal digits are contained in the mantissa of floating-point numbers?

Ans: Assume that the binary computer has a p-bit mantissa. Then the error bound is 2^(−p). This computer will have q significant decimal digits with symmetric rounding if

2^(−p) = (1/2) × 10^(−q+1).

Taking logarithms to the base 10 we get q = 1 + (p − 1) log10 2. If we assume p = 24, then q = 1 + 23 log10 2 ≈ 7.9, so we may say that the computer can store numbers with about seven significant decimal digits.

(c) Evaluate the integral I = ∫₀¹ dx / (1 + x) using (i) the composite trapezoidal rule, (ii) the composite Simpson's rule, with 2, 4 and 8 equal subintervals.

Ans: Here f(x) = 1/(1 + x).

N = 2: h = 0.5, nodes are 0, 0.5 and 1. We obtain
I_T = (h/2)[f(0) + 2 f(0.5) + f(1)] = (1/4)[1 + 4/3 + 1/2] = 0.708333
I_S = (h/3)[f(0) + 4 f(0.5) + f(1)] = (1/6)[1 + 8/3 + 1/2] = 0.694444

N = 4: h = 0.25, nodes are 0, 0.25, 0.5, 0.75 and 1. We obtain
I_T = (1/8)[f(0) + 2{f(0.25) + f(0.5) + f(0.75)} + f(1)] = 0.697024
I_S = (1/12)[f(0) + 4 f(0.25) + 2 f(0.5) + 4 f(0.75) + f(1)] = 0.693254

N = 8: h = 0.125, nodes are 0, 0.125, 0.25, ..., 1.0. We have eight subintervals for the trapezoidal rule and four double subintervals for Simpson's rule. We get
I_T = (1/16)[f(0) + 2 Σ (i = 1 to 7) f(i/8) + f(1)] = 0.694122
I_S = (1/24)[f(0) + 4 Σ (i = 1 to 4) f((2i − 1)/8) + 2 Σ (i = 1 to 3) f(2i/8) + f(1)] = 0.693155

The exact value of the integral is I = ln 2 = 0.693147.
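For reference, a short C sketch (an added illustration, not part of the original answer) that reproduces these values by applying the composite trapezoidal and Simpson's rules to f(x) = 1/(1 + x) on [0, 1]; the function names trapezoid and simpson are illustrative only.

#include <stdio.h>
#include <math.h>

/* integrand of the given problem */
static double f(double x) { return 1.0 / (1.0 + x); }

/* composite trapezoidal rule with n subintervals on [a, b] */
static double trapezoid(double a, double b, int n)
{
    double h = (b - a) / n, sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);
    return h * sum;
}

/* composite Simpson's rule; n must be even */
static double simpson(double a, double b, int n)
{
    double h = (b - a) / n, sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += (i % 2 ? 4.0 : 2.0) * f(a + i * h);
    return h * sum / 3.0;
}

int main(void)
{
    for (int n = 2; n <= 8; n *= 2)
        printf("N = %d  trapezoidal = %.6f  Simpson = %.6f\n",
               n, trapezoid(0.0, 1.0, n), simpson(0.0, 1.0, n));
    printf("exact value ln 2 = %.6f\n", log(2.0));
    return 0;
}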

Question 2: Write the following programmes in C language:

(a) Write a programme in C to find a root using the bisection method. Perform five iterations of the bisection method to obtain the smallest positive root of the equation f(x) = x³ − 5x + 1 = 0, and verify your answer with the programme written by you.

Ans:

#include <stdio.h>
#include <math.h>
#include <conio.h>

#define ESP 0.001
#define F(x) ((x)*(x)*(x) - 5*(x) + 1)

void main()
{
    int i = 1;
    float a, b, c;
    double f0, f1, f2;

    clrscr();
    printf("\n Enter the value of a: ");
    scanf("%f", &a);
    printf("\n Enter the value of b: ");
    scanf("%f", &b);
    printf("\n Accuracy upto %g", ESP);
    printf("\n________________________________________________________\n");
    printf("\n n        a          b          c          f0         f1         f2");
    printf("\n________________________________________________________\n");
    do
    {
        c = (a + b) / 2;          /* midpoint of the current bracket */
        f0 = F(a);
        f1 = F(b);
        f2 = F(c);                /* function value at the midpoint */
        printf("\n %d %f %f %f %lf %lf %lf", i, a, b, c, f0, f1, f2);
        getch();
        if (f0 * f2 < 0)          /* root lies in [a, c] */
        {
            b = c;
        }
        else
        {
            a = c;                /* otherwise the root lies in [c, b] */
        }
        i++;
    } while (fabs(f2) > ESP);
    printf("\n________________________________________________________\n");
    printf("\n\nApp.root = %f", c);
    getch();
}

Output

a) When ESP = 0.01

Enter the value of a: 0
Enter the value of b: 1
Accuracy upto 0.01
_____________________________________________________________________
 n   a          b          c          f0         f1         f2
_____________________________________________________________________
 1   0.000000   1.000000   0.500000   1.000000  -3.000000  -1.375000
 2   0.000000   0.500000   0.250000   1.000000  -1.375000  -0.234375
 3   0.000000   0.250000   0.125000   1.000000  -0.234375   0.376953
 4   0.125000   0.250000   0.187500   0.376953  -0.234375   0.069092
 5   0.187500   0.250000   0.218750   0.069092  -0.234375  -0.083282
 6   0.187500   0.218750   0.203125   0.069092  -0.083282  -0.007244
_____________________________________________________________________
App.root = 0.203125

b) When ESP = 0.001

Enter the value of a: 0
Enter the value of b: 1
Accuracy upto 0.001
_____________________________________________________________________
 n   a          b          c          f0         f1         f2
_____________________________________________________________________
 1   0.000000   1.000000   0.500000   1.000000  -3.000000  -1.375000
 2   0.000000   0.500000   0.250000   1.000000  -1.375000  -0.234375
 3   0.000000   0.250000   0.125000   1.000000  -0.234375   0.376953
 4   0.125000   0.250000   0.187500   0.376953  -0.234375   0.069092
 5   0.187500   0.250000   0.218750   0.069092  -0.234375  -0.083282

 6   0.187500   0.218750   0.203125   0.069092  -0.083282  -0.007244
 7   0.187500   0.203125   0.195312   0.069092  -0.007244   0.030888
 8   0.195312   0.203125   0.199219   0.030888  -0.007244   0.011813
 9   0.199219   0.203125   0.201172   0.011813  -0.007244   0.002282
10   0.201172   0.203125   0.202148   0.002282  -0.007244  -0.002482
11   0.201172   0.202148   0.201660   0.002282  -0.002482  -0.000100
_____________________________________________________________________
App.root = 0.201660

(b) Write a programme in C to find a root using the Newton-Raphson method. Apply the Newton-Raphson method to determine a root of the equation f(x) = cos x − x e^x = 0, if it exists, and verify your answer with the programme written by you.

Ans:

/* Root using the secant method */
#include <stdio.h>
#include <math.h>

#define f(x) (exp(x)*log(x)-x*x)

main()
/* Main program to use the secant method to find the root of
   f(x) = exp(x)*ln(x) - x*x = 0.  Copyright Tao Pang 1997. */
{
    int istep;
    double dl = 1e-6;
    double a, b, xi, xf, di, df;
    void secant();

    a = 1;
    b = 2;
    di = (b - a) / 10;
    xi = (a + b) / 2;
    secant(dl, xi, di, &xf, &df, &istep);
    printf("%4d %16.8lf %16.8lf\n", istep, xf, df);
}

void secant(dl, xi, di, xf, df, istep)
/* Subroutine for the root of f(x) = 0 with the secant method.
   Copyright Tao Pang 1997. */

int *istep;
double dl, xi, di;
double *xf, *df;
{
    double x0, x1, x2, d;

    x0 = xi;
    x1 = xi + di;
    *df = di;
    *istep = 0;
    while (fabs(*df) > dl) {
        d = f(x1) - f(x0);                 /* denominator of the secant step */
        x2 = x1 - f(x1) * (x1 - x0) / d;   /* zero of the secant line */
        x0 = x1;
        x1 = x2;
        *df = x1 - x0;                     /* size of the last step */
        *istep = *istep + 1;
    }
    *xf = x0;
}
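The listing above is a secant-method program for f(x) = e^x ln x − x² (after Tao Pang), rather than Newton-Raphson for the function asked about. As an added illustration, here is a minimal Newton-Raphson sketch in C for f(x) = cos x − x e^x; the initial guess, tolerance and iteration limit are arbitrary choices, not values from the assignment.

#include <stdio.h>
#include <math.h>

/* f(x) = cos x - x e^x and its derivative */
static double f(double x)  { return cos(x) - x * exp(x); }
static double fd(double x) { return -sin(x) - (1.0 + x) * exp(x); }

int main(void)
{
    double x = 1.0;                 /* initial guess */
    for (int i = 1; i <= 20; i++) {
        double dx = f(x) / fd(x);   /* Newton-Raphson correction */
        x -= dx;
        printf("%2d  x = %.8f  f(x) = %.2e\n", i, x, f(x));
        if (fabs(dx) < 1e-8)        /* stop when the step is tiny */
            break;
    }
    printf("Approximate root: %.8f\n", x);
    return 0;
}

With the initial guess x = 1 this converges in a few iterations to a root near x ≈ 0.5178.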

Question 3: (a) Find the probability of getting between 6 and 9 tails inclusive in 20 tosses of a fair coin by using (i) the binomial distribution, (ii) the normal approximation to the binomial distribution.
Ans: (i) Let us call it a 'success' when a tail comes up. Then p = 1/2, q = 1/2 and n = 20. The probability of getting between 6 and 9 tails inclusive in 20 tosses is

P(6) + P(7) + P(8) + P(9) = 20C6 (1/2)^6 (1/2)^14 + 20C7 (1/2)^7 (1/2)^13 + 20C8 (1/2)^8 (1/2)^12

+ 20C9 (1/2)^9 (1/2)^11
= (38760 + 77520 + 125970 + 167960) / 2^20
= 410210 / 1048576
= 0.3912

(ii) Here n = 20, p = 1/2, q = 1/2, so μ = np = 10 and σ² = npq = 5, i.e. σ = 2.236. We require P[6 ≤ X ≤ 9]. Using the standard normal distribution, we get

P[(6 − 10)/2.236 ≤ Z ≤ (9 − 10)/2.236] = P[−1.78 ≤ Z ≤ −0.44] = 0.2925

(With the continuity correction one would instead evaluate P[5.5 ≤ X ≤ 9.5] = P[−2.01 ≤ Z ≤ −0.22] ≈ 0.3907, which agrees more closely with the exact binomial value.)
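As a numerical cross-check (an added illustration, not part of the original answer), the sketch below sums the exact binomial probabilities for 6 to 9 tails and evaluates the normal approximation with and without the continuity correction, using the C library's erf function.

#include <stdio.h>
#include <math.h>

/* binomial coefficient C(n, k), computed iteratively */
static double choose(int n, int k)
{
    double c = 1.0;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i;
    return c;
}

/* standard normal CDF via the error function */
static double Phi(double z) { return 0.5 * (1.0 + erf(z / sqrt(2.0))); }

int main(void)
{
    int n = 20;
    double p = 0.5, exact = 0.0;
    for (int k = 6; k <= 9; k++)
        exact += choose(n, k) * pow(p, k) * pow(1.0 - p, n - k);

    double mu = n * p, sigma = sqrt(n * p * (1.0 - p));
    double approx    = Phi((9.0 - mu) / sigma) - Phi((6.0 - mu) / sigma);
    double corrected = Phi((9.5 - mu) / sigma) - Phi((5.5 - mu) / sigma);

    printf("exact binomial P(6..9)         = %.4f\n", exact);
    printf("normal approx, no correction   = %.4f\n", approx);
    printf("normal approx, with correction = %.4f\n", corrected);
    return 0;
}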

(b) Table 2 below shows the respective heights x and y of a sample of 12 fathers and their oldest sons. (i) Construct a scatter diagram. (ii) Find the least-squares regression line of y on x. (iii) Find the least-squares regression line of x on y.

Table 2
Height x of Father (cm):  165  160  170  163  173  158  178  168  173  170  175  180
Height y of Son (cm):     173  168  173  165  175  168  173  165  180  170  173  178
Ans: From the given table we have Σx = 2033, mean(x) = 169.42, Σy = 2061, mean(y) = 171.75. Take u = x − 169 and v = y − 171.

  x     y      u     u²     v     v²    u·v
 165   173    -4     16     2      4     -8
 160   168    -9     81    -3      9     27
 170   173     1      1     2      4      2
 163   165    -6     36    -6     36     36
 173   175     4     16     4     16     16
 158   168   -11    121    -3      9     33
 178   173     9     81     2      4     18
 168   165    -1      1    -6     36      6
 173   180     4     16     9     81     36
 170   170     1      1    -1      1     -1
 175   173     6     36     2      4     12
 180   178    11    121     7     49     77

From the above table we get Σu = 5, Σv = 9, Σuv = 254, Σu² = 527, Σv² = 253. Therefore,

b_yx = [Σuv − (Σu)(Σv)/n] / [Σu² − (Σu)²/n] = (254 − 45/12) / (527 − 25/12) = 250.25 / 524.92 = 0.4767

b_xy = [Σuv − (Σu)(Σv)/n] / [Σv² − (Σv)²/n] = (254 − 45/12) / (253 − 81/12) = 250.25 / 246.25 = 1.0162

Therefore, the least-squares regression line of y on x is:
y − 171.75 = 0.4767(x − 169.42), i.e. y = 0.4767x + 90.99

and the regression line of x on y is:
x − 169.42 = 1.0162(y − 171.75), i.e. x = 1.0162y − 5.11
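A short C sketch (an added illustration, not part of the original answer) that computes the two regression coefficients directly from the twelve raw data pairs; since the slopes are unchanged by the shift to u and v, it reproduces the coefficients above.

#include <stdio.h>

int main(void)
{
    double x[] = {165, 160, 170, 163, 173, 158, 178, 168, 173, 170, 175, 180};
    double y[] = {173, 168, 173, 165, 175, 168, 173, 165, 180, 170, 173, 178};
    int n = 12;
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;

    for (int i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }

    /* least-squares slopes of y on x and of x on y */
    double byx = (sxy - sx * sy / n) / (sxx - sx * sx / n);
    double bxy = (sxy - sx * sy / n) / (syy - sy * sy / n);
    double mx = sx / n, my = sy / n;

    printf("y on x: y = %.4f x + %.2f\n", byx, my - byx * mx);
    printf("x on y: x = %.4f y + %.2f\n", bxy, mx - bxy * my);
    return 0;
}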

(c) Find the area under the standard normal curve (a) between z = 0 and z = 1.2, (b) between z = −0.68 and z = 0, (c) between z = −0.46 and z = 2.21, (d) between z = 0.81 and z = 1.94.

Ans:
(a) z = 0 and z = 1.2:
P(0 ≤ z ≤ 1.2) = F(1.2) − F(0) = 0.8849 − 0.5 = 0.3849 (from the normal distribution table)

(b) z = −0.68 and z = 0:
P(−0.68 ≤ z ≤ 0) = P(0 ≤ z ≤ 0.68) = F(0.68) − F(0) = 0.7517 − 0.5 = 0.2517 (from the normal distribution table)

(c) z = −0.46 and z = 2.21:
P(−0.46 ≤ z ≤ 2.21) = F(2.21) − [1 − F(0.46)] = 0.9864 − 0.3228 = 0.6636 (from the normal distribution table)

(d) z = 0.81 and z = 1.94:
P(0.81 ≤ z ≤ 1.94) = F(1.94) − F(0.81) = 0.9738 − 0.7910 = 0.1828 (from the normal distribution table)
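These table look-ups can be checked numerically; the sketch below (an added illustration, not part of the original answer) evaluates the same four areas with the C library's erf function.

#include <stdio.h>
#include <math.h>

/* standard normal CDF via the error function */
static double Phi(double z) { return 0.5 * (1.0 + erf(z / sqrt(2.0))); }

int main(void)
{
    double lo[] = {0.0, -0.68, -0.46, 0.81};
    double hi[] = {1.2,  0.0,   2.21, 1.94};
    for (int i = 0; i < 4; i++)
        printf("P(%5.2f < Z < %4.2f) = %.4f\n", lo[i], hi[i], Phi(hi[i]) - Phi(lo[i]));
    return 0;
}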

Question 4: Solve the following equations (if a solution exists) using the given method:

(a) Solve the equations:
10x1 − x2 + 2x3 = 4
x1 + 10x2 − x3 = 3
2x1 + 3x2 + 20x3 = 7
using the Gauss elimination method.

Ans: Eliminating x1 from the second and third equations (R2 − (1/10)R1, R3 − (2/10)R1) gives

10.1 x2 − 1.2 x3 = 2.6
3.2 x2 + 19.6 x3 = 6.2

Eliminating x2 from the last equation (R3 − (3.2/10.1)R2) gives 19.980 x3 = 5.376, so x3 = 0.269. Back-substitution then gives x2 = (2.6 + 1.2 × 0.269)/10.1 = 0.289 and x1 = (4 + x2 − 2x3)/10 = 0.375, the same solution as obtained by LU decomposition in part (b).
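A brief C sketch of Gauss elimination with back-substitution for this 3 × 3 system (an added illustration, not part of the original solution; no pivoting is used, which is safe here because the coefficient matrix is diagonally dominant).

#include <stdio.h>

int main(void)
{
    /* augmented matrix [A | b] of the given system */
    double a[3][4] = {
        {10, -1,  2, 4},
        { 1, 10, -1, 3},
        { 2,  3, 20, 7}
    };
    double x[3];
    int n = 3;

    /* forward elimination */
    for (int k = 0; k < n - 1; k++)
        for (int i = k + 1; i < n; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j <= n; j++)
                a[i][j] -= m * a[k][j];
        }

    /* back-substitution */
    for (int i = n - 1; i >= 0; i--) {
        double s = a[i][n];
        for (int j = i + 1; j < n; j++)
            s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
    }

    printf("x1 = %.4f, x2 = %.4f, x3 = %.4f\n", x[0], x[1], x[2]);
    return 0;
}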

(b) Solve the equations:
10x1 − x2 + 2x3 = 4
x1 + 10x2 − x3 = 3
2x1 + 3x2 + 20x3 = 7
using the LU decomposition method.

Ans:


The values obtained are

L11 = 10, L21 = 1, L22 = 101/10, L31 = 2, L32 = 32/10, L33 = 2018/101,
U12 = −1/10, U13 = 1/5, U23 = −12/101.

Writing LUX = B and letting UX = Y, we first solve LY = B:
10 y1 = 4                               → y1 = 4/10
y1 + (101/10) y2 = 3                    → y2 = 26/101
2 y1 + (32/10) y2 + (2018/101) y3 = 7   → y3 = 543/2018

Now solving UX = Y, the equations are:
x1 − (1/10) x2 + (1/5) x3 = 4/10
x2 − (12/101) x3 = 26/101
x3 = 543/2018 = 0.269

so x2 = 58984/203818 = 0.289 and x1 = 76457/203818 = 0.375. You can verify the answer by substituting these values into the given equations.
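For completeness, a compact C sketch of the same Crout-style decomposition (L lower triangular, U with unit diagonal) followed by forward and back substitution; this is an added illustration under those assumptions, not part of the original answer.

#include <stdio.h>

#define N 3

int main(void)
{
    double a[N][N] = {{10, -1, 2}, {1, 10, -1}, {2, 3, 20}};
    double b[N] = {4, 3, 7};
    double l[N][N] = {{0}}, u[N][N] = {{0}}, y[N], x[N];

    /* Crout decomposition: A = LU with U having a unit diagonal */
    for (int j = 0; j < N; j++) {
        u[j][j] = 1.0;
        for (int i = j; i < N; i++) {          /* column j of L */
            l[i][j] = a[i][j];
            for (int k = 0; k < j; k++)
                l[i][j] -= l[i][k] * u[k][j];
        }
        for (int i = j + 1; i < N; i++) {      /* row j of U */
            u[j][i] = a[j][i];
            for (int k = 0; k < j; k++)
                u[j][i] -= l[j][k] * u[k][i];
            u[j][i] /= l[j][j];
        }
    }

    /* forward substitution: L y = b */
    for (int i = 0; i < N; i++) {
        y[i] = b[i];
        for (int k = 0; k < i; k++)
            y[i] -= l[i][k] * y[k];
        y[i] /= l[i][i];
    }

    /* back substitution: U x = y (U has a unit diagonal) */
    for (int i = N - 1; i >= 0; i--) {
        x[i] = y[i];
        for (int k = i + 1; k < N; k++)
            x[i] -= u[i][k] * x[k];
    }

    printf("x1 = %.4f, x2 = %.4f, x3 = %.4f\n", x[0], x[1], x[2]);
    return 0;
}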

Question 5: Out of the three methods, i.e. the Secant method, the Regula Falsi method and the Newton-Raphson method, which method is more efficient and why? Determine the efficiency or the order of these three methods.
Ans: The Newton-Raphson method is considered to be the fastest of the three: its order of convergence is 2 (quadratic), so the number of correct digits roughly doubles at every iteration. However, the method approaches the root only when the initial guess is chosen very near the root; if the initial guess is not near the actual root, the method may diverge. The Secant method has order (1 + √5)/2 ≈ 1.62 (superlinear) and generally gives better results than the Regula Falsi method, but it may fail to converge. The Regula Falsi method converges only linearly (order 1), but it always converges, because the root remains bracketed between the two endpoints.

If evaluating f'(x) is not difficult, then the Newton-Raphson method is recommended. If evaluating f'(x) is difficult, then the Secant method is recommended, but the convergence should be checked against the bisection method once in every 10 iterations.

What follows is a detailed examination of four root-finding methods, the effect of different equations on the efficiency of each method, and the amount of error produced by each. Although some calculation has been done by hand, the vast majority of the conclusions were arrived at by using computer programs for each of the methods. The four methods examined were bisection, false position, secant, and Newton. Each method seemed to have a particular type of equation or desirable outcome for which it is most efficient.

Secant
One of the faster methods, compared to bisection and false position, is the secant method. Unlike the other two methods, the values f(a) and f(b) at the two initial points do not have to be on opposite sides of the root, i.e. no sign change is required. The method works by always making the old b value into the new a value; the new b value then becomes the zero of the secant line. One problem with this method is that the slope of the secant line can become very small, which causes the next iterate to move very far from the current points. The method therefore tends to fail on equations with a very low slope near the root, because the secant line jumps by large amounts. Another problem is that this method, like the false position method, does not know when to stop: it must be repeated until f at the current guess is very small.

Newton's Method
Newton's method is generally used in calculus to approximate roots. One main difference between Newton and the other methods is that it requires only one initial guess instead of two. If the guess is relatively close to a root, Newton's method works incredibly fast; if the guess is not close to a root, however, it can work slowly and, depending on the function, may be impossible to use. One similarity between Newton's method and the bisection method is that both approach the root in a consistent fashion, which allows the user to predict how long it will take to arrive at a root, assuming the function and the original guess are sufficiently "nice" for Newton's method. Newton's method roughly doubles the precision with each step; this is why Newton requires the same number of steps in Table B, regardless of the value of n in the equation. Newton's method uses not only the function itself but also its derivative. One chooses a point a; throughout the course of the method, this point is replaced by the zero of the tangent line at (a, f(a)). To find the new point, use the formula a − f(a)/f'(a). This becomes the new a, and one repeats the process until the root is found to sufficient precision.

Another interesting possibility with Newton's method is error calculation. Error calculations simply show the difference between the value arrived at by a method and the actual value. In what follows, R is the root and x is the result of the most recent Newton iteration.

Error = x − R
New error = x − f(x)/f'(x) − R

where f(x) and f'(x) are expanded as power series about R. With Newton's method we assume that x is close enough to R that the first non-zero term of each series dominates; under this assumption one finds that the new error is roughly proportional to (x − R)², i.e. the error is squared (and the number of correct digits roughly doubles) at each step.

False Position
Another method used for estimating roots of equations is the false position method. This method starts with two endpoints a and b. If f(a) and f(b) are not themselves roots, then f(a) · f(b) must be negative, since the two endpoints have to lie on opposite sides of the root (a negative value times a positive value is negative). After the endpoints are chosen, false position finds the zero of the secant line joining (a, f(a)) and (b, f(b)) and takes that as its new endpoint. This is repeated as the endpoints close in on the root. An examination of the computer code for the false position method shows that it is very similar to the bisection method, except that it uses the zero of the secant line to approach the root rather than the midpoint each time. The false position method is often significantly faster than the bisection method for equations such as y = xⁿ − 2 with a small value of n; when n becomes large, false position often becomes the slowest method. In the data section below, Table B shows the number of repetitions for each method and each value of n using the equation y = xⁿ − 2.
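As an added illustration (not part of the original study), a minimal C sketch of the false position method described above, applied to f(x) = x² − 2 on [1, 2]; the tolerance and iteration cap are arbitrary choices.

#include <stdio.h>
#include <math.h>

static double f(double x) { return x * x - 2.0; }

int main(void)
{
    double a = 1.0, b = 2.0, c = a;
    for (int i = 1; i <= 100; i++) {
        /* zero of the secant line joining (a, f(a)) and (b, f(b)) */
        c = b - f(b) * (b - a) / (f(b) - f(a));
        printf("%2d  c = %.6f  f(c) = %.2e\n", i, c, f(c));
        if (fabs(f(c)) < 1e-5)
            break;
        if (f(a) * f(c) < 0)   /* keep the endpoints bracketing the root */
            b = c;
        else
            a = c;
    }
    printf("Approximate root: %.6f (sqrt(2) = %.6f)\n", c, sqrt(2.0));
    return 0;
}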

Data

Table A: f(x) = x² − 2, a = 1, b = 2, accuracy within 0.00001

Method Name       Number of steps
Bisection         17
False Position     8
Secant             5
Newton             4

Table B:

For f(x) = xⁿ − 2, a = 1, b = 2, n = {3, 4, ..., 10} and accuracy within 0.0001

Number of steps   n=3   n=4   n=5   n=6   n=7   n=8   n=9   n=10
Bisection          17    17    17    17    17    17    17    17
False Position     13    20    31    49    76   120   189   296
Secant              —     —     —     —     —     —     —     —
Newton              —     —     —     —     —     —     —     —

Table C:

For f(x) = x² − 2, a = 1, b = 2, done for 25 iterations of each method, the error was:

Method            Error
Bisection          9.501 × 10⁻⁸
False Position    −2.420323 × 10⁻⁸
Secant            −2.420323 × 10⁻⁸
Newton            −2.420323 × 10⁻⁸

For secant and Newton, the program was not able to complete 25 iterations: after 6 iterations the memory overflowed.

Analysis of Data
The above data uses "nice" endpoints, since the functions that were examined are also ideal. Even so, there was a lot of variation in the speed, accuracy, and reliability of each method. Table A shows the variation in speed between all four methods; it is clear that in this case Newton's method would be a good choice for finding the root.

In Table B, one can see the consistency with which the bisection and Newton methods work for nice functions. This same consistency could be seen using the bisection method on any function, but Newton requires a relatively calm function in order to be helpful. One can also see that one method's definition of a nice function is not the same as another method's. Newton will work well on any non-oscillating, non-asymptotic function whose slope is not very close to zero; nearly any polynomial fits this description. However, the false position method's effectiveness is inversely proportional to the degree of the polynomial, so false position's definition of a nice function is something more on the order of a low-degree function.

Table C shows the degree of accuracy that can be achieved in 25 iterations of each method. The errors for all the methods are the same except for the bisection method, which once again emphasizes how relatively fast the other three are for functions they consider nice.

The main problem encountered in this exercise had to do with the speed of these methods. Because they worked so quickly, within six steps the secant and Newton methods had produced results that were as accurate as the computer's memory could hold; after six iterations the computer would have a memory overflow. We corrected this by altering the number of iterations.

Conclusion
It was shown first-hand in this analysis that the speed of a method is inversely proportional to its ability to deal with non-nice functions. Newton's method originally seemed to be an exception to this rule, because it was about five times as fast as bisection but just as consistent for xⁿ − 2 for all n. After trying functions of other forms, however, one finds that the functions which are nice for Newton's method are different from those for the false position and secant methods. Non-nice functions are easily found for Newton's method, which makes Newton inconsistent and unable to solve efficiently for the root. For nice functions and guesses relatively close to the root, each of the methods requires relatively few iterations before coming very close to that root. By examining the type of function, one can determine which sort of method would be most efficient in finding the root.
