
Numerical PDEs : Homework #1

Due on Thursday, September 4, 2014


Peter Caya


Problem 1

Calculate √10 to seven decimal places using Heron's rule. Begin with x0 = 2.

Calculating the square root of 10 with the code below and an initial value of 2, I obtained the answer
√10 ≈ 3.1622776.
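A minimal sketch of the Heron iteration x_{n+1} = (x_n + a/x_n)/2 for this calculation (the function name heron and the 1e-8 stopping tolerance are illustrative choices, not necessarily those of the original listing):

#include <cmath>
#include <iostream>
using namespace std;

// Heron's rule for sqrt(a): repeatedly replace x by the average of x and a/x.
double heron(double a, double x0)
{
    double x = x0, x_next = 0.5*(x + a/x);
    // Stop once successive iterates agree to within 1e-8,
    // which fixes the first seven decimal places.
    while (fabs(x_next - x) > 1e-8)
    {
        x = x_next;
        x_next = 0.5*(x + a/x);
    }
    return x_next;
}

int main()
{
    cout.precision(8);
    cout << heron(10.0, 2.0) << endl;   // sqrt(10) = 3.16227766...
    return 0;
}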

Problem 2
Using an initial value of x0 = 2, use Newton's method until you trust the approximation to the sixth decimal place.

To answer this question, I wrote a Newton's method algorithm which continues the iterations until the value of f(x) = x^3 - 2x - 5 is within 10^-9 of zero. Using the initial guess of 2, I obtained the results below:

x0 = 2
Iterations: 4
x4 = 2.0945514815423265098
Function value at x4 = 8.8817841970012523234 × 10^-16

Displayed below is the code I used for this problem:
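A minimal sketch of a Newton iteration of this kind (the helper names f and fp, the 10^-9 stopping test, and the output format are assumptions for illustration, not the original listing):

#include <cmath>
#include <iostream>
using namespace std;

double f(double x)  { return x*x*x - 2.0*x - 5.0; }   // the function whose root is sought
double fp(double x) { return 3.0*x*x - 2.0; }          // its derivative

int main()
{
    double x = 2.0;          // initial guess x0 = 2
    int iterations = 0;
    // Newton step x <- x - f(x)/f'(x), repeated until |f(x)| falls below 1e-9.
    while (fabs(f(x)) > 1e-9)
    {
        x = x - f(x)/fp(x);
        ++iterations;
    }
    cout.precision(16);
    cout << "Iterations: " << iterations << "\n";
    cout << "x = " << x << "\n";
    cout << "f(x) = " << f(x) << endl;
    return 0;
}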


Problem 3
(a)

The code displayed below simply runs the Newton approximation for f(x) = x^3 - x a fixed 100 times, starting from x0 = 1/√3. Here are the first 5 iterates:

n    x_n
1    1.15562e+15
2    7.70416e+14
3    5.13611e+14
4    3.42407e+14
5    2.28271e+14
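A minimal sketch of such a fixed-count loop (assuming, as discussed below, f(x) = x^3 - x with starting value x0 = 1/√3; the exact values printed depend on floating-point rounding, and the listing is illustrative rather than the original):

#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    double x = 1.0/sqrt(3.0);   // starting approximation x0 = 1/sqrt(3)
    cout.precision(6);
    // Apply the Newton step for f(x) = x^3 - x a fixed 100 times, printing each iterate.
    for (int n = 1; n <= 100; ++n)
    {
        x = x - (x*x*x - x)/(3.0*x*x - 1.0);
        cout << n << "\t" << x << endl;
    }
    return 0;
}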

Consider the value of the derivative 3x^2 - 1 at 1/√3: it is exactly 0. So we are attempting Newton's method
at the exact point where the tangent line of the function is horizontal. When using Newton's method, we construct
a line using the function value and its derivative at xn and then determine where that line intercepts the x-axis.
In this case the slope is essentially zero (only round-off keeps it nonzero), so our approximations start off extremely large before converging to 1.

Show that xn converges to 1 for any x0 > 1/√3.

The root that Newton's method converges to depends on the sign of the step f(xn)/f'(xn) in the line
which we draw. If f(xn)/f'(xn) is negative, then the line drawn by Newton's method will
intercept the x-axis to the right of xn. For all starting values x0 > 1/√3, Newton's method produces
positive iterates that converge to 1. A graph of the values produced with Newton's method for the function x^3 - x is displayed below:

[Figure: a1q3_graph.jpeg — Newton iterates for f(x) = x^3 - x]

When Newton's method is applied with the first approximation at x0 = 1/√3 (where, in exact arithmetic, the derivative vanishes), the computed next guess is a very large number. When we use Newton's method, we use the function and its derivative to generate a new approximation of x on the x-axis. Plugging 1/√3 into the derivative 3x^2 - 1 of x^3 - x, we see that it produces a value of zero: this is the point at which the tangent line of the function is horizontal. Because of this, when we draw the tangent line out and see where it meets the x-axis, we get an extremely large value; it is, in effect, the largest starting point the subsequent iterates can have and still converge to 1.

Now consider some x0 > 1/√3. The derivative 3x0^2 - 1 is bounded away from zero, so the Newton step f(x0)/f'(x0) is much smaller and the first iterate lands much closer to the root. As a result these iterates converge more quickly to the nearest root, which in this case is, of course, 1. For 1/√3 < x0 < 1 the function value x0^3 - x0 is negative while the derivative is positive, so each step pushes the iterate to the right, toward 1; for x0 > 1 both are positive and the iterates decrease toward 1. If we instead start with x0 < -1/√3, then the signs simply flip, because x^3 - x is odd, and we get exactly the same outcome in reference to -1.

(b)
When x0 = 1/√5, Newton's method oscillates back and forth between 1/√5 and -1/√5. The cubic x^3 - x is
odd, and x0 = 1/√5 is exactly the point at which the Newton step gives xn+1 = -xn.
For an initial approximation with |x0| < 1/√5, the iterates have progressively smaller absolute values
and converge toward the root at zero.
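A short check of the oscillation claim, assuming as above that f(x) = x^3 - x:

f(1/√5) = (1/√5)(1/5 - 1) = -4/(5√5),   f'(1/√5) = 3/5 - 1 = -2/5,

x1 = 1/√5 - f(1/√5)/f'(1/√5) = 1/√5 - 2/√5 = -1/√5.

Since f is odd, the next step sends -1/√5 back to +1/√5, so the iterates cycle between the two values.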
(c)

In this case, I will use the starting approximation x0 = 1/√3 - 0.0001. With this starting value the algorithm I wrote produced the root xn = -1.
INSERT IMAGE OF PAPER SOLUTION HERE.
When we let x0 = .46, the approximation which is produced is 1.

Problem 4

Calculate ∫_0^0.5 e^x dx.

(a)

To six decimals analytically:

∫_0^0.5 e^x dx = e^0.5 - 1 = 0.648721

(b)

Applying the algorithm for the trapezoidal rule displayed below, with step size h = 1/4, I get an
approximation of .65209651301.

(c)

Using the same algorithm with step size h = .5 I received an approximation of 0.66218031768.
By using these two approximations I can apply Richardson extrapolation to raise the order of the error term
from the h^2 typical of the trapezoidal rule to h^4 (the trapezoidal rule's error expansion contains only even
powers of h, so eliminating the h^2 term leaves an O(h^4) error). Then, with k = 2:

(2^k T(h/2) - T(h)) / (2^k - 1) = (2^2 · .65209651301 - .66218031768) / (2^2 - 1) = 0.648735244

(d)

Step Size    Approximation    Error                    Error Ratio (Richardson error / error)

.5           .66218031768     0.0134593177             0.0010583588
.25          .65209651301     0.003375513              0.0042200361
Richardson   0.6487352448     0.0000142447866666728    -

#include <cmath>
#include <iostream>
using namespace std;

// The integrand for this problem: f(x) = e^x.
double func(double x) { return exp(x); }

// beg is the beginning of the integration region, ending is the end of the region.
// step is the size of the step (h) for the method.
// node represents the beginning of the panel of the integral currently being taken.
// cur_reg is the local integral for that panel; total_reg is the running sum of the panels.
double trapezoid(double beg, double ending, double step)
{
    double node = beg, cur_reg = 0, total_reg = 0;
    cout << "node" << "\t" << "Total Region" << endl;

    while (node + step <= ending)
    {
        cur_reg = step*(func(node) + func(node + step))/2.0;
        node = node + step;
        total_reg = total_reg + cur_reg;
        cout << node << "\t" << total_reg << endl;
    }
    return total_reg;
}
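For completeness, a short driver (hypothetical; it reuses the trapezoid and func routines above) that reproduces the two trapezoidal approximations and the Richardson combination from parts (b)-(d):

int main()
{
    double T_h  = trapezoid(0.0, 0.5, 0.5);    // one panel,  h = 0.5
    double T_h2 = trapezoid(0.0, 0.5, 0.25);   // two panels, h = 0.25

    // Richardson extrapolation: (2^2 * T(h/2) - T(h)) / (2^2 - 1).
    double richardson = (4.0*T_h2 - T_h)/3.0;

    cout.precision(10);
    cout << "T(0.5)     = " << T_h  << endl;
    cout << "T(0.25)    = " << T_h2 << endl;
    cout << "Richardson = " << richardson << endl;
    return 0;
}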

Problem 5
Displayed below is a table showing the results generated by the code copied at the end of this problem:

h       Approximation    Error
.1      -0.8246481       0.0006875505
.05     -0.8251637       0.0001719306
.01     -0.8253287       0.000006877773
.001    -0.8253355       0.00000006878116

From the above table it can be seen that the smallest step size, h = .001, provides the most accurate approximation.

Discuss qualitatively the influences of both the rounding errors in the function values and the error
in the approximation of a derivative with a difference quotient on the result for various values of h.

Before the qualitative analysis, here is another table comparing each step size to the previous (larger) one,
along with the corresponding ratio of the error terms:

h       h_n / h_{n-1}    Error Ratio

.1      -                -
.05     1/2              .2501 ≈ 1/4
.01     1/5              .04 = 1/25
.001    1/10             .01 = 1/100
One obvious conclusion from this table is that a decrease in the step size used to calculate the second
derivative does not produce a merely proportional reduction in the error. Halving the step size quarters the
error; cutting the step size to a fifth reduces the error to roughly a twenty-fifth of what it was. In other
words, the error scales roughly with h^2: there is a quadratic relationship between the reduction of the
step size and the increase in accuracy. Displayed below is a simple graph showing how the error grows as
the step size grows:

GRAPH HERE
This does not mean that when I compute this value I can simply rely on decreasing the step size ad
infinitum. Consider step size h = .0000001. When I run the algorithm I receive an approximation of the
second derivative of -0.8326673, with error equal to 0.007331654. By reducing the step size to one
ten-thousandth of the next smallest step, I produced easily the largest error term in the experiment.
The reason is that, even though the algorithm works properly, the computer stores the function values to
only a limited number of digits. When h is this small, the function values being differenced agree in
almost all of their leading digits, so their difference retains only a few significant digits, and dividing
by the very small power of h magnifies the remaining round-off error. While harmless for moderate step
sizes, for very small h this round-off error dominates the truncation error and the approximations are no
longer valuable.
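As a purely hypothetical illustration of this effect (a centered second-difference quotient applied to f(x) = sin(x) at x = 1, which is not the function from this assignment), the sketch below shows the error first shrinking like h^2 and then being swamped by round-off as h keeps decreasing:

#include <cmath>
#include <cstdio>

// Demo function only -- not the function from this assignment.
double sample(double x) { return sin(x); }

// Centered second-difference quotient (assumed form of the approximation).
double second_diff(double x, double h)
{
    return (sample(x + h) - 2.0*sample(x) + sample(x - h))/(h*h);
}

int main()
{
    const double x = 1.0;
    const double exact = -sin(1.0);   // exact second derivative of sin at x = 1
    // Truncation error shrinks like h^2, but round-off grows roughly like eps/h^2,
    // so the total error eventually increases again as h gets very small.
    for (int k = 1; k <= 8; ++k)
    {
        double h = pow(10.0, -k);
        double approx = second_diff(x, h);
        printf("h = 1e-%d\tapprox = %.8f\terror = %.3e\n", k, approx, fabs(approx - exact));
    }
    return 0;
}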

