
Assignment 3

July 12, 2017

1 Introduction
In this assignment we study the least squares method in greater
detail by fitting a model for the Bessel function. We fit three models:
1. $f(x) = A\cos(x) + B\sin(x)$
2. $f(x) = A\cos(x)/\sqrt{x} + B\sin(x)/\sqrt{x}$
3. fitting model 2 with noise added to the Bessel function
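Before turning to the full code, the normal-equations fit used throughout can be sketched in isolation. This is a minimal illustration with made-up coefficients ($A=2$, $B=3$), not the assignment's Bessel data:

```python
import numpy as np

# Build a design matrix with cos(x) and sin(x) columns (model 1's form)
# and fit y = 2*cos(x) + 3*sin(x) by least squares; in the noiseless
# case the solver recovers A = 2, B = 3 exactly.
x = np.linspace(0.5, 40, 101).reshape(-1, 1)
M = np.hstack((np.cos(x), np.sin(x)))
y = 2 * np.cos(x) + 3 * np.sin(x)

# normal equations: p = (M^T M)^{-1} M^T y
p = np.linalg.inv(M.T @ M) @ (M.T @ y)
print(p.ravel())  # approximately [2. 3.]
```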

2 Code
Listing 1: Code for least squares fitting

import numpy as np
import matplotlib.pyplot as plt
import scipy.special as sp

maxlim = 40
numpts = 101

def J(x, v):
    # first-order Bessel function of the first kind
    return sp.jv(1, x)

x, spac = np.linspace(0, maxlim, numpts, endpoint=True, retstep=True)
x = x.reshape(numpts, 1)

def matfeats1(x):
    # design matrix for model 1: columns cos(x) and sin(x)
    return np.hstack((np.cos(x), np.sin(x)))

def matfeats2(x):
    # design matrix for model 2: columns cos(x)/sqrt(x) and sin(x)/sqrt(x)
    return np.hstack((np.cos(x) / x**0.5, np.sin(x) / x**0.5))

def leastsq(A, fvals):
    # solve the normal equations (A^T A) p = A^T fvals
    return np.linalg.inv(A.T.dot(A)).dot(A.T.dot(fvals))

def nucalc(x, x0, col, eps, model):
    # fit the model to J1 (plus noise of amplitude eps) for x >= x0
    # and extract nu from the fitted coefficients
    vec = x[(x >= x0)]
    vec = vec.reshape(vec.shape[0], 1)
    a = leastsq(model(vec), J(vec, 1) + eps * np.random.randn(vec.shape[0], 1))
    return 2 * (np.arccos(a[0] / (a[0]**2 + a[1]**2)**0.5) - np.pi / 4) / np.pi

xvec = x[(x >= spac) & (x <= x[-5])]

b = np.array([nucalc(x, z, 'r', 0, matfeats1) for z in xvec])
c = np.array([nucalc(x, z, 'r', 0, matfeats2) for z in xvec])
dtr = np.array([nucalc(x, z, 'r', 1e-2, matfeats2) for z in xvec])
z = dtr[:]
e = 0 * dtr
# here we observe the effect of the noise over many iterations by
# accumulating the squared deviation from the noiseless estimate
for i in range(1, 500):
    z = np.array([nucalc(x, zz, 'r', 1e-2, matfeats2) for zz in xvec])
    e += (z - c) * (z - c)
d = abs(1 - c)

plt.plot(xvec, e, 'g')
plt.ylabel('variance')
plt.xlabel('$x_0$')
plt.title('variance vs $x_0$')
plt.show()

# cost: weighted sum of min-max normalised deviation and variance
cost = 0.9 * (d - np.min(d)) / (np.max(d) - np.min(d)) \
     + 0.1 * (e - np.min(e)) / (np.max(e) - np.min(e))
plt.plot(xvec, cost)
plt.title('cost function {lesser is a better fit} vs $x_0$')
plt.ylabel('cost function')
plt.xlabel('$x_0$')
plt.show()

print(xvec[np.argmin(cost)])

d = np.array([nucalc(x, z, 'r', 1e-2, matfeats2) for z in xvec])

plt.plot(xvec, b, 'bo', label='model a, ep=0')
plt.plot(xvec, c, 'ro', label='model b, ep=0')
plt.plot(xvec, d, 'go', label='model b, ep=1e-2')
plt.ylabel('estimated value of $\\nu$')
plt.xlabel('$x_0$')
plt.title('estimated $\\nu$ vs $x_0$')
plt.legend(loc='lower left')
plt.show()

3 Observations
On running the code above for various values of numpts (i.e., the number of
measurements), we see that the fits for models 1 and 2 do not change signif-
icantly. However, due to the added noise, model 3 shows deviations, particularly
for higher values of $x_0$. This deviation (measured in the code using the mean squared
error) becomes less prevalent at the lower values of $x_0$ as the number of measure-
ments increases (see figures). The effect of noise is best seen from the variance
plots (shown for numpts = 41, 81, and 300). We then proceed to define a best fit
by defining a cost function, which is minimum at an optimal weighted sum of the
variance and the deviation of the estimated value.
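The cost used in the listing (weights 0.9 and 0.1 on the min-max-normalised deviation and variance) can be sketched in isolation. The arrays below are placeholders, not the assignment's data:

```python
import numpy as np

def minmax(v):
    # rescale a vector to the range [0, 1]
    return (v - np.min(v)) / (np.max(v) - np.min(v))

# placeholder deviation and variance curves over candidate x0 values
d = np.array([0.9, 0.5, 0.2, 0.1, 0.3])   # |1 - estimated nu|
e = np.array([0.1, 0.2, 0.3, 0.5, 0.9])   # accumulated squared error

cost = 0.9 * minmax(d) + 0.1 * minmax(e)
best = np.argmin(cost)    # index of the best-fit x0
print(best)  # 3: small deviation outweighs the slightly higher variance
```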

Graphs:

Figure 1: variance for numpts = 300
Figure 2: variance for numpts = 81
Figure 3: variance for numpts = 41

Model fits:

Figure 4: estimated value for numpts = 41
Figure 5: estimated value for numpts = 81
Figure 6: estimated value for numpts = 300

Table 1: The best $x_0$ values (model 3)

numpts    $x_0$
41        11.0
81        13.0
300       17.9

4 Results and discussion

The following conclusions can be drawn from the graphs above:
1. The best value of $x_0$ obtained for model 3 increases with an increasing
number of points considered (numpts).
2. A higher number of points leads to a sharper increase in variance; further,
this rise is seen primarily in the higher (highest 20%) values of $x_0$. The
variance stays almost the same for lower values of $x_0$.
These points clearly indicate that a large number of equations can mitigate
noisy signals; thus, better fits need large sets of equations.
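The noise-mitigation claim can be checked with a standalone sketch (a synthetic cosine signal, not the Bessel data): the spread of the least-squares estimate shrinks as the number of equations grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_amplitude_spread(n, sigma, trials=200):
    # Fit y = A*cos(x) + noise by least squares over `trials` runs and
    # return the spread (std) of the estimated A (the true A is 2).
    x = np.linspace(0.5, 40, n)
    M = np.cos(x).reshape(-1, 1)
    ests = []
    for _ in range(trials):
        y = 2 * np.cos(x) + sigma * rng.standard_normal(n)
        a, *_ = np.linalg.lstsq(M, y, rcond=None)
        ests.append(a[0])
    return np.std(ests)

s_small = fit_amplitude_spread(50, 0.1)
s_large = fit_amplitude_spread(500, 0.1)
print(s_small, s_large)  # the larger system gives the smaller spread
```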
We also see that the lower values of $x_0$ do not produce accurate values of $\nu$.
This is because the Bessel function approaches our selected model only for
higher values of $x$.
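This follows from the large-$x$ asymptotic form $J_\nu(x) \approx \sqrt{2/(\pi x)}\,\cos(x - \nu\pi/2 - \pi/4)$, which has exactly the shape of model 2 and justifies the $\nu$-extraction formula in the listing. A quick check over a large-$x$ window (the range $[20, 40]$ is chosen here for illustration) recovers $\nu \approx 1$:

```python
import numpy as np
import scipy.special as sp

# Fit model 2 (columns cos(x)/sqrt(x), sin(x)/sqrt(x)) to J1 over a
# large-x window, then recover nu from the coefficients as in the listing.
x = np.linspace(20, 40, 200).reshape(-1, 1)
M = np.hstack((np.cos(x) / np.sqrt(x), np.sin(x) / np.sqrt(x)))
y = sp.jv(1, x)

a, b = np.linalg.lstsq(M, y, rcond=None)[0].ravel()
nu = 2 * (np.arccos(a / np.hypot(a, b)) - np.pi / 4) / np.pi
print(nu)  # close to 1, the order of the Bessel function used
```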
