
23

Function Approximation
Case Study:
Smart Sensor

1
23 Smart Sensor Diagram

[Figure: smart sensor arrangement — an object at position y casts a
shadow over two photocells, producing output voltages v1 and v2.]
2
23 Collected Data
[Figure: scatter plots of the collected data — the raw photocell
voltages (left) and the same data after normalization to the
range [-1, 1] (right).]

The network input vector and target are

    p = [v1; v2],   t = y
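The normalization step can be sketched as a min-max mapping of each variable into [-1, 1]; the sample voltages below are hypothetical placeholders, not the collected data.

```python
import numpy as np

def normalize(x):
    """Map each row of x linearly into [-1, 1] (min-max scaling),
    returning the scaled data plus the min/max needed to undo it."""
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    xn = 2.0 * (x - xmin) / (xmax - xmin) - 1.0
    return xn, xmin, xmax

# p: 2 x Q matrix of photocell voltages, t: 1 x Q vector of positions
p = np.array([[0.5, 1.0, 2.5],
              [0.3, 1.2, 2.0]])      # hypothetical readings
t = np.array([[0.5, 1.0, 1.5]])      # hypothetical positions
pn, pmin, pmax = normalize(p)        # normalized inputs in [-1, 1]
tn, tmin, tmax = normalize(t)        # normalized targets in [-1, 1]
```

Keeping `tmin` and `tmax` is what later allows the trained network's output to be converted back to original units.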

3
23 Network Architecture

Inputs -> Tan-Sigmoid Layer -> Linear Layer

[Figure: two-layer network. The input p (2x1) feeds a hidden layer
with weights W1 (S1x2) and bias b1 (S1x1); its output a1 (S1x1) feeds
a linear output layer with weights W2 (1xS1) and bias b2 (1x1).]

    a1 = tansig(W1 p + b1)
    a2 = purelin(W2 a1 + b2)
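The two layer equations can be sketched directly in NumPy; tansig is equivalent to the hyperbolic tangent and purelin is the identity, and the weight shapes follow the diagram (W1 is S1x2, W2 is 1xS1). The random weights here are placeholders, not trained values.

```python
import numpy as np

def tansig(n):
    return np.tanh(n)   # tansig is the hyperbolic tangent

def purelin(n):
    return n            # purelin is the identity (linear layer)

def forward(p, W1, b1, W2, b2):
    """Two-layer 2-S1-1 network: tansig hidden layer, linear output."""
    a1 = tansig(W1 @ p + b1)     # a1 = tansig(W1 p + b1), shape S1 x Q
    a2 = purelin(W2 @ a1 + b2)   # a2 = purelin(W2 a1 + b2), shape 1 x Q
    return a2

rng = np.random.default_rng(0)
S1 = 10                                   # hidden-layer size
W1 = rng.standard_normal((S1, 2))
b1 = rng.standard_normal((S1, 1))
W2 = rng.standard_normal((1, S1))
b2 = rng.standard_normal((1, 1))
p = rng.standard_normal((2, 5))           # five normalized input vectors
a = forward(p, W1, b1, W2, b2)            # network outputs, shape (1, 5)
```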

4
23 Training Performance (S1=10)
[Figure: sum squared error (SSE) versus iteration number, on log-log
scales, for training with Bayesian regularization; the SSE falls from
about 10^2 to below 10^-2.]

Final performance using five random initial weights:

    1.121e-003   8.313e-004   1.068e-003   8.672e-004   8.271e-004
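As a sketch of what Bayesian regularization minimizes: a performance index that adds a sum-of-squared-weights penalty to the sum squared error, F = beta*E_D + alpha*E_W. The fixed alpha and beta below are illustrative assumptions only — the algorithm itself re-estimates them as training proceeds.

```python
import numpy as np

def reg_performance(errors, weights, alpha, beta):
    """Regularized performance index F = beta*E_D + alpha*E_W, where
    E_D is the sum of squared errors and E_W the sum of squared
    weights. Penalizing large weights keeps the response smooth."""
    E_D = np.sum(np.asarray(errors) ** 2)
    E_W = np.sum(np.asarray(weights) ** 2)
    return beta * E_D + alpha * E_W

# Illustrative values only; alpha and beta are adapted in practice.
F = reg_performance(errors=[0.1, -0.2], weights=[0.5, 1.5],
                    alpha=0.01, beta=1.0)
```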


5
23 Effective Number of Parameters

[Figure: effective number of parameters versus iteration number; it
levels off well below the total number of parameters, n = 41.]

6
23 Number of Hidden Layer Neurons

Sum squared error for different hidden-layer sizes:

    S1 = 3        S1 = 5        S1 = 8        S1 = 10       S1 = 20
    4.406e-003    9.227e-004    8.088e-004    8.672e-004    8.096e-004

Once a sufficient number of neurons is reached (about five), the error
does not decrease further when Bayesian regularization is used.

7
23 Scatter Plots

[Figure: scatter plots of network output a versus target t for the
training set (left) and the testing set (right), both on [-1, 1].]

8
23 Error Histogram
[Figure: histogram of network errors in inches; the errors span
roughly -0.02 to 0.02 inches.]

The network output is converted back to original units by reversing
the normalization:

    a = (an + 1) .* (tmax - tmin)/2 + tmin
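The reverse-normalization step — mapping the network output an in [-1, 1] back to original units via a = (an + 1)(tmax - tmin)/2 + tmin — can be sketched as follows; the tmin/tmax values are hypothetical.

```python
import numpy as np

def unnormalize(an, tmin, tmax):
    """Invert the [-1, 1] normalization:
    a = (an + 1) * (tmax - tmin) / 2 + tmin."""
    return (np.asarray(an) + 1.0) * (tmax - tmin) / 2.0 + tmin

an = np.array([-1.0, 0.0, 1.0])           # normalized network outputs
a = unnormalize(an, tmin=0.5, tmax=1.5)   # back to original units
```

Note that -1 maps back to tmin and +1 maps back to tmax, confirming that this exactly undoes the forward normalization.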
9
23 Trained Network Response

[Figure: surface plot of the trained network response — the estimated
position y as a function of the photocell voltages v1 and v2.]

10
