
Identification of a Flexible Beam Setup

SC4040 - Filtering and Identification


Roxana Bucsa - 4330110 Radu Florea - 4330358 Remy Kabel - 4132165

SC4040 Filtering and Identification

Delft Center for Systems and Control

Table of Contents

1 Introduction
  1-1 Description of the system
  1-2 Description of the assignment
2 Data Pre-processing
  2-1 Data analysis
  2-2 Spectrum analysis
  2-3 Detrending, Data Compression and Order Estimation
3 Model Estimation, Validation and Optimization
  3-1 Hankel Matrix Test
  3-2 Estimation of State-Space Matrices
  3-3 Model Validation
  3-4 Auto-Correlation Test
  3-5 Cross-Correlation Test
  3-6 Cross-Validation Test
  3-7 Variance Accounted For
  3-8 Model Optimization
4 Conclusion
A Identification Matlab Script


Chapter 1 Introduction

As a finalizing project for the TU Delft course SC4040 Filtering and System Identification, we have been assigned the task of identifying, validating, and optimizing a model of an experimental setup from several measured data sets. This puts the knowledge gathered throughout the course to the test in a practical setting.

1-1 Description of the system

Figure 1-1: The experimental setup of the flexible beam

The experimental setup considers a flexible beam structure as depicted in Figure 1-1. Six piezos are bonded to the beam, three of which are visible in the photograph and three of which are located at the rear of the beam. These piezos can either be used to develop a strain when a voltage is applied to them (causing the beam to bend), or they can detect a strain by emitting a voltage when the beam is bent. In the setup used here, piezos 1 and 3 are used as actuators, whereas piezos 2, 4, 5 and 6 are used as sensors. For this system the maximum voltage that can be supplied is 250 Volt before the actuators saturate. The constraint on the sensor outputs is 10 Volt before the electronic amplifiers saturate.

1-2 Description of the assignment

As a starting point, seven different data sets are given with inputs and corresponding outputs for all piezos. The channels (rows) in the data sets are defined as:

1 - time (seconds)
2-5 - measurements from piezos 2, 4, 5 and 6 (Volts)
6-7 - signals sent to piezos 1 and 3 (Volts)
8 - sample counter (not used)

The task at hand is to identify the real-life system. First, however, all data sets must be pre-processed to find any unsuitable sets. In the following chapters we follow the identification procedure described in figure 4.1 of [1, p. 65].
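To make the channel layout concrete, the sketch below shows how one data set could be unpacked into time, output and input signals. It assumes, as in the script in Appendix A, that the sets are available in MATLAB as matrices data1, ..., data7 with the channels stored as rows.

% Minimal sketch: unpack one data set (channels stored as rows)
data = data1;            % any of the seven sets; data1 is used as an example
t  = data(1, :)';        % channel 1:    time [s]
y  = data(2:5, :)';      % channels 2-5: sensor piezos 2, 4, 5 and 6 [V]
u  = data(6:7, :)';      % channels 6-7: actuator piezos 1 and 3 [V]
Ts = t(2) - t(1);        % sampling period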

Chapter 2 Data Pre-processing

As noted previously, before starting the identification from the data gathered in the experiment, the datasets must first be analyzed to discard any unusable sets. As such, this chapter describes how the available data sets are processed and which of them are discarded.

2-1 Data analysis

While one might be tempted to jump straight into such processing operations, it is a good idea to first perform a general analysis of the collected data and look for any particularities in it. To this end, the obtained input-output data has been plotted; the resulting plots are presented in Figure 2-1 and Figure 2-2. It can immediately be observed that, for dataset 3 (Figure 2-2), the output electronic amplifiers have been saturated. This renders the whole data set unusable, because no knowledge can be extracted about the system's behavior beyond the saturation point, information which might be crucial for obtaining an accurate estimate of the system. As such, dataset 3 is dismissed for the rest of the identification process.

Another way of analyzing the datasets is by looking at the provided .mat data file. From this we can obtain the number of samples that each data set contains, as well as the sampling period that was used to record it. Using the notation ts,i for the sampling period of data set i and Ns,i for the number of samples of data set i, with i = 1, ..., 7, the following values are observed:

Data set i    ts,i [s]    Ns,i
1             0.001       50000
2             0.001       50000
3             0.001       20000
4             0.001       20000
5             0.001       750
6             0.00005     10000
7             0.001       10000



Figure 2-1: Input signals of the data sets. Piezo 1 (left) and piezo 3 (right). From top to bottom are datasets 1 to 7.


Figure 2-2: Output signals of the data sets. From left to right: piezo 2, piezo 4, piezo 5, and piezo 6. From top to bottom are datasets 1 to 7.


Looking at the running times of the experiments and the number of samples recorded in each, we can exclude two more datasets: dataset 5 and dataset 6. Similar findings follow from the data presented in Figure 2-2. Although dataset 6 contains a large number of samples, it was recorded at a very high sampling frequency, resulting in the shortest running time of all experiments, namely 0.5 seconds. Similarly, dataset 5 only covers a running time of 0.75 seconds. Such short experiment durations are insufficient to capture the slower dynamics of the system, so part of the information about the system is lost when using either of these datasets.

2-2 Spectrum analysis

In order to reduce the influence of noise in certain regions of the spectrum, we pre-filter the remaining four signals. Before pre-filtering the data, it should be noted that the useful signal and the disturbance can be separated because the signal power is concentrated mainly at low frequencies, while the noise is concentrated mostly at high frequencies. This can be observed by plotting the spectrum of the signal with the MATLAB function pwelch, which returns the power spectral density estimate of a discrete-time signal using Welch's averaged, modified periodogram method. The results are plotted in Figure 2-3.
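As a minimal sketch (assuming the Signal Processing Toolbox is available and y holds the N x 4 output samples of one data set), the spectra in Figure 2-3 can be produced along the following lines:

% Sketch: Welch power spectral density estimate of each output channel
figure;
for j = 1:4
    subplot(1, 4, j);
    pwelch(y(:, j));   % default window and overlap, normalized frequency axis
end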


Figure 2-3: Power spectrum of the output signals. From left to right: dataset 1, dataset 2, dataset 4, and dataset 7.

Based on the spectrum in Figure 2-3 and on the output signals in Figure 2-2, a rather low power and low output amplitude can be observed for data set 4. This indicates that this data set may be highly sensitive to noise and is therefore not as reliable as the rest, so it is eliminated from the data sets considered for the identification process.



Figure 2-4: Singular values. From left to right: dataset 1, dataset 2, dataset 4, and dataset 7.

2-3 Detrending, Data Compression and Order Estimation

Before starting the subspace identification of the model, some more preliminary actions have to be taken. First, the signals are detrended. This is done with the MATLAB function detrend, which removes unknown offsets and/or linear trends from the signals. Furthermore, in order to later be able to perform the cross-validation procedure (presented in Chapter 3), the measured data is split into two parts: an identification part (the first 2/3 of each data set) and a validation part (the remaining 1/3 of the data). As the subspace identification method, we implement PO-MOESP using the supplied MATLAB toolbox. For this, several steps have to be taken: first the order of the system is estimated, then the A and C matrices, and finally the B and D matrices of the state-space model. In this chapter we restrict ourselves to the necessary preliminary actions; the actual estimation of the state-space model is discussed in Chapter 3. An estimate of the system order may be drawn from the singular values. To obtain them, we use the available "ord" function - dordpo in the case of PO-MOESP - which compresses the data and generates a model order estimate. For this, we only need to specify the block-size parameter s, which should be chosen larger than the expected order of the system; here s = 30. The returned vector S contains the singular values, and a compressed data matrix R is obtained which is used to estimate the system matrices A and C in the next chapter. The singular values are plotted with the MATLAB function semilogy. Based on the results plotted in Figure 2-4, we choose to estimate a model of order between 9 and 16.
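A minimal sketch of these preliminary steps for a single data set is given below; it assumes the course toolbox function dordpo is on the MATLAB path and that u and y hold the input and output samples row-wise (N x 2 and N x 4).

% Sketch: detrend, split into identification/validation parts, compress data
N      = size(u, 1);
Nid    = floor(2*N/3);
ud     = detrend(u(1:Nid, :));        % identification part (first 2/3)
yd     = detrend(y(1:Nid, :));
ud_val = detrend(u(Nid+1:end, :));    % validation part (remaining 1/3)
yd_val = detrend(y(Nid+1:end, :));

s = 30;                               % block size, larger than the expected order
[S, R] = dordpo(ud, yd, s);           % PO-MOESP data compression
semilogy(S, '*'); grid on;            % inspect singular values for a model order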

Chapter 3 Model Estimation, Validation and Optimization

In the previous chapter, the datasets have been analyzed and pre-processed. Three datasets remain (data set 1, data set 2 and data set 7) which are usable and should provide enough information to construct an accurate model of the system. This chapter covers the estimation and validation of the model; once this is complete, the obtained results are optimized.

3-1 Hankel Matrix Test

In order to verify the richness of the input, mathematically called persistency of excitation, the Hankel matrices of the inputs of every data set need to be checked. Concretely, for every available data set the Hankel matrix of each input is checked to have full row rank s, where s = 3n and n is the desired system order. The Hankel matrix has the form [2]:
U_{0,s,N} = \begin{bmatrix}
u(0) & u(1) & \cdots & u(N-1) \\
u(1) & u(2) & \cdots & u(N) \\
\vdots & \vdots & \ddots & \vdots \\
u(s-1) & u(s) & \cdots & u(N+s-2)
\end{bmatrix}    (3-1)

It is computed using the MATLAB function hankel. Running the Hankel matrix test for a desired model of order 9 (n = 9, s = 27) and of order 16 (n = 16, s = 48), we obtain the ranks of the Hankel matrices presented in Table 3-1. From these results it is clear that the inputs used in data set 2 are not persistently exciting of order s for s ∈ [27, 48] (i.e. n ∈ [9, 16]). Therefore, it is safe to conclude that data set 2 is of no use in estimating a system of the desired order.

Table 3-1: Hankel matrix rank for different desired model orders

Data set    Input 1, n = 9 (s = 27)    Input 1, n = 16 (s = 48)    Input 2, n = 9 (s = 27)    Input 2, n = 16 (s = 48)
1           27                         48                          27                         48
2           14                         14                          14                         14
7           27                         48                          27                         48
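The rank test in Table 3-1 boils down to building one Hankel matrix per input channel and checking its rank; a minimal sketch for a single input channel is given below (ud is assumed to be the detrended N x 2 input of one data set).

% Sketch: persistency-of-excitation check via the rank of a Hankel matrix
n = 9;                                   % desired model order
s = 3*n;                                 % Hankel block size
N = size(ud, 1);
U1 = hankel(ud(1:s, 1), ud(s:N, 1));     % s x (N-s+1) Hankel matrix of input 1
if rank(U1) == s
    disp('Input 1 is persistently exciting of order s');
else
    disp('Input 1 is NOT persistently exciting of order s');
end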

3-2 Estimation of State-Space Matrices

As described in Section 2-3, after estimating the order of the system, a model of the system may be created by constructing the state-space matrices. First, the system matrices A and C as well as the Kalman gain K are estimated using the toolbox function dmodpo; these estimates describe the system in innovation form. For this, the previously obtained data matrix R is used. Subsequently, the toolbox function dac2bd is used to estimate B and D.
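A minimal sketch of this step, using the calling syntax of the toolbox functions as in the appendix script (R is the compressed data matrix returned by dordpo, and ud, yd are the detrended identification data):

% Sketch: estimate an order-n innovation model from the compressed data matrix R
n = 9;
[Ae, Ce, Ke] = dmodpo(R, n);              % A, C and Kalman gain K
[Be, De]     = dac2bd(Ae, Ce, ud, yd);    % B and D via linear least squares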

3-3 Model Validation

Considering that an estimated model (Ae, Be, Ce, De) is available, the next step is to validate it by a more detailed inspection. For this we use the tests available within the class of residual tests, namely the auto-correlation and cross-correlation tests. These validation methods are applied to innovation models, for which an optimal model means that the variance of the one-step-ahead prediction error is minimal. For a parameter estimate θ̂, the one-step-ahead prediction error ε(k, θ̂) can be computed via the Kalman-filter equations [2, p. 387]. If the system under consideration can be represented by a model in innovation form, the prediction-error sequence for θ̂ equal to the global optimum, calculated using data from an open-loop experiment, should satisfy the following:

- The sequence ε(k, θ̂) is a zero-mean white-noise sequence.
- The sequence ε(k, θ̂) is independent of the input sequence u(k).

In order to obtain the vector ε(k, θ̂) we use the toolbox function dltisim. We supply the innovation state-space model obtained previously and obtain the residual vector as the difference between the actually measured output y(k) and the innovation model output ŷ(k, θ̂).
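Since dltisim only simulates an LTI system, the residual can be obtained by rewriting the innovation model as a one-step-ahead predictor driven by [u(k); y(k)]. The sketch below follows that construction; Ae, Be, Ce, De, Ke denote the estimated innovation model from the previous step and ud_val, yd_val the validation data.

% Sketch: one-step-ahead predictor and residual of the innovation model
%   xhat(k+1) = (A - K*C) xhat(k) + (B - K*D) u(k) + K y(k)
%   yhat(k)   = C xhat(k) + D u(k)
Ap = Ae - Ke*Ce;
Bp = [Be - Ke*De, Ke];
yhat    = dltisim(Ap, Bp, Ce, [De, zeros(4, 4)], [ud_val, yd_val]);
epsilon = yd_val - yhat;          % residual epsilon(k) = y(k) - yhat(k)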

3-4 Auto-Correlation Test

After obtaining the residual vector ε(k, θ̂), we have to check its whiteness. We can do this by computing its auto-correlation function, using the MATLAB Signal Processing Toolbox function xcorr. This function returns the cross-correlation between two signals, and the cross-correlation of ε(k, θ̂) with itself is its auto-correlation.



Figure 3-1: The auto-correlation function for dataset 1. From top to bottom: outputs 1 to 4.

With a maximum lag MAXLAG, xcorr returns a sequence of length 2·MAXLAG + 1, corresponding to the correlation from lag -MAXLAG up to and including lag +MAXLAG. The results have been plotted in Figure 3-1 for dataset 1 and in Figure 3-2 for dataset 7. An order 9 model was estimated first so that the auto-correlation and cross-correlation functions of the models estimated from dataset 1 and dataset 7 could be evaluated; this helps in deciding whether both datasets are suited for system identification.

3-5 Cross-Correlation Test

As previously mentioned, the cross-correlation test is a residual test that can be used on innovation state-space models. Its role is to check the second property of the residual vector specified in the previous section: the residual vector has to be independent of the input signal u(k). We use the same residual vector computed before; the only difference is that we now compute the cross-correlation between the residual and the input signal, using the same MATLAB function xcorr and the same value for the lag as before. The results have been plotted in Figure 3-3 for dataset 1 and in Figure 3-4 for dataset 7. Based on these, it can clearly be seen that the residual sequence ε(k, θ̂) of the model estimated using dataset 7 is not a zero-mean white-noise sequence, and its cross-correlation with the input is higher than that of the residual vector obtained from the model estimated using dataset 1. Thus, dataset 7 does not yield a good model estimate and is therefore discarded.
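Both residual tests reduce to calls to xcorr; a minimal sketch for one output channel and one input channel, with epsilon the residual computed as above and ud_val the corresponding input data, could look as follows:

% Sketch: whiteness (auto-correlation) and input-independence (cross-correlation) tests
maxlag = 3000;
j = 1;                                               % output channel under test
ac = xcorr(epsilon(:, j), epsilon(:, j), maxlag);    % should peak at lag 0 only
xc = xcorr(epsilon(:, j), ud_val(:, 1), maxlag);     % should stay close to zero
subplot(2, 1, 1); plot(-maxlag:maxlag, ac); grid on; title('auto-correlation');
subplot(2, 1, 2); plot(-maxlag:maxlag, xc); grid on; title('cross-correlation with input 1');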



Figure 3-2: The auto-correlation function for dataset 7. From top to bottom: outputs 1 to 4.


Figure 3-3: The cross-correlation function for dataset 1. From top to bottom: outputs 1 to 4.



Figure 3-4: The cross-correlation function for dataset 7. From top to bottom: outputs 1 to 4.

3-6 Cross-Validation Test

It may happen that we cannot know for sure whether the model adequately describes the system dynamics. This may be due to overfitting, which means that the model complexity (the number of model parameters) has become so large with respect to the length of the data sequence that the predicted output matches the identification data very accurately but does not generalize. To prevent this, the measured data is split into two parts, as touched upon in Section 2-3: an identification part and a validation part. The model is estimated on the identification part (the first 2/3 of all samples), after which the validation tests are applied to the validation part (the remaining 1/3 of all samples). This procedure has been applied to all datasets before starting any other procedure. The auto- and cross-correlation plots presented above are obtained using the validation part of the given data.

3-7 Variance Accounted For

The variance accounted for (VAF) is a scaled variant of the cost function J_N(θ), defined in formula (4.4) of [1, p. 78]; for each output channel i it amounts to

VAF_i = max( 0, 1 - var(y_i - ŷ_i) / var(y_i) ) · 100%.

It can easily be computed in MATLAB with the toolbox function vaf for all components of the output signal; it requires the measured output y(k) (only the validation part) and the predicted output ŷ(k). The values obtained lie between 0% and 100%; the higher the VAF, the lower the prediction error and the better the model obtained through identification. In order to decide on the order of the most accurate model, the system has been estimated for eight different orders (n = 9, ..., 16), after which the VAF for each of these models has been computed.


Table 3-2: VAF for different estimated model orders

Model order    VAF of Output 1    VAF of Output 2    VAF of Output 3    VAF of Output 4
9              90.4186            91.4660            93.0888            93.7086
10             90.3580            91.5305            93.2522            93.8219
11             90.3500            91.5012            93.2521            93.8259
12             90.2027            91.6933            93.5027            94.0585
13             90.0076            91.7611            93.5970            94.1724
14             89.8875            91.7555            93.6187            94.1894
15             89.7378            91.7518            93.5852            94.1758
16             89.5581            91.5508            93.6436            94.2237

The results are presented in Table 3-2. As can be seen, the estimated model of order 9 already gives very good accuracy, its output VAFs being comparable to those of the higher-order estimates. This is a good reason to use the order 9 model estimate, as it reduces model complexity and, at the same time, the computation time of the subsequent operations, such as the estimate optimization and future controller computations (a system model is typically estimated so that a controller can be designed afterwards).
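The order sweep behind Table 3-2 can be sketched as follows (toolbox functions dordpo, dmodpo, dac2bd, dltisim and vaf are assumed to be on the path; ud, yd and ud_val, yd_val are the identification and validation parts of dataset 1):

% Sketch: VAF on the validation data for model orders 9 to 16
for n = 9:16
    s = 3*n;
    [S, R]       = dordpo(ud, yd, s);
    [Ai, Ci, Ki] = dmodpo(R, n);
    [Bi, Di]     = dac2bd(Ai, Ci, ud, yd);
    % One-step-ahead prediction on the validation data
    yhat = dltisim(Ai - Ki*Ci, [Bi - Ki*Di, Ki], Ci, [Di, zeros(4, 4)], ...
                   [ud_val, yd_val]);
    fprintf('order %2d, VAF per output: ', n);
    fprintf('%8.4f ', vaf(yd_val, yhat));
    fprintf('\n');
end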

3-8 Model Optimization

Starting from the initially estimated matrices (Ai, Bi, Ci, Di, Ki), the model is refined through an optimization algorithm so that better results can be obtained. This is achieved with the provided MATLAB toolbox command doptlti.
The optimization returns the state-space matrices Ao (9 x 9), Bo (9 x 2), Co (4 x 9), Do (4 x 2) and the Kalman gain Ko (9 x 4) of the optimized ninth-order innovation model.

Figure 3-5: The auto-correlation function of the optimized system. From top to bottom: outputs 1 to 4.

The optimized model estimate gives better VAFs: 98.6287, 98.1546, 99.2903 and 99.2927 [%], which is a significant increase in model accuracy. This can also be observed in the auto-correlation and cross-correlation plots presented in Figure 3-5 and Figure 3-6.
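A sketch of this optimization step, using the doptlti call from the appendix script (the initial PO-MOESP estimate (Ai, Bi, Ci, Di, Ki) and the identification and validation data of dataset 1 are assumed to be available):

% Sketch: prediction-error optimization of the initial subspace estimate
[Ao, Bo, Co, Do, ~, Ko] = doptlti(ud, yd, Ai, Bi, Ci, Di, [], Ki);
% Predicted output of the optimized model on the validation data
yhat_opt = dltisim(Ao - Ko*Co, [Bo - Ko*Do, Ko], Co, [Do, zeros(4, 4)], ...
                   [ud_val, yd_val]);
vaf_opt = vaf(yd_val, yhat_opt)   % compare with the VAFs of the initial model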



Figure 3-6: The cross-correlation function of the optimized system. From top to bottom: outputs 1 to 4.


Chapter 4 Conclusion

The task at hand was to identify the real-life system presented in the introduction. Through data pre-processing, model estimation and model validation, the most suitable data set has been selected and an accurate model has been identified and optimized from it.

First, the data sets were pre-processed, after which four out of seven datasets were discarded for various reasons. Dataset 3 was rejected because its outputs are saturated. A simple analysis of the remaining datasets in terms of running time and sampling frequency led to discarding datasets 5 and 6, whose running times of only 0.75 s and 0.5 s respectively are too short to capture the slower dynamics of the system. Subsequently, the remaining four datasets were analyzed using spectrum analysis. The power spectra showed a rather low power and low output amplitude for data set 4, indicating that this data set may be highly sensitive to noise and is therefore not as reliable as the rest. With three datasets remaining, the final pre-processing consisted of detrending and compressing the data, after which the order was estimated through singular value analysis, suggesting an order of around 9 to 16. Given this, a Hankel matrix test showed that the inputs used in data set 2 are not persistently exciting of sufficient order, so it was also discarded.

Next, the model's state-space matrices were estimated and validated, and auto- and cross-correlation tests were performed for datasets 1 and 7. These showed that for dataset 7 the residual sequence of the estimated model is not a zero-mean white-noise sequence and its cross-correlation with the input is higher than that obtained from dataset 1. As such, the seventh dataset was also discarded, leaving only dataset 1. Finally, the model was optimized to improve its accuracy, as measured by the VAF values: for the first output an increase of about 8% was found and for the second output an increase of about 6%.


Appendix A Identification Matlab Script

As explained in Chapter 2, the datasets have been pre-processed using the following MATLAB script. Upon its completion, the results have been used as grounds for rejecting several data sets.

clc; clear all; close all;
load('datasets.mat');

% data3 - clipped (outputs saturated)
% data6 - short experiment
% data5 - not enough information
% data2 - not persistently exciting
% data4 - low power (spectrum), low amplitude (more sensitive to noise
%         disturbances), not as dense
% data7 - correlation is not good

u = []; y = []; t = [];
u_val = []; y_val = []; t_val = [];

% Split data for cross-validation
for i = 1:7
    data = eval(sprintf('data%d', i));
    N = length(data);
    % Get identification data (first 2/3 of the samples), samples as rows
    u{i} = data(6:7, 1:floor(2*N/3))';
    y{i} = data(2:5, 1:floor(2*N/3))';


    t{i} = data(1, 1:floor(2*N/3))';
    % Get evaluation data (remaining 1/3 of the samples)
    u_val{i} = data(6:7, floor(2*N/3)+1:N)';
    y_val{i} = data(2:5, floor(2*N/3)+1:N)';
    t_val{i} = data(1, floor(2*N/3)+1:N)';
end

%% Plot data
% Input
figure('Position', [400 150 820 425]);
for i = 1:7
    subplot(7, 2, 2*i-1); plot(t{i}, u{i}(:, 1)); grid on;
    subplot(7, 2, 2*i);   plot(t{i}, u{i}(:, 2)); grid on;
end
% Output
figure('Position', [400 150 820 425]);
for i = 1:7
    subplot(7, 4, 4*i-3); plot(t{i}, y{i}(:, 1)); grid on;
    subplot(7, 4, 4*i-2); plot(t{i}, y{i}(:, 2)); grid on;
    subplot(7, 4, 4*i-1); plot(t{i}, y{i}(:, 3)); grid on;
    subplot(7, 4, 4*i);   plot(t{i}, y{i}(:, 4)); grid on;
end

%% Eliminate data3, data5 and data6
u{3} = []; u{5} = []; u{6} = [];
y{3} = []; y{5} = []; y{6} = [];
u_val{3} = []; u_val{5} = []; u_val{6} = [];
y_val{3} = []; y_val{5} = []; y_val{6} = [];
u = u(~cellfun(@isempty, u));
y = y(~cellfun(@isempty, y));


u_val = u_val(~cellfun(@isempty, u_val));
y_val = y_val(~cellfun(@isempty, y_val));

%% Obtain signal spectrum
figure('Position', [400 150 820 425]);
for i = 1:4
    subplot(1, 4, i);
    pwelch(y{i});
end

%% Eliminate data4 (index 3 of the remaining sets {1, 2, 4, 7})
u{3} = []; y{3} = [];
u_val{3} = []; y_val{3} = [];
u = u(~cellfun(@isempty, u));
y = y(~cellfun(@isempty, y));
u_val = u_val(~cellfun(@isempty, u_val));
y_val = y_val(~cellfun(@isempty, y_val));

%% Detrend data
ud = []; yd = []; ud_val = []; yd_val = [];
for i = 1:3
    ud{i} = detrend(u{i});
    yd{i} = detrend(y{i});
    ud_val{i} = detrend(u_val{i});
    yd_val{i} = detrend(y_val{i});
end

%% Check singular values to obtain an estimate of the system order
s = 30;
R = [];
figure('Position', [400 150 820 425]);
for i = 1:3
    [S, R{i}] = dordpo(ud{i}, yd{i}, s);
    subplot(1, 4, i);
    semilogy(S, '*'); grid on;
end

%% Obtain Hankel matrices of the inputs (check if U has full rank s -> input is
%  persistently exciting)


n = 9;
s = 3*n;
r = [];
for i = 1:3
    N = length(ud{i});
    U1 = hankel(ud{i}(1:s, 1), ud{i}(s:N, 1));
    U2 = hankel(ud{i}(1:s, 2), ud{i}(s:N, 2));
    r(i, 1) = rank(U1);
    r(i, 2) = rank(U2);
    if r(i, 1) ~= s
        disp(sprintf('Data set %d is not good', i));
    end
    if r(i, 2) ~= s
        disp(sprintf('Data set %d is not good', i));
    end
end
r

%% Eliminate data2
ud{2} = []; yd{2} = [];
ud_val{2} = []; yd_val{2} = [];
ud = ud(~cellfun(@isempty, ud));
yd = yd(~cellfun(@isempty, yd));
ud_val = ud_val(~cellfun(@isempty, ud_val));
yd_val = yd_val(~cellfun(@isempty, yd_val));

%% PO-MOESP Subspace Identification
yi = [];
for i = 1:2
    [S, R] = dordpo(ud{i}, yd{i}, s);
    [Ai, Ci, Ki] = dmodpo(R, n);
    [Bi, Di] = dac2bd(Ai, Ci, ud{i}, yd{i});

    % Residual filter of the innovation model: epsilon(k) = y(k) - yhat(k)
    Ae = Ai - Ki*Ci;
    Be = [Bi - Ki*Di, Ki];
    Ce = -Ci;
    De = [-Di, eye(4, 4)];

    % Obtain residual vector
    epsilon{i} = dltisim(Ae, Be, Ce, De, [ud{i} yd{i}]);
end


%% Obtain the auto-correlation plots
maxlag = 3000;
for i = 1:2
    figure('Position', [400 150 820 425]);
    for j = 1:4
        aci = xcorr(epsilon{i}(:, j), epsilon{i}(:, j), maxlag);
        subplot(4, 1, j);
        plot(-maxlag:maxlag, aci); grid on;
    end
end

%% Obtain the cross-correlation plots
for i = 1:2
    figure('Position', [400 150 820 425]);
    for j = 1:4
        xci = xcorr(epsilon{i}(:, j), ud{i}(:, 1), maxlag);
        subplot(2, 4, 2*j-1);
        plot(-maxlag:maxlag, xci); grid on;
        xci = xcorr(epsilon{i}(:, j), ud{i}(:, 2), maxlag);
        subplot(2, 4, 2*j);
        plot(-maxlag:maxlag, xci); grid on;
    end
end

%% Eliminate data7
ud{2} = []; yd{2} = [];
ud_val{2} = []; yd_val{2} = [];
ud = ud(~cellfun(@isempty, ud));
yd = yd(~cellfun(@isempty, yd));
ud_val = ud_val(~cellfun(@isempty, ud_val));
yd_val = yd_val(~cellfun(@isempty, yd_val));

%% Obtain VAFs
n = 9:16;
for i = 1:length(n)
    s = 3*n(i);
    [S, R] = dordpo(ud{1}, yd{1}, s);
    [Ai, Ci, Ki] = dmodpo(R, n(i));


    [Bi, Di] = dac2bd(Ai, Ci, ud{1}, yd{1});

    % One-step-ahead predictor of the innovation model
    Ae = Ai - Ki*Ci;
    Be = [Bi - Ki*Di, Ki];
    Ce = Ci;
    De = [Di, zeros(4, 4)];

    % Simulate the identified system (predicted output on the validation data)
    yi{i} = dltisim(Ae, Be, Ce, De, [ud_val{1} yd_val{1}]);

    % Calculate the VAF
    disp('VAF for order');
    n(i)
    vaf_po = vaf(yd_val{1}, yi{i})
end

%% Run optimization for the model of order 9
n = 9;
s = 3*n;
[S, R] = dordpo(ud{1}, yd{1}, s);
[Ai, Ci, Ki] = dmodpo(R, n);
[Bi, Di] = dac2bd(Ai, Ci, ud{1}, yd{1});
[Ao, Bo, Co, Do, ~, Ko] = doptlti(ud{1}, yd{1}, Ai, Bi, Ci, Di, [], Ki);

Ae = Ai - Ki*Ci;
Be = [Bi - Ki*Di, Ki];
Ce = Ci;
De = [Di, zeros(4, 4)];
% Simulate the identified (non-optimized) system
yic = dltisim(Ae, Be, Ce, De, [ud_val{1} yd_val{1}]);

Aeo = Ao - Ko*Co;
Beo = [Bo - Ko*Do, Ko];
Ceo = Co;
Deo = [Do, zeros(4, 4)];
% Simulate the optimized system
yoc = dltisim(Aeo, Beo, Ceo, Deo, [ud_val{1} yd_val{1}]);

% Calculate the VAF
disp('VAF non-optimized');
vaf_po = vaf(yd_val{1}, yic)
disp('VAF optimized');
vaf_po = vaf(yd_val{1}, yoc)

%% Obtain auto- and cross-correlation for the optimized system
Ae = Ao - Ko*Co;
Be = [Bo - Ko*Do, Ko];
Ce = -Co;
De = [-Do, eye(4, 4)];

% Obtain residual vector
epsilon = dltisim(Ae, Be, Ce, De, [ud{1} yd{1}]);


figure('Position', [400 150 820 425]);
for j = 1:4
    aci = xcorr(epsilon(:, j), epsilon(:, j), maxlag);
    subplot(4, 1, j);
    plot(-maxlag:maxlag, aci); grid on;
end

figure('Position', [400 150 820 425]);
for j = 1:4
    xci = xcorr(epsilon(:, j), ud{1}(:, 1), maxlag);
    subplot(2, 4, 2*j-1);
    plot(-maxlag:maxlag, xci); grid on;
    xci = xcorr(epsilon(:, j), ud{1}(:, 2), maxlag);
    subplot(2, 4, 2*j);
    plot(-maxlag:maxlag, xci); grid on;
end


Bibliography

[1] M. Verhaegen, V. Verdult, and N. Bergboer, Filtering and System Identification: An Introduction to Using Matlab Software. Delft University of Technology, August 2007.

[2] M. Verhaegen and V. Verdult, Filtering and System Identification: A Least Squares Approach. Cambridge University Press, 2007.

