CHAPTER 4
METHODOLOGY
1. Bhakra (Punjab),
4. Hirakud (Orissa),
5. Maithon (Bihar),
9. Guernsey (USA).
2. Annual rainfall.
Using the above details, the inflow into the Vaigai reservoir and the
sediment deposition due to this inflow are calculated as follows.
I(t+1) = R * I(t) + (1 − R) * Ī + (1 − R²)^(1/2) * Z (4.1)
where,
Ī = mean annual inflow of the historic data
        Σ_{i=1}^{N−1} X_i X_{i+1} − (1/(N−1)) (Σ_{i=1}^{N−1} X_i)(Σ_{i=1}^{N−1} X_{i+1})
R = ─────────────────────────────────────────────────────────────────────────────────────
    [Σ_{i=1}^{N−1} X_i² − (1/(N−1))(Σ_{i=1}^{N−1} X_i)²]^(1/2) * [Σ_{i=1}^{N−1} X_{i+1}² − (1/(N−1))(Σ_{i=1}^{N−1} X_{i+1})²]^(1/2)
(4.2)
where,
N = Number of years
(c) The random normal deviate (Z) is found by the Box–Muller
method using rectangularly (uniformly) distributed random numbers,
using the equations.
U1 and U2 are given in Table A 2.1, and the normal random deviates
(Z) are furnished in Table A 2.2.
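The Box–Muller transform described above can be sketched as follows. This is a minimal illustration, not the thesis's own computation; the pair (U1, U2) would in practice be read from Table A 2.1, whereas here hypothetical values are used.

```python
import math

def box_muller(u1, u2):
    """Transform two uniform(0,1) random numbers U1, U2 into two
    independent standard normal deviates (Box-Muller method)."""
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    return z1, z2

# Hypothetical uniform pair; the thesis tabulates U1, U2 in Table A 2.1
z1, z2 = box_muller(0.5, 0.25)
```

Each uniform pair yields two independent normal deviates, so a table of N uniform pairs produces 2N values of Z.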
V_w = I_c / B (4.8)
Where,
Using Equation (4.1), the inflow into the reservoir for future years
was generated. Flows were generated from 2000 to 2150, i.e. 150 years,
and are given in Table 5.2 of Chapter 5.
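The generation step of Equation (4.1) can be sketched as below. This is an illustrative implementation under stated assumptions: the starting inflow, mean inflow, and lag-one correlation R are hypothetical placeholders (not the Vaigai values), and Z is drawn from a standard normal generator rather than the tabulated deviates of Table A 2.2.

```python
import math
import random

def generate_inflows(i0, mean_inflow, r, n_years, seed=42):
    """Generate a synthetic annual inflow series with the lag-one model
    of Equation (4.1):
        I(t+1) = R*I(t) + (1 - R)*Imean + (1 - R**2)**0.5 * Z
    where Z is a standard normal deviate."""
    rng = random.Random(seed)
    flows = [i0]  # start from the last observed annual inflow
    for _ in range(n_years):
        z = rng.gauss(0.0, 1.0)
        nxt = (r * flows[-1]
               + (1.0 - r) * mean_inflow
               + math.sqrt(1.0 - r ** 2) * z)
        flows.append(nxt)
    return flows

# Hypothetical values: generate 150 future years as in the thesis
series = generate_inflows(i0=520.0, mean_inflow=500.0, r=0.3, n_years=150)
```

Because the model is autoregressive, each generated year depends on the previous one through R, while the (1 − R²)^(1/2)·Z term injects the random component.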
4.2.1 Overview
neuron, the summer adds all the scaled values together and the output
function produces the final output of the neuron. Often, one additional input,
known as the bias, is added to the system. If a bias is used, it can be
represented by a weight with a constant input of 1. This description is laid out
visually in Figure 4.1.
[Figure 4.1: a single neuron — inputs I1, I2, I3 with weights W1, W2, W3 and bias B feed the summer x, whose result passes through f(x) to give the output a]
where I1, I2, and I3 are the inputs; W1, W2, and W3 are the weights;
B is the bias; and a is the final output,
where f could be any function. Most often, f is the sign of the argument (i.e.
1 if the argument is positive and −1 if it is negative), linear (i.e. the
output is simply the input times some constant factor), or some complex curve
used in function matching.
a1 = f1(W1 * I + B1) (4.10)
a2 = f2(W2 * I + B2) (4.11)
a3 = f3(W3 * I + B3) (4.12)
a = f(W * I + B) (4.13)
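The single-neuron computation of Equation (4.13) can be sketched as follows; the input, weight, and bias values are hypothetical, and the sign function illustrates the hard-limit output function mentioned above.

```python
def neuron(inputs, weights, bias, f):
    """Single neuron of Equation (4.13): weighted sum of the inputs
    plus the bias, passed through the output function f."""
    x = sum(w * i for w, i in zip(weights, inputs)) + bias
    return f(x)

# Hard-limit output function: sign of the argument
sign = lambda x: 1 if x > 0 else -1

# Hypothetical inputs I1-I3, weights W1-W3, and bias B
a = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2, f=sign)
```

Swapping `f` for a linear or sigmoid function changes only the output mapping; the weighted summation is identical.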
forward to compute the output information signal at the output unit, and a
backward phase, in which modifications to the connection strengths are made
based on the differences between the computed and observed information
signals at the output units (Eberhart and Dobbins 1990).
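The forward and backward phases described above can be sketched for a single sigmoid output unit using the delta rule; this is an illustrative simplification of backpropagation, not the full multilayer algorithm, and the learning rate and initial weights are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_weights(w, b, inputs, target, lr=0.5):
    """One forward/backward cycle for a single sigmoid unit:
    the forward phase computes the output signal, and the backward
    phase adjusts the connection strengths from the difference
    between computed and observed outputs (delta rule)."""
    x = sum(wi * ii for wi, ii in zip(w, inputs)) + b   # forward phase
    out = sigmoid(x)
    delta = (target - out) * out * (1.0 - out)          # output error term
    w = [wi + lr * delta * ii for wi, ii in zip(w, inputs)]  # backward phase
    b = b + lr * delta
    return w, b, out

# Hypothetical weights, bias, input pattern, and target
w, b, out = update_weights([0.1, 0.2], 0.0, inputs=[1.0, 1.0], target=1.0)
```

Repeating the cycle drives the computed output toward the observed target, which is the essence of the training phase.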
In which, P1, P2, …, P7 are the annual rainfall values taken as input
to the model, and the output is the runoff (R). H1, H2, … are the hidden
nodes of the model.
[Figure: rainfall–runoff ANN — inputs P1–P7 connected by weights wij to hidden nodes H1–H4, which are connected by weights wjk to the output R]
There are no fixed rules for developing an ANN model, even though a general
framework can be followed based on previous successful applications in
engineering. In the present study, multilayer perceptron (MLP) ANN model
architectures to estimate the volume of sediment retained in the reservoir
were developed as shown in Figure 4.4 (Jothiprakash et al 2009). Using the
available data of the study area, a trial-and-error approach was employed in
the present analysis to select the appropriate ANN architecture. The number
of input parameters in the ANN was determined on the basis of the parameters
causing and affecting the underlying process which are also easily measurable
at the reservoir site. The number of hidden layers and the number of nodes in
each hidden layer were also determined by a trial-and-error procedure. The
number of nodes in the hidden layer plays a significant role in ANN model
performance. The sigmoid and hyperbolic tangent (tanh) transfer functions
corresponding to a single sediment yield output were used to select the best
ANN architecture. H1, H2, …, H5 are the hidden nodes.
[Figure: inputs I1 (annual rainfall), I2 (annual inflow), I3 (annual capacity) connected to hidden nodes H1–H5 and a single output O1 (volume of sediment)]
Figure 4.4 Neural Network Model Used for Sediment Yield Prediction
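The forward pass of the 3–5–1 architecture in Figure 4.4 can be sketched as below. This is an illustrative sketch only: the weights here are random placeholders, whereas the thesis obtains them by training, and the sigmoid transfer function stands in for either of the two transfer functions compared.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 3-5-1 multilayer perceptron as in Figure 4.4:
    inputs I1-I3 (rainfall, inflow, capacity) feed five hidden nodes
    H1-H5 with sigmoid transfer, giving one sediment output O1."""
    hidden = [sigmoid(sum(w * i for w, i in zip(wrow, inputs)) + b)
              for wrow, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical (untrained) weights; a trained model would fit these
rng = random.Random(0)
w_hidden = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
b_hidden = [rng.uniform(-1, 1) for _ in range(5)]
w_out = [rng.uniform(-1, 1) for _ in range(5)]

# Normalised hypothetical inputs: rainfall, inflow, capacity
o1 = mlp_forward([0.6, 0.4, 0.8], w_hidden, b_hidden, w_out, b_out=0.1)
```

Replacing `sigmoid` with `math.tanh` gives the hyperbolic tangent variant compared in the architecture-selection trials.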
predicted by the model were compared with the observed values. If the
prediction error statistics for these data sets were acceptable, then the neural
network structure was considered to perform well for predicting sediment
yield with different sets of data. The networks were trained with various
available input and output parameters. The performance of the models was
tested through statistical indicators such as the coefficient of correlation
(R²), root-mean-square error (RMSE), and mean absolute error (MAE)
(Srinivasulu and Jain 2006).
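The three performance indicators named above can be sketched as follows; the observed and predicted series shown are hypothetical placeholders, not the study's data.

```python
import math

def r2(obs, pred):
    """Squared coefficient of correlation between observed and predicted."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return (cov / (so * sp)) ** 2

def rmse(obs, pred):
    """Root-mean-square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical observed and predicted sediment volumes
obs = [10.0, 12.0, 9.0, 14.0]
pred = [9.5, 12.5, 9.2, 13.4]
```

A good model shows R² close to 1 together with small RMSE and MAE; RMSE penalises large individual errors more heavily than MAE.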