www.elsevier.com/locate/foodcont
Received 13 January 2006; received in revised form 18 May 2006; accepted 22 May 2006
Abstract
This paper is concerned with optimizing the neural network topology for predicting the moisture content of the grain drying process using a genetic algorithm. A structural modular neural network (SMNN), combining BP neurons and RBF neurons in the hidden layer, was proposed to predict the moisture content of the grain drying process. Inlet air temperature, grain temperature and initial moisture content were taken as the input variables of the neural network. The genetic algorithm is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer of the neural network. The optimized hidden layer comprises 6 BP neurons and 10 RBF neurons. A simulation test on moisture content prediction for the grain drying process showed that the SMNN optimized using the genetic algorithm performed well and that the accuracy of the predicted values is excellent.
2006 Elsevier Ltd. All rights reserved.
Keywords: Grain drying; Predicting; Neural network; Genetic algorithm; Moisture content
doi:10.1016/j.foodcont.2006.05.010
X. Liu et al. / Food Control 18 (2007) 928–933
[…]ture. Farkas, Remenyi, and Biro (2000a, 2000b) set up a NN to model moisture distribution in agricultural fixed-bed dryers. It is clear from the past literature that NNs are good for modelling the drying process.

The selection of an appropriate NN topology to predict the drying process is important in terms of model accuracy and model simplicity. The architecture of a NN greatly influences its performance. Many algorithms for finding the optimized NN structure are derived based on specific data in a specific area of application (Blanco, Delgado, & Pegalajar, 2000; Boozarjomehry & Svrcek, 2001), but predicting the optimal NN topology is a difficult task, since choosing the neural architecture requires some a priori knowledge of grain drying and/or supposes many trial-and-error runs.

In this paper, we present a genetic algorithm capable of obtaining not only the trained optimal topology of a neural network but also the least number of connections necessary for solving the problem. In the following sections, the techniques used in this paper are briefly reviewed, and the design of the NN system for predicting the grain drying process is discussed in detail. A grain drying process is used to demonstrate the effectiveness of the neural network. The final section draws conclusions regarding this study.

2. Materials and methods

2.1. Neural network system

The back-propagation neural network (BPNN) is a multilayer feed-forward network with a back-propagation learning algorithm. The BPNN is characterized by hidden neurons that have a global response. The commonly used transfer function in the BPNN is the sigmoid function

f(s_j) = \frac{1}{1 + \exp(-s_j)}    (1)

where s_j is the weighted sum of the inputs coming to the jth node.

Usually, there is only one hidden layer for BPNNs, as the availability of such a layer is sufficient to produce the set of desired output patterns for all of the training vector pairs.

The radial basis function neural network (RBFNN) belongs to the group of kernel function nets that utilize simple kernel functions as the hidden neurons, distributed in different neighborhoods of the input space, and whose responses are essentially local in nature. The RBF produces a significant nonzero response only when the input falls within a small localized region of the input space. The most common transfer function in an RBFNN is the Gaussian activation function

\phi_k = \exp\left( -\frac{\sum_{i=1}^{n} (x_i - C_{ki})^2}{b_k^2} \right), \quad k = 1, 2, \ldots, q    (2)

where x_i is the ith variable of the input; C_{ki} the center of the kth RBF unit for input variable i; and b_k^2 is the width of the kth RBF unit.

Because of the different response characteristics of the hidden neurons in these two kinds of neural networks, interpolation problems can be solved more efficiently with a BPNN, and extrapolation problems are better dealt with by an RBFNN.

Since the different properties of the BPNN and the RBFNN are complementary, Jiang, Zhao, and Ren (2002) designed a structural modular neural network (SMNN) with a genetic algorithm and showed that the SMNN constructed a better input–output mapping both locally and globally. The SMNN combines the generalization capability of the BPNN and the computational efficiency of the RBFNN in one network structure. Its architecture is shown in Fig. 1, which has three layers: the input layer, which takes in the input data; the hidden layer, which comprises both the sigmoid neurons and the Gaussian neurons; and the output layer, where a linear function is used to combine the BP part and the RBF part.

In this research, we adapt their SMNN for predicting the moisture content of the grain drying process. The numbers of neurons in the input and output layers are given by the numbers of input and output variables in the process. The inputs of the structure can be variables such as inlet moisture content, grain temperatures, and air temperatures, which are easily measurable. The output of the system is the moisture content of the grain.

Fig. 1. Structural modular neural network architecture (an input layer feeding BP hidden nodes and RBF hidden nodes).

2.2. Design structural modular neural network using GA

The network configuration of the SMNN can be transformed into two subset selection problems: one is the number of BP hidden neurons; the other is the distinct terms n_c which are selected from the N data samples as the centers of the RBF hidden neurons.

There are a few types of representation schemes available for decoding the neural network architecture, such as the binary coding and the gray scale. In the present work, the chromosome in the GA's population is divided into two parts. One part is a fixed-length chromosome that contains the number of BP hidden neurons in binary form. The other part is a variable-length chromosome (i.e. real coding) that represents the number and position of the RBF hidden neurons. The centers of the RBF part are randomly selected data points from the training data set, and the center locations proposed here are also restricted to be the data samples. The data sample x_i is labeled with index i (i = 1, 2, …, N), then the RBF neurons can be coded as a […]
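Putting Eqs. (1) and (2) together with the linear output layer of Fig. 1 gives the SMNN forward pass. The sketch below is a minimal illustration, not the authors' code; the function names, weight layout, and per-unit widths b_k^2 passed as a list are our own assumptions:

```python
import math

def sigmoid(s):
    # BP hidden neuron, Eq. (1): f(s_j) = 1 / (1 + exp(-s_j))
    return 1.0 / (1.0 + math.exp(-s))

def gaussian(x, center, width_sq):
    # RBF hidden neuron, Eq. (2): phi_k = exp(-sum_i (x_i - C_ki)^2 / b_k^2)
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / width_sq)

def smnn_forward(x, bp_weights, bp_biases, rbf_centers, rbf_widths_sq,
                 out_weights, out_bias):
    # Hidden layer mixes sigmoid (BP) and Gaussian (RBF) neurons;
    # the output layer combines both parts with a linear function.
    bp_part = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
               for ws, b in zip(bp_weights, bp_biases)]
    rbf_part = [gaussian(x, c, b2)
                for c, b2 in zip(rbf_centers, rbf_widths_sq)]
    hidden = bp_part + rbf_part
    return out_bias + sum(w * h for w, h in zip(out_weights, hidden))
```

With the three inputs used here (inlet air temperature, grain temperature, initial moisture content) and the evolved 6 BP + 10 RBF hidden layer, `bp_weights` would be a 6×3 list, `rbf_centers` ten 3-dimensional training samples, and `out_weights` a 16-element list.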
[…] offspring:

(a) Use roulette wheel selection to produce the reproduction pool.
(b) Apply two-step crossover with given probability to two parent chromosomes in the reproduction pool.
(c) Apply mutation with given probability to each bit of the offspring.
(d) Apply deletion and addition with given probability to the RBF part strings of the offspring, producing the new generation.
(e) Decode each chromosome in the new generation. Train each network and compute the new RMSE values of the training data and the testing data.

[…]flow grain dryer with a height of 26 m, a section area of 16 m², and a solids flow rate from 2.4 to 4.0 m/h (see Fig. 2). The dryer is quadrate in shape, with the air in the drying section flowing through the grain column from the air plenum to the ambient, and in the reverse direction in the cooling section. A grain turn-flow is located midway in the drying column.

The controller of the dryer consists of the temperature sensors, the data acquisition system, and a personal computer. The PC communicates with the sensors and the grain-discharge motor through a data acquisition card. The rpm of the grain-discharge motor is proportional to the 0–5 V input to the driver of the grain-discharge motor.

In order to study the dynamics of grain drying, about 60 h of data were collected while the dryer operated under manual control, with the air flow rate from 0.27 to 0.42 m/s, the surrounding temperature from 27 to 10 °C, and the drying-air temperature from 80 to 125 °C. One data set per hour was chosen, so there are 60 data sets to be used for training and testing the neural network. Figs. 3 and 4 show all the input graphs used for training the NN.

Fig. 3. The grain temperatures (T1–T8) and drying-air temperatures (TU, TM, TL), plotted against time/h.
Fig. 4. The inlet and outlet moisture contents for training and testing the neural network, plotted against time/h.
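The two-part chromosome of Section 2.2 and the reproduction steps (a)–(e) above can be sketched as below. This is a hedged illustration, not the authors' implementation: the paper does not spell out the operators' internals, so the reading of "two-step crossover" as crossing the binary part and the RBF part separately, the per-bit mutation, and all names are our assumptions, with the paper's probabilities (0.5, 0.02, 0.04) as defaults:

```python
import random

BP_BITS = 5     # fixed-length binary part: encodes the BP hidden neuron count
N_SAMPLES = 40  # RBF centers are indices into the 40 training samples

def random_chromosome():
    bp = [random.randint(0, 1) for _ in range(BP_BITS)]
    rbf = random.sample(range(N_SAMPLES), random.randint(2, 20))
    return (bp, rbf)

def decode(chrom):
    # A chromosome decodes to (number of BP neurons, RBF center indices).
    bp, rbf = chrom
    return int("".join(map(str, bp)), 2), sorted(rbf)

def roulette_select(population, fitnesses):
    # Step (a): fitness-proportional (roulette wheel) selection.
    pick = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for chrom, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return chrom
    return population[-1]

def two_step_crossover(parent1, parent2, prob=0.5):
    # Step (b): cross the fixed-length binary part and the variable-length
    # RBF part in two separate steps (our reading of "two-step crossover").
    (bp1, rbf1), (bp2, rbf2) = parent1, parent2
    child_bp1, child_bp2 = bp1[:], bp2[:]
    child_rbf1, child_rbf2 = rbf1[:], rbf2[:]
    if random.random() < prob:
        cut = random.randint(1, BP_BITS - 1)
        child_bp1 = bp1[:cut] + bp2[cut:]
        child_bp2 = bp2[:cut] + bp1[cut:]
    if random.random() < prob:
        c1 = random.randint(1, len(rbf1) - 1)
        c2 = random.randint(1, len(rbf2) - 1)
        child_rbf1 = rbf1[:c1] + rbf2[c2:]  # duplicate indices possible; a
        child_rbf2 = rbf2[:c2] + rbf1[c1:]  # full implementation repairs them
    return (child_bp1, child_rbf1), (child_bp2, child_rbf2)

def mutate(chrom, prob=0.02):
    # Step (c): flip each bit of the binary part with the given probability.
    bp, rbf = chrom
    return ([b ^ 1 if random.random() < prob else b for b in bp], rbf[:])

def delete_add(chrom, prob=0.04):
    # Step (d): randomly delete or add an RBF center with the given probability.
    bp, rbf = chrom
    rbf = rbf[:]
    if random.random() < prob and len(rbf) > 2:
        rbf.pop(random.randrange(len(rbf)))
    if random.random() < prob:
        spare = [i for i in range(N_SAMPLES) if i not in rbf]
        if spare:
            rbf.append(random.choice(spare))
    return (bp[:], rbf)
```

Step (e) would then decode each offspring, train the corresponding SMNN, and score it, e.g. with a fitness inversely related to the testing RMSE.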
[…] string length is 20. The population size is chosen as 20. The probabilities for the crossover and the mutation are 0.5 and 0.02, respectively, and the probability of deletion and addition is taken as 0.04. The above GA parameters are selected after a series of trial-and-error runs. Since the training set contains 40 distinct terms, the search space therefore contains

2^5 \sum_{i=2}^{20} C_{40}^{i} \approx 1.98 \times 10^{13}

different networks. The generation number is set to be 50.

Table 1
MSE of grain drying process prediction

         Number of hidden neurons   MSE of training data   MSE of testing data
SMNN     6 BP, 10 RBF               0.0298                 0.0312
BPNN     22                         0.0304                 0.0368
RBFNN    42                         0.0309                 0.0336
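The search-space count can be checked directly; the following snippet (ours, not from the paper) reproduces the figure:

```python
from math import comb

# 2^5 choices for the 5-bit binary part, times the number of ways to pick
# between 2 and 20 RBF centers out of the 40 distinct training samples.
search_space = 2 ** 5 * sum(comb(40, i) for i in range(2, 21))
print(f"{search_space:.2e}")  # 1.98e+13, matching the value in the text
```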
The evolution of the average and minimum MSE of the testing data is shown in Fig. 5, where the average MSE is the average value of the MSE over the whole set of chromosomes in each generation. […] 0.0312, as shown in Fig. 6. The algorithm automatically searches for the appropriate network size according to the given objective. The best SMNN is at the 32nd generation, which has the least neurons (6 BP neurons and 10 RBF neurons) […] the testing data, and the complexity of the evolved SMNN is significantly reduced compared with the other two networks.

The predicted result from the simulation test on the moisture content prediction of the grain drying process based on the SMNN is shown in Fig. 7. The figure shows that the accuracy of the predicted values is excellent.

Fig. 5. The average and minimum MSE for testing data in each generation.
Fig. 6. [Training results and testing results; MSE plotted per generation.]
Fig. 7. The predicted outlet moisture contents by SMNN, plotted against time/h. Solid line: predicted data; dash line: measured data.

4. Conclusions

As would be expected, there was a fairly strong influence of the NN topologies on the accuracy of the estimation. Therefore, the selection of the most appropriate NN topology was the main issue. In this paper, the SMNN has been proposed, which comprises sigmoid and Gaussian neurons in the hidden layer of the feed-forward neural network. The GA is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer of the SMNN. Since the GA is a global search
[…]tal range. The technological interest of this kind of modeling must be related to the fact that it is elaborated without any preliminary assumptions on the underlying mechanisms. The applications of neural networks and genetic algorithms can be used for the on-line prediction and control of the drying process.

Acknowledgements

This work was elaborated within the project of Precise Drying System of Maize, No. 05EFN217100439, funded by the Ministry of Science and Technology of the People's Republic of China.

References

Blanco, A., Delgado, M., & Pegalajar, M. C. (2000). A genetic algorithm to obtain the optimal recurrent neural network. International Journal of Approximate Reasoning, 23, 67–83.
Boozarjomehry, R. B., & Svrcek, W. Y. (2001). Automatic design of neural network structures. Computers and Chemical Engineering, 25, 1075–1088.
De Baerdemaeker, J., & Hashimoto, Y. (1994). Speaking fruit approach to the intelligent control of the storage system. In Proceedings of the 12th CIGR World Congress on Agricultural Engineering, Vol. 2, Milan, Italy, 29 August–1 September 1994, pp. 1493–1500.
Farkas, I., Remenyi, P., & Biro, A. (2000a). A neural network topology for modelling grain drying. Computers and Electronics in Agriculture, 26, 147–158.
Farkas, I., Remenyi, P., & Biro, A. (2000b). Modelling aspects of grain drying with a neural network. Computers and Electronics in Agriculture, 29, 99–113.
Huang, B., & Mujumdar, A. S. (1993). Use of neural network to predict industrial dryer performance. Drying Technology, 11(3), 525–541.
Jay, S., & Oliver, T. N. (1996). Modelling and control of drying processes using neural networks. In Proceedings of the tenth international drying symposium (IDS'96), Krakow, Poland, 30 July–2 August, Vol. B, pp. 1393–1400.
Jiang, N., Zhao, Z., & Ren, L. (2002). Design of structural modular neural networks with genetic algorithm. Advances in Engineering Software, 34, 17–24.
Kaminski, W., Strumillo, P., & Tomczak, E. (1998). Neurocomputing approaches to modelling of drying process dynamics. Drying Technology, 16(6), 967–992.
Lin, C. T., & Lee, C. S. G. (1995). Neural fuzzy systems. Englewood Cliffs, NJ: Prentice Hall.
Sreekanth, S., Ramaswamy, H. S., & Sablani, S. (1998). Prediction of psychrometric parameters using neural networks. Drying Technology, 16(3–5), 825–837.
Thyagarajan, T., Panda, R. C., Shanmugam, J., Rao, P. G., & Ponnavaikko, M. (1997). Development of ANN model for non-linear drying process. Drying Technology, 15(10), 2527–2540.
Trelea, I. C., Courtois, F., & Trystram, G. (1997). Dynamic models for drying and wet-milling quality degradation of corn using neural networks. Drying Technology, 15(3–4), 1095–1102.