
Accepted Manuscript

Hardware implementation of an artificial neural network model to predict the energy production of a photovoltaic system

Dário Baptista, Sandy Abreu, Carlos Travieso-González, Fernando Morgado-Dias
PII: S0141-9331(16)30326-X
DOI: 10.1016/j.micpro.2016.11.003
Reference: MICPRO 2473

To appear in: Microprocessors and Microsystems

Received date: 26 February 2016
Revised date: 9 September 2016
Accepted date: 4 November 2016

Please cite this article as: Dário Baptista, Sandy Abreu, Carlos Travieso-González, Fernando Morgado-Dias, Hardware implementation of an artificial neural network model to predict the energy production of a photovoltaic system, Microprocessors and Microsystems (2016), doi: 10.1016/j.micpro.2016.11.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.


Hardware implementation of an artificial neural network model to predict the energy production of a photovoltaic system
Dário Baptista 1,a, Sandy Abreu a, Carlos Travieso-González c, Fernando Morgado-Dias a,b

a Madeira Interactive Technologies Institute, Funchal, Portugal
b University of Madeira, Funchal, Portugal
c University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain

Abstract


An artificial neural network trained using only solar radiation data presents a good solution for predicting, in real time, the power produced by a photovoltaic system. Even though the neural network can run on a Personal Computer, it is expensive to have a control room with a Personal Computer for small photovoltaic installations. An FPGA running the neural network in hardware is faster and less expensive. In this work, to assist the hardware implementation of an artificial neural network on an FPGA, a specific tool was used: an Automatic General Purpose Neural Hardware Generator. This tool provides an automatic configuration system that enables the user to configure the artificial neural network, releasing the user from the details of the physical implementation. The results show that it is possible to accurately model the photovoltaic installation based on data from a nearby meteorological installation and that the hardware implementation produces low-cost and precise results.

Key words: Hardware implementation, photovoltaic system, artificial neural network

1. Introduction

The capacity to predict the energy production of a photovoltaic (PV) system is relevant from an economic point of view and for controlling the stability of the electrical grid.
The payback calculation is the prediction of the expected revenue, which is related to the production of the PV system, so its accurate prediction is very important. The prediction of PV production can also be useful for monitoring and evaluating the PV system and detecting eventual faults. It is therefore important to have an easy way to calculate, with accuracy, the production capacity of the PV system.
In this paper, a methodology is presented to implement an artificial neural network (ANN) in hardware to predict the energy production of the PV system. This ANN predicts, in real time, the power produced by the PV system from irradiance data.
In this work, an artificial neural network trained using only solar radiation data from a nearby meteorological station presents a good solution for predicting, in real time, the power produced by the photovoltaic system. Even though an ANN can run on a Personal Computer (PC), it is expensive to have a control room with a PC for a small photovoltaic installation. An FPGA running the ANN will be faster and less expensive (Angepat, Chiou, Chung, & Hoe, 2014) (Yin, Wang, & Guo, 2004) (Sartin & Silva, 2014) (Omondi & Rajapakse, 2006).
To assist the hardware implementation of the artificial neural network on an FPGA, a specific tool was used: ANGE (Baptista & Morgado-Dias, 2015). This tool provides an automatic configuration system that enables the user to configure the ANN, releasing the user from the details of the physical implementation.
1 Corresponding author. Tel.: +351 967378952.
E-mail address: fdariobaptista@gmail.com (D. Baptista).

This work has an impact on two different areas: renewable energy and hardware for artificial neural networks. On the one hand, the accuracy of the power prediction for the photovoltaic system will increase. On the other hand, the hardware implementation of the ANN will be much simpler because the ANGE tool allows for a simple and fast implementation.
2. Artificial Neural Network


The ANN that has been employed is a Multi-Layer Perceptron (MLP) topology with three layers: an input layer, a hidden layer and an output layer. Each of these layers, with the exception of the input layer, contains elements which process the information as described by the following equation (Morgado-Dias, Antunes, Vieira, & Mota, 2006):

$$y = f\left(\sum_{i=1}^{n} W_i\, x_i\right)$$

where f is the activation function, n is the number of inputs, x_i is the i-th input and W_i is the corresponding weight.
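For illustration only, the neuron equation above can be written as a few lines of Python; this is a sketch with hypothetical input and weight values, not the code used in this work:

```python
import numpy as np

def neuron_output(x, w, activation=np.tanh):
    """Single MLP neuron: y = f(sum_i W_i * x_i)."""
    return activation(np.dot(w, x))

# Hypothetical example with n = 3 inputs.
x = np.array([0.2, -0.5, 1.0])   # inputs x_i
w = np.array([0.4, 0.1, -0.3])   # weights W_i
y = neuron_output(x, w)          # hidden neurons use the hyperbolic tangent
```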
To design an MLP for a specific problem it is necessary to know the number of artificial neurons that each layer should have. The number of neurons in the input layer, n_in, and in the output layer, n_out, will always be defined by the problem being modeled. The main difficulty is to know how many neurons should be placed in the hidden layer, n_h, without unnecessarily increasing the complexity. There is no exact solution to this problem, but there are approaches that attempt to answer it (Sjöberg & Ljung, 1992). These approaches are based on the equilibrium between the convergence and the generalization of the ANN (Sjöberg & Ljung, 1992).
Kolmogorov's Mapping Neural Network Existence Theorem is based on the interpretation of Kolmogorov's superposition theorem of continuous functions as an ANN (Ciuca & Ware, 1997). The application of this theorem consists of n_in neurons in the input layer and 2 n_in + 1 neurons in the hidden layer (Gupta, Jin, & Homma, 2004). The ANNs implemented in this work have only one input variable (n_in = 1). Although the theorem provides an answer for n_h = 2 n_in + 1 = 3 only, it can be checked, in an empirical way, whether it is possible to find the best ANN using this theorem for this problem.
The Rule of Thumb is another approach, based on the direct influence of the size of the training sample on the ANN's performance. The number of training samples (N_train) required to classify the test data with 90% accuracy (i.e. 10% error) should be 10 times larger than the number of weights, i.e. N_train ≈ 10 N_w (Principe, Euliano, & Lefebvre, 2000). If it can be assumed that the number of weights is given by N_w = (n_in + 1) n_h + (n_h + 1) n_out, it is possible to determine that:

$$n_h \approx \frac{N_{train}/10 - n_{out}}{n_{in} + n_{out} + 1}$$
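As an aside, both heuristics reduce to a short calculation. The sketch below assumes the weight count of a single-hidden-layer MLP with bias terms, as stated above; the training-set size passed to the Rule of Thumb is a placeholder, not the number of samples used in this work:

```python
def kolmogorov_hidden(n_in):
    """Kolmogorov's Mapping NN Existence Theorem: 2*n_in + 1 hidden neurons."""
    return 2 * n_in + 1

def rule_of_thumb_hidden(n_train, n_in, n_out):
    """Rule of Thumb: N_train ~ 10*N_w, with N_w = (n_in+1)*n_h + (n_h+1)*n_out."""
    return (n_train / 10.0 - n_out) / (n_in + n_out + 1)

print(kolmogorov_hidden(n_in=1))                           # 3 hidden neurons
print(rule_of_thumb_hidden(n_train=200, n_in=1, n_out=1))  # placeholder N_train
```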
Data must also be collected to exemplify the operation of the PV system and to train the neural network models. The models are then tested against real data to verify the accuracy of the prediction. For that, the dataset was divided into three sets: the training dataset, the validation dataset and the test dataset: 70% of the data of each month for training, the next 15% of the data of each month for validation and the final 15% of the data of each month for testing. This method allows enough information to be captured in each month to develop an ANN with a performance level good enough to be used for predictions at any time of year. The training algorithm used in this work is the Levenberg-Marquardt. During training, the Mean Square Error (MSE) is a monotonically decreasing function on the training sequence. However, the behavior of the validation sequence is different. In the first epochs the MSE decreases, corresponding to a phase where the network is learning the main features of the system. At a certain point the MSE begins to grow, corresponding to a greater influence of the variance error (Sjöberg & Ljung, 1992). At this stage, the network is learning the characteristics of the noise of the training sequence (overtraining). To avoid this problem the equivalent to Early Stopping (Sjöberg & Ljung, 1992) was used, keeping the ANN at the point where it exhibited the best performance.
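A simplified sketch of the month-wise 70/15/15 split and of the early-stopping rule described above is given below; the helper names are hypothetical and the actual training was done with the Levenberg-Marquardt algorithm:

```python
import numpy as np

def split_month(samples, train=0.70, val=0.15):
    """Split one month of (radiation, power) samples into train/validation/test."""
    n = len(samples)
    i_tr, i_va = int(train * n), int((train + val) * n)
    return samples[:i_tr], samples[i_tr:i_va], samples[i_va:]

def best_epoch(val_mse_per_epoch):
    """Early stopping: keep the ANN at the epoch with the lowest validation MSE."""
    return int(np.argmin(val_mse_per_epoch))

# The per-month splits are concatenated to form the yearly training,
# validation and test datasets.
```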
3. Photovoltaic Installation Description


Two types of samples are used in this work, namely solar production samples and solar radiation samples.
The solar production samples were taken from a PV installation located on the Island of Madeira, in Portugal. The PV system consists of 21 Suntech monocrystalline PV modules of 175 Wp each and one SMA Sunny Boy 3800V inverter. The 21 PV modules are separated into 3 strings of 7 PV modules with an orientation of 24 degrees off South, as shown in Figure 1. The installed power of the installation is 3.675 kW. The samples were collected by a certified company in the solar energy area and were taken on site from the Sunny Beam monitoring device that is linked to the SMA Sunny Boy 3800V inverter through Bluetooth. The Sunny Beam monitoring device calculates the daily production and stores the data on the computer linked to it.

Figure 1: Orientation of the PV installation (left side). Photo of the PV system (right side).

The solar radiation samples were acquired by IPMA (https://www.ipma.pt/), which has fourteen meteorological stations throughout the Island of Madeira. In order to determine which of the meteorological stations should be used together with the daily production data extracted on site, the correlation was calculated between the daily production of the PV system and the daily solar radiation of every meteorological station. The meteorological station that presented the highest correlation with the daily production of the PV system was the one located at the Funchal Observatory.
Figure 2 shows the location of the meteorological station in turquoise and the PV system in yellow. The distance between the PV system and the meteorological station is approximately 3.2 km.

Figure 2: Location of the PV system and the meteorological station.


4. Sample Description


To make a decision about which kind of model (parametric or nonparametric) to implement in hardware, it is important to verify whether the data differs from a normal (Gaussian) distribution. Based on normality tests (Coolican, 2009) such as histograms, the probability-probability plot (P-P plot), the Kolmogorov-Smirnov test (K-S test) and the Shapiro-Wilk test (S-W test), it is possible to choose the best classifier for modelling the data. Therefore, it is assumed that:
H0: Both datasets follow a normal distribution.
H1: Both datasets do not follow a normal distribution.
Figure 3 presents the histograms, including the normal (Gaussian) curve, of the solar radiation in Funchal (on the left) and the power generated by the PV system (on the right). Figures 4 and 5 present the P-P plots of the solar radiation and of the power generated by the PV system. These plot the cumulative probability of a variable against the cumulative probability of a normal distribution. If the data has a normal distribution, the result is a straight diagonal line.

Figure 3. Histograms and normal curve (Gaussian curve) of the solar radiation in Funchal (on the left) and the power generated by the photovoltaic system (on the right).

Figure 4. P-P plot of the solar radiation in Funchal (on the left) and the deviation from the respective expected normal values (on the right).

Figure 5. P-P plot of the power generated by the PV system (on the left) and the deviation from the respective expected normal values (on the right).


Looking at the histograms, the first thing to notice is that the data is not nearly symmetrical. Furthermore, looking at the P-P plots, the data presents values with some deviation from the ideal diagonal and, consequently, its distributions are not normal. However, it can be observed, mainly in Figure 5, that there are some values very close to the ideal diagonal line. To clarify this, the K-S test is used as well as the S-W test.
The S-W statistic, W, is based on the comparison between the regression of the sample order statistics and the theoretical order statistics of a normal sample (Lewis & Orav, 1989) (Shapiro & Wilk, 1965). Once a value for W has been computed, it can be compared with the distribution of the S-W statistic (available in (Shapiro & Wilk, 1965) for small samples) to determine the significance value, p (Lewis & Orav, 1989). For larger samples, p is found using a normalizing transformation of W, as outlined in (Royston, 1982), and then comparing it to the standard normal quantiles (Lewis & Orav, 1989). For that reason, the S-W test has a good performance for sample sizes between 3 and 2000. The K-S statistic, D, provides a means of testing whether a set of observations comes from some completely specified continuous distribution (Lilliefors, 1967). The significance value, p, is determined by comparing the value of D with the critical values for testing normality (Dallal & Wilkinson, 1986). In both tests, if p > 0.05, the data distribution is not significantly different from a normal distribution; if p < 0.05, the data distribution is significantly different from a normal distribution.
Table 1 shows the results of the K-S test and the S-W test for the solar radiation and the power generated by the PV system.

Table 1: K-S test and S-W test results.

Dataset                                  | K-S test: Statistic (D)  Sig (p) | S-W test: Statistic (W)  Sig (p)
Solar Radiation (Funchal/Obs.)           | 0.090                    0.000   | 0.970                    0.000
Power generated by photovoltaic system   | 0.056                    0.074   | 0.977                    0.001

Analyzing the solar radiation data, it can clearly be verified that it presents a non-normal distribution, because the p values are lower than 0.05 for both tests. For the data of the power generated by the PV system, however, the K-S test indicates a normal distribution while the S-W test indicates a non-normal distribution. The K-S test and the S-W test should always be used in conjunction with a visual inspection of the histograms and P-P plots (Marques de Sá, 2003). Therefore, analyzing all the tests, the hypothesis H0 is rejected and, consequently, it is concluded that the best solution is to use a nonparametric model, such as an ANN, for modeling the data.
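For reference, the same decision can be reproduced with standard statistical routines. The sketch below uses SciPy (an assumption; the original analysis used the tests as described above) and hypothetical array names; the K-S p-value computed this way is approximate because it omits the Lilliefors correction for estimated parameters:

```python
import numpy as np
from scipy import stats

def normality_report(x, name):
    """K-S test against a fitted normal and Shapiro-Wilk test."""
    z = (x - np.mean(x)) / np.std(x, ddof=1)   # standardize with sample estimates
    d, p_ks = stats.kstest(z, "norm")          # K-S statistic D and p-value
    w, p_sw = stats.shapiro(x)                 # S-W statistic W and p-value
    print(f"{name}: D={d:.3f} (p={p_ks:.3f}), W={w:.3f} (p={p_sw:.3f})")

# radiation and power stand for the daily sample arrays:
# normality_report(radiation, "Solar radiation")
# normality_report(power, "Power generated by the PV system")
```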
Nevertheless, it is still necessary to know whether the radiation data is enough to model the power generated by the photovoltaic system. To verify this, it is necessary to measure the correlation between the input and the output. Spearman's correlation coefficient is used because the data has a nonparametric distribution (Hauke & Kossowski, 2011) (Bauer, 2007). The Spearman correlation coefficient calculated was high (approx. 0.95). It can be concluded that the solar radiation is enough as an input to develop a nonparametric model, because it contains the information needed to predict, with accuracy, the power generated by the PV system.
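A short sketch of this correlation check, again assuming SciPy and using placeholder data in place of the measured series, is:

```python
import numpy as np
from scipy import stats

# Placeholder series standing in for the aligned daily measurements.
rng = np.random.default_rng(0)
radiation = rng.uniform(0, 8, 365)                    # hypothetical daily radiation
power = 0.45 * radiation + rng.normal(0, 0.2, 365)    # hypothetical daily PV power

rho, p_value = stats.spearmanr(radiation, power)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
# On the real data, rho was approximately 0.95.
```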
5. Methodology for building an ANN in hardware using ANGE
An ANN finds its simplest habitat within the Personal Computer (PC) (Baptista & Morgado-Dias, 2015). For large installations, a control room with a PC is necessary for the prediction of the power generated by the PV system. However, it is expensive to have a control room with a PC for each small PV installation. So, in these cases, it is better to use an ANN implemented in hardware (FPGA), because it is cheaper and faster than using a control room with a PC (Angepat, Chiou, Chung, & Hoe, 2014) (Yin, Wang, & Guo, 2004).
ANGE is an easy bridge between the PC and the hardware implementation of the ANN (Baptista & Morgado-Dias, 2015). This tool presents a simple, user-friendly graphical interface, where the functional modules can be accessed by users with minimal hardware knowledge. The user must select a few options such as the size and precision of the input, hidden layer and output, and the type of activation function (hyperbolic tangent or linear) (Baptista & Morgado-Dias, 2015). In this tool, there are three different designs for implementing the hyperbolic tangent. The first consists of storing its values in Read Only Memory (ROM), the second uses a polynomial interpolator to calculate its values and the third uses a piecewise linear approximation. While the ROM solution uses more memory, the second solution uses more multipliers and the third one more memory, giving the user the freedom to make the selection according to his needs (Baptista & Morgado-Dias, 2015).
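To make the trade-off concrete, the sketch below approximates the hyperbolic tangent in the three ways mentioned: a stored table, a polynomial fit and a piecewise linear fit. The input range, table size and segment counts are illustrative assumptions; the actual fixed-point circuits are those described in (Baptista & Morgado-Dias, 2015):

```python
import numpy as np

x_ref = np.linspace(-4, 4, 5001)                 # assumed input range of the circuit

# 1) ROM: pre-computed table read by nearest-index look-up.
rom = np.tanh(x_ref)
def tanh_rom(x):
    idx = np.clip(np.round((x + 4) / 8 * 5000).astype(int), 0, 5000)
    return rom[idx]

# 2) Polynomial: a third-order fit evaluated with multipliers (the ANGE
#    interpolator splits the range into segments; one segment is shown here).
poly = np.polynomial.Polynomial.fit(x_ref, np.tanh(x_ref), deg=3)
def tanh_poly(x):
    return poly(x)

# 3) Piecewise linear: interpolation between stored breakpoints.
breaks = np.linspace(-4, 4, 65)
def tanh_pwl(x):
    return np.interp(x, breaks, np.tanh(breaks))

x = np.linspace(-4, 4, 1000)
for name, f in [("ROM", tanh_rom), ("polynomial", tanh_poly), ("piecewise", tanh_pwl)]:
    print(name, float(np.max(np.abs(f(x) - np.tanh(x)))))  # worst-case error
```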
The following steps create a Simulink model that is processed by System Generator into HDL code to program the FPGA. ANGE presents a user-friendly interface where the user inserts the structure of the ANN and, after that, automatically produces the code necessary to synthesize the hardware.

Figure 6: ANN development tool.

5.1. Development of an ANN model

To evaluate these two proposals for the number of hidden neurons, ANNs with 2, 3, 4, 5, 6, 7 and 8 hidden neurons (20 of each kind) were developed. In Figures 7, 8 and 9 the location of the best ANN structure according to Kolmogorov's Mapping Neural Network Existence Theorem (n_h = 3) and the Rule of Thumb is marked.
Figure 7 shows the mean of the MSE with the standard deviation and the boxplot for each ANN as a function of the number of neurons on the training dataset. During training, the MSE decreases monotonically on the training sequence, with the exception of the ANNs with 5 and 8 hidden neurons, where there is a slight increase.

Figure 7. Top: the mean of the MSE and error bars (+/- 1 standard deviation (SD)) of the training dataset for each ANN as a function of the number of neurons. Bottom: boxplot of the training dataset for each ANN as a function of the number of neurons.

Figure 8 shows the mean of the MSE with the standard deviation and boxplots for each ANN as a function of the number of neurons using the validation dataset. It can be verified, in the mean of each set of ANNs with the same structure, that the ANNs with 6, 7 and 8 hidden neurons present the best performance.
In more detail, a slight increase in performance (i.e. a slight decrease in MSE) can be seen in the ANNs with 6 hidden neurons. Therefore, analyzing the graphs of the means and keeping the balance between performance and complexity, the best networks lie in the set of ANNs with 6 hidden neurons. Here, it is possible to see the validity of the Rule of Thumb.
However, when analyzing the boxplot of Figure 8, a detail that should be highlighted is a good ANN with 3 hidden neurons, which was difficult to identify in the graphs of the means in Figure 7. The detail lies in the set of ANNs with 3 hidden neurons: there is a network with very good performance compared with the other ANNs of its set. This case corresponds to the lower red cross for 3 neurons and is statistically denominated an outlier, which can be defined as a network that presents a large separation from the other networks of its set. This indicates that there is an ANN which stands out from the others. Here, it is possible to see some validity of Kolmogorov's Theorem, because it is possible to obtain a good network, but the probability of finding it is small. Thus, in general, when looking for ANNs which produce good performance without increasing their complexity, it is possible to confine the search of ANN structures using the two proposed theorems.
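The selection procedure just described (mean MSE per structure plus boxplot outliers) can be summarized programmatically. The sketch below assumes a dictionary mapping each hidden-layer size to the 20 validation MSEs obtained for that structure (hypothetical variable names):

```python
import numpy as np

def summarize(val_mse_by_size):
    """val_mse_by_size: {n_hidden: list of validation MSEs of the 20 trained ANNs}."""
    for n_h, mses in sorted(val_mse_by_size.items()):
        mses = np.asarray(mses)
        q1, q3 = np.percentile(mses, [25, 75])
        low_fence = q1 - 1.5 * (q3 - q1)        # boxplot rule for low outliers
        good_outliers = mses[mses < low_fence]  # unusually good networks
        print(f"{n_h} hidden: mean={mses.mean():.4f}, sd={mses.std(ddof=1):.4f}, "
              f"best={mses.min():.4f}, good outliers={good_outliers.size}")
```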

Figure 8. Top: the mean of the MSE and error bars (+/- 1 SD) of the validation dataset for each ANN as a function of the number of neurons. Bottom: boxplot of the validation dataset for each ANN as a function of the number of neurons.

After training, the test datasets were used to test all the ANNs. As previously mentioned, the test dataset was not used during training or validation, so it provides an "out-of-sample" dataset on which to test the network. The performance values obtained after testing the networks indicate how well the network will do when presented with data from the real world. Figure 9 shows the mean of the MSE with its standard deviation and the boxplot for each ANN as a function of the number of neurons using the test dataset. In the same figure, the locations of the best ANN structures according to Kolmogorov's Theorem and the Rule of Thumb are marked.

Figure 9. Top: the mean of the MSE and error bars (+/- 1 SD) of the test dataset for each ANN as a function of the number of neurons. Bottom: boxplot of the test dataset for each ANN as a function of the number of neurons.


Table 2 shows the best and worst ANN performance according to the number of neurons, as well as the epoch at which each of them achieved the best performance with the validation dataset. Taking into consideration the equilibrium between the complexity (number of hidden neurons) and the accuracy, the best ANN to implement in hardware is the one that has 3 hidden neurons. Analyzing the ANN with the best performance, it is clear that this model presents a reasonable prediction of the power generated by the PV system.

Table 2: The best and worst performance for each ANN as a function of the number of neurons.

N hidden |            Best performance            |            Worst performance
neurons  | Train    Test     Validation   Epoch   | Train    Test     Validation   Epoch
   2     | 0.0786   0.1116   0.0959       8       | 0.0798   0.1191   0.0985       7
   3     | 0.0786   0.1124   0.0935       28      | 0.0815   0.1191   0.0985       82
   4     | 0.0784   0.1202   0.0956       492     | 0.0776   0.1178   0.0986       174
   5     | 0.0784   0.1215   0.0955       610     | 0.0773   0.1183   0.0994       21
   6     | 0.0761   0.1118   0.0949       9       | 0.0761   0.1149   0.0973       8
   7     | 0.0753   0.1100   0.0947       5       | 0.0797   0.1101   0.0971       11
   8     | 0.0762   0.1135   0.0953       7       | 0.0763   0.1135   0.0978       6

5.2. Construction of the ANN using ANGE and co-simulation block for hardware-in-the-loop verification

In the previous section, it was verified that the best ANN to implement in hardware has 1 input, 3 neurons in the hidden layer with a hyperbolic tangent activation function and 1 neuron in the output layer with a linear activation function. After selecting the configuration and characteristics of the ANN, ANGE automatically generates a Simulink model file, similar to the one represented in Figure 10.
The large blocks represented in this figure are the neurons, the inputs are identified by the nomenclature In and the weights are introduced using the constant blocks. As can be seen, the ANN is implemented using a full set of neurons and has a fully parallel structure.
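For reference, the selected 1-3-1 network computes the expression sketched below, shown in floating point and in a simple fixed-point form to mimic the quantization introduced by the FPGA. The weight, bias and word-length values are placeholders, not the parameters generated by ANGE:

```python
import numpy as np

W1 = np.array([0.8, -1.2, 0.5])   # hidden-layer weights (placeholders)
b1 = np.array([0.1, 0.0, -0.2])   # hidden-layer biases (placeholders)
W2 = np.array([1.5, -0.7, 0.9])   # output-layer weights (placeholders)
b2 = 0.05                         # output bias (placeholder)

def predict(radiation):
    h = np.tanh(W1 * radiation + b1)   # 3 hidden neurons with tanh activation
    return float(np.dot(W2, h) + b2)   # 1 output neuron with linear activation

def quantize(v, frac_bits=12):
    """Round to a fixed-point grid, mimicking a finite FPGA word length."""
    return np.round(np.asarray(v) * 2**frac_bits) / 2**frac_bits

def predict_fixed(radiation):
    h = quantize(np.tanh(quantize(W1 * radiation + b1)))
    return float(quantize(np.dot(W2, h) + b2))
```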

Figure 10. FANN generated by ANGE with the weights loaded.

System Generator supports FPGA hardware-in-the-loop verification for Xilinx FPGA boards. This means that the FPGA implementation can be tested while
still connected to the PC. The HDL Verifier provides co-simulation interfaces,
which are connected to MATLAB and Simulink. To create the block necessary to
program the FPGA to work in co-simulation mode, Hardware Co-Simulation was
selected and then the desired FPGA was chosen. This co-simulation block was
created for the Zynq-7000 EPP 7Z020 ZedBoard Kit.
The generation of the co-simulation block starts with a click on the Generate button. A new model library window opens with the co-simulation block, as can be seen in Figure 11. These blocks can be inserted into a Simulink library and used as Simulink models, as shown in Figure 12, inserting the FPGA in the loop and allowing the simulation to approach the real functioning of the system. Thus, it is possible to run the ANN and analyse the performance and limitations introduced by the FPGA implementation.

Figure 11. Co-simulation block containing the ANN.

Figure 12. An ANN generated by ANGE using a co-simulation block.

6. Results and discussion

In order to determine the limitations introduced by the FPGA, a test comparing the same ANN implemented on the FPGA and in MatLab was performed. For the FPGA, it was also necessary to test the different implementations to compare their performance. The first implementation consists of storing the values of the hyperbolic tangent in a ROM. The second implementation uses third-order polynomials to define the hyperbolic tangent. The third implementation uses piecewise linear sections to define the hyperbolic tangent. In Table 3, the MSE of the software simulation and of the hardware co-simulation can be seen.
Table 3. MSE of the software prediction and hardware prediction (FPGA co-simulation using different hyperbolic tangent implementations).

Implementation                                    MSE
FPGA co-simulation - ROM (5000 values)            0.0880638
FPGA co-simulation - Polynomials                  0.0880493
FPGA co-simulation - Piecewise Linear section     0.0880464
MatLab Simulation                                 0.0880401

Table 4 shows the percentage error between the software simulation and the hardware co-simulation, using the three different implementations of the hyperbolic tangent. To calculate this percentage, the following equation was used:

$$\text{Error}(\%) = \frac{\left| MSE_{FPGA} - MSE_{MatLab} \right|}{MSE_{MatLab}} \times 100$$
Table 4. Error between the software simulation and hardware co-simulation.

FPGA co-simulation using        | Error between MatLab Simulation and FPGA co-simulation
ROM (5000 values)               | 0.026 %
Polynomials                     | 0.010 %
Piecewise Linear section        | 0.007 %

From Table 3 it is possible to verify that the piecewise linear section for the activation function presents the best result for implementing the ANN in hardware (MSE = 0.0880464). Looking at Table 4, it can be concluded that the error between the software simulation and the hardware co-simulation is 0.007%. However, the polynomial solution also presents good performance, with a 0.010% error between the software simulation and the hardware co-simulation. So, the two best solutions for implementing the ANN are the implementations of the hyperbolic tangent with polynomials and with piecewise linear sections, where both present almost the same error. The ROM-based solution presents an MSE equal to 0.0880638 and a 0.026% error between the software simulation and the hardware co-simulation.
Figures 13, 14 and 15 present the response from the co-simulation block. Here, it is possible to compare, value by value, the software and hardware predictions using the three different implementations of the hyperbolic tangent available in ANGE.

Figure 13. Comparison of results between the ANN implemented in software and the ANN implemented in hardware using the ROM. Top: prediction of the power generated by the PV system. Bottom: error between the predictions calculated by the software and the hardware.

Figure 14. Comparison of results between the ANN implemented in software and the ANN implemented in hardware using polynomials. Top: prediction of the power generated by the PV system. Bottom: error between the predictions calculated by the software and the hardware.

Figure 15. Comparison of results between the ANN implemented in software and the ANN implemented in hardware using piecewise linear sections. Top: prediction of the power generated by the PV system. Bottom: error between the predictions calculated by the software and the hardware.

Table 5 shows the cells used to implement the ANN in hardware using the different ways of calculating the hyperbolic tangent. The use of the ROM is the solution which saves the most FPGA resources. However, this solution presents a disadvantage because it results in a higher MSE. It is important to note that this solution uses 5000 values and that each of these values is represented with 32 bits. If one wishes to decrease the MSE, the number of values in the ROM must necessarily be increased. However, it is also important to bear in mind not to exceed the memory and resources of the FPGA. Analysing Table 5, it can be seen that if Digital Signal Processing (DSP) blocks are used in the polynomial and piecewise linear section solutions, other resources can be saved. However, the larger number of parameters to be stored in the piecewise linear approach results in a higher device utilization for this solution.
Table 5: The FPGA cells used for each of the systems described above.

FPGA primitives (table rows): UltraFIFO_1to32_bbox, UltraFIFO_32to1_bbox, BSCANE2, BUFG, BUFGCTRL, CARRY4, DSP48E1, DSP48E1_1, LUT1, LUT2, LUT3, LUT4, LUT5, LUT6, MMCME2_ADV, MUXF7, SRL16E, FDCE, FDE, FDRE, FDSE, IBUFG.

Cell counts per implementation:
ROM (5000 values):        1, 1, 1, 1, 3, 194, 9, 3, 337, 397, 53, 68, 8, 148, 1, 6, 32, 90, 197, 21, 1
Polynomial:               1, 1, 1, 1, 3, 764, 60, 313, 805, 329, 140, 2471, 481, 1, 3, 32, 176, 1
Piecewise Linear section: 1, 1, 1, 1, 1, 11377, 24, 315, 721, 3275, 489, 62276, 3433, 1, 360, 32, 176, 1

7. Conclusions

This work presents an ANN solution to predict the power generated by a photovoltaic system based on solar radiation measurements. Its structure consists of 3 neurons in the hidden layer, whose activation function is a hyperbolic tangent, and 1 neuron in the output layer with a linear activation function.
Two different approaches to the number of neurons that the hidden layer of an ANN should have were investigated: Kolmogorov's Theorem and the Rule of Thumb. With these data samples, it was concluded that the models which are better at making predictions are the ANNs which have between 3 and 5/6 hidden-layer neurons, according to Kolmogorov's Theorem and the Rule of Thumb, respectively. It was empirically verified that the ANN with the best performance had 3 hidden neurons. However, it is only one ANN which stands out against other networks with the same structure; for this reason, the probability of obtaining this result is small.
Given these results, it can be stated that an ANN is a good alternative for predicting the power generated by a PV system. These results are for data relating to a period of one year, and it can be concluded that, given a good prediction of solar radiation, despite being from a meteorological station which is not on site, it is possible to accurately anticipate the power generated by a PV system and, consequently, to adjust the parameters to maximize the renewable energy production.
After finding the ANN with the best performance, it was implemented in hardware (Zynq-7000 EPP 7Z020 ZedBoard Kit). For this purpose a tool called ANGE (Automatic General Purpose Neural Hardware Generator) was used, and this work shows some of the results obtained with it. The first implementation consists of storing the values of the hyperbolic tangent in a ROM. The second implementation uses third-order polynomials to define the hyperbolic tangent. The third implementation consists of using a piecewise linear approach to define the hyperbolic tangent. When comparing the solutions, the ROM holds the highest error (MSE = 0.0880638, corresponding to a 0.026% error), whereas the piecewise linear solution (MSE = 0.0880464, corresponding to a 0.007% error) and the polynomial approach (MSE = 0.0880493, corresponding to a 0.010% error) hold almost the same error.
In a generic way, these results show an accurate prediction of the PV system production, which could be useful in the future for the payback calculation, the stabilization of the electrical grid and the monitoring and evaluation of PV systems.

8. Acknowledgements

Acknowledgments to the Portuguese Foundation for Science and Technology for their support through project PEst-OE/EEI/LA0009/2011.
Acknowledgments to the Funding Program + Conhecimento II: Incentive System to Research and Technological Development and Innovation of Madeira Region II, through the project Vision 3D, MADFDR-01-0190-FEDER-000014.
Acknowledgments to ARDITI - Agência Regional para o Desenvolvimento da Investigação, Tecnologia e Inovação, through the support provided by the FSE - Madeira 14-20.
References
Angepat, H., Chiou, D., Chung, E. S., & Hoe, J. (2014). FPGA-Accelerated
Simulation of Computer Systems. Morgan & Claypool Publishers.
Baptista, D., & Morgado-Dias, F. (2015, August). Automatic general-purpose neural hardware generator. Neural Computing and Applications, 1-12.
doi:10.1007/s00521-015-2034-5
Bauer, L. (2007). Estimação do coeficiente de correlação de Spearman ponderado. Dissertação de Mestrado, Universidade Federal do Rio Grande do Sul, Porto Alegre.

Best, D., & Roberts, D. (1975). Algorithm AS 89: The Upper Tail Probabilities of Spearman's Rho. Journal of the Royal Statistical Society. Series C (Applied Statistics), 24, 377-379.
Ciuca, I., & Ware, J. (1997, April). Layered Neural Networks as Universal
Approximators. Computational Intelligence. Theory and Applications: International Conference, 5, 411-415.
Coolican, H. (2009). Research Methods and Statistics in Psychology. Hodder
& Stoughton.
Dallal, G., & Wilkinson, L. (1986, November). An Analytic Approximation
to the Distribution of Lilliefors's Test Statistic for Normality. The American
Statistician, 40, 294-296. doi:10.2307/2684607
Güner, B., Frankford, M., & Johnson, J. (2009, June). A study of the Shapiro-Wilk test for the detection of pulsed sinusoidal radio frequency interference. IEEE Transactions on Geoscience and Remote Sensing, 47, 1745-1751.
Gupta, M., Jin, L., & Homma, N. (2004). Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory. John Wiley & Sons.
Hauke, J., & Kossowski, T. (2011). Comparison of values of Pearson's and
Spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae.
Haykin, S. (1995). Artificial Neural Networks: A Comprehensive Foundation. New York: Pearson Education.
Hedges, L., & Olkin, I. (2014 ). Statistical Methods for Meta-Analysis. United Kingdom, London: Academic Press.
Heiberger, R., & Holland, B. (2004). Statistical Analysis and Data Display:
An Intermediate Course with Examples in S-Plus, R and SAS. USA: Springer.
Lewis, P., & Orav, E. (1989). Simulation Methodology for Statisticians, Operations Analysts, and Engineers (Vol. 1). California: Wadsworth &
Brooks/Cole.
Lilliefors, H. (1967, June). On the Kolmogorov-Smirnov Test for Normality
with Mean and Variance Unknown. Journal of the American Statistical
Association, 62, 399-402.
Lira, S. (2004). Análise de correlação: abordagem teórica e de construção dos coeficientes com aplicações. Dissertação de Mestrado, Universidade Federal do Paraná, Curitiba.
Maia Silva, R. (2005). Redes Neurais Artificiais aplicadas à Detecção de Intrusão em Redes TCP/IP. Dissertação de Mestrado, Pontifícia Universidade Católica do Rio de Janeiro, Departamento de Engenharia Elétrica da PUC-Rio, Rio de Janeiro.
Marques de Sá, J. (2003). Applied Statistics Using SPSS, STATISTICA and MATLAB. Porto: Springer.
Maxfield, C. (2004). The Design Warrior's Guide to FPGAs. New York,
USA: Elsevier.
Morgado Dias, F., Antunes, A., & Mota, A. (2004). Artificial Neural Networks: a Review of Commercial Hardware. Engineering Applications of Artificial Intelligence, 17/8, 945-952.
Morgado-Dias, F. (2005). Técnicas de controlo não-linear baseadas em Redes Neuronais: do algoritmo à implementação. University of Aveiro, Electronics and Telecommunications Department.
Morgado-Dias, F., Antunes, A., Vieira, J., & Mota, A. (2006). A sliding
window solution for the on-line implementation of the Levenberg-Marquardt
algorithm. Engineering Applications of Artificial Intelligence, 19.
Nangolo, & Musingwini. (2011). Empirical correlation of mineral commodity prices with exchange-traded mining stock prices. Journal of the Southern African Institute of Mining and Metallurgy, 111.
Omondi, A., & Rajapakse, J. (2006). FPGA Implementations of Neural
Networks. Netherlands: Springer.
Pednault, E. (2006). Transform Regression and the Kolmogorov Superposition Theorem. Proceedings of the 2006 SIAM International Conference on Data Mining, 35-46.


Principe, J. C., Euliano, N., & Lefebvre, C. (2000). Neural and Adaptive
Systems: Fundamentals through Simulations. John Wiley & Sons, Inc.
Royston, J. (1982). An Extension of Shapiro and Wilk's W Test for Normality to Large Samples. Journal of the Royal Statistical Society, 31, 115-124.
doi:10.2307/2347973
Sartin, M., & Silva, A. (2014, October). ANN in Hardware with Floating
Point and Activation Function Using Hybrid Methods. Journal of Computers, 9,
2258-2265.
Shapiro, S., & Wilk, M. (1965, December). An analysis of variance test for
normality (Complete Samples). Biometrika, 52, 591-611.
Sjöberg, J., & Ljung, L. (1992). Overtraining, Regularization and Searching for Minimum in Neural Networks. IFAC Symposium on Adaptive Systems in Control and Signal Processing 1992, 669-674.
Wright, S., & Marwala, T. (2007). Artificial Intelligence Techniques for
Steam Generator Modelling. School of Electrical and Information Engineering.
Yin, F., Wang, J., & Guo, C. (2004). FPGA Implementation of Feature Extraction and Neural Network Classifier for Handwritten Digit Recognition. Advances in Neural Networks - ISNN 2004: International Symposium on Neural
Networks (pp. 988-995). Dalian, China: Springer.

Dário Baptista received his Master's degree in Telecommunications and Networks from the University of Madeira, Portugal, in 2009. He has been involved in research projects since 2010 at the Madeira Interactive Technologies Institute and the Centre of Exact Sciences and Engineering of the University of Madeira. Since 2015 he has been enrolled in a PhD program called NETSyS at the Instituto Superior Técnico in Lisbon. His research interests include Automation, Artificial Neural Networks, Field Programmable Gate Array implementations and Renewable Energy.

Sandy Rodrigues received her Master's degree in Telecommunications and Networks from the University of Madeira, Portugal, in 2009. She worked on a project called Smart Solar in 2014/2015 and since 2015 has been enrolled in a PhD program called NETSyS at the Instituto Superior Técnico in Lisbon. Her research interests include Renewable Energy, Wireless Sensor Networks, Automation and Artificial Neural Networks.

Carlos M. Travieso-González received an M.Sc. degree in Telecommunication Engineering in 1997 at the Polytechnic University of Catalonia (UPC), Spain, and a Ph.D. degree in 2002 from the University of Las Palmas de Gran Canaria (ULPGC, Spain). He has been an Associate Professor at ULPGC since 2001, teaching subjects on signal processing and learning theory. He has passed the first of three steps to become a Full Professor. His research lines are biometrics, biomedical signals, data mining, classification systems, signal and image processing, pattern recognition and environmental intelligence. He has researched in more than 45 international and Spanish research projects, some of them as head researcher. He is co-author of 3 books, co-editor of 10 proceedings books, Guest Editor of 5 JCR-ISI international journals and author of 14 book chapters. He has over 320 papers published in international journals and conferences (46 of them indexed in the JCR ISI Web of Science). He has published three patents, and four more are under revision at the Spanish Patent and Trademark Office. He has been supervisor of five PhD theses (with 5 more in progress) and 100 Master's theses. He has been a reviewer for different international journals (<35) and conferences (<60) since 2001. He has been a member of the IASTED Technical Committee on Image Processing since 2007, a member of the IASTED Technical Committee on Artificial Intelligence and Expert Systems since 2011 and a member of the IASTED Technical Committee on Signal Processing since 2014. He is the founder and president of the IEEE-IWOBI conference series and the InnoEducaTIC symposium series. He was General Chair of IEEE-IWOBI 2015, InnoEducaTIC 2014, IEEE-IWOBI 2014, IEEE-INES 2013, NoLISP 2011 and JRBP 2012, and Co-Chair of IEEE-ICCST 2005. He was the Vice-Dean from 2004 to 2010 at the Higher Technical School of Telecommunication Engineers at ULPGC and has been the Vice-Dean in charge of Graduate and Postgraduate Studies since March 2013.

Fernando Morgado-Dias received his Master's degree in Microelectronics from the University Joseph Fourier in Grenoble, France, in 1995 and his PhD from the University of Aveiro, Portugal, in 2005, and is currently Assistant Professor at the University of Madeira and Researcher at the Madeira Interactive Technologies Institute. He was Pro-Rector of the University of Madeira from 2009 to 2013, was the President of the Portuguese Automatic Control Association from January 2013 to December 2014 and is currently its Vice-President. He has published more than 80 research papers and served as a reviewer for many journals and conferences. His research interests include Field Programmable Gate Array implementations, Renewable Energy and Artificial Neural Networks.
