https://doi.org/10.1007/s12652-018-0785-4
ORIGINAL RESEARCH
Abstract
The foreseen increase of IoT devices connected to the Internet is worrying the ICT community because of its impact on network infrastructure when the number of requesters becomes larger and larger. Moreover, the reliability of network connections and real-time constraints can affect the effectiveness of the Cloud Computing paradigm for developing IoT solutions. The Fog paradigm proposes an intermediate layer in the whole IoT architecture that works as a middle ground between the local physical memories and the Cloud. In this paper we define and use a methodology that supports the developer in addressing the Fog Service Placement Problem, which consists of finding the optimal mapping between IoT applications and computational resources. We exploited and extended a Fog Application model from the related work to apply the proposed methodology, in order to investigate the optimal deployment of an IoT application. The case study is an IoT application in the Smart Energy domain. In particular, we extended a software platform, developed and released open source by the CoSSMic European project, with advanced functionalities. The new functionalities provide capabilities for automatic learning of energy profiles and lighten the platform utilization by users, but they introduce new requirements, also in terms of computational resources. Experimental results are presented to demonstrate the usage and the effectiveness of the proposed methodology at deployment stage.
S. Venticinque, A. Amato
with computing, storage, and network connectivity can be a fog node. Examples include industrial controllers, switches, routers, embedded servers, and video surveillance cameras Evans (2015).

By acting at the peripheral level, that is, on the edge of the network, it is possible to manage vast amounts of data without necessarily passing through the Cloud, with two undeniable advantages: on the one hand, it reduces the bandwidth required to reach the Cloud or the corporate data center; on the other hand, one can assume an increase in the level of security, because the infrastructure is more controllable. In fact, processing does not take place in the Cloud but on local smart structures, each capable of providing its own limited computational power to perform simple tasks, while sending to the Cloud only critical and useful data for processing. In this way, latency decreases and bandwidth is saved.

This paper addresses the Fog Service Placement Problem (FSPP), which aims to determine an optimal mapping between IoT applications and computational resources, with the objective of optimizing fog landscape utilization while satisfying the QoS requirements of applications. In the FSPP, we consider a methodology to design the best deployment configuration in the context of the smart energy domain. To account for the QoS requirements of applications, deadlines and throughput of applications must be preserved. The proposed methodology allows us to estimate an upper bound (i.e., a worst-case estimation) of the response time of relevant transactions, which guides the design of test scenarios to be executed on a real testbed.

The main contributions made in this work are briefly described as follows:

a) a new methodology to address the Fog Service Placement Problem has been defined;
b) a Fog Application model from the related work has been extended and used to apply the proposed methodology;
c) an IoT application from the Smart Energy domain has been chosen as a real case study to demonstrate the effectiveness of the methodology;
d) new advanced functionalities have been developed to extend a peer-to-peer platform that implemented the chosen case study in the CoSSMic European project;
e) three different deployment solutions of the platform have been investigated using the proposed methodology, demonstrating the effectiveness of the Fog computing paradigm;
f) experimental results have been described and discussed to validate the methodology and to estimate the processing capability of the extended CoSSMic platform.

The remaining material is organized as follows. Section 2 introduces advantages and limitations of Cloud computing for IoT applications. Section 3 discusses the move from Cloud to Fog. In Sect. 4 the Fog Application model and the methodology to address the Fog Service Placement Problem are described; Section 4 also presents the case study. The application of the proposed methodology and the experimental results are described in Sect. 5. Section 6 reviews the related work, providing an overview of works related to Fog Computing in the Smart Grid. Finally, conclusions are drawn in Sect. 7.

2 Cloud computing for IoT applications and smart grids

The main strengths introduced by Cloud Computing in the IoT sector are high-quality and fast services at the lowest cost to the user. The limited autonomy of IoT devices, due to the small capacity of their batteries, creates an upper limit with regard to the hardware components of the device itself. As the vast majority of IoT devices have significantly limited resources compared to Cloud platforms, the Cloud is considered an essential model to allow mobile customers to perform burdensome operations.

Nevertheless, from the point of view of the Internet of Things, the real limitation of Cloud Computing actually stems from the type of network for which it was designed a decade ago. Given the inevitable convergence of Mobile and Cloud Computing, the focus shifts to the actual conditions of utilization of these models, which are often placed in a hostile operating environment. The additional resource shortage of IoT devices, compared to current smartphones and tablets, is due to their spread even in very specialized domains, where they often require a miniaturization of components that does not allow high performance. For the whole category of connected wearable objects, for sensors, actuators and small IoT gateways, the limited available resources force users to move a quantity of data, and the related computations, to the Cloud, with a frequency and quantity greater than in the case of mid-range devices such as tablets, smartphones and laptops.

The Internet of Things (IoT) needs to operate on fast network topologies that provide end-to-end connection and real-time responses: consider, for instance, the frequent disconnections and reconnections of devices, or notifications of a disaster or an imminent collapse of the system. In many cases, decisions must be taken in a short time, and it is necessary to be able to rely on a reliable connection between the customer and the corresponding servant that performs complex tasks. In many situations, especially those dictated by the overload of communications in multi-hop WAN networks, these qualities are not guaranteed by the Cloud. In fact, the Cloud aims to manage modern real-time responsiveness requirements for data and user applications securely, but with more
A methodology for deployment of IoT application in fog
relaxed timings than the IoT needs. In addition, the Cloud is designed to work on large networks, which may contain bottlenecks or even interruptions. The lack of connection is not actually the only reason for service interruption. Let us consider the case in which shared access to a resource is available to all users connected to the same WAN. As often happens in networks that convey a large number of users, which can be both human and artificial, as occurs more and more frequently in modern IoT scenarios, at peak usage there is a high probability that the shared resource becomes effectively unusable, due to an overload of network communications. The Cloud is not designed for the reception of data with the frequency and speed of those produced by numerous IoT devices; moreover, the analysis of those data on Cloud platforms involves a continuous movement of a huge amount of data. All this leads to congestion, which directly results in denial of service, slowdowns or disconnections Papageorgiou et al. (2015).

In addition to the congestion problems, the Cloud is unsuited to mobile users, since it is not rare that they change their address while using the same service, moving from one network to another. Moreover, Cloud Computing suffers from other problems, such as those related to security and legal issues. With regard to security, it can be said that the main issues are related to the location of users' data. It is not always possible, or convenient, to move the computational load over the network to the Cloud, because performance is not guaranteed in some critical situations of particular interest.

The Smart Grid represents the most relevant IoT application in the wider Smart Metering context. It has emerged to provide an intelligent power infrastructure, but it is integrating IoT devices to gather more and more detailed information, introducing additional requirements. The integration of WSNs, actuators, smart meters, and other components of the power grid together with information and communication technology (ICT) is referred to as the Internet of Energy (IoE). IoT technology integrated within the smart power grid comes with the cost of storing and processing a large volume of data every minute. These data include end users' load demand, power line faults, the status of network components, scheduled energy consumption, forecast conditions, advanced metering records, outage management records, enterprise assets, and many more. Hence, utility companies must have the software and hardware capabilities to store, manage, and process the collected data efficiently. Witt (2015) explains how the high-volume data gathered in the smart grid are similar in size and characteristics to the concept of big data.

The utilization of a Cloud architecture provides, in addition to high scalability, a number of other advantages in the Smart Grid field, such as better interoperability, rapid elasticity, better maintainability, as well as cost reduction. On the other hand, data security and privacy remain relevant obstacles to migrating Smart Grid applications to the Cloud Simmhan et al. (2010). In particular, scalable access to information on energy assets needs to be balanced with data privacy and security (using de-identified data), which must not affect the performance of such mission-critical applications. The resilience of the ICT infrastructure is essential to make available the data from which information is extracted in order to detect faults, isolate them, and then resolve them. Also the security of the ICT infrastructure becomes a requirement to avoid failures caused by cyber-attacks or by combined attacks on the power grid and on the ICT infrastructures.

So, for an important domain such as the Smart Grid, it is really important to move towards an architecture that overcomes the described limitations.

3 From Cloud to Fog: benefits and challenges

Moving resources to a point of the network closer to where data are produced is the solution to the high latencies that impede the execution of applications that require near real-time behaviour and generate high data traffic. Fog Computing aims at moving the Cloud Computing paradigm to the edge of networks, in particular those that connect all the devices belonging to the IoT Bonomi et al. (2012). The edge of the network is identified by the subnet of the devices in the network and can have a reduced or a wide extension, depending on whether the internal devices are wireless or broadband.

Fog nodes, even if less performing than cloud platforms, are suitable for the execution of complex tasks and the storage of data. They are also scalable and adaptable to different types of deployment.

In this part of the network, Cloud services can operate directly, or at most through a single intermediary, with mobile customers that consume or produce data. According to the commonly accepted definition of Fog Computing Vaquero and Rodero-Merino (2014), it represents a scenario in which a high number of wireless devices communicate and cooperate among themselves and with network services to support the storage of data and the computational processes without intervention from third parties.

The conceptual model that describes the Fog architecture fits all devices that extend the capabilities of the Cloud into a dedicated intermediate level of connection to end devices for data protection. Fog nodes have the characteristic of being distant from the Cloud data center with which they communicate, but very close to the user devices. Fog nodes are also large in number and scattered
information about the status of the radio channel used for

type T_a specifies the requirement of a service to be executed
C_F^N and storage C_F^S. Each control node has to perform resource provisioning by exploiting the computational resources of the Cloud to overcome its own limitations. A Cloud R has theoretically unlimited resources. The logical link between a control node F and the Cloud R has a non-negligible delay d_R.
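To make the provisioning trade-off concrete, the following toy sketch (not the paper's algorithm; all names, capacities and timings are invented) keeps on the control node the services that gain most from local execution and offloads the rest to the Cloud, charging offloaded calls the round-trip link delay d_R:

```python
# Toy sketch of fog-vs-cloud provisioning: keep the services that gain
# most from local execution on the control node while its CPU capacity
# lasts; offload the rest to the cloud, whose response time includes the
# round-trip link delay d_R. Numbers and names are illustrative.

D_R = 0.05  # assumed link delay d_R in seconds


def place(services, cpu_capacity):
    """services: list of (name, cpu_demand, t_fog, t_cloud);
    a cloud call additionally pays 2 * D_R on the link."""
    def gain(s):
        # latency saved by staying local instead of offloading
        return (s[3] + 2 * D_R) - s[2]

    placement, used = {}, 0.0
    for name, cpu, t_fog, t_cloud in sorted(services, key=gain, reverse=True):
        if used + cpu <= cpu_capacity:
            placement[name], used = "fog", used + cpu
        else:
            placement[name] = "cloud"
    return placement


services = [("event_detector", 0.2, 0.01, 0.02),
            ("profile_clustering", 0.6, 0.50, 0.20),
            ("profile_prediction", 0.3, 0.05, 0.04)]
print(place(services, cpu_capacity=0.5))
```

A real solver would optimize all constraints of the model jointly; the greedy rule above is only meant to illustrate how d_R biases the placement decision.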
(Eq. 4). This is necessary for asymmetric channels, such as the ones available in residential buildings or in mobile networks. Equation 3 computes the upload bandwidth c_F^u used by

As a third constraint, we account for the response time r_{A_i} of an application, so that it does not violate the application deadline, as in Eq. 5.
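The deadline constraint can be illustrated with a simplified stand-in for the response-time check: the execution time of each service on its chosen node, plus the link delay for offloaded ones, must not exceed the application deadline. The placement flags y/z follow the model; every numeric value below is invented.

```python
# Hedged sketch of the deadline constraint (cf. constraint 5): the
# response time of an application is approximated by the sum of the
# execution times of its services on the chosen nodes, where offloaded
# services (z = 1) also pay the cloud link round-trip. Illustrative only.

def response_time(services, d_link):
    """services: list of (exec_fog, exec_cloud, y, z), y/z in {0, 1}
    marking fog/cloud placement."""
    return sum(ef * y + (ec + 2 * d_link) * z
               for ef, ec, y, z in services)


svc = [(0.05, 0.02, 1, 0),   # detection kept in the fog node
       (0.40, 0.10, 0, 1)]   # clustering offloaded to the cloud
rt = response_time(svc, d_link=0.05)
print(rt <= 0.5)  # True: the (assumed) 0.5 s deadline is met
```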
Fig. 3 Architecture overview
1. Benchmarking: the computation and communication requirements will be estimated on the target platform for different workloads. In particular, each service will run on the target platform in order to estimate the execution times d(a, F) and d(a, R) with increasing input sizes.
2. Evaluation: the presented model will be used to compute an upper bound on the workload that can be processed by the different deployment configurations. In particular, Eq. 2 in the case of CPU utilization becomes:

   ( Σ_a^{A_i} c_a^C · (y_a + z_a) ) / C_F^C = Σ_i^n Σ_a^{A_i} ( d(a, F) · y_a + d(a, R) · z_a ) · λ_a ≤ μ    (7)

   where λ_a is the arrival rate of requests for the service a, which must be maximized ∀a ∈ Res(F), and the other parameters have been estimated by benchmarks. Equation 7 defines the same problem in terms of the percentage of time used by services. The objective is to maximize the throughput of all services λ = {λ_1, ..., λ_n} while keeping free a percentage μ of CPU, which takes into account the error introduced by the model and the overhead consuming processing resources. The analytical solution of Eq. 7 provides a Pareto front of solutions consisting of threshold values for the arrival rates in the different deployment configurations.
3. Testing: the evaluation results will be used to reduce the dimension of the space of test cases. The evaluation provides an upper bound on the maximum throughput that can be processed by the nodes in each deployment configuration. Testing will be used to validate the evaluation results, to tune the deployment configuration by reproducing the real environment on a testbed, and to estimate the overhead.

Smart Energy represents a killer use case for the adoption of the Fog paradigm. In order to demonstrate the application of the proposed approach, we extended the CoSSMic platform described in Jiang et al. (2016) and evaluated three different deployment solutions. In the CoSSMic scenario, the consuming appliances and photo-voltaic panels of a neighbourhood are continuously monitored. Users plan the utilization of their devices using a software application, and the system computes and enforces the best schedule of consuming appliances, to optimize the global self-consumption at neighbourhood level. Users' preferences and constraints include some parameters that allow defining the flexibility of the schedule, but users need to set in advance which kind of program they are running.

Here we focus on some extended functionalities that have been developed to automatically learn the energy profiles corresponding to the different working programs of an appliance, and to predict at device switch-on time which program is actually going to run.

Let us suppose a user has switched on her washing machine. The start will be detected by the system, which will stop the device and schedule the next switch-on at the best time of the day, according to the energy requirements of the predicted working program. At the end of the run, the measured energy consumption will be used to improve the average consumption profiles and to update the prediction model.

In Fig. 3 we show an overview of the CoSSMic platform. It is possible to identify four main layers:

– The Sensor Layer is composed of different kinds of sensors Gupta et al. (2016), digital smart meters, digital controls, and analytic tools to monitor and control two-way energy flow.
– The Sensor Network collects and disseminates environmental data. Wireless sensor networks facilitate monitoring and controlling physical environments from remote locations at any time. Devices in the grid can send information through wireless interfaces (for example UHF or ZigBee). Such a communication layer allows for transmission of data and control signals using heterogeneous technologies and across different kinds of area networks. For example, data can be collected locally using embedded systems that are hosted at the user's home, in a server, or directly on the user's smartphone, according to the complexity of the applications, the amount of data and the provided functionalities.
– Drivers implement the bridge between sensors and the integration platform. In fact, sometimes the same technology cannot adequately manage the characteristics of data from different sources. The lack of standards and the availability of many proprietary solutions require an effort to develop a new driver for each different technology to be integrated into the architecture. At the next layer, data are collected in centralized or distributed repositories.
– The Integration Layer integrates data coming from a number of sensor drivers, providing a uniform representation model, which is used to store and manage the information at the Data Layer. Data flowing from the sensor layer are characterized here by complexity of different types, which makes challenging the extraction of the relevant information they provide as a whole. In fact, data are heterogeneous, as they come from very different sources or are representative of different phenomena (the sources can be utility meters as well as sensors that detect environmental quantities or human phenomena).
– The Data Layer holds all data that have to be processed.
– The Application Logic. Specifically designed data analysis procedures are used to detect and analyse data Patel et al. (2012). On relevant situations, such procedures activate messages which are sent to the User Interface to notify users about dangerous situations or to recommend correct behaviours. XMPP (Extensible Messaging and Presence Protocol) is used as transport layer to deliver such messages to final users, reusing the mechanisms of the protocol (friendship, presence, multi-user chat, etc.) to identify available receivers, groups, etc. Other actions consist of real-time commands to electric devices in the smart house. They are delivered using the specific APIs of the integration layer, which are implemented by the different drivers. For example, through the device drivers, the mediator APIs allow switching devices on or off when there is a dangerous situation.
– The User Interface (UI) supports interactive control and configuration of the system. It provides to the user real-time monitoring information, statistics on historical data and feedback from the coaching system. XMPP messages are used to trigger alerts, to communicate with other users and with the application layer, providing information and feedback to the Application Logic.

4.3 The software platform

In Fig. 4 the component diagram of the CoSSMic application is shown.

1. The Data collector stores measures for monitoring, real-time processing and analysis.
2. The Event detector is in charge of detecting the start and the stop of a consuming device.
3. The Profile clustering identifies, from historical measures, the energy profiles of the different working programs of each device.
4. The Profile modeler computes the average profile for each cluster.
5. The Usage modeler learns from historical measures the common usage of a device by each user.
6. The Profile prediction predicts, for each user and each device, which program is going to run when the device is switched on.
7. The Device scheduler computes the best schedule of appliances according to PV production, energy requirements and users' constraints.
8. The Device controller enforces the schedule by automatically switching the controlled devices on/off.

Profile clustering, Profile modeler and Profile prediction implement the new functionalities that automatically execute tasks which were previously performed manually by users.

The main issue investigated in this section and in the following ones is the deployment of the services which implement the monitoring and the learning functionalities in the Fog nodes rather than in the Cloud. The energy scheduler is a peer-to-peer application that exploits the collaboration of all control nodes and is out of scope for this contribution.

In Table 2 the learning and monitoring software components are renamed according to the notation defined in the previous section. For convenience, the terms service and component will be used interchangeably.

4.4 Deployment configurations
in Amato et al. (2014b) and in Amato et al. (2014a). Here we focus on the deployment of the new monitoring and learning components, to evaluate the new overhead introduced and the compliance with some relevant requirements. For the sake of simplicity, we will deploy the full applications A0−3 in the Cloud or in the Fog according to the following configurations:

The All-in-cloud configuration locates all the software components remotely. The Fog nodes will just host a queue service that is in charge of routing energy measures to the Cloud and receiving commands from remote. The cost of a centralized and complete knowledge about the distributed system will be a greater latency and, eventually, privacy issues to be addressed.

The All-in-fog solution hosts every component in the Fog nodes. Here we have no privacy issues and reduced latency, but we need to tune the software components in order to comply with the limited resources of each node.

The Half-in-fog solution tries to find a good compromise between performance and system capabilities.

The main requirements for this application are the response times of prediction (A1) and of detection; in fact, it is relevant to apply the reaction in real time for switching off the device and for triggering the scheduler to allocate the required energy. Another requirement we investigate in this paper is the upload bandwidth, which can be limited when the Fog control node is installed in a household with an ADSL link.
A0   0 1 0   0 1 0   0 1 0
A1   0 0 1   0 1 0   0 1 0
A2   0 0 1   0 1 0   0 0 1
A3   0 0 1   0 1 0   0 1 0
Obviously, the compliance with these requirements depends both on the hardware resources and on the workload to be processed, and they can change according to the deployment configuration. The placement of each application according to the proposed model is shown in Table 3. Additional technological details are provided in the following sub-sections.

4.4.1 All-in-cloud

As shown in Fig. 5, measures and application data for all users and all devices are stored in the Cloud. All services also run in the Cloud, except the IoT drivers. The data collector is implemented by a web service with a REST interface that receives energy samples in real time from the devices through the Fog control node. The event detector works in pipeline with the data collector in order to identify the switch-on and the switch-off of each device. That means there will be a relevant latency between the switch-on detection and the switch-off control, which must be applied in real time. On the other hand, this solution has the advantage of storing all data in the Cloud, which means a centralized, full knowledge base of energy measures. For this reason, all the needed information is available for clustering the time-series of the same device type. Clustering is triggered when the completion of a device run is detected. It allows classifying the different working programs, which will be used to predict the energy requirements at the next switch-on. Of course, time-series recorded for the same device type at different households allow characterizing the energy profiles earlier and better; these can be used for new households, or for those from which no measures have been collected yet. The clustered time-series are used by the Profile modeler to compute the representation of each energy profile. There will be one average profile per cluster. The clustering will also allow learning the user's behaviour in terms of device utilization. It means that, for each user and for each device, a regression model will be estimated to predict the next program that will be planned, by observing the sequence of previous runs. Such a model is used by the Profile prediction when the switch-on of a device has been detected. The energy profile corresponding to the same model is used by the Device scheduler to compute the assigned start time for the device. In the case the scheduler is also running in the Cloud, the only information to be communicated to the gateway is the assigned start time. We assume that the delay of the assigned start time is not relevant with respect to the dynamics of the schedule, which needs a certain degree of flexibility from the user to optimize the energy utilization.

4.4.2 All-in-fog

As shown in Fig. 6, measures and application data of each user are stored locally, in her own Fog node. All services also run locally: in this configuration all components run on the Fog node. This was the default configuration deployed by the CoSSMic project in all trials. No data are transferred to the Cloud, and only the average profiles need to be communicated between the distributed instances of the scheduler to compute the assigned start time. It is straightforward to observe that there are no latency issues in this case, apart from the time required to update the schedule. Even security and privacy issues are reduced, because only an analytical model of the average energy profile is shared among gateways. On the other hand, the utilization of local data will limit the learning process of energy profiles, which cannot exploit the time-series of other households. Focusing on the scope of this contribution, it is critical to evaluate the performance resulting from the execution of all software components on the embedded platform hosting the gateway; in fact, the original software did not include the new learning capabilities (Fig. 6).

Fig. 5 All-in-cloud deployment configuration
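A minimal stand-in for the clustering-and-prediction chain described above: an average profile per working program takes the place of the cluster models, and a completed run is classified by the nearest average profile. The platform actually uses clustering, spline approximation and a regression model; all names and data below are invented.

```python
# Hedged sketch of profile learning: average the recorded energy
# profiles of each working program, then classify a new run by the
# nearest average profile (squared-distance). Illustrative only.

def average_profiles(runs):
    """runs: {program: [profile, ...]}; profile = list of kWh samples."""
    return {prog: [sum(col) / len(col) for col in zip(*profiles)]
            for prog, profiles in runs.items()}


def classify(profile, averages):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(averages, key=lambda p: dist(profile, averages[p]))


runs = {"eco":       [[0.1, 0.2, 0.1], [0.1, 0.3, 0.1]],
        "intensive": [[0.5, 0.9, 0.6], [0.6, 1.0, 0.5]]}
avg = average_profiles(runs)
print(classify([0.1, 0.25, 0.1], avg))  # eco
```

In the all-in-cloud configuration this classification can draw on the time-series of every household; in the all-in-fog configuration it is limited to the local history, as discussed above.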
5 Experimental results
2 tsung.erlang-projects.org.
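The test workloads of Sect. 5.3 mix plain energy samples with switching events at controlled inter-arrival times. A sketch of such a generator, with exponential inter-arrivals and a configurable share of switching requests (the 20% share follows the text; everything else, including the function name, is illustrative):

```python
# Hypothetical workload generator for load tests: exponential
# inter-arrival times and a fixed share of switch-on/off requests
# among the energy samples. Seeded for reproducibility.
import random


def workload(n, mean_interarrival_s, switch_share=0.2, seed=42):
    rng = random.Random(seed)
    t, requests = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_interarrival_s)
        kind = "switch" if rng.random() < switch_share else "sample"
        requests.append((round(t, 3), kind))
    return requests


reqs = workload(1000, mean_interarrival_s=0.5)
print(len(reqs))  # 1000
```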
Fig. 8 Testing scenarios
one message at the end of the execution. Moreover, the prediction model is asynchronously received from the Cloud, but it does not belong to a transaction, because we are interested only in its effect on the bandwidth utilization.

5.1 Benchmarking

A preliminary analysis of the performance of the software components has been carried out by running all functions in the Cloud with different workloads. We emulated a sequence of 200 runs of a washing machine. The clustering and the learning algorithm are executed on all the collected profiles, and the prediction uses the model trained on the full history. The spline approximation, instead, is computed for each cluster, but using just the last five recorded profiles assigned to that cluster.

In Fig. 9 the blue points represent the time for completing each computation. The red line is the linear approximation of the time distribution, which allows predicting the performance figures with increasing workload. In Fig. 9a the time increases with the number of processed individuals. The time to compute the spline models depends on the number of clusters identified and on the number of time-series in each cluster. For this reason, in Fig. 9b on the x axis we plotted the product between the number of identified clusters and the number of time-series. Such a number varies between 1 and 45, because just the last five runs are used to compute the spline. Figure 9c, d show the time needed to build a classification model using the random forest algorithm and the application of the same model to predict which program is starting, according to the switch-on date-time and the previous occurrences. We can observe that the time increases slightly, linearly with the length of the sequence, and that the time to complete the prediction is comparable with the classification.

In Fig. 10 the performance of the processing elements has been evaluated in the case all the software components run on the Raspberry. Here we can observe that the performance results are obviously worse, but they are comparable with the case in which they run in the Cloud.

The relevant issue to be addressed deals with the number of devices and the amount of data to be handled locally. In fact, the processing time of each element will affect the maximum acceptable arrival rate of incoming requests. We expect a reduced number of devices to be handled locally, but such a solution does not allow exploiting data from other fog nodes to speed up and improve the learning process. The benchmark results will be assigned to the parameters of Eq. 7 to select the feasible deployment configurations.

5.2 Evaluation

Here we evaluate the model formulated in Sect. 4.1, setting the arrival rates of the energy samples. In Table 4 evaluation results are shown for different values of the arrival rate of the energy samples. We also supposed that 20% of the energy samples belong to switching events. The values have been chosen to reproduce underloaded, overloaded and normal working conditions.

We expect that when either c_F^C > 1 or c_R^C > 1 the system will be overloaded and the execution time will continuously increase until a failure.

The system will work in normal conditions when both c_F^C < μ_F and c_R^C < μ_R, where the μ parameter depends on the
overhead and the amount of resources used by the operating Here we reproduce the scenario shown in Fig. 8a to
systems. In normal conditions, we expect that the execu- evaluate the resource utilization in term of CPU load and
tion time will match the values provided as an input to the communication overhead at Cloud side, and to estimate the
model, even if fluctuation of the workload can introduce latency of each transaction. In particular, we are presenting
several deviation. performance results with different values of the arrival rate.
The system will work in normal condition when both Each test will last for 2 min.
cCF << 𝜇F or cCR << 𝜇R . It will be able to reduce deviations
and process longer and greater fluctuation. We do not dis-
tinguish in this case upload and download. In this case cNR 5.3.1 All in cloud scenario
and cFR are the same as we consider one Fog control node.
In this scenario the fog node is sending all sensors data to the
cloud node and receive both the prediction about next load
5.3 Testing profile to schedule and the energy profile itself. Every com-
putation is done in the Cloud and all the historical data are
The performance evaluation of processing element allowed stored there. In three different experiments requests arrives
only for an estimation of the processing capability of at a different rate. For this scenario values of inter-arrivals
working nodes (Fog and Cloud). It allows to estimate a are 1, 0.5, and 0.1 s. In each experiment 20% of requests
upper bound for the arrival rate of processing requests. corresponds to the start and to the stop of a device, 80%
However in order to evaluate the latency, bandwidth and of requests are any other energy samples. The arrival rate
the computation overhead it needs to test realistic work- of request during the run of each experiment is shown in
load on real testbeds. Fig. 11a.
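The working-condition check described above can be sketched in a few lines. The sketch below assumes, consistently with the text but not stated explicitly by it, that a load factor c is the utilization obtained as arrival rate times mean service time, and that μ is a fixed safety threshold accounting for operating-system overhead; the function names and the 0.7 value are illustrative only.

```python
def load_factor(arrival_rate_hz: float, mean_service_time_s: float) -> float:
    """Utilization c = lambda * T_service: fraction of the node capacity requested."""
    return arrival_rate_hz * mean_service_time_s

def working_condition(c: float, mu: float = 0.7) -> str:
    """Classify a node's state from its load factor.

    mu is the safety threshold discussed in the text (it depends on the
    OS overhead of the node); 0.7 is only an illustrative value.
    """
    if c > 1.0:
        return "overloaded"   # queue grows without bound, latency keeps increasing
    if c < mu:
        return "normal"       # workload fluctuations can be absorbed
    return "edge"             # sustainable on average, but sensitive to bursts

# Example: 1 request/s against a 0.9 s mean service time
print(working_condition(load_factor(1.0, 0.9)))  # edge
```

Under this reading, the all-in-cloud experiments below simply sweep the arrival rate so that c crosses the μ and 1.0 boundaries.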
The effect of the higher overhead of the Cloud node is observed through a longer duration of the prediction. In the case of the highest arrival rate the prediction is received after 20 s, which becomes not acceptable after two minutes, and it is expected to increase further because the Cloud node is not able to process the workload. In fact, another effect, not shown in Fig. 12, is represented by the number of requests processed after the end of the experiment, when all the messages have been sent. Average arrival rate and average duration of transactions are provided in Table 5.

5.3.2 All in Fog scenario

In this scenario the fog node never sends data to the Cloud. All data are stored locally, and reactions are locally computed and applied. Clustering and learning are computed in the Fog node, and prediction is performed locally as well. Because of the limited capability of the Raspberry platform, the same throughput cannot be processed locally. For this reason, in three different experiments sessions are started with the following values of inter-arrivals: 10, 1 and 0.5 s. In each experiment 10% of the requests correspond to the switch-on and 10% to the switch-off of a device; 80% of the requests are any other energy samples. The arrival rate of requests during each experiment is shown in Fig. 13a.

In Fig. 13 the throughput of transactions in the different experiments is shown. The arrival rates have been chosen to show the results in overloaded, normal and edge conditions. In Fig. 14 the duration of transactions in the different experiments shows that in the overloaded and edge conditions the prediction time increases, as the throughput is not sustainable by the fog node. With 10 s inter-arrivals the node is able to process the incoming workload. Let us observe that this means we receive a measure from each device every 10 s, and that a device switches on or off not more often than every 100 s, on average. Additional details are provided in Table 6.
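The synthetic workload of these experiments can be reproduced with a minimal generator. The sketch below assumes fixed inter-arrival times, as in the tests, and the 10%/10%/80% request mix described above; all names are ours, not part of the CoSSMic platform.

```python
import random

def workload(duration_s: float, inter_arrival_s: float, seed: int = 42):
    """Yield (timestamp, request_type) pairs reproducing the test mix:
    10% switch-on, 10% switch-off, 80% plain energy samples."""
    rng = random.Random(seed)  # fixed seed so a run can be repeated
    t = 0.0
    while t < duration_s:
        r = rng.random()
        if r < 0.10:
            kind = "switch-on"
        elif r < 0.20:
            kind = "switch-off"
        else:
            kind = "energy-sample"
        yield (t, kind)
        t += inter_arrival_s

# A 2-minute run with 10 s inter-arrivals, as in the sustainable experiment
reqs = list(workload(duration_s=120, inter_arrival_s=10))
print(len(reqs))  # 12
```

Feeding such a stream to the node under test is enough to reproduce the underloaded, edge and overloaded conditions by varying only `inter_arrival_s`.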
5.3.3 Half in Fog scenario

In this scenario the fog node sends only the time-series of runs to the Cloud, when an appliance stops. The fog node receives prediction models and spline representations of the clustered profiles. All data are stored locally, and reactions are locally computed and applied. Clustering and learning are computed in the Cloud, but prediction is performed locally. In three different experiments sessions are started with the following values of inter-arrivals: 1, 0.5 and 0.1 s. In each experiment 10% of the requests correspond to the switch-on and 10% to the switch-off of a device; 80% of the requests are any other energy samples. The arrival rate of requests during the run of each experiment is shown in Fig. 15a.

Even if the arrival rate, that is the number of devices, is the same as in the first scenario, the workload is shared with the Cloud node. By offloading clustering and learning, the fog node can use its own computing capability to increase its throughput. In Fig. 16 the duration of transactions in the different experiments is shown. We observe that even in the third experiment, when the node cannot process the incoming requests, the duration of the prediction is less than in the first scenario. On the other hand, the fog node is delegated to handle only local devices, and a 0.5 s inter-arrival would allow to serve a switch-on every 5 s, which is far from being a constraint. Average values of duration of transactions and of arrival rates are provided in Table 7.

5.4 Communication overhead and bandwidth

down the following operations, and the bandwidth utilization decreases very fast. The same effect is not shown in the half-in-fog experiment, because the number of HTTP transactions is 10% with respect to the all-in-cloud scenario.
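As a rough illustration of why the half-in-fog deployment cuts the communication overhead, the sketch below estimates the number of Cloud-bound HTTP transactions per scenario from the request mix used in the experiments. The function and the fixed shares are our illustrative assumptions derived from the scenario descriptions, not measured values.

```python
def cloud_transactions(total_requests: int, scenario: str) -> int:
    """Rough count of HTTP transactions reaching the Cloud per scenario.

    - all-in-cloud: every request is forwarded to the Cloud;
    - half-in-fog: only the time-series sent at each switch-off
      (10% of the requests in the test mix) travel to the Cloud;
    - all-in-fog: nothing leaves the fog node.
    """
    share = {"all-in-cloud": 1.0, "half-in-fog": 0.10, "all-in-fog": 0.0}
    return int(total_requests * share[scenario])

# A 2-minute run at 10 requests/s generates 1200 requests
print(cloud_transactions(1200, "half-in-fog"))  # 120
```

The order-of-magnitude reduction of transactions is what keeps the network utilization of the half-in-fog deployment low in Fig. 17.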
to the surroundings of the device. Hence, such requests can be served without the help of the global information stored in the Cloud.
– Low latency requirement: mission-critical applications require real-time data processing. A control system running in the Cloud may make the sense-process-actuate loop slow, or unavailable as a result of communication failures. Fog computing helps by performing the processing required by the control system very close to the robots, thus making real-time response possible.
– Scalability: even with virtually infinite resources, the Cloud may become the bottleneck if all the raw data generated by end devices are continuously sent to it. Since fog computing aims at processing incoming data closer to the data source itself, it reduces the burden of that processing on the Cloud, thus addressing the scalability issues arising out of the increasing number of endpoints.

Several application scenarios that could benefit from fog computing have been investigated in research papers. In Patil (2015) some applications of fog computing, and the benefits that fog provides in those contexts, are described.
The key advantages of Fog Computing in the Smart Grid are described in Gia et al. (2015). The challenges regard latency-sensitive issues, location awareness and large data
transmission. Undoubtedly, the more data is transmitted over a network, the higher the possibility that errors occur, because bit errors, data transmission latency and the possibility of packet dropping are proportional to the volume of transmitted data.

The paper by Perera et al. (2017) focuses on Fog Computing in the Smart Grid: it describes several inspiring use case scenarios of Fog computing, identifies ten key characteristics and common features of Fog computing, and compares more than 30 existing research efforts in this domain. Based on their review, the authors further identify several major functionalities that ideal Fog computing platforms should support, and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.

In Minh et al. (2017) the authors present the Fog Service Placement Problem (FSPP), which allows to place IoT services on virtualized fog resources while taking into account Quality of Service (QoS) constraints like deadlines on the execution time of applications. The authors also provide a model for an IoT application and a resource model for the fog landscape. Afterwards, the FSPP is described, and a corresponding optimization model is formalized and validated using experimental evaluation.

The Fog Service Placement Problem has also been investigated in Skarlat et al. (2017), where the authors present a conceptual fog computing framework and model the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes. The authors also propose a problem resolution heuristic based on a genetic algorithm, showing through experiments that the service execution can achieve a reduction of network communication delays when the genetic algorithm is used, and a better utilization of fog resources when the exact optimization method is applied.
Fig. 17 Network utilization
et al. (2016); however, the effect on performance and service levels cannot be neglected. The changed non-functional requirements have been analyzed for an optimal deployment according to the Fog computing paradigm. Experimental results demonstrated the application of the BET methodology, which allowed for an effective deployment of the new learning functionalities integrated in the CoSSMic platform. In particular, we were able to evaluate and validate the maximum workload that can be processed with the available computational resources in different deployment configurations. However, the precision of the estimated performance is limited by the availability of a testbed that is as similar as possible to the real case. We think that the proposed methodology can provide guidelines to the developer at programming and deployment stage to meet application requirements and to optimize performance and utilization of the available resources. The next step will focus on the development of a tool that allows for running the benchmarking, evaluation and testing automatically; collecting performance results could speed up and improve the job of the developer. The design and development of a decision support system, which automatically recommends the optimal deployment configurations and the estimated performance figures, is future work. The dynamic optimization of the deployment of a Fog Application by reconfiguration is another hint for future investigation on autonomic fog applications.

References

Amato A, Aversa R, Di Martino B, Scialdone M, Venticinque S, Hallsteinsen S, Horn G (2014a) Software agents for collaborating smart solar-powered micro-grids. In: Caporarello L, Di Martino B, Martinez M (eds) Smart organizations and smart artifacts: fostering interaction between people, technologies and processes. Springer International Publishing, Cham, pp 125–133. https://doi.org/10.1007/978-3-319-07040-7_14
Amato A, Di Martino B, Scialdone M, Venticinque S, Hallsteinsen S, Jiang S (2014b) A distributed system for smart energy negotiation. In: Fortino G, Di Fatta G, Li W, Ochoa S, Cuzzocrea A, Pathan M (eds) Internet and distributed computing systems: 7th international conference, IDCS 2014, Calabria, Italy, September 22–24, 2014. Proceedings. Springer International Publishing, Cham, pp 422–434. https://doi.org/10.1007/978-3-319-11692-1_36
Arridha R, Sukaridhoto S, Pramadihanto D, Funabiki N (2017) Classification extension based on iot-big data analytic for smart environment monitoring and analytic in real-time system. Int J Space Based Situated Comput 7(2):82–93. https://doi.org/10.1504/IJSSC.2017.10008038
Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC workshop on mobile cloud computing, ACM, New York, NY, USA, MCC '12, pp 13–16. https://doi.org/10.1145/2342509.2342513
Dastjerdi AV, Gupta H, Calheiros RN, Ghosh SK, Buyya R (2016) Fog computing: principles, architectures, and applications. arXiv:1601.02752 [cs]. Accessed 5 Apr 2018
Evans D (2011) The internet of things: how the next evolution of the internet is changing everything. Tech. Rep. April, Cisco Internet Business Solutions Group (IBSG). http://www.cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf. Accessed 5 Apr 2018
Evans D (2015) Fog computing and the internet of things: extend the cloud to where the things are. Tech. rep., Cisco Internet Business Solutions Group (IBSG). https://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview.pdf. Accessed 5 Apr 2018
Gentile U, Marrone S, Mazzocca N, Nardone R (2016) Cost-energy modelling and profiling of smart domestic grids. Int J Grid Util Comput 7(4):257–271. https://doi.org/10.1504/IJGUC.2016.081012
Gia TN, Jiang M, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H (2015) Fog computing in healthcare internet of things: a case study on ecg feature extraction. In: Computer and information technology; ubiquitous computing and communications; dependable, autonomic and secure computing; pervasive intelligence and computing (CIT/IUCC/DASC/PICOM), 2015 IEEE international conference on, IEEE, pp 356–363
Gupta H, Dastjerdi AV, Ghosh SK, Buyya R (2016) iFogSim: a toolkit for modeling and simulation of resource management techniques in internet of things, edge and fog computing environments. arXiv:1606.02007
Jiang S, Venticinque S, Horn G, Hallsteinsen S, Noebels M (2016) A distributed agent-based system for coordinating smart solar-powered microgrids. pp 71–79. https://doi.org/10.1109/SAI.2016.7555964. Accessed 5 Apr 2018
Ling C, Lifang L, Xiaogang Q, Gengzhong Z (2017) Cooperation forwarding data gathering strategy of wireless sensor networks. Int J Grid Util Comput 8(1):46–52. https://doi.org/10.1504/IJGUC.2017.10003009
Minh QT, Nguyen DT, Le AV, Nguyen HD, Truong A (2017) Toward service placement on fog computing landscape. In: 2017 4th NAFOSTED conference on information and computer science, pp 291–296. https://doi.org/10.1109/NAFOSTED.2017.8108080
Papageorgiou A, Cheng B, Kovacs E (2015) Real-time data reduction at the network edge of internet-of-things systems. In: Tortonesi M, Schönwälder J, Madeira ERM, Schmitt C, Serrat J (eds) 11th International Conference on Network and Service Management, CNSM 2015, Barcelona, Spain, November 9–13, 2015, IEEE Computer Society, pp 284–291. https://doi.org/10.1109/CNSM.2015.7367373
Patel S, Park H, Bonato P, Chan L, Rodgers M (2012) A review of wearable sensors and systems with application in rehabilitation. J NeuroEng Rehabilit 9(1):21. https://doi.org/10.1186/1743-0003-9-21
Patil PV (2015) Fog computing. In: International Journal of Computer Applications (0975–8887), National conference on advancements in alternate energy resources for rural applications (AERA-2015), pp 1–6
Perera C, Qin Y, Estrella JC, Reiff-Marganiec S, Vasilakos AV (2017) Fog computing for sustainable smart cities: a survey. ACM Comput Surv 50(3):32:1–32:43. https://doi.org/10.1145/3057266
Simmhan Y, Giakkoupis M, Cao B, Prasanna VK (2010) On using cloud platforms in a software architecture for smart energy grids. In: International conference on cloud computing technology and science (CloudCom), IEEE, poster
Skarlat O, Nardelli M, Schulte S, Borkowski M, Leitner P (2017) Optimized IoT service placement in the fog. Serv Orient Comput Appl 11(4):427–443. https://doi.org/10.1007/s11761-017-0219-8
Vaquero LM, Rodero-Merino L (2014) Finding your way in the fog: towards a comprehensive definition of fog computing. SIGCOMM Comput Commun Rev 44(5):27–32. https://doi.org/10.1145/2677046.2677052
Witt S (2015) Data management and analytics for utilities. http://www.smartgridupdate.com. Accessed 29 Sep 2016