
Journal of Ambient Intelligence and Humanized Computing

https://doi.org/10.1007/s12652-018-0914-0

ORIGINAL RESEARCH

A lightweight decentralized service placement policy for performance optimization in fog computing

Carlos Guerrero¹ · Isaac Lera¹ · Carlos Juiz¹

Received: 11 May 2018 / Accepted: 10 June 2018


© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Abstract
A decentralized optimization policy for service placement in fog computing is presented. The optimization aims to place the most popular services as close to the users as possible. The experimental validation was performed in the iFogSim simulator, comparing our algorithm with the simulator's built-in policy. The simulation is characterized by modeling a microservice-based application for different experiment sizes. Results showed that our decentralized algorithm places the most popular services closer to the users, improving network usage and the service latency of the most requested applications, at the expense of a latency increment for the less requested services and a greater number of service migrations.

Keywords  Fog computing · Service placement · Performance optimization

This research was supported by the Spanish Government (Agencia Estatal de Investigación) and the European Commission (Fondo Europeo de Desarrollo Regional) through Grant Number TIN2017-88547-P (MINECO/AEI/FEDER, UE).

* Carlos Guerrero, carlos.guerrero@uib.es
Isaac Lera, isaac.lera@uib.es
Carlos Juiz, cjuiz@uib.es

¹ Computer Science Department, University of Balearic Islands, Crta. Valldemossa km 7.5, E07122 Palma, Spain

1 Introduction

The emergence of application development for the Internet of Things (IoT) has revitalized the popularity of wearables, logistics, smart cities, or e-health environments (Atzori et al. 2010; Li et al. 2015; Ko et al. 2016; Darwish et al. 2017). The number of users in these systems, and their performance requirements, is increasing continuously. Those requirement increments were initially satisfied by integrating IoT and cloud architectures (Botta et al. 2016; Diaz et al. 2016). The combination of both technologies allows, firstly, IoT applications to dispose of unlimited computational and storage capacities and, secondly, the scope of cloud systems to be expanded by dealing with real-life components (Cavalcante et al. 2016; Darwish and Hassanien 2017). But the integration of IoT and cloud generates new problems, as for example the increase in the service latency, which is a critical requirement for some e-health or gaming IoT applications (Dastjerdi and Buyya 2016). Fog computing emerged to cover these limitations and it opened a broad range of renewed challenges in topics such as security, reliability, sustainability, scaling, IoT marketplaces, or resource management (Varghese and Buyya 2017; Mahmud et al. 2018; Chiang and Zhang 2016). Fog computing exploits resource capacities of networking components to allocate services or to store data. Therefore, firstly, services are closer to the clients and, secondly, data do not need to be transferred in their entirety to the cloud. Both issues have a notable performance impact, reducing the network latency and usage. Resource managers are important components to improve this performance.

On the one hand, fog data managers need to deal with the selection of the data that is stored in the cloud providers and the data that is stored in fog devices. For the latter case, placement policies need to be considered to optimize the use of the storage capacities of the fog layer. On the other hand, fog service orchestrators need to decide the allocation of the services in the fog devices to improve the Quality of Service (QoS) and the usage of the fog layer considering scalability and dynamicity requirements (Wen et al. 2017). They should deal with the placement and scalability of user-shared services by creating, in the fog devices, additional instances of services already available in the cloud.


IoT environments, as for example smart cities, usually have thousands, even millions, of devices, clients or services. Centralized management could result in a non-affordable optimization. Previous solutions to solve the Fog Service Placement Problem (FSPP) are based on centralized approaches, with the following main drawbacks: (a) scalability, as the execution time of a global optimization algorithm usually grows as the number of devices to manage increases; (b) network overhead, as the fog devices need to send their performance data to the centralized broker; (c) reliability, as the broker is a single point of failure (SPOF); (d) decision latency, as the decisions need to be transmitted from the centralized broker to each device; and (e) heterogeneity, as the central broker needs to deal with data from very different types of fog devices, and the communication protocol or the orchestrating mechanism could also be very different.

We propose to address the problem of service placement in a decentralized way, where each device takes the local optimization decisions by considering only its own resource and usage data. Consequently, the execution time is independent of the number of devices, performance data is not sent between devices, SPOFs are avoided, the decisions are taken locally, and all the resources and the inputs of the algorithm are homogeneous inside the same device. We propose to base the decisions of the algorithm on placing the most popular services as close as possible to the clients, using the hop distance as the indicator of this proximity.

We raise two research questions: (RQ1) Is it possible to define a local and decentralized optimization algorithm that would be able to place the most popular services in the fog devices with the smallest hop distances to the clients? (RQ2) Would this placement policy result in an optimization of the performance of the fog computing architecture in terms of, for example, service latency or network usage?

The three main contributions of our work are: (a) an up-to-date brief survey of research addressing the optimization of the Fog Service Placement Problem (FSPP); (b) a decentralized and low overhead proposal to reduce the network usage in a fog computing architecture; (c) an experimental validation based on a microservice-based application.

2 Related work

Previous fog service management algorithms have explored a wide range of optimization techniques, such as heuristics, greedy algorithms, linear programming, or genetic algorithms, among others. These service managers have addressed several aspects of the fog resources, such as placement, scheduling, allocation, provisioning, or mapping for services, resources, clients, tasks, virtual machines, or even fog colonies. These solutions have been defined for environments such as industrial IoT, smart cities, eHealth, or mobile micro-clouds.

The characteristics of the related work are summarized in Table 1, indicating the IoT scope for which the solution was proposed (column Scope), the optimization purpose (column Objective functions), the elements that the optimization algorithm manages to improve the objective functions (column Decision variables), whether the broker manager is centralized or not (column Broker), the optimization algorithm (column Alg.), and the technique or tool used in the validation (column Val.). We explain the related studies grouped by their optimization algorithms.

Linear programming is a common approach for resource optimization. Arkian et al. (2017) formulated a mixed-integer non-linear program that was linearized into a mixed integer linear program for the optimization of cost. Gu et al. (2017) also used this optimization approach. They integrated medical cyber-physical systems and fog computing and optimized the cost by considering the base station association, task distribution and virtual machine placement. The work of Velasquez et al. (2017) was addressed to reduce the number of service migrations and the network latency.

Huang et al. (2014a) presented a quadratic programming formulation for the problem of reducing the power consumption in fog architectures by co-locating neighboring services on the same devices. Huang et al. (2014b) also presented a previous work where the problem was modeled as a Maximum Weighted Independent Set problem (MWIS). Souza et al. (2016) studied an allocation algorithm based on Integer Linear Programming that minimized the service latencies in a fog computing environment, while the fulfillment of capacity requirements was guaranteed. Skarlat et al. (2017b) studied the service placement problem by considering the QoS requirements of the applications executed in a fog architecture. Zeng et al. (2016) proposed to manage the task scheduling, the task storage placement and the I/O balanced use to reduce the task completion time in software-defined embedded systems. Barcelo et al. (2016) formulated a service placement optimization to reduce power consumption in IoT environments as a minimum mixed-cost flow problem.

A second set of studies were implemented with genetic algorithms (GA). Wen et al. (2017) presented a parallel GA to reduce the response time. Skarlat et al. (2017a) introduced the concept of fog colonies for a hierarchical optimization process. Each colony used a GA to decide the services that were placed in the colonies and which ones were propagated to neighbor colonies. Yang et al. (2016) compared three optimization algorithms based on a greedy heuristic, a linear programming and a GA. Additionally, a model to predict


Table 1  Summary of the brief survey for Fog Service Placement Problem approaches

Authors | Scope^a | Objective functions | Decision variables | Broker^b | Alg.^c | Val.^d
Arkian (2017) | C | Cost | Client association, resource provisioning, task distribution, VM placement | C | MILP | O
Gu (2017) | H | Cost | Base station association, task distribution, VM placement | C | ILP | O
Velasquez (2017) | G | Network latency, service migrations | Service placement | C | ILP | –
Huang (2014a, b) | G | Communication power consumption | Service merging and placement | C | ILP, MWIS | O
Souza (2016) | G | Service delay | Service placement | C | ILP | P
Skarlat (2017b) | G | Deadline violations, cost, response time | Service placement | C | ILP | F
Zeng (2016) | E | Task completion time | Task scheduling, task image placement, workload balancing | D | MILP | O
Barcelo (2016) | G | Power consumption | Service placement | C | ILP | O
Wen (2017) | G | Response time, QoS | Service placement | C | GA | O
Skarlat (2017a) | F | Resource waste, execution times | Fog colony service placement | H | GA | F
Yang (2016) | M | Cost, latency and migration | Service placement and load dispatching | C | Greedy, ILP, GA | O
Ni (2017) | G | Response time, cost | Resource allocation and scheduling | C | PN | O
Urgaonkar (2015a) | G | Queue length, cost | Service migration | C | D-MDP | O
Brogi (2017, 2017) | G | Resource consumption, QoS | Service placement | C | MC | O
Colistra (2014) | G | Resource usage | Resource allocation | Ds | Cons. | O
Wang (2015, 2015b) | M | Cost | Look-ahead service placement | C | SP and MDP | O
Billet (2014) | G | Resource usage, power consumption and load balancing | Task placement | C | BP, Heur., Greedy | O
Taneja (2017) | G | Network usage, power consumption, latency | Service placement | C | Own | F
Wang (2017) | M | Load balancing | Service placement | C | Own | O
Bittencourt (2017) | G | Network usage and delay | Service placement | C | Own | F
Farris (2015) | F | Executed tasks | Resource provisioning | C | Own | O
Deng (2015) | G | Power consumption, response time | Workload placement | C | D | O
Saurez (2016) | G | Latency | Service placement and migration | C | Own | B
Venticinque (2018) | G | Performance | Service placement | C | BET | T
Gupta (2017) | G | Energy, network usage, latency | Service placement | D | FIFO | F
(This work) | G | Hop count | Service placement | D | Own | F

^a Scope: G general IoT systems; I industrial IoT systems; H eHealth IoT systems; C crowdsensing apps; E embedded systems; M mobile micro-clouds (MMC)
^b Broker: C centralized; D decentralized; Ds distributed; H hierarchical (two broker levels, global and fog colonies)
^c Algorithm: ILP integer linear programming; MILP mixed integer linear programming; GA genetic algorithm; MDP Markov decision process; D-MDP decoupled Markov decision process; MWIS maximum weighted independent set; ADMM alternating direction method of multipliers; VM virtual machine; PN Petri nets; MC Monte Carlo; Cons. consensus algorithm; SP shortest path; Heur. heuristics; BP binary programming; D decomposition; BET benchmarking, evaluation and testing; Own own algorithm
^d Validation: O own simulation; F iFogSim; A analytically; B benchmarking; P PuLP and Gurobi Optimizer; T testbed

the distribution of users' future requests was also presented to adapt the service location.

Linear programming and GA are the most popular solutions, but there is also an important number of related studies exploring other alternatives, such as Petri nets, Markov decision processes, or new and own algorithms. Ni et al. (2017) proposed to use priced timed Petri nets (PTPNs) for resource allocation in fog computing. Their proposal optimized price and time costs to complete a task. Their simulation results showed a higher efficiency than static allocation strategies. Urgaonkar et al. (2015a) addressed the objective of minimizing the operational costs of service placement in fog, while the performance was guaranteed. They modeled the scenario as a sequential decision using a Markov Decision Process (MDP), decoupling the problem into two independent MDPs and, finally, optimizing with the Lyapunov technique. Brogi et al. presented the tool FogTorch in two research works (Brogi and Forti 2017; Brogi et al. 2017). The first one presented the model for the QoS-aware deployment of multicomponent IoT applications in fog infrastructures. The second one presented the results of using Monte Carlo simulations in the FogTorch tool, classifying the deployments in terms of QoS and resource consumptions. Do et al. (2015) proposed a decentralized algorithm for resource allocation in fog environments for the specific case of video streaming. Their solution was based on the proximal algorithm and the alternating direction method of multipliers. The model was validated analytically. Colistra et al. (2014) adapted the consensus algorithm to allow devices to cooperate in the distributed resource allocation problem to adequately share the resources. Wang et al. (2015) presented a service placement algorithm that used predicted costs for look-ahead optimization of the total provider cost for a given period of time. The authors validated their proposal using real-world user-mobility traces in simulations. Urgaonkar et al. (2015b) also proposed a solution for this last scenario but considering an MDP. Billet and Issarny (2014) formulated the task allocation problem in IoT as a binary linear optimization whose computational cost was reduced by including a heuristic and a greedy algorithm.

Taneja and Davy (2017) proposed a service placement algorithm for efficient use of the network and power consumption. The algorithm sequentially assigned the highest demanding application modules to the nodes with the biggest capacities. Wang et al. (2017) proposed a first optimization stage modeled as linear graphs, later extended to tree application models using algorithms with polynomial-logarithmic ratios. Bittencourt et al. (2017) compared three service allocation algorithms to illustrate that these strategies depend on the demand coming from mobile users and can take advantage of fog proximity and cloud elasticity. Farris et al. (2015) proposed that the edge nodes orchestrate the provisioning of resources in micro-cloud federations using a decomposer module. Deng et al. (2015) studied the trade-off between power consumption and delay in fog computing. They decomposed the initial allocation problem into three subproblems independently solved with convex optimization, ILP and the Hungarian method. Saurez et al. (2016) presented a service migration algorithm based on the mobility pattern of the sensors and the dynamic computational needs of the applications. The solution was built using containers and the experimental results showed improvements in the migration latencies. Venticinque and Amato (2018) presented a methodology based on three phases: Benchmarking, Evaluation and Testing. This methodology helps developers to meet application requirements and to optimize performance and utilization of available resources.

The developers of the iFogSim simulator implemented a decentralized service placement policy called Edgewards (Gupta et al. 2017). It placed the services in each single path between clients and the cloud. The services were placed in a First-In-First-Allocated policy. Services from different paths were merged if they were placed in the same device and migrated to upper devices if necessary. Additionally, instances from other paths in upper devices of a candidate service were considered and the candidate was placed in the upper device to merge both instances, even when closer devices had enough resources. Although the placement algorithm was decentralized, it needed some general information of the placement status, such as the already placed services for each path between the clients and the cloud. Their experiments compared the results with the allocation of all the services in the cloud provider.

Most of those previous works were modeled as a centralized broker or orchestrator that needs information from all the components in the system (fog devices, clients, cloud, services) and takes global decisions to optimize the service placement. Problems with the scalability and the computational complexity of the algorithm are clear when the number of elements is very high such as, for example, in smart cities. Decentralized service orchestration in fog computing arises as a current open challenge. It is necessary to define solutions that deal with a smaller number of elements, as in


Fig. 1  Fog computing architecture

Fig. 2  Example of network delay benefits for a service migration scheme within the shortest path

the case of fog colonies (Skarlat et al. 2017a), or even completely decentralized optimizations. We propose a decentralized service placement orchestrator to minimize the hop count of the most requested services.

3 Architecture proposal

Fog computing is an architecture pattern where clients request services to cloud providers through a network composed of fog devices. These devices have computational and storage capacities that allow them to allocate data and instances of the cloud services. Therefore, data and service management policies are needed to decide when and where to place the services and the data. Our architecture proposal is focused on the fog service placement problem (FSPP).

A general fog computing architecture is represented in Fig. 1, where three layers can be identified: cloud layer, fog layer and client layer. The architecture can be modeled as a graph where the nodes are the devices and the edges the direct network links between devices. Three types of devices can be differentiated: a device for the cloud provider of the cloud layer; the gateways, which are the access points for the clients; and the fog devices, the network devices between the cloud provider and the gateways. All the devices have resources to allocate and execute services.

We consider that the applications follow a microservice-based development pattern, which is increasingly being used in IoT applications (Vogler et al. 2016; Krylovskiy et al. 2015; Saurez et al. 2016). This type of application is modeled as a set of small and stateless services that interoperate between them to accomplish a complex task (Balalaie et al. 2016). Thus, the services can be easily scaled up, by downloading the code and executing it, or scaled down, by just stopping the execution of the service. We assume that there is at least one instance of each service running in the cloud provider.

We base our proposal on the idea that the best placement of the services is in devices as close as possible to the clients' gateways. In an unreal scenario with unlimited resources in the fog devices, the optimal service placement would be to place instances of each service in all the gateways. Since the resource capacity limits the number of service instances in the devices, some services need to be migrated to other devices. Our proposal migrates those services to devices within the shortest path between the gateway and the cloud provider.

Migrating the services along the devices in the shortest path with the cloud provider, instead of any other devices around them, is based on the idea that, sooner or later, the execution flow of the interrelated services would need to execute some service in the cloud provider, moreover if data need to be stored in a centralized place. Thus, if the migrated services are placed in devices out of the shortest path, the total path to the cloud provider will be increased. Figure 2 illustrates an example where a service S2 is migrated from a device D1. The application communication time is one step bigger when service S2 is placed in D2, a device out of the shortest path between D1 and the cloud provider, with regard to the case of placing it in the next device in the shortest path, D3.

We propose a priority rule that places the most popular services, in terms of service requests, in the devices that are closer to the clients, and migrates the less requested ones to upper devices in the shortest path with the cloud provider when the closest device does not have enough free resources. For the sake of simplicity, in the rest of the article upper devices refers to any of the devices that are in the shortest path between the device and the cloud provider.

We finally consider that once a service is migrated to an upper device, it is also better to migrate all its interoperated services, to avoid device loops in the service execution flow. An illustrative example is shown in Fig. 3. Consider a service execution flow S1 → S2 → S3 → S4 and that the service S2 needs to be migrated to the upper device D3. If we only migrate service S2 to device D3, the application makespan will be increased in two communication steps, with regard to the case of also migrating service S3 to D3,


which will keep the same application makespan. This strategy is reinforced with the idea that once a service has been migrated to an upper device due to a lack of free resources, the placement of any of the remaining services will also be very unlikely in the lower device.

We propose to execute a decentralized service broker in each device to implement our strategy. Figure 4 shows the components of the device broker: the Service Usage Monitor (SUM), which gathers information about the services' performance and resource usage and about the services' interoperability; the Service Migration (SM), which sends allocation requests to other devices; the Service Placement Request Manager (SPRM), the local and decentralized optimization algorithm that decides if a given service is allocated or, on the contrary, migrated to another device; and the Service Popularity Monitor (SPM), which gathers information about the request rate of each service.

When a new client is connected to one leaf device, or gateway, one Service Allocation Request (SAR) is sent to the gateway for each service the client requires. All these requests are received by the SPRM, which decides if the service is placed in the current device or whether the SAR is shifted to upper devices, based on the algorithm explained in Sect. 4.2. If the SAR is accepted by the SPRM, a service image is downloaded from the cloud provider to be executed in the current device. The placement process could generate deallocation of other services already instantiated in the device. A SAR is sent to upper devices for each deallocated service instance. Deallocated instances are just deleted from the device, as the new instances in the upper devices would be downloaded from the cloud provider. All this process is repeated recursively in the upper devices when a SAR is shifted.

The algorithm runs in each fog device and only considers variables that are obtained locally in the device by the defined monitors (SPM and SUM), such as the device service request rate or the device resources (demanded, used or available), among others. Sending performance or system data between the devices is not necessary, and delays in the decision due to network transmission or overheads in the network are avoided. Additionally, the overall computational complexity of the algorithm is very low, and the placement decision is obtained in a limited period of time; moreover, this time is not increased as the number of devices is scaled up.

Fig. 3  Example of network delay benefits for a service migration scheme with migration of interoperated services

Fig. 4  Decentralized service placement manager

4 Problem statement

4.1 System model

The Fog Service Placement Problem (FSPP) considers a set of clients C_n that request applications that are hosted in a cloud provider D_cloud. The applications are modeled as a set of service modules, S_x, that are related through many-to-many consumption relationships, cons: {S_x} → {S_x'}, as the microservice-based application development model defines (Balalaie et al. 2016). This model has also been proposed to deploy applications in fog computing (Vogler et al. 2016; Krylovskiy et al. 2015; Saurez et al. 2016). Applications are defined as a directed graph, where the nodes are the services and the edges indicate the services that are requested by other ones, i.e. their interoperability.

For an easier explanation of our algorithm in Sect. 4.2, we also consider the transitive closure of a service, represented as T^+_{S_x}. In graph theory, the transitive closure of a node is the set of nodes that can be reached from that node, i.e., there is at least one path between both nodes (Munro 1971). In our particular case, the transitive closure of a service represents all the services that need to be executed when the service is requested. Figure 5 shows an example of an application and each of the transitive closures, T^+_{S_x}, obtained for each service.
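As an illustration of this notation, the following minimal Python sketch computes T^+_{S_x} for every service of a small dependency graph. The graph and the service names are hypothetical (chosen so that they line up with the subsets used later in the example of Sect. 4.2); this is not the authors' code.

```python
# Minimal sketch (assumed, not from the paper): transitive closure T+_{Sx}
# of each service in a microservice dependency graph, computed with a DFS.

def transitive_closure(cons):
    """cons maps each service to the set of services it requests directly.
    Returns, for each service, the set of services executed when it is
    requested (the service itself plus everything reachable through cons)."""
    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            for nxt in cons.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen
    return {service: reachable(service) for service in cons}

# Hypothetical application fragment in the spirit of Fig. 5.
cons = {
    "S1": {"S2", "S5"},
    "S2": {"S6"},
    "S5": {"S6"},
    "S6": set(),
}
print(transitive_closure(cons))
# e.g. T+_{S1} = {'S1', 'S2', 'S5', 'S6'}, T+_{S6} = {'S6'}
```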


Fig. 5  Example of transitive closures for each service of an application

The clients request the applications in the cloud provider through a set of interconnected network devices, D_i. These network devices have processing capacities, and they are able to allocate service modules to reduce the network latency or hop distance between clients and services. The physical interconnections of the devices create a graph structure where the nodes are the devices and the edges are the direct network links between devices.

For a clearer explanation of the algorithm in Sect. 4.2, we also defined SP^{cloud}_{S_x} as the shortest path between a device and the cloud provider, that is, a path of ordered devices whose sum of the network distances of its constituent network links is the minimum among all the paths between the device and the cloud provider. Additionally, we defined the father of a device, father(D_i), as the first device in the shortest path to the cloud provider.

Several instances, S^y_x, of the same service, S_x, can be allocated across the system, i.e., the services can be horizontally scaled and clients, or other services, can request any of these instances. The allocation function is defined as a many-to-many relationship alloc: {S_x} → {D_i}, if we refer to the services, or as a many-to-one relationship alloc: {S^y_x} → {D_i}, if we refer to the instances. Similarly, the many-to-many relationship that represents the services allocated in a device is defined as alloc: {D_i} → {S_x}.

The clients need to be connected to the network to request the service modules. These clients are connected to one and only one leaf device, but several clients can be connected to the same leaf device. These clients are mobile devices, sensors, actuators, or others. This connection is modeled through a many-to-one relationship conn: {C_n} → {D_i}. Additionally, each client C_n in the system is characterized by its service request rate, λ^{C_n}_{S_x}. Consequently, each device D_i in the system can also be characterized by the request rates that it receives for each service, λ^{D_i}_{S_x}, calculated as the summation of the request rates of the clients whose shortest paths to the cloud provider (SP^{cloud}_{C_n}) include that device D_i:

\lambda_{S_x}^{D_i} = \sum_{C_n} \lambda_{S_x}^{C_n} \qquad \forall\, C_n \mid D_i \in SP_{C_n}^{cloud} \qquad (1)

The devices are characterized by their resource capacities. These resources are defined as a set of n values, one for each resource element, such as processor, memory, or storage. We represent the capacity of a device D_i as an n-tuple R^{cap}_{D_i} = ⟨r_0, r_1, …, r_{n−1}⟩. For the sake of simplicity, we have only considered one resource in this present study, the computational capacity of a device. Thus, the resource capacity is defined as R^{cap}_{D_i} = ⟨r_cpu⟩. Services allocated in a device generate a resource consumption that can be defined as a tuple R^{con}_{S_x} = ⟨r_0, r_1, …, r_{n−1}⟩, where the consumption of each individual resource element is indicated. As we have mentioned before, we only consider the CPU consumption in this study, so R^{con}_{S_x} = ⟨r_cpu⟩. The cloud provider is considered as a special device where resources are unlimited, as they can be scaled horizontally as much as necessary. The cloud resource capacity is defined as R_{cloud} = ⟨∞⟩. The total resource usage of a device, R^{u}_{D_i}, can consequently be calculated as the sum of the resource consumptions of all its allocated services multiplied by the request rate of each service:

R_{D_i}^{u} = \sum_{S_x} R_{S_x}^{con} \times \lambda_{S_x}^{D_i} \qquad \forall\, S_x \mid S_x \in alloc(D_i) \qquad (2)

Table 2 summarizes the variables defined in the system model that are used in the following sections of the article.

4.2 Optimization model

The optimization algorithm is based on the idea of placing the most popular services closer to the clients, as has been done traditionally in other architectures, as for example in content delivery networks with the most popular contents (Borst et al. 2010; Vakali and Pallis 2003) or in web caching (Guerrero et al. 2013). We use the service request rate to measure the most popular services.

The algorithm analyzes the request rates of each service in each device, and takes local decisions by migrating the less requested services in the device to upper devices, i.e., any device in the shortest path between the current device and the cloud provider. This decision is based on the idea explained in Sect. 3 and Fig. 2, which considers that once a service is migrated from the desired device, it is better to bring it closer to the cloud provider, i.e., to a device in the shortest path.

Additionally, if a service is migrated, all its interoperated services that are allocated in the same device are also migrated. This is based on the idea of the device loops also explained in Sect. 3 and in Fig. 3.
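Before detailing the algorithm, the following minimal sketch illustrates the per-device bookkeeping defined by Eqs. (1) and (2) in Sect. 4.1. The function names and data structures are assumptions made for illustration; this is not the authors' implementation.

```python
# Minimal sketch (assumed, not from the paper): per-device request rates of
# Eq. (1) and CPU usage of Eq. (2), computed only from local information.

from collections import defaultdict

def device_request_rates(device, client_rates, shortest_path_to_cloud):
    """Eq. (1): sum the rates of every client whose shortest path to the
    cloud provider traverses this device.  client_rates[c][s] is the rate
    that client c generates for service s."""
    rates = defaultdict(float)
    for client, per_service in client_rates.items():
        if device in shortest_path_to_cloud[client]:
            for service, rate in per_service.items():
                rates[service] += rate
    return dict(rates)

def device_usage(allocated, consumption, rates):
    """Eq. (2): CPU usage as consumption per request times request rate,
    summed over the services allocated in the device."""
    return sum(consumption[s] * rates.get(s, 0.0) for s in allocated)

# Hypothetical data: two clients behind the same gateway "d1".
client_rates = {"c1": {"S1": 0.25}, "c2": {"S1": 0.125, "S2": 0.05}}
paths = {"c1": ["d1", "d2", "cloud"], "c2": ["d1", "d2", "cloud"]}
rates_d1 = device_request_rates("d1", client_rates, paths)
print(rates_d1)                                        # {'S1': 0.375, 'S2': 0.05}
print(device_usage({"S1"}, {"S1": 1000.0}, rates_d1))  # 375.0 CPU units
```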


Table 2  Summary of the variables of the system model

Variable | Description
C_n | A client in the system
S_x | A service in the system
S^y_x | An instance of a service S_x
T^+_{S_x} | The set of services executed when S_x is requested
cons(S_x) | Function that returns the list of services that are requested by a given service S_x
D_i | A fog device in the system
D_cloud | Identification of the cloud provider
SP^{cloud}_{S_x} | The shortest path between S_x and the cloud provider
father(D_x) | Function that returns the first element in the shortest path between D_x and the cloud provider
alloc(S_x) | Function that returns the set of devices where a given service S_x is allocated
alloc(S^y_x) | Function that returns the device where a given service instance S^y_x is allocated
alloc(D_i) | Function that returns the set of services that a given device D_i allocates
R^{cap}_{D_i} | Tuple for the resource capacities of a device D_i
R^{con}_{S_x} | Tuple for the resource consumption required by a service S_x
R^{u}_{D_i} | Tuple for the total resource usage in a device D_i
λ^{C_n}_{S_x} | The request rate generated by a client C_n over a service S_x
λ^{D_i}_{S_x} | The request rate that arrives at a device D_i for the service S_x
conn(D_i) | Function that returns the list of clients connected to a given leaf device D_i
hop(D_i, D_i') | Hop count, number of devices, between D_i and D_i'

Algorithm 1 shows the pseudo code of our optimization policy. The SAR is only considered when the candidate service is not already allocated in the device (Line 1). The service is directly allocated when the device is the cloud provider (Line 3) or when it has enough free resources (Line 6). Additionally, the service allocation can be done only if the total capacity of the device is enough to satisfy the requirements of the service (Line 10); on the contrary, the SAR is shifted to the upper father device (Line 12).

If those previous conditions are met, it is necessary to deallocate other services from the device. Since our policy migrates all the interoperated services, instead of a single service, the set of candidates for the migration, 𝕄_{D_i}, is formed by all the possible subsets of interoperated services that are currently allocated in the device. Each subset of this candidate set is obtained from the intersection between the services allocated in the device and each of their transitive closures:

\mathbb{M}_{D_i} = \left\{ M_{S_x}^{D_i},\ \forall\, S_x \in alloc(D_i) \right\} \qquad (3)

where

M_{S_x}^{D_i} = alloc(D_i) \cap T_{S_x}^{+} \qquad (4)

For example, if we recover the example in Fig. 5 and we suppose that a given device currently allocates services S1, S2, S5, and S6, the migration candidate set is formed by the following service subsets: 𝕄_{D_i} = { {S1, S2, S5, S6}, {S2, S6}, {S5, S6}, {S6} }.

Our proposal migrates the candidates by ascending order of the request rates. This is done in line 19 of Algorithm 1, where M_min is the services' subset in 𝕄_{D_i} with the smallest request rate, and Λ_{M_min} is this smallest request rate. Thus, the sets in 𝕄_{D_i} are sequentially selected and deallocated from the least requested one to the most popular, until the freed resources are enough to allocate the candidate service in the device (Line 19) or until the remaining candidates have higher request rates (Line 28).

The deallocation is not done straightaway, because the algorithm needs to guarantee that the sets of services with lower rates release enough resources to satisfy the requirements of the candidate service. Thus, a list of pre-released services, 𝕄_deallocate, is created (Line 22) and they are finally deallocated only if the freed resources are enough (Lines 29–31).

The request rate of a services' subset, Λ_{M^{D_i}_{S_x}}, is calculated just by the summation of the single request rates, for that fog device, of each service in M^{D_i}_{S_x}:

\Lambda_{M_{S_x}^{D_i}} = \sum_{S_x \in M_{S_x}^{D_i}} \lambda_{S_x}^{D_i} \qquad (5)
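A minimal sketch (assumed helper names, not the authors' code) of how the candidate set 𝕄_{D_i} of Eqs. (3)–(4) and the subset rates of Eq. (5) could be built from the transitive closures:

```python
# Minimal sketch (assumed, not from the paper): migration candidate subsets
# of Eqs. (3)-(4) and their aggregated request rates of Eq. (5).

def migration_candidates(allocated, closures):
    """allocated: set of services placed in the device; closures: dict
    service -> transitive closure T+_{Sx} (service included)."""
    return {s: allocated & closures[s] for s in allocated}      # Eqs. (3)-(4)

def subset_rate(subset, device_rates):
    """Eq. (5): sum of the per-device rates of the services in the subset."""
    return sum(device_rates.get(s, 0.0) for s in subset)

# Hypothetical device allocating S1, S2, S5 and S6 (the example of Sect. 4.2).
closures = {"S1": {"S1", "S2", "S5", "S6"}, "S2": {"S2", "S6"},
            "S5": {"S5", "S6"}, "S6": {"S6"}}
allocated = {"S1", "S2", "S5", "S6"}
device_rates = {"S1": 0.2, "S2": 0.3, "S5": 0.1, "S6": 0.6}

candidates = migration_candidates(allocated, closures)
ordered = sorted(candidates.values(), key=lambda m: subset_rate(m, device_rates))
# ordered: candidate subsets from the least to the most requested,
# the order in which Line 19 of Algorithm 1 examines them.
```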

The allocation process starts with each new client connection to a leaf device. The service popularity monitor (SPM) creates a SAR each time a new client is detected. This detection is performed by analyzing the service requests that are received from the client.

We consider that the hop count is a good indicator to measure the service proximity. The hop count is the number of devices that a request of a client passes through to reach the device where the requested service is placed. Our algorithm's optimization objective is to reduce the hop count, but favoring the more popular services. We consider the weighted average hop count as an indicator to measure the proximity of the most popular services. We define it as the average of the distance between each service and the clients, weighted with their popularity in terms of request rate:

\text{Weighted average hop count} = \frac{\sum_{D_i, S_x^y} \lambda_{S_x}^{D_i} \times hop(C_n, D_i)}{\sum_{D_{i'}, S_{x'}^{y'}} \lambda_{S_{x'}}^{D_{i'}}} \qquad (6)

where hop(C_n, D_i) is the number of fog devices between the clients and the device that allocates a given instance of a service. A value of 1.0 for the hop count means that all the services are placed in the gateways or leaf devices. The maximum value for the hop count is the number of levels in the network.

The hypothesis of our second research question is that performance metrics, such as network usage or service latency, are improved when the average hop count is reduced.
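To make the metric concrete, a small sketch (assumed data structures, not the authors' implementation) of the weighted average hop count of Eq. (6):

```python
# Minimal sketch (assumed, not from the paper): weighted average hop count
# of Eq. (6).  placements is a list of (rate, hops) pairs, one per allocated
# service instance, where hops is the hop count between the clients and the
# device that hosts the instance.

def weighted_average_hop_count(placements):
    total_rate = sum(rate for rate, _ in placements)
    return sum(rate * hops for rate, hops in placements) / total_rate

# Hypothetical placements: a popular instance at the gateway (1 hop) and a
# less requested one two hops away.
print(weighted_average_hop_count([(0.6, 1), (0.1, 2)]))   # ~1.14
```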

Algorithm 1: Algorithm for module placement in devices

Input: S_x^y, D_i
 1  if D_i ∉ alloc(S_x) then                                       /* service not allocated in device */
 2      if D_i = D_cloud then                                      /* device is cloud provider */
 3          alloc(S_x^y) ← D_cloud
 4      else
 5          if R^{con}_{S_x} × λ^{D_i}_{S_x} < R^{cap}_{D_i} − R^{u}_{D_i} then   /* demanded resources < available resources */
 6              alloc(S_x^y) ← D_i                                  /* device allocates the service */
 7
 8          else
 9              D_father ← father(D_i)
10              if R^{con}_{S_x} × λ^{D_i}_{S_x} > R^{cap}_{D_i} then   /* demanded resources exceed total device capacity */
11
12                  Placement(S_x^y, D_father)                      /* candidate service migrated to father device */
13              else
14                  R_toFree ← R^{con}_{S_x} × λ^{D_i}_{S_x} − (R^{cap}_{D_i} − R^{u}_{D_i})   /* demanded − available resources */
15                  M_deallocate ← ∅
16                  S_allocated ← alloc(D_i)
17                  Λ ← calculateRequestRates(𝕄_{D_i})
18                  M_ordered ← orderAscBy(𝕄_{D_i}, Λ)              /* request rate of each subset, ordered ascending */
19                  while M_min, Λ_min ← M_ordered.next() do
20                      if λ^{D_i}_{S_x} > Λ_min then               /* candidate service has higher request rate */
21
22                          M_deallocate ← M_deallocate ∪ M_min     /* add service set to released list */
23                          S_allocated ← S_allocated − M_min       /* remove released services */
24                          for each S_x^y ∈ M_min do               /* released services from device */
25
26                              R_toFree ← R_toFree − R^{con}_{S_x} × λ^{D_i}_{S_x}
27                      else
28                          break
29                  if R_toFree ≤ 0.0 then                          /* enough resources once lower request rate services are released */
30                      for each S_x^y ∈ M_deallocate do
31                          Placement(S_x^y, D_father)              /* released services migrate to father device */
32                  else
33                      Placement(S_x^y, D_father)                  /* candidate service migrated to father device */
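For readers who prefer executable code, the following Python sketch mirrors the structure of Algorithm 1. It is a simplified, single-resource rendition under an assumed data model (the Device class and its fields are hypothetical); it is not the authors' iFogSim implementation, and it counts each released service only once when estimating the freed resources.

```python
# Simplified, self-contained sketch of the placement decision of Algorithm 1
# (assumed data model, single CPU resource; not the authors' iFogSim code).

class Device:
    def __init__(self, name, capacity, father=None, is_cloud=False):
        self.name, self.capacity = name, capacity
        self.father, self.is_cloud = father, is_cloud
        self.allocated = set()   # services placed here
        self.rates = {}          # lambda^{D_i}_{S_x}, gathered by the SPM monitor
        self.cpu = {}            # R^{con}_{S_x} per request, gathered by the SUM monitor
        self.closures = {}       # T+_{S_x} for each service

    def used(self):              # Eq. (2), CPU only
        return sum(self.cpu[s] * self.rates[s] for s in self.allocated)

    def demand(self, s):
        return self.cpu[s] * self.rates[s]

    def candidate_subsets_by_rate(self):     # Eqs. (3)-(5), ascending rate
        subsets = [self.allocated & self.closures[s] for s in self.allocated]
        return sorted(subsets, key=lambda m: sum(self.rates[s] for s in m))

    def subset_rate(self, subset):
        return sum(self.rates[s] for s in subset)


def place(service, device):
    if service in device.allocated:                               # Line 1
        return
    if device.is_cloud:                                           # Lines 2-3
        device.allocated.add(service)
        return
    if device.demand(service) < device.capacity - device.used():  # Lines 5-6
        device.allocated.add(service)
        return
    father = device.father                                        # Line 9
    if device.demand(service) > device.capacity:                  # Lines 10-12
        place(service, father)
        return
    to_free = device.demand(service) - (device.capacity - device.used())  # Line 14
    released = set()                                              # Line 15
    for subset in device.candidate_subsets_by_rate():             # Lines 17-19
        if device.rates[service] <= device.subset_rate(subset):
            break                                                 # Lines 27-28
        new = subset - released
        released |= subset                                        # Lines 22-23
        to_free -= sum(device.demand(s) for s in new)             # Line 26
        if to_free <= 0.0:
            break
    if to_free <= 0.0:                                            # Lines 29-31
        device.allocated -= released
        device.allocated.add(service)
        for s in released:
            place(s, father)                                      # released services move up
    else:                                                         # Lines 32-33
        place(service, father)
```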


Table 3  Summary of experimental settings

Element | Parameter | Units | Value
Cloud | CPU | MIPS | 4,480,000
Cloud | RAM | MB | 4,000,000
Cloud | Bandwidth (up/down) | bytes/ms | 20000 + 20000
Cloud | Link latency | ms | 100
Fog devices | CPU | MIPS | 2800
Fog devices | RAM | MB | 4000
Fog devices | Bandwidth (up/down) | bytes/ms | 20000 + 20000
Fog devices | Link latency | ms | 2
Applications | # applications | – | [1, 2, 3, 4, 5]
Services | RAM | MB | 1
Service edges | CPU | Instr. × 10^6 | 1000
Service edges | Bandwidth | bytes | 10
Service edges | Selectivity | Fractional | 1
Users | Request rate | req/ms | [1/10, 1/20, 1/25, 1/30, 1/35]
Users | # users per gateway | – | [1, 2, 3, 4, 5]
Network topology | # levels | – | [1, 2, 3, 4, 5]
Network topology | # child devices | – | [1, 2, 3, 4, 5]

Fig. 6  Services, interoperability and application edges settings for Sock shop demo application

5 Evaluation

The evaluation of our proposal was done by simulating a microservice-based application in the iFogSim simulator (Gupta et al. 2017). The simulator's service placement policy was modified by extending the class ModulePlacement. The results of our algorithm were compared with the ones obtained with the simulator's built-in placement policy (Edgewards), since it was, to the best of our knowledge, the only previous decentralized service placement policy.

Several scenario settings were considered by modifying the number of clients, the number of applications and the number of fog devices. Table 3 includes a summary of the simulation settings.

The experiments were characterized with the same configuration parameters as in the study of the simulator's developers (Gupta et al. 2017). Those experiments considered a tree-based network topology where the number of devices was varied in two dimensions, by ranging the number of levels in the tree and by ranging the number of children of each fog device. Although our model allows the fog architecture to be modeled as a graph, iFogSim only allows the architecture to be defined as a tree¹.

The resource capacities for the cloud were defined high enough to behave as a device with unlimited resources. The memory and the bandwidth of the fog devices were also high enough to allocate as many services as necessary, keeping the computational capacity as the only resource limitation in the service placement, since we only considered CPU resources (Sect. 4.2).

A microservice-based application was modeled in the simulator, as this type of development pattern is also common in IoT applications (Vogler et al. 2016; Krylovskiy et al. 2015; Saurez et al. 2016). We modeled the Socks Shop (Weaveworks and ContainerSolutions 2016) application, a microservice-based demo that was developed to test the benefits of deploying applications in a container platform. Its characterization and modeling parameters were obtained from previous research works (Guerrero et al. 2017). Figure 6 shows the application graph. The number of applications was varied by replicating the same application several times, but considering a different request rate for each replica.

6 Results

Our results are presented in terms of average hop count, Eq. 6, network usage, and service latency. The equations for the calculation of the two latter metrics, Eqs. 7 and 8, were obtained from the analysis of the source code of the iFogSim simulator.²

\text{Network usage} = \frac{\sum_{Requ(D_i, D_{i'})} T_{D_i, D_{i'}}^{lat} \times NetSize_{Requ(D_i, D_{i'})}}{\text{Simulation time}} \qquad (7)

¹ The devices of iFogSim are related with a list of children identifiers and just one father, as can be seen in lines 59 and 68 of the class https://github.com/Cloudslab/iFogSim/blob/master/src/org/fog/entities/FogDevice.java.
² Obtained from the analysis of the source code available in https://github.com/harshitgupta1337/fogsim.

Fig. 7  Results for hop count (panels a–d)

where T^{lat}_{D_i,D_i'} is the network latency between the devices that are the origin and the target of the request, and NetSize_{Requ(D_i,D_i')} is the total size of the request sent through the network. The network usage is calculated as the sum of the network usages generated by each request Requ(D_i, D_i') sent during the total simulation time.

The service latency is measured in the simulator as the average time to execute a path of interoperated services, called an application edge loop. This service latency is calculated as the time between the point in time the request for the first service in the path arrives, t_{S_first}, and the point in time the last service in the path ends its execution, t_{S_end}:

\text{Service latency} = \frac{\sum_{Requ} \left( t_{S_{end}} - t_{S_{first}} \right)}{\left| Requ \right|} \qquad \forall\, Requ \in Loop(S_{first}, S_{end}) \qquad (8)

In the following figures, the results obtained with our algorithm are labeled as Pop and the ones for the Edgewards policy of iFogSim are labeled as Edge.

The first set of figures (Figs. 7, 8, 9, 10) includes four subfigures, one for each of the size variations in the execution settings: (a) variations in the number of users connected to each gateway, to evaluate different levels of workload; (b) variations in the number of applications available in the system, to evaluate different numbers of services to place, i.e., different numbers of fog devices allocating services; (c) variations in the number of levels of fog devices between the users and the cloud provider, to evaluate the influence of the path length; and finally, (d) variations in the number of children devices that each fog device is connected to, with the objective of evaluating the influence of the number of devices.

Figure 7 shows the results for the hop count. It plots the weighted average (calculated with Eq. 6 and labeled as weighted) and the arithmetic mean (labeled as arithmetic). The arithmetic mean represents the overall proximity between the users and all the services. On the contrary, the weighted mean represents how close the most requested services are to the clients.

Figure 8 shows the results calculated by the simulator with respect to the network usages. Figure 9 shows the latency results for a representative service loop of the application. We chose the loop of the edge, frontend, orders, and accounts services as it includes the service with the highest rate, the accounts service, with 3.0 requests for each request that arrives at the edge. The value that the simulator calculates for the loop represents the time between the edge server being requested and the accounts one finishing. To illustrate the benefits of our algorithm for the services with the highest request, the plots represent the service latency for the loops of the applications with the lowest request rate, labeled as lowest, and the highest request rate, labeled as highest.
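As a reference for how the two metrics are obtained, a small sketch (assumed record formats, not iFogSim's actual source) that mirrors Eqs. (7) and (8):

```python
# Minimal sketch (assumed, not iFogSim's source): the two performance
# metrics of Eqs. (7) and (8).

def network_usage(requests, simulation_time):
    """requests: iterable of (link_latency, request_size) tuples, one per
    request sent through the network during the simulation (Eq. 7)."""
    return sum(lat * size for lat, size in requests) / simulation_time

def loop_latency(loop_executions):
    """loop_executions: iterable of (t_first_request, t_last_end) pairs for
    one application edge loop (Eq. 8)."""
    spans = [end - start for start, end in loop_executions]
    return sum(spans) / len(spans)

# Hypothetical values.
print(network_usage([(2, 10), (100, 10)], simulation_time=1000.0))  # 1.02
print(loop_latency([(0.0, 12.5), (5.0, 16.0)]))                     # 11.75
```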

Fig. 8  Results for network usage (panels a–d)

Fig. 9  Results for service latency for the service loop edge, frontend, orders, accounts of the applications with the highest and the lowest request rates (panels a–d)


Fig. 10  Results for the number of migrations (panels a–d)

Fig. 11  CPU usage of the devices with regard to their topology distribution. Experiment with 2 applications, 2 users, and 2 levels of fog devices

Figure 10 represents the total number of migrations performed during the service placement process. It is important to remember that the applications are defined as stateless services and, consequently, a migration consists of removing the current service instance and downloading a new instance from the cloud to the new device.


Fig. 12  Number of services in the devices with regard to their topology distribution. Experiment with 2 applications, 2 users, and 2 levels of fog devices

Fig. 13  Distribution of the services across the device topology with regard to their request rate. Experiment with 2 applications, 2 users, and 2 levels of fog devices

The last set of figures (Figs. 11, 12, 13) represents the relationship between the distance from the IoT devices and the service placement distribution, measured in terms of the hop count, with regard to the popularity of the services (request rate), the number of services in a device and the CPU usage. Those distributions are represented in one independent plot for each experiment size. Consequently, they are grouped in sets of five plots, one for each single size of a variation set.


Thus, there are four sets of five plots of figures for each of the three cases (request rate, number of services, and CPU usage). We present only some representative cases of those plots; in particular, we include the cases for the children variations.

Figure 11 shows the CPU usage of the devices by classifying the devices by their distance from the IoT devices (sensors). The distance is measured in terms of hop count (x-axis) and the CPU usage is the ratio between consumed and total resources (y-axis). Each point of the plot represents a device with its corresponding usage value. The number at each point indicates the total number of samples (devices) with the same hop count and CPU usage. The figure includes the results for the experiment with a size of 2 applications, 2 users per IoT gateway, 2 fog device levels and a range of 1–5 children devices per device.

Figure 12 is very similar to the previous one, with the only difference that it represents the number of services allocated in the devices instead of the CPU usage.

Finally, Fig. 13 represents how the services are distributed across the devices in the topology (classified by the hop count with the user) with regard to their request rate (y-axis). The request rate is measured in terms of frequency (the inverse of the time unit). Consequently, each point of the plot represents a service with its allocation in the topology (the hop count of the device where it is allocated) and its request rate.

7 Discussion

We use the weighted average hop count as an indicator of the proximity between the clients and the most popular services. On the contrary, the arithmetic average hop count is an indicator of the proximity between all the services and the clients, independently of their request ratios. Therefore, the first research question (RQ1) is answered by the analysis of the series labeled as weighted in Fig. 7. This metric is, in general terms, smaller for the case of our policy than for Edgewards, obtaining an overall improvement of 12%. There are only two cases in which the Edgewards policy shows smaller values (2 users, 3 levels and 2 applications). On the contrary, the Edgewards policy shows smaller values for the arithmetic average hop count. It means that our policy obtains better proximity for the most requested services, at the expense of increasing the overall distance of the services. We finally observe that cases with only 1 user can be managed by placing all the services in the leaf devices (gateways), since the hop count value is 1.0.

The second research question (RQ2) is answered by the analysis of the results in Figs. 8 and 9. Firstly, our policy's network usage is always smaller than Edgewards'. The improvement in this metric is measured with a speedup between 23 and 362%, with a mean value of 114%. Secondly, the analysis of the service latency is done separately for the applications with the highest and the lowest request rates. In general terms, our policy shows an improvement of the service latency for the application with the highest rate, at the expense of a degradation for the application with the lowest request rate in some experiments.

The benefit of our policy for the application with the highest request rate is observed in Fig. 9, where the series Pop (highest) are the ones with the smallest values. Our policy's improvement is measured in a speedup around 300 and 500%, even obtaining a speedup of 1300% for the second to last case of the number of users variations.

The degradation of our policy for the less requested application is observed, mainly, for the experiments where the number of applications is increased, i.e. the cases with higher workload in the system. On the contrary, our policy is better even for the low requested services when the workload of the system is low. This is expected since the more application replicas, the more devices allocating services and, consequently, the less requested applications are much further from the clients, and their latencies are increased. The Edgewards policy shows improvements ranging from 60 to 250% for this second type of applications in the high workload experiments.

From the analysis of the number of migrations required for the deployment of the services (Fig. 10), it is observed that our solution clearly needs a higher number of migrations (or service deployments from the cloud provider). This generates a higher network usage due to the download of the services from the cloud provider. It is important to highlight that this process is only performed during the deployment of the application. This migration cost is made up for by the benefits of our service distribution, except for the cases with high rates of application deployments.

It is important to highlight that the number of devices influences neither the latency nor the hop count in the experiments for the variation in the number of levels or children. This is explained because all the service placements are done in devices in the lowest levels, and devices from upper levels are not necessary; consequently, the service latency is constant as the number of devices is increased. This is also validated with the results of, for example, Fig. 12, where the number of allocated services is 0 for the devices with a hop count higher than 2. On the contrary, the network usage is influenced because the number of connections between the devices is varied, and this metric is influenced by the number of those connections.

Figure 12 also shows that our policy migrates more services to upper devices than the Edgewards algorithm. This is because we do not migrate only one service but also all the consumed services (M^{D_i}_{S_x}) to avoid device loops in the


service execution flow. By this, the less requested services References


are placed in the devices with a hop distances of 2, and the
most requested one in the devices with hop distance of 1. Arkian HR, Diyanat A, Pourkhalili A (2017) Mist: Fog-based data
Consequently, we can also observed how the devices in the analytics scheme with cost-efficient resource provisioning for iot
crowdsensing applications. J Netw Comput Appl 82(Supplement
lower levels of the topology have smaller usages than in the C):152–165. https​://doi.org/10.1016/j.jnca.2017.01.012. http://
case of the Edgewards (Fig. 11). This offers a side effect of www.scien​cedir​ect.com/scien​ce/artic​le/pii/S1084​80451​73001​88
a lower saturation of the devices and an evenly distribution Atzori L, Iera A, Morabito G (2010) The internet of things: a survey.
of services. Comput Netw 54(15):2787–2805
Balalaie A, Heydarnoori A, Jamshidi P (2016) Microservices archi-
Finally, Fig. 13 reflects how the services that are placed tecture enables devops: migration to a cloud-native architecture.
in lower devices (devices with a hop count of 1) have higher IEEE Softw 33(3):42–52. https​://doi.org/10.1109/MS.2016.64
rates in the case of our policy with regard to the Edgewards. Barcelo M, Correa A, Llorca J, Tulino AM, Vicario JL, Morell A
Likewise, the devices with a hop count of 2 allocate services (2016) Iot-cloud service optimization in next generation smart
environments. IEEE J Sel Areas Commun 34(12):4077–4090.
with lower request rates. https​://doi.org/10.1109/JSAC.2016.26213​98
To sum up, we can conclude that our policy reduces the distance between the clients and the most requested services (RQ1), and that the latency of those services and the overall network usage are improved (RQ2). On the contrary, the latency of the less requested applications is degraded in the experiments with the highest workloads. This is obtained by increasing the number of service migrations.
8 Conclusions

We have presented a decentralized algorithm for the Fog Service Placement Problem that optimizes the distance between the clients and the most requested services. The algorithm is executed locally in each fog device, considering only performance and usage data obtained in the device itself. The placement decision migrates services with smaller request rates to upper devices as new placements become necessary. The interoperability between services is considered, and the migration of a service also involves the migration of all its consumed services.
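As a rough sketch of this local decision (not the algorithm's actual pseudocode; the device model, capacity measure, and names are assumptions), each device could keep its most requested services and push the least requested ones to its parent whenever a new placement does not fit:

```python
# Sketch only (hypothetical device model): every fog device applies the same
# rule with purely local information -- keep the most requested services,
# push the least requested ones to the parent device.
from dataclasses import dataclass, field

@dataclass
class FogDevice:
    capacity: int                                   # services it can host
    services: dict = field(default_factory=dict)    # service id -> request rate

def place_locally(device, service, rate, migrate_up):
    """Accept a placement and evict the least requested services upward."""
    device.services[service] = rate
    while len(device.services) > device.capacity:
        victim = min(device.services, key=device.services.get)
        device.services.pop(victim)
        migrate_up(victim)            # the parent device repeats the same policy

dev = FogDevice(capacity=2)
place_locally(dev, "s_popular", 120.0, migrate_up=print)
place_locally(dev, "s_medium", 40.0, migrate_up=print)
place_locally(dev, "s_rare", 5.0, migrate_up=print)    # prints: s_rare
```

In the actual policy such an eviction would also carry the victim's consumed services upward, as discussed in the previous section.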
We have evaluated our placement policy with the iFogSim simulator by modeling a microservice-based application with several experiment sizes. Our results have been compared with the simulator's built-in policy, Edgewards. The results showed that our policy reduces the distance between the clients and the most requested services, measured in terms of the weighted average hop count. Consequently, the network usage and the service latency of those services are improved. These improvements are obtained at the expense of an increment of the latency of the less requested services.

Our proposal shows that decentralized placement optimization in fog computing is able to improve the overall performance of the system. This opens new challenges in applying other optimization techniques to decentralized managers in order to avoid, among others, the scaling problems of centralized ones.