
CactoOpt: Workload Analysis and Classification for Optimizing Capacity Auto-scaling

Ahmed Ali-Eldin, Johan Tordsson, Erik Elmroth


Department of Computing Science
Umeå University, Sweden
{ahmeda, tordsson, elmroth}@cs.umu.se

Figure 1. WAC: A Workload Analysis and Classification Tool.

Figure 2. Classification results show that a cloud provider can use the tool to assign workloads to autoscalers with high precision. (Plot: classification accuracy against K for uniform and proportional neighbor weights; accuracy ranges roughly from 0.7 to 0.95.)
Table I
Number of scenarios in which every autoscaler outperforms the other autoscalers for the different workloads.

Autoscaler    Real workload scenarios    Generated workload scenarios
React [3]     6.55%                      0.10%
Reg [4]       33.72%                     61.33%
Adapt [5]     47.17%                     34.30%
Hist [6]      12.56%                     4.27%

The performance of server systems is affected by three main factors: the design of the system, the implementation of the system, and the load on the system [1]. Cloud autoscaling algorithms dynamically change the amount of resources allocated to a service according to the current and predicted future load, in order to reduce the effect a changing workload has on the performance of a cloud-running service while maintaining a lower total cost. Since there are no perfect predictors, designing an autoscaler that produces accurate predictions for all possible datacenter workloads and applications is infeasible [2]. Autoscalers' performance varies with the workloads and with dynamic changes in the system.

There is a plethora of suggested algorithms for autoscaling in the literature [7]. Each of these algorithms is designed with certain assumptions on the workload dynamics and the target QoS. If the workload or system dynamics change, the autoscaler might affect the performance in a negative manner [8]. It is thus important to detect such changes and to adaptively assign an autoscaler that is better suited to the new dynamics. This work presents WAC, a Workload Analysis and Classification tool for cloud infrastructures. The tool analyzes the history of running and new workloads and classifies them based on their dynamics. Workloads are then assigned to the most suitable available elasticity autoscaler based on the extracted characteristics, using a K-Nearest Neighbor (KNN) trained classifier, reducing the risk of bad predictions and improving the provided QoS. WAC is part of the CactoOpt tool for datacenter management [9].

Figure 1 shows the two main components of WAC. The analyzer extracts and quantifies two characteristics from the workload traces, periodicity and burstiness. The classifier uses these two characteristics to classify the workloads and assigns each workload to a suitable elasticity autoscaler. Periodicity describes the similarity between the value of a workload at any point in time and the workload's history. Burstiness measures load spikes in the system, when a running service suddenly experiences huge increases in the number of requests. While there are many other properties that can be used to characterize a workload, we choose these two to demonstrate the feasibility of the approach.

We test WAC with 14 real workloads and 55 synthetic workloads. In the experiments, we compare the performance of 4 different autoscalers from the literature. We assume different QoS requirements regarding, e.g., to what extent a service owner is willing to have extra machines provisioned in order to handle sudden spikes. Using simulations, we test the performance of the controllers in more than 500 scenarios for each workload. Table I summarizes the performance of the 4 autoscalers. We combine the scenarios shown in Table I into a single set. This set is randomly shuffled and divided into a training set for WAC and a test set used to test the ability of WAC to assign workloads to autoscalers. Figure 2 shows the classification accuracy of the classifier with the 4 controllers against K, the cluster size of KNN. The maximum accuracy achieved is 0.92, using equal weights for all neighbors and K = 3.
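The pipeline described above (quantify periodicity and burstiness, then assign an autoscaler with a uniform-weight K-Nearest Neighbor vote) can be sketched as follows. The concrete metrics, lag autocorrelation for periodicity and the Goh-Barabasi index for burstiness, as well as the training tuples, are illustrative assumptions rather than the exact formulations used in WAC:

```python
import math
from collections import Counter

def periodicity(trace, lag):
    """Autocorrelation of the trace at a candidate period `lag`:
    values near 1.0 mean the workload closely repeats its own history."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace)
    if var == 0:
        return 1.0
    cov = sum((trace[i] - mean) * (trace[i - lag] - mean) for i in range(lag, n))
    return cov / var

def burstiness(trace):
    """Goh-Barabasi burstiness index in [-1, 1]: (sigma - mu) / (sigma + mu).
    Positive values indicate spiky load, negative values smooth load."""
    n = len(trace)
    mu = sum(trace) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in trace) / n)
    return (sigma - mu) / (sigma + mu) if (sigma + mu) else 0.0

def knn_assign(features, training, k=3):
    """Uniform-weight KNN: majority vote among the k nearest
    (periodicity, burstiness) points in the training set."""
    nearest = sorted(training, key=lambda row: math.dist(features, row[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training set mapping feature pairs to the autoscalers of
# Table I; the feature values are invented for illustration only.
TRAINING = [((0.9, -0.5), "Hist"), ((0.8, -0.4), "Hist"),
            ((0.2, 0.6), "Adapt"), ((0.1, 0.7), "Adapt"),
            ((0.5, 0.1), "Reg")]

trace = [10, 50] * 4  # a strongly periodic toy workload
features = (periodicity(trace, lag=2), burstiness(trace))
print(knn_assign(features, TRAINING))  # -> Hist
```

The uniform majority vote here mirrors the "Uniform" weighting reported in Figure 2; weighting votes by inverse distance would correspond to the "Proportional" curve.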
REFERENCES

[1] D. Feitelson, Workload Modeling for Computer Systems Performance Evaluation. Cambridge University Press, 2015.

[2] F. Almeida Morais, F. Vilar Brasileiro, R. Vigolvino Lopes, R. Araujo Santos, W. Satterfield, and L. Rosa, "Autoflex: Service agnostic auto-scaling framework for IaaS deployment models," in IEEE/ACM CCGrid, 2013, pp. 42–49.

[3] T. Chieu, A. Mohindra, A. Karve, and A. Segal, "Dynamic scaling of web applications in a virtualized cloud computing environment," in IEEE ICEBE, 2009, pp. 281–286.

[4] W. Iqbal, M. N. Dailey, D. Carrera, and P. Janecek, "Adaptive resource provisioning for read intensive multi-tier applications in the cloud," FGCS, vol. 27, no. 6, pp. 871–879, 2011.

[5] A. Ali-Eldin, J. Tordsson, and E. Elmroth, "An adaptive hybrid elasticity controller for cloud infrastructures," in IEEE NOMS, 2012, pp. 204–212.

[6] B. Urgaonkar, P. Shenoy, A. Chandra, P. Goyal, and T. Wood, "Agile dynamic provisioning of multi-tier internet applications," ACM TAAS, vol. 3, no. 1, pp. 1:1–1:39, 2008.

[7] T. Lorido-Botran, J. Miguel-Alonso, and J. A. Lozano, "A review of auto-scaling techniques for elastic applications in cloud environments," Journal of Grid Computing, vol. 12, no. 4, pp. 559–592, 2014.

[8] M. Morari and E. Zafiriou, Robust Process Control. Prentice Hall, 1989.

[9] P.-O. Östberg et al., "The CACTOS vision of context-aware cloud topology optimization and simulation," in Proceedings of IEEE CloudCom 2014, the 6th International Conference on Cloud Computing Technology and Science, Singapore, 15-18 December 2014.
CactoOpt: Workload Analysis and Classification for Optimizing
Capacity Auto-scaling
Ahmed Ali-Eldin, Johan Tordsson & Erik Elmroth
Department of Computing Science
Umeå University, Sweden
{ahmeda, tordsson, elmroth}@cs.umu.se

Capacity Scaling

* Capacity auto-scaling for cloud services adapts to changing workload dynamics.
* Instead of provisioning for the maximum anticipated load, provision minimal resources to achieve a predefined level of QoS.

Fig. 1: Autoscalers adapt to the workload rather than costly overprovisioning.

Plethora of Autoscalers in the literature

* Each autoscaler is designed with certain assumptions and thus has a different performance.
* Figure 2 shows the performance of four published autoscalers [1, 2, 3, 4] compared to a perfect predictor.

WAC design

* The analyzer extracts and quantifies two characteristics from the workload traces, periodicity and burstiness.
* The classifier uses these two characteristics to classify the workloads and assigns each workload to a suitable elasticity autoscaler.
* Periodicity describes the similarity between the value of a workload at any point in time and the workload's history.
* Burstiness measures load spikes in the system, when a running service suddenly experiences huge increases in the number of requests.
* Both properties are chosen to demonstrate the feasibility of the approach.

Results

* We combine the scenarios shown in Figure 3 in a single set.
* The set is randomly shuffled and divided into a training set for WAC and a test set to test the ability of WAC to assign workloads to autoscalers.
* Figure 5 shows the classification accuracy of the classifier with the 4 controllers against K, the cluster size of KNN. The maximum accuracy achieved is 0.92 using equal weights for all neighbors and K = 3.

Fig. 5: Classification results show that a cloud provider can use the tool to assign workloads to autoscalers with high precision. (Plot: accuracy against K for uniform and proportional neighbor weights.)

* Savings using the tool vary with the scenarios and workloads considered. For one real workload taken from Wikipedia, the best performing controller underprovisions 40 times less than the worst performing controller.

Fig. 2: Autoscalers' performance differs.
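As a concrete illustration, a minimal threshold-based reactive controller in the spirit of React [1] might look as follows; the thresholds, `capacity_per_machine`, and the `headroom` spare-capacity parameter are hypothetical illustrations of the QoS trade-off, not the published algorithm:

```python
import math

def reactive_scale(current_machines, load, capacity_per_machine,
                   upper=0.8, lower=0.3, headroom=1):
    """Scale out when utilization exceeds `upper`, scale in below `lower`.
    `headroom` keeps spare machines provisioned to absorb sudden spikes,
    trading extra cost for QoS."""
    utilization = load / (current_machines * capacity_per_machine)
    if utilization > upper:
        # enough machines to bring utilization back under the threshold
        return math.ceil(load / (upper * capacity_per_machine)) + headroom
    if utilization < lower and current_machines > 1:
        # release machines but keep capacity for the current load plus headroom
        return max(1, math.ceil(load / (upper * capacity_per_machine))) + headroom
    return current_machines

print(reactive_scale(2, 200, 100))  # overloaded -> 4
print(reactive_scale(4, 50, 100))   # underloaded -> 2
print(reactive_scale(3, 150, 100))  # within band -> 3
```

A controller like this reacts only to the current load; the predictive and hybrid controllers in the comparison instead act on forecasts of future load, which is why their relative performance depends on the workload's periodicity and burstiness.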

* Figure 3 summarizes the performance of the 4 autoscalers using simulations with 55 synthetic and 14 real workloads. For each workload, we test using a few hundred QoS requirement specifications.

Fig. 3: Number of scenarios in which every autoscaler outperforms the other autoscalers for the different workloads.

Future work

* Better controller design.
* Testing on a large-scale testbed.
* Using more features for the classifier, such as response time measurements.
WAC: Workload Analysis and Classification for Cloud workloads

* We introduce WAC, a Workload Analysis and Classification tool for automatic selection of cloud auto-scaling methods based on workload dynamics and the QoS requirements of a service.
* Figure 4 shows the two main components of WAC.

Fig. 4: WAC: A Workload Analysis and Classification Tool.

References

[1] T. Chieu, A. Mohindra, A. Karve, and A. Segal, "Dynamic scaling of web applications in a virtualized cloud computing environment," in IEEE ICEBE, 2009, pp. 281–286.

[2] W. Iqbal, M. N. Dailey, D. Carrera, and P. Janecek, "Adaptive resource provisioning for read intensive multi-tier applications in the cloud," FGCS, vol. 27, no. 6, pp. 871–879, 2011.

[3] A. Ali-Eldin, J. Tordsson, and E. Elmroth, "An adaptive hybrid elasticity controller for cloud infrastructures," in IEEE NOMS, 2012, pp. 204–212.

[4] B. Urgaonkar, P. Shenoy, A. Chandra, P. Goyal, and T. Wood, "Agile dynamic provisioning of multi-tier internet applications," ACM TAAS, vol. 3, no. 1, pp. 1:1–1:39, 2008.

