By
Krishna Murthy P
Deloitte Consulting India Pvt. Ltd. HYDERABAD
Email: kpradhan@deloitte.com
Abstract –
Enterprise-level product development is growing beyond the boundaries of silo-ed enterprise applications to meet the
requirements of rapidly changing markets. Applications are getting integrated within and across the enterprise, thereby
metamorphosing into ecosystems. This has resulted in ever-increasing complexity and in a high number of changes these
systems have to go through. Needless to say, the tester's job grows more complex in proportion as well. For example, when a
product claims to scale to 1000 or more nodes/users/transactions/browser instances, the organization should be in a position
to test and certify accordingly. Most of the time, products and applications fail to be certified at that scale for lack of
infrastructure.
The above-mentioned challenges can be tackled by shifting the organizational IT landscape to Cloud computing. The focus should
be on Cloud-based testing with niche capabilities in the areas of performance testing, security testing, reliability testing, and
experience in virtualization technologies. Cloud computing is an innovation we can apply to bring economy to our solution in
terms of time, resources and, of course, money!
Applying Cloud-based testing facilities can help us accrue many benefits: reduction in capital expenditure (owing to zero
owned infrastructure), reduction in execution time (a rapid feedback cycle), quick turnaround (an increase in
productivity), and reduction in risk (an increase in quality).
Introduction–
The recent sharp downturn in the economy is forcing organizations to reconsider their approach towards IT investments. In a
world where companies are focused on improving efficiency and return on capital employed, organizations need to reconsider
how they can reduce their technology investments, or get a higher return on the same or incremental investments.
Testing is crucial to enhance user satisfaction and reduce support cost. However, testing requires organizations to invest in
people, tools and environments, and can take up a significant percentage of the available budget. Yet quality can never be
compromised. New ways of development and testing are enabling organizations to ensure higher quality with significantly
lower investments. These challenges can be tackled by shifting the organizational IT landscape to Cloud computing. The focus
should be on Cloud-based testing with niche capabilities in the areas of performance testing, security testing, reliability
testing, and experience in virtualization technologies. Cloud computing is an innovation we can apply to bring economy to our
solution in terms of time, resources and, of course, money!
Objectives –
Here are the key objectives we will focus on as we detail the rest of the document:
Bring a better-quality product to the Cloud (ensure the product is ready and can work in a Cloud environment)
Ensure the required planning is done for the product's testing cycle before moving to the Cloud
Ensure the business benefits of testing a product on the Cloud, and its related activities, are realized
Help educate and prepare the testing team with the knowledge and skills required for Cloud testing
Manageability
Most Cloud computing vendors currently offer infrastructure and platforms that lack strong management
capabilities. This is not unusual: throughout computing history, raw capabilities generally appear on the market first, and
then management of those raw capabilities becomes a differentiator when competition heats up. An example of a missing
management capability for cloud infrastructures is auto-scaling. Amazon EC2 claims to be elastic; however, this really means
that it has the potential to be elastic. Amazon EC2 will not automatically scale your application as your server becomes heavily
loaded; it is still up to the developer to manage that scalability problem. So who is tackling this problem? As a tester, one has to
keep the management team in the loop and build the required capability into the product so that it is self-scaling and cluster-enabled.
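As an illustration, a self-scaling product needs at least a decision rule of this shape. This is a minimal sketch; the thresholds and the add/remove-node semantics below are assumptions for illustration, not part of EC2 or any vendor API:

```python
# Sketch of a self-scaling decision rule. The 80%/20% thresholds are
# illustrative assumptions; a real product would tune them and wire the
# result to its own node launch/terminate hooks.

def scaling_decision(cpu_percent, node_count, high=80, low=20, min_nodes=1):
    """Return +1 to add a node, -1 to remove one, 0 to hold steady."""
    if cpu_percent > high:
        return +1                     # cluster is heavily loaded: scale out
    if cpu_percent < low and node_count > min_nodes:
        return -1                     # cluster is idle: scale in to save cost
    return 0

# Example: a loaded 3-node cluster asks for one more node.
print(scaling_decision(cpu_percent=92, node_count=3))   # -> 1
print(scaling_decision(cpu_percent=10, node_count=3))   # -> -1
print(scaling_decision(cpu_percent=50, node_count=3))   # -> 0
```

A loop that samples load periodically and acts on this decision is what turns "potentially elastic" infrastructure into an actually elastic product.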
Monitoring
Monitoring, whether for performance or availability, is critical to any enterprise. We are not talking about just how much
CPU or memory the machines are using; we are talking about the performance of transactions, disk I/O and the like. CPU and
memory usage are misleading most of the time in virtual environments. The only real measurement is how long your
transactions take and how much latency there is.
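A minimal sketch of that transaction-level timing, assuming the transaction is any callable (the stand-in function below simulates one); real monitoring would feed these numbers into a dashboard:

```python
# Time individual transactions and report latency percentiles, since CPU
# and memory figures can mislead in virtual environments.
import time
import statistics

def timed(transaction, *args):
    """Run a transaction and return (result, latency in seconds)."""
    start = time.perf_counter()
    result = transaction(*args)
    return result, time.perf_counter() - start

def fake_transaction():
    time.sleep(0.01)   # stand-in for a DB write or remote service call
    return "ok"

latencies = [timed(fake_transaction)[1] for _ in range(20)]
print(f"median: {statistics.median(latencies) * 1000:.1f} ms, "
      f"worst: {max(latencies) * 1000:.1f} ms")
```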
Autonomy testing determines how the service functions on its own, including any dependencies that may be present.
Integration testing is required to see how the service works when leveraged by other systems, systems that may be
known or unknown at the time of development.
Granularity testing determines whether the service was created with too coarse- or too fine-grained a leaning, which has
an effect on performance and on the value of the service.
Stability testing ensures that the services built won't fall down at the worst of times. This is usually simple regression
testing, with some integration testing here as well.
Performance testing is just what you would expect: determining whether the services can handle many
simultaneous requests, and whether any special architecture is required to ensure good performance, such as load
balancing with transactions.
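The simultaneous-request case can be sketched with a toy load harness; the simulated service call and its 20 ms latency are assumptions standing in for real requests to the service under test:

```python
# Toy load-test harness for the many-simultaneous-requests case: fire 100
# concurrent calls through 50 workers and count successes. In practice
# call_service would issue a real request to the service under test.
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(request_id):
    time.sleep(0.02)            # simulate service processing time
    return (request_id, "OK")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(call_service, range(100)))
elapsed = time.perf_counter() - start

ok = sum(1 for _, status in results if status == "OK")
print(f"{ok}/100 succeeded in {elapsed:.2f}s")
```

If the 100 sequential calls would take 2 seconds but the concurrent run finishes in a fraction of that, the service (or here, the simulation) is genuinely handling requests in parallel.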
POC approach
In a Server-Agent-DB enterprise architecture, we start by doing the POC for a 3-tier application (with a few servers, say 3 servers and
4 agents) and a few scaling brokers, with the environment on the Cloud. If it deploys without any blockage, then we can go on to deploy a large
number of agents, or connections with n users.
We need to make sure that data is secure and encrypted when transactions and updates flow between the Database and the Server-level
components. Even if the communication channel is not secure, encrypting the data makes it safer to communicate from the cloud-configured
machines to the on-premises machines.
Secure Data communication and synchronization
If the database is kept in the cloud environment, it can be replicated to the local premises once a transaction completes, so that
the server fetches information from the database quickly and securely.
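One way to sketch the integrity side of that replication, assuming a key pre-shared between the cloud and the premises (encrypting the channel itself, e.g. with TLS, is assumed to be handled separately):

```python
# Integrity check for records replicated from a cloud database to local
# premises: sign each payload with a shared key, verify on receipt.
# HMAC gives tamper detection only; it is not channel encryption.
import hmac
import hashlib

SHARED_KEY = b"replace-with-a-real-secret"   # assumed pre-shared key

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(payload), signature)

record = b'{"server": "acm-01", "change_id": 42}'
sig = sign(record)
print(verify(record, sig))                    # -> True
print(verify(record + b"tampered", sig))      # -> False
```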
Creation of test lab/test scenarios
We need to decide which part of testing is going to be done in the cloud; based on that, we need to prepare specific test scenarios to
validate the functionality of the features in the application. A better approach is to build small prototypes based on the components and,
at the same time, prepare a cloud-based test lab to test each developed prototype. In a realistic scenario, the high-end machines,
such as the database, can be configured on the Cloud, and other machines, such as Servers and Agents (say, in a 3-tier application),
can be deployed on the local premises.
The scope of this document is to prove that the ACM (Application Configuration Manager) product is compatible with the Cloud environment by testing
various scenarios on the Cloud. To test these scenarios we have considered 6 ACM agents, 2 Grid nodes and 10 components, kept in or out of the Cloud.
Based on the scenarios, an estimation is made using Amazon Machine Images from Amazon EC2.
As the current ACM product requirement is to scale to more than 10,000 servers, it is very hard to bring up a lab for testing the scaling feature. We need
to consider the cost before moving the product to the Cloud and testing it there. This document explains the estimation based on a proof of concept of the
ACM product on the Cloud.
Configuration:
ACM Server: one physical machine with 4 GB RAM, 80 GB disk space and a Core 2 Duo processor, running Windows 2003 Server
Amazon Machine Image (AMI): Amazon EC2 Small Instance (default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit),
160 GB of local instance storage, 32-bit platform, I/O performance: moderate
ACM Agents: six physical machines with 2 GB RAM, 80 GB disk space and a Core 2 Duo processor, running Windows 2003 Server and Red Hat Linux
Amazon Machine Image (AMI): Amazon EC2 Small Instance (default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit),
160 GB of local instance storage, 32-bit platform, I/O performance: moderate
Grid nodes: two physical machines with 2 GB RAM, 40 GB disk space and a Core 2 Duo processor, running Windows 2003 Server
Amazon Machine Image (AMI): Amazon EC2 Small Instance (default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit),
160 GB of local instance storage, 32-bit platform, I/O performance: moderate
ACM Database: one physical machine with 4 GB RAM, 160 GB disk space and a Core 2 Duo processor, running Windows 2003 Server and SQL Server 2005
Amazon Machine Image (AMI): Amazon EC2 Large Instance (default): 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units
each), 420 GB of local instance storage, I/O performance: high
Managed Components:
1. Discovery operations
2. Refresh operations
3. Change detection operations
4. Compare Server operations
Test Scenarios:
1. ACM Server, Agents and DB operated locally; Grid nodes in the cloud
2. ACM Server and Agents operated locally; Grid nodes and DB in the cloud
3. ACM Server operated locally; ACM Agents, Grid nodes and DB in the cloud
4. ACM Server and DB operated locally; ACM Agents and Grid nodes in the cloud
5. ACM Agents operated locally; ACM Server, Grid nodes and DB in the cloud
Estimation:
Based on the current setup (i.e. without Cloud), I have estimated the time it takes to perform the operations mentioned below:
[Table: operations and their estimated times]
The estimation is made using the pricing at http://aws.amazon.com/ec2/instance-types/ for the instances to be used.
As of now, Internet data transferred "in" and "out" of Amazon EC2 is free until June 30, 2010.
10 AMI machines are required, of which 9 are Standard Small (default) instances and one is a Large on-demand instance. We assume
that 4 continuous days are required to test the above scenarios.
The cost for a Small default instance (Windows usage) is ($0.13 × 24) × 4 = $12.48 per machine.
For 8 Standard on-demand Small instances (Windows usage), the cost is $12.48 × 8 = $99.84.
For 1 Standard on-demand Small instance (Linux usage), the cost is ($0.095 × 24) × 4 = $9.12.
For 1 Standard on-demand Large instance (Windows usage), the cost is ($0.52 × 24) × 4 = $49.92.
The total estimated cost is $99.84 + $9.12 + $49.92 = $158.88.
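The same figures can be re-derived mechanically from the quoted hourly rates (the rates are those the document cites from the 2010 EC2 pricing page):

```python
# Re-derive the EC2 cost estimate from the quoted hourly rates:
# 8 Small/Windows, 1 Small/Linux, 1 Large/Windows, over 4 continuous days.
HOURS = 24 * 4                      # 4 continuous days of testing

small_windows = 0.13  * HOURS       # $/hr, Standard Small, Windows usage
small_linux   = 0.095 * HOURS       # $/hr, Standard Small, Linux usage
large_windows = 0.52  * HOURS       # $/hr, Standard Large, Windows usage

total = 8 * small_windows + 1 * small_linux + 1 * large_windows
print(f"Small/Windows per machine: ${small_windows:.2f}")
print(f"8 Small/Windows subtotal:  ${8 * small_windows:.2f}")
print(f"1 Small/Linux:             ${small_linux:.2f}")
print(f"1 Large/Windows:           ${large_windows:.2f}")
print(f"Total:                     ${total:.2f}")
```

Note that the 8-machine Windows subtotal works out to $99.84 (8 × $12.48), and the grand total to $158.88 before data-transfer charges, which are free in this period.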
Dependencies:
• CMDB
So while testing the ACM product on the Cloud, we need to make sure that this other product also supports the Cloud environment.
Risks Encountered:
The Cloud demands that we be as nimble as possible, delivering features and fixes in almost real-time fashion. Both customer and provider rely on
software development that can maintain quality while being light on its feet and constantly moving. In addition, Cloud-oriented systems tend to be highly
complex and dynamic in structure -- more than our industry has ever seen before. The traditional software development and testing models do not support
this constant “diligence in motion”; therefore a new Cloud delivery model must be adopted. Traditional models worked reasonably well in the world of
client / server, since users were most often internal resources. User experience was downplayed, and glitches tolerated.
The lengthy cycle for requirements generation and feature development, followed by a set of testing cycles, allows for extended periods of time without
testing. But these gaps do not correlate with the needs of Cloud consumers. For them, ongoing, reliable, uninterrupted experience is everything.
The only way companies can realistically achieve this model is to have superior test sets that are fully automated, and to go about automation the right
way. Otherwise it can quickly become unachievable and unmanageable. When automation efforts fail to cover a high percentage of tests, the method is
often considered faulty. But when test automation follows specific and unique guidelines, its success can be measured again and again.
When an automation team spends a disproportionate amount of time on automating tests, and the resulting automation ends up covering only about
30% of the tests, the automation policy has failed. A much higher percentage is needed to "test everything always" in Cloud applications. Additionally,
automation should not dominate the test process: the actual automation development and, more importantly, the maintenance effort should have only a modest
footprint in terms of time and resources. While many testing organizations mistakenly approach automation from the perspective of tooling or
programming, an approach centered on effective test design combined with an agile test development process yields far better results. When
done right, the result is a set of automated tests with on-the-fly adaptability that readily meets the fluid requirements of testing in the Cloud.
Tools have their place in the process, but they frequently steal the center of attention and are viewed as panaceas. Primary focus goes to buying and learning "the tool"
rather than expending the time, effort and cost involved in revisiting test design. Establishing a test design process makes more possible tests
readily available, improving development cycles through flexibility. The approach aims to have at least 95 percent of tests automated, and 95 percent of
testers' time spent on test development, not automation. These tests are not limited to regression or bug validation, but are calibrated to hunt for
bugs via boundary conditions, state transitions, exploratory testing and negative tests.
Today's companies increasingly find that they are in an ever more competitive market, especially in the drive to implement more robust, capable and
pioneering Cloud-based products and services. Product delivery times are decreasing, customers demand higher and higher levels of product quality, and
failure to deliver within the customer's expectations can be swiftly punished with wholesale product abandonment and the erection of barriers to market
reentry.
Adopting a testing paradigm that is designed specifically for the requirements of Cloud-computing is a fundamental requirement for the new standards of
quality being set by customer-driven demand.
Testing environment - Where are the applications located? Where is the data located? What are the network and performance constructs?
People resources and skill sets - What is the competency makeup of your current test team? What are the skill sets required for moving
forward?
Time-to-deliver constraints - What are the barriers to testing efficiency? What are the efforts that take more time than others and why?
Reporting and progress visibility - Do you have consistent visibility into your testing status? Are you surprised when testing efforts are
late? Are statuses verbal, or based on actual metrics?
Remote, Public or External Cloud - Public clouds are sometimes referred to as "regular" cloud computing. Completely separate from a user's
desktop or the corporate network that they belong to, public clouds offer a pay-per-use service model because the user is leveraging outside
compute resources for the particular service they are seeking. This approach offers economies of scale, but the shared infrastructure model can
raise concerns about configuration restrictions, adequate security, and service levels (available uptime). These concerns might make you think
twice about placing sensitive data that is subject to compliance or safe-harbor regulations in a public cloud.
Because "public" clouds are typically made available via the public Internet, they may be free or inexpensive to use. A well-known example of a
public cloud is Amazon EC2, which is available for use by the general public.
Internal or Private Cloud - Private cloud computing extends the same infrastructure concepts firms already have in their data centers. The
motivation for private clouds appears to be to resolve security and availability concerns inherent in the public cloud paradigm. As such, private
clouds seemingly are not burdened by network bandwidth, availability issues or potential security risks that may be associated with public clouds.
However, this thinking belies the very intent of cloud computing which is predicated on hardware-software extensibility, dramatic reduction in
infrastructure costs, and an elimination of the management concerns governing private networks.
Mixed or Hybrid Cloud - Many of the leading engineering thinkers in the industry suggest that the most workable cloud computing approach is the
"hybrid" approach. The hybrid solution combines the best of the Public and Private Cloud paradigms. Considering that some applications may be
too complex or too sensitive to risk operating from a public cloud it makes sense for a firm to protect those application and data assets within the
construct of a private cloud where they have total control.
Less sensitive applications and data can be migrated to a public cloud freeing compute resources that can be repurposed for the complex
applications that need to stay home. The hybrid approach does sound like the best of both worlds. It makes sense from a technology and economic
standpoint. It allows for control, flexibility and growth. The trick to managing hybrid clouds comes when you consider spikes in demand.
When demand spikes pummel the performance of applications located within the private cloud and you need additional computing power
(as web-based news media experience when critical events occur), you will need a management policy that decides
when to reach out to the public cloud for those additional resources.
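Such a management policy can be sketched as a simple threshold rule; the 85% burst threshold below is an assumed value for illustration, not a recommendation:

```python
# Sketch of a hybrid-cloud burst policy: route new work to the public
# cloud only while private-cloud load exceeds a threshold, and fall back
# once the spike passes. The threshold is an illustrative assumption.

def route(load_percent, burst_threshold=85):
    """Pick a target cloud for new work based on private-cloud load."""
    return "public" if load_percent >= burst_threshold else "private"

# A news-site traffic spike: normal load stays private, the spike bursts.
samples = [40, 60, 90, 97, 70]
print([route(load) for load in samples])
# -> ['private', 'private', 'public', 'public', 'private']
```

A production policy would add hysteresis (separate burst and fall-back thresholds) so that load hovering near a single threshold does not cause workloads to flap between clouds.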
Defining the Future-State
Envisioning the results of your testing transformation requires solid understanding of your organization's business goals and objectives, the Cloud
computing paradigms that may help your testing effort contribute to those goals, and development of a sound plan to move in the new direction.
When documenting your planned future state, address each of these categories:
Architectural - Consider the different Cloud paradigms as they pertain to your business model, goals and objectives, and application and
data sensitivity.
Organization - Ensure organizational review and consensus of new Cloud testing direction as it pertains to business goals and objectives
and priorities.
Financial - Define the benefits, know where the real costs lie, and define the budget.
Implementation - Adopt an incremental improvement approach, and choose the correct tools and partners.
Monitor and measure - Develop a consistent set of metrics for measuring and monitoring your new foray into testing in the Cloud.
Other organizational goals and objectives that are important to include:
• Provide sufficient support for distributed test teams
Author’s Biography –
This is Sailaxmi R; I have around 8 years of software development and testing experience. I spent almost 6 years at CA, Hyderabad,
India, as a Principal Quality Assurance Engineer, and led a couple of Cloud-based testing teams and initiatives. Currently, I am
working with Cura Software, Hyderabad, on Cloud-based services platforms.
Sailaxmi has deep expertise in both product development and testing using various tools, and has worked on network-related
products such as system management and application management. Her main focus has been manual testing, and performance and
scalability testing using grid technology.
Appendix
ACM – Application Configuration Manager
SaaS – Software as a Service
EC2 – Elastic Compute Cloud