
3. Challenges/risks involved in adopting cloud computing

Despite the myriad benefits of cloud computing solutions, several challenges still exist. Being a young industry, there are few tools, procedures, standard data formats or service interfaces in place to guarantee data, application and service portability. As evidenced by the recent failure of Amazon's Elastic Compute Cloud, outages are a real risk and can have widespread implications for consumers of cloud services. This risk becomes even more severe when a mission-critical environment is impacted.

General challenges

Here is a short list, in no particular order, that I have accumulated over the past year; it drives many of the improvements in GridGain that are currently in the works:

1. Most likely you do NOT need cloud computing, but if you do, you would know it for sure by now; people who have legitimate technical and business use cases for cloud computing have been trying to do it internally for many years.

2. The best way to think about cloud computing is as a data center with an API; that should clarify most of the questions.
3. Creating an image for something like Amazon EC2 is worth about 45 minutes of your effort, but you will spend weeks and months after that fine-tuning your application and developing additional functionality; plan accordingly.
4. You are about to deal with hundreds and thousands of remote nodes. Things that worked on tens of nodes often mysteriously don't work at cloud scale. We were surprised by the amount of configuration tweaking we had to do to run GridGain on 512 nodes on Amazon EC2 under load. Proven grid middleware is essential (quite obviously).
5. You cannot rely on the environment being homogeneous; most likely it will not be: different CPUs, different amounts of memory, etc.
6. Debugging problems at that scale requires a fairly deep understanding of distributed computing; the learning curve is very steep, and trial and error is often the only solution; plan accordingly.
7. IP multicast will likely not work, or will work with significant networking limitations. For example, you may not get all the computers in your cloud into the same IP multicast group, and QoS on IP multicast is unknown at best.
8. Traffic inside the cloud is very cheap or free, but traffic to the outside is expensive and can add up very quickly.
9. If you have to use the cloud all the time, the economics break down, and in many cases it is cheaper to rent traditionally in a data center; that means clouds are often best used as an option for outsourcing peak loads, in which case the economic effect can be dramatic.
10. Uptime and per-computer reliability are low; comprehensive failover support in the grid middleware is a must.
11. Static IPs are not guaranteed, which kills automatic deployment for 90% of the grid frameworks out there.
12. Almost always plan on having multiple clouds, at least one internal and one or many external; you are always going to have data and processing that cannot cross the boundaries of your internal data center. Without comprehensive support from grid middleware for location transparency (a.k.a. a virtual cloud) this is a show stopper.
13. External clouds (i.e. hosted NOT by you) present the problem of sharing data:
- Do you copy data to the cloud?
- There is usually no local-DB access from the cloud.
- Can you legally copy the data?
- Double storage of data, locally and on the cloud? Synchronization?
- Security?
- Data affinity?
- Local data is removed once the image is undeployed.
- Etc., etc.
14. Carefully think through your dev/QA/prod layout and how it is all organized; things get hairy with multiple clouds, etc.
15. Clunky (re)deployment of your application onto the cloud can slow the development process to a halt; support from grid middleware is absolutely essential here.
16. Often connections are one-directional, i.e. you can connect to the cloud but NOT from the cloud back to you; comprehensive communication capabilities supporting one-directional connectivity and disjoint clouds in the grid middleware are a must.
17. Clouds are implemented on top of hardware virtualization; make sure your grid middleware can dynamically provision such images on demand, i.e. basically start the image (start paying) when certain conditions are met and stop the image (stop paying) when other conditions in your system are met.
18. Stick with an open source stack (no, this is not a plug); having the source code helps greatly during debugging in such unusual situations.
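The condition-driven provisioning of item 17 ("start paying when certain conditions are met") can be sketched as a simple control loop. The `CloudAPI` class below is a hypothetical stand-in for a real provider SDK (e.g. an EC2 client), not an actual API:

```python
# Sketch of condition-driven provisioning: start paying when per-node load
# crosses a high-water mark, stop paying when it falls below a low-water mark.
# CloudAPI is a hypothetical stub, not a real provider SDK.

class CloudAPI:
    """Stub provider API; a real implementation would call the cloud vendor."""
    def __init__(self):
        self.running = set()
        self._next_id = 0

    def start_instance(self):
        self._next_id += 1
        instance_id = f"i-{self._next_id:04d}"
        self.running.add(instance_id)
        return instance_id

    def stop_instance(self, instance_id):
        self.running.discard(instance_id)


def rebalance(api, load_per_node_target, current_load, high=0.8, low=0.3):
    """Start or stop instances so per-node utilisation stays between low and high."""
    nodes = max(len(api.running), 1)
    utilisation = current_load / (nodes * load_per_node_target)
    if utilisation > high:
        api.start_instance()                         # scale out: start paying
    elif utilisation < low and len(api.running) > 1:
        api.stop_instance(next(iter(api.running)))   # scale in: stop paying
    return len(api.running)
```

In practice the middleware would run such a loop against real metrics; the point is that image start/stop maps directly to cost.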

19. Linear scalability can only be achieved in a controlled test environment (like in our recent test); real-world applications will exhibit some form of non-linear scalability. It is essential to have at least a ballpark estimate of what you expect the scalability and performance to be when you run your application on the cloud; a battery of performance and scalability tests developed upfront is usually the best option.
20. Personal recommendation: use the Amazon EC2/S3 services; the best offering at this point by a long, long mile.

Challenges of Adaptive Security Systems

The cloud computing model should drive, and potentially apply to, the design and development of the next generation of adaptive security systems. This essay presents some conceptual ideas and directions based on systems engineering methods and architecting principles. The new cloud-based system security model should be directed toward the definition of a cloud infrastructure in which related security services can be made available as services in the cloud through a provider in a pervasive environment. Rittinghouse and Ransome (2010) have defined the concept of Security-as-a-Service, which can be achieved by collecting service security requirements, mainly through an Enterprise Architecture approach, and specifying the set of services that are needed for an organization viewed as an enterprise. By applying cloud concepts, these services can be dynamically adapted to the organization's security requirements and practices. On the other hand, applying cloud computing to organizations or enterprises provides advantages in reducing costs, increasing the affordability of computing services (including security services) and adopting green ICT systems through a reduced carbon footprint. However, as with other new technologies and services of this scale and complexity, there are bound to be vulnerabilities. Therefore, the uncertainties and concerns about the technology's maturity and readiness should be addressed.
The concept of cheap utility computing power and the high level of functionality available is attractive, but concerns persist about resilience, security, potential lock-in to providers due to a lack of interoperable services, legal problems concerning the location of data, and many other aspects. These problems could be addressed more effectively by applying systems thinking, holistic problem solving and/or systems engineering methods, as suggested below.

1. Cloud Security Critical Issues

Privacy issues in cloud environments have been described by Pearson (2009), and some interesting security aspects are presented by Siebenlist (2009). A complete survey of security in the context of cloud

storage is provided by Cachin et al. (2009). Kandukuri et al. (2009) have provided insights into the requirements for the service level agreement (SLA), the document that defines the relationship between the provider and the recipient of services.

Fig. GIS application using Cloud Computing Infrastructure (Source: Suraj Pandey, Cloud Computing Technology & GIS Applications)

An exhaustive cloud security risk assessment has been presented by the European Network Information Security Agency (ENISA, 2009). The cloud-free security model for cloud computing proposed by Yunis (2009) considers the following critical security issues for the related infrastructures:

1. Extensive resource sharing
2. Lack of data ownership
3. Reduced encryption in order to increase the speed of service delivery
4. Denial of service
5. Loss of data due to technical failure
6. Unknown attacks

However, the above security issues apply to some extent to any web enterprise systems and services defined within an enterprise service-oriented platform. It appears that cloud computing is not fundamentally different from existing web infrastructure, which is vulnerable to various threats and attacks, especially due to a lack of protection through adequate mechanisms, regulations and policies. Cloud computing does, however, represent an increased danger given the changing nature and evolution of attacks.

An initial analysis of the general requirements for cloud computing has identified the following:

Reliability and liability, which are the requirements for the cloud to be a reliable resource, especially if a cloud provider will run mission-critical tasks, and for a clear delineation of liability if serious problems occur.

Security, privacy, and anonymity, which are the requirements needed to prevent unauthorized access to both data and code and to ensure that sensitive data remains private. Security is required at the different access levels, such as server, internet, data, and program (code) (Kandukuri et al., 2009). Users will also expect that the cloud provider, other third parties, and governments will not monitor their activities. The exception may be for cloud providers, who may need to selectively monitor usage for quality-control purposes.

Access and usage capabilities, which are the requirements to be able to access and use the cloud as needed without hindrance from the cloud provider or third parties, while users' intellectual property rights are upheld.

1.1 Applying Systems Engineering Process

The Systems Engineering Process (SEP) as defined by INCOSE (1995) includes four main components: requirements analysis, functional analysis, synthesis, and system analysis and control (Figure 1). The aim of applying the SEP to cloud computing systems is mainly requirements analysis and functional allocation, in order to identify and construct an agile, adaptive system security model. Considering the requirements identified in the previous section, the following categories are defined and could be included within a framework of requirements engineering for secure cloud systems:

Technical requirements are the provider's capabilities;
User requirements should meet the recipient's requirements for trusted and reliable services;

Functional requirements are the virtualization translation capabilities of the clouds and associated services.

Lombardi and Di Pietro (2010) have proposed the Advanced Cloud Protection System (ACPS), which is intended to actively protect the integrity of the guest virtual machines and of the distributed computing middleware by allowing the host to monitor guest virtual machines and infrastructure components. The set of requirements identified for a security monitoring system for clouds is as follows (Lombardi and Di Pietro, 2010):

Effectiveness: the system should be able to detect most types of attacks and integrity violations.

Precision: the system should (ideally) avoid false positives, that is, mistakenly detecting malware attacks where authorized activities are taking place.

Transparency: the system should minimize visibility; potential intruders should not be able to detect the presence of the monitoring system.

Non-subvertability: the host system, cloud infrastructure and virtual machines should be protected from attacks proceeding from a compromised guest, and it should not be possible to disable or alter the monitoring system itself.

Deployability: the monitoring system should be deployable on the vast majority of available cloud middleware and different configurations.

Dynamic/adaptive reaction: the system should detect an intrusion attempt over a component and, if required by the security policy, take appropriate action against the attempt and against the compromised guest and/or notify remote middleware security-management components.

Accountability: the system should not interfere with other cloud application actions, but should collect data and snapshots to enforce accountability policies.
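As a rough illustration of the baseline-comparison idea behind the effectiveness and precision requirements, the toy sketch below hashes a set of guest files and flags changes against a trusted baseline. This is a generic integrity-checking pattern, not the actual ACPS mechanism:

```python
# Toy integrity check: compare current state against a trusted baseline.
# Unchanged files produce no alerts (precision); any modified or removed
# file is reported (effectiveness). This sketches the general technique,
# not ACPS itself.
import hashlib

def snapshot(files):
    """Map each logical file name to a SHA-256 digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def detect_violations(baseline, current):
    """Report files whose contents changed or disappeared since the baseline."""
    violations = []
    for name, digest in baseline.items():
        if current.get(name) != digest:
            violations.append(name)
    return sorted(violations)

guest_files = {"/bin/login": b"original binary", "/etc/passwd": b"root:x:0:0"}
baseline = snapshot(guest_files)

guest_files["/bin/login"] = b"trojaned binary"   # simulated integrity violation
print(detect_violations(baseline, snapshot(guest_files)))  # ['/bin/login']
```

A real monitor would of course run from the host or hypervisor level (transparency, non-subvertability), not inside the guest it is checking.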

However, ACPS is too restrictive and could compromise system performance and privacy through its monitoring activities. It is also not flexible enough to accommodate the changing threats and actions of adversarial communities. The following standardization principles should also be adopted: ISO/IEC 15288 (INCOSE, 2007), which establishes a common framework for describing the life cycle of systems; and ISO 12207, which includes system-level descriptions such as requirements analysis, architectural design, systems integration and qualification testing. For the adoption of clouds, data security standards such as ISO 27001/ISO 27002 are essential, because data protection problems in the clouds have a huge potential to disclose data.

Architecting cloud-driven adaptive security systems

Based on the linkage between security systems engineering, agile strategies for the development of adaptive security systems and cloud computing paradigms, an architectural infrastructure can be suggested; it is depicted in figure 2. Some challenges that need to be solved in order to realize this synergy have been discussed in this essay, and these mainly relate to dealing with system requirements according to the SEP (INCOSE, 1995). The cloud computing model for adaptive security systems engineering could be developed through the application of model-driven engineering as suggested by Bruneliere et al. (2010), but this is still ongoing work. A framework for enterprise security architecture is provided by Sherwood's Applied Business-driven Security Architecture (SABSA) (Sherwood et al., 2005). The Open Group Architecture Framework (TOGAF) describes an Architecture Development Method (ADM) that can be used to deliver an enterprise architecture. A current development is the integration of the security features represented in SABSA into TOGAF; the idea is that SABSA can provide the security architectural models within TOGAF. Once the link between SABSA and TOGAF is defined, it will be possible to use SABSA for organizations/enterprises that already use TOGAF (TOGAF & SABSA Working Group, 2010).

Fig. Building blocks of a Private cloud

We plan to continue our work by further exploring these challenges and breaking new ground. Due to the lack of maturity of cloud computing technology, there are several key aspects requiring the efforts of different communities of software, systems and security researchers and practitioners.

Risk Management

Managing enterprise IT is a lot like trying to run the federal government: everyone wants you to cut the budget, but no one wants to give up any services and entitlements.

For this reason, we see IT organizations actively investigating cloud computing in the hopes that they will be able to cut spending on IT infrastructure, while at the same time providing more services to the business.

But like all great changes, cloud computing comes with a lot of political and personal challenges. For example, if you lean hard on cost cutting, you could be on a path to eliminating most of your staff along with the IT budget. Most chief financial officers think that cloud computing is all about cutting the IT budget so they can drop more profit to the bottom line. That creates a perilous situation for an IT department that could quickly become a shadow of its former self. Cloud computing, of course, isn't going away. An IT organization can forestall it, but it can't ignore it. That means that the next best course of action is to take on as many additional IT projects as the IT organization can reasonably stand to handle in 2011. The thinking here should be that, because the IT department is more efficient thanks to cloud computing, it can do a lot more for the business.

Fig. Benefits of a Cloud

Managed correctly, cloud computing should allow an IT organization to cut the budget modestly enough to keep the CFO happy, while still leaving enough money and IT staff in place to service all the additional workloads. Obviously, this cloud computing scenario is going to take some time to play out. But it's important to put the horse before the proverbial cart. If you go down the cost-cutting path first, chances are things will end badly for the IT department.

Most senior IT leaders intuitively know this. This is the reason we hear so much about security when it comes to cloud computing. In reality, security is really about determining whether IT organizations can trust the cloud computing model, says Tom Roloff, senior vice president for the consulting division of EMC. Without that trust, IT organizations are not going to be able to deliver more services. And if they can't deliver additional services to the business, then cloud computing winds up effectively gutting the IT budget and the staff along with it.

The business side doesn't really care how all this turns out. From their perspective, a smaller IT budget is desirable, and a smaller IT budget that delivers a lot more in the way of valuable services to the business is even better.

Roloff says all this really means is that IT organizations will have to rethink how they manage IT. Instead of building and distributing IT services, they will need to think of themselves as brokers of IT services. And an increasingly large share of those services is going to come from external service providers, especially as the concept of virtual private data centers takes hold in 2011.

Fig. Risk Management Scenario

Of course, there are a lot of technology issues that still need to be addressed beyond the rise of virtual data centers, most notably how data is managed and governed in the cloud. But while these challenges are hardly inconsequential, they are a matter of execution. In the coming year, we will see a raft of tools to address both of these issues from the perspective of cloud computing.

The only real question at this juncture is: To what degree will your IT organization embrace cloud computing? Some will see cloud computing as a more efficient approach to running their internal IT systems; others will see cloud computing as a fundamental shift in the way IT services are delivered across a federated network of third-party service providers.

What is most certain, however, is that, in one form or another, cloud computing will shape your IT strategy for 2011 and beyond.

4. Different business models for CC technology

Fig. Cost of Adding More Cores (Source: Using cloud computing for parallel analysis of genome-wide datasets.)

Fig. Comparison between local cluster and cloud computing (Source: Using cloud computing for parallel analysis of genome-wide datasets.)

Fig. PPI: Parallelisation Performance Index (Source: Using cloud computing for parallel analysis of genome-wide datasets.)

Conventional: Initial Investment in IT Infrastructure

What's the difference between an application service and an infrastructure service? To answer this question, think first about the obvious distinction between applications and infrastructure: applications are designed to be used by people, while infrastructure is designed to be used by applications. It's also fair to say that infrastructure usually provides a general, relatively low-level service, while applications provide more specific, higher-level services. An infrastructure service solves a broad problem faced by many different kinds of applications, while an application service solves a more targeted problem. And just as it's possible to identify different kinds of infrastructure services, it's also possible to distinguish different categories of application services, as this section illustrates.

SaaS Application Services

Users in most enterprises today rely on both purchased and home-grown applications. As these applications expose their services to remote software, they become part of the on-premises platform. Similarly, SaaS applications today frequently expose services that can be accessed by on-premises applications or by other cloud applications. Salesforce.com's CRM application, for example, makes available a variety of services that can be used to integrate its functions with on-premises applications. As organizations begin to create their own SaaS applications running on a cloud foundation, those applications will also expose services. Just as packaged and custom on-premises applications today are part of the on-premises platform, the services exposed by packaged and custom SaaS applications are becoming part of the cloud platform.

Search

Services exposed by SaaS applications are useful, but they're not the whole story. Other kinds of cloud application services are also important. Think, for example, of search engines such as Google and Live

Search. Along with their obvious value to people, why can't they also offer cloud application services? The answer, of course, is that they can. Microsoft's Live Search, for example, exposes services that allow on-premises and cloud applications to submit searches and get results back. Suppose a company that provided a database of legal information wanted to let customers search both its own data and the Web in a single request. It could accomplish this by creating an on-premises application that searched both its proprietary data and, via the Live Search application service, the entire Web. It's fair to say that not many applications are likely to need this kind of service, but that's one reason why it's most accurate to think of search as an application service rather than an infrastructure service.

Mapping

Many Web applications today display maps. Hotel Web sites plot their locations, retailers provide store locators, and more. The people who create these applications probably don't have the time, interest, or budget to create their own mapping database. Yet enough applications need this function to justify creating a cloud application service that provides it. This is exactly what's done by mapping services such as Google Maps and Microsoft's Virtual Earth. Both provide cloud-based services that application developers can use to embed maps in Web pages and more. And as with search, these mapping services are adjuncts to existing Web sites that target users directly, i.e., they're cloud application services.

Other Application Services

Many other application services are available today. In fact, almost any Web site can expose its functionality as a cloud service for developers to use. Photo-sharing sites such as Google's Picasa and Microsoft's Windows Live Photo Gallery do this, for example, as do online contacts applications such as Google Contacts and Microsoft's Windows Live Contacts.
One big motivation for exposing services is to make it easier to create mash-ups that exploit the functions of diverse Web applications. Vendors sometimes group cloud application services together under a common umbrella. The services for accessing information in Google Contacts, Picasa, and more are all part of the Google Data APIs, for instance. Similarly, Microsoft groups several of its services together under the Live Platform brand, including Live Search, Virtual Earth, Windows Live Contacts, Windows Live ID, an Alerts service, a specialized storage service called Application-Based Storage, and several more.

Source: Thomas A Winans & John Seely Brown, Cloud Computing: A collection of working papers, May 2009.
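From a developer's point of view, consuming such a cloud application service usually amounts to an HTTP request plus JSON parsing. The endpoint and response schema in the sketch below are invented for illustration (the Live Search API mentioned above has long since been retired); a real service defines its own:

```python
# Generic shape of calling a cloud application service from code: build an
# HTTP query URL, then parse a JSON result. The endpoint and response schema
# here are hypothetical; each real service (Google Maps, etc.) defines its own.
import json
from urllib.parse import urlencode

def build_search_url(base, query, count=10):
    """Compose the request URL a client would send to the service."""
    return f"{base}?{urlencode({'q': query, 'count': count})}"

def parse_results(raw_json):
    """Extract (title, url) pairs from a service response."""
    payload = json.loads(raw_json)
    return [(hit["title"], hit["url"]) for hit in payload.get("results", [])]

url = build_search_url("https://search.example.com/api", "cloud computing")
canned = '{"results": [{"title": "Cloud", "url": "https://example.com/c"}]}'
print(parse_results(canned))  # [('Cloud', 'https://example.com/c')]
```

A mash-up simply repeats this pattern against several services and combines the parsed results.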

CC: Pay as you use

5. Trust, Privacy, Security

Strongly related to the issues concerning legislation and data distribution is the concern about data protection and other potential security holes arising from the fact that resources are shared between multiple tenants and the location of the resources may be unknown. Sensitive data and protected applications in particular are critical in outsourcing scenarios. In some use cases, the information that a certain industry is using the infrastructure at all is enough for industrial espionage. Whilst essential security aspects are addressed by most tools, additional issues arise from the specifics of cloud systems, in particular relating to the replication and distribution of data in potentially worldwide resource infrastructures. Whilst the data should be protected in a form that addresses legislative issues with respect to data location, it should at the same time still be manageable by the system. In addition, the many usages of cloud systems and the variety of cloud types imply different security models and requirements on the part of the user. As such, classical authentication models may be insufficient to distinguish between the aggregators/vendors and the actual user, in particular in IaaS cloud systems, where the computational image may host services that are made accessible to users.

In particular, in cases of aggregation and resale of cloud systems, the mix of security mechanisms may not only lead to compatibility problems, but may also lead to the user distrusting the model due to a lack of insight.
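One common mitigation for the outsourcing concerns above is to encrypt data client-side, so the tenant keeps the key and the provider only ever stores ciphertext. The sketch below shows only that data flow; the XOR keystream is a deliberately toy cipher, and any real system should use a vetted AEAD cipher such as AES-GCM (e.g. from the `cryptography` package):

```python
# Data flow of client-side encryption before outsourcing to a shared cloud:
# the tenant keeps the key, the provider stores only ciphertext.
# The XOR keystream below is a TOY for illustration only; use a vetted
# AEAD cipher (e.g. AES-GCM) in any real system.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream inverts itself

tenant_key = b"kept on-premises, never uploaded"
record = b"sensitive customer record"
stored_in_cloud = encrypt(tenant_key, record)        # provider sees only this
assert decrypt(tenant_key, stored_in_cloud) == record
```

The trade-off noted in the text remains: once the data is opaque to the provider, server-side processing of it becomes much harder.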

Data Management

The amount of data available on the web, as well as the throughput produced by applications, sensors, etc., increases faster than storage and, in particular, bandwidth do. There is a strong tendency to host more and more public data sets in cloud infrastructures, so improved means of managing and structuring data at this scale will be necessary to deal with future requirements. Hence storage clouds in particular should be able to cater for such means in order to maintain availability of data and thus address quality requirements.

Not only data size poses a problem for cloud systems, but, more importantly, consistency maintenance (see section III on Data Management), in particular when scaling up. As data may be shared between tenants partially or completely, i.e. either because the whole database is replicated or because a subset is subject to concurrent access (such as state information), maintaining consistency over a potentially unlimited number of data instances becomes more and more important and difficult (cf. section III on Multi-tenancy). One of the main research gaps and efforts in the area is how to provide truly transactional guarantees for software stacks (e.g. multi-tier architectures such as SAP NetWeaver, Microsoft .NET or IBM WebSphere) that provide large scalability (hundreds of nodes) without resorting to data partitioning or relaxed consistency (such as eventual consistency). Clearly, ACID two-phase-commit transactions will not work (timing), and compensating transactions will be very complex. Worse, the use of caching on distributed database systems means we have to validate cache coherency. At the moment, segmentation and distribution of data occurs more or less uncontrolled, not only leading to efficiency issues and (re)integration problems (see section III on Data Management), but also potentially to clashes with legislation (cf. below).
In order to compensate for this, further control capabilities over distribution in the infrastructure are required that allow for context analysis (e.g. location) and QoS fulfilment (e.g. connectivity) - an aspect that is hardly addressed by commercial and/or research approaches so far (see section III on Elasticity). As most data on the web is unstructured and heterogeneous due to the variety of data sources, sensible segmentation and usage information requires new forms of annotation. What is more, consistency maintenance strategies may vary between data formats, which can only be compensated for by maintaining meta-information about usage and structure. Also, given the proprietary structures of individual cloud systems, moving data (and/or services) between these infrastructures is sometimes complicated, necessitating new standards to improve and guarantee long-term interoperability (see section III.A.4). Work on the eXternal Data Representation (XDR) standard for loosely coupled systems will play an important role in this context. Cloud resources are potentially shared between multiple tenants; this does not only apply to storage (and CPUs, see below), but

potentially also to data (where e.g. a database is shared between multiple users), so that changes can occur not only at different locations but also in a concurrent fashion. This necessitates improved means to deal with multi-tenancy in distributed data systems. Classical data management systems break down with large numbers of nodes, even if clustered in a cloud. The latency of accessing disks means that classical transaction handling (two-phase commit) is unlikely to be sustainable if it is necessary to maintain an integral part of the system's global state. Efficiency efforts (such as caching) compound the problem, requiring cache coherency across a very large number of nodes. As current clouds typically use either centralized storage area networks (e.g. Amazon EBS), unshared local disk (e.g. Amazon AMI) or cluster file systems (e.g. GFS; but for files, not entire disk images), commodity storage (such as desktop PCs) currently cannot be easily integrated into cloud storage, even though Live Mesh already allows for synchronization of local storage in/with the cloud.
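The relaxed-consistency alternative mentioned above (eventual consistency instead of two-phase commit) can be illustrated with a minimal last-writer-wins scheme: replicas accept concurrent writes tagged with logical timestamps, diverge temporarily, and then converge by merging. This is a sketch of one simple relaxation, not of any particular product:

```python
# Minimal sketch of relaxed ("eventual") consistency: two replicas accept
# concurrent writes with logical timestamps, then converge via a
# last-writer-wins merge instead of a blocking two-phase commit.

class Replica:
    def __init__(self):
        self.store = {}          # key -> (timestamp, value)

    def write(self, key, value, ts):
        """Apply a write only if it is newer than what we already hold."""
        current = self.store.get(key, (-1, None))
        if ts > current[0]:
            self.store[key] = (ts, value)

    def merge(self, other):
        """Anti-entropy: adopt any newer versions seen by the other replica."""
        for key, (ts, value) in other.store.items():
            self.write(key, value, ts)

a, b = Replica(), Replica()
a.write("x", "from-A", ts=1)     # concurrent writes on disjoint replicas
b.write("x", "from-B", ts=2)
assert a.store["x"][1] != b.store["x"][1]        # replicas have diverged

a.merge(b); b.merge(a)                           # gossip in both directions
assert a.store["x"] == b.store["x"] == (2, "from-B")   # converged, LWW
```

Last-writer-wins silently discards the losing write, which is exactly the kind of guarantee-weakening trade-off the research gap above is about.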

References

1. www.searchcloudsecurity.techtarget.com/tip/Maintaining-security-after-a-cloud-computing-implementation
2. www.forbes.com/sites/kevinjackson/2011/08/28/implementation-of-cloud-computing-solutions-in-federal-agencies-part-2-challenges-of-cloud-computing/
3. www.itbusinessedge.com/cm/blogs/vizard/coping-with-the-challenges-of-cloud-computing-in-2011/?cs=44640
4. www.java.dzone.com/articles/20-real-life-challenges-cloud
5. www.cloudbook.net/resources/stories/how-cloud-computing-paradigm-can-meet-the-challenges-of-adaptive-security-systems
6. Bruneliere, H., Cabot, J. and Jouault, F. (2010) Combining Model-Driven Engineering and Cloud Computing, INRIA Report.
7. Cachin, C., Keidar, I. and Shraer, A. (2009) Trusting the cloud. SIGACT News 40(2): 81-86.
8. European Commission (2010) The Future of Cloud Computing - Opportunities for European Cloud Computing Beyond 2010, European Commission Public Report.
9. ENISA (European Network Information Security Agency) (2009) Cloud Computing Risk Assessment. http://www.enisa.europa.eu/act/rm/files/deliverables
10. IDC (2010) Leveraging the Benefits of Cloud Computing with Specialized Security, White Paper.
11. INCOSE (International Council on Systems Engineering) (1995) Metrics Guidebook for Integrated Systems and Product Development. Seattle, WA, USA.
12. INCOSE (International Council on Systems Engineering) (2007) Systems Engineering Handbook - A Guide for System Life Cycle Processes and Activities, V3.1.
13. Lombardi, F. and Di Pietro, R. (2010) Secure virtualization for cloud computing. Journal of Network and Computer Applications, Elsevier Ltd.
14. Jaeger, P. T., Lin, J. and Grimes, J. M. (2008) 'Cloud Computing and Information Policy: Computing in a Policy Cloud?', Journal of Information Technology & Politics, 5(3): 269-283. Routledge.
15. Grace, L. (2010) Basics about Cloud Computing, Software Engineering Institute, Carnegie Mellon University, USA. http://www.sei.cmu.edu/library/assets/whitepapers/Cloudcomputingbasics.pdf
16. Grobauer, B., Walloschek, T. and Stöcker, E. (2010) Understanding Cloud Computing Vulnerabilities, accepted for publication in IEEE Security and Privacy, Special Issue on Cloud Computing, IEEE.
17. Gu, L. and Cheung, S.-C. (2009) Constructing and testing privacy-aware services in a cloud computing environment: challenges and opportunities. In Internetware '09: Proceedings of the First Asia-Pacific Symposium on Internetware. ACM, New York, NY, USA, pp. 1-10.
18. Kandukuri, B. R., Paturi, V. R. and Rakshit, A. (2009) Cloud Security Issues. 2009 IEEE International Conference on Services Computing, pp. 517-520.
19. Mell, P. and Grance, T. (2009) Effectively and Securely Using the Cloud Computing Paradigm (v0.25), NIST. http://csrc.nist.gov/groups/SNS/cloud-computing/index.html
20. Yunis, M. M. (2009) A cloud-free security model for cloud computing. Int. J. of Services and Standards 5(4): 354-375, Inderscience.
21. Pearson, S. (2009) Taking account of privacy when designing cloud computing services. In Cloud '09: Proceedings of the 2009 ICSE Workshop on Software Engineering Challenges of Cloud Computing, IEEE Computer Society, Washington, DC, USA, pp. 44-52.
22. Rittinghouse, J. W. and Ransome, J. F. (2010) Cloud Computing: Implementation, Management and Security, CRC Press, Taylor and Francis.
23. Siebenlist, F. (2009) Challenges and opportunities for virtualized security in the clouds. In SACMAT '09: Proceedings of the 14th ACM Symposium on Access Control Models and Technologies, ACM, New York, NY, USA, pp. 1-2.
24. Sherwood, J., Clark, A. and Lynas, D. (2005) Enterprise Security Architecture: A Business-Driven Approach, CMP Press.
25. TOGAF & SABSA Working Group (2010) TOGAF-SABSA Integration, Version 1.0. Prepared by Pascal de Koning.
