
Voices from the Web: SOA - Top down or bottom up approach?

There has been a great deal of discussion about the need to better align business with IT in order to successfully implement service-oriented architectures (SOAs). While many developers agree an SOA should ultimately cater to the needs of the business, there are differing opinions on how exactly this should occur. Should a top-down, business-centric approach be employed, or a bottom-up approach in which the business unit (BU) is more reactive and sensitive to the realities of IT?

In his Weblog, John Crupi, chief technology officer of the enterprise Web services practice at Santa Clara, Calif.-based Sun Microsystems Inc., said that SOA is a business-driven architectural style, and that for it to be successful it must employ a "top-down" approach. The BU should own the business drivers, use cases and processes, according to Crupi. It is then IT's job to implement the BU requirements and own the service definitions. Crupi advised against using a "bottom-up" approach to SOA development, in which existing systems are simply wrapped with Web services to create a service layer. Crupi, who has worked on large projects such as the re-architecting of the eBay 3.0 application, has had many discussions with customers who have failed in their attempts to weave their systems into an SOA by simply wrapping them in Web services. Taking an existing asset or system and making it a Web service creates an immediate mismatch between the new Web service interaction style and the existing system, Crupi said.

Meanwhile, J2EE [Java 2 Enterprise Edition] architect Bill de hÓra argued in his Weblog that the probability of failure is higher with a big, top-down approach that has the ambition of spanning an enterprise. "The difficulty with a solely top-down approach is that there is no top," de hÓra said. "SOA systems in reality tend to be decentralized - there's no one point of architectural leverage or governance." "The goal for any enterprise should be to wean off building big, centralized systems and focus on how to network smaller, more adaptable ones together," de hÓra said.

Business and IT crossing the great divide


"The reality is that IT and (the) BU typically function as disparate groups and rarely work together to have the business use-cases drive the process and service definition," Crupi said.

Evidence of this disconnect can be seen in efforts to implement Business Process Management (BPM) projects, according to a recent report from Cambridge, Mass.-based Forrester Research Inc. The success of a BPM project depends on how effectively the "top-down" and "bottom-up" cultures in an organization can be made to co-operate, according to the report. The "bottom-up" phenomenon, it said, is implementers' resistance to change, driven by the fear of potential job losses; the "top-down" phenomenon is the willingness of senior managers to forcefully drive process improvement among implementers, driven by the fear of loss of authority. As BPM tools continue to mature, organizations are leveraging them to enforce business processes in their SOAs. SOA provides a level of abstraction that BPM systems can leverage to enforce a company's process definitions. According to Forrester, integrated suites for enterprise application integration (EAI) and BPM are empowering business users with the tools to develop composite applications, potentially replacing the need for programmers.

SOA's middle way?


Meanwhile, in his Weblog, Random Stuff, J2EE architect Stefan Tilkov offered a middle way for SOA development, one in which a top-down, high-level vision and bottom-up "quick-win scenarios" gradually converge toward each other. Tilkov suggested an adaptive software development approach that requires the BU and IT to continuously work together to optimize and refine business processes. "When the business people and the architects are thrashing out the documents and processes, you cannot have the programmers sitting on their hands," de hÓra said. "They need to build something and show it to the business, so the business can ratify and refine what it's asking for." "A culture of continuous deployment and tending to front line services will help the organization infrastructure become robust to continuous requirements changes," de hÓra said.

SOA From the Bottom Up - The Best Approach to Service Oriented Architecture
The general attitude of business organizations towards Service Oriented Architecture, or SOA, has changed significantly over the course of the term's existence. When SOA first made its appearance as a buzzword in the early 2000s, enthusiasm for the new model quickly reached a fever pitch. Companies with big infrastructure problems were so sure that SOA was the fix they'd been waiting for that they were willing to pour millions of dollars into massive top-down SOA initiatives with long, hazy ROI timelines. By 2009, things had changed. Service Oriented Architecture was no longer the belle of the ball, to say the least. The vast majority of the sweeping top-down SOA initiatives that had been launched with such high hopes had failed miserably, leaving companies millions of dollars in the hole and years behind on architectural improvements. Some studies estimate that as few as 20% of the SOA initiatives launched at the peak of the model's popularity were ever fully realized. The backlash towards SOA was so immediate and strong that one industry analyst went so far as to post a mock obituary for SOA on their blog in January of 2009.

Why SOA Still Matters


In the face of so much failure, the backlash is perhaps understandable. However, it couldn't be more off-base. Far from being dead, SOA is more relevant than ever. The same infrastructure problems that existed in the early 2000s still plague companies today, and with today's economic climate demanding even more agility from companies that want to stay at the forefront of their industries, finding a way to implement SOA is crucial. Meanwhile, those companies that did manage to successfully complete their SOA initiatives - Bechtel being the most frequently cited example - saw exactly the incredible ROI that was promised at the outset of the process. From this, we can conclude one thing: the top-down, drop-everything approach to SOA is to blame for the perceived failure of the model, not SOA itself. In this article, we'll take a look at some of the reasons why these early top-down SOA efforts failed, and how open source integration frameworks like Mule ESB are making the holy grail of SOA a reality for many organizations, using a new bottom-up model of SOA adoption.

A Brief History of Top-Down SOA


To CIOs faced with the task of managing increasingly complex infrastructures, the benefits offered by Service Oriented Architecture sounded like a dream come true - costs would be slashed, developer and business productivity would increase, and the company would be prepared for an agile future. The big change introduced by the SOA model was architectures designed around services, rather than applications. The concept of services - small, independent pieces of software that executed a single task for whatever program called upon them - was nothing new, having been in use in enterprise infrastructure since the 1980s. What SOA brought to the table was a vastly increased scope of use for these small units of functionality.

The Top Down SOA Model


At the time, enterprise infrastructures were becoming increasingly bloated and unwieldy. New business services or automation needs were usually handled by developing new in-house software. These new programs often duplicated functionality that already existed in other internal programs. For example, if multiple programs required credit check information, each program would duplicate all the code required to perform the credit check (or in the worst cases, use a different implementation altogether). Each new program represented an additional codebase that the company's IT team would be responsible for supporting, as well as additional overhead for the network. In other cases, the complexity of building a new application in-house would result in expensive outside contract work that might not integrate smoothly with other existing programs. SOA aimed to solve these problems by shifting internal application development practice towards the creation of re-usable components called services. First, the company would make a comprehensive map of the actual functionalities it needed from its infrastructure - what were the tasks that all of these custom programs had been created to automate in the first place? How did they relate? What kinds of data formats and protocols had to interoperate? Next, the company would determine how each of these functionalities could be expressed not as a single application, but as a collection of services. For example, an ordering system would not be thought of as a single functional piece, but as a logical combination of credit card handling services, inventory maintenance services, customer data services, and more. From this assessment, the organization would be able to identify those services that were common to every application, and build them in such a way that the same service would be re-usable in every application. Once these services had been created, the various applications that the company had been using before could be recreated with minimal duplicated code by using these common parts. As an additional piece of the plan, any functionality specific to a new application would also be created as new services, and be made available for re-use by any later applications that might require them. SOA would create an ecosystem of actively updated components of business logic that could be quickly linked, with minimal amounts of new code, to create ad-hoc programs to handle any business need, no matter how specific.
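To make the idea of a shared, re-usable service concrete, here is a minimal Java sketch; all names are hypothetical and not taken from any particular vendor or project. A single credit-check capability is defined once behind an interface and then composed into two different applications instead of being re-implemented in each one.

// Hypothetical illustration: one credit-check capability exposed as a reusable
// service contract instead of being re-implemented in every application.
public interface CreditCheckService {
    /** Returns true if the customer passes the credit check for the given amount. */
    boolean isCreditworthy(String customerId, double amount);
}

// One shared implementation, owned and maintained in a single place.
class SimpleCreditCheckService implements CreditCheckService {
    @Override
    public boolean isCreditworthy(String customerId, double amount) {
        // Placeholder rule; a real implementation would call a credit bureau or rules engine.
        return amount < 10_000;
    }
}

// Two different "applications" composing the same service rather than duplicating its logic.
class OrderingApplication {
    private final CreditCheckService creditCheck;
    OrderingApplication(CreditCheckService creditCheck) { this.creditCheck = creditCheck; }
    boolean placeOrder(String customerId, double orderTotal) {
        return creditCheck.isCreditworthy(customerId, orderTotal);
    }
}

class LeasingApplication {
    private final CreditCheckService creditCheck;
    LeasingApplication(CreditCheckService creditCheck) { this.creditCheck = creditCheck; }
    boolean approveLease(String customerId, double monthlyPayment) {
        return creditCheck.isCreditworthy(customerId, monthlyPayment * 12);
    }
}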

Things Turn Sour - Why Top Down SOA Doesn't Work


While this bold plan for implementing SOA looked great on paper, when companies attempted to put it into action, they quickly ran into difficulties. Most of the problems were caused by similar sets of naive assumptions about organizational behavior that were included in every top-down adoption plan. To understand what went wrong, let's take a look at some of the flawed ideas that caused so many top-down SOA initiatives to fail.

Top Down SOA Wisdom: The SOA "Adoption Team" selects and purchases a proprietary SOA Governance product. Development teams will then learn and use this product, both to re-design all existing systems and to design future projects.

Why It Doesn't Work: Massive expense combined with lack of developer input and vendor lock-in is a recipe for disaster.

In the top-down SOA model, companies often sought to pass the complex task of SOA adoption to a single team. This team would then be responsible for driving all aspects of adoption. In the days when SOA was the hottest buzzword around, these departments were under heavy pressure to put an SOA in place as soon as possible, and SOA vendors were more than happy to prey on these fears. As a result, the first step in the SOA process for many companies was the purchase of a multi-million dollar SOA Governance Framework.

There are three problems with this approach. First, it virtually guarantees vendor lock-in. While vendor lock-in is sometimes tolerated by companies in application server products, where the loop of interoperability is fairly closed, it has absolutely no place in an integration architecture. It's hard enough to make accurate predictions about how your needs may shift in the future without having made a multi-million dollar commitment to a single company's roadmap. SOA is about what YOUR organization needs - not what a vendor tells you that you need. Don't forget that your needs aren't just a list of systems that need to work together - your solution needs to make things easier for your developers and users, too.

This brings us to the second problem with the top-down model - developer adoption. Your development team isn't sitting around waiting for the chance to implement SOA for you - in fact, in addition to their regular workload, they're probably also kept busy putting out the day-to-day fires that already plague your network. The effort required to switch to a new model is not trivial on its own. When coupled with a mandate from on high to use a new tool simply because that's what the company has purchased, the task can become insurmountable. Just a few bugs or design flaws in the SOA tool can be enough to make a busy development team less than enthusiastic about the whole project.

Finally, let's talk about the money. SOA is a big change. Making a huge initial investment in a single product is a sure-fire way to kick your entire organization into panic mode, when what you need is a clear, orderly plan that you can implement incrementally, with plenty of input from all arms of your organization. This allows you to ensure that each part works perfectly - integration, services, best practices, adoption - without interrupting your day-to-day operations or overloading your teams.

Top Down SOA Wisdom: SOA means an organization-wide paradigm shift, and everyone's efforts rely on everyone else's. Thus, the whole shift must happen simultaneously.

Why It Doesn't Work: The majority of organizations do not have the resources to drop everything and focus on SOA. SOA that falls from the sky is a pipe dream - well-planned incremental adoption is not.

When dealing with a task as complicated as implementing SOA, the number of changes that need to be made can be daunting. It's tempting to think of the situation as a Catch-22: we can't start using SOA without writing the services, and we can't write the services without understanding our SOA model. There's only one way out of this Catch-22 - the drop-everything, rip-and-replace SOA model, where everything, from development processes to hardware, is changed simultaneously.

The problem? In practice, this approach fails far more often than it succeeds. For most organizations, choosing this model is a surefire way to kill your SOA plans. Fortunately, SOA is not as much of a Catch-22 as it seems. From a top-down perspective, SOA can seem like an irreducibly complex initiative. But from the bottom up, SOA is a manageable, sensible proposition. We've seen this time and time again in the Mule user community. Good developers understand the value of service-oriented development. Open-source ESB technologies such as Mule allow teams to follow best practices for SOA without a heavyweight governance model, building out RESTful interfaces that can be re-used right away and will integrate seamlessly with any SOA governance model as the company moves forward. Sometimes a new Web service isn't even what you need - if you have a well-designed solution in place already, simply use Mule components to quickly connect it to the rest of your architecture, and move on to an area where the initial outlay associated with building a new service yields a bigger margin of value to your organization. Begin evangelizing your teams today, get them hooked, and then gradually introduce smart, lightweight SOA governance at a pace that matches your actual available developer resources.

Top Down SOA Wisdom: The SOA Service Repository saves developers time by giving them reusable components. That's why teams must keep all the information in the repository up to date.

Why It Doesn't Work: The point of SOA is to make development easier, not load developers down with menial tasks. Your SOA solution should automate the cataloging of services.

Like many assumptions made by top-down SOA advocates, this approach is based on the idea that developers are a resource, not teams of skilled professionals. Think of the switch to SOA as a sale you're making to your company. The value proposition is faster development, ease of management, and less time doing tedious integration work. That's why it is a case of serious cognitive dissonance to make your developers responsible for keeping your repository up to date - you're basically saying they will become more productive and do less tedious work by doing additional tedious work. A good SOA governance model always makes things easier and reduces complexity. Using Mule's open source components, our users have built some amazing, bottom-up SOA-enabling tools - things like Java classes with metadata that automatically populate the repository with service information, or REST integration of the repository, placing all services directly in front of developers (a minimal sketch of this idea appears at the end of this section). When combined with an approach that does not require millions of dollars of lock-in as its first step, this means you can add additional complexity-reducing tooling as SOA technology continues to mature or as real-world pain points surface, future-proofing your architecture and continually improving your ROI.

Top Down SOA Wisdom: The key to successful SOA is an organizational culture shift towards "virtuous" architecture decisions.

Why It Doesn't Work: The key to successful SOA is a plan made up of clear, achievable goal sets with well-defined benefits.

Yes, "virtuous" was really a word that was used by top-down SOA advocates to describe the adoption process. The idea was that SOA was so feel-good that everyone would adopt it not only as a time-saving technology but as an ideology.

This is a nice way to think, but it's also a good way to sink your SOA effort by leaving your team in the dark. SOA is not about ideology. It's about doing things in the simplest, most efficient way. Teams are motivated by clear, achievable development goals that have proven, clearly defined benefits. Ditch the top-down SOA soft sales pitch, and show your teams how simple changes in the way they think about development will result in greater productivity down the line.
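One of the bottom-up ideas mentioned above - Java classes carrying metadata that automatically populate the service repository - can be illustrated with a small, hypothetical Java sketch. This is not Mule's actual API, just an illustration of how annotation metadata can keep a catalog current without manual bookkeeping.

import java.lang.annotation.*;
import java.util.*;

// Hypothetical annotation carrying service metadata at the source level.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ServiceInfo {
    String name();
    String version();
    String description() default "";
}

// Example service class annotated with its own catalog entry.
@ServiceInfo(name = "customer-lookup", version = "1.2", description = "Returns customer master data")
class CustomerLookupService { /* ... */ }

// A tiny "repository populator": instead of asking developers to maintain a catalog by hand,
// it reads the metadata off the service classes themselves.
public class ServiceCatalog {
    private final Map<String, ServiceInfo> entries = new HashMap<>();

    public void register(Class<?> serviceClass) {
        ServiceInfo info = serviceClass.getAnnotation(ServiceInfo.class);
        if (info != null) {
            entries.put(info.name(), info);
        }
    }

    public static void main(String[] args) {
        ServiceCatalog catalog = new ServiceCatalog();
        catalog.register(CustomerLookupService.class);
        catalog.entries.forEach((name, info) ->
                System.out.println(name + " v" + info.version() + ": " + info.description()));
    }
}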

Bottom-Up SOA - An SOA Adoption Model That Actually Works


After more than 10 years of failed SOA efforts, it's clear that the traditional top-down philosophy for SOA is outmoded and outdated; a new approach is needed for today's organizations to see real value. The focus should be on making things easier for everyone, not on "virtuous" architecture; on improving existing organizational structures and processes rather than wholesale re-engineering; on implementing pragmatic tools at the work-group level rather than crippling teams with bloated, multi-million dollar governance tools. Mule ESB, the world's most widely used open source ESB, is an open source integration framework that just works. Open source, lightweight tools like Mule ESB completely change the cost equation, as well as the adoption pattern of SOA, allowing development teams to implement SOA-enabling projects in a bottom-up fashion. In fact, Mule is so simple that we've seen some developers successfully implement service-oriented development and follow SOA principles without knowing (or caring) that they are, in fact, doing SOA. No SOA salesman needs to come knocking on the door pitching SOA, and you don't need to justify big expenditures for proprietary software licenses and training classes (not to mention the new hardware or upgraded development systems that might be required to run the typical vendor stack). You recognize a problem (or an opportunity), discover how others are currently building similar solutions, and get started learning which available tools fit your project best. Don't be scared of Service Oriented Architecture, and don't wait. Download Mule ESB today, and start building your SOA solution the right way - from the bottom up!

SOA and Service Identification


Introduction
Service Oriented Architecture (SOA) has been widely accepted as an approach that facilitates business agility by aligning IT with business. The prime differentiator of this approach is the ease with which such agility can be achieved at a relatively low cost. At a high level, the approach attempts to drive the incremental cost of addressing the nth business change down to zero, or close to it. Organizations pursue SOA initiatives in order to reach this elusive nth iteration as early as possible in their SOA journey. In practice, it may take years to achieve such an optimized state. This article is intended for business analysts and enterprise/system architects involved in an SOA initiative.

WYI2WYG (What You Identify Is What You Get)


Service identification is a crucial first step in the long journey towards the SOA end state. It's an iterative step that needs to be organized and initiated early, during analysis and planning. It determines the overall service landscape, with services being identified, named, categorized, prioritized and even associated with the appropriate roadmap phase for implementation. The service identification phase lays the foundation for a service-based ecosystem, and this article highlights best practices associated with business service identification.

1) Know the nature of your initiative


Based on the context in which SOA initiatives are undertaken, they can be classified under the categories below:

1. SOA transformation - This refers to initiatives undertaken by an enterprise to move towards SOA from an existing architecture. Almost all the applications are functionally mature, though not conducive to quick change. An enterprise in this state usually faces stiff resistance to change. The focus here is to expose and/or re-align services from legacy applications and commercial off-the-shelf (COTS) applications while addressing service gaps.

2. SOA adoption - This refers to initiatives in an enterprise that already has applications serving critical business processes, though scope remains for new applications to support certain other core business functions. The focus here is to develop new applications and services while gradually re-aligning existing ones.

3. SOA embarkment - This is applicable to service-based product development and to enterprises that have disconnected systems needing to be re-engineered. The focus here is to look at the functionality and develop services from scratch. A contract-first approach is usually followed. Such an enterprise is relatively well placed to realize returns over time.

Figure 1: The initial business agility and functional maturity of the various categories, along with the preferred state to be achieved through SOA.

Figure 2: Relative ROI over time for four hypothetical but related changes. An enterprise in the SOA transformation category (i) leverages functional maturity to reap quick benefits early on but has relatively flat returns over time. On the other hand, an enterprise in the SOA embarkment category (iii) sees steep returns in the long run, leveraging predictable and shorter time-to-market potential and lower integration costs.

Identifying the category helps set focus and expectations. It also helps narrow down the service identification approaches.

2) Let the business lead the charge


There is an elevated need to show tangible and quick Return on Investment (ROI) against the backdrop of a gloomy global economy. Business and IT teams should work more closely and leverage each other's capabilities to swiftly support drastic and unconventional business moves in an increasingly competitive environment. The need for service identification cannot be overemphasized, and it is largely a business-driven, IT-guided step.

3) Align with enterprise vision and associated enablers


Inputs from strategy, vision and long-term business goals/objectives are critical to establishing a strong base that accommodates change. Such inputs are accounted for in SOA roadmaps and their associated phases. They affect service definition by injecting a level of abstraction, future-proofing the service landscape and extending the useful life of service contracts (e.g. a financial enterprise specializing in credit-based products may intend to venture into insurance or brokerage, and this may add a few product-agnostic services to the landscape).

4) Focus on end-to-end business processes


These are the predominant processes that form the bulk of the process landscape, comprising core and non-core functionality. Such processes can usually be represented at various abstraction levels, referred to as process levels in a process model. Services can then be extracted from multiple such levels with a top-down approach. Higher abstraction levels provide inputs for composite services, while lower levels provide inputs for fine-grained candidates. Such a focus on processes and service candidates helps identify functional redundancy across the enterprise. Regardless of the nature of the SOA initiative (highlighted above), this practice is vital.

5) Leverage tools to expedite identification


SOA planning and governance tools can streamline and expedite the service identification and communication process. They can even visually depict service relationships. Service repository tools can be useful for identifying the impact of a proposed service change.

6) Reuse industry artifacts


Over the years, there has been significant progress in the SOA space within industry verticals like telecom and utilities, spearheaded by relevant groups (IFX, SWIFT, OAGi, ACORD, ETIS, ISO, etc.). A few cross-industry artifacts are also available. Such standardized service contracts (interfaces and operations), data models and third-party service lists can be reviewed and leveraged. They can jump-start the move towards SOA, and for the most part the artifacts (XML-based) are likely to be extensible and backward compatible. However, such artifacts must undergo context-specific scrutiny before adoption. Examples of questions that enterprises might need to ask when adopting these standards are: Should the data model be adopted in part or in whole? Is it necessary to provide native support for the canonical model, or would it suffice to limit it to external integration touch points? Is there any hidden cost? Does it command broad vendor support?

7) Establish a contract baseline


A service contract should highlight both functional and non-functional capabilities. A baseline needs to be established by identifying the relevant attributes that are part of a contract. Functional attributes may include details like the service description, message structure and data model. Non-functional attributes may include details like quality of service (e.g. response time, availability), cost basis (per call or per period) and security. Such a baseline standardizes the content of WSDL files, XML Schemas and WS-Policy definitions (metadata for enforcing behavioral constraints) and ensures contract consistency across the enterprise.
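As an illustration only, such a baseline could be captured as a simple structure that every service description must fill in. The attribute names below are assumptions for the sketch, not a prescribed standard; a Java record keeps the example compact.

// Hypothetical sketch: a contract baseline captured as a simple metadata structure,
// so every service publishes the same functional and non-functional attributes.
public record ServiceContractBaseline(
        String serviceName,
        String description,              // functional: what the service does
        String messageSchemaRef,         // functional: XML Schema / data model reference
        int maxResponseTimeMillis,       // non-functional: quality of service
        double availabilityPercent,      // non-functional: quality of service
        String costModel,                // non-functional: e.g. "per-call" or "per-period"
        String securityPolicyRef         // non-functional: WS-Policy or equivalent reference
) {
    public static ServiceContractBaseline example() {
        return new ServiceContractBaseline(
                "CreditCheck", "Performs a customer credit check",
                "schemas/credit-check-v1.xsd", 500, 99.9, "per-call",
                "policies/ws-security-basic.xml");
    }
}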

8) Refine with Service Attribute Rating Matrix (ARMS)


It's quite natural to simply extend processes, business goals and the like to identify a huge list of candidate services with corresponding contracts. Such services can only be as good as their sources in aiding agility, and the exercise may lead to service proliferation. A matrix of service attributes (e.g. reusability, composability, abstraction, competitive differentiation) with relative weights can be used to screen, rate and refine the candidate list. To make the cut, a candidate's weighted rating should not be less than a predetermined target rating based on its category, roadmap phase and so on. Granularity can be modified and the contract iteratively revised to meet the matrix requirements.
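A minimal Java sketch of this screening step follows; the attributes, weights and target rating are purely illustrative assumptions, not prescribed values.

import java.util.*;

// Hypothetical sketch of the attribute rating idea: each candidate service is scored
// against weighted attributes, and only candidates at or above a target rating make the cut.
public class AttributeRatingMatrix {
    // Attribute name -> weight (illustrative weights that sum to 1.0).
    private static final Map<String, Double> WEIGHTS = Map.of(
            "reusability", 0.35,
            "composability", 0.25,
            "abstraction", 0.20,
            "competitiveDifferentiation", 0.20);

    static double weightedScore(Map<String, Double> ratings) {
        return WEIGHTS.entrySet().stream()
                .mapToDouble(e -> e.getValue() * ratings.getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        double target = 3.5; // assumed target rating on a 1-5 scale
        Map<String, Double> candidate = Map.of(
                "reusability", 4.0, "composability", 3.0,
                "abstraction", 4.0, "competitiveDifferentiation", 2.5);
        double score = weightedScore(candidate);
        System.out.printf("Candidate score %.2f -> %s%n",
                score, score >= target ? "accepted" : "refine granularity or contract");
    }
}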

9) Test the waters with Business Agility Scenario Simulation (BASS)


This can be a very handy step to evaluate the agility of a system well before implementation. Business scenarios and use cases from a subsequent roadmap phase, or typical business scenarios not catered to in the current phase, would need to be compiled. The service inventory with contracts would then be used to address these scenarios through simulation, while assessing the impact on the system. Key metrics for assessment can include:

i) Service reuse ratio (number of services reused / total number of services in the scenario)
ii) Service leverage ratio (number of services reused / total number of services in the inventory)
iii) Service revision ratio (number of services revised / number of services reused)
iv) Service creation ratio (number of new services created / total number of services in the scenario)
v) Service utilization ratio (for a given service, number of service consumers identified / total number of services in the scenario)

These IT metrics can be normalized for complexity and analyzed together to assess the impact on time-to-market characteristics. This may result in further restructuring affecting the service inventory. An enterprise that has gone past the initial roadmap phases can include historical data during BASS to generate even better business and financial metrics.
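The BASS ratios listed above are straightforward to compute once the simulation counts have been collected. The following short Java sketch shows the arithmetic; the counts are made-up example values, not real data.

// Hypothetical sketch of the BASS ratios: given simple counts collected while simulating
// a scenario against the service inventory, compute the assessment metrics listed above.
public class BassMetrics {
    public static void main(String[] args) {
        // Illustrative counts from one simulated business scenario (assumptions only).
        int servicesInScenario = 12;   // total services the scenario needs
        int servicesReused = 9;        // already in the inventory and reused
        int servicesRevised = 2;       // reused services that needed a contract revision
        int newServicesCreated = 3;    // gaps that required brand-new services
        int servicesInInventory = 60;  // total services in the inventory

        System.out.printf("Service reuse ratio:    %.2f%n", (double) servicesReused / servicesInScenario);
        System.out.printf("Service leverage ratio: %.2f%n", (double) servicesReused / servicesInInventory);
        System.out.printf("Service revision ratio: %.2f%n", (double) servicesRevised / servicesReused);
        System.out.printf("Service creation ratio: %.2f%n", (double) newServicesCreated / servicesInScenario);
    }
}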

10) Embrace change


The hallmark of a service-based system is its ability to accept both planned and unplanned change. The service inventory is a key enabler, and accommodating an iterative service identification process that fits hand in glove with SOA governance processes is critical. Iterations may align with roadmap phases or continuous improvement initiatives, or result from internal/external triggers (e.g. technical advancements that even out service overhead may allow for additional fine-grained services). Such changes may result in new services, service revisions and service retirements.

Conclusion:
Composite applications assembled from an inventory of services enable business agility. Service identification yields this list of business and technical services. It's relatively easy to identify a set of services; however, the ROI is governed by the nature of the SOA initiative and the frequency and magnitude of changes. This article has highlighted key best practices to identify, validate and verify service inventory content well before implementation.

Ten ways to identify services


Introduction: Service fundamentals
In a service-oriented IT landscape that has been shaped according to the principles of service-orientation, functionality and data are provided by means of services. These services can be defined in a uniform fashion based on international standards (XML, SOAP, WS-*), or they may be based on more traditional and proprietary mediums. Modern integration technologies, such as the ESB, enable consumption of services regardless of location, platform or programming language. By orchestrating services in the desired order, business processes can be optimally supported with functionality and data from systems both internal and external to the IT enterprise.

Along with the rise in popularity of SOA began a community-wide discussion about how best to identify services. When is a service "too big" or "too small", when is it "too specific" or "too generic", and when is it exactly "right"? Before getting into the most common approaches we need to establish some fundamentals.

First and foremost, generally accepted architectural principles for services exist. These SOA design principles should always play a guiding role when identifying services. The eight established service-orientation principles [REF-1] are: Standardized Service Contract, Service Loose Coupling, Service Abstraction, Service Reusability, Service Autonomy, Service Statelessness, Service Discoverability and Service Composability. The golden rule of successful service identification is that services should adhere to these principles.

Secondly, there are various ways of typing services using service classifications or service models. Examples include presentation services, process services, business services, application services and data services [REF-2]. Naturally there are approaches that are particularly suitable for each type of service. In this article we primarily focus on methods for business and application-centric services. When desired, these two service types can be further subdivided (e.g. into create, read, update, delete, transform, generate, select, value, validation and calculation services [REF-3]).

Finally, what determines whether a service is well-defined depends on perspective: an administrator will have different requirements than a process designer or a tester. Ultimately, the success of a service is measured by the value it provides to the organization over time.

Ten common methods for service identification and definition


Even though these methods are listed individually, in practice they can be combined to shape unique approaches and methodologies.

Method 1: Business process decomposition


One of the most common approaches for identifying and deriving services uses business processes as a starting point. The business process is subdivided into sub-processes or decomposed into granular activities and tasks. The lowest level tasks can consist of small, cohesive "logical units of work" that are supported by the functionality offered by distinct services. This results in services that are very "demand-driven". A great benefit of this approach is that the resulting services have a guaranteed fit with an organization's functional needs. This method is also very intuitive, allowing project teams to use it for proof-of-concepts and pilot projects. The strong focus on the demand side can have its challenges. A (too large) gap between business process and applications may result if the services are only modeled according to business process definition specifications, without taking implementation considerations into account. In addition, unless the modeling effort involves iterating through multiple business processes, services can be tailored too specifically to the tasks and activities of one business process (resulting in services that may not be reusable). Even when deriving services from multiple processes, several activities might require similar functions. On-going coordination is required to avoid unintended redundancy across services defined by project teams working in parallel.
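As a small illustration of this decomposition (the process, task and service names here are hypothetical), the lowest-level tasks of a "handle order" process might each map to a candidate service operation, with the process itself becoming an orchestration over those services. A minimal Java sketch:

// Hypothetical sketch of Method 1: a business process is decomposed into
// low-level tasks, and each task maps to a candidate service operation.
interface CheckInventoryService { boolean inStock(String productId, int quantity); }
interface ReserveStockService   { void reserve(String productId, int quantity); }
interface InvoiceService        { String createInvoice(String customerId, double amount); }
interface NotificationService   { void notifyCustomer(String customerId, String message); }

// The process itself becomes an orchestration over the services derived from its tasks.
public class HandleOrderProcess {
    private final CheckInventoryService inventory;
    private final ReserveStockService reservation;
    private final InvoiceService invoicing;
    private final NotificationService notification;

    public HandleOrderProcess(CheckInventoryService inventory, ReserveStockService reservation,
                              InvoiceService invoicing, NotificationService notification) {
        this.inventory = inventory;
        this.reservation = reservation;
        this.invoicing = invoicing;
        this.notification = notification;
    }

    public void execute(String customerId, String productId, int quantity, double amount) {
        if (!inventory.inStock(productId, quantity)) {
            notification.notifyCustomer(customerId, "Item is back-ordered");
            return;
        }
        reservation.reserve(productId, quantity);
        String invoiceId = invoicing.createInvoice(customerId, amount);
        notification.notifyCustomer(customerId, "Order confirmed, invoice " + invoiceId);
    }
}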

Method 2: Business functions


As just mentioned, a crucial point in the identification of services based on a single business process is the tight coupling between the process and these services. This is in stark contrast to the underlying idea that services should provide the means to decouple business logic from IT facilities. A possible solution to this problem is to start from a business function model. This abstracts from the way the business processes have been implemented. Analogous to the business process approach, a step-wise refinement towards services is used. In this case, however, the most detailed business functions in the functional decomposition are translated into services. Just like the process-based approach, the function-based method is demand-driven and carries the same risks. Using business functions as a starting point mitigates the risk of service redundancy, and functional overlap is eliminated via the business function model.

Method 3: Business entity objects


The purpose behind most services is to process business information. By modeling services according to business object models, business entity-based services can be identified, commonly requiring CRUD-type functions. This approach relies on the use of canonical data models (CDMs) that standardize the information exchanged between services. Therefore, this approach can be considered supply-driven. Canonical services rely on technology resources that use their own particular data models. Data consistency is achieved by mapping between the applications' data models and the CDM. The data elements that comprise a single CDM object can hence be managed in different applications. Nearly all current ESB products offer support for CDMs. The real challenge lies in achieving consensus with regard to the exact definitions of common objects. A strong point of this approach is that the semantics of services receive attention in early modeling stages, thereby reducing the amount of undesirable design changes required when projects get closer to production phases. The main pitfall of this method is the need for standardized data models. Depending on the scope of the SOA project, this requirement can result in "analysis paralysis", despite the fact that only the business objects that play a role in exchanging data need to be modeled. A domain-based roll-out of services can help overcome concerns about having to establish global data models.
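A minimal sketch of an entity-based service, assuming a simplified canonical Customer object, might look like the following in Java. The names are illustrative, not part of any standard CDM.

import java.util.Optional;

// Hypothetical sketch of Method 3: a business-entity service offering CRUD-style operations
// over a canonical data model object (here a simplified canonical Customer).
record CanonicalCustomer(String id, String name, String email) { }

public interface CustomerEntityService {
    String create(CanonicalCustomer customer);          // returns the new canonical id
    Optional<CanonicalCustomer> read(String customerId);
    void update(CanonicalCustomer customer);
    void delete(String customerId);
}
// Each provider maps between this canonical model and its own application-specific data model,
// so consumers never see the underlying CRM or ERP representation.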

Method 4: Ownership and responsibility


This approach is not a "true" method, but it is recommended for making choices about which services should be offered. SOA requires a well-defined structure for decision-making, where roles and responsibilities for processes and services are clearly allocated and assigned. When identifying services, the party that carries the responsibility to make available the required functionality determines which services will ultimately be offered. Naturally methodical design approaches play a role in this process, but there are several other concerns that must be taken into account when settling on the final choice: development costs, maintenance costs, lifecycle management of underlying applications and platforms, priorities, availability of human resources, and so on. The biggest advantage is that it is always clear who owns a service. In other methods this issue can become a point of debate and can even result in political consequences. On the downside, services from different domains may end up overlapping because of their required ownership structure. Additionally, the organization must have a functioning governance platform in place, which implies a level of maturity that is not yet commonplace.

Method 5: Goal-driven
With this approach a project team decomposes a company's goals down to the level of services. In this context, a service is regarded as a goal that can be executed through automated support [REF-5]. For example, a goal such as "increase customer retention" can result in a service called "register customer for loyalty program".

The advantage is the strong relationship forged between services and company strategy. However, there are two distinct problems to this method: goals tend to be subjective, and a fair amount of IT cannot be directly aligned to business goals. Subjectivity may well cause two business goals to be decomposed into two distinct services, even though the desired functionality is identical (which means that using a single service would have been preferable). Also, because many IT capabilities cannot be directly related to business goals, there is a constant risk that many potentially useful services will simply be overlooked.

Method 6: Component-based
The essence of using components is to divide IT-functionality into units with maximal internal cohesion and minimal external coupling. Components are truly self-contained units of functionality. Various methods to identify components have already been introduced in the realm of component-based development. A guiding principle in these approaches is that each component has exactly one owner and that the responsibilities of each component have to be defined as precisely as possible. These responsibilities can be used as a starting point for identifying services. In theory, component-based development results in a functionally organized IT enterprise. Components can be custom-coded or purchased off-the-shelf. Additionally, a need arises to compose services offered by components into composite services. Currently suppliers of large monolithic applications (such as ERP and CRM systems) tend to organize their applications in a more modular fashion and to make them available through services. These modules correspond roughly with components. The benefit to basing services on components is that the service identification process is greatly simplified. The bulk of the analysis work has already been carried out as part of the component-based development method. However, in reality, this can lead to several problems. Modern-day services and traditional components rarely share the same goals, requirements, and expectations. Creating a series of fine-grained services that mirror underlying components can severely inhibit an SOA initiative from attaining strategic goals that were never taken into consideration when the components were first designed.

Method 7: Existing supply (bottom-up)


A pragmatic way of quickly defining services is to base them on immediate requirements for information and functionality. In this case, the starting point is the functionality provided by existing legacy applications. The systems that provide the bulk of the automated support required by the primary business processes are selected. Using tools and wizards, the existing interfaces, APIs, screens, transactions, queries and tables are made accessible through services. This classic bottom-up approach is supply-driven by nature, and does not focus on reusability (or even usability!) of the identified services. Hence it is commonly necessary to cluster the functionality and remove functional overlaps by combining similar services into a single (often monolithic) service. The main advantage of bottom-up delivery is that it requires little time to reach a first definition of services. It is an appropriate approach if the functionality of the existing applications is urgently required and perhaps also sufficient to support both current and future business processes. A potentially positive side-effect of this method is that it can be used in a context where few process or function models are available. However, ultimately this is not a recommended approach for defining services in support of service-orientation. The Law of Conservation of Challenges will rear its ugly head: badly designed applications that have been adapted to changing circumstances many times over (and are tightly coupled to the business processes) will make it very difficult to design reusable and future-proof services. In the end, this approach almost always leads to the creation of new application silos. It just happens that in this case, the silos are themselves services.
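The following hypothetical Java fragment illustrates the bottom-up pattern: a thin facade exposes a legacy call as a "service", and the resulting contract simply mirrors the legacy system's shape rather than the business process, which is exactly why such services tend to inherit the design problems of the systems they wrap.

// Hypothetical sketch of Method 7: an existing legacy call exposed through a thin service facade.
class LegacyOrderSystem {
    // Imagine this is a screen-scraped transaction or an old API we cannot change.
    String fetchOrderRecord(int orderNumber) {
        return "ORD|" + orderNumber + "|OPEN|1999-12-31";
    }
}

public class OrderLookupServiceFacade {
    private final LegacyOrderSystem legacy = new LegacyOrderSystem();

    // The "service" operation is shaped by the legacy system, not by the business process.
    public String getOrder(int orderNumber) {
        return legacy.fetchOrderRecord(orderNumber);
    }

    public static void main(String[] args) {
        System.out.println(new OrderLookupServiceFacade().getOrder(42));
    }
}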

Method 8: Front-office application usage analysis


Charting the current demand for information and functionality is a pragmatic way of identifying services quickly. In this approach, a set of applications is selected that supports the majority of the primary business processes. The functionality offered by the back-office applications is then surveyed by making a list of the queries and transactions used by the set of front-office applications. These functions are clustered, redundancy is removed, and finally an optimization step combines comparable functions into a single service. The most obvious advantage is that services can (again) be quickly identified. In a way, the applications provide a view on the business function model, and the services found using this method can score well on immediate reusability, as long as the clustering and optimization steps are properly carried out. Another side-effect is that this approach can be utilized in a context that lacks usable process or function models. However, at the end of the day, the Law of Conservation of Challenges applies once more. Applications that suffer from bad design or that are tightly coupled to the current implementation of the business process can pose severe challenges to designing quality services that are expected to remain reusable on an on-going basis. This approach should really only be considered when you have a great deal of confidence in the quality of existing application designs.

Method 9: Infrastructure
Platform independence might well be an accepted architectural principle for services, but composite services in particular demand extra attention. This method acknowledges that services cannot always be identified independently of the technical infrastructure that is being used.

How convenient is it when a service composes two utility-centric services that run on separate platforms (e.g. a mainframe and a Unix machine)? Consider the required connectivity, execution and potential rollback of transactions, variations in availability, security and monitoring, network traffic, etc. Note that even though it is nearly always technically possible to solve these issues (using modern middleware), a cost-benefit analysis might indicate that an alternative solution is called for. Non-functional requirements also play a part in this analysis (see Method 10). When discussing the advantages of this approach we can be brief: it should only be used when absolutely necessary. The core idea of service-orientation is to hide and abstract the underlying application environment (and especially its supporting infrastructure layer!).

Method 10: Non-functional requirements


This method uses non-functional requirements as the primary input for identifying services. It is not a "method" per se, but more a set of techniques that can be used on top of other methods. After a preliminary set of services has been identified using one of the other service identification approaches, the (future) service provider verifies the feasibility of the non-functional requirements of the (future) service consumer(s). If one or more of the non-functional requirements are deemed infeasible, but still more important than the design principles of the identification method, the services may be redesigned. Two common non-functional requirements that can play a role here are security and performance. Consider the following example: to realize Service X, functionality is needed from Applications A and B. If Application A conforms to the security requirements imposed on Service X and Application B cannot, it may make sense to split Service X into separate secured and non-secured services. Alternatively, if an organization chooses to implement SOA by means of Web services, some services may not be able to deliver the required performance. The choices that can be made are merging multiple services (reducing the amount of inter-service communication required) or deviating from Web services standards for that particular service. Something to keep in mind is that if performance considerations necessitate redesigning a large number of services, further analysis is required to determine whether the current IT architecture is suitable for SOA in general (or only for this part of the IT landscape).

Assessing methods with the service pattern language


Regardless of the approach you choose to identify and define services, it is essential that you understand the fundamental theory behind service design in support of service-orientation.

The Basic Service Design Pattern Language [REF-4] establishes the foundation for service identification, definition, and design by providing seven basic design patterns that form a fundamental service pattern language. This pattern language can be considered a primitive process that addresses only the most necessary steps for creating services. You can trace all ten of the methods explored in this article back to this fundamental process. Of course, each method has its own priorities and trade-offs that can affect the extent to which any given service design pattern is supported. But by understanding this basic design pattern language, you can better evaluate these methods as to how well they support service-orientation in general.

General pitfalls
As with the pursuit of anything worthwhile, the road to attaining a good service portfolio is lined with pitfalls. Here are some common examples that apply to service identification:

Services in name only - The terms "SOA" and "service" are used rather loosely in many IT environments. Project teams may choose to label their applications as "service-oriented" simply because it sounds more cutting edge, or due to the common misperception that the use of Web services alone constitutes an SOA. Either way, when it comes to implementing the "services" in these types of initiatives, the programmers tend to run the show. They create a plethora of (mostly technology-centric) Web services, disregarding business/IT alignment, reusability or any other properties a service should have. The end result is an application that uses Web services but is not itself service-oriented. Ultimately, this pitfall leads to great disappointment when the expected benefits of SOA are never realized.

Perfect non-existent services - On the other side of the spectrum lies the danger of having analysts and architects model wonderful services that simply cannot be built using today's technologies - or can only be realized at murderous cost. This pitfall can be avoided by constantly ensuring that all modeling efforts are balanced with a dose of reality.

And never shall they meet services - When different project teams within the same organization commit to radically different service definition and delivery methods (such as the opposing top-down and bottom-up approaches), collections of services can be created that will simply never be compatible. These can form natural silos that will eventually impose significant integration effort when cross-silo communication is required.

Babel services - If an organization does not have a canonical data model (and therefore the definition of the service semantics is not clear), the services are automatically incompatible. The result is an environment that will depend on transformation and bridging technologies for many, many years to come. This will ultimately inhibit every aspect of service-orientation.

Spaghetti services - A problem that can occur when services are being defined at multiple levels of granularity is that technical terminology and business terminology can get so mixed up that the services themselves become unintuitive, confusing, and sometimes just unusable.

There are some simple rules for avoiding all of these pitfalls:

1. Adhere to the principles of service-orientation. These are essential and fundamental to creating well-defined services that support the strategic goals of SOA.
2. Understand your options when it comes to service identification and definition approaches by studying the methods covered in this article.
3. Measure the effectiveness of the service identification approaches you are considering by mapping proposed service design processes to the fundamental service design pattern language.

Conclusion

Using proven methods to tackle the issue of creating well-defined services is certainly recommended. It allows you to leverage the experience of those that have already been through this process many times. However, no one method is perfect. Each has its own benefits and trade-offs. It's always in your best interest to take proven methods as a starting point and then consider how they can be optimized in support of realizing the requirements that are specific to your SOA goals.

Issues in Service Identification

The identification of services has a large effect on the resulting IT landscape. In fact, most of the advantages a Service-Oriented Architecture can offer depend on what the system classifies as services. More specifically, the granularity of the services is very important in achieving flexibility and reuse of services. Some other issues are also related to this granularity; a couple of them are explained below. (Note: this discussion covers automated services; services that need human involvement of course also exist, and how to deal with those is covered in the previous post.)

Flexibility: By employing an SOA one tries to establish a flexible IT landscape that is easily adaptable when changing business needs demand it. However, when all the needed functionality is defined in, for instance, three different services, not much differentiation is possible in orchestrations. By distinguishing a lot of small services, a lot of different orchestrations can be developed, which can also be reused as services. Roughly speaking: the higher the granularity, the higher the resulting flexibility.

Performance: Using orchestration and services can affect the performance of the total system. Nowadays BPEL (Business Process Execution Language) is mainly used for defining the orchestration, WSDL (Web Service Definition Language) for defining the service interface and SOAP (Simple Object Access Protocol) for defining the messages. All of these standards are XML-based. This makes them human-readable, but also creates a lot of overhead for system-to-system communication. Imagine the difference in performance between a Java function call (which is compiled into byte code) and a service invocation sending a SOAP message over HTTP. Conclusion: the higher the granularity, the more SOAP calls are needed, and more SOAP calls means lower performance.

Reuse: The use of services, defined in a unified way, provides the opportunity to reuse these services easily. A service directory can be created, and different process orchestrations can reuse the same service. But the granularity of the services also affects the possibilities you have. Again, when specifying just a few services, reuse is very hard or isn't possible at all. It is easy to see that using smaller services will give more opportunities for reuse.

Complexity: Implementing an SOA for a big enterprise can result in a lot of services. The governance of all these services is a big challenge. A service directory is needed with good search capabilities. Furthermore, all services need to be specified in a clear, unified way. These metadata specifications are hot issues in the current market. But that's not all: think about different versions of the same service due to further development, changing regulations, bug fixes, and so on. Also, services developed in different business units of a company that are just slightly different can cause a lot of trouble. And when different business units use the same service, the question arises of who is responsible for it. You can imagine that a higher granularity pushes the complexity to a maximum.

Figure 2 summarizes the issues in service identification. Do not try to read relations into the distances and curves or search for scientific foundations; the figure just attempts to clarify how the issues relate. On the x-axis the granularity is shown; from left to right the granularity increases, meaning that the services become smaller. On the y-axis the four issues are shown. Low and high complexity, flexibility and performance are straightforward; "high reuse" means that a high percentage of services are used in more than one orchestration.

Figure 2 - Overview of issues in service identification

How can we find the optimal decomposition of our business needs into services? First, some issues in component identification are explained; subsequently, a strategy for choosing the service landscape that best fits the enterprise architecture is proposed.

Issues in Component Identification

Which services and data should be in the same component? This question does not have one 'right' answer. When identifying the different components, a lot of issues play a role; a couple of them are explained in detail.

Existing systems: When attempting to identify business components, one always has to deal with existing systems; a green-field approach will (almost) never occur. This means that the existing IT landscape has to be analyzed to determine which components already exist and which services they deliver. These existing components can be best-of-breed or custom-made applications. The problem with these components is that they do not always fit exactly into the new service-oriented architecture. If they deliver more services than you need, those services should be disabled, but not every application supports that. In some cases it is also difficult to make system-to-system connections with such applications, for example when you need data from an existing application for use in another component, or when you would like to force an existing application to use a data source you provide. This article will not delve deeper into this field, called Enterprise Application Integration, but be aware of the difficulties influencing the identification of business components. For more information on this subject, the book by Linthicum [4] is a good starting point.

Performance: Which services and data are coupled in the same component can affect the performance of the whole system. In principle the following heuristic can be applied: "Choose the elements so that they are as independent as possible; that is, elements with low external complexity (low coupling) and high internal complexity (high cohesion)." This is not only a good heuristic for reducing the complexity in your system; it is easy to see that grouping elements with high cohesion reduces the component-to-component communication needed (a small sketch after the Strategy discussion below illustrates this heuristic). When components are deployed on different servers, or when communication protocols with some overhead are used, the performance advantages delivered by a good component identification process are huge.

Maintainability: One of the most important issues in IT is maintainability, which can be defined as "the ease with which a software system or component can be modified or adapted to a changed environment" [5]. Using small, well-defined components satisfying the heuristic stated above can help a lot in increasing the maintainability of a system. Big, monolithic components often lead to so-called spaghetti code, meaning that they are horrible to maintain for anyone other than the developers who built them. This leads to the same conclusion as for the previous issue: a good component identification process can increase maintainability significantly.

So component identification is important, as is service identification. Can this be achieved optimally? Or do we have some best practices?

Strategy

Service identification can best be performed in a top-down manner. After defining the process architecture, the needed services can be determined. The issues mentioned before (complexity, flexibility, reuse, and performance) should be kept in mind. As we have seen, using small services gives high flexibility and services that can be reused extensively; on the other hand, complexity keeps growing while performance decreases. An optimal size for each service does not exist. The best approach is to determine the processes within the process architecture with which the enterprise differentiates itself from competitors. For example, accounting processes are mostly not differentiators, but processes describing the handling of client support could be. The services used in these differentiating processes should be kept small to achieve high adaptability, and the alignment of business and IT should be as high as possible for them. For other processes, best-of-breed applications can be bought.
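Returning to the coupling/cohesion heuristic quoted under Performance above, the following is a loose Java sketch; the component names, record types, and pricing rules are invented for illustration and do not come from the article. It contrasts a component that keeps a cohesive task behind one operation with one that forces callers into a chatty, tightly coupled conversation.

// A loose, hypothetical illustration of the low-coupling / high-cohesion heuristic.

record Order(String productId, String customerId, String region, int quantity) {}
record Quote(double total) {}

// High cohesion, low coupling: everything needed to price an order sits behind
// one operation, so a calling component makes a single call.
interface PricingComponent {
    Quote priceOrder(Order order);
}

// Low cohesion, high coupling: pricing knowledge leaks out and the caller must
// orchestrate several fine-grained calls (more component-to-component traffic,
// more places to change when pricing rules change).
interface ChattyPricingComponent {
    double lookupBasePrice(String productId);
    double lookupDiscount(String customerId);
    double computeTax(double amount, String region);
}

class SimplePricingComponent implements PricingComponent {
    @Override
    public Quote priceOrder(Order order) {
        double base = 10.0 * order.quantity();              // placeholder price per unit
        double discount = order.quantity() > 10 ? 0.9 : 1.0;
        return new Quote(base * discount * 1.2);             // tax handled inside the component
    }
}

public class CohesionSketch {
    public static void main(String[] args) {
        PricingComponent pricing = new SimplePricingComponent();
        Quote quote = pricing.priceOrder(new Order("widget-1", "cust-7", "EU", 12));
        System.out.println("Quoted total: " + quote.total());
    }
}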


Service Identification - Top-Down or Bottom-Up?


Again, this seems to be an interesting topic - how should you identify the services you develop for a given solution? Top-down would suggest that you analyze your business processes, identifying activities in support of your process that become candidate operations on service specifications (see a previous post on the view that services and processes are the primary concerns of Enterprise SOA). A bottom-up approach would suggest that you analyze existing applications and assets to identify those that are candidates to be "service-ified". The reason I want to step into this debate is that there seem to be more and more people writing on the subject of development processes for SOA who take very specific stands on which of these is more valuable.

So, which is right for you? Well, as usual, the answer is that there is no right answer; this is a sliding scale, and how much of each technique you use will depend very much on the solution you are developing and the development processes you follow. I met with a customer a week or so ago who is very much struggling with this decision, and specifically feels that the current literature indicates that they have to pick one over the other. Within the RUP (Rational Unified Process), rather than using these top-down and bottom-up terms, which instantly give the feeling of contradiction, we simply identify a set of techniques that may be used for service identification. The following picture is taken from the RUP update for SOA and demonstrates these techniques quite simply. In terms of the use-case and business process techniques, these are more top-down approaches, whereas the remaining techniques are more focused on the analysis of existing assets. So, let's take a look at the advantages and disadvantages of the top-down vs. bottom-up approaches in general.
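Before that comparison, here is a rough sketch of what the top-down analysis described above might produce; the claim-handling process, its activities, and the candidate service and operation names are all hypothetical and chosen only to show the activity-to-operation mapping.

import java.util.List;
import java.util.Map;

// Hypothetical, simplified illustration of top-down identification: business
// process activities are listed first, then mapped to candidate service operations.
public class TopDownIdentificationSketch {
    public static void main(String[] args) {
        // Step 1: activities discovered while modelling the business process.
        List<String> claimHandlingActivities = List.of(
                "Register claim", "Validate policy coverage",
                "Assess damage", "Approve payout", "Notify customer");

        // Step 2: each activity becomes a candidate operation on a candidate service.
        Map<String, String> candidateOperations = Map.of(
                "Register claim",           "ClaimService.registerClaim",
                "Validate policy coverage", "PolicyService.validateCoverage",
                "Assess damage",            "AssessmentService.assessDamage",
                "Approve payout",           "PaymentService.approvePayout",
                "Notify customer",          "NotificationService.notifyCustomer");

        claimHandlingActivities.forEach(activity ->
                System.out.println(activity + "  ->  " + candidateOperations.get(activity)));
    }
}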

Top-down - Has the advantage that the services identified throughout the layers of the solution are aligned with the business processes that provided the scope for the solution. It is also attractive from a project management perspective, in that the business process under consideration provides a natural project scope for the development effort. However, the major drawback to this approach (and the reason the customer I mentioned earlier came unstuck) is that it becomes harder to ensure that you develop services for reuse (see some thoughts on SOA and Reuse), as developers are looking to build services that support this particular process rather than ones that will contribute to an enterprise-wide service portfolio.

Bottom-up - A bottom-up approach has the potential to deliver a set of services that can support a number of processes, addressing the concern above, as the developers are looking across a broad set of artifacts. The issues here are that, where data is the focus of the artifact analysis, the tendency is to generate CRUD services (which is bad) or to develop access operations that do not match the requirements of the processes well, and therefore require business services to make multiple calls into data-management services.
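To illustrate the CRUD concern in the bottom-up paragraph above, here is a contract-level Java sketch; the interfaces and record types are invented for this example. Entity-centric CRUD services push the cancellation rules and the extra calls into the business process, while a task-aligned service exposes one operation that matches the process step.

// Hypothetical contracts, purely to illustrate the CRUD-vs-task distinction.

// Bottom-up, data-centric result: one service per entity, CRUD operations only.
// A "cancel order" business process now needs read + update calls on two services
// and must embed the cancellation rules itself.
interface OrderDataService {
    OrderRecord read(String orderId);
    void update(OrderRecord record);
}

interface PaymentDataService {
    PaymentRecord readByOrder(String orderId);
    void update(PaymentRecord record);
}

// Process-aligned alternative: a single operation matching the business task,
// so the orchestration makes one call and the rules stay inside the service.
interface OrderManagementService {
    CancellationResult cancelOrder(String orderId, String reason);
}

record OrderRecord(String orderId, String status) {}
record PaymentRecord(String orderId, boolean refunded) {}
record CancellationResult(boolean cancelled, String message) {}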

The best advice right now seems to be that you should lead with a top-down approach and manage development teams and projects accordingly, but that in parallel you should have an SOA architecture team that can review service specifications, propose existing services for reuse, and validate new services to ensure they fit into the enterprise service portfolio. This frees project teams from having to understand the entire existing portfolio, and it allows the architecture team to capture reuse guidelines and to act as an intermediary between different project teams. This does not imply that there is no bottom-up identification, but rather that it happens within the scope of a project that is itself top-down.

P.S. Beware any development process that dictates such a rigid set of techniques and an equally rigid step-by-step approach - for there you will find a process that has either only been used on a single project or never been used at all.

The Top-Down vs Bottom-Up SOA Debate Revisited


A long-standing debate in the SOA community about top-down vs. bottom-up approaches to SOA resurfaced recently, after open source ESB maker MuleSoft announced the release of a management console said to support their bottom-up approach to SOA management philosophy. Rob Barry from SearchSOA gathered some opinions about bottom-up vs. top-down approaches: when building out a SOA, a bottom-up governance approach focuses on integrating services around individual ESBs that can be quickly assembled. This approach has been criticized for requiring excessive updates and rework later on. Meanwhile, an opposing "top-down" governance approach involves extensive planning and strict policy enforcement, and has been faulted for taking much more time to produce results.

The opinions gathered in the post agree that bottom-up is a good approach to start with, basically when the main objective is to integrate. They also agree that the top-down approach requires much more business involvement, and they conclude that deciding which strategy to use will depend on the business-IT relationship.

Barry's post prompted a question on ebizQ with few but interesting responses. In one response, Avi Rosenthal distinguishes the two approaches based on what you are building: SOA is an architectural style, and building architecture is top-down, not bottom-up. Web Services, sometimes wrongly equated with SOA, are technical, and Web Services are built bottom-up. Building SOA bottom-up is a wrong approach, sometimes called ABOS (A Bunch Of Services); if you build SOA bottom-up, you will probably end up with a lot of redundancy and no architecture at all. However, the result of building SOA only top-down could be a conceptual architecture with no runtime artifacts, so some SOA efforts should be bottom-up efforts. To sum up: initially SOA is a top-down approach, but a pragmatic approach requires mixing top-down with bottom-up.

In another response to the question, Michael Poulin says the consumer-centric nature of SOA forces the top-down approach: if you start constructing services from what you have - bottom-up - you run a very high risk of ending up with what you have, not with what your consumers need. SOA is a consumer-centric, business-oriented architecture. Starting with the consumer needs, you cannot avoid the top-down approach; this is the starting point, always. However, in the next step, you had better assess your capabilities, i.e. look at the consumer needs from your underlying resources.

This debate is not new. Back in 2005, John Crupi posted that SOA was a business-driven architectural style and, as such, must be top-down to be successful: "And top-down means problem to architecture to solution. It does not mean working from what we have and just wrapping it with new technologies just because we can. This bottom-up approach is quite natural and easy and is the perfect recipe for a SOA failure." Back then, other voices like Bill de hÓra's reaction post went against the idea of a "top-down or fail" principle:

The difficulty with a solely top-down approach is that there is no top. SOA systems in reality tend to be decentralised - there's no one point of architectural leverage or governance, no one person who's going to be able to say and then enforce "a decision in ten minutes or the next one is free".

This debate has been going on for years. At the end of the day, it seems that some tool vendors have chosen the bottom-up strategy. The advantage of a bottom-up approach is that you can use the exposed end-points as building blocks for functionality and integration tasks that you didn't even think of when you started out.

Top Down vs. Bottom Up


There are several project delivery approaches that can be employed to build services. The bottom-up strategy, for example, is tactically focused in that it makes the fulfilment of immediate business requirements a priority and the prime objective of the project. On the other side of the spectrum is the top-down strategy, which advocates the completion of an inventory analysis prior to the physical design, development, and delivery of services.

Figure 1 - A comparison of bottom-up and top-down delivery strategies.

As shown in the figure, each approach has its own benefits and consequences. While the bottom-up strategy avoids the extra cost, effort, and time required to deliver services via a top-down approach, it ends up imposing an increased governance burden, as bottom-up delivered services tend to have shorter lifespans and require more frequent maintenance, refactoring, and versioning. The top-down strategy demands more of an initial investment because it introduces an up-front analysis stage focused on the creation of the service inventory blueprint. Service candidates are individually defined as part of this blueprint so as to ensure that subsequent service designs will be highly normalized, standardized, and aligned.

Service classification in SOA


Service classification is an important concept in realizing SOA, and ontology can play a vital role in classifying services. In SOA we generally talk about identifying services, implementing them, composing them, and governing them. The key factor missing here is service classification and a service capability model. Understanding what types of service exist (technical, common, industry-specific, business), what capabilities a service supports (for example, a payment-transfer service that supports only the SWIFT format), and what the relationships are between services and business processes would help to utilize these services effectively. One of the phases in SOA is service selection; having a classification would help the runtime pick the right service implementation based on the client request, as the sketch below illustrates. The classification system for SOA can be viewed as metadata for a service, which makes the service smarter and self-describing. Having a classification model and capability model for services can also aid in realizing dynamic, BPM-enabled SOA solutions.
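As a hedged illustration of the classification and capability idea described above, here is a small Java sketch; the category names, the capability string, and the registry API are invented for this example rather than taken from any SOA product. It stores classification metadata with each registered service and lets the runtime select an implementation by category and required capability.

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Hypothetical classification metadata attached to each registered service.
enum ServiceCategory { TECHNICAL, COMMON, INDUSTRY_SPECIFIC, BUSINESS }

record ServiceDescriptor(String name,
                         ServiceCategory category,
                         Set<String> capabilities,    // e.g. "payments:swift"
                         Runnable implementation) {}  // stand-in for the real endpoint

class ClassificationRegistry {
    private final List<ServiceDescriptor> services = new ArrayList<>();

    void register(ServiceDescriptor descriptor) {
        services.add(descriptor);
    }

    // Runtime selection: pick the first service whose classification and
    // declared capabilities satisfy the client's request.
    Optional<ServiceDescriptor> select(ServiceCategory category, String requiredCapability) {
        return services.stream()
                .filter(s -> s.category() == category)
                .filter(s -> s.capabilities().contains(requiredCapability))
                .findFirst();
    }
}

public class ClassificationSketch {
    public static void main(String[] args) {
        ClassificationRegistry registry = new ClassificationRegistry();
        registry.register(new ServiceDescriptor(
                "SwiftPaymentTransfer",
                ServiceCategory.BUSINESS,
                Set.of("payments:swift"),
                () -> System.out.println("transferring via SWIFT...")));

        registry.select(ServiceCategory.BUSINESS, "payments:swift")
                .ifPresent(s -> s.implementation().run());
    }
}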
