
This research note is restricted to the personal use of jesus.rendon@itesm.mx

G00246894

Hype Cycle for Operational Technology, 2013


Published: 31 July 2013
Analyst(s): Kristian Steenstrup, Geoff Johnson

The rise in emerging technologies that reflect the convergence and integration of OTs with IT is significant, but many critical technologies are at least two to 10 years from mainstream enterprise adoption. This research shows that it will be a protracted and complex path.
Table of Contents
Analysis
  What You Need to Know
  The Hype Cycle
    Technologies Facilitating IT and OT Alignment and Integration
    Trends in IT and OT Alignment
    Practical Use of this OT Hype Cycle
  The Priority Matrix
  Off the Hype Cycle
  On the Rise
    Lidar
    Exploiting Sensor Grids
    ITAM Processes for OT
    Decisions and Recommendations as a Service
    IT/OT Skilled Workforce
    IT/OT Convergence in Manufacturing
    High-Performance Message Infrastructure
    Data Science
    IT/OT Alignment
    IT/OT Impact on EA
    IT/OT Integration
  At the Peak
    IT/OT Convergence in Life Sciences
    UAS for Business
    Networking IT and OT
    Operational Technologies for Government
    Big Data
    Intelligent Lighting
    Facilities Energy Management
    Complex-Event Processing
    Open SCADA
    Operations Intelligence
    System Engineering Software
  Sliding Into the Trough
    Integrated and Open Building Automation and Control Systems
    Operational Technology Security
    Asset Performance Management
    Machine-to-Machine Communication Services
    Operational Technology Platform Convergence
    Real-Time Infrastructure
    Hardware Reconfigurable Devices
    Enterprise Manufacturing Intelligence
    Vehicle-to-Infrastructure Communications
    Vehicle-to-Vehicle Communications
    Process Control and Automation
  Climbing the Slope
    Intelligent Electronic Devices
    Public Telematics and ITS
    Remote Diagnostics
    Enhanced Network Delivery
    Fleet Vehicle Tracking
  Entering the Plateau
    Process Data Historians
    Commercial Telematics
    Event-Driven Architecture
  Appendixes
    Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading


List of Tables
Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures
Figure 1. Hype Cycle for Operational Technology, 2013
Figure 2. Priority Matrix for Operational Technology, 2013
Figure 3. Hype Cycle for Operational Technology, 2012

Analysis
What You Need to Know
Operational technology (OT) is hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes and events in the enterprise. It can also be thought of as the industrial subset of the wider Internet of Things. This Hype Cycle focuses on OT as it relates to integration with IT. It details a wide range of IT- and OT-related technologies, processes and methods that will aid OT's alignment and integration with IT during the next 10 years. These are at different stages of maturity as stand-alone technologies.

The wide variety of IT and OT entries are also sometimes combined into a third (hybrid) category that addresses requirements at the intersection of IT and OT. Some of these technologies, processes and methods are intrinsically IT. Some are intrinsically OT. Some are processes or methodologies required to provide OT access to conventional IT portfolios and to the management of technology in an organization. The desire for coherent, interacting information and processes appears immutable, and it is driving investment in OT alignment or integration with IT.

Only a third of the technologies described in this Hype Cycle will be mature and in mainstream adoption within the next five years. However, two-thirds of them will have a transformational or high impact on businesses that rely on OT within the next 10 years, so these technologies deserve thorough evaluation. Fully two-thirds of the total are forecast not to be mainstream until five to 10 years from now. This indicates the long path ahead for IT/OT and a lack of quick fixes.

Our analysis reveals that there are significant opportunities to be derived from aligning IT and OT, but that they will not necessarily be easily gained in the short term. In some cases, OT will benefit from adopting IT-based tools and processes. In other areas, mutual plans for the integration of data and processes will be needed.
Enduring efforts will be required to achieve the potential alignment of IT and OT. IT and OT business planners and architects in asset-intensive and operationally oriented businesses should use this Hype Cycle to judge the relative hype and maturity of the technologies described, and to determine the deployment and risk management steps in undertaking their alignment and integration.

The Hype Cycle


Previous generations of OT systems were hard-wired, electromechanical and proprietary single-purpose or stand-alone systems. They are now being replaced by more-complex OT software and firmware products that are increasingly reliant on, or based on, generic IT devices and platforms, such as microprocessors, operating systems, communications stacks, servers or data centers. Although OTs do not necessarily need to be connected via a network, most are.

As OT products evolve and take on more commercially oriented software, infrastructure and underpinnings, the governance of the OT portfolio becomes more complex and leads to software life cycle management challenges. Getting OT to interface efficiently with IT systems at a process level is difficult enough for many companies. Getting them to work together to maximize business efficiency, while avoiding negative consequences, security breaches, risks and pitfalls in the process, makes the task even more challenging.

Gartner's IT/OT alignment research focuses on OT and its relationship to the IT environment, rather than on just the OT systems themselves (which are a specialized and often industry-specific topic). The technologies profiled reflect that approach.

Technologies Facilitating IT and OT Alignment and Integration


The industrial control and sensing technologies that make up OT have historically been designed to provide critical operational functions. The potential for OT to become aligned with IT has emerged significantly in recent years, as the underlying architecture of OT has converged with technology developments in IT.

Most enterprise IT environments have a strong legacy of previously established IT assets and systems. This means that the starting point for aligning enterprise OT with IT begins with two independent and heavily committed technology environments. The makeup and terminology of these IT and OT environments is significantly industry-specific.

Because the underlying technologies used in OT systems (hardware, software, platforms, security and communications) are becoming more like those used in IT systems (that is, converging), and are increasingly sharing fundamental core technologies (such as operating systems [OSs] and communication stacks), many industries are incrementally moving to IT/OT alignment through:

- Using common standards (for economy and information management flexibility)
- Consolidating separate IT and OT enterprise architectures where possible (to provide a consistent, event-driven architecture and complex-event processing)
- Using common software procurement, configuration and life cycle management practices
- Supporting increasingly similar security models that do not compromise each other
- Blending corporate information and process integration (operations intelligence) as tactical or strategic opportunities permit


The outcome is that a company can begin to plan and manage all of its IT and OT as an interrelated whole, including the transfer of data across processes supported by bridging technologies (such as process data historians, remote diagnostics and machine-to-machine communication services). Technology managers and users in enterprises are increasingly working on closely connected IT and OT data and processes that can be made coherent and/or integrated across the enterprise.

In asset-intensive industries such as electricity, water or gas utilities, manufacturing, mining, natural resources, transportation and defense, traditional OT dominates (with process control and automation; open supervisory control and data acquisition [SCADA]; asset tracking; and telematics for facilities and energy management). However, deep integration of IT and OT business processes may remain limited for security reasons. Although some methods are well tested, the opportunities to align OT with IT in separated business units may be limited to data transfer via inefficient but reliable screen or database "scraping" and limited reuse of process data historians in IT environments.

Trends in IT and OT Alignment


Most industries have business drivers from the IT and OT domains that will increase alignment over time. This cannot happen without governance and organization being used to facilitate mutual adoption of the technologies detailed in this Hype Cycle. IT groups and those managing OT environments must mutually engage to assess the scope for alignment or integration opportunities as background technologies converge, or risk being disregarded when major technology decisions are made (such as in big data, networking IT and OT, and managing the IT/OT skilled workforce).

Similarly, planners and architects in OT groups must align with IT practices for software governance, security and life cycle management, or risk having their mission-critical OT systems fail unnecessarily (as shown in the IT/OT impact on enterprise architecture [EA], IT asset management [ITAM] processes for OT, and IT/OT alignment entries). The opportunities that come with ensuring convenient access to corporate OT information, or the reporting of it, must be weighed against the increased risk that things will go wrong because of the interaction between the sophisticated and complex IT and OT environments (using system engineering software).

OT faces more-complex software environments in the future. OT systems that were previously hard-wired, electromechanical systems with proprietary single-purpose or stand-alone implementations are now being displaced or replaced by OT software and firmware products. Companies that are not aligned in their IT and OT management may also miss opportunities to have their operations staff engage with business decision makers as they set corporate directions and make major technology decisions where the distinctions between the IT and OT domains are blurring (as in operations intelligence).
Enterprises must expect to manage hybrid projects relying on IT and OT co-investment in the future, and to holistically manage integration between these two classes of systems (OT platform convergence).

New additions to this year's Hype Cycle show a wide diversity in types of technology. They include: Lidar for remote terrain and infrastructure sensing, data science as a strategic business capability, the use of unmanned aerial surveillance (UAS) for business, intelligent (programmable) lighting, asset performance management, and decisions and recommendations as a service.


Practical Use of this OT Hype Cycle


The slope and plateau of this Hype Cycle show that there are a large number of OTs in mature industrial deployments that have the scope to align with IT. The number of technologies falling into the trough should remind enterprises of the practical complexities that may limit their near-term alignment. Most of the technologies treated in this report aspire to facilitate OT alignment with IT in the near term, as evidenced by the large number of embryonic, emerging or adolescent technologies at the trigger and peak stages. The high density on the Innovation Trigger side also indicates that IT/OT awareness is increasing, and that vendors, standards bodies, and research and development institutions are trying to address it.

The progress of the technologies reviewed in this Hype Cycle will be affected by enterprises' appreciation and adoption of overall corporate governance, organization, and information and process management. The rate of the fundamental convergence of IT and OT systems (with OT taking on IT-like characteristics) will also rest on the initiatives and tools chosen to manage complex OT environments and their alignment with the rest of the organization.

The maturity levels of IT/OT integration across industries can be quite inconsistent. The media, content and telecom sectors, for example, are more likely to be early users of advanced or enhanced networking, or networked applications and solutions, than, perhaps, transportation or utilities. Some of the technologies described are specific to particular industries, but those that are high-value or transformational are likely to have wide applicability across sectors, as indicated in the Priority Matrix. Enterprises should look for emerging best practices in applying technologies at the intersection of IT and OT in adjacent or related industries to obtain the earliest possible insight.


Figure 1. Hype Cycle for Operational Technology, 2013

[Figure omitted from this text version. The chart plots expectations against time, positioning each profiled technology along the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, with markers indicating whether the plateau will be reached in less than 2 years, 2 to 5 years, 5 to 10 years or more than 10 years, or whether the technology will be obsolete before the plateau. As of July 2013.]

Source: Gartner (July 2013)


The Priority Matrix


The 2013 Priority Matrix for OT maps the number of years to mainstream adoption and Gartner's Plateau of Productivity for each technology against the degree of business benefit expected to be created by its use in enterprises. The emerging nature of OT's alignment with IT becomes clear in the Priority Matrix. Only event-driven architecture is forecast to reach sufficient technical and commercial maturity to achieve early mainstream adoption in less than two years.

Of the other 13 technologies that will reach mainstream adoption in two to five years, only IT/OT convergence in life sciences is forecast to deliver transformational business benefits. Eight technologies will deliver high business benefits (enhanced network delivery, enterprise manufacturing intelligence, facilities energy management, hardware reconfigurable devices, IT/OT skilled workforce, operational technologies for government, process control and automation, and remote diagnostics). Moderate benefits are expected from another four technologies (commercial telematics, fleet vehicle tracking, networking IT and OT, and process data historians).

Forecasting the future roles of technologies in the five- to 10-year time frame is full of difficulties, complexities and uncertainty, given the rapidly evolving nature of IT and OT technologies, especially in their compound use in IT/OT integration. Seven technologies seen as transformational and reaching maturity in that period are big data, complex-event processing, IT/OT convergence in manufacturing, machine-to-machine communication services, operations intelligence, real-time infrastructure and vehicle-to-vehicle communications.

The 11 technologies with high benefit ratings maturing in that period are data science; decisions and recommendations as a service; high-performance message infrastructure; integrated and open building automation and control systems; IT/OT alignment; IT/OT's impact on enterprise architecture (EA); IT/OT integration; Lidar; operational technology platform convergence; operational technology security; and system engineering software. Four technologies will deliver moderate benefits in that period: intelligent electronic devices, IT asset management (ITAM) processes for OT, open SCADA and UAS for business.

The fact that Gartner forecasts that vehicle-to-infrastructure communications, exploiting sensor grids, and public telematics and ITS will not reach maturity for more than 10 years reflects the technical complexity and business difficulty of bringing OT and IT together.

Given the long time frames for many of these technologies to reach maturity, IT infrastructure and operations leaders should be teaming with their OT counterparts to establish the organizational mechanisms necessary to jointly plan, prioritize and evaluate investments in the transformational and high-potential areas shown in this Priority Matrix. This will demonstrate their stewardship in their enterprise roles, support governance, conserve scarce resources for the long haul, and provide a focus for the business to elect how and when it may align its OT with IT.


Figure 2. Priority Matrix for Operational Technology, 2013

[Matrix reconstructed from the original chart, which maps benefit (rows) against years to mainstream adoption (columns).]

Transformational benefit
- Less than 2 years: (none)
- 2 to 5 years: IT/OT Convergence in Life Sciences
- 5 to 10 years: Asset Performance Management; Big Data; Complex-Event Processing; IT/OT Convergence in Manufacturing; Machine-to-Machine Communication Services; Operations Intelligence; Real-Time Infrastructure; Vehicle-to-Vehicle Communications
- More than 10 years: Vehicle-to-Infrastructure Communications

High benefit
- Less than 2 years: Event-Driven Architecture
- 2 to 5 years: Enhanced Network Delivery; Enterprise Manufacturing Intelligence; Facilities Energy Management; Hardware Reconfigurable Devices; IT/OT Skilled Workforce; Operational Technologies for Government; Process Control and Automation; Remote Diagnostics
- 5 to 10 years: Decisions and Recommendations as a Service; High-Performance Message Infrastructure; Integrated and Open Building Automation and Control Systems; Intelligent Lighting; IT/OT Alignment; IT/OT Impact on EA; IT/OT Integration; Lidar; Operational Technology Platform Convergence; Operational Technology Security; System Engineering Software
- More than 10 years: Data Science; Exploiting Sensor Grids

Moderate benefit
- Less than 2 years: (none)
- 2 to 5 years: Commercial Telematics; Fleet Vehicle Tracking; Networking IT and OT; Process Data Historians
- 5 to 10 years: Intelligent Electronic Devices; ITAM Processes for OT; Open SCADA; UAS for Business
- More than 10 years: Public Telematics and ITS

Low benefit
- (none)

As of July 2013
Source: Gartner (July 2013)


Off the Hype Cycle


IT/OT integration providers no longer appear as a separate entry in this OT Hype Cycle, because they are listed as example vendors in the IT/OT integration entry.

On the Rise
Lidar
Analysis By: Randy Rhodes

Definition: Lidar is an optical remote-sensing technique for precisely scanning surfaces from a distance with laser light. Lidar systems use an active optical sensor that transmits laser beams toward a target. The reflection of the laser from the target is detected and analyzed by receivers to determine range and calculate position. Positional measurements of scanned surfaces are produced en masse along prescribed survey routes and combined into point cloud datasets that can be managed, visualized and analyzed.

Position and Adoption Speed Justification: Lidar sensors can be aerial or ground-based. Airborne laser mapping applications include topographic lidar, for creating models of the Earth's surface and its natural and man-made structures, and bathymetric lidar, which uses water-penetrating laser technology to capture both water surfaces and underwater surfaces and objects. Ground-based systems can be static, in the form of tripod-mounted sensors, for engineering, mining, surveying or archaeology applications; more often they are mobile. Mobile systems can be mounted on trucks, cars, trains or boats.

The major hardware components of a lidar system include the collection platform, the laser scanner system, GPS sensors and inertial navigation system (INS) sensors (to measure the roll, pitch and heading of the collection platform). Lidar data is most often collected, postprocessed and sold by surveying organizations to customers through data services contracts. Public-sector organizations, utility organizations and architect/engineer/constructor (AEC) organizations are the largest consumers of lidar point cloud datasets. Most vendors originally providing surveying services and data have refocused on providing lidar datasets, seeing new growth opportunities in the aging of public infrastructure and the arrival of new regulatory mandates.

Traditional vendors of computer-aided engineering (CAE) and computer-aided design (CAD) software are adding lidar data management tools to support 3D engineering and building information modeling (BIM) processes. Likewise, geographic information systems are evolving into geospatial services platforms that can support end-user processing of 3D models and point clouds. Lidar point cloud data is most often delivered in LAS format, a standard binary file format published by the American Society for Photogrammetry and Remote Sensing and supported by most CAD and GIS systems.

User Advice: CIOs must monitor the acquisition of lidar datasets to ensure business needs are adequately addressed. Data management infrastructure can be extensive, because datasets are large and some processing steps may be required for certain applications. IT organizations can provide guidance on consolidating data that is often maintained in multiple systems, such as structure locations, access roads, associated aerial orthophotography, land base data or property data. Standard data extraction techniques and integration methods will be needed to integrate spatial data types from the lidar datasets, GIS data and CAD/CAE files. Accuracy improvements in these systems may be required to adequately integrate information from multiple sources and correlate perspectives from user-generated still or video imagery. When evaluating vendor offerings, understand their participation in industry standards efforts to mitigate lock-in to proprietary tools.

Business Impact: Business uses of dense point clouds include monitoring forest canopy and vegetation growth, managing facilities, and surveying highways, railways, bridges, tunnels and waterways for maintenance needs. Municipal organizations can use lidar datasets for urban redesign initiatives. Navigation services vendors use lidar to construct 3D city models. Energy and utility organizations that own or operate transmission facilities use lidar to track vegetation growth, to rate transmission capacity and to mitigate service interruptions. Land management organizations can use lidar to ensure compliance with rental contracts and to avoid easement violations.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Exelis; GeoCue; GeoDigital; Hexagon; Network Mapping; Optech; Riegl USA; Topcon; Trimble

Recommended Reading:
"Consumerized Geography Will Change Your Utility GIS Strategy"
"Governments Must Plan for Alternative GIS Strategies Created by the Nexus of Forces"
"Solving Two Key Challenges Is Critical If Using UAVs for Aerial Photography and Mapping"
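The ranging principle behind lidar is straightforward: range follows from the round-trip time of a laser pulse, and a 3D point follows from that range plus the scanner's pointing angles. The following sketch illustrates only this geometric idea; the function names and the simple spherical-to-Cartesian conversion are illustrative assumptions, not any vendor's processing pipeline (real systems also apply GPS/INS corrections).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pulse_range(round_trip_s: float) -> float:
    """Range to target from a time-of-flight measurement (out and back)."""
    return C * round_trip_s / 2.0

def to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert a range plus scanner angles into local x, y, z coordinates."""
    horiz = range_m * math.cos(elevation_rad)
    return (horiz * math.cos(azimuth_rad),
            horiz * math.sin(azimuth_rad),
            range_m * math.sin(elevation_rad))

# A pulse returning after roughly 667 nanoseconds corresponds to
# a range of roughly 100 m.
r = pulse_range(667e-9)
x, y, z = to_point(r, azimuth_rad=0.0, elevation_rad=0.0)
```

Millions of such points per survey run, each tagged with the platform's position and attitude, are what make the point cloud datasets described above so large and why dedicated data management infrastructure is needed.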

Exploiting Sensor Grids


Analysis By: John Mahoney

Definition: Exploitation of networks of smart physical objects depends on emerging sensor grid technologies. Smart objects sense and report their status and, frequently, their location to one another or to a central node. They may also be able to interact with other entities. Exploitation of interconnected sensor grid systems is creating new business capabilities for optimizing existing activities, and for innovating in existing and new markets. Exploitation enables the four Internet of Things business opportunities: manage, monetize, extend and control.

Position and Adoption Speed Justification: This development, which is part of the emerging Internet of Things, has more than 10 years before full maturity, but is already visible through many substantive applications. The trend applies equally to the consumer domain of the Internet of Things and to its industrial domain in operational technology (OT). Some applications are already in evidence, and specific sectors are developing at different speeds. The trend is linked with and is a driver of the accelerating growth of big data, because the number of potential devices and the cumulative volume of their output will create massive data volumes and novel opportunities for analysis and correlation that will generate new insights about activities and behaviors, which will, in turn, generate monetizable revenue streams or improved social or security facilities. It is part of the emerging Internet of Things, parts of which vendors variously name the "industrial Internet" (GE) or the "Internet of Everything" (Cisco). Also, there are applications where connected sensors and other devices are starting to be organized coherently to add value: for example, sports equipment, waste management vehicles and receptacles, and medical equipment (pill bottles and diapers). To take an established example, SFpark, the city-operated parking system in San Francisco, uses sensors in parking meters that are linked to a central system to communicate the availability of parking spaces to drivers, reducing traffic congestion and pollution. Growing numbers of healthcare providers can use sensors to monitor the status and location of vulnerable individuals, such as residents in care homes for the elderly, and take action on negative status indicators, thus improving care outcomes and saving money. Many countries are installing sensor grids to monitor levels and flows of critical inland and coastal waterways to provide early warnings of possible flooding, which can save lives and reduce financial loss.

User Advice: In addition to the continued growth of sensor and grid technologies themselves, two factors will have a significant impact on the extent and speed of this development: standardization and security. The standardization of devices, communication protocols, application and device interfaces, process elements, and data structures will decrease cost, increase the range of application, and facilitate creative uses by people not needing deep technology expertise. It is notable that China mandated the development of national standards for the Internet of Things in early 2012.
On the other hand, security concerns about operational vulnerability or the privacy of corporate and personal data could, if unaddressed, retard the trend's progress. The outlook is for sensor grid systems to move beyond a simple sense-and-respond capability to an interaction capability, and then to the autonomous initiation of activity. As this happens, the domain of this trend will become multidisciplinary, and its architecture will expand across enterprise boundaries, ultimately encompassing whole business ecosystems, and probably creating new ones.

Business Impact:

Value: High

Risk: High

Technology intensity: High

Core technologies: Sensors; power scavenging; satellite telecom and GPS; multihop, always-on wireless network technologies; complex-event processing; business intelligence; and big data

Strategic policy change: High; change to aspects of industry operating models

Organization change: Moderate; some revised organizations, with new specializations, and convergence of existing disciplines, such as IT and OT

Culture change: Moderate; core values remain stable, but methods change

Process change: High; major processes may change in novel ways, and changes will frequently span existing process and enterprise boundaries

Competitive value: Process adaptability, innovation and effectiveness impact on cost, margins, pricing and value service proposition; faster regulatory compliance; and process tradability

Industry disruption: There are three broad categories of industries, split on a combination of related characteristics: the physical nature of the work product (for example, mining versus banking), the associated capital asset intensity of the business, the value added by the service and knowledge in the business models, and the relative information intensity of the industry:

Weightless (such as insurance): Initially low, becoming moderate later

Mixed (such as retail and consumer packaged goods [CPG]): High

Heavy (such as construction): Moderate

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: "Hype Cycle for Smart Grid Technologies, 2012"

"Hype Cycle for Semiconductors and Electronics Technologies, 2011"

"Internet of Things Research for CIOs"

"A Guide to Adapting IT Tools for Smart Grid OT Management Challenges"

"Falling Cost of Components Will Help Drive the Pervasiveness of the Internet of Things"
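The sense-and-report pattern at the heart of this entry, smart objects reporting status and location to a central node that acts on threshold crossings, can be sketched in a few lines. All names here are hypothetical; this is an illustrative model of the flood-warning scenario described above, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One status report from a smart object in the grid."""
    sensor_id: str
    location: tuple   # (lat, lon) reported alongside status
    level_m: float    # e.g., river water level in meters

class CentralNode:
    """Central node that collects readings and flags alert conditions."""

    def __init__(self, alert_level_m: float):
        self.alert_level_m = alert_level_m
        self.latest = {}  # most recent reading per sensor

    def report(self, reading: Reading) -> bool:
        """Store the reading; return True if it warrants a flood alert."""
        self.latest[reading.sensor_id] = reading
        return reading.level_m >= self.alert_level_m

node = CentralNode(alert_level_m=3.5)
print(node.report(Reading("gauge-01", (51.5, -0.1), 2.1)))  # False: below threshold
print(node.report(Reading("gauge-02", (51.6, -0.2), 4.0)))  # True: alert
```

Moving from this sense-and-respond sketch to interaction and autonomous initiation, as the entry forecasts, would add device-to-device messaging and closed-loop actuation on top of the same reporting core.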

ITAM Processes for OT


Analysis By: Patricia Adams; Kristian Steenstrup

Definition: The IT asset management (ITAM) process entails capturing and integrating inventory, financial and contractual data to manage technology assets throughout their life cycles. ITAM encompasses the financial management and contract terms and conditions associated with the asset. As operational technology (OT) evolves into more-complex software products and converges with IT architecture, companies will need enhanced processes to automate the often informal manual processes that may have relied on spreadsheets or whiteboards.

Position and Adoption Speed Justification: OT has slowly been moving toward commercialized and common platforms, and away from proprietary or hardwired systems, which directly affects the life cycle management of OT hardware and software. This comes about because many proprietary or hardwired systems were essentially unchanging, but with commercial platforms such as Microsoft Windows or Linux, managing the release level and the patching process is important, as are software license entitlements. This process, when integrated with tools, is adopted during business cycles that reflect the degree of emphasis that enterprises put on controlling costs and managing OT assets. No longer is OT software seen as a component of a piece of industrial equipment; rather, it is an asset to be managed on its own. With an increased focus on software audits, configuration management databases (CMDBs), business service management (BSM), managing virtualized software, developing IT service catalogues and tracking software use in the cloud, ITAM initiatives are gaining increased visibility, priority and acceptance in IT operations and procurement. ITAM processes and data can be leveraged to understand the total cost of ownership (TCO) for a product or business service.

A constraining factor for adoption will be the slower (longer) replacement cycles for OT assets, which make the dynamics less obvious. Many OT custodians will be unfamiliar with ITAM tools and concepts, and ITAM tools designed for IT assets may not be ideally suited to OT products. Because OT systems are sometimes supported by the manufacturer, complexities may emerge, along with mixed solutions that combine internal ITAM processes with externally provided support processes.

User Advice: If you have a significant investment in OT systems, then you should implement ITAM processes to manage the embedded software. This applies to intelligent hardware as well, which we see in smart metering, supervisory control and data acquisition/energy management system (SCADA/EMS) and substation automation software. Many companies embark on ITAM initiatives in response to specific problems, such as impending software audits (or shortly after an audit), CMDB or cloud implementations, virtual software sprawl or OS migrations.
Inventory and software usage tools, which feed into an ITAM repository, can help ensure software license compliance and monitor the use of installed applications. However, without ongoing visibility, companies will continue in a reactive, firefighting mode, without achieving a proactive position that diminishes the negative effect of an audit or provides the ability to see how effectively the environment is performing. Build a road map for problems that ITAM can solve, or for which ITAM can enable visibility to other areas within IT. ITAM has a strong IT operational focus, with tight linkages to IT service management, thereby creating efficiencies and effectively using software and hardware assets. ITAM data can easily identify opportunities, whether for the appropriate purchase of software licenses, the efficient use of installed software or the assurance that standards are in place to lower support costs. This is slowly becoming the case in the OT world, as software reusability and the actual cost of OT software become more apparent.

To gain value from an ITAM program, a combination of people, policies, processes and tools needs to be in place. As process maturity improves, ITAM will focus more on the financial and spending management related to managing asset investment costs, and will build integrations to project and portfolio management, as well as enterprise architecture. These integrations will enable technology rollouts and assist with strategy planning, respectively. In addition, ITAM processes and best practices are playing a role in how operational assets are being managed. Companies should plan for this evolution by coordinating the management of IT and OT software products.

Business Impact: All ITAM operations controls, processes and software tools are designed to achieve at least one of three goals: lower costs for OT operations, improve quality of service and agility, and reduce business risks. As more enterprises upgrade their OT systems, an understanding of the costs will become essential, as will security, recoverability, interoperability and version control. In addition, ensuring that the external vendor contracts are in place to deliver on the specified service levels that the business requires is a necessity. Because ITAM financial data is a feed into a CMDB or content management system, the value of ITAM will be more pronounced in organizations that are undertaking these projects.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software (Remedy); CA Technologies; HP; IBM Tivoli; ServiceNow

Recommended Reading: "Improve Security Risk Assessments in Change Management With Risk Questionnaires"

"Technology Overview for Inventory Tools"

"MarketScope for the IT Asset Management Repository"

"Characteristics of IT Asset Management Maturity"
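The core ITAM mechanic discussed in this entry, joining discovered inventory with license entitlements to stay ahead of software audits, reduces to a simple comparison. The sketch below is illustrative only: the product names and counts are invented, and a real ITAM repository would also carry contract terms, costs and life cycle dates.

```python
# Hypothetical data: licenses purchased (from contracts) vs. copies
# discovered on assets (from inventory tools feeding the ITAM repository).
entitlements = {"scada-hmi": 50, "historian": 10}
installed = {"scada-hmi": 47, "historian": 13}

def compliance_gaps(entitlements: dict, installed: dict) -> dict:
    """Return products whose installed count exceeds entitlements,
    mapped to the size of the shortfall."""
    return {
        product: count - entitlements.get(product, 0)
        for product, count in installed.items()
        if count > entitlements.get(product, 0)
    }

print(compliance_gaps(entitlements, installed))  # {'historian': 3}
```

Run proactively, this kind of check is what moves an organization out of the reactive, firefighting mode the entry warns about.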

Decisions and Recommendations as a Service


Analysis By: Alfonso Velosa; Hung LeHong

Definition: Decisions and recommendations as a service (DRaaS) is a business model where enterprises receive recommendations from a trusted provider. This model takes the concept of monetizing data one step further to provide optimized and automated decision choices based on specific information and business unit (BU) goals. DRaaS can be used on a continuous or as-required basis.

Position and Adoption Speed Justification: The types of decisions and recommendations delivered by a DRaaS provider can be a set of action choices (for example, the route to drive, or opening/closing valves to maximize flow), settings to optimize asset use (for example, industrial machine settings to maximize yield), policies to optimize processes (for example, service-level policies to maximize availability and minimize cost), or recommended prices or offers (for example, price optimization or next-best-offer selection). The person or team at the enterprise can then choose whichever DRaaS choice best meets their criteria, as part of a process to accelerate the speed of high-quality decisions. Most enterprises still operate on, and make decisions based on, historical data and the data structures into which their business models have evolved. However, several trends are emerging that are driving enterprises to consider new decision models and sources for automated decision making:

The emerging availability of big data sources as diverse as IT systems, customer interactions, partner systems and the Internet of Things (IoT).

The growing need to leverage real-time data or granular operational data.

The acceptance of outsourcing models in conjunction with the rise of cloud-based services.

Service-oriented architecture (SOA) and cloud-/API-based application development that allows for easier integration of decision services into business processes and systems.

The severe budget constraints that many organizations face, particularly government enterprises.

Enterprises are increasingly exploring the outsourcing of not just data collection, but also the data analysis and resulting prescriptive recommendations. The core benefit of DRaaS is that it reduces the potential capital and operational expenditure that enterprises would otherwise accrue to collect the data. Moreover, it allows BUs to leverage other providers' core expertise by outsourcing the data analysis to expert providers. Examples of this are:

The traffic analysis and recommendations that Bitcarrier provides.

The maintenance advice that GE provides based on its engine sensors and analysis, or the technology and service provider getting operational technology (OT) data from the client directly and sending back maintenance interval and intervention advice.

The oncology diagnosis that IBM-Wellpoint's Clinical Oncology Advisor (based on Watson) can provide to doctors.

Internet-based industries use recommendation engines and offer/ad engines that are precursors to the DRaaS model. As the building-block technologies and business models mature, we expect to see new models and opportunities as enterprises leverage DRaaS to increase their competitiveness. For further information, see "Uncover Value From the Internet of Things With the Four Fundamental Usage Scenarios."

User Advice: Senior managers should conduct experiments in 2013 to 2014 to firmly understand the business potential of the DRaaS model while limiting their risk. Development efforts should focus on two areas:

Business potential. Understand how DRaaS impacts standard business metrics, such as time-to-market improvements, new performance benchmarks and cost mitigation, and look at the potential for new business or service capabilities.

Risk mitigation. Review SLA terms and conditions, and build a deeper understanding of the risks arising from privacy policies and the loss of key enterprise intellectual property.

The decisions are only as good as the input and causal data provided to the DRaaS vendor, so make sure data sources are reliable and clean, or have the DRaaS provider get you there. Also make sure that you take steps to build trust in the recommendations and decisions supplied by the DRaaS provider. This is done by slowly introducing automated decision choices and verifying that they are improving business metrics. This model will need to be tested and analyzed in controlled, risk-mitigated settings for factors such as the soundness of the decision tree outputs or privacy considerations, before being considered for use across an entire BU. Use risk mitigation: for example, an enterprise will want to assess the legal implications of picking a choice from an outsourced set of automated decision tools instead of from a human expert. Note also that implementers will want to ensure the system provides multiple recommendations, and that people are trained in its use and limitations. This is to minimize any intimidation issues for people not wanting to risk their jobs or careers by contradicting the "expert" system. From a technology perspective, DRaaS is easier to implement in enterprises that have pursued an SOA or Web-based architecture.

Business Impact: This trend is applicable to almost all industry contexts, sizes of organizations and geographies. DRaaS can be applied to both core and secondary competencies, so it can lead to incremental improvements as well as competitive improvements. DRaaS can improve decision making that is already in place, such as asset optimization via improved maintenance cycles. It can also be used to support completely new operations and revenue areas. These new capabilities could be smart-city operations, such as improving rush hour traffic by monitoring traffic and recommending better traffic lane settings to city planners.
They could also be new retail revenue-generation opportunities, where malls track the density of shopper traffic to generate real-time sales or discounts in lower-traffic sections.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Bitcarrier; GE; IBM

Recommended Reading: "Uncover Value From the Internet of Things With the Four Fundamental Usage Scenarios"
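A defining feature of DRaaS noted above is that the provider returns a ranked set of action choices rather than a single answer, so the enterprise team can apply its own criteria before acting. The sketch below models that contract; every name, option and number is hypothetical, and a real provider would derive the benefit and risk scores from its own analytics.

```python
def recommend(options: list, risk_tolerance: float) -> list:
    """Return action choices within the caller's risk tolerance,
    ranked by expected benefit (highest first)."""
    viable = [o for o in options if o["risk"] <= risk_tolerance]
    return sorted(viable, key=lambda o: o["expected_benefit"], reverse=True)

# Hypothetical decision choices a DRaaS provider might return for a
# flow-optimization question (valve settings, maintenance deferral).
options = [
    {"action": "reroute flow via valve B", "expected_benefit": 0.12, "risk": 0.2},
    {"action": "raise line pressure 5%", "expected_benefit": 0.20, "risk": 0.6},
    {"action": "defer maintenance 30 days", "expected_benefit": 0.05, "risk": 0.1},
]

ranked = recommend(options, risk_tolerance=0.3)
print([o["action"] for o in ranked])
# ['reroute flow via valve B', 'defer maintenance 30 days']
```

Keeping multiple choices in the output, rather than auto-executing the top one, matches the entry's advice to build trust gradually and keep humans able to contradict the "expert" system.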

IT/OT Skilled Workforce


Analysis By: Kristian Steenstrup; Geoff Johnson

Definition: With the rising tide of IT/OT convergence, and rising interest in IT/OT integration, particularly in asset-intensive industries with substantial plants and equipment, there is a need for a new type of employee who can see across IT and OT. This individual needs knowledge, skills and experience in managing IT software projects, but with an understanding of engineering and operational areas and, thus, an innate understanding of the criticality and availability issues pertaining to OT.

Position and Adoption Speed Justification: To keep pace with the technology change, many companies are taking action to carry out the staffing change. A new type of job description and personal skill specification are emerging in organizations that are significantly addressing convergence, alignment or integration of IT and OT. Individuals with specific IT, OT and vertical industry experience are sometimes being hired specifically to manage or lead IT/OT integration tasks. We have seen this particularly with smart grid projects in the power sector, but also in the manufacturing, mining and gas sectors, where new, more automated projects and facilities create a demand for holistic oversight of the integration of information and processes. Increased levels of plant automation are being undertaken to create productivity gains, which in turn allow for, or can trigger, a reduction in workforce, highlighting the need for a newer, hybrid skill set. This newer hybrid worker needs to combine IT and OT savvy, just as the converged OT products include IT architecture. This personal combination of skills is still a relative rarity. The availability of resources trails the job demand brought about by IT/OT alignment in most cases. Industries with high OT intensity are still in the early stages of training (through apprenticeships or graduate recruitment) and recruiting (usually individuals with OT experience in their industry vertical and IT experience obtained elsewhere). This trend will gather momentum as IT/OT hybrid projects become more common: not just projects interfacing IT to operations, but also projects planned from the outset to use a combination of IT systems and modern, connected and converged OT systems.
This is exemplified by the power industry's smart grid, but is also repeated in areas of automated manufacturing, remote-control mining operations and automated transportation.

User Advice: Look to promote from within, or hire from outside, key individuals who can lead or partake in hybrid projects, with primary skills in OT through their own training, knowledge or experience relevant to your industry. Their secondary IT skills should be sufficient to interact with enterprise IT architects. Any ability to interact at the system, solution or device level would be a bonus. Within job descriptions, skill requirements should typically include a minimum of five years' work experience in the vertical industry, with knowledge of the business structures, work practices and architecture of operational technologies. For example, this could be data historians in utilities; process control systems in energy distribution, manufacturing and refining; and operations support systems or business support systems in the telecom industry. A parallel requirement in the person specification should include prior training, work experience and practical knowledge of IT as commonly deployed horizontally in any industry; appreciation of enterprise IT architecture; and personal aptitude for blending solutions to support IT and OT domains and to work in a combined, structured project context under a PMO.

Enterprises must prepare for IT/OT integration by sizing and staffing their organizations for the specific skills required. High-level determinations of the scope for convergence, alignment or integration of IT and OT must be made by solution architects with IT and OT capabilities. A larger number of system engineers will be needed with more-specific abilities. Practitioners with low-level and detailed experience with OT hardware, programmable logic controllers (PLCs), chipsets and coding will also be needed. Experience with software life cycle management, as practiced in IT, will be a prerequisite. Businesses evaluating IT and OT integration should consider how they assemble teams with a mix of fast-learning technology generalists (capable of bridging IT and OT domains) and OT domain experts with work experience in system engineering and adequate IT competency who can negotiate with their IT expert peers.

Business Impact: Getting the right people is always key, but at the present stage of IT and OT deployments in most industries, this joint skill set is not common. Increasingly, individuals who can bring a unique combination of IT and OT skills and knowledge to a project to initiate and steer IT/OT integration initiatives will emerge and be highly valued. HR departments in asset-intensive industries are likely to be tasked with preparing formal job descriptions and person specifications once the need becomes clear.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Recommended Reading: "Case Study: Using RACI Charts to Define OT Responsibilities"

"Best Practices in Staffing for Technology Innovation"

"New Skills for the New IT"

IT/OT Convergence in Manufacturing


Analysis By: Leif Eriksen; Simon F Jacobson

Definition: Gartner defines operational technology (OT) as the hardware and software that detects or causes a change of state through the direct monitoring and/or control of physical devices, processes and events in the enterprise. IT/OT convergence in manufacturing brings the two worlds together to deliver new levels of decision support. Putting a business context on various data streams comes from the IT world, while the use of embedded and proprietary systems to capture and distribute machine/operations data comes from the OT world.

Position and Adoption Speed Justification: New and expanding sources of data, growing needs for interorganizational and intraorganizational collaboration, and new cloud-based tools to manage data and its delivery are some of the catalysts for investments in IT/OT convergence. In addition, the expanding use of mobile and wireless technologies in manufacturing requires new approaches to technology management, and often pulls IT into the OT realm. IT data is often transactional and static, while OT data is usually time-based and dynamic. Location is increasingly being added to the equation as new wireless technologies emerge to deliver it. The primary driver to bring the two worlds together is the need to use diverse data to improve decision making across the supply chain, including manufacturing.

IT/OT convergence is a paradigm shift, and most organizations will not go through it without some disruption. It has significant organizational impacts, since it requires new skills and, for most manufacturers, the IT/OT organizational divide is quite large. Finding the right talent in the marketplace has been difficult, and years of workforce reductions have left an experience gap. In some instances, corporate IT is well-positioned to fill the vacuum. This is particularly true in cases where OT is based on core IT technologies, such as Internet Protocol (IP)-based networking and nonproprietary operating systems. However, the nature of manufacturing operations, with its diversity of installed systems and variety of data types, is different enough that it is unlikely IT can manage the convergence without support from resources that have experience in the OT world. Investments and, therefore, benefits will be driven by specific industry needs. OEMs with a service business will focus on asset data. A similar dynamic is unfolding in asset-intensive manufacturing companies from the oil and gas and chemical industries. Safety and compliance improvements will deliver benefits in many industries, but food and beverage and pharmaceuticals will be particular beneficiaries. However, the challenges presented by IT/OT convergence, both technological and organizational, will be a drag on adoption.

User Advice: Manufacturers should identify opportunities for improved decision making: ones where having the right information, in the right form and at the right time, would result in better decisions to boost productivity and mitigate risk. Potential areas for taking an IT/OT approach include risk and compliance, asset management, quality, safety, and sustainability.
Manufacturers should also take a hard look at their engineering/IT talent with respect to the ability to support next-generation architectures that blend IT and OT components.

Business Impact: The areas of asset management, quality, production, safety and sustainability will benefit from the better analysis and decision making that IT/OT convergence enables. In addition, IT/OT convergence will help enterprises manage risk, whether through better visibility or simply more effective management of OT systems.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: ABB; Cisco; GE Intelligent Platforms; Honeywell Process Solutions; Invensys Operations Management; OSIsoft; Rockwell Automation; SAP; Siemens; ThingWorx

Recommended Reading: "Align IT and OT Investments for Competitive Advantage"

"Leverage IT and OT for 21st Century Energy Management in Manufacturing"

"The Nexus of Forces Is Ready to Advance Manufacturing 2.0"

"Predicts 2013: IT and OT Alignment Has Risks and Opportunities"

"Maverick* Research: Crowdsource Your Management of Operational Risk (The Supply Chain View)"

High-Performance Message Infrastructure


Analysis By: W. Roy Schulte; Massimo Pezzini

Definition: High-performance message infrastructure consists of software or appliances that provide program-to-program communication with high quality of service (QoS), including assured delivery and security. These products use innovative design concepts and in-memory computing to support higher throughput (tens of thousands of messages per second), lower latency (less than ten microseconds for local message delivery), or more message producers (senders) and consumers (receivers) than traditional message-oriented middleware (MOM) products.

Position and Adoption Speed Justification: The hype associated with big data is spilling over from MapReduce and Hadoop, which are technologies for big data at rest, to the world of high-performance message delivery products, which are technologies for big data in motion. High-performance message technology is part of the big data movement because of its ability to handle high volumes of data with high-velocity processing and a large variety of message types. High-performance message delivery has received less notice than big database technologies, so it is still near the beginning of its journey along the Hype Cycle. However, as companies implement more big data solutions, the need to use high-performance message delivery with those solutions will grow. Moreover, the demands of real-time systems, particularly the Internet of Things, mobile devices and world-class cloud applications, will drive adoption of high-performance message delivery, even when big data database technology is not involved. The volume of data used in business, science and governmental applications is increasing because the cost of recording data, moving it across a network and computing is dropping. At the same time, enterprises need data to be delivered faster as they move toward real-time analytics and straight-through processing.
High-performance message delivery used to be an issue for only a few applications in financial trading and certain operational technology (OT) systems, such as aerospace and digital control systems. These applications have become more demanding, while many other new kinds of high-end message-based applications, including Web-based gaming systems, CRM, social computing, smart electrical grids, smart cities and other systems, have appeared. Standard wire protocols and traditional communication middleware cannot support the speed (low latency), high message volume or large numbers of message producers and consumers found in a growing number of applications. This has driven vendors to develop a new generation of high-performance message delivery products that exploit a variety of innovative design concepts. Some products use hardware assists (appliances); others use brokerless peer-to-peer architecture, parallel processing (for grids that scale out) and other optimizations. All these products are examples of in-memory technology because the data is never put out to disk except for recovery purposes (and even this is handled asynchronously to avoid delaying message delivery). Moreover, when message senders and receivers are on the same server (for example, running on different cores), the communication is done through in-memory, interprocess communications, thus avoiding the network protocol stack. Such messages can be delivered in less than a microsecond. These products are somewhat faster and better than earlier high-performance MOM products, and much faster and better than conventional MOM or standard wire protocols, such as unadorned TCP/IP and HTTP.

Some high-performance message infrastructure products originated in the OT world (designed for embedded systems), while others were initially designed for financial or telco applications. However, virtually all can be used for OT, financial and other applications.

User Advice:

- Architects and middleware experts should become familiar with the capabilities and nonfunctional characteristics of emerging high-performance message delivery products, including software- and appliance-based products.
- Companies should expect to support a variety of standard protocols, MOM and high-performance message products to handle the variety of communication requirements in their applications; one size does not fit all.
- Architects, business analysts and development managers should fully evaluate the communication requirements of every new application, paying particular attention to potential requirements for throughput, latency and QoS. They should not assume that conventional MOM or industry-standard wire protocols will be able to handle all applications.
- Architects and middleware experts should select high-performance message infrastructure primarily by the quality of the product and the vendor, with secondary consideration of whether the product supports industry standards such as Java Message Service (JMS), Data Distribution Service (DDS) and Advanced Message Queuing Protocol (AMQP).

Business Impact: The availability of good, commercial high-performance message delivery products makes it practical to implement demanding applications without having to write custom communication protocols into the application:

- High-performance message infrastructure can be used for "fan-in," that is, collecting data from many distributed message producers, such as users on mobile devices or stationary workstations, embedded devices and sensors.
- This infrastructure also helps load big databases and move data from one big database to another quickly (including to or from in-memory databases).
- This infrastructure can be used to distribute ("fan out") the results of analytics, betting odds, or gaming or other data to many consumers, which may be people, devices, systems or distributed databases.
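To make the fan-in/fan-out patterns above concrete, here is a minimal, illustrative in-memory publish/subscribe sketch. The Broker class, topic names and readings are hypothetical, and the sketch deliberately omits the assured delivery, security and microsecond-latency engineering that distinguish commercial products; it shows only the topic-routing pattern.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory pub/sub broker (illustration only).

    Commercial high-performance message infrastructure adds assured
    delivery, security and latency engineering; this sketch shows
    only how topics route messages from producers to consumers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan-out: one published message reaches every subscriber.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []

# Two consumers subscribe to the same topic (fan-out on delivery)...
broker.subscribe("sensor/temp", lambda m: received.append(("dashboard", m)))
broker.subscribe("sensor/temp", lambda m: received.append(("historian", m)))

# ...while any number of producers can publish to it (fan-in).
for reading in (21.5, 21.7):
    broker.publish("sensor/temp", reading)

print(received)
```

In a real deployment, the callbacks would be remote consumers reached over the network, and the broker (or a brokerless peer-to-peer layer) would persist undelivered messages asynchronously rather than holding them in a Python list.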

The new generation of commercial, high-performance message delivery products can support most or all of today's applications, as well as most of those likely to emerge during the current five-year planning horizon (2013 to 2018). Companies that plan ahead and build a high-performance communication infrastructure before the demand occurs will enjoy relatively smooth operations. Those that underestimate their message workloads or wait too long to build a sound infrastructure will endure slow performance, unhappy customers, lost messages and service interruptions. However, companies that do not have high-volume or low-latency applications will not need this technology.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: IBM; Informatica; PrismTech; Push Technology; Red Hat; RTI; Software AG; Solace Systems; Tervela; Tibco Software; Weswit

Recommended Reading:
"Adopt Web Streaming to Enable Messaging for Mobile and Browser-Based Applications"
"Cool Vendors in IT/OT Alignment and Integration, 2013"
"Use High-Performance Infrastructure to Support Big Data in Motion"
"What You Need to Know About Publish-and-Subscribe"

Data Science
Analysis By: Douglas Laney; Roxane Edjlali

Definition: Data science is the business capability and associated discipline to model, create and apply sophisticated data analysis against disparate, complex, voluminous and/or high-velocity information assets as a means to improve decision making, operational performance, business innovation or marketplace insights.

Position and Adoption Speed Justification: Data science is a discipline that spans data preparation, business modeling and analytic modeling. Hard skills include statistics, data visualization, data mining, machine learning, and database and computer programming. Soft skills that organizations frequently desire in their data scientists include communication, collaboration, leadership and a passion for data (see "Emerging Role of the Data Scientist and the Art of Data Science"). This fast-emerging capability is often also associated with big data, which Gartner defines as "information assets with volumes, velocities and/or variety requiring innovative forms of information processing for enhanced insight discovery, decision making and process automation." Unlike other business capabilities, such as CRM, data science does not describe the vector through which the capability delivers strategic benefit. Data science is still an emerging discipline whose practices and ROI benefits are not yet established. The increasing availability of big data, combined with the arrival of new analytics specialists called data scientists, indicates a confluence of resources and skills that can help organizations achieve transformative advances in business performance and innovation. Today, data gathering comes in many forms: from our transactional systems and social collaborative systems, but also in video and audio, and from outside the enterprise in the form of complex electronic maps, syndicated data and vast government datasets.
Beyond the usual operational data, organizations are moving toward a world of instrumentation in which sensors collect continuous detailed data on every manner of device and process. Even low-level computer system operating data logs are finding new uses (for example, in the forensic examination of record update time stamps, user behavior modeling, or preventative maintenance). Modern analytics tools, high-level script programming languages, powerful algorithms, simple visualization tools, techniques such as video analytics, and cloud sharing of datasets, when combined, have demonstrated the potential to transform almost any organization in any sector or geography. Information-centric companies such as Google, Amazon and Facebook base far more of their decisions on complex ad hoc analysis of data. Data scientists need to be aware that in the realm of decision making the data being used is continually morphing and evolving, so that decisions made possible today may be more richly informed than those made previously (see "Toolkit: Role Description: Data Scientist").

Like many similar areas, data science is not entirely new; it has historic precursors in specialized capabilities, such as yield and revenue management, actuarial science, algorithmic trading, and informatics in various biosciences. In many ways, it extends the scope of existing business analytics with new and innovative approaches for optimizing business performance. However, the range of data types, the scale and detail of data becoming available, and the breadth of business use mark out a completely new level of capability; also, many of the tools, techniques and skills used to support data science are new. The best people are needed to make it work, and they must operate within a culture that thinks differently about the way decisions are made. The traditional combination of reactive, requirements-based data warehousing and business intelligence (BI) is quickly giving way to a more proactive, opportunistic and experimental mode of execution that combines big data and data science.
The term "data science" alone hints at the inclination to follow the scientific method: one of hypothesis, problem modeling, data gathering, data analysis, conclusion and retesting. However, since the term speaks more to a spectrum of analytic techniques than an overall purpose-built capability, we caution management to embrace it with business objectives in mind (see "No Data Scientist is an Island in the Ocean of Big Data"). The data scientist role is critical for organizations looking to extract insight from information assets for big data initiatives, and requires a broad combination of skills that may be fulfilled better as a team, for example:

- Collaboration and teamwork are required for working with business stakeholders to understand business issues.
- Analytical and decision modeling skills are required for discovering relationships within data and detecting patterns.
- Data management skills are required to build the relevant dataset used for the analysis.

This role is new and the skills are scarce, leading to a projected shortfall of several hundred thousand qualified professionals through the end of the decade. The shortage is so pronounced, and demand so high, that more than 50% of organizations are now attempting to build these skills from within rather than pay extreme premiums to source them externally. While universities are scrambling to come to the rescue, it will be a few years before they produce data scientists in any abundance.
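The scientific method the discipline follows (hypothesis, data gathering, analysis, conclusion, retesting) can be sketched in a few lines. The scenario and numbers below are purely illustrative, using only the Python standard library; a permutation test is one simple, assumption-light technique a data scientist might apply.

```python
import random
import statistics

def permutation_test(a, b, n_iter=2000, seed=42):
    """Toy permutation test: could the observed difference in means
    between groups a and b plausibly arise by chance?

    Returns the fraction of random label shuffles whose mean difference
    is at least as extreme as the observed one (an approximate p-value)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical experiment: page-load times before and after a site change.
before = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
after = [11.2, 11.5, 11.0, 11.4, 11.3, 11.1]
p = permutation_test(before, after)
print(round(p, 3))  # small p-value: the improvement is unlikely to be chance
```

The value of the workflow is the loop, not the single test: a data scientist would go back, gather more data and retest before recommending a decision.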

The data management side of data science is also giving rise to a role that is becoming more prevalent: that of the chief data officer (CDO; see "CEO Advisory: Chief Data Officers Are Foresight, Not Fad"). As information becomes an acknowledged asset, rather than just being talked about as one, CDOs will emerge as the ultimate stewards of these assets. CIOs regularly contend that they are too consumed with technology-related or functionality-enabling issues to give sufficient attention to the need for improved curation, management and governance of information. The role of the CDO is to maximize the value and use of data across the enterprise and to manage the associated risk. CDOs will often focus on the places and ways in which certain information assets will have the most impact on the organization. And, just as other key corporate resources (such as material assets, financial assets and human capital) have independent executive oversight and organizations, information assets are beginning to acquire the same. As such, CIOs, CDOs and COOs (or line-of-business leaders) are starting to form a new and exciting management triumvirate.

User Advice: Catalog and consider the range of data sources available within the organization and the greater ecosystem of information assets available. Hypothesize and experiment, looking to other industries for astounding ideas to adopt and adapt. Create sandboxes for data scientists to "play" in, and don't conflate your data warehouse or BI competency center with the data science function. Then, confirm the relative economic value of findings and the organization's ability to leverage results (technically and culturally): Where could they have the most impact, and is the organization ready to enact them? Recognize that data scientists are different from statisticians or BI analysts in terms of both skill set and goals. But also recognize that they are in short supply, so incubating skills internally or paying handsomely for top talent are the only options.
Data science teaming arrangements that have the requisite skills in aggregate can work, but they are not the same as individuals with end-to-end abilities. And, because leadership is noted as one of the key inhibitors to benefiting from big data, strive to develop deeper data literacy throughout your management ranks and acceptance of the transformative potential of data science.

Business Impact: Businesses that are open to leveraging new data sources and analytic techniques can achieve considerable competitive leaps in operational or strategic performance over those relying on traditional query and reporting environments. Advances in data science have yielded significant innovations in sales and marketing, operational and financial performance, compliance and risk management, and new product and service innovation, and have even spawned capabilities for directly or indirectly productizing data itself. While not every organization should expect quantum advances, incremental gains larger than before are to be expected. Shifting investments and goals from hindsight-oriented descriptive analytics to more insight-oriented diagnostic analytics and foresight-oriented predictive and prescriptive analytics will hasten success.

- Risk: Moderate. Failure to evolve from basic BI is evident in many current business results.
- Technology intensity: High. IT core concept origins; unusual additional IT investment.
- Strategic policy change: Moderate. Expansion of analytics and institution of fact-based execution.
- Organization change: Moderate. Some new specialists and perhaps a new support function needed.

- Culture change: Substantial. Belief that data (as much as experience) should drive decisions can be very hard to instill.
- Process change: Low. Some decision processes will change, but core production and administration processes will not.
- Competitive value: New industries, radical product and service innovation, cost reduction and quality improvement, employee and compliance risk reduction.
- Industry disruption: Three broad categories of industries (as shown below), split on a combination of related characteristics: the physical nature of the work product (for example, mining versus banking), the associated capital asset intensity of the business, the service and knowledge value-add in the business models, and the relative information intensity of the industry.
  - Weightless (for example, insurance): High. New opportunities to transform enterprise performance.
  - Mixed (for example, retail and consumer product goods): Moderate. Long-term competitive rankings changed.
  - Heavy (for example, construction): Low. Visible benefits to a minority of players.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging

Recommended Reading:
"Emerging Role of the Data Scientist and the Art of Data Science"
"How Data Science Will Drive Digital Marketing Toward Influence Engineering"
"Big Data Strategy Components: IT Essentials"
"Predicts 2013: Information Innovation"
"Toolkit: Role Description: Data Scientist"

IT/OT Alignment
Analysis By: Kristian Steenstrup

Definition: Operational technology (OT) is hardware and software that detects or causes a change through the direct monitoring or control of physical devices, processes and events in the enterprise. IT/OT alignment is an approach for dealing with the internal changes within an organization in response to the ongoing and increasing IT/OT convergence opportunities and occurrences.

Position and Adoption Speed Justification: As the nature of OT systems starts to change, organizations need to respond by aligning the standards, policies, tools, processes and staff between IT and the parts of the business that traditionally are most involved in OT product support. Alignment is about cultural and, in some cases, organizational change. The alignment must be between the traditional custodians of the OT systems and other groups that deal with technology, usually the IT groups. Because of the entrenched positions and practices associated with OT, alignment takes time and follows the corporate realization of the impact coming from IT/OT convergence. It also takes time to align policies and procedures, as well as, potentially, staff within the organization.

User Advice: Once there is a realization that convergence can have a positive or a negative impact, there should be a plan across both the IT and the operational parts of the company to examine its management process for software and determine how much of what is done in IT is applicable to OT, and how to get the two aligned. This may also include aligning hardware platform and architecture choices to ensure compatibility between IT and OT systems, and arriving at common standards for software and hardware. An enterprise architecture plan that embraces both IT and OT will be a key element of this. Additionally, security should be scrutinized. OT has traditionally relied on "security through obscurity" and the mistaken belief that firewalls and network separation provided adequate security. Companies should regard security in a holistic way, so that there are no weak links in the security of the technology. Alignment, not duplication, is key, because not all IT security tools, procedures and policies can be enacted in the OT world. A valuable tool to help manage this transition and to map out responsibilities for different parts of an OT environment is a responsible, accountable, consulted and informed (RACI) chart.

Business Impact: The impact of IT/OT alignment falls across five categories:

- Cost avoidance by not duplicating licensing and support for common software components
- Cost avoidance by consolidating and co-locating servers and hardware in a common data center
- Risk avoidance by building up OT security and OT patching and upgrade processes
- Information and process integration by having more interoperable systems sharing data
- Agility by being able to start new projects and react to change in a common way
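The RACI chart recommended in the User Advice above can start as a simple table mapping shared IT/OT activities to roles. The following sketch uses entirely hypothetical activities and parties; the point is only that the mapping is explicit and queryable.

```python
# A minimal, hypothetical RACI chart for shared IT/OT responsibilities.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "OT security patching":    {"IT": "C", "Engineering": "R", "CIO": "A", "Plant manager": "I"},
    "Network architecture":    {"IT": "R", "Engineering": "C", "CIO": "A", "Plant manager": "I"},
    "Control system upgrades": {"IT": "C", "Engineering": "R", "CIO": "I", "Plant manager": "A"},
}

def parties_with_role(activity, role_code):
    """Return the parties holding a given RACI role for an activity."""
    return sorted(p for p, code in RACI[activity].items() if code == role_code)

print(parties_with_role("OT security patching", "R"))  # ['Engineering']
```

The value is not the code but the discipline: every activity in the converged environment has exactly one accountable party agreed between the IT and OT sides.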

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent

Recommended Reading:
"Convergence of Digital Technologies Needs Expanded and Strengthened IT Governance"
"Five Critical Success Factors for IT and OT Convergence"
"The CIO's Role in Managing the Expanding Universe of Digital Technologies"
"Agenda Overview for Operational Technology Alignment With IT, 2013"

"Establish Common Enterprise Semantics for IT/OT Integration"

IT/OT Impact on EA
Analysis By: R. Scott Bittler

Definition: Operational technology (OT) is hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes and events. It is typically used in enterprise assets (such as utility grids, hospital equipment and industrial machines) and, when on a network, is a subset of the Internet of Things. IT/OT impact on EA refers to the specific activities and practices that EA practitioners should apply to enable alignment and integration of IT and OT.

Position and Adoption Speed Justification: Historically, the IT and OT "worlds" in organizations have been almost entirely separate, and they remain so even today in most organizations. The journey to align and integrate IT and OT began in the mid-1980s, motivated by leveraging IT best practices to reduce the security and management risks associated with using IT in OT systems, and by leveraging data created by OT systems to improve the business. However, the number of organizations practicing such integration was small and has remained so. Organizations need a mechanism to get IT and operations/engineering groups to work together, yet few are using the EA discipline to help address IT/OT alignment and integration. However, as EA programs increasingly take a more holistic view of their scope to lead strategic transformation, they have the opportunity to add value as a bridge builder between the IT and OT worlds, as some leading-edge organizations have been quietly doing for years. The electric utility industry is causing a strong rise in the hype concerning IT/OT convergence and the role of EA. Specifically, the strong push toward implementing smart grid technology in the power distribution system is inherently an example of IT/OT integration, which is receiving high visibility around the world. Another visible driving force is the increasing use of computer technology and networking in vehicle systems.
In addition, the larger trend toward the Internet of Things is driving increased hype. Entirely new business models (and profits) are enabled by this practice. For example, a bearing manufacturer transformed from selling bearings to paper mills to selling a "machine uptime" service, using IT/OT integration for real-time machine vibration monitoring to proactively anticipate the need to replace bearings. The proportion (that is, market penetration) of EA programs seriously addressing IT/OT alignment and integration remains relatively small, but recognition of and interest in this factor are rising. Therefore, for this Hype Cycle, the "dot" for IT/OT impact on EA appears a few steps up toward the Peak of Inflated Expectations.

User Advice: Asset-intensive companies with a substantial OT installed base are strongly advised to use the EA discipline to build the bridge between IT and OT, to the benefit of both and, ultimately, of the business. More specifically, architects should:

- Determine the business outcomes being sought that IT/OT alignment and integration can enable. Examples may include reduced OT cost, improved equipment uptime and reliability, improved production process performance and/or product quality, and improved safety.
- Design a communications strategy/plan to ensure that all new stakeholders understand and embrace the benefits of an EA-driven IT/OT integration process.
- Understand the governance regarding OT investments, and the opportunities to extend the benefits of initiatives toward business outcomes by adding IT/OT integration.
- Embark on an IT/OT integration journey, using EA as the overarching process and framework. Begin by making the business case for inclusion of the OT staff, typically engineering, in the EA process and its appropriate teams. Upon gaining agreement, amend the EA charter to include OT within the scope of the EA program.
- Add OT to the enterprise context as a factor with its own trends, strategy, requirements and principles that need to be integrated to derive maximum strategic benefit.
- Position and use EA-driven IT/OT integration as an initial, pragmatic step toward addressing the larger Internet of Things trend and its broader benefits for your enterprise.
- Be careful to ensure that regulatory and compliance issues particular to the OT environment, and that could be impacted by IT/OT integration, are addressed (for example, U.S. FDA requirements for medical devices and U.S. OSHA requirements for safety).
- Describe the impact of OT on the enterprise business architecture, the enterprise information architecture, the enterprise technical architecture and the enterprise solution architecture.

Business Impact: Enterprise architects and the EA discipline can and should play a central role in responding to IT/OT convergence and creating an alignment and integration strategy. The synergies that result from this convergence turn into business benefits that include increased leverage of information assets, operational efficiency improvements enabled by better use of factory/plant data, new business models made possible by the Internet of Things, the reduction of business downtime risk, and the reduction of security risk due to network vulnerability.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging

Recommended Reading:
"Architecting the Convergence of OT and IT"
"Growth of IT/OT Convergence Creates Risks of Enterprise Failure"
"Agenda Overview for Operational Technology Alignment With IT, 2013"
"Bridging the Divided Silos of Enterprise Technology and Information: A Step Toward Intelligent Business"

"How CIOs Should Address Accountability and Responsibility for IT and OT" "Predicts 2013: IT and OT Alignment Has Risks and Opportunities" "IT/OT Convergence and Implications" "IT/OT: Bringing Operational Data to the Enterprise" "Securing Operational Technology in the Smart Grid and Other OT-Centric Enterprises"

IT/OT Integration
Analysis By: Kristian Steenstrup

Definition: Operational technology (OT) is hardware and software that detects or causes a change through the direct monitoring or control of physical devices, processes and events in the enterprise. IT/OT integration is the end state sought by organizations (most commonly, asset-intensive organizations) where, instead of a separation of IT and OT as technology areas with different areas of authority and responsibility, there is integrated process and information flow.

Position and Adoption Speed Justification: Truly integrated approaches to IT and OT are difficult to achieve, because there is a deeply rooted tradition in the OT world in which engineers and operations staff have historically been the "owner operators" of OT. As IT and OT converge, there are benefits to an organization in aligning how it manages IT and OT. There will be clear opportunities and demonstrable benefits to integrating the systems in a way that information can be shared and process flows are continuous, with no arbitrary interruptions. This brings the benefit of a more agile and more responsive organization. The data from OT systems will be the fuel for better decision making in areas such as operations (adjusting and responding to production events) and plant maintenance. Few organizations have a mature, systemic approach to IT/OT integration. For most, there may be touchpoints, but IT and OT are often managed by separate groups with different approaches to technology and different vendors they relate to. Significant barriers exist from entrenched positions and attitudes on the IT and engineering sides of the company. In some industries, such as utilities, we are seeing reverse integration, in the sense that OT systems now seek access to and integration with commercial (IT) systems, such as demand response and billing, to improve process performance in areas such as distribution network management. In short, the flow of data can be both ways.
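As an illustration of OT condition data feeding IT decision making, here is a minimal, hypothetical sketch; the asset names, the vibration indicator and the alarm threshold are invented for the example. In a real IT/OT integration, the readings would stream from a data historian or SCADA system, and the flagged assets would raise work orders in the enterprise asset management (IT) system.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str
    vibration_mm_s: float  # vibration velocity, a common condition indicator

def maintenance_alerts(readings, threshold=7.1):
    """Flag assets whose vibration exceeds a (hypothetical) alarm threshold,
    as a stand-in for the OT-to-IT condition-monitoring flow described above."""
    return sorted({r.asset_id for r in readings if r.vibration_mm_s > threshold})

readings = [
    Reading("pump-01", 2.3),
    Reading("pump-02", 9.8),  # above threshold: condition is degrading
    Reading("fan-07", 7.4),   # above threshold
    Reading("pump-01", 3.1),
]
print(maintenance_alerts(readings))  # ['fan-07', 'pump-02']
```

Reverse integration would add the opposite flow: the IT side pushing, for example, demand-response signals back down to the control systems.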
User Advice: First, understand the IT/OT convergence in your industry and company. Then, look to a program of alignment. After that, you can progressively move to a more integrated approach to technology, regardless of whether it is IT or OT. This integration should extend at least to the exchange of data and the maintenance of the platforms, with particular attention to communications, security and enterprise architecture. In some organizations, this will result in a fully integrated staff who no longer delineate between IT and OT.

Business Impact: The benefit for asset-intensive businesses will be an organization much more capable of managing information and processes. For example, a company might implement a basis for better reliability and maintenance strategies through more direct access to condition and usage data for plants and equipment. Operational intelligence will provide better production management, control and responses to events in the supply chain and production processes. For utilities, this is closely connected to the smart grid. Enhanced capabilities will be in billing (integration with smart metering), distribution management, power trading and generation scheduling.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Eurotech; RTI

Recommended Reading:
"Convergence of Digital Technologies Needs Expanded and Strengthened IT Governance"
"Five Critical Success Factors for IT and OT Convergence"
"The CIO's Role in Managing the Expanding Universe of Digital Technologies"
"Agenda Overview for Operational Technology Alignment With IT, 2013"
"Establish Common Enterprise Semantics for IT/OT Integration"

At the Peak
IT/OT Convergence in Life Sciences
Analysis By: Simon F Jacobson; Leif Eriksen

Definition: Gartner defines operational technology (OT) as the hardware and software that detects or causes a change of state through the direct monitoring and/or control of physical devices, processes and events in the enterprise. IT and OT convergence in manufacturing brings the two worlds together to deliver new levels of decision support. Putting a business context on various data streams comes from the IT world, while the use of embedded and proprietary systems to capture and distribute machine/operations data comes from the OT world.

Position and Adoption Speed Justification: Although OT (that is, control systems, data historians and other production equipment) has, in itself, been around for a while, it is only recently that the integration of these systems with transactional IT applications and broader IT governance has come to bear. Deeper traceability and serialization mandates for the industry are driving life science organizations to use this convergence as an opportunity to redefine how they manage and control production data through networked programmable logic controllers (PLCs) and supervisory control and data acquisition (SCADA) systems. It is also an opportunity to collate and analyze process data to support process analytical technology (PAT) and manufacturing execution systems (MESs), ultimately creating a solid data platform for regulatory compliance. Uptake will be slower in life sciences than in other industries, due to the impact of validation and regulatory requirements, but, with the U.S. Food and Drug Administration (FDA) actively pushing a risk-based approach to 21st century manufacturing practices, it will not be too long before we see multisystem benefits and accelerated momentum.

User Advice: Life science manufacturers should consider IT/OT convergence transformational, but the cost of integrating IT and OT systems may be difficult to justify across a whole manufacturing facility. There are not only financial constraints, but also architectural, regulatory, security and interdepartmental gaps that need to be addressed. Life science manufacturers should identify products or lines that suffer continual process-related issues and implement a proof of concept in a controlled manner, especially where the cost of validating the data for quality use may be prohibitive. However, being able to justify the release of a single, high-value batch that would previously have been discarded provides good justification for further implementation. Bear in mind that, while this convergence creates an opportunity for a whole new level of manufacturing intelligence data that will transform the way organizations monitor and measure performance with real-time process data at the enterprise level, managing the increased volume of process information ultimately requires advanced enterprise information management models and introduces a need for comprehensive IT/OT governance. This may be a challenge for organizations where OT is managed at a local level, but it is achievable with the correct level of IT ownership and support.

Business Impact: IT and OT convergence in life science manufacturing stands to deliver new levels of decision support.
Putting a business context on various data streams comes from the IT world, while the use of embedded and proprietary systems to capture and distribute machine/operations data comes from the OT world. Convergence enables organizations to achieve greater integration between operations and other areas of the organization, creating the agility to change and regulation compliance, as well as accelerating batch release cycle times. Benefit Rating: Transformational Market Penetration: 1% to 5% of target audience Maturity: Adolescent Sample Vendors: Emerson Process Management; Honeywell Process Solutions; Rockwell Automation; SAP; Siemens; ThingWorx; Werum Software & Systems Recommended Reading: "Operational Technology Convergence With IT: Definitions" "The Nexus of Forces Is Ready to Advance Manufacturing 2.0" "Predicts 2013: IT and OT Alignment Has Risks and Opportunities"

UAS for Business


Analysis By: Randy Rhodes; Jeff Vining


Definition: Unmanned aircraft (or aerial) systems (UASs) are unmanned aircraft and all of their associated support equipment, which includes control stations, data links, telemetry, communications, functional payloads and navigation equipment necessary for operation.

Position and Adoption Speed Justification: While UASs are a technology well proven in defense and military-sector applications, business applications are just now emerging. Many service providers that have used helicopters and conventional fixed-wing aircraft for business applications are now investigating the low cost and efficiency of UASs. (These may be remotely piloted or may operate autonomously according to preprogrammed flight instructions.) UASs may be launched from the ground, sea or air. Most consumers are familiar with the term "unmanned aerial vehicle" (UAV); however, most industry regulators have adopted "UAS" as the preferred term because it encompasses all aspects of deploying these systems, not just the flight platform alone.

Many UAS vendors now offer potential solutions for business as a result of several converging technology trends. Camera and video imaging systems have dramatically reduced weight and price while growing in functionality. Chip-based GPS capability has enabled reasonably precise registration of flight paths. Onboard flash card memory storage is extensive and affordable. The availability of low-cost, onboard micro-electromechanical system (MEMS) chip-based gyroscopes enables calculation of the exterior orientation of onboard cameras. Research in autonomous robotics and artificial intelligence is improving flight stability and programmability. Other innovations include the mathematical algorithms necessary to create dense 3D digital surface models from aerial photography. Finally, cloud computing is enabling the scalable computational power needed for image processing.

Today, uploaded image files can be processed to create a continuous "stitched" orthophoto (coordinate-registered) file, to produce an associated digital surface model and, for some applications, to identify specific ground features. Orthographically referenced video imagery can be technically achieved with UAS deployment.

The primary challenge for UAS advancement is uncertainty around airspace authority jurisdiction. National aviation authorities have been focusing on issues of platform categorization, operator qualification and flight limitations. In most jurisdictions internationally, there is no means to obtain formal authorization for commercial UAS operations in national airspace. Some countries (Japan, Australia, Uruguay, Argentina and Brazil) are well ahead of the U.S. However, interested parties may apply for an experimental certificate for the purposes of research, development, market surveys and crew training. In the U.S., a federal law mandates that the Federal Aviation Administration open national airspace to drones by 2015. In response, some states have enacted privacy laws restricting UAS use.

Active research for utility applications has been reported recently. The Electric Power Research Institute (EPRI) in the U.S. conducted test flights of prototype UAS models with video and other sensing payloads. Tests were limited to two vendor rotary-wing models weighing less than 55 pounds, operated at altitudes of less than 100 feet. EPRI concluded these systems could be effectively deployed to assess damage to electric transmission and distribution (T&D) systems following severe storms, and now plans to expand testing with more potential applications and more vendor models.
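As an aside on the imaging arithmetic mentioned above, the achievable resolution of UAS imagery follows from simple pinhole camera geometry. The sketch below is illustrative only; the camera parameters are hypothetical and not tied to any vendor's platform:

```python
# Illustrative photogrammetry arithmetic for UAS flight planning.
# All camera parameters below are hypothetical examples.

def ground_sample_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Ground distance covered by one pixel (meters/pixel) for a nadir photo."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def footprint_width(altitude_m, focal_length_mm, sensor_width_mm):
    """Ground width (meters) covered by a single nadir image."""
    return altitude_m * sensor_width_mm / focal_length_mm

# Example: a small UAS at 100 m altitude, 5 mm focal length,
# 6.2 mm sensor width, 4,000-pixel-wide images.
gsd = ground_sample_distance(100, 5.0, 6.2, 4000)
print(f"GSD: {gsd * 100:.1f} cm/pixel")                    # 3.1 cm/pixel
print(f"Swath: {footprint_width(100, 5.0, 6.2):.0f} m")    # 124 m
```

The same similar-triangles relationship, run in reverse, tells a flight planner what altitude is needed to hit a target resolution.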


Military and defense UAS infrastructure can distribute sensor data collected during flights across various data centers, and can integrate data from collections of UAVs deployed by multiple organizations. Despite the advanced maturity within the defense sector, this Hype Cycle entry is focused on business applications, and so is positioned within the Peak of Inflated Expectations. Interest is growing rapidly, but business applications are still being defined. Widespread commercial adoption will depend on how well business and IT leaders can assess overall costs, define the service levels required for operational use cases, and judge the overall impact of UAS usage on their organizational risk profiles.

User Advice: CIOs must monitor governmental aviation authorities and regional legislation to assess risk and prepare for business applications. CIOs should provide guidance for managing the data and imagery generated by UASs for business applications. Implementations may require high-performance computing and improved security measures. Data acquired from multiple sources can create new intellectual property, or include sensitive data of a personal, legal or jurisdictional nature. UAS-generated data, much like lidar data, should be registered to a geographical coordinate system within an enterprise GIS to have maximum usefulness.

Business users procuring services should recognize the importance of first establishing a relationship with a governmental or research agency to obtain flight certification. Preliminary flight planning, as well as postflight data processing, must be carefully specified. IT organizations should require users to balance the prospect of UAS usage against the availability of other commercial alternatives, such as high-resolution "intelligent imagery" provided as a data service.
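As a minimal illustration of the georegistration advice above, the sketch below projects WGS 84 longitude/latitude pairs into Web Mercator (EPSG:3857), chosen here purely as an example target. A real enterprise GIS workflow would use a full projection library (such as PROJ) and whatever coordinate reference system the GIS mandates:

```python
import math

# Spherical Web Mercator (EPSG:3857) forward projection, shown only to
# illustrate what "registering to a geographical coordinate system" means.
EARTH_RADIUS_M = 6378137.0  # WGS 84 semi-major axis used by Web Mercator

def to_web_mercator(lon_deg, lat_deg):
    """Project a WGS 84 longitude/latitude pair to Web Mercator meters."""
    x = math.radians(lon_deg) * EARTH_RADIUS_M
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * EARTH_RADIUS_M
    return x, y

# Hypothetical example point (downtown Los Angeles).
x, y = to_web_mercator(-118.2437, 34.0522)
print(f"{x:.1f} m E, {y:.1f} m N")
```

Once every image footprint and sensor reading carries coordinates in one shared system, layers from different flights and providers can be overlaid and analyzed together.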
Where needed, customer service organizations must be included in deployment of this technology so that enterprises can mitigate privacy concerns and avoid public relations issues by notifying concerned parties of flyover plans.

Business Impact:

Electric utility organizations can use UASs for regular planned inspections of transmission towers and insulators; checking field conditions where tree growth intrudes on line clearances; monitoring protected species (such as salmon migrating past hydroelectric dams); assessing damage to lines and poles following severe ice or wind storms; and performing aerial assessments of easement violations along transmission corridors.

Emergency management organizations can use UASs to survey damage during major disasters, to identify access paths during earthquakes, and to assist with survivor rescue. Forest management organizations can use UAS-based reconnaissance during forest fires, and security agencies can use UASs for surveillance of security zones.

Mining and civil engineering companies can deploy UASs for professional surveys before commencing development projects. UAS data can be used to accurately estimate volumetric computations for cut-and-fill operations and to provide a visual record of day-to-day changes.

Agricultural organizations can deploy UASs for aerial imagery and more-precise distribution of pesticides. (Unmanned helicopters have been used for crop dusting of rice paddies in Japan since the early 1990s.) Real estate organizations can use UASs for compelling aerial video imagery of commercial and high-end residential real estate.

Sports products and recreational companies will find UAS-generated footage far less expensive than helicopter-based alternatives.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Adaptive Flight; Aeryon Labs; Hawkeye; Pix4D; senseFly; Trimble

Recommended Reading: "Get Ahead of the Curve on the Question of Encryption for UAV Data"

"As UAVs Proliferate, CIOs Must Monitor Standards, Policies and Regulations to Integrate Them Into Commercial Airspace"

"Solving Two Key Challenges Is Critical If Using UAVs for Aerial Photography and Mapping"

"Top 10 Technology Trends Impacting the Energy and Utility Industry in 2013"

Networking IT and OT
Analysis By: Geoff Johnson

Definition: Networking IT and operational technologies (OT) refers to the practice of facilitating wide use of TCP/IP (Internet Protocol) networking in both IT and OT domains to bring OT into IT environments and allow OT to benefit from developments in IT networking. OT is being integrated into IT environments to improve the agility of businesses by using familiar IT networking technologies (particularly TCP/IP or IP networking). It includes the OT use of related networking domains, such as the Internet of Things and machine-to-machine networking.

Position and Adoption Speed Justification: For good reasons, OT has existed alone, using its own technologies and infrastructure: It had to meet mission-critical demands and be highly available, resilient and reliable. Enterprises in many industries, including utilities, energy, media, transport and government, have concluded there is increasing value in internetworking OT into IT. IT and OT network managers face issues about various approaches to IT and OT interworking, where the OT is stand-alone, converged or integrated. The business logic driving the fractured, uneven, but general trend toward IT/OT integration relying on networking is based on innovating to improve agility and competitiveness, especially as OT suppliers move toward increasing use of IP networking and common use of microprocessors and Web browsers to monitor and control devices and assets.

OT occurs in some form in all asset-intensive industries, and it is dominant in industries like electricity and energy utilities, process control or manufacturing, transport and municipal operations, where OT is more likely to be based on distributed control systems (DCSs) and supervisory control and data acquisition (SCADA). Integration of SCADA or DCS real-time data into enterprise business processes and reporting is a common driver for considering IT/OT integration. There are clear generations of SCADA development.
Originally, monolithic SCADA was not networked and stood alone. A second generation carried distributed SCADA in real time on LANs, and a third generation networked SCADA using open standards and protocols across a WAN. An emerging fourth generation is occurring as converged SCADA, DCS, or output from data historians or process controllers is expected to work with IT to create corporate performance dashboards, feed business analytics, and support real-time optimization and business reporting (operations intelligence). In the future, expect a fifth-generation SCADA environment to be delivered as hosted software as a service via Web portals, virtual private networks, firewalls and Secure Sockets Layer, just as networked IT applications are today. An example is the IEC 61850 standard, which uses Web browsers to displace electromechanical and programmable logic controller equipment in the operation of electricity substations.

This transition will not occur easily, because industries using substantial OT have heavy charter, regulatory, compliance and business responsibilities for delivering their services on a very large scale. They will not accept any diminution of their extremely high-reliability responsibilities, or the introduction of cybersecurity risks that they have avoided through stringent engineering (OT) control for decades.

User Advice: A mature future state of either converged or tightly integrated IT and OT networking is years away in many OT-intensive industries (such as manufacturing and utilities). IT and OT leaders must ensure that OT networking always meets its fundamental role requirements of being able to join up, control and share OT's "data streams" to survive disruptions; alarm, restore and support OT via connectivity; add coherence to operations; and feed OT process supervisory systems.
IT and OT communications will effectively converge if the technical characteristics of IT and OT networking in a particular industry, such as the availability of standards and network infrastructure, aid their coalescence, and as enterprise performance increasingly demands accurate, coherent and timely information through supervisory systems, dashboards or management summary reports. The IT organization's performance must be exemplary and trusted before CIOs will be allowed to take on any business-critical IT and OT management or networking. OT managers need to team with their IT networking counterparts under a scheme of IT and OT governance, driven by policies and with a joint work plan.

Securing communications in each IT and OT domain is a major issue in any organization; securing their joint operation multiplies the complexity and risk. Appraise the OT infrastructure and security practices in place, and then jointly evaluate IT/OT options, with security as the driving focus. Base any OT integration with IT on holistic security applied across both domains, as well as the whole enterprise. In some industries, IT and OT networking may remain entirely isolated for security reasons.

Networking tools for IT and OT are dissimilar, with some commonalities. Test whether existing operations management and network management systems can be shared, loosely coupled, layered or consolidated. Evaluate the prospects for your enterprise's specialized OT to use commoditized IT. Leverage IP for consistent IT/OT networking.
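As one concrete example of an OT protocol carried over TCP/IP, the sketch below assembles a Modbus TCP "read holding registers" request frame. This is illustrative only: it constructs the frame but does not transmit it (that requires a live device), and production SCADA environments would use hardened driver stacks rather than hand-built frames:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'read holding registers' (function 0x03) request.

    The MBAP header carries the transaction ID, protocol ID (0 for Modbus),
    the remaining byte count, and the unit ID; the PDU that follows carries
    the function code, starting register address and register count.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Hypothetical request: unit 17, 3 registers starting at address 0x006B.
frame = modbus_read_holding_registers(transaction_id=1, unit_id=17,
                                      start_addr=0x006B, count=3)
print(frame.hex())  # 0001000000061103006b0003
```

The point of the example is the one made in the text: once an OT protocol rides inside ordinary TCP/IP, the same routing, monitoring and security tooling used for IT traffic can (with care) be applied to it.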

Future IT/OT networks will be based on the following network attributes: high-bandwidth fiber infrastructure (often carrying video); low latency; browser applications; complete geographic coverage; context awareness; leveraging OT protocols carried in TCP/IP; signaling with Session Initiation Protocol; using presence to indicate the status of people and resources in unified communications and collaboration, and in communications-enabled business processes; and feeding real-time, agile corporate IT systems.

Business Impact: Each enterprise needs to prepare its own comprehensive understanding of all the related applications, systems, and industrial and business processes to be carried by a potential converged IT and OT network. Effective IT and OT interworking is a necessary precursor to obtaining any transformational effects from combining IT and OT systems and operations. The trend is for OT to increasingly use IT-derived technologies and become integrated with IT environments, although OT investment cycles are an order of magnitude longer than those for IT. Expect organizational friction as the previously separate IT and OT domains are tasked with cooperating with each other. Begin by respecting the inevitable IT-versus-OT politics at work. IT and OT need to agree on a single architecture.

Use Gartner's Infrastructure Maturity Model to evaluate whether stand-alone, integrated or converged networking is appropriate for your IT and OT architecture and deployment plans. Any IT/OT integration needs to be driven by your industry's best practices. Know the competitive status of IT/OT integration in your business relative to your industry peers. Target the best infrastructure, benchmarks and practices for your purposes and your business's competitive plans. Get IT/OT integration onto your business managers' agenda, and obtain policy decisions and direction for its use and restrictions in supporting business transformation.

Expect to use emerging networking services, such as application acceleration/WAN optimization, which iteratively manages bandwidth; content delivery networks; WAN optimization controllers; application delivery controllers; protocol manipulation; and end-to-end network performance.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: ABB; Alstom Grid; Cisco; Emerson Process Management; General Electric; Honeywell; Huawei; Invensys; Omron; Rockwell Automation; Schneider Electric; Siemens; Yokogawa Electric; ZTE

Recommended Reading: "Predicts 2013: IT and OT Alignment Has Risks and Opportunities"

"Cool Vendors in IT/OT Alignment and Integration, 2013"

"Top 10 Technology Trends Impacting the Energy and Utility Industry in 2013"

"Organizing IT and OT Networking for Future Integration"


"A Network Manager's Primer on Addressing Operational Technologies"

Operational Technologies for Government


Analysis By: Jeff Vining

Definition: Operational technology (OT) for government facilities and service delivery is a broad concept that encompasses both hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes and events. As a consequence, there is an opportunity to connect and manage these embedded devices/sensors (physical assets) to create, send and receive data.

Position and Adoption Speed Justification: More intelligent interaction between IT and OT information systems is key for governments to contain costs, support growth and improve services. In particular, this requires the ability to leverage communications infrastructure to gather information from a full range of OT devices to collect and manage data, such as remote sensors, Wi-Fi, cellular ID, RFID tags and GIS/GPS devices. Data gathering and management approaches based on these technologies are becoming increasingly important in the context of "smart city" programs around the globe: Rapid adoption of operational technologies plays a key role both in greenfield developments, where new cities or new city areas are being built, and in brownfield developments, where existing city infrastructure is being retrofitted with sensors and devices to improve resource utilization, reduce pollution or collect more revenue.

Government should be aware of the strong technology push from vendors that bundle products and services into a solution set. However, this push does not take into account the maturity of an organization's enterprise architecture, software configuration practices and information integration challenges. Challenges such as these surface when attempts are made to integrate OT into IT governance models and processes.

User Advice: Government must demonstrate sustainable public value to justify the exploration of technology partner relationships and IT/OT convergence models.
For example, the Port of Los Angeles now deploys more devices and applications to create a more event-driven approach to decision making. Government IT administration should begin by connecting devices and sensors to provide information and services that were not available before (such as ordinary lampposts being equipped with environmental sensors that can communicate conditions), or to link people (workers), processes and systems. For example, several government organizations are creating the ability to monitor transportation conditions (such as potholes and other weather-related factors), as well as to locate and track equipment to more quickly prioritize and mobilize it.

Increasingly, OT is playing a larger part in the data collection strategy for government. However, government should be aware of latency and other risks (such as open software environments or the Stuxnet virus) as more IP-addressable devices are added to the communications infrastructure. To manage these risks, develop processes and tools, such as field sensors, mobile devices and master data management disciplines, for real-time data management and analysis.

Business Impact: The value of OT for government is in its ability to observe, report on and control processes and data collection points in real time, which can range from incremental improvements (meters communicating conditions) to more transformational changes, such as automating digital signs. Government is unprepared for the multiple layers of managing interfaces to various devices and applications, as well as the supporting architectures. These layers have governance and technology
impacts. Thus, C-level executives should first consider modernizing their communications infrastructure and dedicated networks. They should then take incremental steps, such as IP-enabling environmental devices/sensors or using GPS devices, to make better forecasting and asset management decisions. For example, natural resource and wildlife management agencies are turning to unmanned aerial vehicle (UAV) platforms using lidar (light detection and ranging) to produce terrain maps to better comply with regulatory requirements (such as noting terrain elevation or deforestation changes). Although operating costs for lidar are dropping, agencies still require service providers to manage the collected data and import it into existing mapping systems. Think about an overall design process that could integrate lidar data, aerial photography and other data sources (such as spectral imaging systems or miniature synthetic aperture radar) to improve accuracy and reduce the costs of assessing flood risks and managing infrastructure.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: ABB; IBM; Oracle; Rockwell Automation; SAP; Siemens (UGS); Tibco Software

Recommended Reading: "Agenda Overview for Operational Technology Alignment With IT, 2013"

"Solving Two Key Challenges Is Critical if Using UAVs for Aerial Photography and Mapping"

"IT and Operational Technology: Convergence, Alignment and Integration"

"Internet of Things Research for CIOs"

Big Data
Analysis By: Mark A. Beyer; Sid Deshpande

Definition: Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.

Position and Adoption Speed Justification: Big data is almost at the Peak of Inflated Expectations. It will become an embedded and state-of-the-art practice by 2018, and it is more likely that big data management and analysis approaches will be incorporated into a variety of existing solutions in existing markets (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"). Notably, organizations have begun to indicate that existing analytics will be modified and enhanced by big data, not replaced (only 11% of data warehouse leaders indicated they would consider replacing the warehouse with a NoSQL or big data solution as of November 2012, down from just over 20% in 2011).

Practices are diverging at this point, with confusion starting to emerge regarding exactly what constitutes big data and how it should be addressed. Some very traditional vendors that have not been considered for big data solutions should be considered, and this confusion may
be their entry point into the debate about which tools to use. Other vendors will simply relabel their existing products as big data without actually offering anything new. Beginning late in 2014 and continuing through the end of 2015, big data will descend into the Trough of Disillusionment as conflicting concepts of what it is, and how organizations can benefit from its management and analysis, multiply. Two significant factors will drive it into the trough:

Tools and techniques are being adopted ahead of learned expertise and any maturity/optimization, which is creating confusion.

The business's inability to spot big data opportunities, formulate the right questions and execute on the insights.

MapReduce continues to persist as the "darling" of big data processing. Even with new additions to, or wider use of, the Hadoop project (such as HCatalog), it remains a batch solution and so has to be combined with other information management and processing technologies. Hadoop implementations require expert-level staff or system implementers. As anticipated in 2011, attempts to combine MapReduce with graph processing have followed, and inadequate attempts to address other big data assets, such as images, video, sound and even three-dimensional object modeling, will drive big data into the trough.

Some big data technologies represent a great leap forward in processing management, especially relevant to narrow but deep (many records) datasets, such as those found in operational technology, sensor data, medical devices and mobile devices, among others. Big data approaches to analyzing data from these technologies represent the potential for big data solutions to overtake existing technology solutions when the demand emerges to access, read, present or analyze any data.

The larger context of big data refers to the wide variety and extreme size and count of data creation venues in the 21st century. Gartner clients have made it clear that big data must include large volumes processed in streams as well as batch (not just MapReduce), and an extensible services framework deploying processing to the data or bringing data to the process, spanning more than one variety of asset type (for example, not just tabular, or just streams, or just text). Importantly, different aspects and types of big data have been around for more than a decade; it is only recent market hype around legitimate new techniques and solutions that has created this heightened demand.

User Advice:

Identify existing business processes that are hampered in their use of information because the volume is too large, where information gaps could be filled by new information types (variety), or where the velocity will create processing issues. Then identify business processes that are currently attempting to solve these issues with one-off or manual solutions.

Review existing information assets that were previously beyond existing analytic or processing capabilities (referred to as "dark data") and determine if they have untapped value to the business, making them a first or pilot target of your big data strategy.


Plan on utilizing scalable information management resources, whether public cloud, private cloud or resource allocation (commissioning and decommissioning of infrastructure), or some other strategy. Do not forget that this is not just a storage and access issue: Complex, multilevel, highly correlated information processing will demand elasticity in compute resources, similar to the elasticity required for storage/persistence.

Extend the metadata management strategies already in place, and recognize that more is needed to enable the documentation of big data assets, their pervasiveness of use, and the fidelity or assurance of the assets, by tracking how information assets relate to each other and more.
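The batch MapReduce pattern discussed above reduces to a simple shape: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Real frameworks such as Hadoop distribute these phases across a cluster; the single-process sketch below shows only the shape of the computation, using word counting as the classic example:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit a (word, 1) pair for every word in an input record."""
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate each key's values into a single result."""
    return key, sum(values)

records = ["sensor data sensor", "data stream data"]
mapped = chain.from_iterable(map_phase(r) for r in records)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'sensor': 2, 'data': 3, 'stream': 1}
```

Because each phase is embarrassingly parallel between keys, the same logic scales out across machines, which is the appeal of the model for deep, many-record OT and sensor datasets; its batch nature is also why, as noted above, it must be combined with streaming technologies for velocity-driven workloads.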

Business Impact: There are three principal aspects to big data, and success will be limited unless all of them are addressed. First, the quantitative aspects of big data generally do not emerge one by one: Volume, variety and velocity most often occur together. Second, innovation must be cost-effective, both in costs to deploy and maintain and in terms of time to deliver; solutions that arrive too late are useless, regardless of cost. Finally, the focus must be on increased insight by the business into process optimization, from immediate automation through the development of completely new business models.

Big data permits greater analysis of all available data, detecting even the smallest details of the information corpus, a precursor to effective insight and discovery. The primary use cases emerging include leveraging social media data, combining operational technology (machine data) with back-office and business management data, and further validating existing assets (increasing their "fidelity").

Perhaps the most important business benefit of big data management and analysis techniques is that analytics and decision processing can include multiple scenarios, including highly disparate definitions and temporality of events in the data. This means that analytics can now comprise many different scenarios. Each scenario could have different starting and ending points, and differing relationships within the data and circumstantial inputs. Finally, analysts would be able to attach probabilities to each scenario and monitor many of them simultaneously.
Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Cloudera; EMC (Greenplum); Hortonworks; IBM; MapR; Teradata (Aster Data)

Recommended Reading: "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"

"'Big Data' Is Only the Beginning of Extreme Information Management"

"How to Choose the Right Apache Hadoop Distribution"


"CEO Advisory: 'Big Data' Equals Big Opportunity"

"The Importance of Big Data: A Definition"

Intelligent Lighting
Analysis By: Simon Mingay; Stephen Stokes

Definition: Intelligent lighting is lighting in any application that combines highly efficient illumination technologies, such as light-emitting diodes (LEDs); motion, light, time and other sensors; and information and communication technology (ICT) to provide a solution that is automated, dynamic, adaptable and efficient, and adjusted to the nature and level of activity being undertaken.

Position and Adoption Speed Justification: While many of the technologies combined into intelligent lighting solutions are somewhat mature, the combination, and the software providing the intelligence, are relatively new. Relatively high capital costs, vendor lock-in and a lack of familiarity remain the main hurdles to adoption, and are slowing the pace of this hybrid technology through the Hype Cycle.

User Advice: Lighting is a significant consumer of electricity, usually the second-highest behind heating and cooling in most commercial buildings:

- According to the International Energy Agency, lighting accounts for approximately 19% of global electricity consumption.
- The U.S. Department of Energy estimates that lighting accounts for, on average, 30% of electricity consumption in commercial buildings, while the Energy Information Administration puts the figure at 38%.

This is one of the areas in which the combination of operational technology (OT) and IT can make order-of-magnitude, not just marginal, improvements in efficiency, along with significant operational cost savings. As such, lighting, and the application of ICT to lighting, is worthy of attention from both the facilities management and IT teams. There are instances of intelligent lighting emerging in residential, public service, commercial and industrial (particularly warehouse) applications. Such applications deliver highly efficient lighting solutions and are capable of achieving substantially lower running costs. These solutions integrate mature technologies, such as movement and heat sensors, daylight compensation, wired and wireless networking (including ZigBee), occupancy detection, and task-oriented knowledge, to adjust the location, intensity and direction of lighting to appropriate levels for the work or activity being conducted in the area at any particular time. The mature technologies are used in harness with the latest disruptive lighting technologies, such as high-efficiency white LEDs, and building and lighting control software. They also provide usage data time series for later analysis and reporting. With the expected ongoing significant rises in the cost of electricity in many regions, it is important that, despite higher capital costs, the potential application of intelligent lighting be considered.
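As a rough illustration of the control logic described above, the sketch below combines an occupancy reading with daylight compensation to choose a dimming level for one zone. The lux targets and the fixture's maximum output are assumed round numbers for illustration, not figures from any vendor's product.

```python
def light_level(occupied: bool, daylight_lux: float,
                target_lux: float = 500.0, max_output_lux: float = 600.0) -> float:
    """Return a dimming fraction (0.0 = off, 1.0 = full) for one lighting zone.

    The zone is off when unoccupied; otherwise the fixtures supply only
    the shortfall between available daylight and the task's target level.
    """
    if not occupied:
        return 0.0
    shortfall = max(0.0, target_lux - daylight_lux)
    return min(1.0, shortfall / max_output_lux)
```

A real deployment would add time schedules, sensor fusion across zones and logging of the usage time series mentioned above, but the energy saving comes from exactly this rule: never burn electricity to produce light the daylight already provides.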

There have been two critical disruptions in lighting efficiency in the past decade. The first was the shift to compact fluorescent lights. The second has been the emergence of high-power and short-wavelength LEDs. LEDs produce photons through a band-gap interaction, so the opportunities for further technology disruption are limited. We expect an ongoing trend of improving efficiency and reducing unit cost. Most enterprises have the potential to reduce lighting costs. Intelligent lighting solutions will offer many enterprises, particularly those with light-intensive applications, significant additional savings. As such, facilities managers and, increasingly, those IT organizations with responsibility for OT solutions should consider intelligent lighting technologies for new builds and refurbishments.

Business Impact: Within the context of lighting, electricity consumption and greenhouse gas emissions, the impact is high, particularly when LEDs can be used as the light source. Overall, within the context of enterprise operations, the impact is expected generally to be moderate to significant, depending on the spectrum of electricity-intensive activities in a given industry and the light source that can be used. For warehousing and other light-intensive applications, the impact will be substantial, with quoted efficiency gains at the level of 95% per unit area.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Acuity Brands; Adura Technologies; Daintree Networks; Digital Lumens; Encelium; Enlighted; GE Energy; Lumenergi; Lutron Electronics; Osram; Philips Dynalite; Philips Lightolier; Redwood Systems; Schneider Electric
Recommended Reading: "Cool Vendors in Sustainability, 2010"
"Market Trends: High-Brightness White LEDs Will Boom From LCD TV and Industrial Lighting Demand"
"Hype Cycle for Wireless Networking Infrastructure, 2011"
"Market Trends: Energy Management in the Smart Home"

Facilities Energy Management


Analysis By: Stephen Stokes; Simon Mingay

Definition: Facilities energy management involves the use of a combination of advanced metering, IT and other operational technology (OT) that tracks, reports and analyzes energy consumption, and alerts operational staff in real time or near real time. Systems are capable of allowing highly dynamic visibility and operator influence over building and facility energy performance. They also provide dashboard views of energy consumption levels, with varying degrees of granularity, and allow data feeds from a wide range of building equipment and subsystems.
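A minimal sketch of the near-real-time alerting in that definition might look like the following. The rolling-baseline rule, window size and threshold here are illustrative assumptions, not a description of how any vendor's product detects anomalies.

```python
from collections import deque
from statistics import mean, stdev

def kwh_alerts(readings, window=8, threshold=3.0):
    """Flag interval meter readings that deviate sharply from the recent baseline.

    readings: iterable of (timestamp, kwh) pairs from a metering feed.
    Returns the timestamps whose consumption exceeds the rolling mean
    of the last `window` readings by more than `threshold` standard deviations.
    """
    history = deque(maxlen=window)
    alerts = []
    for ts, kwh in readings:
        if len(history) >= 2:  # need at least two points for a baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (kwh - mu) / sigma > threshold:
                alerts.append(ts)
        history.append(kwh)
    return alerts
```

A facilities dashboard would attach zone, meter and tariff context to each alert; the point here is only that the tracking, analysis and alerting loop is a small amount of logic once the metering data feed exists.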


Position and Adoption Speed Justification: Real-time energy management within buildings is a central element of the emergent megatrend of "smart systems." Frequently called building information management systems (BIMs) or integrated workplace management systems, these systems are at the heart of the disruptive convergence of the traditional hard-wired, proprietary-protocol facilities management world with the software-driven, open-network-enabled world of information and communication technology (ICT), with additional contributions from a spectrum of established automation and advanced metering infrastructure (AMI) vendors. This convergence is bringing the traditional building controls vendors, such as Johnson Controls, Schneider Electric and Honeywell, into "co-opetition" with ICT vendors large and small, such as IBM, Cisco, SCIenergy and BuildingIQ. The integration of AMI data with ICT is rapidly establishing a new level of facility operational performance expectations. Such systems allow for diagnosing high- and low-energy-consumption areas, regions or geographies. Facilities energy management is an area of sustainable business that allows immediate and quantifiable costs and benefits over all time scales and across real estate portfolios, literally from warehouses and distribution centers to the Empire State Building.

User Advice: Cost cutting, concerns over emission reduction requirements, and future energy price levels and volatility have converged to make energy management a key element of sustainability and business strategies in the emerging low-carbon economy. Some of the lowest of the low-hanging energy-efficiency fruit can be found in the energy footprint of buildings. It has been estimated that buildings are the single-largest consumer of electricity globally, and that about 40% of this energy consumption can be removed by implementing existing and mature efficient technologies, as well as operating and information technologies.
Although most solutions focus on energy management, others incorporate a wider portfolio of building-related data, including occupancy levels, water usage and other factors. Current generations of applications can provide granular and real-time insights into performance. A small number of vendors are now touting "dynamic intervention" to allow automated optimization without user input. Enterprises with large buildings, or with a large real estate portfolio for which they are responsible for the energy bill, should be actively looking at this technology. Delivering visibility into a facility's energy usage allows ongoing investment and innovation for performance improvement, and provides a basis for systematic facilities portfolio management from the perspective of energy consumption. Many jurisdictions are planning legislation to force reporting and minimum energy-efficiency performance. The thresholds at which this technology becomes worthwhile are dropping, particularly as SaaS offerings become available, so it makes sense for even midsize organizations to be actively testing or making use of these products. The IT team should work with the facilities management team to exploit the convergence of building systems and IT, and leverage the data as part of a sustainable business system architecture.

Business Impact: There is a significant potential business impact relating to energy usage visibility and cost savings. Integration of on-site renewable energy sources and their management within overall workplace energy consumption are additional, increasing factors.

Benefit Rating: High


Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: BuildingIQ; Honeywell; IBM; Johnson Controls; OSIsoft; Schneider Electric; SCIenergy; Siemens IT Solutions and Services; Verisae
Recommended Reading: "An Integrated Building Management System Puts the 'Smart' in Smart Building"
"Vendor Guide for Industrial Energy Management, 2013"
"An Energy-Efficiency and Sustainable Buildings Case Study: Johnson Controls Demonstrates Leadership in Design and Execution"
"Sustainable Buildings: Dynamic, High-Performance Corporate Assets"

Complex-Event Processing
Analysis By: W. Roy Schulte; Zarko Sumic; Nick Heudecker

Definition: Complex-event processing (CEP), sometimes called event stream processing, is a kind of computing in which incoming data about events is distilled into more useful, higher-level and more complex event data that provides insight into what is happening. Multiple events from one or more event streams are correlated on the basis of a common value in a key field, and patterns and trends are detected. One complex event may be the result of calculations performed on dozens, hundreds or even thousands of input (base) events.

Position and Adoption Speed Justification: CEP has progressed slightly on the Hype Cycle, putting it just past the Peak of Inflated Expectations. It attracts considerable attention because it is the technology used for many kinds of real-time analytics on big data. However, companies are adopting CEP at a relatively slow rate because its architecture is so different from conventional system designs. Although most developers are unfamiliar with this technology, it is the only way to get many kinds of insight from event streams in real time or near real time, so it will inevitably be adopted in multiple places within virtually every company. It may take up to 10 years for CEP to reach the Plateau of Productivity and be in use in the majority of applications for which it is appropriate. The amount of real-time event data is growing rapidly. It comes from transactional application systems, sensors, market data providers, social computing (activity streams), tweets, Web-based news feeds, email systems and other sources. Companies need to tap the information in these event streams to respond faster and smarter to changing conditions, but conventional software architectures can't address these business requirements.
Conventional applications use a save-and-process paradigm in which incoming data is stored in databases (in memory or on disk), and then queries are applied to extract the relevant subset of data needed for a particular application function (in other words, the query comes to the data). This paradigm is too slow when the volume of incoming data is high and the results of computation must be made available immediately. Architects are therefore driven to use an alternative, event-driven design in which the computation is triggered immediately as input data is received into memory. In this "process first" CEP paradigm, the application runs continuously and the data comes to the query, because the query is in place before the data arrives. This is known as "data in motion."

User Advice: Companies should use CEP to enhance their situation awareness and to automate certain kinds of decisions, particularly those that must be made in real time or near real time. Situation awareness means understanding what is going on so that you can decide what to do. CEP should be used in operational activities that run continuously and need ongoing monitoring, using a sense-and-respond approach. For example, it can apply to near-real-time precision marketing (cross-sell and upsell), fraud detection, factory floor and website monitoring, customer contact center management, trading systems for capital markets, and transportation operation management (for airlines, trains, shipping and trucking). In a utility context, CEP can be used to process a combination of supervisory control and data acquisition (SCADA) events and "last gasp" notifications from smart meters to determine the location and severity of a network fault, and then to trigger appropriate remedial actions. Developers can obtain CEP functionality by custom coding it into their application or tool, or by acquiring an event-processing platform and tailoring it to their specific requirements. Prior to 2004, developers wrote custom code for CEP logic as part of their application or tool, because off-the-shelf event-processing platforms were not available. Developers still write custom CEP logic for many purposes, but a growing number leverage commercial and open-source event-processing platforms to reduce the time and cost required to implement CEP-based applications (see examples of vendors listed below).
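The utility example (smart-meter "last gasp" notifications) can illustrate the process-first pattern: the query is in place before the data arrives, and base events are correlated on a common key field to synthesize a complex event. The event shapes, the "feeder" key and the thresholds below are hypothetical, chosen only to make the pattern concrete.

```python
from collections import defaultdict

def detect_feeder_faults(events, min_meters=3, window_s=60):
    """Process-first sketch: distill meter last-gasp events into feeder-fault events.

    events: iterable of dicts such as {"type": "last_gasp", "feeder": "F12", "ts": 5}.
    Base events are correlated on the common key field "feeder"; when
    `min_meters` last-gasp messages arrive on one feeder within `window_s`
    seconds, one higher-level complex event ("feeder_fault") is emitted.
    """
    recent = defaultdict(list)  # feeder -> timestamps of recent last gasps
    faults = []
    for ev in events:
        if ev["type"] != "last_gasp":
            continue  # other SCADA events would feed other standing queries
        ts, feeder = ev["ts"], ev["feeder"]
        # keep only gasps inside the sliding window, then add this one
        recent[feeder] = [t for t in recent[feeder] if ts - t < window_s] + [ts]
        if len(recent[feeder]) == min_meters:
            faults.append({"type": "feeder_fault", "feeder": feeder, "ts": ts})
    return faults
```

Note the inversion relative to save-and-process: nothing is stored and queried later; the standing pattern evaluates each event as it arrives, which is what lets the complex event fire with low latency.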
Some commercial event-processing platform products have extensive analytic capabilities, such as built-in statistical functions, tools for building business dashboards, off-the-shelf adapters for packaged applications or industry message-format standards, alerting mechanisms and graphical development tools. In many cases, CEP will be acquired in a packaged application or tool, or obtained as part of a SaaS offering. Leading-edge architects and developers have become aware of event processing, event-driven architecture and CEP, and are making build-versus-buy decisions in an increasing number of projects. Event-processing platforms are sometimes used in conjunction with intelligent business process management suites or operational intelligence platform products to provide more intelligent process monitoring, or to help make business decisions on a dynamic, context-aware basis. CEP plays a key role in event-driven business process management, alongside rule engines and structured process-orchestration capabilities. Business Impact: CEP:

- Improves the quality of decision making by presenting information that would otherwise be overlooked.
- Enables faster response to threats and opportunities.
- Helps shield business people from data overload by eliminating irrelevant information and presenting only alerts and distilled versions of the most important information.
- Reduces the cost of manually processing the growing volume of event data.


CEP adds real-time intelligence to operational technology (OT) and business IT applications. OT is hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes and events in the enterprise. OT goes by various names in different industries, and is often owned and operated independently of IT systems. For example, utility companies use CEP as a part of their smart grid initiatives, to analyze electricity consumption and to monitor the health of equipment and networks. Elsewhere, CEP helps to process feeds of event data such as temperature, vibration and revolutions-per-second that, when analyzed together, may predict impending equipment failure. CEP is also used in business-activity monitoring applications that have a high rate of input data (high throughput), require fast (low latency) responses or require the detection of complex patterns (especially those that are temporal or location-based). CEP is one of the key enablers of context-aware computing and intelligent business operations strategies. Some of the more sophisticated operational intelligence platform products use CEP to provide pattern matching and situation detection capabilities. The biggest single source of future demand for CEP may be the emerging Internet of Things. Social computing may be the second largest source of new data and demand for CEP. 
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Apache; EsperTech; FeedZai; Grok; HStreaming; IBM; Informatica; LG CNS; Microsoft; Oracle; Red Hat; SAP (Sybase); SAS (DataFlux); ScaleOut Software; Software AG; Splunk; SQLstream; Tibco Software; Vitria; WestGlobal; WSO2
Recommended Reading: "Use Complex-Event Processing to Keep Up With Real-time Big Data"
"Best Practices for Designing Event Models for Operational Intelligence"
"Cool Vendors in Analytics"
"Apply Three Disciplines to Make Business Operations More Intelligent"

Open SCADA
Analysis By: Randy Rhodes

Definition: Supervisory control and data acquisition (SCADA) systems include a human-machine interface, control interfaces for plant systems, data acquisition units, alarm processing, remote terminal units or programmable logic controllers, trend analysis, and communication infrastructure. Open SCADA systems are built on industry and de facto standards, rather than closed, proprietary platforms. They may include open-source software.
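To make the components in that definition concrete, here is a hedged sketch of one supervisory scan cycle: poll data acquisition points, compare readings against alarm limits, and emit alarms for the human-machine interface. The `read_register` callable is a hypothetical stand-in for a DNP3 or IEC 60870 driver; no real protocol library or its API is used here.

```python
def scan_cycle(read_register, points, alarm_limits):
    """One SCADA scan: poll each point, check limits, return values and alarms.

    read_register: callable (point_id) -> float, standing in for the
        communication layer to an RTU or PLC (illustrative stub only).
    points: list of point ids to poll this cycle.
    alarm_limits: {point_id: (low, high)} engineering limits per point.
    """
    values, alarms = {}, []
    for point in points:
        value = read_register(point)     # data acquisition
        values[point] = value            # feeds trending / the HMI
        low, high = alarm_limits[point]
        if not (low <= value <= high):   # alarm processing
            alarms.append((point, value))
    return values, alarms
```

A production system layers protocol handling, time stamping, quality flags, deadbanding and alarm acknowledgment on top of this loop, but the poll-compare-alarm skeleton is the part the open standards in this entry (DNP3, IEC 60870-5, IEC 61850, OPC) exist to transport.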


Position and Adoption Speed Justification: Early SCADA systems were built on proprietary, event-driven operating systems. Today's SCADA systems increasingly depend on commonly available hardware and operating systems. Microsoft and Linux operating systems have become widely accepted and are common among utility SCADA applications, particularly on client workstations. Most communication subsystems now depend on standard, openly published protocols, such as IEC 60870-5-101, IEC 60870-5-104, IEC 61850 and Distributed Network Protocol 3 (DNP3), rather than the vendor-specific protocols of the past. Support for the OPC Foundation's Unified Architecture is widespread (based on Microsoft Windows technology, OPC originally meant OLE for Process Control, but now it stands for Open Process Control), and extensions for communications over TCP/IP are available from most vendors. Adoption of open SCADA has slowed due to industry awareness of security issues. Utilities are showing some caution due to the mission-critical nature of modern utility SCADA systems; a worst-case SCADA security disruption could cause costly equipment failure. Network security vendors are addressing specialized security risks with ruggedized industrial firewall solutions for SCADA networks. SCADA vendors are adding enhanced security management and compliance monitoring features that are consistent with IT desktop management systems. A few self-organizing groups offer open-source SCADA code; typically, this includes Linux-based SCADA servers, a few distributed control system interfaces, SCADA alarm processing, and time-series trending. Ongoing support is still limited, however, and development road maps are uncertain at best.

User Advice: For electric, gas and water utility applications, open SCADA will be more commonly adopted by small and midsize municipal organizations, where there is less need for complex analytical applications requiring in-depth vendor customization of the overall technology stack.
Utilities should rely not only on the business unit technical operations staff, but also on the internal IT support staff, to ensure that these systems are fully maintained throughout their entire life cycles. The IT staff should assist in establishing operational technology (OT) governance on all SCADA projects, including network security, access monitoring, patch administration, and backup and restoration management. The IT staff should also take the lead in establishing clear accountability for ongoing operational support and maintenance via service-level agreements between the IT and OT staffs. While open-architecture systems offer improved flexibility and lower costs, commoditized platforms typically offer a broader "attack surface" to potential intruders. Additional security controls will likely introduce more technical complexity, unexpected implementation delays and increased support requirements.

Business Impact: Open SCADA changes will affect process control and distribution automation functions in electric, gas, water and wastewater utilities.

Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent

Sample Vendors: CG Automation; Efacec ACS; GE Intelligent Platforms; Open Systems International (OSI); Survalent Technology
Recommended Reading: "Security Lessons Learned From Stuxnet"
"How CIOs Should Address Accountability and Responsibility for IT and OT"
"Five Mistakes to Avoid When Implementing Open-Source Software"

Operations Intelligence
Analysis By: Simon F. Jacobson; Leif Eriksen

Definition: Operations intelligence (OI) takes manufacturing data beyond the traditional reporting facilitated by enterprise manufacturing intelligence (EMI) applications. It incorporates business process management (BPM) and business intelligence (BI) disciplines. This focus on a greater range of data and information enables organizations to place manufacturing's performance in the context of longer-term business outcomes.

Position and Adoption Speed Justification: As companies demand more predictability from their manufacturing operations, demand for analytical capabilities that enable a feedback loop between business performance and manufacturing capabilities increases. Real-time environments need near-real-time information. Early adopters of EMI are looking for more than the historical, descriptive and post facto analytics provided by their current dashboards and widgets. EMI provides a mechanism for data capture, aggregation and analysis. OI goes a step further than EMI by incorporating the creation and management of data models, using data mining and discovery tools, and leveraging simulation and scenario analyses to monitor and add context to the large volumes of data generated from complex production processes housed within distributed and heterogeneous plant systems. It also adds BPM discipline to enable an event-driven architecture that supports the ability to report and act on real-time discovery of events, not just alarms, and helps guide operators or targeted stakeholders to the appropriate actions. The end goal is to create an architecture that places real-time operational data (such as asset availability and capability, work in process, test and quality status, and inventory movements) in a more holistic and actionable business context. Currently, Gartner sees more adoption in strategy and pilots than in wider-scale deployments.
The emergence of enterprise use cases at this level of architecture has pushed OI closer to the peak from its past position. As more commercially available offerings emerge from industrial automation providers such as GE Intelligent Platforms and specialist providers such as Savigent Software, as systems become easier to deploy (by exploiting cloud and mobile technologies) than today's "build it yourself" architectural frameworks that stitch together multiple applications, and as companies report business value, maturity will progress and a shorter time to plateau can be anticipated.


User Advice: Do not confuse OI with EMI. OI is more than reporting, and isn't a capability that's easily achieved with traditional BI applications or data warehousing techniques and skill sets. Real-time environments need real-time systems. Specialty vendors and skill sets are required. Here are two key points to consider for companies starting down this path:

- Identify suitable use cases that can span scenarios across new product design and introduction (NPDI), asset performance management, energy efficiency, quality, contract manufacturing, customer sales and aftermarket services, and manufacturing network design. A prevalent scenario is a chemicals producer combining historical data and unit metrics on asset performance, cost factors and process cycle times by shift, day and month with current standard process models, real-time performance data from the automation and controls layer, and ERP-based data. It has created simulation models that use current and historical performance data to predict the impacts of unplanned changes in energy and material costs on asset performance and total margins. Over time, this will affect decision making on production sequencing, product portfolio management and activity-based costing.
- Define how real-time OI needs to be. Identify which users need what information, and when, to support advanced decision making. Also, ensure that information usage, value and dissemination happen quickly and efficiently. Business leadership from manufacturing, supply chain and IT must share responsibility when deciding where the information should be used and when building the business case.
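The chemicals-producer scenario amounts to a what-if replay of a unit cost model under an unplanned energy-price change. A deliberately simplified sketch, with every parameter name invented for illustration rather than taken from any vendor's data model:

```python
def margin_impact(throughput_t, price_per_t, material_cost_per_t,
                  energy_kwh_per_t, energy_price_kwh, energy_price_shock=0.0):
    """Total margin for one production unit under an energy-price scenario.

    throughput_t: tons produced in the period.
    energy_price_shock: unplanned change in the electricity price ($/kWh),
        the kind of disturbance the simulation models are asked to replay.
    """
    energy_cost_per_t = energy_kwh_per_t * (energy_price_kwh + energy_price_shock)
    unit_margin = price_per_t - material_cost_per_t - energy_cost_per_t
    return throughput_t * unit_margin
```

Calling the function once with `energy_price_shock=0.0` and once with a shocked price, then differencing the results, is the elementary version of the margin-sensitivity question; an OI deployment would drive the same calculation with live automation-layer data and historical baselines instead of hand-entered constants.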

Business Impact: OI delivers to manufacturers an ability to mine, model, manage and simulate operations data over an extended hierarchy of time scales and business priorities that extends significantly beyond the production environment. It's the foundation for a closed-loop capability that enables a knowledge-based enterprise in which not only executives, but also functional workers, have access to the kinds of analytics they would not otherwise have. Overall, this increases visibility into how the company is running and what is happening in its external environment. Individual contributors and managers have improved situational awareness, so they can make faster and better decisions based on high-quality information that's extracted and distilled from multiple data points.

Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Apriso; AspenTech; Camstar; Dassault Systemes; GE Intelligent Platforms; Invensys Operations Management; Microsoft; Rockwell Automation; Savigent Software; SAP; SAS
Recommended Reading: "The Nexus of Forces Is Ready to Advance Manufacturing 2.0"
"The Manufacturing Performance Dilemma, Part 2: From Enterprise Manufacturing Intelligence to Operations Intelligence"
"Operations Intelligence Adds Context to Manufacturing Metrics That Support DDVN Goals"


"Asset Management in DDVN, Part 2: Data Shows Performance Improvements Are Within Reach"

System Engineering Software


Analysis By: Marc Halpern

Definition: System engineering software enables the modeling and simulation of products as engineering systems. This software employs the critical engineering parameters that govern system behavior. Engineers use it to identify the critical parameters that yield product designs with predictable performance across the range of expected operating conditions.

Position and Adoption Speed Justification: The concept of product functional analysis and system modeling has been around for decades. However, the rapid rate at which software is being infused into physical products as "software-enabled products" has elevated system engineering as commercial software. Companies historically did their system engineering work using in-house tools, Microsoft Excel, or math tools that can solve systems of equations either numerically or symbolically. Enterprise vendors such as Dassault Systemes, IBM, PTC and Siemens have high interest in system engineering. Commercial software developers are recognizing the importance of system engineering, investing in this technology and promoting their interest. Product life cycle management (PLM)-centric conferences are focusing more attention on system engineering. During the past year, several PLM vendors have incorporated system engineering into their product suites. However, due to the complexity of the many design domains found across a system, they do not yet meet the complete needs of the user community. For example, most system engineering software does not conveniently capture software logic or physical behaviors, such as optics, as parts of an electromechanical system.

User Advice: Large manufacturers of complex engineered product systems involving operational technology should be investing 10% to 20% of PLM spending in this class of software. Examples include consumer electronics, military electronics, aerospace and defense products, transportation vehicles, industrial equipment, and heavy machinery.
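The critical-parameter studies such tools support can be illustrated with a toy braking model. This is an idealized textbook stopping-distance formula, not anything a commercial system engineering suite ships: sweeping the friction coefficient shows how strongly a single critical parameter drives overall system behavior.

```python
def stopping_distance_m(speed_mps, mu, g=9.81):
    """Idealized braking model: distance = v^2 / (2 * mu * g).

    speed_mps: vehicle speed in m/s; mu: tire-road friction coefficient.
    """
    return speed_mps ** 2 / (2 * mu * g)

def sensitivity(mu_values, speed_mps=30.0):
    """Sweep the friction coefficient to study its influence on stopping
    distance -- the kind of critical-parameter analysis the text describes."""
    return {mu: round(stopping_distance_m(speed_mps, mu), 1) for mu in mu_values}
```

In a real anti-lock brake study, the system model would couple this physical behavior with the control software's logic and the electronics that sense wheel slip; the value of system engineering software is keeping those domains in one executable model rather than in separate spreadsheets.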
Ideally, these large manufacturers should use system engineering software with product requirements management software. Manufacturers should prioritize the quality of interfaces between requirements management software and system engineering software when choosing among candidate software providers. This integration should provide mappings between requirements and the technical specifications that meet the requirements. Additionally, integration of the system engineering software itself should enable users to link critical design parameters from the system models to the technical specifications.

Business Impact: System engineering software can integrate the functional behavior of electronics, software and mechanical features in a single model. For example, in the design of anti-lock brakes, the system engineering model would incorporate the key physical effects related to conditions, such as friction and deceleration, that would need to be included in the design to understand the operating parameters and the critical variables that must be controlled for reliable performance. Increasing the role of system engineering has the potential to transform how manufacturers produce products. Today, engineers use these tools to identify the critical parameters that have the greatest influence on product behavior. They study those parameters to direct design decisions. System engineering can reduce the number of iterations necessary to ensure that product designs match product requirements, compliance issues, manufacturability objectives and serviceability needs. Increasingly, manufacturers are recognizing that system engineering can dramatically improve the reusability of existing parts and subsystems to create a greater variety of products and options, enabling them to reach broader markets scalably and more efficiently.

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Comet Solutions; Dassault Systemes; IBM (Telelogic); Jama Software; MathWorks; Phoenix Integration; PTC; Siemens
Recommended Reading: "Expand Markets Yet Reduce Costs With Winning Product Portfolios"

Sliding Into the Trough


Integrated and Open Building Automation and Control Systems
Analysis By: Simon Mingay; Stephen Stokes

Definition: These systems integrate and optimize the management and control of heterogeneous building infrastructure equipment, using IP and open standards (such as BACnet), data descriptions and protocols. These systems will integrate the management of power distribution; uninterruptible power supplies; heating, ventilation and air conditioning; lighting; energy monitoring; access controls; surveillance and security; and on-site energy generation. They support remote access and management, as well as distributed control over secure IP networks.

Position and Adoption Speed Justification: These systems help support energy efficiency and building automation, and can automate a response to pricing signals and demand response requests from the utility. They may take other external data feeds, such as weather forecasts, and, based on a set of user-definable policies and rules, take appropriate measures. The data from these systems is made available to enterprise facilities management systems, and other enterprise applications, through appropriate integration or data exchange mechanisms that make use of open and industry standards, such as SOAP and XML. That may include published APIs and Web services, and compliance with any data standards that ensure data consistency. Most of the individual parts of these solutions, such as industry-standard communication protocols, are available and are mature technologies. Many building systems are simply connected, rather than integrated, although increasingly there are single-vendor solutions available that integrate much of this. However, most enterprises don't have the luxury of a single-vendor environment, and are faced with a heterogeneous portfolio of building assets and a capital budget that won't extend to cover the associated costs. The challenge is to integrate through open standards, ideally across a range of equipment from multiple vendors, and to be able to securely integrate the systems
and data at low cost into existing enterprise networks, systems and building information management software. The technology will continue to accelerate due to a combination of continued convergence of the networking layers, increased awareness of and demand for efficient building performance, and wider deployment of sensors and networks capable of sensing environmental and resource consumption data on increasingly real-time scales. The solutions will increasingly integrate with cloud-based options. Building management and automation systems are widely used to achieve new levels of visibility and insight into campus, building and building subsystem performance. Building management, and building energy efficiency in particular, are among the hottest spaces within the sustainable business marketplace. Basic automation technologies have been around for a considerable time. What is new is the integration of these data feeds for the purposes of management and automation, and into rich dashboards and cockpits, allowing for real-time analysis, process optimization and, in many cases, significant shifts from reactive to preventive maintenance. It will also become easier to integrate the management of both compute and building systems within the data center, an area that data center infrastructure management is tackling.

User Advice: IT organizations will increasingly be drawn into discussions about building automation and control, on the basis of architecting and operating converged building and enterprise networks, or around security and remote access issues, or for data analytics. They should seek to influence solution architectures and technology choices toward open systems and protocols that facilitate integration and access to data. Early involvement in new builds and major refurbishments will be important if influence is to be effective.
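The policy-and-rules behavior described above can be sketched in a few lines. The following is a minimal, hypothetical illustration; the price threshold, setpoint offsets and data feeds are invented for the example, not taken from any vendor's product.

```python
# Hypothetical sketch: a building controller adjusting HVAC setpoints in
# response to a utility pricing signal and a weather feed, per user-defined rules.

def hvac_setpoint(base_c: float, price_per_kwh: float, forecast_high_c: float) -> float:
    """Return a cooling setpoint (deg C) after applying simple policy rules."""
    setpoint = base_c
    if price_per_kwh > 0.30:        # demand-response rule: relax comfort at peak prices
        setpoint += 2.0
    if forecast_high_c > 35.0:      # pre-cooling rule: tighten ahead of a hot afternoon
        setpoint -= 1.0
    return setpoint

# Normal pricing, mild day: setpoint stays at the base value.
print(hvac_setpoint(23.0, price_per_kwh=0.12, forecast_high_c=28.0))  # 23.0
# Peak pricing on a hot day: both rules fire.
print(hvac_setpoint(23.0, price_per_kwh=0.45, forecast_high_c=37.0))  # 24.0
```

A real system would evaluate such rules continuously against live BACnet points rather than one-off function calls, but the shape of the policy logic is the same.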
Business Impact: The business impact of building automation and control systems is considerable, and it increases in proportion to the share of total expenses that building operational expenses represent. Applying existing, mature physical technologies and related automation and control systems can reduce energy consumption and associated emissions from commercial building environments by more than 40%. Payback periods on such projects and programs are shrinking quickly, and are commonly cited on time scales of less than three years.

Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: BuildingIQ; Honeywell; IBM; Johnson Controls; Powerit Solutions; Schneider Electric; Siemens
Recommended Reading: "An Integrated Building Management System Puts the 'Smart' in Smart Building"
"IBM and the Green Sigma Coalition Smarten Up the Built Environment by Turning Buildings Into Business Priorities"
"An Energy-Efficiency and Sustainable Buildings Case Study: Johnson Controls Demonstrates Leadership in Design and Execution" "Cool Vendors in Green IT and Sustainability, 2013"

Operational Technology Security


Analysis By: Earl Perkins

Definition: Operational technology (OT) security is the practice and technology used to protect information, processes and assets associated with systems that monitor and/or control physical devices, processes and events that initiate state changes in infrastructure. Industrial control systems (ICSs) are examples of OT.

Position and Adoption Speed Justification: OT security products and services provide security for, and protection of, OT hardware and software. OT detects or causes changes through the direct monitoring and/or control of physical devices, processes and events. OT goes by names such as supervisory control and data acquisition (SCADA) systems, distributed control systems (DCSs), process control systems (PCSs) and process control networks (PCNs). OT security covers a wide spectrum of technologies at the device, system, network and application levels. There is a worldwide network of billions of sensors and other intelligent devices in all industries that have security considerations. OT security is required for real-time monitoring and process control of systems in industries such as energy, manufacturing, utilities and transportation. For example:

- Healthcare has intelligent MRI scanners and monitoring devices in hospitals.
- Transportation systems collect traffic flow incident information.
- Automobiles have hundreds of sensors and embedded computers for many functions.
- Retail includes intelligent tags for products, and tracks transported goods through OT.
- Utility line workers and soldiers may have clothing or devices that are connected and reporting real-time information.
- Video, data and voice are collected from field operations for maintenance and situational awareness.

Most of these endpoints are networked. Gartner refers to the Internet of Things, or the "industrial Internet," to describe the explosive growth of OT, and to the Internet of Everything to include humans as well as devices.

OT security protects OT and operational processes, with some security technologies and processes distinct from those of IT security, particularly in networking and endpoint protection. OT security technologies can be robust, mature IT security solutions employed in an OT role, or more-targeted solutions for proprietary OT technologies. While some IT security governance and management practices are mature, the OT security equivalent is still evolving and, in some cases, integrating with IT security. OT security implementation faces significant obstacles due to:
- The impact (and uncertainty) of mandated security regulations in some OT industries
- Organizational separation of OT and IT personnel and practice in many industries
- OT security technology maturity (and the ability of IT security products to address OT security requirements)
- Modernization of OT systems via repurposed IT systems, and the associated security impact
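As a concrete, if simplified, illustration of the network-level controls mentioned above, the sketch below filters industrial protocol traffic by function code, a common technique in OT-specific firewalls. It assumes a Modbus/TCP environment; the allowed-code policy is invented for the example.

```python
# Hypothetical sketch: allow only read-only Modbus/TCP function codes.
# In a Modbus/TCP frame, the function code is the byte after the 7-byte MBAP header.

READ_ONLY_CODES = {1, 2, 3, 4}  # read coils / discrete inputs / holding / input registers

def allow_frame(frame: bytes) -> bool:
    """Return True if the frame carries a read-only function code."""
    if len(frame) < 8:
        return False              # malformed or truncated frame: drop it
    return frame[7] in READ_ONLY_CODES

# Read holding registers (function code 0x03): allowed.
read_req = bytes([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])
# Write single register (function code 0x06): blocked.
write_req = bytes([0x00, 0x02, 0x00, 0x00, 0x00, 0x06, 0x11, 0x06, 0x00, 0x01, 0x00, 0x03])
print(allow_frame(read_req), allow_frame(write_req))  # True False
```

Commercial OT firewalls apply far deeper inspection (register ranges, rates, session state), but blocking write commands at the protocol layer is the basic idea.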

As a result, the market for OT security technologies and services is growing, with numerous product and service vendors from IT as well as new OT security players. Governance and organizational challenges related to OT security represent a significant opportunity for consulting and system integration providers. Few industries have yet devoted a complete practice to security matters for OT, and even fewer have established an IT/OT integration strategy from an organizational, planning or governance perspective. OT security has advanced in Hype Cycle maturity as more OT industries adopt better practices, exercise IT/OT organizational integration and demand more-effective security technology for their OT-centric systems. User Advice:

- Establish integrated IT/OT security governance for major decisions regarding security, allocating specific organizational assets to support it.
- Engage reputable consultants and system integrators when needed to assess existing OT security and develop an effective OT security management strategy.
- Integrate IT/OT security management to establish a program for managing regulation, rather than the other way around. Do not allow regulatory requirements to define your security program.
- Prioritize IT/OT security integration; technology decisions should proceed from organization and process change.

Business Impact: OT security implementations will be found in many areas of the enterprise: in the network, and in systems, applications and data stores. OT security implementations can be managed, for example, in the system control centers of specific industries, centers chartered with observing, reporting on and controlling networks of control points and data collection points. Implementing effective OT security technology, along with the supporting processes and organization, will mean the difference between success and failure as these networks and systems are modernized because of regulatory or performance pressures. As networks and systems are upgraded or replaced, their potential complexity grows exponentially (in terms of the points of control available, the granularity of control, and the availability of much more information regarding activities and performance). As IT and OT networks are integrated and access to worldwide networks grows, the degree and scale of threats increase. Most OT-centric enterprises today are unprepared for the impact that
modernization of their OT infrastructures has on their requirements for security. Advances in OT and IT security program maturity are required urgently to address this shift in these industries.

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: AlertEnterprise; Bayshore Networks; Boeing; Byres Security; Digital Bond; Echelon; Emerson Process Management; General Electric; Honeywell; IBM; Industrial Defender; Invensys; IOActive; Lockheed Martin; McAfee (NitroSecurity); Microsoft; Mocana; OSIsoft; Owl Computing Technologies; Quantum Secure; Schneider Electric; Siemens; Waterfall Security Solutions
Recommended Reading: "The Impact of Critical Infrastructure Protection Standards on Security"
"Predicts 2013: IT and OT Alignment Has Risks and Opportunities"
"Top 10 Business Trends Impacting the Utility Industry in 2013"

Asset Performance Management


Analysis By: Leif Eriksen; Kristian Steenstrup

Definition: Asset performance management (APM) encompasses the capabilities of data capture, integration, visualization and analytics, tied together for the explicit purpose of improving the reliability and availability of physical assets. APM includes the concepts of condition monitoring, predictive forecasting and reliability-centered maintenance (RCM).

Position and Adoption Speed Justification: Until recently, APM has existed primarily in the shadows of enterprise asset management (EAM). With the exception of more advanced practices in the energy and utilities industries, APM activity has typically consisted of isolated engineering projects in single locations. A number of factors are responsible for bringing APM to the top of the list of investments by operators of mission-critical assets. The primary drivers are the recognition that mature EAM investments have reached a point of diminishing returns, the emergence of new technologies or new uses of technology, and the convergence of IT and operational technology (OT). So, while APM has been practiced for over 10 years, in a handful of industries and mostly by the largest companies, its broader adoption has been accelerated by these recent developments. It should pick up speed as it heads into the trough. APM is characterized by a variety of approaches, including analyzing performance history, using sophisticated analytical tools to detect patterns, and using operations intelligence (OI) capabilities to improve the visualization of real-time operating and condition data. APM typically, but not necessarily, follows the deployment of EAM. Good data, that is, historical service data and operational data, is a necessary condition for successful APM projects. Recent advances in wireless sensors and mobile technology have been catalysts for APM deployments. Another catalyst has been OEMs that develop smarter machines and new service offerings to support APM.

In "Asset Management and Reliability: A Strategic Road Map," Gartner identified seven levels of maintenance and reliability strategies. The first three levels, including both reactive and preventative techniques, can be accomplished using an EAM system alone. However, while EAM provides a foundation for the next four levels, including both predictive maintenance and RCM, APM tools and technologies are necessary for success. User Advice: Asset-intensive businesses, such as utility and manufacturing enterprises seeking the next level of asset performance improvement, should deploy APM. They shouldn't expect to get APM capabilities from their EAM vendors, although some are making investments in this area. This means that, in many circumstances, third-party products may need to be interfaced into EAM. Other industries and businesses should expect to follow the lead of early adopters and target tactical wins, sometimes using their OI infrastructures. They should also expect to make investments in infrastructure to capture operational data where it doesn't exist today. APM leverages the convergence of IT and OT, and will require new skills that are familiar with both worlds' data structures and communication conventions. In some instances, companies looking at APM projects will benefit from cloud-based approaches to data sharing and multiparty collaboration. Business Impact: Successful APM deployments deliver measurable improvements in availability, as well as reduce maintenance and inventory carrying costs. Most APM projects are built on reducing equipment downtime (the inverse of availability) and, therefore, increasing the reliability of production. APM is becoming a critically important component of demand-driven value network (DDVN) strategies. 
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: C3global; GE Intelligent Platforms; Honeywell Process Solutions; Invensys Operations Management; Ivara; MaxGrip; Meridium; Rockwell Automation; SAS; Siemens; SKF; ThingWorx
Recommended Reading: "Asset Management in DDVN, Part 2: Data Shows Performance Improvements Are Within Reach"
"Asset Management and Reliability: A Strategic Road Map"

Machine-to-Machine Communication Services


Analysis By: Tina Tian; Sylvain Fabre

Definition: Machine-to-machine (M2M) communication services are used for automated data transmission and measurement between mechanical or electronic devices. The key components of an M2M system are:
- Field-deployed wireless devices with embedded sensors or RFID technology
- Wireless and wireline communication networks, including cellular communication, Wi-Fi, ZigBee, WiMAX, generic DSL (xDSL) and fiber to the x (FTTx) networks
- A back-end network that interprets data and makes decisions
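The role of that back end can be sketched simply. The message format, field names and decision rule below are hypothetical, chosen only to illustrate a back end interpreting device data and acting on it.

```python
# Hypothetical sketch: an M2M back end parsing a smart-meter telemetry
# message and deciding whether to raise a demand-response event.
import json

def decide(message: str, limit_kw: float) -> str:
    """Interpret one device reading and return the action to take."""
    reading = json.loads(message)                 # e.g., received over cellular or Wi-Fi
    if reading["demand_kw"] > limit_kw:
        return f"curtail:{reading['device_id']}"  # ask the site to shed load
    return "ok"

msg = '{"device_id": "meter-0042", "demand_kw": 18.5}'
print(decide(msg, limit_kw=15.0))  # curtail:meter-0042
```

In production, constrained devices would more likely use a compact encoding such as CoAP or EXI (discussed below under standardization) than verbose JSON, but the interpret-then-decide flow is the same.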

Position and Adoption Speed Justification: M2M technology continues to fuel new business offerings and support a wide range of initiatives, such as smart metering, road tolls, smart cities, smart buildings and geofencing assets, to name a few. Revenue growth is now 30% to 40% per year. Communications service providers (CSPs), business development managers and architects in many industries should take a closer look at how M2M communication services can help grow their business. There is currently no one service provider that can deliver M2M services end-to-end. The value chain remains fragmented. Service providers are trying to partner with others to create a workable ecosystem. M2M services are currently provided by three types of provider:

- M2M service providers. Mobile virtual network operators and companies associated with an operator that can piggyback on that operator's roaming agreements (for example, Wyless, Kore Telematics and Jasper Wireless).
- CSPs. Some CSPs, such as Orange in Europe and AT&T in North America, have supplied M2M services for several years, but have not publicized them widely. However, CSPs are now marketing M2M services more vigorously, and those that have not had a strong M2M presence so far are treating it more seriously by increasing their marketing or creating dedicated M2M service divisions (for example, T-Mobile, Telenor and Vodafone).
- Other organizations with different M2M strengths. These are combining to enter the market. Jasper Wireless, for example, has also signed an agreement with AT&T to provide dedicated support for M2M devices created jointly.

All three types will be viable options for clients to consider, and their offerings can be customized to meet M2M requirements, although such claims await verification. Besides the service providers mentioned above, there are companies with certain skills in strategy and rollout that can manage the daunting value chain needed to implement M2M solutions. Examples are:

- Ventyx, an ABB company
- Walsh Vision, which has rolled out an M2M-based pay-as-you-drive insurance solution
- Capgemini, a leader in smart-grid and advanced metering infrastructure solutions
- Integron, a logistics and integration partner for M2M solutions

Wireless access is one of the many important links in an M2M deployment chain. CSPs have to be well positioned for their role in the M2M market, based on an evaluation of their own strengths in
terms of multinational network coverage, application development skills and IT management ability, and their choice of a suitable business model and partner. CSPs also have to be in a position to sell a series of new data plans (that accommodate an M2M solution's business requirements), as well as provide some form of second- or third-tier support. These demands are being placed on CSPs whose core expertise lies in the provision of mass-market voice and data services to consumers. One of the key technology factors that may affect M2M service deployment is mobile network support capability. Early M2M services were smart meters, telematics and e-health monitors, which are expected to be widely used in the future. In its Release 10, the Third Generation Partnership Project (3GPP) has been working on M2M technology to enhance network systems in order to offer better support for machine-type communications (MTC) applications. The 3GPP's TS 22.368 specification describes common and specific service requirements for MTC. The main functions specified in Release 10 are overload and congestion control. The recently announced Release 11 investigates additional MTC requirements, use cases and functional improvements to existing specifications. End-to-end real-time security will also become an important factor when more important vertical applications are brought into cellular networks. Another key factor on the technology side that may affect mass deployment of M2M communication services is the level of standardization. Some key M2M technology components, such as RFID, location awareness, short-range communication and mobile communication technologies, have been on the market for quite a long time. But there remains a lack of the standardization necessary to put the pieces of the puzzle together, to make M2M services cost-effective and easy to deploy, and therefore to enable this market to take off.
M2M standardization may involve many technologies (like the Efficient XML Interchange [EXI] standard, Constrained Application Protocol [CoAP] and Internet Protocol Version 6 over Low-Power Wireless Personal Area Networks [IPv6/6LoWPAN]) and stakeholders (including CSPs, RFID makers, telecom network equipment vendors and terminal providers). The European Telecommunications Standards Institute has a group working on the definition, smart-metering use cases, functional architecture and service requirements for M2M technology.

User Advice: As M2M communications grow in importance, regulators should pay more attention to standards, prices, terms and conditions. For example, the difficulty of changing operators during the life of equipment with embedded M2M technology might be seen by regulators as a potential monopoly. Regulators in France and Spain already require operators to report on M2M connections, and we expect to see increased regulatory interest elsewhere. For the end user, the M2M market is very fragmented because no single end-to-end M2M provider exists. A number of suppliers offer monitoring services, hardware development, wireless access services, hardware interface design and other functions to enterprise users. As a result, an M2M solution adopter has to do a lot of work to integrate the many vendors' offerings, on top of which business processes may need redefining. M2M will speed up IT/operational technology alignment and convergence, as IT and communications solutions will come closer to users' operations and control through M2M technology. An enterprise's M2M technology strategy needs to consider the following issues:
- Scope of deployment
- System integration method
- Hardware budget
- Application development and implementation
- Wireless service options
- Wireless access costs

Business Impact: M2M communication services have many benefits for users, governments and CSPs. They can dramatically improve the efficiency of device management. As value-added services, they also have considerable potential as revenue generators for CSPs; the success of these services will be important for CSPs' business growth plans. M2M communication services are expected to be the critical enablers for many initiatives that fall under the "smart city" umbrella and contribute to the Internet of Things. Examples are smart-grid initiatives with connected smart-grid sensors to monitor distribution networks in real time, and smart-transportation initiatives with embedded telematics devices in cars to track and control traffic. M2M communication services will also connect billions of devices, causing further transformation of communication networks. M2M communication services should be seen as an important set of facilitating technologies for use in operational technologies. At an architectural level, particular care should be taken when choosing M2M solutions to ensure they facilitate the alignment, convergence or integration of operational technology with IT.

Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Adolescent
Sample Vendors: AT&T; France Telecom; KDDI; Qualcomm; Sprint Nextel; Telefonica; Telenor; Verizon; Vodafone
Recommended Reading: "How CSPs Can Successfully Optimize M2M Opportunities for Growth"
"The Time is Right for CSPs to Move Into M2M Management"
"Sourcing Strategies Must Assess External Providers When Integrating M2M Communications Into IT/OT Initiatives"
"Competitive Landscape: M2M Platform and Software Providers for CSPs, 2012"
"Market Trends: Overcoming Machine-to-Machine Challenges to Realize the Market Potential"
"Technology Overview: Mobile M2M Network in Japan"

"IT and Operational Technology: Convergence, Alignment and Integration"

Operational Technology Platform Convergence


Analysis By: Kristian Steenstrup

Definition: Operational technology (OT) is hardware and software that detect or cause a change through the direct monitoring or control of physical devices. The convergence we are seeing throughout the OT industry reflects that the platforms on which the OT is developed and delivered (that is, the operating systems and communications architecture) are changing and converging with those commonly used in IT. Typically, this is characterized by a move by OT vendors to Microsoft and Linux operating systems, RDBMSs, and TCP/IP communications.

Position and Adoption Speed Justification: OT is a widely diverse set of products, but it has generally displayed a trend in the past 10 years of moving to IT-like architectures, and it is now approaching the Trough of Disillusionment, where the drawbacks and complexities are occupying the minds of vendors and customers alike. This is highlighted by failures such as security breaches and patching challenges, which were formerly of little importance to staff responsible for the OT products. Change will take time: the actual devices and platforms have a longer life span than IT, so the change-out rate is much slower (although the change has been in progress for over 10 years in most industries). Additionally, a lot of organizational entropy surrounds the management of OT, where engineers and operations groups will protect their turf in the belief that "ownership" is important, rather than separating out a group's authority over a technology from the responsibility for support. Although the convergence has been going on for some time now, it has been done on a piecemeal basis: the engineering groups making the decisions do so on a site-by-site basis. What's missing is an IT-like approach to standardizing systems and managing them. Momentum is gathering, though, as vendors like GE, Schneider Electric and Ventyx, an ABB company, undertake significant investment in their software portfolios.
User Advice: This change will have two consequences. The upside is that OT systems are becoming more interoperable and accessible to the rest of the technology world, most importantly IT, so information and process flows can cross IT and OT boundaries more readily. The downside is that the management of OT systems needs to change radically as the systems take on more IT-like characteristics: software security, patching, upgrading and disaster recovery need to be managed. Companies should measure the impact of this convergence in their own contexts. First, how much OT exists? For asset-intensive companies, those with substantial plants and equipment, this is a potentially large issue, since all of the heavy equipment, plants and facilities are progressively being automated with OT software. The second point to consider is how much new-generation or "converged" OT there is. For some companies with less converged OT, immediate change is not required. For others with a preponderance of converged OT, changes will be necessary to manage the software life cycle of the OT systems, and clear change initiatives must be developed to ensure support for IT and OT alignment and integration.
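Managing the software life cycle of converged OT starts with knowing what is deployed. The short sketch below, whose asset records and fields are invented for illustration, shows the kind of inventory query an ITAM-for-OT practice would run to find patching exposure.

```python
# Hypothetical sketch: flag converged OT assets whose platform patches are
# overdue, the kind of report an ITAM-for-OT process would produce.
from datetime import date

assets = [  # invented example records: (asset id, platform, last patched)
    ("scada-hist-01", "Windows Server", date(2013, 1, 15)),
    ("plc-gw-07",     "Linux",          date(2011, 6, 2)),
    ("dcs-legacy-03", "proprietary",    None),  # pre-converged: no OS patching
]

def overdue(assets, today, max_age_days=365):
    """Converged assets not patched within max_age_days."""
    return [aid for aid, platform, patched in assets
            if patched is not None and (today - patched).days > max_age_days]

print(overdue(assets, today=date(2013, 7, 31)))  # ['plc-gw-07']
```

In practice these records would live in the CMDB alongside IT assets, which is precisely what the enterprise approach described below enables.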

As IT departments get involved with OT products, there are similar changes and, therefore, business impacts on IT organizations as they come to terms with the different, and often higher, performance requirements of OT systems. In either case, someone has to take responsibility. Also, the benefits that come from standardization should be considered, including more and better access to information across the organization, more rapid adoption of new IT technologies such as mobile and wireless, and improved efficiencies.

Business Impact: As OT converges with IT architecture, demand will rise for an enterprise approach to the management and oversight of OT products. The organization will plan and manage standards, platforms, architecture planning, IT asset management (ITAM) for OT, the configuration management database (CMDB), security, patching, upgrades and disaster recovery for OT systems. These duties may go to the existing IT departments, because the converged platforms have more in common with IT systems than their antecedents. In other cases, companies are creating OT support groups within the existing operations or engineering groups to formalize these responsibilities and manage the software life cycle in a consistent and coherent way.

Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: ABB Group; Alstom Grid; GE; Honeywell; Invensys; Johnson Controls; Schneider Electric; Siemens
Recommended Reading: "Convergence of Digital Technologies Needs Expanded and Strengthened IT Governance"
"Five Critical Success Factors for IT and OT Convergence"
"The CIO's Role in Managing the Expanding Universe of Digital Technologies"
"Agenda Overview for Operational Technology Alignment With IT, 2013"

Real-Time Infrastructure
Analysis By: Donna Scott

Definition: Real-time infrastructure (RTI) represents a shared IT infrastructure in which business policies and SLAs drive the dynamic allocation and optimization of IT resources, so that service levels are predictable and consistent despite unpredictable IT service demand. RTI provides the elasticity, functionality, and dynamic optimization and tuning of the runtime environment based on policies and priorities across private, public and hybrid cloud architectures. Where resources are constrained, business policies determine how resources are allocated to meet business goals.

Position and Adoption Speed Justification: The technology and implementation practices are immature from the standpoint of architecting and automating an entire data center and its IT services for RTI. However, point solutions have emerged that optimize
specific applications or environments, such as dynamically optimizing virtual servers (through the use of performance management metrics and virtual server live-migration technologies) and dynamically optimizing Java Platform, Enterprise Edition (Java EE)-based shared application environments that are designed to enable scale-out capacity increases. RTI is also emerging in cloud management solutions, initially for optimizing the placement of workloads or services upon startup based on pre-established policies. Many cloud management platform (CMP) vendors have enabled models or automation engines to achieve RTI (for example, through the implementation of logical service models with policies that are defined for the minimum and maximum instances that can be deployed in a runtime environment). However, these vendors have not yet implemented all the analytical triggers and the deployment automation to make elasticity truly turnkey. Rather, IT organizations must still write custom code (for example, automation and orchestration logic) to achieve their overall dynamic optimization goals, such as to scale a website up/down or in/out in order to utilize optimal resources for increasing or decreasing service demand. Moreover, although virtualization is not required to architect for RTI, many CMP solutions only support virtualized environments instead of offering more complex alternatives that require integration to physical resources. In addition, RTI may be architected for a specific application environment and not as a generalized operations management offering. Lack of architecture and application development skills in the infrastructure and operations (I&O) organization hampers implementation of RTI in all but the most advanced organizations. Organizations that pursue agile development for their Web environments will often implement RTI for these services in order to map increasing demand on their sites with an increasing supply of resources. 
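The custom scale-out/scale-in automation described above can be sketched minimally. The policy fields, thresholds and one-step scaling rule below are illustrative assumptions rather than any vendor's service-governor API:

```python
# Hedged sketch of the elasticity logic the note says IT organizations
# still hand-write: a logical service model with min/max instance policies
# and a rule that converges the instance count toward demand.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    min_instances: int      # floor defined in the logical service model
    max_instances: int      # ceiling that caps runaway scale-out
    scale_out_util: float   # add capacity above this utilization
    scale_in_util: float    # reclaim capacity below this utilization

def desired_instances(current: int, utilization: float, p: ScalingPolicy) -> int:
    """Return the instance count the service governor should converge on."""
    if utilization > p.scale_out_util:
        target = current + 1          # scale out one step at a time
    elif utilization < p.scale_in_util:
        target = current - 1          # scale in to release shared resources
    else:
        target = current              # within band: no action
    return max(p.min_instances, min(p.max_instances, target))

policy = ScalingPolicy(min_instances=2, max_instances=10,
                       scale_out_util=0.75, scale_in_util=0.30)
print(desired_instances(4, 0.82, policy))  # demand spike -> 5
print(desired_instances(2, 0.10, policy))  # floor enforced -> 2
```

A real service governor would also need the analytical triggers (utilization metrics, root cause signals) and the deployment orchestration that, as noted above, are still largely custom-built.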
In another RTI use case, enterprises are implementing shared disaster recovery data centers, whereby they dynamically reconfigure test/development environments to look like the production environment, both for disaster recovery testing and when disaster strikes. This type of architecture can typically achieve recovery time objectives in the range of one to four hours after a disaster is declared. Typically, implementation is not triggered automatically but is manually initiated, with the automation prewritten.

Because of the advancement in server virtualization and cloud computing, RTI solutions are making progress in the market, especially for targeted use cases where enterprises write specific automation, such as to scale a website up/down and in/out. However, market penetration is low, primarily because of a lack of service modeling (inclusive of runtime policies and triggers for elasticity), standards, and strong service governors/policy engines in the market. For customers who want dynamic optimization to integrate multiple technologies and orchestrate analytics with actions, a great deal of integration work and technical skill is required.

Gartner believes that RTI will go through another round of hype in the market as vendors seize on the "software defined" terminology, which generally has the same connotation as RTI: automation and optimization. As in the past, we will see individual vendor progress, especially in "software stacks," but not in largely heterogeneous environments, because of the lack of standards and the desire of vendors that build such functionality to benefit their own platforms (and not their competitors' platforms).

User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality, and that about 20% of organizations have implemented RTI for some portion of their portfolios.
Overall progress is slow for internal deployments of RTI architectures because of many impediments, especially the lack of IT management process and technology maturity levels, but also because of organizational and cultural issues. RTI is also slow for public cloud services, where application developers may have to write to a specific and proprietary set of technologies to get dynamic elasticity. Gartner sees technology as a significant barrier to RTI, specifically in the areas of root cause analysis (which is required to determine what optimization actions to take), service governors (the runtime execution engines behind RTI analysis and actions) and integrated IT process/tool architectures and standards. However, RTI has taken a step forward in particular focused areas, such as:

- Dynamic and policy-based provisioning of development/testing/staging and production environments across private, public and hybrid cloud computing resources
- Optimally provisioned cloud services based on capacity and policies (for example, workload and service placement)
- Server virtualization and dynamic workload movement and optimization
- Reconfiguring capacity during failure or disaster events
- Service-oriented architecture (SOA) and Java EE environments for dynamic scaling of application instances
- Specific and customized automation that is written for specific use cases (for example, scaling up/down or out/in a website that has variable demand)

Many IT organizations that have been maturing their IT management processes and using IT process automation tools (aka run book automation tools) to integrate processes (and tools) to enable complex, automated actions are moving closer to RTI through these actions.

IT organizations that desire RTI should focus on maturing their management processes using ITIL and maturity models (such as Gartner's ITScore for I&O Maturity Model), as well as their technology architectures (such as through standardization, consolidation and virtualization). They should also build a culture that is conducive to sharing the infrastructure, and should provide incentives such as reduced costs for shared infrastructures. Gartner recommends that IT organizations move to at least Level 3 (proactive) on the ITScore for I&O Maturity Model in order to plan for and implement RTI; before that level, a lack of skills and processes derails success. Organizations should investigate and consider implementing RTI solutions early in the public or private cloud, or across data centers in a hybrid implementation, which can add business value and solve a particular pain point, but they should not embark on data-center-wide RTI initiatives.

Business Impact: RTI has three value propositions, which are expressed as business goals:

- Reduced costs that are achieved by better, more efficient resource use and by reduced IT operations (labor) costs
- Improved service levels that are achieved by the dynamic tuning of IT services
- Increased agility that is achieved by rapid provisioning of new services or resources and scaling the capacity (up and down) of established services across both internally and externally sourced data centers

Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Adaptive Computing; Amazon; BMC Software; CA Technologies; IBM; Microsoft; Oracle; Red Hat; RightScale; ServiceMesh; Tibco Software; VMTurbo; VMware
Recommended Reading:
"Cool Vendors in Cloud Management, 2013"
"Cool Vendors in Cloud Management, 2012"
"Provisioning and Configuration Management for Private Cloud Computing and Real-Time Infrastructure"
"How to Build an Enterprise Cloud Service Architecture"

Hardware Reconfigurable Devices


Analysis By: Jim Tully

Definition: Hardware reconfigurable devices are extremely flexible and can be configured to perform different functions not through software changes, but through reconfigurable hardware. Arrays of computational elements with a programmable interconnect are one example. Field-programmable gate arrays (FPGAs) are excluded from this definition, but FPGA blocks could be embedded in hardware reconfigurable architectures. The Panasonic UniPhier platform is an early example of reconfigurability in the commercial world; most other examples exist in academia.

Position and Adoption Speed Justification: Embedded programmable logic blocks were a promising component of reconfigurable systems, but they are now receiving less investment. A lack of design tools is also proving to be an issue: the extreme flexibility offered by these devices requires tools that guide designers toward a constrained set of functions. These factors are a cause for concern in the development of reconfigurable systems. Much of the current focus on reconfigurable architectures is for digital signal processing purposes in multimedia applications, and it is from this direction that we expect to see the earliest successes. It is also likely that particular subsystems will be targeted for reconfigurability earlier than others; input/output subsystems are likely to be an early adopter of the technology. We see very little evidence of commercial organizations developing this technology; most work is found in university research groups. We have therefore left the position of the technology unchanged in this year's Hype Cycle.

User Advice: System architects in electronic equipment enterprises and in semiconductor vendor organizations should plan for and anticipate the eventual adoption of reconfigurable technology.
Designers must carefully balance the amount of configurable logic with hard-coded logic. Too much configurability can lead to increased die size, as well as to greater complexity in testing and verification. These factors increase the cost of devices, something that will be felt most acutely in price-sensitive consumer markets.

Business Impact: This technology is high impact and can save suppliers and users a lot of money when updating systems in the field. However, the initial setback rests with users, as they do not yet know how to handle reconfigurable devices.

Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Akya; Celoxica; Panasonic; Stretch

Enterprise Manufacturing Intelligence


Analysis By: Simon F Jacobson

Definition: Enterprise manufacturing intelligence (EMI) depicts the performance of manufacturing operations by synthesizing and analyzing information from highly granular, manufacturing-related data made visible and understandable through dashboards and portals. It is therefore useful in providing decision support to various business and operational roles.

Position and Adoption Speed Justification: Organizations seeking to leverage manufacturing's capabilities to make better supply chain decisions continue to struggle with visibility into their operations. This prevents them from computing composite metrics from key performance indicators (KPIs) aggregated from multiple plant sources, and doesn't allow the business to understand manufacturing's true capabilities, costs and constraints. Client activity and inquiry on EMI continue to increase, and clients have a plethora of options from which to choose: point solutions, manufacturing execution system (MES) vendor add-on modules, and frameworks provided by ERP and service firms. Additionally, more manufacturing segments are starting to embrace EMI. As adoption rates grow, it will move beyond early mainstream maturity. The hazy functional scopes of most EMI offerings, however, are pushing it further into the trough. EMI applications and frameworks should encapsulate the following capabilities:

- Aggregate: Aggregate information from a variety of real-time and diverse back-end data sources, including automation, historians, MES operational databases, laboratory information management systems (LIMSs) and relational database systems.
- Contextualize: Create and maintain persistent functional/operational relationships between data elements from disparate sources. It may be useful, for example, to maintain relationships (context) between certain named process variables and ranges of time series data.
- Analyze: Transform data into real-time performance intelligence through the application of business rules (that is, calculating the range of KPIs using raw process performance and cost-based information from ERP and other business-level systems).
- Visualize: Provide intuitive, graphical representation of intelligence that supports context-based navigation of information based on persistent interrelationships, enabling drill-down from multiplant representations to individual facilities and systems.
- Propagate: Automatically transfer relevant operational performance information to the appropriate business-level systems (such as enterprise asset management [EAM], ERP, supply chain management [SCM] or product life cycle management [PLM]).
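To make the contextualize and analyze steps above concrete, a minimal sketch follows. The variable names, batch identifiers and specification limits are invented for illustration; a real EMI platform would draw these from historians and ERP rather than inline literals:

```python
# Illustrative sketch: historian samples are bound to a named process
# variable and batch (contextualize), then a business rule turns them
# into a KPI (analyze). All names and figures are assumptions.
from statistics import mean

# time series data as (timestamp, value) pairs from a historian
samples = [(0, 71.2), (10, 72.0), (20, 74.8), (30, 73.1)]

# persistent context: which variable, which batch, which spec limits
context = {"variable": "reactor_temp_C", "batch": "B-1042",
           "spec_low": 70.0, "spec_high": 74.0}

values = [v for _, v in samples]
kpi = {
    "batch": context["batch"],
    "avg_temp": round(mean(values), 2),
    # business rule: percentage of samples inside the spec window
    "in_spec_pct": 100.0 * sum(context["spec_low"] <= v <= context["spec_high"]
                               for v in values) / len(values),
}
print(kpi)
```

The visualize and propagate steps would then render this KPI on a dashboard and push it to business-level systems such as EAM or ERP.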

The majority of EMI approaches do not provide much more than analysis and visualization. This functional inadequacy of many EMI providers means that any associated process changes are handled separately, and quick wins at the local level tend to take center stage, rather than the intelligence being used to drive more widespread improvement of product supply capabilities. The following will accelerate the EMI market's ascension toward the plateau:

- Some EMI providers continuing to add deeper functionality through products designed to do more than provide descriptive analytics and report on manufacturing performance. These second-generation or add-on products will be participating in the operations intelligence (OI) market, which is gaining momentum at this stage. Do not expect EMI to become obsolete or fully overtaken by OI at this stage; the two technologies are complementary, and it is not uncommon for a vendor to have offerings in both camps.
- Overcoming multisite scalability hurdles. The majority of EMI approaches start with simple overall equipment effectiveness (OEE) dashboards to understand performance on a local level (such as a line or asset), with the intention of scaling into multiple-line or multiple-site deployments. These initial beachheads often deliver quick wins and rapid returns on initial cash outlays, validating that EMI is a low-risk, high-reward investment. However, this also elongates the deployment cycle for multiple sites, which is why the time to plateau has been extended.

User Advice: EMI provides the following kinds of decision support:

- Machine-state data (up, down, stopped and idle) translated into OEE
- Information on energy efficiency and consumption
- Process variable history translated into dollars-per-unit volume production
- Quality-reject data translated into dollars per finished goods scrapped (or the cost of poor quality [COPQ])
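The first translation above, machine-state data into OEE, is conventionally computed as availability × performance × quality. A minimal sketch with invented figures:

```python
# Minimal OEE sketch: translate machine-state durations and unit counts
# into overall equipment effectiveness. All input figures are invented.
def oee(planned_min, downtime_min, ideal_rate_per_min, total_units, good_units):
    run_time = planned_min - downtime_min
    availability = run_time / planned_min            # uptime share
    performance = total_units / (run_time * ideal_rate_per_min)  # speed loss
    quality = good_units / total_units               # yield loss
    return availability * performance * quality

# 480 planned minutes, 60 down, ideal 10 units/min, 3,800 made, 3,610 good
value = oee(480, 60, 10, 3800, 3610)
print(f"OEE = {value:.1%}")
```

Note how the dashboard figure alone says nothing about which loss (availability, performance or quality) is responsible, which is one reason the note cautions against treating OEE as the whole story.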

Organizations without a clear understanding of what it is they need to measure will face longer time to value from their investments. Whether or not an application can satisfy your performance-monitoring requirements depends largely on how the application is architected. Conventional analytics applications operate on
datasets that have been staged, but this introduces latency into the process, since data is captured, transformed and then stored in the analysis set. There are also applications that can perform analytics "on the fly" as data is extracted from shop-floor sources, but this places a heavy burden on the network, limiting the applicability of these tools for high-volume, high-refresh applications. The kind of EMI application to use should be dictated by how "real time" your organization's data and information requirements are. In some cases, using the add-on EMI module from an incumbent provider might be more sensible than layering on a third-party application.

Business Impact: Manufacturers seeking competitive advantage understand that manufacturing operations must no longer constrain supply network responsiveness. They are linking supply and demand, while decreasing manufacturing costs and increasing agility. To strike this balance, KPIs are needed to provide visibility into asset availability and capability, work in process, and inventory movements. EMI helps overcome the visibility hurdle that stands in the way of this realization.

Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Apriso; Epicor; GE Intelligent Platforms; InfinityQS; Invensys; IQity; Oracle; Parsec; Rockwell Automation; SAP; Shoplogix; Siemens
Recommended Reading:
"Debunking the Hype of OEE"
"The Manufacturing Performance Dilemma, Part 1: Overcoming Visibility Hurdles With Enterprise Manufacturing Intelligence"
"The Nexus of Forces Is Ready to Advance Manufacturing 2.0"

Vehicle-to-Infrastructure Communications
Analysis By: Thilo Koslowski

Definition: Vehicle-to-infrastructure communication technologies create autonomous data networks, using dedicated frequencies such as DSRC or LTE, between vehicles and the road infrastructure for safety, traffic management, environmental or e-mobility applications, such as electric vehicle (EV) charging station finders and availability. For example, if an accident occurs, an affected road section could be shut down automatically, and information will be sent to traffic signs or navigation solutions, which will redirect traffic to new, unobstructed areas.

Position and Adoption Speed Justification: Vehicle-to-infrastructure communications require costly investments in road infrastructure and automobiles. Government-sponsored initiatives to improve traffic management and overall traffic safety, such as the U.S. government's Vehicle Infrastructure Integration (VII) and IntelliDrive initiatives, are critical for the long-term success of car-to-infrastructure communication efforts, but more committed funding is needed to accelerate progress. Recent government focus on smart cities, automated/autonomous driving, and transport-related emissions from automobiles and e-mobility, in particular, has led to renewed interest in vehicle-to-infrastructure initiatives.

User Advice: Automotive companies, municipalities and technology companies: Lobby for more support by governments for vehicle-to-infrastructure initiatives, and generate public awareness for this new technology. Leverage the increased sensitivity and awareness for environmental responsibility in these efforts. Major cities, which are more likely to suffer from traffic congestion and high accident rates, should be prioritized. New road infrastructure initiatives should consider vehicle-to-infrastructure-related technology requirements. Identify innovative vendors in this space that can help accelerate deployment of such efforts. Potentially consider investing in some of these vendors to help expedite their market reach. Consider the use of portable consumer devices to help collect relevant driving data before a ubiquitous infrastructure exists. Stay on top of privacy-related data issues to minimize user rejection and to define use cases.

Business Impact: Vehicle-to-infrastructure communication technologies enable the automotive industry and governments to address growing traffic management, environmental and safety issues, but they can also offer new revenue opportunities in the form of safety and driver assistance offerings.

Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Bosch; Cisco; Continental; Delphi; IBM; Infosys; Nokia; Verizon
Recommended Reading:
"Innovation Insight: The Connected Vehicle Will Dominate Automotive and Mobility Innovations"
"Predicts 2013: Mobile, Cloud and Information Fuel the Automotive Era of Smart Mobility"

Vehicle-to-Vehicle Communications
Analysis By: Thilo Koslowski

Definition: Vehicle-to-vehicle (V2V) communication technology enables automobiles to share safety, traffic and other data autonomously, using an ad hoc network enabled by a wireless communication technology that is embedded in the vehicle. The U.S. and EU governments have allocated specific wireless spectrum for this purpose and to support intelligent traffic safety applications (for example, the U.S. allocated 75MHz of spectrum in the 5.9GHz band as "dedicated short-range communication" [DSRC] to be used by intelligent transportation systems).

Position and Adoption Speed Justification: Interest in vehicle-to-vehicle communication has been limited largely to individual company efforts instead of broader industry consortiums. While the potential of V2V communication technologies is significant, some of the expectations have been inflated, and the customer value proposition is unclear beyond specific safety applications (for example, a sensor could indicate icy road conditions as the vehicle crosses a
bridge). The V2V communication technology could then automatically inform other drivers approaching the bridge about this road hazard via an electronic voice prompt or warning light on the dashboard. Required technology implementation costs are high and will not provide initial benefits to early adopters. Investments in electric-vehicle-charging infrastructure and overarching traffic management objectives are likely to boost efforts regarding V2V communication, because such networks could be used to allow vehicles to communicate with each other to improve traffic flow. A key challenge for V2V communication is the long product development cycles in the automotive industry, which require automotive companies to develop long-lasting technology solutions to enable this form of communication, and which represent a cost factor. Testing of the technology in controlled environments, like the testing currently ongoing in Michigan, is helping to advance the technology's deployment likelihood. However, companies and governments alike will need to agree on benefits versus cost considerations before making long-term commitments regarding future deployment across numerous vehicles.

User Advice: For consumers, develop value propositions that provide daily relevance (for example, traffic information) by collecting speed and driving-condition information. Determine cost implications, and design alternative business models that can offset initial investments. (It's unclear whether consumers are willing to pay for this technology.) Seek government support to offset the required infrastructure costs. In addition, explore the benefits and limitations of portable devices that can be connected to the vehicle to develop car-to-car communication scenarios without the need to deploy a costly infrastructure.
The use of portable devices can also help to establish communication between vehicles, albeit less reliably, because of battery limitations and required user behavior (for example, drivers need to bring their mobile phones with them).

Business Impact: This technology can give vehicle manufacturers improved product appeal and differentiation. Furthermore, it will help the automotive industry to proactively address growing traffic management issues that, if not dealt with, are likely to trigger stricter traffic-related laws by local governments.

Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Bosch; Continental; TRW Automotive
Recommended Reading:
"U.S. Consumer Vehicle ICT Study: Web-Based Features Continue to Rise"
"German Consumer Vehicle ICT Study: Demand for In-Vehicle Technologies Continues to Evolve"
"Predicts 2013: Mobile, Cloud and Information Fuel the Automotive Era of Smart Mobility"
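The icy-bridge scenario described above can be pictured as a simple hazard message that nearby vehicles act on. The message fields and the crude proximity check below are illustrative assumptions for this sketch, not the standardized DSRC message set:

```python
# Hedged sketch of the V2V icy-bridge scenario: a vehicle detects a
# hazard and broadcasts a message; receiving vehicles decide whether the
# hazard is close enough to warn the driver. Fields are illustrative.
from dataclasses import dataclass

@dataclass
class HazardMessage:
    kind: str           # e.g. "ICY_ROAD"
    lat: float
    lon: float
    hop_count: int = 0  # a real ad hoc network would cap rebroadcasts;
                        # unused in this sketch

def should_warn_driver(msg: HazardMessage, my_lat: float, my_lon: float,
                       radius_deg: float = 0.01) -> bool:
    """Warn only when the hazard is nearby (crude lat/lon box for brevity)."""
    return (abs(msg.lat - my_lat) <= radius_deg
            and abs(msg.lon - my_lon) <= radius_deg)

msg = HazardMessage("ICY_ROAD", lat=42.33, lon=-83.04)
print(should_warn_driver(msg, 42.335, -83.041))  # approaching vehicle: True
print(should_warn_driver(msg, 42.60, -83.04))    # far away: False
```

A production system would use the DSRC-allocated spectrum, authenticated messages and proper geodesic distance rather than this degree-box approximation.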


Process Control and Automation


Analysis By: Simon F Jacobson; Stephen Stokes

Definition: Process sensing and control technologies include energy management and optimization, greenhouse gas (GHG) emissions reporting, materials impact analysis, and hazardous inventory substance volume tracking (SVT). These new operational excellence strategies produce the highest yield and margin at the lowest energy, resource and overall environmental cost. To succeed, they must be supported by manufacturing process modeling and simulation against asset availability and reliability, raw materials costs, and regulations.

Position and Adoption Speed Justification: Since the 1990s, manufacturing organizations, especially refineries, chemical producers and other highly automated environments, have focused on energy reduction, waste elimination and, in some industries such as brewing and bottling, reduced water consumption. However, as corporate sustainability continues to enter the mainstream, companies across all manufacturing and distribution sectors are looking for ways to consume less energy, reduce emissions and produce cheaper energy, while still optimizing their processes to attain the highest yield and margins. This is shifting energy cost and risk management from an exercise in hedged energy price planning to more sophisticated uses of information from incumbent process automation and control technologies (such as effective sensors, networks and data historians) for the delivery of details on process performance and improvement opportunities for sustainable manufacturing. Many producers are rediscovering the opportunity to leverage existing investments, hence the positioning beyond the trough. In parallel, for those that are not leveraging existing investments, the high cost of entry and capital outlay can be prohibitive. We predict that, as investments continue and deployments are executed, progression up the continuum will follow.
User Advice: Companies seeking to model and predict process outcomes must look beyond the historian layer for aggregating, compressing, archiving and trending detailed runtime data. Although historians do play a role, other capabilities, such as process modeling, advanced process simulation and control (APS/APC), and analytics, are also essential foundational elements.

Companies are encouraged to look for platform approaches that not only integrate with distributed control systems (DCSs), laboratory information management systems (LIMSs), programmable logic controllers (PLCs), sensors and other forms of operational technology (OT) to continuously monitor manufacturing processes, but also tie into procurement and financial systems. This will create a more holistic capability to connect real-time, margin-sensitive manufacturing processes with variables, such as lead times, capacity availability and energy costs, to make more sustainable trade-offs for product mix and planned profitability across the supply network, not just at the site or line levels.

Smart manufacturers have already developed models that assign product costs based on the resources consumed to produce them, using profit velocity analysis to drive sales toward high-margin products. However, without a clear view of how variable, external forces like energy prices can impact production costs, outputs and profitability, it's impossible to create any macro models that represent what the specific production processes should be.
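The profit velocity idea mentioned above can be illustrated with a small sketch that ranks products by margin earned per constrained machine-hour rather than by margin per unit. Product names and figures are invented for the example:

```python
# Illustrative profit-velocity sketch: dollars of margin per machine-hour,
# the metric used to steer sales toward products that best use constrained
# capacity. Products and numbers are invented.
products = [
    # (name, margin per unit in $, units producible per machine-hour)
    ("bulk_resin",   4.0, 120),
    ("spec_polymer", 9.0,  40),
    ("additive_mix", 6.0,  90),
]

# profit velocity = margin/unit * units/hour = $ per machine-hour
ranked = sorted(products, key=lambda p: p[1] * p[2], reverse=True)
for name, margin, rate in ranked:
    print(f"{name}: ${margin * rate:.0f}/machine-hour")
```

Note that the product with the highest margin per unit (spec_polymer, at $9) ranks last here, which is exactly the kind of trade-off the text says energy and capacity variables should feed into.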
Business Impact: For manufacturers and producers especially, the varying degrees of instrumentation, automation and operator inputs provide aggregated and analyzed data. This data from operations provides the basis for developing and refining operational models used to monitor, simulate, predict, and control future performance and outputs, based on a few key control variables. It also provides core data for estimates of embodied energy, carbon and raw materials within products. This information, along with variable cost data (based on the volatility of some inputs, such as energy), helps add intelligence to the sales and operations process, impacting future decisions about when, where and how to produce a product to ensure economic payback.

Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Mature mainstream
Sample Vendors: ABB; AspenTech; GE Intelligent Platforms; Honeywell Process Solutions; Invensys Operations Management; Rockwell Automation; Schneider Electric; Siemens
Recommended Reading:

Climbing the Slope


Intelligent Electronic Devices
Analysis By: Randy Rhodes

Definition: Intelligent electronic devices (IEDs) are a category of operational technology (OT) within utilities: field-deployed sensors, controls and communications devices used in control centers, power plants and substations, as well as on transmission and distribution lines. Examples include multifunction meters, digital protection relays, automated switches, reclosers, line sensors and capacitor controls.

Position and Adoption Speed Justification: Utilities have been steadily moving beyond analog monitoring and control devices to more advanced digital instruments. IEDs provide the status of transmission and distribution systems, including connectivity information, loading conditions, voltage, temperature, gas analysis, fault conditions and other operational parameters of the asset. Information from these devices may be routed to supervisory control and data acquisition (SCADA) systems (commonly referred to as operational data) or brought back separately (as "nonoperational" data). With the current smart grid expansion, utilities can expect exponential growth of IED field devices. IED manufacturers are driving the shift toward more-open communications, such as Modicon Modbus or DNP over TCP/IP. The Institute of Electrical and Electronics Engineers (IEEE) is formally mapping DNP (now IEEE standard P1815) with the International Electrotechnical Commission's (IEC's) 61850 object models. IED communication designs within substations increasingly feature high-bandwidth Ethernet over fiber optics, embedded Windows operating systems, IP network devices hardened for electromagnetic immunity, multiple protocol support, improved remote device
upgrades, better diagnostics and improved redundancy. Devices located on distribution line feeders are increasingly self-powered, with more advanced onboard digital signal processing and communication capabilities. User Advice: Utilities that have deployed IEDs are typically using only a fraction of the data available. In "big data" terms, this represents a large store of "dark data" that engineering and operations staff can leverage for business benefit. IEDs can give business applications new life through detailed asset and process information helping energy and utilities companies manage their risks effectively. Utilities must address technology governance requirements for the life cycle of IEDs. This should include configuration management, backup and recovery, patch management, security measures, and upgrade planning. Business Impact: This technology will affect supply and delivery, and will gain more prominence as an essential part of smart grid projects that will require timely access to real-time device operational parameters and asset-loading conditions. Benefit Rating: Moderate Market Penetration: 20% to 50% of target audience Maturity: Early mainstream Sample Vendors: ABB; Advanced Control Systems; Cooper Power Systems; GE Energy; Schneider Electric; SEL; Siemens; S&C Electric Recommended Reading: "A Guide to Adapting IT Tools for Smart Grid OT Management Challenges" "Top 10 Technology Trends Impacting the Energy and Utility Industry in 2013" "How to Make Your Grid Smarter: An Intelligent Grid Primer" "The Management Implications of IT/OT Convergence"
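The operational/nonoperational data split described above can be illustrated with a short sketch that routes IED point readings either to a SCADA queue or to an engineering/historian queue. The point names and classifications below are hypothetical examples, not any vendor's API:

```python
from dataclasses import dataclass

# Hypothetical point classifications: operational data goes to SCADA,
# nonoperational (condition/engineering) data goes to the historian.
OPERATIONAL_POINTS = {"breaker_status", "bus_voltage", "line_current"}
NONOPERATIONAL_POINTS = {"dissolved_gas_ppm", "oil_temperature", "fault_waveform_id"}

@dataclass
class Reading:
    point: str
    value: float

def route(readings):
    """Split a batch of IED readings into SCADA-bound and historian-bound lists."""
    scada, historian = [], []
    for r in readings:
        if r.point in OPERATIONAL_POINTS:
            scada.append(r)
        elif r.point in NONOPERATIONAL_POINTS:
            historian.append(r)
    return scada, historian

scada, historian = route([
    Reading("bus_voltage", 118.7),
    Reading("dissolved_gas_ppm", 42.0),
    Reading("breaker_status", 1.0),
])
```

In practice this routing is done by protocol gateways and data concentrators rather than application code, but the separation of the two data streams is the same.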

Public Telematics and ITS


Analysis By: Thilo Koslowski

Definition: Public telematics represent automotive information and communication technologies/services with the aim of improving traffic flow and congestion, improving taxation, addressing environmental issues, and providing intelligent transport system (ITS) solutions. These initiatives are often government-initiated, but they also leverage private-sector services and offerings (for example, parking management solution providers).

Position and Adoption Speed Justification: Most public telematics initiatives continue to be deployed by governments in Europe, Asia/Pacific and the U.S. The main focus is on minimizing traffic congestion, enforcing toll road collection, reducing emissions, addressing driver distraction, and providing alternative pay-per-use taxation and insurance solutions. Legislative requirements regarding automobiles are another market driver and include the European Union's eCall (emergency call) initiative. Such mandates can increase the number of telematics-enabled vehicles that can receive consumer-focused telematics services. To drive broad market adoption of public telematics, companies must address consumer privacy concerns and demonstrate value propositions to consumers. The U.S. and EU governments' growing attention to public transportation and to reducing emissions from automobile traffic is likely to increase investments in public telematics focused on traffic and energy management (for example, the U.S. government's IntelliDrive initiative). The increased interest by governments in "smart city" initiatives further emphasizes the importance of public telematics efforts, including smart parking infrastructures and road network utilization. Although automobiles will increasingly be connected going forward, the infrastructure needed to enable public telematics initiatives will require significant resources. Given continuing global economic challenges, many governments and municipalities are likely to make the required infrastructure investments for ITS more slowly than originally planned.

User Advice: Proactively seek opportunities to participate in government-related public telematics initiatives that address growing traffic management challenges. It is in the automotive industry's interest to ensure social compliance and to minimize vehicle-related problems. Observe new vendor and solution developments focused on smart-city initiatives to identify potential collaboration or partnership opportunities. Explore data collection and sharing opportunities to accelerate public telematics initiatives (for example, sharing speed data to identify areas of congestion), using aggregation to avoid potential privacy implications for your customers.

Business Impact: This technology provides revenue opportunities via congestion charging, toll roads, real-time parking, emission management, traffic management and pay-per-use services (for example, taxation). It supports consumer- or fleet-oriented telematics offerings that can take advantage of the hardware installed in vehicles for public telematics. The technology will also accelerate the emergence of intermodal transportation solutions that leverage widely available transportation information to optimize transportation choices based on traffic, cost/price and user needs.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Atos; Deutsche Bahn; IBM; Kapsch; SAIC; Streetline; Toll Collect

Recommended Reading:
"Innovation Insight: The Connected Vehicle Will Dominate Automotive and Mobility Innovations"
"Business Drivers for Technology Investments in the Automotive Industry"
"BMW i Launch Exemplifies Gartner's Prediction of a New Mobility Era"

"Public Telematics Offers Chance to Remedy Traffic Congestion" NA-0904-0042

Remote Diagnostics
Analysis By: Thilo Koslowski

Definition: Remote-diagnostic technologies deliver onboard vehicle performance and quality data to a central monitoring application, improving parts performance and reliability insights for engineering and product development. Remote diagnostics can also improve CRM by automating repair and service scheduling.

Position and Adoption Speed Justification: Despite a clearly defined value proposition for end users, vendors and business partners in the automotive and transportation value chains outside the commercial vehicle segment, remote-diagnostic applications have yet to be deployed on a broad scale. To realize potential benefits, such as cost savings, quality improvements and enhanced customer experiences, companies are carefully exploring their technology options and business models. The main challenge lies in developing and automating business processes that can take advantage of remote-diagnostic services. For the automotive sector, this will involve collaboration among vehicle manufacturers, major parts suppliers and the dealership-servicing network, which will take much organization and investment and a transformation of processes. Past automotive industry initiatives for remote diagnostics have focused on prototype and launch vehicle-testing efforts. The idea is to use remote-diagnostic technologies to capture telemetry data from the vehicle and communicate it to the manufacturer's testing or quality center, but companies face process challenges in executing on this vision. Recent remote-diagnostic activities in the automotive industry have leveraged telematics to provide consumers with basic vehicle system updates (for example, oil level and brake conditions) for CRM purposes. This has raised awareness of the potential of remote diagnostics as a marketing tool and will accelerate the technology's adoption in the future. In the commercial transportation segment, adoption of remote diagnostics is higher because freight carriers typically control their own services and are more likely to connect remote-diagnostic capabilities to other vehicle service and support functions. Remote diagnostics still show relatively low penetration across all automobiles, but are increasingly offered in commercial vehicles. There is also growth, albeit still low market penetration, for remote-diagnostic applications in passenger vehicles as part of telematics offerings from premium and volume brands in the U.S., Western Europe and parts of Asia. Automotive companies are beginning to leverage insights gained from remote diagnostics for internal quality and warranty-related objectives.

User Advice: Develop a business case for remote diagnostics that focuses on minimizing warranty costs, improving product quality and enhancing the customer's ownership experience throughout the vehicle product life cycle by providing peace of mind for the driver. Seek input and participation from multiple departments and value chain partners to maximize benefits. For example, dealerships should be involved to provide an optimized experience based on the benefits of remote diagnostics (that is, the dealer automatically orders replacement parts and schedules a service appointment based on information collected from the remote-diagnostic system). Consider various remote-diagnostic communication channels, including email, SMS and even social media; for example, automotive companies can consider "machine tweets." Involve stakeholders from the quality department and suppliers to identify ways to best leverage remote-diagnostic data throughout the product life cycle. This will emphasize the need for big data analysis and predictive analytics.

Business Impact: Remote-diagnostic telematics improve vehicle quality, minimize warranty costs and, ultimately, can improve profit margins. They empower OEMs, dealers and fleet operators to maximize CRM potential; they accelerate and automate repair/maintenance scheduling and parts ordering with dealers and suppliers; and they improve the reporting of recurring mechanical vehicle problems to manufacturers, suppliers and institutions (for example, under the Transportation Recall Enhancement, Accountability and Documentation Act). Furthermore, remote diagnostics can benefit manufacturers during the product-testing phase and help ensure high reliability. Fleet operators can use remote diagnostics to monitor vehicle performance, proactively manage service and repairs, and improve product designs. Remote diagnostics will also help introduce electric vehicle offerings by monitoring battery performance and will assist the charging process via wireless data communication (for example, flagging an interrupted charging process or a low battery level).

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Agero; Airbiquity; GE; GM (OnStar); IBM; WirelessCar

Recommended Reading:
"U.S. Consumer Vehicle ICT Study: Web-Based Features Continue to Rise"
"German Consumer Vehicle ICT Study: Demand for In-Vehicle Technologies Continues to Evolve"
"Vehicle ICT Evolution: From the Connected Car to the Connected Driver"
"From Enlightenment to Mainstream: The Resurgence and Transformation of Telematics"
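As a rough illustration of the automated repair-and-service-scheduling flow described above, the sketch below maps a telemetry snapshot to service events. All signal names and thresholds are hypothetical:

```python
# Hypothetical thresholds that turn onboard telemetry into service events.
THRESHOLDS = {
    "brake_pad_mm": (lambda v: v < 3.0, "Schedule brake service"),
    "oil_life_pct": (lambda v: v < 15.0, "Schedule oil change"),
    "battery_soc_pct": (lambda v: v < 20.0, "Notify driver: charge soon"),
}

def service_events(telemetry):
    """Return the service actions triggered by one telemetry snapshot."""
    events = []
    for signal, value in telemetry.items():
        rule = THRESHOLDS.get(signal)
        if rule and rule[0](value):
            events.append(rule[1])
    return events

print(service_events({"brake_pad_mm": 2.4, "oil_life_pct": 40.0}))
```

A production system would feed these events into dealer scheduling and parts-ordering processes; the hard part, as noted above, is automating those downstream business processes rather than evaluating the thresholds.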

Enhanced Network Delivery


Analysis By: Joe Skorupa

Definition: Enhanced network delivery is an approach to delivering application performance. It uses multiple technologies, including network-based acceleration services, WAN optimization, application delivery controller/asymmetrical optimization and WAN optimization controller (WOC) equipment-based deployments, to improve the performance of cloud applications. Enhanced network delivery uses a combination of techniques, including protocol spoofing, route control, HTML rewrite, compression and caching, along with quality of service, bandwidth bursting and others.

Position and Adoption Speed Justification: With the maturing of the cloud computing market, consumers and providers are realizing that applications often suffer from performance problems as network latency increases. Providers also realize that their bandwidth costs will be a significant component of their overhead. As a result, consumers and providers are simultaneously driving this market. Many software as a service (SaaS) consumers (including a number of Microsoft Business Productivity Online Standard Suite customers) are demanding that their preferred WOCs be supported by their SaaS provider, primarily to optimize HTTP/HTTPS traffic. SaaS providers are realizing significant increases in customer satisfaction and lower support costs when they bundle network-based optimization (often from Akamai) into their offerings. To eliminate the need to deploy WOCs in the SaaS providers' data centers and to provide end-to-end optimized delivery, Akamai and Riverbed have announced general availability of a jointly developed offering that combines Riverbed's WOCs and Akamai's network. Aryaka blends network-based services with on-premises devices for bandwidth-constrained locations. Aryaka and Virtela offer WAN optimization capabilities embedded in the network as part of their managed WAN services offerings. In some cases, SaaS providers are leveraging asymmetrical acceleration for HTTP/HTTPS applications from companies such as Riverbed/Aptimize, F5 Networks and Radware. Cloud infrastructure as a service (IaaS) providers (particularly cloud storage) are adopting WOC-based acceleration to lower bandwidth costs and to improve the performance of backup and replication traffic. In the case of cloud IaaS providers, users have the option of using enhanced network delivery or implementing these network services as a do-it-yourself workload on the virtual machines leased from the IaaS provider. Enhanced network delivery combines mature technologies, such as application delivery controllers, WAN optimization controllers and route optimization, with emerging technologies, such as bandwidth bursting, bandwidth on demand and quality of service, to address the rapidly growing integration needs of IaaS and SaaS providers. During the next five years, software-defined networking will enable a richer set of services on demand. The adoption of enhanced network delivery is being driven by the need for immediate solutions to application performance problems across the WAN and by a mix of products and services that bring immediate value. Over time, all these approaches will become mainstream.

User Advice: Cloud services consumers should test applications across actual network topologies rather than assume that they will perform adequately. If performance problems (latency- or bandwidth-related) appear, insist that your cloud provider support the required WAN optimization products or services. Cloud providers that have customer-facing HTTP/HTTPS applications should consider enhanced network delivery based on WAN optimization services and equipment-based solutions as part of their core portfolios. Users should also implement capabilities to monitor and measure the performance of the service, either on their own or through an independent third-party provider. Customers considering enhanced network delivery as an alternative to traditional device-based deployments should ensure that the service providers have points of presence close enough to user locations to deliver the needed optimizations.

Business Impact: Enhanced network delivery can deliver significant gains in application performance and customer satisfaction, while reducing the cost of WAN bandwidth.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Akamai; Aryaka; AT&T; BT; Orange Business Services; Riverbed Technology; Verizon Business; Virtela

Recommended Reading:
"You Can't Do Cloud Computing Without the Right Cloud (Network)"
"Magic Quadrant for WAN Optimization Controllers"
"Cloud Network Services Are Essential Enablers of Cloud Computing"
"Optimize Enterprise Networks to Improve SaaS Performance"
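Of the techniques listed in the definition above, compression is the simplest to illustrate. The sketch below uses Python's standard zlib module to show why repetitive HTTP-style traffic compresses well, which is one reason WOCs can cut WAN bandwidth; real WOCs layer deduplication, caching and protocol-level optimizations on top of this:

```python
import zlib

# Repetitive HTTP-style payload, standing in for chatty application traffic.
payload = b"GET /api/v1/orders HTTP/1.1\r\nHost: example.com\r\n\r\n" * 200

# Compress as a WAN optimization device might before sending across the link.
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes (ratio {ratio:.3f})")
```

The compression ratio achieved in practice depends heavily on traffic mix: text and protocol chatter compress dramatically, while already-compressed media gains little, which is why cross-session deduplication matters as much as stream compression.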

Fleet Vehicle Tracking


Analysis By: Thilo Koslowski

Definition: Fleet-vehicle-tracking technologies enable fleet operators to locate and track the movements of mobile assets in real time, using onboard sensors and cellular or satellite links. Carriers can monitor exact tractor-trailer locations, engine performance, fuel state, hours of operation, maintenance problems, cargo temperature and tampering alerts. This is also an essential component in dynamic driver-load-matching applications.

Position and Adoption Speed Justification: Fleet-vehicle-tracking technologies require sophisticated communication devices and software that small or midsize carriers initially couldn't afford. Early offerings were too costly to achieve mass penetration, but new low-cost solutions (for example, GPS-enabled smartphones, laptops or aftermarket embedded devices) bring data analysis and driver feedback features within reach of small or midsize carriers. Fleet-tracking technology solutions are becoming more ubiquitous, and vendors are embracing specific aspects, such as fuel savings and safe driving behavior, in their solutions, in addition to simple asset tracking. Further advances and cost reductions in GPS and networking technologies will drive broad market adoption during the next five years. New impulses for fleet vehicle tracking come from "smart city" projects that tie individual fleet-tracking efforts to broader traffic management and environmental aspects (for example, fuel-efficient routing). Going forward, legislation in some countries may require fleet tracking for these purposes or to drive monetization (for example, road tolling, congestion charges and taxation).

User Advice: Smaller fleets should evaluate off-the-shelf solutions that support their specific business needs. Larger fleet operators should explore whether newer, less expensive solutions justify a switch from current solutions, or even develop their own tracking systems by buying, combining and integrating required technology and process components. Focus on integrating navigation data with logistics and asset management applications. Operators should implement vehicle-tracking systems when the pain of empty loads, idle drivers and "lost" tractor-trailers visibly affects customer satisfaction or operating efficiency.

Business Impact: Fleet operators can use tracking technologies to improve their operations' effectiveness, dispatching and routing processes, and security systems. Furthermore, the technology can improve routing, optimize fuel consumption and support driver education.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: GreenRoad; IBM; PeopleNet; Qualcomm; SageQuest; Trimble; US Fleet Tracking

Recommended Reading:
"In-Vehicle Technologies Provide Differentiation Opportunities for U.S. Commercial Vehicle Manufacturers"
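A core building block of any tracking solution is turning raw GPS fixes into operational signals. The sketch below computes great-circle distance between fixes and a simple depot geofence alert; the coordinates and radius are illustrative assumptions only:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def geofence_alert(fix, depot, radius_km=50.0):
    """Flag a tractor-trailer whose latest fix is outside the depot geofence."""
    return haversine_km(fix[0], fix[1], depot[0], depot[1]) > radius_km

depot = (41.8781, -87.6298)  # hypothetical Chicago depot
local_fix = (41.9, -87.7)    # a few kilometers away: no alert
distant_fix = (40.7128, -74.0060)  # New York area: triggers an alert
```

Commercial platforms add map matching, route corridors and dead-reckoning between fixes, but distance and geofence checks of this kind underpin the "lost trailer" and routing alerts described above.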

Entering the Plateau


Process Data Historians
Analysis By: Leif Eriksen; Randy Rhodes

Definition: Process data historians are purpose-built to acquire time series data from operational technology (OT) systems and make that data available for detailed analysis. These historians routinely store years of time series history at monthly to millisecond granularity for thousands of continuous data streams. This data comes from a variety of distributed control systems (DCSs), supervisory control and data acquisition (SCADA) systems, and other types of control and monitoring systems.

Position and Adoption Speed Justification: Organizations are looking for tools to make better-informed decisions for improving operational effectiveness, mitigating operational risk and improving sustainability. Manufacturers are also increasingly using these systems to reduce energy consumption and improve reliability. The smart grid focus in electric utilities is driving up monitoring and historical storage requirements, and these solutions are also finding a home in new renewable energy markets, such as wind and solar. The decreasing cost and improving sophistication of sensor technology and communications options are driving up data volumes. Process data historians offer easy-to-deploy solutions; allow engineers and operators to access very large volumes of process data; and create analyses and visualizations, with minimum support from IT. Historians have also found a role as the system of record for distributing time-stamped operations data across the enterprise. They come with graphical user interfaces (GUIs), some analytical capabilities and, typically, either tight integration with, or export capabilities to, desktop spreadsheet tools. More advanced features include high-availability architecture (seamless client and server failover); event-framing tools; server-side calculation engines; modeling tools for better integration of time series data with relational data; and software development toolkits for compiled applications. Historian software is finding a role in situational awareness applications due to its combination of easy data accessibility and user-based interface design (thick client and thin client). It is often embedded in other commercial off-the-shelf (COTS) control and monitoring applications.

User Advice: Time series process data history should be kept in a process data historian, as opposed to an organization's relational database. Modern historians have very good out-of-the-box data compression, graphical display capabilities, security features, and integration with back-office business systems and real-time operational systems. Many process data historian users are looking to upgrade point solutions into modern enterprise solutions. Therefore, process data historians should be modular and scalable, and preferably built on a modern service-oriented architecture (SOA) platform. Users must consider systems that integrate easily with a wide range of systems, such as SCADA, mobile, condition-based maintenance (as a part of enterprise asset management), GISs and other sources of operational data, to make mission-critical data accessible to everyone who needs it. Finally, because licensing plays an integral role in how widely used a solution will be, companies should closely evaluate the long-term price of licensing agreements as systems expand and users increase. As more data is added to the historian, its value and applications increase, leading to more widespread usage. In the utility market, as revenue metering moves into smaller intervals to support energy-efficiency programs and other enterprise needs, time-series-based data historians may be used to complement meter data management systems. Further archiving of metering data and improved data persistence will be required where metering data is intended to be used for "extended real-time" distribution network analysis and dynamic distribution asset operations monitoring. The deployment of phasor measurement units for wide-area measurement systems at reliability coordinators and transmission system operators is also driving expanded use of historian software.

Business Impact: Process data historians are the system of record for time series data in energy and process manufacturing industries. They are indispensable to operations analysts and engineers as they work to improve operational performance and equipment reliability, as well as reduce costs. In addition, they are used to help support the decision process for long-term asset investment and maintenance planning strategies; roll up production and quality data into accounting systems, such as ERP; and provide better customer service by aligning asset performance with customer needs. More recently, they have become the foundation for energy management and broader sustainability efforts.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: AspenTech; GE Intelligent Platforms; Honeywell Process Solutions; InStep Software; Invensys Operations Management; OSIsoft; Rockwell Automation; Siemens Energy and Automation

Recommended Reading:
"Growing Transmission Grid Complexity Requires More Situational Awareness"
"Historian Software at Utilities: A Market Overview"
"Vendor Guide for Industrial Energy Management, 2013"
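The out-of-the-box data compression mentioned above can be illustrated with a minimal deadband filter, a simplified stand-in for the swinging-door style algorithms commercial historians typically use: a sample is archived only when it moves beyond a tolerance from the last archived value.

```python
def deadband_compress(samples, deadband=0.5):
    """Archive a (timestamp, value) sample only when it differs from the last
    archived value by more than `deadband`; always keep the first sample."""
    archived = []
    for t, v in samples:
        if not archived or abs(v - archived[-1][1]) > deadband:
            archived.append((t, v))
    return archived

# Slowly drifting process signal: most samples fall inside the deadband.
raw = [(0, 100.0), (1, 100.1), (2, 100.2), (3, 101.0), (4, 101.1), (5, 99.0)]
print(deadband_compress(raw))
```

Here six raw samples reduce to three archived points. Production historians use more sophisticated schemes (swinging-door trending reconstructs a signal within a guaranteed error band), but the trade-off is the same: storage versus fidelity, tuned per tag.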

Commercial Telematics
Analysis By: Thilo Koslowski

Definition: Commercial telematics are automotive information and communication technologies/services targeted at the fleet and trucking segments that use embedded technology or mobile and aftermarket devices. They support networks between commercial vehicles/fleets and IT applications, and focus on productivity, efficiency, profitability, cost reduction (such as lower insurance premiums) and sustainability objectives.

Position and Adoption Speed Justification: Service and IT vendors, as well as commercial vehicle manufacturers, continue to improve functionality, back-end integration and business value for commercial telematics applications, which is leading to the growing penetration of such applications in the commercial fleet market (aftermarket). Fleet operators' expectations regarding the functionality of in-vehicle technologies have increased, and companies expect to pay less for such offerings. Recent vendor offerings increasingly focus on the integration of mobile and aftermarket devices into the vehicle, and focus application benefits on safety, fuel efficiency and driver monitoring. Increasing vendor competition will lead to more cost-effective solutions that can increasingly be utilized by small and midsize fleets. The growing maturity of the technology is likely to lead to market consolidation, especially among smaller vendors.

User Advice: Communicate and market a clear value proposition: productivity improvements, cost savings and reduced risks for fleet managers (for example, via remote diagnostics). Establish partnerships with integrators to develop an effective telematics solution that ties into back-end logistics and fleet management applications. Explore opportunities to integrate with ERP systems, leveraging fleet-specific information. Use commercial telematics applications to raise driver awareness of fuel-efficient, safe and nondistracted driving.

Business Impact: Commercial telematics can provide improved asset management, higher profit margins and cost avoidance for fleet operators. They represent a market opportunity for service and technology providers, and enable vehicle manufacturers (for example, truck makers) to offer improved options and, potentially, increase service revenue. Commercial telematics also offer the opportunity to lower insurance premiums for fleet operators by tracking vehicles and goods.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Atos; Axeda; Qualcomm; SageQuest; Sprint; US Fleet Tracking; Verizon; WirelessCar

Recommended Reading:
"Competitive Landscape: M2M Services in the Automotive Industry"
"nPhase Alliance With Carriers Simplifies Global M2M Deployment"
"In-Vehicle Technologies in Europe Present Opportunities and Challenges for Commercial Vehicle Manufacturers"

Event-Driven Architecture
Analysis By: W. Roy Schulte; Mark Driver; Ray Valdes

Definition: Event-driven architecture (EDA) is an IT design paradigm in which a software component executes in response to receiving one or more event notifications. EDA is more loosely coupled than the client/server paradigm because the component that sends the notification doesn't know the identity of the receiving components at the time the system is compiled.

Position and Adoption Speed Justification: The general concept of EDA is mature and largely taken for granted by most architects and many developers. It has arrived at the Plateau of Productivity and will not appear as a separate entry on the Hype Cycle next year. EDA is, however, a fundamental and important part of IT, so it will play an important role in numerous future trends. Because new technologies that depend on EDA will continue to come to market, EDA will have an implicit presence within many other entries on future Hype Cycles. EDA is used by all companies, and in many different ways, because it is the only practical technique for implementing many kinds of loosely coupled, asynchronous processes. Users, and sometimes even developers, use EDA without realizing it, and it has long been used in OSs, communication protocols, device management for IT and operational technology (OT) systems, simulation and graphical user interfaces. More recently, EDA has been increasingly applied at higher levels of application architecture for systems as diverse as smart grids, other OT systems and social computing systems (in the form of activity streams). Business events such as customers placing orders, changing their addresses, making credit card transactions and clicking through websites are a growing focus in application design. EDA was historically underused in high-level design because some analysts and application developers were unfamiliar with event-processing concepts and thus defaulted to traditional client/server or batch design paradigms. The demand for faster and more flexible business processes has, however, caused architects and developers to question the basic nature of contracts between software components, leading to an increase in their use of EDA and representational state transfer (REST), while decreasing the use of method-oriented client/server and batch designs.

User Advice: Architects should educate business leaders about the benefits of continuous processes compared with batch processes, and the benefits of CEP-based, continuous-intelligence dashboards, compared with nightly or weekly reports, for making certain kinds of decisions. Architects and analysts should use EDA in sensor-based systems and other OT systems that require asynchronous and loosely coupled relationships. EDA will play a major role in the Internet of Things (for example, every RFID or bar code reading is an event). Architects and business analysts should identify and document significant business events when they model business processes and application data for large new business systems. A typical business process or OT system has event-, time- and request-driven aspects:

- Event-driven EDA relationships are the best approach when the sender and receiver are asynchronous and the system developers must have maximum flexibility to change the sender or receiver without side effects for the other party.
- Time-driven relationships should be used instead of event-driven relationships when the nature and timing of an activity can be anticipated and scheduled in advance (in a sense, time-driven components are a special case of event-driven components where the event is a time change).
- Request-driven relationships (as found in REST or method-oriented client/server architecture) are appropriate when the nature of the application is understood and jointly agreed on by the developers of the clients and servers (respectively) so that a synchronous relationship can be implemented.

Architects can use available methodologies and tools to build good EDA applications, but must consciously impose an explicit focus on events because conventional development methodologies and tools sometimes ignore or downplay events (see "How to Improve Your Company's Event Processing"). As architects and developers weigh the trade-offs between REST and method-oriented SOA, they should consider EDA as a third possible alternative (EDA should be used in some situations, REST or method-oriented interfaces elsewhere).

Business Impact:

Systems that use EDA are easier to modify than those designed with other paradigms because a change to one component has few side effects, making it more practical to change or add components to the system with less time, effort and cost. As such, EDA is relevant to every industry. Companies will use EDA more as they evolve toward always-on, continuous processes that respond more quickly to customer requests, competitors' moves, and other events in the internal or external environment. The growing adoption of EDA will improve company timeliness, agility and information availability in applications, thus improving the efficiency of business processes.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: IBM; Informatica; Microsoft; Oracle; Red Hat; SAP; Software AG; Tibco Software

Recommended Reading: "How to Improve Your Company's Event Processing"

"How to Choose Design Patterns for Event-Processing Applications"

"Apply Three Disciplines to Make Business Operations More Intelligent"

Appendixes

Figure 3. Hype Cycle for Operational Technology, 2012

[Figure not reproduced: the chart plots expectations against time across the phases Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity. Technologies charted include: Open SCADA; Big Data; IT/OT Convergence in Life Sciences Facilities; Energy Management; Networking IT and OT; IT/OT Integration Providers; IT/OT Alignment; IT/OT Convergence in Manufacturing; IT/OT Impact on EA; ITAM Processes for OT; High-Performance Message Infrastructure; IT/OT Integration; IT/OT Skilled Workforce; Exploiting Sensor Grids; Operational Technologies for Government; Complex-Event Processing; Operations Intelligence; System Engineering Software; Operational Technology Security; Machine-to-Machine Communication Services; Operational Technology Platform Convergence; Enterprise Manufacturing Intelligence; Commercial Telematics; Car-to-Infrastructure Communications; Transportation Mobile Asset Tracking; Process Data Historians; Event-Driven Architecture; Fleet Vehicle Tracking; Electronic Onboard Recorders; Remote Diagnostics; Public Telematics; Process Control and Automation. Legend (time to plateau): less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau.]

As of July 2012

Source: Gartner (July 2012)


Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2013)


Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2013)


Table 3. Maturity Levels

Embryonic
  Status: In labs
  Products/Vendors: None

Emerging
  Status: Commercialization by vendors; pilots and deployments by industry leaders
  Products/Vendors: First generation; high price; much customization

Adolescent
  Status: Maturing technology capabilities and process understanding; uptake beyond early adopters
  Products/Vendors: Second generation; less customization

Early mainstream
  Status: Proven technology; vendors, technology and adoption rapidly evolving
  Products/Vendors: Third generation; more out of box; methodologies

Mature mainstream
  Status: Robust technology; not much evolution in vendors or technology
  Products/Vendors: Several dominant vendors

Legacy
  Status: Not appropriate for new developments; cost of migration constrains replacement
  Products/Vendors: Maintenance revenue focus

Obsolete
  Status: Rarely used
  Products/Vendors: Used/resale market only

Source: Gartner (July 2013)


Recommended Reading
Some documents may not be available as part of your current Gartner subscription.

"Understanding Gartner's Hype Cycles"

"Agenda for IT/OT Alignment, 2013"

"Predicts 2013: IT and OT Alignment Has Risks and Opportunities"

"How CIOs Should Address Accountability and Responsibility for IT and OT"

"IT and Operational Technology: Convergence, Alignment and Integration"

Evidence

Each entry on this Hype Cycle has been prepared by subject matter experts. They have drawn on the body of research published within their respective disciplines as Gartner research deliverables, and have also used related external secondary research sources. Many of the entries in this Hype Cycle are particularly forward-looking. Analysts contributing to this report cover a wide range of technologies and industries. They have typically drawn on their ongoing monitoring and research interest in new intellectual property developments, on the work of numerous vendors' R&D teams and on the developments occurring in global standards-setting institutions to prepare these research positions. The descriptions and forecasts given have been reviewed by analysts who are subject matter experts in the particular discipline that has been addressed. They have also been reviewed by other knowledgeable analysts outside the immediate discipline as a validity check to confirm that the research positions given are sound across IT and OT industries and technologies.

More on This Topic

This is part of an in-depth collection of research. See the collection:

Gartner's Hype Cycle Special Report for 2013


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2013 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity."

