Dr. Guido Soeldner directs the Division of Cloud Automation and Software Development at Soeldner Consult GmbH in
Nuremberg. His company focuses on virtualization infrastructure from VMware, Citrix and Microsoft, as well as
specialized cloud technologies from Amazon, Pivotal and Microsoft. He is also a VMware Certified Instructor, Amazon
AWS Instructor and a SpringSource/Pivotal Trainer.
Jens-Henrik Soeldner is head of Business Infrastructure at Soeldner Consult GmbH in Nuremberg. He has been
awarded VMware vExpert status annually since 2013. He holds various manufacturer certifications, such as VMware
Certified Mentor Instructor and EMC2 Instructor, and has been a Microsoft Trainer for many years.
Constantin Soeldner directs the Business and IT Consulting Division at Soeldner Consult GmbH in Nuremberg. He is
authorized as a VMware Certified Instructor and Instructor of AWS among others.
www.soeldner-consult.de
All rights reserved. No part of this book may be reproduced or transmitted in any form or
by any means, electronic or mechanical, including photocopying, recording, or by any
information storage and retrieval system, without written permission from the authors,
except for the inclusion of brief quotations in a review.
ISBN-13: 978-1515264330
ISBN-10: 1515264335
Limit of Liability/Disclaimer of Warranty: The publisher and the authors make no representations or warranties with respect to the accuracy or
completeness of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No
warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every
situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If
professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the authors shall be
liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or potential source of further
information does not mean that the authors or the publisher endorse the information the organization or Web site may provide or recommendations it
may make. Further, readers should be aware that internet Web sites listed in this work may have changed or disappeared between when this work was
written and when it is read.
Trademark Acknowledgments
VMware: vSphere, vCenter, vRealize Automation, vRealize Orchestrator, vCenter Orchestrator, NSX, vCloud and vRealize Business are all registered
trademarks of VMware, Inc. All other trademarks are the property of their respective owners. Soeldner Consult GmbH is not associated with any
product or vendor mentioned in this book.
Preface
Cloud Computing has rapidly become the main focus of the IT industry. Hence, it has
naturally become a topic of vital importance to many companies. They therefore need to
decide on how to deal with this subject in the future and to what extent they will introduce
cloud technologies. In doing this, they will of course face many challenges. Big
companies will seek a solution that helps them to better manage resources in the private,
public, or hybrid cloud. Service providers require a product that supports their customers
in managing hosted services more efficiently. Finally, IT departments are looking for a
solution that allows automation of their internal processes and empowers them to be
cloud-ready (IT as a service).
VMware’s vRealize Automation can help you address all these challenges. As a central
component of VMware’s Software-Defined Data Center (SDDC) strategy, it assists
companies that are introducing cloud computing to their operations.
This book gives a summary of the products, where they fit and how they can be used, and
also covers in depth all the technical aspects you need to get up and running with
VMware vRealize Automation.
Table of Contents
Cloud Computing
What is cloud computing?
Cloud service models
Deployment models in the cloud
Elements of cloud computing
Advantages of cloud computing
Deployment Architecture
Summary
Basic Installation
Overview of installation and configuration steps
Installation
Installation prerequisites and considerations
Deployment and configuration of the Identity Appliance
Deployment and configuration of the vRA Appliance
Summary
Design and Configuration of vRealize Automation
Basic vRealize Automation configuration
Tenant as an organizational unit
System administrator privileges
Creating and configuring a tenant
General settings
Identity stores
Uploading a license
Adding endpoints
Background: Data collection
AWS Endpoint
Creating and configuring fabric groups
Defining business groups
Creating reservations
Storage Policies
Configuring Network Profiles
Cloud Blueprints
Provisioning with Amazon AWS
Defining a blueprint
Provisioning with OpenStack
Provisioning with vCloud Air and vCloud Director
Comparison of Amazon AWS with vCloud Air
Preparing for vCloud Air
Physical blueprints
Integrating the vRealize Guest Agent
Installing the guest agent on Windows
Installing the guest agent on Linux
Executing scripts with the guest agent
Summary
Network Profiles and Multimachine Blueprints
Basics of network profiles
Creating external network profiles
Private network profiles
Routed network profiles
NAT network profiles
Summary
Working with the Service Catalog
Configuring the service catalog
Creating services
Managing catalog items
Creating entitlements and assigning permissions
Approval processes
Specifying approval policy information
Creating one or more approval levels
Configuring an approval form
Summary
Reclamations
Reclamation workflow overview
Identifying unused machines in vRealize Automation
Capacity reports
Summary
Custom Properties and Build Profiles
Custom properties basics
Machine lifecycle
Request phase
Approval phase
Provisioning phase
Post approval phase
Manage phase
Retire phase
Custom Properties
Order of custom properties
Custom property categories
Read-only custom properties
Internal custom properties
External custom properties
Updated custom properties
Configuration of custom properties
Build profiles
Create build profiles
Property dictionary
Using the property dictionary
Create a property definition
Configure property attributes
Create the parent property definition
Create the child property definition with a relationship attribute
Write a Value Expression
Formatting the XML and uploading it to vRealize Automation
Add the properties to a build profile or blueprint
Configure property layout (optional)
Summary
Extensibility Overview
Extensibility Options and Tools
Extensibility with vRealize Designer and VMware Orchestrator
VMware vCenter Orchestrator
Advanced Service Designer
Cloud Development Kit
Summary
Working with vRealize Automation Designer
The vRealize Automation IaaS model
Background: LINQ
vRealize Designer
Use case: Invoke a PowerShell script as part of the provisioning process.
Implementing the workflow
Background: Workflows in vRealize Designer
Background: How to activate workflows
Additional Workflow activities
Summary
vRealize Orchestrator
Introduction to vRealize Orchestrator
vRealize Orchestrator configuration
Start Orchestrator Appliance
Create Orchestrator Endpoints
Installation of the Orchestrator client
Background: Adding additional users for the Orchestrator client
Orchestrator configuration
Import SSL certificates
Configure the vRealize Automation plug-ins
Orchestrator Use Cases
Summary
Advanced Service Designer
XaaS use cases
Advanced Service Designer configuration
Role assignment
Endpoint configuration
Summary
Financial Management
Overview on financial management
Basic features of financial management tools
Understanding your costs
Establishing prices for services
Comparing costs
Showback costs
Providing reports
Summary
DevOps and vRealize Automation
Foundations of DevOps
DevOps Tools
Ticket systems
Server deployments
Configuration management
Continuous integration
Continuous delivery
Continuous deployment
Log analysis
This chapter aims to give a brief introduction to cloud computing. We will explain the
basics of cloud computing, show its advantages and discuss the different deployment
models available. We will also address the different service models in the cloud.
From a technical perspective, cloud computing never comes alone – rather, it builds on
multiple technologies, among them virtualization and automation.
Cloud computing is seen as one of the most important trends in the IT industry. It
allows application software to be operated using a wide variety of internet-enabled
devices. The word cloud acts as a metaphor for an abstraction layer that hides the
complexity of the underlying infrastructure.
Besides these three offerings, there is another service model that can be encountered from
time to time:
IaaS is the most basic cloud service model. Its basic function is to provide fundamental
compute capabilities – i.e. CPU, memory and storage resources. At the heart of IaaS is
virtualization – a hypervisor such as VMware ESXi, Xen, KVM or Hyper-V runs the virtual
machine as a guest. This allows the parallel use of multiple guest virtual machines on the
same base hardware. Importantly, it also provides isolation between these machines.
However, installing large numbers of such virtual machines manually would take too long
- so that’s where automation comes into play. Besides the aforementioned features, cloud
providers also offer raw block storage, object storage, firewalls, load balancers, IP
addresses or virtual local area networks as IaaS resources.
The most famous service providers within the public cloud are Amazon with its EC2
instances, Google with its App Engine and Microsoft with Azure.
Companies aiming to introduce cloud computing need some degree of virtualization
within their enterprise. Once running, they can use vRealize Automation or OpenStack as
a cloud management platform.
• Application PaaS (aPaaS) helps developers to rapidly develop applications for the
cloud. Such aPaaS systems usually support different programming languages and
frameworks. They can also take care of the deployment of applications and their
runtime management. Popular aPaaS systems are AWS Elastic Beanstalk, Pivotal
Cloud Foundry, CloudBees, Google App Engine, Heroku and Red Hat OpenShift.
• Mobile PaaS (mPaaS) aims at providing deployment capabilities for mobile
applications.
• Open PaaS (oPaaS) does not include hosting, but offers capabilities to run
applications in other environments.
Software as a Service (SaaS)
Another popular service model within cloud computing is SaaS. Within SaaS, users have
direct access to applications. In the SaaS model, the cloud provider manages the whole
infrastructure as well as the application. SaaS applications are usually priced on a pay-per-
use basis or using a subscription fee. There are many popular SaaS offerings, for example
the Salesforce CRM or Microsoft Office 365.
Everything as a Service (XaaS)
The XaaS deployment model is mostly used in private clouds. Any service that can be
provisioned in an automated way can be published within the private cloud and thus used
as an XaaS service within the enterprise.
Right now, there are many different deployment models within the cloud:
• Public cloud computing is certainly the most mature and popular deployment model
of the cloud. End users from companies that have not yet made their internal IT
‘cloud-ready’ can easily use service offerings from existing cloud providers such as
Amazon AWS, Microsoft Azure or Google – to mention but a few. The biggest
advantages of using public cloud computing are its flexibility and the absence of up-
front investment requirements.
• Companies can also build their own private cloud. Setting up your own private
cloud involves a significant level of engagement and means re-evaluating and
redesigning the internal IT environment. In order to make such a transition more
feasible, there are cloud management platforms such as VMware vRealize
Automation or OpenStack on the market.
• Hybrid clouds represent a combination of both approaches. The hybrid cloud can be
either a combination of different public cloud service providers or an extension of
the private cloud by a public cloud. VMware offers vCloud Air and targets
customers that need a certain level of temporary capacity.
• Multicloud computing is a recent trend, in which services from multiple cloud
providers are used in order to reduce reliance on a single vendor and to mitigate the
effects of disasters.
During the last few years, there has been a significant increase in service offerings from
cloud providers. Several years ago, cloud computing was limited to the most basic
infrastructure services. Today, cloud offerings encompass nearly the whole range of
services and applications of traditional datacenters. While the range of available services
is still increasing, they can be categorized at a base level into core infrastructure services
and advanced platform services.
Advanced services offerings include or allow you to build the following services:
Cloud computing has become extremely popular within recent years – and for good
reason. In this section, we will briefly enumerate and explain the most important
advantages of cloud computing.
The number-one reason companies move to the cloud is certainly agility. Traditional
processes within companies tend to be very slow – especially when something new has to
be built. With cloud computing, on the other hand, users can request resources in minutes
instead of days, weeks or longer. From a service consumer perspective, there is no
expensive hardware to be ordered, installed or configured. Consumers only specify their
hardware expectations – i.e. how much memory, storage and computing capabilities they
need – and request them from the cloud provider. If there is any change in the
requirements, consumers can easily request additional resources or release them.
Another advantage is that there is no need to forecast your capacity. Consumers can scale
up to meet the needs of their workloads. As an example, imagine a web page that
experiences spikes and higher load during the Christmas season. A monitoring system
could constantly check the incoming traffic of a load balancer and – if there is a
significant rise – add additional web server instances behind the load balancer. As soon as
traffic drops, resources can be released again. Such behavior is known as scaling out and
scaling in.
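As a minimal illustration of the scale-out/scale-in idea just described, the following sketch applies a simple threshold rule to decide the size of a web tier. The thresholds and instance limits are invented for the example, not taken from any particular monitoring product.

```python
def desired_instances(current, requests_per_second,
                      scale_out_at=1000, scale_in_at=200,
                      min_instances=2, max_instances=10):
    """Return the new instance count for a load-balanced web tier.

    Scale out when per-instance traffic is high, scale back in when
    it drops again; always stay within the configured bounds.
    """
    per_instance = requests_per_second / current
    if per_instance > scale_out_at and current < max_instances:
        return current + 1            # scale out: add one web server
    if per_instance < scale_in_at and current > min_instances:
        return current - 1            # scale in: release one web server
    return current                    # load is within bounds

# Christmas spike: traffic rises, so an instance is added
assert desired_instances(2, 3000) == 3
# Traffic drops again: instances are released down to the minimum
assert desired_instances(3, 300) == 2
```

A real implementation would poll the load balancer's metrics on a schedule and request or release machines through the cloud provider's API; the decision logic, however, stays this simple.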
From a financial point of view, there are advantages for the consumer as well. Due to the
ability to scale in and out, money only needs to be spent when resources are actually in
use. Consequently, consumers only pay for the infrastructure they need. In addition, there
is no upfront investment for consumers.
Of course, running a cloud requires building up new knowledge. However, as cloud
services can easily be centralized, there is no need for cloud consumers to acquire all that
knowledge as well – instead they only need to concentrate on using those cloud services.
Another advantage of the cloud stems from the distribution of your datacenters.
Depending on the location of your datacenters (many public cloud providers act globally
as well), you will also be able to bring some benefits to your end users. Imagine a backend
for a mobile application that is deployed in datacenters located in different regions of the
world. End users will surely experience higher performance and lower latency when
using such applications.
VMware, a leader in virtualization, also has a long history in cloud computing. The
company has bundled its cloud products within the vRealize Suite (formerly known as
vCloud – the rebranding took place in 2014). Companies can use the vRealize Suite to
implement their own private cloud. VMware is also pushing forward its own cloud
strategy – the Software-Defined Data Center, which involves private, public and hybrid
cloud computing.
Managing the cloud requires a cloud management platform and VMware vRealize
Automation acts as such a platform. Originally, VMware had another product as their
cloud flagship – the vCloud Director, but between 2012 and 2013, VMware realized that
vCloud Director was not capable of meeting all the demands of private cloud computing
in the enterprise. Therefore, VMware acquired DynamicOps in 2012 and rebranded their
cloud management product first as vCloud Automation Center and later as vRealize
Automation. vCloud Director still exists, but can only be used by service providers and
not by normal enterprise customers. vCloud Director provides basic cloud management
platform capabilities, but focuses on a VMware-only stack.
In order to request resources, end users log in to the self-service portal and can request
services from a service catalog.
Behind the scenes, vRealize Automation manages the lifecycle of any requested
resource. This begins with approval policies that can be applied to any service published
to the service catalog. Users need the appropriate permissions to request a service from
the catalog. There are also means to determine where a machine is deployed. For
example, it is possible to form different service categories (e.g. gold, silver or bronze
hardware) and create a mapping between a service and the hardware to which such a
service should be deployed.
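The mapping between service categories and target hardware can be pictured as a simple lookup table. The tier definitions below are invented illustrations, not data from a vRealize Automation installation.

```python
# Illustrative service categories mapped to hardware profiles
TIERS = {
    "gold":   {"cpu": 8, "memory_gb": 32, "storage": "ssd"},
    "silver": {"cpu": 4, "memory_gb": 16, "storage": "sas"},
    "bronze": {"cpu": 2, "memory_gb": 8,  "storage": "sata"},
}

def hardware_for(service_tier):
    """Resolve a requested service category to its hardware profile."""
    return TIERS[service_tier]

assert hardware_for("gold")["storage"] == "ssd"
assert hardware_for("bronze")["cpu"] == 2
```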
Network virtualization
While in most cases provisioning virtual machines is enough, there are plenty of examples
where networks need to be created dynamically. This is especially true when multi-tier
applications need to be deployed. Imagine a multi-tier application consisting of a web
server, an application server and a database. Such applications should often be deployed to
a dedicated network that is created on the fly. Those networks include components such as
firewalls, load balancers, routers or security groups, and these can all be created with
VMware NSX. vRealize Automation can interact with NSX to dynamically create NSX
components as part of any deployment process.
Multitenancy
Another important feature of a cloud management platform is multitenancy. You can use a
vRealize Automation instance to host and run the virtual machines of different customers
while isolating them from each other.
Service catalog
End users can log in to the service portal and use the service catalog to request and
provision services from within vRealize Automation. Administrators can set up fine-
grained permissions to control the access to the service catalog. The user interface of the
service catalog is inspired by the look and feel of an App Store.
IaaS, PaaS and XaaS provisioning
Besides being able to automate your infrastructure and empower users to provision
services according to their needs, it is essential to keep track of ongoing costs. This
includes finding out which costs are incurred in your datacenter, how they are allocated
and how to map them to services that end users can be charged for using.
To accomplish that, vRealize Automation includes a financial tool – vRealize Business
Standard – that helps with the aforementioned problems.
1.2.2. Extensibility
In most projects, sooner or later, a point will come when not all of the requirements can be
covered by means of the administrative graphical user interface. The same holds true for
vRealize Automation.
The main tool for extensibility of vRealize Automation is the vRealize Orchestrator. The
Orchestrator constitutes a workflow engine which is tightly integrated with vRealize
Automation. It allows customizing vRealize Automation IaaS machines throughout their
whole lifecycle. On the other hand, Orchestrator can also help to automate vRealize
Automation tasks.
In our experience, having proper cloud management software is a key requirement for
building and maintaining a private or hybrid cloud. While many companies develop their
own set of programs and tools for managing the cloud, sooner or later they end up with a
lot of work just to keep their software up to date.
This book shows all aspects of vRealize Automation – the cloud management platform
from VMware. It offers the background knowledge as well as detailed instructions and
hands-on examples of real use cases.
Target audience
This book is intended for system administrators with experience using VMware’s vSphere
hypervisor as well as for developers who want to extend vRealize Automation and
vRealize Orchestrator. Consultants and architects will also benefit from reading the book
by gaining an in-depth understanding of vRealize Automation.
What this book covers
The topics of this book are usually introduced by a high-level overview of the chapters
and sections and the particular challenges. Then, the book proceeds to discuss how to
implement the various topics introduced.
Chapter 1 covers a discussion of Cloud Computing concepts, including what this term
means, what challenges exist in Cloud Computing and how Cloud Computing differs from
traditional IT.
Chapter 3 gives detailed information about the components that form vRealize
Automation and discusses how the product can be deployed in small, medium and large
environments.
Chapter 4 covers the installation of the products. The installation steps are explained step-
by-step.
Chapter 5 explains the most important concepts of vRealize Automation and shows how to
configure the system.
Chapter 6 explores blueprints. The chapter will introduce virtual, cloud and physical
blueprints and show how they interact with deployment techniques in order to provision
machines.
Chapter 7 discusses network profiles and multimachine blueprints. Both features together help
administrators to dynamically create networks and clusters of machines.
Chapter 8 shows how to set up the Service Catalog. This involves publishing blueprints
and setting up permissions for end users.
Chapter 9 talks about reclamations and how they can be used to reclaim capacity in the
datacenter.
Chapter 10 explores Custom Properties, Build Profiles and the Property Dictionary. These
items help administrators to easily change the runtime behavior of workflows and change
the user interface for the Request Form in the service catalog.
Chapter 11 covers advanced administration topics, such as the command line tool,
monitoring vRealize Automation or how to perform a backup and a recovery.
Chapter 14 describes how to set up and configure vRealize Orchestrator and integrate it
into vRealize Automation. Once you know how to get along with Orchestrator, you will
learn how to extend vRealize Automation.
Chapter 15 covers the Advanced Service Designer – a tool that helps to publish all kinds
of services to the service catalog. Once again, the chapter provides many real-life use cases.
Chapter 16 introduces financial management in the private cloud. It also shows how to set
up and integrate vRealize Business.
Chapter 17 gives an introduction to DevOps. The core principles of DevOps are
discussed. The chapter introduces the Advanced Services, Puppet, vRealize Code Stream
and vSphere Photon (a lightweight Linux distribution optimized for running containers).
Chapter 18 explains the concepts of DNS, DHCP and IP Address Management tools.
2. Architecture and vRealize Automation Product Overview
After having given a brief overview of cloud and automation in general, the following
chapter will now focus on the architecture of vRealize Automation. The main components
of the product can be identified as follows:
Besides these core components, there are several other appliances and resources which can
further extend and complement the base installation. These components will be described
in the following section. Furthermore, the different ways of licensing vRealize
Automation will also be outlined.
• vRealize Business Standard Edition: Bundled with the Advanced license of vRealize
Automation, vRealize Business Standard Edition helps you gain insight into the
costs of your datacenter. There is a showback functionality for the overall costs of
your environment. However, the resulting costs for business departments or
individual machines can also be seen. In addition to that, it can be used to compare
the costs of running machines in the private, hybrid, or public cloud.
• vRealize Application Services (formerly known as VMware vCloud Application
Director): This is another appliance, which contributes PaaS functionality to
vRealize Automation. It helps companies to provide sophisticated services such as
“Database as a Service”, “Middleware as a Service” or other complex applications.
Technically, the Application Services take care of the installation, configuration and
deployment of operating systems and hosted applications. During operations,
Application Services also help with the running of these deployed services – e.g.
scale-in, scale-out or updating applications.
• vRealize Code Stream: Technically, with vRealize Automation 6.2 onwards, the
vRealize Automation appliance also hosts vRealize Code Stream - an automation
tool that helps DevOps teams build a release pipeline and provision new
applications to a production environment. However, whilst technically hosted on the
vRealize Automation Appliance, it is an independent product and is not part of the
vRealize Automation or vRealize Suite licenses.
It is important to note, however, that in order to get the IaaS components running, an
instance of Microsoft SQL Server is required. Also noteworthy is the interaction between
the Identity Appliance and Active Directory. Most companies will use Active Directory
for user authentication. This can be leveraged by the Identity Appliance which
authenticates users and groups for vRealize Automation. Thus, by integrating Active
Directory, the Identity Appliance has access to all users and groups from Active Directory.
Therefore, you don’t need to create separate users in the Identity Appliance.
Figure 2-1 Logical overview of vRealize Automation components
This appliance is the heart of vRealize Automation. Introduced with version 6 of the
product, it now hosts the user interface for both administration and end users. Internally, it
consists of the following components:
The IaaS components have to be installed on a Windows machine. As they are quite
complex at first glance, they deserve some explanation. Locally, the following set of
services and web sites are hosted or required on the IaaS machines:
• IaaS Website
• Model Manager
• vCloud Automation Center Manager Services
• IaaS Database (usually installed on a separate node)
• Distributed Execution Manager
• Management agent
It is worth noting that despite the product name having changed to vRealize Automation,
some of the component names still retain references to “vCloud Automation Center”
(VMware has not yet changed all references to the new product name). Until version 5.2,
vRealize Automation (back then still vCloud Automation Center) was a pure Windows
product, so all these components are implemented in Microsoft .NET. As VMware shifts
away from Windows, we may see these components migrated or re-implemented on
Linux or Orchestrator in future versions.
Fig. 2-2 depicts how these components reside on a Windows machine. As these
components are quite important, we’ll now describe them in detail.
The IaaS Website is based on Internet Information Services (IIS) and hosts both the
Model Manager and the Management interface. In practice, this means that, whilst the
user interface is hosted on the vRealize Automation Appliance, the underlying IaaS
functionality is still hosted on Windows. The Automation Appliance only uses frames to
display the IaaS information. These capabilities are available under the Infrastructure tab
of the Automation Appliance user interface.
2.2.2.2. Model Manager
The Model Manager represents the heart of the IaaS components. It runs within IIS. Its
basic tasks are to provide a model of vRealize Automation entities (under the hood, its
data is stored within the Microsoft SQL Server database), to provide access to that model
via a web service interface and to talk to the IaaS Website. Furthermore, all the workflow
information and logic used to provision resources is stored in the Model Manager.
Distributed Execution Managers (DEMs) use this information to execute the workflows
and talk to the external environment. Internally, the Model Manager has the following
components:
Data Model and REST interface
As mentioned, the IaaS components store all their information in a Microsoft SQL Server
database. However, this database cannot be queried directly by other vRealize
components. Instead, a REST interface has to be used. This helps to expose all data as
entities. In former vCloud Automation Center times (up to version 5.2), this REST API
was used to implement your own user interface (nowadays the Linux appliance has its
own REST interface, which should be used instead).
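To give a flavor of how such an entity query might be assembled, the sketch below builds an OData-style URL with a $filter expression. The base path, entity-set name and filter field are hypothetical placeholders; consult the API documentation of your installation for the real paths.

```python
from urllib.parse import quote

def entity_query_url(base, entity_set, filter_expr=None, top=None):
    """Build an OData-style entity query URL against a REST repository."""
    url = f"{base}/{entity_set}"
    params = []
    if filter_expr:
        # URL-encode the filter expression (spaces become %20)
        params.append("$filter=" + quote(filter_expr))
    if top is not None:
        params.append(f"$top={top}")
    return url + ("?" + "&".join(params) if params else "")

# Hypothetical repository path and entity set, for illustration only
url = entity_query_url("https://iaas.example.com/repository/Data",
                       "VirtualMachines",
                       filter_expr="IsMissing eq false", top=10)
assert url == ("https://iaas.example.com/repository/Data/VirtualMachines"
               "?$filter=IsMissing%20eq%20false&$top=10")
```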
REST Web services
Representational State Transfer (REST) represents an architectural style, which can be
used to implement web services. Because of its simplicity, compatibility and scalability, it
is currently the most popular technique used when implementing web services.
Security information
The Model Manager also stores permissions, such as who is able to see which type of data
and actions can be invoked on IaaS resources.
Workflows
We have already discussed that vRealize Automation provides the means to provision
resources to different platforms (physical, virtual or cloud platforms). However, depending
on the target platform and the provisioning method, different workflows will be used. All
of them are stored centrally within the Model Manager. It is also possible to extend these
workflows and even store your own .NET based workflow tasks (remember that the IaaS
components are still implemented in .NET). However, this requires you to additionally
license the Cloud Development Kit. For example, you could modify the basic provisioning
workflow, to automatically publish information into a Configuration Management
Database (CMDB).
Events and Triggers
Another important feature of the Model Manager is that you can define custom events
within the data model. For example, whenever you change the datastore of one of your
provisioned machines, it would be possible to update some virtual machine properties.
However, an event could also be triggered from outside of vRealize Automation. A user
could trigger a machine action and as a consequence a workflow run could be triggered
(e.g. the installation of some software).
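The event/trigger mechanism described above can be sketched as a small publish/subscribe registry: handlers subscribe to a named event, and a change to a machine fires the registered handlers. The event name and property key below are invented for illustration and are not actual vRealize Automation identifiers.

```python
# Minimal publish/subscribe sketch of events and triggers
handlers = {}

def on(event_name):
    """Decorator registering a handler for the named event."""
    def register(fn):
        handlers.setdefault(event_name, []).append(fn)
        return fn
    return register

def fire(event_name, machine):
    """Invoke every handler subscribed to the event."""
    for fn in handlers.get(event_name, []):
        fn(machine)

@on("datastore.changed")
def update_properties(machine):
    # e.g. keep a custom property in sync with the new datastore
    machine["properties"]["Storage.Name"] = machine["datastore"]

vm = {"datastore": "datastore-02", "properties": {}}
fire("datastore.changed", vm)
assert vm["properties"]["Storage.Name"] == "datastore-02"
```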
Distributed execution of workflows
While the workflow logic itself is defined in the Model Manager, it is not the Model
Manager that executes the workflow. This is done by DEM Workers (there is also a DEM
Orchestrator, but it only schedules the workflow runs). However, different DEM workers
can exist. Sometimes we want to define where a certain workflow should be run (e.g. in a
special location or on a machine which meets certain requirements). This can be
configured by means of ‘Skills’. Essentially, a ‘Skill’ is a relationship between a DEM
Worker and a workflow: a workflow tagged with a skill can only be run by a DEM
Worker that has that skill, for example one residing on a certain host. Technically, to run
a workflow, a DEM Worker will contact the
Model Manager and download all the artifacts needed (e.g. some scripts) to run the code.
DEM Workers regularly contact the Model Manager and ask for new work. Fig. 2-3
depicts the interaction between the Model Manager and DEM Workers.
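One way to picture the ‘Skills’ matching is as a subset check between the skills a worker has and the skills a workflow requires. The workflow and skill names below are illustrative, not real vRealize Automation identifiers.

```python
def claimable(worker_skills, pending_workflows):
    """Return the workflows this DEM Worker may claim from the queue.

    A workflow may only be claimed if the worker has every skill the
    workflow requires (subset check); unskilled workflows match anyone.
    """
    return [name for name, required in pending_workflows
            if required <= worker_skills]

# Hypothetical work queue: (workflow name, required skills)
queue = [
    ("CloneWorkflow",      set()),               # no skill required
    ("SiteA-Provisioning", {"site-a"}),          # bound to site A workers
    ("SQL-Postinstall",    {"site-a", "mssql"}),
]

worker_skills = {"site-a"}
assert claimable(worker_skills, queue) == ["CloneWorkflow", "SiteA-Provisioning"]
```

A worker with no skills would only pick up the unrestricted workflow, while one holding both "site-a" and "mssql" could claim all three.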
As we have already discussed, IaaS uses a Microsoft SQL Server to store all its data.
Currently only Microsoft SQL Server is supported. If you want to prepare for high
availability, you should consider implementing a Microsoft SQL Server failover cluster.
While database mirroring or replication is possible in principle, VMware’s ‘preferred
way’ is to set up a cluster.
2.2.2.4. vCloud Automation Center Manager Service
The ‘vCloud Automation Center Manager Service’ runs as a Windows service and is
responsible for the interaction between the Model Manager, vRealize Automation agents,
Active Directory and SMTP. There is not much to configure in this service; however, you
have to make sure that it is running and not duplicated in your environment.
2.2.2.5. Distributed Execution Managers
As mentioned earlier, DEMs are responsible for interacting with the environment and the
provisioning of machines. They talk to the Model Manager in order to fetch all
information required for provisioning. There are two kinds of DEMs:
• The DEM Orchestrator communicates with the Model Manager and schedules the
workflow execution. The DEM Orchestrator only monitors the execution of the
workflows; it does not do the work itself. As there is a lot of interaction between the
DEM Orchestrator and the Model Manager, it is recommended to install the DEM
Orchestrator on, or ‘near’ to the Windows machine hosting the Model Manager. At
any time, only one DEM Orchestrator can be active within the vRealize Automation
installation. For high-availability reasons, it is therefore always good policy to have
a failover machine, which could become active in the case of a failure.
• DEM Workers are responsible for the execution of workflows. In contrast to DEM
Orchestrators, multiple workers can be active at the same time. This is useful both
for high availability and for scalability purposes. Because DEM Workers interact
with external resources, they should be installed as ‘near’ as possible to the
systems they are provisioning.
As discussed, DEMs interact with external systems in order to run workflows on them.
Unfortunately, in some cases they lack the knowledge to interact directly and thus need
the help of agents. This is especially true of the hypervisors (e.g. vSphere, Hyper-V) that
were supported in the first versions of vRealize Automation (support for other platforms
like KVM, Red Hat Enterprise Linux OpenStack, Amazon Web Services, Dell iDRAC or
Cisco UCS was later implemented directly in the DEMs). Right now, there are several
different kinds of agents:
• ‘Virtualization proxy’ agents are used to interact with hypervisors. This can be done
in order to provision a machine or to synchronize hypervisor data with the vRealize
Automation database (e.g. which templates can be used on the hypervisor). Agents
are installed as Windows services and must be installed and configured for every
single virtualization environment. This means, if you want to address three different
vCenter Servers, you must install three different agents. There are virtualization
agents for vSphere, Hyper-V and XenServer.
• ‘Virtual Desktop Integration’ (VDI) agents help with registering virtual machines in
external desktop management systems. One of the most popular VDI systems is
Citrix XenDesktop. After the provisioning and the registration, vRealize Automation
provides the owners of registered machines with a direct connection to the
XenDesktop Web interface. One installed agent can communicate with a single
Desktop Delivery Controller (DDC) or with multiple DDCs.
• ‘External Provisioning Integration’ (EPI) PowerShell agents help with the on-
demand streaming of Citrix disk images, from which the machines boot and run.
• The EPI Agent for Visual Basic helps to run Visual Basic scripts as an additional
step in the provisioning process (the script can be invoked before or after
provisioning a machine, or when disposing of it).
• ‘Windows Management Instrumentation’ (WMI) agents allow the collection of
information about machines under vRealize Automation control. This is required if
you want to provision machines via a Windows Imaging Format (WIM) file.
• The Management Agent was newly introduced with vRealize Automation 6.2 and is
also installed as a Windows service. It is used to register IaaS nodes in distributed
deployments and to collect support and telemetry information from these nodes.
However, it is also possible to run a Guest Agent from within a deployed machine (in that
case, you should ensure that the guest agent is already part of the image used to deploy
your machine). Guest agents communicate with vRealize Automation via SSL and are
useful for post-installation tasks on non-VMware machines (if you provision to vSphere,
it is easier to use the Guest API of the VMware Tools together with Orchestrator instead).
Guest agents can help you in a variety of use cases and are available for Windows as well
as for Linux operating systems.
As discussed, the Identity Appliance does not need to be deployed if you have vCenter
Server 5.5.1b or higher running. However, some companies tend to deploy their own
Identity Appliance to have more control over their installation (for example, vRealize
Automation does not support changing the password of the SSO administrator and may
become inaccessible due to an expired password). On the other hand, deploying your own
Identity Appliance means more work (also, don’t forget to provide high availability for
the appliance).
In most environments, it is important to have insight into your costs (i.e. to find out who
is responsible for which costs and to show these costs in a transparent manner). For
vCloud Director there was a tool called vCenter Chargeback Manager; however, this tool
has reached end-of-life and has been replaced by vRealize Business. If you are working
with vRealize Automation Advanced Edition or higher, you are free to deploy the
vRealize Business Standard Appliance. This appliance also helps you to compare your
own datacenter costs with those of Amazon Web Services or Microsoft Azure.
Fig. 2-4 shows the user interface after the integration of the vRealize Business Standard
Appliance into vRealize Automation. Users can compare the costs of deploying a set of
virtual machines into the private datacenter, Amazon Web Services or Microsoft Azure.
2.2.4.1. Application Services
Application Services require the Enterprise license of vRealize Automation. They are a
powerful tool for hosting PaaS services in your environment. This comprises not only
deploying machines to your environment, but also setting up complete software stacks.
Typically, companies want to provide some “IaaS+” features in their environments (for
example ‘Database as a Service’ or ‘Middleware as a Service’). Normally, this would
involve a lot of scripting in order to get such machines installed and configured. Luckily,
with the help of Application Services, such services can be configured graphically (via
drag-and-drop). The resulting services are called Application Blueprints and are
depicted in Fig. 2-5.
The application blueprint in Fig. 2-5 displays a web application consisting of three
different components: A load balancer, a set of JBoss Application Servers and a database
on a backend server. If you create such a blueprint, the first step is to choose an operating
system from the menu on the left and drag it to the main area. In the next step, you choose
the services you want to have on your machine and move them to your operating system
instances. The last step would be to deploy your own code (your Windows DLLs, Java
WAR files, etc.) onto those machines. If there is no Puppet module for your software, you
are still free to implement and use your own code in Application Services.
Application Services allow you to deploy your blueprint to different supported
environments (vRealize Automation, OpenStack, vCloud Director or Amazon Web
Services). There is a special cloud abstraction layer, which hides the details of the
underlying environment. Therefore, there is no platform-specific configuration in your
Application Blueprints.
Like IaaS or XaaS services, Application Services also allow you to publish application
blueprints directly to the service catalog (so that end users can request them).
When compared to a traditional deployment, all the services can be installed and
configured in minutes rather than several days.
Another important tool within the vRealize suite is Orchestrator, which helps you to
automate your environment. The vRealize Automation Appliance already comes with an embedded
Orchestrator instance (including plug-ins for vRealize Automation, NSX and vRealize
Code Stream). If you don’t want to use the embedded Orchestrator instance (for
performance and high-availability reasons), you can still deploy your own Orchestrator
instance, or use the one bundled with vCenter Server.
Orchestrator is required for the following scenarios, if you need to customize and extend
vRealize Automation:
Figure 2-5 Creation of an Application blueprint
• Firstly, if you want to provision XaaS services to the service catalog, these
underlying workflows must run in Orchestrator. From a developer’s point of view,
the first step is to create these workflows in Orchestrator. Once you have finished that
process, the next step would be to build a graphical frontend with the Advanced
Service Designer (ASD), publishing the workflow to the service catalog. There are
many use cases for the ASD. For example, you could integrate a plug-in for your
storage into Orchestrator, then publish it via ASD to vRealize Automation. Hence,
many small tasks could be automated, for example creating or extending a LUN
without bothering the storage administrator anymore. Under the hood, Orchestrator
already comes with over 300 different workflows.
• You also need Orchestrator if you want to customize the way machines are
provisioned. There are plenty of use cases for that. For example, if you are not happy
with the hostname and IP address assigned by vRealize Automation, you could
call a workflow to invoke an IP address management (IPAM) tool like Infoblox
instead. Many companies also need to register their resources in a configuration
management database (CMDB). There are plenty of other scenarios where
Orchestrator would be the right component for customization - we will talk more
about this later in Chapter 15. As a summary, Fig. 2-6 shows the machine lifecycle
of a virtual machine in vRealize Automation and the separate stages where
customization can occur.
• And last but not least, Orchestrator is also required for NSX configuration within
vRealize Automation.
2.2.6. NSX
Another important VMware product within the cloud ecosystem is NSX. NSX provides
network virtualization and a security platform for the software-defined datacenter.
Amongst other things, NSX provides capabilities such as logical switching, routing, firewalling and load balancing.
When integrated into vRealize Automation, these features and components can be
automatically created from an ordinary deployment process. This is useful when the mere
provision of virtual machines is not enough. Popular use cases for the integration of NSX
into vRealize Automation include the on-demand creation of networks, load balancers
and security groups as part of a deployment.
In many ways, NSX is the next version of (the discontinued) vCNS and has a lot of new
features. However, vCNS is part of the vCloud Suite, while NSX has to be licensed as a
stand-alone product. Like NSX, vCNS can be integrated into vRealize Automation.
2.3 vRealize Automation Licensing
Like many other products from VMware, vRealize Automation comes in different
editions: There is a standard, an advanced and an enterprise edition available.
Furthermore, vRealize Automation is available as a stand-alone product or as part of the
vRealize Suite (however, in that case you can only provision into VMware environments).
Furthermore, the standard edition is only available as part of the vRealize Suite. If you
want to deploy the stand-alone product, you will have to start with the advanced edition.
In that case, you need to license your managed operating system instances (OSIs), which
are available in packs of 25 instances.
There is also a Cloud Development Kit (CDK), which allows some sophisticated
customization – however the CDK also has to be licensed additionally for each vRealize
Automation instance.
As a summary, Table 2-1 shows the different editions and their features[1].
[Table 2-1: feature matrix of the vRealize Automation Standard, Advanced and Enterprise
editions, covering rows such as Custom Services (available in the Advanced and
Enterprise editions), Business Management and Solution Extensibility.]
2.4 Summary
This chapter discussed the architecture of vRealize Automation. The main components are
the vRealize Automation appliance and the IaaS components. Furthermore, there are
additional components like the Identity Appliance, vRealize Orchestrator or vRealize
Business. Altogether, they make up the vRealize Automation environment. It is crucial to
understand the roles of these different components in order to create a design for a
vRealize deployment and begin with the installation.
3. Design and Deployment
In the previous chapters, we discussed cloud computing in general and introduced the
architecture of vRealize Automation.
Now it’s time to discuss the hardware and software requirements for vRA and find out
how vRA should be designed. This also involves learning how vRA can be scaled to
support thousands of managed virtual machines and how we can guarantee high
availability.
To tackle these issues, we will first talk about the hardware requirements of vRA and
then we will show how vRA can be scaled for small, medium and large environments.
[Hardware requirements table: each component requires a 1 GB/s network connection,
with disk requirements ranging from 2 GB to 40 GB per component. For the IaaS web
server (hosting the IaaS website and the Model Manager), 2 vCPUs, 2-4 GB of RAM and
40 GB of disk are required.]
Please also note the compatibility between each vRA component and its underlying
software, in particular the supported databases and the supported authentication sources.
After having made sure that the hardware requirements have been met, you need to decide
whether or not you need some kind of high availability in your environment. You also
need to work out how much load you want to serve with vRealize Automation. With that
knowledge in mind, we can figure out how many servers we need to deploy. Whilst in test
and lab environments a minimal deployment will be sufficient, for bigger loads we will
have to scale up our environment. For many environments, the vSphere HA feature will be
sufficient. However, if you want to minimize downtime, you will have to investigate each
component in vRA and figure out how HA can be guaranteed.
We have already mentioned that you can use either the vSphere vCenter SSO or the
Identity Appliance as the SSO component in vRA. However, if you want to guarantee
high-availability, you must stay with the vSphere SSO. The vSphere SSO can be operated
in an active-passive configuration with an F5 load balancer in front.
High availability and scaling of the vRA Appliance are achieved by building an
active-active cluster. However, in this case we can no longer use the embedded
PostgreSQL database, but instead need to deploy the PostgreSQL database as a separate
node (you can also operate the PostgreSQL database in a cluster). The same applies to the
embedded Orchestrator instance: a separate Orchestrator node is needed. Last but not
least, a load balancer is required.
When accessing the vRA appliance, users should always be redirected to the very same
node in subsequent requests. Therefore the ‘sticky session’ or the ‘session affinity’ feature
should be activated on the load balancer. For load balancing, only port 443 is used. While
you could use any load balancer, VMware recommends the use of F5 BIG-IP hardware
and F5 BIG-IP Virtual Edition. These load balancers have already been tested by VMware
and white papers exist (which describe in detail how to configure them in a vRA
environment).
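Session affinity simply means that some attribute of the request (typically a session cookie) deterministically selects the backend node, so that a session always returns to the same vRA appliance. A minimal Python sketch of hash-based affinity — the node names are made up, and real load balancers such as F5 BIG-IP implement this internally:

```python
import hashlib

def pick_backend(session_id: str, backends: list) -> str:
    """Route a session to a backend by hashing its identifier, so that
    subsequent requests of the same session hit the very same node."""
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    # Use the first 4 bytes of the hash as a stable, uniform index.
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

Because the mapping depends only on the session identifier, no shared state is needed on the load balancer to keep a user pinned to one appliance.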
Most of the workload and time required for configuring high-availability and scalability,
has to be spent on the IaaS components. In small environments, all of these components
are installed on a single node. However, in bigger environments, the components have to
be distributed over different nodes and each of them has to be configured for HA and
scalability. Remember, we have several different components on the IaaS server:
• The IaaS web server hosts the Model Manager and the IaaS user interface (displayed
as a frame from the vRA appliance).
• The Manager Service coordinates the communication between the database, LDAP
or Active Directory, and an SMTP server.
• Distributed Execution Managers (DEMs) communicate with external systems to
provision resources.
• Proxy agents know how to interact with hypervisors.
The IaaS web server can be placed behind a load balancer – in the same manner as the
vRA appliance – hence supporting an active-active cluster configuration. However, as no
user is directly accessing the web server (only the vRA appliance), there is no need for
‘sticky session’ or ‘session affinity’. Instead, the ‘Least Response Time’ or ‘round-robin’
algorithm can be used. As previously mentioned, any load balancer with these features can
be used. However, VMware recommends the F5 BIG-IP hardware and the F5 BIG-IP
Virtual Edition.
It is important to act with caution when installing IaaS web servers in your environment.
While you can install many instances of the Website component, only one node is allowed
to run the Model Manager data component (usually the first node in the deployment). The
other nodes will just run the Website component.
At runtime, the IaaS web server is usually CPU-bound. So, if you notice some latency in
your environment, please monitor the IaaS web servers. Scaling up your web server
(adding more CPUs or memory) is also easier than scaling out (adding another node
behind the load balancer).
3.2.5. Manager Service
As opposed to the aforementioned components, you cannot run the Manager Service in an
active-active cluster configuration. Instead, a second server is needed, which runs as a
cold standby for disaster recovery. From a performance point of view, this shouldn’t be
a problem, as one Manager Service can easily serve tens of thousands of managed virtual
machines. The failover itself can happen manually or, as a better practice, via a load
balancer; in this case, however, you cannot use an arbitrary load-balancing algorithm.
Just like the Manager Service, only one instance of a DEM Orchestrator can run at any
time in a vRealize Automation environment. Because of this, an active-passive
configuration is also required. That being said, there is no need to configure a load
balancer for the DEM Orchestrator, as DEM Orchestrators can automatically monitor
themselves. When a DEM Orchestrator is started, it automatically searches for another
running DEM Orchestrator instance. If none is found, it becomes the primary DEM
Orchestrator. If there is already a working DEM Orchestrator, it will start as a secondary
node and monitor the primary. If the primary DEM Orchestrator fails, the secondary
automatically takes its place. Later, if the former primary DEM Orchestrator comes back
again, it will detect that there is another instance running and switch to secondary mode.
It is also important to note that the DEM Orchestrator and the Model Manager work
closely together to execute all kinds of workflows. Consequently, they should be placed
near each other and should have high network bandwidth available for communication.
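The election behaviour described above can be illustrated with a small sketch. The `registry` dictionary merely stands in for whatever internal mechanism the real DEM Orchestrators use to detect each other; the class and its methods are hypothetical:

```python
class DemOrchestrator:
    """Sketch of the primary/secondary failover behaviour."""
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry
        # On startup, search for a running primary; claim the role if none exists.
        if registry.get("primary") is None:
            registry["primary"] = name
            self.role = "primary"
        else:
            self.role = "secondary"

    def monitor(self):
        """Called periodically by a secondary: take over if the primary is gone."""
        if self.role == "secondary" and self.registry.get("primary") is None:
            self.registry["primary"] = self.name
            self.role = "primary"

def simulate_primary_failure(registry):
    """Simulate the primary process dying."""
    registry["primary"] = None
```

The key point is that a returning former primary does not fight for its old role: it starts up, sees an active instance, and settles into secondary mode.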
The components that actually run the workflows are called ‘DEM Workers’. They should
be deployed near the external resources they are communicating with. For example, if you
have different datacenter locations, make sure you have a DEM Worker running in each of
these locations.
DEM Workers are mainly CPU-bound, as they run the workflows. A DEM Worker can
run up to 15 workflows in parallel. If the workflow queue is constantly high, consider
vertical scaling first (i.e. add additional CPU and RAM). Nevertheless, you can easily add
an additional DEM Worker. Further to this, you can configure the workflow scheduling in
order to run certain workflows during off-hours or to increase the interval in which they
run.
If a DEM Worker becomes unavailable, the DEM Orchestrator will cancel all its
workflow tasks and assign them to other available workers.
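The per-worker limit of 15 parallel workflows behaves like a bounded worker pool: additional workflows simply queue until a slot frees up. A sketch — the limit value comes from the text above, everything else is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_WORKFLOWS = 15  # per-DEM-Worker limit, as stated above

def run_workflows(workflow_fns):
    """Execute workflow callables with at most 15 running concurrently;
    the remainder wait in the queue, just like on a busy DEM Worker."""
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL_WORKFLOWS) as pool:
        futures = [pool.submit(fn) for fn in workflow_fns]
        return [f.result() for f in futures]  # results in submission order
```

If the queue stays full for long periods, that is the signal the text mentions for scaling the worker vertically or adding another DEM Worker.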
3.2.8. Proxy Agents
You should deploy the Proxy Agent in the datacenter it is associated with. Together with
the DEMs, they are responsible for data collection from vSphere, Hyper-V or Xen
environments. The best practice is to run these workflows during off-hours (or manually,
if you have to trigger a data import from the environment). Up to two workflow runs can
run at the very same time. If you increase this value, you might encounter performance
issues in medium to big environments. It is also a good idea to consider deploying a
redundant Proxy Agent.
3.2.9. MS SQL Server
The last component to consider, within the software stack, is the Microsoft SQL Server.
As the database is a critical component, you should also think about how you could attain
high availability. There are different methods for creating high availability of the
Microsoft SQL Server. This could be as a cold standby server, a mirrored standby server
or a cluster (recommended by VMware).
One vRealize Business Appliance can scale up to 20,000 virtual machines, in up to four
different vCenter servers. When you synchronize for the very first time, it will take up to
three hours to finish. Later synchronizations will take between one and two hours. Like
the vRA Appliance, the vRealize Business Appliance can be put behind a load balancer.
However, you must consider that data collection can only take place from one node of the
cluster.
A running instance of vRealize Application Services can handle over 10,000 virtual
machines and more than 2,000 library items. Over 40 concurrent deployments can run,
and 100 users can be connected at the same time.
The vRA Application Services appliance relies mainly on CPU and memory capacities.
There is a Java virtual machine under the hood, so just increasing the memory size of the
VM is not enough. You need to adjust the maximum heap size within the Java properties
of the underlying tc Server. Please also take into account that placing a load balancer in
front of the Application Services is not supported.
3.3 Deployment Architecture
Now that we have finished discussing how to scale and provide high availability for the
different components of IaaS, it’s time to address possible deployment architectures.
VMware differentiates between small, medium and large deployments. In the following,
we will focus on each of these in turn.
Small environments
• Identity Appliance
• vRealize Automation Appliance
• IaaS Server (with all IaaS roles installed)
• MS SQL Server
• vRealize Automation Application Services Appliance
• vRealize Business Standard Appliance
Figure 3-1 Minimum footprint for small environments
The deployment is depicted in Fig. 3-1. Please note the ports that are required for
communication. The Identity Appliance is accessed via port 7444, LDAP via 389 (secure
LDAP over 636), access to the appliance console is done via port 5480 and the MS SQL
database port 1433 should also be opened. The Application Services additionally need
ports 8443 and 5671 for communication with the Application Services agents.
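Before going live, it can be handy to verify that the listed ports are actually reachable. A small sketch using only the Python standard library — the hostnames are placeholders for your own environment:

```python
import socket

# Ports from the small-deployment description above (hostnames are examples).
REQUIRED_PORTS = [
    ("identity.example.local", 7444),  # Identity Appliance (SSO)
    ("dc.example.local", 389),         # LDAP (636 for secure LDAP)
    ("vra.example.local", 5480),       # appliance management console
    ("sql.example.local", 1433),       # MS SQL
    ("appd.example.local", 8443),      # Application Services
    ("appd.example.local", 5671),      # Application Services agents
]

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unreachable(endpoints):
    """Return the subset of (host, port) pairs that did not answer."""
    return [(h, p) for h, p in endpoints if not check_port(h, p)]
```

Running `unreachable(REQUIRED_PORTS)` before the installation quickly surfaces firewall rules that would otherwise show up as obscure errors later.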
Such small environments run only the minimum of appliances and do not provide any
form of high availability.
Medium environments
To build such a solution, the following servers should be deployed (Fig. 3-2):
• Two vCenter Single Sign-On Servers
• Two vRA appliances (active-active)
• IaaS Web/Manager Server 1 (Active Web Server/ active DEM Orchestrator, active
Manager Service)
• IaaS Web/Manager Server 2 (Active Web Server / passive DEM Orchestrator,
passive Manager Service)
• Two IaaS DEM Worker Servers
• Two IaaS Agent Servers
• Clustered MSSQL Server
• vCenter Single Sign-On load balancer
• vRA Appliance load balancer
• IaaS Web load balancer
• IaaS Manager Service load balancer
• vRealize Automation Application Services Appliance
• vRealize Business Standard Appliance
Large environments
The large deployment plan (Fig. 3-3) serves the biggest environments.
3.4 Summary
This chapter described all the components of vRealize Automation and their hardware
and software requirements. Depending on your deployment size, VMware gives
recommendations regarding scalability and high availability. With that background
knowledge in mind, we can now continue with the installation and configuration of vRA.
4. Basic Installation
Having discussed the architecture and design of a vRealize Automation solution in the
previous chapters, we can now occupy ourselves with its installation and configuration. As
this encompasses a lot of tasks, we will split these tasks into different chapters.
Firstly, we will introduce the required steps. This chapter covers the basic installation. In
the following chapters, we will discuss how to continue with the configuration.
4.1 Overview of installation and configuration steps
The first steps cover the basic installation.
Hint:
The installation is very time-consuming and there is plenty of room for mistakes.
While some mistakes can easily be reverted, others may well require reconfiguring some
parts from scratch (especially with the Windows components). So it is always a good idea
to take a snapshot after you have completed a task or before you start the next one.
4.2 Installation
In the following segment, we will discuss the installation procedure step-by-step. So let’s
begin with the first task.
• It is important to remember that we need correct DNS names for all the machines in
our vRealize environment. vRA internally uses fully qualified domain names
(FQDNs) for communication. Please be aware that no underscore (“_”) is allowed in
any FQDN.
• It is a good idea to create a special domain account for vRA and assign local
administrator privileges on the IaaS machines to that account.
• You need a Microsoft SQL Server for the IaaS components. During the installation,
the database schema will be created. The database default name will be “vCAC”.
Assigning database sysadmin privileges to a vRealize Automation domain account
makes the installation quite easy. If you only have the database owner role, please
ensure that you have created the “vCAC” database in advance. Furthermore, on the
MS SQL server, please check the following settings:
o The TCP/IP protocol must be activated.
o The relevant ports must be open (1433).
• You also need the Microsoft Distributed Transaction Coordinator Service (MS
DTC) running on your IaaS machines, as well as on the MS SQL Server.
• Access to vCenter Server is needed for the deployment.
• Please check the hardware requirements as discussed in chapter 2.
• vRA uses SSL for communication. vRA can create self-signed certificates;
however, check your company guidelines on certificates, as you may need to
request and import signed certificates.
• Like in any distributed environment, time synchronization is crucial. You have the
following choices for the configuration:
o Use NTP (port 123 must be open).
o Windows machines can use the W32Time services.
o Use the VMware tools.
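The FQDN requirements mentioned above, in particular the ban on underscores, can be checked up front. A small sketch — the validation follows RFC 1123 hostname labels, and the function name is made up:

```python
import re

# RFC 1123 label: letters, digits and hyphens; no leading/trailing hyphen.
LABEL_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_vra_fqdn(fqdn):
    """Accept only FQDNs usable for vRA: at least two labels,
    each RFC 1123-conformant, and no underscore anywhere."""
    if "_" in fqdn:
        return False            # vRA forbids underscores in FQDNs
    labels = fqdn.split(".")
    if len(labels) < 2:
        return False            # a bare hostname is not fully qualified
    return all(LABEL_RE.match(label) for label in labels)
```

Validating every planned hostname with a check like this before deployment avoids one of the harder-to-debug installation failures.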
After reviewing all the prerequisites, we can now continue with the deployment of the
Identity Appliance (if you want to use the vCenter Single Sign-On component, you can
skip these steps). We will be using the classical vSphere Client, but needless to say the
Web Client can also be used for the deployment.
1. Open the vSphere Client and choose File > Deploy OVF Template in the menu.
2. The assistant opens and asks you to choose a source file for deployment. Click on
the Browse button, search for your Identity Appliance file (with the .ova or .ovf
extension) and click Open and then Next.
3. Review the settings on the OVF Template Details screen and click Next to
continue.
4. Confirm the End User License Agreement by clicking on Accept and continue
with Next.
5. The dialog Name and Location asks you to specify a Name and the Location in
the vSphere inventory. Please provide the values and continue with Next.
6. Please choose the cluster where your appliance should be deployed on the Host /
Cluster page. Proceed with Next.
7. The next page is about the Resource Pool. Specify the pool to be used and carry on
with Next.
8. On the page Disk Format, you can choose if you want to have your appliance
deployed as Thick Provisioned Lazy Zeroed, Thick Provisioned Eager Zeroed or
Thin Provisioned. Click on Next afterwards.
9. The next step deals with networking. Please choose the network to which your
appliance should be connected (dropdown list Destination Network) and continue
with Next.
10. You need to specify a couple of properties on the next Property dialog box:
a. Enter and confirm a password for the root account of the appliance in the
Initial root password section.
b. Specify if you want to enable root access via SSH, using the Enable SSH
service in the appliance checkbox.
c. In the Hostname textbox, please specify the full host name (FQDN) of your
virtual machine.
d. Define the Default Gateway, DNS Server, Network 1 IP Address and
Network 1 Mask within the Networking Properties.
Click Next to continue.
11. On the last Ready to Complete screen, please review your settings, choose Power
on after deployment and start the deployment process with Finish.
The deployment process will last approximately one to three minutes. After the
deployment, we can continue with configuring the appliance, as a couple of settings still
need to be adjusted.
After creating a working vSphere SSO Single Sign-On Server or Identity Appliance, we
can move forward with the vRA Appliance. Like the Identity Appliance, the vRA
Appliance is packaged as an .ovf file and must be deployed accordingly. As we have
already described these steps in detail, we will now show you how to configure the
appliance.
Figure 4-1 Configure host settings
Like the configuration of the Identity Appliance, the configuration takes place via a web
browser:
1. Open your web browser and navigate to the URL https://<vrealize-automation-
appliance.domain.name>:5480.
2. Accept any security warning and continue to the configuration page.
3. Log in with username root and your provided password.
4. First check the time settings: Change to the System > Time Zone page and review
the settings. Once finished save your configuration with the Save Settings button.
5. Change to the vRA Settings menu. The Host Settings page configures the host
name and the SSL configuration (Fig. 4-1). Review your hostname in the vRA Host
Settings section. If you are using a load balancer, please type the FQDN of the
balancer in the Host Name textbox, otherwise ensure that the FQDN of the vRA
Appliance is within the textbox.
6. Within the SSL Configuration section, a certificate must be configured. If you do
not want to import an existing certificate, please provide the following input:
• Common Name: FQDN of your vRA Appliance
• Organization: Usually your company
• Organization Unit: Usually your department
• Country Code: Code of your country (e.g. DE for Germany)
Click Save Settings to save your configuration and update the certificate
information (Fig. 4-2).
7. In the next step, we have to connect our appliance to the SSO component. To do
so, change to the SSO menu. Provide the following information:
• SSO Host: FQDN of the SSO component
• SSO Port: Usually on port 7444
• SSO Default Tenant: vsphere.local
• SSO Admin User: administrator
• SSO Admin Password: Your SSO password
Click on Save Settings for your configuration to take effect (it will take several
minutes). If the configuration was successful, the SSO Info will show the status as
connected (Fig. 4-3).
8. Now we have to upload the license key. Change to the Licensing configuration
page, enter your license and click on Submit Key.
9. Change to the Database configuration. Note that we are using an embedded
Postgres database, so there is no need for configuration.
10. Review the Messaging page. Once again, there is no configuration to be done;
vRA is already running a working RabbitMQ instance.
11. Change to the Telemetry menu. If you want to participate in the Customer
Experience Improvement Program and send anonymous data about your
environment to VMware, check the appropriate checkbox.
12. The last step is to review the time synchronization settings. To change the settings,
go to Admin > Time Settings and change the values accordingly.
Hint:
The SSO hostname must be entered in exactly the same way as it was configured on the SSO component. If you specified an IP address there, you also have to specify the IP address on the vRA Appliance. Please also ensure that all characters use the same case as on the SSO component.
The next step in the installation process is to set up the IaaS components. The installation
source files can be found on the vRA Appliance. However, before we can start with the
installation, we should first consider the following issues:
• The user performing the installation must have local administrator privileges.
• It is best practice to have a dedicated service account for vRealize Automation. The
recommended way is to create an Active Directory user and add this user to the local
administrator group on all machines hosting IaaS components.
• If we have a dedicated node for the Microsoft SQL Server, we need privileges on the server. A good practice is to create a database for vRealize Automation first and to assign the database owner role to the vRealize service account.
Regarding Microsoft Internet Information Server (IIS) 7.5, there are a couple of
configuration steps to be taken:
There are also a couple of prerequisites for the IaaS Manager Service:
The installation of a DEM additionally requires Microsoft .NET Framework 4.5.2 and
PowerShell.
Luckily, a PowerShell script is available on the official VMware blog[2], which unburdens you from most of the configuration tasks. VMware provides no warranty for this script; however, in most environments it will run without modifications.
Nevertheless, it is still a good idea to take a closer look at its logic, so that you can modify
it if required. The script is specifically designed to configure the requirements for a
complete installation of the Windows components, on a single node. If you have a
distributed environment, you should take a deeper look at the script, so that you can
understand which parts of the script need to be run on the different nodes. Before you
execute the script, please follow these instructions:
1. Install Microsoft .NET framework 4.5.2.
2. Run Windows PowerShell with Administrator privileges.
3. Change to the folder where the script is located.
4. Run the command Set-ExecutionPolicy Unrestricted.
5. Start the script and provide all the requested information. The script will take a
couple of minutes to run (see Fig. 4-4).
After all prerequisites have been met, the installation itself can begin. First of all, we need
the installation source. To download the setup file, please open the vRA Appliance page
(https://<vrealize-automation-appliance.domain.name>:5480) and change to vRA Settings
> IaaS. Finally, click on the Download the IaaS Installer link (Fig. 4-5). Please also
make sure that the location of the vRA Appliance is encoded in the setup filename (setup_hostname@5480.exe). The downloaded file should already be named appropriately if you access the download page via the FQDN rather than the IP address. Consequently, if you change the filename, it might not be possible to run the installation properly.
1. Start the installation with administrator privileges (right click > Run as
administrator).
2. Click Next on the Welcome page.
3. Accept the end user license agreement by clicking the I accept the terms in the
license agreement checkbox and proceed with Next.
4. The next page requests the User name and the Password for the vRA Appliance.
Type in root as username and your assigned password. Click on the Accept
certificate checkbox and continue with Next.
5. On the Installation Type dialog, choose Complete Install and click Next.
6. In the next step, the Prerequisite Checker runs. If everything is fine, proceed with
Next. Otherwise click on any issue, fix the requirement and check again.
9. The next dialog requests all information needed, in order to register the IaaS
components within the vRA Appliance (see Fig. 4-7):
• Firstly, review the vRA Appliance server name in the Server textbox, load and
check the SSO Default Tenant name and download the certificate. View the
certificate and set the Accept Certificate checkbox.
• Secondly, type in the name of the SSO Administrator
(administrator@vsphere.local) along with its password. Click on Test – the test
should pass.
• Finally, enter the hostname or IP address of the local machine. Again click on
Test to check if the settings are valid.
10. The last screen summarizes all these settings. Please review the output and start
the installation. Depending on your hardware, the installation will take between 5 and 15 minutes to finish. Once the installation has been successfully completed, the
wizard will ask you to work through the initial system configuration.
c.3.5. Installation with CA-signed certificates
While it is quite common to run a server with self-signed certificates for testing, even in a
small environment, it is still recommended to use CA-signed certificates. Notwithstanding
security concerns, this can also be important for user experience. As end users can access
the service catalog of vRealize Automation, they might be confused with any warning
messages related to self-signed certificates. Changing certificates is not trivial in most of VMware's products, and this is especially true for vRA. The reason is that all vRA installations are distributed environments: there is a minimum of three machines where the change has to take place. That is motivation enough to describe the process of changing certificates in detail.
Another important issue is the order in which the certificates are replaced. The reason
behind this is that there are trust relationships between the components that must be preserved. First, the certificate of the Identity Appliance can be changed, then the vRealize
Automation Appliance and finally, the certificates of the IaaS components can be
replaced.
If you only want to change a specific certificate, please consider the following:
• If you change the certificate of the Identity Appliance, you will also need to register
that certificate with the vRealize Automation appliance and the IaaS-components.
• Replacing the vRealize Automation certificate requires you to register the IaaS components again.
• If you replace the certificate of an IaaS component, you will have to register that
component again with the vRealize Automation Appliance.
At this point, we will describe how to perform the installation with CA-signed certificates.
Later we will show how to replace certificates.
Figure 4-8 Active Directory Certification Services configuration
If you don't have a running certification authority, now is the proper time to set one up (if there is already a running certification authority, you can skip these steps). We will show these steps on a Windows system.
4. Select Root CA on the next page (Specify CA Type) if you are configuring your
first CA and continue with Next.
5. The next screen (Set up Private Key) lets you create your new private key. Click on
Next to continue.
6. The next step in the process is to configure the cryptography settings for your CA
(mask Configure Cryptography for CA). Please ensure you have the following
settings (see Fig. 4-9) and click on Next:
• CSP: RSA#Microsoft Software Key Storage Provider
• Key length: 2048
• Hash algorithm: SHA1
7. Please define a name for your CA on the Configure your CA Name page and
continue with Next.
8. The last step of the wizard is to define the validity period of your certificates (Set
Validity Period). Click on the Next button and finish the wizard.
The next step within the configuration is to create a vRealize Automation certificate
template. This template can be reused for all other subsequent templates. We will also
update the Microsoft CA settings, in order to allow Subject Alternative Names (SANs)
within the attributes. We will continue with the following steps:
Once we have finished these configuration steps, we can continue with adding this
template to the list of Certificate Templates:
As well as creating the certificates for the IaaS components, we also need certificates for the Linux appliances. This can be done via OpenSSL on both Linux and Windows operating systems. You will need a running OpenSSL installation, version 1.0.0 or higher.
c.5.1. Creation and configuration of vRealize Appliance certificates
The following steps have to be taken in order to create and configure the vRealize
certificates:
Hint: Use a configuration file for your CSR. When creating a CSR, we need to specify some basic information about the request itself. Using a configuration file increases the reusability of this data and is a good way to document your settings.
Open a text editor (on the computer with OpenSSL installed) and paste the following
configuration:
[ req ]
default_bits = 2048
default_keyfile = rui.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment, nonRepudiation
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:vcva550b, IP:10.10.1.40, DNS:vcva550b.sc.lan
[ req_distinguished_name ]
countryName = DE
stateOrProvinceName = BY
localityName = Nuremberg
0.organizationName = SC
organizationalUnitName = SC
commonName = vcva550b.sc.lan
Before saving the file, make sure to replace all the server names and IP addresses with
your own settings. Then save the file as vrealizeid.cfg (this is the configuration file for the
Identity Appliance). Once finished, repeat the procedure and create a second file for your
vRealize Automation Appliance, save it as vrealizeapp.cfg.
Now we have the input needed to create the CSRs and export the private key. Go to your
command prompt, change to the bin-directory of OpenSSL and type the following
commands:
Now we have to convert the private keys into the RSA format. This can be done as shown (repeat accordingly for the Automation Appliance key):
openssl rsa -in c:\certs\identity\rui-orig.key -out c:\certs\identity\rui.key
The next step in the process is to create the certificates in the Microsoft CA. We will show
the procedure for the vRealize Automation Identity Appliance, however do not forget to
repeat the following steps for the vRealize Automation Appliance:
Before we can upload the certificates, there is one last step: we must convert them to the correct format. Remember that both the vRealize Identity and Automation Appliances need the PEM format. So run the following commands for the conversion:
openssl pkcs12 -export -in C:\certs\identity\rui.crt -inkey C:\certs\identity\rui.key -certfile c:\certs\Root64.cer -name "rui" -passout pass:Vmware1! -out C:\certs\identity\rui.pfx
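Before uploading, it can be worth verifying that the resulting PFX bundle actually contains the certificate chain. A quick check with OpenSSL (same path and password as above):

```shell
REM Sketch: list the certificates inside the PFX bundle without printing the private key
openssl pkcs12 -info -in C:\certs\identity\rui.pfx -passin pass:Vmware1! -nokeys
```

The output should show the appliance certificate (friendlyName rui) as well as the CA certificate taken from Root64.cer.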
Finally, we can upload the certificates to the Identity and Automation Appliance. You can
do that by following these steps:
From a conceptual point of view, the steps taken to replace certificates for the IaaS
components resemble those of the appliances:
We have already shown how to conduct the first two steps, so we will immediately start
here with the third step. Please also note that if you run IIS and the IaaS Manager Service on different nodes, you will have to create two different certificates.
To convert the format of the certificate we need OpenSSL once again. Please open the
command prompt, change to the OpenSSL-bin directory and type the following command:
openssl pkcs12 -export -in c:\certs\vcrealizeiaas\rui.crt -inkey c:\certs\vcrealizeiaas\rui.key -certfile c:\certs\Root64.cer -name "rui" -passout pass:Vmware1! -out c:\certs\vcrealizeiaas\rui.pfx
Before we can use the certificate, we first have to register it with Windows. Work through
the following steps:
1. Click on Start > Run and type mmc.exe to open the Microsoft Management
Console.
2. Click File > Add/Remove Snap-in.
3. Within the list of Snap-ins on the left side of the screen, choose the Certificates Snap-in, add it to the list of selected Snap-ins via the Add button and click on OK.
4. Choose Computer Account on the next page and continue with Next.
5. Now we have to select Local Computer; we can then finish the wizard by clicking on Finish and then OK.
6. The Certificate Snap-in now lets us add certificates. Right-click the folder Personal
and choose All Tasks > Import.
7. Import the certificate in the PFX format.
Once you have registered your certificates, you can reference them during the installation.
If you want to change the certificate later, you must use the vRealize Automation Configuration Tool (you will find it in the Start menu under Programs > VMware > vCAC). With the tool open, choose the appropriate service from the Available certificates dropdown list. However, when using the certificate, please make sure to set the option Suppress certificate mismatch.
The certificate for the IaaS Manager Service can be changed accordingly.
Even after installation, you will have to replace certificates from time to time. This can happen, for example, if you performed your initial installation with self-signed certificates and want to replace them with CA-signed ones, or when your certificates expire. In the following section, we will show the steps required to replace certificates.
The following steps have to be taken in order to update the Identity Appliance certificate:
As we have already shown how to change the certificate at the Identity Appliance, we will skip these steps and explain how to perform the second and the third step.
1. Start PuTTY, or any other SSH client, to connect to the vRealize Automation
Appliance.
2. Log in on the appliance using root as username and your password.
3. Execute the following command (replace the value of the url-parameter with the
FQDN of your Identity Appliance):
/usr/sbin/vcac-config import-certificate --alias websso --url https://identity-hostname.domain.name:7444
4. Restart the vRealize Automation Appliance (this can be done via the web console in
the menu System > Reboot).
5. After the appliance has restarted, make sure to check that the following services are
running (System > Service):
• Authorization
• Authentication
• Eventlog-service
• Shell-ui-app
• Branding-service
• Plugin-Service
Be aware that you might have to wait up to ten minutes for the appliance to reboot and for all services to be running.
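To double-check that the appliance imported the certificate you expect, you can compare it against what the SSO endpoint actually presents (the hostname is a placeholder; 7444 is the default SSO port):

```shell
# Sketch: print subject and validity dates of the certificate served on the SSO port
echo | openssl s_client -connect identity-hostname.domain.name:7444 2>/dev/null | openssl x509 -noout -subject -dates
```

The subject and validity period should match the certificate you installed on the Identity Appliance.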
In the final stage, we must perform an update of the IaaS components. It is sufficient to
update the Model Manager – vRealize will update the other components in the
background.
To start the procedure, open command prompt with administrator privileges and change to
the following directory:
C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe
Now we can download the certificates from the Identity Appliance and place them in the
local certificate trust store of Windows:
Now we only need to restart the IIS via the iisreset command and we are finished.
The following steps have to be followed in order to update the vRealize Automation
Appliance certificate:
The first and the third step should already be well understood, so we will only concentrate
on updating the IaaS server here. Open a command prompt with administrator privileges
and change to the following directory:
Here you can execute the following command (please add the FQDN of your database
server):
Just as with the other nodes, changing certificates at the IaaS level involves a couple of
different steps. Firstly, the certificate must be replaced in IIS. Secondly, you have to
configure your vRealize Automation Appliance in order to ensure that the trust
relationship between the appliance and the IaaS server is fully working.
1. Login to the IaaS Windows host and open the IIS Windows console.
2. Navigate to the Connections-field on your server on the left hand side and choose
Server Certificates on the right hand side.
3. Choose Import (in the Actions menu), browse for your PFX certificate and type your password. Click on OK.
4. Select your Default Web Site and click on Bindings in the Actions pane.
5. A dialog opens. Click on https and then Edit.
6. Now you can select your new certificate. Click on OK to close the dialog.
The final step is to restore the trust relationship between the IaaS server and the vRealize
Automation Appliance. Open a command prompt with administrator privileges and
change to the following directory:
Please run the following commands – however, do not forget to change the endpoints
according to your environment:
Recently, VMware published a certificate generation tool to assist you with creating
signed certificates. Knowledge base article 210781[3] provides an overview of the
generation tool as well as how to access the tool.
As with every software product, things can go wrong in vRealize Automation. While we cannot describe all the problems that can occur in a vRealize Automation environment, we at least want to give some basic information on how to troubleshoot. First of all, if anything goes wrong, please ensure that all components are running compatible versions. If different versions work together, erratic behavior can occur or modules can break entirely.
As described, the SSO component is used for authentication. If any problems occur during the login process, first check whether SSO works in general by trying to log in to other VMware services, such as vCenter Server. Also check the following points:
• DNS: All components should have a fully qualified domain name and be registered with the DNS server.
• Time synchronization: Make sure that time is correctly synchronized on all your components.
• Certificates: Please ensure that all certificates are correctly installed and valid. If
you have problems on a Windows Server, check the IIS website and its binding.
c.6.2. Problems with the vCAC website portal
As described above, many parts of the self-service portal actually come from the Windows IaaS servers, for example the whole content of the Infrastructure menu and the request screen of the service catalog. If you encounter a 404 error message within these
pages, please check if the website components are all working correctly and take a look
into the log files. The locations of the log files can be found in the VMware knowledge
base article number 2074803[4].
If any component fails, please also check whether the appropriate services have started and are registered with the vRealize Automation appliance. You can check the services by logging in to the web console of the vRealize Automation appliance (https://<vrealize-automation-appliance.domain.name>:5480).
If there is a problem with a service, the best approach is usually to restart the services. However, you cannot restart services individually; you can only restart all services at once by executing the following command from the command line:
Please note that the restart of the services takes some time – it can take up to 15 minutes.
All the vRealize Automation appliance services are hosted within a Tomcat instance, so it
is possible to trace the boot process by viewing the messages log file with the following
command:
tail -f /var/log/messages
If you want to restart the Windows services, either restart the appropriate service or the
IIS.
You can also see additional information about the status of the services by browsing to
the following URL:
https://<vrealize-automation-appliance.domain.name>/component-registry/services/status/current
When viewing the result, please search for any error or warning messages; they contain more detailed information about the problem.
c.7 Summary
This chapter described how to perform the installation of vRealize Automation. There are
different deployment scenarios, based on the size of your environment. Installing vRealize
Automation in medium or large environments involves a lot more steps, due to
distribution issues. As well as the initial installation and configuration, you must also give consideration to certificates. A common task is to replace some or all of the certificates in an environment. To make your life easier, we have shown in detail how to conduct all these steps.
5. Design and Configuration of vRealize Automation
We described how to perform the installation in the last chapter. In this chapter we will
continue with the configuration of vRealize Automation. At the beginning of the last
chapter, we described the basic steps to be taken. In case you have forgotten these steps, let's just recap what we are doing in this chapter (we begin with step 6):
• Configure the admin portal.
• Upload a license.
• Add endpoints.
• Create and configure fabric groups.
• Define business groups.
• Create reservations.
• Configure reservation policies and network profiles (optional).
We will not only show you how to perform these installation steps – we will also discuss
the design aspects which influence the way you install vRealize Automation. At this point
in the process, some important design decisions regarding the architecture have to be made, and these decisions can have quite profound consequences. This is especially true of tenant design (we will introduce tenants shortly). Once the tenants are created,
we can continue to explain how all the other components within vRealize Automation are
configured.
After the installation has been completed, you are able to log in to the vRealize
Automation web console. The installation automatically creates a first tenant – the default
tenant. The name of the default tenant is vsphere.local and you have to log in as a system administrator (you assigned the password during the installation).
Tenants will be covered in detail further on, but for now, let’s explain what a tenant is:
When you log in for the first time, you will do so as a system administrator – however
there are many other roles in vRealize Automation and we will cover each of them. A
system administrator has the following privileges and responsibilities:
vRealize Automation can send email notifications as well as receive emails from users
during runtime. In order to configure the email settings, go to Administration > Email
Servers and click on the [Add] button in the header line of the main table. Once a modal
dialog has opened, you can specify your email settings. Please click on Test Connections
before you save your configuration.
5.1.2.2. Branding of the homepage
For many companies it is quite important to adjust the branding of the self-service portal,
in order to better reflect corporate identity within vRealize Automation. However, the
options available are quite limited. You can change the following items within the user
interface:
• Header logo
• Company name
• Product name
• Background color
• Text color
• Copyright notice
• Privacy policy link
• Contact link
You can find these configuration settings within the Administration > Branding menu.
If you edit the default tenant, you cannot change the settings on the General tab. However,
with any other tenant, you can modify the description and the contact email. The URL for
the default tenant is always https://<vrealize-automation-appliance.domain.name>/vcac. If
you create an additional tenant, there is a naming convention for the URL. vRealize
Automation uses the tenant name as a suffix for the URL. If your tenant name is sc, the
URL for the self-service portal will be https://<vrealize-automation-
appliance.domain.name>/vcac/org/sc. If you click on Next, you will be able to configure
your Identity Stores.
The next tab lets you configure the identity stores. Before you can assign any user
permissions in vRealize Automation, you must configure an authentication source. As you
know, vRealize Automation is already connected to the vSphere SSO, or to the vRealize
Identity Appliance, so if you add an identity store to a tenant, it will basically be added to
the underlying SSO component. In that respect, it is worth mentioning that this dialog essentially serves as another graphical frontend for the SSO component; however, it is only used in vRealize Automation. There are different directory services available for
selection:
• OpenLDAP.
• Active Directory.
• Active Directory Native mode (more secure, but only available to the default
tenant).
At this point we can discuss the configuration in detail (see Fig. 5-3):
The last step involves setting up administrator privileges, for the tenant and its resources.
There are two different roles to be assigned:
• A tenant administrator is responsible for tasks within a tenant, such as assigning user permissions, managing the service catalog or configuring approvals.
• IaaS administrators connect vRealize Automation to the environment in order to make resources available for provisioning.
Choose at least one user or one group for each of these roles and click on Update or Add
to finish the configuration.
• Depending on your license, you will have different features available in vRealize Automation. There are different editions (Standard, Advanced and Enterprise), each with a different feature set.
• vRealize Automation can be licensed as part of the vCloud Suite or as a standalone
product. If you are using the vCloud Suite license, you can only provision to
vSphere. If you have the standalone license, you can deploy to any hypervisor or to
different cloud providers.
The process of uploading a license for the IaaS components is pretty easy: Go to the
Infrastructure > Administration > Licensing page, where you will find an Add License
button on the right hand side of the main pane (see Fig. 5-4). When you click on the
button, a modal dialog will open and there you can enter the license key. Once finished,
save your data by clicking OK.
Once you have finished the setup of an endpoint, vRealize Automation can use the
underlying compute resources. There is a 1:n relationship between an endpoint and its compute resources (which is depicted in Fig. 5-5). An endpoint represents the connection
information to connect to certain compute resources. There are different kinds of compute
resources – they can be vSphere Clusters, Hyper-V hosts, virtual datacenters or even
Amazon AWS regions.
While endpoints are needed, vRealize Automation does not use them directly to
communicate with the underlying systems. Instead, DEMs and/or agents are used. Please
note that you must install and configure an agent for some hypervisors and for others you
can skip this step (this depends on what you want to connect to).
Endpoints can be created and configured by navigating to the Infrastructure > Endpoints
> Endpoints page. In the main pane, you can create a new endpoint by clicking on the
New Endpoint button, or you can hover over an existing endpoint and click Edit.
5.2.3.1. Creating a vSphere Endpoint
The vSphere agent is usually installed by default when performing a full installation of
vRealize Automation. This means there is usually nothing to do other than to check if the
vSphere Agent is running as a Windows service. However, while installing you also
specified the endpoint name the vSphere Agent is connected to (there is always a one-to-
one relationship between a vSphere agent and a vSphere endpoint). So when you create
the endpoint, please make sure to use the same name as specified during the IaaS
installation.
The next prerequisite for creating an endpoint is to define the credentials used. Credentials
can be reused, which makes perfect sense if you have a user account with privileges on
more than one vCenter server.
Configuring credentials in vRealize Automation is pretty easy (they consist of only a
username/password combination and a description), just navigate to the Infrastructure >
Endpoints > Credentials page and click on the New Credentials button (Fig 5-6 depicts the
credentials table). With the modal dialog opened, type your credential data. Please
consider the format of the username: For vSphere, a username must be in the format
domain\username. Once you have entered all information, save the data by clicking on the
Save button.
Hint: Endpoints
This chapter focuses mainly on vSphere Endpoints. However, as there are many kinds of
endpoint, we want to briefly describe them:
The last stage is to create the endpoint. Navigate to Infrastructure > Endpoints >
Endpoints. On the right hand side, in the endpoint table header, there is a button. Click on
New Endpoint > Virtual > vSphere (vCenter). A configuration dialog opens (see Fig. 5-7).
Perform the following steps for the endpoint configuration:
1. Type the name of the endpoint, within the Name textbox, as it was specified during
the installation.
2. Give a description of the Endpoint (optional).
3. To communicate with vSphere, the vSphere web service API is used. You have to
specify the URL of the vSphere API accordingly (the format is https://vcenter-
server/sdk) in the Address field.
4. If you are using NSX in your environment, you can check the Specify manager for
network and security platform checkbox and enter the URL of the server (format:
https://nsx-server).
5. Click OK to finish.
Hint: Check if the endpoints are working
It usually takes some time for the configuration to take effect (up to five minutes). After
that period, vRealize should have found the compute resources behind the endpoint. If you suspect that something has gone wrong, check the log entries. These can be found under Infrastructure > Monitoring > Logs.
If you are adding resources (or changing some configurations) and want vRealize
Automation to be aware of these changes, you must synchronize vRealize Automation
with vSphere (also referred to as ‘data collection’). While synchronization happens
automatically once a day, you can also trigger it manually. Navigate to Infrastructure >
Compute Resources > Compute Resources and hover over the resource you want to
synchronize – there is a menu item to start the Data Collection.
Once the first data collection has completed successfully, the compute resources will have been added to vRealize Automation. Taken together, all the compute resources added by configuring endpoints are called the fabric.
5.2.4. Background: Data collection
Data collection is the process of synchronizing the environment with the vRealize
Automation database. Data collections take place at fixed intervals and there are different
kinds of data collection:
• The Infrastructure Source Endpoint Data Collection regularly (once a day) loads information regarding hosts and virtual machine templates into vRealize Automation. When using Amazon AWS, additional information regarding regions and available virtual machines is retrieved. With physical machines, the available memory and CPU will be loaded into the database.
• The Inventory Data Collection analyzes the machines. This involves checking the
networking properties and memory as well. Information relating to machines not
provisioned and managed by vRealize Automation is also retrieved. By default, the
Inventory Data Collection occurs daily.
• The State Data Collection checks the state of single machines and verifies if the
machines are still available. This workflow runs every 15 minutes.
• The Performance Data Collection loads performance data into the vRealize
Automation database (this happens once every day).
• The vCloud Networking and Security Inventory Data Collection detects new objects
in vCNS.
• The WMI Data Collection can retrieve information about Windows machines in the environment.
Creating an AWS connection differs slightly from creating other endpoints, so we will
illustrate how to configure such an endpoint separately. Before you can configure it,
however, you must have your AWS access key and secret key at hand.
The first step is to create the Amazon AWS credentials (see Fig. 5-8). This can be done
by following the steps outlined here:
After a short while, the data collection will take place and you will have your Amazon
regions as compute resources available in vRealize Automation.
The next big step in configuring vRealize Automation is the creation and configuration of
fabric groups. Fabric groups are used to group your compute resources into different
manageable entities in order to be able to configure these groups separately. There are
several reasons to put your resources in different fabric groups – for example, if you want
to isolate your hardware resources from each other, so that different departments cannot
share them.
Each fabric group needs a fabric administrator, who is responsible for configuring the
resources of the fabric group and assigning them to different user groups. The relationship
between a fabric, fabric groups and endpoints is depicted in Fig. 5-10. The IaaS
administrator (the role was assigned when creating a tenant) is responsible for the fabric.
Endpoints are used to add compute resources to the fabric. Compute resources can be
grouped into fabric groups and are maintained by fabric administrators.
Figure 5-10 Fabric, fabric groups and endpoints
To create a fabric group, navigate to the page Infrastructure > Groups > Fabric Group
and click New Fabric Group on the right-hand side of the screen. Provide the following
information:
Once you have filled in the relevant fields, click OK to save your fabric group (see Fig. 5-
11).
• They have to adhere to DNS naming conventions, i.e. they may only contain the
ASCII letters a-z and A-Z and the digits 0-9. No other special characters are
permitted.
• If a machine prefix is used for provisioning Windows machines, there is a maximum
length of 15 characters in total.
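These rules are easy to check programmatically. The following sketch (our own illustration, not part of vRealize Automation) validates a candidate machine name, built from a prefix and a zero-padded counter, against the two constraints just listed:

```python
import re

def valid_machine_name(prefix, number, digits=3, windows=False):
    """Validate a machine name built from a prefix plus a zero-padded
    counter: ASCII letters and digits only, and at most 15 characters
    in total for Windows machines."""
    name = f"{prefix}{number:0{digits}d}"
    if not re.fullmatch(r"[A-Za-z0-9]+", name):
        return False                      # special characters are not permitted
    if windows and len(name) > 15:
        return False                      # NetBIOS name limit for Windows
    return True
```

For example, the prefix "web" with counter 7 yields "web007" and passes, while a prefix containing an underscore fails the DNS check.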
Machine prefixes can be created by navigating to the Infrastructure > Blueprints >
Machine Prefixes page and clicking on the New Machine Prefix button. Then fill out the
different textboxes as described.
In many cases, the built-in machine prefixes will be sufficient; however, from time to
time there is a need for more sophisticated naming conventions. This can be achieved
with vRealize Orchestrator.
5.2.7. Defining business groups
Remember, we created fabric groups in order to create isolated entities for managing
hardware resources. Just as we grouped hardware resources, we also want to group users
of the self-service portal into business groups for ease of management. In practice,
business groups usually map to organizational units such as teams (for example financial
teams, DevOps teams and so on) or departments.
Once created, business groups must be mapped to fabric groups. That way they will have
the permissions to provision further resources (the mapping is done via reservations,
which will be discussed later in great detail).
Creating a business group also allows you to specify user roles within the organizational
unit. vRealize Automation is aware of the following roles within a business group:
• There must be at least one business group manager. This role can approve the
provisioning of machines. Furthermore, it is this user who checks how much of the
available resources are currently in use, and who controls what kind of resources
can be deployed. Of course, the business group manager can also provision
resources themselves – or on behalf of other users.
• Support users are able to provision and manage machines for themselves or on
behalf of other users. This is especially useful if normal users only have the right to
work with existing machines and are not allowed to provision resources themselves;
support users can also help with machines whose owner is currently on holiday.
• In order to create and manage a machine within a business group, you must have
membership of a user group in that business group. Of course, users can also be
members of many different business groups.
Figure 5-12 Creation of a business group
With that background knowledge, it should be easy to create a business group. Navigate
to the Infrastructure > Groups > Business Groups page and click on the New
Business Group button. Afterwards, fill out the dialog fields as described (see Fig.
5-12):
• If you already have an existing business group, you can use the Copy from existing
group dropdown list.
• Specify a Name for the business group.
• Optionally you can type in a Description.
• Choose the Default Machine prefix for the business group. If you have not created
one yet, you can also configure a new machine prefix here.
• Provisioned Windows machines can be placed in an Active Directory container. You
have to specify the Distinguished Name (DN) of the container to use this feature.
• Specify who will become a member of the Group Manager role in the appropriate
text box.
• You can send notifications regarding provisioning if you fill out the Send manager
emails text box. Please note that you must configure an outbound email server first
(as described in Chapter 4).
• Specify which users will become members of the Support Roles.
• Configure all normal members in the User Roles text field.
• In most cases, you can leave the Custom Properties empty. We will talk about
custom properties in detail later.
• Click OK to save your changes.
Figure 5-13 Business groups, reservations
Once you have created the first business group, the business group manager can see
resource consumption on the business group main page. This page displays information
regarding the total number of machines, quotas, allocated memory and storage (in % and
in total GB).
Now we have created our fabric groups (groups defining which resources can be used for
provisioning) and business groups (groups defining who is able to request any new
resources). However, if there are multiple fabric groups and business groups, the question
remains which fabric group(s) should be used by a given business group. vRealize
Automation uses reservations to link these groups to each other. Fig. 5-13 depicts the
relationship between these entities. You can see that there are three different business
groups available (A, B and C). While group B can access all underlying fabric groups via
three reservations, both business group A and C are only linked to two fabric groups.
In order to be able to create reservations, you must first be a holder of the fabric
administrator role. In addition, it is important to understand that there are different kinds
of reservations:
• A physical reservation always allocates the whole machine. It is not possible to have
multiple reservations on the very same hardware. If another reservation is needed,
you have to delete the existing one first.
• You can also have reservations on a cloud environment. To configure a cloud
reservation for Amazon AWS, you need to configure an AWS endpoint first; for
vCloud Director, you need a vCloud Director endpoint.
Figure 5-14 Reservation information
• If there is more than one reservation for a business group, the priority value helps
determine which one is used for provisioning. By default, vRealize Automation
selects the reservation with the highest priority. If that reservation has run out of
capacity, the reservation with the next highest priority will be used. If there are
several reservations with the same priority, vRealize Automation uses a round
robin algorithm to balance the workload among them.
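The selection logic just described can be sketched in a few lines. This is a simplified illustration, not vRealize Automation's actual implementation; it assumes that a lower priority value means higher priority and tracks remaining capacity as a simple machine count:

```python
def pick_reservation(reservations, request_no):
    """Pick a reservation for a provisioning request: choose among the
    reservations with the best priority and free capacity, rotating
    round robin between equal-priority candidates.

    reservations: list of dicts with 'name', 'priority' and 'capacity'.
    Assumption: a lower priority value means higher priority."""
    available = [r for r in reservations if r["capacity"] > 0]
    if not available:
        return None                           # nothing left to allocate
    best = min(r["priority"] for r in available)
    tied = [r for r in available if r["priority"] == best]
    choice = tied[request_no % len(tied)]     # round robin on ties
    choice["capacity"] -= 1
    return choice["name"]
```

With two priority-1 reservations and one priority-2 reservation, requests alternate between the first two until their capacity is exhausted, then fall through to the priority-2 reservation.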
Figure 5-15 Reservation resource allocation
Next you can move on by clicking the Resources tab (see Fig. 5-15). Here you can specify
which share of the compute resource will be used by the reservation:
• The upper memory table lets you define the memory share (you also see how much
memory is available).
• The storage table helps you to reserve storage on a storage path, along with priorities
(if you have multiple storage paths).
• Finally, you can select a resource pool to be used.
The third tab is for configuring networks (Fig. 5-16). Check all the network paths
that should be available for provisioning. There is a dropdown list for the network
profiles as well – however, we will talk about these later.
The last screen refers to alerts. By default, capacity alerts are turned off. If you turn them
on, you can specify the thresholds at which alerts should be fired (there are alerts for
storage, memory, CPU and machine quota). Furthermore, don't forget to add some
recipients (and optionally check whether the group manager should also be notified). The
last item to configure is the reminder frequency (days), if you want to send multiple
notifications.
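Conceptually, a capacity alert is nothing more than a threshold comparison per resource type. The following sketch (an illustration of the idea, not product code) returns the resources that would currently trigger an alert:

```python
def capacity_alerts(usage, thresholds):
    """Return the resources whose current utilization (in percent)
    has reached or exceeded the configured alert threshold.
    Both arguments map a resource name to a percent value."""
    return [resource for resource, percent in usage.items()
            if resource in thresholds and percent >= thresholds[resource]]
```

For instance, with memory at 85 % against an 80 % threshold and CPU at 91 % against 90 %, both would be reported, while storage at 60 % stays silent.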
5.2.8.1. Configuring Reservation Policies (optional)
At this point, we have covered nearly all preliminary work required before we can
configure which machines should be deployable from vRealize Automation.
However, before talking about blueprints, we first need to introduce another powerful
concept of vRealize Automation – reservation policies. In short, reservation policies are
used if your business group has several reservations available, but you need to restrict
which of these reservations should be used for a particular machine. In practice, a
reservation policy can be used in many places. For example, a desktop machine will
probably have different hardware requirements from those of an SQL Server.
Consequently, you could have two different fabric groups – one fabric group called
“Bronze FG” with entry-level performance and another fabric group called “Gold FG”
with high-performance hardware. You could then deploy desktop machines to the Bronze
FG, and SQL Servers to the Gold FG.
Reservation policies help you to implement such use cases. Technically, reservation
policies simply connect blueprints and reservations (blueprints specify the machines to be
deployed). Fig. 5-17 shows such a mapping: There are two different hardware levels that
can be used for provisioning. Such definition of levels is also called tiering.
There is a 1:n relationship between reservation policies and reservations: each
reservation can have at most one reservation policy, but the same reservation policy
can be mapped to several reservations.
Three steps are necessary to configure reservation policies:
Go to the Infrastructure > Reservations > Reservation Policies page and click the New
Reservation Policy button. Once the dialog has opened, you can specify a name and
description for the policy, then save it with the green Save button.
So far we haven’t talked about creating blueprints – nevertheless, if you already have
one, navigate to Infrastructure > Blueprints > Blueprints. If you edit an existing
blueprint, or create a new one, there is a dropdown list named Reservation policy on the
first configuration page (Blueprint information) where you can specify the policy.
The third step is to assign a reservation policy to a reservation. Once again, this is quite
easy. We have already mentioned that there is a dropdown list on the reservation
configuration page. Once we have created a reservation policy, the dropdown list will
reflect this, with a new policy ready to select.
In addition to the normal reservation policy, there is also a second kind of reservation
policy - storage reservation policies. From a conceptual point of view, they are quite
similar to standard reservation policies; however they specifically apply to storage
volumes.
Storage reservation policies can be created on the Infrastructure > Reservations >
Reservation Policies page as well. You also have to assign them to a blueprint (they are
configured on the second build information tab within the storage volume table).
However, the last step is slightly different (they will be assigned to a compute resource
instead of a reservation). Work through the following checklist:
• External Network Profiles help you to override the external network settings with
your own configuration.
• You define a Private Network if you want to deploy your virtual machines to an
isolated network environment.
• Routed Networks allow segmentation of an IP address space with its own routing
table.
• NAT Networks consist of a private network, where you deploy your virtual
machines and an external or routed network that is connected to the private network
via a NAT Router.
There is a big difference between external network profiles and the other three. An
external network is an existing physical network, statically configured at the hypervisor
level. The other three network profiles all define virtual networks that are created at the
provisioning stage. These networks cannot be created directly by vRealize Automation,
but rather are built with the help of NSX.
In the following section, we will show you how to create and map an external network
profile to a reservation. The other three network profiles will be discussed in chapter 7.
If you want to create a network profile, be sure to log in as a fabric administrator. Then
you can perform the steps as described:
1. Navigate to the Infrastructure > Reservations > Network Profiles page.
2. Choose New Network Profile > External.
3. Specify a name.
4. Optionally you can assign a description.
5. Type the subnet mask in the Subnet mask text box.
6. Enter the gateway IP address in the Gateway text box.
7. Specify a DNS/WINS group for the network profile.
If you want vRealize Automation to assign the IP addresses, you can move along to the IP
Ranges tab and create a pool of IP addresses:
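Such an IP range is essentially a pool from which addresses are handed out to machines and returned when machines are destroyed. A minimal sketch of the mechanism (our own illustration, using Python's standard ipaddress module):

```python
from ipaddress import IPv4Address

class IPPool:
    """A minimal IP address pool over an inclusive start-end range."""

    def __init__(self, start, end):
        first, last = IPv4Address(start), IPv4Address(end)
        self.free = [first + i for i in range(int(last) - int(first) + 1)]
        self.allocated = {}

    def allocate(self, machine):
        """Hand out the next free address to a machine."""
        ip = self.free.pop(0)
        self.allocated[machine] = ip
        return str(ip)

    def release(self, machine):
        """Return a machine's address to the pool when it is destroyed."""
        self.free.append(self.allocated.pop(machine))
```

A released address becomes available again for later requests, which is exactly the behavior you want from a managed IP range.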
After having discussed the most important configuration steps, we now want to shift our
focus to security. We already mentioned that there are different roles in vRealize
Automation. vRealize Automation's graphical user interface is security-trimmed, i.e.
you only see the configuration items in the menus you are permitted to access. This can
sometimes be quite confusing, especially when you do not know whether you lack the
permission to do something or have just forgotten where a certain configuration item is
located in the graphical user interface.
Essentially, vRealize Automation comes with three different categories of roles: system-
wide roles, tenant roles and business group roles.
There are three system-wide roles: the system administrator, IaaS administrator and the
tenant administrator.
For each tenant, there are also different roles: the tenant administrator, the service
architect and the approval administrators.
When creating a business group, different roles can be defined as well: the business group
manager, support users, users and approvers.
5.4 Summary
This chapter showed the basic configuration stages of vRealize Automation. This involved
a couple of steps. Firstly, we showed how to set up global settings. Next, we created a
tenant and showed you how to add compute resources, via endpoints, to the fabric. We
then introduced the concepts of fabric groups and business groups and described how to
relate them via reservations. We also introduced reservation policies – a sophisticated
mechanism to implement rules governing where machines are provisioned. Finally, we
briefly discussed network profiles, roles and permissions.
Role: Fabric administrator
Permissions: Manage physical machines and compute resources within a fabric group;
manage reservations and reservation policies; manage build profiles and machine
prefixes; define cost profiles.
Role: Approver
Permissions: Approve machine requests.
Now that we have shown how to initially configure vRealize Automation, it is time to
deal with blueprints. Blueprints are the most important entity within vRealize
Automation – they define how to provision and manage the lifecycle of IaaS resources.
There are different kinds of blueprints in vRealize Automation: physical blueprints,
virtual blueprints, cloud blueprints and multi-machine blueprints. This chapter will
address the first three kinds; the multi-machine blueprint will be explained in
chapter 7.
A blueprint defines all aspects of a lifecycle. Besides the question of how to provision an
IaaS resource, there are other aspects like approval, managing at runtime, retiring the
machine or archiving. These stages make up the lifecycle of a machine. Such lifecycles
are depicted in Fig. 6-1. There are different blueprints presented – one for a desktop
machine, another for a production system and a third for a dev/test-system. Please note
that all of these have their own settings regarding security, SLAs and cost profiles or
policies.
Extensibility
The feature set of vRealize Automation is already quite comprehensive. However, as with
any other standard software, it probably cannot deliver all the requirements of a project
out of the box. There is a lot of room for adapting a machine's lifecycle.
For example, many companies have their own IP address management tool (or any other
third-party tool) which should be integrated into the provisioning workflows. For such
scenarios, blueprints have a specific extension mechanism. You have probably noticed that
the graphical user interface often allows defining custom properties. These properties can
also be used to customize the lifecycle of machines provisioned by vRealize Automation.
vRealize Orchestrator - which is part of vRealize Automation - also makes use of custom
properties to implement additional functionality. We will be talking more about
extensibility later on.
Blueprints can be defined globally per tenant or at the business group level. No matter
what kind of blueprint is used, publishing it requires the relevant permissions. Tenant
administrators are in charge of creating and managing global
blueprints. Global blueprints can be marked as master blueprints in order to use them as
templates. Business group managers can create blueprints as well – however, these
blueprints are locally assigned to that business group.
Machine leases
While it is quite easy for end users to provision their own machines, the situation might
arise, at some point in time, where all the available resources for provisioning have been
exhausted. In addition, there are costs arising from machines that are not used anymore.
To help avoid such issues, machine leases can be assigned. This means that, when
creating a blueprint, you can define how long a provisioned machine remains deployed.
When the lease is over, it must be extended; otherwise the machine is archived or
destroyed and its resources are released.
Reclamations
Even when you have machine leases configured, the remaining capacity within your
datacenter can be low, so that no further machines can be created. From an
administrator's point of view, it is difficult to choose a machine for expiry, because in
most cases administrators have no knowledge of how a machine is used (i.e. whether it is
still needed).
In this case, vRealize Automation helps to identify underused machines (in terms of CPU,
memory, network or disk consumption). Administrators can then ask the owners of such
machines if they are still required. Based on the owners’ reply, the machine resources can
be released or kept.
While blueprints specify the hardware configuration for newly provisioned machines,
changes at runtime are quite common. These reconfigurations are only possible when the
blueprint allows for scale-out or scale-in. This is done by defining ranges for the
provisioned hardware resources: for example, we initially assign 4 GB of memory, but
allow end users to have up to 8 GB. At runtime, end users can then use the self-service
portal to apply these changes and restart machines with the new settings.
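At its core, this amounts to checking a user's request against the interval defined in the blueprint. The following sketch illustrates the rule (our own illustration, not the product's code), including the convention that a missing maximum fixes the value:

```python
def reconfigure_allowed(requested_mb, min_mb, max_mb=None):
    """Check whether a requested memory size is permitted by the
    blueprint: if no maximum is defined, the value is fixed at the
    minimum; otherwise any value within the interval is allowed."""
    if max_mb is None:
        return requested_mb == min_mb
    return min_mb <= requested_mb <= max_mb
```

With a 4096 MB minimum and an 8192 MB maximum, a request for 6144 MB is accepted, while 16384 MB is rejected.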
Machine lifecycle
While there are different blueprints in vRealize Automation, they all follow a master
workflow when provisioning new machines. This master workflow has the following
stages:
• Expired: the machine's lease time has expired and the machine is turned off.
• Finalized: the machine has been disposed of and is about to be removed from
management.
Depending on the kind of blueprint, the lifecycle encompasses additional stages. For
example, virtual blueprints which are provisioned by cloning can have an additional stage
for applying guest specifications to a machine.
Monitoring workflows and viewing logs
As with any software, errors can occur in vRealize Automation, especially during
provisioning. To troubleshoot these errors, vRealize Automation provides several log files,
all of which are viewable from the graphical user interface. These log files serve different
purposes, which are described in the following:
• Infrastructure > Monitoring > Audit Logs: Provides information about the status
of virtual, Amazon and multi-machines. NSX, reclamation and reconfiguration
events are also logged.
• Infrastructure > Monitoring > Distributed Execution Status: View the status of
DEMs and the details of scheduled workflows.
• Infrastructure > Monitoring > Log: Default log information.
• Infrastructure > Monitoring > Workflow History: Status and history of executed
DEMs and other workflows.
• Administration > Logs: Display logs for the vRealize Automation installation
(only for system administrators).
• Requests: Status of current provisioning (for end users).
Blueprint Information
Fig. 6-2 depicts the Blueprint Information register. When creating a new blueprint,
please provide the following information:
Hint: Blueprint
While the description field is optional, it is nevertheless highly recommended to enter
some meaningful information here, as this text will appear in the service catalog. The
format of the description text should be consistent across all blueprints. It should include
information covering the virtual machine’s installed software, such as operating system,
installed tools (like VMware Tools or Puppet) or any other custom software. By providing
a meaningful description, the self-service catalog becomes much more intuitive to end
users.
<CustomDataType>
<Data Name="Nuernberg" Description="Nuernberg Datacenter" />
</CustomDataType>
After having saved the file, we must restart the Manager Service. Finally we can associate
a location with our Compute Resource. Navigate to Infrastructure > Compute
Resources > Compute Resources, edit our resource (click on Edit) and choose a location
for that object.
Build Information
There are a lot of different ways to provision machines, depending on the type of
blueprint. We will discuss this tab later in this chapter.
Properties
The Properties tab (Fig. 6-3) is used to override default settings related to the lifecycle of a
blueprint. As discussed, every machine passes through different stages during its
lifecycle. It begins with the request and approval of a machine. Once a machine is
approved, it is provisioned and managed by a user. At the end of the lifecycle it expires,
is archived and finally destroyed. These stages also represent points in
time, where additional behavior can be “hooked” in. There are plenty of use cases, for
example:
• The hostname custom property overrides the default machine prefix, so that a user
can enter the hostname at the point of request (see Fig. 6-4).
• There is a custom property to define in which vCenter folder a provisioned
virtual machine should be placed.
In most cases, you would like to add not just a single custom property, but a set of
different custom properties to your blueprint (the custom property documentation covers
more than 50 pages). As adding custom properties to blueprints can be quite cumbersome,
vRealize Automation uses build profiles to group custom properties. Once defined, you
only need to add the build profile to your blueprint and all the contained custom properties
will be applied. vRealize Automation already provides a set of build profiles, for example
Fig. 6-5 shows the Remove from Active Directory build profile. This can be used to
clean your Active Directory environment, when decommissioning a machine.
Actions
Once a machine is provisioned, the user gains control and can manage it. Users can
perform a set of actions on a machine, and this list of “possible” actions can be configured
on the Actions tab. Fig. 6-6 depicts the set of allowed actions on a machine. It is important
to note that actions are not permissions, they just define the technical changes that are
possible on a machine. Permissions – on the other hand – are assigned when configuring
the service catalog.
Publishing a blueprint
Once a blueprint is created, there is a final step required before we are able to add it to the
service catalog: it must be published. Publishing involves the following steps:
• vSphere (vCenter)
• Hyper-V
• Hyper-V (SCVMM)
• KVM (RHEV)
• Xen Server
• Generic
The basic workflow only creates an empty container for the operating system and hence is
the easiest one to configure. Open or create a blueprint and perform the following steps:
• Basic (all platforms): Creates a virtual machine container without an operating
system. The end user can manually install the OS.
• Clone (vSphere, KVM (RHEV), SCVMM): Provisioning via cloning, based on a
template. This works for Windows as well as for Linux. The OS can be customized
via a customization specification.
• Linked Clone (vSphere): Compared to full clones, linked clones conserve disk space
and allow multiple virtual machines to use the same software installation.
• FlexClone (vSphere): Disk-conserving cloning of a virtual machine, based on the
NetApp FlexClone technology.
• Linux Kickstart (all platforms): Provisioning of a Linux virtual machine by booting
from an ISO image. A kickstart or autoyast configuration file is needed. A
distribution image must be provided as well.
• WIM Image (all platforms): Provisioning via WinPE and a Windows Imaging File
(WIM) file.
• External (all platforms): Used to integrate third-party automation systems like HP
Server Automation and BMC BladeLogic into vRealize Automation. vRealize
Automation initiates the provisioning and then hands over control to the automation
system.
5. If you are working with Hyper-V (SCVMM) you can optionally assign a Virtual
hard disk, a Hardware profile and an ISO. If you are using the KVM (RHEV)
blueprint, there is an additional ISO field in the basic workflow.
6. The next step is to configure the machine resources regarding CPU, Memory (MB)
and Storage (GB). You always have to provide a minimum value. The values are
fixed if you leave the Maximum boxes empty. Otherwise, users are free to choose a
value between the minimum and maximum.
7. Specify the Lease (days). A blank value means there is no expiration date.
8. Optionally, you can define the Storage Volumes to be mounted
9. Define the maximum number of volumes.
10. Choose the number of network adapters that can be attached to a virtual
machine.
In addition to this, if you are using vSphere, there is an additional custom property to be
configured, i.e. the operating system must be set (this is a required input when creating a
vSphere virtual machine container object):
1. Change to the Properties tab and click on the New Property button.
2. Name the custom property VMware.VirtualCenter.OperatingSystem
3. Provide the value for your operating system, e.g. windows7Server64Guest for
Windows Server 2008 (and R2) 64 bit. Please take a look at the vSphere
documentation for a complete list of values.
4. Click the Save button.
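The value must match a guest identifier that vSphere knows. A small lookup helper makes this explicit; the Windows Server entry comes from the text above, while the other two entries are common vSphere guest IDs included here only as illustrative assumptions:

```python
# Map human-readable OS names to vSphere guestId values. The first entry
# is from the text; the other two are assumed common values, shown for
# illustration only - consult the vSphere documentation for the full list.
GUEST_IDS = {
    "Windows Server 2008 R2 (64 bit)": "windows7Server64Guest",
    "RHEL 6 (64 bit)": "rhel6_64Guest",
    "CentOS (64 bit)": "centos64Guest",
}

def operating_system_property(os_name):
    """Build the custom property expected by a vSphere virtual blueprint."""
    return {"VMware.VirtualCenter.OperatingSystem": GUEST_IDS[os_name]}
```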
The most popular provisioning workflows for virtual blueprints are clone and linked clone.
This is because they are both quite easy to configure and the virtual machine itself can be
deployed in a relatively short period of time.
Unlike clones, linked clones are based on snapshots. When creating new virtual
machines based on snapshots, vSphere does not copy all the original files, but only saves
delta files for the changes between the parent files and the new virtual machine. As a
consequence, linked clones represent a disk-preserving and timesaving alternative to full
clones. Full clones, on the other hand, copy the whole set of disks each time a new
machine is provisioned. Fig. 6-7 compares the two approaches.
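The disk savings become tangible with a back-of-the-envelope calculation (the sizes below are made-up example values, not measurements):

```python
def datastore_usage_gb(base_gb, delta_gb, machines, linked=True):
    """Rough datastore consumption for n cloned machines: linked clones
    share one base disk and add a small delta per machine, while full
    clones copy the complete disk set for every machine."""
    if linked:
        return base_gb + machines * delta_gb
    return machines * base_gb
```

With a 40 GB template, 2 GB of changes per machine and ten machines, linked clones consume roughly 60 GB where full clones would consume 400 GB.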
Perform the following steps, in order to configure provisioning based on a linked or full
clone:
The rest of the fields can be filled out according to the basic workflow. Fig. 6-8 shows a
picture of the user interface.
Once you have done this, you can use the FlexClone workflows. Compared to linked
clones, FlexClones have some advantages. One of the main characteristics of linked
clones is that they need their base disks on the same datastore as the delta disks.
Furthermore, FlexClones support hardware offload, which means that all copying is
essentially done by the storage array and vSphere does not need to be involved.
However, so far FlexClones are only supported on NFS datastores; VMFS datastores are
currently not supported.
The Linux kickstart workflow helps to automate the provisioning of Linux machines.
However, there are a couple of steps that have to be taken in order to get the Linux
kickstart workflow running.
6.3.4.1. Preparing the Linux boot ISO
In nearly every Linux distribution, a bootable image can be created via the isolinux tool.
Most CentOS DVDs already contain an isolinux folder, which can be used as a basis
for creating the boot ISO. Once you have mounted the DVD, you need to extract the
folder and then edit isolinux.cfg (the following snippets show the configuration for
CentOS 6.5):
# Begin isolinux.cfg
default vesamenu.c32
timeout 1
Please customize the isolinux.cfg according to your needs (change the webserver and also
ensure that the network card is correct).
Next navigate to the directory containing the isolinux folder and execute the following
command:
This command creates the file boot.cat, which is the catalog needed by the boot loader.
The file isolinux.bin is the image of the boot loader. The output file will be
RHEL6x64.iso.
If you want to create the bootable image in Windows, there are also free tools available,
for example ImgBurn.
6.3.4.2. Creating kickstart config
The next step is to create a kickstart configuration file. Fortunately, there is already a
sample configuration file that can be used as a starting point. To locate it, first unpack the
LinuxGuestAgent package and then open the subdirectory that corresponds to the
operating system you wish to deploy. Open the sample-https.cfg file in an editor. The file
looks like this:
The kickstart file configures many things, e.g. the root password, locale settings, the
partitioning of the guest machine's hard disk and the installation of the guest agent on
that machine. We will now discuss how to understand and modify the file according to
your needs:
http://support.microsoft.com/kb/297337.
Once you have created the bootable ISO image and finished and uploaded your kickstart
file, we have to make sure that there is a DHCP server available on the network, so that
our new Linux instance obtains an IP address and can boot safely.
The last step is to configure the blueprint itself. Follow these steps to set up the Linux
kickstart workflow.
• The NetBIOS name of the SCCM host is required and must be resolvable from at
least the DEM.
• vRealize Automation and the SCCM server must be on the same network.
• A SCCM software package, which includes the vRealize Automation guest agent,
must be created.
• The SCCM package must install the vRealize guest agent.
• The SCCM files must be packaged as a bootable ISO.
• The bootable ISO must be accessible to the network.
Once this has been done, we can continue with the blueprint configuration:
The Windows Imaging File (WIM) provisioning is certainly the most complex one to
configure. There are several reasons for this. Firstly, WIM has quite a lot of
prerequisites. Secondly, maintenance is time-consuming and complex. Thirdly, it can
only be applied to Windows machines. Last but not least, the provisioning time itself is
quite long.
On the other hand, there are also some benefits. Once you have a running WIM
provisioning, it can be used in nearly all environments, not only in vRealize Automation.
If a company already has some experience in WIM, it only takes a short time to configure
it in vRealize Automation.
From a technical point of view, WIM is a file-based image format that was developed by
Microsoft in order to deploy operating systems in distributed environments. WIM images
must be applied to existing volumes. WIM itself does not provide a tool for creating such
volumes or partitions; it is left to the user to do this manually (however, there is a tool
from Microsoft named Diskpart that creates and formats partitions).
In order to create a WIM image, you also need the ImageX command line tool, which is
part of the Windows Automated Installation Kit (AIK).
6.3.4.7. Preparing for WIM provisioning
1. First of all, a staging area has to be configured. This staging area will be used to
upload the required files for the provisioning process. The staging area must be
accessible from the vRealize Automation as well as from the machine being
provisioned. The preferred way to communicate with the staging area is using a
network drive or a UNC path.
2. A DHCP server is needed as well.
3. To create the template, a Windows reference WIM machine must be installed.
4. Once the reference machine is created, you can sysprep the machine using the
System Preparation Utility.
5. The WIM image can be created after having sysprepped the Windows machine.
6. Usually, additional scripts should be invoked after the WIM provisioning. This can
happen using the PEBuilder tool, which is provided by vRealize Automation. Install
the tool on a development machine and include all the scripts to be invoked as part
of the provisioning process. The PEBuilder installation files can be found on the
vRealize Automation Appliance, at the URL https://<vrealize-automation-appliance.domain.name>:5480/installer. You need Microsoft .NET 4.5 and the
Windows Automated Installation Kit (AIK) for Windows 7 (including WinPE 3.0)
as a prerequisite. The scripts to be invoked should be placed in the Plugins\VRM
Agent\VRMGuestAgent\site\ work item subdirectory of the PEBuilder installation
directory.
7. Create a WinPE image using PEBuilder and insert the guest agent into the WinPE
image as follows:
a. Run PEBuilder.
b. Enter the host name of vRealize Automation in the vCAC Hostname textbox.
c. Type the vCAC Port.
d. Enter the path of the PEBuilder plugin directory.
e. Type the output path for the ISO file in the ISO Output Path text box.
f. Click File > Advanced.
g. Select the Include vCAC Guest Agent in WinPE ISO checkbox.
h. Click OK.
i. Click Build.
8. Upload the WinPE image to the staging environment.
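Capturing the image in step 5 is typically done with the ImageX tool from within WinPE; a sketch only (drive letters, paths and the image name are placeholders):

```
REM Hypothetical ImageX capture, run inside WinPE after sysprep has completed;
REM C: is the sysprepped volume, D: a staging drive (placeholders).
imagex /capture C: D:\images\win7-ref.wim "Win7 Reference" /compress fast
```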
Once the WIM Image and other prerequisites have been completed, we can continue with
the blueprint configuration:
From time to time, resources must be provisioned to the cloud. As with virtual
blueprints, vRealize Automation must be prepared before cloud blueprints can be used.
These steps involve storing credentials, creating endpoints, bringing resources under
vRealize Automation management, creating business groups and managing reservations.
In Chapter 5 we have already shown how to perform these configurations.
vRealize Automation supports the following platforms for cloud blueprints:
• vCloud Director
• Amazon AWS
• OpenStack (Havana)
• vCloud Air
These blueprints also offer different workflows for provisioning which are shown in the
following table:
The following issues must be considered when configuring vRealize Automation for
Amazon AWS:
Chapter 5 has already covered the basics of AWS and how to do the preparation within
vRealize Automation. In the following, we will discuss how to set up a blueprint.
The other steps (including fabric groups, reservations, machine prefixes and business
groups) do not differ much from the configuration described before, hence we will
not focus on them again.
Currently, vRealize Automation allows for provisioning with virtual machine images,
Linux kickstart, or WIM provisioning. Section 6-x already describes how to prepare for
Linux kickstart and WIM provisioning. The third provisioning mechanism – virtual
machine image provisioning – comes in two flavors:
Flavor      vCPUs   Disk (GB)   RAM (MB)
m1.tiny     1       1           512
m1.small    1       20          2048
m1.medium   2       40          4096
m1.large    4       80          8192
vRealize Automation can provision vCloud Air and vCloud Director resources as well.
Essentially, the same preparation steps are required:
Internally, vCloud Director uses vApps as a container entity for provisioning resources. A
vApp consists of different component machines that are provisioned and managed as a
single entity. Besides acting as a container for the different components, a vApp can also
take care of network provisioning. While vRealize Automation can trigger the
provisioning of vApps via vCloud Director, it does not inherently know about the
concept of vApps. Instead, vRealize Automation uses multimachine blueprints, whose
functionality can be compared to that of vApps. Multimachine blueprints are covered in
chapter 7.
We have already described how vRealize Automation can provision machines on- and off-
premise. While Amazon is the leader in the public cloud market, deploying to vCloud Air
can also be an interesting alternative. Because this is a book about a VMware product, we
want to address how vCloud Air differs from Amazon AWS and what the advantages are:
• First of all, vCloud Air comes with automatic redundancy – hot standby redundant
capacity is included and it’s free. Virtual machines are monitored by default. If a
failure occurs, the machine is automatically restarted with the same network
configuration (IP addresses, MAC addresses, etc.). While resiliency is possible with
Amazon, too, it is not an out-of-the-box feature, so it must already have been
considered while designing the application.
• vCloud Air datacenters monitor their hosts, and if a host becomes overloaded,
virtual machines are automatically live-migrated to other hosts. With Amazon,
however, resource congestion can occur on hosts from time to time.
• One of the principles of AWS is that everything can fail. Therefore you must design
your application with that in mind. When there is a problem with an Amazon host or
maintenance work is done, virtual machines in AWS might be switched off
immediately without any warning or with only short notice. vCloud Air, on the other
hand, migrates machines to other hosts before doing any maintenance work.
• vCloud Air is also more flexible in terms of instance sizes. You can have any VM
size and can even resize the machine (VM or disk), while it’s running. Hardware
resizing is not possible with AWS – you have to switch the instance type, which
means additional administrative work.
• Most companies have vSphere running in their on-premise network. When moving
machines to vCloud Air, no conversion is needed. Furthermore, as on- and off-
premise environments are based on the same vSphere technologies, you can keep
your app support.
• vCloud Air allows stretched layer 2 networks between your data center and vCloud
Air. This can be done with Direct Connect, a dedicated link between your datacenter
and vCloud Air. Because it is a layer 2 network, it appears like a single flat LAN
segment. Amazon, on the other hand, also offers Direct Connect access to its
datacenters; however, it uses IP (layer 3) network connectivity.
The first step is creating an endpoint. However, to get the address, you have to first
connect to your vCloud Air environment.
As mentioned previously, you need to prepare vCloud Air settings in order to configure
the provisioning in vRealize Automation.
First of all, you need the URL of the vCloud Air datacenter. You can retrieve that by
logging in to your management website. On the dashboard you will see a list of
datacenters. Click on your datacenter to drill down onto the details. On the right-hand side
of the datacenter details page, click on the vCloud Director API URL and copy the URL
to your clipboard. This value is needed in the vCloud Director endpoint configuration
page (Address field) in vRealize Automation; however remove everything after the port
number in the URL.
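Trimming the copied URL down to scheme, host and port can be illustrated with a quick shell one-liner (the URL below is a made-up example, not a real vCloud Air address):

```shell
# Keep only scheme, host and port of a copied vCloud Director API URL.
url="https://vcd.example.com:443/api/compute/api/org/xyz"   # placeholder URL
endpoint="$(printf '%s' "$url" | sed -E 's#^(https://[^/]+).*#\1#')"
echo "$endpoint"   # https://vcd.example.com:443
```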
When creating an endpoint for a vApp (vCloud), an Organization must be provided as
well. You can find out which organization to provide by navigating to your data center
details and clicking the VDC Name & Description link. Copy the name and paste it to the
Organization textbox field in vRealize Automation.
1. You should also note the network details for the vApp. Being on the details page of
a data center, click on the network tab to see a list of the defined networks. Select
the network of interest and note the settings. It is recommended to create a network
profile based on these settings including the IP Ranges from the vCloud Air data
center.
2. The discovered compute resources have to be added to a fabric group.
Once these preparations have been completed, you can continue creating a blueprint:
The remaining kind of blueprints to be discussed in this chapter are the physical
blueprints. Most commonly, they are used for legacy reasons or if virtual or cloud
provisioning is not possible for some reason. However, not every type of hardware is
supported, only hardware with special out-of-band management facilities. In particular,
this includes:
• HP iLO
• Dell iDRAC
• Cisco UCS
Like their virtual and cloud counterparts, physical blueprints support a variety of
provisioning workflows. These are shown in the following table 6-5:
While physical blueprints are very rarely used, there are nevertheless use cases for
configuring them:
• vRealize Automation can manage the provisioning of physical machines via a PXE
boot server. At the very beginning of the provisioning, a network boot strap program
(NBP) is started. This in turn downloads an image and subsequently installs the
operating system.
• Linux Kickstart and AutoYaST can be used to provision physical machines as well.
A configuration file is used to find and download an ISO image and then install the
operating system afterwards.
• Microsoft System Center Configuration Manager (SCCM) can be used to create a
SCCM task, which in turn can be used by vRealize Automation for the installation.
vRealize Automation takes care of booting the machine and hands over the control
to SCCM afterwards. After the provisioning has finished, additional customization
is possible using the guest agent.
• The Windows Imaging File (WIM) format can also be used for the deployment of
Windows machines.
The configuration of endpoints, fabric groups, reservations and blueprints resembles
that of their virtual and cloud counterparts; therefore we will not delve too deeply into
these topics. However, please consider that, depending on the provisioning workflow,
additional custom properties must be defined. Table 6-6 shows these custom properties:
Workflow   Custom properties
WIM        Image.ISO.Location
           Image.WIM.Path
           Image.WIM.Name
           Image.WIM.Index
           Image.Network.User
           Image.Network.Password
SCCM       Image.ISO.Location
           Image.ISO.Name
           SCCM.Collection.Name
           SCCM.Server.Name
           SCCM.Server.SiteCode
           SCCM.Server.UserName
           SCCM.Server.Password
PXE Boot   -
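To illustrate, a WIM blueprint's custom properties might be filled in roughly like this (a sketch only; all paths, names and credentials are placeholders):

```
Image.ISO.Location     = \\staging-server\winpe
Image.WIM.Path         = \\staging-server\images
Image.WIM.Name         = win7-reference.wim
Image.WIM.Index        = 1
Image.Network.User     = staging-user
Image.Network.Password = secret
```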
We have already introduced the vRealize guest agent and mentioned that it can be used to
further customize the operating system at the end of the provisioning. The guest agent is
not mandatory, because guest customization can also be done by other means. Most
commonly used is the customization spec, but this only works for clones and linked
clones. When provisioning to vSphere, vCenter Orchestrator can also be used. Besides
these alternatives, there are a couple of use cases for the guest agent:
• To launch guest scripts. This has always been the most important reason for
installing the vRealize Automation guest agent. The invocation of guest scripts can
be controlled by setting custom properties. The definition of these custom properties
is quite flexible (custom properties can even reference each other), so it is possible
to execute scripts and pass the required parameters to the command. If the scripts to
be executed reside on a network share, the maintenance work for the machine
template can be further reduced. Therefore you are able to cope with additional
requirements and functionalities, without the need to edit a machine template. Of
course, besides invoking simple scripts, unattended installation of software is also
possible.
• In many cases, configuration tools like Puppet or Chef are used for the
customization of machines. The guest agent can call them as well.
• It is possible to configure a blueprint to attach additional hard drives to a machine.
However without the guest agent it is up to the administrator to partition and format
the hard drives. With the guest agent installed, this can be done automatically.
• Additional NICs can be configured as well.
• The Application Services use the guest agent as well.
The guest agent can be installed on Windows and Linux machines. In the following
section, we will describe how to install it on both operating systems.
1. Navigate to the vRealize Automation Appliance and open the download page
(https://<vRealize-appliance.domain.name>:5480/installation).
2. Download the Windows guest agent files (32bit or 64bit).
3. Unpack GugentZip_version to the C: drive on the reference machine. The files will
then reside in the C:\VRMGuestAgent directory. Keep this directory; do not move or
rename it.
4. Next, the guest agent must be configured to communicate with the Manager Service.
This can be done by running the following command:
winservice -i -h Manager_Service_Hostname_fqdn[:portnumber] -p ssl
If you have a load balancer for the Manager Service, you have to use the host of the
load balancer as the host parameter (-h).
5. Once the installation has completed, the guest agent can be started.
The following procedure describes how to install the guest agent on Linux:
1. Navigate to the vRealize Automation Appliance and open the download page
(https://<vRealize-appliance.domain.name>:5480/installation).
2. Download the Linux guest agent files.
3. Upload and unpack the guest agent files on the reference machine.
4. Next, the guest agent must be installed. This can be done by running the following
command:
rpm -i gugent-version.x86_64.rpm
5. Now, the guest agent must be configured:
cd /usr/share/gugent
./installgugent.sh <vRealize Automation Appliance>:443 ssl
6. Test if the installation was successful by running the command ./rungugent.sh
6.6.3. Executing scripts with the guest agent
Once the guest agent has been installed, blueprints can be configured to run scripts during
the provisioning of a machine. This can be achieved by placing a script in the machine or
on a network share, and then configuring custom properties to call the script. The
following custom properties are required:
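The property table itself did not survive formatting here. Based on the vRealize Automation custom properties reference, the properties follow the pattern VirtualMachine.SoftwareN.Name and VirtualMachine.SoftwareN.ScriptPath; an illustrative sketch (the script path and name are placeholders):

```
VirtualMachine.Software0.Name       = ConfigureApp
VirtualMachine.Software0.ScriptPath = \\fileserver\scripts\configure.bat {VirtualMachine.Network0.Address}
```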
Once the build profile has been created and configured, it can be associated with a
blueprint (see Fig. 6-10):
Once configured, the guest agent can be used for customization. Please note that the guest
agent is invoked after the customization specification has completed. If there is no
customization specification, the invocation happens directly after the deployment.
During deployment, the guest agent copies all the custom properties to a local file
(C:\VRMGuestAgent\site\workitem.xml).
6.7 Summary
This chapter introduced the different kinds of blueprints available in vRealize Automation:
virtual, cloud and physical blueprints. We demonstrated how to configure the blueprints
and manage the lifecycle of machines. To allow further customization of the deployment
process, it is possible to add custom properties to blueprints. Furthermore, the guest agent
can be used to invoke additional scripts after provisioning.
After a blueprint has been configured, it must be published to the self-service catalog so
that end users can request machines. The configuration of the self-service catalog will be
shown in chapter 8.
This chapter did not deal with multimachine blueprints. Multimachine blueprints will be
described in the next chapter.
7. Network Profiles and Multimachine Blueprints
In chapter 6, we covered virtual, cloud and physical blueprints. These blueprints help us to
deploy single machines. However, sometimes a single machine is not enough; instead, a
whole group of machines must be provisioned. There are plenty of use cases for this,
amongst them:
• The provisioning of multi-tier applications is probably the most important use case.
This usually goes hand-in-hand with development teams that need to deploy their
applications. A traditional n-tier application consists of a network load balancer, one
or more web frontend servers, an application server and a database. To increase
security, the different application components can also be placed into different
subnets (with dedicated firewall rules between the subnets). Depending on the
environment, the network to be used is either pre-configured or must be created
dynamically at runtime.
• Multimachine blueprints can also be used for deploying the very same application
several times for different purposes. For example, there could be a test, integration
or production environment.
• Multimachine blueprints are also very well suited to training environments, where
the same set of machines must be deployed multiple times.
We have already covered the basics of network profiles in chapter 5. However, as network
profiles represent an important prerequisite for setting up multimachine blueprints, we will
further explore this topic.
Essentially, network profiles perform two main functions in vRealize Automation.
Firstly, they are responsible for the NIC configuration (i.e. IP address, subnet mask,
default gateway, DNS). Secondly, in conjunction with NSX, they provide edge services
router configuration (route or NAT functionality). There are five different principal types:
As stated earlier, network profiles are responsible for the NIC configuration. They provide
an easy way to assign IP addresses to virtual machines during provisioning and to track
those addresses through the machine lifecycle in vRealize Automation. This in turn allows
IP addresses to be reclaimed when a machine is eventually destroyed. This is a very handy
feature for companies that do not have an IP address management (IPAM) tool like
Infoblox running.
The following points are required for vRealize Automation to successfully inject an IP
address into a machine:
• Guest customization specification – IP addresses will be assigned using the guest
customization specification. Without configuring this, IP addresses will just be
reserved from the pool of the network profile, but they will not be applied to the
provisioned machine.
• VMware Tools – behind the scenes, virtual machines are configured with new
IP addresses through VMware Tools. The network settings themselves are passed in
the following custom properties:
• VirtualMachine.Network0.Address
• VirtualMachine.Network0.SubnetMask
• VirtualMachine.Network0.Gateway
• VirtualMachine.Network0.PrimaryDNS
• VirtualMachine.Network0.SecondaryDNS (optional)
• VirtualMachine.Network1.Address
• VirtualMachine.Network1.SubnetMask
• …
These custom properties could be read by an Orchestrator workflow, which in turn would
invoke another workflow to assign the IP address using the Guest API (we will talk about
Orchestrator in a later chapter in detail). If Orchestrator is not a valid choice, you could
also place a script in the machine template. This script receives some input and configures
the networking settings. The script would be triggered by the guest agent (we talked about
triggering scripts in chapter 6).
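Such a guest-side script could, for instance, pull the injected address out of the work item file that the guest agent writes; a minimal sketch (the XML below is a simplified stand-in for the real work item format, and the path is a placeholder):

```shell
# Create a stand-in work item file, then extract the injected IP address from it.
workitem=/tmp/workitem.xml   # on Windows: C:\VRMGuestAgent\site\workitem.xml
cat > "$workitem" <<'EOF'
<properties>
  <property name="VirtualMachine.Network0.Address" value="10.0.0.42"/>
  <property name="VirtualMachine.Network0.Gateway" value="10.0.0.1"/>
</properties>
EOF
# A real script would feed this value into the OS network configuration.
addr=$(sed -n 's/.*name="VirtualMachine.Network0.Address" value="\([^"]*\)".*/\1/p' "$workitem")
echo "$addr"   # 10.0.0.42
```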
Depending on the desired functionality, you need to choose the appropriate network
profile to be created.
Figure 7-1 External network profile configuration
In order to create an external network profile, you need to complete the following steps:
Figure 7-2 Network profiles
6. Click on the tab IP Ranges to configure IP address management. Please note that
this step is optional. There are different ways to configure the IP addresses; you can
define a fixed range, you can configure them one by one, or you can upload a CSV
file containing the IP addresses. If you do not configure IP ranges, vRealize
Automation will rely on a static IP or on DHCP, for assigning IP addresses.
7. Click OK to save your network profile.
As the name implies, private networks have no upstream (north-south) traffic or
routing during deployment. The created networks are connected to a deployed edge
gateway, which in turn can provide east-west routing (however, with no connectivity to
an external network). The network architecture itself is depicted in Fig. 7-2.
1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select Private. The configuration page opens (see Fig. 7-3).
2. Assign a Name for the network.
3. Define a description (optional).
4. Specify a subnet mask (for example 255.255.0.0).
5. Specify if you want to have a DHCP enabled. If yes, provide values for the IP
range start and IP range end textboxes.
6. If you want to configure a list of assignable IP addresses, change to the IP Ranges
tab.
7. Click OK to save your private network profile.
As already mentioned, routed network profiles are based on NSX and are only used with
multimachine blueprints. At runtime, vRealize Automation defines networks by means of
routed network profiles. The routed network is connected to an external network by means
of a deployed edge gateway. Hence, when creating a routed network profile, an external
network profile must be specified first. Furthermore, because routed networks create
further networks, they must also define IP ranges. These ranges are used to allocate a
range of IPs to a specific machine, within a multimachine blueprint. Fig. 7-4 depicts a
network profile.
A typical use case for routed network profiles is the provisioning of multi-tiered
applications. For example, taking security in mind, individual servers of a particular tier
should be placed in different networks.
Figure 7-5 Routed network profile
To create a routed network profile, carefully work through the following tasks:
1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select Routed. The configuration page opens (see Fig. 7-5).
2. Assign a Name for the network.
3. Define a description (optional).
4. Use the dropdown list External network profile to associate your network with a
physical network.
5. Specify a subnet mask (for example 255.255.0.0).
6. Configure the Range subnet mask. The range subnet mask determines how many
networks will be created. For example, entering 255.255.240.0 results in 16
networks, because the third octet reserves 4 bits for the network part (2^4 = 16).
7. Assign a Base IP address to specify the first network.
Figure 7-6 NAT network profile
8. Review or provide input for the Primary DNS, Secondary DNS, DNS suffix, DNS
search suffix, Preferred WINS and Alternate WINS. Once you have connected
your profile with an external network profile, these values automatically get pre-
filled.
9. Change to the IP Ranges tab to specify and review the IP addresses.
10. Click OK to save your routed network profile.
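The network count implied by the Range subnet mask in step 6 can be double-checked with a small shell calculation (assuming the /16 base subnet mask from the example in step 5):

```shell
# Count the prefix bits of the range subnet mask, then compute
# 2^(range_bits - base_bits) to get the number of routed networks.
base_bits=16                # subnet mask 255.255.0.0
range_mask=255.255.240.0    # range subnet mask from the example
bits=0
for octet in $(printf '%s' "$range_mask" | tr '.' ' '); do
  for i in 7 6 5 4 3 2 1 0; do
    [ $(( (octet >> i) & 1 )) -eq 1 ] && bits=$((bits + 1))
  done
done
echo $(( 1 << (bits - base_bits) ))   # 16 networks
```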
The final kind of network profile that can be created is the NAT network profile. A NAT
network has similarities to a routed network, in that it is connected to an external network
via an edge gateway. NAT network profiles come in two different flavors:
A NAT network is best suited to deploying identical networks, for example, to provide a
training environment for students or to run identical networks for production, integration
and testing.
To create a NAT network profile, we must complete the following steps:
1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select NAT. The configuration page opens (see Fig. 7-6).
2. Assign a Name for the network.
3. Define a description (optional).
4. Use the dropdown list External network profile to associate your network with a
physical network.
5. Choose if you want to create a One-To-One or a One-To-Many NAT network type.
6. Review or provide input for the Primary DNS, Secondary DNS, DNS suffix, DNS
search suffix, Preferred WINS and Alternate WINS. Once you have connected
your profile with an external network profile, these values automatically get pre-
filled.
7. If your network profile is of type One-to-Many, it is possible to define IP ranges. In
that case, define an IP range start and an IP range end as well as the Lease time
(seconds) (optional).
8. If you want to configure a list of assignable IP addresses change to the IP Ranges
tab.
9. Click OK to save your NAT network profile.
Historically, vCloud Director already had the ability to provision a set of different virtual
machines, along with creating new networks. For this purpose, the concept of a vApp was
introduced. vApps are not known within vRealize Automation, but they have similarities
with multimachine blueprints. That is reason enough to show a comparison of both:
Multimachine blueprint                                 vCloud Director vApp
Provisioning of physical, virtual and cloud machines   Only cloud machines can be deployed
Machines are managed by vRealize Automation            vApps can be managed by vRealize Automation
Access: Microsoft Remote Desktop, SSH,                 Access: Microsoft Remote Desktop, SSH,
VMware Remote Console                                  VMware Remote Console
After provisioning, additional machines can be added   Machines cannot be added after provisioning
Boot order of the virtual machines is defined          vApp determines the start order
within the multimachine blueprint
Table 7-1 Multimachine blueprint vs. vCloud Director vApp
Before setting up a multimachine blueprint with dynamic network provisioning, there are
a couple of steps to be carried out in advance:
The NSX endpoint is created via VMware Orchestrator. Therefore a running Orchestrator
instance, which is connected to vRealize Automation via an endpoint, is required. We will
talk about Orchestrator in greater detail in chapter X. For now, we will only describe the
basic steps required in order to integrate NSX with vRealize Automation:
Once the workflow has been run, the Distributed Firewall (DFW) rules (defined in the
security policy) are applied. However, these are only applied to the vNICs of the security
group members to which this security policy applies.
The next step within the configuration is to setup network profiles. We discussed network
profiles at the beginning of the chapter, so we shall not spend more time on them.
After the network profiles have been created and configured, you have to continue with
the configuration of the reservations. Transport zones can be discovered after an inventory
scan, once the endpoints have been correctly configured for NSX. So at runtime, vRealize
Automation is able to create the appropriate networks. In addition, you are able to
configure security groups. Security groups can be compared to firewalls, between the
different provisioned networks.
Run through the configuration, performing the steps as follows:
• You can define the start, as well as the shutdown order, for each of the blueprints
enclosed in the multimachine blueprint.
• For each of the enclosed blueprints, a network to which the machine should be
deployed can be assigned.
• Within the blueprint you can assign a transport zone, network profiles and routed
gateways.
• The Distributed Execution Manager (DEM) can invoke scripts during the lifecycle
of a multimachine deployment. There are six different hooks for registering these
scripts (see following table).
Phase               Description
Pre-Provisioning    A script is invoked after the approval of the workflow, but before
                    the provisioning begins
Post-Provisioning   A script runs after the provisioning (and after the machines have
                    been turned on)
You must perform the following steps in order to create a multimachine blueprint:
Like other blueprints, the multimachine blueprint also needs to be published. Only once
you have done this, can you add it to your service catalog.
If you already have running virtual machines, it is possible to bring them under the control
of vRealize Automation. When shifting away from vCloud Director, this is the way to
migrate to vRealize Automation.
We have already mentioned that there is no upgrade path from vCloud Director to
vRealize Automation. However, in this chapter we showed how vCloud Director concepts
can be translated to vRealize Automation concepts. The basic process is the same as when
importing compute resources from other systems. While there is no official guide
regarding how machines should be migrated, you may nevertheless find the following steps
useful (the best approach is to clone these machines):
1. Make sure you have sufficient room on your storage for cloning the machines. It is
recommended that the destination cluster of your clone is already a compute
resource within your vRealize Automation environment.
2. Go to the vSphere Web Client, navigate to your VM and go to the vApp Options
tab and uncheck the Enable vApp Options checkbox. The result is that vCloud
Director properties are removed from the VM (see Fig.7-8).
3. Navigate to Infrastructure > Compute Resources > Compute Resources, hover
over your cluster and click Data Collection. Trigger an Inventory data collection.
4. Go to Infrastructure > Infrastructure Organizer > Infrastructure Organizer.
5. Click Next.
6. Choose the compute resources you want to configure on step 1 and click Next.
7. Within the next step, check the box beside the compute resource that contains the
clone VM.
8. Ensure the compute resource maps to a fabric group and optionally a cost profile
and click Next.
9. In step 3, select the machines that you want to import. For each machine you want
to add, click the Edit icon and associate the machine with a business group. Once
you have finished, click on Next to continue.
10. In step 4, assign a blueprint, reservation and machine owner to the machine. vApps
must be mapped to multimachine blueprints. Click Next to continue.
11. Verify the settings in the last steps and click Finish.
Once the import has been completed, machine owners can see the machines in their
catalog and should be able to log on to those machines.
If you want to import more than a few machines, bulk import is a viable option. Bulk
imports are useful in a variety of use cases:
• You can make global changes to a set of virtual machines, for instance, changing a
virtual machine property such as storage path settings.
• Import unmanaged machines.
• Import machines into an upgraded deployment.
You can use the bulk import feature from the graphical user interface, or you can use the
CloudUtil command-line interface. We will talk about using the CloudUtil tool later, in
chapter 13. Using the bulk import feature requires both a fabric administrator and a
business group role membership. You can perform the bulk import by executing the
following steps:
1. The first step is to create a virtual machine CSD data file. Navigate to
Infrastructure > Infrastructure Organizer > Bulk Imports and click on the
Generate CSV file button.
2. Provide input for the following options (see Fig. 7-9):
a. Machines: Unmanaged or Managed
b. The Business group for the bulk import (optional)
c. The Owner (optional)
d. A specific blueprint (optional)
e. A resource filter: You can filter on a specific Compute Resource or Endpoint.
3. Click OK to export the CSV file.
4. Correct or complement the CSV file. If data is missing for any of the virtual
machines, the corresponding entries begin with “INVALID” or “UNDEFINED”.
The following categories exist and must be reviewed:
a. #Import—Yes or No: Set to No to skip a virtual machine during the import.
b. Virtual Machine Name: Do not change.
c. Virtual Machine ID: Do not change.
d. Provide a valid host reservation (name or ID).
e. Assign a valid storage path (name or ID).
f. Type in the name or ID of a blueprint.
g. Assign an owner name.
5. Now the bulk import can be started. On the Bulk Import page, click the New Bulk
Import button.
6. Provide a name for the bulk import.
7. Upload the CSV file.
8. Define the start time for the import.
9. Optionally, define a Delay (seconds) and a Batch size. These values are useful if
you have a large set of virtual machines and the import load would otherwise be
too high.
10. Optionally, you can Ignore managed machines, skip user validation (which can
decrease the import time) or specify that you only want to start a test import run.
11. Click OK to start the bulk import.
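Reviewing the generated CSV for “INVALID” or “UNDEFINED” entries, as described above, can be partially automated. The sketch below is plain Python with hypothetical column names (the real file layout depends on your vRealize Automation version); it simply flags every cell that still needs manual attention:

```python
import csv
import io

# Placeholder markers vRealize Automation writes into the generated CSV
# when it cannot resolve a value on its own.
BAD_MARKERS = ("INVALID", "UNDEFINED")

def find_rows_needing_review(csv_text):
    """Return (row_number, column, value) tuples for every cell that
    still starts with one of the bad markers."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for row_num, row in enumerate(reader, start=2):  # row 1 is the header
        for column, value in row.items():
            if value and value.upper().startswith(BAD_MARKERS):
                problems.append((row_num, column, value))
    return problems

# Example with hypothetical columns -- the real export contains more.
sample = """#Import,Virtual Machine Name,Host Reservation,Blueprint,Owner
Yes,web01,Res-Cluster-A,CentOS-Small,alice@corp.local
Yes,web02,INVALID,CentOS-Small,UNDEFINED
"""

for row_num, column, value in find_rows_needing_review(sample):
    print(f"row {row_num}: fix column '{column}' (currently '{value}')")
```

Running such a check before uploading the file saves a failed import round-trip when the machine set is large.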
7.4 Summary
In the previous chapters we have shown you how to create and configure blueprints. Like
the other entities and components in vRealize Automation, the service catalog also needs
careful planning, and several issues have to be considered before the implementation
starts. The following chapter addresses these issues and demonstrates how to build the
service catalog. Before digging into the implementation itself, the most important points
regarding the service catalog should be reviewed (see Fig. 8-1):
• The service catalog hosts services. So far we have only dealt with IaaS services, but
XaaS services and Application Services can also be published.
• You can navigate to the service catalog by clicking on the Catalog menu tab.
• A single published item (e.g. a blueprint) is called a catalog item.
• Catalog items (e.g. published blueprints) are grouped into services.
• Provisioned resources are accessible within the Items tab.
• Users can perform actions on items. There is a set of predefined actions (e.g. turn
a machine on or off, reset a machine, connect via remote desktop connection), but it
is also possible to implement your own actions using Orchestrator and to associate
them with an item or blueprint.
• Entitlements describe permissions on a service, catalog item or action.
• Before being able to add a blueprint to the service catalog, it must have been
published.
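The relationships listed above can be summarized in a tiny data model. The sketch below is our own illustration (the class and field names are invented, not vRealize Automation's API): services group catalog items, and an entitlement grants a user access to an item either directly or via its service:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogItem:          # e.g. a published blueprint
    name: str

@dataclass
class Service:              # groups catalog items in the catalog
    name: str
    items: list = field(default_factory=list)

@dataclass
class Entitlement:          # grants users access to services/items/actions
    users: set
    services: set = field(default_factory=set)
    items: set = field(default_factory=set)

def can_request(user, item, service, entitlements):
    """A user may request an item if any entitlement names either the
    item itself or the whole service that contains it."""
    for ent in entitlements:
        if user in ent.users and (item.name in ent.items or service.name in ent.services):
            return True
    return False

centos = CatalogItem("CentOS-Small")
iaas = Service("IaaS Machines", items=[centos])
ent = Entitlement(users={"alice"}, services={"IaaS Machines"})
print(can_request("alice", centos, iaas, [ent]))  # entitled via the service
```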
8.1 Configuring the service catalog
On the following pages, we will show how these tasks can be achieved.
In order to create a new service within the catalog, the tenant administrator or the service
architect role is required. If this condition is fulfilled, we can perform the following tasks:
Once a service has been created, it is possible to add catalog items to it. This requires
a tenant administrator, service architect or business group manager membership:
The next step within the configuration is to assign the appropriate permissions. vRealize
Automation is using entitlements for assigning permissions on catalog items for users and
groups. An entitlement stores the following information:
Like many other services in a company, requesting a service from the service catalog
needs approval from time to time. vRealize Automation supports approval processes for
the requesting of machines. Approval policies can have one or more levels of approval.
Each level specifies one or more approvers and the condition that triggers the approval.
Specifying conditions for approvals can be quite a powerful tool. For example, you can
specify that machines with low costs are provisioned without any approval, whereas
expensive machines need manual approval in order to proceed with provisioning.
When specifying approvers, specific users or groups can be selected. Alternatively, if
approvers are not known beforehand, they could also be chosen dynamically from the
request itself. When choosing a group for approval, you must also specify whether anyone
from the group is allowed to approve or whether all members of the group have to
approve.
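The cost-based example and the "anyone vs. everyone" group modes described above can be sketched in a few lines. This is our own illustration of the decision logic, not how vRealize Automation implements it internally; the cost threshold is an assumed value:

```python
def needs_approval(daily_cost, threshold=10.0):
    """Cheap machines are provisioned without approval; expensive ones
    require a manual approval step (threshold is an assumed value)."""
    return daily_cost > threshold

def level_approved(votes, require_all):
    """votes maps approver -> bool. With require_all=True every group
    member must approve; otherwise a single approval suffices."""
    if require_all:
        return all(votes.values())
    return any(votes.values())

print(needs_approval(2.50))    # low-cost request goes straight through
print(level_approved({"qa1": True, "qa2": False}, require_all=False))
```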
Figure 8-5 Approval policies
Fig. 8-5 shows a sample approval process with three stages. The first stage specifies QA-
approvers, the second stage RD-approvers and the third stage vRealize Admin approvers.
Provisioning can only begin once an approver at each stage has approved the request.
It is worth noting that the levels specified in an approval can be of different types:
• Pre-approval levels specify users and groups who have to approve a request before
provisioning.
• Post-approval levels specify users and groups who have to approve a request after
provisioning. While it may not be very common to specify post-approval levels,
there might be use cases from time to time (for example, if somebody has to check
that a machine is working correctly, or the machine has to meet some constraints).
When creating an approval policy, the first step is to define the approval policy type,
name, description and status. There are different kinds of approval policies:
Actually, these approval policies do not differ very much. Depending on the type of
approval policy, different information is shown or can be requested from the approver on
the approval form.
Perform the following steps to create an approval policy:
1. Change to the Administration > Approval Policies page.
2. Click the [+]-icon.
3. Choose the appropriate approval policy type.
4. Click OK.
5. Provide a Name and optionally a Description for the approval policy.
6. Set the Status to Active in order to be able to use it.
Depending on the approval policy type you have selected, it is possible to configure the
approval form. Approvers can change the values of system properties for machine
resource settings such as CPU, lease or memory, as well as custom properties. If any
custom properties are changed, the custom properties defined in the blueprint or at any
other place will be overridden. Approval forms can be configured as follows:
At this point, we can finally use the service catalog to request and provision new
resources. When end users are logged in to the service catalog, they usually see the
following tabs within the user interface:
• Home screen - this page can host different widgets, which show the most important
information to users. By default, only the My Inbox widget is shown. However, end
users can customize the page and add additional widgets to the home screen.
• Catalog – the service catalog.
• Items – the resources that have been provisioned.
• Requests – all the requests that have been issued and are currently being built or
have failed.
8.3.1. Configuring notifications
From time to time, users have to interact with vRealize Automation even when they do
not want to provision new resources or use existing items, for example when they are part
of an approval process. In these cases, it is very useful to receive notifications via email.
Before such notifications can be activated (on a per-user basis), they must be globally
configured:
Once this has been done, users can subscribe to notifications in their user preferences:
All the published services can be found within the Catalog tab. Requesting a machine is a
relatively easy process: users only have to click on the service they want to provision.
Once a catalog item has been chosen, a new window opens (see Fig. 8-8). Users can see
which workflows will be used to provision a machine. They can also see its description
and daily cost. Furthermore, users can provide input for the following:
Custom properties can be seen and edited by clicking on the Properties tab. When
requesting a service based on a multimachine blueprint, the GUI slightly differs:
• You can configure for each enclosing blueprint how many machines should be
deployed.
• Custom properties on a multimachine level override custom properties defined on a
normal blueprint.
Once you have provided all the necessary input, you can submit the request. Note that
saving a request does not start the provisioning - it only saves the request for later.
Once a request has been submitted, vRealize Automation immediately starts the
provisioning workflow. You can see its current status by clicking on the Requests menu
(see Fig. 8-9). To keep track of your own requests, you can apply a filter using the
dropdown lists Submitter and Filter by State. Furthermore, you can see the provision
details of a request by clicking the View Details button.
If you are a member of an approver group and a machine with an approval policy has been
requested, you will be able to see the incoming request within your inbox on the home
screen. An approver can open the appropriate links from the inbox and view the details of
the request (see Fig. 8-10).
Machines that have been successfully provisioned can be managed via the graphical user
interface of vRealize Automation (Items tab). vRealize Automation will show the
machines based on the user role membership (user, support user or business group
manager). Fig. 8-11 shows the user interface. By default, users can perform the following
actions on a machine:
• Create snapshots
• Configure the machine
• Change the lease time
• Reprovision the machine
• Expire the machine
• Install the VMware tools
• Connect by using RDP
• Connect by using VMware Remote Console
• Connect via SSH
• Power On, Off, Restart
The actual actions available at runtime differ based on the permissions granted to the users.
You can also provide additional actions via Orchestrator.
If you have a running instance of vRealize Operations 6 or higher, you can also see the
integration between these two products. The Health Status badge on the right-hand side
of the machine details comes directly from vRealize Operations. However, before the
health status can be displayed, a vRealize Operations endpoint must be configured. Work
through the following steps to do this:
1. Navigate to the page Administration > Tenant Machines > Metrics Provider
Configuration (see Fig. 8-12).
2. Activate the vRealize Operations Manager endpoint checkbox.
3. Provide the URL for the endpoint.
4. Type in a Username and a Password for the vRealize Operations instance.
5. Click on Test Connection and if this succeeds click on Save.
If a machine is not needed anymore, it can be released, even when there is lease time
remaining. As mentioned before, there is also a difference between Expire and Destroy.
While expiring sets a machine to archive mode (the machine stays switched off until the
archive period is over and is eventually deleted), destroying a machine means all of its
resources are released immediately.
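The difference between Expire and Destroy can be made concrete with a short sketch. This is our own simplification of the behavior described above; the dictionary fields are invented for illustration:

```python
from datetime import date, timedelta

def destroy(machine):
    """Destroy: release all resources immediately."""
    machine["deleted"] = True
    machine["powered_on"] = False
    return machine

def expire(machine, archive_days, today=None):
    """Expire: switch the machine off; keep it for the archive period,
    after which it is deleted. No archive period means instant release."""
    today = today or date.today()
    machine["powered_on"] = False
    if archive_days > 0:
        machine["delete_on"] = today + timedelta(days=archive_days)
    else:
        destroy(machine)
    return machine

vm = {"name": "web01", "powered_on": True}
expire(vm, archive_days=5, today=date(2015, 7, 1))
print(vm)  # switched off, scheduled for deletion on 2015-07-06
```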
Figure 8-12 Metrics provider configuration
8.4 Summary
This chapter showed us how to configure the service catalog. We also covered the creation
of services, how to configure catalog items and set up appropriate permissions. In
addition, we demonstrated how to implement approval policies.
9. Reclamations
Previously, installing new machines was quite a tedious task. It was not uncommon for
requesters of machines to wait several days before someone from IT had the time to take
action. Virtualization, automation and tools like vRealize Automation make it easy to
provision new machines. Provisioning can now be realised within a couple of minutes, or
even just a few seconds when using techniques such as linked clones.
However, this ease of provisioning has also created some new drawbacks. It can lead to
people requesting too many machines without really considering the need. As a result,
many more machines might be created than are actually required, and lots of these are
then forgotten (especially when no pricing is implemented). This glut of machines can
prove costly in terms of resource management.
vRealize Automation already provides some “countermeasures” against such behavior.
We have already discussed how to configure lease times with automatically expiring
machines. Another measure available to you is the ability to charge for machine costs,
thus making requesters more aware of the impact of their requests.
Due to a lack of resources, however, a situation still might arise where it is not possible
to provision any new machines. In that case, machines need to be destroyed in order to
make room for new machines. However, which machines can be safely destroyed? That
question is not easily answered, as in most cases administrators have no knowledge of a
machine’s use. Questions such as ‘why was it provisioned’, ‘what is its function’ or ‘does
it even interact with other machines’ arise.
vRealize Automation can also help with this situation. As discussed earlier, vRealize
Automation regularly collects information from all virtual machines, including their
runtime behavior. It can help identify machines that are idle (in terms of CPU, memory, hard disk
and network traffic). Once such machines have been located, we can ask the machine
owners if the machines are still required. Depending on the outcome, resources can be
released. This concept, together with a built-in workflow, is called reclamation in vRealize
Automation.
This chapter will give you some insight into reclamations and how to start a reclamation
workflow. If you also have vRealize Operations running in your environment, vRealize
Automation can interact with this and can give you more detailed machine information.
Figure 9-1 Reclamation overview
Fig. 9-1 depicts the entire workflow at a basic level. A reclamation workflow proceeds as
follows:
1. First, idle machines are identified. The administrator then sends a reclamation
request to the machine owner and asks whether the machine is still required or
whether it can be destroyed.
2. If the machine owner states that the machine is still in use, the administrator can
initiate a new reclamation request on another idle machine.
3. If the machine owner approves the reclamation request, the machine lease will
instantly expire. If there is an archival period set, the machine will only be switched
off and deleted later, as per policy. Otherwise, its resources will be released
immediately.
We have already discussed the interaction of vRealize Automation with vRealize
Operations. For example, we demonstrated how users are able to display the current health
status of a provisioned virtual machine (the actual health data comes from vRealize
Operations). vRealize Operations can also help us by running idle resource scans to
identify candidates for shut down. vRealize Operations can even initiate a reclamation
workflow, once idle machines have been found.
When identifying virtual machines for reclamation, the tricky question is what exactly
qualifies a machine as such. Fig. 9-2 shows how to configure vRealize Operations with
settings that feed into decision-making. Values such as the minimum acceptable network
IO, CPU usage and datastore IO can be taken into account when flagging a VM based on
an overall percentage of idleness. However, you must configure an endpoint for
vRealize Operations in advance, as shown in Chapter 8.
These techniques apply not only to VMs; they can also be used to find oversized and
underutilized disk space.
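The idleness criteria described above can be illustrated with a small scoring sketch. The metric names and thresholds here are placeholders, not actual vRealize Operations settings; the point is how per-metric thresholds roll up into an overall idleness percentage:

```python
# Assumed per-VM thresholds below which a metric counts as "idle".
THRESHOLDS = {
    "cpu_pct": 5.0,       # average CPU usage in percent
    "net_kbps": 10.0,     # network IO
    "disk_iops": 5.0,     # datastore IO
}

def idleness(metrics):
    """Fraction of metrics that fall below their idle threshold."""
    idle = sum(1 for name, limit in THRESHOLDS.items() if metrics[name] < limit)
    return idle / len(THRESHOLDS)

def is_reclamation_candidate(metrics, required_pct=100):
    """Flag the VM when overall idleness reaches the configured percentage."""
    return idleness(metrics) * 100 >= required_pct

quiet_vm = {"cpu_pct": 1.2, "net_kbps": 0.4, "disk_iops": 0.0}
busy_vm = {"cpu_pct": 42.0, "net_kbps": 900.0, "disk_iops": 250.0}
print(is_reclamation_candidate(quiet_vm))  # every metric idle -> candidate
print(is_reclamation_candidate(busy_vm))
```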
If you need to identify unused machines, you must first have membership of the tenant
administrator role. The process can then be started by navigating to the Administration >
Tenant Machines > Reclamations page. This page displays all machines within the
tenant. However, there is also an Advanced Search for better identifying machines (see
Fig. 9-3). There are many search options – among these:
• CPU Usage
• Memory Usage
• Disk Usage
• Network Usage
• Complex metric – this will use vRealize Operations for identifying virtual machines.
Any VM that shows up on the vRealize Operations Idle VMs Report will appear in
the list of VMs that can be reclaimed.
Figure 9-4 Reclamation request
Once you have selected a machine for reclamation, you can start the reclamation process
by clicking on the Reclaim Virtual Machine button (see Fig. 9-4). Each submitted
reclamation request asks the machine owner if the machine is still in use. Unfortunately,
from time to time, machine owners do not answer these requests. In this case, the lease
period can automatically be decreased. You can also specify how many days vRealize
Automation should wait before doing this in the Wait before forcing lease (days) input
field. Do not forget to review and modify the New lease length (days) and Reason for
request input fields.
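The "Wait before forcing lease" behavior described above can be sketched as a small date calculation. The function name and parameters are our own; only the idea (wait for an answer, then shorten the lease) comes from the text:

```python
from datetime import date, timedelta

def forced_lease_end(request_date, wait_days, new_lease_days):
    """If the owner has not answered after wait_days, the machine's lease
    is shortened so that it ends new_lease_days after the forcing date."""
    forcing_date = request_date + timedelta(days=wait_days)
    return forcing_date + timedelta(days=new_lease_days)

# Request sent July 1st, owner given 7 days to answer, then a 3-day lease.
end = forced_lease_end(date(2015, 7, 1), wait_days=7, new_lease_days=3)
print(end)  # 2015-07-11
```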
You can find all reclamation requests that are in progress on the Administration >
Reclamation > Reclamation Requests page. A request can have one of the following
statuses: Pending, Approved or Rejected.
After you have submitted a request, the machine owner will be notified of the request
and is able to open it in his inbox (see Fig. 9-5). The machine owner can answer the
request directly, by choosing one of the following three answers:
• The user can answer by clicking the Release for Reclamation button. This means
the machine is not needed anymore and hence expires immediately. If no archive
period is defined, the underlying resources are instantly released.
• If the machine is still in use, the machine owner can reply with Item in Use. The
reclamation workflow finishes without releasing any resources.
• If the machine owner does not reply to the reclamation request within the defined
period, the lease time will be adjusted in line with the reclamation request settings.
vRealize Automation offers the following reports for capacity and cost savings:
Once the portlets have been added to the home screen, tenant administrators have the
ability to monitor and review the reports online. They also have the possibility to
download the report data as a CSV file (see Fig. 9-6).
Alternatively, you can review idle machine reports in vRealize Operations by performing
the following steps:
9.3 Summary
This chapter demonstrated how deployed resources within vRealize Automation can be
monitored. We went into detail regarding reclamations, which are an important tool in
helping you reclaim resources (when your capacity is getting low or if you need to reduce
costs). There are several different reports available in vRealize Automation. However, if
you have vRealize Operations, you can also configure an endpoint to obtain more detailed
reports.
10. Custom Properties and Build Profiles
Before digging into the details, we first want to briefly sketch out how customization
happens in vRealize Automation. Customization can be carried out at different levels (see
Fig. 10-1):
• By using custom properties and build profiles, you can perform simple
customizations in a machine’s lifecycle. This allows not only customizing the user
interface for requesting a machine, but also customizing all stages of the lifecycle
(requesting, provisioning, managing, and retiring). Custom properties can easily be
defined via the graphical user interface.
• If the requirements become more complex and there is some kind of logic required,
there are different tools available to help you in vRealize Automation: The vRealize
Automation Orchestrator, vRealize Automation Designer and the Advanced Service
Designer.
• Alternatively, if very complex special workflows are needed or the data model
within the Model Manager must be changed, you can create your own workflows in
Visual Studio. However, you must also purchase the Cloud Development Kit (CDK)
for it.
This chapter only addresses custom properties and build profiles. The other tools will be
described later in this book.
Figure 10-1 vRealize Automation extensibility
As mentioned previously, every machine has a lifecycle within vRealize Automation. The
most important stages are depicted in Fig. 10-2. As customizations can apply to each one
of these stages, it makes sense to discuss them in the following sections:
10.1.2. Request phase
During the request phase, users provide all kinds of input required for provisioning. We
have already demonstrated how to request a machine from the service catalog. However,
using custom properties, you can ask for additional input from the end user and employ it
during the provisioning. For example:
This is the phase in which the machine is built. vRealize Automation supports different
ways of building a machine, e.g. cloning, WIM provisioning, Linux kickstart and many
more. The provisioning process can be customized as well. The customization settings are
provided via custom properties. These can be defined statically for each machine build or
entered during the machine request.
Approvers are able to inspect a machine and review its settings before it becomes
available to the requester.
Machines spend the majority of their time in the manage phase. End users can perform
actions as defined in the blueprint. There are plenty of possible use case scenarios for
customization. However, custom properties are usually not powerful enough to implement
these. Instead, usually vRealize Automation Orchestrator or Visual Studio is used. Some
of the most common use cases are described as follows:
The final stage is the retirement phase. At the end of the lifecycle, all resources will be
released. As already mentioned, there is also a distinction between expiring and deleting.
Once a machine gets fully destroyed, there may still be plenty of work to do, for example
releasing IP addresses or unregistering the machine or service within a Configuration
Management Database (CMDB).
10.2 Custom Properties
We have already introduced custom properties when we talked about blueprints. However,
custom properties represent a very important concept within vRealize Automation. That’s
reason enough to dig further into the details of custom properties. There are of course
many use cases for using these custom properties (we will address some of them later in
the chapter in detail):
Essentially, custom properties can be viewed as tags. Many custom properties are
predefined, but you can also create your own. Once created, they can be applied to objects
such as blueprints, build profiles and so on. Custom properties are exclusive to the IaaS
components of vRealize Automation. When you create your own custom properties, you
are free to name them whatever you want. However, descriptive names, including
namespaces (the dot “.” acts as a separator), are recommended. If you create your own
custom property and attach it to an object (for example, a blueprint), nothing will happen
by itself, as custom properties act as a kind of metadata. However, the standard built-in
workflows of vRealize Automation are aware of the predefined custom properties. They
have built-in logic, which is executed upon detection of such a custom property.
Consequently, if you create your own custom properties, you need a ‘handler’ to run some
kind of logic. Usually, vCenter Orchestrator is used to implement such logic within a workflow.
We have already shown you how to add custom properties to a blueprint. However, you
might also have noticed that custom properties can be configured at other levels of
vRealize Automation. In fact, you can define the very same custom property at different
levels. There is an order in which these properties are evaluated (the first one has the
highest priority):
• Business Group
• Blueprint
• Build profile
• Endpoint
• Reservation
• Compute Resource
• Storage
The idea of defining the same custom property at different levels is that you can configure
some kind of standard behavior and then override it if required. Nevertheless, you should
be careful with this. When a custom property is applied to a blueprint, there is no tool to
find out at which level it has been applied. In fact, if you forget a custom property
somewhere, the provisioning and the other workflows might not work the way you
envisioned.
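The evaluation order above amounts to a "first match wins" lookup, with the business group at the highest priority. The sketch below is our own illustration of that precedence, not vRealize Automation's internal implementation:

```python
# Highest-priority level first, as listed above.
PRECEDENCE = [
    "business_group", "blueprint", "build_profile",
    "endpoint", "reservation", "compute_resource", "storage",
]

def resolve_property(name, levels):
    """levels maps a level name to its dict of custom properties.
    The first level (in precedence order) defining the property wins."""
    for level in PRECEDENCE:
        props = levels.get(level, {})
        if name in props:
            return props[name], level
    return None, None

levels = {
    "blueprint": {"VMware.VirtualCenter.Folder": "dev-vms"},
    "build_profile": {"VMware.VirtualCenter.Folder": "default-vms"},
}
value, source = resolve_property("VMware.VirtualCenter.Folder", levels)
print(value, source)  # the blueprint value overrides the build profile
```

A helper like this also hints at the tooling gap mentioned above: since vRealize Automation offers no such lookup, keeping your own record of where each property is set is worthwhile.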
• VirtualMachine.Admin.UUID
• VirtualMachine.Admin.AgentID
• VirtualMachine.Admin.Name
10.2.4. Internal custom properties
The next category of custom properties is only used internally to store the metadata of
virtual machines. These properties do not affect the state of a machine. An example
internal custom property is a machine approver. Other internal properties are enumerated
in the following:
• VirtualMachine.Admin.Owner
• VirtualMachine.Admin.Description
• VirtualMachine.Admin.AdministratorEmail
• VirtualMachine.Admin.ConnectionAddress
• VirtualMachine.Admin.NameCompletion
10.2.5. External custom properties
External custom properties store information relating to a virtual machine. Once these
properties have been set, they will not change. If changes are made to virtual machines,
e.g. via the vSphere Web Client, vRealize Automation will not update these properties.
This means that the values of external custom properties could be outdated in vRealize
Automation. The following list depicts some examples of external custom properties:
• Hostname
• VirtualMachine.Admin.ClusterName
• VirtualMachine.Admin.ForceHost
• VirtualMachine.Admin.ThinProvision
• VirtualMachine.Admin.AddOwnerToAdmins
• VirtualMachine.Admin.AllowLogin
• VirtualMachine.Storage.Name
• VMware.Memory.Reservation
• VMware.VirtualCenter.Folder
• VMware.VirtualCenter.OperatingSystem
10.2.6. Updated custom properties
The last category is quite similar to the external custom property category. However, the
difference here is that the updated custom properties automatically reflect changes caused
by machine modification. Depending on the custom property, such behavior is quite
important. Consider a scenario where an end user requests a low-end machine from the
service catalog. Due to limited capacity, the machine is quite cheap. However, if the user
has appropriate vCenter permissions, they could theoretically log in to the vSphere Client
and increase the machine resources. In this case, it is important that vRealize Automation
can detect such hardware modifications. If not, users could continue paying the original
low price indicated in the service catalog, instead of a correctly adjusted higher price.
Based on that scenario, we can conclude that basic hardware settings must also be tracked
via custom properties. These properties are called updated custom properties. Other
examples of this category are:
• VirtualMachine.Admin.Hostname
• VirtualMachine.Admin.TotalDiskUsage
• VirtualMachine.Memory.Size
• VirtualMachine.CPU.Count
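The scenario above, detecting that a user changed hardware behind vRealize Automation's back, boils down to comparing stored property values with values freshly collected from vCenter. A minimal sketch (the comparison logic is ours; only the property names come from the text):

```python
def detect_modifications(stored, collected, updated_props):
    """Compare the property values vRealize Automation has stored with
    freshly collected ones; only 'updated' custom properties are refreshed."""
    changes = {}
    for prop in updated_props:
        if stored.get(prop) != collected.get(prop):
            changes[prop] = (stored.get(prop), collected.get(prop))
    return changes

UPDATED = ["VirtualMachine.Memory.Size", "VirtualMachine.CPU.Count"]

stored = {"VirtualMachine.Memory.Size": "1024", "VirtualMachine.CPU.Count": "1"}
collected = {"VirtualMachine.Memory.Size": "8192", "VirtualMachine.CPU.Count": "1"}

for prop, (old, new) in detect_modifications(stored, collected, UPDATED).items():
    print(f"{prop}: {old} -> {new}  (price may need adjusting)")
```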
10.2.7. Configuration of custom properties
Defining custom properties is pretty straightforward. Fig. 10-3 depicts the configuration of
a custom property at a blueprint level. The following parameters can be defined for a
custom property:
• A predefined value.
• Optionally, the value of a custom property can be encrypted.
• If there is no predefined value, vRealize Automation can prompt the user (Prompt
User).
Fig. 10-4 shows how vRealize Automation adds an additional input field on the request
page, when the Prompt User option has been set.
Please also note that the definition of a custom property is always case sensitive. If there is
any typing error, the custom property will not work.
In most cases, assigning a single custom property is not sufficient. Instead, a set of custom
properties must be applied to a blueprint. If there are many blueprints, assigning these
properties individually means a lot of work. In such cases, using build profiles is a viable
alternative to applying many single custom properties. A build profile groups custom
properties and allows them to be reused. You can define your own build profiles, but
vRealize Automation already includes a predefined set of build profiles:
• ActiveDirectoryCleaningPlugin
• CitrixDesktopProperties
• PxeProvisioningProperties
• SysprepProperties
• VMwareXXXXXProperties
10.3.1. Create build profiles
The build profiles already provided by vRealize Automation may help you in many cases.
However, if you want to perform additional customization, it is quite handy to be able to
create your own build profiles.
Having your own build profiles helps you to increase the manageability of your vRealize
Automation solution. Custom properties can be centrally grouped in build profiles and
store global variables such as Active Directory domain information, scripts to be executed
or any cleanup logic for de-provisioning.
Furthermore, build profiles can also help you to get a grip on ‘blueprint sprawl’. Many
organizations tend to create many different blueprints with only minor differences (e.g.
hardware equipment, services running, etc.). It would be better practice to use a generic
blueprint and then utilize build profiles to adjust it to the requestor’s requirements. Later
in this chapter, we will demonstrate how to achieve such behavior using build profiles.
Right now, we will take a look at how to create such build profiles in the first place:
As you have already seen, there are many predefined custom properties in vRealize
Automation. You can of course create your own custom properties as well. In any case, if
the value assigned to a custom property is not valid, there will most likely be an error in
any workflow that evaluates these custom properties later on. Consequently, it is crucial to
test your custom properties before using them in production. However, things become
more complex when asking your users for input. If the input is not valid, any running
workflow will of course also fail. In order to circumvent such failures, we need a way to
ensure valid inputs before a request can be submitted.
The property dictionary can help us with these issues. It offers the following functionality:
• You can define data validations to ensure input is entered in the right format. For
example, you can validate the data type of an input, e.g. if you want to have an
integer, a text or a date. Entering input that does not meet the right format is then
prohibited.
• Additionally, you can define constraints. Constraints help to validate the content of
an input. There are different types of constraints: You can configure a minimum or
maximum value, intervals, value sets for dropdown lists or regular expressions for
text input.
• To increase the usability, tooltips can be configured and attached to a control. Once
a user hovers over such a control, the appropriate text is shown.
• There are different sets of controls available and they are able to interact with each
other. For example, when choosing a location, it is possible to only show the
networks available at this location. These controls can be ordered by using Ordered
user control layouts.
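The data validations and constraints described above can be modeled as a small checker. The attribute names (MinValue, MaxValue, Interval) mirror the property dictionary's vocabulary, but the checking logic and the Regex key are our own sketch:

```python
import re

def validate(value, attributes):
    """Check a user input against property-dictionary-style constraints.
    Returns a list of violation messages (empty list means valid input)."""
    errors = []
    if "MinValue" in attributes and value < attributes["MinValue"]:
        errors.append(f"{value} is below the minimum {attributes['MinValue']}")
    if "MaxValue" in attributes and value > attributes["MaxValue"]:
        errors.append(f"{value} is above the maximum {attributes['MaxValue']}")
    if "Interval" in attributes and value % attributes["Interval"] != 0:
        errors.append(f"{value} is not a multiple of {attributes['Interval']}")
    if "Regex" in attributes and not re.fullmatch(attributes["Regex"], str(value)):
        errors.append(f"'{value}' does not match the required pattern")
    return errors

# Memory in MB: between 1024 and 8192, in steps of 1024 (assumed example).
mem_attrs = {"MinValue": 1024, "MaxValue": 8192, "Interval": 1024}
print(validate(4096, mem_attrs))  # [] -> valid
print(validate(5000, mem_attrs))  # not a multiple of 1024
```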
• Textboxes
• Checkboxes
• DateTimeEdit
• DropDown
• DropDownList
• Integer textbox
• Labels
• Links
• Notes (multiline textbox)
• Password (hidden textbox)
10.4.1. Using the property dictionary
Adding a new item into the property dictionary usually requires you to perform the
following steps:
• First, a new definition has to be created. This involves defining a name, description
and choosing the right kind of user control. As there can be many different
definitions, it is certainly useful to think about a naming strategy with namespaces.
For namespaces, the “.” character is recommended as a separator.
• The second step is to add property attributes. This involves adding property
attributes for data validation, constraints, tooltips or relationships.
• The final step is to define the ordering of the controls on the user interface.
Bearing this knowledge in mind, let’s see how we can implement these steps.
Once a property definition has been created, additional property attributes can be attached
to it. These attributes can be used to configure constraints, data validations, tooltips or
relationships between property definitions. Work through the following steps in order to
create a property attribute:
1. Log in to vRealize Automation as a fabric administrator.
2. Change to the Infrastructure > Blueprints > Property Dictionary page.
3. Select a property definition and click the appropriate Edit button.
4. Click on New Property Attribute.
5. Choose a Type for the property attribute (the types available depend on the control
chosen). You can choose the following types:
a. Help Text
b. Order Index
c. Relationship
d. Value Expression
e. Value
f. MinValue
g. MaxValue
h. Interval
Most of the types are quite self-explanatory – only the Value Expression and the
Relationship types need some additional explanation (which is given later).
6. Enter a name attribute in the Name textbox (this name is not visible in the user
interface).
7. Provide a Value for the property attribute.
8. Click the Save-icon.
9. Save the property attributes by clicking OK.
The most complex property attribute type is that of relationship. It can be used to create
nested dropdown lists – the values visible within a dropdown list are dependent on the
chosen value of another dropdown field. There are plenty of use cases for this. For
example, a user could choose a location where a blueprint should be deployed and then,
depending on the selection, different networks and storage paths are shown in the user
interface. Fig. 10-6 shows such a relationship between two controls. However, the
implementation of such a scenario is not easy. From a conceptual point of view, the
following steps must be carefully worked through:
1. Create two property definitions – one for the parent element, and one for the child
element.
2. Define a relationship attribute on the child side. The relationship attribute must
reference the parent element.
3. Write a Value Expression, which defines which elements in the child control will be
shown dependent on the chosen parent value.
4. Assign the Value Expression to the child property definition.
5. Add both properties to a blueprint or build profile.
6. Optionally, configure a property layout.
With these conceptual steps explained, we can now show how to implement such
dropdown lists.
10.4.4. Create the parent property definition
Now we have to define the values for the parent dropdown list.
1. Within the VirtualMachine.Network.Environment row, click the Edit link.
2. Create a New Property Attribute.
3. Choose ValueList as a Type.
4. Enter a name attribute in the Name textbox (this name is not visible in the user
interface).
5. In the Values field, enter “Production, Test”.
6. Click the Save-icon.
7. Click OK.
At this point, we have created controls for the parent as well as for the child element. The
next step is to create a relationship between these two controls, so they can interact with
each other:
The most difficult part of this process is writing the value expression. The value
expression defines the values displayed in the child element, when a value in the parent
element has been selected. Fig. 10-8 already depicted this relationship in our scenario. The
next step is to “translate” this relationship into an XML document. Open an XML editor
and paste in the following text:
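A value expression for our scenario has the following general shape (a sketch: the exact root element and namespace attributes should be verified against your vRealize Automation version, and the network names Prod-Network-1/2 and Test-Network-1/2 are placeholders for the values used in your environment):

```xml
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<ArrayOfPropertyValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- Production environment: two selectable networks -->
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Production</FilterValue>
    <Value>Prod-Network-1</Value>
  </PropertyValue>
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Production</FilterValue>
    <Value>Prod-Network-2</Value>
  </PropertyValue>
  <!-- Test environment: two selectable networks -->
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Test</FilterValue>
    <Value>Test-Network-1</Value>
  </PropertyValue>
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Test</FilterValue>
    <Value>Test-Network-2</Value>
  </PropertyValue>
</ArrayOfPropertyValue>
```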
The XML code certainly needs some explanation. The purpose of this code is to show all
combinations between the two dropdown lists. Our sample scenario has four different
combinations. Hence, the XML document also has four PropertyValue elements.
Each PropertyValue has a FilterName element. This FilterName depicts where a
PropertyValue can be applied – in our case it can be applied to the
VirtualMachine.Network.Environment.
The remaining two sub elements, FilterValue and Value, define the actual combinations.
Creating such XML documents can be a pain, especially because you have to take extra care not
to make any mistakes. These mistakes can cause your dropdown lists to stop working as
expected. Fortunately, there is help available in the form of a small Excel macro, which
can create the XML document for you[6]. If you are creating the XML document with this
macro, you can skip the next step and upload it to vRealize Automation. Otherwise you
must take care that the XML is in the right format.
The ValueExpression input field in vRealize Automation expects the XML text to be in a
single line, so all line breaks must be removed first. Once this has been done, it can be
uploaded:
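Removing the line breaks can also be scripted. Here is a minimal sketch (the function name and the expression.xml filename are assumptions for illustration):

```shell
# Strip all line breaks from an XML file so it fits into the
# ValueExpression input field as a single line.
flatten_xml() {
    tr -d '\r\n' < "$1"
}
# Usage: flatten_xml expression.xml > expression-oneline.xml
```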
1. Locate the VirtualMachine.Network0.Name property and click the Edit link in the
same row.
2. Create a New PropertyAttribute.
3. Choose ValueExpression as a Type.
4. Enter Expression as a Name.
5. Paste the XML text into the Value field.
6. Click the Save-icon.
7. Click OK.
The final step is simple. Create a new build profile and add the custom properties to it.
Then navigate to your blueprint and add the build profile. Alternatively, you can directly
add the custom properties to your blueprints. Please also don’t forget to mark the custom
properties as required.
Once everything has been completed, go to your service catalog and request the
appropriate IaaS service. If everything works as expected, you should see both input fields
interacting with each other (see Fig. 10-8).
Property layouts act as a container for properties. When creating a property layout, you
only have to assign a name. In a second step, you can add the properties, along with an
ordering of controls. Once you have created a property layout, you can add it to a build
profile or blueprint.
Perform the following steps to configure a property layout for our scenario:
Once you have created the property layout, you can add it to a blueprint or build profile.
We have already mentioned that vRealize Automation comes with a huge set of custom
properties. These are described in a distinct document within the official vRealize
Automation Documentation, the Custom Property Reference. Besides using the pre-
defined custom properties, users can also create their own custom properties. For example,
these can be used to ask for additional input from the blueprint service request. The input
can be passed, as a parameter, to a script residing in the machine to be provisioned (we
have learnt how the guest agent can run scripts with external parameters in Chapter 6).
When using your own custom properties, please consider that there is always a handler
required. This handler will evaluate the custom properties and execute some business
logic. In chapter 14, we will show how to use Orchestrator with custom properties in order
to implement a set of use cases.
10.6 Summary
This chapter introduced custom properties and build profiles. There are several reasons to
use custom properties and build profiles. Firstly, they help to change the behavior of the
built-in workflows. Secondly, they allow you to change the user interface, when
requesting an IaaS service from the service catalog. Internally, there are different kinds of
custom properties: internal, read-only, external and updated.
We also introduced build profiles as a way of grouping custom properties. Furthermore,
we showed how to use the Property Dictionary within vRealize Automation.
11. Advanced Administration
So far, we have done all the administrative work manually. However, during operations
there is a lot of repetitive administrative work to be carried out. Consequently, there is a
need to automate these tasks. A common use case is performing bulk operations, for
example when you want to make resource changes to multiple systems or need to
power on/off a number of systems at a specified time. You can also coordinate changes
across tenants, groups or vRealize Automation instances. Using the CloudClient, you can
call vRealize Automation functionality from other applications and even perform
composite tasks across different VMware products.
VMware published vRealize CloudClient – a command-line tool that provides easy
access to vRealize Automation, vRealize Orchestrator and VMware Site Recovery
Manager functions.
Technically, CloudClient is a command-line tool that is based on Java. Hence you have to
install a Java 1.7 JRE (other Java versions might not work) first. CloudClient can be used
on Windows as well as on Linux and MacOS operating systems. When setting up your
Java environment, please make sure that the Java bin path is within your ‘Path’
environment variable. The CloudClient tool itself can be downloaded from the VMware
Developer Center[7].
When you first start the tool, you have to accept the EULA. Once this has been done you
can start running scripts. CloudClient supports auto completion and has a built-in help,
which can be invoked by issuing the help command.
The first step is to log in to a vRealize Automation instance. This can be done with the
following command:
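As an example, an interactive login might look like the following (a sketch: the exact subcommand and flag names can vary between CloudClient versions, and the server, tenant and user values are placeholders; check the built-in help command for the syntax of your release):

```shell
# From within the CloudClient shell, log in to a vRealize Automation instance
vra login userpass --user administrator@vsphere.local --tenant vsphere.local --server https://vra.example.com
```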
Alternatively, you can configure an automatic login. This can be achieved with a
CloudClient.properties file or by having system properties set. If you are using the
automatic login feature, you should first store your password in an encrypted file. This can
be done with the following command:
The following table 11-1 shows the variables that can be set for autologin:
Variable Description
vra_username vRA username. To log in as the top-level system administrator, the
username is administrator@vsphere.local
You can also create a CloudClient.properties file by typing the following command:
login autologinfile
Once the environment variables have been set, you can test if everything is working:
At this point, we want to give some examples of how to interact with
CloudClient. The first example will switch off a virtual machine:
#!/bin/sh
#
# Set up environment variables for auto login to the CloudClient shell
# ($machine and $action must hold the provisioned item ID and the action name)
. ./env.sh
# Execute CloudClient
./cloudclient.sh vra provisioneditem action execute --id $machine --action $action

# env.sh (sourced above):
#!/bin/sh
export vra_server=server_name
export vra_username=user1
export vra_keyfile=keyfile.enc
The next example shows how to start an Orchestrator workflow from CloudClient:
#!/bin/sh
#
# Set up environment variables for auto login to the CloudClient shell
export vco_server=10.10.50.60
export vco_username=administrator@vsphere.local
export vco_password=mypassword
# Execute CloudClient: start the workflow with ID $wflowId, passing its
# input parameters via a request file
./cloudclient.sh vco workflow execute --id $wflowId --requestfile $requestFile
In order to start the workflow, you have to replace the content of the wflowId variable. You
can retrieve the workflow ID by selecting the workflow in the vCO Java Client and
pressing Ctrl-C (keyboard copy shortcut) and then pasting that into a text editor. This will
actually display the XML-representation of the workflow.
Due to the large number of separate components within vRealize Automation, monitoring
it is quite difficult. The following components make up a vRealize Automation
installation:
Due to the large number of services, monitoring each of them independently is not
feasible. Fortunately, VMware provides a plug-in for VMware Hyperic and a
management pack for VMware vRealize Operations that together relieve the burden
of setting up component monitoring.
Hyperic is an agent-based monitoring system that automatically collects metrics on the
performance and availability of hardware resources, operating systems, middleware and
applications in physical, virtualized and cloud environments.
The following steps have to be done before vRealize Operations can be configured for
vRealize Automation monitoring:
• Deploy Hyperic
• Deploy vRealize Operations
• Install the management pack for Hyperic in vRealize Operations
Once these prerequisites have been met, the following steps can be done:
With the plug-in installed, all we have to do is to wait for vRealize Hyperic to
autodiscover the new services. It also takes a short time to propagate the changes to
vRealize Operations.
Within vRealize Operations, you will be able to find a new inventory tree (go to the
Environment overview).
There is also a dedicated management pack for vRealize Automation. This management
pack gives visibility into the performance, health and capacity risk of tenants and business
groups. More specifically, it provides the following features and functionalities:
• There are out-of-the box dashboards that provide an overview of tenants, business
groups, reservations and reservation policies (see Fig. 11-2).
• The relationship between vSphere objects like VMs, clusters or datastores and
vRealize Automation objects like tenants, business groups or reservations is shown.
• Smart alerts help to detect any performance issues.
As with every software system, there should be a backup plan for vRealize Automation.
Administrators should back up the entire vRealize Automation installation. If there is any
system failure, the system can be recovered by restoring the last known correctly working
backup. Essentially, the following components should be backed up:
Backing up the IaaS MS SQL database and the PostgreSQL database can be carried out
with the built-in database tools or with any external backup software that supports these
databases.
If a failure occurs, the database backup can be used to restore to the most recent status. If
only one database fails, it might be reasonable to restore it and revert the functional
database to the same version as that in use at the time the backup was created.
11.3.2. Identity appliance or SSO appliance
If you want to restore the appliance, you can make use of these techniques again.
When backing up the vRealize Automation appliance, you have to make a copy of the
configuration files. Before you start the backup, verify that the size of the encryption.key
file is greater than 0. If the file size equals zero, type these commands:
• /etc/vcac/encryption.key
• /etc/vcac/vcac.keystore
• /etc/vcac/vcac.properties
• /etc/vcac/security.properties
• /etc/vcac/server.xml
• /etc/vcac/solution-users.properties
• /etc/apache2/server.pem
• /etc/vco/app-server/sso.properties
• /etc/vco/app-server/plugins/*
• /etc/vco/app-server/vmo.properties
• /etc/vco/app-server/js-io-rights.conf
• /etc/vco/app-server/security/*
• /etc/vco/app-server/vco-registration-id
• /etc/vco/app-server/vcac-registration.status
• /etc/vco/configuration/passwd.properties
• /var/lib/rabbitmq/.erlang.cookie
• /var/lib/rabbitmq/mnesia/**
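Copying these files can be scripted. The sketch below wraps tar around an arbitrary list of the paths above (the function name, output path and example invocation are illustrative, not part of the product):

```shell
# Sketch: archive a list of configuration files/directories into one tarball.
# backup_vcac_config <output.tar.gz> <path>...
backup_vcac_config() {
    out="$1"
    shift
    tar czf "$out" "$@"
}
# Example invocation on the appliance (run as root):
# backup_vcac_config /tmp/vcac-config-backup.tar.gz /etc/vcac /etc/vco /etc/apache2/server.pem
```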
Of course, you can also create a snapshot of the virtual appliance. This might be a good
idea before doing any configuration changes, but is not an alternative to a full backup.
As mentioned before, the load balancer should also be backed up. vRealize Automation
does not provide a built-in load balancer – instead, a third-party load balancer will be
used. Consequently, take a look at your load balancer vendor’s instructions for creating a
backup.
When restoring the appliance, perform the following steps:
The following steps must only be performed when the hostname or the IP address has
changed:
Backing up the IaaS components is slightly more difficult. You must backup the following
components:
• For the primary Web node only, in the Model Manager Data folder (<vCAC
Folder>\Server):
o ConfigTool folder (applicable only for the primary Web node)
o policy.config file
• The following file located in the installation folder (<vCAC
Folder>\Server\Website\):
o Web.config file
• The following files located in the installation folder (<vCAC Folder>\Web API\):
o Web.config file
o policy.config file
• The name of the IIS instance.
11.3.6. Certificates
The following certificates should be backed up at installation time or when certificates are
changed.
11.4 Summary
This chapter concludes the Administration part of the book. We introduced how to use the
vRealize Automation CLI, gave an introduction into monitoring and highlighted the
importance of backup and recovery.
12. Extensibility Overview
In most projects, sooner or later, a time will come when not all of the requirements can be
covered by means of the administrative graphical user interface. The same also applies to
vRealize Automation.
Simple extensibility scenarios often involve the customization of existing workflows.
For example, a user wants to choose a script to be executed after the
provisioning of a machine (such as changing the hostname or specifying the way of
provisioning). While a lot of examples can be realized by means of custom properties,
build profiles and the property dictionary, there are also a lot of requirements where
programming is needed.
These use cases are often quite sophisticated. For example, there could be an integration
of vRealize Automation with the external infrastructure, e.g. an existing IP address
management tool, a tool for defining host names or any other integration with some kind
of business tool. To realize these sorts of requirements, VMware provides the vCenter
Orchestrator and the vRealize Automation Center Designer.
Another scenario is Anything-as-a-Service (XaaS) offerings. The basic idea
behind XaaS is that vRealize Automation could be used to publish all kinds of services –
not only IaaS offerings. vRealize Automation provides a self-service portal with a fine-
grained permission system. This can be used to host a variety of services. Members of the
HR department could use vRealize Automation to create Active Directory users. Jobs can
be invoked or other tasks carried out, all managed by VMware
Orchestrator in the background. The only task for administrators is to publish these
services via the Advanced Service Designer.
Sometimes there is need for yet more complex adaptations. It is possible to extend the
vRealize Model Manager to provide a seamless integration with external systems or
implement new workflows and activities directly in Microsoft .NET (the IaaS components
are based on Windows Workflow Foundation). To achieve this, the vRealize Cloud
Development Kit (CDK) could be used. Figure 12-1 depicts these extensibility options.
The remainder of this chapter gives an in-depth overview of these options. A detailed
discussion will then follow in the next chapters.
Basically, adaptations can be performed with the vRealize Designer as well as VMware
Orchestrator. The reasons for these two different tools can be found in the history of
vRealize Automation.
vRealize Designer is a .NET based Windows application. Historically, it is the older
of the two tools (when it comes to vRealize extensibility) and was implemented to
customize workflows (however, not to create new ones). At a programming level, its
strength lies in working in much the same way as a standard Windows
application. Therefore it is quite easy for developers to understand.
Shortly after the acquisition of Dynamic Ops by VMware, the work on integration of
vRealize Automation into Orchestrator was started. Finally, with vRealize Automation
release 6, nearly all features of the vRealize Automation Designer were also supported in
Orchestrator.
Orchestrator is VMware’s central tool for all kinds of automation within the datacenter.
In contrast to vRealize Automation Designer, it is implemented in Java and hence runs on
nearly every operating system. Orchestrator is very rich in features. First of all, it is an
intuitive Integrated Development Environment (IDE). Similar to vRealize Automation
Designer, it is possible to create workflows by means of drag-and-drop techniques.
However, developers can also use JavaScript to implement their own code. Orchestrator
already provides a set of plug-ins, which allow the integration of external systems without
adding any lines of code (for example Active Directory, SSH, Databases, SOAP, REST,
vCenter, etc.). Many other plug-ins also exist on the market and these can be easily
installed in Orchestrator (they can be found in the VMware Solution Exchange).
Furthermore, workflows can be easily invoked from within Orchestrator. There is even an
integrated source code versioning system, with the possibility to see historical workflow
runs.
Figure 12-3 shows the graphical user interface of Orchestrator. Similar to vRealize
Automation Designer, there is a toolbox on the left-hand side, where a broad set of built-in
tasks and activities are already available.
Figure 12-3 vCenter Orchestrator workflow
Due to this large set of features, the use of Orchestrator should be preferred to the use of
vRealize Automation Designer. However, as is often the case, there is an exception to the
rule – when there are already a large number of implemented .NET workflows. But this
should not be the case in most projects.
The vRealize Advanced Edition allows the use of the Advanced Service Designer. This
allows the publishing of XaaS services to the self-service catalog. From a technical point
of view, these workflows run within Orchestrator. Hence, the Advanced Service Designer
gives administrators the possibility to integrate existing Orchestrator workflows into the
vRealize Automation service catalog. This integration encompasses two steps. Firstly, the
user interfaces for the workflow request must be defined. Secondly, the XaaS workflow
must be published to the service catalog.
There are plenty of use cases for the ASD:
While Orchestrator is already quite a powerful tool, there might be certain use cases,
where it is not sufficient.
For certain extensibility use cases, the Cloud Development Kit (CDK) needs to be used.
A special feature of the CDK is the possibility to extend the vRealize Automation entity
model within the Model Manager. This is quite useful, if there are plans to seamlessly
integrate vRealize Automation with external systems (for example CMDBs, IP address
management tools, etc.). After a model has been extended, it is quite easy to access these
systems from within Orchestrator or vRealize Automation Designer. As stated, another
advantage is the possibility to develop workflows directly in .NET. This is interesting, as
the .NET framework is quite powerful in terms of APIs. It also allows existing Windows
dlls to be reused.
However, as VMware’s current plans regarding the IaaS
components are still not clear, its use should be carefully considered before new
workflows are implemented in .NET.
12.6 Summary
This chapter gave an overview of the extensibility options within vRealize Automation.
All the different tools – vRealize Automation Designer, Orchestrator and the CDK were
introduced. The following chapters will now give a deeper insight into how to work with
these tools.
13. Working with vRealize Automation Designer
In the last chapter, we showed by what means and tools vRealize Automation can be
extended. We will now delve into how to implement new functionality. This chapter will
cover the vRealize Automation Designer. The following chapters will cover vRealize
Orchestrator and the Advanced Service Designer.
Before creating workflows for IaaS provisioning, it is first crucial that you understand the
IaaS model and thus are able to modify its values as required. In chapter 3 we introduced
the Model Manager and explained how the Microsoft SQL server is used to store its data.
However, working with the database directly is (in most cases) not recommended. Instead,
vRealize Automation provides helper methods in order to deal with this data.
Nevertheless, there is a good tool available to explore the IaaS model - LINQPad[8]. Once
you have installed the tool (a Windows system with .NET is needed), you can explore the
IaaS model:
Once a connection has been established, LINQPad shows all the IaaS entities on the left-
hand side of the screen. Besides showing entities, LINQPad also provides a wizard to
create and execute LINQ queries. For example, you can create a query by right-clicking
on an entity and selecting the first function <Entity name>.Take(100). LINQPad then
executes the query and shows the result set within the main area of the screen (see Fig. 13-
2).
There are many different entities within the vRealize Automation Model Manager – for
example, virtual machines, users, reservations or build profiles. However, you must also
bear in mind that the Model Manager only stores IaaS data – everything else is stored
within the vPostgres database on the Linux appliance.
Each entity in vRealize Automation has a set of properties. Entities are usually linked to
each other. For example, there might be different pending requests for a user, or a virtual
machine has different VirtualMachineProperties. Relationships between entities are
indicated with a fork-icon. There are three different relationships: One-to-One, One-to-
Many and Many-to-One.
13.1.1. Background: LINQ
The first step to using vRealize Designer is – of course – to install it. You can download
the Designer from the vRealize Appliance website (https://<vrealize-automation-
appliance.domain.name>:5480). Like the IaaS services, the designer requires Microsoft
.NET to be preinstalled. Also, before you start the installation, you must have the address
of the vRealize Automation service and the Model Manager at hand. Please also note that
the tool needs an open connection to your vRealize Automation environment at runtime.
Once you have completed the installation, you can start the tool (see Fig. 13-3). The
graphical user interface of this tool is quite intuitive. The ribbon menu only offers a small
number of commands. These include options to load, save, open or send workflows.
Most work is carried out in the main pane – that’s where you can develop your
workflows. A workflow consists of many smaller tasks. These are arranged sequentially
and connected via arcs.
Figure 13-3 vRealize Automation Designer workflow stub
On the left-hand side of the screen you can see the toolbox. It contains a variety of tasks,
grouped into the following categories:
• DynamicOps.Repository: Tasks within this category are used to access the entities
within the vRealize Automation model. You can use this category as well, if you
want to invoke another workflow stored in the Model Manager.
• DynamicOps.VcoModel.Activities: The enclosed tasks are used for interaction
with vRealize Orchestrator. With the latest releases of vRealize Automation, these
tasks have become less important. vRealize Orchestrator and vRealize Automation
can now interact with each other independently, without the need to create a special
workflow in vRealize Designer.
• The DynamicOps.Cdk.Activities category provides a variety of tasks, including
logging, sending emails or retrieving information about virtual machines.
• The ControlFlow and FlowChart categories provide programming constructs for
flows, decisions, or if-statements.
• Basic programming tasks are encompassed within the Primitives categories. For
example, there are tasks for assigning variables, invoking methods, or outputting
something to the console.
• If you need access to arrays or collections, the Collection category provides some
helper tasks.
• For error handling there is also a dedicated category.
You can define the basic structure of your workflow by dragging and dropping elements
from the toolbox into the main pane. Once an element has been placed within the designer
pane, it has to be configured. This can be done by selecting the appropriate element in the
main pane. On the right-hand side of the screen, there is a Properties window. Here
configuration settings can be specified.
Tasks within the designer pane must always be connected to a workflow. Therefore you
must use arcs to connect them with other elements. A minimum workflow always has a
start element and an end element – so if you add additional tasks, you have to place them
within these nodes.
Another very important concept of vRealize Designer is the use of variables. Variables
are needed to pass information between different tasks. For example, if there is a task that
reads the content of a script file and a second that executes the script, then a variable is
needed to store the script content and pass it to the next task. Variables have a context
which determines the visibility of the variable. For example, global variables can be read
and modified everywhere, whereas a local variable is only defined within a certain scope
and may not be touched from outside of this scope.
Workflows within vRealize Designer are not invoked directly by users. Instead, invocation
happens as part of the machine’s lifecycle (for example during provisioning), or is based
on user interaction within the self-service catalog. In both cases, the workflow needs some
input regarding the context in which it is running. This contains information regarding
which virtual machine is concerned and/or what kind of action was invoked. To store this
information, each workflow must provide a set of variables. Here the aforementioned
information can be found. Variables can be found in the lower part of the screen.
It is important to note that vRealize Designer allows modification of existing workflows,
but does not permit creation of new ones (see Fig. 13-4). However, there is a set of
existing workflow stubs that can be used to implement the required workflow logic. If you
need additional workflows, you must first purchase the Cloud Development Kit (CDK).
There are workflow stubs for the following purposes:
When creating a workflow, from time to time, there is also a need to upload additional
artifacts such as PowerShell scripts, configuration files or any other files. When
discussing multi-machine blueprints, we already introduced the CloudUtil tool, which is
responsible for uploading such content to the server.
Figure 13-4 vRealize Automation Designer workflows
After having given a small introduction to the foundations of vRealize Designer, it is time
to become more practical and show how to use the designer tool itself.
In this scenario, we will write a small script that will be uploaded to the Model Manager.
This can then be invoked from a vRealize Designer workflow during the provisioning of a
virtual machine. The script used in this example is simple, but you could replace it with a
more complex one in your environment.
First, open the PowerShell-ISE or an equivalent editor on your computer and paste the
following code fragment:
$Hostname = $Properties["VirtualMachineName"]
## script logic begins here
Write-Host $Hostname
Once finished, save the script on your desktop computer and perform the following steps
on a machine with the vRealize Designer installed:
Now we can begin with the implementation of the workflow. Start the vRealize Designer
and follow the steps as described below:
1. Within the ribbon menu, click the Load button and select the
WFStubMachineProvisioned workflow. We will be customizing this workflow and it
will be running after a machine has been provisioned. Click OK. After a short while,
you will see the stub of the workflow (see Fig. 13-6).
2. Double-click on the Machine Provisioned box. You will see that there is some
nested content now shown in the main pane (see Fig. 13-7).
Figure 13-7 Machine provisioned workflow
8. Next, we should load the content of the PowerShell script. Our PowerShell script
receives a single variable as an input. However, as a PowerShell script can receive
multiple parameters, passing the input as dedicated variables is not suitable. Instead,
an array is used to pass arguments. This means we must create a new variable.
Therefore, within the variables section, click on Create Variable and name it args.
Change the type of the variable by clicking on Browse for Types within the
Variable Types column (see Fig. 13-12).
9. Now another dialog opens, in which you have to specify the exact data type. The
PowerShell script expects this input as a variable of type
System.Collections.Generic.Dictionary<TKey, TValue>. This is a dictionary, where
TKey and TValue define the data type of the key and the data type of the value,
respectively. Within the Type Name textbox, enter the dictionary data type and use
String for both TKey and TValue (Fig. 13-13 shows how to configure this).
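For illustration only, the same dictionary type can be built in plain PowerShell (a sketch; in vRealize Designer the variable is created through the dialog, not in code, and the value vm-0042 is a placeholder):

```powershell
# Build a dictionary of the type the script-execution activity expects.
# The Designer variable is named "args"; renamed here because $args is a
# reserved automatic variable inside PowerShell functions.
$scriptArgs = New-Object 'System.Collections.Generic.Dictionary[string,string]'
$scriptArgs["VirtualMachineName"] = "vm-0042"
Write-Host $scriptArgs["VirtualMachineName"]
```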
Figure 13-11 Task configuration
10. Before we can pass the args variable to the PowerShell task, we must first store the
name of our virtual machine in it. This is done with the help of the Assign task. Locate
the appropriate task within the toolbox, drag and drop it below the
GetMachineName element and connect it with the aforementioned task.
11. Configure the Assign-task (see Fig. 13-14):
To: args("VirtualMachineName")
Value: vmName
12. Finally, we can invoke the PowerShell script. There is a special activity called
ExecutePowerShellScript, which allows us to do that. Drag and drop the element
from the toolbox and connect it with the Assign task (see Fig. 13-15).
13. Like other workflow elements, the ExecutePowerShellScript task also needs to be
properly configured (see Fig. 13-16):
Script Name: the name of the script, as specified during the upload with
CloudUtil
Machine Id: "VirtualMachineId"
Arguments: "args"
14. Now that the implementation of the workflow has been completed, we just have to
save it. Click the Send button within the ribbon menu in order to do so.
Workflows in vRealize Designer usually encompass many different tasks. Often, these
tasks do not all fit on the screen, so vRealize Designer displays them in a nested fashion,
arranging the elements on different levels to keep the workflow readable. When you open
a workflow, you will see the
top-level element first (the Machine Provisioned Workflow in our scenario). It will
contain a try-catch clause, which encloses a Machine Provisioned Element. The try-catch
clause element is a very powerful construct in programming languages. If any error occurs
within the try-clause, the control flow will automatically jump to the catch-clause and will
execute any exception handling code. Therefore, if any error occurs, at the very least you
want to be notified. Hence, a logging statement is reasonable.
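The behavior of the try-catch clause can be sketched in plain JavaScript (function and state names are illustrative, not taken from the Designer):

```javascript
// If the custom logic throws, control jumps to the catch clause,
// which logs the error so you are at least notified of the failure.
function runStub(customLogic) {
  try {
    customLogic(); // the try-clause: custom workflow logic
    return "complete";
  } catch (e) {
    // the catch-clause: exception handling code
    console.error("Workflow failed: " + e.message);
    return "failed";
  }
}
```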
If you expand the Machine Provisioned element, you will see the structure of the
workflow (see Fig. 13-17). Each workflow has a start and an end node. Before running
any custom workflow logic, a log statement first logs basic information regarding the
actual workflow instance. In parallel, a connection to the vRealize Automation repository
is opened. Your custom logic takes place within the Custom Code task. Once this task has
been successfully executed, the state of the workflow changes to ‘complete’ and the
workflow ends with the creation of a log statement.
Once we have completed the implementation of our workflow, we still have to let
vRealize Automation know when to run it. This can be achieved at the blueprint level:
1. Open the blueprint that needs to be configured for this workflow and switch to the
Properties tab.
2. Assign a new property named ExternalWFStubs.MachineProvisioned (see Fig.
13-18).
3. Don’t enter a value for the property.
4. Click OK to save your changes.
We have shown you how to set up a blueprint that runs a workflow using custom
properties. This applies not only to MachineProvisioned workflows, but also to all
other workflow types. Use the following custom properties for them:
• WFStubBuildingMachine: ExternalWFStubs.BuildingMachine
• WFStubMachineDisposing: ExternalWFStubs.MachineDisposing
• WFStubUnprovisionMachine: ExternalWFStubs.UnprovisionMachine
• WFStubMachineRegistered: ExternalWFStubs.MachineRegistered
• WFStubMachineExpired: ExternalWFStubs.MachineExpired
13.2.5. Additional Workflow activities
Besides the activities covered in this scenario, there are a lot of other tasks available. At
this point, we want to introduce the most important ones:
13.3 Summary
This chapter covered the installation and the use of the vRealize Designer. The designer
was the preferred tool for customization before vRealize Orchestrator gained support for
vRealize Automation. Right now, in vRealize Automation, it remains largely as a tool for
backwards compatibility. Therefore, we recommend using vRealize Orchestrator instead
of vRealize Designer for implementing any new workflows. vRealize Orchestrator is
covered in the next chapter.
14. vRealize Orchestrator
vRealize Orchestrator is a great tool for automating your environment and orchestrating
business processes. This makes IT operations faster and less error-prone. While workflows
could be implemented using traditional programming techniques, Orchestrator simplifies
this process. It facilitates the development of workflows with its integrated development
environment and other built-in features. When creating new workflows, it is desirable
not to have to implement everything from scratch. To this end, vRealize
Orchestrator provides a rich set of pre-built workflows that can be reused. Orchestrator
enables workflows to be exported and imported through packages. As workflows often
need to be run over a long period of time, a lot of techniques must be implemented with
traditional workflow programming in order to increase resilience. Orchestrator, however,
provides a built-in workflow engine. The engine takes care of a lot of issues and offers
multiple ways to run workflows.
As there are already over 500 ready-to-use actions and workflows available, in many
cases there is no longer a need to write your own code. If you need to implement your own
code, Orchestrator uses JavaScript (which is widely used and relatively easy to learn).
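To give a flavor of that scripting, here is a plain-JavaScript sketch of the kind of helper logic an Orchestrator action might contain (Orchestrator-specific objects such as System.log are left out so the snippet runs in any JavaScript engine; the function name and naming scheme are illustrative):

```javascript
// Builds a hostname from a prefix and a running index,
// e.g. prefix "web" and index 7 give "web007".
function buildHostname(prefix, index) {
  // zero-pad the index to three digits
  return prefix + String(index).padStart(3, "0");
}
```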
In the last couple of years, Orchestrator has already become widely utilized for
automation purposes in many companies. However, the integration with vRealize
Automation has pushed the product to a new level. The tight cooperation is also reflected
in the fact that it is now branded as vRealize Orchestrator (instead of vCenter
Orchestrator).
vRealize Orchestrator can help in many scenarios:
• The lifecycle of infrastructure services can be customized. For example, you can
register a virtual machine within a configuration database after provisioning or you
can assign a custom hostname.
• vRealize Orchestrator can also be used if you want to implement your own
customizations and attach them to a blueprint (for example, imagine a workflow
assigned to a blueprint which takes care of the backup of your machine).
• Of course, you can use vRealize Orchestrator to manage and automate the vRealize
infrastructure itself.
• vRealize Orchestrator is also a great tool for third-party vendors. If they want to
integrate their solution into vRealize Automation, they can develop their own
workflows, which in turn can be published to the service catalog, using the
Advanced Service Designer. We will cover Advanced Service Designer in the next
chapter. Such published services are also described as Anything-as-a-Service (XaaS)
blueprints.
• Of course, you can still create your own custom services and integrate them within
your infrastructure. Once again, there are many reasons and scenarios for doing so.
Just as an example, you can request a LUN or perform some Active Directory tasks
via the service catalog.
14.2 vRealize Orchestrator configuration
In addition to configuring the vRealize Automation plug-in, there are other plug-ins that
may also be of interest to you. This is because their enclosed workflows are quite likely to
be called by your workflow. This encompasses the vCenter and the Microsoft Active
Directory plug-ins.
With the above in mind, we can start the configuration. If you are using the internal
Orchestrator instance within vRealize Automation, the first thing to do is to check if it is
running – and if not – start it. This can be done as described in the following:
The first step towards integrating Orchestrator with vRealize Automation is to create an
endpoint. This can be realized as follows:
If you have different Orchestrator instances, you can also override the default orchestrator
instance at blueprint level. This can help if you have any special resource-intensive
workflows that need to run on a dedicated instance. If you want to configure this, you
must add the VMware.VCenter.Orchestrator.EndpointName property and assign the name
of the endpoint.
The Orchestrator client can be downloaded directly from the Orchestrator instance
(https://<vrealize-automation-appliance.domain.name>). You can find the download link
at the bottom of the page. The client is a Java application and requires Java 7, so please
ensure you have this version of Java installed on your computer.
14.4.2. Background: Adding an additional user for the Orchestrator client
To import SSL certificates into Orchestrator, we must open the Orchestrator Configurator
again. Please follow these steps:
The next step is to configure the vRealize Automation plug-in. As integration with
Microsoft Active Directory is quite common, you should also be able to set it up
appropriately. You can do this by following these steps:
The next plug-in to be configured is the Microsoft Active Directory plug-in. However, this
plug-in assumes that your connection with Active Directory will be encrypted with SSL. If
your Windows environment does not support this, you must configure it to do so before
continuing (if you are not sure how to check that, there are Windows tools like ldp.exe to
help). In the following, we will assume that LDAP with SSL is activated and listens on
port 636.
Perform the following, to set up an Active Directory connection:
1. In the workflow library, navigate to Library > Microsoft > Active Directory >
Configuration.
2. Right-click Configure Active Directory server and select Start Workflow.
3. A dialog opens and prompts for the following input (see Fig. 14-5):
a. Active Directory host IP/URL as domain controller.
b. The Port is 636.
c. Provide the Root of the domain, for example dc=vdc,dc=lab.
d. Select Yes for the Use SSL checkbox.
e. With the Ask for confirmation before importing the certificate checkbox,
select Yes.
f. Provide your Default Domain, for example vdc.
4. Click Next to move to the next dialog page and provide the following input:
a. For Use shared session, select Yes.
b. Enter the User name for the shared session (format domain\user).
c. Provide your Password for shared session.
5. Click Submit to start the workflow and accept the security and certificate warnings
if necessary. Once again, if the workflow runs successfully, a completed workflow
token will appear.
If you wish to check that the configuration of the plug-in works, change to the Administer
view (dropdown list in the upper area of the screen) and expand the Active Directory
node, which is found within the navigation area in the left-hand side of the screen (see Fig.
14-6).
Now we can continue with the configuration of the vRealize Automation plug-in. This
involves two steps:
1. In the workflow library, navigate to Library > vCloud Automation Center >
Configuration.
2. Right-click Add the IaaS host of a vCAC host and select Start Workflow.
3. A dialog opens and prompts for the following input (see Fig. 14-7):
a. Under vCAC Host, click the Not Set link.
b. Within the Select (vCACCafe:VCACHost) dialog box, expand vCloud
Automation Center and select Default (your vCAC host) and click Select.
4. Click Next to move to the next dialog page and provide the following input (see Fig.
14-8):
a. Provide your IaaS server as Host Name.
b. Enter the Host URL.
c. Leave the default settings for the Connection timeout.
d. Leave the default settings for the Operation timeout.
5. Click Next to move forward to the next input dialog and provide the following
input:
a. Use Shared session for the Session Mode.
b. Enter an Authentication user name (use the system administrator of your
IaaS Server configuration).
c. Provide the Authentication password.
6. Click Next for the next configuration screen.
7. Provide the following input:
a. For the Workstation for NTLM authentication, leave the default settings.
b. For the Domain for NTLM authentication, enter your domain (for example
vdc).
8. Click Submit to start the workflow.
Once again, you can change to administer mode and expand the vCAC Infrastructure
Administration node to check if your configuration is working.
The final step within this configuration is to install the vCO customizations. The vCO
customizations help when attaching an Orchestrator workflow to a blueprint or installing
an Orchestrator workflow as a menu action within vRealize Automation. This can be done
as follows:
1. In the workflow library, navigate to Library > vCloud Automation Center >
Infrastructure Administration > Extensibility > Installation.
2. Right-click Install vCO customizations and select Start Workflow.
3. Within the Install vCO customization input dialog box, click Not set.
4. Next, you have to choose the vCAC Host for the customizations. Expand vCAC
Infrastructure Administration and select your IaaS host.
5. Click Select and continue with Next.
6. The next configuration screen will add additional workflow stubs. On the “State
change workflows stubs to update to run vCO workflow” page, click Next.
7. Enter 8.0 on the Virtual machines menus to create page.
8. Click Submit to start the workflow. The workflow can take a little while to be
completed.
Now that we have shown you how to set up Orchestrator properly, we want to explore its
more interesting aspects and demonstrate some real-life scenarios. We will do this by
running through the following examples:
• Running a script on a machine as part of the post-provisioning process.
• Integrating Puppet.
• Writing a workflow that provides an instance type dropdown list on a blueprint
request page.
In fact, this is a fairly common scenario, especially when the cloning mechanism is used to
provision a machine. Cloning, together with guest customization, is a powerful
mechanism, used to quickly deploy and integrate a new machine into an existing
environment. Nevertheless, cloning alone is no silver bullet. Fine-grained adaptation and
the installation of more advanced software cannot always be achieved with cloning. It is
therefore quite handy to be able to run a post-provisioning script after
the cloning process. We have already shown that such a thing can be implemented using
the vRealize Automation guest agent. Using the guest agent is not always feasible though.
The guest agent needs an SSL connection to the IaaS host in order to fetch the instruction
that allows the command to be executed. This can be quite difficult, or indeed impossible,
if the deployed machine is not within the same network as your vRealize Automation
infrastructure. Furthermore, the guest agent must be installed on the machine. That’s why
we show you how to implement this feature within Orchestrator instead. To do so, we use
a state change workflow – the MachineProvisioned workflow stub provided by vRealize
Automation – and configure it to execute an Orchestrator workflow that runs our script.
Of course, the script to be run should be placed within the virtual
machine (however, Orchestrator could dynamically copy the required script to the virtual
machine as well).
In the following, we will cover the steps of this implementation in detail. However, if
you want to move more quickly, you will also find the ready-made workflows on the
companion webpage.
The workflow takes some seconds to run. If everything is OK, a completed workflow
token will appear next to the workflow instance.
The rest of the configuration takes place within the vRealize Automation self-service
portal. We must navigate to the blueprint to which this workflow was assigned and
provide information about which script should be run and how to connect to the virtual
machine. This is done as follows:
1. Within the vRealize Automation self-service portal, navigate to Infrastructure >
Blueprints > Blueprints.
2. Click Edit on the blueprint for the state change workflow.
3. Go to the Properties tab (see Fig. 14-10). You will see some properties which have
been added by vRealize Automation Orchestrator.
4. First, modify the ExternalWFStubs.MachineProvisioned.vmUsername and
ExternalWFStubs.MachineProvisioned.vmPassword custom properties and
provide the credentials of the virtual machine (the password is stored in clear text;
encryption does not work here).
5. Change the ExternalWFStubs.MachineProvisioned.programPath property and
enter the complete path of your script.
6. For the ExternalWFStubs.MachineProvisioned.workingDirectory property,
provide the directory from which to start the program.
7. Leave the ExternalWFStubs.MachineProvisioned.vm field empty. This property
will be filled automatically during the workflow.
8. Press OK to save the changes.
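Collected as data, the properties from the steps above look as follows (a sketch in JavaScript object notation; all values are illustrative placeholders, not defaults):

```javascript
// The custom properties configured on the blueprint for the
// MachineProvisioned script run. Values are placeholders.
const machineProvisionedProps = {
  "ExternalWFStubs.MachineProvisioned.vmUsername": "administrator",
  "ExternalWFStubs.MachineProvisioned.vmPassword": "secret", // stored in clear text
  "ExternalWFStubs.MachineProvisioned.programPath": "C:\\scripts\\post-provision.cmd",
  "ExternalWFStubs.MachineProvisioned.workingDirectory": "C:\\scripts",
  // ExternalWFStubs.MachineProvisioned.vm is left empty; it is filled at runtime.
};
```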
As soon as you have saved the changes, any new virtual machine provisioned from that
blueprint will run the script specified in the properties section.
We have just shown how to run a script from Orchestrator once a machine has been
provisioned. This is fine for smaller customizations, but if we want to run different
scripts on our virtual machines and maintain such functionality in a bigger environment,
a lot of work is involved: we have to write, maintain and set up all these scripts.
Because this is not a feasible approach in large environments, tools like Puppet or Chef
exist that can help with the configuration management of your software.
Setting up Puppet involves several steps:
• First, a Puppet Server is needed. Puppet comes in two different flavors: Puppet
Enterprise or Puppet Open Source.
• A Puppet agent must be installed on the machine to be customized.
• The Puppet client must be authenticated to the Puppet server in order to
communicate with it and download code fragments.
While the whole procedure can be set up manually, it is easier to use the Orchestrator
Puppet plug-in to configure things. It is important to note that the plug-in is not part of the
standard Orchestrator instance and therefore we must download and install it first (it can
be found on the VMware Solution Exchange[9]).
Once downloaded, perform the following steps to install and configure the plug-in:
Now that the plug-in is installed, we can continue with its configuration. First, make sure
that you have a working Puppet master running in your environment. If you don’t have a
Puppet Master installed yet, you can download a Learning VM directly from the Puppet
website[10] (there is also a detailed description of how to set up the virtual machine and
which credentials to use).
Before you can use the automatic agent installation, you need to register the Puppet
master. There are some prerequisites to be fulfilled:
• Verify that Puppet Enterprise 3.7.0, Puppet Enterprise 3.3, Puppet Open Source
3.7.1, Puppet Open Source 3.6.2 or any other compatible version is installed.
• Verify that you can connect to the Puppet Master using SSH from the Orchestrator
server.
• Verify that the SSH daemon on the Puppet master allows multiple sessions. This is
controlled by the MaxSessions parameter in the configuration file
/etc/ssh/sshd_config, which must be set to at least MaxSessions 10.
Now we can perform the configuration of the plug-in:
1. Start the Orchestrator Java client and log in with your credentials.
2. Change to the Run mode and navigate to the Library > Puppet > Configuration
folder.
3. Right-click the Add a Puppet Master workflow and select Start workflow….
4. Provide the following input for the workflow and click Submit:
a. Puppet Master Name: How the name should appear in vRealize Orchestrator.
b. IP Address
c. Port: 22
d. Username
e. Password
5. Once the configuration has been completed, you can verify it by calling Validate a
Puppet Master workflow. If a completed token appears, everything has been
configured properly.
Finally, you can now use the workflows to install Puppet agents on your machines. There
are two possibilities: Install Linux Agent with SSH and Install Windows Agent with
PowerShell in the Node Management folder. For the Linux Agent, we obviously need
SSH and that shouldn’t be a problem. For the Windows Agent, however, PowerShell is
required. PowerShell was not included prior to Windows Server 2008, so in order to use
the workflows with Windows Server 2003, you must first install PowerShell.
Additionally, PowerShell does not allow remote access by default. Thus, it must be
activated on the servers using “Enable-PSRemoting”. The machine must also be a
member of an Active Directory domain. Further to this, if the server is not in the
same domain as the client (vRealize Orchestrator), you need to install a certificate on
every server and register it with PowerShell (please replace your IP address, hostname and
the certificate thumbprint):
We have already talked about the so-called blueprint sprawl, a situation in which there are
too many blueprints defined within vRealize Automation. Having so many blueprints
leads to an administrative overhead. This is especially true when maintaining blueprints.
Therefore, our goal is to limit the number of blueprints. This can be done if basic
hardware settings are not taken directly from the blueprint definition, but are instead
defined in custom properties and build profiles. This means we can use a single standard
blueprint, as all hardware variations are defined outside of it.
The first step within the implementation is to define the VM size. In our scenario, we
create four different VM sizes as depicted in the following table.
Name     CPUs   Memory (GB)   Disk
Micro    1      1             10 GB
Small    1      2             10 GB
Medium   2      4             30 GB
Large    4      8             100 GB
1. Go to the scriptable task within your workflow, hover over the element and click the
pencil to edit it.
2. Change to the OUT tab.
3. Click the Bind to workflow parameter/attribute button.
4. Within the Chooser… dialog box, select Create parameter/attribute in workflow.
5. Create the variable memory of type string and check the Create workflow
ATTRIBUTE with the same name checkbox.
6. Click OK.
7. Repeat steps 4-6 to create the cpu and disk variables.
if (size != "") {
    switch (size) {
        case "micro":
            memory = "1024";
            cpu = "1";
            disk = "10";
            break;
        // the cases for "small", "medium" and "large" follow the same
        // pattern, using the values from the sizing table
    }
}
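A runnable sketch of the complete mapping: the micro values come from the fragment above, while the memory values in MB for the other sizes are inferred from the sizing table (1, 2, 4 and 8 GB) and the function form is ours, not the scriptable task's literal code.

```javascript
// Maps a T-shirt size to the hardware settings from the sizing table.
function sizeToSpecs(size) {
  switch (size) {
    case "micro":  return { memory: "1024", cpu: "1", disk: "10" };
    case "small":  return { memory: "2048", cpu: "1", disk: "10" };
    case "medium": return { memory: "4096", cpu: "2", disk: "30" };
    case "large":  return { memory: "8192", cpu: "4", disk: "100" };
    default:       return null; // empty or unknown size: keep blueprint defaults
  }
}
```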
Later on, we want to assign the values of these output variables to the appropriate
custom properties. Firstly, however, we need to define the custom properties to be
overridden. We will create workflow attributes for this:
1. On the General section of the workflow, scroll down until you see the Attributes
section.
2. Create a new variable named propertyNameMemory of type string and assign the
value VirtualMachine.Memory.Size.
3. Create a new variable named propertyNameCpu of type string and assign the
value VirtualMachine.CPU.Count.
4. Create a new variable named propertyNameDisk of type string and assign the
value VirtualMachine.Disk1.Size.
5. Create a new variable named propertyIsHidden as a Checkbox and set it to No.
6. Create a new variable named propertyIsRuntime as a Checkbox and set it to No.
7. Create a new variable named propertyIsEncrypted as a Checkbox and set it to No.
8. Create a new variable named doNotUpdate as a Checkbox and set it to No.
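The attributes created in steps 2-8 boil down to three property names and four boolean flags; collected as data in a JavaScript sketch (the grouping into two objects is ours, for illustration):

```javascript
// The three custom-property names to override, from steps 2-4.
const propertyNames = {
  memory: "VirtualMachine.Memory.Size",
  cpu: "VirtualMachine.CPU.Count",
  disk: "VirtualMachine.Disk1.Size",
};
// The checkbox attributes from steps 5-8, all set to No (false).
const flags = {
  propertyIsHidden: false,
  propertyIsRuntime: false,
  propertyIsEncrypted: false,
  doNotUpdate: false,
};
```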
Next, we must use the output of the scriptable task to modify the appropriate custom
properties of the virtual machine to be deployed. Luckily, there are already workflow tasks
that can be used:
1. From within the All Workflows section, on the left-hand side of the screen, expand
the top-level node and navigate to Library > vCloud Automation Center >
Infrastructure Administration > Extensibility > Helpers. Then drag the
Create / update property on virtual machine entity element right after your scriptable
task, ensuring it is connected with an arc.
2. Name the workflow UpdateMemory.
3. Link the following workflow parameters, as input parameters, in order to update the
workflow as per the following table:
4. Repeat steps 1-3 for another workflow element. However, rename it to UpdateCPU.
Assign the same set of input parameters to the new workflow element, but modify
the input variables to use the cpu attribute for the PropertyValue input parameter and
the propertyNameCpu attribute for the propertyName input parameter.
5. Repeat steps 1-3 for a third workflow element. However, rename it to UpdateDisk.
Assign the same set of input parameters to the new workflow element, but modify
the input variables to use the disk attribute for the PropertyValue input parameter
and the propertyNameDisk attribute for the propertyName input parameter.
6. Save and close the workflow.
At this point, the implementation of our workflow is complete. However, we still need to
associate it with a blueprint. For that – once again – we have to look for the Assign a state
change workflow to a blueprint and its virtual machines workflow, start it and use the
MachineProvisioned workflow stub to enable it. For the End user workflow to run
input parameter, you must select your newly created workflow.
Back in vRealize Automation, you can test the new workflow. When requesting a new
machine from your blueprint, you should be able to see the workflow starting and running
in vRealize Orchestrator. If you look at the details of your new machine, you will not see
the new values, because vRealize Automation is not yet aware of them at this point in
time, but if you check the machine itself, you should see the updated values.
14.8 Summary
This chapter introduced the configuration and use of Orchestrator within vRealize
Automation. We learnt that there is a bi-directional communication between them.
Orchestrator can automate vRealize Automation and, in turn, vRealize Automation can
use Orchestrator to call a workflow. Furthermore, we introduced the most important
workflows from the vRealize Automation plug-in and showed how to implement a set of
use cases.
15. Advanced Service Designer
When building up a cloud, providing services to end users is a key factor. vRealize
Automation offers a single pane of glass where the users can provision these services. In
addition, vRealize Automation provides a number of other benefits:
• Firstly, you can provision any service or resource that you desire. You can have
services as you require them, with minimal delay. If you need more resources, you
can scale-out quickly.
• Another advantage is the ability to replicate quickly. Instead of building everything
manually, you can structure your solution as a series of scripts and applications. This
means you can deploy and rebuild as needed.
• You can also create and destroy with ease. Since you are provisioning on demand, it
is relatively easy for you and your users to build up a large set of servers. As these
are VMs, it is of course easy to destroy them when their services are no longer
required.
For the remainder of this chapter, we will dive into the basics and further customizations
of the ASD. First of all, we will demonstrate some common use scenarios for the ASD.
Then, we will explain the steps to configure the ASD. Finally, we will discuss examples of
real life applications implemented with the ASD.
It is quite easy to explain - at a conceptual level - what can be achieved through the ASD:
Bearing this knowledge in mind, we can find use case scenarios for the ASD. Essentially,
the ASD is a useful tool for all kinds of processes that can be automated and published
as a service within the self-service catalog. Examples include:
• User administration, e.g. the creation of new accounts, activating and deactivating
accounts, or resetting passwords.
• Email configuration, such as setting up a new mailbox.
• Storage provisioning – also called Storage as a Service.
• Network creation.
• Backup and recovery.
• Security and compliance processes.
• Installation of new software or updates.
In many cases, the examples given for the ASD seem to be quite trivial. However, when
you consider that many help desks spend most of their time with such tasks, you can better
appreciate the benefits of automating them. That does not mean you should automate
everything in a single leap; rather, you can automate processes as and when the need
arises.
The ASD is also very important to third-party vendors, as it allows them to integrate their
solutions into vRealize Automation. They need only deliver Orchestrator plug-ins, which
can then in turn be invoked from vRealize Automation. There are already examples of
such plug-ins that are well suited to integration within the ASD, for example the storage
workflows from EMC or the NetApp WFA Command package.
Before being able to configure the ASD, you have to make sure that you have the proper
permissions, i.e. you need the service architect role. This role allows you to manage
XaaS services. The configuration can then be done as follows:
1. Change to the Administration > Users & Groups > Identity Store Users &
Groups page.
2. On the right-hand side in the search box, type in the name of the user to which the
service architect role should be assigned.
3. Select the found user and click on View Details.
4. On the Details tab, please make sure that the Service Architect role is selected and
click Update.
5. Log out and log in again to vRealize Automation.
Once you have logged into vRealize Automation again, you should be able to see the
Advanced Services menu. Figure 15-1 depicts the Advanced Service Designer menu
within the graphical user interface.
Although we have already created endpoints for vRealize Orchestrator, we must configure
further endpoints for the Advanced Service Designer. The configuration takes place within
the Administration > Advanced Services menu.
Within the Server Configuration menu, you can choose to either use the default
Orchestrator server or switch to an external one. If you configure an external Orchestrator
server, please provide the following input:
• Name of the server (assigned by you)
• Description (optional)
• Host: the machine where vRealize Orchestrator is installed
• Port: Usually 8281
• Authentication: Basic (by default unless you configure Orchestrator otherwise)
• User name
• Password
The next step is to configure the endpoints for the ASD. ASD provides endpoints for the
following plug-ins:
• Active Directory
• HTTP-REST
• PowerShell
• SOAP
• vCenter Server
You have to configure the Active Directory endpoint if you plan to use Active Directory
within any of your XaaS workflows. Setting up this plug-in involves working through the
following steps:
Once you have finished the configuration, you can switch over to Orchestrator and check
if the configuration has been set up successfully. In Orchestrator, switch to the Run-mode
and change to the Library > VCAC > ASD > Endpoint Configuration > Microsoft >
Active Directory > Configuration folder. You should see a green workflow token next to
the Configure Active Directory server workflow.
Working with the ASD means creating and configuring custom resources, service
blueprints, resource mappings and resource actions. Once you have completed the
configuration of the ASD, you are ready to create your own service blueprints and publish
them into the service catalog.
From time to time, it is convenient not to have to develop everything from scratch. In
these cases, you can instead export ASD components from another environment and
import them into your new one. When exporting components from ASD, custom
resources, service blueprints, resource mappings and resource actions can be exported as
a zip file. However, please note that exported ASD components do not include
Orchestrator workflows (these can be exported from Orchestrator by creating a
.package file).
The exporting and importing procedure is essentially quite simple. Once you have logged
into vRealize Automation, as a service architect, perform the following steps:
1. Navigate to the Administration > Content Management > Export content page.
2. Select the components to export on the Service Blueprints, Resource Actions,
Custom Resources and Resource Mappings tabs.
3. Click Next.
4. From the Export Content tab, click Export Content to begin downloading your
selections.
Importing a package is also quite easy. However, when importing content from other
environments, there is always the risk of overwriting your existing ASD components. To
mitigate this, you are able to specify a prefix for the imported components. Hence, there
is less chance of a component being overridden, as similarly named components are
differentiated by this prefix. To import a package, follow
the steps as outlined:
1. Navigate to the Administration > Content Management > Import content page.
2. Select the Prefix only conflicting checkbox, if you want to add a prefix only if
there is any naming conflict (optional).
3. Enter a Prefix to add to imported components.
4. Click Browse… to upload the files to be imported. You can upload a .zip file for the
ASD components and a .package file for Orchestrator workflows.
5. Click Open.
6. Optionally, you can click on Validate to ensure you are not missing any vRealize
Orchestrator workflows required by the ASD components.
7. Click Import Content.
The import procedure will take a few moments to complete. Once finished, you will
be able to see all the imported components within the ASD menus and the imported
workflows within the Orchestrator instance.
Custom resources can be described as the result of an ASD workflow. For example, if
you trigger a workflow to create a new user account, you first have to define a ‘User’
custom resource, which holds the output of this workflow. The Orchestrator workflow is
triggered from the service blueprint. If there is any action performed on a user, for
example, you want to deactivate that user, you need to define a day-2 operation by setting
up a resource action.
If you already have some alternative IaaS deploying mechanism, for example via
PowerShell scripts, you can trigger these scripts from within Orchestrator. It is important
to note, however, that if you want to trigger such a deployment from vRealize
Automation, you first have to map the provisioned resource to a data type in vRealize
Automation. This can be done by means of resource mappings. vRealize Automation
already has a set of predefined mappings (see Fig. 15-3). These encompass the following
resources:
Now we can continue with creating a Service Blueprint. Perform the following steps:
The remaining steps should be well understood by now. After publishing a blueprint, there
is still a need to assign the appropriate permissions and choose how it should be displayed
within the service catalog. As we have already described the necessary steps in previous
chapters, we will skip this here.
Once again, after publishing your resource action, you must ensure that your users have
the right entitlements to invoke this action.
After you have finished the whole configuration, users can finally use the service catalog
to create new Active Directory users and also delete them. Provisioned Active Directory
users can be found on the Items > Active Directory page. You can click on a user object
and trigger the resource action (see Fig. 15-8).
15.5.5. Input validation
When creating the user interface for any kind of software, a certain amount of work has to
be spent on usability and input validation. This applies to vRealize Automation as well. If
there is any wrong input, the workflow should not be started – instead, users should be
prompted to correct their input before being able to trigger a workflow. We already
covered the Service Designer and showed you how to modify and add user controls. At
this point, we want to show how these user controls can be configured to validate the
input.
The first step in designing a user interface is always to choose the appropriate type for the
user controls. vRealize Automation already comes with a large set of user control types:
Constraints are used to limit input values. The following constraints are supported within
the ASD:
• If you have a custom workflow, it can be called from endpoints other than the ASD.
Therefore, it is certainly good practice to validate input within Orchestrator, as it is
the only way to ensure that a validation is performed.
• Orchestrator has more powerful ways of performing validation. This includes
regular expressions.
Regular expressions are very powerful when it comes to validation. We will show a couple
of examples of how Orchestrator might use expressions to validate input:
• ^[\w_-]+$ accepts letters, digits, underscore “_” and dash “-”. This regex might be
applicable for the validation of a username.
• ^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$ is a little bit
more complicated and can be used for email validation.
Writing regular expressions can be quite hard. Fortunately, in most cases, we do not have
to formulate expressions ourselves, but can find an example that can then be modified
according to our needs. Once you have written your expression, it is – of course – also
very important to test it before using it. Fortunately, there are websites like
www.regexr.com available for that.
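As an illustration, the two expressions above can be exercised in plain JavaScript (the same regex syntax Orchestrator's scripting engine uses); the helper function names are our own, not part of any product API:

```javascript
// Illustrative validation helpers; Orchestrator scriptable tasks use the
// same JavaScript regex syntax, but these function names are our own.
function isValidUsername(name) {
    // Letters, digits, underscore "_" and dash "-" only
    return /^[\w_-]+$/.test(name);
}

function isValidEmail(address) {
    // The (simplified) email pattern shown above
    return /^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$/.test(address);
}

console.log(isValidUsername("jdoe_01"));        // true
console.log(isValidUsername("j doe"));          // false (space not allowed)
console.log(isValidEmail("jdoe@example.com"));  // true
console.log(isValidEmail("not-an-email"));      // false
```

In a workflow, such a check would typically guard the scriptable task that performs the actual provisioning, so invalid input fails fast with a clear error.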
After having run through the basics of the ASD, it is now about time to demonstrate some
more use cases in practice:
So far, we have learnt how to provision machines based on IaaS blueprints. This is of
course fine, however, we have seen that modifying the user interface for requesting
machines can be quite tedious:
• First of all, classic blueprints only offer a limited set of user controls.
• Configuration of constraints and validations is painstaking.
• User controls can work together (as we have shown), but once again the
configuration is quite difficult.
• Last but not least, there is no such thing as a dynamic user control. This means, for
example, that values in a dropdown list can only be defined statically. It would be
more useful if these values could be generated at runtime, for example by doing a
lookup against a database.
In conclusion, we can state that there are plenty of reasons to use the ASD for deploying
machines. From a technical point of view, there is a workflow called Request a catalog
item, which is located in the Library > vCloud Automation Center > Infrastructure
Administration > Requests folder within Orchestrator.
Running the workflow shows us that there are two input parameters:
• The Catalog item parameter lets you choose any item from the catalog, i.e. an IaaS
blueprint in our scenario.
• The Input Value field accepts a list of composite generic key-value pairs (see Fig.
15-9).
However, there is also additional configuration work to be done, in both Orchestrator and
vRealize Automation, before we can begin with the implementation:
• Firstly, we have to ensure that the service account for the vRealize Automation plug-
in within Orchestrator is a member of the support group within the business group
related to the blueprint. This guarantees that we have all the permissions to run the
workflow appropriately.
• Secondly, the service account must be entitled to the basic blueprints (and XaaS
services if you want to call them instead of a blueprint).
The input parameters are fairly difficult to handle. This is due to the generic nature of the
composite key/value list and to the fact that all inputs are of type string. However, the
workflow helps in the following scenarios:
Before the Request a catalog item workflow can be called, it is important to know the list
of expected input parameters for a blueprint. Another important issue is the syntax of input
parameter naming within workflows: They must always start with the prefix “provider-“.
For example, if there is a parameter “username”, to follow the syntax correctly, the input
parameter must be named “provider-username”. For a typical blueprint, we need the
following input parameters:
• blueprintId (string)
• provisioningGroupId (string)
• Cafe.Shim.VirtualMachine.NumberOfInstances (integer)
• Cafe.Shim.VirtualMachine.TotalStorageSize (decimal)
• VirtualMachine.LeaseDays (integer)
• VirtualMachine.CPU.Count (integer)
• VirtualMachine.Memory.Size (integer)
• VirtualMachine.Disk0.Size (decimal)
• VirtualMachine.Disk0.IsClone (boolean)
The first step within the implementation should be to copy the ‘Request a catalog item’
workflow for safety reasons. Once the workflow has been copied, run through the
following steps:
1. Replace the “compositeTypeToProperties” action call with a custom script (see Fig.
15-10).
2. Hover over the custom script and click on the pencil to edit the custom script.
3. Define the following input variables for the script:
a. blueprintId (string)
b. provisioningGroupId (string)
c. numberOfInstances (vCO number)
d. totalStorageSize (vCO number)
e. leaseDays (vCO number)
f. cpuCount (vCO number)
g. memorySize (vCO number)
h. disk0Size (vCO number)
i. disk0IsClone (boolean)
4. Add the properties attribute as an output parameter to the custom script.
5. Type the following scripting code to the custom script:
properties = new Properties();
properties.put("provider-blueprintId", blueprintId);
properties.put("provider-provisioningGroupId", provisioningGroupId);
properties.put("provider-Cafe.Shim.VirtualMachine.NumberOfInstances", new
vCACCAFEIntegerLiteral(numberOfInstances).getValue());
properties.put("provider-Cafe.Shim.VirtualMachine.TotalStorageSize", new
vCACCAFEIntegerLiteral(totalStorageSize).getValue());
properties.put("provider-VirtualMachine.LeaseDays", new
vCACCAFEIntegerLiteral(leaseDays).getValue());
properties.put("provider-VirtualMachine.CPU.Count", new
vCACCAFEIntegerLiteral(cpuCount).getValue());
properties.put("provider-VirtualMachine.Memory.Size", new
vCACCAFEIntegerLiteral(memorySize).getValue());
properties.put("provider-VirtualMachine.Disk0.Size", new
vCACCAFEIntegerLiteral(disk0Size).getValue());
properties.put("provider-VirtualMachine.Disk0.IsClone", disk0IsClone ? "true" :
"false");
6. Validate and save the workflow.
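The mapping that the scriptable task performs can also be sketched outside Orchestrator in plain JavaScript, with a Map standing in for the Orchestrator Properties object (all names below are hypothetical illustrations, not product APIs):

```javascript
// Plain-JavaScript analogue of the scriptable task above: map friendly
// input names to the "provider-" prefixed keys the workflow expects.
// A Map stands in for Orchestrator's Properties object.
function buildRequestProperties(inputs) {
    var properties = new Map();
    properties.set("provider-blueprintId", inputs.blueprintId);
    properties.set("provider-provisioningGroupId", inputs.provisioningGroupId);
    // All values are passed to the workflow as strings
    properties.set("provider-VirtualMachine.CPU.Count", String(inputs.cpuCount));
    properties.set("provider-VirtualMachine.Memory.Size", String(inputs.memorySize));
    properties.set("provider-VirtualMachine.Disk0.IsClone",
                   inputs.disk0IsClone ? "true" : "false");
    return properties;
}

var props = buildRequestProperties({
    blueprintId: "bp-1", provisioningGroupId: "pg-1",
    cpuCount: 2, memorySize: 4096, disk0IsClone: true
});
console.log(props.get("provider-VirtualMachine.CPU.Count"));    // "2"
console.log(props.get("provider-VirtualMachine.Disk0.IsClone")); // "true"
```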
The next step is to create an appropriate service blueprint within the Advanced Services,
and to call your workflow from it. Please note that we are not working with custom
resources in this scenario, as custom resources do not have leases or costs associated with
them. Nevertheless, the deployed resource will appear within the provisioned items in the
self-service catalog, as it was technically deployed by means of an IaaS blueprint.
In summary, we can state that the main product deficiency of vRealize Automation when
deploying machines is the request form designer and its complicated setup of dynamic
form fields. By using the ASD as a frontend for the IaaS blueprint, we can circumvent
these problems.
15.7. Summary
This chapter introduced the ‘Advanced Service Designer’ and XaaS blueprints. After
having given examples of where the ASD can be used, we showed how to configure the
ASD. This involved some endpoint (amongst others, the endpoint for vSphere) and Active
Directory configuration. Once the configuration had been completed, we outlined the
basics of custom resources, service blueprints, custom actions and resource mappings.
One of the strengths of the ASD is its ability to build dynamic request forms. When
creating such forms, input validation is very important. This could be done within the
ASD or - alternatively - Orchestrator as well.
With that knowledge in mind, we implemented several use cases. First, we showed how to
provision and withdraw an Active Directory resource.
The second use case demonstrated how to call an IaaS blueprint from ASD (a good
alternative to the standard approach, when dynamic input fields are needed).
Finally, we introduced storage automation and its challenges. We also looked at the
technical means, with which it is possible to achieve such automation.
16. Financial Management
As well as having the ability to automate your infrastructure (and empower users to
provision their own services), it is also essential to keep track of ongoing costs. This
includes discovering the costs incurred in a datacenter, how they are allocated, and how to
map them to services. This allows for a better charging mechanism.
vRealize Automation includes a financial tool – vRealize Business Standard – that helps
with the aforementioned problems.
This chapter will introduce common challenges and problems for financial tools in
general. We will then shift the focus to vRealize Business. In doing so, we will
demonstrate how to deploy, configure and use, the vRealize Business tool.
Historically, IT has always tried to provide compute resources to its consumers. However,
these resources, and the way they were distributed, usually fell under the umbrella of the
IT department. This helped IT departments, as they could easily stay on top of resource
use and availability. In a traditional datacenter, the IT department controls how available
resources are assigned.
However, things are changing when a self-service portal, like vRealize Automation, is
introduced. Now users have control and the responsibility to request resources according
to their respective budgets.
The main challenge is that most companies do not have a chargeback system for resource
consumption. Financial tools, like vRealize Business, can help to enforce policies for
resource consumption automatically and provide cost transparency to the consumers. By
taking this route, companies move to a provider business model. The resources and
services provided can be a combination of both private and public cloud resources.
As a consequence, it can be stated that the ability to measure and charge for specific
services consumed is an important selling point of cloud computing. So far, we have only
dealt with the automation of resource provisioning. The automation of chargeback, or the
ability to show actual costs, has not yet been covered in this book.
No matter what kind of financial management tool you use, there is a basic set of
functionalities and related requirements, which must be met.
Firstly, it is crucial to have a good handle on the true costs of your IT services. There is a
lot more to it than hardware costs alone – instead, all costs must be captured in order to
calculate the total cost of ownership. These costs cover software, administration,
maintenance, space, power and cooling, amongst others. Only with a good understanding
of your costs is it possible to accurately charge for a given service.
Calculating the costs by yourself is a very tedious task, as there are many issues to be
taken into account. Fortunately, financial tools can assist here by providing a reference
database, providing specific values for different cost types. These tools usually come with
an underlying cost model (e.g. TCO or ROI).
Once the total cost of ownership is known, costs can be mapped to services. Prices can be
fixed (for example, you can set the price of a small VM at $5 or the price of a large VM
at $15) or be calculated based on resource consumption. In some cases, the latter option
is quite easy to implement – for example, when charging for storage consumption. In other
cases this might be more difficult. Financial tools usually provide different models for the
allocation of costs to services. Examples include: equal split/relationship-based,
percentage-based, property-based or weighted.
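The two pricing approaches can be sketched as follows; all rates are invented for illustration (only the $5/$15 figures come from the example above):

```javascript
// Sketch of fixed versus consumption-based pricing; the rates are
// illustrative, only the $5/$15 figures come from the example above.
var fixedPrices = { small: 5, large: 15 };  // fixed price per VM size

function fixedPrice(size) {
    return fixedPrices[size];
}

function consumptionPrice(storageGB, ratePerGB) {
    // Usage-based: the price grows with the resources actually consumed
    return storageGB * ratePerGB;
}

console.log(fixedPrice("small"));         // 5
console.log(consumptionPrice(10, 0.5));   // 5 (10 GB at $0.50/GB)
```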
Tools like vRealize Business also help automate this process. They know the total cost of
ownership and can ascribe the costs to machines based on resource consumption.
A financial tool should not only be able to calculate prices. With regard to services in your
datacenter, it should also assist end-users with comparing prices to those of other cloud
providers (Amazon, Microsoft or VMware). Such competitive benchmarking helps
highlight how efficient you are with your datacenter. Furthermore, when your datacenter is
running out of resources (for example, at the end of the year, when there is big demand for
compute resources), you might want to compare prices. This might lead you to move
workload to the public cloud, instead of acquiring new hardware.
In a traditional IT environment, consumers of resources are less likely to take care of their
ongoing costs of applications and services. This is because IT services are usually
centrally governed by the IT departments. However, when transforming your datacenter
into a more cloud-centric model, consumers become the drivers behind hardware and
software resource requests. Therefore, it is important to achieve cost transparency, so that
users always know how much their services cost and how much they have already spent.
This also helps to influence the consumption of individual users.
While it is important to show how much a resource or service costs, it is also crucial to be
able to generate reports of total costs over time. This report data can be grouped by
service, business groups or any other criteria. Frequently, these reports serve as input data
for billing and as such must be exportable in different formats (such as PDF or CSV). As
well as having the ability to export data, financial tools quite often provide some kind of
dashboard, in which to visualize the data.
While it is fairly easy to understand the need for such a financial cost tracking tool, the
global market for these is still very small. Gartner estimated the market value at
approximately $250 million in 2013[11]. However, there is yearly growth of around 20%.
VMware entered the financial tool market in 2011 and has since then become one of the
biggest vendors. The current product is called vRealize Business and is available in several
different editions. vRealize Automation is bundled with vRealize Business Standard
(however, there is a vRealize Business Advanced and a vRealize Business Enterprise
edition – and the two cannot be directly integrated with vRealize Automation, they need
vRealize Business Standard as an intermediary).
As mentioned above, there are three different editions of vRealize Business: Standard,
Advanced and Enterprise. The following table highlights the features of each edition:
Feature                                        Standard   Advanced¹   Enterprise²
Virtual Infrastructure Costing, Usage
Metering, and Consumption Reporting               X           X            X
IT Financial Management                                       X            X
Delivery Model:
  On-Premises                                     X           X            X
  SaaS                                                        X            X
License Model:
  Perpetual                                       X           X            X
  Subscription                                                X            X
¹ vRealize Suite Advanced or vRealize Business Advanced
² vRealize Suite Enterprise or vRealize Business Enterprise
The Standard edition only provides showback, usage metering and consumption reporting
within your virtual infrastructure. The Advanced edition provides the full range, including
IT costing, IT demand management, IT forecasting and planning, IT showback, IT
chargeback, what-if scenarios and IT cost optimization. It also supports the display,
comparison and what-if analysis of VMware IT benchmarking data (so that you can
model existing cost advantages and future cost savings). The Enterprise edition, aimed at
CIOs and IT executives, helps you gain transparency and complete control over all IT
costs, services and quality. In addition to the Business Management for Cloud and IT
Financial Management capabilities, its IT Service Level Management and Vendor
Management capabilities enable customers to set, track, report and analyze IT
performance and value measures for all their services, vendors and customers, as well as
perform root cause and business impact analysis.
In the remainder of this chapter, we will cover the features of the Standard edition. The
Advanced and the Enterprise editions are not covered by this book.
16.3.1. Manual cost calculation
Before we dig into the details of vRealize Business, we first want to show you how to set
up manual pricing in vRealize Automation. When setting up manual pricing, we have to
deal with cost profiles. But how can cost profiles influence the price of storage and virtual
machines? Cost profiles are associated with compute resources. Compute resources are in
turn associated with the fabric and hence fabric groups. Then, fabric administrators can
create reservations based on fabric groups. These reservations are in turn used for the
provision of virtual machines.
There are two kinds of cost profiles:
When creating a cost profile, you have to enter the exact costs for the compute and storage
resources. Please consider that cost profiles are based on daily costs.
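To make the daily-cost idea concrete, here is a small sketch; every rate below is invented for illustration and is not an actual vRealize Automation value:

```javascript
// Hypothetical cost profile; all daily rates are invented for illustration.
var costProfile = {
    cpuCostPerDay: 0.5,        // per vCPU
    memoryCostPerDayGB: 0.1,   // per GB of RAM
    storageCostPerDayGB: 0.01  // per GB of storage
};

// Daily machine cost: each resource amount multiplied by its daily rate
function dailyMachineCost(vm, profile) {
    return vm.cpuCount * profile.cpuCostPerDay +
           vm.memoryGB * profile.memoryCostPerDayGB +
           vm.storageGB * profile.storageCostPerDayGB;
}

var vm = { cpuCount: 2, memoryGB: 4, storageGB: 40 };
console.log(dailyMachineCost(vm, costProfile).toFixed(2)); // "1.80"
```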
Perform the following steps to create a cost profile:
The procedure for creating a Storage Cost Profile is fairly similar, so we will not address it
here in detail.
16.3.3. Assigning a cost profile
If you also need to assign a Storage cost profile, perform the following steps:
Once you have saved your changes and requested a new machine from the service catalog,
you will be able to see the daily costs for that machine.
Cost profiles are helpful when charging consumers for storage, CPU and memory. The
costs are calculated, on a daily basis, depending on how many resources they need.
However, there are of course many other costs than those for storage, CPU and memory.
For example, we might want to charge for license costs, electricity, administrative support
and so on. When working with manual cost calculation, there is no way to
configure these costs in a fine-grained fashion. Instead, we have to associate a fixed
amount with a blueprint. This amount will be added to the costs defined in the cost
profiles and shown in the service catalog as well.
As soon as you have saved your changes and requested a new machine from the service
catalog, you will be able to see that the blueprint cost is taken into account as well.
As with many other products in the VMware portfolio, vRealize Business Standard can
also be deployed as a virtual appliance. The appliance is based on SUSE Linux Enterprise
Server (SLES), a vPostgres SQL database and a Pivotal tc Server web server.
Once deployed, you must connect the appliance to vRealize Automation. vRealize
Business uses connectors to obtain data from your environment. Right now, there are
connectors for vCenter Server, vRealize Automation, vCloud Director and Amazon Web
Services. As stated before, vRealize Business comes with a cost model and its data is
stored in a reference database. In order to update this reference database, internet
connectivity is needed. Last but not least, vRealize Business Standard can be connected to
vRealize Business Advanced and Enterprise as well. Fig. 16-2 shows the architecture of
vRealize Business.
Figure 16-2 vRealize Business architecture
The appliance has the following minimum hardware requirements:
• 50 GB hard disk
• 4 GB RAM
• 2 vCPUs
The following steps must be taken in order to deploy and configure the vRealize Business
appliance:
The deployment process for the vRealize Business appliance is the same as for vRealize
Automation or the SSO appliance. We have already described this process in chapter 4, so
we will skip the explanation here.
When deploying the appliance, you must determine the currency to be used within
vRealize Business. You can choose from the following currencies:
• US Dollar (USD)
• Euro (EUR)
• British Pound (GBP)
• Australian Dollar (AUD)
• Canadian Dollar (CAD)
• Singapore Dollar (SGD)
• Japanese Yen (JPY)
• Indian Rupee (INR)
• Israeli Shekel (ILS)
Once the appliance has been deployed, you have to connect to it (URL:
https://<vrealize-business.domain.name>:5480).
In a second step, change to the vRealize Automation tab and provide the following
information:
Next, select the Accept vRealize Automation certificate checkbox and click Register.
It will take a short while to complete the process.
Next, go to the System tab, click Time Zone and change the settings if needed. Click
Save Settings. At this point in time, the configuration of the appliance is complete and we
can logout from the web management console.
It is recommended that you create a dedicated tenant for vRealize Business. However, it is
possible, while not recommended, to use an existing tenant. The steps needed to create a
new tenant have already been covered in chapter 5 ‘Design and Configuration of vRealize
Automation’. Therefore, they will not be addressed here in detail. Instead we will only
walk through the basic steps:
• Log in as a system administrator to the vRealize Automation default tenant
(https://<vrealize-automation-appliance.domain.name>/vcac).
• Create a new tenant.
• Add an identity store to the tenant.
• Assign tenant administrator and infrastructure privileges.
Once you have completed these steps, you can log off from the default tenant and log in
to your newly created tenant.
Configuring vRealize Business is relatively easy. The following steps must be followed:
1. Navigate to the Administration > User & Groups > Identity Store Users &
Groups page.
2. In the Search text box, enter the name of a user or group to whom you want to
assign vRealize Business permissions.
3. Click the magnifying glass icon.
4. Select a user from the list of items.
5. Assign either the Business Management Administrator or the Business Manager
Read only User role.
6. Click Update.
vRealize Business also requires a serial number. This can be added as follows:
The final step is to configure the connectors, the most important of which is the vCenter
Server connector; you can also set up an Amazon AWS or vCloud Director connector.
Perform the following steps:
Once the configuration of the vRealize Business appliance is complete, there is no longer
a need for manual pricing. If a compute resource is under the control of vRealize
Business, all cost pricing can be overwritten by vRealize Business. If you want to define
prices in vRealize Business, perform the following steps:
Figure 16-3
Once you have changed the settings, perform the following steps:
vRealize Business Standard is fully integrated into the vRealize Automation self-service
portal and adds new menus and pages to the web page:
• The Overview page shows a dashboard with financial data. This includes the total
cloud cost, an operational analysis and a demand analysis.
• The Cloud Cost page helps you to understand your costs, where they come from,
and gives insights into what kinds of costs you have within your datacenter.
• You can establish prices of your IT services on the Operational Analysis page.
• You can view and sort the list of consumers on the basis of different criteria on the
Demand Analysis page.
• The Cloud Comparison and the Public Cloud pages help you to compare your
costs to public cloud alternatives.
• The Reports page allows you to generate reports.
As stated above, the ‘Overview dashboard’ gives a brief summary of the total cloud cost,
the operational analysis and the demand analysis. Fig. 16-5 depicts this dashboard. The
total cloud cost can be roughly compared to the Total Cost of Ownership (TCO). The
TCO is a financial estimate, intended to help buyers and owners determine the direct and
indirect costs of a product or system. The ‘total cloud cost’ comprises several cost drivers,
the allocation of which is displayed within a pie chart. Costs are further split into CapEx
and OpEx. Capital expenditure (CapEx) refers to the funds used by a company when
acquiring or upgrading physical assets (for example, property, industrial buildings or
equipment). Operational expenditure (OpEx) is a category of expenditure a business
incurs as a result of performing its normal business operations.
Figure 16-5 Cloud overview
Very few companies have good insights into the true costs of their IT services. Normally,
they know how much money they spent on hardware, but have little information about
the other costs of their infrastructure.
Internally, vRealize Business uses a reference database to perform initial cost
calculations. The database includes industry benchmark values and covers many different
cost drivers, including hardware and software. vRealize Business calculates cost drivers
across a range of categories. These are depicted in the following table.
Server Hardware – Displays server costs by CPU age. The costs are calculated using
declining balance depreciation over the last three years.
Maintenance – Shows operating costs for both operating systems and hardware.
Facilities – Shows costs based on rent, real estate, cooling and power.
Other costs – Summarizes other costs such as high availability, management or VMware
licensing costs.
To calculate the exact costs, vRealize Business uses the double declining balance method
and allocates the costs over a five-year period in order to determine yearly depreciation
values. If there are two values, the higher one is used. Monthly costs for server hardware
are calculated by taking the yearly depreciation value and dividing it by 12.
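The depreciation logic described above can be sketched as follows. This is a simplified model: each year's value is the higher of the double-declining amount and straight-line depreciation on the remaining book value; the exact rules vRealize Business applies may differ.

```javascript
// Simplified double declining balance sketch: each year take the higher
// of the double-declining value and straight-line on the remaining
// book value, as described in the text above.
function yearlyDepreciation(purchasePrice, years) {
    var rate = 2 / years;              // double the straight-line rate
    var bookValue = purchasePrice;
    var result = [];
    for (var y = 0; y < years; y++) {
        var ddb = bookValue * rate;
        var straightLine = bookValue / (years - y);
        var dep = Math.max(ddb, straightLine);
        result.push(dep);
        bookValue -= dep;
    }
    return result;
}

var yearly = yearlyDepreciation(10000, 5);  // e.g. a $10,000 server
console.log(yearly[0]);                 // 4000 (first-year depreciation)
console.log((yearly[0] / 12).toFixed(2)); // "333.33" monthly hardware cost
```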
Fig. 16-6 shows how to view and modify cost drivers. Perform the following steps to
modify a cost driver:
1. Within the Cloud Cost menu, select Edit Cost.
2. Select the cost driver to be modified and click on the calculator icon.
3. Enter the new amount for the cost driver.
• If costs are based on the industry benchmark, a vertical band of orange color is
displayed next to the Edit icon.
• If costs are calculated according to a hardware configuration, the vertical band is
green.
• If you have entered costs manually, the vertical band is shown in blue color.
16.6.3. Operational analysis
Once we know the total costs of the infrastructure and the cost drivers they are comprised
of, we can translate these costs into prices that should be charged to the consumers of our
IT services. As discussed earlier, vRealize Automation supports a number of pricing
models that enable showback/chargeback of the services it delivers. Moreover, prices can
be associated with service blueprints and with different infrastructure resources (CPU,
memory, storage). This way, you can differentiate two services that have the same
resource configuration, but may have different software components. Thus, the price of
one service may be higher than that of the other.
The base rates are calculated on the basis of monthly operating costs, as determined by the
cost drivers. These base rates can in turn be associated with a virtual machine and thus
used to calculate the actual price. However, there are also direct costs, such as operating
licenses or labor costs, which are associated with a virtual machine.
Internally, when vRealize Business calculates the base rate, the following factors are taken
into account:
• Cost
• Capacity
• Expected utilization
• Unallocated costs
When calculating the expected utilization, data from the last three months is used.
Alternatively, you can configure a global value that is applied to all server clusters, or
define a percentage for each cluster separately. You can see how the base rate is calculated
by clicking on ‘Operational Analysis’ and then on ‘Edit Utilization’ (see Fig. 16-7).
If you want to change the value for a certain cluster, you need to proceed as follows:
1. Within the Set expected utilization of Host CPU and memory area, click on Set
per server cluster.
2. Enter the percentage amount for the CPU and memory.
16.6.4. Automatic base rate calculation
vRealize Business uses the following steps to calculate the base rate:
1. As a first step, the total costs will be calculated based on the cost drivers. After this,
the total costs will be split into CPU and memory (usually 80% for CPU and 20% for
memory – the exact values depend on the concrete hardware).
2. The next step is to calculate the CPU base rate. The CPU costs (calculated in step 1)
will be divided by the total available GHz within the cluster. Finally, the result will
be multiplied by the expected utilization within the cluster.
3. The memory base rate will be calculated the same way.
Once the base rates are calculated, vRealize Business can determine the unallocated costs,
too.
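The three steps above can be sketched in plain JavaScript; the 80/20 split and every input figure below are assumptions for illustration, not values taken from vRealize Business:

```javascript
// Illustrative base rate calculation following the steps above; the
// 80/20 split and all input figures are assumptions, not product values.
function baseRates(monthlyCost, clusterGHz, clusterMemoryGB, expectedUtilization) {
    var cpuCost = monthlyCost * 0.8;   // step 1: split total costs
    var memCost = monthlyCost * 0.2;
    // steps 2 and 3: divide by cluster capacity, then apply the
    // expected utilization
    return {
        cpuPerGHz: (cpuCost / clusterGHz) * expectedUtilization,
        memoryPerGB: (memCost / clusterMemoryGB) * expectedUtilization
    };
}

var rates = baseRates(10000, 200, 1024, 0.6);
console.log(rates.cpuPerGHz.toFixed(2));    // "24.00"
console.log(rates.memoryPerGB.toFixed(2));  // "1.17"
```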
If you want to find out who consumes your resources, what costs are incurred and how
resources are used, you can navigate to the Demand Analysis page. You can also display
and sort by the following criteria:
• Consumer
• Total costs
• Amount of virtual machines
• CPU costs
• Memory costs
• Storage costs
Furthermore, you can group the list using the different grouping hierarchies that are
available:
After having established prices for your IT services, you can compare them with those of
similarly configured offerings from cloud service providers such as Amazon, Microsoft
or VMware.
Such competitive benchmarking helps us to understand how efficient we are in
comparison to other providers. One of the difficulties of comparing prices between
different public cloud providers is that their offerings also differ from each other in terms
of hardware, availability or SLA. vRealize Business takes that burden off your shoulders
by constantly updating the price information from the reference database and providing a
calculation model (so that a price comparison is possible). This relieves companies from
creating complex spreadsheets and updating them with the most recent information from
the cloud providers.
The costs of the cloud service providers can be seen by clicking on the Cloud
Comparison menu within vRealize Business. In addition, the Cloud Comparison screen
helps you to perform the following tasks:
• Perform a What-if Analysis regarding the costs of moving an application from the
private to the public cloud.
• Model any new workload placements based on the projected utilization and costs in
the private and public cloud.
Fig. 16-9 shows the Cloud Comparison. If you hover over the currency value in one of the
widgets, you will see a popup window listing the cost drivers used for the cost
calculation.
If you want to compare the costs of your virtual machines with those in the public cloud,
you can do the following:
1. Click on the Compare existing VMs to Cloud link in the upper left corner.
2. Click on Import VMs.
3. Choose the tenant from which you want to import the machine.
4. Select the virtual machine to compare.
5. Click Done.
16.6.7 Reports
The Reports page helps generate showback reports and hence provides valuable insight
into your business. Showback data is quite important, as it can show each organizational
unit a summary of the resources that have been consumed, who has consumed them and
for how long.
Another important issue within cloud environments is charging. Most companies already
have a financial system in place. However, such a system needs to be fed with data.
vRealize Business can generate reports in different formats (PDF or CSV) that can be
used as input for these systems.
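To sketch what feeding such a system might look like, the following snippet creates a hypothetical showback export and aggregates the total cost per business group. The column names and figures are invented for illustration and do not reflect the product's actual export layout:

```shell
#!/bin/sh
# Hypothetical showback CSV (the real vRealize Business column layout may
# differ) plus a one-line aggregation a finance system might perform.
cat > showback.csv <<'EOF'
BusinessGroup,VMCount,CpuCost,MemoryCost,StorageCost
Finance,12,480.00,120.00,60.00
Engineering,30,1200.00,300.00,150.00
EOF

# Sum CPU + memory + storage cost per business group:
awk -F, 'NR > 1 { printf "%s,%.2f\n", $1, $3 + $4 + $5 }' showback.csv
```

For the invented figures above, this prints a total of 660.00 for Finance and 1650.00 for Engineering.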
When navigating to the reports page, you will notice that there are reports available for
the different systems (see Fig. 16-10):
• vCenter Server
• vCloud Director
• vRealize Automation
• Public cloud providers
When navigating to an individual VM’s screen, you can also visualize the OpEx costs,
such as labor and maintenance of that machine. If you want to export this data, click the
export button on the upper left-hand side of the screen.
16.7 Summary
Recently, DevOps techniques have become increasingly popular. The most important
parts of DevOps are automation and configuration management – both techniques that are
also part of vRealize Automation. Hence, many companies use VMware products as part
of the underlying technology when introducing DevOps in their enterprises.
Consequently, this book also covers DevOps techniques. We will first give a short
overview of DevOps and then demonstrate how vRealize Automation can be used for
DevOps.
17.1 Foundations of DevOps
DevOps principles are often summarized as the ‘Three Ways’:
• The First Way is to optimize the flow of work from development to IT operations.
• The Second Way is about shortening and amplifying the feedback loops.
• The Third Way says that we should encourage experimentation and learn rapidly
from failures.
These principles are similar to the ones known as CAMS (culture, automation,
measurement, and sharing), as discussed by John Willis and other DevOps leaders.
There are many tools to help introduce DevOps within an enterprise. These tools can be
categorized as follows:
• Ticket systems
• Server deployments
• Configuration management
• Continuous integration
• Continuous delivery
• Continuous deployment
• Log analysis
Usually, project managers introduce some kind of ticket system for task management.
Such a system helps to keep track of problems and also allows some kind of historical
analysis. Furthermore, such a system should help identify the root cause of bottlenecks in
a production environment. It can also give insight into data regarding all members of the
software project team.
As stated, software deployments should not be handled manually. At this point, vRealize
Automation enters the stage as a new player. However, there are other software programs
that could be used for automated server deployments, for example VMware Auto
Deploy for vSphere servers, Puppet Enterprise, Chef or Foreman. Recently, Docker has
also become quite popular. Essentially, Docker is an open-source project that helps to
automate the deployment of applications within software containers. We will cover
Docker later in this chapter as well.
Continuous integration is the practice of merging all developers’ work into a shared
mainline several times a day. One of the most popular software tools for continuous
integration is Jenkins. Jenkins is a tool used to build and test software projects
continuously, as well as to monitor their execution. By introducing such automation, bugs
are found and dealt with more easily.
Continuous deployment is the next step beyond continuous delivery: every change that
passes the automated tests is deployed to production automatically.
When deploying machines and software, you have to guarantee that your implemented
processes are working correctly. The best way to do this is to check your logs.
However, manual log checking is a tedious and error-prone task. As is the case when
deploying machines, automation can help, and there are already products available for
this: VMware offers Log Insight, there is also Splunk, and then there are open-source
tools such as Logstash (combined with Elasticsearch and Kibana).
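At its simplest, the kind of automated log check described above boils down to scanning logs for error events and failing the run if any are found. A minimal sketch – with freely invented log lines, nothing like the scale of Log Insight or Splunk – could look like this:

```shell
#!/bin/sh
# Minimal sketch of an automated log check (the sample log lines are
# invented; dedicated tools do this at scale with indexing and dashboards).
cat > deploy.log <<'EOF'
2015-07-01 10:00:01 INFO provisioning started
2015-07-01 10:00:05 ERROR could not reach DHCP server
2015-07-01 10:00:09 INFO retrying
2015-07-01 10:00:12 ERROR could not reach DHCP server
EOF

ERRORS=$(grep -c " ERROR " deploy.log)
echo "$ERRORS error(s) found"
# A deployment pipeline would fail the run if ERRORS is greater than zero.
```

The same pattern – parse, count, alert – is what the log analysis products automate continuously across many machines.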
Having covered the basics of DevOps and the tools used to establish it within an
enterprise, we now want to shift our focus back to vRealize Automation.
One of the greatest strengths of vRealize Automation is the user-friendly service catalog.
A large part of this book has covered the building and maintenance of the service catalog.
We have also talked about extending vRealize Automation. This can happen by
means of vRealize Orchestrator, via IP address management tools like Infoblox, with
configuration management tools such as Puppet, and with firewalls, load balancers or
multi-tier applications. In the following sections, we describe how such features can be
implemented in a vRealize Automation environment.
Application Services have so far not been covered by this book. However, they are also
bundled with the Enterprise edition of vRealize Automation.
When deploying services, we mostly addressed infrastructure services based on IaaS
blueprints. In most cases, however, developers require more than just infrastructure
services: they need ready-to-use multi-tier applications. Such a complex scenario could
consist of the following components:
• A load balancer
• One or more web servers, placed behind a load balancer
• An application server
• A database server
Such scenarios are easily deployed via vRealize Application Services. The Application
Services provide a graphical user interface that allows the easy creation of multi-tier
application stacks. This is done using an intuitive drag-and-drop palette to instantiate
components and define their relationships to each other. The Application Services are
integrated into vRealize Automation and allow the deployment of applications to a range
of cloud providers, such as vRealize Automation, vCloud Director or Amazon AWS. As
in vRealize Automation, such stacks have a name – ‘Application Blueprints’. An
Application Blueprint consists of the following parts:
• Cloud templates are virtual machines defined by the cloud provider – for example
an IaaS blueprint from vRealize Automation.
• Logical templates are the mapping of Cloud templates into Application Services.
For instance, a vRealize Automation IaaS blueprint or an Amazon EC2 AMI could
be mapped to a logical template.
• Services are ready-to-use software that can be added to logical templates to create
an application. For example, you could add an Apache Web Server to a CentOS
logical template.
• Tasks can help to run simple scripts and perform configuration changes and
installations.
• Application components represent a software artifact that could be developed by
the DevOps team and should be part of the deployment.
• Policies consist of user-defined sets of definitions and govern the lifecycle
operations of applications. For example, you can create a blacklist of applications
that are not allowed to be installed.
• Deployment profiles configure application deployments at runtime, for example
they decide how much memory can be used in a certain environment (e.g. test or
production).
When using Application Services, several operations are available to you at runtime:
• An application can be scaled out. For example, additional web servers can be placed
behind a load balancer.
• Applications can also be scaled in to release resources.
• An update of an application can be performed.
• An update of the application can be rolled back.
• The whole application can be torn down.
Puppet is a configuration management system that allows you to define the state of
your IT infrastructure and then automatically enforces that configuration. Whether
you’re managing just a few servers or thousands of physical and virtual machines, Puppet
automates tasks that system administrators would otherwise do manually.
Once you have installed a Puppet server, you are able to equip each node (physical
server, device or virtual machine) within the infrastructure with a Puppet agent. There
should also be a designated Puppet master running in the environment. At regular
intervals, enforcement takes place. This enforcement is made up of the following steps:
• The Puppet agent collects information about the node’s configuration and sends it to
the Puppet master.
• The Puppet master figures out how the node should look and sends the
information back to the node.
• The agent makes any change needed to enforce the node’s desired state.
• Once the changes have been applied, the Puppet agent sends a report back to the
Puppet master.
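The desired state that the master compiles for each node is declared in Puppet manifests. As a hedged illustration (not taken from this book's examples), a minimal manifest ensuring that the ntp package is installed and its service running might look like this; the agent enforces this state on every cycle, regardless of the node's current configuration:

```puppet
# Illustrative manifest: declare desired state, let the agent enforce it.
package { 'ntp':
  ensure => installed,
}
service { 'ntpd':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```

The declarative style is the point: you state what the node should look like, not the commands needed to get there.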
In 2013, Puppet Labs and VMware formed a strategic partnership, having already been
working together for over a year. VMware invested over $30 million in Puppet Labs in
order to jointly deliver, market and sell products for their customers (however, Puppet also
supports other cloud platforms, among them Amazon AWS, Cisco, OpenStack, Microsoft
Azure, Eucalyptus, Rightscale, and Zenoss). Puppet products integrate with VMware
products in several ways.
One of the most recent VMware products is vRealize Code Stream. If you take a look at
the workflows packaged within a default Orchestrator instance (of vRealize Automation),
you will see that there are a couple of workflows which obviously do not interact with
vRealize Automation. These belong to vRealize Code Stream. vRealize Code Stream is
technically bundled with vRealize Automation. However, you need to purchase it
separately and then unlock its functionality within the graphical user interface by entering
a serial number.
vRealize Code Stream helps teams that have introduced continuous delivery in their
company to become more productive. Basically, the product helps developers with the
following features:
• It automates the different tasks required to provision, deploy, test, monitor and
decommission software for a specific release.
• It helps to assure standardized configurations by coordinating the artifacts and
processes across each release delivery stage.
• Governance is provided as well. This includes control across the end-to-end process
in the delivery pipeline.
• Existing tools can be integrated and used.
It is important to realize that vRealize Code Stream does not replace existing software
development lifecycle tools and processes; rather, it tries to leverage and work with the
existing tools. This is done by means of an orchestration engine, which is of course
vRealize Orchestrator.
The architecture of vRealize Code Stream includes an integration framework that allows
it to interact with a range of third-party software tools.
17.5 Docker
VMware supports running containers on vSphere. There are two projects: Project
Photon and Project Lightwave. Both work together to run Linux containers and provide
additional features for DevOps application architectures. Let’s introduce the two projects:
• Project Photon is a Linux container host runtime environment for vSphere. Besides
the Docker container format, it supports Rocket (rkt) and Garden as well. Project
Photon has a small footprint and includes a yum-compatible, package-based lifecycle
management system. It runs in environments such as VMware Fusion, VMware
Workstation, vSphere, vCloud Air and Google Compute Engine.
• Project Lightwave adds additional enterprise features and can be used in
combination with Project Photon. Firstly, it provides multi-tenancy across
infrastructure and application stacks and across all stages of the application
development lifecycle. Secondly, it supports additional security features as well
as an authentication and authorization mechanism.
Project Photon can be integrated into vRealize Automation. In order to achieve this, there
are a couple of steps that must be completed first:
• Firstly, the latest Photon ISO must be downloaded. This ISO can be downloaded
from Github[13], where detailed instructions[14] on how to do this are available.
• As there is no support for guest customization in the ISO, a shell script must be
placed within the OS. It is then triggered either by the Guest Agent or vRealize
Orchestrator.
• Make sure that the Photon virtual machine can access the public Docker repository.
If there is no connection to the internet, you need to set up your own private Docker
registry.
We will not demonstrate the integration of Project Photon within the vSphere
environment, as this is already covered by the instructions available on Github. We will
therefore only demonstrate how to integrate the Photon VM into vRealize Automation.
No further customization of the Photon VM is necessary. The template already
comes with Docker installed and is preconfigured for the Docker registry.
Firstly, we need to find a mechanism that assigns an IP address to a newly created
virtual machine. As guest customizations are not supported, we have to do this manually.
This could be done via the Guest Agent or vRealize Orchestrator. As we have already
shown both approaches, we will only sketch out the necessary steps to accomplish the
Photon integration by means of vRealize Orchestrator.
As a prerequisite, we assume that the virtual machine is deployed within a network
where a DHCP server is available. If this is not the case, you could use a network profile
to assign the network settings. However, if we want to quickly provide a hostname via
vRealize Automation, we must place a script in this virtual machine that vRealize
Automation can trigger.
So boot into the machine and place the following script within it:
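The original listing is not reproduced in this extract. A minimal sketch of such a customization script could look like the following; everything here is an assumption (the hostname arriving as the first argument, the use of hostnamectl on Photon), and the sketch only echoes the command instead of executing it, so it stays side-effect free:

```shell
#!/bin/sh
# customizeos.sh -- hedged sketch; the book's original listing is not
# reproduced in this extract. Sets the hostname handed over as $1.
set_hostname() {
  NEW_NAME="$1"
  if [ -z "$NEW_NAME" ]; then
    echo "usage: customizeos.sh <hostname>" >&2
    return 1
  fi
  # On the real Photon VM this would run:
  #   hostnamectl set-hostname "$NEW_NAME"
  # The sketch only echoes the command so it can be tried safely anywhere.
  echo "hostnamectl set-hostname $NEW_NAME"
}

set_hostname "${1:-photon-vm01}"
```

In a real deployment you would replace the echo with the actual hostnamectl call (or write to /etc/hostname) and let the Guest Agent or Orchestrator pass the desired name in.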
Save the script as customizeos.sh and make it executable with the command chmod +x customizeos.sh.
Next, shutdown the machine from your vSphere Client and create a snapshot of the
machine called ‘base’.
Now log in to vRealize Automation and locate the template.
After the template has been located, a blueprint must be created and published to the
service catalog. Once again, we have already shown how this can be accomplished and
therefore it will not be discussed here in further detail.
Once this task has been accomplished, we can focus on vRealize Orchestrator. Continue
with the following steps:
1. Import the Assign workflow to a blueprint and Run program in guest workflow
into Orchestrator.
2. Run the Assign workflow to a blueprint workflow, select the
MachineProvisioned template and use the Run program in guest as the end user
workflow to run.
3. In the properties section of the Photon blueprint, assign values for the vmUsername,
vmPassword, programPath and workingDirectory properties and save your changes.
Once you have completed the configuration, you should be able to provision a Photon
machine from the service catalog. With the Orchestrator script executed, the hostname should
also have been changed. You can check this by opening an SSH connection to the newly
provisioned machine.
To run Docker, start the Docker daemon from the command prompt, for example with
systemctl start docker. To test Docker, you can start an Nginx web server from the
Docker Hub, for example with docker run -d -p 80:80 nginx.
17.7 Summary
Recently, VMware has shifted its focus to DevOps as well. While there are many different
DevOps tools on the market, few of them are integrated with vRealize Automation out-of-
the-box. Consequently, customers are forced to spend a lot of time on pipeline automation.
With the release of Code Stream, VMware now offers a solution that helps integrate these
different DevOps tools and thus allows the creation of a fully automated release pipeline.
When introducing DevOps techniques to an organization, vRealize Automation can also
help out. This chapter covered Puppet, which can be used for both configuration
management and automating vSphere environments. vRealize Application Services can
close the gap between developers and administrators, by providing a tighter coupling.
Furthermore, VMware puts a lot of effort into promoting the Solution Exchange – the
marketplace of integrated technologies. In this way, VMware tries to play an important
role in the DevOps market.
18. DNS, DHCP and IP Address Management Tools
The networking landscapes of modern datacenters are rapidly evolving, due to
trends and technologies in the areas of security, virtualization and automation. Setting up a
Cloud Management Platform (CMP) like vRealize Automation can accelerate this
development yet further.
Managing such environments can be quite cumbersome, in terms of DNS, DHCP and IP
address management (DDI), especially when there are no appropriate tools available. For
private cloud projects to succeed, the underlying network fabric needs to be able to
support automation, too. Traditionally, network teams have relied heavily on manual
scripting and configuration methods. These methods must be eliminated in the long run
before private cloud solutions can reach maturity.
Typically, CMPs already include rudimentary IP address management (IPAM)
capabilities. However, many organizations require more robust capabilities. That’s reason
enough to provide more information on this topic.
After that, we will show how to integrate the best-known IPAM solution – Infoblox – into a
vRealize Automation environment.
The main reason for investing in a DDI solution like Infoblox is to improve the
manageability of IP addresses. Essentially, such a solution increases operational efficiency
and agility by providing the following advantages:
1. First, as the IP address management and its underlying workflows are improved,
faster and more accurate provisioning of DNS/DHCP services is possible. As a
consequence, service delivery to the end user can become faster.
2. Nowadays, end users may use more than one IP address, because they work with
mobile devices or provision virtual machines into a cloud environment. Having a
dedicated IP address management solution eases the administration.
3. Configuring IP addresses, DNS or DHCP settings can be quite cumbersome and
difficult for less experienced IT administrators. Hence, providing an easy-to-use
graphical user interface will make it possible to delegate such work to those
administrators.
4. A centralized tool further increases visibility and allows easy reporting of any
IP address assignment.
5. A DDI solution usually comes with an API, and in the case of Infoblox there is even
a VMware Orchestrator plug-in that allows the automation of DNS, DHCP and IP
address management tasks.
6. There is support for IPv6.
7. While the software-defined datacenter is becoming increasingly important in the
market, DDI vendors are already working on support for such solutions. Infoblox –
for example – is working on integrating their solution into VMware NSX.
Infoblox extends these built-in capabilities by providing its own appliance and
integrating it into vRealize Automation by means of an Orchestrator plug-in. During
provisioning, vRealize Automation can call the plug-in’s workflows to assign an IP address. Once
a machine is deleted, the Infoblox plug-in also automates the process of de-allocating an
IP address and removes its DNS host record. Fig. 18-1 depicts the integration of the
Infoblox lifecycle in a virtual machine’s lifecycle.
Fundamentally, the Infoblox plug-in provides support for vSphere, vCloud Director and
vRealize Automation environments. As our book only covers vRealize Automation, we
will not address the others.
The Infoblox IPAM Plug-In allows both fixed IP address allocation and address
allocation from DHCP ranges. When you use the Infoblox IPAM Plug-In to allocate IP
addresses to virtual machines (VMs), it automatically forwards a DNS request to your
Infoblox IPAM server, i.e. a NIOS or vNIOS appliance. NIOS creates a complete host
record in its database; this enables the VMs to be located through their FQDNs. This
information is also replicated in VMware platforms such as vRealize Automation or
vCloud Director.
To integrate Infoblox into your environment, the following prerequisites must be fulfilled:
• Ensure that you have a running vRealize Automation installation with vRealize
Orchestrator integrated.
• Deploy the Infoblox NIOS appliance.
• Install and configure the Infoblox Orchestrator plug-in.
If you want to evaluate Infoblox, there is a special NIOS appliance called vNIOS Trial.
This provides the complete suite of DNS, DHCP and IPAM functionality offered by the
standard Infoblox NIOS appliance. In addition, it provides real-time dashboards for IP
monitoring, allows managing Microsoft DHCP and DNS services (hosted on Microsoft
servers) and offers further functionality.
The plug-in, on the other hand, integrates IP address allocation capabilities into vRealize
Automation. Starting with release 3.0.0, the plug-in also supports cloud network
automation. This means that the plug-in can work as an adapter to provision port IP
addresses, subnets or networks in private, public or hybrid cloud computing networks.
The deployment of the appliance is fairly straightforward; once it has been downloaded,
you can find a quick-start guide on the company’s webpage[15].
When the Infoblox plug-in is integrated into vRealize Automation, vRealize Automation
invokes the plug-in’s vCO workflows, which either allocate an IP address and a DNS
record for a new VM, or delete them for a removed VM. Fig. 18-2 illustrates the
architecture of the Infoblox plug-in.
When deploying the plug-in, it is crucial that you first verify the compatibility of the plug-
in with your environment. The current IPAM plug-in works with the following versions
(please check for updated versions):
• NIOS 7.0.2
• ESXi 5.5.0
• vCenter 5.5.0
• vRealize Orchestrator 5.5.1, 5.5.2, 6.0.1 (standalone), 6.0.2 (embedded)
• vRealize Automation 6.0.1, 6.1, 6.2.0
When deploying the plug-in, you have to perform the following procedures in the order
listed below:
To ensure interoperability of vCenter Orchestrator with the Infoblox plug-in for VMware,
you must first import valid SSL certificates from the NIOS appliance and the vCAC
Infrastructure Administration host (a Windows computer with the IaaS Service installed).
Perform the following steps to import an SSL certificate into vCenter Orchestrator:
1. On the VMware vCenter Orchestrator Configuration page, click the Network tab.
2. In the right panel, click the SSL Trust Manager tab.
3. Under Import from URL, enter the IP address or, under Import from file, select
the certificate file for the NIOS appliance or IaaS host.
4. Click Import, and then click Import again to confirm.
The new SSL certificate appears in the SSL Trust Manager page.
1. Unzip the plug-in archive file into a folder on your management system.
2. Log in to the VMware vCenter Orchestrator Configuration page using a Web
browser.
3. Click the Plug-ins tab.
4. In the right panel, under Install new plug-in, click the Plug-in file field.
5. In the file upload dialog, select All Files, select the .dar file (o11nplugin-ipam.dar)
for plug-in version 2.4.2, and click Open.
6. Click Upload and install. The Infoblox IPAM plug-in for VMware tab appears in
the Orchestrator Configuration page.
7. If the Infoblox IPAM Plug-In for VMware check box is not selected under Enabled
plug-ins installation status, select it and click Apply Changes.
8. On the Startup Options tab, click Restart service and, if necessary, click Restart
the vCO configuration server.
9. Click the Plug-ins tab and make sure that the text “Installation OK” is visible to the
right of the IPAM plug-in. If not, restart vCO until the “Installation OK” message is
visible before you continue with the IPAM plug-in configuration.
After you have installed the Infoblox IPAM Plug-In for VMware and imported the SSL
certificate from NIOS, you need to configure a connection to your Infoblox appliance. You
can add a number of connections to different NIOS servers, or grids, and indicate the
default one. You can then edit or delete the added Infoblox IPAM connections.
Perform the following steps to configure an Infoblox connection:
1. On the VMware vCenter Orchestrator Configuration page, click the Infoblox IPAM
2.4.2 tab.
2. Click New Connection.
3. Provide the following input:
a. Infoblox IPAM Host Name: Enter the IP address or the hostname of the
Infoblox appliance.
b. Infoblox IPAM User Name: Enter the Cloud API user name for the
appliance.
c. Infoblox IPAM Password: Enter the Cloud API Password.
d. Default Network View: Optionally, enter the network view that will be used
as the default in the workflows.
e. Default DNS View: Optionally, enter the DNS view that will be used as the
default in the workflows.
4. Click Apply changes.
5. On the Startup Options tab, click Restart service and, if necessary, Restart the
vCO configuration server.
You must install the vCO proptoolkit package before running any workflows from the
vCAC package:
• com.vmware.pso.vcac.proptoolkit.package
These packages are located in the \2.4.2\vcac folder of the provided .zip archive. To install
them, import them using the vCO client.
After a short moment, the vCO client updates to show the new package and its description
in the General tab.
The plug-in installation should then be validated before you proceed.
Once the Infoblox plug-in has successfully been installed, we can install the vCO
customization wrapper. This workflow allows vRealize Automation to call Infoblox
during the different stages of a machine’s lifecycle:
• Building - the stage when IP addressing is reserved in IPAM and passed back into
vRA during the initial provisioning.
• Provisioned - once the machine is built, this calls the workflow “Update MAC
address for vCAC VM wrapper”, that grabs the as-built MAC address from the VM
(nic0) in order to populate Infoblox with this detail.
• Disposing – when the machine is destroyed, this calls out to “Remove Host Record
or A/PTR/CNAME/Fixed address/Reservation of vCAC VM wrapper”. In essence,
this removes the entries made by the previous workflows.
In order to install these hooks, the Install vCO customization wrapper workflow must
be invoked:
1. Navigate to the IPAM > vCAC > Configuration folder, select the workflow and
choose Start workflow.
2. In the Common Parameters page, click the The vCloud Automation Center host
instance field.
3. In the Select Host window, click the top vCAC Infrastructure Administration
entry in the left pane’s hierarchical list. The list of current vCAC VM hosts appears.
4. Select the desired host and click Select.
5. Click Submit.
The next step is to apply an Infoblox configuration to a blueprint. Infoblox uses build
profiles to store a specific network configuration. Once again, we use Orchestrator
workflows to create these build profiles. There are different workflows available (see Fig.
18-4):
• The workflow Create build profile for reserve an IP for vCAC VM is used if you
want to define a static IP (you might have obtained this IP from another source such
as a network profile).
• The workflow Create build profile for reserve an IP for vCAC VM in Network
is used when the blueprint should claim an IP address within a given subnet range.
• The workflow Create build profile for reserve an IP for vCAC VM in Range
refers to a DHCP range defined in IPAM (important: The IP address assignment is
nevertheless static, DHCP is not used).
Once one of the workflows has been invoked, a build profile is created. This
can be attached to a blueprint in order to prepare it for interaction with Infoblox during its
lifecycle. The build profile is shown in Fig. 18-4.
Before a machine can be provisioned, we should first customize the build profile
according to our needs. The values for the build profile can be entered by the user at the
point of request, but it makes more sense to hard-code some of them (for example DNS,
network and CIDR). However, when configuring these properties, please keep in mind
that they have a great impact on how vRealize Automation and Infoblox interact. There
are different approaches for the integration of both components:
Figure 18-4 Build Profile for Reserve an IP for vCAC VM
• The first approach (using the Create build profile for reserve an IP for vCAC VM
workflow) is to keep the existing network profiles with all their properties and only
use Infoblox to register IPs and DNS names. This is the easiest approach and in
many cases companies will take this route. You have to set up a network profile in
advance, but once that is done there is not much left to worry about.
• The second approach (network approach) uses the Infoblox built-in IPAM system to
obtain the IP addresses. There are different methods for finding the right network in
Infoblox and they require different sets of custom properties:
o You can specify the network by IP address and CIDR and have to specify the
Infoblox.IPAM.netaddr and Infoblox.IPAM.cidr custom properties.
o You can also search in Infoblox for a network with the following custom
properties: Infoblox.IPAM.searchEa1Name, Infoblox.IPAM.searchEa1Value,
Infoblox.IPAM.searchEa1Comparison, Infoblox.IPAM.searchEa10Name,
Infoblox.IPAM.searchEa10Value and Infoblox.IPAM.searchEa10Comparison.
• The last approach (using a range) requires configuration of the following custom
properties:
o Infoblox.IPAM.startAddress
o Infoblox.IPAM.endAddress
Once this has been done, we need to attach the build profile to the blueprints we want
Infoblox to be integrated with. We are then ready to request a new VM.
18.9 Summary
This chapter emphasized the need for a professional IPAM system and showed how the
Infoblox plug-in can be installed and configured. Once again, Orchestrator is doing a lot
of the work. However, there is still some work to be done in order to integrate Infoblox
into vRealize Automation and automate the IP address and DNS configuration.
Index
A
Active Directory 54-55, 85-86, 116-117, 251-253
- Active Directory certificate service 66
- Active Directory plugin configuration 271-272
Advanced Service Designer 267
Amazon Web Services 17, 30-33, 91, 296, 319
Anything as a Service (XaaS) 221, 246, 267
Application Services 24, 32-33, 45, 314-317
Approval 170, 191
Approval policy 17, 169-172
B
Backup 214-218, 269
Backup and recovery 214-218, 269
Blueprints 109
- virtual blueprint 118
- cloud blueprint 129
- physical blueprint 136
Branding 75, 82-83
Build profile 116, 189, 196
Business group 97-99
Business group roles 107
C
Catalog item 165, 168-172, 283
CDK 226
Certificates 64-66, 68-77
- CA-signed certificates 64
- certificate signing request 69
- SSL certificate 55, 330
- certificate template 68
Cisco UCS 17, 25, 89-91, 136-137
Citrix XenServer 17, 25, 118
Clone 119-122, 120
Cloud costs 301
Cloud Development Kit 226
CloudClient 209-212, 210
Compute resources 89, 92, 99, 104, 115, 122, 161, 186, 193, 289, 293, 300
Cost profile 293-294
Currency 295-297, 306
Custom property 189, 193
Custom resource 273-275
D
Data collection 93-94
DDI 325-326
Dell iDRAC 17, 25, 136-138
DEM 43-44, 217
DEM Orchestrator 43
DEM worker 44
Demand analysis 305
DevOps 311
Distributed Execution Manager 29, 60
Docker 318-321
E
Email settings 83
Endpoints 77, 88, 90-93, 132-137, 247-249, 270-272
- vSphere endpoint 92
- Amazon Web Services endpoint 93
- NSX endpoint 157
- Orchestrator endpoint 248
Entitlement 52, 166, 168, 169, 170, 279
External network profile 148
External Provisioning Integration 30
F
Fabric admin 95-99, 197-199
Fabric group 95-97
G
Group manager 97-99, 175-177
Guest Agent 138-140
H
HP iLO 17, 25, 89-91, 136-137
Hyper-V 17, 25, 89-91, 118-119
I
IaaS 12, 18, 25-26, 29, 42, 59-65, 73-78
IaaS admin 87-88, 95, 106-107
IaaS web server 43
Identity Appliance 31, 53
Importing machines 160
Infoblox 326
Infoblox NIOS appliance 329-331
Infrastructure admin 84, 254, 255-256, 261-263, 283, 330-333
IPAM 330-335
K
KVM 12, 17, 89-91, 118-119
L
LDAP 42, 46, 85-86, 252, 272
Licensing 116-118, 292
Linked clone 120-122
LINQPad 229
Linux kickstart 123
Logging 15, 78, 86, 124, 135, 232, 237, 241, 249
M
Machine leases 111
Machine prefix 96-98
Management Agents 30
Manager Service 29, 43, 60
Microsoft SCCM 119, 126-127, 137-138
Model Manager 27-29, 75-77, 217
Multimachine blueprint 145, 155
Multitenancy 17
N
NetApp Flexclone 122
Network profile 104, 145
Notification 98, 174
NSX 35, 92, 146, 151
O
OpenStack 25, 129, 132
Operational analysis 300-304, 302
Orchestrator Appliance 247
P
PaaS 12-13, 18, 24, 32
PostgreSQL 25, 42, 296
Private network 105, 149
Project Photon 319
Property attribute 199-202
Property definition 199-202, 201
Property dictionary 20, 197-206, 221, 225, 261
Proxy Agent 40, 44
Puppet 31, 139, 255, 258-260, 313-319
R
Reclamation 181
Red Hat KVM 17, 25
Reservation 99
Reservation policy 114, 126-129
Resource action 268, 278
Resource mapping 268, 270, 273, 285
Routed network 105, 146, 149, 151
S
SaaS 13, 292
Service blueprint 155, 268, 273, 276
Service Catalog 18, 165
SMTP (Simple Mail Transfer Protocol) 29, 42
Software-Defined Data Center 16-17
SSL 31, 53-58, 72, 217, 250, 330
SSO 23, 42, 54-59, 215, 297
Storage cost profile 75-76, 86, 293-294
Storage reservation policy 104
T
Tenant 51, 84-87, 106-107, 211, 298
V
vCenter 30-33, 44
vCenter Orchestrator (vCO) 25, 138, 192, 221, 224, 245-250, 330
VDI 30
VDI Agent 30
vCloud 134-135
vCloud Director 134
Virtual Desktop Integration 30
vPostgres 25, 40, 296
vRA Appliance 42, 48, 56-64
vRealize Application Services 40, 45, 314
vRealize Business 24, 31, 40, 212, 292
vRealize Designer 222, 229
vRealize Orchestrator 33, 40, 65, 157, 212-213, 245
W
WIM Provisioning 127
Windows Management Instrumentation (WMI) 30, 93, 137
X
XaaS 12, 18, 268
[1]
Source: http://www.vmware.com/products/vrealize-automation/compare.html
[2]
http://blogs.vmware.com/PowerCLI/2014/12/vrealize-automation-vra-6-2-pre-req-automation-script-formerly-vcac.html
[3]
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2107816
[4]
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074803
[5]
http://pubs.vmware.com/vra-62/index.jsp?topic=%2Fcom.vmware.vra.custom.props.doc%2FGUID-9A925449-3DC8-4BA8-91A5-DF7E1191097B.html
[6]
http://www.virtualizationteam.com/cloud/vcac-6-property-dictionary-relationship-builder.html
[7]
https://developercenter.vmware.com/tool/cloudclient/3.2.0
[8]
LINQPad can be purchased, but there is also a cost-free version for download at http://www.linqpad.net
[9]
https://solutionexchange.vmware.com/store/products/vrealize-orchestrator-vro-puppet-plugin
[10]
https://puppetlabs.com/download-learning-vm
[11]
http://www.gartner.com/technology/reprints.do?id=1-212F7AL&ct=140909&st=sb
[12]
https://msdn.microsoft.com/en-us/library/ee825488(v=cs.20).aspx
[13]
http://vmware.github.io/photon
[14]
https://vmware.github.io/photon/assets/files/getting_started_with_photon_on_vsphere.pdf
[15]
https://www.infoblox.com/sites/infobloxcom/files/resources/vnios-trial-quick-start-guide_1.pdf
[16]
https://www.infoblox.com/downloads/software/vmware-vcenter-orchestrator-plug-in