
VMware

vRealize Automation Handbook

Dr. Guido Soeldner directs the Division of Cloud Automation and Software Development at Soeldner Consult GmbH in
Nuremberg. His company focuses on virtualization infrastructure from VMware, Citrix and Microsoft, as well as
specialized cloud technologies from Amazon, Pivotal and Microsoft. He is also a VMware Certified Instructor, an
Amazon AWS Instructor and a SpringSource/Pivotal Trainer.

Jens-Henrik Soeldner is head of the Business Infrastructure Division at Soeldner Consult GmbH in Nuremberg. He has
been awarded VMware vExpert status annually since 2013. He holds various vendor certifications, such as VMware
Certified Mentor Instructor and EMC Instructor, and has been a Microsoft Trainer for many years.

Constantin Soeldner directs the Business and IT Consulting Division at Soeldner Consult GmbH in Nuremberg. Among
other qualifications, he is authorized as a VMware Certified Instructor and an AWS Instructor.

www.soeldner-consult.de

Guido Soeldner, Jens-Henrik Soeldner, Constantin Soeldner

VMware vRealize Automation Handbook


Implementing Cloud Management in the Enterprise Environment
VMware vRealize Automation Handbook

Authors: Guido Soeldner, Jens-Henrik Soeldner and Constantin Soeldner


Creative Designer and Editor: Douglas Bowen

Copyright © Soeldner Consult GmbH


Allersberger Strasse 185
90461 Nürnberg
Bavaria
Germany

Printed by CreateSpace, An Amazon.com Company


Also Available on Kindle

All rights reserved. No part of this book may be reproduced or transmitted in any form or
by any means, electronic or mechanical, including photocopying, recording, or by any
information storage and retrieval system, without written permission from the authors,
except for the inclusion of brief quotations in a review.

Cover Image © Shutterstock 2015

ISBN-13: 978-1515264330
ISBN-10: 1515264335

Warning and Disclaimer

Limit of Liability/Disclaimer of Warranty: The publisher and the authors make no representations or warranties with respect to the accuracy or
completeness of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No
warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every
situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If
professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the authors shall be
liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or potential source of further
information does not mean that the authors or the publisher endorses the information the organization or Web site may provide or recommendations it
may make. Further, readers should be aware that internet Web sites listed in this work may have changed or disappeared between when this work was
written and when it is read.

Trademark Acknowledgments

VMware: vSphere, vCenter, vRealize Automation, vRealize Orchestrator, vCenter Orchestrator, NSX, vCloud and vRealize Business are all registered
trademarks of VMware, Inc. All other trademarks are the property of their respective owners. Soeldner Consult GmbH is not associated with any
product or vendor mentioned in this book.

Preface

Cloud computing has rapidly become a main focus of the IT industry and, as a result, a
topic of vital importance to many companies. They need to decide how to deal with this
subject in the future and to what extent they will introduce cloud technologies. In doing
so, they will of course face many challenges. Big companies will seek a solution that
helps them better manage resources in the private, public, or hybrid cloud. Service
providers require a product that supports their customers in managing hosted services
more efficiently. Finally, IT departments are looking for a solution that automates their
internal processes and empowers them to be cloud-ready (IT as a Service).
VMware’s vRealize Automation can help with all of these challenges. As a central
component of VMware’s Software-Defined Data Center (SDDC) strategy, it assists
companies that are introducing cloud computing to their operations.
This book gives a summary of the products, where they fit and how they can be used,
and also covers in depth all the technical aspects you need to get up and running with
VMware vRealize Automation.
Table of Contents
Cloud Computing
What is cloud computing?
Cloud service models
Deployment models in the cloud
Elements of cloud computing
Advantages of cloud computing

Overview of vRealize Automation


Features of vRealize Automation
Extensibility

About this book


Architecture and vRealize Automation Product Overview
Architectural overview
The components in detail
vRealize Automation Appliance
vRealize Infrastructure as a Service (IaaS)
Identity Appliance
vRealize Business Standard Edition
vRealize Orchestrator
NSX

vRealize Automation Licensing


Summary
Design and Deployment
Physical design
Host support for IaaS components
vRealize Automation Appliances

High-availability and scalability considerations


SSO component
vRealize Automation Appliance
vRealize IaaS components
IaaS web server
Manager Service
DEM Orchestrator
DEM Worker
Proxy Agents
MS SQL Server
vRealize Automation Business Appliance
vRealize Application Services

Deployment Architecture
Summary
Basic Installation
Overview of installation and configuration steps
Installation
Installation prerequisites and considerations
Deployment and configuration of the Identity Appliance
Deployment and configuration of the vRA Appliance

Installation of IaaS components


Installation considerations and prerequisites
IaaS Manager Service
Distributed Execution Manager
Automated installation of Prerequisites
Installation with CA-signed certificates
Considerations before changing certificates

Configuration of the Certification Authority


Creation of a vRealize Automation certificate template
Creation and configuration of vRealize Appliance certificates
Creation of a Certificate Signing Request
Creation of certificates
Convert the certificate to the PEM format
Upload the certificates
Creating and uploading certificates for the IaaS components
Convert the certificate into the PXF format
Upload the certificate to the IIS Server
Replacing certificates
Updating of the Identity Appliance certificate
Update the vRealize Automation Appliance with the new certificate
Update the IaaS servers with the new certificate
Updating of the vRealize Automation Appliance certificate
Updating the IaaS certificate
Replacement at the IIS server
Registration of the new certificate with the vRealize Automation Appliance
Using the vRealize Certificate Generation tool

Troubleshooting installation issues


Problem with the login process
Problems with the vCAC website portal
Verify the services

Summary
Design and Configuration of vRealize Automation
Basic vRealize Automation configuration
Tenant as an organizational unit
System administrator privileges
Creating and configuring a tenant

General settings
Identity stores
Uploading a license
Adding endpoints
Background: Data collection
AWS Endpoint
Creating and configuring fabric groups
Defining business groups
Creating reservations
Storage Policies
Configuring Network Profiles

The vRealize Automation role concept


Summary
Blueprints
Introduction to blueprints
Blueprints – basic settings
Virtual Blueprints
Basic workflow
Clone and Linked Clone
NetApp FlexClone
Linux kickstart

Cloud Blueprints
Provisioning with Amazon AWS
Defining a blueprint
Provisioning with OpenStack
Provisioning with vCloud Air and vCloud Director
Comparison of Amazon AWS with vCloud Air
Preparing for vCloud Air

Physical blueprints
Integrating the vRealize Guest Agent
Installing the guest agent on Windows
Installing the guest agent on Linux
Executing scripts with the guest agent

Summary
Network Profiles and Multimachine Blueprints
Basics of network profiles
Creating external network profiles
Private network profiles
Routed network profiles
NAT network profiles

Introduction to multimachine blueprints


Comparison with vCloud Director vApp
Multimachine blueprint preparations
Configuring a transport zone
Creating an endpoint
Setup of network profiles
Configuring reservations
Creating a multimachine blueprint

Importing machines to vRealize Automation


Bulk import

Summary
Working with the Service Catalog
Configuring the service catalog
Creating services
Managing catalog items
Creating entitlements and assigning permissions

Approval processes
Specifying approval policy information
Creating one or more approval levels
Configuring an approval form

Using the service catalog


Configuring notifications
Requesting resources
Monitoring requests
Approving requests
Managing virtual machines
vRealize Operation integration
Releasing a machine

Summary
Reclamations
Reclamation workflow overview
Identifying unused machines in vRealize Automation

Capacity reports
Summary
Custom Properties and Build Profiles
Custom properties basics
Machine lifecycle
Request phase
Approval phase
Provisioning phase
Post approval phase
Manage phase
Retire phase

Custom Properties
Order of custom properties
Custom property categories
Read-only custom properties
Internal custom properties
External custom properties
Updated custom properties
Configuration of custom properties

Build profiles
Create build profiles

Property dictionary
Using the property dictionary
Create a property definition
Configure property attributes
Create the parent property definition
Create the child property definition with a relationship attribute
Write a Value Expression
Formatting the XML and upload to vRealize Automation
Add the properties to a build profile or blueprint
Configure property layout (optional)

Create your own custom properties


Summary
Advanced Administration
Using the vRealize CloudClient CLI
CloudClient functionalities
Using CloudClient

Monitoring vRealize Automation


Backup and recovery of vRealize Automation
Database backup
Identity appliance or SSO appliance
vRealize Automation appliance
IaaS components
Agents and DEMs
Certificates

Summary
Extensibility Overview
Extensibility Options and Tools
Extensibility with vRealize Designer and VMware Orchestrator
VMware vCenter Orchestrator
Advanced Service Designer
Cloud Development Kit
Summary
Working with vRealize Automation Designer
The vRealize Automation IaaS model
Background: LINQ

vRealize Designer
Use case: Invoke a PowerShell script as part of the provisioning process.
Implementing the workflow
Background: Workflows in vRealize Designer
Background: How to activate workflows
Additional Workflow activities

Summary
vRealize Orchestrator
Introduction to vRealize Orchestrator
vRealize Orchestrator configuration
Start Orchestrator Appliance
Create Orchestrator Endpoints
Installation of the Orchestrator client
Background: Adding additional user for the Orchestrator client
Orchestrator configuration
Import SSL certificates
Configure the vRealize Automation plug-ins
Orchestrator Use Cases

Use Case 1: Run a script on a machine after installation


Use Case 2: Integrate Puppet
Use Case 3: Write a workflow
Additional plug-ins and workflows

Summary
Advanced Service Designer
XaaS use cases
Advanced Service Designer configuration
Role assignment
Endpoint configuration

Configuration of the Active Directory plug-in


Configuration of the vCenter Server endpoint
Working with Advanced Service Designer
Exporting and importing ASD components
Create Custom Resources
Create a Service Blueprint
Define resource actions
Input validation
Default fields
Constraints
Input validation with Orchestrator

Advanced Service Designer use cases


Deploy a machine from ASD

Summary
Financial Management
Overview on financial management
Basic features of financial management tools
Understanding your costs
Establishing prices for services
Comparing costs
Showback costs
Providing reports

Features and licensing of vRealize Business


Manual cost calculating
Creating a cost profile
Assigning a cost profile
Assigning additional costs
Changing the currency

Architecture of vRealize Business Standard


Installation and configuration
Prerequisites
Deployment and configuration of the vRealize Business Appliance
Downloading and deploying the appliance
Configuring the appliance and connecting it to vRealize Automation
Creating a tenant for vRealize Business
Configuring vRealize Business

Using vRealize Business


Cloud overview
Cloud Costs
Operational analysis
Automatic base rate calculation
Demand analysis
Cloud Comparison
Reports

Summary
DevOps and vRealize Automation
Foundations of DevOps
DevOps Tools
Ticket systems
Server deployments
Configuration management
Continuous integration
Continuous delivery
Continuous deployment
Log analysis

vRealize Automation and DevOps


Deploying and automating multi-tier applications with vRealize Application Services
Puppet integration

vRealize Code Stream


Docker
Project Photon
Summary
DNS, DHCP and IP Address Management Tools
Overview of DDI tools
Infoblox and vRealize Automation
Deploying Infoblox
Deploying the Infoblox NIOS appliance
Installing and configuring the Infoblox Orchestrator plug-in

Importing SSL certificates


Installing the Infoblox IPAM Plug-In for VMware
Set up an Infoblox IPAM connection
Install the external VMware package
Working with the Infoblox Orchestrator workflows
Summary
Index
1. Cloud Computing

This chapter aims to give a brief introduction to cloud computing. We will explain the
basics of cloud computing, show its advantages and discuss the different deployment
models available. We will also address the different service models in the cloud.
From a technical perspective, cloud computing never comes alone – instead, it is based
on multiple technologies, among them virtualization and automation.

1.1 What is cloud computing?

Cloud computing is seen as one of the most important trends in the IT industry. It
allows application software to be operated using a wide variety of internet-enabled
devices. The word ‘cloud’ acts as a metaphor for an abstraction layer that hides the
complexity of the underlying infrastructure.

1.1.1. Cloud service models


In the last few years, many service offerings have been created within the cloud. In order
to better categorize them, the National Institute of Standards and Technology (NIST)
defined different service models, which have been widely adopted in the
industry:

• IaaS – Infrastructure as a service
• PaaS – Platform as a service
• SaaS – Software as a service

Besides these three offerings, there is another service model that can be encountered from
time to time:

• XaaS – Everything as a service

Let’s briefly describe these service models.

IaaS – Infrastructure as a service

IaaS is the most basic cloud service model. Its basic function is to provide compute
capabilities – i.e. CPU, memory and storage resources. At the heart of IaaS is
virtualization – a hypervisor such as VMware ESXi, Xen, KVM or Hyper-V runs virtual
machines as guests. This allows the parallel use of multiple guest virtual machines on the
same base hardware. Importantly, it also provides isolation between these machines.
However, installing large numbers of such virtual machines manually would take too long
– so that’s where automation comes into play. Besides the aforementioned features, cloud
providers also offer raw block storage, object storage, firewalls, load balancers, IP
addresses or virtual local area networks as IaaS resources.
The most famous IaaS providers within the public cloud are Amazon with its EC2
instances, Google Compute Engine and Microsoft Azure.
Companies aiming to introduce cloud computing need some degree of virtualization
within their enterprise. Once that is in place, they can use vRealize Automation or
OpenStack as a cloud management platform.

PaaS – Platform as a service


In the PaaS model, cloud providers typically provision higher-level components and
runtime environments such as web servers, databases or deployment tools. The PaaS takes
care of the entire lifecycle, so there is no need to worry about installing or uninstalling
any software manually. Essentially, users are not exposed to the complexity
of the underlying infrastructure. Because of the many different offerings on the market, it
is useful to further categorize PaaS systems:

• Application PaaS (aPaaS) helps developers to rapidly develop applications for the
cloud. Such aPaaS systems usually support different programming languages and
frameworks. They can also take care of deploying applications and managing them
at runtime. Popular aPaaS systems are AWS Elastic Beanstalk, Pivotal
Cloud Foundry, CloudBees, Google App Engine, Heroku and Red Hat OpenShift.
• Mobile PaaS (mPaaS) aims at providing deployment capabilities for mobile
applications.
• Open PaaS (oPaaS) does not include hosting, but offers capabilities to run
applications in other environments.
SaaS – Software as a service

Another popular service model within cloud computing is SaaS, in which users have
direct access to applications. In the SaaS model, the cloud provider manages the whole
infrastructure as well as the application. SaaS applications are usually priced on a pay-per-
use basis or via a subscription fee. There are many popular SaaS offerings, for example
Salesforce CRM or Microsoft Office 365.
XaaS – Everything as a service

The XaaS model is mostly used in private clouds. Any service that can be
provisioned in an automated way can be published within the private cloud and thus used
as an XaaS service within the enterprise.

1.1.2. Deployment models in the cloud

There are several different deployment models within the cloud:

• Public cloud computing is certainly the most mature and popular deployment model
of the cloud. End users from companies that have not yet made their internal IT
‘cloud-ready’ can easily use service offerings from existing cloud providers such as
Amazon AWS, Microsoft Azure or Google – to mention but a few. The biggest
advantages of using public cloud computing are its flexibility and the absence of up-
front investment requirements.

• Companies can also build their own private cloud. Setting up your own private
cloud involves a significant level of engagement and means re-evaluating and
redesigning the internal IT environment. In order to make such a transition more
feasible, there are cloud management platforms such as VMware vRealize
Automation or OpenStack on the market.
• Hybrid clouds represent a combination of both approaches. The hybrid cloud can be
either a combination of different public cloud service providers or an extension of
the private cloud by a public cloud. VMware offers vCloud Air and targets
customers that need a certain level of temporary capacity.
• Multicloud computing is a recent trend, in which services from multiple cloud
providers are used in order to reduce reliance on a single vendor and to mitigate the
effects of disasters.

1.1.3. Elements of cloud computing

During the last few years, there has been a significant increase in service offerings from
cloud providers. Several years ago, cloud computing was limited to the most basic
infrastructure services. Today, cloud offerings encompass nearly the whole set of services
and applications of traditional datacenters. While the range of available services is still
increasing, they can be categorized at a basic level into core infrastructure services and
advanced platform services.

Infrastructure services usually involve:

• Virtual machine deployment
• Creation of virtual private networks (VPNs)
• DNS servers
• Load balancers
• Block storage
• Object storage
• Relational databases
• Non-relational databases
• Backup offerings
• Queuing services

Advanced service offerings include or allow you to build the following:

• Configuration management tools


• Monitoring
• Deployment tools
• Logging
• Distributed workflows
• DevOps
• Mobile computing
• Collaboration
• Email
• Office programs
• Search providers

1.1.4. Advantages of cloud computing

Cloud computing has become extremely popular within recent years – and for good
reason. In this section, we will briefly enumerate and explain the most important
advantages of cloud computing.
The number-one reason companies move to the cloud is certainly agility. Traditional
processes within companies tend to be very slow – especially when something new has to
be built. With cloud computing, on the other hand, users can request resources in minutes
instead of days, weeks or longer. From a service consumer perspective, there is no
expensive hardware to be ordered, installed or configured. Consumers only specify their
hardware requirements – i.e. how much memory, storage and computing capacity they
need – and request them from the cloud provider. If the requirements change, consumers
can easily request additional resources or release them.
Another advantage is that there is no need to forecast your capacity. Consumers can scale
up to meet the needs of their workloads. As an example, imagine a web page that
experiences spikes and higher load during the Christmas season. A monitoring system
could constantly check the incoming traffic of a load balancer and – if there is a
significant rise – add additional web server instances behind the load balancer. As soon as
traffic decreases, resources can be released again. This behavior is known as scaling out
and scaling in.
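The following minimal Python sketch illustrates the scale-out/scale-in control loop just described. The monitor and pool objects and their methods are hypothetical placeholders for your monitoring system and load balancer API; they are not part of any specific cloud product:

    import time

    SCALE_OUT_THRESHOLD = 1000   # requests/second above which we add an instance
    SCALE_IN_THRESHOLD = 200     # requests/second below which we release one
    MIN_INSTANCES, MAX_INSTANCES = 2, 10

    def autoscale(monitor, pool):
        """Naive autoscaling loop for web server instances behind a load balancer."""
        while True:
            rate = monitor.current_request_rate()    # hypothetical monitoring call
            if rate > SCALE_OUT_THRESHOLD and pool.size() < MAX_INSTANCES:
                pool.add_instance()                   # scale out under high load
            elif rate < SCALE_IN_THRESHOLD and pool.size() > MIN_INSTANCES:
                pool.remove_instance()                # scale in, release capacity
            time.sleep(60)                            # re-evaluate once per minute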
From a financial point of view, there are advantages for the consumer as well. Due to the
ability to scale in or out, money only needs to be spent when resources are actually in
use. Consequently, consumers only pay for the infrastructure they need. In addition, there
is no upfront investment for consumers.
Of course, running a cloud requires building up new knowledge. However, as cloud
services can easily be centralized, there is no need for cloud consumers to acquire all that
knowledge as well – instead they only need to concentrate on using those cloud services.
Another advantage of the cloud stems from the distribution of datacenters. Depending on
the location of your datacenters (many public cloud providers act globally), you can also
bring benefits to your end users. Imagine a backend for a mobile application that is
deployed in datacenters located in different regions of the world. End users will
experience higher performance and lower latency when using such an application.

1.2 Overview of vRealize Automation

VMware, a leader in virtualization, also has a long history in cloud computing. The
company has bundled its cloud products within the vRealize Suite (formerly known as the
vCloud Suite – the rebranding took place in 2014). Companies can use the vRealize Suite
to implement their own private cloud. VMware is also pushing forward its own cloud
strategy – the Software-Defined Data Center, which involves private, public and hybrid
cloud computing.
Managing the cloud requires a cloud management platform, and VMware vRealize
Automation acts as such a platform. Originally, VMware had another product as its
cloud flagship – vCloud Director – but between 2012 and 2013, VMware realized that
vCloud Director was not capable of meeting all the demands of private cloud computing
in the enterprise. Therefore, VMware acquired DynamicOps in 2012 and rebranded its
cloud management product first as vCloud Automation Center and later as vRealize
Automation. vCloud Director still exists, but can only be used by service providers and
not by regular enterprise customers. It provides basic cloud management platform
capabilities, but focuses on a VMware-only stack.

1.2.1. Features of vRealize Automation


Explaining this strategy shift to customers – that vCloud Director would no longer be
VMware’s cloud management platform – was not easy: for the existing customer base,
which had already invested a lot of money and time in vCloud Director, it was a hard pill
to swallow. However, a closer look reveals that the strategy made sense, as vCloud
Director had too many shortcomings to become a strong player in the cloud management
market.
vCloud Director is very good at deploying virtual machines and networks in a VMware
environment. However, extending vCloud Director is a painful and labor-intensive job.
Also, crucially, vCloud Director did not support non-VMware environments.
Multivendor and multiplatform provisioning

As mentioned before, vRealize Automation represents a core product within the Software-
Defined Data Center, being able to centrally manage all resources in an enterprise cloud.
As opposed to its predecessor, vRealize Automation is able to manage heterogeneous
virtual and cloud environments. In concrete terms, this means that besides VMware’s
vSphere there is also support for Microsoft Hyper-V, Red Hat KVM and Citrix XenServer.
vRealize Automation can provision physical systems as well, if they support Dell iDRAC,
HP iLO or Cisco UCS Manager. Of course, virtual machines can also be deployed to cloud
service providers – vCloud Director, OpenStack and Amazon Web Services are all
supported.
Self-service portal and policies

In order to request resources, end users log in to the self-service portal and request
services from a service catalog.
Behind the scenes, vRealize Automation manages the lifecycle of any requested
resource. This begins with approval policies that can be applied to any service published
to the service catalog. Users also need the appropriate permissions to request a service
from the catalog. In addition, there are means to determine where a machine is deployed.
For example, it is possible to define different service categories (e.g. gold, silver or bronze
hardware) and create a mapping between a service and the hardware to which such a
service should be deployed.
Network virtualization

While in most cases provisioning virtual machines is enough, there are plenty of examples
where networks need to be created dynamically. This is especially true when multi-tier
applications need to be deployed. Imagine a multi-tier application consisting of a web
server, an application server and a database. Such applications should often be deployed to
a dedicated network that is created on the fly. Those networks include components such as
firewalls, load balancers, routers or security groups, and these can all be created with
VMware NSX. vRealize Automation can interact with NSX to dynamically create NSX
components as part of any deployment process.
Multitenancy

Another important feature of a cloud management platform is multitenancy. You can use a
vRealize Automation instance to host and run the virtual machines of different customers
while isolating them from each other.
Service catalog

End users can log in to the service portal and use the service catalog to request and
provision services from within vRealize Automation. Administrators can set up fine-
grained permissions to control access to the service catalog. The user interface of the
service catalog is inspired by the look and feel of an app store.
IaaS, PaaS and XaaS provisioning

As mentioned before, vRealize Automation acts as a cloud management platform and
supports a variety of hypervisors, physical servers and cloud service providers for the
provisioning of machines. Besides IaaS provisioning, PaaS and XaaS services are
supported too.
The XaaS services are tightly coupled to VMware’s orchestration engine – the vRealize
Orchestrator. By means of XaaS services, it is possible to publish any existing
Orchestrator workflow to the service catalog and hence make it available to end users.
Orchestrator already provides a lot of built-in workflows out of the box; by means of the
Advanced Service Designer, they can be published in the vRealize Automation service
catalog. The Advanced Service Designer also helps to build a graphical front end for the
service request form.
Financial management

Besides being able to automate your infrastructure and empower users to provision
services according to their needs, it is essential to keep track of ongoing costs. This
includes finding out which costs are incurred in your datacenter, how they are allocated
and how to map them to services for which end users can be charged.
To accomplish that, vRealize Automation includes a financial tool – vRealize Business
Standard – that helps with the aforementioned problems.
1.2.2. Extensibility

In most projects, sooner or later, a point will come when not all of the requirements can be
covered by means of the administrative graphical user interface. The same holds true for
vRealize Automation.
The main tool for extending vRealize Automation is the vRealize Orchestrator. The
Orchestrator constitutes a workflow engine that is tightly integrated with vRealize
Automation. It allows customizing vRealize Automation IaaS machines throughout their
whole lifecycle. In addition, Orchestrator can also help to automate vRealize Automation
tasks.

1.3 About this book

In our experience, having proper cloud management software is a key requirement for
building and maintaining a private or hybrid cloud. While many companies develop their
own set of programs and tools for managing the cloud, sooner or later they end up with a
lot of work just to keep their software up to date.
This book covers all aspects of vRealize Automation – the cloud management platform
from VMware. It offers the background knowledge as well as detailed instructions and
hands-on examples of real use cases.
Target audience

This book is intended for system administrators with experience using VMware’s vSphere
hypervisor as well as for developers who want to extend vRealize Automation and
vRealize Orchestrator. Consultants and architects will also benefit from reading the book
by gaining an in-depth understanding of vRealize Automation.
What this book covers

The topics of this book are usually introduced by a high-level overview and a description
of the particular challenges involved. The book then proceeds to discuss how to
implement the various topics.

Chapter 1 covers a discussion of Cloud Computing concepts, including what this term
means, what challenges exist in Cloud Computing and how Cloud Computing differs from
traditional IT.

Chapter 2 introduces vRealize Automation and gives a high-level overview of the
different components in vRealize Automation.

Chapter 3 gives detailed information about the components that form vRealize
Automation and discusses how the product can be deployed in small, medium and large
environments.

Chapter 4 covers the installation of the products, explaining each installation step in
detail.

Chapter 5 explains the most important concepts of vRealize Automation and shows how to
configure the system.

Chapter 6 explores blueprints. The chapter will introduce virtual, cloud and physical
blueprints and show how they interact with deployment techniques in order to provision
machines.

Chapter 7 discusses network profiles and multimachine blueprints. Together, both features
help administrators to dynamically create networks and clusters of machines.

Chapter 8 shows how to set up the Service Catalog. This involves publishing blueprints
and setting up permissions for end users.

Chapter 9 talks about reclamations and how they can be used to reclaim capacity in the
datacenter.

Chapter 10 explores Custom Properties, Build Profiles and the Property Dictionary. These
items help administrators to easily change the runtime behavior of workflows and to
customize the user interface of the Request Form in the service catalog.

Chapter 11 covers advanced administration topics, such as the CloudClient command line
tool, monitoring vRealize Automation, and how to perform backup and recovery.

Chapter 12 gives an introduction to the extensibility in vRealize Automation.

Chapter 13 introduces the vRealize Designer – an integrated development environment
used to modify built-in workflows in vRealize Automation.

Chapter 14 describes how to set up and configure vRealize Orchestrator and integrate it
into vRealize Automation. Once you know how to get along with Orchestrator, you will
learn how to extend vRealize Automation.

Chapter 15 covers the Advanced Service Designer – a tool that helps to publish all kinds
of services to the service catalog. Once again, the chapter provides many real-life use cases.

Chapter 16 introduces financial management in the private cloud. It also shows how to set
up and integrate vRealize Business.

Chapter 17 gives an introduction to DevOps and discusses its core principles. The chapter
introduces vRealize Application Services, Puppet, vRealize Code Stream and Project
Photon (a lightweight Linux distribution for running Docker containers).

Chapter 18 explains the concepts of DNS, DHCP and IP Address Management Tools.
2. Architecture and vRealize Automation Product Overview

Having given a brief overview of cloud computing and automation in general, this
chapter will now focus on the architecture of vRealize Automation. The main components
of the product can be identified as follows:

• vRealize Automation Appliance: A preconfigured virtual appliance, delivered in the
Open Virtualization Format (OVF). Essentially, it hosts the web console (with the self-
service portal) and provides the user interface of vRealize Automation.
• vRealize Automation Infrastructure as a Service (IaaS): This component must be
installed on a Windows machine and is responsible for the provisioning of IaaS
resources. Therefore, it must be able to communicate directly with hypervisors, cloud
environments and physical hosts in order to deploy IaaS resources.
• Identity Appliance: This is another preconfigured appliance, providing Single
Sign-On (SSO) services for vRealize Automation. Customers with a running vSphere
version 5.5 U1b or higher can use their existing vCenter SSO.

Besides these core components, there are several other appliances and resources which can
further extend and complement the base installation. These components will be described
in the following section. Furthermore, the different ways of licensing vRealize
Automation will also be outlined.

2.1 Architectural overview


Besides the already mentioned core components, the following additional components
exist. However, they can only be used with the Advanced or, in some cases, Enterprise
license.

• vRealize Business Standard Edition: Bundled with the Advanced license of vRealize
Automation, vRealize Business Standard Edition helps you gain insight into the
costs of your datacenter. There is a showback functionality for the overall costs of
your environment, but the resulting costs for business departments or individual
machines can also be seen. In addition, it can be used to compare the costs of
running machines in the private, hybrid, or public cloud.
• vRealize Application Services (formerly known as VMware vCloud Application
Director): This is another appliance, which contributes PaaS functionality to
vRealize Automation. It helps companies to provide sophisticated services such as
“Database as a Service”, “Middleware as a Service” or other complex applications.
Technically, the Application Services take care of the installation, configuration and
deployment of operating systems and hosted applications. During operations,
Application Services also help with running these deployed services – e.g.
scaling in, scaling out or updating applications.
• vRealize Code Stream: From vRealize Automation 6.2 onwards, the vRealize
Automation appliance also hosts vRealize Code Stream – an automation tool that
helps DevOps teams build a release pipeline and provision new applications to a
production environment. However, whilst technically hosted on the vRealize
Automation Appliance, it is an independent product and is not part of the vRealize
Automation or vRealize Suite licenses.

Fig. 2-1 depicts the interaction between these components.

It is important to note, however, that in order to get the IaaS components running, an
instance of Microsoft SQL Server is required. Also noteworthy is the interaction between
the Identity Appliance and Active Directory. Most companies will use Active Directory
for user authentication. This can be leveraged by the Identity Appliance, which
authenticates users and groups for vRealize Automation. By integrating Active Directory,
the Identity Appliance has access to all of its users and groups, so you don’t need to
create separate users in the Identity Appliance.
Figure 2-1 Logical overview of vRealize Automation components

As mentioned before, vRealize Automation serves as a Cloud Management Platform for
the private, hybrid and public cloud. In the private cloud, there is support for VMware
vSphere, VMware vCloud Director, OpenStack, Red Hat/KVM, Microsoft Hyper-V and/or
Citrix XenServer. Within the public cloud, you can connect to Amazon AWS. In case you
want to deploy resources to the physical world, there is also support for servers with HP
iLO, Dell iDRAC or Cisco UCS.

2.2 The components in detail


2.2.1. vRealize Automation Appliance

This appliance is the heart of vRealize Automation. Introduced with version 6 of the
product, it now hosts the user interface for both administrators and end users. Internally, it
consists of the following components:

• SUSE Linux Enterprise Server (SLES) for VMware
• vPostgres as an embedded database
• tc Server as a Java web server
• vCenter Orchestrator (embedded)
• RabbitMQ as a message broker
2.2.2. vRealize Infrastructure as a Service (IaaS)

The IaaS components have to be installed on a Windows machine. As they are quite
complex at first glance, they deserve some explanation. Locally, the following set of
services and websites is hosted or required on the IaaS machines:

• IaaS Website
• Model Manager
• vCloud Automation Center Manager Service
• IaaS Database (usually installed on a separate node)
• Distributed Execution Managers
• Management Agent

It is worth noting that, despite the product name having changed to vRealize Automation,
some of the component names still retain references to “vCloud Automation Center”
(VMware has not managed to change all references to the new product name so far). Until
version 5.2, vRealize Automation (back then still vCloud Automation Center) was a pure
Windows product, and so all these components are implemented in Microsoft .NET. As
VMware tries to shift away from Windows, we will likely see these components migrated
or re-implemented on Linux or in Orchestrator in future versions.
Fig. 2-2 depicts how these components reside on a Windows machine. As these
components are quite important, we’ll now describe them in detail.

Figure 2-2 IaaS components

2.2.2.1. IaaS Website

The IaaS Website is based on the Internet Information Server (IIS) hosts both the Mondel
Manager and the Management interface. In practice, this means that, whilst the user
interface is hosted on the vRealize Automation Appliance, the underlying IaaS
functionality is still hosted on Windows. The Automation Appliance only uses frames to
display the IaaS information. These capabilities are available under the Infrastructure tab,
on the Automation Appliance user interface.
2.2.2.2. Model Manager

The Model Manager represents the heart of the IaaS components. It runs within IIS. Its
basic tasks are to provide a model of the vRealize Automation entities (under the hood,
the data is stored within the Microsoft SQL Server database), to provide access to that
model via a web service interface and to talk to the IaaS Website. Furthermore, all the
workflow information and logic used to provision resources is stored in the Model
Manager. Distributed Execution Managers (DEMs) use this information to execute the
workflows and talk to the external environment. Internally, the Model Manager has the
following components:
Data Model and REST interface

As mentioned, the IaaS components store all their information in a Microsoft SQL Server
database. However, this database cannot be queried directly by other vRealize
components. Instead, a REST interface has to be used, which exposes all data as entities.
In former vCloud Automation Center times (up to version 5.2), this REST API could be
used to implement your own user interface (nowadays the Linux appliance has its own
REST interface, which should be used instead).
REST Web services
Representational State Transfer (REST) is an architectural style that can be used to
implement web services. Because of its simplicity, compatibility and scalability, it is
currently the most popular approach to implementing web services.
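To give a flavor of such a REST interface, the following Python sketch requests an authentication token from a vRealize Automation 6.x appliance and lists the catalog items visible to the user. The hostname and credentials are placeholders, and endpoint paths can differ between versions, so treat this as a sketch to be verified against the API documentation of your release:

    import requests

    VRA = "https://vra-appliance.example.com"   # hypothetical appliance hostname

    # Request a bearer token from the identity service (vRA 6.x-style endpoint).
    resp = requests.post(
        VRA + "/identity/api/tokens",
        json={"username": "user@domain.local", "password": "secret",
              "tenant": "mytenant"},
        verify=False,  # only acceptable in labs with self-signed certificates
    )
    token = resp.json()["id"]

    # Use the token to query the consumer view of the service catalog.
    catalog = requests.get(
        VRA + "/catalog-service/api/consumer/catalogItems",
        headers={"Authorization": "Bearer " + token},
        verify=False,
    )
    for item in catalog.json().get("content", []):
        print(item["name"])
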
Security information

The Model Manager also stores security information, i.e. who is able to see which type of
data and which actions can be invoked on IaaS resources.
Workflows

We have already discussed that vRealize Automation provides the means to provision
resources to different platforms (physical, virtual or cloud platforms). However, depending
on the target platform and the provisioning method, different workflows will be used. All
of them are stored centrally within the Model Manager. It is also possible to extend these
workflows and even store your own .NET-based workflow tasks (remember that the IaaS
components are still implemented in .NET). However, this requires you to additionally
license the Cloud Development Kit. For example, you could modify the basic provisioning
workflow to automatically publish information into a Configuration Management
Database (CMDB).
Events and Triggers

Another important feature of the Model Manager is that you can define custom events
within the data model. For example, whenever the datastore of one of your provisioned
machines changes, it would be possible to update some virtual machine properties.
However, an event can also be triggered from outside of vRealize Automation: a user
could trigger a machine action and, as a consequence, a workflow run could be started
(e.g. the installation of some software).
Distributed execution of workflows

While the workflow logic itself is defined in the Model Manager, it is not the Model
Manager that executes the workflows. This is done by DEM Workers (there is also a DEM
Orchestrator, but it only schedules the workflow runs). Multiple DEM Workers can exist,
and sometimes we want to define where a certain workflow should be run (e.g. in a
special location or on a machine that meets certain requirements). This can be configured
by means of ‘Skills’. Essentially, a ‘Skill’ is a relationship between a DEM Worker and a
workflow; it means that a workflow can only be run by a specific DEM Worker, which
resides on a certain host. Technically, to run a workflow, a DEM Worker contacts the
Model Manager and downloads all the artifacts needed to run the code (e.g. some scripts).
DEM Workers regularly contact the Model Manager and ask for new work. Fig. 2-3
depicts the interaction between the Model Manager and DEM Workers; the sketch after
the figure illustrates the polling pattern.

Figure 2-3 Model manager and DEM interaction
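Conceptually, every DEM Worker runs a pull loop along the lines of the following Python sketch. This is not VMware's actual implementation (the real DEMs are .NET services); it merely illustrates the polling pattern and the role of skills, with all objects and methods being hypothetical:

    def dem_worker_loop(model_manager, my_skills):
        """Conceptual pull loop of a DEM Worker (illustrative only)."""
        while True:
            # Ask the Model Manager for a pending workflow that matches our skills.
            workflow = model_manager.next_pending_workflow(skills=my_skills)
            if workflow is None:
                continue   # nothing to do yet, poll again
            # Download everything needed to run the workflow locally.
            artifacts = model_manager.download_artifacts(workflow)
            # Execute the workflow, i.e. talk to the external system.
            result = workflow.execute(artifacts)
            # Report the outcome so the DEM Orchestrator can track progress.
            model_manager.report_result(workflow, result)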

2.2.2.3. IaaS Database

As we have already discussed, IaaS uses a Microsoft SQL Server to store all its data.
Currently, only Microsoft SQL Server is supported. If you want to prepare for high
availability, you should consider implementing a Microsoft SQL Server failover cluster.
While database mirroring or replication is possible in principle, VMware’s ‘preferred
way’ is to set up a cluster.
2.2.2.4. vCloud Automation Center Manager Service

The ‘vCloud Automation Center Manager Service’ runs as a Windows service and is
responsible for the interaction between the Model Manager, the vRealize Automation
agents, Active Directory and SMTP. There is not much to configure in this service;
however, you have to make sure that it is running and not duplicated in your environment.
2.2.2.5. Distributed Execution Managers

As mentioned earlier, DEMs are responsible for interacting with the environment and the
provisioning of machines. They talk to the Model Manager in order to fetch all
information required for provisioning. There are two kinds of DEMs:

• The DEM Orchestrator communicates with the Model Manager and schedules the
workflow execution. The DEM Orchestrator only monitors the execution of the
workflows; it does not do the work itself. As there is a lot of interaction between the
DEM Orchestrator and the Model Manager, it is recommended to install the DEM
Orchestrator on, or ‘near’ to, the Windows machine hosting the Model Manager. At
any time, only one DEM Orchestrator can be active within a vRealize Automation
installation. For high-availability reasons, it is therefore always good policy to have
a failover machine that can become active in the case of a failure.

• DEM Workers are responsible for the execution of workflows. In contrast to the DEM
Orchestrator, multiple workers can be active at the same time. This is useful both for
high availability and for scalability. Because DEM Workers interact with external
resources, they should be installed as ‘near’ as possible to the systems they are
provisioning.

2.2.2.6. Agents

As discussed, DEMs interact with external systems in order to run workflows on them.
Unfortunately, in some cases they lack the knowledge to interact directly and thus need the
help of agents. This is especially true of the hypervisors (e.g. vSphere, Hyper-V) that were
supported in the first versions of vRealize Automation (support for other platforms like
KVM, Red Hat Enterprise Linux OpenStack, Amazon Web Services, Dell iDRAC or
Cisco UCS was later implemented directly in the DEMs). Right now, there are several
different kinds of agents:

• ‘Virtualization proxy’ agents are used to interact with hypervisors. This can be done
in order to provision a machine or to synchronize hypervisor data with the vRealize
Automation database (e.g. which templates can be used on the hypervisor). These
agents are installed as Windows services and must be installed and configured for
every single virtualization environment. This means that if you want to address three
different vCenter Servers, you must install three different agents. There are
virtualization agents for vSphere, Hyper-V and XenServer.
• ‘Virtual Desktop Integration’ (VDI) agents help with registering virtual machines in
external desktop management systems. One of the most popular VDI systems is
Citrix XenDesktop. After the provisioning and the registration, vRealize Automation
provides the owners of registered machines with a direct connection to the
XenDesktop Web interface. One installed agent can communicate with a single
Desktop Delivery Controller (DDC) or with multiple DDCs.
• ‘External Provisioning Integration’ (EPI) PowerShell agents help with the on-
demand streaming of Citrix disk images, from which the machines boot and run.
• The EPI Agent for Visual Basic helps to run Visual Basic scripts as an additional
step in the provisioning process (a script can be invoked before or after provisioning
a machine or when disposing of it).
• ‘Windows Management Instrumentation’ (WMI) agents allow the collection of
information from machines under vRealize Automation control. This is required if
you want to provision machines via a Windows Imaging Format (WIM) file.

• The Management Agent was newly introduced with vRealize Automation 6.2 and is
also installed as a Windows service. It is used to register IaaS nodes in distributed
deployments and to collect support and telemetry information from these nodes.

However, it is also possible to run a Guest Agent from within a deployed machine (in that
case, you should ensure that the guest agent is already part of the image used to deploy
the machine). Guest agents communicate with vRealize Automation via SSL and are
useful for post-installation tasks on non-VMware machines (if you provision to vSphere,
it is easier to use the Guest API of the VMware Tools together with Orchestrator instead).
Guest agents can help you in a variety of use cases, for example:

• You want to perform additional configuration after provisioning, for example
partitioning or formatting a hard disk.
• You want to use a configuration tool like Puppet to install and configure additional
software.
• You use vRealize Application Services (formerly Application Director).

Guest Agents are available for Windows as well as for Linux operating systems.
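As a simple illustration of the second use case, the following script could be handed to the Linux guest agent as a post-provisioning task. It is a hypothetical example (the Puppet master hostname is a placeholder), not a script shipped with vRealize Automation:

    #!/usr/bin/env python
    import subprocess
    import sys

    def run(cmd):
        """Run a command; a non-zero exit code tells the guest agent the step failed."""
        print("Running: " + " ".join(cmd))
        if subprocess.call(cmd) != 0:
            sys.exit(1)

    # Register the freshly provisioned machine with a Puppet master
    # (hypothetical hostname) and apply the assigned configuration once.
    run(["puppet", "agent", "--server", "puppet.example.com", "--test"])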

2.2.3. Identity Appliance

As discussed, the Identity Appliance does not need to be deployed if you have a vCenter
Server 5.5 U1b or higher running. However, some companies tend to deploy their own
Identity Appliance to have more control over their installation (for example, vRealize
Automation does not support changing the password of the SSO administrator and may
become inaccessible due to an expired password). On the other hand, deploying your own
Identity Appliance means more work (also, don’t forget to provide high availability for
the appliance).

2.2.4. vRealize Business Standard Edition

In most environments, it is important to have insight into your costs (i.e. to find out who
is responsible for which costs and to show these costs in a transparent manner). For
vCloud Director, there is a tool called vCenter Chargeback Manager. However, this tool
has reached end-of-life and has been replaced by vRealize Business. If you are working
with vRealize Automation Advanced Edition or higher, you are free to deploy the
vRealize Business Standard Appliance. This appliance also helps you to compare your
own datacenter costs with those of Amazon Web Services or Microsoft Azure and helps
to answer the following questions:

• What is the total cost?
• What does the cost include?
• How are costs allocated?
• Who is consuming services?
• What is the cost for each business unit?
• How can I create resource consumption reports for my stakeholders?

Fig. 2-4 shows the user interface after the integration of the vRealize Business Standard
Appliance into vRealize Automation. Users can compare the costs of deploying a set of
virtual machines into the private datacenter, Amazon Web Services or Microsoft Azure.
2.2.4.1. Application Services

Application Services require the Enterprise license of vRealize Automation. They are a
powerful tool for hosting PaaS services in your environment. This comprises not only
deploying machines to your environment, but also setting up complete software stacks.
Typically, companies want to provide some “IaaS+” features in their environments (for
example ‘Database as a Service’ or ‘Middleware as a Service’). Normally, this would
involve a lot of scripting in order to get such machines installed and configured. Luckily,
with the help of Application Services, such services can be configured graphically (via
drag-and-drop). These services are called Application Blueprints and are depicted in
Fig. 2-5.

Figure 2-4 Cost comparison

The application blueprint in Fig. 2-5 displays a web application consisting of three
different components: a load balancer, a set of JBoss application servers and a database
on a backend server. If you create such a blueprint, the first step is to choose an operating
system from the menu on the left and drag it to the main area. In the next step, you choose
the services you want to have on your machines and move them to your operating system
instances. The last step would be to deploy your own code (your Windows DLLs, Java
WAR files, etc.) onto those machines. If there is no Puppet module for your software, you
are still free to implement and use your own code in Application Services.
Application Services allow you to deploy your blueprint to different supported
environments (vRealize Automation, OpenStack, vCloud Director or Amazon Web
Services). There is a special cloud abstraction layer that hides the details of the
underlying environment. Therefore, there is no platform-specific configuration in your
Application Blueprints.
Like IaaS or XaaS services, application blueprints can also be published directly to the
service catalog (so that end users can request them). Compared to a traditional
deployment, all the services can be installed and configured in minutes rather than
several days.

2.2.5. vRealize Orchestrator

Another important tool within the vRealize suite is Orchestrator, which helps you to
automate your environment. The vRealize Automation Appliance already comes with an
embedded Orchestrator instance (including plug-ins for vRealize Automation, NSX and
vRealize Code Stream). If you don’t want to use the embedded Orchestrator instance (for
performance or high-availability reasons), you can still deploy your own Orchestrator
instance, or use the one bundled with vCenter Server.
Orchestrator is required in the following scenarios, if you need to customize and extend
vRealize Automation:
Figure 2-5 Creation of an Application blueprint

• Firstly, if you want to provision XaaS services to the service catalog, the
underlying workflows must run in Orchestrator. From a developer’s point of view,
the first step is to create these workflows in Orchestrator. Once you have finished
that process, the next step is to build a graphical frontend with the Advanced
Service Designer (ASD) and publish the workflow to the service catalog. There are
many use cases for the ASD. For example, you could integrate a plug-in for your
storage system into Orchestrator and then publish it via ASD to vRealize
Automation. Hence, many small tasks can be automated – for example, creating or
extending a LUN – without bothering the storage administrator anymore. Out of the
box, Orchestrator already comes with over 300 different workflows.
• You also need Orchestrator if you want to customize the way machines are
provisioned. There are plenty of use cases for that. For example, if you are not
happy with the hostname and IP address assigned by vRealize Automation, you
could call a workflow to invoke an IP address management (IPAM) tool like
Infoblox instead. Many companies also need to register their resources in a
configuration management database (CMDB). There are plenty of other scenarios
where Orchestrator would be the right component for customization – we will talk
more about this later in Chapter 15. As a summary, Fig. 2-6 shows the machine
lifecycle of a virtual machine in vRealize Automation and the separate stages where
customization can occur (a sketch of how such a workflow can be invoked via the
Orchestrator REST API follows after the figure).
• Last but not least, Orchestrator is also required for NSX configuration within
vRealize Automation.

Figure 2-6 Orchestrator workflow extensibility
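To illustrate how a workflow can be started from outside, the following Python sketch invokes an Orchestrator workflow execution via the vRO REST API. The host, workflow ID and credentials are placeholders; check the parameter payload format against the Orchestrator REST API documentation of your version:

    import requests

    VCO = "https://orchestrator.example.com:8281"   # hypothetical vRO host
    WORKFLOW_ID = "1234-abcd"                       # placeholder workflow ID

    # POSTing to the executions resource starts a new workflow run;
    # input parameters are passed in the request body.
    body = {"parameters": [
        {"name": "vmName", "type": "string",
         "value": {"string": {"value": "web01"}}}
    ]}
    resp = requests.post(
        VCO + "/vco/api/workflows/" + WORKFLOW_ID + "/executions",
        json=body,
        auth=("vcoadmin", "secret"),   # basic auth; placeholder credentials
        verify=False,                  # labs with self-signed certificates only
    )
    print(resp.status_code)   # 202 Accepted means the run was started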

2.2.6. NSX
Another important VMware product within the cloud ecosystem is NSX. NSX provides
network virtualization and a security platform for the software-defined datacenter.
Amongst other things, NSX has the following capabilities and features:

• Creating logical networks
• Provisioning networking and security gateway services (i.e. firewalls, NAT routers,
load balancing, VPN or DHCP)
• Creating VXLAN networks (logical layer 2 networks that can span different
locations)
• Data security (analyzing network traffic for sensitive data)

When integrated into vRealize Automation, these features and components can be created
automatically as part of an ordinary deployment process. This is useful when merely
provisioning virtual machines is not enough. Popular use cases for the integration of NSX
into vRealize Automation are:

• Providing identical training environments (every student can work on an identical
network)
• Lab environments
• DevOps
• Dynamically provisioning whole networks in a multi-tenant environment

In many ways, NSX is the successor of the (discontinued) vCNS and has a lot of new
features. However, vCNS is part of the vCloud Suite, while NSX has to be licensed as a
stand-alone product. Like NSX, vCNS can be integrated into vRealize Automation.
2.3 vRealize Automation Licensing

Like many other products from VMware, vRealize Automation comes in different
editions: there is a Standard, an Advanced and an Enterprise edition available.
vRealize Automation is available as a stand-alone product or as part of the vRealize
Suite (in that case, however, you can only provision into VMware environments). The
Standard edition is only available as part of the vRealize Suite. If you want to deploy the
stand-alone product, you will have to start with the Advanced edition. In that case, you
need to license your managed operating system instances (OSIs), which are available in
packs of 25 instances.
There is also a Cloud Development Kit (CDK), which allows some sophisticated
customization – however, the CDK also has to be licensed additionally for each vRealize
Automation instance.

As a summary, Table 2-1 shows the different editions and their features[1].

Features Standard Advanced Enterprise

vRealize Automation

VMware Infrastructure Services, cloning only, vRealize X X X


Orchestrator Integration

Multi-vendor, multi-cloud Infrastructure, and multi-vendor SW X X


provisioning

Custom Services (XaaS), Approvals, Reclamation, Chargeback, X X


multi-tenancy

Application Services, Release Automation, DevOps X

Services Provisioned and Managed

Infrastructure Services (vSphere and vCloud only) X X X

Day-2 Operations for Infrastructure: Reconfigure, Snapshot X X X

Infrastructure Services (multi-vendor virtual, physical, and X X


public cloud)

Custom Services X X

Application Services ( virtual, private and public cloud) X

Day-2 Operations for Applications: Update, Rollback, Scale-in X


and Scale-out

Software Deployment Mechanisms

Hypervisor and vApp Cloning X X X

Multi-vendor software deployment tools (e.g. BMC X X


BladeLogic, HP Server Automation, Microsoft SCCM, NetApp
FlexClone, Citrix Provisioning Server, Linux
Kickstart/AutoYast, Windows WIM Imaging PXE Boot and
others)

Deploy integrated multi-tier applications X

Leverage existing services in new application deployments X

Governance and Controls

Business rules, resource allocation and infrastructure service X X X


definition policies
Multi-tenancy and approvals X X

Application definition and release automation policies X

Business Management

Chargeback and cost display throughout the product X X

Integration with VMware vRealize Business Standard Edition X X

Solution Extensibility

vRealize Orchestrator Integration X X X

Optional vRealize Automation Development Kit (SDK) X X X

VMware Cloud Management Marketplace solutions X X X

Advanced Service Designer X X

Integration with Configuration Management Solutions e.g. X


Puppet, Chef and others

Table 2-1 vRealize Automation features by edition

2.4 Summary

This chapter discussed the architecture of vRealize Automation. The main components are
the vRealize Automation appliance and the IaaS components. Furthermore, there are
additional components like the Identity Appliance, vRealize Orchestrator or vRealize
Business. Altogether, they make up the vRealize Automation environment. It is crucial to
understand the roles of these different components in order to create a design for a
vRealize deployment and begin with the installation.
3. Design and Deployment

In the previous chapters, we discussed cloud computing in general and introduced the
architecture of vRealize Automation.
Now it’s time to discuss the hardware and software requirements for vRA and find out
how vRA should be designed. This also involves learning how vRA can be scaled to
support thousands of managed virtual machines and how we can guarantee high
availability.
To tackle these issues, we will first talk about the hardware requirements of vRA and
then we will show how vRA can be scaled for small, medium and big environments.

3.1 Physical design


We have already discovered that a vRA installation consists of different virtual machines
and components. While in small environments some of these components can be installed
on one node, however, in a larger environment (for high-availability and scaling reasons),
it makes sense to distribute them on different nodes. The following table outlines these
components and the corresponding hardware requirements.

Server role Components Minimum Recommended

Identity Appliance CPU: 1 vCPU Same as required hardware


RAM: 2 GB specification

Disk: 10 GB
Network: 1 GB/s

vCenter Single Sign- CPU: 2 vCPU Same as required hardware


On RAM: 3 GB specification

Disk: 2 GB
Network: 1 GB/s

vRealize Automation tc Server CPU: 2 vCPU CPU: 4 vCPU


Appliance vPostgresSQL RAM: 8 GB RAM: 16 GB
VMware SLES Disk: 30 GB Disk: 30 GB
RabbitMQ Network: 1 GB/s Network: 1 GB/s

IaaS Web server IaaS Web site CPU: 2 vCPU CPU: 2 vCPU
Model Manager RAM: 2 GB RAM: 4 GB
Disk: 40 GB Disk: 40 GB
Network: 1 GB/s Network: 1 GB/s

Infrastructure Manager Service CPU: 2 vCPU CPU: 2 vCPU


Manager Server DEM RAM: 2 GB RAM: 4 GB
Orchestrator Disk: 40 GB Disk: 40 GB
Network: 1 GB/s Network: 1 GB/s

DEM Server CPU: 2 vCPU CPU: 2 vCPU


RAM: 2 GB RAM: 6 GB
Disk: 40 GB Disk: 40 GB
Network: 1 GB/s Network: 1 GB/s

Proxy Server Proxy Agent CPU: 2 vCPU Same as required hardware


RAM: 2 GB specification

Disk: 40 GB
Network: 1 GB/s

vRealize Automaton CPU: 2 vCPU Same as required hardware


Appliance as RAM: 2 GB specification
vPostgres server
Disk: 20 GB
Network: 1 GB/s

MS SQL Server CPU: 2 vCPU CPU: 8 vCPU


RAM: 8 GB RAM: 16 GB
Disk: 20 GB Disk: 80 GB
Network: 1 GB/s Network: 1 GB/s

vRealize CPU: 2 vCPU Same as required hardware


Orchestrator RAM: 3 GB specification

Disk: 12 GB
Network: 1 GB/s

vRealize Application tc Server Small environment: Large environment:


Services PostgresSQL CPU: 2 vCPU CPU: 8 vCPU
VMware SLES RAM: 4 GB RAM: 16 GB
RabbitMQ Disk: 16 GB Disk: 50 GB
Network: 1 GB/s Network: 1 GB/s

vRealize Business tc Server CPU: 2 vCPU Same as required hardware


Appliance PostgresSQL RAM: 4 GB specification

VMware SLES Disk: 50 GB


Network: 1 GB/s

Table 3-1 Hardware requirements

Please also note the compatibility between the vRA component and its underlying
software:

3.1.1. Host support for IaaS components

Supported operating systems:

• Windows Server 2008 R2 (including SPs)


• Windows Server 2012 and Windows Server 2012 R2 (both including SPs)

Supported databases:

• MSSQL 2012 and MSSQL SP1


• MSSQL 2014

3.1.2. vRealize Automation Appliances

Supported databases:

• vPostgres Appliance 9.2.4, 9.2.9.x, 9.3.5.x


• Postgres SQL 9.2.4, 9.2.6, 9.3.4

Authentication:

• Identity Appliance 6.2


• vSphere SSO 5.5 1b or higher

3.2 High-availability and scalability considerations

After having made sure that the hardware requirements have been met, you need to decide
on whether or not you need some kind of high-availability in your environment. You also
need to work out how much load you want to serve with vRealize Automation . With that
knowledge in mind, we can figure out how many servers we need to deploy. Whilst in test
and lab environments a minimal deployment will be sufficient, for bigger loads we will
have to scale up our environment. For many environments, the vSphere HA feature will be
sufficient. However if you want to minimize downtime, you will have to investigate each
component in vRA and figure out how HA can be guaranteed.

3.2.1. SSO component

We have already mentioned that you can use either the vSphere vCenter SSO or the
Identity Appliance as the SSO component in vRA. However, if you want to guarantee
high-availability, you must stay with the vSphere SSO. The vSphere SSO can be operated
in an active-passive configuration with an F5 load balancer in front.

3.2.2. vRealize Automation Appliance

High-availability and scaling of the vRA Appliance are essential features when building
an active-active cluster. However, in this case we can no longer use the embedded
PostgreSQL database, but instead need to deploy the PostgreSQL database as a separate
node (you can also operate the PostgreSQL database in a cluster). The same applies to the
embedded Orchestrator instance – a separated Orchestrator node is needed. Last but not
least, a load balancer is required.
When accessing the vRA appliance, users should always be redirected to the very same
node in subsequent requests. Therefore the ‘sticky session’ or the ‘session affinity’ feature
should be activated on the load balancer. For load balancing, only port 443 is used. While
you could use any load balancer, VMware recommends the use of F5 BIG-IP hardware
and F5 BIG-IP Virtual Edition. These load balancers have already been tested by VMware
and white papers exist (which describe in detail how to configure them in a vRA
environment).

3.2.3. vRealize IaaS components

Most of the workload and time required for configuring high-availability and scalability,
has to be spent on the IaaS components. In small environments, all of these components
are installed on a single node. However, in bigger environments, the components have to
be distributed over different nodes and each of them has to be configured for HA and
scalability. Remember, we have several different components on the IaaS server:

• The IaaS web server hosts the Model Manager and the IaaS user interface (displayed
as a frame from the vRA appliance).
• The Manager Service coordinates the communication between the database, LDAP
or Active Directory and a SMTP Server.
• Distributed Execution Managers (DEMs) communicate with external system to
provision resources.
• Proxy agents know how to interact with hypervisors.

Having this knowledge in mind, we can now concentrate on these components.

3.2.4. IaaS web server

The IaaS web server can be placed behind a load balancer – in the same manner as the
vRA appliance – hence supporting an active-active cluster configuration. However, as no
user is directly accessing the web server (only the vRA appliance), there is no need for
‘sticky session’ or ‘session affinity’. Instead, the ‘Least Response Time’ or ‘round-robin’
algorithm can be used. As previously mentioned, any load balancer with these features can
be used. However, VMware recommends the F5 BIG-IP hardware and the F5 BIG-IP
Virtual Edition.
It is important to act with caution when installing IaaS web servers in your environment.
While you can install many instances of the Website component, only one node is allowed
to run the Model Manager data component (usually the first node in the deployment). The
other nodes will just run the Website component.
At runtime, the IaaS web server is usually CPU-bound. So, if you notice some latency in
your environment, please monitor the IaaS web servers. Scaling-up your web server (add
more CPUs or memory) is also easier than scaling-out (add another node behind the load
balancer).
3.2.5. Manager Service

As opposed to the aforementioned components, you cannot run the Manager Service in an
active-active cluster configuration – instead, a second server, is needed which runs as a
disaster recovery cold standby server. From a performance point of view, this shouldn’t be
a problem, as one Manager Service can easily serve tens of thousands of managed virtual
machines. The failover itself can happen manually or - better practice - via a load balancer
– in this case, however, you cannot use any load balancing algorithm.

3.2.6. DEM Orchestrator

Just like the Manager Service, only one instance of a DEM Orchestrator can run at any
time in a vRealize Automation environment. Because of this, an active-passive
configuration is also required . That being said, there is no need to configure a load
balancer for the DEM Orchestrator, as DEM Orchestrators can automatically monitor
themselves. When a DEM Orchestrator is started, it automatically searches for another
running DEM Orchestrator instance. If none are found, it becomes the primary DEM
Orchestrator. If there is already a working DEM Orchestrator, it will start as a secondary
node and monitor the primary. If the primary DEM Orchestrator fails, the secondary
automatically takes its place. Later, if the former primary DEM Orchestrator comes back
again, it will detect that there is another instance running and switch to secondary mode .
It is also important to note that the DEM Orchestrator and the Model Manger work closely
together to execute all kinds of workflows. Consequently, they should be placed near to
each other and should have a high network bandwidth available for communication.

3.2.7. DEM Worker

The components that actually run the workflows are called ‘DEM Workers’. They should
be deployed near the external resources they are communicating with. For example, if you
have different datacenter locations, make sure you have a DEM Worker running in each of
these locations.
DEM Workers are mainly CPU-bound, as they run the workflows. A DEM Worker can
run up to 15 workflows in parallel. If the workflow queue is constantly high, consider
vertical scaling first (i.e. add additional CPU and RAM). Nevertheless, you can easily add
an additional DEM Worker. Further to this, you can configure the workflow scheduling in
order to run certain workflows during off-hours or to increase the interval in which they
run.
If a DEM Worker becomes unavailable, the DEM Orchestrator will cancel all its
workflow tasks and assign them to other available workers.
3.2.8. Proxy Agents
You should deploy the Proxy Agent in the datacenter it is associated with. Together with
the DEMs, they are responsible for data collection from vSphere-, Hyper-V or Xen
environments. The best practice is to run these workflows during off-hours (or manually,
if you have to trigger a data import from the environment). Up to two workflow runs can
run at the very same time. If you increase this value, you might encounter performance
issues in medium to big environments. It is also a good idea to consider deploying a
redundant Proxy Agent.
3.2.9. MS SQL Server

The last component to consider, within the software stack, is the Microsoft SQL Server.
As the database is a critical component, you should also think about how you could attain
high availability. There are different methods for creating high availability of the
Microsoft SQL Server. This could be as a cold standby server, a mirrored standby server
or a cluster (recommended by VMware).

3.2.10. vRealize Automation Business Appliance

One vRealize Business Appliance can scale up to 20,000 virtual machines, in up to four
different vCenter servers. When you synchronize for the very first time, it will take up to
three hours to finish. Later synchronizations will take between one and two hours. Like
the vRA Appliance, the vRA Business Appliance can be put behind a load balancer.
However, you must consider that data collection can only take place from one node of the
cluster.

3.2.11. vRealize Application Services

A running instance of vRealize Application Services can handle over 10,000 virtual
machines and more than 2,000 library items. Over 40 concurrent deployments and 100
users can be connected at the same time.
The vRA Application Service appliance relies mainly on CPU and memory capacities.
There is a Java virtual machine under the hood, so just increasing the memory size of the
VM is not enough. You need to adjust the maximum heap size within the Java properties
of the underlying tc Server. Please also take into account that placing a load balancer in
front of the Application Services is not supported.
3.3 Deployment Architecture

Now that we have finished discussing how to scale and provide high availability for the
different components of IaaS, it’s time to address possible deployment architectures. To
help ease these issues, VMware differentiates between deployments: Small, medium and
big deployments. In the following, we will focus on each of these in turn.
Small environments

Small environments are considered to meet the following load:

• 10,000 managed items


• 500 catalog items
• 10 concurrent deployments
• 10 concurrent application deployments (with 3 to 14 VM nodes)

The following servers should be deployed for this scenario:

• Identity Appliance
• vRealize Automation Appliance
• IaaS Server (with all IaaS roles installed)
• MS SQL Server
• vRealize Automation Application Services Appliance
• vRealize Business Standard Appliance
Figure 3-1 Minimum footprint for small environments

The deployment is depicted in Fig. 3-1. Please note the ports that are required for
communication. The Identity Appliance is accessed via port 7444, LPAP via 389 (secure
LDAP over 636), access to the appliance console is done via port 5480 and the MS SQL
database port 1433 should also be opened. The Application Services additionally need
ports 8443 and 5671 for communication with the Application Services Agents.
Such small environments run only the minimum of appliances and do not provide any
form of high availability.

Figure 3-2 Minimum footprint for medium environments

Medium environments

According to VMware, a deployment for a medium environment can support a much


greater load and provides for high availability:

• 30,000 managed items


• 1,000 catalog items
• 50 concurrent deployments
• 20 concurrent application deployments (with 3 to 14 VM nodes)

To build such a solution, the following servers should be deployed (Fig. 3-2):
• Two vCenter Single Sign-On Servers
• Two vRA appliances (active-active)
• IaaS Web/Manager Server 1 (Active Web Server/ active DEM Orchestrator, active
Manager Service)
• IaaS Web/Manager Server 2 (Active Web Server / passive DEM Orchestrator,
passive Manager Service)
• Two IaaS DEM Worker Server
• Two IaaS Agent Servers
• Clustered MSSQL Server
• vCenter Single Sign-On load balancer
• vRA Appliance load balancer
• IaaS Web load balancer
• IaaS Manager Service load balancer
• vRealize Automation Application Services Appliance
• vRealize Business Standard Appliance

Large environments

The large deployment plan (Fig. 3-3) serves the biggest environments:

• 50,000 managed items


• 2,500 catalog items
• 100 concurrent deployments
• 40 concurrent application deployments (with 3 to 14 VM nodes)

The following servers are needed for this deployment:

• Two vCenter Single Sign-On Servers (active-passive)


• Two vRA appliances (active-active)
• Two IaaS web servers (active-active)
• Two IaaS Manager servers (active-passive)
• Minimum of two IaaS DEM Worker Servers (active-active)
• Two IaaS DEM Orchestrators (active-passive)
• Two IaaS Agent Servers
• Clustered MSSQL Server
• vRealize Automation Application Services Appliance
• vRealize Business Standard Appliance
• vCenter Single Sign-On load balancer
• vRA Appliance load balancer
• IaaS Web load balancer
• IaaS Manager Service load balancer

Figure 3-3 Minimum footprint for large environments

3.4 Summary

This chapter described all the components of vRealize Automation and its hardware and
software requirements. Depending on your deployment size, VMware gives
recommendations regarding scalability and high availability. With that background
knowledge in mind, we can now continue with the installation and configuration of vRA.
4. Basic Installation

Having discussed the architecture and design of a vRealize Automation solution in the
previous chapters, we can now occupy ourselves with its installation and configuration. As
this encompasses a lot of tasks, we will split these tasks into different chapters.
Firstly, we will introduce the required steps. This chapter covers the basic installation. In
the following chapters, we will discuss how to continue with the configuration.
4.1 Overview of installation and configuration steps
The first steps cover basic installation:

1. Review and configure prerequisites.


2. Deploy and configure the Identity Appliance.
3. Deploy and configure the vRealize Automation Appliance.
4. Install the IaaS Server.
5. Configure the IaaS Server.

The next chapters will the cover the following tasks:

6. Configure the admin portal.


7. Create a tenant.
8. Add endpoints.
9. Create and configure fabric groups.
10. Define business groups.
11. Configure reservation policies and network profiles (optional).
12. Create reservations.
13. Define blueprints.
14. Create catalog services.
15. Manage catalog items.
16. Configure catalog entitlements.
17. Configure governance and approvals.
18. Configure vCO.

Hint:
The installation is very time-consuming and there is plenty of room to make mistakes.
While some mistakes can be easily reverted, others may well require reconfiguring some
parts from scratch (especially with the Windows components). So it is always a good idea
to take a snapshot if you created a task or you are about to start with the next one.
4.2 Installation

In the following segment, we will discuss the installation procedure step-by step. So let’s
begin with the first task.

4.2.1. Installation prerequisites and considerations


There are a couple of issues to review:

• It is important to remember that we need correct DNS names for all the machines in
our vRealize environments. vRA internally uses the fully-qualified domain (FQDN)
names for communication. Please be aware that no underscore (“_”) is allowed in
any FQDN.
• It is a good idea to create a special domain account for vRA and assign local
administrator privileges on the IaaS machines to that account.
• You need a Microsoft SQL Server for IaaS components. During the installation, the
database schema will be created. The database default name will be “vCAC”.
Assigning database sysadmin privileges to a vRealize Automation domain account
makes the installation quite easy. If you only have the databas owner role, please
assure that you have created the “vCAC” database in advance. Furthermore, on the
MS SQL server, please check the following settings:
o The TCP/IP protocol must be activated.
o The relevant ports must be open (1433).

• You also need the Microsoft Distributed Transaction Coordinator Service (MS
DTC) running on your IaaS machines, as well as on the MS SQL Server.
• Access to vCenter Server is needed for the deployment.
• Please check the hardware requirements as discussed in chapter 2.
• vRA is using SSL for the communication. vRA can create self-signed certificates,
however check your company guidelines on signed certificates, you may need to
request and import signed certificates.
• Like in any distributed environment, time synchronization is crucial. You have the
following choices for the configuration:
o Use NTP (port 123 must be open).
o Windows machines can use the W32Time services.
o Use the VMware tools.

4.2.2. Deployment and configuration of the Identity Appliance

After reviewing all the prerequisites, we can now continue with the deployment of the
Identity Appliance (if you want to use the vCenter Single Sign-On component, you can
skip these steps). We will be using the classical vSphere Client, but needless to say the
Web Client can also be used for the deployment.

1. Open the vSphere Client and choose File > Deploy OVF Template in the menu.
2. The assistant opens and asks you to choose a source file for deployment. Click on
the Browse button, search for your Identity Appliance file (with the .ova or .ovf
ending) and click Open and then Next.
3. Review the settings on the OVF Template Details screen and click Next to
continue.
4. Confirm the End User License Agreement by clicking on Accept and continue
with Next.
5. The dialog Name and Location asks you to specify a Name and the Location in
the vSphere inventory. Please provide the values and continue with Next.
6. Please choose the Cluster where your appliance should be deployed, on the Host /
Cluster mask. Proceed with Next.
7. The next page is about the Resource Pool. Specify the pool to be used and carry on
with Next.
8. On the page Disk Format, you can choose if you want to have your appliance
deployed as Thick Provisioned Lazy Zeroed, Thick Provisioned Eager Zeroed or
Thin Provisioned. Click on Next afterwards.
9. The next step deals with networking. Please choose the network to which your
appliance should be connected (dropdown list Destination Network) and continue
with Next.
10. You need to specify a couple of properties on the next Property dialog box:
a. Enter and confirm a password for the root account of the appliance in the
Initial root password section.
b. Specify if you want to enable root access via SSH, using the Enable SSH
service in the appliance checkbox.
c. In the Hostname textbox, please specify the full host name (FQDN) of your
virtual machine.
d. Define the Default Gateway, DNS Server, Network 1 IP Address and
Network 1 Mask within the Networking Properties.
Click Next to continue.
11. On the last Ready to Complete screen, please review your settings, choose Power
on after deployment and start the deployment process with Finish.

The deployment process will last approximately one to three minutes. After the
deployment, we can continue with configuring the appliance. There are a couple of issues
which need to be configured:

• Time zone settings


• SSO configuration
• Host settings review
• SSL certificates
• Active Directory connection
• Time synchronization settings

These steps can be actioned as follows:

1. Open a web browser and type the URL https://<identity-


hostname.domain.name>:5480 in the address bar.
2. Accept that the page is not trustworthy.
3. Log in with username root and the password you specified during the deployment
process.
4. Go to the System menu and choose Time Zone. Configure your time zone and click
Save Settings.
5. Change to the SSO tab. The SSO is not configured so far (see the text in red color).
6. Apply the following settings for the SSO:
a. System Domain: vsphere.local
b. Admin Password
After clicking Apply, it will take a short while for the changes to take effect. If the
configuration was successful, the status text will change to SSO is initialized.
7. Change to the Host Settings register and check that the SSO Hostname is correct.
8. In the next step, the SSL certificate is configured. We will use a self signed
certificate, however this could be changed later.
a. With the Choose action menu, select Generate Self Signed Certificate.
b. Type the FQDN of your Identity Appliance within the Common Name
textbox.
c. Provide a value for the organization.
d. Define the Organizational Unit.
e. Select your Country Code.
Click on Apply Settings. Once again, it will take a short while for the configuration.
After that you will see the output SSL certificate is replaced successfully.
9. The next step is to join an Active Directory (AD) domain. Please change to the
Active Directory menu and provide your AD settings as follows:
a. Domain Name: Your AD name
b. Domain User with read permissions for AD: Please specify in the format
user@domain.
c. Password.
After clicking on Join AD domain, it will take a short while to action. If the joining
was successful, the status line will show the connection information accordingly.
10. Finally, the time synchronization has to be reviewed. By default, the Time Sync
Mode is set to synchronize with the host time. However, you can also provide a
NTP Server or disable time synchronization (not recommended). To change the
settings, go to Admin > Time Settings and change the values accordingly.

Hint: Changing the SSO Administrator password and expiry


Please note that vRealize does not support a change to the SSO administrator password, so
it is not possible to assign a dummy password during installation and change it later. Also
there is a default expiry configured for the root account. One solution is to change it not to
expire anymore which can be achieved via the following command:
chage –M 99999

c.2.3. Deployment and configuration of the vRA Appliance

After creating a working vSphere SSO Single Sign-On Server or Identity Appliance, we
can move forward with the vRA Appliance. Like the Identity Appliance, the vRA
Appliance is packaged as an .ovf file and must be deployed accordingly. As we have
already described these steps in detail, we will now show you how to configure the
appliance.
Figure 4-1 Configure host settings

Figure 4-2 SSL configuration

For a basic installation, we must work through the following steps:

• Configure the time zone.


• Check the host settings.
• Configure SSL.
• Connect the appliance to SSO.
• Import a license.
• Configure time synchronization.

Like the configuration of the Identity Appliance, the configuration takes place via a web
browser:
1. Open your web browser and navigate to the URL https://<vrealize-automation-
appliance.domain.name>:5480.
2. Accept any security warning and continue to the configuration page.
3. Log in with username root and your provided password.
4. First check the time settings: Change to the System > Time Zone page and review
the settings. Once finished save your configuration with the Save Settings button.
5. Change to the vRA Settings menu. The Host Settings page configures the host
name and the SSL configuration (Fig.4-1). Review your hostname in the vRA Host
Settings section. If you are using a load balancer, please type the FQDN of the
balancer in the Host Name textbox, otherwise ensure that the FQDN of the vRA
Appliance is within the textbox.
6. Within the SSL Configuration section, a certificate must be configured. If you do
not want to import an existing certificate, please provide the following input:
• Common Name: FQDN of your vRA Appliance
• Organization: Usually your company
• Organization Unit: Usually your department
• Country Code: Code of your country (e.g. DE for Germany)
Click Save Settings to save your configuration and update the certificate
information (Fig. 4-2).
7. In the next step, we have to connect our appliance to the SSO component.
Therefore, change to the SSO menu. Provide the following information:
• SSO Host: FQDN of the SSO component
• SSO Port: Usually on port 7444
• SSO Default Tenant: vsphere.local
• SSO Admin User: administrator
• SSO Amin Password: Your SSO password
Click on Save Settings for your configuration to take effect (it will take several
minutes). If the configuration was successful, the SSO Info will show the status as
connected (Fig. 4-3).
8. Now we have to upload the license key. Change to the Licensing configuration
page, enter your license and click on Submit Key.
9. Change to the Database configuration. Note that we are using an embedded
Postgres database, so there is no need for configuration.
10. Review the Messaging page. Once again, there is no configuration to be done and
vRA is already running a working RabbitMQ instance.
11. Change to the Telemetry menu. If you want to participate in the Customer
Experience Improvement Program and send anonymous data about your
environment to VMware, check the appropriate textbox.
12. The last step is to review the time synchronization settings. To change the settings,
go to Admin > Time Settings and change the values accordingly.

Figure 4-3 SSO tab

Hint:
The SSO hostname must be entered in exactly the same way as the SSO component. If
you specified an IP address there, you also have to specify the IP address in the vRA
Appliance. Please also ensure that any characters are the same case format as in the SSO
component.

c.3 Installation of IaaS components

c.3.1. Installation considerations and prerequisites

The next step in the installation process is to set up the IaaS components. The installation
source files can be found on the vRA Appliance. However, before we can start with the
installation, we should first consider the following issues:

• The user performing the installation must have local administrator privileges.
• It is best practice to have a dedicated service account for vRealize Automation. The
recommended way is to create an Active Directory user and add this user to the local
administrator group on all machines hosting IaaS components.
• If we have a dedicated node for the Microsoft SQL Server, we need privileges to the
server. A good practice would be to create a database for vRealize Automation first
and to assign the database owner role to the vRealize service account.

The installation itself has the following prerequisites:


• Installation of Microsoft .NET 4.5.2.
• PowerShell must be installed (minimal version 2.0).
• Java 1.7 64 bit must be installed. Please also configure %JAVA_HOME% as the
environment variable for the installation path and add the
%JAVA_HOME%\bin\java.exe binary to the PATH variable.

Regarding Microsoft Internet Information Server (IIS) 7.5, there are a couple of
configuration steps to be taken:

• Configure the following modules:


o Windows Authentication
o Static content
o DefaultDocument
o ASPNET 4.5
o ISAPIExtensions
o ISAPIFilter
• You also have to ensure that the following is set correctly:
o Windows Authentication enabled
o Anonymous Authentication disabled
o Negotiate Provider enabled
o NTLM Provider enabled
o Windows Authentication Kernel Module enabled
o Windows Authentication Kernel Mode enabled
o For certificates using SHA512, TLS1.2 has to be disabled on Windows 2012
machines
• Last but not least, check the following IIS Windows Process Activation Service
roles are configured:
o ConfigurationApi
o NetEnvironment
o ProcessModel
o WcfActivation (Windows 2008 only)
o HttpActivation
o NonHttpActivation
c.3.2. IaaS Manager Service

There are also a couple of prerequisites for the IaaS Manager Service:

• Microsoft .NET Framework 4.5.2 must be installed.


• PowerShell must be installed (minimal version 2.0).
• The SecondaryLogOnService must be running.
• The IIS must be installed and configured as described above.
c.3.3. Distributed Execution Manager

The installation of a DEM additionally requires Microsoft .NET Framework 4.5.2 and
PowerShell.

Figure 4-4 Prerequisite installer

c.3.4. Automated installation of Prerequisites

Luckily, there is a PowerShell script on the official VMware blog available[2], which
unburdens you from most of the configuration tasks. There is no warranty from VMware
regarding this script. However, in most environments, it will run without modifications.
Nevertheless, it is still a good idea to take a closer look at its logic, so that you can modify
it if required. The script is specifically designed to configure the requirements for a
complete installation of the Windows components, on a single node. If you have a
distributed environment, you should take a deeper look at the script, so that you can
understand which parts of the script need to be run on the different nodes. Before you
execute the script, please follow these instructions:
1. Install Microsoft .NET framework 4.5.2.
2. Run Windows PowerShell with Administrator privileges.
3. Change to the folder where the script is located.
4. Run the command Set-Execution-Policy unrestricted.
5. Start the script and provide all the requested information. The script will take a
couple of minutes to run (see Fig. 4-4).

After all prerequisites have been met, the installation itself can begin. First of all, we need
the installation source. To download the setup file, please open the vRA Appliance page
(https://<vrealize-automation-appliance.domain.name>:5480) and change to vRA Settings
> IaaS. Finally, click on the Download the IaaS Installer link (Fig. 4-5). Please also
make sure that the location of the vRA Appliance is coded within the setup-filename
(setup_hostname@5480.exe). Usually, the downloaded file should already be named
appropriately, if you access the download page via the FQDN and not via the IP address.
Consequently, if you change the filename, it might not be possible to run the installation
properly.

Figure 4-5 Download page of the vRealize Automation appliance

Tip: Create a snapshot


This is also an appropriate time to take a snapshot of your system. If anything fails within
the installation, you can revert back to your snapshot and do the installation again.

To perform the installation, we must complete the following tasks:

1. Start the installation with administrator privileges (right click > Run as
administrator).
2. Click Next on the Welcome page.
3. Accept the end user license agreement by clicking the I accept the terms in the
license agreement checkbox and proceed with Next.
4. The next page requests the User name and the Password for the vRA Appliance.
Type in root as username and your assigned password. Click on the Accept
certificate checkbox and continue with Next.
5. On the Installation Type dialog, choose Complete Install and click Next.
6. In the next step, the Prerequisite Checker runs. If everything is fine, proceed with
Next. Otherwise click on any issue, fix the requirement and check again.

Figure 4-6 DEM Configuration

Tip: Prerequisite Checker


From time to time, the Prerequisite Checker complains about issues, which have already
been fixed. If that is the case, a reboot might help.

7. The next step is based in the Server and Account settings:


• User name: This is the user that will be used to run vRA. As discussed, it is best
to use a special service account for vRA.
• Password: Type in the password for your service account.
• Passphrase: All sensitive information in vRA will be encrypted and decrypted by
using this passphrase. Please do not forget the passphrase. Once you have
assigned one, it is not possible to change.
• Specify your database information in the Microsoft SQL Server Database
Installation Information section. If you are using Windows authentication, there
is no need to specify a username and password. Otherwise, provide your SQL
Server credentials.
Click on Next to continue.
8. Now it is time to specify a service name for the DEM Manager and DEM
Orchestrator (see Fig. 4-6). In order to deploy virtual machines to a vSphere
environment, the vSphere Agent is a prerequisite. If you have an installation with a
single node, stick with the provided values and click on Next.

Figure 4-7 IaaS registration

9. The next dialog requests all information needed, in order to register the IaaS
components within the vRA Appliance (see Fig. 4-7):
• Firstly, review the vRA Appliance server name in the Server textbox, load and
check the SSO Default Tenant name and download the certificate. View the
certificate and set the Accept Certificate checkbox.
• Secondly, type in the name of the SSO Administrator
(administrator@vsphere.local) along with its password. Click on Test – the test
should pass.
• Finally, enter the hostname or IP address of the local machine. Again click on
Test to check if the settings are valid.
10. The last screen summarizes all these settings. Please review the output and start
the installation. Depending on your hardware, the installation will need between 5-
15 minutes to finish. Once the installation has been successfully completed, the
wizard will ask you to work through the initial system configuration.
c.3.5. Installation with CA-signed certificates

While it is quite common to run a server with self-signed certificates for testing, even in a
small environment, it is still recommended to use CA-signed certificates. Notwithstanding
security concerns, this can also be important for user experience. As end users can access
the service catalog of vRealize Automation, they might be confused with any warning
messages related to self-signed certificates. Changing certificates is not trivial in most of
VMware’s products and this is especially true for vRA. The reason behind this that all
vRA installations are distributed environments. That means we have a minimum of three
machines, where this has to take place. That’s motivation enough to describe the process
of changing certificates in detail.

c.3.6. Considerations before changing certificates


Before we start changing the certificates, we should discuss which steps are essential and
if there is any preliminary work to be carried out.
First of all, troubleshooting certificate issues is never easy. Therefore all steps should be
taken with care and double checking is a must. However in the real world, even with care,
many things can easily go wrong. In that case, it is always convenient to be able to revert
back to a former state. Hence, a set of snapshots for all machines involved is certainly
very useful.
Furthermore, VMware recommends using either a domain certificate or a wildcard domain
certificate in distributed environments. On Windows such a certificate must be stored in
the PXF format, on Linux (as well as on all the appliances and load balancers) it is the
PEM format.
There are also different encodings for the separate components:

• Identity Appliance: PEM and unencrypted key


• vRealize Automation Appliance: PEM and unencrypted key
• IaaS components: PKCS12
• VMware vRealize Orchestrator: DEC (of CSR)

Another important issue is the order in which the certificates are replaced. The reason
behind this is that there are trust relationships between the components that have to be
met. First, the certificate of the Identity Appliance can be changed, then, the vRealize
Automation Appliance and finally, the certificates of the IaaS components can be
replaced.

If you only want to change a specific certificate, please consider the following:

• If you change the certificate of the Identity Appliance, you will also need to register
that certificate with the vRealize Automation appliance and the IaaS-components.
• Replacing the vRealize Automation certificate causes you to register the IaaS
components again.
• If you replace the certificate of an IaaS component, you will have to register that
component again with the vRealize Automation Appliance.

At this point, we will describe how to perform the installation with CA-signed certificates.
Later we will show how to replace certificates.
Figure 4-8 Active Directory Certification Services configuration

Hint: Certificates and load balancers


If your environment grows larger, you will certainly use load balancers to scale your
solution. Setting up load balancers means additional work. However, changing certificates
can actually become easier. Depending on your setup - you can reduce the number of
nodes on which you have to replace certificates. In that case, you can use CA signed
certificates with the load balancers, staying with self-signed certificates on the other
vRealize Automation nodes.

c.4 Configuration of the Certification Authority

If you don’t have any running certification authority, now it is the proper time to set up
one (if there is a running certification authority you can skip these steps). We will show
these steps on a Windows system.

1. Install the Windows Role Active Directory Certificate Services.


2. Check that this role also has the role services Certification Authority and
Certification Authority Web Enrollment activated (see Fig. 4-8) and then
continue with Next.
3. On the mask Specify Setup Type, please choose Enterprise and then Next.
Figure 4-9 Configuration of CA cryptography

4. Select Root CA on the next page (Specify CA Type) if you are configuring your
first CA and continue with Next.
5. The next screen (Set up Private Key) lets you create your new private key. Click on
Next to continue.
6. The next step in the process is to configure the cryptography settings for your CA
(mask Configure Cryptography for CA). Please ensure you have the following
settings (see Fig. 4-9) and click on Next:
• CSP: RSA#Microsoft Software Key Storage Provider
• Key character length: 2048
• Hash algorithm: SHA1
7. Please define a name for your CA on the Configure your CA Name page and
continue with Next.
8. The last step of the wizard is to define the validity period of your certificates (Set
Validity Period). Click on the Next button and finish the wizard.

c.5 Creation of a vRealize Automation certificate template

The next step within the configuration is to create a vRealize Automation certificate
template. This template can be reused for all other subsequent templates. We will also
update the Microsoft CA settings, in order to allow Subject Alternative Names (SANs)
within the attributes. We will continue with the following steps:

1. Use a Remote Desktop Connection to connect to your CA Server.


2. Click on Start > Run and type certtmpl.msc. Click on OK and wait until the
Certificate Template Console opens.
3. Search your web server (it is located in the middle of the window in the Template
Display Name area).
4. Right click the web server and choose Duplicate Template.
5. On the Duplicate Template mask, please select Windows Server 2003 Enterprise
for backward compatibility.
6. Move on to the General tab.
7. Type a name for your template (e.g. vRealize Template).
8. Change to the Extensions tab.
9. Choose Key Usage and click on Edit.
10. Make sure that the option Signature is proof of origin (nonrepudiation) is
selected.
11. Check the Allow encryption of user data checkbox and click on OK.
12. Click on Application Policies and then on OK.
13. Click on Add and select Client Authentication.
14. Click OK and once again OK.
15. Change to the Subject Name tab.
16. Make sure the checkbox Supply in the request is set.
17. Click on the tab Request Handling.
18. Check the option Allow private key to be exported is activated.
19. Save the template with OK.

Once we have finished these configuration steps, we can continue with adding this
template to the list of Certificate Templates:

1. Go to your CA Server, click on Start > Run and type certsrv.msc.


2. Expand the left node with the [+] icon.
3. Right click Certificate Template and select New > Certificate Template to Issue
from the context menu.
4. Find your vRealize Template and click on OK to finish.

As well as creating the certificates for the IaaS components, we also need certificates for
the Linux appliances. This can be done via OpenSSL on both the Linux and windows
operating systems. You will need a running OpenSSL installation, with version 1.0.0 or
upwards.
c.5.1. Creation and configuration of vRealize Appliance certificates

The following steps have to be taken in order to create and configure the vRealize
certificates:

• Create a Certificate Signing Request (CSR).


• Create the certificates.
• Convert the certificate to the PEM format.
• Upload the certificate.

Hint: Use a configuration file for your CSR. When creating a CSR, we need to specify
some basic information about the CSR itself. This approach increases the reuse of the data
and is a good way to document your settings.

c.5.2. Creation of a Certificate Signing Request

Open a text editor (on the computer with OpenSSL installed) and paste the following
configuration:

[ req ]
default_bits = 2048
default_keyfile = rui.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment, nonRepudiation
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:vcva550b, IP:10.10.1.40, DNS:vcva550b.sc.lan

[ req_distinguished_name ]
countryName = DE
stateOrProvinceName = BY
localityName = Nuremberg
0.organizationName = SC
organizationalUnitName = SC
commonName = vcva550b.sc.lan

Before saving the file, make sure to replace all the server names and IP addresses with
your own settings. Then save the file as vrealizeid.cfg (this is the configuration file for the
Identity Appliance). Once finished, repeat the procedure and create a second file for your
vRealize Automation Appliance, save it as vrealizeapp.cfg.
Now we have the input needed to create the CSRs and export the private key. Go to your
command prompt, change to the bin-directory of OpenSSL and type the following
commands:

openssl req -new -nodes -out c:\certs\identity\rui.csr -keyout c:\certs\identity\rui-orig.key -config


c:\certs\identity\vrealizeid.cfg

openssl req -new -nodes -out c:\certs\vrealizeva\rui.csr -keyout c:\certs\vrealizeva\rui-orig.key -


config c:\certs\vrealizeva\vrealizeapp.cfg

Now we have to convert the certificates into the RSA format. This can be done as shown:
openssl rsa -in c:\certs\identity\rui-orig.key -out c:\certs\identity\rui.key

openssl rsa -in c:\certs\vrealizeva\rui-orig.key -out c:\certs\vrealizeva\rui.key

c.5.3. Creation of certificates

The next step in the process is to create the certificates in the Microsoft CA. We will show
the procedure for the vRealize Automation Identity Appliance, however do not forget to
repeat the following steps for the vRealize Automation Appliance:

1. Open a browser and log in on the Microsoft CA web interface (http://<CA-


Server>/CertSVR).
2. Click on the link Request Certificate > Advanced Certificate Request. On the
next page, choose Submit a certificate Request by using a base-64-encoded
CMC or PKCS #10 file, or submit a renewal request by using a base-64-
encoded PKCS #7 file.
3. Copy the content of the rui.csr file into your clipboard and paste it into the field
Base-64-encoded certificate request.
4. Please make sure you have selected the vRealize Automation certificate template
(see Fig. 4-10).
5. Click on Submit.
6. On the next page, Certificate Issued, select the Option Base64 encoded.
7. Click on Download certificate and save the file in the directory where you saved
your CSR and configuration file.
8. Repeat step 1-7 for your vRealize Automation Appliance certificate.
9. Return to CA main page and click on the link Download a CA Certificate,
certificate chain or CRL.
10. Choose the option Base64 encoded.
11. Click on the link Download a CA certificate chain.
12. Save the file as cachain.p7b.
13. Double click the cachain.7b file and navigate to cachain.p7b > Certificates.
14. Next, export the Root certificate. Right click on the root certificate and choose All
Actions > Export from the context menu. Continue with Next.
15. On the next page, choose Base64-encoded X.509 (.CER) and click on Next to
continue.
16. Save the file as Root64.cer and finish the wizard with Next.

Figure 4-10 Certificate request

c.5.1.

c.5.2.

c.5.3.

c.5.4. Convert the certificate to the PEM format

Before we can upload the certificates, there is one last step – we must convert them to the
correct format. Remember, that both the vRealize Identity and Automation Appliance need
the PEM format. So run the following commands for the conversion:

openssl pkcs12 -export -in C:\certs\identity\rui.crt -inkey C:\certs\identity\rui.key -certfile c:\certs\Root64.cer -name
“rui” -passout pass:Vmware1! -out C:\certs\identity\rui.pfx

openssl pkcs12 -export -in C:\certs\vrealizeva\rui.crt -inkey C:\certs\vrealizeva\rui.key -certfile c:\certs\Root64.cer -


name “rui” -passout pass:Vmware1! -out C:\certs\vrealizeva\rui.pfx

openssl pkcs12 -in C:\certs\identity\rui.pfx -inkey C:\certs\identity\rui.key -out C:\certs\identity\rui.pem –nodes

openssl pkcs12 -in C:\certs\vrealizeva\rui.pfx -inkey C:\certs\vrealizeva\rui.key -out C:\certs\vrealizeva\rui.pem –nodes


Figure 4-11 Certificate replacement

c.5.5. Upload the certificates

Finally, we can upload the certificates to the Identity and Automation Appliance. You can
do that by following these steps:

1. Log into your appliance ( identity app>:5480).


2. Click on the SSO tab and then on SSL.
3. Choose Import PEM encoded certificate in the Choose Option dropdown list.
4. Open the rui.key-file, copy the content to clipboard and paste to the file RSA
Private Key (see Fig. 4-11).
5. Open the rui.pem-file, copy the content to clipboard and paste to the field
Certificate.
6. Type in your password in the Pass Phrase textbox.
7. Click on the button Replace Certificate.
8. Repeat steps 1-7 for the other appliance.

c.5.1.

c.5.2.

c.5.3.

c.5.4.

c.5.5.

c.5.6. Creating and uploading certificates for the IaaS components

From a conceptual point of view, the steps taken to replace certificates for the IaaS
components resemble those of the appliances:

• Create the CSR.


• Create the certificate by means of the Root-CA.
• Convert the certificate into the PXF format.
• Upload the certificate to the IIS manager.
• Upload the certificate to the Manager Service.

We have already shown how to conduct the first two steps, so we will immediately start
here with the third step. Please also note that if you run IIS and IasS Manager Service on
different nodes, you will have to create two different certificates.

c.5.7. Convert the certificate into the PXF format

To convert the format of the certificate we need OpenSSL once again. Please open the
command prompt, change to the OpenSSL-bin directory and type the following command:

openssl pkcs12 -export -in c:\certs\vcrealizeiaas\rui.crt -inkey c:\certs\ vcrealizeiaas \rui.key -certfile c:\certs\Root64.cer
-name “rui” -passout pass:Vmware1! -out c:\certs\ vcrealizeiaas \rui.pfx

c.5.8. Upload the certificate to the IIS Server

Before we can use the certificate, we first have to register it with Windows. Work through
the following steps:

1. Click on Start > Run and type mmc.exe to open the Microsoft Management
Console.
2. Click File > Add/Remove Snap-in.
3. Within the list of Snap-ins, on the left area of the screen, choose the Certificates
Snap-in and add it to the list of Snap-ins via the Add buttons, click on OK.
4. Choose Computer Account on the next page and continue with Next.
5. Now we have to select Local Computer and can end the wizard by clicking on
Finish and then OK.
6. The Certificate Snap-in now lets us add certificates. Right-click the folder Personal
and choose All Tasks > Import.
7. Upload the certificate in the PXF-format.
Once you have registered your certificates, you can reference them during the installation.
If you want to change the certificate later, you must use the vRealize Automation
Configuration Tool (you will find that in the start menu under Programs > VMware >
vCAC). With the tool opened, choose the appropriate service within the dropdown list
Available certificates. However, when using the certificate, please make sure to set the
option Suppress certificate mismatch.

The certificate for the IaaS Manager Service can be changed accordingly.

c.5.9. Replacing certificates

Even after installation you will have to replace certificates from time to time. This can
happen, e.g. if you performed your initial installation with self-signed certificates and
want to replace them with CA-signed ones or the validity of your certificates expires. In
the following stage, we will show the required steps in order to replace certificates.

Hint: Validity of self-signed certificates


Please be aware that self-signed certificates, created during installation, are only valid for
one year (except the SSO identity appliance which has ten years). Once your certificates
expire, you will not be able to log into your appliance successfully, even if your passwords
are correct (instead you will receive a 401 – Unauthorized is denied error message).
As discussed, it is not sufficient to replace a certificate in isolation. Instead you also have
to consider the trust relationships between the different nodes (see considerations before
changing certificates).

c.5.10. Updating of the Identity Appliance certificate

The following steps have to be taken in order to update the Identity Appliance certificate:

• Replace the certificate of the Identity Appliance.


• Update the vRealize Automation Appliance with the new certificate.
• Update the IaaS servers with the new certificate.

As we have already shown how to change the certificate, at the Identity Appliance, we
will skip these steps and explain how to perform the second and the third step.

c.5.11. Update the vRealize Automation Appliance with the new


certificate
Perform the following steps:

1. Start PuTTY, or any other SSH client, to connect to the vRealize Automation
Appliance.
2. Log in on the appliance using root as username and your password.
3. Execute the following command (replace the value of the url-parameter with the
FQDN of your Identity Appliance):
/usr/sbin/vcac-config import-certificate —alias websso —url https://identity-
hostname.domain.name:7444
4. Restart the vRealize Automation Appliance (this can be done via the web console in
the menu System > Reboot).
5. After the appliance has restarted, make sure to check that the following services are
running (System > Service):

• Authorization
• Authentication
• Eventlog-service
• Shell-ui-app
• Branding-service
• Plugin-Service

Be aware that you might have to wait up to ten minutes for the appliance to be
rebooted and have all services running.

c.5.12. Update the IaaS servers with the new certificate

In the final stage, we must perform an update of the IaaS components. It is sufficient to
update the Model Manager – vRealize will update the other components in the
background.

To start the procedure, open command prompt with administrator privileges and change to
the following directory:
C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe

Now we can download the certificates from the Identity Appliance and place them in the
local certificate trust store of Windows:

• vcac-config.exe DownloadRootCertificates --RootCertPath "C:\Program Files (x86)\VMware\vCAC\Server\Website\SSO root.cer" --SignCertPath "C:\Program Files (x86)\VMware\vCAC\Server\Website\SSO signing.cer" -v
• vcac-config.exe DownloadRootCertificates --RootCertPath "C:\Program Files (x86)\VMware\vCAC\Web API\SSO root.cer" --SignCertPath "C:\Program Files (x86)\VMware\vCAC\Web API\SSO signing.cer" -v

Now we only need to restart the IIS via the iisreset command and we are finished.

c.5.13. Updating of the vRealize Automation Appliance certificate

The following steps have to be followed in order to update the vRealize Automation
Appliance certificate:

• Replace the certificate at the Automation Appliance


• Update the IaaS servers
• Update the SSO settings on the Automation Appliance

The first and the third step should already be well understood, so we will only concentrate
on updating the IaaS server here. Open a command prompt with administrator privileges
and change to the following directory:

C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe

Here you can execute the following command (please add the FQDN of your database
server):

vcac-config.exe UpdateServerCertificates -d vcac_database -s sql_database_server -v


Do not forget to reset your web server via the iisreset command.

c.5.14. Updating the IaaS certificate

Just as with the other nodes, changing certificates at the IaaS level involves a couple of
different steps. Firstly, the certificate must be replaced in IIS. Secondly, you have to
configure your vRealize Automation Appliance in order to ensure that the trust
relationship between the appliance and the IaaS server is fully working.

c.5.15. Replacement at the IIS server

Work through the following points:

1. Log in to the IaaS Windows host and open the IIS Manager console.
2. Navigate to your server in the Connections pane on the left-hand side and choose
Server Certificates on the right-hand side.
3. Choose Import (in the Actions menu), browse to your PFX certificate and type your
password. Click on OK.
4. Select your Default Web Site and click on Bindings in the Actions pane.
5. A dialog opens. Click on https and then Edit.
6. Now you can select your new certificate. Click on OK to close the dialog.
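As a quick sanity check, you can list the HTTPS bindings from an elevated command prompt; the certificate hash shown for port 443 should match the thumbprint of your newly imported certificate:

netsh http show sslcert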

c.5.16. Registration of the new certificate with the vRealize Automation


Appliance

The final step is to restore the trust relationship between the IaaS server and the vRealize
Automation Appliance. Open a command prompt with administrator privileges and
change to the following directory:

C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe

Please run the following commands – however, do not forget to change the endpoints
according to your environment:

vcac-config.exe RegisterEndpoint --EndpointAddress https://<iaas-server.domain.name>/vcac --Endpoint ui -v

vcac-config.exe RegisterEndpoint --EndpointAddress https://<iaas-server.domain.name>/Repository --Endpoint repo -v

vcac-config.exe RegisterEndpoint --EndpointAddress https://<iaas-server.domain.name>/WAPI --Endpoint wapi -v

vcac-config.exe RegisterEndpoint --EndpointAddress https://<iaas-server.domain.name>/api/status --Endpoint status -v

c.5.17. Using the vRealize Certificate Generation tool

Recently, VMware published a certificate generation tool to assist you with creating
signed certificates. Knowledge base article 210781[3] provides an overview of the
generation tool as well as how to access the tool.


c.6 Troubleshooting installation issues

As in every software product, things can go wrong in vRealize Automation as well.
While we cannot describe all the problems that can occur in a vRealize Automation
environment, we want to give at least some basic guidance on troubleshooting.
First of all, if anything goes wrong, please ensure that all components are running
compatible versions. If different versions work together, erratic behavior can
occur or modules can break entirely.

c.6.1. Problem with the login process

As described, the SSO component is used for authentication. If any problems occur during
the login process, first check if SSO works in general by trying to login to other VMware
services, such as vCenter Server. Also check the following issues:

• DNS: All components should have a fully qualified domain name and be registered
with the DNS server.
• Time synchronization: Make sure that the clocks of all your components are
synchronized.
• Certificates: Please ensure that all certificates are correctly installed and valid. If
you have problems on a Windows server, check the IIS website and its binding.
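The first two points can be verified quickly from the console of the affected machines. The following is a minimal sketch for the Linux-based appliances; the host name and IP address are placeholders for your environment, and the ntpq command assumes the NTP daemon is used:

# Verify forward and reverse name resolution
nslookup vra-appliance.domain.name
nslookup 192.168.10.50

# Check whether NTP peers are reachable and synchronized
ntpq -p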
c.6.2. Problems with the vCAC website portal
As described above, many parts of the self-service portal are actually served by the
Windows IaaS servers, for example the whole content of the Infrastructure menu and the
request screen of the service catalog. If you encounter a 404 error message within these
pages, please check whether the website components are all working correctly and take a
look at the log files. The locations of the log files can be found in VMware knowledge
base article 2074803[4].

c.6.3. Verify the services

If any component fails, please also check whether the appropriate services have started and
are registered with the vRealize Automation appliance. You can check the services by
logging in to the web console of the vRealize Automation appliance (https://<vrealize-
automation-app.domain.name>:5480).
If there is a problem with a service, the best idea is usually to restart the services.
However, you cannot restart services individually; you can only restart all services at
once by executing the following command from the command line:

service vcac-server restart

Please note that the restart of the services takes some time – it can take up to 15 minutes.
All the vRealize Automation appliance services are hosted within a Tomcat instance, so it
is possible to trace the boot process by viewing the messages log file with the following
command:

tail -f /var/log/messages

If you want to restart the Windows services, restart either the appropriate service or
IIS.
You can also see additional information about the status of the services by browsing to
the following URL:

https://<vrealize-automation-appliance.domain.name>/component-registry/services/status/current

When viewing the result, please search for error or warning messages – they contain
more information about the problem.

c.7 Summary
This chapter described how to perform the installation of vRealize Automation. There are
different deployment scenarios, based on the size of your environment. Installing vRealize
Automation in medium or large environments involves many more steps, due to the
distributed setup. Besides the initial installation and configuration, you must also give
consideration to certificates. A common task is to replace some or all of the certificates in
an environment. To make your life easier, we have shown in detail how to conduct all
these steps.
5. Design and Configuration of vRealize Automation

We described how to perform the installation in the last chapter. In this chapter we will
continue with the configuration of vRealize Automation. At the beginning of the last
chapter, we described the basic steps to be taken. In case you have forgotten them, let's
recap what we will do in this chapter (we begin with step 6):
• Configure the admin portal.
• Upload a license.
• Add endpoints.
• Create and configure fabric groups.
• Define business groups.
• Create reservations.
• Configure reservation policies and network profiles (optional).

We will not only show you how to perform these installation steps – we will also discuss
the design aspects that influence the way you install vRealize Automation. At this point
in the process, some important architectural design decisions have to be made, and these
decisions can have quite profound consequences. This is especially true of tenant design
(we will introduce tenants shortly). Once the tenants are created, we can continue to
explain how all the other components within vRealize Automation are configured.

Figure 5-1 Self-service portal login

5.1 Basic vRealize Automation configuration

After the installation has been completed, you are able to log in to the vRealize
Automation web console. The installation automatically creates a first tenant – the default
tenant. The name of the default tenant is vsphere.local and you have to log in as a system
administrator (you assigned the password during installation).

5.1.1. Tenant as an organizational unit

Tenants will be covered in detail further on, but for now, let’s explain what a tenant is:

• A tenant is an organizational unit in your vRealize Automation environment.


• Tenants allow the provisioning of resources (like virtual machines).
• Different tenants can share hardware or have their own dedicated hardware.
• They are connected to a directory service like Active Directory for user
management.

5.1.2. System administrator privileges

When you log in for the first time, you will do so as a system administrator – however
there are many other roles in vRealize Automation and we will cover each of them. A
system administrator has the following privileges and responsibilities:

• He is responsible for defining global settings, e.g. branding, configuring the inbound
and outbound email servers, or checking the system log files.
• He can create additional tenants.

To log in to the portal, please open the URL https://<vrealize-automation-
appliance.domain.name>/vcac. You will be redirected to the vRealize Identity Appliance,
where you have to provide your credentials. Once you have been authenticated
successfully, you are redirected back to the vRealize Appliance (see Fig. 5-1).

Hint: Browser Compatibility


When working with the self-service portal, please note that not all browsers are supported.
While using some of the unsupported browsers may only result in performance issues with
the user interface, others could result in configuration errors. According to the
compatibility matrix, the following browsers are supported with vRealize 6.2:

• Internet Explorer 8-11


• Chrome 38 or 39
• Firefox 32 or 33
5.1.2.1. Configuring email settings

vRealize Automation can send email notifications as well as receive emails from users
during runtime. In order to configure the email settings, go to Administration > Email
Servers and click on the [Add] button in the header line of the main table. Once a modal
dialog has opened, you can specify your email settings. Please click on Test Connection
before you save your configuration.
5.1.2.2. Branding of the homepage
For many companies it is quite important to adjust the branding of the self-service portal
in order to better reflect corporate identity within vRealize Automation. However, the
options available are quite limited. You can change the following items within the user
interface:

• Header logo
• Company name
• Product name
• Background color
• Text color
• Copyright notice
• Privacy policy link
• Contact link

You can find these configuration settings within the Administration > Branding menu.

Figure 5-2 Tenant configuration

5.1.3. Creating and configuring a tenant

As discussed earlier, there is already a default tenant created ‘out-of-the-box’ during


installation. Later on, we will walk through the creation of additional tenants. However,
depending on your environment's design, working with the default tenant might also be
possible (we will discuss the design considerations later in this chapter). To configure the
default tenant, go to Administration > Tenants, select the vsphere.local tenant and click on
Edit (in the header of the Tenants pane; see Fig. 5-2). Configuring the tenant involves
three different steps: Firstly, there are some general settings (like the name). Secondly,
you must connect your tenant to one or more identity stores. Finally, you choose a tenant
administrator and an infrastructure administrator (the roles are described in detail at the
end of this chapter). So let's discuss these steps in detail:
5.2 General settings

If you edit the default tenant, you cannot change the settings on the General tab. However,
with any other tenant, you can modify the description and the contact email. The URL for
the default tenant is always https://<vrealize-automation-appliance.domain.name>/vcac. If
you create an additional tenant, there is a naming convention for the URL: vRealize
Automation uses the tenant name as a suffix. If your tenant name is sc, the URL for the
self-service portal will be https://<vrealize-automation-
appliance.domain.name>/vcac/org/sc. If you click on Next, you will be able to configure
your Identity Stores.
your Identity Stores.

Figure 5-3 Identity store configuration

5.2.1. Identity stores

The next tab lets you configure the identity stores. Before you can assign any user
permissions in vRealize Automation, you must configure an authentication source. As you
know, vRealize Automation is already connected to the vSphere SSO, or to the vRealize
Identity Appliance, so if you add an identity store to a tenant, it will basically be added to
the underlying SSO component. In that respect, it is worth mentioning that this dialog
essentially serves as another graphical frontend for the SSO component; however, it is
only used in vRealize Automation. There are different directory services available for
selection:

• OpenLDAP.
• Active Directory.
• Active Directory Native mode (more secure, but only available to the default
tenant).
At this point we can discuss the configuration in detail (see Fig. 5-3):

1. Assign a name in the Name textbox for your connection.


2. Choose your directory service in the Type dropdown list.
3. Specify the address of your directory service in the URL textbox. The address must
be in the format ldap://<ldap server>:port, for example ldap://vdc-dc-01.sc.lan:389.
4. The field Domain alias is optional. However, it is worth configuring, especially if
you have a long domain name. Later on, the domain alias can be used to log in to the
self service portal. For example, if you define a domain alias with value vdc, users
will type username@vdc instead of username@vdc.lab within their credentials.
5. The textbox Login user DN needs a user with permissions to read from the
authentication source. It is important to note the structure of this value, which must
adhere to the LDAP format (for example CN=Administrator, CN=Users, DC=SC,
DC=LAN). Please also don’t forget to provide your password in the Password field.
6. The textbox Group search base DN lets you specify from which point vRealize
Automation will search for groups within your directory service hierarchy. The input
must be in LDAP format as well.
7. Like the textbox above, the User search base DN field can be used to limit the
directory search. This is, however, an optional field.
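Before entering these values, it can save time to verify the bind user and the search base outside of vRealize Automation. The following sketch uses the ldapsearch tool from a Linux host and reuses the example values from above; server, DNs and password are placeholders for your environment:

# Search for groups below the given search base, authenticated as the login user
ldapsearch -x -H ldap://vdc-dc-01.sc.lan:389 -D "CN=Administrator,CN=Users,DC=SC,DC=LAN" -w 'yourpassword' -b "CN=Users,DC=SC,DC=LAN" "(objectClass=group)" cn

If the command returns your groups, the same values should also work in the identity store dialog.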

Hint: Find out Distinguished Names (DN)


If you do not know an exact ‘Distinguished Name’ and you are using Active Directory,
there are some tools to help. One of these is Active Directory Explorer (AD Explorer)
from Microsoft's Sysinternals tools. AD Explorer allows you to browse an Active
Directory and see the DN of each node in the AD tree.

Hint: Troubleshooting adding identity stores


From time to time, adding an identity store is not successful (especially if you are not
sure whether your input has the correct value or format). Unfortunately, the error messages
provided by vRealize Automation are not always clear. In that case, you should connect
via SSH to the vRealize Automation Appliance and read the whole error message in the
file /var/log/messages. If your configuration fails, it is sometimes easier to delete the
whole tenant and create a new one. Sometimes this also involves logging in to the SSO
component and deleting the tenant information from there as well.
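To narrow down the relevant entries in /var/log/messages, a simple filter often helps (just a sketch; adjust the search terms to your situation):

grep -iE 'ldap|identity|tenant' /var/log/messages | tail -n 50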

Hint: Common errors of the SSO appliance


Even if you configure everything correctly, there might be some other issue within your
environment that stops you from adding a tenant. Therefore, we want to point out some
of the more frequent problems encountered during configuration. Firstly, always limit the
search base for your users and groups. In big environments with thousands or even tens
of thousands of users, you could otherwise slow down authentication or even run into
timeout problems. If your users are members of too many groups, authentication
sometimes does not succeed: the group memberships have to be part of the authentication
request in the background, and if the request grows too big due to too many memberships,
it is sometimes no longer valid. In that case, try a username without any group
memberships. Also pay attention to special characters within your password or username
– these can be an issue from time to time.
Last but not least, click on Test Connection to check everything is working, then continue
with Update (or Add if you add a new identity store).
5.2.1.1. Administrators

The last step involves setting up administrator privileges, for the tenant and its resources.
There are two different roles to be assigned:

• A tenant administrator is responsible for tasks within a tenant, such as assigning user
permissions, managing the service catalog or configuring approvals.
• IaaS administrators connect vRealize Automation to the environment in order to
provide resources for provisioning.

Choose at least one user or one group for each of these roles and click on Update or Add
to finish the configuration.

Hint: Changing permissions


Please note that if you change permissions, the affected users always have to log out and
log back in, in order for the changes to take effect.

Hint: Assigning permissions


vRealize Automation has a very fine-grained role concept, with ten different roles. This
can be useful if you have a distributed environment with many different administrators
and want to limit what they can see and do in the self-service portal. On the other hand, if
you only have a limited number of administrators who perform all kinds of tasks, this
approach is rather confusing, and we suggest you create a special vRealize Automation
admin group and assign all roles to that group.

5.2.2. Uploading a license


If you remember, we already uploaded a license code when configuring the vRealize
Automation Appliance. However, the IaaS components must be licensed as well. That’s
reason enough to recap the most important license rules:

• Depending on the kind of license, you will have different features available in
vRealize Automation. There are different editions (Standard, Advanced and
Enterprise), and the feature set depends on the edition licensed.
• vRealize Automation can be licensed as part of the vCloud Suite or as a standalone
product. If you are using the vCloud Suite license, you can only provision to
vSphere. If you have the standalone license, you can deploy to any hypervisor or to
different cloud providers.

• Licenses can be purchased in bundles – usually per 25 operating system instances
(OSIs). There are two different kinds of OSI bundles: server OSIs and desktop OSIs.
Desktop OSIs are cheaper, but only allow the provisioning of desktop operating
systems like Windows 7 or Windows 8 (not Windows Server).

Figure 5-4 Upload license to the IaaS server

The process of uploading a license for the IaaS components is pretty easy: Go to the
Infrastructure > Administration > Licensing page, where you will find an Add License
button on the right-hand side of the main pane (see Fig. 5-4). When you click on the
button, a modal dialog will open and there you can enter the license key. Once finished,
save your data by clicking OK.

5.2.3. Adding endpoints

In order to provision resources, VMware vRealize Automation must be connected to the


external environment – so an endpoint is needed. An endpoint basically represents
connection information for a hypervisor, a cloud provider or a physical machine.
Endpoints are always created and maintained by IaaS administrators, so make sure you are
logged in with that role (of course you also need permissions for the underlying resource –
e.g. vSphere).
vRealize Automation not only uses endpoints for provisioning resources, it also needs
them to gather information about its environment. For example, if you want to provision
virtual machines by cloning a template, vRealize Automation must be able to load
information about existing templates into its own database and update it from time to
time.
There are several different endpoints available. The most important ones are as follows:

• vSphere (vCenter Server)


• Hyper-V
• Microsoft System Center Virtual Machine Manager (SCVMM)
• KVM (Red Hat Virtualization Server)
• XenServer
• vCloud Director
• vCloud Air
• Amazon AWS
• Red Hat Open Stack
• Dell iDrac
• HP ILO
• Cisco UCS
• VMware Orchestrator
• NetApp Flexclone

Figure 5-5 Compute resource and endpoint relationship

Once you have finished the setup of an endpoint, vRealize Automation can use the
underlying compute resources. There is a 1:n relationship between an endpoint and its
compute resources (which is depicted in Fig. 5-5). An endpoint represents the connection
information to connect to certain compute resources. There are different kinds of compute
resources – they can be vSphere Clusters, Hyper-V hosts, virtual datacenters or even
Amazon AWS regions.
While endpoints are needed, vRealize Automation does not use them directly to
communicate with the underlying systems. Instead, DEMs and/or agents are used. Please
note that you must install and configure an agent for some hypervisors and for others you
can skip this step (this depends on what you want to connect to).
Endpoints can be created and configured by navigating to the Infrastructure > Endpoints
> Endpoints page. In the main pane, you can create a new endpoint by clicking on the
New Endpoint button, or you can hover over an existing endpoint and click Edit.
5.2.3.1. Creating a vSphere Endpoint

As most machines provisioned with vRealize Automation will run in a vSphere


environment, you certainly want to add a vSphere Endpoint right at the beginning.
Configuring a vSphere Endpoint involves the following steps:

• Check the vSphere Agent status.


• Create the credentials for connecting to vSphere.
• Create the Endpoint.
Checking the vSphere agent status

The vSphere agent is usually installed by default when performing a full installation of
vRealize Automation. This means there is usually nothing to do other than to check if the
vSphere Agent is running as a Windows service. However, while installing you also
specified the endpoint name the vSphere Agent is connected to (there is always a one-to-
one relationship between a vSphere agent and a vSphere endpoint). So when you create
the endpoint, please make sure to use the same name as specified during the IaaS
installation.
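A quick way to perform this check is an elevated PowerShell prompt on the IaaS host. This is only a sketch; the exact display name of the service depends on your version and on the agent name chosen during installation:

# List all vCloud Automation Center related services and their states
Get-Service | Where-Object { $_.DisplayName -like "*vCloud Automation Center*" }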

Creating the credentials for connecting to vSphere

The next prerequisite for creating an endpoint is to define the credentials used. Credentials
can be reused, which makes perfect sense if you have a user account with privileges on
more than one vCenter server.
Configuring credentials in vRealize Automation is pretty easy (they consist of only a
username/password combination and a description), just navigate to the Infrastructure >
Endpoints > Credentials page and click on the New Credentials button (Fig 5-6 depicts the
credentials table). With the modal dialog opened, type your credential data. Please
consider the format of the username: For vSphere, a username must be in the format
domain\username. Once you have entered all information, save the data by clicking on the
Save button.

Figure 5-6 Manage vCenter credentials

Hint: Endpoints
This chapter focuses mainly on vSphere Endpoints. However, as there are many kinds of
endpoint, we want to briefly describe them:

• A vCloud Director endpoint is used to connect to vCloud Director. vCloud Director


uses vSphere to provision resources. Therefore, take care not to duplicate resources
whilst importing, i.e. once using the vSphere endpoint and then again using the
vCloud Director endpoint.
• A vCloud Air endpoint is used to provision resources in a vCloud Air datacenter (as
opposed to an on-premises location).
• A Microsoft SCVMM endpoint allows communication with SCVMM for
provisioning. However, there are some smaller issues to consider: The DEM worker
must be installed on the same machine as the Microsoft SCVMM console.
Furthermore, the DEM worker needs access to Microsoft PowerShell, with the
PowerShell execution policy set to RemoteSigned or Unrestricted (see the sketch
after this list). If you want to communicate directly with a Hyper-V host, you also
have to install a Hyper-V proxy agent.
• To provision an Amazon Web Services (AWS) EC2 instance, you would need an
AWS endpoint. You have to configure your credentials with the Amazon access key
and secret key. Furthermore, you must specify the region you want to provision
resources to.
• A Red Hat Open Stack endpoint can be used to provision resources to an Open
Stack-based cloud.
• The Red Hat Enterprise Virtualization KVM endpoint allows for provisioning to
KVM. KVM must be located in the same Active Directory domain as vRealize
Automation.
• You can also communicate with a XenServer environment; however, an agent is
required as well.
• Orchestrator endpoints permit adapting the lifecycle of your provisioned resources.
• Physical endpoints are required to manage physical machines within vRealize
Automation. Currently, only Dell iDrac, HP iLO and Cisco UCS Manager are
supported.
• Storage endpoints can be used to directly communicate with storage arrays and take
advantage of their special features. For example, having a NetApp with
preconfigured endpoints can enhance your provisioning by making use of NetApp
Flexclone technology.
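For the SCVMM endpoint mentioned above, the PowerShell execution policy on the DEM worker host can be checked and adjusted as follows (a sketch; run it from an elevated PowerShell prompt):

# Show the current policy, then allow locally created and remotely signed scripts
Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned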

Figure 5-7 Endpoint configuration

5.2.3.2. Creating the endpoint


vSphere endpoint

The last stage is to create the endpoint. Navigate to Infrastructure > Endpoints >
Endpoints. On the right hand side, in the endpoint table header, there is a button. Click on
New Endpoint > Virtual > vSphere (vCenter). A configuration dialog opens (see Fig. 5-7).
Perform the following steps for the endpoint configuration:

1. Type the name of the endpoint, within the Name textbox, as it was specified during
the installation.
2. Give a description of the Endpoint (optional).
3. To communicate with vSphere, the vSphere web service API is used. You have to
specify the URL of the vSphere API accordingly (the format is https://vcenter-
server/sdk) in the Address field.
4. If you are using NSX in your environment, you can check the Specify manager for
network and security platform checkbox and enter the URL of the server (format:
https://nsx-server).
5. Click OK to finish.
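Before saving the endpoint, it can be useful to confirm that the vSphere API is reachable from the IaaS server. A minimal PowerShell sketch (the host name is a placeholder; Test-NetConnection requires PowerShell 4 or later):

# Check basic TCP connectivity to the vSphere web service API
Test-NetConnection vcenter-server.domain.name -Port 443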
Hint: Check if the endpoints are working
It usually takes some time for the configuration to take effect (up to five minutes). After
that period, vRealize Automation should have found the compute resources behind the
endpoint. To check whether something has gone wrong, look at the log entries. These can
be found under Infrastructure > Monitoring > Logs.
If you are adding resources (or changing some configurations) and want vRealize
Automation to be aware of these changes, you must synchronize vRealize Automation
with vSphere (also referred to as ‘data collection’). While synchronization happens
automatically once a day, you can also trigger it manually. Navigate to Infrastructure >
Compute Resources > Compute Resources and hover over the resource you want to
synchronize – there is a menu item to start the Data Collection.
Once the first data collection has completed successfully, the compute resources
will have been added to vRealize Automation. All compute resources together are
called the fabric; every compute resource added by configuring an endpoint becomes
part of the fabric.
5.2.4. Background: Data collection

Data collection is the process of synchronizing the environment with the vRealize
Automation database. Data collections take place at fixed intervals and there are different
kinds of data collection:

• The Infrastructure Source Endpoint Data Collection regularly (once a day) loads
information regarding host and virtual machines templates into vRealize
Automation. When using Amazon AWS, additional information regarding regions
and available virtual machines is retrieved. With physical machines, the available
memory and the CPU will be loaded into the database.
• The Inventory Data Collection analyzes the machines. This involves checking the
networking properties and memory as well. Information relating to machines not
provisioned and managed by vRealize Automation is also retrieved. By default, the
Inventory Data Collection occurs daily.
• The State Data Collection checks the state of single machines and verifies if the
machines are still available. This workflow runs every 15 minutes.
• The Performance Data Collection loads performance data into the vRealize
Automation database (this happens once every day).
• The vCloud Networking and Security Inventory Data Collection detects new objects
in vCNS.
• The WMI data collection can retrieve information referring to Windows-machines in
the environment.

5.2.5. AWS Endpoint

Creating an AWS connection differs a little from the creation of other endpoints, so we
will illustrate how to configure such an endpoint separately. Before being able to
configure it, however, you must first have your AWS access key and secret key.

Hint: Retrieving your Amazon access key and secret key


If you do not have your access key and secret key, you must log in into the AWS
Management Console and perform the following steps:

1. Open the IAM console.


2. Click Users from the navigation menu.
3. Select your IAM user name.
4. Click User Actions and then on Manage Access Keys.
5. Click Create Access Key.
6. Expand Show User Security Credentials and copy both keys.
7. Download credentials (optional) (see Fig. 5-9).

The first step is to create the Amazon AWS credentials (see Fig. 5-8). This can be done
by following the steps outlined here:

1. Change to the page Infrastructure > Endpoints > Credentials.


2. Click on the button New Credentials.
3. Type in a name and a description for the credentials.
4. Paste the access key to the User name field and the Secret Key in the password field.
5. Save your credentials.

Now we can create the AWS endpoint:

1. Navigate to the page Infrastructure > Endpoints > Endpoints.


2. Click on New Endpoint and choose Cloud > Amazon EC2.
3. Specify a name and a description.
4. Specify the credentials to be used for the endpoint.
5. If your vRealize Automation Appliance does not have direct Internet access, you
can specify a proxy.
6. Click on OK to save the endpoint.

After a short while, the data collection will take place and you will have your Amazon
regions as compute resources available in vRealize Automation.
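If you want to validate the key pair independently of vRealize Automation, the AWS command line interface offers a quick check (a sketch, assuming the AWS CLI is installed; the key values shown are placeholders):

# Export the credentials for the current shell session
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=examplesecretkey

# List all regions visible to these credentials
aws ec2 describe-regions --output table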

Figure 5-8 Manage AWS credentials

Figure 5-9 Download AWS credentials

5.2.6. Creating and configuring fabric groups

The next big step in configuring vRealize Automation is the creation and configuration of
fabric groups. Fabric groups are used to group your compute resources into different
manageable entities in order to be able to configure these groups separately. There are
several reasons to put your resources in different fabric groups – for example, if you want
to isolate your hardware resources from each other, so that different departments cannot
share them.
Each fabric group needs a fabric administrator, who is responsible for configuring the
resources of the fabric group and assigning them to different user groups. The relationship
between the fabric, fabric groups and endpoints is depicted in Fig. 5-10. The IaaS
administrator (the role was assigned when creating a tenant) is responsible for the fabric.
Endpoints are used to add compute resources to the fabric. Compute resources can be
grouped into fabric groups and are maintained by fabric administrators.
Figure 5-10 Fabric, fabric groups and endpoints

To create a fabric group, navigate to the page Infrastructure > Groups > Fabric Group
and click New Fabric Group on the right-hand side of the screen. Provide the following
information:

• Name of the fabric group


• Description
• Fabric administrator(s)
• Compute resources

Once you have filled in the relevant fields, click OK to save your fabric group (see Fig. 5-
11)

Figure 5-11 Fabric group configuration

5.2.6.1. Creating a Machine Prefix

When vRealize Automation provisions machines on behalf of users, it needs to assign a


machine name. While it is theoretically possible for a user to specify a hostname
manually, in most environments this process should be automated. vRealize Automation
provides a simple mechanism that creates hostnames automatically – a machine prefix. A
machine prefix consists of a name and a number that will be appended to that name. Every
time a new machine is created, the number will be incremented. For example, if you create
a machine prefix of ‘vdc’ plus three digits, your hostnames would be vdc001, vdc002,
vdc003 and so on. It is also not required that your machines start with number “1”, higher
numbers are also possible. However, there are some restrictions for the machine prefix
string:

• They have to adhere to DNS naming conventions, i.e. they can only contain ASCII
characters from a-z, A-Z and digits between 0-9. Any other special characters are
not permitted.
• If a machine prefix is used for provisioning Windows machines, there is a maximum
length of 15 characters in total.

Machine prefixes can be created by navigating to the Infrastructure > Blueprints >
Machine Prefixes page and clicking on the New Machine Prefix button. Then fill out the
different textboxes as described.
In many cases, using the built-in machine prefixes will be sufficient; however, from time
to time there is a need for more sophisticated naming conventions. This can be achieved by
using vRealize Orchestrator.
5.2.7. Defining business groups

Remember, we created fabric groups in order to create isolated entities for managing
hardware resources. Analogous to grouping hardware resources, we also want to group
users of the self-service portal into business groups for ease of management. In daily life,
business groups are usually mapped to organizational units like teams (for example
financial teams, DevOps teams and so on) or departments.
Once created, business groups must be mapped to fabric groups. That way they will have
the permissions to provision further resources (the mapping is done via reservations,
which will be discussed later in great detail).
Creating a business group also allows you to specify user roles within the organizational
unit. vRealize Automation is aware of the following roles within a business group:

• There must be at least one business group manager. This role can approve the
provisioning of machines. Furthermore, it is this user who checks how much of the
available resources are currently in use. He also controls what kind of resources
can be deployed. Of course, the business group manager can also provision
resources himself – or on behalf of other users.
• Support users are able to provision and manage machines for themselves or on
behalf of other users (this is especially useful if normal users only have the right to
work with existing machines and are not allowed to provision resources themselves.
Support users can also help with machines whose owner is currently on holiday).
• In order to create and manage a machine within a business group, you must have
membership of a user group in that business group. Of course, users can also be
members of many different business groups.
Figure 5-12 Creation of a business group

With that background knowledge, it should be easy to create a business group. Please
navigate to the Infrastructure > Groups > Business Groups page and click on the New
Business Group button. Afterwards we can fill out the dialog fields as described (see Fig.
5-12):

• If you already have an existing business group, you can use the Copy from existing
group dropdown list.
• Specify a Name for the business group.
• Optionally you can type in a Description.
• Choose the Default Machine prefix for the Business Group. If you have not created
one yet, you also have the possibility to configure a new machine prefix.
• Provisioned Windows machines can be placed in an Active directory container. You
have to specify the Distinguished Name (DN) of the container to use this feature.
• Specify who will become a member of the Group Manager role in the appropriate
text box.
• You can send notifications regarding provisioning, if you fill out the Send manager
emails textbox. Please note, you must configure an outbound email server first (as
described in Chapter 4).
• Specify which users will become members of the Support Roles.
• Configure all normal members in the User Roles text field.
• For most of the cases, you can keep the Custom Properties empty. We will talk later
about custom properties in detail.
• Click OK to save your changes.
Figure 5-13 Business groups, reservations

Once you have created the first business group, the business group manager can see
resource consumption on the business group main page. This page displays information
regarding the total number of machines, quotas, allocated memory and storage (in % and
in total GB).

5.2.8. Creating reservations

Now we have created our fabric groups (groups defining which resources can be used for
provisioning) and business groups (groups defining who is able to request any new
resources). However, if there are multiple fabric groups and business groups, we still have
the question which fabric group(s) should be used by a given business group. vRealize
Automation uses reservations to link these groups to each other. Fig. 5-13 depicts the
relationship between these entities. You can see that there are three different business
groups available (A, B and C). While group B can access all underlying fabric groups via
three reservations, both business group A and C are only linked to two fabric groups.

Hint: vRealize Automation reservations


Many of you are already familiar with the concept of reservations from vSphere.
However, the reservations in vSphere only share a name with the reservations in vRealize
Automation, in reality they are completely different.

In order to be able to create reservations, you must first be a holder of the fabric
administrator role. In addition, it is important to understand that there are different kinds
of reservations:

• A virtual reservation allocates parts of virtual resources. For example, when


referencing a vSphere cluster, you will specify which share of the cluster (i.e. how
many vCPUs, the size of the memory, which datastores or what networks are to be
used) will be assigned to a business group. Consequently, there can be more than
one reservation for a virtual compute resource.

• A physical reservation always allocates the whole machine. It is not possible to have
multiple reservations on the very same hardware. If another reservation is needed,
you have to delete the existing one first.
• You can also have reservations on a cloud environment. To configure a cloud
reservation for AWS Amazon, you need to configure an AWS endpoint first. For
vCloud Director you need a vCloud Director endpoint respectively.
Figure 5-14 Reservation information

Reservations can be configured by navigating to the Infrastructure > Reservations >


Reservations page. Click on New Reservation > Virtual > vSphere to create a vSphere
reservation. In this case, the configuration takes more time – there are four different tabs
for the reservations’ settings.
The first tab configures the basic settings (see Fig. 5-14):

• First, choose which compute resource should be used.


• After having chosen a compute resource, vRealize Automation will automatically
assign a name for the reservation, which can be overridden by the administrator.
• Specify the Tenant of the reservation.
• Choose the Business group.
• Select a reservation policy (we will talk more about reservation policies later on, so
for now you can leave it blank).
• Machine quotas allow you to configure the total number of machines that can be
provisioned via this reservation. If the limit is reached, no further provisioning is
possible. By default, there is no quota, so essentially an unlimited number of
machines can be created.

• If there is more than one reservation for a business group, the priority value helps
determine which one to use for provisioning. By default, vRealize Automation
selects the reservation with the highest priority. If that reservation has run out of
capacity, the reservation with the next highest priority will be used. If several
reservations have the same priority, vRealize Automation will use a round-robin
algorithm to balance the workload among them.
Figure 5-15 Reservation resource allocation

Next you can move on by clicking the Resources tab (see Fig. 5-15). Here you can specify
which share of the compute resource will be used by the reservation:

• The upper memory table lets you define the memory share (you also see how much
memory is available).
• The storage table helps you to reserve storage on a storage path, along with priorities
(if you have multiple storage paths).
• Optionally, you can select a resource pool to be used.

Hint: Visibility of resources


vRealize Automation will display all available compute resources when configuring a
reservation. If you want to restrict visibility, it is a good idea to create a special service
account for vRealize Automation in vSphere. There are plenty of objects in vSphere,
which should not be visible in vRealize Automation – for example it makes no sense to
connect resources to the vMotion network, so consequently it should not be available in
the user interface. If you want to restrict visibility in vSphere, you can also use the No
Access permission to restrict access.

The third register tab is for configuring networks (Fig. 5-16). Please check all the network
paths, which should be available for provisioning. There is a dropdown link for the
network profiles as well – however we will talk about them later.

Figure 5-16 Attached networks

Hint: Network paths


If you select different network paths on your reservation and a blueprint provisions to this
reservation, the default behavior is to round robin between these network paths. If you
want to override this feature, you can configure a network profile (or use custom
properties) to let users decide which network should be used for the provisioning of
machines.

The last screen refers to alerts. By default, capacity alerts are turned off. If you turn them
on, you can specify the thresholds at which alerts should be fired (there are alerts for
storage, memory, CPU and machine quota). Furthermore, don’t forget to add some
recipients (and optionally check whether the group manager should also be notified). The
last item to configure is the reminder frequency in days, if you want to send multiple
notifications.
5.2.8.1. Configuring Reservation Policies (optional)

At this point, we have covered nearly all preliminary work required before we can
configure which machines should be deployable from vRealize Automation.
However, before talking about blueprints, we first need to introduce another powerful
concept of vRealize Automation – reservation policies. In short, reservation policies are
used if your business group has several reservations available, but you need to restrict
which of these reservations should be used for a particular machine. In practice, a
reservation policy can be used in many places. For example, a desktop machine will
probably have different hardware requirements to that of an SQL Server. Consequently,
you could have two different fabric groups – one fabric group called “Bronze FG” with
entry-level performance and another fabric group called “Gold FG” with high-
performance hardware. Now you could deploy desktop machines to the Bronze FG, and
SQL Servers to the Gold FG.

Figure 5-17 Reservation policies

Reservation policies help you to implement such use cases. Technically, reservation
policies simply connect blueprints and reservations (blueprints specify the machines to be
deployed). Fig. 5-17 shows such a mapping: There are two different hardware levels that
can be used for provisioning. Such definition of levels is also called tiering.
There is a 1:n relationship between reservations and reservation policies. Therefore each
reservation can have exactly one reservation policy, but the very same reservation policy
can be mapped to different reservations.
Three steps are necessary to configure reservation policies:

• Create the reservation policy.


• Assign a reservation policy to a blueprint.
• Assign a reservation policy to a reservation.

Go to the Infrastructure > Reservations > Reservation Polices page and click the New
Reservation policy button. Once the dialog has opened, you can specify a name and
description for the policy, then save it with the green Save button.
So far we haven’t talked about creating blueprints – nevertheless, if you already have
one, navigate to Infrastructure > Blueprints > Blueprints. If you edit an existing
blueprint, or create a new one, there is a dropdown list named Reservation policy on the
first configuration page (Blueprint information) where you can specify the policy.
The third step is to assign a reservation policy to a reservation. Once again, this is quite
easy. We have already mentioned that there is a dropdown list on the reservation
configuration page. Once we have created a reservation policy, the dropdown list will
reflect this, with a new policy ready to select.

5.2.9. Storage Reservation Policies

In addition to the normal reservation policy, there is also a second kind of reservation
policy - storage reservation policies. From a conceptual point of view, they are quite
similar to standard reservation policies; however they specifically apply to storage
volumes.
Storage reservation policies can be created on the Infrastructure > Reservations >
Reservation Policies page as well. You also have to assign them to a blueprint (they are
configured on the second build information tab within the storage volume table).
However, the last step is slightly different (they will be assigned to a compute resource
instead of a reservation). Work through the following checklist:

1. Select Infrastructure > Compute Resources > Compute Resources.


2. Hover over a Compute resource and click Edit.
3. Switch to the Configuration tab.
4. Locate the datastore, to which you want to add your storage reservation policy,
within the Storage table.
5. Click the Edit icon.
6. Choose your storage reservation policy from the Storage Reservation drop-down
menu.
7. Click the Save icon.
8. Click OK.

5.2.10. Configuring Network Profiles

When a machine gets provisioned, it will usually be connected to a network. Which
network is used depends on the reservation (if there is more than one network available in
the reservation, vRealize Automation uses a round-robin algorithm).
Another question is how the network settings for a virtual machine are configured (i.e. the
IP address, default gateway, subnet mask and DNS server). By default, vRealize
Automation lets the underlying network provide these values. So if you have a DHCP
server configured in that network, everything is fine. However, from time to time you do
not want to rely on an external network’s settings, but want to configure everything
yourself directly within vRealize Automation. Network profiles help you to do most of
these configuration settings directly in vRealize Automation. There are four different
kinds of Network Profiles:

• External Network Profiles help you to override the external network settings with
your own configuration.
• You define a Private Network if you want to deploy your virtual machines to an
isolated network environment.
• Routed Networks allow a segmentation of an IP address area with its own routing
table.
• NAT Networks consist of a private network, where you deploy your virtual
machines and an external or routed network that is connected to the private network
via a NAT Router.

There is a big difference between external network profiles and the other three. An
external network is an existing physical network, statically configured at the hypervisor
level. The other three network profiles all define virtual networks that are created at the
provisioning stage. These networks cannot be created directly by vRealize Automation,
but rather are built with the help of NSX.
In the following section, we will show you how to create and map an external network
profile to a reservation. The other three network profiles will be discussed in chapter 7.
If you want to create a network profile, be sure to log in as a fabric administrator. Then
you can perform the steps as described:
1. Navigate to the Infrastructure > Reservations > Network Profiles page.
2. Choose New Network Profile > External.
3. Specify a name.
4. Optionally you can assign a description.
5. Type a mask address in the Subnet mask text box.
6. Configure the IP address in the Gateway text box.
7. Specify a DNS/WINS group for the network profile.

If you want vRealize Automation to assign the IP addresses, you can move along to the IP
Ranges tab and create a pool of IP addresses:

1. Click on New Network Range.


2. Assign a Name for the range and optionally a description.
3. Type the Starting IP address.
4. Type the Ending IP address.
5. Save your changes with OK.
6. Save your network profile with OK.
Once you have saved your IP Range, vRealize Automation will populate the Defined IP
Addresses table. Here you will find an overview of all IP addresses and if they are
currently allocated or not.

5.3 The vRealize Automation role concept

After having discussed the most important configuration steps, we now want to shift our
focus to security. We already mentioned that there are different roles in vRealize
Automation. vRealize Automation is security-trimmed on the graphical user interface, i.e.
you only see the configuration items in the menus you are permitted to access. This can
sometimes be quite confusing, especially when you do not know whether you lack the
permission to do something or have simply forgotten where a certain configuration item
can be found in the graphical user interface.
Essentially, vRealize Automation comes with three different categories of roles: system-
wide roles, tenant roles and business group roles.
There are three system-wide roles: the system administrator, the IaaS administrator and
the tenant administrator.
For each tenant, there are also different roles: the tenant administrator, the service
architect and the approval administrator.
When creating a business group, different roles can be defined as well: the business group
manager, support users, users and approvers.

Table 5-1 depicts the roles and their permissions.

5.4 Summary

This chapter showed the basic configuration stages of vRealize Automation. This involved
a couple of steps. Firstly, we showed how to set up global settings. Next, we created a
tenant and showed you how to add compute resources, via endpoints, to the fabric. We
then introduced the concepts of fabric groups and business groups and described how to
relate them via reservations. We also introduced reservation policies – a sophisticated
mechanism to implement rules based on where to provision machines. At the end, we
briefly discussed network profiles, roles and permissions.

System administrator: Installation of vRealize Automation; create and modify tenants; global branding; global email settings; monitoring of vRealize Automation

IaaS administrator: Create endpoints and credentials; manage fabric groups; configure proxy agents; administer cloud service accounts

Fabric administrator: Manage physical machines and compute resources within a fabric group; manage reservations and reservation policies; manage build profiles and machine prefixes; define cost profiles

Tenant administrator: Manage a tenant; user management within the tenant; administration of the service catalog; manage approval policies; manage service catalog permissions; tenant branding; manage global blueprints

Service architect: Manage the Advanced Service Designer and XaaS blueprints

Approval administrator: Manage approval policies

Business group manager: Create and manage blueprints; manage permissions on published blueprints within the service catalog; monitor resource consumption

Support user: Create and manage resources on behalf of other users

User: Create and manage resources

Approver: Approve requests

Table 5-1 Roles overview


6. Blueprints

So far we have shown how to initially configure vRealize Automation; it is now time to
deal with blueprints. Blueprints represent the most important entity within vRealize
Automation – they define how to provision and manage the lifecycle of IaaS resources.
There are different kinds of blueprints in vRealize Automation: physical blueprints,
virtual blueprints, cloud blueprints and multi-machine blueprints. This chapter will
address the first three kinds of blueprints; the multi-machine blueprint will be explained
in chapter 7.

6.1 Introduction to blueprints

In short, a blueprint can be described as the specification of a physical, virtual or cloud
machine, including the respective lifecycle information. This includes all hardware
settings for the machine, how to provision it, which actions on the machine are allowed at
runtime, and how to de-provision the machine. Let’s discuss this in detail:
Providing resources for provisioning

Every blueprint needs available resources in order to provision. We have already
explained the main features and properties associated with a resource in this context,
referring to endpoints and fabric groups. Each blueprint can be global in nature or mapped
to a dedicated business group. However, when a machine gets provisioned, there is an
underlying reservation (that maps to a fabric group) whose resources are used. If a
reservation cannot meet the resources specified in the blueprint, the provisioning will fail.
Provisioning mechanism
Besides determining what kind of resources we require, we also need to know how a
machine is created. vRealize Automation has several ways of provisioning a machine. Of
course, these mechanisms depend on the blueprint type. For example, while cloning is
popular for virtual blueprints, physical machines would instead be installed via a PXE
boot server. Internally, the logic behind these provisioning workflows is stored in
vRealize Automation, within the Model Manager.
Lifecycle

A blueprint defines all aspects of a lifecycle. Besides the question of how to provision an
IaaS resource, there are other aspects like approval, managing at runtime, retiring the
machine or archiving. These stages make up the lifecycle of a machine. Such lifecycles
are depicted in Fig. 6-1. There are different blueprints presented – one for a desktop
machine, another for a production system and a third for a dev/test-system. Please note
that all of these have their own settings regarding security, SLAs and cost profiles or
policies.
Extensibility

The feature set of vRealize Automation is already quite comprehensive. However, as with
any other standard software, probably not all the requirements of a project can be
delivered out of the box. There is a lot of room for adaptations of a machine’s lifecycle.
For example, many companies have their own IP address management tool (or any other
third-party tool) which should be integrated into the provisioning workflows. For such
scenarios, blueprints have a specific extension mechanism. You have probably noticed that
the graphical user interface often allows defining custom properties. These properties can
also be used to customize the lifecycle of machines provisioned by vRealize Automation.
vRealize Orchestrator - which is part of vRealize Automation - also makes use of custom
properties to implement additional functionality. We will be talking more about
extensibility later on.

Figure 6-1 Blueprint overview


Global and local blueprints

Blueprints can be defined globally per tenant or at business group level. Regardless of the
kind of blueprint used, publishing a blueprint requires the relevant permissions. Tenant
administrators are in charge of creating and managing global blueprints. Global blueprints
can be marked as master blueprints in order to use them as templates. Business group
managers can create blueprints as well; however, these blueprints are locally assigned to
that business group.

Machine leases

While it is quite easy for end users to provision their own machines, at some point all the
available resources for provisioning may be exhausted. In addition, machines that are no
longer used keep generating costs. To help avoid such issues, machine leases can be
assigned: when creating a blueprint, it can be defined how long a provisioned machine
may stay deployed. When the lease is over, it has to be extended; otherwise the machine
gets archived or destroyed, and its resources will be released.

Reclamations

Even with machine leases configured, the remaining capacity within your datacenter can
run low, so that no further machines can be created. From an administrator’s point of
view, it is difficult to choose a machine for expiry, because in most cases administrators
have no knowledge regarding machine use (i.e., whether it is still needed). In this case,
vRealize Automation helps to identify underused machines (in terms of CPU, memory,
network or disk consumption). Administrators can then ask the owners of such machines
if they are still required. Based on the owners’ reply, the machine resources can be
released or kept.

Configuration changes at runtime

While blueprints specify the hardware configuration for newly provisioned machines,
changes at runtime are quite common. Such reconfigurations are only possible when the
blueprint allows the machine to be scaled up or down, which is the case when we define
ranges instead of fixed values for the provisioned hardware resources. For example, we
initially assign 4 GB of memory, but allow end users to have up to 8 GB of memory. At
runtime, end users can then use the self-service portal to apply these changes and restart
machines with the new settings.

Machine lifecycle

While there are different blueprints in vRealize Automation, they all follow a master
workflow when provisioning new machines. This master workflow has the following
stages:

Requested: Request for a new machine from the service catalog
Awaiting Approval: Workflow is on hold until approved by an approval manager
Register machine: Machine is registered
Building machine: The actual build is about to start
Machine provisioned: Machine build completed successfully
Machine activated: The requested machine is activated
Install Tools: Hypervisor guest operating system tools are installed
Expired: Machine lease time has expired and the machine is turned off
Deactivate machine: The machine disposal process has started
Unprovision machine: The machine un-provisioning process has started
Disposing: The hypervisor is disposing of the machine
Finalized: Machine has been disposed of and is about to be removed from management

Table 6-1 Machine lifecycle

Depending on the kind of blueprint, the lifecycle can encompass additional stages. For
example, virtual blueprints which are provisioned by cloning can have an additional stage
for applying guest customization specifications to a machine.
Monitoring workflows and viewing logs

As with any software, errors can occur in vRealize Automation, especially during
provisioning. To troubleshoot these errors, vRealize Automation provides several log files,
all of which are viewable from the graphical user interface. These log files serve different
purposes, which are described in the following:

• Infrastructure > Monitoring > Audit Logs: Provides information about the status
of virtual, Amazon and multi-machines. NSX, reclamation and reconfiguration
events are also logged.
• Infrastructure > Monitoring > Distributed Execution Status: View the status of
DEMs and the details of scheduled workflows.
• Infrastructure > Monitoring > Log: Default log information.
• Infrastructure > Monitoring > Workflow History: Status and history of executed
DEMs and other workflows.
• Administration > Logs: Display logs for the vRealize Automation installation
(only for system administrators).
• Requests: Status of current provisioning (for end users).

6.2 Blueprints – basic settings

vRealize Automation supports a wide range of hypervisors, virtual platforms or physical


machines. For all these systems, different kinds of blueprints exist. However, they share a
common range of settings. First, we will describe the basic properties of blueprints and
then delve deeper into this topic.
To locate the blueprint page, we must navigate to Infrastructure > Blueprints >
Blueprints. Blueprints have a lot of settings, so the user interface tries to group the
configuration items into the following tabs: Blueprint Information, Build Information,
Properties and Actions.

Figure 6-2 Blueprint information

Blueprint Information
Fig. 6-2 depicts the Blueprint Information tab. When creating a new blueprint,
please provide the following information:

1. Type in the Name of the new blueprint.


2. Optionally you can assign a description for the blueprint.
3. The checkbox Master (can be copied) defines whether the blueprint can become a
template and hence be copied.
4. If you check the option Display location on request, end users will be able to see at
which ‘datacenter location’ a machine is provisioned to.
5. The checkbox Shared blueprints marks a blueprint as a global blueprint, so that it
is visible in all business groups.
6. If you want to associate a blueprint with a reservation, you should choose a
Reservation policy (we introduced reservation policies in chapter 5).
7. vRealize Automation needs a machine name to provision a machine. We already
discussed machine prefixes in the last chapter (when talking about business groups).
If the business group’s machine prefix suits your needs, you can keep the default
value; otherwise you can override this setting.
8. If you want to have a quota on provisioned machines per user, you can type a value
in the Maximum per user field.
9. Specifying Archive (days) implies that machines are not automatically destroyed
when the lease time expires. They will instead be switched off and remain turned off
until the archive period has expired. If you don’t specify a value, a machine gets
immediately destroyed once the lease time is over.
10. The last field specifies the cost of the machines. The blueprint cost will be added to
the other costs (memory, CPU and storage) based on the cost profile.

Hint: Blueprint
While the description field is optional, it is nevertheless highly recommended to enter
some meaningful information here, as this text will appear in the service catalog. The
format of the description text should be consistent across all blueprints. It should include
information covering the virtual machine’s installed software, such as operating system,
installed tools (like VMware Tools or Puppet) or any other custom software. By providing
a meaningful description, the self-service catalog becomes much more intuitive to end
users.

Hint: “Display location on request”


The blueprint allows us to display the location on request. However, in order to configure this,
there is some additional work to be completed.
Firstly, we must log on to the IaaS component server. With the vRealize website still
running, navigate to the vRealize Automation website directory (usually
%SystemDrive%\Program Files (x86)\VMware\vCAC\Server\WebSite\XmlData). Now
open the file ‘Locations.xml’ in an editor. We must add the following code fragment, for
each of the locations we have:

<CustomDataType>
  <Data Name="Nuernberg" Description="Nuernberg Datacenter" />
</CustomDataType>

After having saved the file, we must restart the Manager Service. Finally we can associate
a location with our Compute Resource. Navigate to Infrastructure > Compute
Resources > Compute Resources, edit our resource (click on Edit) and choose a location
for that object.
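For reference, the Manager Service can be restarted from an elevated command prompt on the IaaS server. The service name below matches typical vRealize Automation 6.x installations, but please verify the exact name in your services console first:

net stop "VMware vCloud Automation Center Service"
net start "VMware vCloud Automation Center Service"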
Build Information

There are a lot of different ways to provision machines, depending on the type of
blueprint. We will discuss this tab later in this chapter.
Properties

The Properties tab (Fig. 6-3) is used to override default settings related to the lifecycle of a
blueprint. As discussed, every machine passes through different stages during its
lifecycle. It begins with the request and approval of a machine. Once a machine is
approved, it will be provisioned and managed by a user. At the end of the lifecycle, it
expires, is archived and finally destroyed. These stages also represent points in
time where additional behavior can be “hooked” in. There are plenty of use cases, for
example:

Figure 6-3 Blueprint properties


• Joining the machine to an Active Directory domain after provisioning.
• Reducing the number of cores per socket, in order to lower licensing costs.
• Invoking a script on the machine, after provisioning, in order to install additional
software.
• Contacting an IPAM tool, in order to obtain an IP address, before provisioning.

Technically, custom properties are simply a collection of key-value pairs.
This is quite a generic approach, but very suitable for basic customizations.
Chapter 11 will address custom properties in detail; however, to gain a certain level of
understanding, let’s look at two examples:

• The hostname custom property overrides the default machine prefix, so that a user
can enter the hostname at the point of request (see Fig. 6-4).
• There is a custom property to define in which vCenter folder a provisioned
virtual machine should be placed.

Figure 6-4 Custom properties definition

When creating custom properties, the following information is required:

• The name of the property.


• The value of the property.
• Optionally, you can define if the value of the custom property should be encrypted.
• The option whether the custom property should be static for all instances of a
blueprint or whether the value should be provided by the user when requesting the
machine (Prompt user).

In most cases, you would like to add not just a single custom property, but a set of
different custom properties to your blueprint (the custom property documentation covers
more than 50 pages). As adding custom properties to blueprints can be quite cumbersome,
vRealize Automation uses build profiles to group custom properties. Once defined, you
only need to add the build profile to your blueprint and all the contained custom properties
will be applied. vRealize Automation already provides a set of build profiles, for example
Fig. 6-5 shows the Remove from Active Directory build profile. This can be used to
clean your Active Directory environment, when decommissioning a machine.

Figure 6-5 Build profile definition

Figure 6-6 Blueprint actions

Actions

Once a machine is provisioned, the user gains control and can manage it. Users can
perform a set of actions on a machine, and this list of possible actions can be configured
on the Actions tab. Fig. 6-6 depicts the set of allowed actions on a machine. It is important
to note that actions are not permissions; they just define the technical changes that are
possible on a machine. Permissions – on the other hand – are assigned when configuring
the service catalog.
Publishing a blueprint

Once a blueprint is created, there is a final step required before we are able to add it to the
service catalog: it must be published. Publishing involves the following steps:

• Navigate to Infrastructure > Blueprints > Blueprints


• Hover over the blueprint to be published and choose Publish.
6.3 Virtual Blueprints
The most common blueprints are virtual blueprints. These blueprints support a variety of
platforms:

• vSphere (vCenter)
• Hyper-V
• Hyper-V (SCVMM)
• KVM (RHEV)
• Xen Server
• Generic

In addition to these platforms, integration into HP Server Automation is possible.


Furthermore, vRealize Automation supports different provisioning workflows for these
blueprints. These are shown in Table 6-2.

6.3.1. Basic workflow

The basic workflow only creates an empty container for the operating system and hence is
the easiest one to configure. Open or create a blueprint and perform the following steps:

1. Change to the Build Information tab.


2. Choose if you want to provision a server or desktop operating system, from the
Blueprint type dropdown list (this setting is only relevant for licensing purposes).
3. Choose Create from the Action dropdown list.
4. Select the BasicVmWorkflow from the Provisioning workflow dropdown list.

Basic (all platforms): Creates a virtual machine container without an operating system. The end user can manually install the OS.

Clone (vSphere, KVM (RHEV), SCVMM): Provisioning via cloning, based on a template. This works for Windows as well as for Linux. The OS can be customized via a customization specification.

Linked Clone (vSphere): Compared to full clones, linked clones conserve disk space and allow multiple virtual machines to use the same software installation.

Flex Clone (vSphere): Disk-conserving cloning of a virtual machine, based on the NetApp FlexClone technology.

Linux Kickstart (all platforms): Provisioning of a Linux virtual machine by booting from an ISO image. A kickstart or autoyast configuration file is needed. A distribution image must be provided as well.

Virtual SCCM (all platforms): Provisioning of a Windows machine via a System Center Configuration Manager (SCCM) task. The machine will be booted by means of an ISO image. After the provisioning, the machine can be further customized with the help of the vRealize guest agent.

WIM Image (all platforms): Provisioning via WinPE and a Windows Imaging File (WIM).

External (all platforms): Used to integrate 3rd-party automation systems like HP Server Automation and BMC BladeLogic into vRealize Automation. vRealize Automation initiates the provisioning and then hands over control to the automation system.

Table 6-2 Provisioning workflows

5. If you are working with Hyper-V (SCVMM) you can optionally assign a Virtual
hard disk, a Hardware profile and an ISO. If you are using the KVM (RHEV)
blueprint, there is an additional ISO field in the basic workflow.
6. The next step is to configure the machine resources regarding CPU, Memory (MB)
and Storage (GB). You always have to provide a minimum value. The values are
fixed if you leave the Maximum boxes empty. Otherwise, users are free to choose a
value between the minimum and maximum.
7. Specify the Lease (days). A blank value means there is no expiration date.
8. Optionally, you can define the Storage Volumes to be mounted.
9. Define the Maximum volumes.
10. Choose the number of network adapters, which could be attached to a virtual
machine.

In addition to this, if you are using vSphere, there is an additional custom property to be
configured, i.e. the operating system must be set (this is a required input when creating a
vSphere virtual machine container object; see the example after this list):

1. Change to the Properties tab and click on the New Property button.
2. Name the custom property VMware.VirtualCenter.OperatingSystem.
3. Provide the value for your operating system, e.g. windows7Server64Guest for
Windows Server 2008 R2 (64 bit). Please take a look at the vSphere
documentation for a complete list of values.
4. Click the Save button.
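Expressed as a key-value pair, the custom property from steps 2 and 3 would look as follows (the guest identifier shown is one valid example; use the value matching your template’s operating system):

VMware.VirtualCenter.OperatingSystem = windows7Server64Guest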

Hint: Storage volumes


If you are configuring storage volumes for a virtual machine, please make sure you have
the guest agent installed on this machine. Without the guest agent, the hard drive will be
added to the virtual machine, but it is neither partitioned nor formatted. However, the
guest agent also needs to be able to communicate, via https, with the vRealize Automation
IaaS component. If such a communication is not possible, you must implement an
Orchestrator workflow that carries out this work for you (but this means you need to have
the VMware tools installed on that machine).

Hint: Attaching CD-ROM


There is another custom property called VirtualMachine.CDROM.Attach, which must
be added and set to True. However, that is not sufficient. You still have to configure
vRealize Orchestrator to use the workflow Mount CD-ROM, as a new
Action in vRealize Automation. Chapter 14 will address how to configure Orchestrator.

Further customization with custom properties:

• VirtualMachine.Admin.ThinProvisioning: Set to True, for thin provisioning.


• VMware.Hardware.Version: Specifies the VM hardware version.
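For example, a blueprint using both of these properties could define the following values (the hardware version string vmx-10 is illustrative; pick the version supported by your hosts):

VirtualMachine.Admin.ThinProvisioning = True
VMware.Hardware.Version = vmx-10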

6.3.2. Clone and Linked Clone

The most popular provisioning workflows for virtual blueprints are clone and linked clone.
This is because they are both quite easy to configure and the virtual machine itself can be
deployed in a relatively short period of time.
Unlike full clones, linked clones are based on snapshots. When creating new virtual
machines based on snapshots, vSphere does not copy all the original files, but only saves
delta files for the changes between the parent files and the new virtual machine. As a
consequence, linked clones represent a disk-preserving and time-saving alternative to full
clones. Full clones, on the other hand, copy the whole set of disks each time a new
machine is provisioned. Fig. 6-7 compares the two approaches.

Figure 6-7 Full clones vs. linked clones


Figure 6-8 Blueprint build information

Perform the following steps, in order to configure provisioning based on a linked or full
clone:

1. Change to the Build information tab.


2. Select Server or Desktop from the Blueprint type dropdown list.
3. Choose Clone or Linked Clone from the Action dropdown list.
4. In the Provisioning workflow dropdown list choose CloneWorkflow (the only
choice).
5. Choose the template, or snapshot, from which your virtual machines will be created.
6. If you are using a linked clone, you can tick the Delete snapshot after the blueprint
is deleted option to remove the snapshot (this only makes sense if the snapshot is
dedicated to the blueprint).
7. Optionally, choose a Customization spec. The customization spec must be created
in vSphere in advance and you have to type in the exact name of this spec.

The rest of the fields can be filled out according to the basic workflow. Fig. 6-8 shows a
picture of the user interface.

Hint: Data collection


If you have just created a template or a snapshot in vSphere, it may not yet appear in
vRealize Automation during the blueprint configuration. In that case, you have to trigger a
data collection. Navigate to Infrastructure > Compute Resources > Compute
Resources, hover over your resource and click on Data Collection. The data collection
page opens and you can request a new inventory collection.

6.3.3. NetApp FlexClone


Another alternative for fast provisioning is the NetApp FlexClone. However, this
workflow can only be used if you are using vSphere together with a NetApp storage
solution. The configuration is quite similar to the Clone workflow, but there are some
prerequisites:

• You have to create and configure a NetApp-ONTAP endpoint.


• The reservation must be configured for the use of FlexClone.

Once you have done this, you can use the FlexClone workflows. Compared to linked
clones, FlexClones have some advantages. One of the main characteristics of linked
clones is that they need their base disks on the same datastore as the delta disks. Furthermore,
FlexClones support hardware offload. This means that all copying is essentially done
by the storage array and vSphere does not need to be involved in copying. However, so
far, FlexClones are only supported on NFS datastores; VMFS datastores are currently not
supported.

6.3.4. Linux kickstart

The Linux kickstart workflow helps to automate the provisioning of Linux machines.
However, there are a couple of steps that have to be taken in order to get the Linux
kickstart workflow running.
6.3.4.1. Preparing the Linux boot ISO

In nearly every Linux distribution, a bootable image can be created via the isolinux tool.
Most of the CentOS DVDs already contain an isolinux folder, which can be used as a basis
for creating the boot ISO. Once you have mounted the DVD, you need to extract the
folder and then edit the isolinux.cfg (the following snippets show the configuration for
CentOS 6.5):

# Begin isolinux.cfg
default vesamenu.c32
timeout 1

menu title Welcome to CentOS 6.5!

label linux
menu label ^Install or upgrade an existing system
menu default
kernel vmlinuz
append initrd=initrd.img network --device=eth0 --bootproto=dhcp ks=http://yourwebserver/ks.cfg
# End isolinux.cfg

Please customize the isolinux.cfg according to your needs (change the webserver and also
ensure that the network card is correct).
Next navigate to the directory containing the isolinux folder and execute the following
command:

genisoimage -r -T -J -V "RHEL6AMD64" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o /working/RHEL6X64.iso /working/

The file boot.cat will be created by the command. This is the catalog needed for the boot
loader. The file isolinux.bin is the image of a bootloader. The output file will be
RHEL6X64.iso.
If you want to create the bootable image in Windows, there are also free tools available,
for example ImgBurn.
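A quick way to sanity-check the resulting image on a Linux host is to loop-mount it and verify that the boot files are in place (paths as used in the command above):

# Mount the generated ISO and check for the isolinux boot files
sudo mount -o loop /working/RHEL6X64.iso /mnt
ls /mnt/isolinux/isolinux.bin /mnt/isolinux/isolinux.cfg
sudo umount /mnt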
6.3.4.2. Creating kickstart config

The next step is to create a kickstart config file. Fortunately, there is already a sample
configuration file that can be used as a starting point. To locate the file, first unpack the
LinuxGuestAgent package and then locate the subdirectory, which corresponds to the
operating system you wish to deploy. Open the sample-https.cfg file in an editor. The file
looks like this:

auth --useshadow --enablemd5
bootloader --append="rhgb quiet" --location=mbr --driveorder=sda
zerombr
clearpart --all --initlabel
text
firewall --disabled
keyboard us
lang en_US
logging --level=info
network --bootproto=dhcp --device=em0 --onboot=on
reboot
rootpw secret
selinux --enforcing
timezone --isUtc America/New_York
install
url --url http://web_server/Install_Files/
part / --asprimary --fstype="ext3" --size=4096
part swap --asprimary --fstype="swap" --size=512
%packages
vim-enhanced
%post
function process() {
while true; do
/usr/bin/gugent --host=dcac.example.net --ssl --config=/usr/share/gugent/gugent.properties --script=/usr/share/gugent/site
if [ $? -eq 0 ]; then
break
fi
sleep 30
done
}
rpm -i http://rpm.example.net/rhel6-x86/gugent.rpm
export AXIS2C_HOME=axis2
export PYTHONPATH=/usr/share/gugent/site/dops
pushd /usr/share/gugent
echo | openssl s_client -connect dcac.example.net:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > cert.pem
process # SetupOS
process # CustomizeOS
popd

The kickstart file configures a lot of things, e.g. the root password, locale settings, the
partitioning of the guest machine’s hard disk and the installation of the guest agent on that
machine. We will now discuss how to understand and modify the file according to your
needs:

• In particular, all instances of the string host=dcac.example.net must be replaced
with an IP address or a fully qualified domain name (plus port number) of the
vRealize Automation server host.
• Also, modify the address within the rpm-command, where the gugent.rpm file can
be downloaded.
• The url parameter specifies where to download the installation files. If you skip this
parameter, it will look in the default location for the sources, which is the CD-ROM.
In our case, we are downloading the packages from a web server, so make sure that
the files are accessible and can be downloaded.
• The post-section takes care of the guest agent installation (which will happen after
the initial installation). It will first download the guest agent rpm and then install it.
Internally the guest agent uses Axis 2 (a web service API) and Python, which are
also configured in the script. The last step is to download the vRealize Automation
Server IaaS certificate (the communication between the guest and the agent must be
trusted).
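Most of these changes are simple string substitutions, so they can be scripted. A hedged example using sed (the replacement host names are hypothetical placeholders for your own IaaS server and RPM repository):

# Hypothetical hosts: substitute your own IaaS server and RPM download location
sed -i 's/dcac\.example\.net/vra-iaas.example.com/g' ks.cfg
sed -i 's|http://rpm\.example\.net/rhel6-x86|http://deploy.example.com/agents|g' ks.cfg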

Hint: Guest agent and Windows 2012


If your vRealize Automation IaaS host is running Windows Server 2012, you need to
apply the following patch. Without it, the guest agent will not be able to communicate
with vRealize Automation:

http://support.microsoft.com/kb/297337.

6.3.4.3. Verifying DHCP settings

Once you have created the bootable ISO image, and finished and uploaded your kickstart
file, we have to make sure that there is a DHCP server available on the network, so that
the new Linux instance obtains an IP address and can boot.
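If you are unsure whether DHCP is being served on the provisioning network, one quick check from a Linux host on that network is nmap’s DHCP discovery script (assuming nmap is installed; adjust the interface name to your environment):

# Broadcast a DHCP discover and print any offers received
sudo nmap --script broadcast-dhcp-discover -e eth0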

6.3.4.4. Creating blueprint and modifying custom properties

The last step is to configure the blueprint itself. Follow these steps to set up the Linux
kickstart workflow.

1. Click the Build Information tab.


2. Select Create from the Action dropdown menu.
3. Select LinuxKickstartWorkflow from the Provisioning workflow dropdown
menu.
4. Configure the settings for the minimum, maximum, storage volumes, reservation
policies, max # of volumes and max # of network adapters as described in the
basic workflow section.
5. Click the Properties tab.
6. Add the following custom properties to the blueprint:
a. Image.ISO.Name:
i. For vCenter Server, define the path to the ISO, including the name (the
value must use forward slashes).
ii. For Hyper-V use the full local path to the ISO file, including the file
name.
iii. For XenServer use the name of the ISO file.
b. Image.ISO.Location: Type the location of the bootable ISO image that you
have created.
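For a vCenter Server environment, the resulting pair of properties might look like this (the folder, file and datastore names are hypothetical and depend on where you uploaded the ISO):

# Hypothetical example for vCenter Server (note the forward slashes in the name)
Image.ISO.Name = /ISO/RHEL6X64.iso
Image.ISO.Location = datastore01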

6.3.5. Microsoft SCCM workflows

Deployment based on a SCCM workflow can be configured with vRealize Automation as
well. What essentially happens is that vRealize Automation boots a newly provisioned
machine from an ISO image, which then passes control to a SCCM task for further
installation and configuration. Note, however, that only the installation, and neither
software distribution nor updates, can be configured from vRealize Automation.
Before the SCCM workflow can be used, a couple of prerequisites have to be met:

• The NetBIOS name of the SCCM host is required and must be resolvable from at
least the DEM.
• vRealize Automation and the SCCM server must be on the same network.
• A SCCM software package, which includes the vRealize Automation guest agent,
must be created.
• The SCCM package must install the vRealize guest agent.
• The SCCM files must be packaged as a bootable ISO.
• The bootable ISO must be accessible to the network.

Once this has been done, we can continue with the blueprint configuration:

1. Click the Build Information tab.


2. Select Create from the Action dropdown menu.
3. Select VirtualSccmProvisioningWorkflow from the Provisioning workflow
dropdown menu.
4. Configure the settings for the minimum, maximum, storage volumes, reservation
policies, max # of volumes and max # of network adapters as described in the
basic workflow section.
5. Click the Properties tab.
6. Add the following custom properties to the blueprint:
a. Image.ISO.Location: Type the location of the bootable ISO image that you
have created.
b. Image.ISO.Name: Provide the location and the name of the ISO Image.
c. SCCM.Collection.Name: The SCCM collection name
d. SCCM.Server.Name: Name of the SCCM server
e. SCCM.Server.SiteCode: SCCM site code
f. SCCM.Server.UserName: The SCCM username
g. SCCM.Server.Password: The SCCM password
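Put together, a hypothetical set of values could look like this (server, collection, site code and credentials are all fictional and only illustrate the expected format):

# Illustrative values only
Image.ISO.Location = \\staging01\isos
Image.ISO.Name = sccm-boot.iso
SCCM.Collection.Name = vRA-Deployments
SCCM.Server.Name = SCCM01
SCCM.Server.SiteCode = S01
SCCM.Server.UserName = CORP\svc-sccm
SCCM.Server.Password = <password of the SCCM user>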

6.3.6. WIM provisioning

The Windows Imaging File (WIM) provisioning is certainly the most complex one to
configure. There are several reasons for this. Firstly, WIM has quite a lot of prerequisites.
Secondly, the maintenance is complex and time-consuming. Thirdly, it can only be
applied to Windows machines. Last but not least, the provisioning time itself is quite long.
On the other hand, there are also some benefits. Once you have a working WIM
provisioning, it can be used in nearly all environments, not only in vRealize Automation.
If a company already has some experience with WIM, it only takes a short time to configure
it in vRealize Automation.
From a technical point of view, WIM is a file-based image format that was developed by
Microsoft in order to deploy operating systems in distributed environments. WIM images
must be provisioned onto existing volumes. WIM itself does not provide a tool
for creating such volumes or partitions; it is left up to the user to do this manually
(however, there is a tool from Microsoft named Diskpart which creates and formats
partitions).
In order to create a WIM image, you also need the ImageX command line tool, which is
part of the Windows Automated Installation Kit (AIK).
6.3.7. Preparing for WIM provisioning

The following steps must be completed in order to configure WIM provisioning:

1. First of all, a staging area has to be configured. This staging area will be used to
upload the required files for the provisioning process. The staging area must be
accessible from vRealize Automation as well as from the machine being
provisioned. The preferred way to communicate with the staging area is using a
network drive or a UNC path.
2. A DHCP server is needed as well.
3. To create the template, a Windows reference WIM machine must be installed.
4. Once the reference machine is created, you can sysprep the machine using the
System Preparation Utility.
5. The WIM image can be created after having sysprepped the Windows machine.
6. Usually, additional scripts should be invoked after the WIM provisioning. This can
happen using the PEBuilder tool, which is provided by vRealize Automation. Install
the tool on a development machine and include all the scripts to be invoked as part
of the provisioning process. The PEBuilder installation files can be found on the
vRealize Automation Appliance, at the URL https://<vrealize-automation-
appliance.domain.name>:5480/installer. You need Microsoft .NET 4.5 and the
Windows Automated Installation Kit (AIK) for Windows 7 (including WinPE 3.0)
as a prerequisite. The scripts to be invoked should be placed in the Plugins\VRM
Agent\VRMGuestAgent\site\workitem subdirectory of the PEBuilder installation
directory.
7. Create a WinPE image using PEBuilder and insert the guest agent into the WinPE
image as follows:
a. Run PEBuilder.
b. Enter the host name of vRealize Automation in the vCAC Hostname textbox.
c. Type the vCAC Port.
d. Enter the path of the PEBuilder plugin directory.
e. Type the output path for the ISO file in the ISO Output Path text box.
f. Click File > Advanced.
g. Select the Include vCAC Guest Agent in WinPE ISO checkbox.
h. Click OK.
i. Click Build.
8. Upload the WinPE image to the staging environment

Once the WIM image has been created and the other prerequisites have been met, we can
continue with the blueprint configuration:

1. Click the Build Information tab.


2. Select Create from the Action dropdown menu.
3. Select WIMImageWorkflow from the Provisioning workflow dropdown menu.
4. Configure the settings for the minimum, maximum, storage volumes, reservation
policies, max # of volumes and max # of network adapters as described in the
basic workflow section.
5. Click the Properties tab.
6. Add the following custom properties to the blueprint:
a. Image.ISO.Name:
i. For vCenter Server, define the path to the WinPE ISO, including the
name (the value must use forward slashes).
ii. For Hyper-V use the full local path to the WinPE ISO file, including
the file name.
iii. For XenServer use the name of the WinPE ISO file.
b. Image.ISO.Location: Type the location of the bootable ISO image that you
have created.
c. Image.WIM.Path: The UNC path to the WIM file.
d. Image.WIM.Name: The name of the WIM file.
e. Image.WIM.Index: The index to be used to extract the desired image from the
WIM file.
f. Image.Network.User: The username under which to map the WIM image
path (Image.WIM.Path) to a network drive.
g. Image.Network.Password: The associated password for the network user
(Image.Network.User).
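As an illustration, a complete (purely hypothetical) property set for a WIM deployment from a staging share could look like this; share, image and credential names are fictional:

# Illustrative values only
Image.ISO.Name = /ISO/winpe.iso
Image.ISO.Location = datastore01
Image.WIM.Path = \\staging01\images
Image.WIM.Name = win2012r2.wim
Image.WIM.Index = 1
Image.Network.User = CORP\svc-deploy
Image.Network.Password = <password of the network user>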

6.4 Cloud Blueprints

From time to time, it is required to provision resources into the cloud. As for virtual
blueprints, vRealize Automation must be prepared in order to use cloud blueprints. These
steps involve storing credentials, creating endpoints, bringing resources under vRealize
Automation management, creating business groups and managing reservations. In Chapter 5
we have already shown how to perform these configurations.
vRealize Automation supports the following platforms for cloud blueprints:

• vCloud Director
• Amazon AWS
• OpenStack (Havana)
• vCloud Air

These blueprints also offer different workflows for provisioning which are shown in the
following table:

CloudProvisioningWorkflow (Amazon, OpenStack): Provision resources to Amazon and OpenStack
vAppCloneWorkflow (vCloud Director): Clone vApp templates in vCloud Director
CloudLinuxKickstartWorkflow (OpenStack): Kickstart/autoyast provisioning
CloudWIMImageWorkflow (OpenStack): Windows Imaging Format (WIM) provisioning

Table 6-3 Cloud provisioning workflows

6.4.1. Provisioning with Amazon AWS

The following issues must be considered when configuring vRealize Automation for
Amazon AWS:

• Cloud blueprints are mapped to AMIs.
• The selected AMI of the blueprint must be accessible from vRealize Automation.
• The requester’s user account must be added to the EC2 instance after provisioning,
in order to be able to log on to the machine.

Figure 6-9 AWS blueprint

Chapter 5 has already covered the basics of AWS and how to do the preparation within
vRealize Automation. In the following stage, we will discuss how to setup a blueprint.

6.4.2. Defining a blueprint

There are also a couple of steps necessary for the configuration:

1. Navigate to the Infrastructure > Blueprints > Blueprints page.


2. Click New Blueprints > Cloud > Amazon EC2.
3. On the Blueprint Information tab, fill out the fields as described before.
4. Change to the Build information tab (see Fig. 6-9).
5. Select Server or Desktop from the Blueprint type dropdown list.
6. Select CloudProvisioningWorkflow from the Provisioning workflow dropdown
menu.
7. Choose the Amazon machine image to be used for deployment. Use the filter
option and an AMI ID to identify the machines to be used.
8. Specify Key pair settings: Not specified (uses the settings of the reservation), Auto
Generated per Business Group or Auto Generated per Machine.
9. Ticking the Enable Amazon network options on machine checkbox allows end
users to decide if they want to provision their resources into a VPC or non-VPC
environment.
10. The Instance types table lists the instance types available for Amazon machines.
Select at least one instance type for provisioning. If several instance types are chosen,
end users can decide which instance type should be used.
11. Configure the settings for the Minimum and Maximum of CPUs, Memory (MB),
Storage (GB), EBS Storage (GB) and Lease (days) based on the selection of the
instance type.
12. Change to the Properties tab (optional). Note that there are special custom properties
for AWS.
13. Go to the Actions tab and review the settings.
14. Click OK to save the blueprint.
15. Publish the blueprint.

Hint: Amazon passwords


The first login on a cloud machine must be via an administrator’s account. Once logged in, it
is possible to add additional credentials to that machine. In case the password is not
known, navigate to the Items page and select the Amazon machine in the list of machines.
From that page, it is possible to click Edit in the Actions dropdown menu and choose
Show Administrator password.

6.4.3. Provisioning with OpenStack

In order to be able to provision resources to OpenStack, endpoints, reservations and
blueprints must be prepared in a similar way to other compute resources. Most of the steps
are the same, so we will concentrate on the differences.
First of all, an OpenStack endpoint must be created:

1. Select Infrastructure > Endpoints > Endpoints.


2. Select New Endpoint > Cloud > OpenStack.
3. Enter a name and a description.
4. Type the URL in the Address text box. The format must be FQDN:5000 or
IP:5000.
5. Select the credentials for the endpoint.
6. Type a tenant name in the OpenStack project textbox.
7. Click OK.

The other steps (including fabric groups, reservations, machine prefixes and business
groups) do not differ much from the configuration described before, hence we will
not cover them again.
Currently, vRealize Automation allows for provisioning with virtual machine images,
Linux kickstart, or WIM provisioning. Section 6-x already describes how to prepare for
Linux kickstart and WIM provisioning. The third provisioning mechanism – virtual
machine image provisioning – comes in two flavors:

• Virtual Machine Images are templates that contain a software configuration


including an operating system.
• OpenStack flavors define compute, memory, and storage capacity of computing
instances. In other words, a flavor is an available hardware configuration for a
server. It defines the “size” of a virtual server that can be launched. Flavors are
similar to instance types in Amazon. The default flavors are listed in the following
table:

Flavor vCPU Disk (in GB) RAM (in MB)

m1.tiny 1 1 512

m1.small 1 20 2048

m1.medium 2 40 4096

m1.large 4 80 8192

m1.xlarge 8 160 16384

Table 6-4 OpenStack default flavors
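Which flavors actually exist depends on your OpenStack installation. With the classic nova command line client of that era, you could, for example, list the defined flavors and create a custom one (the flavor name, ID and sizes below are made up):

# List the flavors currently defined in the OpenStack project
nova flavor-list
# Create a hypothetical custom flavor: name, ID, RAM (MB), disk (GB), vCPUs
nova flavor-create m1.custom 100 4096 40 2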

The flavors can be added as instance types in vRealize Automation:

1. Click Infrastructure > Blueprints > Instance Types.


2. Create a new Instance Type by clicking New Instance Type.
3. Provide the values as needed.
4. Click the Save button.
Finally, the blueprint can be created:

1. Navigate to the Infrastructure > Blueprints > Blueprints page.


2. Click New Blueprints > Cloud > OpenStack.
3. On the Blueprint Information tab, fill out the fields as described before.
4. Change to the Build information tab.
5. Select Server or Desktop from the Blueprint type dropdown list.
6. Select which workflow to use from the Provisioning workflow dropdown list. As
described earlier, OpenStack offers the CloudWIMImageWorkflow, the
CloudProvisioningWorkflow and the CloudLinuxKickstartWorkflow.
7. Choose an OpenStack image.
8. Specify how to generate a Key pair: Not specified (uses the settings of the
reservation), Auto Generated per Business Group or Auto Generated per
Machine.
9. Define the Flavors for the blueprint.
10. The next step is to configure the machine resources for the CPU, Memory (MB)
and Storage (GB). You always have to provide a minimum value. The values are
fixed if you leave the Maximum boxes empty. Otherwise, users are free to choose a
value between the minimum and maximum.
11. Specify the Lease (days). A blank value means there is no expiration date.
12. Navigate to the Properties page and add the custom properties as needed. Please
note that the Linux Kickstart and the WIM Provisioning workflows need the same
custom properties as their virtual blueprint counterparts.
13. Review the settings on the Actions tab.
14. Click OK to save the blueprint.

6.4.4. Provisioning with vCloud Air and vCloud Director

vRealize Automation can provision vCloud Air and vCloud Director resources as well.
Essentially, the same preparation steps are required:

• Create a vCloud Director endpoint.


• Bring the resources under vRealize Automation management.
• Configure the IaaS features like fabric groups, reservations or business groups.
• Create blueprints.

Internally, vCloud Director uses vApps as a container entity for provisioning resources. A
vApp consists of different component machines that are provisioned and managed as a
single entity. Besides acting as a container for the different components, a vApp can also
take care of network provisioning. While vRealize Automation can trigger the
provisioning of vApps, using vCloud Director, it does not inherently know about the
concept of vApps. vRealize Automation uses multimachine blueprints, whose
functionality can be compared to vApps. Multimachine blueprints are covered in chapter
7.

6.4.5. Comparison of Amazon AWS with vCloud Air

We have already described how vRealize Automation can provision machines on- and off-
premise. While Amazon is the leader in the public cloud market, deploying to vCloud Air
can also be an interesting alternative. Because this is a book about a VMware product, we
want to address how vCloud Air differs from Amazon AWS and what the advantages are:

• First of all, vCloud Air comes with automatic redundancy – hot standby redundant
capacity is included and it’s free. Virtual machines are monitored by default. If a
failure occurs, the machine is automatically restarted with the same network
configuration (IP addresses, MAC addresses, etc.). While resiliency is possible with
Amazon, too, it is not an out-of-the-box feature, so it must already have been
considered while designing the application.
• vCloud Air datacenters monitor their hosts and, in case a host is overloaded,
virtual machines automatically get live-migrated to other hosts. With Amazon,
however, resource congestion can occur on hosts from time to time.
• One of the principles of AWS is that everything can fail. Therefore you must design
your application with that in mind. When there is a problem with an Amazon host or
maintenance work is done, virtual machines in AWS might be switched off
immediately without any warning or with only short notice. vCloud Air, on the other
hand, migrates machines to other hosts before doing any maintenance work.

• vCloud Air is also more flexible in terms of instance sizes. You can have any VM
size and can even resize the machine (VM or disk), while it’s running. Hardware
resizing is not possible with AWS – you have to switch the instance type, which
means additional administrative work.
• Most companies have vSphere running in their on-premise network. When moving
machines to vCloud Air, no conversion is needed. Furthermore, as on- and off-
premise environments are based on the same vSphere technology, you can keep
your application support.
• vCloud Air allows stretched layer-2 networks between your data center and vCloud
Air. This can be done with Direct Connect, a dedicated link between your datacenter
and vCloud Air. Because it is a layer-2 network, it appears like a single flat LAN
segment. Amazon – on the other hand – also offers Direct Connect access to its
datacenters, however using IP (layer-3) network connectivity.

6.4.6. Preparing for vCloud Air

The first step is creating an endpoint. However, to get the address, you have to first
connect to your vCloud Air environment.
As mentioned previously, you need to prepare vCloud Air settings in order to configure
the provisioning in vRealize Automation.
First of all, you need the URL of the vCloud Air datacenter. You can retrieve that by
logging in to your management website. On the dashboard you will see a list of
datacenters. Click on your datacenter to drill down onto the details. On the right-hand side
of the datacenter details page, click on the vCloud Director API URL and copy the URL
to your clipboard. This value is needed in the vCloud Director endpoint configuration
page (Address field) in vRealize Automation; however remove everything after the port
number in the URL.
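To illustrate with a made-up URL: if the portal shows a vCloud Director API URL like the first line below, the address to enter in the vRealize Automation endpoint is only the part up to the port number:

https://p1v17-vcd.vchs.example.com:443/api/compute/api/org/12345678-aaaa-bbbb-cccc-1234567890ab
https://p1v17-vcd.vchs.example.com:443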
When creating an endpoint for a vApp (vCloud), an Organization must be provided as
well. You can find out which organization to provide by navigating to your data center
details and clicking the VDC Name & Description link. Copy the name and paste it into
the Organization textbox field in vRealize Automation.

1. You should also note the network details for the vApp. While on the details page of
a data center, click on the network tab to see a list of the defined networks. Select
the network of interest and note the settings. It is recommended to create a network
profile based on these settings, including the IP ranges from the vCloud Air data
center.
2. The discovered compute resources have to be added to a fabric group.
Once these preparations have been completed, you can continue creating a blueprint:

1. Navigate to the Infrastructure > Blueprints > Blueprints page.


2. Click New Blueprints > Cloud > Cloud vApp Component (vCloud).
3. On the Blueprint Information tab, fill out the fields as described before.
4. Change to the Build information tab.
5. Select Server or Desktop from the Blueprint type dropdown list.
6. Only the vAppCloneWorkflow is available.
7. The next step is to configure the machine resources for the CPU, Memory (MB)
and Storage (GB). You must always provide a minimum value. The values are fixed
if you leave the Maximum boxes empty. Otherwise, users are free to choose a value
between the minimum and maximum.
8. Specify the Lease (days). A blank value means there is no expiration date.
9. Optionally you can define the Storage Volumes to be mounted.
10. Define the Maximum volumes.
11. Change to the Properties tab (optional). Note that, unlike for Amazon AWS, there
are no special custom properties here.
12. Go to the Actions tab and review the settings.
13. Click OK to save the blueprint.
14. Publish the blueprint.

6.5 Physical blueprints

The remaining kind of blueprints to be discussed in this chapter are the physical
blueprints. Most commonly, they are used for legacy reasons or when virtual or cloud
provisioning is not an option. However, not every type of hardware is supported, only
hardware with special out-of-band management facilities. In particular, this includes:

• HP iLO
• Dell iDRAC
• Cisco UCS

Like their virtual and cloud counterparts, physical blueprints support a variety of
provisioning workflows. These are shown in the following table 6-5:

PhysicalProvisioningWorkflow (HP iLO, Dell iDRAC): Linux Kickstart
PhysicalPxeProvisioningWorkflow (HP iLO, Dell iDRAC, Cisco UCS): PXE provisioning
PhysicalSccmProvisioning (HP iLO, Dell iDRAC): SCCM provisioning
PhysicalSccmPxeProvisioning (HP iLO, Dell iDRAC, Cisco UCS): PXE provisioning

Table 6-5 Physical provisioning workflows

While physical blueprints are very rarely used, there are nevertheless use cases for
configuring them:

• vRealize Automation can manage the provisioning of physical machines via a PXE
boot server. At the very beginning of the provisioning, a network boot strap program
(NBP) is started. This in turn downloads an image and subsequently installs the
operating system.
• Linux Kickstart and AutoYaST can be used to provision physical machines as well.
A configuration file is used to find and download an ISO image and then install the
operating system afterwards.
• Microsoft System Center Configuration Manager (SCCM) can be used to create a
SCCM task, which in turn can be used by vRealize Automation for the installation.
vRealize Automation takes care of booting the machine and hands over the control
to SCCM afterwards. After the provisioning has finished, additional customization
is possible using the guest agent.
• The Windows Imaging File (WIM) format can also be used for the deployment of
Windows machines.

The configuration of endpoints, fabric groups, reservations and the blueprints resembles
their virtual and cloud counterparts. Therefore, we will not delve too deep into these
topics. However, please consider that, depending on the provisioning workflow, additional
custom properties must be defined. Table 6-6 shows these custom properties:

Linux Kickstart:
  Image.ISO.Name (location of the image, e.g. https://10.10.1.40/iso)
  Image.ISO.UserName (for Dell iDRAC)
  Image.ISO.Password (for Dell iDRAC)

WIM Image:
  Image.ISO.Name
  Image.ISO.Location
  Image.WIM.Path
  Image.WIM.Name
  Image.WIM.Index
  Image.Network.User
  Image.Network.Password
  Image.ISO.UserName (for Dell iDRAC)
  Image.ISO.Password (for Dell iDRAC)

SCCM:
  Image.ISO.Location
  Image.ISO.Name
  SCCM.Collection.Name
  SCCM.Server.Name
  SCCM.Server.SiteCode
  SCCM.Server.UserName
  SCCM.Server.Password
  Image.ISO.UserName (for Dell iDRAC)
  Image.ISO.Password (for Dell iDRAC)

PXE Boot: no additional custom properties required

Table 6-6 Custom properties for provisioning

6.6 Integrating the vRealize Guest Agent

We have already introduced the vRealize guest agent and mentioned that it can be used to
further customize the operating system at the end of the provisioning. The guest agent is
not mandatory, because guest customization can also be done by other means. Most
commonly used is the customization spec, but this only works for clones and linked
clones. When provisioning to vSphere, vCenter Orchestrator can also be used. Besides
these alternatives, there are a couple of use cases for the guest agent:

• To launch guest scripts. This has always been the most important reason for
installing the vRealize Automation guest agent. The invocation of guest scripts can
be controlled by setting custom properties. The definition of these custom properties
is quite flexible (custom properties can even reference each other), so it is possible
to execute scripts and pass the required parameters to the command. If the scripts to
be executed reside on a network share, the maintenance work for the machine
template can be further reduced. Therefore you are able to cope with additional
requirements and functionalities, without the need to edit a machine template. Of
course, besides invoking simple scripts, unattended installation of software is also
possible.
• In many cases, configuration tools like Puppet or Chef are used for the
customization of machines. The guest agent can call them as well.
• It is possible to configure a blueprint to attach additional hard drives to a machine.
However, without the guest agent it is up to the administrator to partition and format
the hard drives. With the guest agent installed, this can be done automatically.
• Additional NICs can be configured as well.
• The Application Services use the guest agent as well.

The guest agent can be installed on Windows and Linux machines. In the following
section, we will describe how to install it on both operating systems.

6.6.1. Installing the guest agent on Windows

Perform the following steps:

1. Navigate to the vRealize Automation Appliance and open the download page
(https://<vRealize-appliance.domain.name>:5480/installation).
2. Download the Windows guest agent files (32bit or 64bit).
3. Unpack GugentZip_version into the C drive on the reference machine. The files
will reside in the C:\VRMGuestAgent directory. Keep this directory; do not move or
rename it.
4. Next, the guest agent must be configured to communicate with the Manager Service.
This can be done by running the following command:
winservice -i -h Manager_Service_Hostname_fqdn[:portnumber] -p ssl
If you have a load balancer for the Manager Service, you have to use the host of the
load balancer as the host parameter (-h).
5. Once the installation has completed, the guest agent can be started.
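For example, on a machine whose IaaS Manager Service runs on a (hypothetical) host vra-iaas.example.com on port 443, the call would look like this:

winservice -i -h vra-iaas.example.com:443 -p ssl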

6.6.2. Installing the guest agent on Linux

The following procedure describes how to install the guest agent on Linux:

1. Navigate to the vRealize Automation Appliance and open the download page
(https://<vRealize-appliance.domain.name>:5480/installation).
2. Download the Linux guest agent files.
3. Upload and unpack the guest agent files on the reference machine.
4. Next, the guest agent must be installed. This can be done by running the following
command:
rpm -i gugent-version.x86_64.rpm
5. Now, the guest agent must be configured:
cd /usr/share/gugent
./installgugent.sh <vRealize-Automation-appliance>:443 ssl
6. Test if the installation was successful by running the command ./rungugent.sh.
6.6.3. Executing scripts with the guest agent

Once the guest agent has been installed, blueprints can be configured to run scripts during
the provisioning of a machine. This can be achieved by placing a script in the machine or
on a network share, and then configuring custom properties to call the script. The
following custom properties are required:

• VirtualMachine.Admin.GuestAgent: Setting it to true activates the guest agent


within the machine.
• VirtualMachine.Customize.WaitComplete: This property advises vRealize
Automation to wait until guest agents have completed their work.
• VirtualMachine.Software0.ScriptPath: This custom property configures which
script the guest agent should invoke. The script must reside on the machine or on a
network share accessible from the machine. The full path must be provided (e.g.
C:\Scripts\MyApplication.bat). It is possible to provide parameters to the script, too.
The parameters can be either hard coded or other custom properties can be
referenced. When referencing other custom properties, the format must be as
follows:
D:\Scripts\MyInstall.bat -param {MyCustom.Property.X}

• VirtualMachine.Software0.Name: The friendly name for the script to be invoked
by the guest agent.

If there is more than one script to be invoked, additional custom properties
(VirtualMachine.SoftwareX.ScriptPath and VirtualMachine.SoftwareX.Name) can be
added to the blueprint. However, with each script the X variable must be incremented
according to the order in which they are to be invoked.
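Putting it all together, a blueprint that runs two scripts via the guest agent could carry a property set like the following (script names and paths are made up for illustration):

# Illustrative property set - adapt names and paths to your environment
VirtualMachine.Admin.GuestAgent = true
VirtualMachine.Customize.WaitComplete = true
VirtualMachine.Software0.Name = Install base software
VirtualMachine.Software0.ScriptPath = C:\Scripts\install-base.bat
VirtualMachine.Software1.Name = Apply hardening
VirtualMachine.Software1.ScriptPath = \\fileserver\scripts\harden.bat {MyCustom.Property.X}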
With that knowledge in mind, the blueprint can be customized. In this case, it might be
more useful to add these custom properties via a build profile (build profiles will be
discussed in detail in chapter 10):

1. Navigate to the Infrastructure > Blueprints > Build profiles page.


2. Click on the Add button.
3. Assign a name and optionally a description for the build profile.
4. Add the custom properties as described above to the build profile.
5. Click OK to save the build profile.

Once the build profile has been created and configured, it can be associated with a
blueprint (see Fig. 6-10):

1. Navigate to the Infrastructure > Blueprints > Blueprints page.


2. Select the blueprint to which the build profile should be added.
3. Click on Edit.
4. Change to the Properties tab.
5. Add the build profile.
6. Click OK to save the blueprint.

Once configured, the guest agent can be used for customization. Please note that the guest
agent gets invoked after the customization specification has completed. If there is no
customization specification, the invocation happens directly after the deployment.
During deployment, the guest agent copies all the custom properties to a local file
(C:\VRMGuestAgent\site\workitem.xml).

Figure 6-10 Adding a guest agent build profile

6.7 Summary

This chapter introduced the different kinds of blueprints available in vRealize Automation:
virtual, cloud and physical blueprints. We demonstrated how to configure the blueprints
and manage the lifecycle of machines. To allow further customization of the deployment
process, it is possible to add custom properties to blueprints. Furthermore, the guest agent
can be used to invoke additional scripts after provisioning.
After a blueprint has been configured, it must be published to the self-service catalog so
that end users can request machines. The configuration of the self-service catalog will be
shown in chapter 8.
This chapter did not deal with multi-machine blueprints. Multi-machine blueprints will be
described in the next chapter.
7. Network Profiles and Multimachine Blueprints

In chapter 6, we covered virtual, cloud and physical blueprints. These blueprints help us to
deploy single machines. However, from time to time providing single machines might not
be enough; instead, we have to provision a group of machines. There are plenty of use
cases for this, amongst them:

• The provisioning of multi-tier applications is probably the most important use case.
This usually goes hand-in-hand with development teams that need to deploy their
applications. A traditional n-tier application consists of a network load balancer, one
or more web frontend servers, an application server and a database. To increase
security, the different application components can also be placed into different
subnets (with dedicated firewall rules between the subnets). Depending on the
environment, the network to be used is either pre-configured or must be created
dynamically at runtime.
• Multimachine blueprints can also be used for deploying the very same application
several times for different purposes. For example, there could be a test, integration
or production environment.
• Multimachine blueprints are also very well suited to training environments, where
the same set of machines must be deployed multiple times.

Essentially, a multimachine blueprint is a collection of single blueprints (together with
some additional settings for the multimachine lifecycle and network). This chapter will
cover the basics of multimachine blueprints, as well as how they can be configured in
vRealize Automation.
Another focus of this chapter is the integration of vRealize Automation with NSX. In
many cases, having appropriately sized virtual machines is not enough. From time to time,
it is also a requirement to configure a network on the fly. However, this means not only
automating the machine provisioning, but also the network provisioning. In traditional
networks, we only deal with assigning IP addresses, DNS settings and LAN assignments.
When deploying a complex multi-tiered application, as described in the use cases above,
additional components are needed: Logical switches and networks, security groups,
firewalls, load balancers or firewall rules. Without being able to dynamically create
networks, the provisioning of complex applications cannot be fully automated and
manual intervention is still required. When such dynamic provisioning of networks is
needed, vRealize Automation has to be integrated with VMware NSX.

7.1 Basics of network profiles

We have already covered the basics of network profiles in chapter 5. However, as network
profiles represent an important prerequisite for setting up multimachine blueprints, we will
further explore this topic.
Essentially, network profiles perform two main functions in vRealize Automation.
Firstly, they are responsible for the NIC configuration (i.e. IP address, subnet mask,
default gateway, DNS). Secondly, in conjunction with NSX, they provide edge services
router configuration (routing or NAT functionality). There are five principal types:

• External network profile – provisioned machines are connected to a network
which is created and configured outside of vRealize Automation.
• Private network profile – machines are connected to a network with no external
connectivity at all.
• Routed network profile – vRealize Automation dynamically creates a network with
different subnets and a routing table.
• One-to-One NAT profile – used to conserve externally routable IP addresses within
a network.
• One-to-Many NAT profile – similar to the One-to-One NAT profile, however there is
no source NAT.

As stated earlier, network profiles are responsible for the NIC configuration. They provide
an easy way to keep track of the IP addresses assigned to a machine during provisioning
and throughout its lifecycle in vRealize Automation. This in turn allows vRealize
Automation to reclaim IP addresses when the machine is eventually destroyed. This is a
very handy feature for companies that do not have an IP address management (IPAM)
tool like Infoblox running.
The following points are required for vRealize Automation to successfully inject an IP
address into a machine:
• Guest customization specification – IP addresses will be assigned using the guest
customization specification. Without configuring this, IP addresses will just be
reserved from the pool of the network profile, but they will not be applied to the
provisioned machine.
• VMware tools – behind the scenes, the configuration of virtual machines with new
IP addresses happens with the VMware tools.

While it is good practice to have a vRealize Automation guest agent installed on a


template, it is not a prerequisite for assigning an IP address.
As stated, the internal IPAM system normally relies on the guest customization
specification. If you cannot use one, you can still use the IPAM system, but additional
work is required, because you have to configure the network device yourself:
In general, when a VM is deployed with a network profile, the following custom
properties are set:

• VirtualMachine.Network0.Address
• VirtualMachine.Network0.SubnetMask
• VirtualMachine.Network0.Gateway
• VirtualMachine.Network0.PrimaryDNS
• VirtualMachine.Network0.SecondaryDNS (optional)
• VirtualMachine.Network1.Address
• VirtualMachine.Network1.SubnetMask
• …
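
For illustration, a machine with two NICs might receive values such as the following (all addresses are hypothetical):

VirtualMachine.Network0.Address = 192.168.10.25
VirtualMachine.Network0.SubnetMask = 255.255.255.0
VirtualMachine.Network0.Gateway = 192.168.10.1
VirtualMachine.Network0.PrimaryDNS = 192.168.10.2
VirtualMachine.Network1.Address = 10.0.20.25
VirtualMachine.Network1.SubnetMask = 255.255.255.0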

These custom properties could be read by an Orchestrator workflow, which in turn would
invoke another workflow to assign the IP address using the Guest API (we will talk about
Orchestrator in a later chapter in detail). If Orchestrator is not a valid choice, you could
also place a script in the machine template. This script receives some input and configures
the networking settings. The script would be triggered by the guest agent (we talked about
triggering scripts in chapter 6).
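
As a minimal sketch of such a template script, the following PowerShell applies an IPv4 configuration to a network adapter. The adapter name and the way the three parameter values are handed over (for example by the guest agent, or by parsing workitem.xml) are assumptions that must be adapted to your environment:

param(
    [string]$Address,     # value of VirtualMachine.Network0.Address
    [string]$SubnetMask,  # value of VirtualMachine.Network0.SubnetMask
    [string]$Gateway      # value of VirtualMachine.Network0.Gateway
)

# Apply a static IPv4 configuration to the adapter named "Ethernet0".
# The adapter name is an assumption - adjust it to match your template.
netsh interface ipv4 set address name="Ethernet0" static $Address $SubnetMask $Gateway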
Depending on the desired functionality, you need to choose the appropriate network
profile to be created.
Figure 7-1 External network profile configuration

7.1.1. Creating an external network profile

In order to create an external network profile, you need to complete the following steps:

1. Navigate to the page Infrastructure > Reservations > Network profile.


2. Click on the link New Network Profile > External on the right-hand side of the
screen. The configuration dialog opens (see Fig. 7-1).
3. Assign a Name for the network.
4. Define the Subnet mask.
5. The other fields (Gateway, Primary DNS, Secondary DNS, DNS Suffix, DNS
search suffix, Preferred WINS) are all optional.

Figure 7-2 Network profiles (NAT network and routed network)

6. Click on the tab IP Ranges to configure IP address management. Please note that
this step is optional. There are different ways to configure the IP addresses; you can
define a fixed range, you can configure them one by one, or you can upload a CSV
file containing the IP addresses. If you do not configure IP ranges, vRealize
Automation will rely on a static IP or on DHCP for assigning IP addresses.
7. Click OK to save your network profile.

7.1.2. Private network profiles

As the name implies, private networks do not have upstream (north-south) traffic or
routing during deployment. The created networks are connected to a deployed edge
gateway, which in turn can provide east-west routing (however, with no connectivity to
an external network). The network architecture itself is depicted in Fig. 7-2.

Figure 7-3 Private network profile

To create a private network profile, please perform the following tasks:

1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select Private. The configuration page opens (see Fig. 7-3).
2. Assign a Name for the network.
3. Define a description (optional).
4. Specify a subnet mask (for example 255.255.0.0).
5. Specify whether you want DHCP enabled. If yes, provide values for the IP
range start and IP range end textboxes.
6. If you want to configure a list of assignable IP addresses, change to the IP Ranges
tab.
7. Click OK to save your private network profile.

Figure 7-4 Multi-machine script execution

7.1.3. Routed network profiles

As already mentioned, routed network profiles are based on NSX and are only used with
multimachine blueprints. At runtime, vRealize Automation defines networks by means of
routed network profiles. The routed network is connected to an external network by means
of a deployed edge gateway. Hence, when creating a routed network profile, an external
network profile must be specified first. Furthermore, because routed networks create
further networks, they must also define IP ranges. These ranges are used to allocate a
range of IPs to a specific machine within a multimachine blueprint. Fig. 7-5 depicts a
routed network profile.
A typical use case for routed network profiles is the provisioning of multi-tiered
applications. For example, with security in mind, the individual servers of a particular tier
should be placed in different networks.
Figure 7-5 Routed network profile

To create a routed network profile, carefully work through the following tasks:

1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select Routed. The configuration page opens (see Fig. 7-5).
2. Assign a Name for the network.
3. Define a description (optional).
4. Use the dropdown list External network profile to associate your network with a
physical network.
5. Specify a subnet mask (for example 255.255.0.0).
6. Configure the Range subnet mask. The range subnet mask determines how many
networks will be created. For example, entering 255.255.240.0 results in 16
networks, because the third octet (240 = 11110000 in binary) borrows 4 bits for
the network portion (see the worked example after this list).
7. Assign a Base IP address to specify the first network.
Figure 7-6 NAT network profile

8. Review or provide input for the Primary DNS, Secondary DNS, DNS suffix, DNS
search suffix, Preferred WINS and Alternate WINS. Once you have connected
your profile with an external network profile, these values automatically get pre-
filled.
9. Change to the IP Ranges tab to specify and review the IP addresses.
10. Click OK to save your routed network profile.
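
To illustrate the relationship between the subnet mask and the range subnet mask from steps 5 and 6, consider the following worked example (all addresses are hypothetical):

Subnet mask:        255.255.0.0   (/16)
Range subnet mask:  255.255.240.0 (/20; third octet 240 = 11110000 in binary)
Borrowed bits:      20 - 16 = 4, giving 2^4 = 16 routed networks
Network size:       2^12 = 4096 addresses per network

With a Base IP address of 192.168.0.0, the networks created would be 192.168.0.0/20, 192.168.16.0/20, 192.168.32.0/20, and so on.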

7.1.4. NAT network profiles

The final kind of network profile that can be created is the NAT network profile. A NAT
network has similarities to a routed network, in that it is connected to an external network
via an edge gateway. NAT network profiles come in two different flavors:

• Within a One-To-One NAT network, each machine is assigned two different IP
addresses, an internal one and a public one.
• A One-To-Many NAT network offers only one external IP address on the NAT
router. Internal machines do not have a public IP address.

A NAT network is best suited to deploying identical networks, for example when you
want to provide a training environment for students, or when you need identical networks
for production, integration and testing.
To create a NAT network profile, we must complete the following:

1. Navigate to Infrastructure > Reservations > Network Profiles and hover over New
Network Profile and select NAT. The configuration page opens (see Fig. 7-6).
2. Assign a Name for the network.
3. Define a description (optional).
4. Use the dropdown list External network profile to associate your network with a
physical network.
5. Choose if you want to create a One-To-One or a One-To-Many NAT network type.
6. Review or provide input for the Primary DNS, Secondary DNS, DNS suffix, DNS
search suffix, Preferred WINS and Alternate WINS. Once you have connected
your profile with an external network profile, these values automatically get pre-
filled.
7. If your network profile is of type One-to-Many, it is possible to define IP ranges. In
that case, define an IP range start and an IP range end as well as the Lease time
(seconds) (optional).
8. If you want to configure a list of assignable IP addresses change to the IP Ranges
tab.
9. Click OK to save your NAT network profile.

Figure 7-7 Multi-machine overview

7.2 Introduction to multimachine blueprints


Multimachine blueprints define a single blueprint entity, which in turn consists of other
blueprints. These blueprints are called component blueprints (the enclosed machines are
called component machines). The lifecycle of a multimachine blueprint resembles that of a
normal blueprint. Furthermore, multimachine blueprints can be configured to run scripts
during their lifecycle (for example, at the time of the provisioning, or after turning a
machine on or off). Fig. 7-7 depicts how a multimachine blueprint is composed of
component blueprints.
At runtime, it is the Distributed Execution Manager (DEM) that executes the
multimachine blueprint workflows.
Requesting multimachine deployments is similar to requesting single machines. End users can adjust
the settings for provisioning. For example, how many machines should be deployed and
which hardware resources (CPU, memory and storage) should be used. Of course more
fine-grained customization, using custom properties, is possible too.

7.2.1. Comparison with vCloud Director vApp

Historically, vCloud Director already had the ability to provision a set of different virtual
machines, along with creating new networks. For this purpose, the concept of a vApp was
introduced. vApps are not known within vRealize Automation, but they have similarities
with multimachine blueprints. That is reason enough to show a comparison of both:

vRealize Automation multimachine blueprint | vCloud Director vApp
Multimachine blueprints act as a container for blueprints | Based on vApp templates
Provisioning of physical, virtual and cloud machines | Only cloud machines can be deployed
Machines are managed by vRealize Automation | vApps can be managed by vRealize Automation
Access: Microsoft Remote Desktop, SSH, VMware Remote Console | Access: Microsoft Remote Desktop, SSH, VMware Remote Console
Network provisioning based on VMware NSX | Network provisioning is an out-of-the-box capability of vApps
After provisioning, additional machines can be added | Machines cannot be added after provisioning
Boot order of the virtual machines is defined within the multimachine blueprint | The vApp determines the start order
Table 7-1 Multi-machine blueprint vs. vCloud Director vApp

7.2.2. Multimachine blueprint preparations

Before setting up a multimachine blueprint with dynamic network provisioning, there are
a couple of steps to be carried out in advance:

• First of all, a transport zone has to be configured.


• An NSX or vCNS endpoint has to be created.
• A network profile has to be set up.
• Reservations must be configured accordingly.
• The blueprint can be configured.
7.2.3. Configuring a transport zone

When creating a multimachine blueprint in vRealize Automation, you must set up a
transport zone in order to use a NAT, a private or a routed network. The transport zone
must be configured on both the reservation and the blueprint. With the help of the
transport zone, dynamic networks can be created at runtime. However, before being able
to set up a transport zone, you need a working VXLAN configuration. Setting up the
transport zone is part of the NSX configuration, so we will skip these configuration steps
here.
There is no need to create any virtual wires or edges. They will be automatically created
by vRealize Automation during provisioning time.

7.2.4. Creating an endpoint

The NSX endpoint is created via VMware Orchestrator. Therefore, a running Orchestrator
instance that is connected to vRealize Automation via an endpoint is required. We will
talk about Orchestrator in greater detail in chapter X. For now, we will only describe the
basic steps required in order to integrate NSX with vRealize Automation:

1. Log into vRealize Orchestrator as an administrator.


2. Select the Workflow tab and navigate through the library to the NSX > NSX
workflows for vCAC folder.
3. Execute the Enable security policy support for overlapping subnet workflow.
4. Select the NSX endpoint as the input parameter for the workflow.

Once the workflow has been run, the Distributed Firewall (DFW) rules (defined in the
security policy) are applied. However, these are only applied to the vNICs of the security
group members to which this security policy applies.

7.2.5. Setup of network profiles

The next step within the configuration is to setup network profiles. We discussed network
profiles at the beginning of the chapter, so we shall not spend more time on them.

7.2.6. Configuring reservations

After the network profiles have been created and configured, you have to continue with
the configuration of the reservations. Once the endpoints have been correctly configured
for NSX, transport zones are discovered by an inventory scan, so that at runtime vRealize
Automation is able to create the appropriate networks. In addition, you can configure
security groups, which can be thought of as firewalls between the different provisioned
networks.
Run through the configuration, performing the steps as follows:

1. Navigate to Infrastructure > Reservations > Reservations.


2. Select and edit the reservation to which you want to apply the appropriate
configuration.
3. Change to the Network tab.
4. As a first step, you must assign the external network. Within the Network Paths
table, choose a network and select the appropriate network profile.
5. Select the Transport Zone.
6. When required, select a security group within the Security group list.
7. If you need a Routed Gateway, activate the checkbox within the Routed Gateway
table, choose a network path and assign an external network profile.
8. Click on the Save-icon.
9. Save your reservation with OK.

7.2.7. Creating a multimachine blueprint


Once the preliminary work has been carried out, the multimachine blueprint itself can be
created and configured. As mentioned already, a multimachine blueprint acts as a
container for blueprints. Therefore, you must create these blueprints in advance.
Compared to normal blueprints, multimachine blueprints have the following differences:

• You can define the start, as well as the shutdown order, for each of the blueprints
enclosed in the multimachine blueprint.
• For each of the enclosed blueprints, the network to which the machine should be
deployed can be assigned.
• Within the blueprint you can assign a transport zone, network profiles and routed
gateways.
• The Distributed Execution Manager (DEM) can invoke scripts during the lifecycle
of a multimachine deployment. There are six different hooks for registering these
scripts (see following table).

Phase | Description
Pre-Provisioning | Run a script after the request has been approved, but before provisioning begins
Post-Provisioning | Run a script after provisioning (and after the machines have been turned on)
Pre-startup | Run a script before the machine is switched on
Post-startup | Run a script immediately after the machine has been turned on
Pre-shutdown | Run a script before the machine is shut down
Post-shutdown | Run a script after the machine has been shut down

Table 7-2 Multi-machine DEM hooks

You must perform the following steps in order to create a multimachine blueprint:

1. Navigate to Infrastructure > Blueprints > Blueprints.


2. Click on New Blueprint > Multimachine.
3. Fill out all required information on the first Blueprint Information tab.
4. Change to the Build Information tab.
5. For each blueprint that you want to add to the multimachine blueprint, click the
Add Blueprint icon.
6. Select the blueprint to be added and click on OK.
7. Once the blueprint has been added, click the Edit link on the blueprint’s row within
the blueprints table.
8. Configure the following network settings:
a. Click on New Network Adapter and assign a network profile to the different
network adapters. Choose whether you want to use a static IP address or DHCP
for the network settings.
b. Once you have configured a transport zone on the network tab, you can define
a load balancer for your machines on the Load Balancer tab.
c. If you want to configure firewall settings, move on to the Security tab. You
can apply Security policies, add components to Security groups and
configure Security tags.
d. Click on OK to save your network settings.
9. Configure the Lease (days). A minimum and a maximum value can be provided
(blank values mean no expiration date).
10. Switch to the Network tab.
11. If you have configured vRealize Automation for NSX/vCNS, use the Transport
zone dropdown list to select your transport zone.
12. Assign the network profiles.
13. Select a reservation policy for the Routed Gateway.
14. If you are using NSX, you can also activate App Isolation.
15. Change to the Scripting page if you want to configure any workflows that should
run during the multimachine’s lifecycle.
16. On the Actions page, you can configure the actions that should be permitted for
the end users.
17. Click OK to save the blueprint.

Like other blueprints, the multimachine blueprint also needs to be published. Only once
you have done this can you add it to your service catalog.

Hint: Upload scripts


If you want to configure your multimachine blueprint to run a script on the Scripting tab,
you first have to upload the script file to the Model Manager. This can be done via the
CloudUtil.exe command. CloudUtil.exe is installed as part of the vRealize
Automation Designer. You will find the installation files on the IaaS installer page
(https://<vRealize-Automation-appliance.domain.name>:5480/installer). Once you have
installed the vRealize Automation Designer, go to the directory C:\Program Files
(x86)\VMware\vCAC\Design Center and run the following command to upload your
script (replace the script name and file with your own):

CloudUtil.exe File-Import -n MyScript -f MyScript.ps1


As soon as the upload has been completed, the script can be referenced from a
multimachine blueprint.

7.3 Importing machines to vRealize Automation

If you already have running virtual machines, it is possible to bring them under the control
of vRealize Automation. When shifting away from vCloud Director, this is the way to
migrate to vRealize Automation.
We have already mentioned that there is no upgrade path from vCloud Director to
vRealize Automation. However, in this chapter we showed how vCloud Director concepts
can be translated to vRealize Automation concepts. The basic process is the same as when
importing compute resources from other systems. While there is no official guide on how
machines should be migrated, the following steps, which are based on cloning the
machines, have proven useful:

1. Make sure you have sufficient room on your storage for cloning the machines. It is
recommended that the destination cluster of your clone is already a compute
resource within your vRealize Automation environment.
2. Go to the vSphere Web Client, navigate to your VM and go to the vApp Options
tab and uncheck the Enable vApp Options checkbox. The result is that vCloud
Director properties are removed from the VM (see Fig. 7-8).
3. Navigate to Infrastructure > Compute Resources > Compute Resources, hover
over your cluster and click Data Collection. Trigger an Inventory data collection.
4. Go to Infrastructure > Infrastructure Organizer > Infrastructure Organizer.
5. Click Next.
6. Choose the compute resources you want to configure on step 1 and click Next.
7. Within the next step, check the box beside the compute resource that contains the
clone VM.
8. Ensure the compute resource maps to a fabric group and optionally a cost profile
and click Next.
9. In step 3, select the machines that you want to import. For each machine you want
to add, click the Edit icon and associate the machine with a business group. Once
you have finished, click on Next to continue.
10. In step 4, assign a blueprint, reservation and machine owner to the machine. vApps
must be mapped to multimachine blueprints. Click Next to continue.
11. Verify the settings in the last steps and click Finish.

Once the import has been completed, machine owners can see the machines in their
catalog and should be able to log on to those machines.

Figure 7-8 Enable vApp options

7.3.1. Bulk import

If you want to import more than a few machines, bulk import is a viable option. Bulk
imports are useful in a variety of use cases:

• Making global changes to a set of virtual machines, for instance changing a
virtual machine property such as storage path settings.
• Importing unmanaged machines.
• Importing machines into an upgraded deployment.

You can use the bulk import feature from the graphical user interface, or you can use the
CloudUtil command-line interface. We will talk about using the CloudUtil tool later, in
chapter 13. Using the bulk import feature requires both a fabric administrator and a
business group role membership. You can perform the bulk import by executing the
following steps:

1. The first step is to create a virtual machine CSV data file. Navigate to
Infrastructure > Infrastructure Organizer > Bulk Imports and click on the
Generate CSV file button.
2. Provide input for the following options (see Fig. 7-9):
a. Machines: Unmanaged or Managed
b. The Business group for the bulk import (optional)
c. The Owner (optional)
d. A specific blueprint (optional)
e. A resource filter: You can filter on a specific Compute Resource or Endpoint.
3. Click OK to export the CSV file.
4. Correct or complement the CSV file. If there is missing data for the different virtual
machines, you will find entries beginning with “INVALID” or “UNDEFINED”.
The following categories exist and must be reviewed (a hypothetical sample row is
shown after this procedure):
a. #Import (Yes or No): Set to No to skip a virtual machine during the import.
b. Virtual Machine Name: Do not change.
c. Virtual Machine ID: Do not change either.
d. Provide a valid Host reservation (Name or ID).
e. Assign a valid storage (Name or ID).
f. Type in the ID or name of a blueprint.
g. Assign an Owner name.
5. Now the bulk import can be started. On the Bulk Import page, click the New Bulk
Import button.
6. Provide a name for the bulk import.
7. Upload the CSV file.
8. Define the start time for the import.
9. Optionally, define a Delay (seconds) and a Batch size. These values help to
throttle the load when you are importing a large set of virtual machines.
10. Optionally, you can Ignore managed machines, skip user validation (this can
decrease the import time) or specify that you only want to start a test import run.
11. Click OK to start the bulk import.
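
A hypothetical sample row, following the column order of the categories listed in step 4 above (always start from the CSV file generated by vRealize Automation rather than writing one by hand, since the exact column layout may differ between versions):

Yes,web-vm-01,vm-1021,Res-Production-01,datastore-prod-01,CentOS-Blueprint,jdoe@corp.local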

7.4 Summary

This chapter introduced network profiles and multimachine blueprints. Multimachine


blueprints are an important means of provisioning complex environments in vRealize
Automation. Multimachine blueprints are closely interconnected with NSX and can even
dynamically provision networks. However, to enable this, some preliminary work needs to
be carried out. An endpoint for NSX must be configured, a network profile created and
reservations have to be configured accordingly.

Figure 7-9 Generate CSV File for bulk import


8. Working with the Service Catalog

In the previous chapters we have shown you how to create and configure blueprints. Like
the other entities and components in vRealize Automation, the service catalog also needs
careful planning. Before the implementation starts, several issues have to be considered:

• How should the different catalog items be grouped into services?


• Who can access the services and catalog items?
• Which permissions are granted on the catalog items?
• Is there any approval process needed?

The following chapter addresses these issues and demonstrates how to build the service
catalog. Before digging into the implementation itself, the most important points regarding
the service catalog should be reviewed (see Fig. 8-1):

• The service catalog hosts services. So far we have only dealt with IaaS services, but
XaaS services and Application Services can also be published.
• You can navigate to the service catalog by clicking on the Catalog menu tab.
• A single published item (e.g. a blueprint) is called a catalog item.
• Catalog items (e.g. published blueprints) are grouped into services.
• Provisioned resources are accessible within the Items tab.

Figure 8-1 Service catalog overview

• Users can perform actions on items. There is a set of predefined actions (e.g. turn
on/off a machine, reset a machine, connect via remote desktop connection), but it is
also possible to implement your own actions using Orchestrator and to associate them
with an item or blueprint.
• Entitlements describe permissions on a service, catalog item or action.
• Before being able to add a blueprint to the service catalog, it must have been
published.
8.1 Configuring the service catalog

The most important steps to configuring a service catalog are:


• Create services.
• Manage catalog items.
• Create entitlements and assign permissions.

On the following pages, we will show how these tasks can be achieved.

8.1.1. Creating services

In order to create a new service within the catalog, the tenant administrator or the service
architect role is required. If this condition is fulfilled, we can perform the following tasks:

1. Navigate to the Administration > Catalog Management > Services page.


2. Click the Add-icon in the header of the Services table. The appropriate window
opens (see Fig. 8-2).
3. Assign a Name for the new service.
4. Provide a Description for the service.
5. Optionally, you can upload an icon for the service (the icon will be displayed within
the service catalog).
6. Set the Status to Active (an inactive status will prevent the service from appearing
in the service catalog).
7. The Hours field is optional as well. It specifies the timeframe when support is
available for the service.
8. Specify the service owner in the Owner field.
9. Define the Support Team for the service.
10. If there is a downtime due to maintenance, you can specify a time interval in the
Change Window settings.
11. Click on Add to save the service.
Figure 8-2 Add a service

8.1.2. Managing catalog items

Once a service has been created, it is possible to add catalog items to it. This requires a
tenant administrator, service architect or business group manager role membership:

1. Navigate to the Administration > Catalog Management > Services page.


2. Select the service to which you want to add catalog items (see Fig. 8-3).
3. Click the Manage Catalog Items button.
4. Click on the [+]-icon to add a catalog item.
5. From the Select Catalog Items dialog box, choose the blueprints to be added and
click Add.
6. Click the Close button.

Figure 8-3 Assign permissions

8.1.3. Creating entitlements and assigning permissions

The next step within the configuration is to assign the appropriate permissions. vRealize
Automation uses entitlements to assign permissions on catalog items to users and
groups. An entitlement stores the following information:

• Basic entitlement information (e.g. name, description)


• A business group
• The users and groups
• Entitled services
• Entitled catalog items
• Entitled actions

To create an entitlement, follow the steps as described here:


1. Log in as a tenant administrator or business group manager.
2. Navigate to the Administration > Catalog Management > Entitlement page.
3. Click the [+]-Add icon to add a new entitlement.
4. Assign a Name to the new entitlement.
5. Provide a Description of the entitlement.
6. Configure an Expiration Date for the entitlement.
7. Set the Status to Active (an inactive status will prevent the entitlement from working).
8. Choose a Business Group for the entitlement.
9. Specify the Users & Groups to whom the permissions will be assigned.
10. Click on Next to navigate to the Items & Approvals page (see Fig. 8-4).
11. Within the first column in the Entitled Services area, click the [+] button to add a
service.
12. Check all the services that should be added to the entitlement.
13. Optionally use the Apply this Policy to selected items dropdown list to choose an
Approval Policy.
14. Click OK.
15. Within the second column in the Entitled Catalog items area, click the [+] button
to add a catalog item.
16. Check all the blueprints that should be added to the entitlement.
17. Optionally use the Apply this Policy to selected items dropdown list to choose an
approval policy.
18. Click OK.
19. Within the third column in the Entitled Actions area, click the [+] button to add a
catalog item.
20. Check all the actions that should be added to the entitlement.
21. Click OK.
22. Click Add to save and end your entitlement configuration.

Figure 8-4 Configure entitlements

Hint: Snapshot privilege


There are many different options on the entitlement configuration page; however, the
snapshot privilege is missing. Unfortunately, snapshots cannot be controlled via
entitlements; they are only activated via the blueprint options. This means that snapshots
can only be activated or deactivated globally on a blueprint.
If there is more than one entitlement with associated approval policies, you should
consider the order in which they are evaluated. The order can be adjusted by completing the following:

1. Navigate to the Administration > Catalog Management > Entitlements page.


2. Click the Prioritize button.
3. Rearrange the order of the entitlements.
4. Click Update or Update & Close, respectively.

Hint: Entitlement and business groups


Entitlements apply to business groups (that’s why you can only choose users from within
the selected business group when creating an entitlement). So if you want to entitle
different business groups for a shared blueprint, you should create an entitlement for each
of these business groups. You can also consider creating different entitlements for a
business group to assign different sets of permissions.

8.2 Approval processes

Like many other services in a company, requesting a service from the service catalog
needs approval from time to time. vRealize Automation supports approval processes for
the requesting of machines. Approval policies can have one or more levels of approval.
Each level specifies one or more approvers and the condition that triggers the approval.
Specifying conditions for approvals can be quite a powerful tool. For example, you can
specify that machines with low costs are provisioned without any approval, whereas
expensive machines need manual approval before provisioning can proceed.
When specifying approvers, specific users or groups can be selected. Alternatively, if
approvers are not known beforehand, they could also be chosen dynamically from the
request itself. When choosing a group for approval, you must also specify whether anyone
from the group is allowed to approve or whether all members of the group have to
approve.
Figure 8-5 Approval polices

Fig. 8-5 shows a sample approval process with three stages. The first stage specifies QA-
approvers, the second stage RD-approvers and the third stage vRealize Admin approvers.
Only if an approver at each stage has granted approval can the provisioning begin.
It is worth noting that the levels specified in an approval can be of different types:

• Pre-approval levels specify users and groups who have to approve a request before
provisioning.
• Post-approval levels specify users and groups who have to approve a request after
provisioning. While it may not be very common to specify post-approval levels,
there might be use cases from time to time (for example, if somebody has to check
that a machine is working correctly, or the machine has to meet some constraints).

Setting up an approval policy requires a tenant administrator or approval administrator


membership. The procedure itself involves three distinct steps:

1. Specify the approval policy information.


2. Create one or more approval levels.
3. Configure an approval form.

8.2.1. Specifying approval policy information

When creating an approval policy, the first step is to define the approval policy type,
name, description and status. There are different kinds of approval policies:

• Approval policies for requests


• Approval policies for catalog items
• Approval policies for resource actions

Actually, these approval policies do not differ very much. Depending on the type of
approval policy, different information is shown or can be requested from the approver on
the approval form.
Perform the following steps to create an approval policy:
1. Change to the Administration > Approval Policies page.
2. Click the [+]-icon.
3. Choose the appropriate approval policy type.
4. Click OK.
5. Provide a Name and optionally a Description for the approval policy.
6. Set the Status to Active in order to be able to use it.

8.2.2. Creating one or more approval levels

The next step involves setting up the different levels:

1. On the Pre-Approval or Post-Approval page, click the [+]-icon.


2. Provide a Name and optionally a Description.
3. Select if the approval is based on a condition. You can choose between Always
Required or Required based on conditions. If you select the latter, you can build
a clause of conditions, which can be linked together with the operators ‘and’, ‘or’
and ‘not’.
4. From the Who are the Approvers section, choose if you want to select Specific
Users and Groups or Determine approvers from the request.
5. Select if Anyone can approve or All must approve.
6. Click Add.
7. Click Add again.
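
As a sketch, a condition-based pre-approval level might look as follows. All names and threshold values here are hypothetical, and the fields actually available for conditions depend on your environment:

Name:       Large machine approval
Condition:  Required based on conditions
            (CPUs greater than 4) or (Memory (MB) greater than 8192)
Approvers:  Specific Users and Groups – group vRA-Approvers
Mode:       Anyone can approve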

8.2.3. Configuring an approval form

Depending on the approval policy type you have selected, it is possible to configure the
approval form. Approvers can change the values of system properties for machine
resource settings such as CPU, lease and memory, as well as custom properties. If any
custom properties are changed, the values defined in the blueprint or at any other level
will be overridden. Approval forms can be configured as follows:

1. Depending on whether you want to configure a Pre-Approval or Post-Approval policy type,


select the level which should be configured and change to the Approval Form tab
(see Fig. 8-6).
2. Select the system properties to be configured during runtime.
3. Add any custom properties which should be allowed to be configured during the
pre-approval phase.
4. Click Add.

8.3 Using the service catalog

At this point in time, we can finally use the service catalog to request and provision new
resources. When end users are logged in to the service catalog, they usually see the
following tabs within the user interface:

Figure 8-6 Configure approval policy levels

Figure 8-7 Configure notifications

• Home screen - this page can host different widgets, which show the most important
information to users. By default, only the My Inbox widget is shown. However, end
users can customize the page and add additional widgets to the home screen.
• Catalog – the service catalog.
• Items – the resources that have been provisioned.
• Requests – all the requests that have been issued and are currently being built or
have failed.
8.3.1. Configuring notifications

From time to time, users have to interact with vRealize Automation even when they do not
want to provision new resources or use existing items, for example when they are part of
an approval process. In these cases, it is very useful to receive notifications via email.
Before such notifications can be activated (on a per-user basis), they must be globally
configured:

1. Log in to vRealize Automation with a tenant administrator role membership.


2. Navigate to the page Administration > Notification > Scenarios.
3. Review the scenarios for which to send notifications (see Fig. 8-7).

Figure 8-8 Request a catalog item

Once this has been done, users can subscribe to notifications in their user preferences:

1. Navigate to the Home screen.


2. On the upper right hand side, click the Preferences link.
3. If you want to assign a delegate, type the name of the delegate in the Search box
and click the Search-icon.
4. Click Apply.
5. In the Notifications area, select a language for the notifications.
6. Activate a protocol (e.g. email).
7. Click Apply.
8. Click Close.

8.3.2. Requesting resources

All the published services can be found within the Catalog tab. Requesting a machine is a
relatively easy process; users only have to click on the service they want to provision.
Once a catalog item has been chosen, a new window opens (see Fig. 8-8). Users can see
which workflows will be used to provision a machine. They can also see its description
and daily cost. Furthermore, users can provide input for the following:

• How many Machines will be provisioned?


• The CPUs for the machine.
• The amount of Memory (MB).
• A Description.
• The Owner of the Machine (normal users can only request machines in their own
name; support users and business group managers can request machines on behalf
of other users).
• The Reason for Request.

Figure 8-9 Monitor requests

Custom properties can be seen and edited by clicking on the Properties tab. When
requesting a service based on a multimachine blueprint, the GUI differs slightly:

• For each enclosed blueprint, you can configure how many machines should be
deployed.
• Custom properties defined at the multimachine level override custom properties
defined on a normal blueprint.

Once you have provided all the necessary input, you can submit the request. Saving a
request does not start the provisioning - it only saves the request.

Figure 8-10 Approve requests

8.3.3. Monitoring requests

Once a request has been submitted, vRealize Automation immediately starts the
provisioning workflow. You can see its current status by clicking on the Requests menu
(see Fig. 8-9). To keep track of your own requests, you can apply a filter using the
dropdown lists Submitter and Filter by State. Furthermore, you can see the provision
details of a request by clicking the View Details button.

8.3.4. Approving requests

If you are a member of an approver group and a machine with an approval policy has been
requested, you will be able to see the incoming request within your inbox on the home
screen. An approver can open the appropriate links from the inbox and view the details of
the request (see Fig. 8-10).

Figure 8-11 Catalog item details

8.3.5. Managing virtual machines

Machines that have been successfully provisioned can be managed via the graphical user
interface of vRealize Automation (Items tab). vRealize Automation will show the
machines based on the user role membership (user, support user or business group
manager). Fig. 8-11 shows the user interface. By default, users can perform the following
actions on a machine:

• Create snapshots
• Configure the machine
• Change the lease time
• Reprovision the machine
• Expire the machine
• Install the VMware tools
• Connect by using RDP
• Connect by using VMware Remote Console
• Connect via SSH
• Power On, Off, Restart

The actual actions available at runtime differ based on the permissions granted to the user.
You can also provide additional actions via Orchestrator.

8.3.6. vRealize Operations integration

If you have a running instance of vRealize Operations 6 or higher, you can also benefit
from the integration between these two products. The Health Status badge on the
right-hand side of the machine details comes directly from vRealize Operations. However,
before the health status can be displayed, a vRealize Operations endpoint must be configured. Work
through the following steps to do this:

1. Navigate to the page Administration > Tenant Machines > Metrics Provider
Configuration (see Fig. 8-12).
2. Activate the vRealize Operations Manager endpoint checkbox.
3. Provide the URL for the endpoint.
4. Type in a Username and a Password for the vRealize Operations instance.
5. Click on Test Connection and if this succeeds click on Save.

8.3.7. Releasing a machine

If a machine is not needed anymore, it can be released, even when there is lease time
remaining. As mentioned before, there is a difference between Expire and Destroy.
While expiring sets a machine to archive mode (the machine stays switched off until the
archive period has elapsed and is eventually deleted), destroying a machine immediately
releases all of its resources.
Figure 8-12 Metrics provider configuration

8.4 Summary

This chapter showed us how to configure the service catalog. We also covered the creation
of services, how to configure catalog items and set up appropriate permissions. In
addition, we demonstrated how to implement approval policies.
9. Reclamations

Previously, installing new machines was quite a tedious task. It was not uncommon for
requesters of machines to wait several days before someone from IT had the time to take
action. Virtualization, automation and tools like vRealize Automation make it easy to
provision new machines. The provisioning of machines can now be realized within a couple of
minutes, or even just a few seconds when using techniques such as linked clones.
However, this ease of provisioning has also created some new drawbacks. It
can lead to people requesting too many machines without really considering the need. As a
result, many more machines might be created than are actually required, and lots of
these are then forgotten (especially when no pricing is implemented). This
glut of machines can prove costly in terms of resource management.
vRealize Automation already provides some “countermeasures” against such
behavior. We have already discussed how to configure lease times so that machines expire
automatically. Another measure is the ability to charge for machine costs, thus
making requesters more aware of the impact of their requests.
Due to a lack of resources, however, a situation still might arise where it is not possible
to provision any new machines. In that case, machines need to be destroyed in order to
make room for new machines. However, which machines can be safely destroyed? That
question is not easily answered, as in most cases administrators have no knowledge of a
machine’s use. Questions arise such as why it was provisioned, what its function is, or
whether it even interacts with other machines.
vRealize Automation can also help with this situation. As discussed earlier, vRealize
Automation regularly collects information from all virtual machines, including its runtime
behavior. It can help identify machines that are idle (in terms of CPU, memory, hard disk
and network traffic). Once such machines have been located, we can ask the machine
owners if the machines are still required. Depending on the outcome, resources can be
released. This concept, together with a built-in workflow, is called reclamation in vRealize
Automation.
This chapter will give you some insight into reclamations and how to start a reclamation
workflow. If you also have vRealize Operations running in your environment, vRealize
Automation can interact with it and give you more detailed machine information.
Figure 9-1 Reclamation overview

9.1 Reclamation workflow overview

Fig. 9-1 depicts the entire workflow at a basic level. There are two different reasons to start a
reclamation workflow:

• Reservations have reached their capacity due to limited resources. Before


provisioning new machines, old ones must be destroyed.
• Costs should be optimized.

Figure 9-2 vRealize Operations capacity configuration

The basic workflow can be described as follows:

1. First, idle machines are identified. The administrator then sends a reclamation
request to the machine owner and asks whether the machine is still required or
whether it can be destroyed.
2. If the machine owner states that the machine is still in use, the administrator can
initiate a new reclamation request on another idle machine.
3. If the machine owner approves the reclamation request, the machine lease will
instantly expire. If there is an archival period set, the machine will only be switched
off and deleted later, as per policy. Otherwise, its resources will be released
immediately.
We have already discussed the interaction of vRealize Automation with vRealize
Operations. For example, we demonstrated how users are able to display the current health
status of a provisioned virtual machine (the actual health data comes from vRealize
Operations). vRealize Operations can also help us by running idle resource scans to
identify candidates for shut down. vRealize Operations can even initiate a reclamation
workflow, once idle machines have been found.
When identifying virtual machines for reclamation, the tricky question is what exactly
qualifies a machine as such. Fig. 9-2 shows how to configure vRealize Operations with
settings that feed into decision-making. Values such as the minimum acceptable network
IO, CPU usage and datastore IO can be taken into account when flagging a VM based on
an overall percentage of idleness. However, remember to configure an endpoint for
vRealize Operations in advance, as discussed in chapter 8.
These techniques do not only apply to idle VMs; they can also be used to identify
oversized VMs and underutilized disk space.

Figure 9-3 Advanced search for reclamation

9.1.1. Identifying unused machines in vRealize Automation

If you need to identify unused machines, you must first have membership of the tenant
administrator role. The process can then be started by navigating to the Administration >
Tenant Machines > Reclamations page. This page displays all machines within the
tenant. However, there is also an Advanced Search for better identifying machines (see
Fig. 9-3). There are many search options – among these:

• CPU Usage
• Memory Usage
• Disk Usage
• Network Usage
• Complex metric – this uses vRealize Operations to identify virtual machines.
Any VM that shows up on the vRealize Operations Idle VMs report will appear in
the list of VMs that can be reclaimed.
Figure 9-4 Reclamation request

Figure 9-5 Reclamation request 2

Once you have selected a machine for reclamation, you can start the reclamation process
by clicking on the Reclaim Virtual Machine button (see Fig. 9-4). Each submitted
reclamation request asks the machine owner if the machine is still in use. Unfortunately,
from time to time, machine owners do not answer these requests. In this case, the lease
period can automatically be decreased. You can also specify how many days vRealize
Automation should wait before doing this in the Wait before forcing lease (days) input
field. Do not forget to review and modify the New lease length (days) and Reason for
request input fields.
You can find all reclamation requests that are in progress on the Administration >
Reclamation > Reclamation Requests page. There are different statuses possible for a
request: Pending, Approved or Rejected.
After you have submitted a request, the machine owner will be notified of the request
and is able to open it in their inbox (see Fig. 9-5). The machine owner can answer the
request directly by choosing one of the following three answers:

• The user can answer by clicking the Release for Reclamation button. This means
the machine is not needed anymore and hence expires immediately. If no archive
period is defined, the underlying resources are instantly released.
• If the machine is still in use, the machine owner can reply with Item in Use. The
reclamation workflow finishes without releasing any resources.
• If the machine owner does not reply to the reclamation request within the defined
period, the lease time will be adjusted in line with the reclamation request settings.

9.2 Capacity reports

As well as requiring the ability to release resources, it is crucial for administrators to be


able to proactively monitor the environment to identify capacity issues and cost-saving
opportunities. vRealize Automation offers a set of reports, which show capacity issues and
potential cost savings. These reports can be shown on the home screen by adding
appropriate portlets. To add a portlet, work through the following steps:

1. Navigate to the Home page.


2. Click on the Pencil menu on the right hand side of the screen and choose Add
Portlets.
3. Select a portlet to be added to the home screen.

vRealize Automation offers the following reports for capacity and cost savings:

• IaaS Capacity Usage by Blueprint: Displays the number of machines provisioned


from each blueprint and the total resources used by those machines.
• IaaS Capacity Usage by Compute Resource: Displays the number of
machines provisioned on each compute resource and the total resources used by
those machines.
• IaaS Capacity usage by Group: Displays the number of machines owned by users
in each business group and the total resources used by those machines.
• IaaS Capacity Usage by Owner: Displays the number of machines for each owner
and the total resources used by those machines.
• IaaS Reclamations Savings by Owner: Displays cost savings achieved by
reclaiming machines sorted by owner.
• IaaS Reclamations Savings by Group: Displays cost savings achieved by
reclaiming machines sorted by business group.
• IaaS Chargeback by Allocated Resource by Group: Displays the cumulative cost
over time of provisioned machines sorted by business group
Figure 9-6 IaaS capacity usage by compute resource widget

Once the portlets have been added to the home screen, tenant administrators have the
ability to monitor and review the reports online. They can also download the report data
as a CSV file (see Fig. 9-6).
Alternatively, you can review idle machine reports in vRealize Operations by performing
the following steps:

1. Log in to vRealize Operations by navigating to https://vrops-host.domain/vcops-


web-ent.
2. Click Content in the navigation area on the left-hand side.
3. Choose Reports.
4. Select one of the following reports:
• Idle VMs report.
• Powered Off VMs report.
• Oversized VMs report.
5. Click the Run Template button in the Report Template header.
6. Choose an object group to define the basis for the report and click OK.
7. Click on Generated reports.
8. You can download the report as CSV or PDF.

9.3 Summary

This chapter demonstrated how deployed resources within vRealize Automation can be
monitored. We went into detail regarding reclamations, which are an important tool in
helping you reclaim resources (when your capacity is getting low or if you need to reduce
costs). There are several different reports available in vRealize Automation. However, if
you have vRealize Operations, you can also configure an endpoint to obtain more detailed
reports.
10. Custom Properties and Build Profiles

vRealize Automation uses a set of built-in workflows to provision machines. We have


already discussed these workflows in the blueprint chapters. We also outlined that these
provisioning workflows can be customized with the help of custom properties. Custom
properties are a powerful concept within vRealize Automation. Many use cases and
functionalities can be implemented by working with them. That’s reason enough to
provide an in-depth overview of custom properties and (related) build profiles.
Furthermore, we will demonstrate some common use cases where custom properties
can really help to implement powerful functionality.

10.1 Custom properties basics

Before digging into the details, we first want to briefly sketch out how customization
happens in vRealize Automation. Customization can be carried out at different levels (see
Fig. 10-1):

• By using custom properties and build profiles, you can perform simple
customizations in a machine’s lifecycle. This allows you not only to customize the user
interface for requesting a machine, but also to customize all stages of the lifecycle
(requesting, provisioning, managing, and retiring). Custom properties can easily be
defined via the graphical user interface.
• If the requirements become more complex and there is some kind of logic required,
there are different tools available to help you in vRealize Automation: vRealize
Orchestrator, the vRealize Automation Designer and the Advanced Service
Designer.

• Alternatively, if very complex special workflows are needed or the data model
within the Model Manager must be changed, you can create your own workflows in
Visual Studio. However, you must also purchase the Cloud Development Kit (CDK)
for it.

This chapter only addresses custom properties and build profiles. The other tools will be
described later in this book.
Figure 10-1 vRealize Automation extensibility

10.1.1. Machine lifecycle

As mentioned previously, every machine has a lifecycle within vRealize Automation. The
most important stages are depicted in Fig. 10-2. As customizations can apply to each one
of these stages, it makes sense to discuss them in the following sections:
10.1.2. Request phase

During the request phase, users provide all kinds of input required for provisioning. We
have already demonstrated how to request a machine from the service catalog. However,
using custom properties, you can ask for additional input from the end user and employ it
during the provisioning. For example:

• Override the hostname of a machine.


• Decide at which location a machine should get deployed.
• Define a cost center.
• Configure to which network a machine should be connected.
• Place a machine within a specified storage.
• Assign a script to be invoked at the end of the provisioning.
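
For instance, some of these use cases map to predefined custom properties. The following names should be verified against the Custom Properties Reference of your vRealize Automation version:

Hostname                             – prompts for or overrides the machine’s hostname
VMware.VirtualCenter.Folder          – places the VM object in a specific vCenter folder
VirtualMachine.Network0.ProfileName  – selects the network profile for the first NIC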

Figure 10-2 Machine lifecycle

10.1.3. Approval phase


Once a request has been submitted, the provisioning enters the approval phase (if an
approval policy has been assigned to a blueprint). Approvers see all the details concerning
the requested machine (including all hardware resources, lease time and costs). Approvers
can also modify certain settings, as well as custom properties.
10.1.4. Provisioning phase

This is the phase in which the machine is built. vRealize Automation supports different
ways of building a machine, e.g. cloning, WIM provisioning, Linux kickstart and many
more. The provisioning process can be customized as well. The customization settings are
provided via custom properties. These can be defined statically for each machine build or
entered during the machine request.

10.1.5. Post approval phase

Approvers are able to inspect a machine and review its settings before it becomes
available to the requester.

10.1.6. Manage phase

Machines spend the majority of their time in the manage phase. End users can perform
actions as defined in the blueprint. There are plenty of possible use case scenarios for
customization. However, custom properties are usually not powerful enough to implement
these. Instead, vRealize Orchestrator or Visual Studio is usually used. Some
of the most common use cases are described as follows:

• Additional actions can be implemented and associated with a blueprint or existing


machine. For example, additional software like a virus scanner can be installed.
• Workflows can also run as background jobs. For example, there could be an
Orchestrator workflow, which backs up the machine every night.
• Changes to machines can also happen because of configuration changes. For
example, a machine could be moved to another datastore because of low capacity.

10.1.7. Retire phase

The final stage is the retirement phase. At the end of the lifecycle, all resources will be
released. As already mentioned, there is also a distinction between expiring and deleting.
Once a machine is finally destroyed, plenty of cleanup work may have to be done, for
example releasing IP addresses or unregistering the machine or service from a
Configuration Management Database (CMDB).
10.2 Custom Properties

We have already introduced custom properties when we talked about blueprints. However,
custom properties represent a very important concept within vRealize Automation. That’s
reason enough to dig further into the details of custom properties. There are of course
many use cases for using these custom properties (we will address some of them later in
the chapter in detail):

• Choosing a network at request time.


• Choosing a storage location for provisioning.
• Defining a vCenter virtual folder, where the VM object should be placed.
• Configuring administrator access, for the machine owner, in the VM.
• Configuring antivirus software.
• Consolidating blueprints.
• Defining a portgroup, to which a machine should be connected.
• Definition of snapshot policies.
• Activating monitoring.
• Activating directory cleanup, after removing a VM.

Essentially, custom properties can be viewed as tags. There are a lot of custom properties
predefined, but you can also create your own. Once created, they can be applied to objects
such as blueprints, build profiles and so on. Custom properties are exclusive to the IaaS
components of vRealize Automation. When you create your own custom properties, you
are free to name them whatever you want. However, descriptive names, including namespaces
(the dot “.” acts as a separator), are recommended. If you create your own custom property
and attach it to any object (for example, a blueprint), nothing will happen by itself, as custom
properties act as a kind of metadata. However, the vRealize Automation standard built-in
workflows are aware of the predefined custom properties and contain built-in logic,
which is executed upon detection of such a property. Consequently, if you create
your own custom properties, you need a ‘handler’ to run some kind of logic. Usually,
vCenter Orchestrator is used to implement such logic within a workflow.
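For example, a fictitious company could define its own namespaced properties such as the
following (names and values are purely illustrative):

MyCompany.Network.Location = Nuremberg
MyCompany.Provisioning.CostCenter = CC-4711

An Orchestrator workflow acting as the handler would then read these values and execute
the corresponding logic.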

10.2.1. Order of custom properties

We have already shown you how to add custom properties to a blueprint. However, you
might also have noticed that these custom properties can be configured in other levels of
vRealize Automation. In fact, you can define the very same custom property at different
levels. There is an order to which these properties are evaluated (the first one has the
highest priority):

• Business Group
• Blueprint
• Build profile
• Endpoint
• Reservation
• Compute Resource
• Storage

The idea of defining the same custom property at different levels is that you can configure
some kind of standard behavior and then override it if required. Nevertheless, you should
be careful with this. When a custom property is applied to a blueprint, there is no tool to
find out at which level it has been applied. In fact, if you forget a custom property
somewhere, the provisioning and the other workflows might not work the way you
envisioned.
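As a brief illustration of this evaluation order, assume the hypothetical property
MyCompany.Provisioning.CostCenter is defined twice:

Blueprint: MyCompany.Provisioning.CostCenter = CC-4711
Reservation: MyCompany.Provisioning.CostCenter = CC-1000

The machine will be provisioned with the value CC-4711, because in the list above the
blueprint level has a higher priority than the reservation level.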

10.2.2. Custom property categories

vRealize Automation has a lot of predefined custom properties. There is a special


reference document, which describes all of them[5]. Internally, vRealize Automation treats
custom properties differently, depending on the category:

10.2.3. Read-only custom properties

Read-only custom properties are assigned by vRealize Automation and cannot be


changed. These properties are usually set for values that must be fixed for the whole
lifecycle. The following list enumerates some of the read-only properties:

• VirtualMachine.Admin.UUID
• VirtualMachine.Admin.AgentID
• VirtualMachine.Admin.Name
10.2.4. Internal custom properties

The next category of custom properties is only used internally to help store the metadata
of virtual machines. These properties do not affect the state of a machine. An example
internal custom property is a machine approver. Other internal properties are enumerated
in the following:

• VirtualMachine.Admin.Owner
• VirtualMachine.Admin.Description
• VirtualMachine.Admin.AdministratorEmail
• VirtualMachine.Admin.ConnectionAddress
• VirtualMachine.Admin.NameCompletion

10.2.5. External custom properties

External custom properties store information relating to a virtual machine. Once these
properties have been set, they will not change. If changes are made to virtual machines
e.g. via the vSphere Web Client, vRealize Automation will not update these properties.
This means that the values of external custom properties could be outdated in vRealize
Automation. The following list depicts some examples for external custom properties:

• Hostname
• VirtualMachine.Admin.ClusterName
• VirtualMachine.Admin.ForceHost
• VirtualMachine.Admin.ThinProvision
• VirtualMachine.Admin.AddOwnerToAdmins
• VirtualMachine.Admin.AllowLogin
• VirtualMachine.Storage.Name
• VMware.Memory.Reservation
• VMware.VirtualCenter.Folder
• VMware.VirtualCenter.OperatingSystem
10.2.6. Updated custom properties

The last category is quite similar to the external custom property category. However, the
difference here is that the updated custom properties automatically reflect changes caused
by machine modification. Depending on the custom property, such behavior is quite
important. Consider a scenario where an end user requests a low-end machine from the
service catalog. Due to its limited capacity, the machine is quite cheap. However, if the user
has appropriate vCenter permissions, they could theoretically log in to the vSphere Client
and increase the machine resources. In this case, it is important that vRealize Automation
can detect such hardware modifications. If not, it would be possible for users to continue
paying the original low price, as indicated in the service catalog, instead of a higher
correctly adjusted price.
Based on that scenario, we can conclude that basic hardware settings must also be reflected
in custom properties. These properties are called updated custom properties. Other examples
for this category are:

• VirtualMachine.Admin.Hostname
• VirtualMachine.Admin.TotalDiskUsage
• VirtualMachine.Memory.Size
• VirtualMachine.CPU.Count
10.2.7. Configuration of custom properties

Defining custom properties is pretty straightforward. Fig. 10-3 depicts the configuration of
a custom property at a blueprint level. The following parameters can be defined for a
custom property:

• A predefined value.
• Optionally, the value of a custom property can be encrypted.
• If there is no predefined value, vRealize Automation can prompt the user (Prompt
User).

Fig. 10-4 shows how vRealize Automation adds an additional input field on the request
page, when the Prompt User option has been set.
Please also note that the definition of a custom property is always case sensitive. If there is
any typing error, the custom property will not work.

Figure 10-3 Hostname custom property

Figure 10-4 Customized request form

10.3 Build profiles

In most cases, assigning a single custom property is not sufficient. Instead, a set of custom
properties must be applied to a blueprint. If there are many blueprints, assigning them
individually means a lot of work. In such cases, using build profiles is a viable
alternative to applying many single custom properties. A build profile helps to group
custom properties and allows reusing them. You can define your own build profiles, but
vRealize Automation already includes a predefined set of build profiles:

• ActiveDirectoryCleaningPlugin
• CitrixDesktopProperties
• PxeProvisioningProperties
• SysprepProperties
• VMwareXXXXXProperties
10.3.1. Create build profiles
The build profiles already provided by vRealize Automation may help you in many cases.
However, if you want to perform additional customization, it is quite handy to be able to
create your own build profiles.
Having your own build profiles helps you to increase the manageability of your vRealize
Automation solution. Custom properties can be centrally grouped in build profiles and
store global variables such as Active Directory domain information, scripts to be executed
or any cleanup logic for de-provisioning.
Furthermore, build profiles can also help you to get a grip on ‘blueprint sprawl’. Many
organizations tend to create many different blueprints with only minor differences (e.g.
hardware equipment, services running, etc.). It would be better practice to use a generic
blueprint and then utilize build profiles to adjust it to the requestor’s requirements. Later
in this chapter, we will demonstrate how to achieve such a behavior using build profiles.
Right now, we will take a look at how to create such build profiles in the first place:

1. Log in to vRealize Automation as a fabric administrator.


2. Navigate to the Infrastructure > Blueprints > Build Profiles page (see Fig. 10-5).
3. Assign a Name for the build profile.
4. Optionally define a description.
5. Add the custom properties to the build profile. Click the New Property link. A new
row is added to the Custom Properties table.
6. Enter the Name of the custom property in the first column.
7. Define a Value for the custom property.
8. Check the Encrypted checkbox if needed.
9. If the requestor has to provide an input for the custom property, check the Prompt
User checkbox.
10. Save the row by clicking the Save-button.
11. Add additional custom properties as needed.
12. Save the build profile by clicking OK.

Figure 10-5 Build profile definition

10.4 Property dictionary

As you have already seen, there are many predefined custom properties in vRealize
Automation. You can of course create your own custom properties as well. In any case, if
the value assigned to a custom property is not valid, there will most likely be an error in
any workflow that evaluates these custom properties later on. Consequently, it is crucial to
test your custom properties before using them in production. However, things become
more complex when asking your users for input. If the input is not valid, any
running workflow will of course also fail. In order to circumvent such failures, we need a
way to predefine valid inputs before a request can be submitted.
The property dictionary can help us with these issues. It offers the following functionality:

• You can define data validations to ensure input is entered in the right format. For
example, you can validate the data type of an input, e.g. if you want to have an
integer, a text or a date. Entering input that does not meet the right format is then
prohibited.
• Additionally, you can define constraints. Constraints help to validate the content of
an input. There are different types of constraints: You can configure a minimum or
maximum value, intervals, value sets for dropdown lists or regular expressions for
text input.
• To increase the usability, tooltips can be configured and attached to a control. Once
a user hovers over such a control, the appropriate text is shown.
• There are different sets of controls available and they are able to interact with each
other. For example, when choosing a location, it is possible to only show the
networks available at this location. These controls can be ordered by using Ordered
user control layouts.

The property dictionary supports the following user controls:

• Textboxes
• Checkboxes
• DateTimeEdit
• DropDown
• DropDownList
• Integer textbox
• Labels
• Links
• Notes (multiline textbox)
• Password (hidden textbox)
10.4.1. Using the property dictionary

Adding a new item into the property dictionary usually requires you to perform the
following steps:

• First, a new definition has to be created. This involves defining a name and description
and choosing the right kind of user control. As there can be many different
definitions, it is certainly useful to think about a naming strategy with namespaces.
As a namespace separator, the dot “.” character is recommended.

• The second step is to add property attributes. This involves adding property
attributes for data validation, constraints, tooltips or relationships.
• The final step is to define the ordering of the controls on the user interface.

Bearing this knowledge in mind, let’s see how we can implement these steps.

10.4.2. Create a property definition

Perform the following steps to create a new property definition:

1. Log in to vRealize Automation as a fabric administrator.


2. Change to the Infrastructure > Blueprints > Property Dictionary page.
3. Click the link New Property Definition.
4. Assign a Name for the property definition.
5. Define a Display Name.
6. Optionally, enter a Description.
7. If the input should be required, check the Required checkbox.
8. Save the property with the Save button.

10.4.3. Configure property attributes

Once a property definition has been created, additional property attributes can be attached
to it. These attributes can be used to configure constraints, data validations, tooltips or
relationships between property definitions. Work through the following steps in order to
create a property attribute:
1. Log in to vRealize Automation as a fabric administrator.
2. Change to the Infrastructure > Blueprints > Property Dictionary page.
3. Select a property definition and click the appropriate Edit button.
4. Click on New Property Attribute.
5. Choose a Type for the property attribute (the types available depend on the control
chosen). You can choose the following types:
a. Help Text
b. Order Index
c. Relationship
d. Value Expression
e. Value
f. MinValue
g. MaxValue
h. Interval
Most of the types are quite self-explanatory – only the Value Expression and the
Relationship types need some additional explanation (which is given later).
6. Enter a name attribute in the Name textbox (this name is not visible in the user
interface).
7. Provide a Value for the property attribute.
8. Click the Save-icon.
9. Save the property attributes by clicking OK.

Figure 10-6 Related dropdown lists

Hint: Property definitions


As described previously, we can set a custom property value to a blueprint (such as
“VirtualMachine.Network0.Name”), in order to override the behavior of the default
workflow (e.g. what network we are going to provision to).
When using the property dictionary with a dropdown list to define a property attribute for
the “VirtualMachine.Network0.Name” custom property, it is possible to limit what the
user can select. However, there is a problem. The same definition is applied to any
blueprint using this custom property. Therefore, if you have different blueprints with
different networks available to choose, this approach does not work. If you want to
overcome this limitation, you can use the Advanced Service Designer along with
Orchestrator. You can also use different property definitions, with a relationship to each
other, as a kind of “dirty workaround”.

The most complex property attribute type is that of relationship. It can be used to create
nested dropdown lists – the values visible within a dropdown list are dependent on the
chosen value of another dropdown field. There are plenty of use cases for this. For
example, a user could choose a location where a blueprint should be deployed and then,
depending on the selection, different networks and storage paths are shown in the user
interface. Fig. 10-6 shows such a relationship between two controls. However, the
implementation of such a scenario is not easy. From a conceptual point of view, the
following steps must be carefully worked through:

1. Create two property definitions – one for the parent element, and one for the child
element.
2. Define a relationship attribute on the child side. The relationship attribute must
reference the parent element.
3. Write a Value Expression, which defines which elements in the child control will be
shown depending on the chosen parent value.
4. Assign the Value Expression to the child property definition.
5. Add both properties to a blueprint or build profile.
6. Configure a property layout (optional).

With these concepts explained, we can now show how to implement such
dropdown lists.
10.4.4. Create the parent property definition

1. Change to the Infrastructure > Blueprints > Property Dictionary page.


2. Create a New Property Definition.
3. Assign a Name for the property definition. In our scenario, we choose
VirtualMachine.Network.Environment.
4. Configure a Display Name (e.g. Environment).
5. Choose DropDownList as a Control type.
6. Activate the Required checkbox.
7. Click on Save.

Now we have to define the values for the parent dropdown list.
1. Within the VirtualMachine.Network.Environment row, click the Edit link.
2. Create a New Property Attribute.
3. Choose ValueList as a Type.
4. Enter a name attribute in the Name textbox (this name is not visible in the user
interface).
5. In the Values field, enter “Production, Test”.
6. Click the Save-icon.
7. Click OK.

Figure 10-7 Property attribute definition

10.4.5. Create the child property definition with a relationship attribute

This can be done by completing the following steps:

1. Click the link New Property Definition.


2. Assign a name for the property definition (in our scenario we want to select a
network, so we choose the “VirtualMachine.Network0.Name” name).
3. In the field Display Name type Network.
4. Choose the DropDownList as a ControlType.
5. Click Save.

At this point, we have created controls for the parent as well as for the child element. The
next step is to create a relationship between both these controls, so they can interact with
each other:

1. Locate the VirtualMachine.Network0.Name row in the Property Dictionary and


click the Edit link.
2. Create a New Property Attribute.
3. Choose Relationship as a Type.
4. Type Parent in the Name field.
5. Enter “VirtualMachine.Network.Environment” as a Value.
6. Save the row.
7. Click OK to save the property attribute (see Fig. 10-7).
10.4.6. Write a Value Expression

The most difficult part of this process is writing the value expression. The value
expression defines the values displayed in the child element, when a value in the parent
element has been selected. Fig. 10-6 already depicted this relationship in our scenario. The
next step is to “translate” this relationship into XML code. Open an XML editor and
paste in the following text:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<ArrayOfPropertyValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Production</FilterValue>
    <Value>Internal</Value>
  </PropertyValue>
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Production</FilterValue>
    <Value>External</Value>
  </PropertyValue>
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Test</FilterValue>
    <Value>Test Infrastructure</Value>
  </PropertyValue>
  <PropertyValue>
    <FilterName>VirtualMachine.Network.Environment</FilterName>
    <FilterValue>Test</FilterValue>
    <Value>Test Development</Value>
  </PropertyValue>
</ArrayOfPropertyValue>

The XML code certainly needs some explanation. The purpose of this code is to show all
combinations between the two dropdown lists. Our sample scenario has four different
combinations. Hence, the XML document also has four PropertyValue elements.
Each PropertyValue has a FilterName element. This FilterName depicts where a
PropertyValue can be applied – in our case it can be applied to the
VirtualMachine.Network.Environment.
The remaining two sub elements, FilterValue and Value, define the actual combinations.
Creating such XML code by hand can be a pain, especially because you have to take extra
care not to make any mistakes; mistakes can cause your dropdown lists to stop working as
expected. Fortunately, there is help available in the form of a small Excel macro, which
can create the XML document for you[6]. If you create the XML document with this
macro, you can skip the next step and upload it to vRealize Automation directly. Otherwise,
you must take care that the XML is in the right format.

10.4.7. Formatting the XML and uploading it to vRealize Automation

The ValueExpression input field in vRealize Automation expects the XML text to be on a
single line, so all line breaks must be removed first. On Linux or MacOS, for example, a
small shell one-liner can strip them (a minimal sketch; the file names are placeholders):

1. Locate the VirtualMachine.Network0.Name property and click the Edit link in the
same row.
2. Create a New PropertyAttribute.
3. Choose ValueExpression as a Type.
4. Enter Expression as a Name.
5. Paste the XML text into the Value field.
6. Click the Save-icon.
7. Click OK.

10.4.8. Add the properties to a build profile or blueprint

The final step is simple. Create a new build profile and add the custom properties to it.
Then navigate to your blueprint and add the build profile. Alternatively, you can directly
add the custom properties to your blueprints. Please also don’t forget to mark the custom
properties as required.
Once everything has been completed, go to your service catalog and request the
appropriate IaaS service. If everything works as expected, you should see both input fields
interacting with each other (see Fig. 10-8).

Figure 10-8 Customized request form


10.4.9. Configure a property layout (optional)

Property layouts act as a container for properties. When creating a property layout, you
only have to assign a name. In a second step, you can add the properties, along with an
ordering of controls. Once you have created a property layout, you can add it to a build
profile or blueprint.
Perform the following steps to configure a property layout for our scenario:

1. Navigate to the Infrastructure > Blueprints > Property Dictionary page.


2. Click the New Property Layout link.
3. Assign a Name.
4. Save the layout.
5. Within your property layout instance row, click the Edit link (see Fig. 10-9)
6. Create a New Property Instance and add the
VirtualMachine.Network.Environment property.
7. As an Order, assign “1” and save the property instance.
8. Next, add another property instance for the VirtualMachine.Network0.Name
property definition.
9. Assign “2” as Order and save the instance.
10. Click OK.

Once you have created the property layout, you can add it to a blueprint or build profile.

Figure 10-9 Property instances definition

10.5 Create your own custom properties

We have already mentioned that vRealize Automation comes with a huge set of custom
properties. These are described in a distinct document within the official vRealize
Automation Documentation, the Custom Property Reference. Besides using the pre-
defined custom properties, users can also create their own custom properties. For example,
these can be used to ask for additional input from the blueprint service request. The input
can be passed, as a parameter, to a script residing in the machine to be provisioned (we
have learnt how the guest agent can run scripts with external parameters in Chapter 6).
When using your own custom properties, please consider that there is always a handler
required. This handler will evaluate the custom properties and execute some business
logic. In chapter 14, we will show how to use Orchestrator with custom properties in order
to implement a set of use cases.

10.6 Summary

This chapter introduced custom properties and build profiles. There are several reasons to
use custom properties and build profiles. Firstly, they help to change the behavior of the
built-in workflows. Secondly, they allow you to change the user interface, when
requesting an IaaS service from the service catalog. Internally, there are different kinds of
custom properties: internal, read-only, external and updated.
We also introduced build profiles as a way of grouping custom properties. Furthermore,
we showed how to use the Property Dictionary within vRealize Automation.
11. Advanced Administration

At this point, we have covered most of the administrative concepts of vRealize


Automation. Nevertheless, there are still some open issues remaining, and they will be
addressed within this chapter. These include:

• Using the vRealize CloudClient CLI


• Monitoring vRealize Automation
• Backup and recovery of vRealize Automation

11.1 Using the vRealize CloudClient CLI

So far, we have done all the administrative work manually. However, during operations
there is a lot of repetitive administrative work to be carried out. Consequently, there is a
need to automate these tasks. A common use case is, for example, performing bulk
operations, that is, making resource changes to multiple systems or powering a number of
systems on or off at a specified time. You can also coordinate changes
across tenants, groups or vRealize Automation instances. Using the CloudClient, you can
call vRealize Automation functionality from other applications and even perform
composite tasks across different VMware products.
For this purpose, VMware published vRealize CloudClient, a command-line tool that
provides easy access to vRealize Automation, vRealize Orchestrator and VMware Site
Recovery Manager (SRM) functions.

11.1.1. CloudClient functionalities

The following tasks can be automated with CloudClient:

• Create blueprints and catalog items


• Request catalog items
• Activate vRealize Automation actions
• Create IaaS endpoints
• Automate SRM fail-overs under vRealize Automation management
• Launch vRealize Orchestrator workflows
• Import and export Advanced Service Designer content
• Support for multi-machine blueprints

11.1.2. Using CloudClient

Technically, CloudClient is a command-line tool that is based on Java. Hence you have to
install a Java 1.7 JRE (other Java versions might not work) first. CloudClient can be used
on Windows as well as on Linux and MacOS operating systems. When setting up your
Java environment, please make sure that the Java bin path is within your ‘Path’
environment variable. The CloudClient tool itself can be downloaded from the VMware
Developer Center[7].
When you first start the tool, you have to accept the EULA. Once this has been done you
can start running scripts. CloudClient supports auto completion and has a built-in help,
which can be invoked by issuing the help command.
The first step is to log in to a vRealize Automation instance. This can be done with the
following command:

vra login userpass --user user@domain.com --password VMware1! --server vcac

Alternatively, you can configure an automatic login. This can be achieved with a
CloudClient.properties file or by having system properties set. If you are using the
automatic login feature, you should first store your password in an encrypted file. This can
be done with the following command:

login keyfile --file mypass.txt [--password mypassword]

Table 11-1 shows the variables that can be set for automatic login:

Variable             Description

vra_server           vRA server name or IP address

vra_tenant           Tenant to connect to; defaults to vsphere.local if left empty

vra_username         vRA username; to log in as the top-level system administrator,
                     use administrator@vsphere.local

vra_password         vRA password

vra_keyfile          Location of the encrypted keyfile

vra_iaas_server      vRA infrastructure server name; if left blank, it is
                     discovered automatically

vra_iaas_username    vRA NTLM username, e.g. Administrator

vra_iaas_password    vRA NTLM password

vra_iaas_keyfile     Location of the encrypted keyfile

Table 11-1 CloudClient variables

You can also create a CloudClient.properties file by typing the following command:

login autologinfile
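A minimal CloudClient.properties file simply contains the variables from table 11-1 as
key-value pairs. An illustrative example (all values are placeholders):

vra_server=vra.mycompany.local
vra_tenant=vsphere.local
vra_username=configurationadmin@vsphere.local
vra_keyfile=keyfile.enc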

Once the environment variables have been set, you can test if everything is working:

vra login is authenticated catalog


vra login is authenticated iaas

At this point, we want to give some examples of how to interact with
CloudClient. The first example powers off a virtual machine:

#!/bin/sh
#
# Setup environment variables for auto login to CloudClient Shell
. ./env.sh

# Provide Machine Name and Action, a guid or name is supported


export machine='"training0011"'
export action='"Power Off"'

# Execute CloudClient
./cloudclient.sh vra provisioneditem action execute --id $machine --action $action

The imported env.sh file can be seen here:

#!/bin/sh

export vra_server=server_name
export vra_username=user1
export vra_keyfile=keyfile.enc

The next example shows how to start an Orchestrator workflow from CloudClient:

#!/bin/sh
#
# Setup environment variables for auto login to CloudClient Shell
export vco_server=10.10.50.60
export vco_username=administrator@vsphere.local
export vco_password=mypassword

# Provide WorkflowId and VCO JSON Payload


export wflowId=3a656670-zd72-49bc-9c79-837067b17d43
export requestFile=vco.json

# Execute CloudClient
./cloudclient.sh vco workflow execute --id $wflowId --requestfile $requestFile

In order to start the workflow, you have to replace the content of the wflowId variable. You
can retrieve the workflow ID by selecting the workflow in the vCO Java Client, pressing
Ctrl-C (the keyboard copy shortcut) and then pasting the clipboard content into a text
editor. This displays the XML representation of the workflow.
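For illustration, the workflow ID is stored in the id attribute of the root element of this
XML. A shortened fragment might look like this (namespace declarations and all other
attributes omitted):

<workflow id="3a656670-zd72-49bc-9c79-837067b17d43" …>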

11.2 Monitoring vRealize Automation

Due to the large number of separate components within vRealize Automation, monitoring
it is inherently quite difficult. The following list shows the different components
composing a vRealize Automation installation:

• vRealize Automation Appliance


o Apache 2.2.17 Webserver
o vPostgres 9.x database
o Pivotal tc Server
o Rabbit MQ
o vRealize Orchestrator Configurator
o vRealize Orchestrator App Server
• vRealize Automation Infrastructure-as-a-Service (IaaS) Server
o Microsoft SQL Server
o vRealize Automation IaaS Web Server
o vRealize Automation DEM Orchestrator
o IIS 7.x
o vRealize Automation DEM Worker
o vRealize Automation Manager Server
o .NET Framework
• vRealize Business Appliance
o vRealize Business Standard Server
o Pivotal tc Server Runtime
o vRealize Business Standard Data Collector
• vSphere Single Sign-On
o vSphere SSO
o Pivotal tc Server STS Service
• vRealize Orchestrator
o vRealize Orchestrator Configurator
o vRealize Orchestrator App Server

Due to the large number of services, monitoring each of them independently is not
feasible. Fortunately, VMware provides a plug-in called VMware Hyperic and a
management pack for VMware vRealize Operations that together relieve you of the
burden of setting up component monitoring.
Hyperic is an agent-based monitoring system that automatically collects metrics on the
performance and availability of hardware resources, operating systems, middleware and
applications in physical, virtualized and cloud environments.
The following steps have to be done before vRealize Operations can be configured for
vRealize Automation monitoring:

• Deploy Hyperic
• Deploy vRealize Operations
• Install the management pack for Hyperic in vRealize Operations
Once these prerequisites have been met, the following steps can be done:

1. Log in to vRealize Hyperic as a system administrator.


2. Click on the Administration tab.
3. Select the Plugin Manager link.
4. If there is a vRealize plugin, select the checkbox in the plugin’s row and click
Delete Selected Plugin(s).
5. Next, click on the Add/Update Plugin(s) button and upload the Hyperic jar-plugin
files (see Fig. 11-1).
6. The process may take some seconds. Check if the new plugins appear and log out
from Hyperic.

With the plug-in installed, all we have to do is wait for vRealize Hyperic to
autodiscover the new services. It also takes a short time for the changes to propagate to
vRealize Operations.
Within vRealize Operations, you will be able to find a new inventory tree (go to the
Environment overview).
There is also a dedicated management pack for vRealize Automation. This management
pack gives visibility into the performance, health and capacity risk of tenants and business
groups. More specifically, it provides the following features and functionalities:

Figure 11-1 Upload plug-in in Hyperic

Figure 11-2 vRealize Operations user interface

• There are out-of-the box dashboards that provide an overview of tenants, business
groups, reservations and reservation policies (see Fig. 11-2).
• The relationship between vSphere objects like VMs, clusters or datastores and
vRealize Automation objects like tenants, business groups or reservations is shown.
• Smart alerts help to detect any performance issues.

11.3 Backup and recovery of vRealize Automation

As with every software system, there should be a backup plan for vRealize Automation.
Administrators should back up the entire vRealize Automation installation. If there is any
system failure, the system can be recovered by restoring the last known correctly working
backup. Essentially, the following components should be backed up:

• IaaS MS SQL database


• PostgreSQL database
• Identity appliance or SSO appliance
• vRealize Automation appliance
• IaaS components
• Any load balancer
• Certificates

As well as backing up vRealize Automation at a regular interval, you should create a


backup before changing the configuration. This is especially important when adding or
deleting a tenant or making configuration changes to the identity appliance. It is
recommended to back up all components at the same time.

11.3.1. Database backup

Backing up the IaaS MS SQL and the PostgreSQL databases can be carried out with the
built-in database tools or with any external backup software that supports these databases.
If a failure occurs, the database backup can be used to restore to the most recent status. If
only one database fails, it might be reasonable to restore it and revert the functional
database to the same version as that in use at the time the backup was created.
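As a minimal sketch of how such backups could be triggered (the database names vCAC
and vcac as well as the target paths are assumptions that depend on your installation):

# IaaS MS SQL database, run on the Windows database server:
sqlcmd -S localhost -Q "BACKUP DATABASE [vCAC] TO DISK='C:\Backups\vcac-iaas.bak'"

# vPostgres database, run on the vRealize Automation appliance:
pg_dump -U postgres vcac > /tmp/vcac-postgres-backup.sql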
11.3.2. Identity appliance or SSO appliance

There are different ways to back up the identity or SSO appliance:


• The Export function
• VMware vSphere Data Protection to backup the whole appliance
• vSphere Replication
• Cloning
• VMware Site Recovery Manager

If you want to restore the appliance, you can make use of these techniques again.

11.3.3. vRealize Automation appliance

When backing up the vRealize Automation appliance, you have to make a copy of the
configuration files. Before you start the backup, verify that the size of the encryption.key
file is greater than zero. If the file size equals zero, run the following command:

test -s /etc/vcac/encryption.key || dd if=/dev/random of=/etc/vcac/encryption.key bs=48 count=1

The following files should be backed up (a minimal backup sketch follows this list):

• /etc/vcac/encryption.key
• /etc/vcac/vcac.keystore
• /etc/vcac/vcac.properties
• /etc/vcac/security.properties
• /etc/vcac/server.xml
• /etc/vcac/solution-users.properties
• /etc/apache2/server.pem
• /etc/vco/app-server/sso.properties
• /etc/vco/app-server/plugins/*
• /etc/vco/app-server/vmo.properties
• /etc/vco/app-server/js-io-rights.conf
• /etc/vco/app-server/security/*
• /etc/vco/app-server/vco-registration-id
• /etc/vco/app-server/vcac-registration.status
• /etc/vco/configuration/passwd.properties
• /var/lib/rabbitmq/.erlang.cookie
• /var/lib/rabbitmq/mnesia/**
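A minimal shell sketch that archives these files into a single, timestamped tarball could
look like the following (run on the appliance; archiving the whole /etc/vcac and /etc/vco
directories is a superset of the individual files listed above):

#!/bin/sh
# Sketch: archive the vRealize Automation appliance configuration files
tar czf /tmp/vcac-config-backup-$(date +%F).tar.gz \
  /etc/vcac \
  /etc/apache2/server.pem \
  /etc/vco \
  /var/lib/rabbitmq/.erlang.cookie \
  /var/lib/rabbitmq/mnesia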

Of course, you can also create a snapshot of the virtual appliance. This might be a good
idea before making any configuration changes, but it is not an alternative to a full backup.
As mentioned previously, the load balancer should also be backed up. vRealize Automation
does not provide a built-in load balancer; instead, a third-party load balancer is
used. Consequently, take a look at your load balancer vendor’s instructions for creating a
backup.
When restoring the appliance, perform the following steps:

1. Redeploy the virtual appliance.


2. Restore all the backed up files.
3. Check the file permissions and owners for the restored files.
a. Verify that the files within the vcac directory are owned by the vcac user and
have read and write permissions granted to the owner only.
b. The files in the apache2 folder must be owned by root user.
c. The files in the vco folder must be owned by the vco user and have read and
write permissions to the vco user only.

The following steps must only be performed when the hostname or the IP address has
changed:

1. Get the certificate by entering the following command:

C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe\Vcac-Config.exe
GetServerCertificates -url https://<VA FQDN>
--FileName .\Vcac-Config-<time-stamp>.data -v

2. Next, the solution certificate must be registered:

C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe\Vcac-Config.exe
RegisterSolutionUser -url https://<VA FQDN> --Tenant vsphere.local
-cu administrator@vsphere.local -cp vmware --FileName .\Vcac-Config-<time-stamp>.data -v

3. The solution certificate information must be moved to the database:

C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Cafe\Vcac-Config.exe
MoveRegistrationDataToDB -d vcac -s localhost
-f .\Vcac-Config-<time-stamp>.data -v

Next, go to the vRealize Automation appliance configuration page
(https://<vrealize-app.domain.name>:5480) and verify that the host, SSL, database, and
SSO settings are correct. Update the settings that have been changed.

11.3.4. IaaS components

Backing up the IaaS components is slightly more difficult. You must back up the following
components:

• Agents and DEMs


• Manager Service
• Websites
11.3.5. Agents and DEMs

For the agents, back up the following information:

• The agent name


• The endpoint name
• The following files in the Agent installation folder (<vCAC Folder>\Agents\<Agent
Name>\):
o VRMAgent.exe.config file
o RepoUtil.exe.config file

When backing up the DEMs, consider the following files:

• The DEM name


• The following files located in the DEM’s installation folder (<vCAC
Folder>\Distributed Execution Manager\<DEM Name>\):
o ManagerService.exe.config file
o policy.config file

The Web components need the following files to be backed up:

• For the primary Web node only, in the Model Manager Data folder (<vCAC
Folder>\Server ):
o ConfigTool folder (applicable only for the primary Web node)
o policy.config file
• The following file located in the installation folder (<vCAC
Folder>\Server\Website\):
o Web.config file
• The following files located in the installation folder (<vCAC Folder>\Web API\):
o Web.config file
o policy.config file
• The name of the IIS instance.

11.3.6. Certificates

The following certificates should be backed up at installation time or when certificates are
changed.

• The SSO certificate and its entire chain.


• vRealize Appliance certificates and the entire corresponding certificate chain.
• IaaS certificates and the entire corresponding certificate chain.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2084085
http://pubs.vmware.com/vra-62/index.jsp?topic=%2Fcom.vmware.vra.system.administration.doc%2FGUID-70059F2A-A055-4FE1-A2B1-191C8A2CCDEE.html

11.4 Summary

This chapter concludes the Administration part of the book. We introduced how to use the
vRealize Automation CLI, gave an introduction to monitoring and highlighted the
importance of backup and recovery.
12. Extensibility Overview

12.1 Extensibility Options and Tools

In most projects, sooner or later, a time will come when not all of the requirements can be
covered by means of the administrative graphical user interface. The same also applies to
vRealize Automation.
Simple extensibility scenarios often concern the customization of existing workflows.
For example, a user may want to choose a script to be executed after the
provisioning of a machine (for instance, to change the hostname), or specify the way of
provisioning. While many such requirements can be realized by means of custom
properties, build profiles and the property dictionary, there are also many requirements
where programming is needed.
These use cases are often quite sophisticated. For example, there could be an integration
of vRealize Automation with the external infrastructure, e.g. an existing IP address
management tool, a tool for defining host names or any other integration with some kind
of business tool. To realize these sorts of requirements, VMware provides vCenter
Orchestrator and the vRealize Automation Designer.
Another scenario is represented by Anything-as-a-Service (XaaS) offerings. The basic idea
behind XaaS is that vRealize Automation can be used to publish all kinds of services, not
only IaaS offerings. vRealize Automation provides a self-service portal with a fine-
grained permission system. This can be used to host a variety of services. Members of the
HR department could use vRealize Automation to create Active Directory users. Jobs can
be invoked, or any other tasks carried out, that are managed with VMware
Orchestrator in the background. The only task for administrators is to publish these
services via the Advanced Service Designer.

Figure 12-1 Extensibility overview

Sometimes there is a need for yet more complex adaptations. It is possible to extend the
vRealize Model Manager to provide a seamless integration with external systems, or to
implement new workflows and activities directly in Microsoft .NET (the IaaS components
are running on the Windows Workflow Foundation). To achieve this, the vRealize Cloud
Development Kit (CDK) can be used. Figure 12-1 depicts these extensibility options.
The remainder of this chapter gives an in-depth overview of these options. A detailed
discussion will then follow in the next chapters.

12.2 Extensibility with vRealize Designer and VMware Orchestrator

Basically, adaptations can be performed with the vRealize Designer as well as VMware
Orchestrator. The reasons for these two different tools can be found in the history of
vRealize Automation.
vRealize Designer is a .NET-based Windows application. Historically, it is the older of the
two tools (when it comes to vRealize extensibility) and was implemented to
customize workflows (however, not to create new workflows). At a programming level, its
strength lies in the fact that it works in much the same way as a standard Windows
application. Therefore, it is quite easy for developers to understand.

Figure 12-2 vRealize Automation Designer

In the current version of vRealize Automation, provisioning workflows are based on the


Windows Workflow Foundation (WWF) library of the .NET framework. As the vRealize
Automation Designer is also implemented in .NET, there is a seamless interaction between
the tool and vRealize automation. Furthermore, .NET developers will feel quite at home
with this tool.
The tool itself is depicted in figure 12-2. There is a toolbox on the left-hand side.
Developers can easily use drag-and-drop techniques to move tasks into the main pane.
Workflows can be implemented by using predefined tasks and connecting them.
However, the use of this tool also has some drawbacks. Firstly, as it is .NET-based, it
must be installed on a Windows operating system. Furthermore, workflows can be
customized, but the tool does not offer a real programming experience. Although there is a
set of predefined tasks that can be used (for example for declarations, control flow,
variable assignments, etc.), there is no possibility to implement your own code in any
kind of programming language. Another disadvantage is that there is no direct access to
external systems like web services or databases. This is only possible by calling an
external script (e.g. a Microsoft PowerShell script) or by using the CDK. Besides, the
future of this tool is not clear. Currently, the vRealize Automation IaaS components
are implemented in .NET, but this is not guaranteed for future versions. Hence, developers
should not rely on their current vRealize Automation Designer customizations running in
any future versions of vRealize Automation.

12.3 VMware vCenter Orchestrator

Shortly after the acquisition of DynamicOps by VMware, the work on integrating
vRealize Automation with Orchestrator was started. Finally, with vRealize Automation
release 6, nearly all features of the vRealize Automation Designer were also supported in
Orchestrator.
Orchestrator is VMware’s central tool for all kinds of automation within the datacenter.
In contrast to vRealize Automation Designer, it is implemented in Java and hence runs on
nearly every operating system. Orchestrator is very rich in features. First of all, it is an
intuitive Integrated Development Environment (IDE). Similar to vRealize Automation
Designer, it is possible to create workflows by means of drag-and-drop techniques.
However, developers can also use JavaScript to implement their own code. Orchestrator
already provides a set of plug-ins, which allow the integration of external systems without
adding any lines of code (for example Active Directory, SSH, Databases, SOAP, REST,
vCenter, etc.). Many other plug-ins also exist on the market and these can be easily
installed in Orchestrator (they can be found in the VMware Solution Exchange).
Furthermore, workflows can be easily invoked from within Orchestrator. There is even an
integrated source code versioning system, with the possibility to see historical workflow
runs.
Figure 12-3 shows the graphical user interface of Orchestrator. Similar to vRealize
Automation Designer, there is a toolbox on the left-hand side, where a broad set of built-in
tasks and activities are already available.
Figure 12-3 vCenter Orchestrator workflow

Compared to vRealize Automation Designer, Orchestrator offers more features. Firstly, it
is possible to create your own workflows, not only to customize existing ones. Besides, the
ability to implement custom code, as well as the huge set of existing plug-ins and modules,
are real benefits. The vCAC plug-in offers, among other things, the following features:

• Configuration of the Advanced Service Designer (ASD)


• Automation of approval policies
• Accessing the service catalog
• Access to the vRealize entity model
• Creating and maintaining tenants, business groups and endpoints
• Coding your own workflows
• Maintaining virtual machines
• Programming of custom properties, build profiles and the property dictionary

Due to this large set of features, the use of Orchestrator should be preferred to the use of
vRealize Automation Designer. However, as is often the case, there is an exception to the
rule: when a large number of .NET workflows have already been implemented. But this
should not be the case in most projects.

12.4 Advanced Service Designer

The vRealize Automation Advanced Edition allows the use of the Advanced Service
Designer, which enables the publishing of XaaS services to the self-service catalog. From a
technical point of view, these workflows run within Orchestrator. Hence, the Advanced
Service Designer gives administrators the possibility to integrate existing Orchestrator
workflows into the vRealize Automation service catalog. This integration encompasses two
steps: Firstly, the user interface for the workflow request must be defined. Secondly, the
XaaS workflow must be published to the service catalog.
There are plenty of use cases for the ASD:

• Creating Active Directory accounts for new personnel
• Managing passwords for end users
• Triggering restore or backup processes
• Performing software updates
• Providing lab environments
• Installing additional software
• Integrating third-party software
12.5 Cloud Development Kit

While Orchestrator is already quite a powerful tool, there might be certain use cases
where it is not sufficient.
For these extensibility use cases, the Cloud Development Kit (CDK) needs to be used.
A special feature of the CDK is the possibility to extend the vRealize Automation entity
model within the Model Manager. This is quite useful if there are plans to seamlessly
integrate vRealize Automation with external systems (for example CMDBs, IP address
management tools, etc.). After the model has been extended, it is quite easy to access these
systems from within Orchestrator or vRealize Automation Designer. As stated, another
advantage is the possibility to develop workflows directly in .NET. This is interesting, as
the .NET framework is quite powerful in terms of APIs. It also allows existing Windows
DLLs to be reused.
However, due to the fact that VMware’s current plans regarding the IaaS
components are still not clear, its use should be carefully considered before new
workflows are implemented in .NET.

12.6 Summary

This chapter gave an overview of the extensibility options within vRealize Automation.
All the different tools – vRealize Automation Designer, Orchestrator and the CDK were
introduced. The following chapters will now give a deeper insight into how to work with
these tools.
13. Working with vRealize Automation Designer

In the last chapter, we showed by what means and with which tools vRealize Automation
can be extended. We will now delve into how to implement new functionality. This chapter
will cover the vRealize Automation Designer. The following chapters will cover vRealize
Orchestrator and the Advanced Service Designer.

13.1 The vRealize Automation IaaS model

Before creating workflows for IaaS provisioning, it is first crucial that you understand the
IaaS model and thus are able to modify its values as required. In chapter 3 we introduced
the Model Manager and explained how the Microsoft SQL server is used to store its data.
However, working with the database directly is (in most cases) not recommended. Instead,
vRealize Automation provides helper methods in order to deal with this data.
Nevertheless, there is a good tool available to explore the IaaS model: LINQPad[8]. Once
you have installed the tool (a Windows system with .NET is needed), you can explore the
IaaS model:

1. Start the LINQPad tool.


2. In the navigation area on the upper left-hand side of the screen, click Add
Connection and choose WCF Data Services 5.5 (OData 3).
3. Type the following URI to connect to your server:
https://localhost/repository/Data/ManagementModelEntities.svc
4. Check the Accept invalid certificates checkbox.
5. Click OK (see Fig. 13-1).

Figure 13-1 LINQPad client connection

Once a connection has been established, LINQPad shows all the IaaS entities on the left-
hand side of the screen. Besides showing entities, LINQPad also provides a wizard to
create and execute LINQ queries. For example, you can create a query by right-clicking
on an entity and selecting the first function <Entity name>.Take(100). LINQPad then
executes the query and shows the result set within the main area of the screen (see Fig. 13-
2).
There are many different entities within the vRealize Automation Model Manager – for
example, virtual machines, users, reservations or build profiles. However, you must also
bear in mind that the Model Manager only stores IaaS data – everything else is stored
within the vPostgres database on the Linux appliance.
Each entity in vRealize Automation has a set of properties. Entities are usually linked to
each other. For example, there might be different pending requests for a user, or a virtual
machine has different VirtualMachineProperties. Relationships between entities are
indicated with a fork-icon. There are three different relationships: One-to-One, One-to-
Many and Many-to-One.
13.1.1. Background: LINQ

LINQ (Language Integrated Query) is a Microsoft technology. It is part of the .NET


framework. Its syntax resembles SQL; however, LINQ is more powerful in terms of
querying data. For example, when querying more than one entity at the same time, there is
no join statement required. Instead, you can traverse between entities by using the “.”
operator. LINQ supports lambda expressions as well. Lambda expressions are anonymous
functions, which can be used on result sets for sorting, filtering or grouping.
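For illustration, a lambda-based query on the entities shown above might look like this (a
sketch; it uses the VirtualMachines entity set and its VirtualMachineName property, which
also appear in the query examples below):

VirtualMachines.Where(vm => vm.VirtualMachineName.StartsWith("vm")).OrderBy(vm => vm.VirtualMachineName)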
Figure 13-2 LINQPad entity visualization

The following statements show how to use LINQ Queries:

Query for a virtual machine named “vm1”:

from vm in VirtualMachines where vm.VirtualMachineName == "vm1" select vm

Query for all compute resources, including the credentials of the endpoints:

from host in Hosts.Expand("ManagementEndpoint/Credential") select host

13.2 vRealize Designer

The first step to using vRealize Designer is, of course, to install it. You can download the
Designer from the vRealize Appliance website
(https://<vrealize-automation-appliance.domain.name>:5480). Like the IaaS services, the
designer requires Microsoft .NET to be preinstalled. Also, before you start the installation,
you must have the address of the vRealize Automation service and the Model Manager at
hand. Please also note that the tool needs an open connection to your vRealize Automation
environment at runtime.
Once you have completed the installation, you can start the tool (see Fig. 13-3). The
graphical user interface of this tool is quite intuitive. The ribbon menu only offers a small
number of commands. These include options to load, save, open or send workflows.
Most work is carried out in the main pane – that’s where you can develop your
workflows. A workflow consists of many smaller tasks. These are arranged sequentially
and connected via arcs.
Figure 13-3 vRealize Automation Designer workflow stub

On the left-hand side of the screen you can see the toolbox. It contains a variety of tasks,
grouped into the following categories:

• DynamicOps.Repository: Tasks within this category are used to access the entities
within the vRealize Automation model. You can use this category as well, if you
want to invoke another workflow stored in the Model Manager.
• DynamicOps.VcoModel.Activities: The enclosed tasks are used for interaction
with vRealize Orchestrator. With the latest releases of vRealize Automation, these
tasks have become less important. vRealize Orchestrator and vRealize Automation
can now interact with each other independently, without the need to create a special
workflow in vRealize Designer.
• The DynamicOps.Cdk.Activities category provides a variety of tasks, including
logging, sending emails or retrieving information about virtual machines.
• The ControlFlow and FlowChart categories provide programming constructs for
flows, decisions, or if-statements.
• Basic programming tasks are encompassed within the Primitives categories. For
example, there are tasks for assigning variables, invoking methods, or outputting
something to the console.
• If you need access to arrays or collections, the Collection category provides some
helper tasks.
• For error handling there is also a dedicated category.

You can define the basic structure of your workflow by dragging and dropping elements
from the toolbox into the main pane. Once an element has been placed within the designer
pane, it has to be configured. This can be done by selecting the appropriate element in the
main pane. On the right-hand side of the screen, there is a Properties window. Here
configuration settings can be specified.
Tasks within the designer pane must always be connected to a workflow. Therefore you
must use arcs to connect them with other elements. A minimum workflow always has a
start element and an end element – so if you add additional tasks, you have to place them
within these nodes.
Another very important concept of vRealize Designer is the use of variables. Variables
are needed to pass information between different tasks. For example, if there is a task that
reads the content of a script file and a second that executes the script, then a variable is
needed to store the script content and pass it to the next task. Variables have a context
which determines the visibility of the variable. For example, global variables can be read
and modified everywhere, whereas a local variable is only defined within a certain scope
and may not be touched from outside of this scope.
Workflows within vRealize Designer are not invoked manually. Instead, invocation
happens as part of the machine’s lifecycle (for example during provisioning), or based on
user interaction within the self-service catalog. In both cases, the workflow needs some
input regarding the context in which it is running. This includes information about
which virtual machine is concerned and/or what kind of action was invoked. To store this
information, each workflow must provide a set of variables, in which the aforementioned
information can be found. The variables are shown in the lower part of the screen.
It is important to note that vRealize Designer allows modification of existing workflows,
but does not permit creation of new ones (see Fig. 13-4). However, there is a set of
existing workflow stubs that can be used to implement the required workflow logic. If you
need additional workflows, you must first purchase the Cloud Development Kit (CDK).
There are workflow stubs for the following purposes:

• Workflows that can be invoked from an action on a blueprint


• Workflows that can be run during the machine’s lifecycle. For example,
BuildingMachine, MachineDisposing, MachineExpired, MachineProvisioned,
MachineRegistered and UnprovisionMachine stages are supported.

When creating a workflow, from time to time there is also a need to upload additional fragments such as PowerShell scripts, configuration files or other files. We have already introduced the Cloudutil tool when discussing multimachine blueprints; it is responsible for uploading such content to the server.
Figure 13-4 vRealize Automation Designer workflows

After having given a small introduction to the foundations of vRealize Designer, it is time
to become more practical and show how to use the designer tool itself.

13.2.1. Use case: Invoke a PowerShell script as part of the provisioning process

In this scenario, we will write a small script that will be uploaded to the Model Manager.
This can then be invoked from a vRealize Designer workflow during the provisioning of a
virtual machine. The script used in this example is simple, but you could replace it with a
more complex one in your environment.
First, open the PowerShell ISE or an equivalent editor on your computer and paste the following code fragment:

$Hostname = $Properties["VirtualMachineName"]
## script logic begins here
Write-Host $Hostname

Once finished, save the script on your desktop computer and perform the following steps
on a machine with the vRealize Designer installed:

1. Open the command prompt and change to the C:\Program Files (x86)\VMware\vCAC\Design Center folder.
2. You can upload your PowerShell script by using the Cloudutil tool. Type the following command (see Fig. 13-5):
Cloudutil.exe File-Import -n <name of the file used for the vRealize Automation repository> -f <PowerShell script file>
3. You can check if your upload was successful by using the Cloudutil.exe File-List command. Verify that your uploaded file appears within the repository listing.
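To make these steps concrete, a minimal example session could look like this (the file names and paths are assumptions for illustration):

cd "C:\Program Files (x86)\VMware\vCAC\Design Center"
# Upload the script under the repository name EchoHostname.ps1
.\Cloudutil.exe File-Import -n EchoHostname.ps1 -f "C:\Users\Admin\Desktop\EchoHostname.ps1"
# Verify the upload: the file should appear in the repository listing
.\Cloudutil.exe File-List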
Figure 13-5 Cloudutil command-line

Figure 13-6 Machine provisioned workflow stub

13.2.2. Implementing the workflow

Now we can begin with the implementation of the workflow. Start the vRealize Designer
and follow the steps as described below:

1. Within the ribbon menu, click the Load button and select the
WFStubMachineProvisioned workflow. We will be customizing this workflow and it
will be running after a machine has been provisioned. Click OK. After a short while,
you will see the stub of the workflow (see Fig. 13-6).
2. Double-click on the Machine Provisioned box. You will see that there is some
nested content now shown in the main pane (see Fig. 13-7).
Figure 13-7 Machine provisioned workflow

3. The workflow consists of different tasks to be run in sequential order. Following a start element, there is a logging element to output some basic information about the
workflow. The next task (Create ManagementModel Context) will open a
connection to the Model Manager. Next, there is another nested element (Custom
Code). This is the workflow fragment, in which your own customizations are
inserted. To edit, double click this Custom Code element.
4. The custom code element begins with a start element too. Remember, our scenario
requires us to read the content of a PowerShell script file and then run that code. Our
PowerShell script file takes the hostname as a parameter and will output this
hostname to the console. Consequently, we need to somehow grab the name of the
virtual machine that has been provisioned. Unfortunately, we do not have the name.
Our workflow stub only receives the GUID (Global Unique Identifier) of the virtual
machine provisioned (see Fig. 13-8). However, there is a task within vRealize
Designer that can get us the name of a virtual machine when provided with its
GUID. This task is called GetMachineName. Consequently, locate the
GetMachineName task within the toolbox on the left-hand side of the screen and
drag and drop it to your workflow below the start element.
5. Now we need to connect both elements. When you hover over the start element, you
will see some grey boxes appearing at its borders. Click on such a square and drag
the arrow to the GetMachineName node (see Fig. 13-9).
6. Now we must consider how to deal with the GetMachineName task. The task will
receive the GUID as an input and return the name of the virtual machine as an
output. As we require this name for a later task, we must first create a variable to
contain its value. In order to achieve this, first click the Create Variable button
within the lower Variables section and then assign vmName as a variable name. The
definition of the variables is depicted in Fig. 13-10.
7. In the next step, we can configure the GetMachineName element. We have to set
the GUID as an input parameter and the newly created variable as an output
parameter. Double click on the element and configure the element as shown here
(see Fig. 13-11):
Machine Id: "VirtualMachineId"
Machine Name: "vmName"

Figure 13-8 Variable definition

Figure 13-9 GetMachineName Task

8. Next, we should load the content of the PowerShell script. Our PowerShell script receives a single variable as input. However, as a PowerShell script can receive multiple parameters, passing the input as dedicated variables is not suitable. Instead, a dictionary is used to pass the arguments. This means we must create a new variable. Therefore, within the variables section, click on Create Variable and name it args. Change the type of the variable by clicking on Browse for Types within the Variable Types column (see Fig. 13-12).

Figure 13-10 Creation of variables

9. Now another dialog opens, in which you have to specify the exact data type. The PowerShell script expects this input as a variable of type System.Collections.Generic.Dictionary<TKey, TValue>. This is a dictionary, where TKey and TValue define the data type of the key and the data type of the value, respectively. Within the Type Name textbox, enter the dictionary data type and use String for both TKey and TValue (Fig. 13-13 shows how to configure this).
Figure 13-11 Task configuration

Figure 13-12 Data type definition

Figure 13-13 Browse and select a .Net type

Figure 13-14 Assign-task

10. Before we can pass the args variable to the PowerShell task, we must first add the name of our virtual machine to it. This is done with the help of the Assign task. Locate
the appropriate task within the toolbox, drag and drop it below the
GetMachineName element and connect it with the aforementioned task.
11. Configure the Assign-task (see Fig. 13-14):
To: args("VirtualMachineName")
Value: vmName
12. Finally, we can invoke the PowerShell script. There is a special activity called ExecutePowerShellScript, which allows us to do that. Drag and drop the element from the toolbox and connect it with the Assign task (see Fig. 13-15).
13. Like other workflow elements, the ExecutePowerShellScript task also needs to be properly configured (see Fig. 13-16):
Script Name: The name of the script which was specified during the upload with Cloudutil
Machine Id: "VirtualMachineId"
Arguments: "args"
14. Now that the implementation of the workflow has been completed, we just have to
save it. Click the Send button within the ribbon menu in order to do so.

13.2.3. Background: Workflows in vRealize Designer

Workflows in vRealize Designer usually encompass a lot of different tasks. However, all
of these tasks often do not fit on the screen and thus are displayed in a nested fashion.
Nevertheless, vRealize Designer tries to show all tasks in a clear manner. This means
arranging the elements on different levels. When you open a workflow, you will see the
top-level element first (the Machine Provisioned Workflow in our scenario). It will
contain a try-catch clause, which encloses a Machine Provisioned Element. The try-catch
clause element is a very powerful construct in programming languages. If any error occurs
within the try-clause, the control flow will automatically jump to the catch-clause and will
execute any exception handling code. Therefore, if any error occurs, at the very least you
want to be notified. Hence, a logging statement is reasonable.
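Expressed in PowerShell for illustration, the same pattern looks like this (a minimal sketch, not code taken from the actual workflow):

try {
    # the custom workflow logic would run here
    1 / 0    # deliberately fails so that the catch branch is reached
}
catch {
    # at the very least, log the error so the failed run is visible
    Write-Error ("Workflow step failed: " + $_.Exception.Message)
}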

Figure 13-15 Enhanced workflow

Figure 13-16 ExecutePowerShellScript task configuration

If you expand the Machine Provisioned element, you will see the structure of the
workflow (see Fig. 13-17). Each workflow has a start and an end node. Before running
any custom workflow logic, a log statement first logs basic information regarding the
actual workflow instance. In parallel, a connection to the vRealize Automation repository
is opened. Your custom logic takes place within the Custom Code task. Once this task has
been successfully executed, the state of the workflow changes to ‘complete’ and the
workflow ends with the creation of a log statement.
Once we have completed the implementation of our workflow, we still have to let
vRealize Automation know when to run it. This can be achieved at the blueprint level:

Figure 13-17 Configuration of a workflow in vRealize Automation

1. Open the blueprint that needs to be configured for this workflow and switch to the
Properties tab.
2. Assign a new property named ExternalWFStubs.MachineProvisioned (see Fig. 13-18).
3. Don’t enter a value for the property.
4. Click OK to save your changes.

13.2.4. Background: How to activate workflows

We have shown you how to set up a blueprint which runs a workflow using custom properties. This does not only apply to MachineProvisioned workflows, but also to all other workflows. Use the following custom properties for the other workflow types:

• WFStubBuildingMachine: ExternalWFStubs.BuildingMachine
• WFStubsMachineDisposing: ExternalWFStubs.MachineDisposing
• WFStubUnprovisionMachine: ExternalWFStubs.UnprovisionMachine
• WFStubMachineRegistered: ExternalWFStubs.MachineRegistered
• WFStubMachineExpired: ExternalWFStubs.MachineExpired

13.2.5. Additional Workflow activities

Besides the activities covered in this scenario, there are a lot of other tasks available. At
this point, we want to introduce the most important ones:

• CreateRepositoryServiceContext: Establishes the context with the vRealize Automation Model Manager.
• AddLink/DeleteLink: Creates/Deletes a relationship for objects.
• SetLink: Connects two entities with a link.
• AddObject/UpdateObject/DeleteObject: Activities for creating, updating or deleting entities. UpdateObject persists an object and must be called after an AddObject or DeleteObject invocation.
• AttachTo: Adds a new object within the context.
• LoadProperty: Loads a property.
• InvokeRepositoryWorkflow: Invokes a workflow.
• SaveChanges: Persists changes, must be called after an update.
• ExecuteSshScript: Invokes an SSH script
• GetMachineName: Loads the machine name of a virtual machine.
• GetMachineProperties: Loads all properties of a machine.
• GetScriptForName: Loads the content of a script stored in the Model Manager.
• InvokePowerShell: Invokes a PowerShell script.
• InvokeSSHCommand: Executes an SSH statement.
• LogMachineEvent: Writes an entry into the userlog of the machine owner.
• LogMessage: Writes to the DEM log.
• RunProcess: Runs a process.
• SendEmail: Sends an email.
• SetMachineProperty: Modifies a machine’s custom property.
• SetWorkflowResult: Changes the state of the workflow.

13.3 Summary

This chapter covered the installation and the use of the vRealize Designer. The designer was the preferred tool for customization before vRealize Orchestrator gained support for vRealize Automation. Right now, in vRealize Automation, it remains largely as a tool for backwards compatibility. Therefore, we recommend using vRealize Orchestrator instead
backwards compatibility. Therefore, we recommend using vRealize Orchestrator instead
of vRealize Designer for implementing any new workflows. vRealize Orchestrator is
covered in the next chapter.
14. vRealize Orchestrator

We have already mentioned vCenter Orchestrator – a workflow automation tool capable of automating a wide range of external systems. Due to its open architecture and integration with vRealize Automation, it is well suited to manage the vRealize Automation cloud infrastructure. Now it is time to dive deeper into the topic.

This chapter will cover the following topics:

• A brief introduction to vRealize Orchestrator and its benefits.
• How to configure and integrate vRealize Orchestrator into a vRealize Automation infrastructure.
• How workflows can interact with vRealize Automation.
• Some real-life examples implemented with Orchestrator.
14.1 Introduction to vRealize Orchestrator

vRealize Orchestrator is a great tool for automating your environment and orchestrating
business processes. This makes IT operations faster and less error-prone. While workflows
could be implemented using traditional programming techniques, Orchestrator simplifies
this process. It facilitates the development of workflows with its integrated development
environment and other built-in features. When creating new workflows, it is desirable not to have to implement everything from scratch. In order to achieve this, vRealize
Orchestrator provides a rich set of pre-built workflows which can be reused. Orchestrator
enables workflows to be exported and imported through packages. As workflows often
need to be run over a long period of time, a lot of techniques must be implemented with
traditional workflow programming in order to increase resilience. Orchestrator, however,
provides a built-in workflow engine. The engine takes care of a lot of issues and offers
multiple ways to run workflows.
As there are already over 500 ready-to-use actions and workflows available, in many
cases there is no longer a need to write your own code. If you need to implement your own
code, Orchestrator uses JavaScript (which is widely used and relatively easy to learn).
In the last couple of years, Orchestrator has already become widely utilized for
automation purposes in many companies. However, the integration with vRealize
Automation has pushed the product to a new level. The tight cooperation is also reflected
in the fact that it is now branded as vRealize Orchestrator (instead of vCenter
Orchestrator).
vRealize Orchestrator can help in many scenarios:

• The lifecycle of infrastructure services can be customized. For example, you can
register a virtual machine within a configuration database after provisioning or you
can assign a custom hostname.
• vRealize Orchestrator can also be used if you want to implement your own customizations and attach them to a blueprint (for example, imagine a workflow assigned to a blueprint which takes care of the backup of your machine).
• Of course, you can use vRealize Orchestrator to manage and automate the vRealize
infrastructure itself.
• vRealize Orchestrator is also a great tool for third-party vendors. If they want to
integrate their solution into vRealize Automation, they can develop their own
workflows, which in turn can be published to the service catalog, using the
Advanced Service Designer. We will cover Advanced Service Designer in the next
chapter. Such published services are also described as Anything-as-a-Service (XaaS)
blueprints.
• Of course, you can still create your own custom services and integrate them within
your infrastructure. Once again, there are many reasons and scenarios for doing so.
Just as an example, you can request a LUN or perform some Active Directory tasks
via the service catalog.
14.2 vRealize Orchestrator configuration

Because vRealize Orchestrator is such an important tool for extending vRealize Automation, it is already shipped as part of the vRealize Automation appliance. While you can use the built-in Orchestrator instance for testing and some smaller projects, it is nevertheless recommended to deploy a stand-alone Orchestrator appliance.
Before you are able to run a workflow in Orchestrator, a couple of configuration steps
need to be completed. Firstly, integration into vRealize Automation is required. There are
two directions of integration and it is recommended that you use both:

1. An Orchestrator endpoint should be configured within vRealize Automation. This permits the invocation of Orchestrator workflows from vRealize Automation.
2. The vRealize Automation plug-in in Orchestrator has to be configured. This plug-in allows you to manage vRealize Automation entities from within Orchestrator and facilitates the configuration of IaaS workflows. Needless to say, there are also a lot of predefined workflows within this plug-in.

In addition to configuring the vRealize Automation plug-in, there are other plug-ins that
may also be of interest to you. This is because their enclosed workflows are quite likely to
be called by your workflow. This encompasses the vCenter and the Microsoft Active
Directory plug-ins.

14.3 Start Orchestrator Appliance

With the above in mind, we can start the configuration. If you are using the internal
Orchestrator instance within vRealize Automation, the first thing to do is to check if it is
running – and if not – start it. This can be done as described in the following:

1. SSH to your Orchestrator (or vRealize Automation) instance using root as the username.
2. Type in the command service vco-configurator start.
3. Open the page https://<vrealize-automation-appliance.domain.name> in a supported
browser.
4. On the lower area of the screen, click the vCenter Orchestrator Configurator link.
5. Log in with the username and password vmware. Once you have logged in, you
need to change the password.
6. Review the current Orchestrator configuration. There should be a green icon in each
category on the left-hand side of the screen indicating that the configuration is fine.
If necessary, configure the licensing. vRealize Orchestrator can be licensed via
vSphere or vRealize Automation.
Figure 14-1 vRealize Orchestrator endpoint configuration

14.4 Create Orchestrator Endpoints

The first step towards integrating Orchestrator with vRealize Automation is to create an
endpoint. This can be realized as follows:

1. Log in to the vRealize Automation self-service portal with IaaS administrator privileges.
2. Switch to the Infrastructure > Endpoints > Endpoints page.
3. Select New Endpoint > Orchestration > vCenter Orchestrator. The configuration
page for the endpoint opens (see Fig. 14-1).
4. Assign a name to the endpoint.
5. Optionally you can type in a Description.
6. You need to type in the address of the Orchestrator instance. The format of the address differs depending on the version of Orchestrator:
• Orchestrator 5.1: https://<hostname>:<port>
• Orchestrator 5.5 onwards: https://<hostname>:<port>/vco
When using the internal Orchestrator appliance, the address would be https://<vrealize-automation-appliance.domain.name>:8281/vco.
7. Next, you must enter your credentials. Click on Credentials. By default, the
Orchestrator instance has the username administrator@vsphere.local (the password
was set during configuration of the SSO appliance).
8. Every endpoint needs a priority (in case there is more than one Orchestrator instance available, the one with the highest priority is used – otherwise a round-robin algorithm is applied). Click New Property and name it VMware.vCenterOrchestrator.Priority. Assign 1 as its value.
9. Save the property by clicking on the Save icon.
10. Click the OK button to save the endpoint.
Figure 14-2 Assign permissions for Orchestrator

If you have different Orchestrator instances, you can also override the default Orchestrator instance at blueprint level. This can help if you have any special resource-intensive
workflows that need to run on a dedicated instance. If you want to configure this, you
must add the VMware.VCenter.Orchestrator.EndpointName property and assign the name
of the endpoint.

14.4.1. Installation of the Orchestrator client

The Orchestrator client can be downloaded directly from the Orchestrator instance
(https://<vrealize-automation-appliance.domain.name>). You can find the download link
on the bottom of the page. The client is implemented in Java and requires Java 7, so please ensure that this version of Java is installed on your computer.
14.4.2. Background: Adding additional users for the Orchestrator client

By default, access to Orchestrator is restricted to the administrator@vsphere.local user. If you need other users to connect to the appliance using the Orchestrator client, you must add them in the SSO appliance. This can be done by logging in to the Web Client and navigating to the Administration > Single Sign On > Users and Groups menu. Change to
the Groups tab and locate the vcoadmins group. You can then add additional members in
the lower area of the screen by clicking the Add symbol (see Fig. 14-2).
14.4.3. Orchestrator configuration

Not only do we need to configure Orchestrator as an endpoint within vRealize Automation, we also have to configure Orchestrator itself in order to make use of all available functionality. This configuration involves the following:

• Import SSL certificates into Orchestrator.


• Configure the vRealize Automation plug-in.
• Set up the Active Directory plug-in.
14.4.4. Import SSL certificates

To import SSL certificates into Orchestrator, we must open the Orchestrator Configurator again. Please follow these steps:

1. Open the Orchestrator Configuration web page (it runs on https://<orchestrator-appliance.domain.name>:8283/vco-config – if you are using the integrated Orchestrator instance, you can follow the link from the bottom of the vRealize Automation appliance homepage, as described before).
2. Log in to the vRealize Orchestrator configuration site as user vmware with your
assigned password.
3. Once logged in, go to the Network menu.
4. Click the SSL Trust Manager tab.
5. Within the Import from URL text box, enter the hostname of your vCenter Server and
click Import (see Fig. 14-3).
6. We also have to add the IaaS server (the Windows machine) to the list of trusted hosts. So within the Import from URL section, type in the name of your IaaS host and click Import.
7. Change to the Startup Options menu and check to see if the Orchestrator service is
running.

Figure 14-3 Orchestrator configuration

Figure 14-4 Add a vCenter Server



14.4.5. Configure the vRealize Automation plug-ins

The next step is to configure the vRealize Automation plug-in. As integration with
Microsoft Active Directory is quite common, you should also be able to set it up
appropriately. You can do this by following these steps:

1. Start the Orchestrator Java client.


2. Log in as user administrator@vsphere.local and your assigned password.
3. In the upper area of the screen, within the dropdown list near the VMware vCenter
Orchestrator label, select Run.
4. In the workflow library, navigate to Library > vCenter > Configuration.
5. Right-click Add a vCenter host instance and select Start Workflow.
6. A dialog opens and prompts for the following input:
a. IP or host name from the vCenter Server
b. The HTTPS port is 443.
c. Enter "/sdk" within the Location of the SDK input box.
d. Select Yes for Will you orchestrate this instance.
e. With the Ignore Certificate Warnings checkbox, select Yes.
7. Click Next to move to the next dialog page and provide the following input (see Fig.
14-4):
a. For Use shared Session select Yes.
b. In the User name for Orchestrator to connect to vCenter Server field, enter
administrator@vsphere.local.
c. Provide the Password of the user.
d. Enter the Domain name.
8. Click Submit to start the workflow. If the workflow completes successfully, a
completed workflow token will appear within the workflow execution history
(expand the workflow to see the history).

The next plug-in to be configured is the Microsoft Active Directory plug-in. However, this
plug-in assumes that your connection with Active Directory will be encrypted with SSL. If
your Windows environment does not support this, you must configure it to do so before
continuing (if you are not sure how to check that, there are Windows tools like ldp.exe to
help). In the following, we will assume that LDAP with SSL is activated and listens on
port 636.
Perform the following steps to set up an Active Directory connection:
1. In the workflow library, navigate to Library > Microsoft > Active Directory >
Configuration.
2. Right-click Configure Active Directory server and select Start Workflow.
3. A dialog opens and prompts for the following input (see Fig. 14-5):
a. Active Directory host IP/URL as domain controller.
b. The Port is 636.
c. Provide the Root of the domain, for example dc=vdc,dc=lab.
d. Select Yes for the Use SSL checkbox.
e. With the Ask for confirmation before importing the certificate checkbox,
select Yes.
f. Provide your Default Domain, for example vdc.
4. Click Next to move to the next dialog page and provide the following input:
a. For Use shared session, select Yes.
b. Enter the User name for the shared session (format domain\user).
c. Provide your Password for shared session.
5. Click Submit to start the workflow and accept the security and certificate warnings
if necessary. Once again, if the workflow runs successfully, a completed workflow
token will appear.

Figure 14-5 Active Directory integration

If you wish to check that configuration of the plug-in works, change to the Administer
view (dropdown list in the upper area of the screen) and expand the Active Directory
node, which is found within the navigation area in the left-hand side of the screen (see Fig.
14-6).
Now we can continue with the configuration of the vRealize Automation plug-in. This
involves two steps:

• Configure the IaaS host of vRealize Automation.


• Install the vCO customizations.
Let’s begin with the IaaS host configuration:

1. In the workflow library, navigate to Library > vCloud Automation Center >
Configuration.
2. Right-click Add the IaaS host of a vCAC host and select Start Workflow.
3. A dialog opens and prompts for the following input (see Fig. 14-7):
a. Under vCAC Host, click the Not Set link.
b. Within the Select (vCACCafe:VCACHost) dialog box, expand vCloud
Automation Center and select Default (your vCAC host) and click Select.
4. Click Next to move to the next dialog page and provide the following input (see Fig.
14-8):
a. Provide your IaaS server as Host Name.
b. Enter the Host URL.
c. Leave the default settings for the Connection timeout.
d. Leave the default settings for the Operation timeout.

Figure 14-6 Active Directory

Figure 14-7 Add an IaaS host

5. Click Next to move forward to the next input dialog and provide the following
input:
a. Use Shared session for the Session Mode.
b. Enter an Authentication user name (use the system administrator of your
IaaS Server configuration).
c. Provide the Authentication password.
6. Click Next for the next configuration screen.
7. Provide the following input:
a. For the Workstation for NTLM authentication, leave the default settings.
b. For the Domain for NTLM authentication, enter your domain (for example
vdc)
8. Click Submit to start the workflow.

Once again, you can change to administer mode and expand the vCAC Infrastructure
Administration node to check if your configuration is working.
The final step within this configuration is to install the vCO customizations. The vCO
customizations help when attaching an Orchestrator workflow to a blueprint or installing
an Orchestrator workflow as a menu action within vRealize Automation. This can be done
as follows:

Figure 14-8 IaaS host properties

1. In the workflow library, navigate to Library > vCloud Automation Center >
Infrastructure Administration > Extensibility > Installation.
2. Right-click Install vCO customizations and select Start Workflow.
3. Within the Install vCO customization input dialog box, click Not set.
4. Next, you have to choose the vCAC Host for the customizations. Expand vCAC
Infrastructure Administration and select your IaaS host.
5. Click Select and continue with Next.
6. The next configuration screen will add additional workflow stubs. On the "State change workflow stubs to update to run vCO workflow" page, click Next.
7. Enter 8.0 on the Virtual machines menus to create page.
8. Click Submit to start the workflow. The workflow can take a little while to be
completed.

14.4.6. Orchestrator Use Cases

Now that we have shown you how to set up Orchestrator properly, we want to explore its
more interesting aspects and demonstrate some real life scenarios. We will aim to achieve
this by running through the following examples:
• Run a script on a machine as part of the post-provisioning process.
• Integrate Puppet.
• Write a workflow that provides an instance type dropdown list on a blueprint request page.
14.5 Use Case 1: Run a script on a machine after installation

In fact, this is a fairly common scenario, especially when the cloning mechanism is used to
provision a machine. Cloning, together with guest customization, is a powerful
mechanism, used to quickly deploy and integrate a new machine into an existing
environment. Nevertheless, cloning alone is no silver bullet. Fine-grained adaptation and the installation of some "advanced" software is something that cannot always be achieved
with cloning. However, it is quite handy to be able to run a post-provisioning script after
the cloning process. We have already shown that such a thing can be implemented using
the vRealize Automation guest agent. Using the guest agent is not always feasible though.
The guest agent needs an SSL connection to the IaaS host in order to fetch the instruction
that allows the command to be executed. This can be quite difficult, or indeed impossible,
if the deployed machine is not within the same network as your vRealize Automation
infrastructure. Furthermore, it needs the guest agent installed on the machine. That’s why
we will show you how to implement this feature within Orchestrator. To do so, we use a State Change workflow (the MachineProvisioned workflow stub provided by vRealize Automation) and configure it to execute an Orchestrator workflow that runs our script. Of course, the script to be run should be placed within the virtual
machine (however, Orchestrator could dynamically copy the required script to the virtual
machine as well).
In the following, we will cover the steps of this implementation in detail. However, if you want to move more quickly, you will find the workflows on the companion webpage too.

1. Open vRealize Orchestrator and log in using your credentials.


2. Switch to the Run mode and change to the Workflows tab.
3. Create a new folder by right clicking on the Library folder and selecting Add
folder…
4. Provide a Name for the folder where you will place your own workflows.
5. Navigate to the Library > vCenter > Guest Operations > Processes.
6. Right click on the Run program in guest workflow and select Duplicate
workflow.
7. Provide a new name for the workflow in the New workflow name textbox and
choose the location of your new Workflow folder and click Submit.
8. Next, navigate to the vCloud Automation Center > Infrastructure Administration > Extensibility > Workflow Stubs section and copy the Assign a state change workflow to a blueprint and its virtual machines workflow as well.
9. Now start your copied Assign a state change workflow to a blueprint and its
virtual machines workflow.
10. On the first dialog box for the input parameters, select the MachineProvisioned
entry from the vCAC workflow stub to enable dropdown list.
11. Under vCAC host, click the Not Set link, expand vCAC Infrastructure
Administration and select your IaaS host.
12. Click Next.
13. Choose the target blueprint where the State Change workflow should be applied
(remember that the machine provisioned needs the VMware tools running). Fig. 14-
9 depicts this dialog box.
14. Set Apply machines to existing machines to No and click on Next to continue.
15. Within the next dialog box, choose your copy of the Run program in guest
workflow.
16. Set the Add vCO workflow inputs as blueprint properties to Yes.
17. Set the Add last vCO workflow run input values as blueprint properties option to No.
18. Press Submit to start the workflow.

Figure 14-9 Blueprint mapping

The workflow takes some seconds to run. If everything is OK, a completed workflow
token will appear next to the workflow instance.
The rest of the configuration takes place within the vRealize Automation self-service
portal. We must navigate to the blueprint to which this workflow was assigned and
provide information about which script should be run and how to connect to the virtual
machine. This is done as follows:
1. Within the vRealize Automation self-service portal, navigate to Infrastructure >
Blueprints > Blueprints.
2. Click Edit on the blueprint for the state change workflow.
3. Go to the Properties tab (see Fig. 14-10). You will see some properties which have been added by vRealize Orchestrator.
4. First, modify the ExternalWFStubs.MachineProvisioned.vmUsername and
ExternalWFStubs.MachineProvisioned.vmPassword custom properties and
provide the credentials of the virtual machine (the password will be stored in clear text; encryption does not work here).
5. Change the ExternalWFStubs.MachineProvisioned.programPath property and
enter the complete path of your script.
6. For the ExternalWFStubs.MachineProvisioned.workingDirectory property,
provide the directory from which to start the program.
7. Leave the ExternalWFStubs.MachineProvisioned.vm field empty. This property
will be filled automatically during the workflow.
8. Press OK to save the changes.

As soon as you have saved the changes, any new virtual machine provisioned from that blueprint will run the script specified in the properties section.

Figure 14-10 Blueprint custom properties mapping

14.6 Use Case 2: Integrate Puppet

We have just shown how to run a script from Orchestrator once a machine has been provisioned. This is fine for smaller customizations, but if we want to run different scripts on a virtual machine and maintain such functionality in a bigger environment, a lot of work is involved: we have to write, maintain and set up all these scripts.
Because this is not a feasible approach in large environments, tools like Puppet or Chef
exist that can help with the configuration management of your software.
Setting up Puppet involves several steps:
• First, a Puppet Server is needed. Puppet comes in two different flavors: Puppet
Enterprise or Puppet Open Source.
• A Puppet agent must be installed on the machine to be customized.
• The Puppet client must be authenticated to the Puppet server in order to
communicate with it and download code fragments.
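To give an impression of what the manual route involves, here is a hedged sketch of a Windows agent installation (paths, file names and the master's host name are hypothetical; the MSI property PUPPET_MASTER_SERVER is documented by Puppet):

# Install the agent MSI and point it at the Puppet master
msiexec /qn /norestart /i "C:\Temp\puppet-agent-x64.msi" PUPPET_MASTER_SERVER=puppetmaster.vdc.lab

# Start the agent service and trigger a first run against the master
Start-Service -Name puppet
& "C:\Program Files\Puppet Labs\Puppet\bin\puppet.bat" agent --test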

While the whole procedure can be set up manually, it is easier to use the Orchestrator
Puppet plug-in to configure things. It is important to note that the plug-in is not part of the
standard Orchestrator instance and therefore we must download and install it first (it can
be found on the VMware Solution Exchange[9]).
Once downloaded, perform the following steps to install and configure the plug-in:

1. Log in to the vRealize Orchestrator configuration page (https://<orchestrator-appliance.domain.name>:8283).
2. Navigate to the General menu.
3. Change to the Install Application tab.
4. Within the Select a file to install dialog, choose the downloaded .vmoapp file and upload it. Once finished, the message Will perform installation at next server startup will be shown.
5. Go to the Startup Options menu and click on the Restart service link.

Now that the plug-in is installed, we can continue with its configuration. First, make sure
that you have a working Puppet master running in your environment. If you don’t have a
Puppet master installed yet, you can download a Learning VM directly from the Puppet website[10] (there is also a detailed description of how to set up the virtual machine and which credentials to use).
Before you can use the automatic agent installation, you need to register the Puppet
master. There are some prerequisites to be fulfilled:

• Verify that Puppet Enterprise 3.7.0, Puppet Enterprise 3.3, Puppet Open Source
3.7.1, Puppet Open Source 3.6.2 or any other compatible version is installed.
• Verify that you can connect to the Puppet Master using SSH from the Orchestrator
server.
• Verify that the SSH daemon on the Puppet master allows multiple sessions. The relevant parameter is in the configuration file /etc/ssh/sshd_config; it must be set, for example, to MaxSessions 10.
Now we can perform the configuration of the plug-in:

1. Start the Orchestrator Java client and log in with your credentials.
2. Change to the Run mode and navigate to the Library > Puppet > Configuration
folder.
3. Right-click the Add a Puppet Master workflow and select Start workflow….
4. Provide the following input for the workflow and click Submit:
a. Puppet Master Name: How the name should appear in vRealize Orchestrator.
b. IP Address
c. Port: 22
d. Username
e. Password
5. Once the configuration has been completed, you can verify it by calling Validate a
Puppet Master workflow. If a completed token appears, everything has been
configured properly.

Figure 14-11 Puppet integration workflow

Finally, you can now use the workflows to install Puppet agents on your machines. There
are two possibilities: Install Linux Agent with SSH and Install Windows Agent with
PowerShell in the Node Management folder. For the Linux Agent, we obviously need
SSH and that shouldn’t be a problem. For the Windows Agent, however, PowerShell is
required. This did not exist prior to Windows Server 2008. Therefore, in order to use the
workflows with Windows Server 2003, you must first install PowerShell.
Additionally, PowerShell does not allow remote access by default, so you have to activate it on the servers using Enable-PSRemoting. The machine must also be a member of an Active Directory domain. Further to this, if the server is not in the same domain as the client (vRealize Orchestrator), you need to install a certificate on every server and register it with PowerShell (please replace the IP address, hostname and the certificate thumbprint with your own values):

New-WSManInstance -ResourceURI winrm/config/Listener -SelectorSet @{Transport="HTTPS"; Address="IP:x.x.x.x"} -ValueSet @{Hostname="x.y.org"; CertificateThumbprint="XXXXXXX"}
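Before starting the install workflow, it can be worth verifying the remoting setup from PowerShell; a minimal sketch (the host name is a placeholder):

# Enable PowerShell remoting on the target server (run there once)
Enable-PSRemoting -Force

# List the configured WinRM listeners; the HTTPS listener should show up
Get-WSManInstance -ResourceURI winrm/config/Listener -Enumerate

# Test connectivity to the server over HTTPS from the Orchestrator side
Test-WSMan -ComputerName "x.y.org" -UseSSL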
Now you can start the install workflow. If the installation was successful, the Puppet agent
will be installed as a service daemon in a non-running state. The next step is to configure
the manifests within the Puppet master. You will then be able to start the Configure
Windows Agent with PowerShell / Configure Linux Agent with SSH workflows.
These can be found in the Library > Puppet > Node Management folder. Now all
Puppet agents are running and communicating with the Puppet master (see Fig. 14-11).

14.7 Use Case 3: Write a workflow to provide an instance type dropdown list on a blueprint request page

We have already talked about the so-called blueprint sprawl, a situation in which there are
too many blueprints defined within vRealize Automation. Having so many blueprints
leads to an administrative overhead. This is especially true when maintaining blueprints.
Therefore, our goal is to limit the numbers of blueprints. This can be done, if basic
hardware settings are not taken directly from the blueprint definition, but defined in
custom properties and build profiles. This means we can just use a standard blueprint, as
all hardware variations are defined outside of it.
The first step within the implementation is to define the VM size. In our scenario, we
create four different VM sizes as depicted in the following table.

Size vCPU RAM (GB) Storage (GB)

Micro 1 1 10

Small 1 2 10

Medium 2 4 30

Large 4 8 100

Table 14-1 Flavor types


To implement the scenario, we first have to create a new custom property. Navigate to
Infrastructure > Blueprints > Property Dictionary and create a new custom property
named Custom.VirtualMachine.Size. Once created, edit the custom property and add an
appropriate dropdown list with the values Micro, Small, Medium and Large.
Next, add the custom property to the blueprint. You will now be able to see the dropdown
list on the request screen. However, choosing any value does not yet have any effect on
the provisioning.
Now we must continue with the work in Orchestrator. There is a built-in workflow called
Workflow template under Library > vCloud Automation Center > Infrastructure Administration > Extensibility. This template simply retrieves all custom properties
from a virtual machine requested by vRealize Automation. In that respect it is certainly a
good starting point.
When you open the workflow, you will notice that it consists of only a single scriptable
task. Hover over this task until a small pencil appears, then click on the pencil to modify
it.
Firstly, we have to declare some variables. The variables memory, cpu and disk should all be set as attributes in the workflow. This can be achieved as follows:

1. Go to the scriptable task within your workflow, hover over the element and click the
pencil to edit it.
2. Change to the OUT tab.
3. Click the Bind to workflow parameter/attribute button.
4. Within the Chooser… dialog box, select Create parameter/attribute in workflow.
5. Create the variable memory of type string and check the Create workflow
ATTRIBUTE with the same name checkbox.
6. Click OK.
7. Repeat steps 4-6 to create the cpu and disk variable.

Now, replace the existing script with the following code:

var size = "medium";  // default value if the custom property is missing

for each (var key in vCACVmProperties.keys) {
    switch (key) {
        case "Custom.VirtualMachine.Size":
            size = vCACVmProperties.get(key);
            System.log("Found Custom.VirtualMachine.Size: " + size);
            break;
    }
}
This code fragment will iterate over all the custom properties until our new custom
property is found. The value of the custom property is saved within the size variable. If there is no such custom property, medium is used as the default value. Now we need the logic required to map the VM size to the right amount of memory, CPU and hard disk (note that the dropdown values are capitalized, so the size is normalized to lower case before matching):

if (size != "") {
    // the dropdown values are capitalized (Micro, Small, ...), so normalize first
    switch (size.toLowerCase()) {
        case "micro":
            memory = "1024";
            cpu = "1";
            disk = "10";
            break;
        case "small":
            memory = "2048";
            cpu = "1";
            disk = "10";
            break;
        case "medium":
            memory = "4096";
            cpu = "2";
            disk = "30";
            break;
        case "large":
            memory = "8192";  // 8 GB, as defined in Table 14-1
            cpu = "4";
            disk = "100";
            break;
    }
}

Later on, we want to assign the values of the output variables to the appropriate custom properties. Firstly, however, we need to define which custom properties are to be overridden. We will create workflow attributes for this:

1. On the General section of the workflow, scroll down until you see the Attributes
section.
2. Create a new variable named propertyNameMemory of type string and assign the
value VirtualMachine.Memory.Size.
3. Create a new variable named propertyNameCpu of type string and assign the
value VirtualMachine.CPU.Count.
4. Create a new variable named propertyNameDisk of type string and assign the
value VirtualMachine.Disk1.Size.
5. Create a new variable named propertyIsHidden as a Checkbox and set it to No.
6. Create a new variable named propertyIsRuntime as a Checkbox and set it to No.
7. Create a new variable named propertyIsEncrypted as a Checkbox and set it to No.
8. Create a new variable named doNotUpdate as a Checkbox and set it to No.

Next, we must use the output of the scriptable task to modify the appropriate custom
properties of the virtual machine to be deployed. Luckily, there are already workflow tasks
that can be used:

1. From within the All Workflow section, on the left-hand side of the screen, expand
the top level node and navigate to Library > vCloud Automation Center >
Infrastructure Administration > Extensibility > Helpers. Then drag the Create / update property on virtualMachineEntity workflow right after your scriptable task, ensuring it is connected with an arc.
2. Name the workflow UpdateMemory.
3. Bind the following workflow parameters as input parameters of the update workflow, as per the following table:

Local parameter Source parameter

host vCAC host (in parameter)

virtualMachineEntity virtualMachineEntity (in parameter)

propertyName propertyNameMemory [attribute]

propertyValue memory [attribute]

propertyIsHidden propertyIsHidden [attribute]

propertyIsRuntime propertyIsRuntime [attribute]

propertyIsEncrypted propertyIsEncrypted [attribute]

doNotUpdate doNotUpdate [attribute]

Table 14-2 Parameter definition

4. Repeat steps 1-3 for another workflow element. However, rename it to UpdateCPU. Assign the same set of input parameters to the new workflow element, but modify the input variables to use the cpu attribute for the propertyValue input parameter and the propertyNameCpu attribute for the propertyName input parameter.
5. Repeat steps 1-3 for a third workflow element. However, rename it to UpdateDisk. Assign the same set of input parameters to the new workflow element, but modify the input variables to use the disk attribute for the propertyValue input parameter and the propertyNameDisk attribute for the propertyName input parameter.
6. Save and close the workflow.

At this point, the implementation of our workflow is complete. However, we still need to
associate it with a blueprint. For that – once again – we have to look for the Assign a state
change workflow to a blueprint and its virtual machines workflow, start it and use the
MachineProvisioned workflow stub to enable it. As the End user workflow to run input parameter, select your newly created workflow.
Back in vRealize Automation, you can test the new workflow. When requesting a new
machine from your blueprint, you should be able to see the workflow starting and running
in vRealize Orchestrator. If you look at the details of your new machine, you will not see the new values, because vRealize Automation does not reflect them at this point in time; if you check the machine itself, however, you should see the updated values.
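If you want to double-check from inside a provisioned Windows guest, a small hedged sketch like the following reads the hardware the guest actually sees:

# Query CPU count and memory as reported by the guest operating system
$cs = Get-WmiObject -Class Win32_ComputerSystem
"{0} logical CPUs, {1:N0} MB RAM" -f $cs.NumberOfLogicalProcessors, ($cs.TotalPhysicalMemory / 1MB)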

14.7.1. Additional plug-ins and workflows

As already mentioned, vRealize Orchestrator provides a lot of plug-ins. However, as there are many different customizations possible, at some point in time you will
certainly get into a situation where no workflow is available. Therefore, you could write
your own JavaScript code in vRealize Orchestrator, as demonstrated in the last example.
However, don’t forget that vRealize Orchestrator is mainly an orchestration engine, so in
most cases it is better to run an existing script, or to invoke an endpoint. For example, if
you have some work for a Windows machine, use the PowerShell plug-in to run a script
that carries this out. Linux scripts can also be easily invoked by means of the SSH plug-in.
If there is no script that can be invoked, you can still use the SOAP or the REST plug-in.

14.8 Summary

This chapter introduced the configuration and use of Orchestrator within vRealize
Automation. We learnt that there is a bi-directional communication between them.
Orchestrator can automate vRealize Automation and, in turn, vRealize Automation can
use Orchestrator to call a workflow. Furthermore, we introduced the most important
workflows from the vRealize Automation plug-in and showed how to implement a set of
use cases.
15. Advanced Service Designer

When building up a cloud, providing services to end users is a key factor. vRealize
Automation offers a single pane of glass where the users can provision these services. In
addition, vRealize Automation provides a number of other benefits:

• Firstly, you can provision any service or resource that you desire. You can have
services as you require them, with minimal delay. If you need more resources, you
can scale-out quickly.
• Another advantage is the ability to replicate quickly. Instead of building everything
manually, you can structure your solution as a series of scripts and applications. This
means you can deploy and rebuild as needed.
• You can also create and destroy with ease. Since you are provisioning on demand, it
is relatively easy for you and your users to build up a large set of servers. As these
are VMs, it is of course easy to destroy them when their services are no longer
required.

So far, we have focused on the provisioning of infrastructure resources. However, provisioning is not limited to deploying machines – essentially you can set up any service
and publish it to the self-service catalog. In vRealize Automation, these services are called
Anything as a Service (XaaS).
When setting up XaaS services, you need at least the Advanced Edition of vRealize
Automation. The tool used to create these services is called the Advanced Services
Designer (ASD).
From a technical point of view, there are three components working together to provide
XaaS services:

• The vRealize Orchestrator, which runs services as workflows in the background.


• The Advanced Service Designer, which helps to create a graphical frontend for the
Orchestrator workflow.
• The self-service catalog, which hosts the published services.

For the remainder of this chapter, we will dive into the basics and further customizations
of the ASD. First of all, we will demonstrate some common use scenarios for the ASD.
Then, we will explain the steps to configure the ASD. Finally, we will discuss examples of
real life applications implemented with the ASD.

15.1 XaaS use cases

It is quite easy to explain – at a conceptual level – what can be achieved through the ASD:

• Firstly, the resources to be provisioned by the XaaS services have to be defined. These can be a user, a file, a cluster, a LUN on a storage system, a virtual machine, etc.
These resources are called custom resources.
• After resources have been defined, it is possible to perform some actions on them.
For example, a user created by vRealize Automation can be deleted or deactivated.
As with virtual machines, actions can be defined on any service. Due to these
actions being performed at any time after the creation, they are also known as day-2
operations. The actions themselves are called resource actions.
• To create a new XaaS service, you have to first create a service blueprint. Designing
a service blueprint involves creating the user interface for an Orchestrator workflow
and assigning the result to a custom resource. Finally, the blueprint has to be published to the service catalog.
• It is also possible to provision infrastructure compute resources via the ASD.
However, when doing so, you should create a mapping between the vRealize
Automation catalog resource type and the Orchestrator inventory type. This
mapping is called resource mapping.

Bearing this knowledge in mind, we can find use case scenarios for the ASD. Essentially,
the ASD is a useful tool for all kinds of processes, which can be automated and published
as a service, within the self-service catalog. Examples of this include:

• User administration, e.g. the creation of new accounts, activating and deactivating
accounts, or resetting passwords.
• Automate email configuration, such as setting up a new mailbox.
• Provide storage – also called Storage as a Service.
• Create networks.
• Perform backups and recovery.
• Security and compliance processes.
• Installation of new software or updates.

In many cases, the examples given for the ASD seem to be quite trivial. However, when
you consider that many help desks spend most of their time with such tasks, you can better
appreciate the benefits of automating them. That does not mean you should automate
everything in a single leap, rather you could automate processes as and when the need
arises.
The ASD is also very important to third-party vendors, as it allows them to integrate their
solutions into vRealize Automation. They need only deliver Orchestrator plug-ins, which
can then in turn be invoked from vRealize Automation. There are already examples of
such plug-ins and they are very suitable for being integrated within the ASD. For example,
storage workflows from EMC or the NetApp WFA Command package.

15.2 Advanced Service Designer Configuration


15.2.1. Role assignment

Before being able to configure the ASD, you have to make sure you have the proper permissions, i.e. you need the service architect role. This role allows you to manage XaaS services. The configuration can then be done as follows:

1. Change to the Administration > Users & Groups > Identity Store Users &
Groups page.
2. On the right-hand side in the search box, type in the name of the user to whom the service architect role should be assigned.
3. Select the found user and click on View Details.
4. On the Details tab, please make sure that the Service Architect role is selected and
click Update.
5. Log out and log in again to vRealize Automation.

Once you have logged in to vRealize Automation again, you should be able to see the
Advanced Services menu. Figure 15-1 depicts the Advanced Service Designer menu
within the graphical user interface.

Figure 15-1 Advanced Service Designer resource mapping

15.2.2. Endpoint configuration

Although we have already created endpoints for vRealize Orchestrator, we must configure
further endpoints for the Advanced Service Designer. The configuration takes place within
the Administration > Advanced Services menu.
Within the Server Configuration menu, you can choose to either use the default
Orchestrator server or switch to an external one. If you configure an external Orchestrator
server, please provide the following input:
• Name of the server (assigned by you)
• Description (optional)
• Host: the machine where vRealize Orchestrator is installed
• Port: Usually 8281
• Authentication: Basic (by default unless you configure Orchestrator otherwise)
• User name
• Password

The next step is to configure the endpoints for the ASD. ASD provides endpoints for the
following plug-ins:

• Active Directory
• HTTP-REST
• PowerShell
• SOAP
• vCenter Server

In the following we will show how to configure these endpoints.


Figure 15-2 ASD endpoint configuration

15.3 Configuration of the Active Directory plug-in

You have to configure the Active Directory endpoint if you plan to use Active Directory
within any of your XaaS workflows. Setting up this plug-in involves working through the
following steps:

1. Navigate to the Administration > Advanced Services > Endpoints page.


2. Click on the Add-icon.
3. From the Plug-in dropdown list, choose Active Directory and click Next to
continue.
4. Assign a Name for the connection.
5. Optionally enter a Description.
6. Click Next.
7. The connection information can be provided on the Details menu (see Fig. 15-2).
Provide the following input:
a. Type in the DNS name or IP address of the domain controller within the
Active Directory host IP/URL textbox.
b. Provide the Port number (please consider that the LDAP connection has to be secured, so you cannot use port 389 – instead, port 636 is used for the communication).
c. Within the Root textbox, provide the LDAP base for your connection (e.g.
dc=vdc, dc=lab).
d. For the useSSL dropdown list, select Yes.
e. Define the Default Domain (e.g. @vdc.lab).
f. Within the User name for shared session, provide a user name with
appropriate permissions to Active Directory.
g. Provide the Password for shared session.
8. Click Add to save your changes.

Once you have finished the configuration, you can switch over to Orchestrator and check
if the configuration has been set up successfully. In Orchestrator, switch to the Run-mode
and change to the Library > VCAC > ASD > Endpoint Configuration > Microsoft >
Active Directory > Configuration folder. You should see a green workflow token next to
the Configure Active Directory server workflow.

15.4 Configuration of the vCenter Server endpoint


If you want to use vCenter server from within your XaaS workflows, you must configure
the vCenter Server plug-in, too:

1. Navigate to the Administration > Advanced Services > Endpoints page.


2. Click on the Add-icon.
3. From the Plug-in dropdown list, choose vCenter Server and click Next to continue.
4. Assign a Name for the connection.
5. Optionally enter a Description.
6. The connection information can be provided on the Details menu. Provide the
following input:
a. IP or host name of the vCenter Server instance to add.
b. Port of the vCenter Server instance (port 443).
c. Location of the SDK that you use to connect to the vCenter Server instance
(usually “/sdk”).
d. Click on Next.
7. On the Set the connection properties tab, enter the following information:
a. The HTTP port of the vCenter Server instance (applicable for vCenter
plugin version 5.5.2 or earlier).
b. User name of the user that Orchestrator will use to connect to the vCenter
Server instance.
c. Password of the user that Orchestrator will use to connect to the vCenter
Server instance.
8. Click Add to finish the configuration.

15.5 Working with Advanced Service Designer

Working with the ASD means creating and configuring custom resources, service
blueprints, resource mappings and resource actions. Once you have completed the
configuration of the ASD, you are ready to create your own service blueprints and publish
them into the service catalog.

15.5.1. Exporting and importing ASD components

From time to time, it is advantageous not to have to develop everything from scratch. In
these cases, you should instead export ASD components from another environment into
your new one, as it is more convenient. When exporting components from ASD, custom
resources, service blueprints, resource mappings and resource actions can be exported as
a zip file. However, please consider that when exporting ASD components, they do not
include Orchestrator workflows (you could do this from Orchestrator by creating a
.package file).
The exporting and importing procedure is essentially quite simple. Once you have logged
in to vRealize Automation as a service architect, perform the following steps:

1. Navigate to the Administration > Content Management > Export content page.
2. Select the components to export on the Service Blueprints, Resource Actions,
Custom Resources and Resource Mappings tabs.
3. Click Next.
4. From the Export Content tab, click Export Content to begin downloading your
selections.

Importing a package is also quite easy. However, when importing content from other
environments, there is always the risk of overwriting your existing ASD components. This
would be quite unpleasant, so for this reason you are able to specify a prefix for the
imported components. Hence, there is less chance of a component being overridden, as
similarly named components are differentiated with this prefix. To import a package, follow
the steps as outlined:

1. Navigate to the Administration > Content Management > Import content page.
2. Select the Prefix only conflicting checkbox, if you want to add a prefix only if
there is any naming conflict (optional).
3. Enter a Prefix to add to imported components.
4. Click Browse… to upload the files to be imported. You can upload a .zip file for the
ASD components and a .package file for Orchestrator workflows.
5. Click Open.
6. Optionally, you can click on Validate to ensure you are not missing any vRealize
Orchestrator workflows required by the ASD components.
7. Click Import Content.

The import procedure will take a few moments to be completed. Once finished, you will
be able to see all the imported components within the ASD menus and the imported
workflows within the Orchestrator instance.

Custom resources can be described as the result of an ASD workflow. For example, if
you trigger a workflow to create a new user account, you first have to define a ‘User’
custom resource, which holds the output of this workflow. The Orchestrator workflow is
triggered from the service blueprint. If there is any action performed on a user, for
example, you want to deactivate that user, you need to define a day-2 operation by setting
up a resource action.
If you already have some alternative IaaS deploying mechanism, for example via
PowerShell scripts, you can trigger these scripts from within Orchestrator. It is important
to note, however, that if you want to trigger such a deployment from vRealize
Automation, you first have to map the provisioned resource to a data type in vRealize
Automation. This can be done by means of resource mappings. vRealize Automation
already has a set of predefined mappings (see Fig. 15-3). These encompass the following
resources:

Figure 15-3 Resource mappings

• vCloud Director virtual machine
• vCenter virtual machine
• vCloud Director vApp

In the following, we will demonstrate the process of integrating a simple vRealize
Orchestrator workflow into ASD, and then publish it to the self-service catalog as a XaaS
service.

15.5.2. Create Custom Resources

Perform the following steps to create a Custom Resource:

1. Within the Advanced Services menu, go to Custom Resources.
2. Click the Add icon.
3. On the Resource type tab, provide a value for the Orchestrator Type. As we want
to keep things simple, we just type in AD:User for an Active Directory user account
(see Fig. 15-4) .
4. Enter a Name for the Custom Resource (e.g. Active Directory User).
5. Optionally, provide a Description.
6. Optionally, specify a Version (e.g. 1.0.0) for the custom resource.
7. Click on Next.
8. On the Details Form tab you can specify how the Custom Resource will be shown
in the self-service portal. We will discuss the Form Designer later in detail, so just
click Add to finish the wizard.
Figure 15-4 Resource definition

15.5.3. Create a Service Blueprint

Now we can continue with creating a Service Blueprint. Perform the following steps:

1. Within the Advanced Services menu, go to Service Blueprints.
2. Click the Add icon.
3. On the Workflow tab, expand the Orchestrator folder and navigate to Library >
Microsoft > Active Directory > User.
4. Select the Create a user with a password in an organizational unit workflow and
click Next to continue (see Fig. 15-5).
5. On the Details page, review the settings for the Name, Description and Version
and click Next to continue.
6. Once again, the next page shows the Form Designer. The current graphical user
interface for the dialog comes directly from Orchestrator, but you are free to
customize the user interface. You can change the user control of each input
parameter as well as add additional controls (the input entered will be saved as part
of the request). To change a user control, hover over the element and click on the
pencil (see Fig. 15-6). Review and modify the following user controls:
a. Choose OU: Choose an Organizational Unit
b. ouContainer: OU-Container
c. accountName: Name
d. password: Password
e. confirmPassword: Confirm Password
f. domainName: Domain Name
g. displayName: Display Name
h. changePasswordAtNextLogon: Change Password at next Logon.
7. Now move the fields a little bit:
a. Move the Confirm Password field to the same row as the Password user
control.
b. Move the Display Name to the same row as the Name.
8. Add an additional Integer field to the request form and place it appropriately. Name
the field ‘cost center’ and use costCenter as an id.
9. Click Next to continue.
10. On the Provisioned Resource tab, map the outcome of your workflow to the
custom resource created previously. Click Add to finish the wizard.
11. The last step is to Publish the service blueprint.

The remaining steps should be well understood by now. After publishing a blueprint, there
is still a need to assign the appropriate permissions and choose how it should be displayed
within the service catalog. As we have already described the necessary steps in previous
chapters, we will skip this here.

Figure 15-5 ASD blueprint

Figure 15-6 ASD blueprint form

15.5.4. Define resource actions

As described earlier, resource actions help us to provide day-2 operations on provisioned
resources. A possible day-2 operation for an Active Directory user would be to deactivate
or delete an account (and we will demonstrate this in the following):

1. Within the Advanced Services menu, go to Resource Actions.
2. Click the Add icon.
3. On the Workflow tab, expand the Orchestrator folder and navigate to Library >
Microsoft > Active Directory > User.
4. Select the Destroy a User workflow (see Fig. 15-7).
5. Click on Next.
6. Review the value for the Resource type and Input parameter dropdown list on the
Input Resources page.
7. Click on Next.
8. On the Details page, review or modify the following settings:
a. Name
b. Description
c. Hide catalog request information page
d. Version
e. Type: Disposal or Provisioning
f. Target criteria: Always available or Available based on conditions
9. Click on Next.
10. On the next tab, you can customize the user interface and add additional user
controls to the Action Form. As we don’t want to modify anything in our scenario,
click the Add-button to save your changes.

Figure 15-7 ASD workflow selection

11. Click OK on the warning dialog.
12. Publish your resource action.

Once again, after publishing your resource action, you must ensure that your users have
the right entitlements to invoke this action.
After you have finished the whole configuration, users can finally use the service catalog
to create new Active Directory users and also delete them. Provisioned Active Directory
users can be found on the Items > Active Directory page. You can click on a user object
and trigger the resource action (see Fig. 15-8).
15.5.5. Input validation

When creating the user interface for any kind of software, a certain amount of work has to
be spent on usability and input validation. This applies to vRealize Automation as well. If
there is any wrong input, the workflow should not be started – instead, users should be
prompted to correct their input before being able to trigger a workflow. We already
covered the Form Designer and showed you how to modify and add user controls. At
this point, we want to show how these user controls can be configured to validate the
input.

Figure 15-8 ASD entitlement

15.5.6. Default fields

The first step in designing a user interface is always to choose the appropriate type for the
user controls. vRealize Automation already comes with a large set of user control types:

• Text field: One line
• Text area: Multi-line
• Password field: Input is encrypted
• Integer field: If needed, a minimum and a maximum value can be defined; users will
then see a slider.
• Decimal field
• Date & Time
• Checkbox
• Yes/No: Dropdown field
• Dropdown
• List
• Checkbox list
• Radio button group
• Search with auto completion
• Tree

15.5.7. Constraints

Constraints are used to limit input values. The following constraints are supported within
the ASD:

• Required: This constraint indicates if a value is required or optional. The constraint
can be configured as follows:
o Constant: The required setting is always applied.
o Field: There is dependency on another field. The value is only required if
another checkbox is activated.
o Conditional: An expression is evaluated to decide if the value is needed or not.
• Read-Only
• Value: Shows a value. The following options are available:
o Constant
o Field: The value is based on another field.
o Conditional: An expression is used.
• Visible
• Minimum length: For text input
• Maximum length: For text input
• Minimum value
• Maximum value
• Increment: Defines an increment – used for example on a slider
• Minimum Count: Configures the minimum number of elements of a control that
have to be selected
• Maximum Count: Configures the maximum number of elements of a control that
may be selected

15.5.8. Input validation with Orchestrator


The Advanced Service Designer already provides a set of constraints, which can be
applied to forms. However, you should also consider validating input to Orchestrator
workflows. This has two advantages:

• If you have a custom workflow, it can be called from endpoints other than the ASD.
Therefore, it is certainly good practice to validate input within Orchestrator, as it is
the only way to ensure that a validation is performed.
• Orchestrator has more powerful ways of performing validation. This includes
regular expressions.

Regular expressions are very powerful when it comes to validation. We will show a couple
of examples of how Orchestrator might use expressions to validate input:

• ^[\w_-]+$ accepts lower and upper case letters and digits as well as underscore “_”
and dash “-”. This regex might be applicable for the validation of a username.
• ^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$ is a little bit
more complicated and can be used for email validation (note that the dots have to
be escaped with a backslash).

Writing regular expressions can be quite hard. Fortunately, in most cases, we do not have
to formulate expressions ourselves, but can find an example that can then be modified
according to our needs. Once you have written your regular expression, it is also very
important to test it before using it. Websites like www.regexr.com are available for that.
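
The following scriptable task sketch shows how such a validation could look inside an
Orchestrator workflow. The input names accountName and email are assumptions made
for this example:

// Minimal validation sketch for an Orchestrator scriptable task.
// 'accountName' and 'email' are assumed string inputs of the workflow.
var namePattern = /^[\w_-]+$/;
var emailPattern = /^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$/;

if (!namePattern.test(accountName)) {
    throw "Validation failed: the user name may only contain letters, digits, underscore and dash";
}
if (!emailPattern.test(email.toLowerCase())) {
    throw "Validation failed: '" + email + "' is not a valid email address";
}
// Execution only continues past this point with valid input.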

15.6 Advanced Service Designer use cases

After having run through the basics of the ASD, it is now time to demonstrate some
more use cases in practice:

15.6.1. Deploy a machine from ASD

So far, we have learnt how to provision machines based on IaaS blueprints. This is of
course fine, however, we have seen that modifying the user interface for requesting
machines can be quite tedious:

• First of all, classic blueprints only offer a limited set of user controls.
• Configuration of constraints and validations is painstaking.
• User controls can work together (as we have shown), but once again the
configuration is quite difficult.
• Last but not least, there is no such thing as a dynamic user control. This means, for
example, that values in a dropdown list can only be defined statically. It would be
more useful if these values could be generated at runtime, for example by doing a
lookup against a database (see the sketch following this list).
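
The ASD Form Designer, by contrast, allows a dropdown to be bound to an Orchestrator
action, so that its values are computed at request time. The following is a minimal sketch
of such an action; the action name getCostCenters and the static values are purely
illustrative assumptions, and a real action would typically perform a lookup against a
database or CMDB instead:

// Hypothetical Orchestrator action 'getCostCenters' (return type: Array/string).
// A dropdown bound to this action is populated at request time instead of
// being defined statically in the blueprint.
var costCenters = new Array();
// In a real environment, these values would come from an external lookup.
costCenters.push("CC-1000 Research");
costCenters.push("CC-2000 Operations");
costCenters.push("CC-3000 Sales");
return costCenters;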

Figure 15-9 Request a catalog item workflow

In conclusion, we can state that there are plenty of reasons to use the ASD for deploying
machines. From a technical point of view, there is a workflow called Request a catalog
item, which is located in the Library > vCloud Automation Center > Infrastructure
Administration > Requests folder within Orchestrator.
Running the workflow shows us that there are two input parameters:

• The Catalog item parameter lets you choose any item from the catalog, i.e. an IaaS
blueprint in our scenario.
• The Input Value field accepts a list of composite generic key-value pairs (see Fig.
15-9).

However, there is also additional configuration work to be done, in both Orchestrator and
vRealize Automation, before we can begin with the implementation:

• Firstly, we have to ensure that the service account for the vRealize Automation plug-
in within Orchestrator is a member of the support group within the business group
related to the blueprint. This guarantees that we have all the permissions to run the
workflow appropriately.
• Secondly, the service account must be entitled to the basic blueprints (and XaaS
services if you want to call them instead of a blueprint).

The input parameters are fairly difficult to handle. This is due to the generic nature of the
composite key/value list and to the fact that all inputs are of type string. However, the
workflow helps in the following scenarios:

• Request a catalog item on behalf of a user.
• Request a resource action.
• Request a resource action on behalf of a user.

Before the Request a catalog item workflow can be called, it is important to know the list
of expected input parameters for a blueprint. Another important issue is the syntax of input
parameter naming within workflows: They must always start with the prefix “provider-“.
For example, if there is a parameter “username”, to follow the syntax correctly, the input
parameter must be named “provider-username”. For a typical blueprint, we need the
following input parameters:

• blueprintId (string)
• provisioningGroupId (string)
• Cafe.Shim.VirtualMachine.NumberOfInstances (integer)
• Cafe.Shim.VirtualMachine.TotalStorageSize (decimal)
• VirtualMachine.LeaseDays (integer)
• VirtualMachine.CPU.Count (integer)
• VirtualMachine.Memory.Size (integer)
• VirtualMachine.Disk0.Size (decimal)
• VirtualMachine.Disk0.IsClone (boolean)

The first step within the implementation should be to copy the ‘Request a catalog item’
workflow for safety reasons. Once the workflow has been copied, run through the
following steps:

1. Replace the “compositeTypeToProperties” action call with a custom script (see Fig.
15-10).
2. Hover over the custom script and click on the pencil to edit the custom script.
3. Define the following input variables for the script:
a. blueprintId (string)
b. provisioningGroupId (string)
c. numberOfInstances (vCO number)
d. totalStorageSize (vCO number)
e. leaseDays (vCO number)
f. cpuCount (vCO number)
g. memorySize (vCO number)
h. disk0Size (vCO number)
i. disk0IsClone (boolean)
4. Add the properties attribute as an output parameter to the custom script.
5. Type the following scripting code into the custom script:

// Build the key/value list expected by the 'Request a catalog item' workflow.
// Every blueprint parameter must be prefixed with "provider-".
properties = new Properties();
properties.put("provider-blueprintId", blueprintId);
properties.put("provider-provisioningGroupId", provisioningGroupId);
properties.put("provider-Cafe.Shim.VirtualMachine.NumberOfInstances",
new vCACCAFEIntegerLiteral(numberOfInstances).getValue());
properties.put("provider-Cafe.Shim.VirtualMachine.TotalStorageSize",
new vCACCAFEIntegerLiteral(totalStorageSize).getValue());
properties.put("provider-VirtualMachine.LeaseDays",
new vCACCAFEIntegerLiteral(leaseDays).getValue());
properties.put("provider-VirtualMachine.CPU.Count",
new vCACCAFEIntegerLiteral(cpuCount).getValue());
properties.put("provider-VirtualMachine.Memory.Size",
new vCACCAFEIntegerLiteral(memorySize).getValue());
properties.put("provider-VirtualMachine.Disk0.Size",
new vCACCAFEIntegerLiteral(disk0Size).getValue());
// Booleans are passed as the strings "true" or "false".
properties.put("provider-VirtualMachine.Disk0.IsClone", disk0IsClone ? "true" : "false");
6. Validate and save the workflow.

The next step is to create an appropriate service blueprint within the Advanced Services,
and to call your workflow from it. Please note that we are not working with custom
resources in this scenario, as custom resources do not have leases or costs associated with
them. Nevertheless, the deployed resource will appear within the provisioned items in the
self-service catalog, as it was technically deployed by means of an IaaS blueprint.
As a summary, we can state that the main deficiencies of vRealize Automation when
deploying machines are the limited request form designer and the complicated setup of
dynamic form fields. By using the ASD as a frontend for the IaaS blueprint, we can
circumvent both problems.

Figure 15-10 Request a catalog item workflow customization

15.7 Summary

This chapter introduced the ‘Advanced Service Designer’ and XaaS blueprints. After
having given examples of where the ASD can be used, we showed how to configure the
ASD. This involved configuring several endpoints, amongst others those for vSphere and
Active Directory. Once the configuration had been completed, we outlined the
basics of custom resources, service blueprints, custom actions and resource mappings.
One of the strengths of the ASD is its ability to build dynamic request forms. When
creating such forms, input validation is very important. This could be done within the
ASD or - alternatively - Orchestrator as well.
With that knowledge in mind, we implemented several use cases. First, we showed how to
provision and withdraw an Active Directory resource.
The second use case demonstrated how to call an IaaS blueprint from ASD (a good
alternative to the standard approach, when dynamic input fields are needed).
Finally, we introduced storage automation and its challenges. We also looked at the
technical means, with which it is possible to achieve such automation.
16. Financial Management

As well as having the ability to automate your infrastructure (and empower users to
provision their own services), it is also essential to keep track of ongoing costs. This
includes discovering the costs incurred in a datacenter, how they are allocated, and how to
map them to services. This allows for a better charging mechanism.
vRealize Automation includes a financial tool – vRealize Business Standard – that helps
with the aforementioned problems.
This chapter will introduce common challenges and problems for financial tools in
general. We will then shift the focus to vRealize Business. In doing so, we will
demonstrate how to deploy, configure and use, the vRealize Business tool.

16.1 Overview on financial management

Historically, IT has always tried to provide compute resources to its consumers. Usually,
these resources, and the way they were distributed, fell under the umbrella of the IT
department. This helped IT departments, as they could easily stay on
top of resource use and availability. In a traditional datacenter, the IT department controls
how available resources are assigned.
However, things are changing when a self-service portal, like vRealize Automation, is
introduced. Now users have control and the responsibility to request resources according
to their respective budgets.
The main challenge is that most companies do not have a chargeback system for resource
consumption. Financial tools, like vRealize Business, can help to enforce policies for
resource consumption automatically and provide cost transparency to the consumers. By
taking this route, companies move to a provider business model. The resources and
services provided can be a combination of both private and public cloud resources.
As a consequence, it can be stated that the ability to measure and charge for specific
services consumed is an important selling point of cloud computing. So far, we have only
dealt with the automation of resource provisioning. The automation of chargeback, or the
ability to show actual costs, has not yet been covered in this book.

A financial tool essentially has two main functions:


• Provide service cost (either as showback or chargeback)
• Measure the service quality during operations

16.2 Basic features of financial management tools

No matter what kind of financial management tool you use, there is a basic set of
functionalities and related requirements, which must be met.

16.2.1. Understanding your costs

Firstly, it is crucial to have a good handle on the true costs of your IT services. There is a
lot more to it than hardware costs alone – instead, all costs must be captured in order to
calculate the total cost of ownership. These costs cover software, administration,
maintenance, space, power and cooling, amongst others. Only with a good
understanding of your costs is it possible to accurately charge for a given service.
Calculating the costs by yourself is a very tedious task, as there are many issues to be
taken into account. Fortunately, financial tools can assist here by providing a reference
database, providing specific values for different cost types. These tools usually come with
an underlying cost model (e.g. TCO or ROI).

16.2.2. Establishing prices for services

Once the total cost of ownership is known, costs can be mapped to services. Prices can be
fixed (for example, you can set the price of a small VM at $5 or the price of a big VM at
$15) or be calculated based on resource consumption. In some cases, the latter option is
quite easy to implement, for example when charging for storage consumption; in other
cases it might be more difficult. Financial tools usually provide different models for the
allocation of costs to services. Examples include: equal split/relationship-based,
percentage-based, property-based or weighted.
Tools like vRealize Business also help automate this process. They know the total cost of
ownership and can ascribe the costs to machines based on resource consumption.

16.2.3. Comparing costs

A financial tool should not only be able to calculate prices. With regard to services in your
datacenter, it should also assist end-users with comparing prices to those of other cloud
providers (Amazon, Microsoft or VMware). Such competitive benchmarking helps
highlight how efficient you are with your datacenter. Furthermore, when your datacenter is
running out of resources (for example, at the end of the year, when there is big demand for
compute resources), you might want to compare prices. This might lead you to move
workload to the public cloud, instead of acquiring new hardware.

16.2.4. Showback costs

In a traditional IT environment, consumers of resources are less likely to keep track of the
ongoing costs of their applications and services. This is because IT services are usually
centrally governed by the IT departments. However, when transforming your datacenter
into a more cloud-centric model, consumers become the drivers behind hardware and
software resource requests. Therefore, it is important to achieve cost transparency, so that
users always know how much their services cost and how much they have already spent.
This also helps to influence the consumption of individual users.

16.2.5. Providing reports

While it is important to show how much a resource or service costs, it is also crucial to be
able to generate reports of total costs over time. This report data can be grouped by
service, business groups or any other criteria. Frequently, these reports serve as input data
for billing and as such must be exportable in different formats (such as PDF or CSV). As
well as having the ability to export data, financial tools quite often provide some kind of
dashboard, in which to visualize the data.
While it is fairly easy to understand the need for such a financial cost tracking tool, the
global market for these is still very small. Gartner estimated the market value at
approximately $250 million in 2013[11]. However, there is a yearly growth of around 20%.
VMware entered the financial tool market in 2011 and has since then become one of the
biggest vendors. The current product is called vRealize Business and is available in several
different editions. vRealize Automation is bundled with vRealize Business Standard
(however, there are also vRealize Business Advanced and vRealize Business Enterprise
editions; these two cannot be directly integrated with vRealize Automation and need
vRealize Business Standard as an intermediary).

16.3 Features and licensing of vRealize Business

As mentioned above, there are three different editions of vRealize Business: Standard,
Advanced and Enterprise. The following table highlights the features of each edition:
Feature                                            Standard   Advanced*   Enterprise*

Cloud Management Capabilities

Virtual Infrastructure Costing, Usage                  X          X           X
Metering, and Consumption Reporting

Virtual Machines Hierarchy Management                  X          X           X

Public Cloud Usage Costing and Management                         X           X

IT Financial Management

Integration with other VMware offerings                           X           X
and 3rd party offerings

Financial Management for Full IT Service                          X           X
Layers including Costing, Planning,
Budgeting, Showback and Chargeback

Service Level Management for Full IT                                          X
Service Layers

Delivery Model

On-Premises                                            X          X           X

SaaS                                                              X           X

License Model

Perpetual                                              X          X           X

Subscription                                                      X           X

* Advanced: vRealize Suite Advanced or vRealize Business Advanced
* Enterprise: vRealize Suite Enterprise or vRealize Business Enterprise

Table 16-1 vRealize Business features by edition

The Standard edition only provides showback, usage metering and consumption reporting
within your virtual infrastructure. The Advanced edition provides the full range, including
IT costing, IT demand management, IT forecasting and planning, IT showback, IT
chargeback, what-if scenarios and IT cost optimization. It also supports the display,
comparison and what-if analysis of costs using VMware IT benchmarking data (so that
you can model existing cost advantages and future cost savings). The Enterprise edition,
aimed at CIOs and IT executives, helps you gain transparency and complete control over
all IT costs, services and quality. In addition to the Business Management for Cloud and
IT Financial Management capabilities, its IT Service Level Management and Vendor
Management capabilities enable customers to set, track, report and analyze IT performance
and value measures for all their services, vendors and customers, as well as to perform
root cause and business impact analysis.
In the remainder of this chapter, we will cover the features of the Standard edition. The
Advanced and the Enterprise editions are not covered by this book.

16.3.1. Manual cost calculation

Before we dig into the details of vRealize Business, we first want to show you how to set
up manual pricing in vRealize Automation. When setting up manual pricing, we have to
deal with cost profiles. But how can cost profiles influence the price of storage and virtual
machines? Cost profiles are associated with compute resources. Compute resources are in
turn associated with the fabric and hence fabric groups. Then, fabric administrators can
create reservations based on fabric groups. These reservations are in turn used for the
provision of virtual machines.
There are two kinds of cost profiles:

• Cost profiles, which include compute cost and storage cost.


• Storage cost profiles, which include only storage cost and override any storage cost
used in cost profiles.

16.3.2. Creating a cost profile

When creating a cost profile, you have to enter the exact costs for the compute and storage
resources. Please consider that cost profiles are based on daily costs.
Perform the following steps to create a cost profile:

1. Log in to vRealize Automation with fabric administrator credentials.
2. Navigate to the Infrastructure > Compute Resources > Cost Profiles page.
3. Click on the New Cost Profile link.
4. Provide the following input:
a. Name, e.g. Gold Compute
b. Description
c. Memory cost (per GB)
d. Storage cost (per GB)
e. CPU Cost
5. Click the green Save button.

The procedure for creating a Storage Cost Profile is fairly similar, so we will not address it
here in detail.

16.3.3. Assigning a cost profile

The next step is to assign the cost profile to a compute resource:

1. Navigate to the Infrastructure > Compute Resources > Compute Resources page.
2. Hover over the cluster to which you want to assign the cost profile, and click Edit.
You will be taken to the default screen for Compute Resource details.
3. Review the Fabric group settings and go to the Configuration page.
4. From the Cost profile dropdown list, select your newly created cost profile (see Fig.
16-1).

If you also need to assign a Storage cost profile, perform the following steps:

1. Within the Storage Paths table, edit your Storage Path.
2. From the Storage Cost Profile dropdown list, select your newly created Storage
Cost Profile.
3. Click the green check icon to save your Storage Path.
4. Click OK to save your Compute Resource changes.

Once you have saved your changes and requested a new machine from the service catalog,
you will be able to see the daily costs for that machine.

16.3.4. Assigning additional costs

Cost profiles are helpful when charging consumers for storage, CPU and memory. The
costs are calculated, on a daily basis, depending on how many resources they need.
However, there are many other costs, of course, than those for storage, CPU and memory.
For example, we would like to charge for license costs, electricity, administrative support
and so on. When we are working with manual cost calculation, there is no way to
configure these costs in a fine-grained fashion. Instead, we have to associate a fixed
amount with a blueprint. This amount will be added to the costs defined in the cost
profiles and shown in the service catalog as well.

Figure 16-1 Cost profile selection


Perform the following steps to associate costs with a blueprint:

1. Navigate to the Infrastructure > Blueprints > Blueprints page.
2. Select the blueprint to be configured for costing.
3. On the Blueprint Information tab, enter the daily costs.
4. Click OK to save your changes.

As soon as you have saved your changes and requested a new machine from the service
catalog, you will be able to see the blueprint cost to be taken into account as well.
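
To illustrate how these figures combine, assume a cost profile with a CPU cost of $1.00
per CPU per day, a memory cost of $0.50 per GB per day and a storage cost of $0.10 per
GB per day, plus a daily blueprint cost of $2.00 (all values are purely illustrative). A
machine with 2 vCPUs, 4 GB RAM and 40 GB storage would then be shown in the
catalog at:

2 x $1.00 + 4 x $0.50 + 40 x $0.10 + $2.00 = $10.00 per day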

16.3.5. Changing the currency

You can change the currency for the costs as well:

1. Navigate to Infrastructure > Administration > Global Properties.
2. In the Global Properties table, within the Group:Installation section, edit the
Configured Currency Region Name row.
3. Change the currency for your country. If you want to see the list of values for the
country, go to the appropriate Microsoft MSDN article[12].

16.4 Architecture of vRealize Business Standard

As with many other products in the VMware portfolio, vRealize Business Standard can
also be deployed as a virtual appliance. The appliance is based on SUSE Linux Enterprise
Server (SLES), a vPostgres SQL database and a Pivotal tc Server web server.
Once deployed, you must connect the appliance to vRealize Automation. vRealize
Business uses connectors to obtain data from your environment. Right now, there are
connectors for vCenter Server, vRealize Automation, vCloud Director and Amazon Web
Services. As stated before, vRealize Business comes with a cost model and its data is
stored in a reference database. In order to update this reference database, internet
connectivity is needed. Last but not least, vRealize Business Standard can be connected to
vRealize Business Advanced and Enterprise as well. Fig. 16-2 shows the architecture of
vRealize Business.
Figure 16-2 vRealize Business architecture

16.5 Installation and configuration

16.5.1. Prerequisites

The hardware prerequisites for vRealize Business are as follows:

• 50 GB hard disk
• 4 GB RAM
• 2 vCPUs

The appliance itself is packaged as an .ova file.

In addition, the following ports are required:

• 443 to connect to the user interface
• 22 for SSH
• 5432 for the vPostgres SQL database (embedded by default)
• 5480 for the web console

16.5.2. Deployment and configuration of the vRealize Business Appliance

The following steps must be taken in order to deploy and configure the vRealize Business
appliance:

• Download and deploy the appliance.
• Configure the appliance and connect it to vRealize Automation.
• Create a tenant for vRealize Business (optional).
• Configure vRealize Business.

16.5.3. Downloading and deploying the appliance

The deployment process for the vRealize Business appliance is the same as for vRealize
Automation or the SSO appliance. We have already described this process in chapter 4, so
we will skip the explanation here.
When deploying the appliance, you must determine the currency to be used within
vRealize Business. You can choose from the following currencies:

• US Dollar (USD)
• Euro (EUR)
• British Pound (GBP)
• Australian Dollar (AUD)
• Canadian Dollar (CAD)
• Singapore Dollar (SGD)
• Japanese Yen (JPY)
• Indian Rupee (INR)
• Israel Shekel (ILS)

16.5.4. Configuring the appliance and connecting it to vRealize Automation

Once the appliance has been deployed, you have to connect to it (URL: https://<vrealize-
business.domain.name>:5480).
In a second step, change to the vRealize Automation tab and provide the following
information:

• Host name of your vRealize Automation appliance
• SSO Default Tenant
• SSO Admin User
• SSO Admin Password

Next, select the Accept vRealize Automation certificate checkbox and click Register.
It will take a short while to complete the process.
Next, go to the System tab, click Time Zone and change the settings if needed. Click
Save Settings. At this point in time, the configuration of the appliance is complete and we
can log out from the web management console.

16.5.5. Creating a tenant for vRealize Business

It is recommended that you create a dedicated tenant for vRealize Business. However, it is
possible, while not recommended, to use an existing tenant. The steps needed to create a
new tenant have already been covered in chapter 5 ‘Design and Configuration of vRealize
Automation’. Therefore, they will not be addressed here in detail. Instead we will only
walk through the basic steps:

• Log in, as a system administrator, into the vRealize Automation default tenant
(https://<vrealize-automation-appliance.domain.name>/vcac).
• Create a new tenant.
• Add an identity store to the tenant.
• Assign tenant administrator and infrastructure privileges.

Once you have completed these steps, you can log off from the default tenant and log in
into your newly created tenant.

16.5.6. Configuring vRealize Business

Configuring vRealize Business is relatively easy. The following steps must be followed:

• Assign the appropriate membership roles.
• Enter the serial number.
• Configure vRealize Business connectors.

To assign the appropriate membership roles, follow the instructions as shown:

1. Navigate to the Administration > Users & Groups > Identity Store Users &
Groups page.
2. In the Search text box, enter the name of a user or group to whom you want to
assign vRealize Business permissions.
3. Click the magnifying glass icon.
4. Select a user from the list of items.
5. Assign either the Business Management Administrator or the Business Manager
Read only User role.
6. Click Update.

vRealize Business also requires a serial number. This can be added as follows:

1. Click the Business Management tab.
2. Enter your vRealize Business serial number.
3. Click Save.

The final step is to configure the connectors. The most important of these is the vCenter
Server connector, but you can also set up an Amazon AWS or vCloud Director one.
Perform the
following steps:

1. Navigate to the Administration > Business Management page.
2. Click on Manage Connections.
3. Click Manage vCenter Server Connections.
4. Click the plus sign (+).
5. Provide the following input for the vCenter Server connection (see Fig. 16-3):
a. Name
b. vCenter Server
c. User name
d. Password
6. Click Install.
7. Click Save.
8. Click OK.

At this point, the configuration is completed and we can begin to work with vRealize
Business.

16.6 Using vRealize Business

Once the configuration of the vRealize Business appliance is complete, there is no longer
a need for manual pricing. If a compute resource is under the control of vRealize Business,
all cost pricing can be overwritten by vRealize Business. If you want to define prices in
vRealize Business, perform the following steps:

1. Navigate to the Business Management > Consumption Analysis >
Consumption Overview page and click on Edit > Edit Pricing.
2. Review the settings and change them if needed (see Fig. 16-4).

Figure 16-3 Edit vCenter Server Connection

Once you have changed the settings, perform the following steps:

1. Navigate to the Infrastructure > Compute Resources > Compute Resources page.
2. Go to your compute resource and remove the current cost profile.
3. Click on Update Costs.
4. On the Confirm Update Costs dialog, click OK. Now all prices will be managed
by vRealize Business. You will not be able to attach your own cost profile anymore.

vRealize Business Standard is fully integrated into the vRealize Automation self-service
portal and adds new menus and pages to the web interface:

• The Overview page shows a dashboard with financial data. This includes the total
cloud cost, an operational analysis and a demand analysis.
• The Cloud Cost page helps you to understand your costs, where they come from,
and gives insights into what kinds of costs you have within your datacenter.
• You can establish prices of your IT services on the Operational Analysis page.
• You can view and sort the list of consumers on the basis of different criteria on the
Demand Analysis page.
• The Cloud Comparison and the Public Cloud pages help you to compare your
costs to public cloud alternatives.
• The Reports page allows you to generate reports.

In the following, we will describe these pages in detail:

Figure 16-4 vRealize Business pricing options

16.6.1. Cloud overview

As stated above, the ‘Overview dashboard’ gives a brief summary of the total cloud cost,
the operational analysis and the demand analysis. Fig. 16-5 depicts this dashboard. The
total cloud cost can be roughly compared to the Total Cost of Ownership (TCO). The TCO
is a financial estimate, intended to help buyers and owners determine direct and indirect
costs of a product or system. The ‘total cloud cost’ comprises several cost drivers, the
allocation of which is displayed within a pie chart. Costs are further split into CapEx and
OpEx. Capital expenditure (CapEx) refers to the funds used by a company when acquiring
or upgrading physical assets (for example, property, industrial buildings or equipment).
Operating expenditure (OpEx) is a category of expenditure a business incurs as a result of
performing its normal business operations.
Figure 16-5 Cloud overview

16.6.2. Cloud Costs

Very few companies have good insights into the true costs of their IT services. Normally,
they know how much money they spent on hardware, but do not have much
information about the other costs of their infrastructure.
Internally, vRealize Business uses a reference database to be able to do some
initial cost calculation. The database includes industry benchmark values and covers many
different cost drivers, including hardware and software. vRealize Business calculates cost
drivers across a range of categories. These are depicted in the following table.

Cost Driver       Description

Server Hardware   Displays server costs by CPU age. The costs are calculated by using
                  declining balance depreciation over the last three years.

Storage           Shows costs based on the storage type or profile.

OS Licensing      Displays license costs for operating systems.

Maintenance       Shows operating costs for both operating systems and hardware.

Labor             Displays labor costs for your infrastructure.

Network           Depicts network costs based on the NIC type.

Facilities        Shows costs based on rent, real estate, cooling and power.

Other costs       Summarizes other costs like high availability, management or VMware
                  licensing costs.

Table 16-2 Cost drivers

To calculate the exact costs, vRealize Business uses the double declining balance method
and allocates the costs over a five-year period in order to determine yearly depreciation
values. If there are two values, the higher one is used. Monthly costs for server hardware
are calculated by taking the yearly depreciation value and dividing it by 12.
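
As a simple illustration of this method (with purely hypothetical numbers): a server bought
for $10,000 and depreciated at a double declining rate of 2/5 = 40% per year is written
down by $10,000 x 0.4 = $4,000 in the first year, giving monthly server hardware costs of
$4,000 / 12 ≈ $333 for that year. In the second year, the 40% is applied to the remaining
book value of $6,000, and so on.
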
Fig. 16-6 shows how to view and modify cost drivers. Perform the following steps to
modify a cost driver:
1. Within the Cloud Cost menu, select Edit Cost.
2. Select the cost driver to be modified and click on the calculator icon.
3. Enter the new amount for the cost driver.

You can also see how costs are defined:

• If costs are based on the industry benchmark, a vertical band of orange color is
displayed next to the Edit icon.
• If costs are calculated according to a hardware configuration, the vertical band is
green.
• If you have entered costs manually, the vertical band is shown in blue color.

16.6.3. Operational analysis

Once we know the total costs of the infrastructure and the cost drivers they are comprised
of, we can translate these costs into prices that should be charged to the consumers of our
IT services. As discussed earlier, vRealize Automation supports a number of pricing
models that enable showback/chargeback of the services it delivers. Moreover, prices can
be associated with service blueprints and with different infrastructure resources (CPU,
memory, storage). This way, you can differentiate two services that have the same
resource configuration, but may have different software components. Thus, the price of
one service may be higher than that of the other.

Figure 16-6 Edit costs

Essentially, the Operational analysis allows the following:

• Optimize the workload placements on the appropriate generation of hardware. For
example, you may discover that the VMs on an older server might cost more than on
a newer one.
• Find out how costs for CPU, memory and storage are developing over time.

In order to calculate prices, different factors are taken into account. Base rates are the
most important concept for calculating costs. There are two different base rates:

• Price per gigahertz for CPU
• Price per gigabyte for memory

The base rates are calculated on the basis of monthly operating costs, as determined by the
cost drivers. These base rates can in turn be associated with a virtual machine and thus
used to calculate the actual price. However, there are also direct costs, such as operating
system licenses or labor costs, which are associated with a virtual machine.
Internally, when vRealize Business calculates the base rate, the following factors are taken
into account:

• Cost
• Capacity
• Expected utilization
• Unallocated costs

Figure 16-7 Operational analysis

When calculating the expected utilization, data from the last three months is used.
Alternatively, you can configure a global value that is applied to all server clusters, or
define a percentage for each cluster separately. You can see how the base rate is calculated
by clicking on ‘Operational Analysis’ and then on ‘Edit Utilization’ (see Fig. 16-7).
If you want to change the value for a certain cluster, you need to proceed as follows:

1. Within the Set expected utilization of Host CPU and memory area, click on Set
per server cluster.
2. Enter the percentage amount for the CPU and memory.

16.6.4. Automatic base rate calculation

vRealize Business uses the following steps to calculate the base rate:

1. As a first step, the total costs will be calculated based on the cost drivers. After this,
the total costs will be split into CPU and memory (usually 80% for CPU and 20% for
memory; the exact values depend on the actual hardware).
2. The next step is to calculate the CPU base rate. The CPU costs (calculated in step 1)
will be divided by the total available GHz within the cluster. Finally, the result will
be multiplied by the expected utilization within the cluster.
3. The memory base rate will be calculated the same way.

Once the base rates are calculated, vRealize Business can determine the unallocated costs,
too.
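
A quick illustrative calculation with assumed numbers may help: if a cluster's monthly
costs amount to $10,000, the 80/20 split assigns $8,000 to CPU and $2,000 to memory.
With 100 GHz of total CPU capacity and an expected utilization of 60%, following the
steps above yields a CPU base rate of $8,000 / 100 GHz x 0.6 = $48 per GHz per month;
the remainder appears as unallocated costs.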

Figure 16-8 Consumer analysis

16.6.5. Demand analysis

If you want to find out who consumes your resources, what costs are incurred and how
resources are used, you can navigate to the Demand Analysis page. You can also display
and sort by the following criteria:

• Consumer
• Total costs
• Amount of virtual machines
• CPU costs
• Memory costs
• Storage costs

Furthermore, you can group the list using the different grouping hierarchies that are
available:

• vCenter Server hierarchy (standard)
• vRealize Automation hierarchy (Tenant > Business Group > Blueprint)
• Custom mapping by uploading a CSV-file

Fig. 16-8 depicts how to use the Demand Analysis page.


16.6.6. Cloud Comparison

After having established prices for your IT services, you can compare them with those of
similarly configured providers. These could be from cloud service providers such as
Amazon, Microsoft or VMware.
Such competitive benchmarking helps us to understand how efficient we are in
comparison to other providers. One of the difficulties of comparing prices between
different public cloud providers is that their offerings also differ from each other in terms
of hardware, availability or SLAs. vRealize Business takes that burden off your
shoulders by constantly updating the price information from the reference database and
providing a calculation model (so that a price comparison is possible). This relieves
companies from creating complex spreadsheets and keeping them updated with the most
recent information from the cloud providers.

Figure 16-9 Cloud comparison

The costs of the cloud service providers can be seen when clicking on the Cloud
Comparison menu within vRealize Business. In addition, the Cloud Comparison screen
helps you to perform the following tasks:

• Perform a What-if Analysis regarding the costs of moving an application from the
private to the public cloud.
• Model any new workload placements based on the projected utilization and costs in
the private and public cloud.

Fig. 16-9 shows the Cloud Comparison. If you hover over the currency value in one of the
widgets, you will see a popup window listing the cost drivers that are used for the
cost calculation.
If you want to compare the costs of your virtual machines, with those in the public cloud,
you can do the following:

1. Click on the Compare existing VMs to Cloud link in the upper left corner.
2. Click on Import VMs.
3. Choose the tenant from which you want to import the machine.
4. Select the virtual machine to compare.
5. Click Done.

16.6.7. Reports

The Reports page helps generate showback reports and hence provides valuable insight
into your business. Showback data is quite important, as it can show each organizational
unit a summary of the resources that have been consumed, who has consumed them and
for how long.
Another important issue within cloud environments is charging. Most companies already
have a financial system in place. However, such a system needs to be fed with data.
vRealize Business can generate reports in different formats (PDF or CSV) that can be
used as input for these systems.
When navigating to the reports page, you will notice that there are reports available for
the different systems (see Fig. 16-10):

• vCenter Server
• vCloud Director
• vRealize Automation
• Public cloud providers

You can add cost aspects to the reports as well:

1. On the left-hand side of the screen, select the cost aspects to display.
2. Scroll to the right side of the report screen to view the cost aspects.

When navigating to an individual VM’s screen, you can also visualize the OpEx costs,
such as labor and maintenance of that machine. If you want to export this data, click the
export button on the upper left-hand side of the screen.

Figure 16-10 Reports

16.7 Summary

This chapter introduced financial management in a cloud environment. Firstly, we
explained the challenges that occur in such environments. Then, we showed how to set up
manual cost calculations in vRealize Automation, including how to assign prices for
CPU, memory, storage and blueprints. However, the main focus of this chapter was not
manual pricing, but automated pricing, based on vRealize Business.
Consequently, we showed how to deploy and integrate vRealize Business into an existing
vRealize Automation infrastructure. vRealize Business seamlessly integrates into the
vRealize Automation user interface and provides different views: An overview dashboard,
a cloud cost page (that helps to understand the cost within the cloud environment), an
analysis page for establishing prices, a cloud comparison page and a menu for reports.
17. DevOps and vRealize Automation

Recently, DevOps techniques have become more and more popular. The most important
parts of DevOps are automation and configuration management – both techniques that are
also part of vRealize Automation. Hence, many companies use VMware products, as part
of the underlying technology, when introducing DevOps in their enterprises.
Consequently, this book also covers DevOps techniques. We will first give a short
overview of DevOps and then demonstrate how vRealize Automation could be used for
DevOps.
17.1 Foundations of DevOps

Essentially, DevOps is a software development method, which stresses communication,
collaboration (information sharing and web service usage), integration, automation, and
the measurement of cooperation (between software developers and other information-
technology professionals). DevOps techniques are used to shorten the time and reduce the
number of processes, required to take a software project from the initial concept phase to
the final product.
DevOps techniques were made famous by Web-scale IT companies, such as Netflix, Etsy
and Amazon Web Services. Gene Kim, the author of the book The Phoenix Project and
one of the most important contributors in the DevOps scene, states that there are three
ways to DevOps:

• The First Way is to optimize the flow of work from development to IT operations.
• The Second Way is about shortening and amplifying the feedback loops.
• The Third Way says that we should encourage experimentation and learn rapidly
from failures.

These principles are similar to the ones known as CAMS (culture, automation,
measurement, and sharing), as discussed by John Willis and other DevOps leaders:

• Culture: When there is a failure, there should be no prompt blame assignment.
Instead, the way team members think about deployment practices and how they
respond to failures should be changed.
• As manual contributions and software deployments tend to be error-prone,
automation is the way to go. This also reduces the time needed for deployments.
• Monitoring and measurement are essential to success.
• People should share their knowledge with each other.
17.2 DevOps Tools

There are many tools to help introduce DevOps within an enterprise. These tools can be
categorized as follows:

• Ticket systems
• Server deployments
• Configuration management
• Continuous integration
• Continuous delivery
• Continuous deployment
• Log analysis

In the following, we will describe these techniques in more detail.

17.2.1. Ticket systems

Usually, project managers introduce some kind of ticket system for task management.
Such a system helps to keep track of problems and also allows some kind of historical
analysis. Furthermore, such a system should help identify the root cause of bottlenecks in
a production environment. It can also help give insight into data regarding all members of
the software project team.

17.2.2. Server deployments

As stated, software deployments should not be handled manually. At this point, vRealize
Automation enters the stage as a new player. However, there are other software programs
which could be used for automated server deployments. For example, VMware Auto
Deploy for vSphere servers, Puppet Enterprise, Chef or Foreman. Recently, Docker has
also become quite popular. Essentially, Docker is an open-source project which helps to
automate the deployment of applications within software containers. We will cover
Docker within this section as well.

17.2.3. Configuration management

Once a machine has been deployed, it still has to be configured before being production-
ready. In this book, we’ve already covered integration of Puppet into vRealize
Automation. Puppet can be used to take control, once vRealize Automation has completed
its deployment process. Other tools to consider are Chef, PowerShell Desired State
Configuration (DSC), or an orchestration framework like Ansible.

17.2.4. Continuous integration

Continuous integration is the practice of merging all developers’ progress into a shared
mainline, several times a day. One of the most famous software tools, for continuous
integration, is Jenkins. Jenkins is a tool used to build and test software projects
continuously, as well as to monitor their execution. By introducing such automation, bugs
are found and dealt with more easily.

17.2.5. Continuous delivery

‘Continuous delivery’ is a software engineering approach in which teams keep producing
valuable software in short cycles. It also ensures that software can be reliably released at
any time. It is used in software development to automate and improve the process of
software delivery. Its benefits are: accelerated time to market, the knowledge that the right
product is being built, higher productivity and efficiency, reliable releases, a higher quality
product and improved customer satisfaction.

17.2.6. Continuous deployment

Continuous deployment is the next step of continuous delivery: Every change that passes
the automated tests is then deployed to production automatically.

17.2.7. Log analysis

When deploying machines and software, you have to guarantee that your implemented
processes are working correctly. The best way to do this would be to check your logs.
However, manual log checking is a tedious and error-prone task. That being said, as is
the case when deploying machines, automation can help. There are already products
available for this. VMware offers Log Insight, there is also Splunk and then there are the
Open Source tools such as Logstash (combined with Elasticsearch and Kibana).

17.3 vRealize Automation and DevOps

Now that we have covered the basics of DevOps and explained the tools used to establish it
within an enterprise, we want to shift our focus once more to vRealize Automation.
One of the greatest strengths of vRealize Automation is the user-friendly service catalog.
A large part of this book has covered the building and maintenance of the service catalog.
We have also talked about the extension of vRealize Automation. This can happen by
means of vRealize Orchestrator, via IP address management tools like Infoblox and also
with configuration management tools such as Puppet, firewalls, load-balancers or multi-
tier applications. In the following, we want to describe how such features can be
implemented in a vRealize Automation environment.

17.3.1. Deploying and automating multi-tier applications with vRealize Application Services

Application Services have so far not been covered by this book. However, they are
bundled with the Enterprise edition of vRealize Automation.
When deploying services, we have mostly addressed infrastructure services based on IaaS
blueprints. In most cases, however, developers require more than just infrastructure
services: they need ready-to-use multi-tier applications. Such a complex scenario could
consist of the following components:

• A load balancer
• One or more web servers, placed behind a load balancer
• An application server
• A database server

Such scenarios are easily deployed via vRealize Application Services. Application
Services provide a graphical user interface to allow the easy creation of multi-tier
application stacks. This is done using an intuitive drag-and-drop palette to instantiate
components and their relationships to each other. Application Services are integrated into
vRealize Automation and allow the deployment of applications to a range of cloud
providers, such as vRealize Automation, vCloud Director or Amazon AWS. In
Application Services, such application stacks are called ‘Application Blueprints’. An
Application Blueprint consists of the following parts:

• Cloud templates are virtual machines defined by the cloud provider – for example
an IaaS blueprint from vRealize Automation.
• Logical templates are the mapping of Cloud templates into Application Services.
For instance, a vRealize Automation IaaS blueprint or an Amazon EC2 AMI could
be mapped to a logical template.
• Services are ready-to-use software that can be added to logical templates to create
an application. For example, you could add an Apache Web Server to a CentOS
logical template.
• Tasks can help to run simple scripts and perform configuration changes and
installations.
• Application components represent a software artifact that could be developed by
the DevOps team and should be part of the deployment.
• Policies consist of user-defined sets of definitions and govern the lifecycle
operations of applications. For example, you can create a blacklist of applications
that are not allowed to be installed.
• Deployment profiles configure application deployments at runtime, for example
they decide how much memory can be used in a certain environment (e.g. test or
production).

Setting up Application Services usually involves the following steps:

1. First of all, the cloud machines must be installed or configured. As aforementioned,
this could be an IaaS blueprint (from vRealize Automation) or an EC2 instance
(from Amazon).
2. Next, cloud templates must be mapped to logical templates. Services that can be
installed later should also be provided at this point.
3. Application blueprints can be based on logical templates, services and application
components.
4. A Deployment profile has to be paired with the application blueprint.
5. The deployment can be started.

When using Application Services, the following operations are also available at runtime:
• An application can be scaled out. For example, additional web servers can be placed
behind a load balancer.
• Applications can also be scaled in to release resources.
• An update of an application can be performed.
• An update of an application can be rolled back.
• The whole application can be torn down.

17.3.2. Puppet integration

Puppet is a configuration management system which allows you to define the desired
state of your IT infrastructure and then automatically enforce that configuration. Whether
you’re managing just a few servers or thousands of physical and virtual machines, Puppet
automates tasks that system administrators would otherwise do manually.
Once you have installed Puppet, each node (physical server, device or virtual machine)
within the infrastructure can be equipped with a Puppet agent. There should also be a
designated Puppet master running in the environment. At regular intervals, enforcement
takes place. This enforcement is made up of the following steps:

• The Puppet agent collects information about the node’s configuration and sends it to
the Puppet master.
• The Puppet master figures out how the node should look and sends the
information back to the node.
• The agent makes any change needed to enforce the node’s desired state.
• Once the changes have been applied, the Puppet agent sends a report back to the
Puppet master.
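If you want to trigger such an enforcement run manually on a node instead of waiting for the regular interval, the Puppet agent can be invoked from the shell (a minimal sketch using standard Puppet agent options):

# Preview the changes the master would enforce, without applying them
puppet agent --test --noop

# Perform a single on-demand enforcement run
puppet agent --test

The --noop flag is particularly useful in production, as it shows the configuration drift before any change is actually made.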

In 2013, Puppet Labs and VMware formed a strategic partnership, after having already
worked together for over a year. VMware invested over $30 million in Puppet Labs in
order to jointly deliver, market and sell products for their customers (however, Puppet also
supports other cloud platforms, amongst them Amazon AWS, Cisco, OpenStack, Microsoft
Azure, Eucalyptus, Rightscale and Zenoss). Puppet products have several ways of
integrating into VMware products:

• vSphere environments: Puppet Enterprise offers support for VMware virtual
machine instances (using vSphere and vCenter). There are commands for creating
new virtual machines, viewing the information of existing machines, and configuring
and tearing down machines. Additionally, you can leverage Puppet modules to
manage VMware tools like vCenter or vCenter Log Insight.
• vRealize Automation: We have already seen that Puppet (Enterprise as well as the
Open Source version) can also be integrated into vRealize Automation and also
leverage the IaaS capabilities of vRealize Automation. Once a machine has been
deployed, Puppet can take control of the further configuration of the machine. In the
Orchestrator section, we have already shown how to install and configure the Puppet
Orchestrator plug-in. We also discussed how to implement a workflow based on the
Puppet plug-in.
• vRealize Automation Application Services: Puppet Enterprise can be registered as
a solution within Application Services. Once this has been done, Puppet modules
can be used as native Application Services objects (via the drag-and-drop palette).
Integration of Puppet into Application Services means that there is a tighter
dependency between the developer tool ‘Puppet’ and the vRealize Automation
infrastructure software. This tighter coupling is one of the core principles of
DevOps.

17.4 vRealize Code Stream

One of the most recent VMware products is vRealize Code Stream. If you take a look at
the workflows packaged within a default Orchestrator instance (of vRealize Automation),
you will see a couple of workflows which obviously do not interact with vRealize
Automation. These belong to vRealize Code Stream. vRealize Code Stream is technically
bundled with vRealize Automation. However, you need to purchase it separately and then
unlock its functionality within the graphical user interface by using a serial number.
vRealize Code Stream helps teams that practice continuous delivery to become more
productive. Basically, the product helps developers with the following features:

• It automates the different tasks required to provision, deploy, test, monitor and
decommission software for a specific release.
• It helps to assure standardized configurations by coordinating the artifacts and
processes across each release delivery stage.
• Governance is provided as well. This includes control across the end-to-end process
in the delivery pipeline.
• Existing tools can be integrated and used.
It is important to realize that vRealize Code Stream does not replace existing software
development lifecycle tools and processes; rather, it tries to leverage and work with the
existing tools. This is done by means of an orchestration engine, which is of course
vRealize Orchestrator.
The architecture of vRealize Code Stream includes an integration framework that allows
it to interact with the following software tools:

• Source code management
• Artifact repository
• Build/Continuous integration
• Infrastructure Provisioning
• Software deployment
• Test frameworks
• Manual tasks

A pipeline usually consists of the following stages:

• First, a developer checks some code into a repository.
• A pipeline execution is triggered, which runs through three stages: test, staging and
production.
• Once the pipeline has finished, the outcome should be analyzed and, in case of an
error, the error should be corrected.

Internally, the integration framework uses an embedded version of JFrog’s Artifactory
repository manager. Code Stream’s pipeline automation is based on vRealize Automation;
Orchestrator workflows can be invoked from various tasks within the Code Stream
pipelines. By using Orchestrator, customers can integrate almost any tool into the release
pipeline.

17.5 Docker

As mentioned previously, Docker helps automate the deployment of applications within
software containers. This is done by providing an additional layer of abstraction and
automating operating-system-level virtualization within Linux. The advantage of Docker
is its performance: previously, administrators had to instantiate new operating system
instances when trying to scale out applications, whereas Docker avoids the overhead of
starting new virtual machines. Instead, resource isolation features of the Linux kernel are
used to provide independent containers which run within a single Linux instance.
Internally, Docker isolates an application’s view of the operating system. This includes
the CPU and its process trees, memory, IO, network, user IDs and mounted file systems.
Docker has become quite popular recently, in both cloud and DevOps environments.
Hence, there is a lot of support and integration with various infrastructure tools, including
Puppet, Chef, Ansible, Amazon Web Services, Google Cloud Platform, IBM Bluemix,
OpenStack, Microsoft Azure, Pivotal Cloud Foundry and, of course, VMware.
Using Docker brings many advantages:

• Firstly, due to its layered approach to dependency management, the configuration of
environments becomes easier to maintain.
• It provides a lightweight runtime environment, which allows multiple Docker
containers to run on a single machine.
• Docker can be controlled and configured by other DevOps tools like Puppet, Chef or
Ansible, hence making it easier to administer.
• Various runtime options help to customize images, so the same image can be reused
for different applications.
• Network and storage are decoupled from the application. Consequently,
administrators can easily run an image in different environments.
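The following commands sketch some of these points (the image, container names and port mappings are arbitrary examples): the same image is started twice with different runtime options, and both containers share a single Linux instance.

# Download the image once
docker pull nginx

# Run two containers from the same image with different runtime options
docker run -d --name web-prod -p 80:80 nginx
docker run -d --name web-test -p 8080:80 nginx

# List the running containers
docker ps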

17.6 Project Photon

VMware supports running containers on vSphere through two projects: Project Photon
and Project Lightwave. Both work together to run Linux containers and provide
additional features for DevOps application architectures. Let’s introduce the two projects:

• Project Photon is a Linux container host runtime environment for vSphere. Besides
Docker as a container format, it supports Rocket (rkt) and Garden as well. Project
Photon has a small footprint and includes a yum-compatible, package-based lifecycle
management system. It runs in environments such as VMware Fusion, VMware
Workstation, vSphere, vCloud Air and Google Compute Engine.
• Project Lightwave adds additional enterprise features and can be used in
combination with Project Photon. Firstly, it provides multi-tenancy across
infrastructure and application stacks and across all stages of an application
development lifecycle. Secondly, it supports additional security features as well as
an authentication and authorization mechanism.
Project Photon can be integrated into vRealize Automation. In order to achieve this, there
are a couple of steps that must be completed first:

• Firstly, the latest Photon ISO must be downloaded. The ISO is available on
Github[13], along with detailed instructions[14].
• As there is no support for guest customization in the ISO, a shell script must be
placed within the OS. It is then triggered either by the Guest Agent or by vRealize
Orchestrator.
• Make sure that the Photon virtual machine can access the public Docker repository.
If there is no connection to the internet, you need to set up your own private Docker
registry (a quick check is shown below).
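To verify this connectivity from inside the Photon VM, you can attempt to pull a small public image (a sketch; any public image will do):

systemctl start docker
docker pull busybox

If the pull fails, the VM either has no internet connection or must be pointed at your private registry.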

We will not demonstrate the integration of Project Photon within the vSphere
environment, as this is already covered by the instructions available on Github. We will
therefore only demonstrate how to integrate the Photon VM into vRealize Automation.
No further customization of the Photon VM is necessary: the template already comes
with Docker installed and is preconfigured for the Docker registry.
Firstly, we need a mechanism which assigns an IP address to a newly created virtual
machine. As guest customizations are not supported, we have to do this manually. This
could be done via the Guest Agent or vRealize Orchestrator. As we have already shown
both approaches, we will only sketch out the necessary steps to accomplish the Photon
integration by means of vRealize Orchestrator.
As a prerequisite, we assume that the virtual machine is deployed within a network
where a DHCP server is available. If this is not the case, you could use a network profile
to assign the network settings. However, if we want to quickly provide a hostname via
vRealize Automation, we must place a script in the virtual machine that vRealize
Automation can trigger.

So boot into the machine and place the following script within it:

#!/bin/bash
# The new hostname is passed as the first argument ($1)
hostn=$(hostname)

# Display the existing hostname
echo "Existing hostname is $hostn"
echo "New hostname is $1"

# Change the hostname in /etc/hosts and /etc/hostname
sed -i "s/$hostn/$1/g" /etc/hosts
sed -i "s/$hostn/$1/g" /etc/hostname
/etc/init.d/hostname.sh start

# Display the new hostname
echo "Your new hostname is $1"

Save the script as customizeos.sh and make it executable with the following command:

chmod a+x customizeos.sh

Next, shutdown the machine from your vSphere Client and create a snapshot of the
machine called ‘base’.

Now log in into vRealize Automation and perform the following steps:

1. Navigate to Infrastructure > Compute Resources > Compute Resources and
hover over the cluster where your Photon template resides.
2. Click on Data Collection.
3. In the Inventory widget, click on the Request now button to start a new inventory
scan.

After the template has been located, a blueprint must be created and published to the
service catalog. We have already shown how this can be accomplished and it will
therefore not be discussed here in further detail.
Once this task has been accomplished, we can focus on vRealize Orchestrator. Continue
with the following steps:

1. Import the Assign workflow to a blueprint and Run program in guest workflow
into Orchestrator.
2. Run the Assign workflow to a blueprint workflow, select the
MachineProvisioned template and use the Run program in guest as the end user
workflow to run.
3. In the properties section of the Photon blueprint, assign values for the vmUsername,
vmPassword, programPath and workingDirectory properties and save your changes.

Once you have completed the configuration, you should be able to request a Photon
machine from the service catalog. With the Orchestrator script executed, the hostname
should have been changed as well. You can check this by opening an SSH connection to
the newly provisioned machine.
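For example (the user name, machine name and IP address below are hypothetical):

ssh root@192.168.10.42 hostname

The command should print the hostname that vRealize Automation passed to the customizeos.sh script, e.g. photon-01.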

To run Docker, from the command prompt, enter the following command:

systemctl start docker

To test Docker, you can start an Nginx web server from the Docker hub:

docker run -d -p 80:80 vmwarecna/nginx
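To verify that the web server is answering requests, you can fetch its start page from any machine with network access to the container (the IP address is a hypothetical example):

curl -I http://192.168.10.42/

An HTTP 200 response confirms that the container was pulled from the registry and started correctly.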

17.7 Summary

Recently, VMware has shifted its focus to DevOps as well. While there are many different
DevOps tools on the market, few of them are integrated with vRealize Automation out-of-
the-box. Consequently, customers are forced to spend a lot of time on pipeline automation.
With the release of Code Stream, VMware now offers a solution that helps integrate these
different DevOps tools and thus allows the creation of a fully automated release pipeline.
When introducing DevOps techniques to an organization, vRealize Automation can also
help out. This chapter covered Puppet, which can be used for both configuration
management and automating vSphere environments. vRealize Application Services can
close the gap between developers and administrators by providing a tighter coupling.
Furthermore, VMware invests a lot of effort in promoting the Solution Exchange – the
marketplace of integrated technologies. In this way, VMware tries to play an important
role in the DevOps market.
18. DNS, DHCP and IP Address Management Tools

The networking landscapes of modern datacenters are rapidly evolving, due to trends and
technologies in the areas of security, virtualization and automation. Setting up a Cloud
Management Platform (CMP) like vRealize Automation can accelerate this development
yet further.
Managing such environments can be quite cumbersome in terms of DNS, DHCP and IP
address management (DDI), especially when there are no appropriate tools available. For
private cloud projects to succeed, the underlying network fabric needs to be able to
support automation, too. Traditionally, network teams have relied too much on manual
scripting and configuration methods. These methods should be eliminated in the long run,
before private cloud solutions can reach maturity.
Typically, CMPs already include rudimentary IP address management (IPAM)
capabilities. However, many organizations require more robust capabilities – reason
enough to provide more information on this topic. After that, we will show how to
integrate the best-known IPAM solution – Infoblox – into a vRealize Automation
environment.
18.1 Overview of DDI tools

The main reason for investing in a DDI solution like Infoblox is to improve the
manageability of IP addresses. Essentially, such a solution increases operational efficiency
and agility by providing the following advantages:

1. First, as IP address management and its underlying workflows are improved, a
faster and more accurate provisioning of DNS/DHCP services is possible. As a
consequence, any service delivery to the end user can become faster.
2. In recent times, end users may use more than one IP address, because they are
using mobile devices or provisioning more virtual machines into a cloud
environment. Having a dedicated IP address management solution eases the
administration.
3. Configuring IP addresses, DNS or DHCP settings can be quite cumbersome and
difficult for less experienced IT administrators. Hence, providing an easy-to-use
graphical user interface will make it possible to delegate such work to those
administrators.
4. A centralized tool further increases the visibility and allows an easy reporting of any
IP address assignment.
5. A DDI solution usually comes with an API and, in the case of Infoblox, there is
even a VMware Orchestrator plug-in that allows the automation of DNS, DHCP
and IP address management tasks.
6. There is support for IPv6.
7. While the software-defined datacenter is becoming increasingly important in the
market, DDI vendors are already working on support for such solutions. Infoblox –
for example – is working on integrating their solution into VMware NSX.

18.2 Infoblox and vRealize Automation

In many vRealize Automation environments, administrators do not want to give up control
of IP address management to vRealize Automation and instead want to use an IPAM tool
like Infoblox.
Infoblox, being the IPAM vendor with the highest degree of visibility in the market, has
provided an Orchestrator plug-in for several years.
From a technical point of view, assigning an IP address and a DNS host record must be
accomplished during the provisioning process. vRealize Automation already provides a
number of methods for assigning IP addresses. This can be done by dynamically assigning
them via DHCP, by statically assigning them from a pool of IP addresses stored in a
network profile, or even by allowing you to write a solution that allocates them from a
custom database.

Figure 18-1 Infoblox Integration via the Orchestrator Plug-in

Infoblox leverages these built-in capabilities by providing their own appliance and
integrating it into vRealize Automation by means of an Orchestrator plug-in. During
provisioning, vRealize Automation can call these workflows to assign an IP address. Once
a machine is deleted, the Infoblox plug-in also automates the process of de-allocating an
IP address and removes its DNS host record. Fig. 18-1 depicts the integration of the
Infoblox lifecycle in a virtual machine’s lifecycle.
Fundamentally, the Infoblox plug-in provides support for vSphere, vCloud Director and
vRealize Automation environments. As our book only covers vRealize Automation, we
will not address the others.
The Infoblox IPAM Plug-In allows both fixed IP address allocation and address
allocation from DHCP ranges. When you use the Infoblox IPAM Plug-In to allocate IP
addresses to virtual machines (VMs), it automatically forwards a DNS request to your
Infoblox IPAM server, i.e. a NIOS or vNIOS appliance. NIOS creates a complete host
record in its database; this enables the VMs to be located through their FQDNs. This
information is also replicated in VMware platforms such as vRealize Automation or
vCloud Director.

18.3 Deploying Infoblox

In order to run Infoblox, the following steps must be taken:

• Ensure that you have a running vRealize Automation installation with vRealize
Orchestrator integrated.
• Deploy the Infoblox NIOS appliance.
• Install and configure the Infoblox Orchestrator plug-in.

If you want to evaluate Infoblox, there is a special NIOS appliance called vNIOS Trial.
This provides the complete suite of DNS, DHCP and IPAM functionality offered by the
standard Infoblox NIOS appliance. In addition, it also provides real-time dashboards for
IP monitoring, allows managing Microsoft DHCP and DNS services (hosted on Microsoft
servers) and provides additional functionality.

Figure 18-2 Architecture of the Infoblox Orchestrator plug-in
The plug-in, on the other hand, integrates IP address allocation capabilities into vRealize
Automation. Starting with release 3.0.0, the plug-in also supports cloud network
automation. This means that the plug-in can work as an adapter to provision port IP
addresses, subnets or networks in private, public or hybrid cloud computing networks.

18.3.1. Deploying the Infoblox NIOS appliance

The deployment of the appliance is fairly straightforward; once the appliance has been
downloaded, you can find a quick start guide on the company’s webpage[15].

18.3.2. Installing and configuring the Infoblox Orchestrator plug-in


The Infoblox Orchestrator plug-in can also be downloaded from the company’s webpage
(you need to register for that first[16]). Once done, there are two tasks that have to be
accomplished:

• First, the plug-in must be deployed in vRealize Orchestrator.
• The plug-in must be connected to the Infoblox appliance.

When integrating the Infoblox plug-in into vRealize Automation, vRealize Automation
invokes the Infoblox IPAM plug-in’s vCO workflows, which either allocate an IP address
and a DNS record for a new VM, or delete them for a removed VM. Fig. 18-2 illustrates
the architecture of the Infoblox plug-in.
When deploying the plug-in, it is crucial that you first verify its compatibility with your
environment. The current IPAM plug-in works with the following versions (please check
for updated versions):

• NIOS 7.0.2
• ESXi 5.5.0
• vCenter 5.5.0
• vRealize Orchestrator 5.5.1, 5.5.2, 6.0.1 (standalone), 6.0.2 (embedded)
• vRealize Automation 6.0.1, 6.1, 6.2.0

Furthermore, when deploying the plug-in to Orchestrator, the community package
com.vmware.pso.vcac.proptoolkit.package is also needed (included in the Infoblox IPAM
plug-in distribution package). The following figure illustrates the components of the
vRealize Automation deployment (Fig. 18-3).

When deploying the plug-in, you have to perform the following procedures in the listed
order below:

1. Import the SSL certificates from NIOS and the IaaS host.
2. Install the Infoblox IPAM Plug-In for VMware.
3. Set up an Infoblox IPAM connection.
4. Install the external VMware package.

The following description will demonstrate the integration with the Infoblox plug-in
2.4.2, but a more recent version will also work.

Figure 18-3 Components for integration of Infoblox into vRealize Orchestrator

18.4 Importing SSL certificates

To ensure interoperability of vCenter Orchestrator with the Infoblox Plug-In for VMware,
you must first import valid SSL certificates from the NIOS appliance and the vCAC
Infrastructure Administration host (a Windows computer with the IaaS Service installed).

Perform the following steps to import an SSL certificate into vCenter Orchestrator:

1. On the VMware vCenter Orchestrator Configuration page, click the Network tab.
2. In the right panel, click the SSL Trust Manager tab.
3. Under Import from URL, enter the IP address or, under Import from file, select
the certificate file for the NIOS appliance or IaaS host.
4. Click Import, and then click Import again to confirm.

The new SSL certificate appears in the SSL Trust Manager page.
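If you are unsure which certificate an appliance presents, you can inspect it beforehand with OpenSSL (a sketch; the hostname is a hypothetical example):

openssl s_client -connect nios.example.local:443 -showcerts </dev/null

The certificate chain printed by this command should match what you import into the SSL Trust Manager.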

18.5 Installing the Infoblox IPAM Plug-In for VMware

To install the Infoblox IPAM plug-in, perform the following tasks:

1. Unzip the plug-in archive file into a folder on your management system.
2. Log in to the VMware vCenter Orchestrator Configuration page using a Web
browser.
3. Click the Plug-ins tab.
4. In the right panel, under Install new plug-in, click the Plug-in file field.
5. In the file upload dialog, select All Files, select the .dar file (o11nplugin-ipam.dar)
for the plug-in version 2.4.2, and click Open.
6. Click Upload and install. The Infoblox IPAM plug-in for VMware tab appears in
the Orchestrator Configuration page.
7. If the Infoblox IPAM Plug-In for VMware check box is not selected under Enabled
plug-ins installation status, select it and click Apply Changes.
8. On the Startup Options tab, click Restart service and, if necessary, click Restart
the vCO configuration server.
9. Click the Plug-ins tab and make sure that the text “Installation OK” is visible to the
right of the IPAM plug-in. If not, restart vCO until the “Installation OK” message is
visible before you continue with the IPAM plug-in configuration.

18.6 Set up an Infoblox IPAM connection

After you have installed the Infoblox IPAM Plug-In for VMware and imported the SSL
certificate from NIOS, you need to configure a connection to your Infoblox appliance. You
can add a number of connections to different NIOS servers, or grids, and indicate the
default one. You can then edit or delete the added Infoblox IPAM connections.
Perform the following steps to configure an Infoblox connection:

1. On the VMware vCenter Orchestrator Configuration page, click the Infoblox IPAM
2.4.2 tab.
2. Click New Connection.
3. Provide the following input:
a. Infoblox IPAM Host Name: Enter the IP address or the hostname of the
Infoblox appliance.
b. Infoblox IPAM User Name: Enter the Cloud API user name for the
appliance.
c. Infoblox IPAM Password: Enter the Cloud API Password.
d. Default Network View: Optionally, enter the network view that will be used
as the default in the workflows.
e. Default DNS View: Optionally, enter the DNS view that will be used as the
default in the workflows.
4. Click Apply changes.
5. On the Startup Options tab, click Restart service and, if necessary, Restart the
vCO configuration server.

18.7 Install the external VMware package

You must install the vCO proptoolkit package before running any workflows from the
vCAC package:

• com.vmware.pso.vcac.proptoolkit.package

This package is located in the \2.4.2\vcac folder of the provided .zip archive. To install it,
work through the following list:

1. Open the Orchestrator Java client.
2. Change either into the Administer or Design mode.
3. Click the Packages tab.
4. Click Import Package, select the com.vmware.pso.vcac.proptoolkit.package file
and click Open.
5. On the Import Package page, check that all associated workflows are enabled.
Click Import and Trust Provider and then Import Selected Elements.

After a short moment, the vCO client updates to show the new package and its description
in the General tab.
In order to validate the plug-in installation, you can perform the following steps:

1. Within vRealize Orchestrator, change to the Administer or Design mode.
2. Click the Packages tab.
3. Right-click the com.infoblox.ipam.vcac package and choose Validate Workflows. A
Workflows Validation window appears listing all events including any warnings.
4. Click Close when finished.

18.8 Working with the Infoblox Orchestrator workflows

Once the Infoblox plug-in has been successfully installed, we can install the vCO
customization wrapper. This workflow allows vRealize Automation to call Infoblox
during the different stages of a machine’s lifecycle:

• Building – the stage when an IP address is reserved in IPAM and passed back into
vRA during the initial provisioning.
• Provisioned – once the machine is built, this calls the workflow “Update MAC
address for vCAC VM wrapper”, which grabs the as-built MAC address from the
VM (nic0) in order to populate Infoblox with this detail.
• Disposing – when the machine is destroyed, this calls out to “Remove Host Record
or A/PTR/CNAME/Fixed address/Reservation of vCAC VM wrapper”. In essence,
this removes the entries made by the previous workflows.

In order to install these hooks, the Install vCO customization wrapper workflow must
be invoked:

1. Navigate to the IPAM > vCAC > Configuration folder, select the workflow and
choose Start workflow.
2. In the Common Parameters page, click the vCloud Automation Center host
instance field.
3. In the Select Host window, click the top vCAC Infrastructure Administration
entry in the left pane’s hierarchical list. The list of current vCAC VM hosts appears.
4. Select the desired host and click Select.
5. Click Submit.

The next step is to apply an Infoblox configuration to a blueprint. Infoblox uses build
profiles to store a specific network configuration. Once again, we use Orchestrator
workflows to create these build profiles. There are different workflows available (see Fig.
18-4):

• The workflow Create build profile for reserve an IP for vCAC VM is used if you
want to define a static IP (you might have obtained this IP from another source such
as a network profile).
• The workflow Create build profile for reserve an IP for vCAC VM in Network
is used, when the blueprint should claim an IP address within a given subnet range.
• The workflow Create build profile for reserve an IP for vCAC VM in Range
refers to a DHCP range defined in IPAM (important: The IP address assignment is
nevertheless static, DHCP is not used).

Once one of the workflows has been invoked, a build profile will have been created. This
can be attached to a blueprint, in order to prepare it for interaction with Infoblox during its
lifecycle. The build profile is shown in Fig. 18-4.
Before a machine can be provisioned, we should first customize the build profile
according to our needs. The values for the build profile can be entered by the user at the
point of request, but it makes more sense to hard code some of them (for example DNS,
network and CIDR). However, when configuring these properties, please keep in mind
that they have a great impact on how vRealize Automation and Infoblox interact.

Figure 18-4 Build Profile for Reserve an IP for vCAC VM

There are different approaches for the integration of both components:

• The first approach (using the Create build profile for reserve an IP for vCAC VM
workflow) is to keep the existing network profiles with all their properties and only
use Infoblox to register IPs and DNS names. This is the easiest approach and in
many cases companies will take this route. You have to set up a network profile in
advance, but once that is done there is little left to worry about.
• The second approach (network approach) uses the Infoblox built-in IPAM system to
obtain the IP addresses. There are different methods to find the right network in
Infoblox, and they require different sets of custom properties:
o You can specify the network by IP address and CIDR and have to specify the
Infoblox.IPAM.netaddr and Infoblox.IPAM.cidr custom properties.
o You can also search in Infoblox for a network with the following custom
properties: Infoblox.IPAM.searchEa1Name, Infoblox.IPAM.searchEa1Value,
Infoblox.IPAM.searchEa1Comparison, Infoblox.IPAM.searchEa10Name,
Infoblox.IPAM.searchEa10Value and Infoblox.IPAM.searchEa10Comparison.
• The last approach (using a range) requires configuration of the following custom
properties:
o Infoblox.IPAM.startAddress
o Infoblox.IPAM.endAddress
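As an illustration, the custom properties for the network and the range approach might be
configured as follows (all values are hypothetical and must match the networks defined on
your Infoblox appliance):

# Network approach: claim an address from 10.10.20.0/24
Infoblox.IPAM.netaddr = 10.10.20.0
Infoblox.IPAM.cidr = 24

# Range approach: claim an address from a fixed range
Infoblox.IPAM.startAddress = 10.10.20.100
Infoblox.IPAM.endAddress = 10.10.20.150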
Once this has been done, we need to attach the build profile to the blueprints we want
Infoblox to be integrated with. We are then ready to request a new VM.

18.9 Summary

This chapter emphasized the need for a professional IPAM system and showed how the
Infoblox plug-in can be installed and configured. Once again, Orchestrator is doing a lot
of the work. However, there is still some work to be done in order to integrate Infoblox
into vRealize Automation and automate the IP address and DNS configuration.
Index

A
Active Directory 54-55, 85-86, 116-117, 251-253
- Active Directory certificate service 66
- Active Directory plugin configuration 271-272
Advanced Service Designer 267
Amazon Web Service 17, 30-33, 91, 296, 319
Anything as a Service (XaaS) 221, 246, 267
Application Services 24, 32-33, 45, 314-317
Approval 170, 191
Approval policy 17,169-172

B
Backup 214-218, 269
Backup and recovery 214-218, 209
Blueprints 109
- virtual blueprint 118
- cloud blueprint 129
- physical blueprint 136
Branding 75, 82-83
Build profile 116, 189, 196
Business group 97-99
Business group roles 107-107

C
Catalog item 165, 168-172, 283
CDK 226
Certificates 64-66, 68-77
- CA-signed certificates 64
- certificate signing request 69
- SSL certificate 55, 330
- certificate template 68
Cisco UCS 17, 25, 89-91, 136-137
Citrix Xen Server 17, 25, 118
Clone 119-122, 120
Cloud costs 301
Cloud Development Kit 226
CloudClient 209-212, 210
Compute resources 89, 92, 99, 104, 115, 122, 161, 186, 193, 289, 293, 300
Cost profile 293-294
Currency 295-297, 306
Custom property 189, 193
Custom resource 275, 273-275

D
Data collection 93-94
DDI 325-326
Dell iDrac 17, 25, 136-138
DEM 43-44, 217
DEM Orchestrator 43
DEM worker 44
Demand analysis 305
DevOps 311
Distributed Execution Manager 29, 60
Docker 318-321

E
Email settings 83
Endpoints 77, 88, 90-93, 132-137, 247-249, 270-272
- vSphere endpoint 92
- Amazon Web services endpoint 93
- NSX endpoint 157
- Orchestrator endpoint 248
Entitlement 52, 166, 168, 169, 170, 279
External network profile 148
External Provisioning Integration 30

F
Fabric admin 95-99, 197-199
Fabric group 95-97

G
Group manager 97-99, 175-177
Guest Agent 138-140

H
HP iLO 17, 25, 89-91, 136-137
Hyper-V 17, 25, 89-91, 118-119
I
IaaS 12, 18, 25-26, 29, 42, 59-65, 73-78
IaaS admin 87-88, 95, 106-107
IaaS web server 43
Identity Appliance 31, 53
Importing machines 160
Infoblox 326
Infoblox NIOS appliance 329-331
infrastructure admin 84, 254, 255-256, 261-263, 283, 330-333
IPAM 330-335

K
KVM 12, 17, 89-91, 118-119

L
LDAP 42, 46, 85-86, 252, 272,
Licensing 116-118, 292
Linked clone 120-122
LINQPad 229
Linux kickstart 123
Logging 15, 78, 86, 124, 135, 232, 237, 241, 249

M
Machine leases 111
Machine prefix 96-98
Management Agents 30
Manager Service 29, 43, 60
Microsoft SCCM 119, 126-127, 137-138,
Model Manager 27-29, 75-77, 217
Multimachine blueprint 145, 155
Multitenancy 17

N
NetApp Flexclone 122
Network profile 104, 145
Notification 98, 174
NSX 35, 92, 146, 151

O
OpenStack 25, 129, 132
Operational analysis 300-304, 302
Orchestrator Appliance 247

P
PaaS 12-13, 18, 24, 32
PostgreSQL 25, 42, 296
Private network 105, 149
Project Photon 319
Property attribute 199-202
Property definition 199-202, 201
Property dictionary 20, 197-206, 221, 225, 261,
Proxy Agent 40, 44
Puppet 31, 139, 255, 258-260, 313-319

R
Reclamation 181
Red Hat KVM 17, 25
Reservation 99
Reservation policy 114, 126-129
Resource action 268, 278
Resource mapping 268, 270, 273, 285
Routed network 105, 146, 169, 151

S
SaaS 13, 292
Service blueprint 155, 268, 273, 276
Service Catalog 18, 165
SMTP (Simple Mail Transfer Protocol) 29, 42
Software-Defined Data Center 16-17
SSL 31, 53-58, 72, 217, 250, 330
SSO 23, 42, 54-59, 215, 297
Storage cost profile 75-76, 86, 293-294
Storage reservation policy 104

T
Tenant 51, 84-87, 106-107, 211, 298

V
vCenter 30-33, 44
vCenter Orchestrator (vCO) 25, 138, 192, 221, 224, 245-250, 330
VDI 30
VDI Agent 30
vCloud 134-135
vCloud Director 134
Virtual Desktop Integration 30
vPostgreSQL 25, 40, 296
vRA Appliance 42, 48, 56-64
vRealize Application Services 40, 45, 314
vRealize Business 24, 31, 40, 212, 292
vRealize Designer 222, 229
vRealize Orchestrator 33, 40, 65, 157, 212-213, 245
W
WIM Provisioning 127
Windows Management Instrumentation (WMI) 30, 93, 137

X
XaaS 12, 18, 268

[1]
Source: http://www.vmware.com/products/vrealize-automation/compare.html
[2]
http://blogs.vmware.com/PowerCLI/2014/12/vrealize-automation-vra-6-2-pre-req-automation-script-formerly-
vcac.html
[3]
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2107816
[4]
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2074803
[5]
http://pubs.vmware.com/vra-62/index.jsp?topic=%2Fcom.vmware.vra.custom.props.doc%2FGUID-9A925449-3DC8-
4BA8-91A5-DF7E1191097B.html
[6]
http://www.virtualizationteam.com/cloud/vcac-6-property-dictionary-relationship-builder.html
[7]
https://developercenter.vmware.com/tool/cloudclient/3.2.0
[8]
LINQPad can be purchased, but there is also a cost-free version for download at http://www.linqpad.net
[9]
https://solutionexchange.vmware.com/store/products/vrealize-orchestrator-vro-puppet-plugin
[10]
https://puppetlabs.com/download-learning-vm

[11]
http://www.gartner.com/technology/reprints.do?id=1-212F7AL&ct=140909&st=sb
[12]
https://msdn.microsoft.com/en-us/library/ee825488(v=cs.20).aspx
[13]
http://vmware.github.io/photon
[14]
https://vmware.github.io/photon/assets/files/getting_started_with_photon_on_vsphere.pdf
[15]
https://www.Infoblox.com/sites/Infobloxcom/files/resources/vnios-trial-quick-start-guide_1.pdf
[16]
https://www.Infoblox.com/downloads/software/vmware-vcenter-orchestrator-plug-in
