
SQL AND SYSTEM CENTER

SQL SERVER 2012 WITH SYSTEM CENTER 2012 R2 ON SERVER 2012 R2

Deploy SQL 2012 in various configurations to support System Center 2012 R2 running on Server 2012 R2

Paul Keely MVP


SQL AND SYSTEM CENTER

Learn how to deploy SQL in Highly Available (HA) scenarios to support any System Center product. Then use System Center to monitor, back up, automate and deploy SQL in a highly managed environment.

Introduction ................................................................ 6
1.0 SQL Versions and Supportable Features ................................ 7
1.1 SCOM 2012 R2 .......................................................... 7
1.2 SCCM 2012 R2 .......................................................... 8
1.3 SCSM 2012 R2 .......................................................... 8
1.4 SCVMM 2012 R2 ......................................................... 8
1.5 SCO 2012 R2 ........................................................... 9
1.6 SQL Server on Windows Server 2012 .................................... 10
2.0 Building a SQL Cluster ................................................ 11
2.1 Sequence for a Cluster Build .......................................... 11
2.2 Prepare and Present Storage ........................................... 12
2.3 Initialize the Disks .................................................. 15
2.4 Create a Windows Cluster .............................................. 19
2.5 SQL Cluster Installation .............................................. 28
3.0 SQL AlwaysOn .......................................................... 40
3.1 Install SQL Server on the Replica ..................................... 40
3.2 Enabling AlwaysOn Availability Groups on the Cluster .................. 43
3.3 Adding the Replica to the Failover Cluster ............................ 46
3.4 Adding AlwaysOn Endpoints to the Replica .............................. 52
3.5 Creating a Blank Database to Protect with AlwaysOn .................... 53
3.6 Create the AlwaysOn Seed Share ........................................ 55
3.7 Creating an AlwaysOn Availability Group ............................... 57
3.8 Deploying SCOM into the AlwaysOn Availability Group ................... 65
3.9 Adding the SCOM Databases to the Availability Group ................... 66
3.10 Deploying Additional System Center Products .......................... 73
4.0 Hyper-V Replica ....................................................... 75
4.1 Testing HVR ........................................................... 86
5.0 Deploying SCOM with Different HA Options .............................................................................. 89
5.1 Basic SCOM Design .............................................................................................................. 90
5.2 Distributed Design ................................................................................................................. 91
5.3 Local Site HA ......................................................................................................................... 93
5.4 Multi-Site HA with SQL AlwaysOn ......................................................................................... 94
5.5 SQL Stretched Cluster ........................................................................................................... 96
5.6 HA Using Separate Management Groups .............................................................................. 97
5.7 Using Hyper-V Replica .......................................................................................................... 99
6.0 SQL in Azure to Support System Center ................................................................................. 101
6.1 Network Connectivity ........................................................................................................... 101
6.2 System Center Support in Windows Azure .......................................................................... 102
6.3 VM Configuration ................................................................................................................. 102
6.4 VM Sizing ............................................................................................................................ 103
6.5 Network Bandwidth .............................................................................................................. 103
6.6 Disk Types and Configurations ............................................................................................ 104
6.6.1 High Availability Disks ................................................................................................... 104
6.6.2 Disk Cache Settings in Windows Azure Virtual Machines ............................................. 105
6.6.3 Enabling Read Write Cache for the OS Disk ................................................................. 106
6.7 Optimizing SQL Configuration in Windows Azure VMs ........................................................ 106
6.8 Use Data Disks for SQL DB Files ........................................................................................ 106
6.9 Disk IOPS in Azure Virtual Machines ................................................................................... 107
6.10 Autogrow Settings................................................................................................................ 107
6.11 High Availability & Disaster Recovery (HADR) ..................................................................... 107
7.0 Backing up SQL with DPM ...................................................................................................... 109
7.1 Introduction to System Center Data Protection Manager 2012 R2 ....................................... 109
7.2 What Can DPM 2012 R2 Protect? ....................................................................................... 109
7.3 Change Tracking ................................................................................................................. 109
7.4 Getting Started with DPM .................................................................................................... 110
7.5 Adding Disk to the DPM Storage Pool ................................................................................. 110
7.6 Deploying DPM Agents ........................................................................................................ 110
7.7 DPMDB ............................................................................................................................... 110
7.8 Special Considerations for Using DPM to Protect SQL Workloads ...................................... 111
7.8.1 RPO, RTO and RLO ..................................................................................................... 111
7.8.2 Recovery Point Objectives ............................................................................................ 111
7.8.3 Recovery Time Objectives ............................................................................................ 111
7.8.4 Recovery Level Objectives ........................................................................................... 111
7.8.5 Local SQL Backups ...................................................................................................... 111
7.8.6 SQL Maintenance Tasks .............................................................................................. 112
7.8.7 SQL Management Tasks .............................................................................................. 112
7.8.8 Moving SQL Server Between Domains ......................................................................... 112
7.8.9 Renaming a Computer Running SQL Server ................................................................ 112
7.8.10 Changing the Recovery Model of a Database ............................................................... 112
7.8.11 Replacing a Disk on a SQL Server ............................................................................... 112
7.8.12 Adding Databases to a SQL Server .............................................................................. 113
7.8.13 Changing the Path of a SQL Server Database.............................................................. 113
7.8.14 Renaming a SQL Server Database............................................................................... 113
7.8.15 Parallel Backups ........................................................................................................... 113
7.8.16 Auto-Protection of SQL Instances ................................................................................. 113
7.8.17 Co-Location for Large Scale SQL Server Implementations ........................................... 114
7.9 Windows Azure .................................................................................................................... 115
7.10 Pre-Configuration................................................................................................................. 115
7.11 SQL Recovery Models ......................................................................................................... 115
7.12 Supported SQL Server Versions .......................................................................................... 115
7.13 SQL Clusters ....................................................................................................................... 116
7.13.1 Adding SQL Cluster Members ...................................................................................... 116
7.13.2 Removing SQL Cluster Members ................................................................................. 116
7.14 Mirrored SQL Servers .......................................................................................................... 116
7.15 Unsupported Scenarios ....................................................................................................... 117
7.16 Specific SQL Server Configurations ..................................................................................... 117
7.16.1 Configuration Walkthough............................................................................................. 117
7.17 Configuring Protection Groups for SQL Server .................................................................... 121
7.18 Restoring SQL Server Data from DPM ................................................................................ 141
7.19 Short-Term Recovery Goals Restore ................................................................................... 143
7.19.1 Recovery to the Original Instance of SQL Server .......................................................... 144
7.19.2 Recovery to Any Instance of SQL Server...................................................................... 145
7.19.3 Copy to Network Folder ................................................................................................ 146
7.19.4 Copy to Tape ................................................................................................................ 147
7.20 Long-Term Recovery Goals Restore ................................................................................... 148
7.21 Cloud Recovery Restore ...................................................................................................... 149
7.22 Additional DPM Information ................................................................................................. 150
8.0 Monitoring SQL with SCOM - Matthew Long ........................................................................... 151
8.1 Management Pack Audience ............................................................................................... 151
8.2 Configuration and Customization ......................................................................................... 152
8.2.1 Initial Setup ................................................................................................................... 152
8.2.2 Optional Monitoring Features........................................................................................ 153
8.3 Extending the SQL Management Pack with MP Authoring ................................................... 157
8.3.1 SQL Management Pack Structure ................................................................................ 157
8.3.2 Useful modules when authoring SQl Monitoring ........................................................... 159
8.4 Sample VBScript ................................................................................................................. 160
8.5 System.OleDbProbe Module ............................................................................................... 165
8.5.1 When Should You Use It .............................................................................................. 165
8.5.2 Required Configuration ................................................................................................. 166
8.5.3 Optional Configuration .................................................................................................. 166
8.6 Example Configurations ....................................................................................................... 168
8.7 Simple/Specified Authentication........................................................................................... 169
8.7.1 Scenario 1 – Monitoring ................................................................................................ 169
8.7.2 Scenario 2 – Collection ................................................................................................. 169
8.7.3 Links and Further Reading............................................................................................ 170
9.0 Building Out a SQL Server Template with VMM –Craig Taylor ................................................ 171
9.1 Using SQL Server Profiles in SCVMM 2012 R2 ................................................................... 171
9.2 Introduction .......................................................................................................................... 171
9.3 Preparing VMM.................................................................................................................... 172
9.4 Creation of a Sysprep SQL Server Virtual Machine Template .............................................. 173
9.5 Creation of the SQL Server Profile ....................................................................................... 182
9.6 Further Customisation Options ............................................................................................ 186
9.7 Deployment of the Service Template ................................................................................... 189
9.8 Conclusion ........................................................................................................................... 192
10.0 Special thanks ......................................................................................................................... 194
INTRODUCTION
Some time ago I wrote a guide for SQL and System Center. At the time of writing, System Center 2012 had just been released, and it took until SP1 for SQL 2012 to be supported with System Center. The previous guide focused very much on SQL itself and looked at areas like troubleshooting, best practices and so on. In this guide I have decided to focus much more on SQL and its interaction with System Center, and indeed Server 2012. With the last guide I felt that a lot of people had already been installing SQL for quite some time, and there was plenty of knowledge out there on SQL installs but not on best practices. SQL 2012 and Server 2012 have brought some real game changers to the story, and because of that this guide is going to go into far more detail on areas like setup and some of the options we have there. We are not going to walk you through a basic SQL setup, as there are a ton of guides for that on the internet; instead we are going to look at some of the more complex areas like clustering, AlwaysOn, Storage Spaces and SQL in Azure. If you are new to this subject then I suggest that you download the previous guide and use it in conjunction with this one.

This guide is a group effort. I wrote the last guide by myself, but felt it was better to include some of the seasoned expert MVPs out there to help with this one, as there is a lot to take in. The idea behind this guide was to cover every point where SQL comes in contact with System Center. Of course the design and setup are the start of it, but then we want to look at how System Center interacts with SQL and give examples of it: how do we build a SQL template in VMM, how do we monitor it with SCOM, how can we automate things in SQL with SCO, and how should we back up SQL with DPM? This guide is going to go through a lot more walkthroughs on these topics and discuss the different options. The people involved with this guide were: Pete Zerger (Azure), Robert Hedblom (DPM), Matthew Long (the SQL MP chapter), Craig Taylor (the SQL VMM template section), and Richard Green and Craig, who worked with me on the general editing and on the cluster builds.

Why are these community guides becoming more popular? As we all know, things are changing really fast in IT at the moment; long release cycles are just not cutting it when it comes to keeping up to date with industry trends. A simple example of this was where the VMM team would release with support for versions of VMware that were already two versions old, and so Microsoft is releasing software faster and faster. As Microsoft continues down the path of "rapid release", whereby R2 follows SP1 within a year or so, it's going to be harder and harder to write conventional print books. They require too many edits and can't keep up with the pace of production.

Lastly, this guide is a community effort, so it may not be perfect, and it is not an official Microsoft guide. It is written by people who work with System Center and SQL every day, and it is our way to share with you and give a little back for all the hard work others give to us. If you find any errors, or have thoughts or questions of your own, please forward them to paul.keely@infrontconsulting.com.

Updates to this guide should be fairly frequent, and we will tweet about them in the usual groups.

Paul Keely - Cloud and Datacenter Management MVP


1.0 SQL VERSIONS AND SUPPORTABLE FEATURES
The first thing to think about when deploying SQL to support System Center (SC) is what the supported configurations for System Center are. In this document we are assuming that you are deploying SQL 2012 on Server 2012. We are going to go through the System Center products one by one and look at which features of SQL are supported. The version of SQL listed here is the highest supported version; you can look at the TechNet page for each product to get more information. At a very high level, there is a TechNet page that gives an overview of the SQL requirements.

Can I install this on Server 2008 with SQL 2008? The answer to this question is more "should you" rather than "can you". Deploying SC on a VM on day 1 on older OSs and SQL versions is certainly feasible. The better discussion is whether it would not be wiser to deploy your new SC apps in such a way that you can leave them in place for the next 3-5 years. If you choose to deploy on Server 2008 you are deploying on really old code, and if you decide to use Windows Clustering in 2008 you are starting off on a poor footing compared to Server and SQL 2012. If you are running on an older version of VMware that does not support Server 2012 guests then you may be in a bit of a bind. It is a best practice to install your systems and then leave them alone; performing OS upgrades on the database servers, SQL upgrades or DB moves in years to come is not the best approach for an enterprise management tool. We are just going to cover a short overview of the SQL requirements for each product in the suite, with a short commentary on each DB.

1.1 SCOM 2012 R2


The requirements for SCOM 2012 R2 are listed here on TechNet. The SCOM environment contains up to 3 DBs: the operational DB, the DW and ACS. You need to configure the DW settings at primary install time whether or not you are going to install the DW. You can connect multiple Management Groups (MGs) to the same DW, but there must not be much latency between the MGs. If, for example, you have a pre-existing SCOM MG in Europe and you wanted to add a new MG in the US but share the European DW, this would be a poor SCOM SQL configuration and not the way the product was designed. SCOM's SQL DB usage is considered one of the highest in the SC suite, as agents are sending data to the Management Servers (MSs) all the time. If you are building SCOM on a cluster where you mount the drives under a small mount point, the SCOM installer cannot read the size of the underlying drives, and it will fail if the mount point is smaller than the size of the DBs you are trying to create in the installer.

SQL Version: SQL 2012 SP1
SQL components: SQL Server Full Text Search is required.
SQL components (DW): SSRS
SQL Clustering: Single Active/Passive
SQL AlwaysOn: Supported
Dynamic Port: Supported
1.2 SCCM 2012 R2
The SQL requirements for SCCM are quite different from those of the rest of the SC suite. The SCCM Primary Site (PS) server needs local admin rights on the SQL instance, and this may impact your decision to co-locate this SC product with any other DB due to security concerns. Unlike other products in the suite, SCCM does not have a separate DW, so pay close attention to the SQL backend or report rendering (which is not the fastest) will be quite slow.

SQL Version: SQL 2012 SP1
SQL components: SSRS
SQL Clustering: Active/Passive, Active/Active, Multi Instance
SQL AlwaysOn: Not Supported
Dynamic Port: Not supported

1.3 SCSM 2012 R2


The SCSM SQL requirements are listed on TechNet here. Where the reporting instance is installed, SCSM will install up to 7 DBs. In production SCSM requires a powerful SQL backend, and you should pay close attention to how you deploy it.

SQL Version: SQL 2012 SP1
SQL components: SSRS, SSAS, AMO, Native Client
SQL Clustering: Not specified on TechNet; however, as AlwaysOn is supported, clustering must also be supported
SQL AlwaysOn: Supported
Dynamic Port: Not specified on TechNet

1.4 SCVMM 2012 R2


The database requirements for SCVMM on TechNet are available here. The requirements on TechNet are not very specific for VMM. As you are using VMM to manage your hypervisor environment, pay close attention to how you manage the HA component of your DB. One point of note is that if you are using Software Defined Networking (SDN) in VMM then all of your networking configuration is stored in the SQL DB; prudence would indicate some form of clustering and AlwaysOn for this DB.

SQL Version: SQL 2012 SP2 or earlier
SQL components: Not specified on TechNet
SQL Clustering: Not specified on TechNet; however, as AlwaysOn is supported, clustering must also be supported
SQL AlwaysOn: Supported
Dynamic Port: Not specified on TechNet
1.5 SCO 2012 R2
The Orchestrator DB has almost no SQL requirements outside the standard DB engine.

SQL Version: SQL 2012
SQL components: Not specified on TechNet
SQL Clustering: Not specified on TechNet; however, as AlwaysOn is supported, clustering must also be supported
SQL AlwaysOn: Supported
Dynamic Port: Not specified on TechNet

1.6 SQL Server on Windows Server 2012
Server 2012 has been a real game changer in many ways. The areas that spring to mind are:

 Better availability
As Server 2012 can use a whole lot more CPU and RAM in a VM, it is possible to have very large and powerful SQL servers. Server 2012 now supports 640 logical processors over 64 sockets with up to 4TB of RAM.

 Mitigation of RAM hardware errors
Where the hardware permits, SQL Server can take advantage of this.

 Storage Spaces
Storage Spaces, now often referred to as "Spaces", is a new paradigm in storage and has a massive impact on how we can build SQL. When we look at the SQL cluster build below we are going to use a Scale Out File Server (SOFS) cluster. This cluster uses Storage Spaces as a storage target. This is the very latest way to use the Server 2012 storage features to build a cluster and present it to an application cluster as shared storage.

Note: CSVs and the Resilient File System (ReFS) are not supported with SQL 2012.

2.0 BUILDING A SQL CLUSTER

IF YOUR DB'S ARE IMPORTANT THEN YOU NEED TO CLUSTER YOUR SQL INSTANCE!

The preferred design for any SQL infrastructure that is important is to cluster the SQL servers. This helps to secure against the failure of a SQL compute node; it does not, however, help with a failure of the backend storage. In the configuration used here we have a Scale Out File Server cluster using Storage Spaces as the backend infrastructure. What this means is that I have a two node cluster presenting storage, and a two node cluster presenting the SQL application layer. With this configuration I can lose one node presenting storage and one node presenting SQL. The last thing I am going to do is add a final layer of HA to the mix by using SQL AlwaysOn (AO). You can just use a SQL cluster and not use AO; the cost of AO is not for the faint hearted, as you will need at least 3 copies of SQL Enterprise.

2.1 Sequence for a Cluster Build

In this example we are going to deploy a multi instance SQL cluster, and then we are going to add another layer of HA with SQL AlwaysOn.

 Design your SQL cluster
 Build out the SOFS (beyond the scope of a SQL for System Center guide)
 Build out your Hyper-V cluster (beyond the scope of a SQL for System Center guide)
 Build your SQL cluster (you are in the right place)

There are a ton of resources out there for SOFS and Hyper-V clusters.

Let's first start off with a high level overview of what we are hoping to achieve. We are going to use a 2 node SOFS cluster presenting storage to a 2 node Hyper-V cluster. On the Hyper-V cluster we have a VM on each node, SQL1 and SQL2.

The diagram below gives an overview of what the environment looks like.
We are only going to look at the SQL build here, and there is a lot to get through. In the following steps we are going to:

 Prepare and present storage
 Create a Windows cluster
 Create a SQL cluster

2.2 Prepare and Present Storage


In this SQL cluster I want to build out several instances to host the whole of the System Center suite; I am going to use 4 or even 5 instances. As this is a virtual guest cluster, I am going to house the different SQL files on separate VHDXs. I want to separate data and log files, not only for the applications but also for the tempdb on each instance, so I am going to need a minimum of 4 drives per instance.

So the first thing I do is create a load of VHDXs on my storage and present that storage to the SQL nodes. As you can see, I have 31 drives, and assigning a drive letter to each volume isn't going to fly.
What I am going to do is create a top level volume, assign it a drive letter, and then mount the 4 SQL drives for each instance inside that drive letter. This is all easier to do from a script, but I am going to show you a run-through from the GUI so you get to understand the process.

Work out how many instances you want, and then create 5 VHDXs for each instance at the sizes you need.

I ran a script to create the following drives for a SCOM instance; it's pretty simple, and from it you can see the disk sizes we are going to use. In total the script created 29 drives. I am separating, for example, the SCOM DB and the DW, and I am going to do the same for SCSM.
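As a rough sketch of what such a script might look like (the share path, file names and sizes here are illustrative assumptions, not the exact script used in this build):

```powershell
# Sketch: create the VHDX files for one SQL instance.
# The target path, names and sizes are assumptions; adjust for your storage.
$target = "\\SOFS\SQLDisks"   # hypothetical SOFS share

$disks = @{
    "SCOM_ROOT"        = 25GB   # empty root volume that will host the mount points
    "SCOM_DATA"        = 100GB
    "SCOM_LOG"         = 50GB
    "SCOM_TEMPDB_DATA" = 50GB
    "SCOM_TEMPDB_LOG"  = 25GB
}

foreach ($name in $disks.Keys) {
    # Dynamic VHDX files grow on demand, keeping the initial footprint small
    New-VHD -Path (Join-Path $target "$name.vhdx") -SizeBytes $disks[$name] -Dynamic
}
```

Repeat the pattern per instance; only the name prefix and sizes change.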
And in Server Manager I end up with this:

Just a quick note: you need a volume for the OS, and if you want to separate the SQL binaries from the OS (and you really should) then you need to add a disk to each node, ideally with the same drive letter on each node, to host your SQL instance binaries.

Disks 2-5 are what I need to use for SCOM. Online those 4 disks (it's better to do just the 4 you want right now, because if you bring in 30 it's difficult to manage them in the GUI).

2.3 Initialize the Disks

When they come online you need to format them and give them some form of workable name. I'm not going to go through all the screens, but as an example I'll take the first drive (it's going to be my empty root), format it as a 64K drive and name it SCOM_ROOT.
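The same initialization and formatting can be scripted; a sketch, where the disk number and label are assumptions for this lab:

```powershell
# Sketch: bring a disk online and format it with a 64K allocation unit,
# the size generally recommended for SQL Server data volumes.
# The disk number and label are assumptions for this lab.
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SCOM_ROOT"
```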
Why make the empty root top-level folder that is going to hold the SQL disks 25GB, when surely a 1GB drive would do? This is a good point; however, the SCOM installer looks at the size of this top-level folder, and if it is only 1GB it won't let you install the SCOM DBs.

Then go to File Explorer and create a top level folder called SCOM_ROOT and, inside it, 4 folders to hold the drives you are about to mount. (This is of course just a personal preference; you can mount the drives anywhere you like, just be aware of the complexity of VHDX sprawl.)
Then go and mount the drives into each folder.
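Mounting can also be done from PowerShell; a sketch, where the root drive letter, folder path and disk numbers are assumptions for this lab:

```powershell
# Sketch: attach a data disk to a folder on the root volume instead of
# giving it a drive letter. Paths and disk numbers are assumptions.
$mountPath = "S:\SCOM_DATA"
New-Item -ItemType Directory -Path $mountPath -Force | Out-Null

# Add the folder as a mount point (access path) for the partition
Get-Partition -DiskNumber 3 -PartitionNumber 2 |
    Add-PartitionAccessPath -AccessPath $mountPath
```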

My disk structure now looks like this

And all the drives are now sitting on the empty root drive on the SQL node.
2.4 Create a Windows Cluster

You need to do all these steps on each node in the cluster.

As was mentioned earlier, this guide is not going to go into all the details and nuances of building a Windows cluster, as there are so many great references for that elsewhere.
Install the Failover Clustering Feature on each of the nodes.
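If you prefer scripting, the feature can also be added from PowerShell on each node:

```powershell
# Sketch: install the Failover Clustering feature and its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
```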

Then once the feature has been added, launch the Failover Cluster Manager.
Strictly speaking you don’t need to validate a cluster but you would be mad not to. Clustering in 2012
has become such a joy to work with and that is especially so when your clusters pass their validation.

Add each of the nodes in the cluster (I am going to add another node, SQL3, to act as a target for the AlwaysOn replica in the next chapter).

Run the cluster validation tests and test everything.
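The same validation can be run from PowerShell; SQL1-SQL3 are the node names from this lab:

```powershell
# Sketch: run the full set of cluster validation tests against the
# prospective nodes before creating the cluster
Test-Cluster -Node SQL1, SQL2, SQL3
```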


The report has thrown up some warnings, and we need to review them, but we can go ahead with the cluster build.

I give the cluster a name and IP address; these are the Windows cluster details and are separate from the SQL cluster details.
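The equivalent PowerShell is a one-liner; SQLSC is the cluster name used in this guide, while the IP address below is an illustrative assumption:

```powershell
# Sketch: create the Windows cluster with a name and static IP
# (the address is an assumption; use one from your own subnet)
New-Cluster -Name SQLSC -Node SQL1, SQL2, SQL3 -StaticAddress 192.168.1.50
```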

The cluster has been deployed, and you should be able to ping the cluster name and IP address.

(This is not the place to go into complex cluster setup troubleshooting, but the main reason I see clusters fail to deploy in production is that the cluster instance needs to create a computer object in AD; you either need the rights to do that or need the object created in advance for you. Get this sorted and provide shared storage of some description; you can now ping the cluster name (SQLSC in my case) and it should be ready for a SQL deployment.)
The last step in this process is to add the storage you created in the first step.

Now, as you can see, the cluster brings in the disks but doesn't name them the way I did in Server Manager, so I need to sort that out. I start off by finding the root.

Gen2 VMs and disk importing: during the build of this guide I created a disk for the SQL binaries and assigned it the D drive, hanging off the SCSI controller. However, because it's a SCSI drive, Failover Cluster Manager will allow you to import this disk into the cluster, as it sees it as shared. If you do this by mistake (and I did), each time you import new disks into the system the cluster will take ownership of this disk, or at least try to; you will lose it as a local drive, and as it holds the SQL binaries your SQL cluster will fail too. So the top tip is to only import the number of drives you need.
Rename the drive to SCOM_ROOT and do the same with all the other drives

As all of these disks are mounted in the root of SCOM_DATA, I need to make them all dependent on SCOM_DATA so that the cluster knows to bring up the parent drive first; otherwise, in a failover the drives would fail.
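The dependency can also be set from PowerShell. A sketch, where SCOM_DB is a hypothetical mount-point disk name; repeat for each mounted disk:

```powershell
# Make the mount-point disk depend on the parent drive so the cluster
# brings SCOM_DATA online first during a failover
Set-ClusterResourceDependency -Resource "SCOM_DB" -Dependency "[SCOM_DATA]"
```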

2.5 SQL Cluster Installation
Now I am ready to go and install SQL on node 1 in the cluster. I am going to name it SCOM and place the SQL files on the correct drives that we have already prepared.

To begin with I am going to create an empty role on the cluster and assign the disks to that role.

Add your disks to the empty role


Now you can see the storage with the empty role
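For reference, the empty role and the disk moves can be scripted as well; the role name here is just an example:

```powershell
# Create an empty role (resource group) for the SQL disks
Add-ClusterGroup -Name "SQL Storage"

# Move a cluster disk resource into the new role; repeat per disk
Move-ClusterResource -Name "SCOM_DATA" -Group "SQL Storage"
```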

From the SQL media select Installation and then New SQL Server failover cluster installation (I always
run this with admin rights)
Now when I go to SQL Server Management Studio I connect to SCOM.
The secondary node install is just a few clicks and go; as you are joining an existing cluster there is almost no configuration. From the SQL media choose “Add node to a SQL failover cluster”.
3.0 SQL ALWAYSON
SQL Server AlwaysOn is a new feature of SQL Server 2012. It allows us to replicate a database or a
group of databases to another server. The additional server can be in a remote location from the
primary as there is no need for shared storage. AlwaysOn is only available with SQL Enterprise so
it’s not cheap. In our example above we have built out a multi-instance SQL cluster on which we are
going to install the full suite of SC applications. As we have built the cluster using a clustered storage
configuration (SOFS), we have removed a single point of failure at the storage and SQL host level. What
we are now going to do is install SQL on a 3rd server (SQL3) and then enable and configure
AlwaysOn. Ideally you would configure this in advance of installing the SC applications, as you are
effectively generating a new name and port for the applications to connect to.

3.1 Install SQL Server on the Replica


On a new VM, imaginatively called SQL3, we are going to install a standalone SQL instance; then we will install Failover Clustering and finally join this node to the existing Windows cluster.

It’s a full install; however, I am only going to cover the bare minimum of screens below as you have seen
them in the previous chapters. The important thing to remember for the install is that the product key
you use to install SQL3 must be Enterprise Edition to access the AlwaysOn feature.
The whole point of AlwaysOn is that it abstracts the back end, so you do not need to use the same
instance name. Obviously it will make administration and troubleshooting easier if you can use the
same instance names on both the primary and secondary copies in the AlwaysOn Availability Group,
but it’s not a hard and fast requirement.

Now for the really important part. All of the Data Directories listed here are using
the exact same drive letter and path as were configured previously on the SQL
cluster with the shared storage. AlwaysOn assumes that we have the same drive
mappings and folder naming in place on all servers in the solution.

3.2 Enabling AlwaysOn Availability Groups on the Cluster
Once the installation is complete, head over to the cluster nodes previously deployed, SQL1 and SQL2.

On each of the servers, open SQL Server Configuration Manager to enable AlwaysOn for each of
the SQL Server Database Engine instances which has been installed. In this case, this applies to
SCOM and the SCOM Data Warehouse SQL Server instances.

As you can see from the screens above and below, the Windows Failover Cluster name for the cluster
of which this node is a member is populated for us. Tick the Enable AlwaysOn Availability Groups
check box to enable the feature and then hit the OK button.

Rinse and repeat the above for all of the instances you will be enabling AlwaysOn for. The next step is
to remove the dynamic port mode and list a static port instead. This is done in the SQL Server
Network Configuration portion of SQL Server Configuration Manager on the TCP/IP Protocol
Name for each instance.

Scroll all the way to the end of the IP Addresses tab to reach the IPAll section and clear any value
from the TCP Dynamic Ports field and add a port number value to the TCP Port field. In this example,
I’m using 15100 and 15101 for SCOM and SCOMDWR respectively.

Once again, rinse and repeat these steps for the remaining instances and remember that these steps
need to be completed for all of the cluster node members (SQL1 and SQL2 in this example).
Now that AlwaysOn Availability Groups has been enabled you will need to restart the SQL Server
Database Engine service for each of the instances. You will only need to do this on the node which
currently owns the SQL Server database instance as the other node will have its service already in a
stopped state.
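If you prefer to script this, the SQLPS module's Enable-SqlAlwaysOn cmdlet can enable the feature and restart the service in one step; a sketch using the instance names from this guide:

```powershell
# Enable AlwaysOn for each instance; -Force restarts the Database
# Engine service so the change takes effect immediately
Enable-SqlAlwaysOn -ServerInstance "SQL1\SCOM" -Force
Enable-SqlAlwaysOn -ServerInstance "SQL1\SCOMDWR" -Force
```

As above, only the node which currently owns each instance needs its service restarted.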
3.3 Adding the Replica to the Failover Cluster
We now need to add the Failover Clustering feature to SQL3 and add it to our cluster.

To do this, head back over to SQL3 and add the Failover Clustering feature, either from the Server
Manager GUI as shown below or using PowerShell. For PowerShell, run Import-Module ServerManager
and then Install-WindowsFeature –Name Failover-Clustering –IncludeManagementTools.

Once the feature is enabled, the easiest way to add SQL3 to the cluster is to connect to an existing
node in the cluster, SQL1 in my case. Once connected, use the Add Node option from the GUI to add
SQL3 to the Windows Failover Cluster.

On the Select Servers pane of the wizard, enter SQL3 as the server name and then click Add.

On the Confirmation pane of the wizard, be sure to un-check the Add all eligible storage to the
cluster option. Leaving this ticked will attempt to add the local disks from SQL3 to the cluster which
wouldn’t work too well.
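These two wizard steps map to a single cmdlet; a sketch assuming the cluster name SQLSC from earlier:

```powershell
# Join SQL3 to the existing cluster; -NoStorage is the equivalent of
# un-checking "Add all eligible storage to the cluster"
Add-ClusterNode -Cluster SQLSC -Name SQL3 -NoStorage
```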
Once the wizard is completed, you should see SQL3 as a third node in the cluster from the Node view
in the Failover Cluster Manager.

Enabling AlwaysOn Availability Groups on the Replica

Once you have verified the above, make the changes to the SQL Server services that we previously
made to the clustered services to enable the AlwaysOn High Availability feature on SQL3.

Open SQL Server Configuration Manager and use the SQL Server Services view and select
Enable AlwaysOn Availability Groups on the SQL Server Database Engine services for each
instance.
With AlwaysOn now enabled for the instances, use the SQL Server Network Configuration view to
set the port for each of the instances to a static port for the TCP/IP Protocol.

Use the same port number for each instance as you used on the clustered instances and be sure to
clear any values from the TCP Dynamic Ports field so that Dynamic Port is disabled.
Once you’ve finished enabling the feature and changing the port number to the new static port for the
instances, restart the SQL Server Database Engine service for each of the instances for the changes to
take effect.

3.4 Adding AlwaysOn Endpoints to the Replica

With the above now done, we need to set up the SQL endpoints on SQL3 which will pair with those created previously on the clustered SQL instances.

Using PowerShell, enter the following Cmdlets:

$endpoint = New-SqlHadrEndpoint SCOMEndpoint -Port 5022 -Path SQLSERVER:\SQL\SQL3\SCOM
Set-SqlHadrEndpoint -InputObject $endpoint -State "Started"

$endpoint = New-SqlHadrEndpoint SCOMDWEndpoint -Port 5023 -Path SQLSERVER:\SQL\SQL3\SCOMDWReporting
Set-SqlHadrEndpoint -InputObject $endpoint -State "Started"

3.5 Creating a Blank Database to Protect with AlwaysOn
Now using SQL Server Management Studio establish a connection to the clustered SQL instance
SCOM\SCOM. Open a New SQL Query and create a new blank database with the syntax create
database sql_guide. This is required because the AlwaysOn Availability Group cannot be created
without a database to make Highly Available.

Once the database is created, we must take an initial Full backup of it. This is required because the
database is set to Full Recovery Model (required for AlwaysOn protection).
Make a full backup of this DB

As the database is the blank sql_guide database we created earlier, this completes in a second or two.
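Both steps can also be scripted from the SQLPS module; the backup path below is just an example:

```powershell
# Create the placeholder database on the clustered instance
Invoke-Sqlcmd -ServerInstance "SCOM\SCOM" -Query "CREATE DATABASE sql_guide;"

# Take the initial full backup that AlwaysOn requires (example path)
Backup-SqlDatabase -ServerInstance "SCOM\SCOM" -Database "sql_guide" -BackupFile "D:\Backups\sql_guide.bak"
```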

3.6 Create the AlwaysOn Seed Share


The next step in the process for configuring AlwaysOn is to set up the AlwaysOn seed share. This share
is used by the AlwaysOn High Availability wizard to perform the initial data synchronization. This share
must be accessible by all servers and nodes which will participate in the AlwaysOn configuration.
Once you have created a new folder for the share, use the Advanced Sharing tab on the folder
properties to enable sharing of the folder and edit the permissions. We need to grant the service
account used to operate the SQL Server Database Engine instance and the service account used for
the SQL Server Agent Full Control on the share.

Once you have enabled the share and set the share permissions, the same permissions need to be
applied to the NTFS folder. Use the Security tab on the folder to add the NTFS ACEs for the same
service accounts, the SQL Server Database Engine and the SQL Server Agent service accounts.
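A sketch of the share and NTFS permissions in PowerShell; the path and service account names are placeholders for your own:

```powershell
# Create and share the seed folder, granting the SQL Server Database
# Engine and SQL Server Agent service accounts Full Control
New-Item -Path "E:\AOSeed" -ItemType Directory
New-SmbShare -Name "AOSeed" -Path "E:\AOSeed" -FullAccess "CONTOSO\svc-sqldb", "CONTOSO\svc-sqlagent"

# Mirror the same permissions on the NTFS folder
icacls "E:\AOSeed" /grant "CONTOSO\svc-sqldb:(OI)(CI)F" "CONTOSO\svc-sqlagent:(OI)(CI)F"
```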

3.7 Creating an AlwaysOn Availability Group
Once all of the above is done, we’re ready to start the AlwaysOn Availability Group wizard. From the
server which you want to be used as the initial primary replica (probably the clustered instance), login to
SQL Server Management Studio.

Right-click on the AlwaysOn High Availability folder and select the New Availability Group Wizard
option.

After the introductory step, we are asked what to use as the Availability Group Name. This name will be
added to the Windows Failover Cluster as a new Resource Group with a Virtual IP Address and a
Network Name which will be used going forwards as the network name for our client and application
connections to the database instance.
Next, select the databases to protect with AlwaysOn. As this is a new instance with only our sql_guide
database, select this. If you omitted the steps earlier to take a backup of the database, you will not be
able to proceed past this point until it has been done.

With our database now selected, we need to get the clustered SQL Server instance and the standalone
instance talking. Click the Add Replica button and you will be presented with a SQL Server login
dialog. Enter the server and instance name (SQL3\SCOM in this example) and login with Windows
Authentication using your current credentials.
On the Endpoints tab, you should see the name Hadr_endpoint and the Port Number 5022 shown. If
these look familiar, good. These are the ports and names we specified earlier using the PowerShell
Cmdlets.

On the Backup Preferences tab you can configure how SQL Server databases on this instance will be
backed up. The default option is Prefer Secondary which is ideal as it allows our primary to continue
servicing client and applications requests for database records while a backup runs from our replica on
the secondary.
On the Listener tab we configure the DNS Name (Network Name) which will be configured and the IP
Address and TCP Port. In this example, SCOMAO is the network name, 15100 is the TCP Port and a
Static IP Address of 10.1.10.211 has been defined.

Once you have moved on to the Data Synchronization panel, ensure that the Full option is selected to
get things going immediately. Here we specify the path to the file share created earlier through which
the initial synchronization is performed.
After the summary, SQL Server will setup the Availability Group and make the sql_guide database
highly available for the SCOM instance.
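For reference, the wizard's work can also be done with the SQLPS cmdlets. A sketch under the names used in this guide; the contoso.com FQDNs are placeholders for your own:

```powershell
# Describe both replicas as templates (SQL Server 2012 = version 11)
$primary   = New-SqlAvailabilityReplica -Name "SCOM\SCOM" -EndpointUrl "TCP://scom.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "SQL3\SCOM" -EndpointUrl "TCP://sql3.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the availability group on the primary, then join the secondary
New-SqlAvailabilityGroup -Name "SCOMAO" -Path "SQLSERVER:\SQL\SCOM\SCOM" -AvailabilityReplica @($primary, $secondary) -Database "sql_guide"
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQL3\SCOM" -Name "SCOMAO"
```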

Repeat the steps given in the Creating a Blank Database to Protect with
AlwaysOn and the Creating an AlwaysOn Availability Group sections for your
SCOM Data Warehouse instance.
3.8 Deploying SCOM into the AlwaysOn Availability Group
Now when it comes to installing a System Center product database into the AlwaysOn Availability
Group, we need to point the setup to the AlwaysOn listener (SCOMAO in my case) and not the
clustered SQL Server instance name. The same process applies to all SC product installations.

Assuming your SCOM installation completes without a hiccup or an issue, you’ve now got SCOM
installed and configured to use the AlwaysOn Listener name, which means it’s set up to use AlwaysOn.
However, SQL itself isn’t quite ready, so we need to dive back into SQL to make the SCOM and SCOM
Data Warehouse databases highly available.

3.9 Adding the SCOM Databases to the Availability Group


With SCOM now installed, we’re using our AlwaysOn Listener to connect to the SQL Server; however,
the SCOM database isn’t currently being replicated throughout the AlwaysOn configuration.

Connect to your SQL Server which is currently the Primary for the Availability Group. You can verify
the primary or secondary status from the AlwaysOn High Availability folder in the SQL Server
Management Studio.
Once connected and verified that you are connected to the Primary, we need to make changes to the
SCOM database.

Expand the Databases folder and access the Properties for the OperationsManager database.

From the Recovery Model drop-down menu, change the option from Simple to Full as required by
AlwaysOn.

With the Recovery Model now set, we need to take a full backup of the database before proceeding
with the AlwaysOn wizards.
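Both changes can be made from PowerShell as well; the backup path is an example:

```powershell
# Switch OperationsManager to the Full recovery model, as AlwaysOn requires
Invoke-Sqlcmd -ServerInstance "SCOM\SCOM" -Query "ALTER DATABASE OperationsManager SET RECOVERY FULL;"

# Take the full backup needed before the Add Database wizard will proceed
Backup-SqlDatabase -ServerInstance "SCOM\SCOM" -Database "OperationsManager" -BackupFile "D:\Backups\OperationsManager.bak"
```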

With the database now ready, return to the AlwaysOn High Availability folder and right-click the
SCOMAO Availability Group name and select the Add Database option from the menu.

This will open the Add Database to Availability Group wizard.

On the Select Databases pane, you will see the sql_guide database already a part of the availability
group, and the OperationsManager database should report a status of Meets prerequisites.

Select the check box for the OperationsManager database and hit Next.

On the Select Data Synchronization pane, leave the default option of Full as is and use the same
shared file share which we created earlier in the guide for the initial data transfer.

On the Connect to Replicas pane, you don’t need to specify any replicas this time as the connections
are already configured in the Availability Group but you do need to connect to the instance.

Click the Connect button and use Windows Authentication to connect to the remote SQL Server
instance.

On the Validation pane, you should hopefully get green for all of the pre-flight checks.

On the final pane before synchronization begins, the Summary will confirm all of your selections from
the wizard. Select Finish and let the replication begin.
All going well, you should get a confirmation that the wizard completed successfully.

Viewing the list of Availability Databases back in the SQL Server Management Studio folder view
should now show the OperationsManager database as per the screen below.
Repeat the steps in this section for your SCOM Data Warehouse instance.

Once you have completed this section and repeated the steps against your SCOM Data Warehouse
instance, you’re all set. You’re now using SCOM 2012 R2 with SQL Server 2012 AlwaysOn technology,
protecting your databases across a multi-node, multi-instance SQL cluster and a standalone SQL
server.

3.10 Deploying Additional System Center Products
When it comes to deploying the rest of the suite you can present your AlwaysOn instance; sometimes
the port is requested and sometimes it isn’t. Before we go any further, please understand that
SCCM 2012 does not support AlwaysOn, no question. For some time the reason this was not supported
was simply that it hadn’t been tested; however, as far as I know it has since been tested (or considered)
and has been ruled out by the product group.

SCO looks for the instance and port

And SCSM does not ask for the port; it queries the AlwaysOn listener.

4.0 HYPER-V REPLICA
Hyper-V Replica (HVR) is a host-based, hypervisor-centric replication process. It is an in-box feature
from Microsoft and is a fairly simple replication setup. Unfortunately HVR is not supported alongside other
types of SQL replication, and there is no native automated failover like there is with SQL clustering. So
why might I use HVR and not SQL clustering?

 HVR is easy to set up
 It’s host based so the fabric team can set it up and configure it
 It does not require shared storage
 It allows you to protect a SQL server across a WAN link
 There is a 1:1 I/O hit on the storage write

If you choose SQL clustering:

 You get automated failover
 It is more complex to set up
 You can use clustering with AlwaysOn
 It is an application-level replication process

We have 2 servers, HV1 and HV4.

Let’s start with HV1, which is going to be the replica target for the VMs. I have added a 500GB drive
called HVR and I am going to use this as my target location.

Now we go to HV4 and select the SQL server. As you can see from the screen below, the SQL server is hosting a
ton of System Center DBs; it’s not clustered and we want to have some warmish copy of the data.

4.1 Testing HVR

When you want to test the replication, an easy way is to place a file on the desktop and check that it gets replicated.
I am also going to create a new DB called SQL guide

Now back on HV1 (the replica host) I should be able to test the replication. To do that I simply go to
Replication and select "Test Failover". This will create a new virtual machine from a snapshot of the VHDs.

5.0 DEPLOYING SCOM WITH DIFFERENT HA OPTIONS
So we have explored a number of possible configurations for SQL in terms of HA, both for the SQL
server and for the site where your database or databases might run. In this section we are going to walk
through the different options for installing SCOM in a multi-site environment and what considerations
you may need to take into account in designing and implementing your SQL server or servers. SCOM
is a great product in the System Center suite to discuss because a lot of companies require multi-site
monitoring and want a HA SCOM whereby, if they lose a primary data center, monitoring fails over to
a secondary datacenter or DR site.

We are going to start off with the most basic SCOM deployment and work our way up from there. If you
are a seasoned SCOM pro please excuse the basic nature of this, but it will be helpful for others to
understand what it’s all building on.

Just in case some of the common abbreviations are not familiar to you:

MG Management Group, the security realm between the MS and the SQL DB
MS Management Server, the server that has a writable connection to the SQL DB
DB Database, the SCOM monitoring and reporting databases that are hosted by SQL
GW Gateway, the SCOM role that is used in remote locations to forward to a MS
MP Management Pack, XML document that holds discoveries, rules, monitors and reports

Firstly we need a management server and a SQL server to host the SCOM DB. A server we want to
monitor has a SCOM agent loaded on it; it sends its monitoring data to the management server, and
the management server in turn writes that data to databases on a SQL server. If we deploy SQL
Standard and it is only running to support System Center then there is no cost for the SQL license.

SQL Licensing: Licensing is always a complex issue with System Center, and it doesn’t get easier with
SQL. That being said, I have been told from several sources that the no-cost SQL Standard licensing
also applies to clustered instances of SQL Standard only being used to house System Center DBs. It
was also confirmed to the MVP group that you can deploy a SharePoint whose only purpose is to house
System Center dashboards and there is no licensing requirement.

We will then move on to basic SQL clustering, whereby we cluster the SQL DB on two servers. With
Server 2012 the cluster build and on-going management is a much easier prospect than with any other
version of Windows.

The other scenarios covered in this chapter are quite advanced and many of them are only possible with
Server 2012 and/or SQL 2012. What is important to keep in mind is that any solution that involves
HA and no single point of failure comes at a price, and usually a high one.
5.1 Basic SCOM Design

[Diagram: an agent is installed on a server and communicates directly with the management server
(MS). SCOM is installed on a single VM, with a SQL instance on a separate VM. The SQL is a
standalone SQL Standard edition that comes with the SCOM licence.]

Important points of note here are:

 The agent can connect to the MS across a WAN link


 The MS must be located close to the SQL DB where the latency does not exceed 20ms
 All the monitoring and reporting data is stored on the SQL DBs; the management server has no
real information (except MP data) on it.
 There is no HA or DR in this configuration, lose the SQL server or the MS and all monitoring stops
5.2 Distributed Design
Now let’s increase the complexity a little and assume we have 2 locations spread across a WAN link.
Please note we are talking about a WAN link with extended distance, so if you have two physical sites
located a few KMs from each other with fiber between them then that really doesn’t count. In this example
we have an office in Dublin, Europe and one in NYC, US. We are just going to expand on the first example,
whereby we have the main data center in Europe, some small satellite sites and then an additional large
data center in the US.

Important points of note here are:

 We are using a SCOM MS and SQL server in the main EU data center
 In small satellite offices where there are only a small number of servers, agents can connect directly
across the WAN to report to a MS in the main data center
 An additional MS was placed in one of the additional EU data centers; we consider this a
poor design as the network latency across the WAN is greater than 20ms
 In the US data center a GW server is deployed. As the minimum latency between the US and EU
is about 60ms the GW server will collect agent data in the US, compress the data and then forward
it to a MS in the EU
 There is no HA or DR in this configuration
 GW servers bring a layer of complexity and add an additional server so in small satellite offices a
calculation needs to be made to decide at what point you place a GW server versus just allowing
agents to report across the WAN. Where a small office has a domain controller and a multi role
file and print server adding a GW server may add little to the configuration.
 This design is full of single points of failure
 The GW server cannot write directly to the SQL DB, and has to proxy through a MS which in turn
writes to the SQL DB.
 There is no licensing cost for SQL if we are using SQL standard and it’s just for System Center

5.3 Local Site HA
In this configuration we are going to cover basic HA with SCOM in a single site. The reason we are
using a single site is to get the basic tenets of design across and to facilitate a discussion on HA before
we get into more complex multi-site options.

Important points to note here:

 Using two MS’s in a pool allows agents to fail over from one MS to another
 Using a SQL cluster the failure of a single SQL node means the MS can fail over to another node
in the cluster
 This configuration would be considered a basic HA option as we are still using a single copy of
the databases.


 The SQL cluster can be built on SQL standard with no licensing cost
 This is a fairly easy setup and configuration and does not have a high admin effort or TCO
 Can even be built with no shared storage, using an iSCSI target on Windows Server
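As a sketch of that last point, Windows Server 2012’s in-box iSCSI Target Server can provide the shared storage; all paths and names below are examples:

```powershell
# On the server acting as the iSCSI target
Add-WindowsFeature FS-iSCSITarget-Server

# Create a virtual disk, a target restricted to the SQL nodes, and map them
New-IscsiVirtualDisk -Path "E:\iSCSI\SQLData.vhd" -SizeBytes 100GB
New-IscsiServerTarget -TargetName "SQLCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql1.contoso.com", "IQN:iqn.1991-05.com.microsoft:sql2.contoso.com"
Add-IscsiVirtualDiskTargetMapping -TargetName "SQLCluster" -Path "E:\iSCSI\SQLData.vhd"
```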

5.4 Multi-Site HA with SQL AlwaysOn


This is a more complex and costly option but allows for multi-site, multi-server failover.

Important points of note here:

 We are using a SQL cluster in the main data center; as we are using AlwaysOn in the remote site
we then need to use SQL Enterprise for all nodes.
 The 3rd SQL node in the remote site can be used for actions like backup etc.
 This is an automated fully supported configuration
 We are using double MS’s and GW servers in each site
 In this environment we can still monitor under the following situations
o Loss of the production cluster in the primary site
o Loss of both MS’s in the production site
o In the event of the complete loss of the primary site we can still monitor the secondary
data center
 This is a reasonably complex configuration with an amount of planning and design necessary
 This is a reasonably expensive configuration due to the SQL Enterprise licensing
 It would be possible to collocate other databases on the cluster especially other System Center
products and use the cluster as a management cluster
 If you locate other products on the 2 node cluster in the primary data center the database does
not necessarily need to take part in the AlwaysOn replication.
 This type of configuration would typically be deployed in a large multi-site environment
 This configuration does not require storage replication
 You can add more than one location for the backup SQL node

5.5 SQL Stretched Cluster
In this example we are using a 2-node cluster with a replicated SAN.

Points of note for this configuration:

 This design is based on using storage replication from your storage vendor
 The SQL cluster is based on 2 nodes one in each location
 The SQL cluster needs only SQL Standard
 The cluster requires a stretched subnet spanning the two sites
 In the event of a failover of the SQL environment, the MS in the primary data center needs
to be powered off to force the agents in the primary data center to fail over to a GW server in that data
center
 It is essential to perform component failure analysis, to understand what happens in the event of
failure of individual infrastructure components. This ensures the supporting team knows what is
required to bring the solution back to operation in all failure scenarios
 This is not an easy setup
 While there is a reduction in cost for the SQL in terms of not needing/using SQL enterprise, there
is the cost of storage mirroring.

5.6 HA Using Separate Management Groups


In this configuration we are going to deploy two completely separate MG’s in two separate data centers.
All agents will be deployed to both MG’s but we will only focus on the prod instance.

Important points of note here:

 This configuration uses two completely separate MG’s


 All agents are multi-homed sending data to each MG
 All MP’s sealed and unsealed need to be in sync in each location
 The “secondary” MG can have all agents in maintenance mode
 All the SQL can be Standard edition
 There is a reasonable level of administrative effort required to keep both systems in sync
 It is most likely the lowest cost HA solution covering two sites, requiring the least amount of
licensing effort.
 If the production SCOM environment goes down and the DR MG is used, there is no way to sync
reporting and alerting data back to the production MG.

5.7 Using Hyper-V Replica
Deploying the SQL servers as VM’s

Points to note about this configuration

 The SQL server is running as a VM on a hypervisor that is Server 2012


 There is no storage replication needed
 There is no stretched VLAN needed
 The SQL can run as Standard edition
 This is not an overly complex solution
 The Hyper-V replica is not overly complex to set up, but does require special configuration to
maintain SQL Server product support & recoverability
 Hyper-V replicas are asynchronous, snapshot based replicas, the shortest interval it is possible
to configure being five minutes
 There is a substantial IO hit on the replication
 You can replicate the VM to several locations

6.0 SQL IN AZURE TO SUPPORT SYSTEM CENTER
This section is intended to discuss how to implement your System Center SQL Server instances in
Windows Azure virtual machines (VMs). While many of the guidelines for configuring Microsoft SQL
Server are the same, the architecture of VMs in Windows Azure is quite different. Rather than
duplicating information found elsewhere in this guide, this chapter will serve primarily to clarify those
differences and how to optimize performance for your SQL Server instances hosting databases in
Windows Azure.

With this in mind, this chapter does not aim to teach you Windows Azure Virtual Machines (Windows
Azure IaaS) from the ground up. For training on Windows Azure IaaS, see the “Windows Azure for IT
Pros Jump Start” course on the Microsoft Virtual Academy at
http://www.microsoftvirtualacademy.com/training-courses/windows-azure-for-it-pros-jump-start#?fbid=8YnntofK5KO

Topics covered in this module include:

 Network Connectivity
 System Center Support in Windows Azure
 VM Configuration
 Optimizing SQL Configuration in Windows Azure VMs
 High Availability and Disaster Recovery

6.1 Network Connectivity


It is assumed you have network connectivity equivalent to what System Center would have in an on-
premise scenario. Therefore, it is assumed you have Windows Azure and your corporate datacenter
connected via a site-to-site VPN (S2S), enabling Active Directory authentication. In this configuration,
the Azure virtual network will use your DNS servers for name resolution.

A full discussion of Windows Azure virtual networking and site-to-site VPN is beyond the scope of this
section, but you can learn more with these resources:

For networking configuration (site-to-site VPN):

 Windows Azure Virtual Networking (TechNet documentation)


 Create a Virtual Network for Site-to-Site Cross-Premises Connectivity

For Active Directory Domain Services (ADDS) in Windows Azure:

 Guidelines for Deploying Windows Server Active Directory on Windows Azure Virtual Machines
 Install a Replica Active Directory Domain Controller in Windows Azure Virtual Networks

Windows Azure VMs are dynamically addressed and you cannot override this by setting a static address.
Any attempt to do so will result in loss of connectivity to the VM.
Virtual machines within a VNet will have a stable private IP address. Azure assigns an IP address from
the address range you specify and offers an infinite DHCP lease on it. So the IP address will stay with
the virtual machine for its lifetime. The exception to this is when a virtual machine is in a Stopped
(Deallocated) state, in which case the IP address is returned to the pool allocated to the virtual network
you defined. You should avoid placing production VMs in this state to prevent an unwanted IP address
change.

6.2 System Center Support in Windows Azure


Most System Center 2012 components can be virtualized in Windows Azure. The System Center 2012
SP1 and later components supported in Windows Azure are:

 App Controller
 Operations Manager
 Orchestrator
 Server Application Virtualization
 Service Manager

You will notice that Virtual Machine Manager is not listed, which is not surprising, given VMM
manages the hypervisor, a layer not under our control in Windows Azure.

For a current list of System Center and Windows component support in Windows Azure, see
http://support.microsoft.com/kb/2721672

6.3 VM Configuration
The considerations for SQL Server running in Windows Azure VMs are largely due to differences in
Windows Azure VM and platform architecture versus that of Hyper-V and vSphere. Considerations
include:

 VM Sizing
 Network Throughput
 Disk Subsystem
 Database File Placement

However, some aspects of VM architecture in Azure differ from the Hyper-V or VMware equivalents and
therefore will be addressed in this chapter.
6.4 VM Sizing
The first rule of virtualizing SQL Server is to ensure you have enough compute, storage and memory
resources to match what you used in the physical server. If you review Microsoft documentation around
Windows Azure VMs, the documentation says “Review options and choose a size that matches the
workload”. In Windows Azure, you cannot simply set the CPU cores and memory to exactly the values
you wish. Instead, you must choose one of the available VM template sizes. Even if you upload one of
your own Hyper-V VM templates to Azure, you will have to choose one of these template sizes. There
are currently eight (8) VM templates in Windows Azure, which are shown here.

Based on your desired VM size, you should pick the template that most closely matches your sizing.
The right template may have slightly more or slightly fewer resources (in terms of CPU and memory)
than what you would have chosen in an on-premises situation. Since Windows Azure costs are
calculated on a pay-for-what-you-use basis, be sure not to choose a larger VM than necessary!

6.5 Network Bandwidth


Your measure of control over network bandwidth in Windows Azure is based largely on the VM size you
choose. Every virtual machine size has its own limits in terms of network traffic that goes through the
Windows Azure Virtual Network layer. This network traffic includes all traffic between client
applications and SQL Server in a Windows Azure VM, and any other communication that involves a
SQL Server process or other processes in a virtual machine, such as ETL packages or backup and
restore operations. For network-intensive workloads, Microsoft recommends that you choose a larger
virtual machine size with more network bandwidth. In the world of System Center, SQL Server roles
that fall into this category would include the SCCM, SCOM and SCSM database and data warehouse
servers.

The bottom line is the CPU and memory resources available in VM templates will almost certainly be
the driver of your decision, and the larger VM sizes provide greater bandwidth.
Important: Network communications generated by accessing data disks and the operating system disk
attached to the virtual machine are not considered part of the bandwidth limits.

6.6 Disk Types and Configurations


Proper configuration of your Windows Azure disks (operating system and data) is one of the most
important areas to focus on when optimizing the performance of your SQL Server workloads supporting
your System Center 2012 R2 deployment. This section will cover some basic concepts of disk behavior,
performance and configuration relevant to SQL Server instances hosted in Windows Azure VMs.

Windows Azure Virtual Machines provide three types of disks:

 Operating system disk (persistent): Every virtual machine has one attached operating system
disk (C: drive) that has a limit of 127 GB. You can upload a virtual hard disk (VHD) that can be used
as an operating system disk, or you can create a virtual machine from a platform image, in which
case an operating system disk is created for you. An operating system disk contains a bootable
operating system. It is mostly dedicated to operating system usage and its performance is optimized
for operating system access patterns such as boot up.

 Data disk (persistent): A data disk is a VHD that you can attach to a virtual machine to store
application data. Currently, the largest single data disk is 1 terabyte (TB) in size. You can specify
the disk size and add more data disks (up to 16, depending upon virtual machine size). This is
useful when the expected database size exceeds the 1 TB limit or the throughput is higher than
what one data disk can provide. Data disks are where your SQL database, log files and SQL
application files will reside.

 Temporary local disk (non-persistent): Each virtual machine that you create has a temporary
local disk, the D: drive, which is labeled as TEMPORARY STORAGE. This disk is used by
applications and processes that run in the virtual machine for transient storage of data. It is also
used to store page files for the operating system. Temporary disk volumes are hosted in the local
disks on the physical machine that runs your virtual machine (VM). Temporary disk volumes are not
persistent. In other words, any data on them may be lost if your virtual machine is restarted.
Temporary disk volumes are shared across all other VMs running on the same host.

Each data disk can be a maximum of 1 TB in size. Depending upon your virtual machine size, you can
attach up to 16 data disks to your virtual machine. I/Os per second (IOPS) and bandwidth are the most
important factors to consider when determining how many data disks you need. Storage capacity
planning is similar to the traditional ‘spindle’ count in an on-premises storage design. A single Azure
data disk delivers approximately 500 IOPS, and you can have up to 16 data disks in a single Azure
storage account. Disk configuration and sizing for I/O is covered in "Optimizing SQL Configuration in
Windows Azure VMs" later in this chapter.
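As an illustrative sketch, attaching a new empty data disk to an existing VM can be scripted with the Azure Service Management PowerShell cmdlets. The cloud service name "SC2012SVC" and VM name "SQL01" below are hypothetical placeholders:

```powershell
# Sketch: attach a new, empty 1023 GB data disk at LUN 0 to an existing VM.
# Service and VM names are placeholders for your own deployment.
Get-AzureVM -ServiceName "SC2012SVC" -Name "SQL01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "SQLData1" -LUN 0 |
    Update-AzureVM
```

Once attached, the disk appears inside the VM as a raw disk and must be initialized and formatted before SQL Server can use it.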

6.6.1 HIGH AVAILABILITY DISKS


As of this writing, Windows Azure Storage guarantees in its Service Level Agreements that at least 99.9%
of the time, storage blobs will be available. Using a single data disk simplifies recovery scenarios in case
of temporary unavailability. For example, if you want to leverage geo-replication at the Storage Account
level to automatically replicate your data disk to a secondary region, using a single data disk can be
helpful.

IMPORTANT: You cannot use geo-replication with multiple data disks configurations, because consistent
write order across multiple disks is not guaranteed.

6.6.2 DISK CACHE SETTINGS IN WINDOWS AZURE VIRTUAL MACHINES


In Windows Azure virtual machines, the data on persistent drives is cached locally on the host machine,
which brings the data closer to the virtual machine. Windows Azure disks (operating system and data)
use a two-tier cache:

 Frequently accessed data is stored in the memory (RAM) of the host machine.
 Less recently accessed data is stored on the local hard disks of the host machine. There
is cache space reserved for each virtual machine operating system and data disks based
on the virtual machine size.
Cache settings help reduce the number of transactions against Windows Azure Storage and can
reduce disk I/O latency in some scenarios. Windows Azure disks support three different cache settings:
Read Only, Read Write, and None (disabled). There is no configuration available to control the size of
the cache.

 Read Only: Reads and writes are cached for future reads. All writes are persisted
directly to Windows Azure Storage to prevent data loss or data corruption while still
enabling read cache.

 Read Write: Reads and writes are cached for future reads. Non-write-through writes are
persisted to the local cache first, then lazily flushed to the Windows Azure Blob service.
For SQL Server, writes are always persisted to Windows Azure Storage because it uses
write-through. This cache setting offers the lowest disk latency for light workloads. It is
recommended for the operating system and data disks with sporadic disk access. Note
that total workload should not frequently exceed the performance of the physical host
local disks when this cache setting is used.

 None (disabled): Requests bypass the cache completely. All disk transfers are
completed against Windows Azure Storage. This cache setting prevents the physical
host local disks from becoming a bottleneck. It offers the highest I/O rate for I/O intensive
workloads. Note that I/O operations to Windows Azure Storage do incur transaction
costs but I/O operations to the local cache do not.

The following table demonstrates the supported disk cache modes:


Disk type Read Only Read Write None (disabled)
Operating system disk Supported Default mode Not supported
Data disk Supported Supported Default mode

Important notes:

 Cache can be enabled for up to 4 data disks per virtual machine (VM).
 Changes to caching require a VM restart to take effect.
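The cache mode is set per disk. A hedged sketch using the Azure Service Management cmdlets, where the service and VM names are placeholders:

```powershell
# Sketch: disable host caching on the data disk at LUN 0, the setting
# generally preferred for I/O-intensive SQL Server data disks.
Get-AzureVM -ServiceName "SC2012SVC" -Name "SQL01" |
    Set-AzureDataDisk -LUN 0 -HostCaching None |
    Update-AzureVM
# Remember: the new cache setting only takes effect after a VM restart.
```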

6.6.3 ENABLING READ WRITE CACHE FOR THE OS DISK


Enabling “Read Write” cache (default setting) for the operating system disk helps improve overall
operating system performance and boot times. Also, it can help reduce the read latency for workloads
with smaller databases (10 GB or smaller) that require a low number of concurrent read I/Os. This is
because the working set can fit into the disk cache or the memory, reducing trips to the backend Blob
storage.

Since “Read Write” cache is enabled by default, no action is typically needed here. Just don’t disable this
setting without good reason!

6.7 Optimizing SQL Configuration in Windows Azure VMs


Since the storage architecture in Windows Azure is quite different from what you may be familiar with in
the data center, guidance on scaling storage IOPS for SQL databases and logs is provided in this section.

6.8 Use Data Disks for SQL DB Files


For databases larger than 10 GB, Microsoft recommends that you use data disks. For data disks, the
recommended cache setting depends on the I/O pattern and intensity of the workload.

If the workload demands a high rate of random I/Os (such as a SQL Server OLTP workload) and
throughput is important to you, the general guideline is to keep the cache set to the default value of
“None” (disabled). Because Windows Azure storage is capable of more IOPS than a direct attached
storage disk, this setting causes the physical host local disks to be bypassed, therefore providing the
highest I/O rate. This is the setting that will apply to your larger System Center databases, including:

 Configuration Manager Site DB
 Operations Manager operational DB and data warehouse
 Service Manager database and data warehouse

For smaller databases with typically lower I/O, including Orchestrator and App Controller, you may
consider placing them on an existing SQL instance, but placing the database files on a data disk is still
recommended.

While the VMM database typically falls into the “small with low I/O” category, VMM is not supported in
Windows Azure VMs and is therefore not applicable to this discussion.

The same rules apply for TempDB when running SQL in Azure VMs. TempDB should be placed on the
operating system disk or a separate data disk (preferably a data disk if possible). Do not be tempted to
place this database on a temporary disk!

IMPORTANT: Using files and filegroups across multiple data disks does not help scaling transaction log
throughput and bandwidth (sequential writes) in the same way it does for random IO activity. If you need
to scale your transaction log performance, Microsoft recommends using a striped volume instead, such as
Storage Spaces in Windows Server 2012 R2. For step-by-step guidance in using this technology in
Windows Azure, see “Use Windows Azure to learn Windows Server 2012 Storage Spaces”.
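As a rough sketch of that approach, assuming the VM already has several empty data disks attached, Storage Spaces can pool them into a single striped (simple) volume for the transaction log. The pool and virtual disk names here are illustrative:

```powershell
# Pool all poolable data disks, then carve out one striped virtual disk.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SqlLogPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "SqlLogPool" -FriendlyName "SqlLog" `
    -ResiliencySettingName Simple -UseMaximumSize `
    -NumberOfColumns $disks.Count -Interleave 65536
# Initialize, partition and format the new disk with a 64 KB allocation unit.
Get-VirtualDisk -FriendlyName "SqlLog" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536
```

A simple (non-resilient) layout is appropriate here because the underlying Azure blobs are already replicated by the platform.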

NOTE: With regards to Windows Azure disk performance, you may read that Microsoft have observed
a “warm-up effect” that can result in a reduced rate of throughput and bandwidth for a short period of
time. In situations where a data disk is not accessed for a period of time (approximately 20 minutes),
adaptive partitioning and load balancing mechanisms kick in. This warm-up effect is unlikely to be
noticed on systems with workloads in continuous use, such as System Center SQL instances.

6.9 Disk IOPS in Azure Virtual Machines


A single Azure data disk delivers approximately 500 IOPS. In order to achieve higher levels of I/O, you
will need to spread the database across multiple files. For example, for a database requiring 2,000
IOPS, you would need to spread the database across four (4) data disks, for 4,000 IOPS you would
need eight (8) data disks, etc. You can have up to 16 disks in a single Azure storage account, for a total
of 8,000 IOPS for a single Azure storage account. This is enough to meet the needs of virtually any
single System Center SQL instance you are likely to encounter.

For write intensive workloads (which is descriptive of the System Center SQL instances), adding more
data disks increases performance in a nearly linear fashion according to Microsoft testing.
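The disk-count arithmetic above can be sketched in a couple of lines of PowerShell; the target figure is just an example:

```powershell
# Approximate data disks (and database files) needed at ~500 IOPS per disk.
$iopsPerDisk  = 500
$requiredIops = 2000          # example target for a busy System Center DB
$dataDisks    = [math]::Ceiling($requiredIops / $iopsPerDisk)
$dataDisks    # 4
```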

For a sample T-SQL script for splitting (striping) a SQL database into multiple files on multiple drives, see
“Create Database (Transact SQL)” at http://msdn.microsoft.com/en-US/library/ms176061.aspx.

To stripe an existing database across multiple files, you will need to use the ALTER DATABASE
statement with the ADD FILE option. T-SQL samples are available at
http://msdn.microsoft.com/en-us/library/bb522469.aspx.
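As a hedged illustration only (the database name, file name and drive letter below are hypothetical, not drawn from the samples above), adding a file on a second data disk via Invoke-Sqlcmd might look like this:

```powershell
# Sketch: add a second database file on another data disk (G:) so that
# I/O is spread across disks. All names and paths are placeholders.
$tsql = @"
ALTER DATABASE OperationsManagerDW
ADD FILE (NAME = N'OpsMgrDW_2',
          FILENAME = N'G:\SQLData\OpsMgrDW_2.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB);
"@
Invoke-Sqlcmd -ServerInstance "SQL01" -Query $tsql
```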

IMPORTANT: While you could potentially scale beyond the 8,000 IOPS limit of the 16 disks in a single
storage account by attaching disks from multiple storage accounts, Microsoft recommends avoiding this
if possible. Attaching data disks from multiple Azure storage accounts increases complexity and the
likelihood of data consistency issues in some failure scenarios.

6.10 Autogrow Settings


With regards to the Autogrow setting, the same rules you observe on physical or virtual machines in your
data center also apply in Windows Azure.

6.11 High Availability & Disaster Recovery (HADR)


The following HADR technologies in SQL Server are supported in Windows Azure:

 AlwaysOn Availability Groups
 Database Mirroring
 Log Shipping
 Backup and Restore with Windows Azure Blob Storage Service

It is possible not only to use these technologies in Windows Azure, but also to combine them to
implement a SQL Server solution that has both high availability and disaster recovery capabilities,
whether in a private cloud, public cloud or hybrid cloud environment. Depending on the technology you
use, a hybrid cloud deployment may require a site-to-site VPN tunnel to the Windows Azure virtual
network.

For step-by-step guidance in implementing these SQL Server HADR technologies, see the following
tutorials:

 AlwaysOn Availability Groups - see Tutorial: AlwaysOn Availability Groups in Windows Azure
(GUI)
 Database Mirroring - see Tutorial: Database Mirroring for High Availability in Windows Azure
 Log Shipping - see Tutorial: Log Shipping for Disaster Recovery in Hybrid IT
 Backup and Restore with Windows Azure Blob Storage Service - see Backup and Restore
for SQL Server in Windows Azure Virtual Machines

For additional reading on the currently available options and example HADR architectures for SQL
Server 2012 in Windows Azure virtual machines, see High Availability and Disaster Recovery for SQL
Server in Windows Azure Virtual Machines.

7.0 BACKING UP SQL WITH DPM

7.1 Introduction to System Center Data Protection Manager 2012 R2


System Center Data Protection Manager 2012 R2 (SCDPM or DPM) is Microsoft’s answer for how you
should take backups, perform restores and manage disaster recovery in a modern datacenter. The key
selling point of the product is the ease with which advanced backup and restore operations can be
performed. Microsoft have developed System Center Data Protection Manager to be best-of-breed
recovery software for the most common Microsoft enterprise workloads.

7.2 What Can DPM 2012 R2 Protect?


As mentioned in the introduction, SCDPM or DPM is a backup & recovery application with the main
focus being on delivering advanced restore operations with high performance. SCDPM will protect and
restore the following Microsoft workloads:

1. File Services
2. Hyper-V
3. Exchange Server
4. SQL Server
5. SharePoint Server
6. Windows Servers in trusted domains, untrusted domains & Microsoft workgroups
7. Windows Clients
8. System State

Being a cluster-aware backup product, System Center Data Protection Manager 2012 R2 will also
protect any Windows Server Failover Cluster based solution.

7.3 Change Tracking


System Center Data Protection Manager 2012 R2 is a true Continuous Data Protection (CDP) product,
meaning that it has a sophisticated change-level tracking process. The change tracking process is most
easily described as identifying block-level changes on the actual disk or volume through close co-
operation between the DPM agent, the DPM server and Volume Shadow Copy Services (VSS), with
VSS coordinating the interaction between the DPM agent, the VSS writer and the minifilter drivers on
the protected host.
7.4 Getting Started with DPM
After you have installed System Center Data Protection Manager 2012 R2 you will need to perform
some additional steps:

1. Add disk to the DPM disk pool
2. Deploy the DPM agents (or, in a pre-deployed scenario, attach them)
3. Create a Protection Group

For a full installation guide and further useful information regarding DPM please visit the blog “Robert
and DPM” at http://robertanddpm.blogspot.com/.

7.5 Adding Disk to the DPM Storage Pool


The DPM server requires locally attached disk to store all its recovery points for the protected data it
stores. The following types are supported:

1. SAN
2. NAS
3. DAS
4. VHDX

For further information and guidelines regarding how to design and deploy DPM please visit ‘Robert
and DPM’ at http://robertanddpm.blogspot.com/ or read the ‘DPM 2012 R2 Cookbook’ which will be
released during 2014 by Packt Publishing.
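Adding disk can also be scripted from the DPM Management Shell. A sketch using the DPM cmdlets, where the server name "DPM01" and the disk index are placeholders:

```powershell
# Sketch: list the disks visible to the DPM server, then add one of the
# unallocated disks to the DPM storage pool.
$disks = Get-DPMDisk -DPMServerName "DPM01"
$disks                           # review the output to find unallocated disks
Add-DPMDisk -DPMDisk $disks[1]   # the index of the disk to add is illustrative
```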

7.6 Deploying DPM Agents


For SCDPM to be able to commence protection of the workloads within your datacenter, you must
install and attach an SCDPM agent. This process is also known as deploying a DPM agent.

This can be done from the DPM console, but also from other System Center components such as
System Center Configuration Manager or System Center Orchestrator. You can also use simple batch
files and GPOs within Active Directory to deploy the agent.

7.7 DPMDB
The most critical part of the SCDPM server platform is its local database, called DPMDB. All
configurations made for Protection Groups, SCDPM agent deployments and backup schedules are
stored within the DPMDB. In a high-end implementation of SCDPM it is strongly recommended to place
the DPMDB on a dedicated volume with the correct sector sizes to ensure high performance and
scalability. The recommended NTFS allocation unit size is 64 KB, since this matches SQL Server write
patterns.

The location of the DPMDB database on disk can be specified in two ways:
1. As a configuration option during the installation of the DPM software
2. As a re-configuration, performed at any later time, using SQL Management Studio

The recommended option is to place the DPMDB on the correct volume during the installation of the
software, to minimize rework.
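For example, the dedicated volume intended for DPMDB can be formatted with the recommended 64 KB allocation unit size in PowerShell; the drive letter and label here are placeholders:

```powershell
# Sketch: format the DPMDB volume with a 64 KB NTFS allocation unit size,
# matching SQL Server write patterns as recommended above.
Format-Volume -DriveLetter E -FileSystem NTFS `
    -AllocationUnitSize 65536 -NewFileSystemLabel "DPMDB"
```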

7.8 Special Considerations for Using DPM to Protect SQL Workloads


In this section we will discuss arguably the most important element of data protection software: the
considerations surrounding the restore of data. All backups are made with one purpose: to be restored
within an organization’s accepted timeframe. We will also discuss why you shouldn’t run local backups
and how to manage SQL management tasks that execute at the same time as the backup.

7.8.1 RPO, RTO AND RLO


To be able to identify an organization’s accepted timeframe for data loss, you must have an
understanding of the Service Level Agreement (SLA). The SLA will explain how the company perceives
and classifies its most important services. Where there is no SLA in place, you must start by classifying
the data that represents the services within your datacenter.

7.8.2 RECOVERY POINT OBJECTIVES


The Recovery Point Objective (RPO) is an organization’s way of defining how much data it can afford to
lose during an incident. It is the maximum tolerable period in which data might be lost from an IT
service due to a major incident.

7.8.3 RECOVERY TIME OBJECTIVES


The Recovery Time Objective (RTO) is an organization’s way of defining how quickly it must be able to
bring services back online after an incident has occurred. Where a service has an RTO of zero (0), the
company must deliver the application via some sort of clustering service. Clustering itself, however,
does not guarantee the safety of an organization’s data. One must still consider corrupted-data
scenarios, for example users deleting the wrong data or the occurrence of an unwanted malware
intrusion. A combination of clustering and backup is the right way of managing an organization’s most
significant workloads: clustering services provide High Availability, and a backup solution such as
SCDPM provides the last line of data protection.

7.8.4 RECOVERY LEVEL OBJECTIVES


The Recovery Level Objective (RLO) is a means of defining the level of data an organization must be
able to restore in the event of data loss. The SharePoint workload is a good example: the RLO defines
the granularity of the desired restore. Company A may only need to be able to restore the entire
SharePoint farm, while company B must be able to perform item-level restores of sites, lists or
documents.

7.8.5 LOCAL SQL BACKUPS


Many database administrators (DBAs) use local backup jobs that they configure in SQL Server
Management Studio. There are advantages and disadvantages inherent to this method.

If you use SCDPM you will gain:

 A centralized console that will keep track of all the backups of your SQL workloads present in
your datacenter.
 The ability to monitor backup alerts produced by SCDPM in SCOM.
 A means of automating backup alerts and auto-healing broken backups via SCO.

7.8.6 SQL MAINTENANCE TASKS


An important part of being a SQL database admin or DBA is keeping the SQL databases in an optimal
functioning state. As maintenance plans run in your SQL Server environment, SCDPM will understand
the execution state of the maintenance plans and still be able to make a valid snapshot of your SQL
Server data.

7.8.7 SQL MANAGEMENT TASKS


At some point in time, database administrators (DBAs) will need to perform SQL management tasks. In
this section we will present the most common activities and outline how they impact the protection of
SQL via DPM.

7.8.8 MOVING SQL SERVER BETWEEN DOMAINS


If the DBA moves a SQL server to another domain, System Center Data Protection Manager 2012 R2
will not be able to continue to protect SQL databases hosted on that server without intervention from
the DPM administrator. There are three ways of protecting a SQL Server database in an untrusted
domain:

 Establish a two-way transitive trust between the domains: The author would not casually
recommend this approach due to security considerations.
 Use Workgroup protection: This will not protect any clustered or mirrored databases.
 Use Certificate Based Authentication (CBA): This will also protect clustered SQL databases.

7.8.9 RENAMING A COMPUTER RUNNING SQL SERVER


If a DBA renames a DPM-protected SQL Server, System Center Data Protection Manager 2012 R2 will
no longer continue to protect the SQL Server or its databases. If the DBA needs to perform this activity,
a significant amount of planning is required. The DPM administrator must stop protecting all data
sources, including SQL, System State and File System, and then uninstall the DPM agent. After the
DBA has renamed the SQL Server, the DPM administrator can reinstall the DPM agent and recreate
the Protection Group containing the SQL databases on the DPM server.

7.8.10 CHANGING THE RECOVERY MODEL OF A DATABASE


If the DBA wishes to change the recovery model of a SQL Server database, the DPM administrator
needs to stop the protection, but may elect to retain previous backups on the DPM server. The DBA
can then change the recovery model. Following the configuration change, the DPM administrator can
reconfigure SCDPM protection for the SQL database.

7.8.11 REPLACING A DISK ON A SQL SERVER


In the scenario that a disk containing protected SQL Server databases fails, the DBA must assign the
new disk the same drive letter that was assigned to the old disk. System Center Data Protection
Manager 2012 R2 can then restore protected data to the new disk.
7.8.12 ADDING DATABASES TO A SQL SERVER
If the DBA adds a database to a protected instance of SQL Server, the database is not included in the
DPM Protection Group by default. Including new databases automatically can be accomplished by
using the auto-protection configuration. For the DPM administrator to use the auto-protection method,
all the databases within that instance must be protected.

7.8.13 CHANGING THE PATH OF A SQL SERVER DATABASE


If the DBA changes the path of a protected SQL Server database, an alert is raised in the DPM
console. To continue protecting the database files in their new location, the DPM administrator must
stop protection of and re-establish protection of the SQL Server databases.

7.8.14 RENAMING A SQL SERVER DATABASE


If the DBA renames a SQL database, the DPM protection will fail. For System Center Data Protection
Manager 2012 R2 to be able to protect the renamed database, the DPM administrator must include the
“new” database in an existing Protection Group or create a new one.

7.8.15 PARALLEL BACKUPS


System Center Data Protection Manager 2012 R2 can run parallel backups for protected data sources
to optimize the network load. Due to limitations in the SQL Server software, however, System Center
Data Protection Manager 2012 R2 cannot leverage this feature for SQL Server workloads.

7.8.16 AUTO-PROTECTION OF SQL INSTANCES


System Center Data Protection Manager 2012 R2 has the ability to auto-protect all SQL databases
present in an instance. This feature will include any newly added databases, but will not remove
existing backups for SQL databases that the DBA removes from the SQL Server instance.

As a DPM administrator, you cannot be selective with this behavior if you would like to exclude some
SQL databases from auto-protection. If you enable this feature, System Center Data Protection
Manager 2012 R2 applies it to all databases within the SQL Server instance.

To enable Auto-Protection for a SQL Server instance, check the checkbox next to the SQL Server
instance name. The text (Auto) will appear in front of the SQL Server instance name, indicating that the
Auto-Protection feature is enabled.

7.8.17 CO-LOCATION FOR LARGE SCALE SQL SERVER IMPLEMENTATIONS


In a large-scale environment, the number of SQL Server databases that need to be protected can be
very high. Since System Center Data Protection Manager 2012 R2 includes a feature that allows you to
protect a larger number of SQL Server databases per server, the number of DPM servers that need to
be deployed will decrease during the planning phase of the DPM design for SQL Server database
protection.

By collocating the replica volumes in the DPM disk pool, the number of SQL Server databases that can
be protected increases from approximately 1,000 to 2,000 per DPM server. Keep in mind that this is a
theoretical number; the size of the databases will also play an important role. The best way to find out
how many databases a DPM server can protect is to calculate the sizes of the databases. More
information regarding this can be found on the blog
http://robertanddpm.blogspot.se
7.9 Windows Azure
System Center Data Protection Manager 2012 R2 can connect to the Windows Azure Backup Service
to provide off-site archiving for up to 120 days. The supported workloads to be archived into Azure are:

 SQL Server databases
 Hyper-V servers
 File Services file data

For more information regarding how to enable Windows Azure online protection, read the Configuring
Protection Groups for SQL Server section.

7.10 Pre-Configuration
In this part of the chapter you will get a walkthrough of how to set up a Protection Group, along with the
special pre-configuration tasks that may need to be performed depending on your version of SQL
Server.

7.11 SQL Recovery Models


As you are probably aware, there are different recovery models for a SQL Server database. For SQL
Server 2012 the available recovery model options are:

 SIMPLE
 FULL
 BULK-LOGGED

The major difference between the recovery models is that the FULL and BULK-LOGGED recovery
models work with transaction log backups, whereas a database configured with the SIMPLE recovery
model does not support them. Since System Center Data Protection Manager can interrogate SQL
Server and see what recovery model a database is configured to use, System Center Data Protection
Manager 2012 R2 will provide the appropriate restore options permissible for the given recovery model.
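You can check which recovery model each database is configured to use, the same information DPM interrogates, with a simple query against sys.databases. The instance name in this sketch is a placeholder:

```powershell
# Sketch: list each database and its recovery model (SIMPLE, FULL
# or BULK_LOGGED) on a hypothetical instance "SQL01".
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
SELECT name, recovery_model_desc
FROM   sys.databases
ORDER BY name;
"@
```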

7.12 Supported SQL Server Versions


System Center Data Protection Manager 2012 R2 can protect the following versions of SQL Server:

 SQL Server 2012
 SQL Server 2008 R2
 SQL Server 2008
7.13 SQL Clusters
System Center Data Protection Manager 2012 R2 protects a SQL Server failover cluster in a fully
supported manner. On a planned failover, System Center Data Protection Manager 2012 R2 will
continue protection of the SQL Server workload, understand the new node ownership, and continue to
provide an optimal backup of the SQL Server workload.

In the scenario where a SQL Server failover cluster performs an unplanned failover in response to a
failure, the result is an inconsistent replica, and the DPM server will generate an alert to indicate that a
consistency check needs to be performed. Remember, replica inconsistency can be easily resolved
using the built-in auto-heal function. If you don’t want to wait three hours for the auto-heal function to
trigger, you can alternatively create a Runbook in System Center Orchestrator 2012 R2 to clear any
inconsistent replica of a data source.

For System Center Data Protection Manager 2012 R2 to be able to protect a SQL Server failover
cluster built on the supported protectable versions of SQL Server, a DPM agent must be deployed to all
servers participating in the failover cluster.

7.13.1 ADDING SQL CLUSTER MEMBERS


In the scenario where the SQL Server failover cluster is expanded with additional nodes, the DPM
administrator will be presented with an alert stating that the new cluster nodes must have a DPM agent
installed on them for System Center Data Protection Manager 2012 R2 to be able to protect the cluster.

The deployment of DPM agents can be automated for a modern dynamic datacenter by combining the
power of System Center Data Protection Manager 2012 R2, System Center Operations Manager 2012
R2 and System Center Orchestrator 2012 R2.

7.13.2 REMOVING SQL CLUSTER MEMBERS


In the scenario where a SQL Server cluster member has reached end-of-life and needs to be retired,
System Center Data Protection Manager 2012 R2 will understand this and will alert the DPM
administrator that the node is no longer a member of the SQL Server cluster.

7.14 Mirrored SQL Servers


System Center Data Protection Manager 2012 R2 can protect mirrored SQL Server databases, but
there are some considerations that you must take into account before configuring the protection:

 Be sure to install the DPM agent on both partners of the mirror
 Do not mirror the database on the same computer
As mentioned, System Center Data Protection Manager 2012 R2 will also be able to protect a mirrored
SQL Server cluster, in the following configurations:

 Principal is clustered, mirror is not clustered
 Principal is not clustered, mirror is clustered
 Both principal and mirror are clustered
7.15 Unsupported Scenarios
The following scenarios are not supported for SQL Server mirrored databases:

 If the database is mirrored on the same SQL Server
 If the SQL Server mirroring session uses an explicitly configured IP address
 If the AlwaysOn feature in SQL Server 2012 is turned on

7.16 Specific SQL Server Configurations


For System Center Data Protection Manager 2012 R2 to be able to protect SQL Server 2012, there are
some special configurations that need to be made on the SQL Server side. When you initially start to
protect SQL Server 2012, the DPM server will raise an alert saying that the DPM server was “Unable to
configure protection”, as shown in the picture below.

As you can see, System Center Data Protection Manager 2012 R2 also provides information on what
you should do to get this up and running: you must grant the DPMRA service sysadmin rights on the
SQL Server instance that the database resides in.

7.16.1 CONFIGURATION WALKTHROUGH


For System Center Data Protection Manager 2012 R2 to be able to protect a SQL Server 2012
database, you need to grant the DPMRA service sysadmin privileges on the SQL Server instance. This
is a walkthrough of the configuration.

Open SQL Server 2012 Management Studio and log on to the instance that contains the databases
that you would like to protect using System Center Data Protection Manager 2012 R2.
Create a New Login… by right-clicking the Logins folder.

Add the NT SERVICE\DPMRA account under the Login name. You can also search for it by clicking
the Search… button on the right side of the Login name field.
Click the Server Roles page on the left side of the window and add the sysadmin role for DPMRA.
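The same configuration can be applied with T-SQL instead of the Management Studio UI. This is a minimal sketch to run against the protected instance; it assumes the login does not already exist:

```sql
-- Create a login for the DPM protection agent service account
-- and add it to the sysadmin fixed server role (SQL Server 2012 syntax)
USE [master];
CREATE LOGIN [NT SERVICE\DPMRA] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT SERVICE\DPMRA];
```

Scripting the grant is useful when many instances need to be onboarded to DPM at once.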

When you now execute the Configure New Protection Group job again in System Center Data
Protection Manager 2012 R2, you will be able to protect the SQL Server 2012 databases.
7.17 Configuring Protection Groups for SQL Server
For System Center Data Protection Manager 2012 R2 to start protecting a SQL Server, you need to
create a Protection Group. A Protection Group is a set of instructions that provides the DPM agent
with the backup schedule and workload information for a successful backup.

To configure a Protection Group, click the New button in the top left part of the DPM console.

You will be presented with a Welcome screen, click Next >.

You now need to choose if you want to protect Servers or Clients. Select Servers and click Next >.
Now it’s time to choose Group Members of the Protection Group.

In this example I will expand the server SCSQL and all present workloads will be displayed.

Expand All SQL Servers and all SQL Server Instances will be displayed.

Now you have two choices, you can:

 Protect specific databases


 Enable auto-protection for an instance
Mark the instance to enable auto-protection, or choose a specific database for protection. Choose your
database and click Next >.
You will now choose the way you want to protect your workload. For SQL Servers you can choose:

 Short-term protection using disk (tape is also possible, though not recommended)
 Online protection using Azure
 Long-term protection using tape
In this example we will mark Short-term protection for on-premise restore operations and Azure for
long-term protection (archiving). Click Next > to continue.
Now you must choose your Short-Term recovery goals. First, consider how many days you wish to
keep the protected data source recovery points in your DPM disk pool. This is called the Retention
Range; the default value is five (5) days.

Next you must consider the Synchronization frequency by choosing a synchronization interval in the
dropdown list. The last part is to select when Recovery points should be created. By default, recovery
points are created at 8 PM every day; to change this, click the Modify button.

Now you can choose the recovery point times that meet your organization's recovery point objectives
(RPOs). Mark the times and click the Add > button, and choose the appropriate weekdays for creating
recovery points. When you are done, click OK.

Now you will get an updated summary of the short-term recovery goals; verify it and click Next > to
continue.
On the Review Disk Allocation step, DPM presents a summary of the actual disk allocation that
needs to be made for the workload to be protected according to your recovery point schedule. Please
note the checkbox that says “Automatically grow the volumes”. This is one of System Center Data
Protection Manager 2012 R2's auto-heal functions.

You can alter the disk allocation configuration by clicking the Modify button.
When you click the Modify button, you will be presented with the actual allocation for the collocated
volumes. You can alter the proposed value; however, if DPM detects that this will not result in a
successful modification you will be prompted with a warning.

When you are done altering the disk allocation click the OK button and you will return to the Review
Disk Allocation step. Click Next > to continue to the Choose Replica Creation Method step.

In the Choose Replica Creation Method step you have three different options for creating your
replica:

 Now, the replica will be created during the creation of the Protection Group
 Later, the replica will be scheduled for creation
 Manually, the replica will be created manually by the DPM administrator
In this guide we will create the replica Now. Click the Next > button to continue.
The next step is the Consistency check options, where you specify another of DPM’s auto-heal
functions, “Run a consistency check if a replica becomes inconsistent”. You can also choose whether
the Protection Group you are designing should schedule a daily Consistency Check. This means that
DPM will synchronize any changes from the production workload to the DPM server, making the
replica consistent once again.

When you are finished you click Next > to get to the next step of the configuration.
In the next step called Specify online protection data you choose which workloads should be
archived to Windows Azure Backup. The supported workloads are:

 SQL
 Hyper-V
 File Services
Check the checkbox for the workload you would like to put in Azure and click on Next >.
In the Specify Online Protection Goals window, you have two options: you can archive recovery
points every day or on a weekly basis. For weekly archiving, see the next screenshot.
Specify when the DPM server should synchronize the recovery point and click on Next > to continue.

Verify the configuration in the Summary step and click on the Create Group button.

In the Status step you will see when the configuration is successfully finished and the Close button will
be clickable. Click Close to end the setup wizard for the Protection Group.
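The wizard steps above can also be scripted from the DPM Management Shell. The sketch below uses cmdlet names from the DPM 2012 R2 PowerShell module, but the server, instance, database and parameter values are assumptions for illustration; verify each cmdlet with Get-Help in your environment before relying on it:

```powershell
# Sketch: create a short-term (disk) protection group for one SQL database.
# All names ("DPM01", "SCSQL", "AppDB") are hypothetical examples.
$pg = New-DPMProtectionGroup -DPMServerName "DPM01" -Name "SQL Protection"

$ps = Get-DPMProductionServer -DPMServerName "DPM01" |
      Where-Object { $_.ServerName -eq "SCSQL" }

# Query the protected server for its data sources and pick the database
$ds = Get-DPMDatasource -ProductionServer $ps -Inquire |
      Where-Object { $_.Name -eq "AppDB" }

Add-DPMChildDatasource    -ProtectionGroup $pg -ChildDatasource $ds
Set-DPMProtectionType     -ProtectionGroup $pg -ShortTerm Disk
Set-DPMPolicyObjective    -ProtectionGroup $pg -RetentionRangeInDays 5 `
                          -SynchronizationFrequencyMinutes 15
Set-DPMReplicaCreationMethod -ProtectionGroup $pg -Now
Set-DPMProtectionGroup    -ProtectionGroup $pg   # commit the group
```

Scripting the group creation is what makes the Orchestrator-driven automation described earlier in this chapter possible.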
7.18 Restoring SQL Server Data from DPM
There are three media types that DPM can restore SQL data from:

 From disk (Short-term recovery goals)


 From Tape or Virtual Tape Library (VTL) (Long-term recovery goals)
 From Azure (Cloud)
In this chapter you will be presented with the different restore operations these three media types
provide. All operations have the same restore granularity, meaning that you can restore a single
database per restore job.

To restore data from System Center Data Protection Manager 2012 R2, click Restore in the
navigation bar (bottom left in the console).

With transaction log systems like SQL Server (recovery model FULL or BULK-LOGGED only) and
Exchange Server, you have the possibility to restore the SQL database or Exchange mailbox database
to its last good transaction using the Latest recovery time option. You will only be able to restore the
data to its original location when using the Latest recovery option.
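Because the Latest option depends on the database's recovery model, it is worth checking the model before planning a restore. A simple catalog query does this (no specific names assumed):

```sql
-- Databases in FULL or BULK_LOGGED recovery support log-based "Latest" restores;
-- SIMPLE recovery databases can only be restored to an express-full recovery point.
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```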
If you would like to restore from a valid recovery point, start by choosing the correct date; all dates
that contain valid recovery points are shown in bold. Then choose a valid Recovery time in the
dropdown list, selecting the time that you would like to perform the restore from.

After you have specified the date and time, right-click the database that you would like to restore
and choose Recover.

If you would like to see all of the available recovery points for the data source, choose Show all
recovery points.


7.19 Short-Term Recovery Goals Restore
When restoring from short-term recovery goals (disk) you have the following restore options:

 Recover to original instance of SQL Server (Overwrite database)


 Recover to any instance of SQL Server
 Copy to a network folder
 Copy to tape

7.19.1 RECOVERY TO THE ORIGINAL INSTANCE OF SQL SERVER
This recovery type will restore the SQL database to its original location and overwrite the database
files. This option is useful in the case of, for example, corruption or accidental deletion.

In this recovery type you have two different database states that you can choose:

 Leave database operational


 Leave database non-operational but be able to restore additional transaction logs

The second database state (Leave database non-operational but able to restore additional
transaction logs) makes it possible to restore additional log file data. These additional transaction
logs must be in a folder on the original SQL Server.
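After such a non-operational restore, the DBA applies the remaining log backups with standard T-SQL. The database name and file paths below are hypothetical placeholders:

```sql
-- Apply additional transaction log backups to a database DPM left in the
-- RESTORING state, recovering the database on the final log restore
RESTORE LOG [AppDB] FROM DISK = N'D:\Restore\AppDB_log1.trn' WITH NORECOVERY;
RESTORE LOG [AppDB] FROM DISK = N'D:\Restore\AppDB_log2.trn' WITH RECOVERY;
```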

7.19.2 RECOVERY TO ANY INSTANCE OF SQL SERVER


This recovery type makes it possible to restore the database to any instance of SQL Server that has a
DPM agent installed and attached to the DPM server, where another protected database exists in that
instance. This option gives you the possibility to restore SQL databases to any SQL Server instance,
and also makes it possible to restore SQL Server 2008 databases to SQL Server 2012 instances.
Browse the network and select the SQL Server that you would like to restore the database to. You can
change the database alias so that you are able to restore the database to its original SQL Server but
with a different database name. One important consideration regarding the latter option is that a new
location for the database files must be specified, since this recovery type does not overwrite any
existing database files.

7.19.3 COPY TO NETWORK FOLDER


When a DPM administrator needs to restore a database to a network folder so the DBA can perform
additional testing before manually restoring the database in production, there is one important thing
to keep in mind: you can only restore an Express-Full Recovery Point, not any subsequent
Synchronizations.

If you have chosen to restore a Synchronization, DPM will prompt you with the following screen:

Now DPM presents the valid recovery points that you can choose to restore to a network folder. When
you have a valid recovery point you will be prompted to select where to restore the database. You can
restore the files to any server that DPM protects and choose a specific folder for the restore.

7.19.4 COPY TO TAPE


This option is generally for archiving purposes. To be able to restore your database to a tape you need
to have a standalone tape drive or library attached, configured and operational.

You need to specify the primary library, and a copy library if you have one configured. You can choose
to compress or encrypt the data written to the tape. One important thing to keep in mind is that if you
choose to encrypt the data, you will always need the specific certificate that encrypted it. If the
certificate is lost or expires, you will be unable to restore this data from the tape.

7.20 Long-Term Recovery Goals Restore


When restoring from long-term recovery goals or tape you must choose a recovery point that is listed
as tape in the dropdown menu.

Right-click the database and choose Recover and the Recovery Wizard starts. Keep in mind that only
Express-Full recovery points are written to tape.

You have the same options as when you restored from the short-term recovery goals. One thing
differentiates the two, and that is the Specify database state option. If you restore the database with
the “Leave database non-operational but able to restore additional transaction logs” option set, you
won’t be able to restore additional transaction log files, just the database.

7.21 Cloud Recovery Restore
When you choose to restore databases from Azure, start by selecting a recovery point that was stored
in Azure by choosing the right date and time in the Recovery workspace.

Right-click the database and choose Recover and the Recovery Wizard starts. You will have three
options:

 Recover to original instance of SQL Server (Overwrite database)


 Recover to any instance of SQL Server
 Copy to a network folder
These recovery types do not differ from the steps previously explained in this chapter.

7.22 Additional DPM Information


If you find System Center Data Protection Manager 2012 R2 interesting and would like to learn more
about how it can be used to enable sophisticated protection for a modern datacenter, start by visiting
the Microsoft TechNet DPM webpage: http://technet.microsoft.com/en-us/library/hh758173.aspx

Also available are System Center Data Protection Manager blogs. The ideal starting place is the official
product group blog from Microsoft at http://blogs.technet.com/b/dpm.

There are also several MVP blogs dedicated to DPM such as http://robertanddpm.blogspot.se.

8.0 MONITORING SQL WITH SCOM - MATTHEW LONG
System Center Operations Manager is Microsoft’s enterprise monitoring tool, and through its
management pack system it has deep knowledge of, and monitoring insight into, the health of a wide
variety of applications (both Microsoft and otherwise).

One such freely available management pack is the System Center Management Pack for SQL Server.
This management pack provides SCOM with the capabilities to automatically find SQL Servers in your
environment, proactively monitor SQL components for health state changes, and provide dashboards,
architecture diagrams and common task shortcuts for SQL administrators and operations staff.

As of the current release (6.4.1.0) the management pack provides support for:

 SQL Server 2005, 2008, 2008 R2, 2012, 2012 R2
 Database Engine Monitoring
o Individual Database Monitoring
o SQL Agent Monitoring
o SQL Job Monitoring
 Reporting Services Monitoring
 Analysis Services Monitoring
 Integration Services Monitoring
 AlwaysOn Monitoring
o Database Replicas
o Availability Groups
o Availability Replicas
You can download the SQL Server management pack from the Microsoft Download Center. Whilst the
SQL management pack is also available from the SCOM MP Catalog (available within the SCOM
console), this does not include the SQL Presentation management pack, which provides SCOM 2012
with advanced SQL dashboards.

As such, we recommend you download the SQL MP from the Microsoft Download Center if you are
using SCOM 2012 and wish to get the full experience. In addition, the SQL MP guide is extremely
detailed and covers the prerequisites and monitoring capabilities in full. This guide is only available
from the Microsoft Download Center and must be read and understood by your SCOM administrators
prior to importing the SQL management pack.

You can download the current SQL management pack release from http://www.microsoft.com/en-
gb/download/details.aspx?id=10631.

8.1 Management Pack Audience


Whilst the SQL management pack provides detailed monitoring, alert knowledge and resolutions
directly from the SQL product group, it operates as part of the Operations Manager system. As such,
it does not provide second-by-second performance or transaction monitoring. SQL administrators who
already use dedicated monitoring tools may find that SCOM is not a suitable substitute for the features
and tools they have come to use within their roles.


The SQL MP is not designed to replace these monitoring tools, but instead to provide a “single pane”
of health state information at a high to medium depth. Rather than locking this health state and
reporting capability within a particular silo, SCOM allows operators and application owners of other
systems to gain insight and understanding into the health of the SQL components that their systems
rely upon.

This in turn releases resources that the SQL administration team may otherwise have to spend
investigating issues or fielding enquiries into SQL health and defending the platform when issues arise
for which the root cause is not apparent.

The SQL management pack also collects a huge amount of performance information, which is stored
and aggregated in the SCOM Data Warehouse for long-term reporting. Whilst the sampling frequency
may be too low to troubleshoot certain issues (most performance counters are sampled on a 5 or 15
minute interval), it is more than acceptable for trending and reporting across the last year of
operation.

8.2 Configuration and Customization


The SQL management pack is by far one of the most complex management packs to implement within
Operations Manager. Unlike most management packs which can simply be imported into SCOM and
left to their own devices, the SQL MP has a series of important configuration steps that must be
understood and implemented depending on a series of factors in your environment.

8.2.1 INITIAL SETUP


It is absolutely crucial that you read the SQL MP guide, specifically the Security Configurations
section. By default, SCOM agents will most likely not have access to correctly discover and monitor
SQL components. This is because, starting from SQL Server 2008, the Local System account (which in
most environments is the SCOM agent action account) does not receive the necessary security rights
to log into SQL.

If your organization restores these rights to SQL roles as part of their build process then you will not
need to provision SCOM Run as Accounts. However as this is generally not considered a secure
configuration it is more likely that you will need to provide SCOM with an account that has access to
SQL resources in order to correctly discover and monitor SQL components.

The Low-privilege Environments section details what the necessary accounts and rights should be.
Note that by default Local System (the default agent action account) will likely already have access to
WMI, performance counters and the Windows registry. If you do not use one global account for
monitoring all SQL servers, you may have to provision and configure multiple accounts within
Operations Manager. Take note of the Run As Account distribution model you are using for these
accounts; if you are using the “More Secure” method you will need to configure each agent monitoring
SQL components to be able to make use of that account. This may impact your system onboarding
process into Operations Manager.

If you are making use of SQL clusters or AlwaysOn capabilities, you must ensure that you enable the
Agent Proxy capability on all SCOM agents monitoring those SQL components. Failure to do so will
prevent correct discovery of clustered or AlwaysOn components. Whilst SCOM will raise alerts for
agents missing Run As accounts or Agent Proxy settings, the alerts will not appear in SQL specific
views and can only be rectified by SCOM administrators.
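Agent Proxy can be enabled in bulk from the Operations Manager Shell rather than clicking through each agent's properties. A small sketch using the SCOM 2012 cmdlets (run on a management server; the `.Value` wrapper on the filter property is an assumption to verify against your SCOM version):

```powershell
Import-Module OperationsManager

# Enable the Agent Proxy setting on every agent that does not already have it
Get-SCOMAgent |
    Where-Object { $_.ProxyingEnabled.Value -ne $true } |
    Enable-SCOMAgentProxy
```

In practice you may want to scope the `Get-SCOMAgent` output to just the SQL servers rather than enabling proxying environment-wide.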

8.2.2 OPTIONAL MONITORING FEATURES


By default, the SQL management pack has all generic monitoring capabilities enabled. This means
that the monitors and rules you would want enabled for 99% of production SQL components are
enabled and configured out of the box, with sensible thresholds from the SQL product group.

However, there are many situational or scenario specific monitors and rules that are disabled out of
the box, as they are only of use in certain configurations or require thresholds specified by your SQL
administrators before they can provide accurate health calculations.

The following table lists all content in the SQL 2012 MP (as of 6.4.1.0) that is disabled by default.
Consider enabling these elements in your environment if they would prove useful (they can be
enabled either environment-wide or only for specific SQL instances via SCOM overrides).

Name (Type, Target): Description

General Custom User Policy Discovery (Discovery, Availability Group): Discovery of custom user policies for AlwaysOn objects.
Discover SQL Server 2012 Agent Jobs (Discovery, SQL Server 2012 Agent): Discovers SQL Server 2012 Agent jobs.
SQL Server 2012 Database Custom User Policy Discovery (Discovery, SQL Server 2012 DB Engine): Discovery of custom user policies for SQL Server 2012 databases.
Discover Replication Components (Discovery, SQL Server 2012 DB Engine): Discovers all SQL Server Replication components on a SQL Server 2012 DB Engine.
SQL Server 2012 Replication Publications and Subscriptions Discovery Provider (Discovery, SQL Server 2012 DB Engine): Discovers replication publications and subscriptions for each SQL Server 2012 database.
SQL Server Full Text Search Service (Monitor, SQL Server 2012 DB Engine): Checks the status of the SQL Full Text Search service.
SQL User Connections Performance (Monitor, SQL Server 2012 DB Engine): Analyzes the user connections to the SQL database engine over time and calculates a baseline over the initial learning period.
Blocking Sessions (Monitor, SQL Server 2012 DB Engine): Monitors blocked sessions for a SQL instance.
Database Backup Status (Monitor, SQL Server 2012 DB): Checks the status of the database backup as reported by Microsoft SQL Server.
Auto Close Configuration (Monitor, SQL Server 2012 DB): Monitors the auto close setting for the database.
Auto Create Statistics Configuration (Monitor, SQL Server 2012 DB): Monitors the auto create statistics setting for the database.
Auto Shrink Configuration (Monitor, SQL Server 2012 DB): Monitors the auto shrink setting for the database.
Auto Update Statistics Configuration (Monitor, SQL Server 2012 DB): Monitors the auto update statistics setting for the database.
Auto Update Statistics Async Configuration (Monitor, SQL Server 2012 DB): Monitors the auto update statistics async setting for the database.
DB Chaining Configuration (Monitor, SQL Server 2012 DB): Monitors the DB chaining setting for the database.
Recovery Model Configuration (Monitor, SQL Server 2012 DB): Monitors the recovery model setting for the database.
Page Verify Configuration (Monitor, SQL Server 2012 DB): Monitors the page verify setting for the database.
Trustworthy Configuration (Monitor, SQL Server 2012 DB): Monitors the trustworthy setting for the database.
DB Total Space (Monitor, SQL Server 2012 DB): Monitors the space available in the database, and on the media hosting the database, in percentage terms.
DB Space Percentage Change (Monitor, SQL Server 2012 DB): Monitors for a large change in the value of database available space over a set number of sample periods.
Long Running Jobs (Monitor, SQL Server 2012 Agent): Checks for long running SQL Agent jobs.
Disk Read Latency (Monitor, SQL Server 2012 DB): Disk read latency monitor for 2012 DBs.
Disk Write Latency (Monitor, SQL Server 2012 DB): Disk write latency monitor for 2012 DBs.
SQL Re-Compilation (Monitor, SQL Server 2012 DB Engine): SQL re-compilation monitor for the 2012 DB Engine.
Transaction Log Free Space (%) (Monitor, SQL Server 2012 DB): Transaction log free space (%) monitor for 2012 DBs.
SQL Server 2012 DB Engine is restarted (Rule, SQL Server 2012 DB Engine): Detects a SQL Server 2012 DB Engine restart.
SQL Server Service Broker or Database Mirroring Transport stopped (Rule, SQL Server 2012 DB Engine): Detects that at least one endpoint in a SQL Server broker conversation has stopped listening for connections.
SQL Server Service Broker transmitter shut down due to an exception or a lack of memory (Rule, SQL Server 2012 DB Engine): The SQL Server Service Broker transmitter stopped due to an error or a lack of memory.
SQL Server Service Broker or Database Mirroring is running in FIPS compliance mode (Rule, SQL Server 2012 DB Engine): SQL Server Service Broker or Database Mirroring is running in Federal Information Processing Standard (FIPS) compliance mode.
The SQL Server Service Broker or Database Mirroring transport is disabled or not configured (Rule, SQL Server 2012 DB Engine): The SQL Server Service Broker or Database Mirroring endpoint is disabled or not configured.
SQL Server Service Broker Manager has shutdown (Rule, SQL Server 2012 DB Engine): SQL Server Service Broker Manager has shut down.
An SNI call failed during a Service Broker/Database Mirroring transport operation (Rule, SQL Server 2012 DB Engine): SQL Server Service Broker or Database Mirroring attempted to access the transport layer through the SQL Network Interface (SNI); SNI returned an error, and transport cannot continue.
An SQL Server Service Broker procedure output results (Rule, SQL Server 2012 DB Engine): A procedure that SQL Server Service Broker internally activated output results; internal procedures should not output results.
The Service Broker or Database Mirroring Transport has started (Rule, SQL Server 2012 DB Engine): The SQL Server Service Broker or Database Mirroring transport has started.
Process Worker appears to be non-yielding on Scheduler (Rule, SQL Server 2012 DB Engine): Indicates a possible problem with a thread not yielding on a scheduler.
IO Completion Listener Worker appears to be non-yielding on Node (Rule, SQL Server 2012 DB Engine): I/O completion ports are the mechanism by which SQL Server uses a pool of threads, created when the service started, to process asynchronous I/O requests.
An SQL job failed to complete successfully (Rule, SQL Server 2012 Agent): A SQL Server Agent job failed.
IS Service has attempted to stop a running package (Rule, SQL Server 2012 Integration Services): The Integration Services service was used to send a request to the Integration Services runtime to stop a running package.
SQL Server terminating because of system shutdown (Rule, SQL Server 2012 DB Engine): SQL Server is stopping because the server is shutting down.
Table: Creating statistics for the following columns (Rule, SQL Server 2012 DB Engine): sp_createstats has generated statistics for each eligible column in the current database.

In addition to the disabled features above, it may be necessary to turn off functionality that is enabled
out of the box because it is noisy, not applicable, duplicates an existing alert, or flags behavior that
does not violate a best practice of your organization. The SQL management packs provide a series of
version specific and generic groups that you can use for this purpose.

In addition, it may be worthwhile creating a series of custom dynamic groups that automatically
populate with SQL components matching certain criteria. These groups can then be used as the target
for monitoring overrides, in order to reduce SCOM administrative overhead and shorten the alert tuning
process. Common groups may include:
 SQL Express/Developer edition Instances and databases (products may install SQL Express
DBs and self-manage these systems, so SCOM alerting is not required).
 SQL DR systems (these systems are often inactive and will generate false alerts until they are
enabled for production use).
 Clustered SQL Servers (monitoring configuration may differ significantly for a cluster. For
example, the cluster will manage which SQL services should be started at any given time).

8.3 Extending the SQL Management Pack with MP Authoring


Whilst the SQL MP does an excellent job of monitoring SQL out of the box, it is written without
specialized knowledge of your environment (and consuming applications), so there will sometimes be
scenarios that you wish to monitor for that are not provided by the management pack. In these cases,
because the management pack platform used by SCOM is open, many people author their own
monitoring workflows in a custom management pack and make use of the existing discoveries and
class objects from the SQL MP.

8.3.1 SQL MANAGEMENT PACK STRUCTURE


The SQL management pack uses a granular class structure that enables you to target monitoring
workloads easily by SQL role and specific version. The Microsoft.SQLServer.Library management
pack defines version agnostic classes that you can use if you wish to monitor any version of the SQL
component you are targeting.

Each Microsoft.SQLServer.xxxx.Discovery management pack then defines a version specific object
that you can target if your monitoring workflow is only relevant for that version. If you need to target
multiple specific versions of SQL (such as 2008 or 2008 R2 only) then you will need to either target
your monitoring at the generic classes in the library MP and use overrides to enable them, or
implement your monitor multiple times (once for each version).

The following table illustrates what classes exist in the SQL management packs, which may prove
useful when deciding how to target your custom monitors and rules.

Name (Hosted by): Comments

SQL Role (Windows Computer): Target this class if you wish a monitor to run on every SQL server (it may run multiple times if multiple roles are installed).
SQL DB Engine (Windows Computer): A SQL database instance (will exist multiple times if multiple instances are installed).
SQL Reporting Services (Windows Computer): A SQL Reporting Services instance.
SQL Analysis Services (Windows Computer): A SQL Analysis Services instance.
SQL Integration Services (Windows Computer): A SQL Integration Services instance.
SQL Agent (SQL DB Engine): The SQL Agent is discovered once per instance of a DB Engine.
SQL Agent Job (SQL Agent): Discovery is disabled by default; one instance will be discovered per job once enabled.
SQL Database (SQL DB Engine): One instance will exist for each database. Note that system databases will also be discovered as this class. If monitors/rules are specific to a certain database, leave them disabled by default and enable them via overrides.
SQL Server DB File (SQL Server DB File Group): One instance discovered for each DB file in the file group.
SQL Server DB Log File (SQL Database): One instance discovered per database.

8.3.2 USEFUL MODULES WHEN AUTHORING SQL MONITORING
Once you have identified the appropriate class that you wish to target with your custom SQL
monitor/rule, you will need to build a module in your custom management pack that can be used to
query SQL and process whatever health check you wish to perform.

There are three main ways that you can easily query SQL using the built in SCOM modules:

Method (Used as part of): Create in SCOM console; Required knowledge; Notes

ADODB.Connection (VBScript): Yes (as long as the default agent account has access to SQL); VBScript skills; medium performance impact, flexible.
System.Data.SqlClient.SqlConnection (PowerShell script): No; PowerShell scripting and working with .NET objects; heavy performance impact, flexible.
System.OleDbProbe (Custom composite workflow): No; SCOM MP custom module authoring; lightweight performance impact, requires knowledge of other modules.

Each of these modules has its own set of knowledge prerequisites (VBScript skills, PowerShell
scripting skills, SCOM composite module creation), so choosing the appropriate method will largely be
predetermined by your available skillset. If you are comfortable with several of these methods, it
should be noted that whilst the OleDbProbe has the lightest performance impact on the SCOM agent,
it will require knowledge of other SCOM modules in order to process the returned data and generate a
health/performance state.

All three modules will allow you to specify a SQL query that will be executed against the targeted
system. If you are creating a Powershell or VBScript, you will need to ensure that you can build a
suitable connection string to connect to the SQL DB in question.

 The database name can be retrieved using the name parameter on the SQL Database class.
 The SQL Server/instance can be retrieved using the “Connection String” property of the SQL
DB Engine hosting the SQL database.
 The connection string should specify one of:
o Integrated security (to run as the user account the script is executing under). You should
specify that your script module needs to use the SQL Server Monitoring Run as profile
(Microsoft.SQLServer.SQLProbeAccount) if the default SCOM agent account does
not have access to query SQL.
o If you are using SQL authentication you will need to create a Run As account and Profile
and use the SecureInput optional parameter of the Script module.
8.4 Sample VBScript
The following sample comes from the SQL Server 2012 MP. This script provides the state of each
Database which can then be filtered upon in a monitor to provide health state information.

Option Explicit
SetLocale("en-us")

Function Quit()
WScript.Quit()
End Function

Function IsValidObject(ByVal oObject)


IsValidObject = False

If IsObject(oObject) Then
If Not oObject Is Nothing Then
IsValidObject = True
End If
End If
End Function

Function MomCreateObject(ByVal sProgramId)


Dim oError
Set oError = New Error

On Error Resume Next


Set MomCreateObject = CreateObject(sProgramId)
oError.Save
On Error GoTo 0

If oError.Number <> 0 Then ThrowScriptError "Unable to create automation object '" & sProgramId & "'", oError
End Function
'#Include File:Error.vbs

Class Error
Private m_lNumber
Private m_sSource
Private m_sDescription
Private m_sHelpContext
Private m_sHelpFile

Public Sub Save()


m_lNumber = Err.Number
m_sSource = Err.Source
m_sDescription = Err.Description
m_sHelpContext = Err.HelpContext
m_sHelpFile = Err.HelpFile
End Sub

Public Sub Raise()


Err.Raise m_lNumber, m_sSource, m_sDescription, m_sHelpFile, m_sHelpContext
End Sub

Public Sub Clear()


m_lNumber = 0
m_sSource = ""
m_sDescription = ""
m_sHelpContext = ""
m_sHelpFile = ""
End Sub

Public Default Property Get Number()


Number = m_lNumber
End Property
Public Property Get Source()
Source = m_sSource
End Property
Public Property Get Description()
Description = m_sDescription
End Property
Public Property Get HelpContext()
HelpContext = m_sHelpContext
End Property
Public Property Get HelpFile()
HelpFile = m_sHelpFile
End Property
End Class

Function ThrowScriptErrorNoAbort(ByVal sMessage, ByVal oErr)


On Error Resume Next
Dim oAPITemp
Set oAPITemp = MOMCreateObject("MOM.ScriptAPI")
oAPITemp.LogScriptEvent WScript.ScriptName, 4001, 1, sMessage & ". " & oErr.Description
End Function

Function ThrowScriptError(Byval sMessage, ByVal oErr)


On Error Resume Next
ThrowScriptErrorNoAbort sMessage, oErr
Quit()
End Function

Sub HandleError(customMessage)
Dim localLogger
If Not (Err.number = 0) Then
Set localLogger = new ScriptLogger
localLogger.LogFormattedError(customMessage)
Wscript.Quit 0
End If
End Sub

Function HandleErrorContinue(customMessage)
Dim localLogger
HandleErrorContinue = False
If Not (Err.number = 0) Then
Set localLogger = new ScriptLogger
localLogger.LogFormattedError(customMessage)
Err.Clear
HandleErrorContinue = True
End If
End Function

'#Include File:ConnectionString.vbs

Function BuildConnectionString(strServer, strDatabase)



ON ERROR RESUME NEXT


Err.Clear

BuildConnectionString = "Data Source=" & EscapeConnStringValue(strServer) & ";Initial Catalog=" & EscapeConnStringValue(strDatabase) & ";Integrated Security=SSPI"
End Function

' This function should be used to escape Connection String keywords.


Function EscapeConnStringValue (ByVal strValue)
ON ERROR RESUME NEXT
Err.Clear

EscapeConnStringValue = """" + Replace(strValue, """", """""") + """"


End Function
'#Include File:GetSQL2012DB.vbs
'Copyright (c) Microsoft Corporation. All rights reserved.
' This script takes a single parameter
' Param 0: The SQL connection string to connect to
' All database state is output in a property bag. A number of properties are output
' Name: DBName-State Value: Returns the status of the db. Status can be:
' RESTORING
' RECOVERING
' RECOVERY_PENDING
' SUSPECT
' EMERGENCY
' OFFLINE
'

Const SQL_DISCOVERY_CONNECT_FAILURE = -1
Const SQL_DISCOVERY_QUERY_FAILURE = -2
Const DB_PROPERTY = "Status"

Call GetDBHealth()

Sub GetDBHealth()
If WScript.Arguments.Count = 1 Then
Dim oAPI, oBag
Set oAPI = MOMCreateObject("MOM.ScriptAPI")
Set oBag= oAPI.CreatePropertyBag()
Dim nResult
nResult = DBHealth(WScript.Arguments(0), oBag)
Call oAPI.Return(oBag)
Else
Wscript.quit()
End If
End Sub

Function DBHealth(sSQLConnectionString, byRef oBag)

Dim e
Set e = New Error

Dim cnADOConnection
Set cnADOConnection = MomCreateObject("ADODB.Connection")
cnADOConnection.Provider = "sqloledb"
cnADOConnection.ConnectionTimeout = 30

Dim sProv
sProv = BuildConnectionString(sSQLConnectionString, "master")

e.Clear
On Error Resume Next
cnADOConnection.Open sProv
e.Save
On Error Goto 0
If e.Number <> 0 Then
DBHealth = SQL_DISCOVERY_CONNECT_FAILURE
ThrowScriptErrorNoAbort "Query execution failed for '" & sSQLConnectionString & "' SQL Server", e
Exit Function
End If

Dim oResults

e.Clear
On Error Resume Next
' query for the list of databases which are not database snapshots
Set oResults = cnADOConnection.Execute("SELECT name, ISNULL(DATABASEPROPERTYEX(name," & "'" & DB_PROPERTY & "'),'N/A') from sys.databases")
e.Save
On Error Goto 0

If e.Number <> 0 Then


DBHealth = SQL_DISCOVERY_QUERY_FAILURE
ThrowScriptErrorNoAbort "Query execution failed for '" & sSQLConnectionString & "' SQL Server", e
If IsValidObject(oResults) Then oResults.Close
Exit Function
End If

Do While Not oResults.EOF

Call oBag.AddValue(oResults(0) & "-State", CStr(oResults(1)))

oResults.MoveNext

Loop

cnADOConnection.Close

End Function

Sample PowerShell Script


The following sample comes from the SQL Server 2012 MP. This script provides information on the
number of active connections for each SQL Database.

# ActiveConnectionsDataSource.ps1
# Active Connections Property Bag Script
#
param($computerName, $sqlInstanceName)

# $InstancePath is the 'Connection String' property of the SQL DB Engine class
# (Microsoft.SQLServer.DBEngine)
# $InstancePath is expected to take one of three forms:
#   computername
#   computername\MSSQLSERVER
#   computername\MSSQLSERVER.db1

#TODO: Discuss event id


$SCRIPT_EVENT_ID = 7701
#Event Severity values
$INFORMATION_EVENT_TYPE = 0
$ERROR_EVENT_TYPE = 1

function BuildConnectionString($serverName, $databaseName) {


'Data Source="' + ($serverName -replace '"', '""') + '";Initial Catalog="' +
($databaseName -replace '"', '""') + '";Integrated Security=SSPI'
}

function SqlQueryTables($ComputerName, $InstanceName, $query) {

$res = "";
if ($InstanceName -eq "MSSQLSERVER")
{
$InstanceName = "DEFAULT"
}

$principalName = $ComputerName
if ($InstanceName -ne "DEFAULT") {
$principalName = "{0}\{1}" -f $principalName, $InstanceName
}

$UserPolicies = @{}
$msg = "User Policies: {0}" -f [Environment]::NewLine

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection


try
{
$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$DataSet = New-Object System.Data.DataSet

$SqlConnection.ConnectionString = BuildConnectionString $principalName 'master'

$SqlCmd.CommandText = $query
$SqlCmd.Connection = $SqlConnection
$SqlAdapter.SelectCommand = $SqlCmd
$SqlAdapter.Fill($DataSet)|out-null

$res = $DataSet.Tables
}
catch
{
$SqlConnection.Close()
throw $_
}
$SqlConnection.Close()
return $res
}

function Main {
param(
$computerName,
$sqlInstanceName
)

#
# Prepare MOM API and property bag object
#
$api = New-Object -comObject "MOM.ScriptAPI"

try
{
$msg = [string]::Empty

$query = "SELECT DB_NAME(dbid) as DBName FROM sys.sysdatabases;


SELECT DB_NAME(dbid) as DBName, COUNT(dbid) as NConnections
from sys.sysprocesses as sp
inner join sys.dm_exec_connections as ex
on sp.spid = ex.session_id
WHERE dbid > 0
GROUP BY dbid;"
$res = SqlQueryTables $computerName $sqlInstanceName $query

$dbConnections = New-Object 'System.Collections.Generic.Dictionary[String,Double]'

$res[0] | foreach { $dbConnections[$_.DBName] = [double]0 }

$dbCount = 0;
$res[1] | foreach { $dbConnections[$_.DBName] = [double]$_.NConnections }

$dbConnections.GetEnumerator() | foreach {
$bag = $api.CreatePropertyBag()
$bag.AddValue("Name", $_.Key)
$bag.AddValue("Value", $_.Value)
$dbCount++;
$bag
}
$msg = "Active connections were retrieved for {0} DBs on {1}" -f $dbCount, $computerName

}
catch
{
$header = "Management Group: $Target/ManagementGroup/Name$. Script: {0}" -f
($MyInvocation.MyCommand).Name.ToString()
$msg += "Error occurred while executing the Active Connections data source.{0}Computer: {1}{0}Reason: {2}" -f
[Environment]::NewLine, $env:COMPUTERNAME, $_.Exception.Message
$api.LogScriptEvent($header, $SCRIPT_EVENT_ID, $ERROR_EVENT_TYPE, $msg)
}
}

Main $computerName $sqlInstanceName

8.5 System.OleDbProbe Module


The System.OleDbProbe module is a built-in probe module that uses an OLEDB provider/driver on
the system to make a database query from the hosting agent. The database, query and other settings
are defined via probe configuration and do not need to be hard-coded into the MP (though obviously the
query usually is). The query can be modified using context parameter replacement prior to execution, so
you can dynamically insert information into it if need be. It supports integrated security and also manually
specified credentials, usually via Run As profiles.

It also has the nifty ability to retrieve the database settings from specified registry keys, which can avoid
the need to go out and discover those pieces of information. This makes it quite suitable for attaching
onto existing classes from other management packs.

8.5.1 WHEN SHOULD YOU USE IT


 You know in advance which columns you need to access.
 You know how to implement your own module.
 You don’t need to perform complex processing on each returned row.
8.5.2 REQUIRED CONFIGURATION
 ConnectionString - The connection string you wish to use to connect to the database. On
Windows Server 2003 or later, this is encrypted by the module. If you are using integrated security, you
do not need to specify credentials as long as you are using a Run As profile with this module (but
make sure you flag the connection as using integrated security!).
 Query - The query you wish to run against the database. Supports context parameter
replacement, so you can use $Config/...$ variables etc. in your query.

8.5.3 OPTIONAL CONFIGURATION


 GetValue – (true/false) Whether the results of the query should be returned or not (set to false if
you just want to connect to the DB, and you don’t care about the results of the query).
 IncludeOriginalItem - (true/false) Determines if the resulting data item(s) will contain the item
that originally triggered this module. Note that the data is returned as CData, so you won’t be
able to perform XPath queries directly against it.
 OneRowPerItem – (true/false) Should all resulting data be returned in a single data item, or 1
data item returned for each row in the query results? Normally setting this to true is more
useful, as you’ll often want a condition detection to process each row individually, and you won’t
know the order (or number) of resulting rows.
 DatabaseNameRegLocation - Registry key where we can find the database name. Must be
under the HKLM hive.
 DatabaseServerNameRegLocation - Registry key where we can find the database server
name (and instance, if required). Must also be under the HKLM hive.
 QueryTimeout – (Integer) Optional parameter that allows you to specify a query timeout.
 GetFetchTime – (true/false) Optional parameter that allows you to specify that the resulting
data item(s) should contain the fetch time for the query.
An important parameter is OneRowPerItem. If it is set to false, the returned data item
will look like the snippet below (I’ve omitted the other elements to save space).

<RowLength></RowLength>
<Columns>
<!-- Data for first row returned -->
<Column>Data in first column</Column>
<Column>Data in Second column. </Column>
</Columns>
<Columns>
<!-- Data for Second row returned -->
<Column>Data in first column</Column>
<Column>Data in Second column. </Column>
</Columns>
This can make processing the results in further modules a pain, since your XPath query will
have to specify exactly which row and column you want to access. If you instead set
OneRowPerItem to true then you’ll get multiple returned items and can filter them using an expression
filter with a simple syntax such as $Data/Columns/Column[1]$. You may also wish to filter on the
RowLength property to establish whether any rows were returned. Remember that the module will return a
data item if it succeeded in connecting but doesn’t have rights to query, so check that data was returned
before you try to do something with it!
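To illustrate the filtering just described, here is a hedged sketch of an expression filter condition detection that could follow the probe when OneRowPerItem is true. The module ID and the 'ONLINE' value are illustrative assumptions, not taken from any shipping MP:

```xml
<!-- Illustrative only: passes through data items whose first column equals 'ONLINE'.
     Assumes OneRowPerItem=true, so each data item carries a single row. -->
<ConditionDetection ID="FilterFirstColumn" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <XPathQuery Type="String">Columns/Column[1]</XPathQuery>
      </ValueExpression>
      <Operator>Equal</Operator>
      <ValueExpression>
        <Value Type="String">ONLINE</Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
</ConditionDetection>
```

In a monitor you would typically pair two such filters (one per health state) as the unhealthy/healthy expression branches.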
8.6 Example Configurations
Normally if I’m going to use an OleDBProbe to access a database repeatedly I’ll create my own probe
module that sets up the settings I’m going to need and is already set to use my MP’s run as profile for
DB access. That way I don’t have to keep specifying it over and over again. Below is a sample where
I’ve done this, and configured everything other than my query to pass in for a SQL database
probe. Now all my monitors and rules that make use of this know where to locate the DB and what
other query options to use automatically (along with credentials).

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="MSL!Microsoft.SQLServer.SQLProbeAccount" Batching="false" PassThrough="false">
<Configuration>
<xsd:element minOccurs="1" name="Query" type="xsd:string" />
<xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
</Configuration>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
<ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
<ConnectionString>Provider=SQLOLEDB;Integrated Security=SSPI</ConnectionString>
<Query>$Config/Query$</Query>
<GetValue>true</GetValue>
<IncludeOriginalItem>false</IncludeOriginalItem>
<OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
<DatabaseNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseName</DatabaseNameRegLocation>

<DatabaseServerNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseServerName</DatabaseServerNameRegLocation>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="OledbProbe">
<Node ID="PassThru" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.OleDbData</OutputType>
<TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Here I’ve done the same thing, only without using registry keys to specify the location of my
DB. Normally I’d pass the DB details from my targeted class as I’ll have some property that has been
discovered defining where the database is. Also note that the module has been configured to use the
SQL Monitoring account Run as Profile automatically.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="MSL!Microsoft.SQLServer.SQLProbeAccount" Batching="false" PassThrough="false">
<Configuration>
<xsd:element minOccurs="1" name="DatabaseServer" type="xsd:string" />
<xsd:element minOccurs="1" name="DatabaseName" type="xsd:string" />
<xsd:element minOccurs="1" name="Query" type="xsd:string" />
<xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
</Configuration>

<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
<ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
<ConnectionString>Provider=SQLOLEDB;Server=$Config/DatabaseServer$;Database=$Config/DatabaseName$;Integrated Security=SSPI</ConnectionString>
<Query>$Config/Query$</Query>
<GetValue>true</GetValue>
<IncludeOriginalItem>false</IncludeOriginalItem>
<OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="OledbProbe">
<Node ID="PassThru" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.OleDbData</OutputType>
<TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

8.7 Simple/Specified Authentication


If you don’t want (or are unable) to use integrated security, you can pass credentials using simple
authentication and a Run As profile. DO NOT hard-code the credentials: they would be stored in plain
text and readable. The Run As profile credentials are encrypted and the connection string is encrypted
across the wire; the MP itself is not!

The syntax for this is shown below. Obviously replace the text in italics with your values.

Provider=SQLOLEDB; Server=ServerName; Database=DatabaseName;
User Id=$RunAs[Name="RunAsIdentifierGoesHere"]/UserName$;
Password=$RunAs[Name="RunAsIdentifierGoesHere"]/Password$

8.7.1 SCENARIO 1 – MONITORING


In this scenario, you want to monitor a database for a certain condition. Perhaps you are getting the
result of a stored procedure, checking the number of rows in a table or checking rows for a
certain value (perhaps error logs?). Once queried, you pass the data items onto
a System.ExpressionFilter module to filter for your desired criteria and alert as appropriate.

8.7.2 SCENARIO 2 – COLLECTION


Another fairly common scenario, here we are doing the same thing as above as part of an event
collection or performance collection rule. This could even be ignoring the data and just checking how
long it took the query to run, via the InitializationTime, OpenTime, ExecutionTime and
FetchTime properties of the output data. Following your System.OleDbProbe module you’ll usually use
one of the mapper condition detection modules to generate event or performance data (these are quite
nicely documented around the web and on MSDN, normally in the context of property bags, but the
principle is the same).
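As a hedged sketch of the collection case (the object/counter names and the target property are illustrative assumptions, and you should confirm the $Data/FetchTime$ path against the probe's actual output), a System.Performance.DataGenericMapper condition detection following the probe might map the query's fetch time into performance data:

```xml
<!-- Illustrative only: maps the probe's FetchTime output into performance data. -->
<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
  <ObjectName>Custom SQL Query</ObjectName>
  <CounterName>Fetch Time</CounterName>
  <InstanceName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</InstanceName>
  <Value>$Data/FetchTime$</Value>
</ConditionDetection>
```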
8.7.3 LINKS AND FURTHER READING
You can find documentation for the System.OleDbProbe online at
http://msdn.microsoft.com/en-us/library/ff472339.aspx. A sample output of the System.OleDbProbe module is shown at
http://msdn.microsoft.com/en-us/library/ee533760.aspx.

9.0 BUILDING OUT A SQL SERVER TEMPLATE WITH VMM –
CRAIG TAYLOR

9.1 Using SQL Server Profiles in SCVMM 2012 R2

9.2 Introduction

In System Center Virtual Machine Manager (SCVMM or VMM), Microsoft have provided the capability
to deploy SQL Server as part of a Service Template, allowing for the more rapid deployment of multi-
tier applications dependent on the SQL Server database engine.

SQL Server 2008 R2 and later provide Sysprep functionality as an advanced installation option. This
two-step installation process is similar in use to the Windows Sysprep tool. The two steps are:

1. Image Preparation - Installation of SQL Server setup files


SQL Server Sysprep can be used prior to image generalisation (via Windows Sysprep) to create
an operating system image that includes an un-configured SQL Server installation.

2. Image Completion - Finish SQL installation


Once the image has been deployed, either to a physical server, say via System Center
Configuration Manager, or to a virtual server via a virtual machine template, an end user can
finalise the SQL installation in a far quicker time than they may have deployed SQL from the
setup binaries.

By embedding a Sysprep installation into an OS image, it is possible for an IT practice to both
accelerate and standardise their SQL Server deployments, whilst retaining the flexibility of
customisation to allow the deployed SQL Server instance to support any given application's specific
database requirements (e.g. collation settings or FILESTREAM).
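Outside of VMM, the two steps map onto SQL Server setup's PrepareImage and CompleteImage actions. The command lines below are a hedged sketch: the instance ID and accounts are illustrative placeholders, the service account passwords (/SQLSVCPASSWORD, /AGTSVCPASSWORD) are omitted for brevity, and you should confirm the parameters against the setup documentation for your SQL Server version.

```cmd
:: Step 1 (Image Preparation) - run on the template machine before Windows Sysprep
setup.exe /q /ACTION=PrepareImage /FEATURES=SQLEngine /INSTANCEID=SQLPREP /IACCEPTSQLSERVERLICENSETERMS

:: Step 2 (Image Completion) - run on the deployed machine to finish the install
setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MSSQLSERVER /INSTANCEID=SQLPREP ^
  /SQLSYSADMINACCOUNTS="CONTOSO\SQLAdmins" /SQLSVCACCOUNT="CONTOSO\svc-sql" ^
  /AGTSVCACCOUNT="CONTOSO\svc-sqlagent" /IACCEPTSQLSERVERLICENSETERMS
```

The VMM SQL Server profile described later in this chapter drives the CompleteImage step for you.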
9.3 Preparing VMM
Preparing Virtual Machine Manager to make use of SQL Server Sysprep is also a two-stage process:

1. Creation and capture as a template of a SQL Server prepared image


During this stage, the administrator deploys an existing virtual machine template, installs a
Sysprep installation of SQL Server & creates a fork of the VM template

2. Creation of a SQL Server Profile & Service Template


During this stage, a SQL Server profile is created and used as part of a VMM Service Template

This chapter will outline the steps required to make use of SQL Server Profiles in Virtual Machine
Manager.

9.4 Creation of a Sysprep SQL Server Virtual Machine Template

Deploy a virtual machine from the base template you wish to fork. In this instance we are deploying our
standard Windows Server 2012 R2 template to the 'Production' VMM cloud

So that the VHDX files are meaningfully labelled once captured to a template during the VMM template
generalisation phase, we provide a virtual machine name that mentions SQL - in our instance,
WS2012R2_DC_SQL.
It is important to ensure that the virtual machine is deployed as a workgroup server. This will prevent
configuration changes (say, via Group Policy) being applied to the Windows OS upon which we will
base our template disk.

Once the virtual machine we plan to base our new template on has been deployed, we may run the
required SQL Server Sysprep image preparation.

NB: It is the author's recommendation that the SQL Server prerequisite, .NET 3.5, be installed in
advance of running the SQL installer, as enabling this feature from within SQL setup can be
unreliable.

Following the activation of .NET 3.5, open the ‘Advanced’ section of the SQL Server Installation Center
and then click 'Image Preparation of a stand-alone instance of SQL Server'
It is possible to perform Sysprep-ed installs of SQL Server Database Engine Services and SQL Server
Reporting Services. In this guide we cover preparation of the DB engine.

Select the features required for our template and then configure the installation directories

Provide the SQL installation with an Instance ID and an instance root directory, where the instance's
system databases and logs will be deployed.

Please note that this Instance ID is immutable and cannot be altered during the Image Completion
process undertaken via the application of the VMM SQL Server profile.

Once the preparation has succeeded, copy the contents of the SQL media to a drive on the virtual
machine. In our example we copy the binaries to a subfolder of D:\Sources. Then using the VMM
administrators console, create a generalised VM Template based on the virtual machine

9.5 Creation of the SQL Server Profile

SQL Server Profiles are created in the Library section of the VMM administration console. Much like OS
& HW profiles, they are ultimately stored in the VMM database. No files on the VMM library server are
created as a consequence of their commission.

To create a SQL Server Profile, click the 'Create' drop down button on the Library section of the VMM
console. Then click 'SQL Server Profile'.

Provide a name and description for the SQL Server Profile

Under the SQL Server Configuration parent node, we must provide the profile with:

 A name.

 An instance name - this does not have to match the instance ID specified during the SQL Server
Sysprep image preparation.

 An instance ID - note that this instance ID MUST match the one specified during the Sysprep-ed SQL
Server image preparation.

 A product key - the Image Completion process will fail without this being entered. It is normally found in
the DefaultSetup.ini file on the installation media, under the subfolder 'x64'.

 A RunAs account with administrative rights on our VM guest. In this instance, we have specified an
account that will be seeded as a local administrator on our server when it is joined to the domain. This
was achieved via the application of Group Policy.
Under the SQL Server configuration ‘Configuration’ node, we must:

 Provide a path to the SQL Server installation media that is accessible from the deployed virtual machine.
Note we have configured this to be a local path to minimize external dependencies during Service
Template deployment; specifically, the path to which we copied the SQL Server binaries during the
template preparation.

 Provide the users and groups you wish to be granted SA rights on the SQL Server instance.

 Specify the security mode you wish the SQL Server instance to operate under, either Windows
Authentication or Mixed Authentication.

 Define whether TCP/IP and named pipes are going to be allowed for remote connections.
Finally, configure an SA password by selecting a VMM RunAs profile containing the required credential

Under the SQL Server configuration ‘Service Accounts’ node, specify the service accounts under
which the SQL Server service & SQL Server Agent service will operate, again using VMM RunAs
accounts:

9.6 Further Customisation Options


On first examination, it may appear that the degree of customisation that can be applied via SQL Server
profiles is somewhat limited. However, using a SQL installation INI file, it is possible to configure
settings that are not directly configurable via the GUI, for example instance collation. To do this, a SQL
installation INI file must be present in the VMM library and referenced using the Configuration node of
the SQL Server Configuration wizard.
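As an illustrative sketch (the values shown are assumptions, not recommendations), such an INI file typically needs to contain only the advanced settings that the profile GUI does not expose, for example:

```ini
; Illustrative SQL Server setup configuration file for use with a VMM SQL Server profile.
; Confirm parameter names against the unattended-install documentation for your SQL version.
[OPTIONS]
SQLCOLLATION="Latin1_General_CI_AS"
FILESTREAMLEVEL=2
```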
Further information on how to configure a SQL Server deployment using an INI file can be found on
Microsoft TechNet.

If a SQL Server installation INI file is used as part of a profile, the administrator will be asked if they
wish to populate the SQL Server deployment profile with the settings contained in the INI file.

Should the administrator accept this warning, it is recommended to revisit all sections of the profile
wizard to ensure they have retained the expected settings:

9.7 Deployment of the Service Template
SQL Server Profiles can only be used as part of a Service Template. In this tutorial, we configure a
single tier service template, containing our SQL Server VM Template.

To apply our created SQL Server profile to the VM tier, right-click on the tier and then select
'Properties'. Following this, select the ‘SQL Server Configuration’ section of the properties window and
then select the profile we have configured from the SQL Server profile dropdown menu.

Deploying this service will create a single virtual machine with SQL Server installed to our specification.
Upon deployment, one should attempt to connect to the deployed SQL instance using SQL
Management Studio, to confirm it meets the defined functional specification.

9.8 Conclusion
Virtual Machine Manager SQL Server profiles provide an easily managed means of rapidly provisioning
consistently configured SQL Server virtual machines as part of multi-tier Service Templates. The ability
to customise the functional specification, using an administration GUI for common settings and script
files for advanced settings, provide the cloud administrator with a flexible, low-overhead means of
providing standardised SQL infrastructure to their customers, be it for test, development or production
purposes. As many administrators can attest, SQL Server installations are typically a reasonably
time-consuming endeavour. As such, it is the author’s observation that time invested in learning to
automate the process typically pays dividends in a reasonably short period.

10.0 SPECIAL THANKS
This guide contains nearly 130 pages of extra content over the last version; this would not have been
possible without Pete’s Azure experience when it comes to System Center in the cloud. Robert showed us
why he still loves DPM. Matthew, who is a great MP dev, gave a solid explanation of the SQL MP. Craig and
Richard gave a ton of help with the clustering, AlwaysOn and full guide edits.

All of the contributors hope to do a V2 of this guide over the next few months so check back soon, we
will announce the next edition on Twitter @paul_keely

Any mistakes and errors in this guide were inserted by a hacker!
